10.5446/54809 (DOI)
Great — and thanks to these three guys for checking my slides for typos. I noticed they didn't find the one in their own name, so that's a bad sign. We'll see; here we go. Okay, so I'm going to start with the talk. I'm not going to talk about Vafa-Witten theory for the first lecture and a half, just about background material. I'll try to go as simply as possible, but of course it's not possible to describe everything in this field in intimate detail, so there's going to be stuff where I just give a guide to how one would go about understanding something. I'm going to set a bunch of exercises, but maybe the more suitable exercises are just to flesh out the slides. So if there are slides that particularly interest you, where you feel I didn't give enough detail, it's a perfectly good exercise to flesh that out, or to start fleshing it out, maybe go and look at some of the references, and to talk to the tutors about it or to talk to me about it. So these are some references — there are many more — in roughly the order in which we use them. Exercises will be in green, but as I say, a much better exercise is just to flesh out all the bits that I skate over. So let's begin. This course is going to be about coherent sheaves. I'm always going to be on a smooth projective variety X over the complex numbers; n is its dimension. There'll be an ample line bundle, which I'll call O(1), and its first Chern class is h. So that's just notation. A coherent sheaf — what I mean by a sheaf will always be a coherent sheaf — is locally finitely generated over the structure sheaf. So over an open set U, it has a presentation like this. And then this is less an exercise and more a sort of revision: go back and remember why that means not just that it's finitely generated, but that it's finitely presented, so the kernel of that map is also finitely generated; and then after finitely many steps the kernel becomes locally free, so you get a resolution like this. Okay, so the exercise is to go and make sure you're happy with this. It uses the fact that X is smooth. And if you want a reference, Griffiths and Harris — of course it's analytic rather than algebraic, but the proofs are the same — I really like their way of dealing with this; it's very quick. Okay, so you should think of a coherent sheaf as some kind of vector bundle with singularities, where the singularities are the locus where the rank drops. So this map here from one vector bundle to another — this matrix — where its rank drops, F sort of picks up singularities, and away from that it's a vector bundle. Its support is a subscheme of X defined by this ideal sheaf: over an open set U it's just those functions which annihilate all the sections of F, and they define an ideal sheaf and therefore a subscheme, called the support of F. And F is really pushed forward from that support: it's a sheaf on the support whose pushforward is the original F. And then the dimension of a sheaf is the dimension of its support. So I'll give some examples. Any questions? — So is F a quotient of regular functions by this ideal? — No. No, F is a module over the quotient. What you just said is the structure sheaf of the support, and F is a module over the structure sheaf of the support. So F is a sheaf on the support, but it needn't be the structure sheaf: you could imagine F being a rank two vector bundle on a point or on a subscheme, and then the support would be that subscheme.
This ideal sheaf would be the ideal sheaf of that subscheme, but F itself would be the pushforward of something of rank two on that subscheme, so it wouldn't be a quotient of the structure sheaf — well, it would be a quotient; in this case it would locally be two copies of the structure sheaf of that subscheme. So, to the organisers: are there questions online? Okay, great. Yeah, it's fantastic that people are able to attend this online, but I can't believe they're really paying attention. They're just looking at the internet, reading the news or looking at the sport, checking out England's chances of winning the Euros. They're not really paying attention. Okay, and now sheaves are called pure: if a sheaf has dimension d, it's called pure if it has no subsheaves of strictly smaller dimension. — Sure. Okay, you have to go back. So you say r is equal to n in the first exercise? — No, definitely not. I mean, F could actually be this guy, right? It could be r copies of the structure sheaf, where r is the rank of F. r is not the rank of F in general, but F could well be just this sheaf, and r could be as big as you like. Okay, so there's this notion of pure for sheaves — I'll give an example in a minute — and it implies that both the support and the structure sheaf of the support are also pure. And an exercise is to show that when the support is an integral subscheme, pure means that the sheaf is torsion-free on its support: it's the pushforward from its support of a torsion-free sheaf, a sheaf with no torsion, where no element of the structure sheaf annihilates a section of the sheaf. So some examples: if D is a divisor — Cartier, since X is smooth — then its structure sheaf is pure, and that's true even though the divisor doesn't have to be reduced. And you could take many copies of this example to answer some of the questions; then it won't be rank one on its support, it can have higher rank on its support. So for instance these are pure modules over the polynomial ring in two variables, but this one is not pure. Okay: I take the structure sheaf of this subscheme that I've drawn in red, where I've set xy to zero, so I get the two axes, and then I set y squared to zero, so I don't get the full y-axis, I only get its first-order piece at the origin. That's not pure, because of this embedded point at the origin, which gives you a zero-dimensional submodule. So this is a one-dimensional sheaf, but it has this zero-dimensional submodule, which is just the structure sheaf of the origin mapping into the sheaf, or module, by multiplication by y. Because y times the maximal ideal is zero in there — x times y is zero and y squared is zero — this really does define a well-defined module map from the structure sheaf of the origin, which is C[x,y] divided by the maximal ideal (x,y). Okay, any questions about that? Okay, now I should tell you about stability. We have the Hilbert polynomial of a sheaf: we twist it up many times, t times, by that polarisation O(1), and then we take the space of sections — or the holomorphic Euler characteristic; for large t the two are the same. That's some polynomial in t, and the leading term is t to the d, where d is the dimension of the support.
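To have the two main objects so far in symbols — one common normalisation, which may differ by constants from whatever is on the slides: locally a coherent sheaf F has a presentation, and on smooth X a finite locally free resolution,
\[
\mathcal{O}_U^{\oplus s} \longrightarrow \mathcal{O}_U^{\oplus r} \longrightarrow F|_U \longrightarrow 0,
\qquad
0 \to E_k \to \cdots \to E_1 \to E_0 \to F \to 0,
\]
and its Hilbert polynomial is
\[
P_F(t) \;=\; \chi\big(F \otimes \mathcal{O}(t)\big) \;=\; \sum_{i=0}^{d} a_i\,\frac{t^i}{i!},
\qquad d = \dim \operatorname{supp} F,
\]
which for t large enough equals h^0(F(t)).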
So usually, when we think of something like a torsion-free sheaf on the whole variety, the leading term is t to the n, and the leading coefficient is roughly the volume of the manifold — so H to the n divided by n factorial — times the rank of the sheaf. So you should think of the leading term, up to some irrelevant constants, as the rank of the sheaf when d equals n, so when F is torsion-free, say, or more generally when F has rank bigger than zero. And then the reduced Hilbert polynomial is where we make that monic, so we get rid of the leading coefficient. And then the slope — there are different notions of slope, but what I'm going to take is the next term, in the case where d equals n. So to start with, let's assume F has rank bigger than zero, so its dimension is n, so it's supported on the whole variety. Then this first term is the same for all sheaves, and the first interesting term is called the slope. And more generally, if d is less than n, we just set the slope to be plus infinity, because a_n will be zero: if a_n is zero, we just set this to be plus infinity. Okay, and then there are different notions of stability — slope stability, Gieseker stability, and others — which use this polynomial. Slope stability only uses its leading, first interesting term; Gieseker stability uses the whole polynomial; and there are other things as well. But let's start with slope stability. So we have this notion of slope, and by Riemann-Roch you can work out what it is: it's the degree of the sheaf — the degree of the first Chern class of the sheaf — divided by the rank of the sheaf, up to some constants. The leading term here is the rank of the sheaf, and then the sub-leading term is the degree of the sheaf. And then F is slope stable or semistable if and only if, whenever you have a subsheaf and a quotient like this which is non-trivial — so neither A nor B should be zero — the slope of A is less than the slope of B. The brackets are what you might guess: if I want stability, I should have a strict inequality here, and if I want semistability, I allow the non-strict inequality. — Where do these come from? — So d here was the dimension of the sheaf. If the sheaf is supported on the whole variety, then d is n, so this will be a_n and this will be a_{n-1}: this will roughly be the rank of the sheaf and this will roughly be the degree of the sheaf, the first Chern class dotted with the hyperplane class, and the slope is the quotient of those. If the dimension of the sheaf is less than n, then the first term will be in lower degree, so this d will be less than n, and so a_n will be zero and we just set the slope to be plus infinity. Did I answer your question? I mean, these are all given by the Riemann-Roch formula; they're all topological numbers. Yeah, they're the Euler characteristics of the sheaf twisted up many, many times; we only consider this for large t. — So, for example, when the sheaf is pure, would you use a_{d-1} over a_d as the — ? — You can; that's a different notion of slope and it's relevant, but we're not going to deal with it now. That's absolutely right. Okay, so we have this definition of stability, which at the moment looks arbitrary, but I will try and motivate it a little bit and show you it has good properties.
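Before that, here is the definition on the slide written out, with my conventions (the constants are irrelevant, as I said): for a sheaf F of dimension n,
\[
\mu(F) \;=\; \frac{a_{n-1}(F)}{a_n(F)}
\;\sim\; \frac{\deg F}{\operatorname{rk} F}
\;=\; \frac{c_1(F)\cdot h^{n-1}}{\operatorname{rk} F},
\qquad
\mu(F) := +\infty \ \text{ if } a_n(F) = 0,
\]
and F is slope stable (respectively semistable) if for every short exact sequence 0 \to A \to F \to B \to 0 with A, B \neq 0 we have \mu(A) < \mu(B) (respectively \le).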
And I'll say something about how you should think about where it comes from and why it's there, but for now, for one more slide, just accept it. This is the definition: subsheaves should have lower slope and quotient sheaves should have higher slope. That's the definition of stability. In particular, if your sheaf is slope semistable, it must be torsion-free — maybe that should be an exercise. It's because if you had any torsion in your sheaf, that would give you a subsheaf of lower dimension, which would immediately have slope infinity, because it would have a_n equal to zero. So you'd have slope plus infinity here, and your F would have finite slope and your B would have finite slope, so it would destabilise it. Yeah, maybe that's an exercise — that grey comment should be an exercise — because you need to pick the maximal torsion subsheaf in order to violate this. — F could be torsion, right? — F could be torsion, yes. I don't want to define slope stability when F is torsion; maybe I should have been more careful — I would want to use your notion of slope at that point. Let's take F here to have full dimension for the moment. Where was I? Yeah, I was trying to simplify things, to do one example which is simpler, but now everyone's finding the flaw in that. We'll go on to Gieseker stability in a minute and everything will be hunky-dory. Okay, so an exercise is the see-saw inequality: the relationship between the slope of A and the slope of B is more or less the same as the relationship between the slope of A and the slope of F. So roughly speaking this condition, that the slope of A should be less than the slope of B, is very close to the condition that the slope of A should be less than the slope of F. This is just some rearrangement of the inequality using these numbers — a good exercise to do if you've never done it; it's called the see-saw inequality. But it's not quite the same, so you should not define stability as the slope of A being less than the slope of F, because they're not quite the same: they're the same when the dimension of B is the same as the dimension of F, so here. But here's an example which shows you why they're not exactly the same: this is the ideal sheaf of, let's say, a closed point in X. You should have a think about this example. It does not destabilise according to my definition, but it does semi-destabilise if you took the definition here. So I'll let you go away and think about it; we don't want to get into a big discussion about it. Okay, so Gieseker stability. This is going to be a bit more robust, I think. Very similarly, instead of using the slope, which was really just the first interesting coefficient of this reduced Hilbert polynomial, we're now going to use the whole reduced Hilbert polynomial for large t. You can see this is an extremely similar notion: you are Gieseker (semi)stable if and only if the reduced Hilbert polynomial of the subsheaf is less than (or equal to) that of the quotient sheaf, where less, or less than or equal, are defined in the following way. So you should concentrate on the first line.
We say that a monic polynomial is less than another monic polynomial — when they have the same degree — by the obvious lexicographic ordering: P(t) should be less than Q(t) for large t. Okay, so you can imagine that's very closely related to the first non-trivial coefficients satisfying the same inequality. But we have to be careful and stick in this second condition: when Q has lower degree, then we say it's bigger than P. That's confusing because the inequality seems to go the wrong way round, but it's because somewhere we divided by zero — it has to do with all the questions on the previous slide. It's because some a_n is zero, so when we divide by zero we should get plus infinity, and that's the origin of this. But basically what it means is that you're destabilised either by a subsheaf with a bigger reduced Hilbert polynomial than the quotient sheaf, or by a subsheaf of lower dimension than F. So in other words, if F is not pure, then it's unstable. So you can either take this as the definition, or you can take the definition where you only insist on the first condition but also ask that F be pure, and the two are equivalent. Okay, so what you find is that Gieseker semistable sheaves are pure. — These are torsion-free sheaves? — Not now; no, these are arbitrary sheaves now. — I thought Gieseker stability was for torsion-free sheaves. — No, no, no, you can have Gieseker stability for any sheaves. Okay, and again, it's equivalent to the reduced Hilbert polynomial of the subsheaf being less than the reduced Hilbert polynomial of the sheaf itself whenever the dimension of the quotient sheaf is full, the same as the dimension of F. But again, it's not equivalent when that condition doesn't hold, so you have to be careful. Okay, so I want to say something about these. Maybe another exercise — I'll just write it here; I appreciate it's a bit small for the people checking the BBC News — a good exercise would be to check you're happy with these definitions and the implications between them, which I've recorded below. Let's take the case where dim F is the full dimension of X. Then slope semistable implies Gieseker semistable implies Gieseker stable implies slope stable — did I get it all right? Bugger. Oh yeah — the implications go the other way; in France you write the implications the other way round, don't you? Sorry, that was the English way of writing it. It's amazing how you can't think at the board. Okay, thank you; please continue to do that. Okay, so let me leave that up just for a second and try to motivate these notions of stability, which are a bit weird, and show you some nice properties. Actually, I'll just tell you a fact: they arise naturally when you try to form the moduli space by geometric invariant theory. You can try to form all these moduli spaces as quotients of Quot schemes: you try to see your sheaves as quotients of some fixed big sheaf, like many copies of the structure sheaf twisted by O(-n) for large n, something like that. So you try to see all your sheaves as quotients of a fixed sheaf, you manage to see them all in a Quot scheme, and then you have to divide by all the possible choices that described each one as a quotient — in particular the automorphisms of the fixed sheaf.
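To record the Gieseker definition and that exercise in symbols before carrying on with the GIT story: writing p_F(t) = P_F(t)/a_d(F) for the reduced Hilbert polynomial, F is Gieseker stable (respectively semistable) if it is pure and for every 0 \to A \to F \to B \to 0 with A, B \neq 0 we have p_A(t) < p_B(t) (respectively \le) for t \gg 0; and for sheaves of dimension n the exercise is the chain
\[
\text{slope stable} \;\Longrightarrow\; \text{Gieseker stable} \;\Longrightarrow\; \text{Gieseker semistable} \;\Longrightarrow\; \text{slope semistable}.
\]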
And so you end up doing geometric invariant theory, and that gives you a notion of semistability, and you end up with this one. Another way of thinking about it is that it's saying that quotient sheaves, within reason, should have more sections than subsheaves. That's not quite true: to leading order, the number of sections you have when you twist up by large t is given by the rank, and you can't get away from that — the rank is what determines the number of sections to leading order. But to sub-leading order, modulo the rank, what determines how many sections you have is this reduced Hilbert polynomial, or the slope. And what stability is saying is that quotient sheaves should have roughly the right number of sections according to their rank, but then to next order they should have more than the subsheaves. And that makes sense, right? If you have sections of F, they give you, just by projecting, sections of B. So B has lots of sections — of course many of the projections are zero because those sections lie in A, but generally speaking it's easier to get a section of B by just taking a section of F and projecting it. What's harder is to get a section of A, because your section of F has to satisfy lots of conditions to actually lie in A. So you should think of B as having more sections than A, and that, roughly speaking, is what stability says. And this is the generic situation — stability is a generic condition, in fact a Zariski open condition, and the generic sheaf is stable: if there's a single stable sheaf, then the stable ones are Zariski open in the space of all sheaves in some sense, so they're dense. So that's why it's an important notion. But now I want to show you that it has nice properties, to motivate it and explain this a bit better. But first, are there any questions? Okay, so I'll give you two nice properties of stability. One is this thing that's a bit like the Schur lemma in representation theory: if you have two stable sheaves with the same Chern classes, or the same Hilbert polynomial, then there are no homs between them unless they're the same, and when they're the same, the only automorphisms they have are multiples of the identity. Okay, I'll do this for slope stability, but you could also do it for Gieseker stability; the argument is very similar and very simple. If you do have a morphism from F to G, you factor it in this way: the map has a kernel and an image, the image is a quotient of F, and then it's a sub of G. That gives you this chain of inequalities: the slope of F should be less than the slope of the image, because as you pass to the right the slope should increase, and the slope of the image should be less than the slope of G, because it's a subsheaf. But G and F have the same topological type, so their slopes are the same, and you get a contradiction — unless one of the exact sequences is trivial, which means that either the image is zero, so phi is the zero map and we're in this case, or the kernel is zero, and then, since F and G have the same Hilbert polynomial, phi is an isomorphism, so now you can think of F and G as being the same.
And in that case, when F and G are the same, then — exercise — run this argument again for phi minus multiples of the identity, and find an appropriate, essentially, eigenvalue of phi to show that for some lambda this can't be an isomorphism, so it must be zero. So phi is a multiple of the identity. So that's one nice property of stability. — You're assuming C to be the base field for all this? — Forever, yeah. And then this is the really nice property of stability — this is really why it works and why we have this definition. It's separatedness, essentially, of the moduli space of stable sheaves; it's the one-parameter criterion for separatedness; the moduli space of stable sheaves is Hausdorff. So here's the setup: pick two families of sheaves parameterised by a curve, let's just say the affine line. They should vary nicely — the right condition is flatness over the base — and think of them as one-parameter families of sheaves, E_t and F_t, and suppose they're all stable. Then what you find is that if they're isomorphic away from the central fibre, then they're isomorphic over the central fibre, and moreover the families are identical. So hopefully it's clear to you that this is saying the moduli space has a nice separatedness, or Hausdorff, property. And then the sketch of the proof is that when you take the Homs down the fibres, you get a line bundle away from the origin, just by the previous slide and base change. On the non-zero fibres, because the sheaves E_t and F_t are isomorphic and stable, the homomorphisms from one to the other are precisely multiples of the identity, so the Homs on the fibres are one-dimensional. So the sheaf of relative Homs is a line bundle away from the origin. And it's torsion-free, and that's because of the way this sheaf is defined — you have to remember how it's defined: not fibrewise, but over open sets in the base. So if the Homs jump on the central fibre, you might think this sheaf would be a line bundle with maybe a little bit of torsion over the central fibre, but that's not the case: if you only have extra homs on the central fibre, you don't see that in this sheaf, because this sheaf is defined by taking an open set around the origin downstairs and looking at the homs above that, upstairs — and there won't be any extra ones, because the extra homs on the central fibre don't extend to homs over the open set. Okay, so it's an exercise to understand this statement here — I've been a bit slack about it, but it's a good thing to go away and check you're happy with. What happens is, if the homs on the central fibre jump, you won't see that in this sheaf, you'll see it in the next one: the relative Ext^1 sheaf will get some torsion, the structure sheaf of the origin or something, in it. But in this sheaf you won't see it. Okay, so it's actually a line bundle on the curve, and therefore it's the trivial line bundle on the affine line, so you can pick a nowhere-vanishing section. The fact that it's nowhere vanishing means that on the central fibre it's a non-zero homomorphism from E_0 to F_0, and therefore, again by the previous slide, it must be an isomorphism. So what you end up with is that this section is an isomorphism on every fibre, and so it's an isomorphism.
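In symbols, the two properties just sketched (stated for slope stability; the Gieseker versions are identical): if F and G are stable with the same reduced Hilbert polynomial, or the same slope, then
\[
\operatorname{Hom}(F,G) = 0 \ \text{ unless } F \cong G,
\qquad
\operatorname{Hom}(F,F) = \mathbb{C}\cdot \mathrm{id},
\]
and the proof really is just the chain \mu(F) \le \mu(\operatorname{im}\varphi) \le \mu(G) = \mu(F), with a strict inequality somewhere unless \varphi = 0 or \varphi is an isomorphism; that is what feeds into the separatedness argument via base change.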
So again, obviously, I'm sketching this and not writing in all the details. If it interests you, hopefully this is enough to give you an idea of why it's true, and then go and flesh it out and make it a theorem. But I think more important is to find an example where this fails without the stability condition: go away and check you're happy that moduli of sheaves is not a well-behaved thing if you don't have stability — you get hopelessly non-separated moduli. Okay, and there's also another property: when you take semistable sheaves, and then you divide by something called S-equivalence, which I'm not going to go into, the moduli spaces are proper. That's a wonderful property and it's very important for many things, but it's not so important in my lectures, because in Vafa-Witten theory the moduli spaces which arise are not proper anyway. So I'm not going to go into this, but it's something very important that I don't have time for. Okay, so I want to do a little bit of deformation theory. Again, I'm just going to give the rough idea. So let's start with vector bundles — how do I get rid of this thing at the top? Right, okay. So let's start with locally free sheaves, vector bundles. These are made by gluing trivial vector bundles on an affine open cover: you glue them over overlaps by transition functions which satisfy the cocycle condition. — Well, you need to know that there is a moduli scheme; that comes from geometric invariant theory. And then is this the, what's it called, the one-parameter criterion for separatedness, basically? — If you look in Hartshorne for the one-parameter, valuative criterion for separatedness, then yes, check that it's basically what I wrote down — brilliant, thank you. Separatedness says that when you have a family, you can fill in the central fibre uniquely, and the one-parameter criterion says you can test this just with smooth curves, and that's what I did. Okay, so now, if we have a vector bundle, we can deform it by infinitesimally altering these overlaps. I'm going to change my transition function on the overlap, deforming it by this little guy here, and I'm going to work to first order, so I'll assume that t squared is zero. When you do this — when you change the transition functions over the overlaps — you have to check that the cocycle condition still holds. When you do that mod t squared, setting t squared to zero, what you find is that these e_ij's form a Čech cocycle: they're Čech-closed. So they define an element of H^1 of End E. Moreover, when you only consider them up to isomorphism — when you consider two to be the same if there's an isomorphism of the bundle which takes one to the other — you find you divide out by Čech coboundaries. So you end up with this Čech cohomology group as the first-order deformations of your bundle. And then you can go further. I mean, I haven't done this exercise recently, but I promise I did it when I was your age. These things are hard, but they're worth doing — they're easy when you see them, but they take forever to think up.
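In symbols, with one choice of conventions for the trivialisations (signs and orderings may differ from the slides): if E has transition functions g_{ij} satisfying g_{ij} g_{jk} = g_{ik}, deform them to g_{ij}(t) = (1 + t\, e_{ij})\, g_{ij} with t^2 = 0. The cocycle condition mod t^2 then becomes
\[
e_{ik} \;=\; e_{ij} + g_{ij}\, e_{jk}\, g_{ij}^{-1},
\]
which — since conjugation by g_{ij} is exactly how sections of End E compare in the two trivialisations — is the Čech cocycle condition for \{e_{ij}\} as a 1-cochain valued in End E; quotienting by the coboundaries coming from isomorphisms of the bundle gives H^1(\operatorname{End} E).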
Anyway, when you take the cocycle condition to the next order, what you should find is this first-order deformation cupped with itself in H^2 of End E — and for that you've got to remember what the cup product is in Čech cohomology, and nobody knows that. So these are easy exercises for me to state, and they're kind of lengthy for you to do, and you shouldn't worry about taking hours or days over them, or discussing them with your colleagues. And this is the obstruction to extending the deformation to second order. I'm going to do all this in more detail for sheaves, by a different method, in a second. But there's a general principle in these deformation problems: you usually find a bunch of cohomology groups where, say, the zeroth cohomology group is the infinitesimal automorphisms — that's clearly the case here — and people tend to call that T^0; the deformations are the next cohomology group, that's what we saw here, and these tend to be called T^1; and then the obstructions are the next cohomology group up. So let's do this for sheaves — and here I really did the exercise, so I'm honest, and you'll see it's non-trivial to make it all fit together. Okay, so I'm going to do deformations of sheaves. I see people are taking notes; I can give you these slides afterwards, and I think they're going to be posted online without the pauses, so you will have something to look at. Okay, I'm going to work over Spec of the dual numbers, so I'm setting t squared to zero. Later, when I do obstructions, I'll go to the next order by working over this A_2 space, and A_0 is just the origin, Spec of C. Okay: a first-order deformation of a sheaf is a sheaf over X times A_1, Spec of the dual numbers — so X plus a little vector normal to X, X times a little fat point. I want things to be flat over A_1; I'm not going to go over flatness, just for lack of time. And the sheaf over this thickened space should restrict to my original sheaf over X times the origin. That's what a first-order deformation is; now I want to show how to describe them. Okay, so suppose we have a first-order deformation — we'll go in the opposite direction in a minute. Take a first-order deformation and restrict it to X; then you get your original sheaf E_0. And so what you end up with is this exact sequence, because the kernel of restricting to X is the image of multiplication by t. And since t squared is zero, the first map factors through E_1 modulo t: it kills anything in E_1 that's been multiplied by t, because t squared is zero, therefore it factors through here, and this of course is E_0, all right? And then the exercise is to check that the result is an exact sequence — that you can make this exact on the left if you replace that E_1 by E_0. So flatness makes this an exact sequence. You should think of it like this: the X direction at this stage is not so important. Modulo all the stuff going on in the X direction, what this looks like in the A_1 direction is just: here are the functions on A_1, I restrict them to the origin, and the kernel is just a copy of C — the functions multiplied by t. So that's what's going on here: those are your two E_0's, and this is the E_1, up to what's going on in the X direction.
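In symbols, the sequence I keep pointing at: flatness of E_1 over A_1 = \operatorname{Spec}\,\mathbb{C}[t]/t^2 gives a short exact sequence of sheaves on X,
\[
0 \;\longrightarrow\; E_0 \;\xrightarrow{\ \cdot t\ }\; E_1 \;\xrightarrow{\ t = 0\ }\; E_0 \;\longrightarrow\; 0,
\]
which, forgetting the X direction, is just the toy sequence 0 \to t\,\mathbb{C} \to \mathbb{C}[t]/t^2 \to \mathbb{C} \to 0 on A_1.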
Okay, and now if we take sections in the A_1 direction — push down to X — this gives us an exact sequence on X. Originally this was an exact sequence on X times A_1, but because the A_1 direction is affine, we can just take sections in that direction and get an exact sequence on X. So we get an extension, and that's classified by this Ext group: we get an element of this Ext group. And the claim is that this completely classifies the first-order deformation — so I need to go backwards: given one of these, I claim I can produce a first-order deformation E_1. So conversely, given a first-order deformation, I get an extension on X, and I'm going to call these two maps iota and pi. That's fine, but that's a sheaf on X, and what I'm meant to produce is a sheaf on X times A_1. It's already an O_X module; I need it to be an O_X[t]/t^2 module — I need to tell you what the action of t is on this E_1. But we know what the action of t should be on E_1 from this exact sequence: it should kill everything coming from the left, because t squared is zero. So it should kill E_0, because this inclusion iota is what we're expecting to be multiplication by t. In other words it should factor through here, so you project to here, and then, because the action of t should be multiplication by t, you take this guy, stick it there, multiply by t and end up in there. I've probably confused you completely, but anyway, my claim is that you can make this into an O_X[t]/t^2 module by making t act as this map. And then the exercise is to show that's correct — to show that the result is flat over A_1. So there are two parts to this exercise: first, you should show that I have described an O_X[t]/t^2 module, in other words that this map here has square zero and commutes with O_X — that bit's sort of obvious, it's an O_X module map, but you should show that t squared, this map here, has square zero, so it really defines the structure of an O_X[t]/t^2 module. And then, once you have this module, you should show it's flat over A_1. When you get everything in the right order it's completely trivial, it's one line, but getting everything in the right order is hard the first time you do it, and you learn a lot. Okay. So what you find is that first-order deformations are given by this Ext^1, so that's the tangent space of the moduli space. And another exercise is to relate this to the description I gave you before for locally free sheaves. For vector bundles, when you have a vector bundle, this exact sequence on X splits locally — exact sequences of vector bundles are always locally split. So you glue the splittings by transition functions over the overlaps, where the two splittings are not compatible: instead of taking a direct sum here, you change them by some map from here to here, by this upper-triangular guy here. Okay, and you should see this recovers the previous description. Okay, so now to second order: if I have a first-order deformation, there might be an obstruction to extending it to second order, and that should lie in Ext^2 now, the next cohomology group up, and I want to see that.
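Going back to the converse construction for a second, the first half of that exercise really is a one-liner once it's set up: given the extension 0 \to E_0 \xrightarrow{\iota} E_1 \xrightarrow{\pi} E_0 \to 0 on X, define the action of t on E_1 to be \iota \circ \pi. Then
\[
t^2 \;=\; \iota \circ (\pi \circ \iota) \circ \pi \;=\; 0
\]
because \pi \circ \iota = 0 by exactness, so E_1 is an \mathcal{O}_X[t]/t^2-module; flatness over A_1 is the remaining, harder, half.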
So I really did this exercise — Pierrick's seen this once before; do you remember this, five years ago? So let's suppose we had managed to find a second-order extension: a deformation not just of my E_0 but of my E_1, my sheaf over X times A_1, extended flat over A_2. Call it E_2. Then we get an exact sequence here: I can take my sheaf over A_2 and restrict it to A_1. The kernel is the stuff that's already a multiple of t squared, because t squared is zero on A_1, and what you find, by flatness, is that the kernel is t squared times E_0. Modulo what's going on in the X direction, what I've written down in the A_2 direction is just: here's the A_2 guy, I map it to the A_1 guy, and the kernel is just a single copy of C. But instead of restricting to A_1, I could restrict to A_0, the origin: I could just map to the origin here, and then the kernel will be t times C[t]/t^2 — essentially the functions on A_2, multiplied by t under this map. Okay, so let's put that on the diagram. There are the two different ways I can look at this sheaf on A_2: I can either restrict it to A_1 — that's the horizontal one — or to A_0 — that's the vertical one. And when I do that, they fit together in the following way. When I restrict vertically to A_0, I have this very big kernel, and the smaller kernel sits inside it: in this diagram, the functions which are already multiples of t certainly contain the functions which are already multiples of t squared. So they fit into this exact sequence, and similarly down here: once I've restricted to A_1 horizontally, on A_1 I can further restrict to A_0, so it's obvious this restriction map to the origin factors through restricting first to A_1 and then restricting to the origin. That's all this diagram says. So I end up with this. You recognise this vertical guy: it's the original description of the extension E_1 in terms of an Ext class between the E_0's. This is the new one I've got. And this is also the original extension of E_0 by E_0 giving E_1, all multiplied by t. Okay, so everything is familiar in this diagram, but you need some time to absorb it, and you won't do it right now. And then you can look at what this says in terms of Ext groups. My E_2 defines an extension on X of E_1 by E_0 — that's this guy, a class in Ext^1(E_1, E_0). When I go up to this row, that gives me the extension I started with, which defined E_1: the original extension of E_0 by E_0, and that's this guy here, in Ext^1(E_0, E_0). So my E_2 restricts to my e_1 when I look at what the extensions do on the kernel of the map to the origin. And this sits inside an exact sequence — this is just the long exact sequence of Ext groups into E_0: I take this vertical column here and take extensions of these sheaves into E_0, which gives me a long exact sequence of Ext groups. And the next Ext group along is Ext^2 from this guy E_0 to E_0, and this is the coboundary map. And what you see is that when I have an extension E_2, it gives me this extension class which maps to e_1, and therefore e_1 must map to zero here.
All right, so when I have a second-order extension of my sheaf, the first-order extension class must map to zero in this Ext^2 group, because it comes from here. And so what you find is that this E_2 exists — I can find an extension here which maps to the extension e_1 that I started with — if and only if e_1 maps to zero in Ext^2 here. Okay, so we call this the obstruction class. The logic is probably a bit confusing here, because I assumed E_2 exists, but I will go backwards in a minute. Given my e_1 class — given my first-order deformation — I consider its image under this coboundary map. I'm only using e_1 to define this part of the diagram, because I'm only using this exact sequence. Oh, it's come back again — does that mean someone's asked a question? I'm going to point anyway, for you guys. So given an e_1, I can still form this right-hand vertical sequence, and therefore I can take this coboundary map, and so I can define the coboundary of e_1 as a class in Ext^2, and I call that the obstruction — the obstruction element. This obstruction element in Ext^2 is going to vanish whenever I have a flat E_2, whenever I have a sheaf to second order — that's obvious from this exact sequence of Ext groups. Okay, and now I need to check the converse: when this obstruction class vanishes, can I produce an E_2? And probably I just set that as an exercise, I can't remember. So now we have the same problem again — whenever I go into the chat, I can no longer move my slides, with the keyboard or with this. What did you do last time? — I pressed some random buttons. — Okay, please, just lean on the keyboard. — I worked my magic. — Okay, great. Thanks, Andrzej. — Yeah, so there's maybe a stupid question about this. The obstruction class for the first-order extension was kind of quadratic, right? It took the cup product of it. — Yeah, that's on the next slide. — But for the second-order extension it seems like the obstruction class is linear, because you just take the composition with the coboundary map, which is linear. — Yeah, but that coboundary map is cup product with e_1. You're going to see that; I think I'm going to answer your question — I'll check with you in a minute. — Yeah, well, how did you do that? — Ladies and gentlemen, Andrzej. Right, so I think this answers your question, but I'll check with you in a minute. The right-hand vertical exact sequence was given by the extension class e_1 — we just go back: this guy, the right-hand vertical exact sequence, is the extension by e_1 — and therefore this coboundary map is cup product with e_1. That's how Ext works. What's this called? Somebody's description of Ext — there's a name attached to this. — Yoneda? — Yeah, that's correct; that's the Yoneda description of Ext. So this obstruction class is quadratic: it's e_1 cup e_1. Is that what your question was? — The question was about the first-order extension. — Yeah, that's true. — And so you're saying that the second-order extension has obstruction class equal to e_1 cup e_1? — No, what I'm saying is that the first-order extension class defines the obstruction class, and what this means is that it vanishes if and only if my first-order extension extends to a second-order extension. So this vanishes if and only if e_1 lifts to an E_2. If this vanishes, then there exists an E_2 — there's a whole choice of them, but there exists an E_2, and now you get a second-order extension.
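So, to summarise the answer to the question in symbols: the first-order class e_1 \in \operatorname{Ext}^1(E_0, E_0) determines the obstruction via the Yoneda cup product,
\[
\mathrm{ob}(e_1) \;=\; e_1 \cup e_1 \;\in\; \operatorname{Ext}^2(E_0, E_0),
\]
and this vanishes if and only if e_1 lifts to a second-order deformation E_2.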
— So you also had previously a cup product of some class, in the Čech description? — Yeah, that was for vector bundles, and that was a special case of this. There, for vector bundles, because everything was locally trivial, there were no problems with doing these extensions locally; the problem was whether they glued globally, so it was to do with the cocycle condition. But these two classes are the same: if your sheaf is locally free, then this is the same. — That vanishing was for the first-order extension, though. — No — let's check. Here. So, of course, I didn't do it, but I said: when you take the cocycle condition mod t cubed, so you look at whether it defines a second-order extension, then you find the obstruction is here. Yeah. Okay. I'm keenly aware that if this is new to you, you cannot absorb it in 10 minutes in a lecture. This is a guide to what you should go away and work extremely hard on and suffer with — I have sympathy, I've suffered with it in the past. Okay. So we got to here: when the obstruction is zero, I get an E_2 — I can lift my e_1 to an E_2 — and that E_2 gives me the central horizontal exact sequence. And again, my claim is that this defines a second-order extension: it defines an O_X[t]/t^3 module where the t-action is given again by pi composed with iota — sorry, iota composed with pi. Okay. So, exercise: show that this is indeed a C[t]/t^3 module and show it's flat over A_2. And then — this is a bit evil — maybe don't do that. Okay. I didn't do this example because I expect you to understand it; I did it to show you that these exercises can be done, and they're hard, and they take time, and you have to suffer. And I just wanted to be honest and show you that I have done the examples myself at some point in my life. But yeah, they're tough. — So would this obstruction class characterise the second-order extension, or is it just existence? — Just existence, because there are choices: there's another group down here, right? In this long exact sequence of Ext groups we have, underneath, Ext^1 from E_0 to E_0. So given your e_1, the choices of lift to E_2 are given by this Ext^1(E_0, E_0); they're not unique, they're only unique up to the action of Ext^1. So as you go to second order, you can pick a first-order deformation to deform it by — instead of picking a straight-line deformation, you can make it curve; maybe that's not a very good way of saying it. Okay. So you can do all these things. I just wanted to do it because whenever you look at a book, they always say: here are the first-order deformations, and one can show the obstructions live in this group — and then no one ever does it. So there you go, so you do it. All right, how are we doing for time? We should stop. Yeah. Okay, so I'll carry on next time. — There is a question: does it extend to third order — is there a neat formula for the obstruction there? — Yeah, there is. So I guess the question is whether we expect Ext^3 or H^3. No — it also lies in Ext^2, it lies in the same group. — What is the obstruction to going to third order? What's the formula? — Yeah, and it's very similar: basically, instead of e_1 cup e_1, I want to say it's e_1 cup e_2, but you have to interpret that correctly. It lies in the same group, Ext^2(E_0, E_0).
So, yeah, it's important that the obstructions and all the choices of deformations at each order are all governed by the original E_0 — that's kind of interesting. Okay, so whoever asked that question, that's good: you've set yourself an exercise, and I'll let you do it instead of me. Anything else? Maybe this was a depressing note to end on. — Maybe the question is a bit off-topic, but I was wondering: these two stability conditions we defined — are they stability conditions in the sense of Bridgeland? — No, not quite. I mean, they're not, but they're not far off. In the space of Bridgeland stability conditions there's something called the large volume limit, and they're very, very close to being stability conditions there, but not quite. For curves they are — for curves they are, that's correct. And then for surfaces, the way you deal with the zero-dimensional sheaves is slightly different, and it's to do with the fact that slope only sees the rank and the first Chern class — it doesn't even see the second Chern class. But for stable objects you have a Bogomolov inequality, which means you have some control on the second Chern class, and that's dealt with slightly differently in Bridgeland stability: you need to do a certain tilt, where those zero-dimensional sheaves get shifted by a [-1] or something like that, and then work in that abelian category. I don't know if I'm saying words that mean anything to you, but yeah, you have to deal ever so slightly differently with the zero-dimensional sheaves. But it's very close. Other questions or comments? Anyone? So let's thank each other. Thank you.
This course has 4 sections split over 5 lectures. The first section will be the longest, and hopefully useful for the other courses. - Sheaves, moduli and virtual cycles, - Vafa-Witten invariants: stable and semistable cases, - Techniques for calculation --- virtual degeneracy loci, cosection localisation and a vanishing theorem, - Refined Vafa-Witten invariants
10.5446/55111 (DOI)
Like my distinguished colleagues, I am also very happy to be here in Lindau. I am also a little mystified, because I received this letter yesterday from a person who wanted an autograph, and I would like to read it to you. He said: Dear Professor Giaever, I study physics and was recently present at the 41st meeting of Nobel Prize winners. There I listened with great interest to your lecture, "Fractal noise from normal and cancer cells." Now, if any other people here have access to this time machine, please contact me — right after the lecture. Now, the good news is that the lecture was really unforgettable. I am a physicist who has wandered into biology, and if you work in biology, the object is to work with something which is living. Then you have a choice between many different objects: you could work with elephants or you could work with flowers. But coming from physics, the idea is to pick the simplest living thing you can work with, and the simplest living things you can work with are cells. If you magnify a bit of an elephant or of a flower, what you find is that you find single cells. And the amazing thing is that while the elephant is alive, the cells are alive by themselves. Each of you consists of roughly 10 to the 14 cells, and each of these cells has its own life, which is very fascinating to think about; so you are a very strange kind of organism. Now, the very interesting thing to me is that these cells can be grown in what we call tissue culture. Let me explain to you how that goes. You take a little plastic Petri dish, and in the Petri dish you put a saline solution containing all sorts of goodies you think the cells would like to eat. Then you go and get a little fresh piece of meat — normally you can't buy it in a supermarket, so you have to sacrifice a mouse or something like that. You put the piece of meat in the tissue culture dish and wait for a while, and out come little cells crawling on the bottom, and now you have established a tissue culture. It's really a very simple experiment to do. And then you can study these cells by themselves and hope that what you learn from these living cells can be translated back to the person, or the mouse, which the cells came from. Now, the cells have a very simple life cycle. The kind of cells I work with are called anchorage-dependent cells, and most human cells are anchorage dependent: they have to attach themselves to a surface. If they like what you feed them, they will stretch out and start crawling on the surface, then they will grow fat, round up, and divide, and now we have two daughter cells crawling around on that surface. And this process repeats itself — if you deal with normal cells, it repeats itself roughly 50 times. Now, 50 is a biological number: it can mean 60, it can mean 40, but it's roughly 50. And when you have gone through this cycle roughly 50 times with normal cells, they roll over and die: they refuse to crawl any more and they eventually die. So you ask, what is the reason for that? Well, we all started from a single cell — a chicken egg is a single cell — we started from a fertilised single cell, and when that fertilised single cell has divided roughly 50 times, you're looking at me. So any moment now, I may roll over and die; I hope it happens after I've been to Mainau. Now, what you can do with these cells is harvest them and start the process over. However, I also study cancer cells.
And cancer cells can go through these cycles an infinite number of times, as far as anybody knows. Cancer cells have no natural death, which is very surprising — nobody knows why. Now you recognise that if you want to live forever, the only way you can do that is to get cancer, which is strange, somehow or other. Now, the only other thing you need, besides the plastic dishes, to do tissue culture is an incubator. This is my good friend Charlie Keese here, who works with an incubator, and these are the sizes of the plastic dishes. I might say here that growing cells in tissue culture is really like an art, like growing flowers: some people have a knack for it, some people don't. Dr. Keese has a great knack for it — I actually caught him a few times talking to the cells, but he denies that. Now let me show you what these cells look like. This is a thing I've stolen from Science — Science doesn't want to publish my papers, but at least I can steal their covers. Here is an optical microscope picture of cells growing on some of these plastic dishes. What you see on the top here are cancer cells; what you see on the bottom are normal cells. You see the normal cells grow in regular swirling patterns, as people describe it, while the cancer cells grow randomly. If you're used to looking at that picture: this is a single cancer cell, and this is a single normal cell. Now, if you're unfortunate enough to get cancer, the medical doctor will take a biopsy — that means he takes out a little piece of meat, looks at it, and says either it looks like it grows randomly, so you need to operate, or it looks regular and you're okay. But you see it's a completely subjective definition. It's very difficult for a doctor, but there's no other way of doing it. It's like if I show you a piece of black paper and ask what's the colour, you say black; I show you a white piece of paper, you say white; I show you a grey piece of paper and you say grey, and I say uh-uh, you can't say that: you either have cancer or you don't. A grey piece of paper — you've got to call it black or white. So one reason people like me work in this area is that we hope to find some sort of mystical powder you can put on the cells so that the cancer cells turn red and the normal cells turn green, but we don't have that powder yet. Now let me try to explain to you what we are interested in. This is a simple cross-sectional picture of a cell. The cell will always grow on a protein layer which is on the plastic dish; the protein layer is there because the medium which you feed the cell contains some protein. Now, the cell, as I said, is anchorage dependent, which means it attaches itself to the surface. When it attaches itself to the surface, it uses some kind of glue. Nobody knows what kind of glue that is — it's not known; there are several theories and I don't agree with them, so I don't want to mention them at all. So one thing we're interested in is: what glues the cell to the surface? Another very interesting thing is that the cell pulls on the surface — the cell is naturally spherical, it sits on the surface and becomes flat because it pulls on the surface. The question is what kind of forces the cells pull on the surface with, and I'm presently measuring those forces, but I'm not going to talk about that today either, because I'm going to talk about cell motion. One thing I'm very much interested in is how cells recognise each other and how they move.
And that's what I'm going to talk about today: I'm going to show you how you measure cell motion using a new electrical technique. Actually, it's not all that new, because I talked about it here at Lindau three years ago, but we are learning as we go along. Now, the principle of the technique is very simple. You have two electrodes: a large counter electrode and a small electrode. And remember, the cells grow in saline, which conducts electricity. So if you apply a potential here, you will get a current flowing from one electrode to the other in the saline. Now, if you block one of the electrodes with cells, you can imagine that the resistance will change, and that's exactly what happens: you block some of the available electrode with cells, which makes the resistance change. And this is very new in tissue culture, because normally people only look at the cells in the microscope and describe to each other what they see; what we are trying to do is introduce some absolute measurement into this particular area. Let me show you one typical example. Here is the potential — equivalent to a resistance measurement — of such a Petri dish. We introduce cells here at time equal to zero and wait to see what happens. You see that as time goes along, after one hour you have a rather large resistance, and this is because the cells you introduce drift down to the surface and start attaching and blocking the electrode. Then after a while you go through this maximum, which we don't quite understand, and then the cells go on and you get all these wiggles, which are due to the cells not sitting still on the electrode: they keep moving all the time because they are alive. So we are much interested in measuring this kind of motion. Let me actually first explain to you why this works. I have difficulty with biologists here, because they always think there is some mysterious kind of interaction, but there really is not. The cell is surrounded by a membrane, and the membrane does not conduct electricity. So if you have an electrode here, the current has to go around the cell. If the cell stretches out, the current has to take a different path, and if the cell blocks more of the electrode, then you can easily get an increase in resistance. So that is basically what you measure. Now, the system we have looks like this. Here we have a lock-in amplifier, which is a fancy voltmeter. We apply, normally, a 4000 hertz signal through a one megohm resistor, so we get an essentially constant current flowing in this circuit. We measure the voltage with the lock-in amplifier, and since we have cells on this little electrode, when the cells move there are changes in the voltage on the lock-in amplifier, and we can interpret those changes as the motion of the cells. The important thing is to have a small electrode, because a small electrode acts as a bottleneck in the whole system; this experiment does not work unless one of your electrodes is very small. Let me show you a typical example which we are very interested in. We are very interested in what kind of surfaces the cells like. If you take an electrode and pre-coat it with different kinds of protein, you can ask which protein the cells prefer to attach to, and it turns out that they prefer something called plasma fibronectin, far above something called bovine serum albumin.
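To come back to the circuit for a moment — a back-of-the-envelope justification of the constant-current claim, with the drive amplitude left symbolic since I haven't quoted it, and assuming, as is the point of the design, that the small electrode's impedance is far below a megohm:
\[
I \;=\; \frac{V_s}{R_s + Z_{\text{electrode}}} \;\approx\; \frac{V_s}{R_s}
\quad\text{when } R_s = 1\,\mathrm{M\Omega} \gg |Z_{\text{electrode}}|,
\]
so the current is fixed by the source and the series resistor, and the lock-in voltage V = I\,Z_{\text{electrode}} tracks the electrode impedance directly: fractional changes in the measured voltage are fractional changes in the impedance as the cells move and rearrange.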
And the reason we are interested in that is that if, for example, you are a woman and you unfortunately get cancer of the breast, then this by and large is not dangerous; you get big lumps in your breast, but it doesn't hurt you. What is dangerous is when the cells metastasize. That means the cells move from the breast into, say, the bone marrow, and then the cancer kills you. And it turns out that cancer cells, for example breast cancer cells, have a preference for going into bone marrow. Why? People don't know. So we are interested in investigating these kinds of things in simple model systems. And clearly you see a big difference depending on what kind of protein the cells are exposed to. Now let me look at some typical curves. These are curves where the voltage goes up this way and time goes this way. These are cancer cells, and you see you get a lot of motion from the cancer cells; they oscillate a lot, lots of motion. Look at normal cells: they are more quiet. And if you look at cells that have been killed, you see you get no noise at all. So there is no question that this noise comes from the cells. Now, if you look at these curves you tend to fool yourself, or at least I do. Because if I look at that curve I will see wonderful regular oscillations. And I spent a lot of time trying to analyze these kinds of oscillations. Unfortunately it basically is noise, but it is a kind of interesting noise. Let me talk a little bit about that. These are the kinds of noise which physicists recognize. There is white noise, which everybody sort of thinks they understand. There is Brownian noise. White noise has no dependence on frequency; Brownian noise goes as one over the frequency squared, and physicists sort of know about that. And then we have a very interesting noise, one over f noise, which falls in between. One over f noise appears everywhere, and we basically don't understand it. And it may come as a shock to you, but if you went to the concert yesterday and heard Mozart and Beethoven, and you look at music — if you analyze music, it goes as one over f. So music, as noise, is one over f. And Brian Josephson had a point there: if you look at everything as physics it doesn't quite work, because clearly there is something more to music than one over f. But Mozart or the Beatles or the Grateful Dead, which were mentioned, all go as one over f, which is very interesting. Actually, there is a story here which I can't resist telling. There is a physicist who discovered that; his name is Voss. He works for the IBM company now. He was a student in San Francisco, and he discovered that music goes as one over f. And what he did was this: if you take an ordinary transistor and measure it, you find that an ordinary transistor has one over f noise. So he hooked the transistor up to an amplifier and then to a speaker. And to his surprise, out came Chinese music. And living in San Francisco, this is a business opportunity, because there are a lot of Chinese restaurants. So he took his invention to a Chinese restaurant owner, whom he knew slightly I think, and he demonstrated the whole thing, and the Chinese man was nodding his head very seriously, and he asked him, what do you think? What do you think? And the man said, well, it sounds Vietnamese to me. So even though music goes as one over f, it depends upon how it goes as one over f, I suppose. 
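As a reference for the three kinds of noise just mentioned, the power spectral densities can be summarized in the standard way (these are the textbook definitions, not formulas from the slides):

S(f) \propto f^{-\beta}, \qquad \beta = 0 \ \text{(white noise)}, \quad \beta = 1 \ \text{(one over f, "flicker" noise)}, \quad \beta = 2 \ \text{(Brownian noise)}.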
But anyway, when we analyze our data and take something called the power spectrum, it turns out that random noise goes as one over f squared. Our normal cells have a slightly different slope, and the cancer cells a slightly different slope again; there is much more noise from the cancer cells. But the power spectra themselves are very disappointing, because as a physicist, or even as a biologist, you would like them to have big wiggles in them. You want to see some preferred frequency. If you see some preferred frequency you can do something with it. If you just see a general level, it's very, very hard to do anything with it. Now, fortunately it turns out that these curves are fractal, and I have to admit that when I talk to my biologist friends about fractals their eyes glaze over. I mean, fractals are not really very interesting to them, and not really to me either, but when you do an experiment you have to save something from the ashes, and what we save here is that the time curves we get are fractal. So here is the resistance over a thousand minutes, here is the resistance over a hundred minutes, here over ten minutes, here over one minute, and you see the statistical nature of these curves is exactly the same. You may get a little disturbed by the steps here, which come from the amplifier, which is a digital amplifier and can only measure things in finite steps. But you see the statistical nature is exactly the same, and that means that if you can't tell what the time scale is, then you know that these curves must be fractal. And then you can of course analyze it, and even though the red light was taken off — I think it has been put back on again — I don't have enough time to explain that. Let me just say that we use a method called Hurst rescaled range analysis. It is just a standard, simple procedure, and if you go through that procedure, out comes, on a log-log plot, a straight line. If you plot coin flipping you get a Hurst coefficient of point five, which means the fractal dimension is one point five. If you do this with normal cells, the Hurst coefficient — the slope of the straight line — is point six and the fractal dimension is one point four. With the cancer cells it is point seven and the fractal dimension is one point three. There is a clear difference between normal cells, cancer cells, and random noise. So you know that these things are not random noise. Curves with a Hurst coefficient greater than point five are called persistent. This is a very interesting phenomenon which is not understood, and it turns out that all macroscopic phenomena measured in this way, to my knowledge, are always persistent. That means, if you go to the roulette table and play on red, that if red comes up, red has a bigger chance of coming up again. That is persistency. Antipersistency is: if red comes up one time, you are sure that black comes up the next time. And most people think in terms of antipersistency. I actually have good news for you: the weather is persistent. There was good weather here yesterday and good weather today, so it's going to be good weather tomorrow. But I have better news for you: the weather is persistent on all scales. There is going to be good weather a year from now. And for us physicists, it's going to be good weather three years from now. As a matter of fact, it could be good weather a thousand years from now. And this is really true. 
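The rescaled-range procedure mentioned here is a standard one. The following is a minimal sketch in Python (my own illustration with numpy, not the speaker's actual analysis code) of how the Hurst exponent, and from it the fractal dimension D = 2 - H, is estimated from a time series.

import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())     # mean-adjusted cumulative sum
            r = dev.max() - dev.min()         # range of the cumulative deviations
            s = w.std()                       # standard deviation of the window
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    return np.polyfit(log_n, log_rs, 1)[0]    # slope of log(R/S) vs log(n)

# Uncorrelated noise ("coin flipping"): H should come out near 0.5,
# so the fractal dimension D = 2 - H is near 1.5.
noise = np.random.standard_normal(4096)
h = hurst_rs(noise)
print("Hurst exponent:", h, "fractal dimension:", 2 - h)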
I mean, this is a very peculiar thing, that all phenomena analyzed like this are persistent. And if you are very interested in that, you have to go through Mandelbrot's book, which nobody has been able to read, but somehow it's in there someplace. And so this means that the cells, on this time scale, have a sort of memory. If you are persistent, you know what happened in the past. Of course they don't have a real memory, but there is some chemical reaction or whatever that knows what happened in the past. Now, I may have misled you, so let me try not to do that; I don't want to do that one way or the other. I said that you get the noise from the cells because they move. And you may have thought that we have cells moving on and off the electrode, so that of course the electrode gets more or less covered — and we have indeed seen that. As a matter of fact, this curve here comes from a single cell on the electrode. But those curves we have not had enough time to analyze yet, so we have just taken a few curves like that. Normally we work with confluent layers. That means the electrode is completely covered with cells. But even so we get a lot of noise. So how can that be, when the electrode is always completely covered? So we have developed some theories, which work very well. The first theory, which I talked about, was that the cells block some area. That is theory number one. In theory number two the cells stand up on pseudopods, and current can flow under and around the cell. And theory number two is correct: the cells do indeed stand up, and current can flow under and around the cell. Let me, before I talk any more about that, show you what these experiments look like. This is a little bit different kind of experiment. We have measured the resistance of the naked electrode, which is here, as a function of applied frequency, and you get a curve like that, which we basically understand. Then, if you put cells on the electrode, you get the blue line when you measure it. If you use theory one, where you block the electrode, you get the green line, which doesn't fit at all. If you use theory two, you get the red line, which fits more or less exactly. And the adjustable parameter is the distance the cell is up from the electrode. So this is what happens: the cells sit up from the electrode, and the main parameter is the distance between the electrode and the underside of the cell. Now, when you do this, the cell of course more or less looks like this, but you have to choose a model. You can take a circular disc or you can take a rectangular model. It doesn't really make much difference, but I worked in applied mathematics once, and I know that if you use a circular disc you get Bessel functions, and I know that Bessel functions should be avoided at all costs. Therefore we normally work with the rectangular model. I don't want to go through the detailed theory, but this shows you how it goes, and it's really very simple, because all you do is use Ohm's law. You use one Ohm's law for the current in the channel between the cell and the electrode, flowing out this way, and another Ohm's law for the current going from the electrode out into the channel. So there are two Ohm's laws, which you combine, and you get the solution to the problem. And remember that this impedance going out here is the impedance you have measured, so there are no adjustable parameters there. The main adjustable parameter is this height here, which is the interesting thing. 
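For readers who want to see how the two Ohm's laws combine, here is a minimal one-dimensional version of the rectangular model — my own sketch with assumed notation, not the speaker's exact equations. Let V(x) be the potential in the channel of height h under the cell, \rho the resistivity of the medium, Z_n the specific impedance of the electrode-solution interface and Z_m that of the cell membrane (both per unit area), with the electrode at V_e and the bulk solution at zero. Lateral Ohm's law in the channel plus current conservation give a cable-type equation,

\frac{h}{\rho}\,\frac{d^2 V}{dx^2} = \frac{V - V_e}{Z_n} + \frac{V}{Z_m},

whose solutions are hyperbolic functions with characteristic length \lambda = \sqrt{(h/\rho)\, Z_n Z_m /(Z_n + Z_m)}. Matching the current at the edge of the cell then gives the impedance of the covered electrode, with the height h as the main fitting parameter, as described in the talk.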
And when we do this, we get more or less exact agreement with the experiment. There are some details that I won't go into, but if you believe my model and you look at a curve like this, where we see the noise from the cells, then this distance corresponds to 0.05 nanometers. So we can see a change in the height — an average change in the height — of much less than an angstrom, which is absolutely astounding. Now, you may say, I don't believe your model. So I'll tell you: the truth is that if you look at the cells in the microscope you see no motion at all, but if you look electrically you see the curves go up and down. So clearly you cannot actually see the motion which you can measure electrically. This is a very new way of measuring cell motion, to which you normally have no access otherwise. What my friend and I are doing is trying to develop this into a biosensor. What I mean by that is this: a cell has a metabolic process, and what we measure is really cell motion, or actually changes in cell shape. What people are interested in measuring is the metabolic process. So we try to connect this motion to that process. If we can do that, then you can tell everything about the cells just by looking at the motion, which would be very valuable. Now, since the red light is on, and since I know even less than the previous speaker, I have to stop soon. But let me talk about one or two more experiments. One experiment I'm now doing in Norway together with a biologist named Morten Laane. We work on a very peculiar organism, which he knows a lot about, called Physarum. It has a very complicated life cycle — particularly since this slide is written in Norwegian, it must look complicated to you. But the interesting thing with this particular organism is that when we measure it with our system, it oscillates, which is wonderful. You get wonderful oscillations. I listened to Norman Ramsey's talk, and he said that his time scale stopped at about 5000 before Christ, but this organism here has been oscillating for about a million years, trying valiantly to keep track of time. Unfortunately it oscillates something like 28 to 29 times an hour rather than 30, and that's very disconcerting for the Swiss, I think. So what we are hoping to do with this particular system is to make a biosensor, where people can use this system to look at how various drugs and other things interact with this organism. It is a very easy way to measure it. At the very end I would like — everybody at my age likes to give advice to students; I mean, that's what we are here for, after all. Schrödinger once wrote a book about life, and the question is what life is really all about. In my opinion, life is really governed by physics and chemistry. As a matter of fact, I'm going to go one step further and say that life is governed by presently known physics and chemistry. To understand life you don't have to understand anything more. We know all the laws; the only question is how to do it — but you have to take it seriously. Somehow physicists and chemists don't take life seriously. They throw up their hands and say they can't do it, it's too difficult. And I mean, you shouldn't do that. You should see that it's easy to think about very important problems. Take my favorite problem, for example: memory. Here we have the brain, and I've been talking to you, and hopefully you remember a little bit of what I said, which means that your brain has changed. But nobody knows how the brain changed. 
Is it protein molecules? Is it the synapses? Is it DNA? Nobody knows. So here I have changed your brain; as a matter of fact, I have probably damaged your brain. But the interesting thing is that nobody knows what the change is. And this is such a wonderful problem to work on, if you have the courage to do so. I'm too old to do that. But it's a wonderful problem. Biology is full of such fundamental problems. They're very easy to state, but very difficult to solve. But I'm sure that some of you, sometime, somehow, will do it. Thank you.
This is a general comment to Ivar Giaever’s remarkable set of 11 recorded lectures on biophysics 1976-2004. Giaever has so far (2014) participated in no less than 16 Lindau Meetings, starting in 1976, when he received his first invitation to lecture at the Lindau physics meeting. But it wasn’t until the 2008 meeting, after more than 30 years, that he finally disclosed what he actually received the 1973 Nobel Prize in Physics for, the discovery of tunnelling in superconductors. In year 2000, he did not give a lecture, but sat in panel discussion, and in 2012 he gave a critical talk on global warming, which for a long time has been on top of the list of most viewed Mediatheque videos. But at all the other 13 meetings, he lectured on his activities in biophysics and how these led him into starting a high tech business in the US. It is fascinating to listen to the 11 existing sound recordings, starting with 1976 and following through all the way to 2004. Giaever is smart enough to having realized that the most important part of the audience, the young scientists, change from year to year, so some parts of the lectures (including jokes) appear over and over again. But as time goes, he makes progress in his biophysics research and this leads to important developments and inventions. The starting point in all lectures is the possibility to study biological phenomena in the laboratory using methods from physics. With his background in electrical engineering, it is not surprising that he in particular has used techniques from optics and from the measurement of very small electromagnetic fields. The first two lectures mainly concern proteins on surfaces, but already in the last ten minutes of the second talk, Giaever describes his ideas about working with cells on surfaces. The rest of the talks all concern his studies of the properties of living cells on surfaces. The cells are grown and kept in what is called a Petri dish, a cylindrical shallow glass or plastic container. By inserting a very small electrode made of a suitable metal (e.g., gold) at the bottom of the dish and another above, electronic characteristics of a single cell can be measured. This can be both static and time-dependent properties. A question that has been at the centre of Giaever’s interest has been to develop an objective method to measure the difference between cancer cells and normal cells. Such a method would be an important contribution, since the usual method to distinguish cancer cells from normal cells is by observing their growth pattern in an optical microscope, a highly subjective method where mistakes can be made and have been made. Another question, which Giaever has addressed, concerns what kind of surfaces cancer cells stick to. This can be important to know, because many cancers spread from the original tumour and cancer cells wander to other places in the body and form new growths in places where they stick (metastasis). When he began his activities in biophysics, Giaever worked at General Electric, but after leaving this company in 1988, he accepted a position as Professor at the Rensselear Polytechnic Institute. Together with a colleague he also started a company to develop and market a sensor for cells in tissue cultures. This apparatus is now being produced and marketed (www.biophysics.com). Some of Giaever’s lectures focus on the problems encountered when trying to start a small highly technological enterprise. His account and reflections are interesting and in parts very amusing. 
Some in the young audience certainly could profit from following in his footsteps, in particular from following this advice: If you don’t get funded for your research, start a profitable business to make your own funding! Anders Bárány
10.5446/55084 (DOI)
I should clarify two things in this introduction. My parents came from a town of Rovary in the Ukraine. It's not they who changed the name; in those days, when you came through immigration and the immigration officer had difficulty spelling a name, he simplified it. So that's how we got the name Brown — the first four letters, the same as in Browarnik. Anyhow, another point I want to make. In 1936, I got a bachelor's degree from the University of Chicago. At that time, my girlfriend, not yet my wife, gave me a graduation present, a book by Alfred Stock on the hydrides of boron and silicon. At that time, diborane was a chemical rarity, available in only two laboratories in the entire world. And why did she pick this particular book to give to me? We were very poor in those days; it was the time of the Great Depression, and she got the cheapest chemistry book in the bookstore. But this is one way to find a rich, new research field that will lead you to a Nobel Prize. As it happened, I took my PhD in 1938, 60 years ago. I deliberately decided to speak on this topic because I feel it is inspirational to students. When I received my PhD degree, I thought that organic chemistry was a mature science, that there was relatively little left to be discovered, and that in all probability I would be spending my time working out reaction mechanisms and working to improve yields. But I was completely wrong. In the 60 years since then, I've seen discovery after discovery come along, and I have no evidence at all that things are slowing down. So I want to leave you a message. I want to show you some of the things that we have done, and leave you the message that there are still many new continents out there in science, awaiting discovery by young, enthusiastic explorers. Now, as I told you, diborane was a chemical rarity available in only two laboratories in the entire world. World War II research led to practical synthetic methods for diborane and to the discovery of sodium borohydride. These turned out to be excellent reducing agents in organic chemistry. That was the beginning of the use of hydrogen compounds of boron, the hydrides, for organic reductions. Exploration of their reducing action led us to discover the hydroboration reaction, the addition of H-B bonds — in diborane and similar compounds — to carbon-carbon double bonds to give us organoboranes. Now, when I sent in a communication to the journal reporting that I had made this discovery, the referee recommended that the journal not publish it. They said organoboranes have been known for almost 100 years, and nobody had found anything useful that they would do for organic chemistry; since the main value of this reaction is to produce organoboranes, why publish it? I persuaded the editor that it was true that no one had published anything useful to be done with organoboranes, but I pointed out that it was not that anybody had tried and failed — there was no evidence that anyone had ever tried. So I persuaded the editor, he published it, and you'll see what happened from then on. So this is a general asymmetric synthesis based on chiral organoboranes. We discovered hydroboration, which made organoboranes readily available. These are some of the characteristics we found: in general, in ether solvents, diborane and similar B-H compounds add with great ease to carbon-carbon double bonds to give us the organoboranes. If we took 2-butene and treated it with diborane, three moles of olefin reacted, and we ended up with tri-sec-butylborane. 
We oxidized it with hydrogen peroxide and got secondary butyl alcohol, three moles. Diborane tends to add to the terminal atom of a terminal olefin, so for the first time we could make primary alcohols readily from such olefins. We found that the addition to methylcyclopentene and similar cyclic compounds was a cis addition of the H and B atoms, and then when we oxidized it, it went with retention, so it made the pure trans alcohol. At one time it used to be several weeks' work to make pure trans alcohols of this kind; now we can make them easily. Norbornene, on hydroboration, gave us 99.6% exo. This is one of the things that got me started on my questioning of the non-classical structure, because in the case of the solvolysis of norbornyl tosylate, it was the fact that you get the exo compound from both the exo and the endo derivatives that led people to propose something new and different. But here is a molecule where there is no carbonium ion, and we are getting entirely exo. Then we took alpha-pinene, a very cheap olefin, very easily rearranged, and we wanted to see whether hydroboration would rearrange this compound. It didn't rearrange it. It went simply and smoothly, and we were able to get the corresponding alcohol. But one thing we found was unexpected: only two moles of alpha-pinene reacted, making diisopinocampheylborane. That gave us a new hydroborating agent, optically active. And that's where my story of today starts. So we began the study of the chemistry of organoboranes. Investigation revealed that organoboranes possess an exceptionally versatile chemistry for organic synthesis. Remember what the referee had said — referees aren't always right. You can see here we have 24 major reactions, new reactions. Each of these was a discovery in our laboratory, and each of these was essentially published in a separate paper. Now, we have a number of other reactions as well, but I don't consider them major reactions. So much for the referee who had said there was no future in organoborane chemistry. There was an unexpected development. When we studied the substitution reactions of boron compounds, we found that substitutions of groups on boron usually proceed with complete retention of configuration. That's different from substitution at carbon, which usually goes with inversion of configuration or racemization. Now, as an example, if we make the dimethyl derivative with a B-H bond and add it to this, we get this; treated with hydroxylamine-O-sulfonic acid, we get the corresponding amine: the group goes with its pair of electrons from boron to nitrogen, and therefore you end up with the amine, the pure trans amine. In general, the reactions occur by primary coordination of the reagent with boron and then a rearrangement. This is how we account for the retention of configuration. Now, if we could do asymmetric hydroboration, we would have a general method, a general asymmetric synthesis. I had a student from the ETH who had come just at the time we discovered hydroboration, George Zweifel. He had worked there in sugar chemistry and had spent three years as a postdoc in England before coming. I persuaded him to give up sugar chemistry and look at this new field of boron. And he was a godsend: almost everything he tried worked like a charm. So this shows you that if we can make an optically active group attached to boron, we can carry it through all these reactions and make optically active compounds out of them. This is what George Zweifel did. 
I said to him, look, we have an optically active hydroborating agent. We can't hydroborate a third mole of alpha-pinene — it's too hindered — so let's take a less hindered olefin such as cis-2-butene, hydroborate it, and see whether we can find any optical activity. I was looking for the usual 10 to 20 percent, which people were getting in those days. He ran the experiment. He came running to my office and said the compound shows an optical purity of 87 percent ee. Since the alpha-pinene we started with was only 92 percent ee, we had achieved an almost 100 percent asymmetric synthesis. Now we've improved the method, and you see here in a number of cases we get close. I used to say 100 percent, but someone would always give me an argument, so maybe there is a hundredth of one percent of the other isomer there; we don't see it in GC and so on. So let's say it is equal to or greater than 99 percent ee. Now, we couldn't do it with trans or trisubstituted olefins. So we needed a compound that would have less steric hindrance than the di compound. But when you try to hydroborate alpha-pinene you go right past the mono to the di. So we had to go back and remove one group. If we put in a base like tetramethylethylenediamine, we remove the alpha-pinene and we get the monoisopinocampheylborane addition compound. This is crystalline and precipitates right out. If we take this and treat it with BF3, the BF3 compound of this base is also insoluble and crystallizes out, and you are left with this reagent in ether solution. So we applied it to these compounds, and you see we got 53, 62, 66, 72 percent ee. But we found an interesting thing. When we carry out the reaction — remember, even if you get, let's say, 60 percent ee, that means 80 percent is one isomer and 20 percent is the undesired isomer — if we allow it to crystallize, one isomer crystallizes right out in 100 percent purity, and the other stays in the solution and doesn't do us any harm. So we had an easy way to bring these up to 100 percent. Therefore, if we take this, hydroborate it to this, and add it to this, we have these two groups attached that we don't want. We found an easy way to remove them: if we treat them with acetaldehyde, they come off and give us alpha-pinene again, which can be recycled, so that we now have a boronic ester, a reagent. We had to study the reactions of these; previously we had been using R3B to carry out those 24 major reactions, and we now had to learn how to do it with boronic esters. But we solved those problems, and we could then make a series of boronic esters and use them to make all these optically active compounds. Now, Don Matteson at Washington State University has come up with another approach to making optically active boron compounds: asymmetric homologation. He took alpha-pinene and made a diol out of it, reacted that with a boronic acid, and made this derivative. He found that with the lithium reagent LiCHCl2 at minus 100 degrees he got the addition compound attached to boron. Add a little zinc chloride to help one chlorine ionize off, and he came to this compound, optically active. If you treat this with a lithium alkyl or a Grignard, you replace the chlorine by an SN2 displacement reaction and you end up with an optically active derivative, of a kind we can't get by hydroboration. So we have another approach to making these compounds. And here is an example where we have taken the cyclohexyl or the benzyl or the tertiary butyl derivatives, and we've made the aldehyde, we've made the carboxylic acid, we've made the ketone and the amines, and so on. 
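For readers unfamiliar with the term, the enantiomeric excess quoted throughout can be written in the standard way (this definition is not on the slides, it is simply the usual one):

ee = \frac{x_{\text{major}} - x_{\text{minor}}}{x_{\text{major}} + x_{\text{minor}}} \times 100\%,

so 60% ee corresponds to an 80:20 mixture of the two enantiomers, and the 87% ee product obtained from 92% ee alpha-pinene means the reagent transferred its chirality almost quantitatively.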
So now we can have these compounds, made by the Matteson procedure, and we can apply them to the 24 different reactions. Actually, we only applied it to about seven or eight, because we got tired; everything was working, so there was no challenge there. But let's see what the scope is now. Examination of the scope of this boron-based approach to the synthesis of pure enantiomers reveals that the number of pure enantiomers that can be readily synthesized by this approach is over 100,000. We have made in our own laboratory 34 different R*B derivatives. If we use minus alpha-pinene instead of plus alpha-pinene, we get a total of 68 starting materials. Now we can also make the corresponding compounds through the homologation; let's assume for simplicity that you get the same number by the Matteson procedure, so you double the number to 136. But then you can put on one carbon, or two carbons, or three carbons, and you get different compounds; each of these is optically active with a different structure. Or you can do it in one step, using the alkyl chloride and going in one step to the three-carbon homolog. So now we have to add these to the list. If we add these to the list, we now have a multiplier of the 24 reactions. So we will have first 34; with minus alpha-pinene we get double that, 68. Homologation gives you an equal number, and then adding one carbon atom, two carbon atoms, or three to any one of these gives new structures. So you have a total of 544; with 24 major reactions, that's 13,056. But each of these can make many compounds. For example, if you make a carboxylic acid you can make many esters. If you make the corresponding amine you can make primary, secondary, and tertiary amines — put on a methyl, ethyl, isopropyl, tert-butyl, or cyclohexyl group — each with the optically active group. So let's take roughly 10 products per reaction; we will then get about 130,000 optically pure compounds. Remember, this is a rather new thing, because all this optically active work was actually done since the Nobel, so it represents a development after I retired in 1978 and was awarded the Nobel Prize the following year, in 1979. Now, another approach is asymmetric reduction. Asymmetric reduction provides still another boron-based synthesis of pure enantiomers. One of my students, Mark Midland, took alpha-pinene, added it to 9-BBN, and got this compound known as Alpine-Borane. He found that if he reduced deuterated aldehydes he got 100% ee there. If he applied it to acetylenic ketones, again he got close to 100% ee. But those were very fast reactions. This one was much slower, acetophenone, and he got only 10% ee. The trouble is that if you have a cyclic mechanism, you retain the optical activity and you get close to 100% ee. But if you have a slow reaction, another pathway takes place: there is partial dissociation of this into 9-BBN and alpha-pinene, this reacts rapidly to give inactive product, and then you don't get any ee. Fortunately, we found a way around it. We took alpha-pinene and made diisopinocampheylborane, and added HCl; we got the chloroborane, now sold commercially as DIP-Chloride. We took this and reacted it with acetophenone — this is rather fast — and we got this intermediate; alpha-pinene came off, and if we treated it with diethanolamine, we could precipitate the isopinocampheylboron residue as the ester, and you got this, and this was 98% ee. So we are now getting up very high in the ee in reduction. 
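The compound count quoted here works out as follows (my own arithmetic check of the numbers in the talk, not a calculation shown on the slides): 34 reagents times 2, for the two antipodes of alpha-pinene, gives 68; an equal number from the Matteson route gives 136; each of these plus its one-, two- and three-carbon homologs gives 136 x 4 = 544 optically active organoboron intermediates; 544 x 24 major reactions = 13,056 transformations; and at roughly 10 products per transformation, about 130,000 optically pure compounds.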
So we applied it to a large number of compounds; you see this is acetophenone, the ethyl derivative, the n-propyl, the isopropyl — all of them reacted comparably. If you put in a tertiary butyl group, it goes the opposite way: instead of giving you the S isomer, you get the R isomer. That's because, apparently, phenyl is larger than isopropyl, but tertiary butyl is larger than phenyl, and that changes the direction in which the reduction occurs. Now, many people have been working on producing reagents, but usually they apply them to one type of ketone where they work well and ignore all the others, so you never really know how good the reagent is for all ketones. We suggested taking 10 different ketones and applying each new reagent to them, so we can compare results. So here is what happens with the isopinocampheyl reagent: methyl isopropyl ketone, no good. Here, 98%, good; here it's good, good, good; not particularly good here; and so on, so we go through it that way. Here I've listed the various reagents: Alpine-Borane, Midland's reagent, our reagents, Itsuno's reagents, and NB-Enantride, Glucoride, and BINAL-H, the last of which Noyori had introduced. And you see that I'm putting a double plus for those which are best — 1, 2, 3, 4, 5. Here we have 2, here we have 1, and so on; that's the way we proceeded. Now, the way we analyzed it was this: when we go through the transition state, the ketone comes in and coordinates with the boron, and you have two groups here, a large group and a small group, and these interact with the methyl group of the alpha-pinene. Usually the smaller group will be here and the large group will be out here, away, and that gives you this isomer. But if you try to make the other isomer, then the large group is being pressed against the methyl group, and that's undesirable. So we said, why don't we make that methyl group into a larger group, make it into an ethyl. It's very easy to methylate alpha-pinene to go to the ethyl derivative. And look what happens when we apply this reagent: we now go up from 32% previously to 95%, equal to or greater than 99, equal to or greater than 99, equal to or greater than 99, et cetera. So we now have the best result in one, two, three, four, five, six, seven of the cases with this Eapine chloride reagent. And then we wanted to do asymmetric allyl- and crotylboration. This is still another way. Organoboranes do not react like Grignard reagents. Mikhailov in Russia had made allylboranes and found that they add easily to aldehydes. Then Hoffmann at Marburg made this optically active derivative, put on an allyl group, and found what he got there: reasonable results, but not in the range that we today would like to have, from 36% to 86% ee. Well, it occurred to us that we ought to try our reagent. It's good for hydroboration; let's put an allyl group on it and try it. In our first effort we got 93% ee here. And we tried it, and it seemed to be generally good. You see, we could metalate isobutylene, carry it through, and we get this in 90% ee. Or metalate the methallyl ether, and we get this compound; carry it through, and you get this compound, with two asymmetric centers, and again it's 98% ee. Or we could take this allene, hydroborate it, and we get the allyl derivative; carry it through. Now, this is a natural product, and we can, in a one-pot reaction, go to this product in 96% ee. And we can take the corresponding diene, cyclohexadiene, and hydroborate it. If we keep the temperature below minus 25, there is no isomerization of this. 
Add the aldehyde to it, and we go directly to this compound — excellent, 97% ee. Now, this has been a favorite reaction in the literature, and I believe there are well over several hundred applications of this allyl- and crotylboration. For example, here, to make a crotyl reagent, we take 2-butene and metalate it. Again, Schlosser had done this, but he found that on metalation it isomerizes. If we keep the temperature below minus 45, however, it metalates and does not rearrange, so that we can make the corresponding crotyldiisopinocampheylboranes, both the cis and the trans. And you see, if we now use plus or minus alpha-pinene — the D or the L — and the cis and the trans, we have four reagents. Treat them with the aldehyde, and we get each of the four isomers pure. Now, we look forward to some possibilities for commercial applications. I'll mention this one, Prozac, here, and ipsdienol. So, in this case, you see, we carry it out and we can make this. I mention this because this compound was proposed as an antidepressant, and unfortunately they couldn't resolve it. Today, we can make it in 100 percent ee. Now, Prozac is a very important antidepressant put out by Eli Lilly. They tried to resolve their compound and couldn't do it; after 18 crystallizations they got it up to 80 percent ee, and they gave up. Here, we have a very simple process: take this, treat it with our reducing agent, and we get this. Then, with a Mitsunobu reaction, we can get an inversion, and we get this. Then treat it with methylamine, and we get this. And we get 100 percent ee. And here is one that the Bristol-Myers Squibb organization is making, an antipsychotic agent. And again, we can do it in a much simpler way. Finally, I'd like to mention this one. At one time, I think, the Black Forest in Germany was having an attack by this insect, which was eating up all the trees. People were looking for a way of making these pheromones, which can control the insect, but it was a very difficult compound to make. But we found an easy way: we metalate isoprene, take the potassium salt, and put it onto the Ipc2B group there. And if you add it to this aldehyde, isovaleraldehyde, you get ipsenol. This aldehyde is there. Now, many chiral auxiliaries have been examined and explored. Hoffmann introduced a few reagents, Reetz modified them and came up with this and this, then Roush proposed the tartrates, this compound, and Corey came up with this. But alpha-pinene is a very cheap material, and there is practically no end to the variations. It provides a basis for optimism, anticipating the development of practical, economical asymmetric synthesis. So here we have shown, using alpha-pinene, the things you can do with it: asymmetric hydroboration, asymmetric reduction, asymmetric homologation, allylation, crotylation; we can open up epoxides, and so on. When we go to the Eapine reagents, we are getting a much greater improvement, and this is only a start. For example, if we take methyl isopropyl ketone and use DIP-Chloride, we get only 32 percent ee; with the Eapine compound we get 95 percent. If we take methyl cyclohexyl ketone, we get only 27 percent ee, whereas with the Eapine chloride you get 97 percent ee. 
If we open up the epoxide with the Ipc reagent, we get 84 percent ee here; when we apply it to the cyclopentyl ether, only 49 percent. But the Eapine reagent gives us 99 for both. And finally, these are the things we would like to do, but I'm 86 now. I don't think I'll get to them, but there are lots of things for the rest of you to do. So this shows what we ought to explore, what there is to do there. Finally, I thought I'd show you something that happened very recently. You have all heard about the Nobel medal. Now the ACS has decided to have an H. C. Brown medal, so this is the H. C. Brown medal. Now, the Nobel medal is 23-carat gold — the only thing I've ever seen in 23-carat gold. This is only 14, but it's still a nice medal to have. So with this I'll close my talk, and I hope I have not run too far over time. But I wanted to give you a bird's-eye view of this field, so that you young people will be encouraged that there is still a whole continent — many continents — of knowledge out there waiting to be discovered by enthusiastic explorers. Thank you.
This is the last talk of H. C. Brown in Lindau, for which a recording is currently available. Brown lectured at Lindau Meetings during twenty years and while his topic, organoboron chemistry, remained a constant during all this time, its dimension expanded significantly. So did the technical possibilities: while the pictures and schemes supporting Browns first Lindau lecture were analogously projected by an aide, the present lecture relies on a computer based slideshow. Unfortunately, Brown’s slides are not available anymore and so, Brown’s 1980 and 1998 lectures have to serve as a case example of the detrimental effect of information technology on the structure of language. While the 1980 lecture is easy to follow, even without seeing the actual projections, the 1998 lecture is almost impossible to grasp in its entire detail. In any case, unless educated in organic chemistry, you will probably ask yourself what that mysterious “ee” is, that Brown repeatedly mentions. One thing that seems to be certain is that the higher the ee the better... and indeed: the ee refers to a special kind of selectivity of an organic reaction, the enantiomeric excess. A reaction with a high ee selectively yields one of two possible enantiomers and may hence be used for so-called asymmetric syntheses. A nice way to understand what enantiomers are is to think of a pair of gloves: they are made from the same materials, they weigh the same and they feel the same but yet they are different, as one fits the right hand and one the left. In other words: they are mirror images. Still, the differences are crucial: fighting cold hands with left hand gloves only would not solve the problem, no matter how many of them are available.What appears to be an idle thought in the field of gloves is of prime importance in pharmaceutical research. Much like gloves, certain drug molecules can only fulfil their purpose if their chirality “fits”, i.e. if they occur as the correct enantiomer. In other instances, they may do a lot of damage. Contergan, a drug given to pregnant women as treatment of morning sickness around 1960 is now well-known for causing severe birth defects. The active substance, thalidomide, occurs as two enantiomers, only one of which can sustain contergan’s disruptive effect on child development.Due to the significant influence of chirality on the efficiency of many bioactive substances, pharmaceutical companies nowadays are required to strictly control impurities by undesired enantiomers. This is why reactions with high ee’s are so desirable. And Brown and his team have developed quite a few of them, as he points out in his talk. An efficient synthetic route to the well-known antidepressant Prozac is only one of many significant results mentioned.The reactions discussed thereby all have one thing in common: the chemical element boron. Brown, who had worked with boron all his scientific life, systematically built up its application in synthetic organic chemistry - from a landmark synthesis of one of the simplest boron compounds, diborane (B2H6), published in 1944, to the asymmetric syntheses of complex pharmaceuticals discussed in this talk.Although it might appear that the end of the flagpole has been reached in this particular area, Brown liked to repeatedly point out in his talks that even supposedly well-researched fields offer a lot of room for surprises. 
In 1959, when he tried to publish the hydroboration reaction, which can be considered the basis for his share of the 1979 Nobel Prize in Chemistry, reviewers were not in favour, stating that boron compounds had been around for a hundred years already and that no significant effect on organic chemistry could be expected from them. Some 40 years later, Brown’s lecture is a late triumph over this scepticism. Another 12 years later, a former postdoc in Brown’s lab, Akira Suzuki, would share the 2010 Nobel Prize in Chemistry for his work on organoboron-based, palladium-catalysed cross-couplings. A further success of boron, which Brown was no longer able to witness: in 2004, at the age of 92, he passed away after a heart attack. David Siegel
10.5446/55085 (DOI)
Good morning. That's all the German you'll get from me. Sorry. It's very pleasant to be here in Lindau. This is my first visit. I hope it's not my last. My talk this morning is divided roughly into three parts. The first part might be considered advice to young people. I do so with great hesitation. I'm not used to handing out advice, and I'm somewhat apprehensive about doing so. The second part of the talk will, in some sense, be an indication of how the remarks that I initially make apply to my own life experience. And then, since I can't resist talking about science and my own field, there will be some discussion about some applications of crystallographic techniques to chemistry. I would like to start off by illustrating my apprehension concerning advice by telling you a little story. It was a little girl, I suppose, about eight years old, about the third grade, and she had a homework assignment. The members of the class were asked to write a small composition about a famous person in history, and she decided to write about Socrates. She wrote a little composition that went something like this. Socrates was a Greek. He gave people advice. They killed him, and with that apprehension I'll continue. At some time in a person's life, it is worthwhile, if not necessary, to make decisions concerning the path to be followed, that is, the profession that would be most attractive and how it is to be pursued. The earlier that such decisions can be made, the better. Although enough interest and determination, with enough interest and determination, such decisions can often be delayed until adulthood. It is not so important to make a detailed decision. What is important is to decide upon the direction of one's interests so that appropriate education and training can be pursued. I had the good fortune of knowing at an early age that I wanted to be a scientist. I really did not know what kind of a scientist I wanted to be, and it did not seem to be important to make such a decision. By the time I was about nine years old, I was already reading about scientific and technical subjects, and I found that such reading was more interesting than the usual literature devoted to young people. There was little question in my mind that science would occupy my future, and I was committed and determined to make sure that that was the case. It is apparent that a deep interest in a subject or area of activity, and a personal identification and respect for a subject or activity, is a very strong motivation that can carry a person beyond the potential adversities that so often appear to inhibit the achievement of a person's goals. In my case, I obtained an advanced degree in science after many adversities, and artificial barriers were overcome. The barriers were of various types, economic and social, which had nothing to do with the science that held my interest or my competence to pursue it. They nevertheless could have been quite demoralizing, and may very well have discouraged other people under similar circumstances. The economic difficulties took the form of not being able to afford advanced education, and the social barriers concerned quotas and limitations on the support of advanced education. A strong motivation and dedication to the future can overcome impediments and lead to a successful career. It is important to recognize that in order to accomplish almost anything worthwhile, hard work and dedication are required. 
But if a person is content with his or her choices in life, even if the hard work is not always a joy, it can be satisfying. Satisfaction derives from progress and an occasional accomplishment. Hard work, especially intellectual hard work, for protracted periods is achieved only with motivation, the motivation that derives from commitment to a specific course in life. Numerous examples can be derived from the world of basic science. For example, a deep interest in basic science can motivate people to investigate problems simply for their scientific interest, and not primarily for the recognition that may come from colleagues for solving problems, for the immediate usefulness of solutions, or for potential monetary gain. Purely and simply, it is possible to have such an interest in one's chosen activity that other attractions and pressures are not able to distract a person from his or her true interest in life. Basic science very often ultimately leads to very useful results, but utility as a goal without the aspects of scientific challenge and an appeal to scientific curiosity would not generally be an attractive activity for someone with a strong interest in the pursuit of basic science. Just as there are often barriers to the achievement of an adequate education in the normal course of events, life and nature provide numerous barriers to the achievement of one's goals. To become discouraged instead of staying with one's goals will result, evidently, in the failure to realize one's potential. Again, motivation can play a major role in overcoming adversities. Along with dedication and hard work, it is often necessary to have persistence, sometimes great persistence. My own experience showed this to be true. It was about ten years of hard work and persistence before a general procedure was developed for making practical applications of the foundation mathematics for crystal structure determination for whose development the latest Nobel Prizes in chemistry were awarded. Under development of the foundation mathematics, there were many difficult steps that required testing and discovery. These steps are called by some bridging. In a simple way, I regard bridging as the modification of mathematics so as to be suitable for application to experimental data and modification of the experimental data so that it could be suitable for use with the mathematics. In complicated problems such as the determination of the three-dimensional atomic arrangements in complex structures, bridging can be a rather difficult and tedious area of intellectual activity. This was certainly the case, and it largely accounted for the considerable time between the development of foundation mathematics and the development of general procedures for widespread application of the theory. Structures in the range of about 100 to 400 atoms often present problems that are difficult to overcome. There are therefore still some hard problems remaining in structure determination. In addition, there are indications that improvements can be made in the way the techniques for macromolecular research are carried out. Macromolecules can be described as structures having about 500 or more atoms whose positions need to be determined. Several techniques can be applied to macromolecules that make them easier to solve than some structures having fewer atoms. 
The motivations that played an important role in bringing structure research to its present state of high competence and reliability also motivate the pursuit of current forefront problems. I have been implying that it can be very satisfying to be strongly motivated and to set high goals in life. Such an approach may not necessarily lead to an easy course through life, and it may not provide what one personally regards as success. It is important to be satisfied with trying one's best. There are many documented circumstances in which young people set very high goals which they may not ever achieve, or at least not as soon as they would hope or expect, with the consequence that a lack of self-confidence, great disappointment, and thus despondency may accrue. Such mental states can be dangerous and damaging. It is important to learn how to live with adversity. As life proceeds, almost everyone has to make compromises with the circumstances in which they find themselves. It is easier to accept circumstances that may not be ideal when a person's primary interest in life, the one for which a person has strong motivations, can be pursued. If you are convinced that your goals are worthwhile, stay with them and try not to make destructive compromises. If your primary objectives are matters such as wealth, power, recognition, or easy living, a scientific career is probably not a particularly good way to achieve such goals. Such motivations would appear to run counter to a creative scientific career. Naturally, someone may decide from an early age that his or her goal in life is to be wealthy. Many of the personal characteristics that I have described for successful pursuit would still be applicable. Beyond that, I personally have no recommendations for such a career. It is also important to understand that in the course of a person's life, especially in the sciences, it is very rare that high recognition will be achieved, and setting recognition as a primary goal can only serve to interfere with the pursuit of science for its own sake. In my view, the greatest satisfactions come from the act of scientific discovery, and it is a valuable asset to be able to derive great satisfaction from personal participation in discovery, whether or not one's colleagues recognize the value of the work. A large part of success in life comes from having clear goals and the proper outlook. If the proper outlook includes personal satisfaction from personal achievement, it is possible to look forward to a life of joy and a life of accomplishment. That's all the advice I have. I would like to continue with the talk now by discussing my own field of research in the context of the motivations that I've had, and also to illustrate, if I may, in a sort of broad outline, a broad-brush outline, how one can go from the measurement of the intensities of X-ray scattering to a complete molecular structure. The subject is the three-dimensional structure, the atomic arrangements of molecules, as obtained from so-called X-ray diffraction applied to the crystalline state. Through the advent of modern technology such as computers, electronics, instrumentation, and sophisticated computer programming, it has become possible to mount a small single crystal of an unknown substance on an automatic X-ray diffractometer, and with minimal human intervention see the complete geometric arrangement of atoms in the molecule displayed on a video screen. Depending upon the size of the molecule, the process may take as short a time as one day. 
Automatic structure determination is feasible today for very many small and medium-sized structures, particularly if the crystal contains a center of symmetry. Larger and more complex structures require the special expertise of professional crystallographers. The present ease of structure determination, and even the possibility of deriving a structure from X-ray data, stems from the fundamental theoretical and practical advances made in solving the phase problem. What happens is that a collimated beam of X-rays of a particular wavelength is made to impinge on a single crystal, and as that crystal is rotated about various axes, different sets of scattering planes, acting like a ruled grating, come into position for the X-ray beam to be diffracted. The intensity and the angular orientation of hundreds or thousands of reflections are measured and recorded by an automatic diffractometer, formerly by photographic methods. The phases associated with the diffracted beams, however, with rare exceptions, cannot be measured experimentally, and you need the phases as well as the intensities. Let me see if I can get my first slide. This is called a Weissenberg photograph. Each one of the spots comes from a particular plane in the crystal, and from its location one can determine, by a fairly simple geometric calculation, which particular plane in the crystal scattered each of the spots. As you notice, they have varying intensities: some are very light and very difficult to see, others are so black that it is difficult to estimate their intensity. Many years ago, perhaps over 20 by now, this was the general technique for obtaining X-ray diffraction information, and it was necessary to sit, sometimes for several months, with a comparison strip and make a comparison and estimation of the intensity of each one of these spots. Nowadays, all of this is taken care of for you by automatic instrumentation. I am not going to have a deep mathematical conversation about this business, but this mathematics here, this function rho, represents the structure of a crystal, because it represents the electron density in the crystal. In order to compute this function over here, which is a Fourier series, you have to know all about these numbers here. These numbers are complex numbers, which means that there is a magnitude and some kind of an angle associated with each of them. And here is where the problem resides. Those black spots give you information about this part of the number, and superficially it appears that they give you no information at all about this angle, which is called the phase angle; and you need both the magnitude and the angle in order to compute this function. The reason why the electron density in the crystal represents its structure is that atoms are located where the electron density has its greatest values, near the peaks of the electron density function. So if somehow or other you could find these phases, it is a trivial calculation on a computing machine to obtain the density function that you wish to have. Well, it had been decided that since you get only this part and not that part, there was no chance of directly taking X-ray intensity information in order to make this calculation. But it was actually quite obvious to Herbert Hauptman and me that the phase information was in fact contained in the intensity information that was measured and seen in the X-ray diffraction photographs. Now, it is one thing to know that the answer is there and quite another matter to find the answer. 
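The formulas on these slides are not reproduced in the transcript. In standard crystallographic notation (my reconstruction of the usual relations, not necessarily exactly as displayed), the Fourier series for the electron density is

\rho(\mathbf{r}) = \frac{1}{V} \sum_{\mathbf{h}} F(\mathbf{h})\, e^{-2\pi i\, \mathbf{h}\cdot\mathbf{r}}, \qquad F(\mathbf{h}) = |F(\mathbf{h})|\, e^{i\varphi(\mathbf{h})}, \qquad |F(\mathbf{h})| \propto \sqrt{I(\mathbf{h})},

where V is the unit-cell volume: the measured intensities I(h) give the magnitudes |F(h)|, while the phase angles phi(h) are the quantities that cannot be measured directly. The two phase formulas referred to in the next paragraph are, in the same notation, the triplet relation

\varphi(\mathbf{h}) \approx \varphi(\mathbf{k}) + \varphi(\mathbf{h}-\mathbf{k}),

valid with high probability when the normalized structure-factor magnitudes of all three reflections are large, and the tangent formula

\tan\varphi(\mathbf{h}) \approx \frac{\sum_{\mathbf{k}} |E(\mathbf{k})E(\mathbf{h}-\mathbf{k})| \sin[\varphi(\mathbf{k})+\varphi(\mathbf{h}-\mathbf{k})]}{\sum_{\mathbf{k}} |E(\mathbf{k})E(\mathbf{h}-\mathbf{k})| \cos[\varphi(\mathbf{k})+\varphi(\mathbf{h}-\mathbf{k})]},

with the sums running over the strong reflections; the condition h = k + (h - k) is the vector addition of planes described in the talk.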
And so it took us a few years to be able to see through the mathematics in order to be able to derive equations for the phases. But the consequence of making the mathematical study was to give rise to a variety of relationships among the phases, and between the phases and the intensities, so that we would have formulas for carrying out structure determinations. These are the two most used formulas among the phases for carrying out a structure determination. Some of you might have some interesting questions in your mind: if you define a phase in terms of other phases and you don't know the values of the phases, how do you ever get started with a formula like this in order to get answers? I just might mention that the meaning of this is that this is a phase associated with a plane given by this vector H, which actually has three components; you need the three components to describe the three-dimensional plane. This is a different plane and this is still another plane. And you see, if you add this vector to these two vectors, you end up with that vector. And the phases contained in this formula, as this symbol indicates, are all associated with planes that have the highest intensities. Now we get back to the question: how do you use a formula like this if you have no phase information? And the answer is, you do have phase information. You're allowed to specify a certain number of values for the phases, and that corresponds to specifying an origin point in the crystal, a point from which all atoms can be located. It's like an origin in a coordinate system. And then one way to proceed is to assign values for a few more phases by the use of symbols. And it turns out that this problem is so overdetermined that you need very few initial phases in order to be able to proceed with formulas such as this. For example, when you have a center of symmetry, you have phase values that are only zero and 180 degrees, and it's possible to go through phase determinations with probabilities that are as high as 0.99 and greater. And so you just need a small set and you can get started with very high probabilities. And as more and more phases are added in a stepwise fashion, it cascades and you get very large numbers of phases. You can then use these phase values together with their associated magnitudes, the intensity values, calculate the Fourier series and compute the structure. This is an illustration concerning the cascading of phase information: some initial specifications, alpha, beta, and gamma, lead to new phase information, and this continues along; the implication of this part of the diagram is that from two different paths you get information about the same phase. And when that starts to happen, you get indications of internal consistency and the fact that the phase determination is on the right track. Now I'd like to talk about some applications. You may recall that Cedric was talking yesterday about the very interesting photo-rearrangement reactions that take place in organic chemistry. And I myself find photo-rearrangement reactions very appealing because you get major, unanticipated changes in the structures. And they're good problems for structure determination because that's a good way to find out about the major changes that often take place in a molecule. This is what's called a pharmacodynamic amine, which means that it's active physiologically. 
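Before turning to the applications, here is a toy sketch of the phase cascading just described, for the centrosymmetric case where each phase is simply a sign. The reflection labels and the "strong" triples are invented for illustration; the point is only that a few origin-fixing assignments propagate through a triplet relation of the form phi(H) ≈ phi(K) + phi(H−K) to many more phases.

```python
# Toy cascade of sign (phase) assignments through invented strong triples.
strong_triples = [("H1", "K1", "L1"), ("H2", "K1", "H1"),
                  ("H3", "H2", "L1"), ("H4", "H3", "K1")]   # each means phi(a) ~ phi(b) + phi(c)

signs = {"K1": +1, "L1": -1}               # origin-fixing starting assignments
changed = True
while changed:                              # cascade until no new phase is found
    changed = False
    for a, b, c in strong_triples:
        if a not in signs and b in signs and c in signs:
            signs[a] = signs[b] * signs[c]  # sign form of the triplet relation
            changed = True

print(signs)   # e.g. {'K1': 1, 'L1': -1, 'H1': -1, 'H2': -1, 'H3': 1, 'H4': 1}
```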
And the people who began to study this and used some ultraviolet radiation were just hoping that they had a nice way to make ring closures, in which this carbon atom would attach itself to some place like this and just make a nice closed ring system. Well, one of the major products that they ended up with was something with a formula such as this: an elemental formula of 12 carbons, 15 hydrogens, 1 nitrogen, 3 oxygens, with a melting point of 123 degrees. A structure analysis indicated that the photons and the molecule had minds of their own. And what actually happened was to generate a fused ring system of this kind, something quite different from what appears here, in which one obtains two five-membered rings and two four-membered rings in this arrangement. After you obtain a result like this, it's possible to conceive of a mechanism, a path that may have been taken in the reaction. And as seen here, if this nitrogen attaches here and this carbon attaches here and some hydrochloric acid is lost, you can account for this fused ring system of two five-membered rings and two four-membered rings. This slide illustrates in somewhat more detail how the result comes out of an X-ray diffraction analysis. Actually what you get is just a series of peaks such as this, plotted with the a-axis referring to x and the c-axis to z, and the numbers associated with the points tell you how far along the b-axis, perpendicular to these two-dimensional coordinates, one finds the peak. Then the question is, how do you connect up these points? Well, you make a calculation of interatomic distances for the points, and since interatomic distances are quite constant in structures, you can easily determine which atoms need to be connected to which. And this is a drawing of how the actual result appeared: the five-membered ring, five-membered ring and the two four-membered rings that were written out in a two-dimensional fashion on the previous slide. Here we have another kind of a study, adduct formation. This problem came up in an interesting way from a gentleman who was trying to find out if there is anything that ascorbic acid does in helping with cancer therapy. He had this molecule of acrolein and ascorbic acid, and it turns out that a crystal structure analysis showed that this fused ring system had formed between the two, so that there is a reaction between the acrolein and the ascorbic acid and it forms a material such as this. Again, the way to find out readily what this looked like was by crystal structure analysis. After you know this, it's possible again to think about a possible mechanism, which is indicated here. This is a rather interesting result. We are told by the person who gave us this material that not only is the acrolein, which is a rather toxic material in the system and comes from certain therapies, in a sense neutralized in its effect by attaching to the ascorbic acid, but in addition this fused ring system has cytostatic effects, which means that it stops cells from dividing. An additional application of structure determination is one in which, in many circumstances, one obtains very small amounts of material from particular reactions or from extractions of natural products. A very small amount of a natural product which has very potent behavioral characteristics has been of interest to our Department of Agriculture. 
This material is called brassinolide, and it was observed by people in the Department of Agriculture that it's very likely that there is a growth hormone, a growth stimulator, in the pollen of various flowers. And it was decided to try to obtain this potent growth stimulator. The way this was done was to fool bees in a beehive. Brushes were put right in front of the opening of many, many hives, and literally hundreds of pounds of pollen were brushed off the legs of bees and collected for the extraction. From this a very small quantity of active material was obtained and grown into crystals. And the chemists, who did not have much material to work with, correctly determined that there was probably a steroid involved. This is just the crystal structure result of this triply fused system. This material was brassinolide, the growth hormone, and what was very difficult for them to imagine and determine was the fact that the B ring in the steroid was really, in this case, a seven-membered ring with this extra oxygen here. And it was possible from the structure analysis, of course, to determine this. And it seems from all indications that this extra oxygen, the seven-membered ring formed from the B ring, is particularly associated with its ability to enhance growth. Now, I might point out that its effects take place in nanogram quantities, and it is a general growth stimulator. Now, what the chemist wants to do when he finds out what it is that he would like to synthesize is to proceed to do this, and to do it on a large scale. Well, it turned out that it's quite difficult, at least up to this point, to synthesize this particular compound, although that has been done. But it's very easy to synthesize other compounds with some minor differences up here in this side chain, and they're also just as active. And so now this material, or its close relatives, is available in large quantity and is undergoing field tests for the growth of a variety of materials. In nanogram quantities it will easily double the size of the plant in the same time span, so it's a very effective material. This is an illustration of how you can get precise bond lengths, and the next slide will show bond angles. A gentleman, Professor Gustav, and his student at the Rockefeller University were hoping to make a compound that looked pretty much like a window pane, and that's where the name fenestrane comes from. Actually, so far all that could be done was to make this five-membered ring instead of a four-membered ring in this particular place. But what this illustrates here is that you can also get fairly precise information about interatomic distances in molecules, and the point about the homofenestrane is that the carbon-carbon interatomic distances vary considerably more than one normally finds in organic compounds. Normally numbers like 1.54 to 1.56 are what are to be expected, but here we see, with this strained structure, numbers that go up as high as 1.60 and as low as 1.49. The bond angles are also pretty strange. Normally, again with carbon compounds, you find something near the tetrahedral angle, which is around 110 degrees, with somewhat greater variation than in the case of bond lengths, but nothing as great as the 86 degrees and 124 degrees and so forth seen here. So there's really quite a lot of strain in this molecule and it's readily detected. The last application that I would like to show is a study of conformation. This is the molecule of enkephalin. 
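As a small illustration of the two steps mentioned above, connecting the peaks of the electron-density map by calculating interatomic distances and then reading off bond lengths and bond angles, here is a minimal sketch with invented coordinates (the numbers are not taken from any of the structures discussed):

```python
# Connect peaks whose separation falls in a typical covalent bond-length window,
# then compute a bond length and a bond angle.  All coordinates are invented.
import numpy as np
from itertools import combinations

peaks = {                                  # hypothetical peak positions in angstroms
    "C1": np.array([0.00, 0.00, 0.00]),
    "C2": np.array([1.55, 0.00, 0.00]),
    "O1": np.array([2.30, 1.30, 0.00]),
    "N1": np.array([5.90, 3.00, 1.20]),    # too far away to be bonded to anything
}

def dist(p, q):
    return np.linalg.norm(p - q)

def angle(p, q, r):
    """Bond angle at q, in degrees."""
    u, v = p - q, r - q
    return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

bonds = [(a, b, round(dist(pa, pb), 2))
         for (a, pa), (b, pb) in combinations(peaks.items(), 2)
         if 1.2 <= dist(pa, pb) <= 1.8]            # rough bond-length window
print(bonds)                                        # C1-C2 (1.55 A) and C2-O1 (1.50 A)
print(round(angle(peaks["C1"], peaks["C2"], peaks["O1"]), 1), "degrees at C2")
```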
Its structure was solved, as many of these others were, by Mrs. Karle. It occurred in the unit cell with four different conformers. You might notice here that the backbone, illustrated by the open atoms, is pretty much the same in all four conformers, but the side groups, marked out in black, are considerably different in each one of these. There's always the question: do the conformations that occur in crystal structure analysis have anything to do with physiological activity in solution? And the answer is that one must always be cautious. However, in those cases when you get crystallization with large amounts of solvent, then there can be some confidence that there is a reasonable relationship between the crystal structure and the solution. This applies particularly to macromolecules because, as you know, proteins, for example, always crystallize with very large amounts, 40, 50, 60 percent, of solvent water. Now, with this enkephalin: some of you may know that it's a natural analgesic, a natural painkiller. Some people allege that its concentration goes up in the blood supply under stress, and they think that accounts for the fact that some people run too far and too much per week; at any rate, they think they get enkephalin highs, as they're called. And here we see that the structure determination of this rather complicated structure, which had over 200 atoms in the asymmetric unit, could be carried out. This is how they appear in the packing in the unit cell. This arrangement is actually called a beta sheet, and it's formed from the attachment of the various conformers through hydrogen bonding. These long lines here are indications of the hydrogen bonds that form this beta sheet. I think that's my last slide. This sort of summarizes the scientific part of my talk: the kind of information that you can obtain is chemical identification, structural formula, stereoconfiguration; if you do special things, you can get absolute configuration, conformation, bond lengths and angles; and if you do some very careful experiments, you can even get charge distributions. I've come to the end of my talk, and I hope that at least one part of it has given you some useful information. Thank you very much. Thank you.
“Socrates was a Greek. He gave people advice. They killed him.” - with this summary by a young scholar, Jerome Karle familiarizes the audience with the potential side effects of well-intended advice. Still, despite the cautionary tale of Socrates, the first half of his talk is dedicated to the young researchers present and to some general advice on how to be successful as a scientist. If one strives for wealth, power, easy living or even general recognition, science is the wrong profession, Karle says. If one, however, is able to derive pleasure from progress, even without the recognition of others, science might just be the right choice. In any case, the keys to success are motivation, hard work and persistence, Karle explains, not only with respect to scientific challenges, but also to those challenges which life and nature pose. While he admits that everyone has to make compromises, he urges the young researchers to avoid “destructive” compromises, i.e. those that affect the foundations of their own motivation. Just one year before the present talk, the chemist Karle had shared the 1985 Nobel Prize in Chemistry with the mathematician Herbert Hauptman. The Laureates had been rewarded "for their outstanding achievements in the development of direct methods for the determination of crystal structures". These methods make it possible to circumvent the so-called “phase problem” of X-ray crystallography and to determine chemical structures from X-ray diffraction data in very short timespans. Today (2013), X-ray crystallography is routinely employed to elucidate or confirm the structures of unknown molecules, be they synthetic or natural, using merely minute substance quantities {Link to X-ray Topic Cluster}. Still, for Karle himself, the path to success was not always as exciting as it might seem from today’s perspective. He points out that the translation of the mathematical solution of the phase problem into a technique that could actually be used by X-ray crystallographers took some ten years - and a lot of motivation. Fortunately, motivation for chemical challenges appears to be abundantly available in Karle’s family: his wife, Isabella, and two of his three children are chemists [1]. David Siegel [1] http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1985/karle-autobio.html
10.5446/55088 (DOI)
Dear students, I believe this form of address should really apply to everyone present, because research belongs to the primal drives of mankind and is responsible for all our joys and sorrows, and whoever no longer studies and no longer does research has really lost his human dignity, and who would want to claim that of himself? Now, as you hear, I was asked to speak in this peculiar language which is claimed to have a distant kinship with German. We find ourselves here in charming Lindau, in a border region with a strong linguistic gradient between socially acceptable High German and the earthy, bumpy Swiss German, and so my linguistic limitations lead to a compromise that is to be located somewhere in the middle of Lake Constance. Well, in the middle, then, where there is a great deal of water, and water, or more precisely hydrogen with its atomic nucleus, the proton, is actually the main subject of my lecture. It is, so to speak, the properties of the hydrogen nucleus that form the basis of nuclear magnetic resonance (NMR) spectroscopy, which today, as was said in the introduction, has attained such enormous practical importance. That, then, is the heart of the matter. Now, let us look at the tree of knowledge of the sciences. The task of science, of knowledge, is to reduce phenomena to simple laws. First, at the uppermost level, we have the multitude of medical phenomena, which can be accounted for by simple biological explanations. We can then trace biology back to simple chemical laws and finally to physics, where only a single law prevails, and as Professor Lamb told you this morning, that is in principle the Schrödinger equation. Now, to get from level to level we need a ladder, and the ladder is provided, for example, by NMR spectroscopy. And you see, the ladder is drawn in blue, and blue is the colour of physics. It is a physical ladder with which we try to connect these various levels with one another, and I would like to show you that right at the beginning simply by means of a few examples. The first picture shows you a mushroom. This is the green death cap. The death cap, as you know, is extremely poisonous and contains various toxins, the phalloidins for example and amanitin, those are the toxic components, and at the same time it contains yet another molecule, a very interesting one, which you see in the next picture. That is antamanide. Antamanide is a cyclic decapeptide with four prolines here, four phenylalanines, alanine and valine as the ten amino acids. And this is an antidote, an antidote against the phalloidins, and curiously it is contained in the very same mushroom. That does not mean, of course, that you should eat and enjoy the mushroom, for the vanishingly small amount of antamanide will not save you from certain death. Now, we would of course like to understand how antamanide works. What is its structure, what is its function? And for that we must first be able to work out the molecular structure, the three-dimensional structure, of such a molecule. In principle that can be done with NMR spectroscopy. 
The next picture shows you an image of antamanide in three dimensions. Now, this structure of antamanide was determined much earlier by X-ray crystallography, namely by Mrs. Isabella Karle, who is sitting here at the front in the second row. And in principle, for structural investigations of this kind, NMR spectroscopy is not actually necessary. But what one can do at the same time is also learn something about the dynamics of molecules, and in the next picture you see the dynamics. You see it indicated here that the molecule is not a rigid object; the object moves. And the motion of such larger molecules is extremely important for reactivity: a molecule must be able to adapt itself so that it can react with other molecules. And here the limits of X-ray structure analysis show themselves, and NMR really has possibilities here that do not exist elsewhere. One can do all of this in solution, directly in the natural medium. Now, that would in principle be a biological example. We can also go one level higher and apply NMR in medicine. The next picture shows you an NMR cross-sectional image through a head whose outline you may recognize if I show myself from the side. And you see, curiously, there is something inside the head. But if you consider that with NMR one can really only detect the water content, then that is much less impressive. Now, this is not Lake Constance water; this is high-quality Swiss spring water. In the next picture you see something that can be imaged very well with NMR but which I unfortunately cannot show you in my own head, namely tumours. For that one needs a white mouse, a nude mouse, and there you see this tumour very nicely imaged with NMR imaging. In the next picture you see that one can also do NMR microscopy. One can look at very small objects, here a frog egg with a diameter of 1.5 mm, or, in the next picture, also very large objects. Here, for example, a cross-section through a tree trunk, where one can establish which material is still living and which is dead, again on the basis of the water content: what is bright is water, and the rest contains no water. Now, what is this whole methodology based on? In principle one draws on properties of elementary particles, and one can represent an elementary particle schematically by a diagram of this kind. We have here a point mass, which may be charged, but what interests us especially about an elementary particle is that a magnetic moment is present at the same time, here with a north pole, down here with a south pole, and at the same time one also has an angular momentum: the whole thing rotates about this axis. So we have, combined, a magnetic moment and a mechanical angular momentum. And that leads to the magnetic resonance phenomena as soon as an external magnetic field is applied as well. And let us perhaps look at the next slide. 
Yes, here I would just like to show quickly, as an overview, where NMR can be used, and I would like to summarize that in the three Ms of NMR: the molecular sciences, served by liquid-state spectroscopy; the materials sciences, which can be supported by solid-state NMR; and in addition the medical sciences, which profit from NMR imaging. And in all three areas one can obtain both structural and dynamic information. Now, I would like to show you the interaction with the magnetic field quickly by means of a small experiment. I would like to have light. Yes, what you have in front of you here is really what you have long been seeing: a magnetic device, this is the magnet, and you see the whole thing responds magnetically, so it inherently has a magnetic moment built in. And we can now also give this system an angular momentum; for that we have to set the whole thing in motion, and that is what I am doing now. So, now it also has an angular momentum and thus corresponds to the picture over there. You know from kindergarten what happens when you have a spinning top and load the top. I now hang a weight on this top, and then it should of course fall over. But what it does is begin to precess. It precesses about its axis; it evades falling over. You know that from kindergarten, and that is also not what interests us at the moment; rather, we would like to see a magnetic top. And now, instead of this mechanical force, we simply apply a magnetic force. I now approach it with the magnetic field here, and you see, it likewise begins to precess. And this precessional motion, next slide please, you see represented here as well: this is the atomic nucleus with its magnetic properties, which undergoes this precessional motion. And that is really the fundamental phenomenon: this precession frequency, not the rotation frequency that I cranked up at the beginning, but this precession frequency, is what one measures in NMR. And in the next picture you see that represented somewhat better. You see here that, depending on the local magnetic field strength, the precession can have different frequencies. We have applied our magnetic field vertically here, here are the precessing magnetic moments, down here a stronger magnetic field, up here a weak magnetic field, and the precession frequency omega is proportional to the local magnetic field strength. So here a slow precession and here a fast precession. And one can thereby distinguish either chemical environments, that is the chemical shift that was mentioned in the introduction and that, in principle, Professor Lamb introduced before NMR was even invented, or, as the second possibility, one applies a macroscopic field gradient from outside. And this macroscopic field gradient then allows one to distinguish a high frequency at the bottom from a low frequency at the top. And that gives one a way of producing images, and that is the basis of medical imaging by means of NMR. 
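The proportionality just described, precession frequency equal to the gyromagnetic ratio times the local field, is easy to put into numbers. Here is a minimal sketch; the proton gyromagnetic ratio is a standard constant, while the field values and the gradient are chosen purely for illustration:

```python
# Larmor precession: omega = gamma * B_local, so a field gradient maps position
# to frequency (the basis of imaging).  Field values and gradient are invented.
import numpy as np

gamma_proton = 2.675e8        # rad s^-1 T^-1, proton gyromagnetic ratio

def larmor_frequency_hz(B_tesla):
    return gamma_proton * B_tesla / (2 * np.pi)

print(f"{larmor_frequency_hz(9.4):.2e} Hz")     # ~4e8 Hz, i.e. a 400 MHz spectrometer

# Imaging idea: superimpose a gradient G so that B(z) = B0 + G*z, and each
# position z precesses at its own frequency.
B0, G = 1.5, 0.01                               # tesla, tesla per metre
z = np.linspace(-0.1, 0.1, 5)                   # positions along the gradient (m)
print(larmor_frequency_hz(B0 + G * z))          # frequency encodes position
```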
Now, this relatively simple, intuitive physical model was first proposed by Uhlenbeck and Goudsmit in 1925, or actually it was really proposed earlier, namely by Arthur Compton as early as 1921, but that is usually not mentioned. So, in common usage, these are the inventors of this model. You see the gentlemen in the next picture over here: here Mr. Uhlenbeck, he was a doctoral student at the time, and Mr. Goudsmit, he was a diploma student, and they wrote this paper together. Of course, the established gentlemen of science were not satisfied with it. For example Mr. Pauli: he had written a paper earlier on the same subject, but there he writes of a classically non-describable kind of two-valuedness. He describes the same phenomenon but says that it simply cannot be represented classically, and that these models that I have shown you are only models and in no way correspond to reality. Now, that of course did not prevent Mr. Pauli from also playing with spinning tops later in his life. You see here Mr. Pauli, and this is Mr. Bohr. And it is said that most people grow up in the course of their lives, and those who can resist growing up become either scientists or, in the best case, artists. Now, the real theory is much more complicated and actually comes from Mr. Dirac, and it goes back, as everything does, to the Schrödinger equation. We have here the Schrödinger equation. Normally one uses a non-relativistic Hamiltonian, which is written here, and he tried to make the Schrödinger equation invariant under Lorentz transformations and introduced a relativistic Hamiltonian. This has the unpleasant property of containing this square-root expression, which actually contradicts quantum-mechanical principles. He then squared the whole thing to get rid of the root, which is still relatively simple, and then obtained these unpleasant squares here, and the squares mean that one has a second derivative with respect to time, and a second time derivative simply does not exist in quantum mechanics. So he tried to construct a linear equivalent that produces the same eigenfunctions as this complicated Hamiltonian. For that he had to introduce unknown quantities, these alphas, and he then showed that this is only possible if they are four-by-four matrices. And in the non-relativistic limit this finally led, upon reduction, to this equation at the bottom, where an additional term now appears, and this term contains, in this case, the electron spin. The S is the electron spin, which can interact with the magnetic field. Here we have the gyromagnetic ratio, which couples the angular momentum to the magnetic moment, and with that the whole thing was mathematically explained, but not physically understood. Now, today one knows that most elementary particles have magnetic properties of some kind; one distinguishes, for example, the leptons, which are fermions, have spin one-half and a magnetic moment. 
The mesons are bosons and have no spin; the baryons again have spin one-half, and above all these are the protons, neutrons and electrons, which thus all have magnetic properties. Now, with nuclei it is somewhat more complicated. There are certain rules for the magnetic moments: one distinguishes the number of protons and the number of neutrons in a nucleus, and from that one can at least deduce whether the spin quantum number is zero, half-integer or integer; that can be predicted relatively easily, but to calculate the exact magnetic moments is considerably more difficult. Now, why should one use NMR at all? Next picture. There are various reasons. First of all, with the nuclei we have interactions. We have interactions with the environment, and these interactions are informative. The most important interaction, which we saw in the experiment, is the interaction with the external magnetic field, the Zeeman interaction, which leads to the Larmor precession. We then also have internal interactions, which are likewise of great importance and about which I will speak in detail later, in particular these pair interactions, the so-called J coupling and the dipolar coupling, the magnetic dipole coupling, and then the chemical shift, the shielding of the magnetic field by the environment. Now, what is quite essential here is the size of these interactions, and I have plotted an energy scale for you here over an enormous range, from 10^9 electron volts down to 10^-15 electron volts, and you know that normal life, chemistry, biology, takes place on the order of about one electron volt. That is the strength of chemical bonds, and you see how dangerous particle physics is, up here with nine orders of magnitude more energy; of course such radiation is dangerous and destroys chemical bonds. On the other hand, the interactions discussed here are about a factor of 10^6 smaller than the thermal energy or the bonding energy in chemistry, and in this sense completely harmless, and that is why it is such a good method for investigating chemical, biological and medical phenomena. Physicians, of course, understand nothing of physics, and accordingly they were afraid of the word nuclear and deleted it from all their terms. It is no longer called nuclear magnetic resonance, it is just magnetic resonance, MRI, magnetic resonance imaging, and not NMRI, precisely because of this fear of these wicked nuclei, which are, however, quite tame here and do no harm to anyone. Now, here once more these nuclei: they are, so to speak, spies that we can use without any visible interaction with the surroundings, yet they deliver information about chemical and biological processes. Let us therefore consider, for example, a molecule like the one we saw before. In this molecule we have these yellowish-white spheres, those are the hydrogen atoms, and every hydrogen atom has its nucleus, a proton, and so in a typical molecule we have perhaps about 100 spies that we can address with a transmitter and that deliver information that we can then receive again. 
So nature has already built the spies in for us, and if you now imagine a human body you can imagine how many spies you already carry within you by nature. The number is given here at the bottom, about 10^27, and probably any chief of a secret police would be delighted to have such a number of spies at his disposal. And they deliver us information about the internal processes without this gentleman noticing anything of it. Now, the sources of information are listed here. First of all, the chemical identification of the environment. We have here above all these black spheres; these are again the hydrogen nuclei, the protons, and the environment produces different resonance frequencies. We have the blue environment, which leads to this resonance frequency, the green environment, the red environment, and so on. That gives us the spectrum and the chemical shift in the spectrum. We can thus characterize the environment of the nuclei and thereby obtain information about the molecule. Now, it does not yet give us structural information, but at least it gives us a spectrum. Let us look at the next picture. Yes, what we are doing is, so to speak, playing a melody of the molecule. We go through it frequency by frequency; you see here, each nucleus emits its corresponding frequency, and we have here the characteristic melody of the molecule in question, or here correspondingly. Now, next picture: playing such a melody is of course laborious. One has to strike the individual keys on the keyboard, and that takes time. You see that represented here by this snail, which slowly draws this spectrum out in time, and that is time-consuming and not economical in our fast-paced age. We therefore have to use a more efficient process to obtain the data faster, and that is shown in the next picture. Instead of striking individual keys one after the other, one can press all the keys at once. One obtains a chord; so what is introduced here is, so to speak, polyphony in NMR. One obtains a superposition of all these frequencies, and the piano tuner of course has trouble telling which string each tone comes from; tuning the piano by striking all the keys at once is not quite so simple, but for that one has analysis equipment, and in the next picture you see a Fourier analyzer. That is nothing other than a computer: it performs a Fourier analysis and picks out the individual frequencies from this so-called free induction decay here, which one also calls the impulse response: one applies a pulse and obtains the oscillation, and here you again see the spectrum that then leads to the identification of the chemical compound. Now, next picture: that was essentially what we did in 1965 at Varian Associates in Palo Alto. The idea was actually planted in me by Weston Anderson, who was my boss at the time, and he himself had the idea from a patent by Russell Varian, who was the head of the company, shown in the next picture. 
You see him here with the original magnet that was used by Mr. Bloch for his very first experiments; that is Russell Varian, and he had already filed this patent in 1956, where it says, roughly, Fourier analysing the recorded signal to obtain the Fourier components thereof, leading to enhanced sensitivity. That is really the beginning of Fourier spectroscopy, and we simply tried it out, and in the next picture you see the result. You see here at the top again such a free induction decay and its Fourier transform for a particular chemical compound, and down here a conventional experiment recorded in the same total measuring time; so here the snail has crawled through, and you see not much is left. These few peaks can perhaps be identified somewhere, but the information is much more evident up here, in the same measuring time. That is the advantage, and with it one can obtain this chemical information about the chemical environment. Now, what is really important for structure determination in the end is correlation information. We must know which nuclei are neighbours. We now know which nuclei are present, but we do not know how they are arranged in space. We have, first of all, the magnetic interaction through space: two magnetic moments that interact with each other, and the interaction depends on the third power of the distance and therefore provides us with distance information. That is the first kind of information; here once more the two magnetic moments with their interaction proportional to 1 over r cubed; if we can measure it, we have determined the distance between two nuclei, or, shown in a somewhat different way, pairs that are adjacent lead to such interactions. That is one kind of interaction that provides information. But there is also a second possibility, namely the red possibility. So we have a yellow possibility and a red possibility, and the red possibility goes through the bonding network and tells us which nuclei are neighbours within the bonding network, and it is above all such three-bond interactions that lead to the so-called J coupling, which you probably know from your spectroscopic experiments in organic chemistry, and this interaction then leads to the multiplet splitting in the spectra. One obtains such multiplets: here a triplet, here a doublet of doublets, here a more complicated multiplet, determined by this interaction through the bonding network. Those are the two sources of information: we have the yellow information and we have the red information, and with them we can try to work out molecular structures and finally, once we have the structure, also look at the dynamics. 
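The pulse-and-Fourier-transform idea sketched above, all keys pressed at once and the chord untangled afterwards, can be simulated in a few lines. The three frequencies and the decay constant below are invented; the point is only that one Fourier transform of the free induction decay recovers the whole spectrum at once:

```python
# Simulate a free induction decay (superposition of decaying oscillations) and
# recover the resonance frequencies by Fourier analysis.  All values are invented.
import numpy as np

dt, n = 1e-4, 4096                       # dwell time (s) and number of points
t = np.arange(n) * dt
freqs_hz = [350.0, 900.0, 1450.0]        # hypothetical chemical-shift positions
T2 = 0.05                                # common decay time (s)

# FID: "all keys pressed at once"
fid = sum(np.exp(2j * np.pi * f * t) for f in freqs_hz) * np.exp(-t / T2)

spectrum = np.abs(np.fft.fft(fid))       # Fourier analysis of the impulse response
freq_axis = np.fft.fftfreq(n, dt)
peaks = freq_axis[np.argsort(spectrum)[-3:]]
print(np.sort(np.round(peaks)))          # recovers roughly [350, 900, 1450]
```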
Now the question is how we represent information of this kind. In a one-dimensional spectrum it can no longer be represented properly; I have indicated it with arrows, but that is unsatisfactory, and the brilliant idea for this comes from Jean Jeener, next picture, a brilliant physicist from Belgium, and he had the idea of two-dimensional spectroscopy. If one wants to represent correlation information between pairs of nuclei, one needs a correlation diagram, as you see here. We have the diagonal, which is responsible for these elements here, labelled a to h, and here the same elements are shown once more, some objects, and the correlation information is given by these off-diagonal elements. We see that object f has something to do with object c, we see that object g has something to do with object a. That is the way one represents correlation information, and it can be either adjacency in the bonding network, adjacency in space, or it can possibly also represent chemical exchange, for example that object f turns into object c in the course of time, transforms itself. Now, perhaps that is somewhat abstract for you, and when I have a less educated audience in front of me I try to demonstrate it with the following transparency. I have a guilty conscience, but I will show it to you anyway. You know, human beings are terribly boring objects; one can order them by size, males and females arranged here, and what makes people interesting are human relationships, and that of course immediately calls for a correlation diagram, and that is what I would like to show you here. And here we can go through it line by line and see what information is contained. We have here, for example, lady A; well, she has a particular fondness for the little boys, she is evidently the motherly type; then we have gentleman B, he loves all women, Don Giovanni; we have gentleman C, well, one should not talk about that; lady D, she loves only herself; and so one can go on. You see what an enormous amount of information you have in a two-dimensional diagram. Now let us become serious again and return to antamanide; we would like to determine the structure of antamanide. For that we have to do these two experiments, the red experiment and the yellow experiment, to obtain the corresponding information. Now, in the next picture you see a real two-dimensional spectrum for once, again with this diagonal and with the off-diagonal peaks, and in this particular case these off-diagonal peaks really do represent adjacency within the bonding network. That is the J coupling, which tells us that the nucleus that gives rise to this resonance is connected by three bonds to the nucleus that gives rise to this resonance here. Three-bond adjacency. How does one do such an experiment? You see here the basic experiment. Instead of applying one radiofrequency pulse, one simply applies two: a first pulse with which one excites the free precession, as we saw before. We have here this blue precession, for example here in this two-spin system, an AX two-spin system with a doublet here and a doublet here. 
The energy level scheme is given by four energy levels, and we can now create transitions. Suppose the first pulse excites this blue transition here. We obtain so-called coherence on this transition, a precession of the transition as a function of time. And this precession with its characteristic frequency is now interrupted by a second pulse. That is the so-called mixing pulse. And this mixing pulse has the property that it can transfer coherence, transfer it between different transitions in this energy level scheme, and we obtain, for example, a transfer to the red transition, which belongs to nucleus X, to the transition B1. We thus transfer information from A1 to B1 through this pulse, the mixing pulse, which causes this transfer. And such a transfer is only possible if there is a coupling between the two nuclei responsible for these transitions. So in this way we can characterize pairs that are coupled to each other. And these two frequencies, the blue frequency plotted horizontally and the red frequency plotted vertically, then allow us to construct such a correlation diagram, and we see from it which frequencies can transform into which other frequencies. Now, a spectrum of antamanide itself you see in the next picture, a so-called COSY spectrum; a correlation spectroscopy experiment is called COSY. Again the diagonal and the cross peaks here. Now, that is the red experiment. For the yellow experiment, you remember, the interaction through space, one needs a somewhat different experiment, and it simply consists of three pulses instead of two. We again begin with the first pulse to excite coherence. It oscillates with a particular frequency, the blue one. And then one applies two mixing pulses, and between these two mixing pulses one waits, because the transfer of information through the dipolar interaction needs time: it is a relaxation process that runs on a timescale of milliseconds to seconds. One must therefore wait between the two pulses until the transfer from blue has taken place before one can then measure the red frequency. Now, if you have thought about this a little more carefully, you will surely ask yourself: from this one cannot construct a two-dimensional spectrum at all, the information is not sufficient. We have two one-dimensional spectra; how is one supposed to construct a two-dimensional spectrum from that? And indeed, one has to do a much more complicated experiment, and it is shown here. One must vary this so-called evolution period from experiment to experiment, each row here representing an individual experiment, and then finally generate a whole data block, represented here by this red area. And you now see the blue information, these end points here: they represent nothing other than this oscillation here; they are the sampled values of this blue oscillation. And they reappear here as the initial conditions in the red period. We thus have both pieces of information in this data block: the blue information as the initial condition, the red information as the time evolution. 
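A toy version of the data block just described might look as follows, with the two-dimensional Fourier transform that the next step applies already included. A coherence that precesses at one frequency during the evolution period and at another during detection produces a cross peak; the frequencies, decay time and increments below are invented:

```python
# Build a t1-t2 data block row by row (each row = one experiment with a longer
# evolution time t1), then 2D-Fourier-transform it.  A term transferred from
# frequency fA (during t1) to fB (during t2) gives a cross peak at (fA, fB);
# the untransferred part gives a diagonal peak at (fA, fA).
import numpy as np

n1, n2, dt = 256, 256, 1e-3
t1 = np.arange(n1)[:, None] * dt           # evolution times (one per row)
t2 = np.arange(n2)[None, :] * dt           # detection times (along each row)
fA, fB, T2 = 120.0, 310.0, 0.08            # hypothetical frequencies (Hz), decay (s)

block = (np.exp(2j*np.pi*fA*t1) * np.exp(2j*np.pi*fB*t2) +     # transferred part
         np.exp(2j*np.pi*fA*t1) * np.exp(2j*np.pi*fA*t2)) * np.exp(-(t1 + t2)/T2)

spec2d = np.abs(np.fft.fft2(block))
f1, f2 = np.fft.fftfreq(n1, dt), np.fft.fftfreq(n2, dt)
idx = lambda axis, f: int(np.argmin(np.abs(axis - f)))

print("diagonal peak (fA,fA):", round(spec2d[idx(f1, fA), idx(f2, fA)]))
print("cross peak   (fA,fB):", round(spec2d[idx(f1, fA), idx(f2, fB)]))
print("empty region (fB,fB):", round(spec2d[idx(f1, fB), idx(f2, fB)]))  # far weaker
```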
And if one now performs a two-dimensional Fourier transformation, one finally really obtains a two-dimensional spectrum. That is the secret of two-dimensional NMR, and that is how one has to do two-dimensional NMR experiments. And this particular experiment applied to antamanide leads to the next spectrum. Next picture. You see here a so-called NOESY spectrum. That is the other kind, the yellow kind, of spectrum. I do not want to explain the name; it is a cross-relaxation experiment, and it tells us that two nuclei connected by a cross peak here are spatially adjacent. So in the end we have two sources of information, two such matrices, the red matrix and the yellow matrix, and with them we can determine the structure of our molecules. Now, the procedures for the biological applications have been worked out in detail above all by one of my colleagues, Professor Wüthrich, and demonstrated very beautifully with examples. Now, there are really two problems here. Next picture. First of all we have to know to whom all these resonance lines actually belong. We have to make the assignment, to the individual protons in a protein. Here we have a section of a protein, with the hydrogen atoms again drawn in black. We can first use this red interaction through the bonds to identify protons that belong to the same amino acid, because this coupling, the red coupling, runs only within one amino acid residue, and the different amino acid residues are separated from one another here by the blue lines. And we can then establish the spatial adjacency with the yellow interaction. Then we know who belongs to whom, which protons give rise to which resonance, and finally, in the second step, one still has to determine the structure. And here, just very briefly, two computer methods that are used. There are various algorithms one can use, the so-called distance geometry algorithm, or one can use molecular dynamics. Here, for example, simply a modelling exercise in three-dimensional space: one has the molecule whose structure one wants to determine, one simply pours in red paint until the molecule can no longer move, and the red paint corresponds to the constraints, the distance conditions that come from the experiment. We know that these two nuclei have a certain distance, so this distance is fixed, whereas out here we have little information and the molecule is, so to speak, still free. In this way one can then model the molecule iteratively with computer procedures. Unfortunately this procedure did not work for antamanide the way we would have wished, or, in a sense, fortunately, because antamanide is much more interesting than can be represented by a single structure. It is a dynamic molecule. And in the second part, which follows in about two hours or so, if you are still here, I would tell you something about the dynamics of antamanide and how one can explore the dynamics of antamanide with NMR methods. But most probably I will not have the opportunity to present that to you. As I said, antamanide cannot be described by a single structure; it has to be described as a dynamic equilibrium between different structures. 
There are various modes of motion that are responsible for the movement. Now, I would nevertheless like to show you a few examples here that do not come from our own kitchen. Here, first, the insulin-like growth factor, which is responsible for cell growth, for the replication of cells, a protein with a chain length of 70 amino acids. In the next picture you see the NMR data, COSY spectrum, NOESY spectrum; we go through this very quickly, next picture. You see right away the structure that was obtained from it, and this was done by the group of Iain Campbell in Oxford. You see here on the right-hand side a relatively well-defined structure, whereas on the left-hand side the structure is poorly determined, which essentially means that over here the molecule is actually rigid and here on the left it is mobile. We have a helix structure here and a helix structure here; that is what was found from these NMR experiments. A somewhat more complicated molecule in the next picture, next slide: thioredoxin, essential for the reduction of disulfide bridges in proteins, so again a biological activity. You again see the NMR data, COSYs and NOESYs, always the same, and in the next picture you see the beautiful structure of this molecule. It is an extended beta sheet with individual helices out here. That is this molecule in three dimensions, found by NMR by the group of Wright at the Scripps Clinic in San Diego. Of course one can use this not only to investigate proteins; one can also determine the structure of DNA fragments, here a DNA octamer duplex, by NMR. Next picture: again boring NMR data, and then finally, next picture, the structure of it, and in particular one wanted to investigate here whether the structure is more A-DNA-like or more B-DNA-like. One has here model structures and here the structure obtained from the NMR, and you see the similarity towards the left is greater, so this fragment has an A-DNA-type structure. One can also establish interactions of molecules, for example here anthramycin, which is an antitumour compound. You see in the next picture again NMR data and in the next picture then the interaction of anthramycin with a DNA fragment. You see here the molecule, the red molecule, embedded here, and one can study the interactions and the biological activity; I will not go into more detail here. I also told you that one can investigate chemical reactions, and here is a small, old, more or less standard example. We have here the heptamethylbenzenonium ion, a six-membered ring with seven methyl groups, one methyl group too many, and this poor seventh methyl group does not know where to attach itself; accordingly it jumps from place to place and is loved nowhere, and the question is, how does it do that? Does it simply jump from place to place, or is it bold and jumps directly into the para position, or does it even leave the molecule and attach itself to another molecule? Down here the reaction scheme: here the timid methyl group, it goes this way, and the bold methyl groups, they go along these dashed paths. Question: are there bold methyl groups or not? 
And we look at the NMR spectrum in the next picture. Four lines, corresponding to these four different methyl groups, green, blue, red, black, with half intensity here for the green one because there is only one green, and double intensity because there are two of each of these others. Easy to understand, but is there dynamics or not? Nobody knows. Now, in such a case, what does one do? One varies the temperature, because dynamics is temperature-dependent; if something happens in the spectrum one knows it is dynamic. So the next picture shows us the same spectrum as a function of temperature, from 25 degrees to 75 degrees, and indeed there is a dramatic change, so dynamics is present. But what kind? Now one again takes refuge in the computer. One simulates the two models, a random jump between the different positions over here, in red, or else this well-defined 1,2-bond shift, and here, for different exchange rates, the simulated spectra. These were done very early by Martin Saunders at Yale University, in 1967, and you can now compare left and right and ask which model fits better. Well, if you compare here, it is practically always the same. Except that here you see a very small bump, and if you believe the small bump and you can also find it in the experimental spectrum, then you are sure. Now let us go through: where is there a bump here? But is that really a genuine bump or is it an artefact? I would not trust it, but Martin Saunders is a good chemist and he knew the result in advance and guessed correctly, and indeed it is the 1,2-bond shift that turned out to be right. But with two-dimensional spectroscopy one can demonstrate this much, much more beautifully, and I show you here a two-dimensional spectrum of it. Again the four resonance lines on the diagonal, three strong ones and one weak resonance line, and then there are off-diagonal peaks here, and the off-diagonal peaks now tell us which methyl groups can transform into which other methyl groups in the course of time. For example, 1 evidently transforms into 2. Now, if you do not already see the result, then you again do a computer simulation, and you see what a simple computer you need for this, namely one that can only draw circles. For a random exchange between all positions you would have to find all the cross peaks, and obviously some are missing, so this model is not correct. You do a second simulation, here for the intramolecular 1,2-bond shift, and it agrees exactly. With that the result is clear, and certainly much more evident than what one could previously obtain laboriously from a one-dimensional spectrum. Now another example. So far we have always been in the liquid phase; one can also apply this two-dimensional spectroscopy in the solid state. Here is a small example: we are interested in polymer blends, that is, mixtures of polymers that are important for technical applications. We take, for example, a blue polymer and a red polymer, polystyrene and poly(vinyl methyl ether); we mix them, or rather dissolve them either in toluene or in chloroform, we add petroleum ether, and then the polymer blend precipitates here. And the question is, what precipitates here: is it homogeneously mixed or is it a heterogeneous mixture? 
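Before following the polymer question, here is a small sketch of the model comparison just described for the migrating methyl group: set up the kinetic exchange matrix for a random jump between all positions and for a 1,2-shift between neighbouring positions only, and look at which off-diagonal elements, i.e. which cross peaks, build up during a short mixing time. The six-site ring description and the rate are simplified and invented for illustration:

```python
# Compare the cross-peak pattern predicted by two exchange models.
import numpy as np
from scipy.linalg import expm

k, tm, n = 10.0, 0.01, 6                 # exchange rate (1/s), mixing time (s), sites

def mixing_matrix(K, t):
    return expm(K * t)                   # site-to-site transfer amplitudes

# Model 1: random jump -- every site exchanges with every other site
K_random = k * (np.ones((n, n)) - n * np.eye(n)) / (n - 1)

# Model 2: 1,2-shift -- each site exchanges only with its two ring neighbours
K_shift = -k * np.eye(n)
for i in range(n):
    K_shift[i, (i + 1) % n] = k / 2
    K_shift[i, (i - 1) % n] = k / 2

print(np.round(mixing_matrix(K_random, tm), 3))   # cross peaks between all positions
print(np.round(mixing_matrix(K_shift, tm), 3))    # essentially only neighbour cross peaks
```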
In other words, does it look like the picture on the left or the one on the right? The result will be that from toluene a homogeneous mixture results at the atomic, the molecular, level, whereas from chloroform we obtain a heterogeneous mixture. We would like to demonstrate that with NMR. Now, we can again use the dipolar interaction: if we can establish that a blue molecule talks to a red molecule, they must be in the same domain. Here red and blue are able to talk to each other; over here they are not. So we do an experiment in which we exploit the dipolar interaction between the nuclei. The next picture shows you the result. We have here the one-dimensional spectrum: the proton resonance spectrum of polystyrene gives this line, and poly(vinyl methyl ether) gives this resonance line here on the chemical shift scale. And here the corresponding two-dimensional spectrum. This is a so-called spin-diffusion spectrum, in which the spins can talk to each other via the diffusion of spin order. And we now simply have to look where blue talks to red. In this spectrum, blue and red do not talk to each other. Here, blue and red, here there is a cross peak: they are capable of communication, they must be adjacent. Here we are certainly dealing with a homogeneous blend, when we precipitate from toluene; here it must be a heterogeneous blend. Now, the experiment is not quite as simple as presented here; I do not want to go into all the complications. Here is the corresponding pulse sequence: it is again a three-pulse experiment, first, second, third pulse, but one also needs additional pulses; one has to do so-called multiple-pulse decoupling to eliminate the dipolar interaction in order to obtain high resolution, and at the same time one has to do magic angle spinning: the sample is rotated about the magic angle with respect to the magnetic field, which is technically demanding, and in the end you obtain a spectrum as shown. One can then evaluate it further and, for example, set up a three-phase model in which there is a mixed phase and two pure phases, pure polystyrene and pure poly(vinyl methyl ether). One can determine these percentages, if that interests you. It does not interest me. But, as I said, it works. Those were a few applications, and those are really the basic methods that I have presented to you. Fortunately the time is not yet up, and I would like to show you what spectroscopists amuse themselves with today. Here is one possibility of how one could make such a basic experiment more complicated. We modify it; the basic COSY experiment, you remember, consists of only two pulses. It can happen that such an NMR spectrum, as you see it over here, is simply too boring to be publishable. Now, what does one do? Either one varies the substance, but if the substance is already in the machine, then one modifies the experiment. I can now give you experiments with which one can make the spectrum arbitrarily complicated. There is the RELAY method, the TOCSY method, multiple-quantum spectroscopy, and you can go on here until the whole paper is pitch black. Somewhere in between you have to stop if you want to publish. 
Of course it can also happen that the spectrum is much too complicated; then you would like to simplify it. If you cannot analyze it, there are methods to reduce the complexity here: E.COSY, multiple-quantum filtering, spin topology filtering, until in the end the paper is white again. Here too, the golden middle way is probably the right one. Now, first of all I would like to climb up to the upper floors here and show you how one can make the spectra more complicated. Very simple. Here an example. On the next picture, if we are lucky; yes, we can skip that. Next picture. Well, perhaps briefly after all. You see that a two-dimensional experiment in principle consists of four phases. We have here at the top, you see it here in part, the preparation phase, where magnetization is prepared. Then the evolution phase, the first oscillation, the mixing phase, and finally the detection phase. One can now prepare the system in various ways. You see here that each row represents a different mixing or preparation sequence. The next picture shows you various possibilities for designing the evolution phase: different experiments. The next picture shows you various possibilities for designing the mixing phase. And you can now combine all these possibilities, and each time there is a paper. And you see how richly and profitably nuclear magnetic resonance can be employed. Now let us go on here, to the complication that I call the relay experiment. We have here, next picture briefly, a protein, or rather a peptide, called buserelin, which consists of these amino acids. And I have highlighted for you here the resonances of the leucine: the NH resonance down here, the C-alpha-H resonance, the beta peaks, the gamma and the delta methyl groups, so alpha, beta, gamma, delta, and here these sequential connectivities, cross peaks, which are visible in a COSY spectrum. Now, cross peaks exist only between protons that are three bonds away from one another, as you can see from this representation. Now, if with one pulse we can make such a transfer from one proton to its corresponding neighbor, then with two pulses we can surely make a transfer over two steps, here from A to B to C, A to B to C, with these two red mixing pulses. And we obtain here the relayed coherence transfer. And on the next picture you should see such a spectrum, where for example down here we have a relay peak, and you also see one up here between the NH proton and the beta proton, over two steps. And that gives us additional information which one can exploit in certain cases; in particular, when one cannot identify the central resonance of the alpha proton, one can use this two-step process for the assignment as well. Now, if one can transfer over two steps, one can also transfer over three steps, as you see here, and if one can transfer over three steps, one can also transfer over n steps. One simply needs more mixing pulses. And that finally leads to total correlation spectroscopy, where simply everyone is correlated with everyone. Now, that of course also has its limits, and at some point it stops being fun. And fortunately, in a protein for example, there are in principle only cross peaks between protons that belong to the same amino acid residue.
However complicated a pulse sequence you apply, there is never a coherence transfer from one amino acid to another. So you can identify subsystems in this way, by applying a complicated pulse sequence here which excites so-called collective modes. First you have the single-spin modes, with which you characterize the individual spins. And at the end you see who has talked to whom during this exchange period, and you can then record the coherence transfer. And that finally leads to a spectrum such as you should see on the next picture, where for example there is also a cross peak like this one down here, which you perhaps cannot see, or this one up here, which you also cannot see, which makes a transfer over four couplings from the NH proton all the way up into the delta methyl group; so indeed all these partners that are present in this amino acid, in the leucine. Perhaps you can see it somewhat better here on the transparency. The blue peaks are these total correlation peaks. Now, this experiment opens up a further class of experiments about which I unfortunately have no time to speak here, namely the rotating-frame experiments, the experiments in the rotating coordinate system. And these are very important today. Here, during an extended irradiation of radio frequency, both the coherence transfer, that is the COSY transfer, and the cross-relaxation transfer take place. And in certain cases these experiments are simply necessary; especially for proteins of medium size they are the informative experiments. Now, let us go to the next slide. If we continue in this way, we arrive at the situation that you see here. And at some point it simply becomes too complicated and we have to think about simplification. And you see here the possibilities that we have for simplifying spectra as well. There is multiple-quantum filtering, exclusive correlation, spin topology filtering, which I have already mentioned. I would actually like to go through this very briefly and not strain you for too long. Just very briefly the idea of multiple-quantum filtering. How does that work? Well, what are multiple-quantum transitions in the first place? I would like to show you that with this transparency. At the top we have an energy level scheme with the various energy levels between which we can generate transitions. And these energy levels are grouped here according to the magnetic quantum number. Now, in all spectroscopies there are selection rules, and the selection rule in magnetic resonance tells us that only transitions are allowed in which the magnetic quantum number changes by one. So only transitions between these neighboring groups of energy levels are possible, namely the green transitions; those are the allowed ones. The two-quantum transitions, where so to speak two spins change their spin polarization simultaneously, or three-quantum transitions, where three spins flip simultaneously, are forbidden; we do not see them in the spectrum. But there is nevertheless a way to excite these transitions, and that is the secret of multiple-quantum filtering. And I show you that down here on this scheme. This is no longer an energy level scheme, and you see here the same colors as up above. We have here the one-quantum transitions, or the one-quantum coherences, represented by this horizontal line; those are the green ones. The two-quantum coherences, three-quantum coherences, four-quantum coherences.
And we can now represent an experiment to ourselves as a walk in such a coherence level diagram. We always begin at level zero; down here is the entrance. We have no coherence at the beginning. With the first pulse here we excite one-quantum coherence. And with a second pulse one can now excite any of the higher quantum transitions, for example the three-quantum coherence. But so that we can finally observe something, we again have to find an exit here. And the exits are here on the first upper floor and the first basement floor: one-quantum coherence of positive and of negative kind. Here we have to get out, and here we can observe. And we can now do exactly such experiments. And now consider that if you have, for example, only two spins in a sample, we can never excite a three-quantum transition. Because a three-quantum transition means that three spins precess together simultaneously, and if there are not three spins, they cannot join in this motion together. That is the secret of multiple-quantum filtering. Shown here schematically: again the various coherence levels, one-quantum coherence, two-quantum, three-quantum, four-quantum coherence. And now we have here various spin systems: a one-spin system, two-spin, three-spin, four-spin systems. Here the pulse sequence. With the second pulse we can hoist only the three-spin systems up to level 3; the two-spin systems stay behind here, and the one-spin systems even at level 1. And if we now break a door into the wall here, then only molecules with at least three spins come through. We can let them fall back down to floor 1, where we have again set up the detector, and observe here the spectrum of three-spin, four-spin and larger spin systems. And we have, so to speak, realized a high-pass filter in the spin-number domain: we have thrown out the small ones and kept the big ones. An example on the next picture. Up here a spectrum consisting of a mixture of a one-spin system, this line, a two-spin system, these four lines, and a three-spin system up here. We use a two-quantum filter, where the door is here at level 2, and obtain this spectrum, in which the one-spin system does not come through; and the experiment shown here lets only the three-spin system of this molecule pass, and we have suppressed the others. That is one way in which one can simplify spectra. Here it is trivial and useless, but for more complicated examples it is very useful, as you see on the next picture. Here an example of BPTI, the basic pancreatic trypsin inhibitor, a small protein. You see here, in a COSY spectrum, the resonances of glycine 12 and glycine 28, which is in principle a two-spin system, dissolved in D2O. And if one now applies a three-quantum filter, the same spectrum over here, one sees that these two peaks, here and here, are missing. The next picture shows you an even more complicated example, two-quantum, three-quantum and four-quantum filtering, again in a different spectral region of the same protein BPTI, and we would actually like to see the resonance of arginine 42. But one does not see it here at the low filtering orders, because there is an overlap with other peaks, which has been eliminated by the four-quantum filtering, so that this peak now emerges very nicely.
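The lecture does not spell out how the "door" at a given coherence level is implemented in practice. The standard recipe, in the coherence-transfer-pathway picture, is phase cycling; the following summary uses our own notation and is meant only as a reminder of the principle, not as a description of the particular experiments on the slides.

```latex
% A pathway that passes through coherence order p while the preparation pulses
% are shifted in phase by \varphi acquires the factor e^{-ip\varphi}.
% Co-adding N experiments with phases \varphi_k = 2\pi k/N and the matching
% receiver correction keeps only the wanted order:
\[
  S_{\mathrm{filtered}}(t)\;=\;\frac{1}{N}\sum_{k=0}^{N-1}
  e^{\,ip\varphi_k}\,S(t;\varphi_k),
  \qquad \varphi_k=\frac{2\pi k}{N},
\]
\[
  \frac{1}{N}\sum_{k=0}^{N-1} e^{\,i(p-p')\varphi_k}\;=\;
  \begin{cases}
    1, & p'\equiv p \pmod N,\\
    0, & \text{otherwise,}
  \end{cases}
\]
% so coherences of any other order p' cancel in the sum.
```

Choosing N larger than the highest coherence order present ensures that only the intended p-quantum pathway survives the co-addition.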
Those are the possibilities of multiple-quantum filtering. Now, one can push the filtering further. For example, here we set ourselves the task of doing spin topology filtering. We do not simply want to distinguish the number of coupled spins, but the kind of coupling. We have here, for example, four nuclei that are coupled in a linear network; these are not the chemical bonds, it is the spin couplings that are drawn here. The spin couplings are arranged linearly here, here star-shaped, here cyclic, or here everyone coupled with everyone. We would like to construct a filter that responds to a particular topology. One can indeed construct filters that do just that. You see here, for example, such a pulse sequence; I do not want to explain it to you. Here the corresponding coherence level diagram with these paths for the walk from the beginning to the end. But the fact is that such filters have exactly these filtering properties. Just note the designation up here, that is a T-yz-y4 filter, and on the next transparency you see a whole number of such filters represented schematically. Here the various filter types, here the various spin systems. Here, for example, a linear four-spin system, a star-shaped four-spin system, a cyclic four-spin system, and these green blocks indicate to you which spin systems can get through a given filter. And the filter that I showed you before was this one here. And you see, this one filters everything out except the linear four-spin system. So it is a filter that you can apply if you are interested in linear four-spin systems, and I would like to demonstrate that briefly. Namely, here we have prepared a mixture of four-spin systems: five four-spin systems, the chemical compounds, the coupling networks, and the graph-theoretical description of the coupling networks. And let us look at the spectrum of this mixture on one of the next slides. Next slide. Yes, here it is. You see here, for example, the peaks of the linear four-spin system, of the star-shaped four-spin system and so on. Of course one normally does not have this assignment, and we now apply our pulse sequence within the framework of a COSY experiment, and we should then retain only these peaks here. Next slide. Indeed, the linear four-spin system remains and the rest is eliminated. Now, you still see certain artifacts coming through, which only show you that it is a real spectrum and not computer-simulated. And on the next picture the cyclic four-spin system, probably in the next slide tray; yes, here the cyclic four-spin system, so it works here as well. One can also apply this to more complicated biomolecules, for example here again BPTI, here the amino acid chain. We would like, for example, to pick out the alanine residues. Alanine has this chemical structure; it is a four-spin system with this coupling network, a star-shaped coupling network. We need a corresponding filter. On the next picture you see the normal BPTI spectrum, complicated, and somewhere in it are the alanine peaks. If we then apply this filter, we obtain a simplified spectrum, which you see here with the alanine peaks circled in red. So it works in principle. Now, these methods have their disadvantages, and what I have shown you is an ideal example.
The principle is amusing, but it has its limitations in practice, and the main disadvantage is that these pulse sequences are relatively long, and with large molecules there is the danger that during this long time the coherence simply disappears through relaxation processes, and then the signals become correspondingly small. Now, the question is how to go on. The next step would be higher dimensions. We can now go on to three-dimensional spectroscopy, and three-dimensional spectroscopy can be reached either by the route of additional information or by that of reduced complexity; by both routes one can arrive at a three-dimensional spectrum. I would like to demonstrate that to you very briefly. On the next picture you simply see an example of a three-dimensional spectrum, so that you have something in front of you. And there are these two applications, one way or the other, and here it is shown once more. We can either spread a two-dimensional spectrum, as you see it here, into a third dimension and in this way simplify a spectrum. You see here three peaks, and down here three peaks, and the three peaks now simply sit on different planes. And in this way we can pull a spectrum apart. That is the three-dimensional dispersion, or three-dimensional simplification, of a two-dimensional spectrum. You also know that when we want to determine a biomolecular structure we need two spectra, a red spectrum and a yellow spectrum, a NOESY and a COSY spectrum. We could now construct an experiment in which we obtain a three-dimensional data set, such that when one looks at the data from above they look yellow, and from the front red, so that we have all the information in one data block and can determine the structure from it. Now, the experiments that lead to such three-dimensional spectra are indicated here very simply. Instead of having to apply only one mixing process, we need two. We have here the yellow mixing process and here the red mixing process, which transforms the coherences during the time t1 into the coherences during the time t2, and the coherences during the time t2 are then transformed into the coherences during the time t3. We then have three frequencies, omega-1, omega-2 and omega-3, which span the three-dimensional frequency space for us and describe the three-dimensional spectrum. Now, such three-dimensional spectra can be analyzed in exactly the same way as the two-dimensional ones; there are procedures which I do not want to go into in detail here. Actually, the more important kind of three-dimensional spectroscopy is not this combination of yellow and red into one block, but rather the spreading out, the simplification of two-dimensional spectra, as you see here on this picture. A two-dimensional spectrum that correlates the NH resonances with the C-alpha-H resonances: the H resonances bound to carbon along this axis, the NH protons along this axis, and here a complicated two-dimensional spectrum. We can now use one of the heteronuclei as a spreading parameter, for example N-15. We can specifically introduce N-15 here and pull the spectrum apart with this N-15 chemical shift.
Or down here with C-13; with the C-alpha carbon from over there we can pull the spectrum apart and then obtain a three-dimensional spectrum, as you can see here. And that then leads to a spectrum such as you see over here, for example. Next picture. You can almost see it. That is ribonuclease A, labeled with N-15, and here in this dimension it is spread out by the N-15 resonance. In this way one can analyze even very complicated spectra. Now, development has continued in the meantime; today one is already further along: four-dimensional spectroscopy. Unfortunately I cannot show you a four-dimensional spectrum; you see, the possibilities of projection simply do not suffice here. But the principle is very simple. In three-dimensional spectroscopy we used, for example, the N-15 resonance to spread the spectrum, or the C-13 resonance. One can now use both frequencies at the same time. We then have a correlation between two protons and use two spreading parameters. We then have these two spreading frequencies, two dimensions, and up here two correlation dimensions, which makes four. That is the procedure in principle. Now, I also wanted to say something very briefly about the medical application, about how one obtains such two-dimensional slices through medical objects. The procedures are actually exactly the same; I will skip that here. I go back to the first transparency, where I showed you that by means of nuclear magnetic resonance one can bring these various fields into contact with one another, that one can reduce medical phenomena to biology, to chemistry, to physics. Now, to produce such a tree of knowledge one of course also needs a family tree, and I show it to you here on this picture. These are my co-workers since 1969, and they are in part responsible for the work that I have shown you; the work was equally done by many other research groups. I believe you have gained the impression that nuclear magnetic resonance spectroscopy is indeed very useful. Now, in general, physical chemistry really has two tasks. The first task is to work out the foundations of chemistry on the basis of physical principles, and here really to go back to the fundamental phenomena; Lord Porter and Professor Polanyi showed that very beautifully yesterday. Nuclear magnetic resonance is an example that shows you that physical chemistry is also there to provide methods for structure elucidation, for gaining information. And yes, that is roughly what I wanted to show you. Now, if one is already certain of being convicted, it is probably best to make a confession, and that is what I would like to do here with the last transparency that I put up for you. Well, I hope that the chairman will show leniency towards me and that the microphone cable here will not be replaced by a thicker rope. And with that I have finished. Thank you for your attention.
Without Richard Ernst, nuclear magnetic resonance (NMR) spectroscopy perhaps would have remained an esoteric research tool for some specialists. His inventions, however, led to such a sharp increase in its detection speed and sensitivity that they started the era of high-resolution NMR. Thus NMR could become both an indispensable complement to X-ray crystallography in the field of structural biology and the basis for magnetic resonance imaging (MRI), which plays such an important role in medical diagnosis today. When he first came to Lindau in 1992, in the year after he had received the Nobel Prize in Chemistry 1991, Ernst explained the principles of NMR and their current application in this long lecture of nearly 80 minutes, held in German. Through his characteristic combination of scientific gravity, humor and playfulness, Ernst ensures that his talk is worth every minute and never bores the audience, although he goes into much technical detail especially in its last third. Certain properties of hydrogen nuclei, namely their intrinsic magnetic spin, form the basis of NMR, Ernst initially remarks. In an external magnetic field these spins align in one of two possible quantum states, either parallel or anti-parallel to the field. Radio waves whose frequency matches the energy difference between these two states cause the nuclei to flip over from one spin state to the other. As soon as the radio waves are switched off, the nuclei relax back to their initial state and send out radio signals themselves. When these signals are recorded in NMR spectra, they can help to reveal the structure of molecules. Applying this principle, however, requires exposing a compound to a steadily tuned sweep of radio waves, and makes NMR spectroscopy as slow as a snail. Ernst sped up the process by exposing a compound to a series of short radio pulses and plotting all the signals together as a function of time after each pulse. A computer converts this complex graph into the conventional NMR pattern, using the mathematical calculation of Fourier transformation. Ernst compares this to striking all keys of a piano at once without losing harmony: “What’s being introduced here is polyphony in nuclear resonance.” Resonating nuclei are like spies that we can use to gain structural information, says Ernst, and a typical molecule contains around 100 of such spies. To elucidate the structure and the bonding network of a molecule, it is necessary to detect the interdependencies between those spies and to know their spatial correlation. This information cannot be represented in one dimension, but requires two-dimensional (2-D) NMR, which Ernst invented in the early 1970s based on an idea by Jean Jeener, as he explains. 2-D NMR enabled NMR spectroscopy to advance to a stage at which it could be used to identify the structure of large biomolecules, a development pioneered by Kurt Wüthrich (Nobel Prize in Chemistry 2002). In his lecture, Ernst introduces several compounds whose structure and/or dynamics in solution have been analyzed by NMR, including antamanide, a cyclic decapeptide from the death cap fungus; the insulin-like growth factor; thioredoxin; and the antibiotic anthramycin in its interaction with a DNA fragment. He also discusses NMR in solids, using the example of polymer blends, and explains how one can study chemical reactions by means of NMR. The whole lecture resonates with Ernst’s delight in being a scientist. “Research belongs to the basic instincts of humankind”, he says. 
“Who does not research and study any more has actually lost his human dignity.” For this reason, he welcomes his entire audience in Lindau with “Dear students”. Joachim Pietzsch
10.5446/55092 (DOI)
Well, it's a great pleasure to be here. I've learned a great deal from my colleagues. And one of the things which I'm sure many of you have seen is that everybody is so different. Different styles, different personalities, different ways of doing science, doing physics. And so before I talk about this experiment that we're working on now, you should understand what my personality is and what my interests are. I was trained as a chemical engineer and worked as a chemical engineer for a while before I went into physics. I'm an experimenter. I'm competent in mathematics and conventional theory. I can do quantum mechanics and Feynman diagrams, those sorts of things, and statistical mechanics. But I'm not a deep theorist, and so I have to stay away from all those things. I don't like to do experiments which involve complicated theory or which are hard to analyze with deep theory of various sorts. I like to work on things which have interesting technology, interesting apparatus, and for which I can do the interpretation myself. I also always like to work in two directions, two lines of research at the same time: one a conventional line of research, sort of day-to-day work, and one a speculative line of research. The speculative is also more fun, but most speculative work doesn't work out. The advantage of also being in the conventional work at the same time is that you're kept close to reality; you're kept close to having to make a careful, ordinary measurement. So I try to do both. And what happened with the tau was that it started out as speculative research, and I still work on that to some extent, but that's now actually very conventional research. So I started about four years ago on an old problem, which is the question of the existence of fractional electric charge in isolated particles, particles you find all by themselves. About 100 years ago the electron was discovered, and then through the work of Millikan and others its charge was well established, 1.6 times 10 to the minus 19 coulombs, by about 1910 or 1915. Since then we have discovered many more particles: the proton, the neutron, mesons, the muons, neutrinos, the tau eventually, and all these particles have either zero charge or a charge equal in magnitude to the charge of the electron, of either sign, and then some of the more excited elementary particles have two or three times the charge, but they're always integers as long as they're particles that can be isolated. So it's an old question as to whether there are any other kinds of particles around, and I must tell you that there's no confirmed evidence for fractionally charged particles which can be isolated, and conventional theory has built this in. Now, the only exception, but it's not one which falls within what we're looking for, is of course the quarks. Quarks, which make up mesons and nucleons, are given two-thirds or one-third of the charge on the electron, but it's believed again that you can never get one quark out by itself. For example, in a pi meson one has a quark and an anti-quark, and they're always stuck together. It is believed you can never pull them apart. It's never been found. Now, therefore, this is a speculative field. When you go into something speculative, there are two things you should have: you should have a clear idea of what you're looking for, and secondly, you should have a good way to do it. I'll say a little bit about what we're looking for, and then how we do it. Okay. First of all, maybe it's not true that quarks are always bound together.
Maybe once in a while you can find a free quark, and it would be analogous to the way Newtonian mechanics was enlarged to special relativity; that is, the present theory of quarks, which is called quantum chromodynamics, would just be part of a more general theory. In such speculative work, you must not violate what's well known. Okay. So, it could be a quark. Another possibility is that pairs of quarks may be isolatable, for reasons I won't go into. It could be a lepton, like the muon, the electron, or the tau, but with half a charge or pi times a charge. Any of these things are possible. Okay. Now, there are many different ways to look for such fractional things, and many people have looked. You could use an accelerator. You could look in cosmic rays. The way we've chosen is to look in bulk matter, and in our case these fractionally charged particles would have to come from the early universe. They would have to have been made 10 billion years ago, somehow come through the things I barely understand, inflation and so forth, gotten into the stars, out of supernovas, and come into our solar system. Well, that's not so unexpected, you might say; after all, we're here because of that. All but the lightest nuclei were made that way: the iron, the uranium, all the other things we depend upon came that route, so maybe a fractionally charged particle came that route. Okay. That's what we're looking for. Now, the method is very simple. It's the same method used 90 years ago by Millikan, but brought up to date by modern technology. Okay. We make small liquid drops, and we use oil, and they're about eight or ten microns in diameter, and they fall through air. It must be in air; it's not in vacuum. And we have two metal plates with a hole in them, and then, as you see in the diagram, we make drops which fall through the hole. We make the drops the way ink-jet technology works, but we make smaller drops, and that's been interesting, learning how to do that. The difference, the big advantage we have over Millikan, of course, is electronics. What we do is, the drop is falling; over here we have a light source which strobes every tenth of a second, and here we have a modern television camera, what's called a CCD camera, digitized of course, and it follows the trajectory. That's fed, of course, to a computer. We use fairly high speed but conventional PCs, and in the computer we calculate the velocity of the drop. Now, here's a picture of it. It's in an E field, and we also, every tenth of a second, change the direction of the E field as the drop is falling. The drops fall slowly, a few millimeters per second, and this is the only equation: we use Stokes' law, which I guess is 150 years old, and Stokes' law says that for small particles falling through a medium with some viscosity, such as air, you get actually Aristotle's form of dynamics. The velocity is proportional to the force, not the acceleration, and the equation is simple. When the electric field is up, when the electric field helps gravity, we get one kind of terminal velocity. When the electric field opposes gravity, we get the other. As it's falling, we measure it, and we change the field several times, and we're able to measure the mass and the charge of the drop. Okay, we do it for every drop. And the first experiment, which was done on oil, was not successful.
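To make the measurement principle just described a little more explicit before the results, the force balance can be written out schematically as follows. This is a sketch only; buoyancy and slip corrections are ignored, and these are not claimed to be the exact expressions used in Perl's analysis.

```latex
% Stokes drag balances gravity plus the electric force; with the field
% alternately aiding and opposing gravity, the two terminal velocities are
\[
  6\pi\eta r\, v_{\pm} \;=\; m g \pm qE ,
\]
% so the difference and the sum of the measured velocities give charge and mass:
\[
  q \;=\; \frac{3\pi\eta r\,(v_{+}-v_{-})}{E},
  \qquad
  m g \;=\; 3\pi\eta r\,(v_{+}+v_{-}),
  \qquad
  m \;=\; \tfrac{4}{3}\pi r^{3}\rho ,
\]
% where \eta is the viscosity of air, r the drop radius, \rho the oil density
% and E the applied field; r, and hence m and q, follows from the last relation.
```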
We didn't find anything, but we did publish it, because it's important to publish the results of things that fail as well as things that succeed, and this is sort of the crucial data. Along here we have plotted the amount of charge, in units of the electron charge, that we find in the drops, and we did about six million drops, about a drop every second. And what you see are peaks at zero, one, two, three, four, five. And what we're looking for is something in between, something here or in here: fractional electric charge. Now, what we do is we then superimpose all those valleys, so it's easy for you to see it and for us to analyze it, and this is what it looks like when we superimpose it. Okay, and this has been published in the Physical Review, and we find nothing in here, no fractionally charged particles in this amount of oil, which was about a milligram. Now, this was our first experiment, and it works pretty well. We're now trying to improve things. We can now run maybe 10 to 100 times faster. That introduces other intricate electronic things, which we're working on, and so forth. Now, we didn't find any fractional charge, but I didn't expect to find any, because if you have an atom or a molecule with a fractional charge in it, it changes the chemistry of the atom or molecule; it changes the electronegativity. Therefore, you should not look in materials which have been refined. Our oil is actually synthetic silicone oil, a particularly bad place to look. Where should you look? You should look in those materials which have come to us from 10 billion years ago as unhandled as possible. So the things to look in are meteorites, and rocks on the Earth's surface which were formed early, and not in the weathered outer portion but in the inner portion, and that's where we're really interested in looking. Now, so far our experiment is not the most sensitive. There are other ways to do this. There have been some beautiful experiments by Marinelli and Morpurgo with slightly different methods, and by Smith. Some people looked at three or four milligrams of material; we just looked at one. Now, a favorite material has been iron. I believe iron is a very bad place to look, and this is why. Iron is first of all itself refined in the blast furnace, and in molten iron I think any free fractional charge will drift out to the walls of the blast furnace. But where does iron come from? It comes from iron ore, which itself has been accumulated in a very complicated geochemical process, and iron with a fractional charge in it would probably not end up in that iron mine. So I don't think iron is good. Niobium has been done, for strange reasons, mostly because a very famous man, now dead, Fairbank, thought he found fractional charge in niobium. It's doubtful that he was right on that. So, what we're trying to do now is, first of all, do a lot more oil, because that's good practice. If you found anything in oil, I'd be very suspicious. So that is sort of testing the background of the experiment. There are various things that can happen; drops can fall together, and that has to be dealt with. But the things we want to do, meteorites, special rocks, have got us into an area where all physicists should be humble, because chemistry is harder than physics. And I don't say that only because I was a chemical engineer. To do this, we have to grind up these various things and get them into oil in a colloidal suspension.
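A small aside on the "superimposing the valleys" step mentioned above: in software this amounts to folding each measured charge onto its residual with respect to the nearest integer and histogramming the result. The sketch below is illustrative only; the function name and the toy numbers are ours, not the experiment's.

```python
import numpy as np

def residual_charge_histogram(charges_e, bins=100):
    """Fold measured drop charges (in units of e) onto [-0.5, 0.5) around the
    nearest integer and histogram the residuals.  Integer-charged drops pile
    up at zero; an isolated fractional charge would appear as a displaced entry."""
    q = np.asarray(charges_e, float)
    residual = q - np.rint(q)                  # signed distance to nearest integer
    return np.histogram(residual, bins=bins, range=(-0.5, 0.5))

# Toy data: integer charges blurred by measurement noise, plus one exotic drop
rng = np.random.default_rng(0)
toy = rng.integers(0, 6, size=100_000) + rng.normal(0.0, 0.03, size=100_000)
toy = np.append(toy, 2.31)                     # hypothetical drop with q = 2.31 e
counts, edges = residual_charge_histogram(toy)
```

With real data, a drop carrying, say, a charge of n plus one third would show up near plus one third in an otherwise empty region of this folded histogram.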
And I must tell you, studying and understanding colloidal suspensions, not so much their thermodynamics but how to make them in this complicated situation, is a lot harder than studying string theory, neither of which I had studied. I won't take time to study string theory, but we are trying to understand this. Now, I'm hurrying, because out of this work, which is in its middle, came another idea, which I only got a few months ago, and I'm just writing a Physical Review Letter about it, which I'm afraid maybe is so simple that it will not get accepted. But it doesn't matter; I can tell you here, and then people will start knowing about it. Okay. And it has to do with high energy physics. You can use these falling drops, forget about their electric charge, to look for very massive particles. And here I have to go off a little bit into the way high energy physicists talk about mass. Okay. We usually talk about mass not in grams or kilograms, but in terms of GeV over c squared. The heaviest known particle has about 90 GeV over c squared. Remember, the proton is about 1 GeV over c squared. Now, the Large Hadron Collider, the beautiful machine now being built at CERN through European cooperation, the collaboration of the European nations, will get us up to maybe 5,000 GeV over c squared, because it's a 10,000 GeV machine. But for many years the high energy world has been full of speculations that something happens at 10 to the 16th, 10 to the 18th GeV over c squared; it's the so-called unification scale. That's an enormous energy and not reachable by present techniques, though I'm a tremendous optimist, and who knows what will be happening 500 years from now. Now, let's go between high energy units and ordinary units. 1 GeV over c squared, which is about the mass of a proton, is about 10 to the minus 24 grams. Now, our drops have a mass of about 5 times 10 to the minus 10 grams, and we can control that very well; we can make smaller drops, larger drops. Take the mass of our drops and put it in GeV. So the mass of our drops, remember this is the rest mass, is about 3 times 10 to the 14th GeV. And these drops are therefore lighter than some of the very massive particles that people have speculated about. Now then, what follows is so straightforward that I can just talk it through. We make a lot of these drops and we can make them very uniform, so the mass is again the same, the same, the same, to about a few percent. Suppose one of them in this line, as we're making them, has this very heavy particle in it, that little red thing in there. Then that drop will have a higher terminal velocity than the other drops, and it will stand out. So in fact, what we're just starting out to do now, and it doesn't require a modified apparatus, is to look for the following. We make a lot of drops, again as we're doing it, and this we're beginning to do with a colloidal solution. And right here you see this enormous peak, which will have 10 to the 6th, 10 to the 7th, 10 to the 8th drops in it. If one of those drops has a very heavy particle in it, its mass will be quite different. So the simple plan is to just measure drop after drop, and we actually use some of the bigger drops to get more volume, and look for very massive particles. One, of course, they have to exist; if they don't exist, well, let's hope they do. Two, they have to be stable. I think everybody who does high energy theory agrees that there are such heavy particles, but in most cases everyone also thinks they're unstable.
They also don't like them, and they also try to get rid of them. But in fact, though I refuse to study string theory, it is true that string theory also can predict some of these, which are also fractionally charged, so maybe they are stable. Okay. So one, these particles have to exist. Two, they have to be stable. And three, they have to be sufficiently abundant so that in a couple of years one can find them. And we can look through different kinds of matter, about a tenth of a gram of matter, by this method. So there have to be at least one or a few of them in that. Now, I don't believe that finding one is of any use. What you have to do in science, in this kind of science, is find a lot of them, 10, 20, 30. I don't intend to try to save them or anything at this point, though I've had endless discussions about how you might. Because what we'll do in this kind of very speculative work is, suppose the gods of fortune shine upon us and we do see the second peak, then we publish it, going through the referee system, and what we have done is design an apparatus which is very easy to copy. It just uses ordinary machine shop work, and most of our components are commercial, easily bought from computer people, video frame grabber people and so forth. So that if we are lucky enough to see that second peak, we will publish it if the referees agree, and then other people, and we always say exactly how we build the apparatus, will then try it. And it's possible that they could find it, or it's possible we could have made a mistake in some subtle way, two drops united, I don't know; there are many odds and ends on this thing. Anyway, that's where our research stands in this area. We are continuing with the fractional charge work and starting this work, which almost uses the same apparatus. Now, my main point in telling you about this, for the young people, is that ideas come out of working on the experiment. This idea, which is so obvious, never came to me until we were doing the more complicated fractional charge search, and it's easier to make big drops than small drops, and I kept thinking to myself, why isn't there some way to use the big drops? The small drops are better for the charge measurement. And it was just working with that that this idea occurred. So I think one of the most important things for the experimenters in science is you must work at it. And that's where it stands. Thank you.
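As a quick check, for the reader, of the unit conversion quoted in the lecture; the precise conversion factor is not stated there, so the value used below is the standard one.

```latex
% 1 GeV/c^2 corresponds to about 1.78e-24 g, so a drop of 5e-10 g has a rest mass of
\[
  \frac{5\times10^{-10}\ \mathrm{g}}
       {1.78\times10^{-24}\ \mathrm{g}\,/\,(\mathrm{GeV}/c^{2})}
  \;\approx\; 2.8\times10^{14}\ \mathrm{GeV}/c^{2},
\]
% consistent with the "about 3 times 10 to the 14th GeV" quoted in the lecture,
% and indeed below the speculated unification scale of 1e16 to 1e18 GeV/c^2.
```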
“I am an experimenter, I am competent in mathematics, but I am not a deep theorist, so I have to stay away from all experiments that are hard to analyze”, Martin Perl told his audience in Lindau in this only lecture he ever gave there. This was an understatement, of course. After a bachelor’s degree in Chemical Engineering, Perl had been trained as a physicist in the department of Isidor Isaac Rabi (Nobel Prize in Physics 1944) at Columbia University in New York, where theory and practice met and merged at such a top level that it would breed many future Nobel Laureates. Without a deep theoretical understanding of particle physics and without detailed calculations of nuclear decay processes, Perl wouldn’t have been able to devise and conduct the experiment at the Stanford Linear Accelerator Center (SLAC), which in the mid-1970s led to the discovery of the tau lepton. It is 3,500 times heavier than the electron and 17 times heavier than the muon, and it turned out to be the first member of the last family of the standard model. For its discovery Perl was awarded the Nobel Prize in Physics 1995 “for pioneering experimental contributions to lepton physics”. He shared it with Frederick Reines, who had discovered the electron neutrino in 1956 together with Clyde Cowan. The tau neutrino was only discovered in 2000 by the DONUT collaboration group at Fermilab. In this lecture, Martin Perl does not talk about the three different families of elementary particles, however, but introduces the experimental work, which he had “started about four years ago”, with the objective of solving “the old problem of the existence of fractional charge in isolated particles”. In his famous oil drop experiment in 1909, Robert Millikan (Nobel Prize in Physics 1923) had proven the existence and first measured the magnitude of the elementary electric charge that is carried by a single electron or proton. Fractions of this elementary charge only occur in quarks, the building blocks of protons and neutrons, which carry either two thirds or one third of it. Yet isolated quarks have never been observed. Quasiparticles, on the other hand, with whose formation Robert Laughlin (Nobel Prize in Physics 1998) could explain the fractional quantum Hall effect, are fractionally charged, but they are not elementary particles. The search for fractionally charged particles need not necessarily be conducted in accelerators. If they exist, one should be able to identify them with the same method “as used about 90 years ago by Millikan brought up to date by modern technology”, says Perl. He depicts his experimental set-up, which calls for studying bulk material “which has come to us from 10 billion years ago, as unhandled as possible, so the things to look in are meteorites and rocks on the earth’s surface that have been formed early”. To succeed in doing so, one has to grind up these materials and get them into oil in a colloidal suspension, a challenge that according to Perl is more difficult than studying string theory: “All physicists should be humble, because chemistry is harder than physics”. Another line of research which Perl pursued with the modernized Millikan experiment was the quest for very massive particles. His intention was to identify falling drops which have a higher velocity because they contain very massive particles. 
He admits that this idea may sound too simple to be taken seriously by his peers, but convincingly explains why he finds it important to share it with young researchers at Lindau: “Ideas come out of working on the experiment. This idea, which is so obvious, never came to me before doing the more complicated fractional charge search.” Joachim Pietzsch
10.5446/55093 (DOI)
Thank you, Professor Crotier. Count Bernadotte, my colleagues, distinguished guests, fellow students, ladies and gentlemen. It's a very great privilege to be here and to have the opportunity to talk in this company, and particularly under the warm hospitality of our hosts in Lindau. And I'm very grateful. I want to talk about the work in my laboratory over the last ten years in the field of developmental biology, not so much as work in itself, but as an example of some grand questions of biology. Whenever I discuss this subject, I have a considerable degree of embarrassment, I must say. It's an extraordinarily difficult subject at one level, and I always try to make an arrangement or a pact with the audience. And rather than go into the legal arrangements of that pact before you, I would just like to exemplify it by telling you the story of two tourists who came to a foreign land and on their first night in a new city decided to go to a nightclub, where they sat down and heard a comedian telling one-line jokes, and one of the tourists fell off the chair laughing, and the other looked down and he said, what are you laughing about? You don't even understand the language. And he looked up and he said, I trust these people. What I would like to say in the beginning is that there are two major problems in developmental biology. The first I shall call the developmental genetic problem: how can a one-dimensional genetic code specify the shape of a three-dimensional animal in time? And the second problem I shall call the evolutionary problem: how, whatever the solution to the first problem, can it be that that solution is compatible with the fact that over very short periods of evolutionary time, the radiation of a taxon or a class or even a species can change in form very rapidly? Both of these issues were the great concern of Haeckel and von Baer, and the questions of the relationship between ontogeny and phylogeny that they broached are still with us and indeed, I would like to say, constitute together the central unsolved problem of modern biology. The key issue is to understand the central importance of place or position during ontogeny, the recognition by a cell of its particular position in a historical sequence. Now, two strategies may be adopted to attack this problem. The first has already a considerable and distinguished history. That is, to search for genes regulating structures or their repetitions, such as those found in Drosophila, the so-called homeotic genes or homeotic mutations, which control or define particular structures during development. For example, the repetition of the thorax or the replacement of the antenna with a leg-like structure. Undoubtedly, this particular strategy shall yield deep insight into the questions that I have raised. But there is an alternative strategy, and it is the one that I would like to talk about today. I would like to discuss the discovery of several new molecules related to that strategy. In this strategy, one may attempt to define a molecule that is known to exert a specific function in morphogenesis, to trace its function, and then look at the sequence of its temporal expression and finally go back to the gene and see what this means for these two fundamental questions that I raised. As I said, this is, of course, the strategy of which I wish to speak. But although I hope to make a global survey, I hope you don't think that I have the presumption to think that the questions I raised are answered. 
Indeed, what I'd like to do instead, since these questions of morphogenesis take their most exquisite form in the central nervous system, is ask two quite delimited questions. The first question is, what kinds of molecules and what is the minimal number of molecules required to make that first dramatic distinction during embryonic development between the apparently homogeneous collection of cells and the first definition of the nervous system in a structure known as the neural plate? And that is my first question. The second question I would like to ask is whether the same or different molecules are used later during histogenesis in the formation of complex organs, and to concern myself with the mechanisms that might govern this particular formation of organs. And so what I shall do in this lecture, after considering some general issues, is to discuss a bit about the isolation and the structure of so-called cell adhesion molecules, or CAMs, as I shall abbreviate them, look at their appearance then in early embryogenesis in these very critical early events of development, and then take up their function and sequence of expression during the formation of tissues, or in histogenesis. And in the end, I hope to take up again the questions that I raised in the beginning and see how they look from that aspect. Well, on the first slide is a picture, may I have the first slide please, of one of the half a million or so chick embryos that we have worked on in our laboratory over the last decade. And you are looking at the so-called blastoderm, you are looking down on the egg, and this would be the head, this is the tail, and you are looking at that particular formation here known as the primitive streak in the fundamental process of the development of form known as gastrulation. Most of us consider that the three most important events in life are birth, marriage, and death. I would like to assure you that that is not so, all due credit to my wife and to the drama of life. In fact, it is gastrulation that is the most important thing in life. Now, during this process, this homogeneous sheet of cells, which could represent perhaps 100,000 in number, of which about 6,000 to 8,000 will become the chick embryo, organizes itself by a series of morphogenetic movements to cluster up in this structure, which will advance towards the head and is known as the primitive streak. And cells will move in a series of movements known as, and I am glad to hear this since Count Bernadotte announced that we will have a polonaise, in a series of movements that are known as polonaise movements, down through the primitive streak to make the various germ layers and thus constitute gastrulation. These three germ layers, you will remember, are the so-called ectoderm, which will form the skin, for example, and the nervous system, and the endoderm, which will form the gut and various structures, and the mesoderm, which will form muscles and other things of that kind. Now, in very short order, a historical sequence of events involving some processes of which I shall speak shortly occurs. May I have the next slide? And this miracle ensues. The chick embryo is rapidly organized through a process known as neurulation into a nervous system with the eye cups and the forebrain vesicles, the neural tube, and a series of body segments known as somites, and indeed the gut and various numbers of different vascular structures. 
Anyone who witnesses this event is deeply impressed by both the historical aspect of the event and also its miraculous order. But I want to emphasize, as well as its order, the uniqueness of certain events and the variation involved. The key principle that I would like you to understand is that this is an example of so-called regulative development, in which cells of different history from those germ layers that I mentioned in showing you the first slide are brought together by morphogenetic movements to influence each other in a process known as induction or milieu-dependent differentiation, which leads successively to the formation of the brain, the heart, the gut, the somites, and the various structures in three-dimensional space, but in fact in a four-dimensional process, a historical process. And this process is controlled both by the gene program and also by so-called epigenetic events, historic successive events that depend only indirectly upon the genes, but relate more to the relationship of the cells to each other. May I have the next slide? On the next slide, I have shown the processes, the considered primary processes of development: the control of cell division, the formation of cells from other cells, differentiation or differential gene expression, the movement of cells relative to each other, or of tissue sheets held together under tension, either in two dimensions or three dimensions, the interaction of cells, which will be my major theme today, and cell death. Now, these various primary processes in various ratios and at various times are the processes that lead to the development of organic form, and a consideration of them and of our knowledge of them easily points out to us why we don't have an adequate theory of development in the same sense that we have an adequate theory of genetics or of evolution. Each one of these processes involves myriads of molecules, most of which are yet to be defined. They are in parallel sometimes, and at other times are serially connected, and they are under multiple modes of control. Certainly the first is the gene program, so elegantly delineated by Dr. Arber this morning, for which I am grateful, but in fact the events and the observations of developmental biology, particularly in the vertebrates, indicate quite clearly that the gene program alone cannot account for all of the events that give rise to morphogenesis. An alternative candidate is a process that I have called surface modulation. Could I have the next slide? Some time ago I proposed that perhaps some of the epigenetic changes that one sees during development are the result of alterations either in the prevalence or amount of surface molecules or in their position on the cell surface, or alternatively the result of chemical alteration of these molecules in such a way during time as to alter their function. Now this of course depends ultimately upon the genetic expression of these molecules, and if you will take these molecules to be those that mediate the adhesion of one cell to another, we come very close to the subject at hand. What I would like to do today in fact is to show you something about the subject of local cell surface modulation, to provide some evidence that it exists in its various forms, and that it follows a quite dynamic sequence. 
And indeed, if there is anything that you could take away from this lecture that I think would be of central significance bearing upon the initial questions I asked, it would be this: that there is a very dynamic set of alterations at the surface of cells, involving adhesion molecules but undoubtedly others as well, which are responsible for the regulatory loops that lead eventually to form during the development of the organism. Now what I intend to do in fact now is show you something about the molecules that undergo such modulations, particularly the cell adhesion molecules or CAMs. I won't dwell on the technical details, but I would like to show you some pictures of these molecules because they are new and they have several unusual structural features. On the next slide is the strategy that we use to isolate these molecules. As I said, I won't go through the details, but basically the strategy was to take the cells of one organism, for example the chick, immunize another organism such as a rabbit, and search for antibodies that would block the function of cell adhesion molecules in connecting one cell to another. And having isolated those antibodies, then to purify the molecules that they blocked, using a series of graded assays and a series of criteria. As a result of that we were able quite early to isolate two kinds of antibodies. One called anti-N-CAM, antibodies against the neural cell adhesion molecule in a variety of species, and another called anti-L-CAM, antibodies to liver cell adhesion molecules. And as you shall see, these are quite different molecules with quite different functions, but they are intimately related in their dynamic expressions during development. We found that fragments of the anti-N-CAM molecule, fragments because if we used the whole antibody the antibody itself would glue the two cells together by their adhesion molecules, so we took advantage of making antibody fragments that had only a single valence. These fragments could inhibit the adhesion of neurons to neurons, of nerve cells to nerve cells, the fasciculation of the processes of nerve cells to form nerves, the retinal layering that occurs even in vitro in chick embryonic retina, and even the adhesion of neurons to primitive muscle precursors such as myotubes. And these antibodies were found in fact to stain the surface of all neurons, both central and peripheral. Similarly, antibodies to L-CAM, which had quite different specificity, blocked the interaction of liver cells in culture to form adhesive masses as well as the pseudo-tissue formation that one observes during the culture of such liver cells. Well, when we, may I have the next slide, used the quite specific antibodies to isolate these molecules and subjected them to fractionation by electrophoresis, we immediately found a quite distinctive set of patterns. In the N-CAM or neural cell adhesion molecule, a protein, in fact a glycoprotein, isolated by this method, we found a very diffuse smear of multiple molecular weights extending from 180,000 to about 250,000 in the embryonic animal, but in the adult, much to our surprise, we found that the molecule consisted of two very sharp bands at 180,000 and 140,000, and I shall come back to this transition from the embryonic form to the adult form because in fact it represents a form of local cell surface modulation. Now L-CAM showed no such change by this criterion: it had a molecular weight of about 124,000 and also was a glycoprotein, had sugar connected with it, but did not show this E-to-A conversion. 
Neither molecule interacted with the other and indeed each is specific for itself and that is a very critical fact. What I would like to do without dwelling on the details is show something of the structural features of N-CAM and compare it very briefly with L-CAM. May I have the next slide? Here we have a pattern; you may ignore this, which is simply technique. We have a pattern in a linear form of the N-CAM molecule situated on the cell surface. This is the so-called amino-terminal portion of the molecule which is now known to be arranged in three domains. An amino-terminal domain, a middle domain and a carboxyl-terminal domain. The amino-terminal domain contains a small amount of carbohydrate and is responsible for the binding homophilically to another CAM from an opposite cell through the same domain. It is the binding domain of the molecule. The middle domain which is not involved in binding has a most unusual sugar structure. Indeed it contains 30 grams per 100 grams of the polypeptide of an unusual charged sugar, unusual in its linkage, as polysialic acid. A most unusual structure not usually seen in vertebrates, seen of course in bacteria, but carrying out a most intriguing modulation function of which I shall speak. And the third domain we now know is concerned with the relationship of the molecule to the cell surface. This is an intrinsic protein. It is inserted into the lipid bilayer and in fact it is mobile in the plane of the membrane. When we compare N-CAM which is only crudely depicted here with L-CAM we see a quite different structure. L-CAM is not arranged in this series of domains, has very little structural resemblance to N-CAM, is cleaved in a region over here which is possibly a domain and does not have the unusual sialic acid structure. Instead it depends for its binding to L-CAM on calcium. And indeed if calcium ions are not present the molecule is rapidly destroyed by enzymes which cleave it into pieces. So both its structure and its binding function depend upon calcium. This is not the case for N-CAM. I would like to show you briefly before turning to my major theme a picture of N-CAM on an electron micrographic grid but with the caution that this is not a picture as it exists on the cell surface. Can I have the next slide please? On this next slide you will see an electron micrographic picture, rotary shadowed with metal, of the N-CAM molecule. Indeed, to speak correctly, of three N-CAM molecules linked through this hub here in the center. And I think you can see the domain structure in this so-called triskelion form of the molecule. It is not the only form by the way. It resembles the molecule clathrin but it is not that molecule. And the molecule can in fact exist in forms carrying four chains and what have you. My main point in showing you this picture is to indicate that one can visualize the domains responsible for the various functions of this binding structure. Well now let me turn to my theme. And I would like to caution you that on the next slide I am only showing you a sequence, not to belabor the details but only to give you some form and frame into which to place some of the pictures I am going to show you of development and the relation of the CAM molecules to development. So having just shown you the structure of these molecules in a crude way and having mentioned that their binding is homophilic, each to each but not crossed from N-CAM to L-CAM.
Let me now show you my main theme which is that these molecules appear in the most extraordinary distributions in time and space during development. That they first appear together on the blastoderm that you saw on the first slide all over the blastoderm. But at that just that moment when the nervous system or neural plate is induced in so called primary induction, that milieu dependent differentiation that creates out of the ectoderm, non-neural ectoderm and that which is to become the nervous system, there is a very sharp transition in which in the center of the blastoderm there is a very large increase of N-CAM and decrease of L-CAM and reciprocally around that ring a very large increase of L-CAM and decrease of N-CAM. And then in each one of the regions during so called secondary inductions as cells are moved together historically, there is an extraordinary surface modulation in prevalence of the two CAMs in a quite distinct order. And then subsequent to that in the formation of the nervous system as the glial cells, the support cells of the nervous system appear, yet a new CAM which is not seen in these earlier epochs appears in order to mediate this tissue formation and finally around birth, a most extraordinary set of changes occurs in the chemical structure of the CAMs that has to do with the formation of the detailed tracks of the nervous system as well as the tissue structures such as those of the pancreas, the gut and the liver. And in fact this is my theme. So if you will bear with me what I plan to do now is show you very quickly some slides of the distribution of the CAMs using the antibodies appropriately labeled in time in the embryo very rapidly to come to the fundamental point of early embryogenesis which is this question of positional specification or shape determination by place. And in order to do that I'm going to have to show you a series quickly of fluorescence photographs of labeled antibodies to CAM and their distributions. But my main point will be to come to a mapping of the CAMs and to a discussion of their significance before I turn to tissue structure. And thus answer I hope in a tentative form my first question which is how many of these CAMs are involved in making the neural plate the first determination of the nervous system. May I have the next slide please. Well here I come to perhaps the most dramatic observation although it might not appear so on this slide. Here you are looking at the surface of the blastoderm. You are looking before down upon it. Now we have sectioned it and stained it with antibodies to N-CAM. And in this so called epiblast layer of the gastrolating blastoderm you see that all of the cells are lit up or stained with antibodies to N-CAM as are the cells of the other layer. But the most distinctive observation is that if we stain for L-CAM we see exactly the same result all over the blastoderm. Both N-CAM and L-CAM stain the entire structure. With one exception that during the movement of the cells through the so called primitive streak which was that structure I showed you at the bottom of the first slide to form the middle layer or the mesoderm rudiments the CAMs disappear from the surface of the moving cells as they are moving. But even more dramatically during the formation of the so called neural fold or neural plate the N-CAM concentration increases enormously while the L-CAM concentration shown on this side and I don't know if you can barely see the outline disappears practically to zero in the neural tube. 
This is a symmetric cut. Whereas in the contrasting mode the L-CAM increases in the non-neural ectoderm and in the endoderm which is going to form the gut, while N-CAM disappears from the non-neural ectoderm. That, while not dramatically expressed on that slide, is exactly what is occurring in time and space during the formation of the first neural plate which is going to become the nervous system, thus setting the axis of the embryo and a good deal of other fundamental events. Now over here is a slide indicating how certain very specialized cells that are going to form from this neural plate the peripheral nervous system, the ganglia, the sympathetic ganglia, the dorsal root or sensory ganglia of the body, a good part of the peripheral nervous system, some of the bones of the face and other structures such as melanoblasts, arise from a set of cells known as neural crest cells at the top of the neural tube. On this first panel here the neural crest cells have already begun their characteristic migration to form a ganglion, and as they do that their staining for N-CAM completely disappears. They move instead on a molecule which is a substrate molecule known as fibronectin, which appears and is made by other cells as they move in this particular gully formed by the various structures of the embryo to their destination. But when they reach their destination, after one or two cell divisions they re-express CAM on their cell surface as shown by this stain and they form a ganglion very rapidly as fibronectin disappears. So in the formation of this most exquisite set of movements correlated with the interaction and aggregation of cells to form peripheral structures there is a surface modulation of the CAM molecule that is coordinated in a very conjugate fashion with the appearance of substrate molecules that permit movement. On the next slide you will see another dramatic example of this kind of event if you will focus your attention on this structure here. This is a time sequence showing the so-called somites, which are the regular derivatives of the mesoderm and are going to form the segmented structures of the body, various bony and muscular structures, just before these have segmented. At the moment they segment a large star of N-CAM staining appears over the cells of the somite as it is organizing in that fashion. Well on the next slide let me turn your attention to the other molecule, L-CAM, which is not related to the nervous system, which comes up in organs having to do with the endoderm largely but also other germ layers, and show you that in the inductions responsible for forming organs such as the liver, the lungs and the formation of the connection between the pharynx and the skin in the so-called branchial arch this molecule plays a key role. Here we have, just before the formation of the anlage of the liver or the liver rudiment, a large increase in the diffuse staining of L-CAM in the gut, and we can practically diagnose the appearance of the liver rudiment by looking at the appearance of this molecule on the cell surface. The same is true for the two lung rudiments and the same is true in the obliteration of the primary germ layers, the endodermal pharynx fusing with the ectoderm; that event occurs through L-CAM which shows in that slide. L-CAM is also responsible in the kidney, but perhaps we could go ahead and look at that. Could we have the next slide?
If we look at the kidney, which is a mesodermal derivative and a most complex organ with a complex evolution, we see the following striking thing. First, in the inductive tissue known as the Wolffian duct, which is really the derivative of the collecting tubule of the most primitive kidney and is responsible for organizing the loosely arranged mesenchyme, so-called, to make kidney tubules, L-CAM appears first, and then as the tubules are organized by this inducing structure N-CAM appears on the tubules. Could you go back to the previous slide, and then in a quite clear cut sequence, could you go back one, perhaps that's too complicated and I can't do it here, but the main point is that when this extends forward this N-CAM disappears and L-CAM reappears on the extending collecting tubules. So what one has is a sequence of appearance of these surface molecules in a quite defined order. They're all carrying out adhesive functions, and that adhesive function as we shall see depends very much on how many molecules are there, where they are disposed and what the chemistry is. Well, forgive me, the next slide is more for my purposes than yours and it simply summarizes what I have said. What I have said so far is that these primary CAMs or cell adhesion molecules appear in very early embryos, for L-CAM in all three germ layers. L-CAM appears in the non-neural ectoderm, in the mesoderm of the Wolffian duct in the kidney, and in the endoderm, and is indeed responsible for all gut structure adhesions. And in the five to thirteen day embryo the various epithelial or tissue sheet derivatives of these layers are seen clearly stained. But the main point of this slide is to show that even in the adult structures that derive concordantly from these germ layers these molecules remain. The L-CAM remains in the stratum germinativum of the skin, in the epithelium of the kidney and the oviduct, and in the whole successive set of epithelia that come from the gut derivatives, including I should point out several immune precursors, the thymus and the bursa, several glandular precursors, the thyroid and the parathyroid, and indeed Rathke's pouch which is the glandular portion of the pituitary gland. And N-CAM, unlike L-CAM, only belongs to two layers, the ectoderm and the mesoderm, and has a derivative comparable in a succeeding embryo and is found mainly in the nervous system in the adult, and is found in all parts of the nervous system in the adult. This slide is incomplete; we have recently discovered that the CAMs are present, the N-CAM is present, also in cardiac muscle and in some form of structure in the testis. Now if I could have the next slide I can summarize what I've been saying so far. What I began with is that the key question in development is the question of the relationship of place and time and the historic succession that leads to expression of genes and what controls that. Embryologists, could we have the next slide, embryologists express this in terms of what is known as a fate map. Could I have the next one? A fate map is a virtual map constructed in two dimensional space to express what will become of a particular portion of the blastoderm for a particular portion of the four dimensional continuum. Let us for example say we want to make a fate map of the blastoderm, as Luc Vakaet of Antwerp University so kindly provided me with these details, for the period of organogenesis, the formation of all the tissues in that stage of embryogenesis.
The trick is to label a particular cell in a particular place on the blastoderm and see where that cell would go, and thus to construct a map: cells from this region will go to the nervous system, cells from this region to the non-neural ectoderm, cells here to the somites, cells in the lateral plate to the heart, the urogenital system and smooth muscle, cells here to the endoderm or the gut derivatives, and cells here to the blood forming islands. This is just the primitive streak I started with. Well, when we made a composite map of the CAM molecules I have spoken about so far, we were rather delighted to see an interesting uniformity. And that is, if we superimpose the distribution of CAM molecules in this fate map onto the classical map, we see a rather simple topology. The CAM indicated by the dots forms a continuously connected simple topological domain all here, consisting of the nervous system, the notochord, the somites and all the lateral plate derivatives here, surrounded by the calcium dependent ring of endoderm and non-neural ectoderm which express L-CAM. So to summarize that, I will come back to this at the end. The two CAMs appear in several germ layers, N-CAM in ectoderm and mesoderm, and L-CAM in all three germ layers. They occupy and cross boundaries of various kinds of cell differentiation both within and across the CAM denomination, and indeed as you will see there is a portion of the map that has no CAM so far, suggesting that there will be other CAMs to be discovered in the primary map. Now what has this got to do with my original subject? Well first let me come to a tentative conclusion, and that is, to form these epithelial sheets through cell-cell interaction it seems fairly clear that you need at least two CAMs of different specificity. You need N-CAM and you need L-CAM. If you had only one and the various modulation events you would land up with one tissue sheet but just a loosely arranged set of cells around it, and it is these tissue sheets which will fold up in space to form the various tubes. This to form the nervous system, this one to go in here and fold to form the gut and what have you. It is the particularly ordered relationship of the CAMs in time and space that certainly has something to do with the orderly process of that folding. Well now let me turn very quickly to my next few subjects. May I have the next slide please? On this slide you will see that if we add the antibodies to the CAMs, and now I address the question, are they present in other tissues. If I add the antibodies to the CAMs to nerve cells growing in culture I completely distort the ordered pattern of those nerve cells. Here for ganglion cells in tissue culture, and here for retinal cells, which have completely destroyed layers in the presence of the antibodies, and their cells no longer interact with each other. May I have the next slide? Indeed we can disrupt the maps of the nerves to the brain. The next slide please. We can, for example in the frog, disrupt the maps of the brain by adding antibodies to CAMs in the target portions of the brain as we have here in the frog. Could I have the next slide? And in that next slide you will see that a map of a frog brain, which is quite orderly, shining light in the eye and recording from the so-called tectum where the optic nerve will end, is rather radically disrupted by the presence of these antibodies in a most dynamic fashion. And so we are quite convinced that the CAMs also are involved in this most exquisite of tissue formations. May I have the next slide?
In this next slide I want to go back now, having shown you the changes in surface modulation of amount, to the chemical change. As I said the embryonic form of the neural molecule changes from a high amount of sugar to a low amount of sugar and I want to now show that that alters its binding. The sialic acid changes from 30 to 10 grams, indicated by this electrophoretic change. The next slide please. If that is so then these molecules ought to bind differently, and we have found by a kinetic test indeed that that is so. The embryonic form binds four times less rapidly than the adult form. May I have the next slide? And in that slide that is indicated here. We have taken the pure CAM, put it into lipid vesicles, interacted the vesicles, and find that the rate of binding is influenced by this sugar change. Even more dramatically, if the amount of the CAM is doubled in the lipid vesicle the rate of binding goes up over 30 fold. So this chemical change which is occurring later on, the next slide please, also orders the distribution. And you can now see in a mouse embryo in the formation of the brain tissue the ordered change from the embryonic form to the adult form of CAM, indicated by this very large smear of sialic acid, finally to these forms which bind more rapidly. If this is true, different areas of the brain should do it on a different schedule. May I have the next one? And on that next one you will see such a schedule. The cerebellum does it faster than the spinal cord, the spinal cord faster than the cerebral cortex. The olfactory bulb, where its receiving cells are destroyed and turned over in time, has always an amount of embryonic form even in the 180 day old mouse. And that is true also of the portion of the brain known as the diencephalon. And so we suspect that this chemical change is very important in the mapping of the kind that I told you about in the frog. Now in the next slide you will see an example of that in a disease. We took the so-called staggerer mutation of the mouse, a mutation which causes a disconnection of the cells in the cerebellum or the balancing organ of the brain, and we looked for the E-to-A conversion. And what we found is that at 21 days in the normal the E-to-A conversion had occurred quite nicely, but in the staggerer mouse it was delayed indefinitely. And that appears to be related to the formation of the connections of the particular cells in the cerebellum. And it was quite pleasing to observe it in this mutant which relates to the nerves but not in these mutants of the cerebellum where it is related to the glial or support cells. And that turned our attention finally to the question of whether new molecules were involved. So far this says that the same molecules that are involved in forming the early embryo are involved in tissue formation, even that of the brain. But what happens when you form a new tissue? May I have the next slide? Here in this next slide you will see what the principle is. I have been talking about the binding of N-CAM to N-CAM and nerve to nerve, or of N-CAM to muscle via N-CAM on the muscle. What of the case of nerve binding to glial cells where possibly two different molecules of different specificity are involved? Well recently we have isolated by similar tests a molecule called NG-CAM, neuron-glia CAM. May I have the next slide? And it is different than N-CAM. It appears at three and a half days in the chick, long after N-CAM has appeared. It is different in structure. Here it is over here.
This is N-cam in the embryonic and in the adult form. This is N-G-cam. The two molecules have quite different structure. But interestingly enough on the next slide they appear on the same nerve cells. Here are some nerve cells in tissue culture as you see up here. Stained for N-cam and for N-G-cam and you can see that they are on the same cells. Well may I have that next slide which summarizes my comments. Yes, thank you. Well now I have gone through the question of the cams having to do with the early and most fundamental determination in space and time of shape. Now I turned to a rapid consideration of the nervous system, that most exquisite tissue, and we found that the same molecules were involved but that when particular cells arose an additional cam was added. Now I go back to that series that I showed you before. That cams appear together, diverge in early embryogenesis, change up and down in amounts, putatively altering their rates of binding, and finally new cams appear when new tissues are formed as for example the glial cells in the nervous system, and then presumably as the structures are finalized in a dynamic way a chemical change occurs in these molecules. The fundamental idea, you can leave that slide on and put up the lights please, the fundamental idea is this, that unlike the suppositions that animal form results from a very large number of specific molecules which form addresses for cells hooking together as you would form a jigsaw puzzle, the story looks very much more like a mountain stream. A mountain stream which is flowing as cells do under movement but under control of its barriers which we might consider the cams. Consider that mountain stream hitting a boulder, having a rock which then becomes icy and an ice flow developing and splitting the stream in two. This dynamic principle of growth under molecular control seems to be what guides this sequence and indeed the idea is that the modulation of cams which obviously are the result of gene expressions through these various processes feed back on the primary processes to alter the tissue patterns and that that picture is enriched as growth and differentiation arise. May have the next and last slide. In such a way as to alter the embryonic induction that is the genes controlling cam expression and modulation interact with adhesion so as to control the movements of the cells and thus control the induction which leads to the various formation of organs and tissues. And that undoubtedly as other regulatory genes involved in cell differentiation, for example the glial cells that I showed you about, intervene the combination of the two yield increasingly complex structures. May have the lights please. So what I have told you finally is that cams exist. They are glycoproteins. Could we have the lights? And that they are at the cell surface. They undergo major changes. They do have specificity but they are not the specificity of address. That they are under control of genes but not only under the control of genes and that they themselves regulate the tissue movements that lead to the tissue dependent differentiations that lead to animal form. Well I hope that this will be some example. I cannot help but take the opportunity since there are so many young people here and I look forward to talking about this subject to indicate to you that this is just the beginning of a subject and that science is about questions as much as it is about answers. 
I remember the remarks of van 't Hoff, who said that science is imagination in the service of the verifiable truth, and as such it is eminently practical. This is true, but I would like to emphasize for the young people who may be bedazzled these days by technique that science is as well, like poetry, about spirit, about imagination and about variation. And if there is one human lesson from this modest story it is that, for example, no two brains will be alike, and significantly so, no two individuals will be alike. No one can predict even an embryonic future in all its minute details, even though there is regularity, and in that I share Dr. Arber's optimism. My optimism is in the range and the poetry of individuality, and I hope that all of those students remember that little lesson. Thank you very much.
Gerald Edelman came to Lindau with a fascinating story about the discovery of CAMs, Cell Adhesion Molecules. By sticking to the surface of cells, these molecules guide the processes by which cells bind with other cells. In particular they play an important role in the way animals build their nervous systems and achieve their shape and form. This was the second time that Edelman lectured at the Lindau meetings, but already his first lecture in 1975 showed his interest in the research, which eventually led to the discovery of CAMs. With a considerable number of Nobel Laureates in Physiology or Medicine in the audience, it seems that Edelman to some extent gave his lecture for them and only now and then for the students and young scientists. That he was aware of what he was doing is underpinned by the story he tells in the beginning: Someone laughing to jokes given in an un-understandable foreign language just because he trusted that they were funny. Edelman’s lecture must have been to a large part far over the heads of the students and many of the young scientists. It is accompanied with very many slides, sometimes shown very quickly. But when he arrives at the end, a little bit out of breath and maybe with a little bit of a bad conscience, he stops and addresses himself to the young people in the audience. To these he gives several optimistic messages, one of them being that there is much more to find out, another that science is about questions as much as it is about answers. After quoting the very first Nobel Laureate in Chemistry, J.H. van ‘t Hoff, Edelman turns to poetry by noting the similarities of science and poetry, both being about spirit, imagination and variation. Finally, he ends with a more scientific message: No two brains or no two individuals will be alike and no one can predict the embryonic process in all its minute details! Anders Bárány
10.5446/55094 (DOI)
Mr. Chairman, Count Bernadotte, organizers of this conference, fellow students, I'd like to thank you very much for this opportunity to talk to you today. This is the 34th such meeting and is devoted to, quite properly, medicine and physiology. This means that I've had a lot of heavy going in the last few days because I know very little about medicine and physiology. I'm going to talk about something to do with medicine and physiology. What I'd like to do is present this in a broader context of applications of certain principles in science and engineering and also to put it in a historical perspective; this subject, although it seems quite new in some respects, has been going on for almost 80 years in one form or another. I have found these historical developments very interesting and perhaps you will find this interesting also and maybe a bit of light relief from the heavy physiology that we've been having for the last couple of days. When I first started thinking about computer tomography in 1956, it clearly became obvious that the problem involved a mathematical problem and I went about solving it. It was only some time afterwards that I learned that this problem had in fact been solved by an Austrian mathematician by the name of Johann Radon in 1917. Since then, in fact, just in the last half a dozen years, I have learned that the problem predates Radon's discovery. It goes back, as I've said, right to the beginning of the century, maybe even into the 19th century. We don't really know. It also has many applications, one of which, as I will show later on, goes back to 1936. Today I'm going to give a very quick summary of the applications and history of this problem. I shall do this in two ways. One by doing a quick survey of images in ordinary two-dimensional space, starting off with something small like a virus and then going to something large, the moon, for example. Secondly, I'll talk about generalizations of this problem starting way back around about 80 years or 90 years ago and bringing it up to the present time. First slide, please. Let's first state what the problem is. Suppose you have a section of the plane here, this domain D, and you have a straight line across it, line L. If we could focus that a bit more please. Suppose we measure a quantity down here, GL, which is the integral of some density, which varies in this region, along this line L. Now if you don't like the idea of an integral, forget about it. Just think of the GL as the average value of this quantity F along that line. And of course, then one can take other lines and take a line like this. One could measure, say, the average value along a second line. Then one can ask the following questions. Given a number of lines, perhaps an infinity of lines, which intersect this domain, and these lines all represent averages of F, can you then calculate F itself as it varies from point to point? It might be low here and high here and zero here and negative even in some cases. And the answer to this question is yes, one can do it. There is not just one solution to the problem. There are in fact many forms of solution, each of which is adapted to a particular application. And that is the end of the mathematics. All we have to do now is talk about some applications. And the next slide shows an application in electron microscopy. The slide was provided by Aaron Klug and the work was done by Klug and his group. And Aaron Klug received the 1982 Prize for Chemistry for this work.
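Written out in a standard notation (chosen here for definiteness rather than taken from the slide), the quantity measured for each line is a line integral of the unknown density, and Radon's problem is to invert the map from f to g; with that stated, the applications follow.

```latex
% Line L(\theta, s): direction angle \theta, signed distance s from the origin.
\[
  g(\theta, s) \;=\; \int_{-\infty}^{\infty}
    f\bigl(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta\bigr)\, dt .
\]
% Radon's problem: given g(\theta, s) for enough lines, recover f(x, y) at every
% point of the domain D.  (The "average value" of the lecture is this integral
% divided by the length of the chord across D.)
```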
Suppose that you have here a virus with an unknown structure and you look at it in a long-focus electron microscope with a number of beams in different directions, one like that, one like this, one like this. Then what you get is a projection in this direction of the structure of this virus. In this direction you get this projection. And each projection is a set of quantities, these quantities GL; the GL correspond to the different heights of this, corresponding to different lines. And so Radon's problem can be stated in a different form. Given a number of projections of an object such as this, then can you reconstruct the object from the projections? The answer, as I've said, is yes. This thing here represents a numerical procedure and as I've indicated, there are several of those. And from this numerical procedure one can deduce the structure of the object that you set out to look at. In Klug's case, he was looking at the human wart virus and these are the projections and this is the virus structure down here. Now let's go on to something a little bit larger where we're not dealing now with an electron microscope but with essentially a CAT scanner, but a very small CAT scanner, where what you're averaging along your lines is the absorption coefficient for X-rays along the lines. And this is the CAT scan of a mouse. It's about two centimeters across. And you can see a lot of the details in here, the ribs, for example. You can see the four legs here and here. You can see the structure of the lungs in the mouse. You can see the heart here. And all this is shown with a resolution of around about 0.05 millimeters, about a twentieth of a millimeter. This slide was provided by American Science and Engineering Company that make a little scanner like this. And this scanner, in fact, could be used to detect mammary cancer in mice. It would, of course, be more expensive than Professor Huggins's young ladies. On the other hand, it would be a pity to put them out of business because they seem to enjoy their work so much. The next two slides I've thrown in for fun. This next one is the scanner that I built in 1963, which doesn't bear much resemblance to a modern scanner. Just, though, to get a sense of perspective, this is a model of a head with two tumors in it. It took two days to get 250 pieces of data from which to create an image. It cost $100. This represents a modern scanner, which costs about $1 million. It takes about a million pieces of data in about five seconds and, of course, produces a much improved picture. Now, there are so many of these that I won't bother to show them, but I'd like to go on to another application which is still on a human scale. And that concerns the subject of PET scanning or positron emission scanning. Oh, something's gone wrong. No, it hasn't. I'm sorry. I have slipped a little bit. This is a little bit larger than human scale. In fact, this is a picture of a rocket motor, by contrast with the little mouse scanner that I showed you. This represents a section through a solid fuel rocket motor with a diameter of about two meters. One can look for defects there and can see defects less than a quarter of a millimeter in size. The defects that one can see, they're actually on this photograph here, but they're small and totally confused by the little bits of dust and so on on the blackboard and on the board and on the screen and on the slide. Now, this is the next application I wanted to turn to.
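Before the next application, here is a minimal numerical sketch, in Python, of the reconstruction-from-projections idea just illustrated with the virus and the small CAT scanner. It is an illustrative toy rather than the procedure used in any actual scanner; the phantom, the 90 viewing angles and the simple ramp-filtered back-projection are choices made here for brevity.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_projections(image, angles_deg):
    # Discrete stand-in for the quantities g(L): rotate the image and sum down
    # the columns, giving one parallel projection per viewing angle.
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def filtered_back_projection(sinogram, angles_deg, size):
    # Crude filtered back-projection: ramp-filter each projection in the
    # Fourier domain, then smear it back across the image plane at its angle.
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    recon = np.zeros((size, size))
    for proj, a in zip(sinogram, angles_deg):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        smear = np.tile(filtered, (size, 1))   # constant along each line
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# Toy "phantom": a uniform block with a denser inclusion, viewed from 90 angles.
phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0
phantom[28:36, 28:36] = 3.0
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = forward_projections(phantom, angles)
reconstruction = filtered_back_projection(sinogram, angles, 64)
```

More angles and more samples per projection sharpen the result, which is the practical difference between the 250 measurements of the 1963 bench experiment and the million measurements of a modern scanner.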
Let us suppose that this is an ordinary electron, negatively charged. The universe is full of them. All matter is full of electrons. And we know the properties very well. Now, as Dr. Yalow pointed out, in the 1930s, it was discovered that one could make artificial radioactive materials. And in particular, one could make some that produced positrons, which are identical with electrons except for their charge. They are positively charged instead of negatively charged. Because they have opposite charges, if they start together at rest they will be attracted together; they will then undergo some amazing process which results in their annihilation and they turn into two photons, one going off in this direction, one going off in that direction. And if these things were at rest originally, they would have no linear momentum. And if they had no linear momentum originally, they can have no linear momentum finally. And that is why these two photons have to go off in precisely opposite directions. And I think you can see that we are getting back to a straight line again and back to Radon's problem. And in fact, the way one uses this is as follows. Suppose you have here, say, a head and there is some distribution of positron emitting material in here. And suppose in this region, a positron annihilates with an electron and one photon goes off and is registered by that detector, then the other photon will go off and register in this detector. One can make a simple electric circuit which will tell you when these two detectors fire simultaneously and that will tell you that an annihilation took place somewhere in this column and that will tell you that there was a positron in this column here. Obviously, you can idealize this problem, shrink these detectors down, and the rate at which this counter is counting will represent the rate at which positrons are being annihilated along a line. Switch the detectors around and you get another line, another line, another line, and so you are back to Radon's problem again. And one can use this then to look for the distribution of positron emitting material. This is a result, an early result from Brookhaven. And these are PET scans of the head taken at various times. Now what is going on here? The head metabolizes glucose and glucose can be made in a form that's labeled with fluorine-18, which emits a positron. The patient is injected with this glucose. The glucose then just goes through the body, arrives in the head, and the various colors here represent the rate at which the glucose is being metabolized. And so one can study this rate and one can look at it as a function of time. In a normal brain, the pattern here would be quite symmetrical. In this case, there is something wrong in this side of the head here and this is an important diagnostic tool for the physician. Now CAT scanning depends on physics of the 1890s. PET scanning depends on physics of the 1930s. The next subject I want to talk about is NMR scanning which is the latest medical scanning modality and that depends on the physics of about 1950. And you will note that the gap between discovery and medical application is closing but it's still 30 years behind us, the original discovery for NMR scanning. Now the physics in each step is becoming a bit more complicated and there's no really simple way of giving a full explanation of what's going on in NMR scanning except to say the following.
Certain, I'll have to do a bit of hand waving here, certain nuclei which have magnetic moments, when placed in the magnetic field, will precess, and when you apply a certain radiofrequency field whose frequency is proportional to the magnetic field, you will cause certain transitions to take place. After a while, the nuclei will jump back to their original state emitting radiation which can be detected and this is the basis of NMR. If one has a fixed magnetic field and one varies the frequency, the NMR signal only comes at a resonance frequency which is proportional to the magnetic field. Now for 30 years, chemists were using NMR and they wanted to look at very weak signals, so they got larger and larger samples with more and more nuclei in them and larger and larger uniform magnetic fields, and they must be highly uniform, and you get a signal therefore from all over the sample. Now it was around about 1972 that Paul Lauterbur pointed out that if you apply a small gradient to the magnetic field, the resonance condition will be met only at a plane which intersects that magnetic field, so that by putting on a gradient you can restrict the NMR signal to a plane. Put on a different gradient, from a different direction, and you get a signal which comes from the intersection of two planes, and what is the intersection of two planes but a straight line, and we are back to Radon's problem. And by varying the gradients one can measure the signals over a number of straight lines and one can therefore obtain an image of the object. Well, I misspoke then; what I should say is that one can get many images of the object, and this is both one of the strengths and the weaknesses of the NMR method. In CAT scanning one measures basically a single quantity, the local absorption coefficient for X-rays. In NMR scanning one can measure the density, say, of protons throughout the body and we'll get a map something like an NMR scan. But in addition to that there are two relaxation processes that take place, so called T1 and T2 relaxation processes, which have characteristic times. And so there are three things that one can measure with NMR scanning, each of which gives somewhat different information. And by varying the pulse sequences of the radio frequency which is applied one can measure these things individually or collectively or somewhat mixed up. And so, however, the net result is the same, one gets images, and this shows you the sort of thing that can happen. These are three pictures, each pair corresponds to the same brain, but this is taken with one technique, this is taken with another technique, and so you see that you have quite different features showing. And this is going to be one of the problems for physicians, to learn what exactly they're looking for, because in certain circumstances you can arrange the signal so that say a tumor in the head will show up very beautifully; twiddle the knobs a little bit and the tumor goes away, on the picture only, unfortunately. Well so you have these different possibilities. On the other hand you can also collect data somewhat more easily in some respects than with CT scanning and this shows the kind of thing that will be available to a physician. Here you have a whole section, series of sections through the head taken with NMR scanning. You can see here the nose down here, nose, nose, nose, nose, and the nose is sort of disappearing up here as one goes up to the top of the head.
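To connect the hand waving above to a formula: the statement that a field gradient confines the resonance to a plane follows in one line from the resonance condition. The gyromagnetic ratio gamma and the gradient vector G below are standard symbols, introduced here rather than taken from the slides.

```latex
% Resonance ("Larmor") condition: the signal comes only where \omega_{RF} = \gamma B.
% With a uniform field B_0 plus a small linear gradient \vec G, the field at
% position \vec r is B(\vec r) = B_0 + \vec G \cdot \vec r, so
\[
  \gamma \bigl( B_0 + \vec G \cdot \vec r \bigr) = \omega_{RF}
  \quad\Longleftrightarrow\quad
  \vec G \cdot \vec r \;=\; \frac{\omega_{RF}}{\gamma} - B_0 ,
\]
% which is the equation of a plane.  Two non-parallel gradients give two planes,
% whose intersection is a straight line, and one is back to Radon's problem.
```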
And the physician can look at this and say I want that section or this section and as one can do with CT but it's a little bit easier with NMR if one doesn't like a horizontal section like that one can take a vertical section or a section at any direction one pleases. Well that is all that I'm going to say which has any connection with medicine or physiology and I want to go on and continue my exploration of things, getting ordinary images in scale. May I have the lights for a minute please? If you have a source of radiation here, a source of sound let's say and a detector there and let's suppose this is water then you can measure the time it takes from the sound to get from the source to the detector. If this water has variable temperature the velocity of sound will be a variable and what you will measure the time will be the sum or integral or average value if you like of the reciprocal of the velocity. So if you have a region say of the ocean and you put a whole lot of sources around and you have a whole lot of detectors you can define a whole lot of straight lines along here for which you can measure the average value of the reciprocal of the velocity. We're back to Radon's problem again from these transmission times we can find out locally where the velocity of sound is in the water and therefore the temperature. The next slide please. I'm sorry I skipped one application by Professor Singer that still refers to NMR scanning. Here what is important are these little blue dots here. If the process of getting an NMR image consists of essentially magnetizing the body momentarily in a certain fashion. Now parts of the body are of course not at rest like the blood. Blood in a blood vessel is moving and so the magnetization gets carried out of the plane that you're trying to image and by studying this quantitatively one can look at and that represents these things. Here one can actually measure the velocity of blood in the patient totally non-invasively and this can be very important for giving the state of the arteries and so on. Well this represents these two things down here. Represent oceanography. Oceanographic tomography these slides were provided by the Ocean Tomography group at MIT Woods Hole, La Jolla and assorted other places that I don't quite remember. But can't forget these things here. There's a mistake. These regions here are 300 kilometers on a side and this represents variations in sound velocity about some average velocity denoted by zero and from that one can measure the temperature. And before the ocean tomography the advent of ocean tomography, how would one do this? One would start with a ship in one corner here and with a trailer thermometer and we'd go along and measure the temperature everywhere and back and along and back and along and back. And this is 300 kilometers on a side by the time you get down here a long time has elapsed and you really worry about whether the temperature that you measured back here has changed or whether it's still the same or not. Well with ocean tomography you can get the same results that you would have got if you had had a ship that traveled at something like 50,000 kilometers per hour. And under those circumstances you can be fairly sure there has been no change from here to here. Now going up a little bit larger in size into a different application, if I may have the lights for a minute, I'm going to talk about it radio astronomy now. 
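Before the radio astronomy example, here is a minimal sketch, under simplifying assumptions, of the ocean travel-time inversion just described: straight rays on a small grid, each travel time equal to the sum of slowness (reciprocal velocity) times path length in the cells it crosses, and a plain least-squares solve. Real ocean acoustic tomography uses refracted ray paths and regularized inversions; the grid, ray geometry and units below are invented for illustration.

```python
import numpy as np

n = 20                                        # n x n grid of cells over a unit square
true_slowness = np.ones((n, n))               # background slowness (arbitrary units)
true_slowness[8:12, 8:12] *= 1.05             # a warm/cold patch to be recovered

def ray_row(src, rec, n, samples=500):
    # Approximate line integral: accumulate the path length spent in each cell
    # along the straight ray from src to rec (both points in the unit square).
    row = np.zeros(n * n)
    pts = src + np.linspace(0.0, 1.0, samples)[:, None] * (rec - src)
    ds = np.linalg.norm(rec - src) / samples
    cells = np.clip((pts * n).astype(int), 0, n - 1)
    for i, j in cells:
        row[i * n + j] += ds
    return row

sources   = [np.array([0.0, y]) for y in np.linspace(0.05, 0.95, 15)]
receivers = [np.array([1.0, y]) for y in np.linspace(0.05, 0.95, 15)]
A = np.array([ray_row(s, r, n) for s in sources for r in receivers])
travel_times = A @ true_slowness.ravel()      # one "measured" time per source-receiver pair

# Invert: least-squares estimate of the slowness (hence temperature) field.
estimate, *_ = np.linalg.lstsq(A, travel_times, rcond=None)
estimate = estimate.reshape(n, n)
```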
Suppose you have an antenna with a parabolic cross section that looks like this and you point that, say, at the moon, and that is going to detect microwave radiation, radio frequency waves of some sort. How will it detect it? It'll locate it very well in this plane, the emission that will be detected will be confined to a strip, but it'll be very poor in this direction. And so what you will be doing if you point this at the moon will be integrating, averaging, whatever you want to call it, the radio emission from the moon along a line like this. By moving the antenna around or letting the moon move around by itself you get a number of lines, you have Radon's problem again, and you can go ahead and reconstruct the radio emission from the moon. And here is a map that Ron Bracewell took in about 1956, about the same time that I was starting to work on computer tomography in South Africa. Well this completes my survey of going from the small, a virus, to the large, namely in this case the moon. And now I'd like to look at this more historically and I'd like to point out that of course when mathematicians come across a problem involving lines and planes they immediately like to generalize it, and the next slide shows the generalization to three dimensions. Suppose that this density that we have is distributed now throughout a volume in this sphere and we measure the integral or average value if you like of this density over a thin layer like this, ideally a plane, and we do this for a number of planes. The same question can be asked. Can one construct f from point to point knowing measured values of g? The answer is again yes, and in fact it turns out to be easier to do it for the three dimensional case than for the two dimensional case. The reason I mention this at this point is that this is the first version of Radon's problem to be discovered. It was discovered by the distinguished Dutch physicist H. A. Lorentz prior to 1905. We don't know exactly why he did it or when he did it because he didn't even bother to publish it, and the only reason we know of its existence is that one of his students quoted his results in a paper that he wrote. And this problem recurs. Radon as I mentioned solved it in the general n-dimensional case in 1917. In 1925 George Uhlenbeck, who was famous with Goudsmit for the discovery of electron spin, rediscovered it in Holland and in fact gave the mathematical treatment involving Fourier transforms which is so beloved by particularly crystallographers. And then there have been miscellaneous other applications. An important one was in statistics in Stockholm in 1936 and the other one in 1936 was in fact an actual numerical inversion of this problem. And I will have to, if I may have the next slide please, I'd like to explain this a little bit first. Here we're going to astronomy. You have many stars surrounding the sun. And an astronomer would like to know what is the distribution of velocities of those stars relative to the sun. Now if you pick on a particular star you can look at its spectrum and from the Doppler shift you can detect its velocity along the line of sight, either away from you or towards you. And if the star is close and moving fast you can determine its proper motion, namely its motion at right angles to the line of sight. And if you know those two things you know exactly what the velocity of the star is. Unfortunately there are only a very few stars for which you can measure the proper motions. The proper motions are all too small.
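For the three-dimensional generalization mentioned a moment ago, the data are integrals over planes rather than lines. One standard way of writing the transform and its inversion is given below; the inversion formula is quoted from the textbook literature rather than from the lecture, and it is one reason the odd-dimensional case is comparatively simple, since recovering f at a point only involves the planes passing near that point.

```latex
% Planes are labelled by a unit normal \omega and a signed distance s from the origin.
\[
  g(\omega, s) \;=\; \int_{\mathbb{R}^{3}} f(\vec x)\,
      \delta\bigl(\omega \cdot \vec x - s\bigr)\, d^{3}x ,
  \qquad
  f(\vec x) \;=\; -\,\frac{1}{8\pi^{2}}\,
      \Delta_{\vec x} \int_{S^{2}} g\bigl(\omega,\ \omega \cdot \vec x\bigr)\, d\omega .
\]
```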
So for most stars the problem is how do you determine the actual velocity of the stars when all you can measure is the radial velocities of the stars from their Doppler shifts. And this is a problem posed by Eddington in 1936 and solved by Ambartsumian, and he applied it directly to a sample of 500 stars. These are B type stars. He actually did three populations of stars. I'm just quoting his results here. The problem here is not now a problem in ordinary space. We are thinking of a space of velocities of stars. And if you think it through carefully you will find out that if you look in a particular direction in space and observe a particular blue shift or red shift, and count the stars with those blue shifts and red shifts, you are in fact averaging or integrating the number of stars over a plane in velocity space. And we are back to Radon's problem again. As Ambartsumian informed me, someone pointed out to him in 1937. Someone said, oh you know that problem you solved about the stars? Well, Radon solved that in 1917, and that seems to be the way it's gone for a long long time. Anyhow these are the results here. This now is a velocity space in which this is the origin. This is the velocity in a certain direction relative to galactic coordinates. This is the velocity in another direction. If we look at this number here, this means that with this vector here, the length of the vector represents the speed, the direction represents the direction of the velocity, there are 20 stars with that velocity, and so on. Obviously there is no such thing as a negative star and this represents some noise in the system here. But this is a construction that was done by Ambartsumian in 1936 and this gives the lie to the statement that's often made that you can't do computer tomography without computers. This is computer tomography in a velocity space and it was done with old-fashioned numerical methods by drawing graphs on paper and counting stars and that kind of thing. But I won't of course pretend that it isn't a lot easier when you're dealing with a million pieces of data to do it on a computer rather than doing say only 500 pieces of data by hand. Now the last thing that I want to talk about is a variation of the positron emission problem that I talked about before. The upper part of this diagram is what I showed you before. It shows you these two photons going off in exactly opposite directions because initially these two things were at rest. If when these things started to interact they were both moving down, then they would have a certain amount of momentum. When the annihilation took place the two photons would have to carry that same amount of momentum and therefore they would go off at some small angle to each other. This angle is very small and doesn't bother PET scanning at all. But you can see I think that the distribution of this angle would say something about the distribution of relative velocities of these two particles. Now in solid state physics the distribution of the momenta of the electrons is an extremely important quantity. And if one goes through the calculations again one finds that by setting up the right kind of equipment and measuring the distribution of this angle for various annihilations, then what one is doing is integrating the momentum distribution of the electrons in the solid over a plane in the momentum space of those electrons.
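Both of these last examples are again integrals of a density over planes, only now in a velocity or momentum space rather than in ordinary space. In notation chosen here (not the speaker's), and using the standard small-angle relation for the annihilation photons, they read as follows.

```latex
% Stars: counting stars seen in direction \hat n with radial velocity v_r samples the
% velocity distribution F(\vec v) on a plane in velocity space:
\[
  N(\hat n, v_r) \;\propto\; \int F(\vec v)\,
      \delta\bigl(\vec v \cdot \hat n - v_r\bigr)\, d^{3}v .
\]
% Positron annihilation: a pair at rest gives back-to-back photons of energy
% m_e c^2 = 511 keV; a small pair momentum component p_z tilts them by an angle
% \theta_z \approx p_z / (m_e c), so the measured angular distribution samples the
% electron momentum density \rho(\vec p) on planes in momentum space:
\[
  R(\theta_z) \;\propto\; \int \rho(\vec p)\,
      \delta\bigl(p_z - m_e c\,\theta_z\bigr)\, d^{3}p .
\]
```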
Radon's problem in a three-dimensional momentum space, and we have the solution, and one application of the solution is shown in the next slide. In the momentum space of electrons in a solid there's a very important surface called the Fermi surface, and by using this positron annihilation method here is the reconstruction of this Fermi surface. Here is the same thing, this is for vanadium now, done by a very tedious, it's tedious even with a computer, set of calculations known as band structure calculations. But you can compare the two and see that they agree pretty well. This thing is obtained directly, though, from the momentum distribution of the electrons in the solid. Well, I hope I've been able to show you, just briefly at any rate, that this thing has many applications, I've only mentioned a few of them, and that it has a long history, and the medical applications are just but one of many applications. As I mentioned, one can go on and do this for n-dimensional space; one can't visualize four and five dimensions, but the mathematical problem can be formulated anyhow. Now I would just like to say a little bit about what I've been doing in the last couple of years, and that is the following. I asked myself the following question, and that is, what's so big about straight lines and flat planes and things like that, and the answer is nothing. One can in fact look at a whole, may I have the next slide please, a whole family of curves. This is one set of them. I call them alpha curves because alpha can have any value from zero to infinity; this just shows a few values of alpha. A half represents a parabola, one represents the straight lines that I've been talking about all along, two represents a hyperbola, and so on. And reciprocal to these in a sense are the so-called beta curves, one of which is a circle, for example. And one can ask, can one solve Radon's problem for the integrals over these curves, integrals or averages over these curves? The answer is yes. I've worked out the theory of these pretty completely, and if you think of these curves being rotated around this axis that I've drawn, of course you generate a three-dimensional surface, and just before I came to Lindau I started making some progress on doing that, not just the three-dimensional case but the n-dimensional generalization of this. Now you might ask what is the use of this kind of stuff, and the answer is I really don't know, and furthermore I really don't care. And the thing that disturbs me these days is this: because science is getting so expensive, the government is being forced to provide more and more money to help people do their work, and what disturbs me is that more and more people, like congressmen and department chairmen even in the worst cases, are asking, well, what use is this work that you're doing? And if you restrict science, or what passes for science, to doing only problems that are patently useful, then I'm afraid that science is going to quickly come to an end and we're going to replace it with a rather dull, slowly progressing technology. So here I'd like to cast one vote for doing useless projects like these, which in the end may turn out not to be useless, but we never know. Thank you.
Allan McLeod Cormack attended six Meetings in Lindau, and with the exception of the Meeting in 1990, the meetings were dedicated to Physiology or Medicine. Cormack, a physicist by training, must have felt a bit out of place among experts in medicine and the life sciences, although he received the Nobel Prize in Physiology or Medicine, together with Godfrey Hounsfield, in 1979, for the invention of computed tomography, the groundwork for computer-assisted tomography (or CAT) scanning. “I know very little about medicine and physiology”, he introduces his lecture at the 34th Meeting, and states that his lecture will provide some “relief from heavy physiology”. The lecture provides a historical perspective on CT and other applications, which are based on the solution of the Radon transform by Johann Radon in 1917. Cormack himself conceded that he was unaware of the Radon transform when he began work on the CT in 1956, yet there are many forms of the solution, depending on the application. The basic question of Radon’s problem is, given a number of lines crossing a plane, or a number of projections, is it possible to calculate the function from the line integrals and thus reconstruct an image? Cormack gave a number of examples of the solution, on scales “from a virus to the moon”, in applications ranging from electron microscopy, CAT scanning and positron emission tomography (PET) scanning, and nuclear magnetic resonance (NMR) in medicine, to ocean tomography and radio astronomy. From the perspective of the physics of medical imaging, Cormack notes the interesting fact that CAT scanning, PET scanning and NMR depends on physics of the 1890’s, 1930’s, and 1950’s, respectively, thus the gap between discovery and medical application is closing. “You can’t do computer tomography without computers”; Cormack disproved this statement by describing the numerical methods developed by Lorentz, Uhlenbeck, Eddington and Ambartsumian in the first half of the 20th century. Cormack concluded his lecture with a short description of his latest research, on the Radon transform of curves in a plane, which, he admitted, may be useless. But his sentiments regarding “the usefulness” of science are echoed by many over thirty years later: “What disturbs me is that more and more people (...) are asking, ‘Well, what use is this work that you’re doing?’ And if you restrict science, or what passes for science, to doing only problems that are patently useful, then I’m afraid that science is going to quickly come to an end and we’re going to replace it with a rather dull, slowly progressing technology. So here I’d like to cast one vote for doing useless projects like these, which in the end may turn out not be useless, but we never know.” Hanna Kurlanda-Witek
10.5446/55097 (DOI)
Thank you very much. Ladies and gentlemen, it's a real pleasure to be here. And what I'm going to be talking about today is a little bit of an unusual field. It's one where I did some pioneering work. It's probably one that will end with whatever I end up doing because I have never discovered anybody else who's interested in the same subject. But it's intrigued me and I'll tell you a little bit about it. The subject is basically the manufacture, if you like, of atoms that are made of basically particles that don't exist very long. For example, a pion, which decays normally in two, say, two times 10 to the minus 8 seconds, coupled to a muon that might decay in the order of a microsecond. You might wonder how you can take a pion and a muon and put them together and then do some experiments with that combination. Of course, what we're looking at is exactly a hydrogen atom on a small scale. And the interest, of course, is to discover if there are any interactions between pions and muons, or between, as you'll see, muons and caons, things of that sort, any interactions which are not anticipated by the normal course of events. In fact, these days, just about everything in this world is anticipated by the standard model. And I begin to wonder if there's any more business left for people who do experiments. But I guess in the long run, there'll always be something left for us to do. Okay, now, how do we make such atoms? Okay, it's actually simpler than one might imagine. I started worrying about it some 20 years ago, and then, in fact, came up with some ideas but discovered that I had been beaten by a gentleman, lovely gentleman named Leonid Nemonov in the Soviet Union, who, in fact, had written a paper showing how one can make such atoms. Indeed, he did a lovely experiment in which he made such atoms out of electrons and positrons, and I'll talk about it a little bit later. If you take a K-long, which is a well-known elementary particle produced in great abundance at most accelerators, the K-long decays as one of its major decay modes into a pion, a muon, and a neutrino. And in its own center of mass, those particles come out more or less at random with typical relative momenta of the order of 50 MeV over C. Now whenever things come out together, there's always some probability that two of them may stick together if there is, in fact, an interacting force between them that allows them to stick. And obviously, a pi plus and a mu minus have the Coulomb interaction between them. So if somehow or other they came together close enough, so to speak, they might come out, in fact, stuck together as an atomic bound state. Now it's actually quite straightforward to calculate the characteristics of such a state. The reduced mass is 60.2 MeV over C squared. The internal momentum, which is, of course, alpha times C times its mass, is about a half MeV over C. The ground state binding energy is 1.6 KeV. Later I'll talk a little bit about the level structure and how one might conceivably make a measurement of the Lamb shift in this unusual atom. But it's rather simple to make an estimate of what the rate of production of such atoms would be. Basically, it depends, of course, on the extent to which the pi and the mu overlap, or at the same place at the same time, because the decay itself takes place at a point, the pi, mu, nu, decay. 
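The characteristics quoted here follow from treating the pi-mu system as a hydrogen-like atom with the usual Bohr formulas; a quick check with the standard charged-pion and muon masses (139.57 and 105.66 MeV/c^2) gives

\mu = \frac{m_\pi m_\mu}{m_\pi + m_\mu} \approx 60.2\ \mathrm{MeV}/c^2, \qquad
p \sim \alpha\,\mu c \approx \frac{60.2}{137}\ \mathrm{MeV}/c \approx 0.44\ \mathrm{MeV}/c, \qquad
E_1 = \tfrac{1}{2}\,\alpha^2 \mu c^2 \approx 1.6\ \mathrm{keV},

in agreement with the numbers in the talk.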
And the simplest way of estimating it and gives you an awfully good answer is to say, let's compare the volume of phase space contained by a half MeV over C cubed with the volume of phase space contained by the typical 100 MeV over C cubed, if you like, that are available for all the possible configurations. And that comes out to be of the order of 10 to the minus 7, which is, in fact, more or less what the rate is. It's actually, as you'll see, about 4 point something times 10 to the minus 7. The actual calculation, again, as I said, is not difficult to do. And it turns out that the rate at which these atoms come out relative to the production rate of a pi and a mu and a neutrino is 4.31 plus or minus 0.08, 10 to the minus 7. The theoretical uncertainty is largely because there is some uncertainty in understanding the relative spectrum of the pi and the mu as they come out in the normal decay, the so-called form factors related to the decay itself. And to the extent there's a slight uncertainty there, to that extent this uncertainty will be there. Now how does one actually see such atoms? Of course, you imagine there are caons coming along and those caons break up and somewhere, right? There's now an atom going this way, neutrino going some other way. How does one observe such an atom? Well, it's very simple, actually. You just break it up with a very thin foil because the binding is so small compared to the typical sorts of energies that are involved in collisions of elementary particles. You put a little foil in, it turns out, about 10,000ths of an inch or so would be quite adequate to break these things up. As they make their way through the foil, they suffer a sufficient number of collisions so that the pi and the mu are basically ionized. And from that point on, they proceed as two independent particles, but of course along the same line more or less as the atom was traveling along. Actually, it might be easier if I get one of these lasers here as well go over to high technology. Okay, well, here it breaks up. I have a pi and a mu coming out forward of this foil. And of course, they have the same velocity, which means that their momenta are in the ratio of their masses. And so now if I just put a magnet over here and measure the momentum of the pi and the mu, I will find, in fact, or should find in principle a peak corresponding to this situation in which the ratio of the momenta is equal to the ratio of the masses. Well we did this experiment back in the mid-60s at Brookhaven and we saw about 44 events, which turned out to be about a factor of three less than were anticipated on the basis of the theoretical calculations. But it was a terribly difficult and a very poor experiment in the sense that it had a very poor calibration. It was built in such a way that it was virtually impossible to calibrate adequately. And so none of us believed that that factor of three was real. But if there is a factor of three or if there is any factor of discrepancy, it can really be due to two alternative possibilities. One is, of course, that you make fewer atoms than anticipated for one mechanism, for one reason or another. And the second is the possibility that the atom disappears more rapidly than anticipated. The disappearance in principle should be just at the rate more or less that the pion decays. In fact, the disappearance should be predominantly a disappearance in which one gets an atom turning into a mu and a second mu and a neutrino just from the decay of the pion. 
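As a back-of-the-envelope restatement of the two kinematic points in this passage (the rate estimate and the breakup signature):

\left(\frac{p_{\mathrm{atom}}}{p_{\mathrm{typical}}}\right)^{3} \approx \left(\frac{0.5\ \mathrm{MeV}/c}{100\ \mathrm{MeV}/c}\right)^{3} \approx 10^{-7},
\qquad
\frac{p_\pi}{p_\mu} = \frac{\gamma m_\pi \beta c}{\gamma m_\mu \beta c} = \frac{m_\pi}{m_\mu} \approx 1.32,

the second relation holding because the pion and the muon leave the foil with the common velocity of the atom.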
On the other hand, it would be interesting to see if there is any alternative to that particular type of reaction. We then began an experiment at Fermilab, one which I'll describe to you in some detail because it has a number of elements that are rather nice and it's a very pretty experiment as experiments in high-energy physics go. It only had 10 people involved in it, not 1,000 or so. So it was one that was actually fun to do. That's not to say that there's anything wrong with 1,000 people on an experiment. You just have to have it if you're building a large piece of equipment, but it's not my style. Okay, anyway, this experiment was carried out from 1978 to 1980. It was designed from the very beginning to allow for very complete calibration against the standard decay mode, K long, into a pion, a mu and a neutrino. May I have the first slide? Okay, let me turn this. I hope you can see all of this. But basically, you start by, of course, producing a beam of K long. Now those of you who can read this will see that we're out here at 470 meters. That's 470 meters from the place where the K longs were first produced. So way, way, way down over there, we have a target. Actually it's about 30 centimeters of beryllium. And from that target come all sorts of particles, but it's very easy to filter down to the point where you have only neutral particles, and those neutral particles are predominantly neutrons, gamma rays, and K longs. The gamma rays are relatively easy to filter out, and they also don't do very much to hurt you, neither do neutrons for that matter. The K longs are the particles that we're interested in, and they come down a long, long, long vacuum pipe. The vacuum pipe itself was close to 500 meters, and it began at the 250 meter point and ended way out in the yard so as to avoid any particles scattering against the air and causing background. Now as the particles make their way down, the K longs, that is, some of them, of course, decay, and those that decay may have, we'll have some probability, of sending their decay products up, and that's essentially how our detection takes place. You see a view from above here, and this is a view from the side. Let me just run through the various elements in this and then show you in somewhat more detail exactly how the detection takes place. The first point that you must realize is that the biggest background are just the normal decays of K long in which the pi and the mu are coming close together, but not in a Coulomb bound state. For every one, obviously, that is in a Coulomb bound state. If I double the relative momentum and have it just two particles going parallel to one another, then I will have almost an order of magnitude more in the way of that sort of background than I have of atoms. So there are huge numbers of pi-mu pairings coming this way, and I must do something to keep those from being counted because they have essentially the same velocity, not the same momentum, but the same velocity, and because they're coming together, one would normally expect that these can confuse things. So we begin with a vertical magnet. This is about several meters of magnetic field here, and the magnetic field at this point is horizontal. In other words, normal magnets have their magnetic field at least at Fermilab vertical. This one was turned on its side, and so it would take a pair of particles entering this way if they had opposite charge and bend them apart. So they no longer appeared to be coming parallel to one another from this foil. 
So those particles were given then a slight kick, and they would make their way in here, and if they came in essentially on top of one another, they would come out above one another in this detector array. On the other hand, a particle which was a true atom would break up in this foil right here, leave the foil parallel, the two particles parallel to one another, enter a magnet whose magnetic field was vertical, and so they were separated at that point, and then enter another magnetic field which exactly castled out the momentum kick that was given by this so that the two particles leaving here were in fact parallel to one another, but some distance apart. Again to go up to this view, coming from this foil, there would be a pair of particles that would be separated at this point here, separated apart, and then made parallel, and then passed through this array. Finally there were a large number of multi-wire proportional chambers here to track the particles and an array of counters to allow you to trigger, and the triggering was very specifically organized so as to pick out only tracks that were more or less parallel with respect to one another as seen from above. Next slide. Okay, this gives you a picture of the various orbits through that magnetic system. A pi mu atom that's coming along here splits at this point. Looks as though the two tracks are originating at a point on the foil. They get separated, they get brought together. The muon of course is passed through a muon filter which is some meters, I guess it was about five meters of steel. Anything that passed through that was bound to be pretty much a muon. Just in front of the apparatus, just in front of the steel was a shower counter detector that was able to decide whether we were looking in fact at an E plus E minus pair. Why so? Because it's clear another major background, and one which originates in a foil, is an E plus E minus pair that would be produced from a gamma ray which in turn is produced through the K of a K long. So we need to be sure when we detect a pair of tracks that the pair of tracks doesn't include two electrons, or in fact includes no electrons at all. On the other hand, here is a situation as seen from the side. Two tracks that are almost together, they get bent apart by this horizontal magnetic field and then I see a muon and a pi and those of course even though they count, will show up as two tracks in the side view of the apparatus instead of like one track in the side view. Next. Okay, one of the, as I said, one of the major backgrounds and one that we needed very much to worry about were the electron pairs that came from that foil, from gamma rays which converted in the foil. The simplest way to separate the pions or muons for that matter from electrons was basically to make a graph of the fraction of energy observed in the shower counter, right, as a function of the number of events. The number of events is a function of the fraction of energy in the shower counter. Now it's clear a pion until it begins making pi zeroes and showering gives you predominantly a very small release corresponding just to the ionization loss as it makes its way through the counter. As electrons, of course, shower and they come out up here and you have essentially all of the energy visible in the shower counter. And so by making a separation, essentially at this point, one can eliminate essentially all of the electrons but still keep essentially all of the pions. Next. 
Okay, this gives you, well, let me just preface the next series of slides by a very simple statement. The key to this experiment is not just the running of the accelerator but as an extensive amount of Monte Carlo calculation in order to understand exactly what's going on in the various pieces of geometry. I think most people in high energy physics today realize that the Monte Carlo calculation generally will occupy at least as much time as the running time on the accelerator. And that's because the geometries are complex. The calculations to understand what you're doing depend very much on understanding those geometries and understanding the rates that one should receive in each of the various detectors. So we've done a series, we did a series of complex Monte Carlo calculations and compare them with the predictions for both the pi mu atoms and for the pi mu pairs where they started out really as individual tracks. Okay, this one, for example, is the Monte Carlo against the transverse, the atom's transverse momentum and it compares it with what we, I haven't shown you the atoms yet but this is a typical result. Next, okay, this is the momentum of the atoms that were finally detected. By the way, the interesting thing is their momentum is about 50, sorry, GeV over C. And that's very relativistic. So think in terms of this little atomic bound state moving along with velocity that's awfully close to the velocity of light and with enormous gamma, in the order of, I guess, must be 100 or so. Anyway, this is a map, a Monte Carlo versus actual on that next. Same on predictions, longitudinal positions of the events where they appear to come from. You can of course reconstruct where an event came from because you know its momentum, you know its mass, you know where it's going and you know exactly where it has to in fact have intersected a counterbalancing neutrino going the other way, going down. Next, okay, the transverse momentum of the pion and the muon in the K mu3 events, those were constantly monitored as a standard against which all of these measurements were to be made. Next, reconstructed K lab momentum for the K mu3. Sorry, the momentum of the KL, the K long. Again, this is the spectrum of the K longs that we were looking at as they were reconstructed firstly from the decays and secondly from the Monte Carlo. Next, and of course distance from the target for the reconstructed K mu3 decays. Next, okay, now we get to the real data. Those previous slides, their purpose was to show us that everything in the beam was well understood because unless we were at that point we really couldn't make a prediction of what to see in the case of the atoms. We next made a plot and this is the key plot in the entire experiment. We made a plot of a parameter that we call alpha which is the difference between the pion and the muon momenta divided by the sum. And of course as I told you before, you must have the same velocity so the ratio of the momenta must be in the ratio of the masses. It turns out then at point 1, 4 you should see a peak corresponding to the atoms and indeed there was a peak of some 300 or so events that went by all that passed all of the criteria and indeed a very small background, residual background of things that had randomized values of alpha all the way across. So this was in fact a demonstration that we were indeed looking at these atoms. 
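The location of the atomic peak in the alpha distribution is fixed by the equal-velocity condition alone; with the standard masses,

\alpha \equiv \frac{p_\pi - p_\mu}{p_\pi + p_\mu} = \frac{m_\pi - m_\mu}{m_\pi + m_\mu} \approx \frac{139.57 - 105.66}{139.57 + 105.66} \approx 0.14,

which is the value at which the peak of roughly 300 events appears.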
Next, okay, one interesting thing I talked about earlier was what if these atoms decay away more rapidly than one might expect on the basis of the pion decay alone? Well, unfortunately we're not terribly sensitive to that. This is an illustration of what the proper time is in the atom's frame of reference. Okay, and it turns out that because these atoms are so fast, so to speak, we really only look at them over less than one tenth of their lifetime; in fact 0.08 of a lifetime is the typical time. And that means that a fairly substantial fluctuation in lifetime will make very little difference in our measurement rate. So it turns out that's one thing that we are not at all sensitive to. Next, this is the last of the slides and we'll get back to the view graph. While we're taking data of course on these atoms, we also take data on the gamma rays that come out of the beam and convert in the foil, and that's useful for a measurement, for example, of the rate of K long into two gammas. And this is in fact the transverse momentum spectrum of these gamma rays as we measure them, and indeed you can see an extremely good fit to the Monte Carlo, including also this particular set of data, which is what we expect; that's the Monte Carlo data for the K long into two gammas. As you can see, Monte Carlo data isn't just one beautiful smooth line; it takes time to make a Monte Carlo calculation and you're limited by the time it takes on the computer system. Okay, let me get the lights back and we'll go back to this again. Okay, what's the result? By the way, the one thing that we were very careful of was to be sure that the foil thickness wasn't a big issue in trying to determine the rate of production of these atoms. And so we took half of our data with a twenty-thousandths of an inch foil and the other half with thirty-five thousandths. The calculations that we did told us that about ten thousandths of an inch is quite adequate to fully ionize these atoms, and indeed we saw no real difference between the two batches of data. The overall measurement then, the ratio between the production of atoms and the K going into pi mu nu, was measured to be 3.90 plus or minus 0.39 times 10 to the minus 7. And the agreement with theory was adequate, about one standard deviation away from the expected or the anticipated value. But as I said earlier, it's not sensitive at all to the lifetime of the atom. Now where do we go from here? Okay, if this were the end I think probably it wouldn't be worth talking about, but in fact there are a number of very intriguing things to do. First of all, the possibility of exploring energy levels. It sounds like a formidable job in the course of this very short time that you have to look at an atom to make any measurements at all. And it is formidable, so I'm not going to say it'll be done within the very near term. But there are some interesting things to do. Firstly, unlike the hydrogen atom, the Lamb shift has the opposite sign because it's predominantly, in fact almost entirely, vacuum polarization. And the 2P three-halves and one-half states are above the 2S one-half state. The energy difference between the 2S one-half and the 2P one-half is 0.07 electron volts, and then higher than the 2P one-half by 0.053 electron volts is the 2P three-halves. Now one intriguing way of making a measurement, although it's a very poor way in the end statistically, but it's cute, is to observe that these atoms, being highly relativistic, as they pass through magnetic fields of course see a very high electric field.
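The quoted insensitivity to the lifetime can be made plausible with rough numbers; the following estimate is not from the talk, only a consistency check using the atom mass m_pi + m_mu of about 245 MeV/c^2 and c tau_pi of about 7.8 m:

\gamma \approx \frac{pc}{Mc^{2}} \approx \frac{50\ \mathrm{GeV}}{0.245\ \mathrm{GeV}} \approx 2\times10^{2}, \qquad
\gamma \beta c \tau_\pi \approx 200 \times 7.8\ \mathrm{m} \approx 1.6\ \mathrm{km},

so a decay region of order a hundred metres samples only of order a tenth of a proper lifetime, consistent with the quoted average of 0.08.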
So for example, the 2S one half state is I enter a magnetic field and I enter it adiabatically so I don't have any changes in the states, just changing the characterization of the state will develop a 2P one half and a 2P three half component. Now at the very beginning when I make the atom, I make eight times as many in the 1S one half state as I make in the 2S one half state. And that's because the wave function squared at the origin in this case is a factor of eight higher than it is for the 2S state. So I begin let's say with one in nine atoms up in this state and eight out of nine in here. Now if I enter a magnetic field of course and I begin mixing in these states then I can have transitions down to the ground state. And so it turns out after passing through of the order of a meter of say 20 kG worth of magnetic field and if I have a gamma of the order of 10 for these atoms then indeed this upper state will essentially depopulate down to the lower state. So in principle I could measure the rate of depopulation of the upper state and the way I would measure it is observed that the 2S one half state ionizes with a much thinner foil than the 1S one half state. So I now can have differential ionization depending of course on the relative population of these two. It's not exactly what I would call an easy experiment. There are other variations upon this which we've played with but I thought it would be sort of cute to describe some of the possible things. Finally in the last minute or so that I've got left here one of the things that we probably will do relatively near term is to measure the lifetime of the atom. When the atom decays you expect it to go into one muon which is the original muon and just keeps moving along and then the other muon which is the result of the pion decay. Of course the relative momentum then is 30 MeV over C because that's the momentum of the muon in the center mass of the original pion. And that's a relatively straightforward measurement and not terribly hard to do but like all things take some time. The other thing that's intriguing to me and become increasingly intriguing is the possibility of making other combinations of elementary particles. Now a lot of this becomes possible because of the construction in the near future of heavy ion colliding beam machines. This is not exactly what they built the heavy ion colliding beam machines to do but it turns out it's sort of a fun thing and as long as I can find some place where nobody will get in the way it will be worthwhile doing. If I take a collision between two heavy ions and typically at the new RIC machine you can get essentially head on collisions of the order of 500 or so per second if you like. In the typical one of these there are 2,000 tracks emerging from the collision region. The collision region is really quite small obviously very small compared to the size of the atomic system and so with 2,000 particles coming out some of them are bound to stick together. So now in fact you can look at all the possible combinations that might stick. Most of them just don't last very long. For example a pi plus and a pi minus will essentially disappear pretty much immediately into a pair of pi zeros if you like. K plus and K minus will disappear very quickly but a K plus and a mu minus will in fact in principle then last a fairly long time. In order that a mu attach itself to a K they have to be made more or less of the same place. 
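Two of the numbers in this passage follow from standard formulas: the factor of eight is the ratio of hydrogen-like wave functions at the origin, and the electric field seen by the moving atom is the motional field of the Lorentz-transformed laboratory magnetic field, evaluated here with the gamma of about 10 and roughly 20 kG quoted in the talk:

\frac{|\psi_{1S}(0)|^{2}}{|\psi_{2S}(0)|^{2}} = \frac{1/(\pi a^{3})}{1/(8\pi a^{3})} = 8, \qquad
E' \approx \gamma \beta c B \approx 10 \times (3\times10^{8}\ \mathrm{m/s}) \times 2\ \mathrm{T} \approx 6\times10^{9}\ \mathrm{V/m}.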
So these cannot be muons that come from the decay of particles that travel some distance. They have to be very primary muons that originate right within the quark-gluon plasma, if such exists. And so that's a rather interesting thing in itself. If you look at a muon tied to one of the other particles you can be very sure that it originated essentially right in the primary collision. So this is one of the directions in which I thought it might be fun to go. That pretty much covers it. It's an unusual field, but I hope someday somebody else gets an interest in it. I should just mention that Nemonov did do one lovely experiment. He looked at E plus E minus pairs tied together, basically positronium atoms if you like, produced when the pi zero decays. The pi zero of course decays occasionally into a gamma ray and an E plus and E minus, and every so often, namely one in ten to the tenth, the E plus and E minus will stick together. Now that's a fascinating system because it is so fast, so to speak, has such a high gamma, and it's such a large structure that the detection doesn't require any foil at all. All it requires is a few hundred gauss of magnetic field, and that ionizes things. So you have your atoms coming along, put them into a magnetic field, and then they break up. Anyway, that's one of the areas in which this has been pursued. Thank you very much.
Afternoon coffee at the physics department of New York City’s Columbia University once resembled a think tank where excellent theoreticians and experimentalists developed brilliant new ideas. One day in 1959, for example, the hypothesis that two different kinds of neutrinos existed was discussed. Tsung-Dao Lee (NP in Physics 1957) asked: “All we know about the weak interaction is based on observations of particle decay, and therefore very limited in energy. Could there be another way towards progress?”[1] Could it perhaps be possible to produce high-energy neutrino beams? Yes, said his colleague Melvin Schwartz, while others remained skeptical, and worked out an experimental concept and set-up. A few months later, in February 1960, he submitted a short paper to Physical Review Letters suggesting how such beams could be produced at one of the new particle accelerators.[2] If protons from the accelerator hit a target, they produce pions, which shortly afterwards decay into muons and a beam of neutrinos or antineutrinos of relatively high energy. In an extended form, the practical application of this idea at the Alternating Gradient Synchrotron in Brookhaven by Schwartz, his thesis advisor Jack Steinberger and his colleague Leon Lederman led to the discovery of the muon neutrino and hence to the proof that a second family of elementary particles exists.[3] Lederman, Schwartz and Steinberger were awarded with the Nobel Prize in Physics 1988 „for the neutrino beam method and the demonstration of the doublet structure of leptons through the discovery of the muon neutrino“. It was the first Nobel Prize dedicated to neutrino research. In this lecture, Melvin Schwartz talks about „the manufacture of atoms made of particles that don’t exist very long“, namely by coupling charged mesons like pions and muons. He is motivated by the interest to find out whether there are any interactions that are not anticipated by the normal course of events. “These days, just about everything in this world is anticipated by the standard model and I begin to wonder if there is any more business left for people who do experiments, but I guess in the long run there’ll always be something left for us to do”. The major source for his experiments are neutral K-long kaons that are abundantly generated in accelerators and, amongst others, decay into a pion, a muon and a neutrino. Because charged pions have a coulomb interaction between them, they may be “stuck together as an atomic bound state” if they come out of the decay close enough to each other. Schwartz discusses the conditions for producing and detecting relativistic hydrogen-like atoms in such a way in much detail. Some twenty years before, he had succeeded in observing 155 such events at Brookhaven and Fermilab. Melvin Schwartz was a scientist and a businessman. In 1970, parallel to his tenure at Columbia and Stanford, he founded the Silicon Valley start-up Digital Pathways Inc. together with a colleague. Between 1983 and 1991 he devoted himself full-time to his company as its CEO, before selling it and returning to academia. Perhaps he would have staid in business, if he had not declined the request of two of his Stanford students. In the winter of 1975/1976 they had shown him a prototype motherboard for a personal computer and asked him for an investment from Digital Pathways in the company they were planning to found. Yet Schwartz replied that “personal computers won’t go too far and that Apple is a bad name“.[4] Joachim Pietzsch [1] Samios NP and Yamin P. 
Melvin Schwartz. A biographical memoir. National Academy of Sciences 2012, p. 3ff. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/schwartz-melvin.pdf [2] Schwartz M. Feasibility of using high-energy neutrinos to study the weak interactions. Phys. Rev. Lett. 4:306-307. [3] for more details cf. http://www.mediatheque.lindau-nobel.org/research-profile/laureate-schwartz#page=all [4] Samios and Yamin. l.c., p. 13 f.
10.5446/55098 (DOI)
In the late fifties, shortly before his death, Albert Einstein issued a very dramatic appeal to scientists in particular and to mankind at large. He said, forget about everything but your humanity. Fortunately the danger which was impending at the time, and which he was afraid of, did not materialize. Fortunately the two great powers did not come to war, nor was a nuclear weapon used. So 35 years or more later we now believe that this danger is not with us as much as at that time, although the danger is with us still, because many large and small nations have possession of nuclear weapons. But other dangers are with us today which are equally frightening for mankind and all other species. And I decided to speak about this problem to young people today and to Nobel laureates. Of course I must confess I was debating, in accepting this very nice invitation, whether I should have spoken about my work, which still fascinates me and, I must say, is very gratifying; and in fact I believe you would certainly have liked better to listen to this new development on nerve growth factor, the broadening of its scenario. I'm going to speak about it in another session. But it still remains that most of my interest is in science, and particularly in this new scenario which opens for nerve growth factor. We are also going to speak about this. It is far more than a growth factor: it is a factor modulating all the homeostatic systems, the nervous, endocrine and immune systems. But rather than speak on this, I will speak with you about problems, and particularly I was interested in social problems, and that is what this title says: the Magna Carta of Duties. How and why do we speak about a Magna Carta of Duties? In 1991, two years ago, I was invited by the University of Trieste, which conferred on me an honorary degree. And on that particular occasion, I decided to speak about this particular point which is very, very important, that is, the declaration of a magna carta of duties, not of rights. But I will later on say something about this. I was interested in this, or at least fascinated, I must say, by an article, an essay, by a most learned and extraordinarily productive scientist. Many of you may know about this: Professor Roger Sperry, Nobel laureate in medicine in 1981. Unfortunately, he could not come to the meeting we had in the first year due to his condition, his health condition, but he followed very closely what we were doing. Before speaking about what I am going to speak about, I wish to read to you some parts, some paragraphs, of his paper, which interested and fascinated me and became the point which we discussed at the first meeting. It was held in Trieste, at the University of Trieste, December 4 and 5. Professor Gaido Seguazar made a most productive participation in this meeting. In this meeting, as I say, we tried to find, it was a round table meeting where only 20 scientists all together were discussing this problem of the Magna Carta of Duties. This centered on the problem which was first dealt with by Roger Sperry in an article which, unfortunately, did not get too much attention from the scientific community. This paper came out 21 years ago, in a journal which is not too much read, although it is a wonderful, prestigious journal, Perspectives in Biology and Medicine. The title of the article, which I must say is more an essay than an article, is Science and the Problem of Values.
Now I will read to you some of these paragraphs which impressed me greatly and were the object of the conversation which we held in Trieste, just speaking, thinking about our duties, that is, how to face the future, how to make the promulgation not of a magna carta of rights, but rather of a magna carta of duties. I will remind you that many magna cartas have been issued, from the one of the 12th century by King John, to the manifestos, the declarations of rights, of the French Revolution and the American Revolution. The last one was the beautiful and well-written magna carta of rights, the Universal Declaration of Human Rights, December 1948. This will be the first magna carta of duties, rather than of rights. I do believe that today people of affluent countries have so many duties, no more only rights but duties, but this will come later. I wish to read to you some of the parts and paragraphs from the article, which I found exceedingly well written, by Roger Sperry: Science and the Problem of Values. By evolutionary time standards, the fate of life on our planet has suddenly and quite abruptly come to rest on an entirely new form of security and control based on the machinery of the human brain. The older, non-cognitive controls of nature that have regulated events in our biosphere for hundreds of millions of years, the forces of nature that lifted life from the inorganic to the human level and created man, are no longer in command. Modern man has intervened and now superimposes on nature his own cognitive brand of global domination. The outstanding feature of our times is the occurrence of this radical shift in biosphere control, away from the vast interwoven matrix of pluralistic, time-tested checks and balances of nature to the much more arbitrary, monistic and relatively untested mental capacities and impulses of the human brain. Along with its weaknesses, our newly imposed human system of global regulation also contains tremendous new powers, including the potential to effect changes within a decade that formerly required thousands or millions of years. Almost the entire fabric of the Earth, from the atomic level upward, is rapidly becoming subjected to disassembly and resynthesis along new patterns of human design. Under all this human-directed supervision, the potential for utopian advancement throughout the globe seems endless. It is important that this utopian potentiality be recognized and remembered as we turn now to consider the other side of the coin. Despite the beneficial features of human domination, it becomes increasingly apparent that our biosphere is set today on a disaster course. It is a direct consequence of human intervention. The entire grand design of life, painstakingly evolved over millennia, is suddenly subject to instant destruction, depending only on some passing twists in human affairs. If nuclear extermination is avoided, other inbuilt, self-destructive features are evident that threaten to bring all civilization to a halt if things continue as they are going. And then he continues to say what we can do, and he makes this observation: If we could summon an extraterrestrial troubleshooter to examine our Earth predicament with an outer-space perspective free of human bias, I believe he would very quickly put his finger on the human value factor in our biosphere control as a primary underlying cause of most of our difficulties. And this is a point which he makes and enlarges on.
One can agree with those who claim that excess population is a principal potentiating factor behind the large majority of today's problems; yet beyond the population surplus one sees always, as a determining factor, the human values with which it is necessary to cope, and thus to attain an effective control over human procreation. When, for instance, the society for zero population growth squares off against the church on issues of abortion, birth control, optimal population and related questions, by what ultimate standard do we decide who is in the right? Similarly, when we see other opposing factions come to fundamental philosophical disagreement on issues like justifiable military killing, human exploitation of other species, eugenics, plunder of natural resources, noble savagery versus the urban rat race, redwoods versus freeways, and all the multitude of other value questions that now confront us, by what ultimate standard do we attempt to distinguish right and wrong? Our tolerant, educated western society in particular seems more and more to be lacking in conviction with regard to any kind of ultimate standard. So we have to see what we can do about this problem, because we have no way to decide, we have no priority system, we don't know what to tackle first, who is right and what is wrong. And so he proposes what I feel is very, very intuitive and very intelligent: a possibility to make a scale of values, to make a sort of priority of values for which we can fight and must fight. And I will read only some of this part, because later on I'm going to say what we did in the past and what we are planning to do next. So, it is a scale of values that should now be founded on science. Consider for example a tentative starting baseline, as he says, something like the following, and this is what he proposes as the first axiomatic point which everyone would agree to. The grand design of nature, perceived broadly in four dimensions, including the forces that move the universe and created man, with special focus on evolution in our own biosphere, is something intrinsically good that it is right to preserve and enhance and wrong to destroy or degrade. From such an axiom, in an axiomatic way, like a spin-off kind of ethical axiom, and all scientists in particular use an axiomatic way of reasoning, from such an axiom, defined in terms that are scientifically sound, extensive and coherent value-belief systems can be constructed by logical deduction. Other axioms may be added as long as they are consistent. Once accepted as the starting axioms, the one I just read and its logical implications come to serve as ultimate standards of reference for value judgment at all levels. As with any new set of laws or bill of rights, there will be considerable room for difference. The kind of value system that logically emerges from any such foundation will contain much in common with alternative systems based on other world beliefs, intuition, or other doctrines. It may be noted that the grand design of the sample axiom, the one which we just said, includes by definition the trends of evolution, scientifically based. The upward thrust of evolution as part of the design becomes something to preserve and revere.
This would imply a commitment to progress and improvement, not in the municipal chamber of commerce sense, but in terms of furthering the advancement of the evolutionary trend toward greater complexity, diversity, and improvement in the quality and dimension of life and the life experience. A sense of purpose and meaning is thus provided for the life of the individual and for society as a whole, a critical feature of which involves furthering the evolution of human understanding of the natural order. It is important to emphasize, he says, that the starting postulate of the sort illustrated, based on science, is not an irreverent one: reverence for the cosmic forces that control the universe and created man is retained in full; only the definition and conception are modified to conform with the modern evidence. Instead of relating to a single omnipotent personal control force, man would relate to a vast complex of forces hierarchically interlocked from the subatomic to the cellular, organismic, social, and even galactic level in a great pluralistic system of control, all differentiated from and united in a common foundation. Much of the great humanistic teaching of the past would be little changed in its basic impact by such an interpretation and shift. The grand design of nature, as seen through the expanding eyes of modern science, would appear already in its present form to contain as much to sustain the highest human religious and spiritual experience as do some of the comparatively simple metaphysical schemes that have had wide acceptance. A scientific approach would not lead to a rigid, closed scheme, but rather to one that would continue to unfold and enlarge indefinitely as science and understanding advance. The practical consequences for action effected by a value shift of this kind can be seen to stretch out endlessly. Prevention of environmental pollution, of the impoverishment of the ecosystem, for example, becomes more than a mere expedient for human benefit. The ultimate meaning and purpose of all life are at stake, and a corresponding conviction, conscience, and dedication come to reinforce the effort. Comparable changes are realized in respect to species' rights, optimization of human population, nuclear escalation, and the like. Present trends to the contrary, humanity needs to see itself in terms of something greater and more important than itself to give meaning and purpose to human existence. A social system common to all of humanity in general is not enough; with prior forms of metaphysical belief now widely rejected, something like the grand design of the sample axiom, the one we said, is needed. It may be seen that science on these terms acquires a social role, above that of providing better things for better living, or of predictor or controller of natural phenomena, or even of advancing knowledge. Science becomes a source and arbiter of values and belief systems at the highest level, and the most direct avenue to an intimate understanding and rapport with the forces that move the universe and created man. This is about the most important part. What he said is splendid as it is. Unfortunately, it was not followed and had no practical effect; the scientific community did not take it up, and Sperry, a shy person as much as an intelligent one, was unable really to ever make this splendid article well known to the community.
This was the reason why I decided to ask the University of Trieste, particularly the Chancellor of the University of Trieste, to hold a first round table, where, just as I mentioned before, Professor Gallardo attended and 20 other people attended. About this meeting, I will simply say what we were driving at and what is the next step, the future which we hope for. I must first of all say that really we have no claim to originality; as you will see, and as I will tell in a moment, many other initiatives like our own took place recently, from the Stockholm Declaration of 1972, to the conference of Nairobi, 1975, to the conference in Nairobi, 1982, and finally other conferences, particularly the Heidelberg conference of April 1992. All these conferences had the same practical object, and at least the same desire as we have now, to find a way to bring things to completion. Unfortunately, all these different initiatives which took place are splendid, all that was said is very, very good, but no one really took a step forward. The only difference which we hope to make is to bring things to completion, to start working and not only to make a declaration, and this is the only thing which we can claim, not originality, but the hope that at the next meeting, which will take place in Trieste in November, 25, 26, 27, I hope some Nobel laureates will attend, and perhaps young people also would come and accept this invitation by the Chancellor to take part in it, and to try to find out how to promote initiatives, how to come to work and not only to make a really beautiful declaration, leaving everything else out, not proceeding beyond making a declaration. So I will say something about this Magna Carta of Duties as it has been phrased in this first conference, with this message. This was the message which we proposed, and later on we made a promulgation. It was this effort, this first effort, to make the Magna Carta of Duties. This message will stress the concept of human duties toward mankind in addition to human rights. Rights and duties are two sides of the same coin. However, by stressing the idea of duties we emphasize an active role in upholding this fundamental concept rather than simply passive recognition of its legitimacy. Two revolutions have occurred in modern times in regard to the terms of our tenancy of the globe. The first is that as a result of the power of modern science we risk making our planet uninhabitable through the burgeoning of our population, modern war, ecological carelessness and social neglect. The second is that in the wake of the horrors committed in this century it has become established that human beings have a responsibility which cannot be abdicated to a group of any designation, ethnicity, tribe or profession. Accordingly we call upon all members of our profession to accept a level of responsibility for public policy commensurate with the contemporary power of science. Scientists, we believe, are henceforth obligated to pay a tithe of their knowledge, by sacrificing a portion of their careers, in order to make an informed contribution to the public debate on vital issues of our times, on which the survival of mankind depends. Crucial problems concerning mankind at the dawn of the 21st century urge the adoption of a different way of thinking and of a different value system. The change must be as revolutionary as that which emerged after the Middle Ages.
The new way of thinking must be centered on humans as an integral part of the planet, not on humans as isolated individuals. We must present these ideas in universities and schools, as we are doing in Italy, to seek the active participation of the younger generation, which is our hope for maintaining the quality of life on Earth. In this message, which I am now going to read, we stress the concept already mentioned. The declaration of human duties comes down to a decalogue of proposals. But first, before reading the proposals, the decalogue which was the result of this conference, I would like again to stress that we had at the time three different problems which we considered most urgent and which were the basis of our own decision. First, we started with these premises: the protection of the biosphere from further degradation by pollution and the abuse of natural resources. Second, immediate aid from affluent and technologically developed countries to those oppressed by hunger, misery and disease, who constitute the major part of the world population. The third stipulation, I must say, is the only new point, because everything else has been well said in other initiatives like our own, as I will say in a moment. This part of the message is something rather new, that is, it is just for young people: the stipulation of a new moral contract between the older and the younger generations, based on the principle of total equality and not, as is at present the case, upon a paternalistic or hierarchical system, and on a worldwide resolution to uphold this contract in view of the mentioned obligations. The Magna Carta of Duties in no way is intended as a substitute, as I mentioned, for the Magna Carta of Human Rights, but aims at confronting with the greatest urgency the dangers which threaten the globe, the biosphere and all living species. The preparation of the Magna Carta of Duties is a most arduous task. The university has responded to this call for action and offered to collaborate with members of the international scientific and humanistic community in the realization of this document. The Magna Carta of Duties has then been brought to the attention of cultural, religious and political leaders concerned about the destiny of ours and all living species, now at the threshold between survival and destruction. As I say, a group of about 20 people drafted what is now the promulgation of this Charter of Duties, because at the next meeting, which as I say will be November 25, 26, we will consider these points in detail. And now I'm going to read all these points and what has been said, as I say, by other parts of the scientific community. I must say that science is best suited, best adapted, to deal with this problem, perhaps more than any other section of mankind, first because of the rational approach, because the language is universal and everyone understands each other in scientific language, and also because of the rigorous way of tackling problems. The tremendous success of science is due just to this rational way of facing problems, and just as Sperry proposed to us, a rational way of considering axioms and going from one to another. I don't know how much we will be successful. I must say that I'm here in the hope that somebody will join us in Trieste and that we can proceed from making a declaration to starting to work on it. So far, as I say, this has not been done.
Unfortunately, from 1972, when there was the first declaration, by Stockholm, to the following ones, many beautiful documents have been written but nothing, as already mentioned, has been done. So I will now, little by little, consider the ten points, and it will not take too much of your time. So I will now introduce the message we have conceived and also how the ten points have already been treated, perhaps better than we did, by the Union of Concerned Scientists and by the Heidelberg Appeal which preceded the meeting in Rio de Janeiro. So there is nothing new in what we are saying, nor do we believe that scientists have the monopoly of wisdom; they certainly have not. Only, they proceed in a rational way to face problems and not in an emotional way. This is what is common to all scientists, no matter what field they are in. Now I would like to look point by point at what has been considered. I already mentioned that with this message we stress the concept of human duties towards mankind in addition to human rights. This principle is reinforced by a piece which said it even better than we did, by the Union of Concerned Scientists in 1992. A new ethic is required, a new attitude towards discharging our responsibility for caring for ourselves and for the Earth. We must recognize the Earth's limited capacity to provide for us. We must recognize the Earth's fragility. We must no longer allow it to be ravaged. This ethic, says the Union of Concerned Scientists, must motivate a great movement, convincing reluctant leaders, to whom we have to speak, and reluctant governments and their reluctant peoples themselves, to effect the needed changes. The scientists issuing this warning hope that our message will reach and affect people everywhere. We need the help of many. We require the help of the world community of scientists, natural, social, economic, political. We require the help of the world's business and industrial leaders. We require the help of the world's religious leaders. We require the help of the world's peoples. We call on all to join us in this task. Other people said in a similar way things equally important. But besides this, we come now to the decalogue which I mentioned was just issued by our conference, which took place, as I said, in December 1992. The first point which we considered was the respect for human diversity of race, genetics, religion. Now more than ever it is important to have respect for human diversity of race, genetics, religion, nationality, language, culture and ethnicity, as well as that of the sexes, the aged, infants and the disabled. Now this is the point which we made, and it has been said, beautifully so, in the Stockholm Declaration of 1972: Man has the fundamental right to freedom, equality and adequate conditions (this is practically a bill of rights), conditions of life in an environment of a quality that permits a life of dignity and well-being, and he bears a solemn responsibility to protect and improve the environment for present and future generations. In this respect, policies promoting or perpetuating apartheid, racial segregation, discrimination, colonial and other forms of oppression and foreign domination stand condemned and must be eliminated. In the 1992 declaration of Rio it is said: indigenous people and their communities and other local communities have a vital role in environmental management and development because of their knowledge and traditional practices.
States should recognize and duly support their identity, culture and interests and enable their effective participation in the achievement of sustainable development. The second point of the decalogue reads: respect for the genetic pools of the biosphere, representing millions of years of evolution and experience. This was in the Trieste promulgation of the Magna Carta of Duties, and it was stated in a perhaps even more forceful way by the Union of Concerned Scientists in 1992. The irreversible loss of species, which by 2100 may reach one-third of all species now living, is especially serious. We are losing the potential they hold for providing medicinal and other benefits, and the contribution that genetic diversity of life forms gives to the robustness of the world's biological systems and to the astonishing beauty of the Earth itself. Much of this damage is irreversible on a scale of centuries, or permanent. Other processes appear to pose additional threats: increasing levels of gases in the atmosphere from human activities, including carbon dioxide released from fossil fuel burning and from deforestation, may alter climate on a global scale. Predictions of global warming are still uncertain, with projected effects ranging from tolerable to very severe, but the potential risks are very great. Our massive tampering with the world's interdependent web of life, coupled with the environmental damage inflicted by deforestation, species loss and climate change, could trigger widespread adverse effects, including unpredictable collapses of critical biological systems whose interactions and dynamics we only imperfectly understand. Uncertainty over the extent of these effects cannot excuse complacency or delay in facing the threats. Caring for the Earth, in 1991, and other initiatives of this kind read: conserve biodiversity. This is the most important, and I believe young people should be aware of the importance of biodiversity at a moment when there is so much conflict and really aggressiveness towards the diverse. This includes not only whole species of plants, animals and other organisms, but also the range of genetic stocks within each species and the variety of ecosystems. This is number one of our own. Number three of the decalogue which we just issued is protection of the biosphere from further degradation by pollution and the abuse of natural resources, such as the destruction of agricultural soil and deforestation. The same was practically, I must say, even better said by the Union of Concerned Scientists in 1992. Human beings and the natural world are on a collision course. Human activities inflict harsh and often irreversible damage on the environment and on critical resources. If not checked, many of our current practices put at serious risk the future that we wish for human society and the plant and animal kingdoms, and may so alter the living world that it will be unable to sustain life in the manner that we know. Fundamental changes are urgent if we are to avoid the collision our present course will bring about. The environment is suffering critical stress, and that is what is happening right now to the atmosphere, the water, the ocean, the soil and so on. The same is said, as I mentioned, by another institution, Caring for the Earth, 1991: minimize the depletion of non-renewable resources. Minerals, oil, gas and coal are effectively non-renewable; unlike plants, fish or soil, they cannot be used sustainably.
"However, their 'life' can be extended, for example by recycling, by using less of a resource to make a particular product, or by switching to renewable substitutes where possible. Widespread adoption of such practices is essential if the Earth is to sustain billions more people in the future and give everyone a life of decent quality." Number four of the decalogue — these things go together — is encouragement, by every means available, of the reduction of the use of fossil fuels and the development of alternative sources of safe energy; this from the Magna Carta of Duties issued at Trieste. This requires recognition of our responsibility to ourselves and to the world, in view of the Earth's limited capacity to provide for all mankind. The Union of Concerned Scientists: "Developing nations must realize that environmental damage is one of the gravest threats they face, and that attempts to blunt it will be overwhelmed if their populations go unchecked. The greatest peril is to become trapped in spirals of environmental decline, poverty and unrest, leading to social, economic and environmental collapse. The developed nations are the largest polluters in the world today," as is well known. "They must greatly reduce their overconsumption, if we are to reduce pressures on resources and the global environment. The developed nations have the obligation to provide aid and support to developing nations, because only the developed nations have the financial resources and the technical skills for these tasks. Acting on this recognition is not altruism, but enlightened self-interest: whether industrialized or not, we all have but one lifeboat. No nation can escape from injury when global biological systems are damaged. No nation can escape from conflicts over increasingly scarce resources. In addition, environmental and economic instabilities will cause mass migrations" — as we are now witnessing — "with incalculable consequences for developed and undeveloped nations alike." The same has been said, with somewhat different words, by other institutions. And then I come to the fifth point of our own decalogue: assistance to people oppressed by hunger, misery or disease in all the deprived regions of developed and developing countries, by increasing the education of women and old-age support. The Heidelberg Appeal said it in perhaps an even better way, and I will read it to you. Heidelberg Appeal, 1992: "We draw everybody's attention to the absolute necessity of helping poor countries attain a level of sustainable development which matches that of the rest of the planet, protecting them from troubles and dangers stemming from developed nations, and avoiding their entanglement in a web of unrealistic obligations which would compromise both their independence and their dignity. The greatest evils which stalk our Earth" — and this is an important point — "are ignorance and oppression, and not science, technology and industry, whose instruments, when adequately managed, are indispensable tools of a future shaped by humanity, by itself and for itself, overcoming major problems like overpopulation, starvation and worldwide diseases." This is most important at a moment when there is such an anti-science movement. Young people here should be well aware of the importance of science and of the necessity to go on. We cannot stop it, unless we want to kill Homo sapiens himself. "We fully subscribe to the objectives of a scientific ecology for a universe whose resources must be taken stock of, monitored and preserved."
"But we herewith demand that this stock-taking, monitoring and preservation be founded on scientific criteria and not on irrational preconceptions. We stress that many essential human activities are carried out either by manipulating hazardous substances or in their proximity, and that progress and development have always involved increasing control over hostile forces, to the benefit of mankind. We therefore consider that scientific ecology is no more than an extension of this continual progress towards the improved life of future generations." Well, it is almost as good. Number seven of our promulgation is recognition of the danger of degrading human dignity by any form of exploitation of one individual by another, or by the exploitation of the human body and organs, whose inviolability and inalienability should be preserved. This goes back to bioethics, which is now in the process of being promulgated by all nations, and I believe that these bioethical problems are very important; perhaps we are the only ones to raise them, as the other institutions and initiatives did not speak about them. But I believe it is high time to think about these problems of transplantation and their social consequences. Then I will consider number eight, that is, the promotion of an improved urban environment, in order to alleviate dehumanizing conditions and to reduce the rapid urbanization of rural areas, with the resulting destruction of land and water resources. And this is a very important point not only for the underdeveloped countries: the ghettos, which are really tremendous, are a problem of all developed countries, our own and your own alike. Caring for the Earth deals with just this problem: "Adopt and implement an ecological approach to human settlements planning" — that is, the need to adopt and implement an ecological approach to human settlements planning, to ensure that environmental concerns are built into the planning process and thus promote sustainability. This really means the planning and management of human settlements to satisfy the physical, social and economic needs of the inhabitants on a sustainable basis, by maintaining the balance of the ecosystems of which each settlement is an integral part: a harmonious combination of human-produced and natural elements to provide the habitat within which urban dwellers can seek their well-being. A strategy for sustainability based on such an ecological approach is expected to improve and secure both. Finally, we come to the very last points and to the conclusions. Number nine is support of effective voluntary family planning, in order to regulate world population growth, by improving the education of women and by extending old-age support. Here again I will quote, and it will be the very end, the Union of Concerned Scientists, 1992: "The Earth is finite. Its ability to absorb wastes and destructive effluent is finite. Its ability to provide food and energy is finite. Its ability to provide for growing numbers of people is finite. And we are fast approaching many of the Earth's limits. Current economic practices which damage the environment, in both developed and underdeveloped nations, cannot be continued without the risk that vital global systems will be damaged beyond repair. Pressures resulting from unrestrained population growth put demands on the natural world that can overwhelm any efforts to achieve a sustainable future. If we are to halt the destruction of our environment, we must accept limits to that growth."
"A World Bank estimate indicates that world population will not stabilize at less than 12.4 billion, while the United Nations concludes that the eventual total could reach 14 billion, a near tripling of today's 5.4 billion. But even at this moment, one person in five lives in absolute poverty without enough to eat, and one in ten suffers serious malnutrition. No more than one or a few decades remain before the chance to avert the threats we now confront will be lost, and the prospects for humanity immeasurably diminished. We must stabilize population. This will be possible only if all nations recognize that it requires improved social and economic conditions, and the adoption of effective, voluntary family planning." Finally, the very last point: condemnation of armed force as an instrument of national policy, calling for decreased military spending in all countries and restriction of the proliferation and dissemination of arms. The Union of Concerned Scientists: "Success in this global endeavor will require a great reduction in violence and war. Resources now devoted to the preparation and conduct of war, amounting to over one trillion dollars annually, will be badly needed in the new tasks and should be diverted to the new challenges." So this is all I wanted to say to you now. I suppose that we have to move from these beautiful declarations to action, and this is why I am here today, pleading, asking, begging all of you, particularly the young people. We believe that young people are terribly needed, because the future is theirs more than ours — certainly not my own — and I believe that their capacity to face problems is far better than that of old people. So I hope that not only the Nobel laureates who are here, some of whom will accept the invitation they will receive from the Chancellor of the University of Trieste, but also the young people will think about this and urge other people to see the point: that we have to come to action. It is not enough to make beautiful declarations, which are all absolutely necessary; what we need is young people, whom we ask to participate in this duty — come to our aid. When we come to Trieste, to this meeting, we will, I hope, try to find a way of really making progress, of having political leaders participating and really taking part. It is disappointing to see that since Stockholm, that is, more than 21 years ago, no real action has been taken. Many, many declarations, as I have read to you, but no step forward has been taken. Thank you very much.
When Rita Levi-Montalcini for the first time visited the Lindau meetings, she chose to speak on a theme which had a lot to do with her role as a scientist, but very little to do with the particular research for which she received the Nobel Prize in Physiology or Medicine. Instead she turned to the young people in the audience and gave a progress report on a work that she had actively taken on a few years earlier. This work had as its final goal nothing less than a declaration like the United Nations’ Declaration of Human Rights, but this time concerned with a set of scientifically based Duties instead of Rights. The inspiration originally came from an article published in 1972 by her friend and colleague Roger Sperry, Nobel laureate in Physiology or Medicine 1981, entitled “Science and the problem of values”. In the article, brain scientist Sperry argues that “the world we live in is driven not solely by mindless physical forces but, more crucially, by subjective human values. Human values become the underlying key to world change.”. At a small international meeting in Trieste organised by Levi-Montalcini in 1992, the first steps towards a declaration had been taken under the name “The Magna Carta of Duties”. In her lecture, she reads from Sperry’s article, explains the work and the arguments brought forward during the meeting, and ends by reading the “axioms” making up the declaration. After her appearance in Lindau, work progressed and the declaration became in 1994 “A Declaration of Human Duties”, which was submitted to the United Nations. At the same time she set up an organisation, the ICHD, the International Council of Human Duties at Trieste, of which she is President. In 1997, this organisation received the status of “non-governmental organization in special consultative status with the Economic and Social Council (ECOSOC) of the United Nations”. Anders Bárány
10.5446/55102 (DOI)
Thank you very much. Mr. Chairman, ladies and gentlemen, students and colleagues, this is somewhat of a change of pace. I will not talk about easy things like condensed matter physics or the origin of the universe, but I will discuss something somewhat more difficult, which is the mind of a child. I also want to remind my colleagues from the United States that this is the 4th of July. Happy 4th of July. We do not have any firecrackers; instead, I have invented a firecracker I used to amuse my children with. Let me make a confession. I offered the organizers of the Tagung a choice of topics, and to my pleasure, they selected science education. Now, how did I, a high-energy physicist, come to this subject? Most of us with this very curious color of hair look for new challenges outside of our well-established metiers. After all, Mössbauer changed to neutrinos, Glaser went into billiards, and of course I have, as essentially all of my colleagues have, lectured seriously to physics students. I had for a long time lectured seriously to students who are not in physics, non-science students, for it is the students of the social sciences, of law, and of economics, and of journalism, and even of the humanities, that sooner or later come to organize and control our society. And I reasoned that the more science they could be helped to understand, the safer I could sleep. However, I must confess that it was only after I received the Nobel Prize that I found my ideas about science education were being listened to. Two things happen when you come home from Stockholm. One thing is very ordinary: my wife keeps insisting that I take out the rubbish. And after I resisted and said, I am a Nobel Prize winner, I then looked at her and I took out the rubbish. The second interesting thing that happens is that you slowly become aware of the mystique of the Nobel Prize. For example, you automatically become an expert on all subjects. And I have discussed learnedly the Brazilian foreign debt, the length of women's dresses, and the relative merits of the Grateful Dead over the Beatles. For example, a journalist: tell me, Professor Lederman, how long should women's dresses be in 1992? Lederman: as short as possible. I also found that competition in experimental physics was getting very serious. And I was always aware of the dangers of competition. You all know the sort of competition between Bohr and Einstein — to mention a famous incident in our subject — where they would argue about quantum mechanics, and argue all the time in this friendly competitive way. And there is a very little known story that Bohr and Einstein took a walk in the woods. And suddenly, while arguing about whether quantum mechanics was real or not, they saw an enormous bear, whereupon Einstein took his Adidas out of his knapsack and started putting them on. And Bohr, always being a little more pedantic, said, what are you doing, Einstein? And Einstein says, I'm putting on my Adidas sneakers so I can run away from the bear. And Bohr said, everyone knows you can't run faster than a bear. Whereupon Einstein answered, I don't have to, dear colleague. I only have to run faster than you. Now, I am deeply involved in a very serious effort to reform science education in the city of Chicago. I've given many talks on this subject, and I had a lot of problems in trying to prepare it for this audience. I somehow feel like Zsa Zsa Gabor's seventh husband: I know what to do, but how do I make it interesting? I am just wondering how the translator deals with that.
Actually, what I found in the last few months was that there's a kind of new international invariant: educational reform and a global concern for science and math education seem to be prevalent. No subject, it seems, arouses so much chauvinism in people as their education. A sense of concern and anxiety with the products of the educational system seems to have set in, country after country. This touches us in two fundamental ways: one has to do with culture and the other has to do with economics. In the United States, a traumatic re-examination of the values, the content, the goals, and the infrastructure of our educational system began in 1983 with a national commission report called A Nation at Risk. It was filled with nationalistic rhetoric, with military metaphors — something like "we have committed unilateral educational disarmament", and so on, "we are drowning in a rising tide of mediocrity". Now the same sort of examination is going on in most of the industrialized world, in most cases resulting in a loss of confidence in national educational systems. In the U.S., George Bush campaigned as the education president. In Japan, in France, in the UK, in Sweden, educational reform has become front-page news. In Holland, educational reform is in full swing and students are rioting, not for time-honored reasons, but because of educational dissatisfaction. Now my title also includes the fact that the world is changing. The question is, why are we so concerned? Why is this happening? Citizens of industrial society live in an age of science and technology, driven by a kind of interactive spiral where science generates technology and technology then enables new science to grow. For example, the new instruments you've seen, the particle accelerators and so on — vast new instruments depending on a technology which itself was born from a science. So science begets technology, begets more science, and the technology produces economic benefits and prestige, which encourages governments to continue to invest in science and technology. And this is an ever-ascending spiral, and it accounts for the fact that the pace of change keeps increasing, so that what happened in the last 10 years is equivalent to what happened in the previous 30 years, and so on. Additional ingredients in this increasing pace of change have to do with modern communications and information transfer. For example, I can't imagine how I could have lived without my fax machine. And I think the pace of change — even the political changes that took place with such an amazing rapidity in Eastern Europe, very close to here — must have been affected by the rapid communication, so that if somebody throws a stone in Ulan Bator, you can read about it in a Brazilian village almost immediately. Finally, there is this rising dominance of sort of global institutions, corporations — so IBM, as we know very well, does research in Zurich, and the Japanese, I understand, have just bought Princeton, and so on. National borders have become increasingly transparent to those issues which were traditionally thought to be national and to contribute to national competitiveness. This is beginning to lead some economists and some national leaders to a recognition that human resources — scientists, engineers, a highly skilled workforce, the people in the nation — are among the qualities that remain more or less national assets and therefore may require more attention. Science and technology have brought huge social benefits to the citizens of industrial societies.
Relative economic prosperity, a very high standard of living, increasing longevity, access to the art and music of the world, and the leisure to make use of it. But as we all now recognize, there is a dark side, and just to cheer you up before lunch, I'll give you a listing of it — clearly an incomplete listing. Perhaps foremost — I don't know if it's foremost — is the question of the greenhouse effect and global warming, still with very large uncertainties: how much warming, and what are the effects? A recent US National Academy of Sciences study gives a very uncomfortable review of the prognosis. NASA has recently measured the thinning of the ozone layer and concludes that the effect is much worse than had been anticipated. Then there are problems like acid rain and the seemingly insoluble problem of toxic and nuclear waste disposal, urban air pollution, apparently unstoppable industrial accidents like Bhopal and Three Mile Island and Chernobyl and the Norco fire in Louisiana and the great Rhine spill and oil spills almost everywhere, the Kuwait oil fires, the destruction of the rainforests — everybody very happy now — which contributes to the greenhouse effect and also destroys this irretrievable biodiversity of species. Pandemic diseases like AIDS and Legionnaires' disease keep appearing, and looking ahead there is a longer-range concern for the depletion of natural material resources on which high-tech society is increasingly dependent: nickel and cobalt and chromium, even high-grade iron ore. Both the social benefits and the social penalties require science, engineering, technology, and therefore science education. All of this indicates to me three segments of the population which must be targeted for improved education in science, mathematics and technology. One, there must be an assurance that science continues to receive the essential flow of young people, well trained and eager to continue the kind of research that you have sampled in these last few days, and in so many other fields from anthropology to zoology. Two, industry is increasingly dissatisfied with the additions to the workforce, where increasingly the worker must have command of some scientific thinking and some mathematical skills. And thirdly, the preservation of democratic government depends, in my opinion, on the expansion of public understanding of science and of technology. The number of public policy controversies that require some scientific and technical knowledge and thinking is increasing. Many issues: there's the question of population growth on the planet, with its inevitable consequence of enhancing the ecological problems. Again, overriding problems of science and technology: the wide gap between the developed nations and the third world. In the era of instant communication, as we've already seen, it seems to me to be politically totally unacceptable that we maintain this enormous standard-of-living gap between the so-called North and the South. And finally, we have to remember that the Cold War is over, yes, but as long as we have 50,000 nuclear warheads, we are all living under a sort of sword of Damocles, the ultimate ecological catastrophe. And we must find a way of instilling rationality into this very dangerous issue. So you see that we have these tremendous problems, and we have these three kinds of skills that we need: we need skills in the workforce, we need a flow of scientists and engineers, the professionals.
And then, above all, I think we need the expansion of public understanding of science and technology, because that's the only way to preserve democratic government and so on. In fact, what we see is a general decline — if measurements are any good — in the understanding of science and technology on the part of the general public. Still, the general public increasingly demands some part in decision making about the practice and applications of science. Look at the green movement, at animal rights activists, at concerns about pesticides and gene splicing. And in all of these things, which have valid elements of concern, there is a spectrum of activities, including activities which really are based on ignorance — an ignorance merging into fear and irrational fear — which tend to have a tremendous negative effect on many fields of science. Popular interest in science — and there's a lot of popular interest in science — doesn't reflect an understanding of science or an understanding of scientists. And I can tell you my personal experience. I was on a train coming out of Chicago, and onto the train came a nurse with a group of patients from the local mental hospital. And as soon as she got them settled, she counted them: one, two, three, four — and then she looked at me and said, who are you? And I said, I'm Leon Lederman. I won the Nobel Prize. And she says, I know: five, six, seven. Or take the standard picture of the scientist that you see in a movie from Hollywood. He's always wearing a white coat, very thick glasses. He always carries a cat, which he strokes as he describes how he's going to destroy the world. An important, and of course even crucial, aspect of the public understanding of science is the fact that science is expensive, and public support is crucial. So far, the public has been willing to support science as a matter of trust, and because they credit science with the miracles of modern technology. But this is a fragile relationship unless scientists make a strenuous effort to communicate with the public through books, which they do very well, newspapers, magazine articles, and above all TV, which is a fantastic possibility for communicating with the public. Well, to summarize, I have sort of three reasons that I see for myself as to why general science literacy is important and why it has to be targeted everywhere, from the youngest child to the public already beyond school. Science, mathematics, and technology are part of our culture, of course, like art and literature and the humanistic studies; it enriches the life of the individual. We also mentioned the workforce: working occupations of all kinds tend to need some understanding of science. And we already mentioned the public. And again, in the U.S., and I think in other countries — I've seen it with less certainty — I've seen demographic projections which indicate that there are possibilities of large shortages of scientists and engineers in many countries. In the U.S., the demographics seem to indicate this. These projections are always full of uncertainties; they're not clear as to what will happen.
But if you add to the traditional pursuits of science the kinds of things I listed — the kinds of ecological problems that lie ahead — as an extra burden on science, then I'm quite confident that we will, in fact, in all industrial societies and in developing countries too, find shortages of scientists and have to concentrate on bringing into science, and certainly into physics, groups that traditionally have not been represented: minority groups, women — count the number of women among your students, and clearly we all agree there are not enough. These things, I tend to think, are international invariants. So, just to summarize this part, we see a changing world which is placing great new burdens on how we educate our children for life in the 21st century. These new burdens apply to all stages, from five-year-olds to 22-year-olds, and indeed to the need to raise the level of science literacy of the general public. I see this in the U.S., and I see concerns also in Europe, of projected shortages of trained scientists, in part through the demographic trends and in part through the new scientific challenges. Let me now switch over to some slides to continue and tell you a little story about what I'm doing and what's happening in the U.S. and in Chicago. Just to review or illustrate some of these things, here are some headlines. This is actually from the Wall Street Journal, in which you very rarely expect to see the word revolution, but this just has to do with examining the kinds of training that people need just to get jobs: our schools aren't teaching what tomorrow's workers need to know. And here are some of these demographic projections, which have — there are too many significant figures, clearly — but which have some validity. They may not be completely accurate, because they are long-range projections, but they point to severe shortages of scientists and engineers to do the job that has to be done. Now, with all of this interest, the question is: what are the consequences, what are the results, what are people doing to react to this? And the general words are school reform. There are serious reconsiderations of how we teach children, and of why it is that, at least in many countries — probably not in Germany, but that may be temporary — an increasing number of students turn away from science, shift away from science. So the general components in school reform, which I list here, have to do with the fact that there are new developments, new ideas in how you teach; new curricula — curricula which say you don't teach everything that is known, you teach far less in the way of information, but in greater depth. And there's a shift from a simple transmission of knowledge, where the teacher stands up, reads from the textbook, or recites all the things that the child must know, to a kind of student-centered stimulation of learning. An important component is that, in order to do that, the teacher must be much more comfortable with science than most teachers are. So: improved training of teachers in the content — namely in the physics, in the chemistry, in the mathematics that teachers have to teach — and in teaching methods, and treating teachers as professionals like engineers or lawyers. And then there are the problems of how you measure achievement. When we do experiments in physics, if we don't get feedback while we're doing the experiment, we know that a year or two later we'll have a lot of data that doesn't mean anything.
In the same way, when we're trying to modify an educational system, we have to have very rapid methods for measuring achievement. There are advances in the understanding of how students learn. This comes out of research in cognition, about how children think. And this always points to a new constructive, active view, which replaces a sort of passive absorption of information. You want the student to be involved, to be active, to be caught up in the information as relevant to themselves. Then, of course, there are new technologies: computers and calculators and software. For example, all of the mathematics taught from five years old to 20 years old can be done with a hand calculator, and if you ignore that, you're ignoring a major change in what is important in the teaching of mathematics and science. Many of these things have been tried. They've been tried on small groups and small numbers, in one school here and three schools there, and so on. And in general, this so-called activity-based approach — these are the key words: hands-on, playing and learning. I think Professor Binnig mentioned the importance of playing when you're involved in some intellectual activity. Textbooks are replaced in the early grades with active investigation, something like the inquiry method, where the child is led to try to discover things about the science that is being presented, replacing the standard sort of memorizing of formulas and facts. One of the hardest things in educational reform is that, although you know how to do it, there are really two main obstacles, and they're not unrelated. One has to do with the general problem that there's a kind of inertia: the public, I think, in the United States certainly, doesn't appreciate the need for a rather radical reform of science education. And the other obstacle is the bureaucracy. An entrenched educational administrative infrastructure has many reasons for getting in the way of reform. And so central control seems to be the major obstacle. I noticed this in the discussions about the Japanese reform, which is going on now, and certainly in France, where somewhere in some building, somewhere in the middle of Tokyo or Paris, and certainly in Washington, there are people who know how to reform education, and they write the rules, and they will not do anything in the way of making the kinds of radical changes that are needed. As you can see from this, I've had some bitter experience in my own efforts, mostly based on my trips to Washington DC. Washington DC is an interesting city. I recommend you all visit it. It's the only city in the world in which the speed of sound is more than the speed of light. Chicago is a typical U.S. city, and I particularly want to bring that up because, when we speak of the North and the South as a distinction, we also in the U.S. — and I know certainly in Europe too, in some places — have parts of the industrial societies which mirror the developing countries in all sorts of ways. Chicago — and I'm not talking about the city now, but the school system — the school system has 400,000 students. So it's a big school system, the third largest in the U.S., with some 24,000 teachers, of which about 20,000 must teach some math and science, even though they're not trained to teach math and science. In the U.S., if you're teaching five-, six-, seven- or eight-year-olds, the teacher teaches all the subjects: history and language and science and mathematics.
In the city of Chicago, 88% of the students are black or Hispanic, 10% are white, 60% of the students belong to families that are below the poverty level, and 46% never finish high school, even though that's a legal requirement. They score very low on any national tests, and of course, like any large major city in the U.S., there's crime and drugs and teenage pregnancy and many, many other problems. But on the positive side, Chicago has instituted in the last two or three years the most dramatic school reform ever, and I'll say a few words about that. And in Chicago there are enormous intellectual resources, all of them with some interest in science education. For example, there are 14 universities, there are nine science museums, there are two large national laboratories, and there is an enormous amount of research done by industry, by large multinational corporations that are headquartered in Chicago and have research labs in the area. And so if you put all of these things together, and you add one Nobel Prize winner with gray hair, you might form, just possibly form, some new institution that could bypass in some sense the educational bureaucracy and create some new way of dealing with these particular terrific problems — very difficult problems. Well, it's called the Teachers Academy for Mathematics and Science in Chicago, and it starts out with some fundamental philosophical beliefs. One is that all children can learn: even poor children can learn, even children with no parental support, even children who live in ghettos and neighborhoods full of all sorts of problems, even children arriving at school sometimes hungry because they haven't had breakfast. We also believe that teachers are the key to learning, but teachers in general are themselves uncomfortable with mathematics and science, and so that has to change. And then we also believe that these teachers can learn, and that scientists in fact can help to organize their learning, because it will take too long to repair — even though that has to be done too — the educational bureaucracy which produces teachers in the first place. There are new techniques for delivering math and science education which work for teachers and work for students, and they are already tested, and I'll try to give you some brief examples of some of these new techniques. And then, of course, we also believe that education is a key ingredient to try to break the cycle of poverty and lack of jobs and crime and so on. So this is the basic plan. The big problem is: does it work? And what we're trying to do, in some very simplistic way, is to take ideas that have been tried in many places. It's interesting that when I arrived in Chicago just two years ago, I found that many of my colleagues in the universities — physicists, even a chemist or two, mathematicians — were in fact busy trying to help the schools and inventing ideas for how to teach children, and how to teach teachers to teach children science, in a way that would appeal to children, with some of these new techniques, these activity-based, hands-on techniques where, as you'll see, you use the simplest kind of apparatus: sugar cubes — not sugar cubes that come from the neutron star, but real sugar cubes — on which children can make dots to convert them into dominoes, the one and the two and the three, and then do numerical studies on them; concentrating on fundamental concepts like lengths and areas and volumes and masses and time, and doing a lot of graphing.
I mean, there's a technique which is called teaching integrated math and science, where the science and the math are taken together and related to the world of the child — not as an abstract subject, but as a practical subject and a subject in which you can play as well as learn. Children are taught to draw a picture identifying a particular experiment. It's amazing to see six- and seven-year-old children who have difficulty speaking, but who say smoothly "independent variable" and "dependent variable". They learn to collect data, to organize the data in a table, to make graphs, and to analyze the entire experiment. I'll give you just some ideas of some of the simplest things, for example. There's something called the grab bag, where the child reaches into a bag and picks out some shapes — they could be triangles, squares, stars — and then is taught to list the number of each kind and to draw a graph: so many triangles, so many squares, so many stars. And then the child is asked questions to make sure they understand the exercise. What is the manipulated variable? It's not the dependent variable. What shape was most common in Mary's pile? Well, the square was most common. How many triangular shapes did Mary pull out? Well, from the graph she reads two. Another graph has to do with the sizes of spheres, and there are three sizes: large, medium, and small. Again, you put numbers on a graph and you analyze the graph — more sophisticated, less sophisticated. Then you teach measures of various kinds. For example, here's one where each child lists the number of streets they have to walk to school: Mary walks three streets, and Bill walks four streets, and Jan walks two streets, and they make a graph. So they learn, little by little, something about distributions and how to read the data, and then they learn something about estimations. A very interesting graph had to do with a soap bubble experiment, in which the children blew soap bubbles — which was a lot of fun — caught them, and then with a stopwatch measured the lifetime of the soap bubble. So here's a distribution curve for the lifetimes of soap bubbles — and that cheers the children up. All right. Well, here are some real experiments where they slide little toys down inclined planes and see how far they slide up on the other side. There's a lot of emphasis on proportional reasoning, where you take a little tube — you roll up a piece of paper — and you look through it at a ruler pasted on the wall, and then you change your distance and see how much of the ruler you can see. I've seen these things work. I've seen teachers that have taught for 20 years absolutely ecstatic about the results of learning how to do this with children. This isn't the total solution to the problem of education — clearly, you need more rigor — but this is certainly a very good way to start. Okay, let me conclude. Let me make some concluding remarks, because this program of applying it to an entire city is still a long way from being realized. Social activism is considerably helped by having a Nobel Prize; I recommend that to all of you. Physics students, of course — you young students must stay with your studies and with your research 99% of your time. I believe in that. But if you occasionally look away, and stay involved and aware that it's important for you too to keep science healthy, at some point some of you may want to consider teaching as an important addition to your career.
So, of course, my distinguished colleagues here — I know many of them, all of them, are well aware of all of these problems. They communicate with you so beautifully. Communicating with the general public is a special effort, which of course all of us know we have to make. But while I have the chance, let me say a few words to the physics students here, whom I enjoy so much talking to. In the past few days, you've been very briefly exposed to a fantastic array of incredibly beautiful physics. But don't be misled because it looked so easy. Physics research is full of frustration and disappointment and agony, if you're more or less a normal human being. You will certainly be subject to disappointments, even despair, at the ignorant resistance of your accomplices in research, your bureaucrats, and your professors, and even more at the stupidity of your equipment, which refuses to work the way it's supposed to, and at the resourcefulness of nature in hiding its secrets. But with all of that, let me warmly welcome you and recommend to you the life of a physicist. Today, as you've already seen, the subject literally dances with vitality: astrophysics, particle physics, solid state, the physics of materials, observations of new phenomena, and the fantastic precisions that can be achieved. In solitary contemplation at three o'clock in the morning, or in collaborative groups going after some profound concept with huge devices, physics is full of rewards — from Galileo's demonstration that if you throw a stone it describes a parabola, to Glaser's mysterious billiard ball, which also describes a parabola, I don't know how. There is exciting challenge, there is incredible beauty, and there is ultimate social utility. What more can you ask? Thank you very much. Thank you.
Advancements in neutrino research were long neglected by the Royal Swedish Academy of Sciences. Frederick Reines who had discovered the electron neutrino together with Clyde Cowan (1919-1974) in 1956 received a Nobel Prize in Physics almost forty years later in 1995. Leon Lederman was luckier. In 1988, he belonged to the trio who received the first neutrino related Nobel Prize in Physics. Together with Melvin Schwartz and Jack Steinberger Lederman had discovered the muon neutrino in the early 1960s. Their discovery was so important because it established the existence of a second family of elementary particles. “I have offered the organizers a choice of topics and to my pleasure they choose science education”, Lederman justifies the non-scientific subject of his lecture. Referring to the US National Commission’s 1983 report “A Nation at Risk: The Imperative for Educational Reform” he emphasizes the importance of science education. Driven by a kind of interactive spiral “where science generates technology and technology then enables new science to grow” the world is changing at an accelerating pace. This development is - at least in the industrialized countries - associated with huge benefits such as economic prosperity, high standard of living, and increasing longevity. Yet it has also a dark side, exemplified by phenomena such as air pollution, global warming, destruction of biodiversity, and a widening welfare gap between the developed and the developing world. Consequently, Lederman says, the number of public policy controversies that requires scientific and technical knowledge is increasing. In his opinion, education in science, mathematics and technology has to address mainly three target groups: Young people to ensure a continuous flow of new scientists and engineers, the workforce of the industry to keep their skills adapted to the technological progress in their sector, and, above all, the general public - because the preservation of democratic government will depend on a sufficient public understanding of science and technology. In the second part of his lecture, Leon Lederman describes his work for the Teachers Academy for Mathematics and Science in Chicago. His commitment is well motivated: 20.000 of the 24.000 teachers of the school system of Chicago and Illinois “must teach some math and science even if they are not trained to do so”. Of the 400.000 students (at the end of the 1980s), 60 percent lived below the poverty level, and 46 percent never finished high school. The educational bureaucracy was slow in coping with this challenge. At the same time, Chicago with all its universities and research institutions had enormous intellectual resources. This led to the foundation of the “Academy”, in which teachers were taught how to teach science to children. It rests on the fundamental belief that all children can learn and that “education breaks the cycle of poverty, lack of jobs, crime and so on”. Teachers can also learn - and scientists can make them familiar with improved techniques: Teach less information, but in greater depth; move away from simple transmission of knowledge to a student-centered stimulation of learning; replace textbooks by active investigation. “Activity-based - hands-on - playing and learning are the keywords”, Lederman argues and gives some examples of respective tuition techniques. 
Turning directly to the young scientists in his audience at the end of his talk, Lederman reminds them that physics research is not as easy as it looks in all those fascinating lectures they had the privilege to witness in Lindau, but that it is rather “full of frustration and disappointment and agony if you are more or less a normal human being”. Nevertheless, he continues, “I recommend to you the life of a physicist. The subject literally dances with vitality. Physics is full of rewards. There is exciting challenge, there is incredible beauty and there is ultimate social utility, what more can you ask?” The audio tape records almost one minute of enthusiastic applause for Lederman’s warmhearted, humorous and ambitious talk. Joachim Pietzsch
10.5446/55009 (DOI)
Welcome everybody to this talk, where along with my colleague Carlos Budde we're going to present our work on rare event simulation for non-Markovian repairable fault trees. This is joint work between the Dependable Systems Group of the National University of Córdoba in Argentina and the Formal Methods and Tools Group at the University of Twente in the Netherlands. I will walk you through the first part of the talk, where I will tell you what a fault tree is and how complex it can get. I will also dive a little bit into input/output stochastic automata, which we used to give semantics to one of the most complicated variations of fault trees, in order to be able to analyze their reliability. Then Carlos will present rare event simulation and will also present the main contribution of our work, which are heuristics for the automatic derivation of importance functions, which are at the core of the rare event simulation methodology. So, fault trees are just a graphical way of representing the failure of a system as the combination of failures of smaller parts of the system. These failures of smaller parts of the system are referred to as basic events, and they form the lower layer of the tree. These failures are combined by gates, which are in turn combined by other gates, all the way up until you get to the top gate, which we refer to as the top level event and which represents the failure of the system under study. Fault trees enable fault tree analysis, which is used to calculate the reliability of a system. You do this by first assigning a formal semantics to the gates and to the basic events, and then doing some numerical or statistical calculations in order to compute the reliability. While fault trees can be very simple and just have logical gates, as in static fault trees, they can get a little more complicated and even include repair boxes, which allow the parts that get broken to be repaired. In this case, and in the case where the failures and repairs follow general probability distributions and not only Markovian ones, approaching the fault tree analysis with numerical methods is just infeasible, because of the complex interplay between the repairs and the failures of the basic elements. So for this case we actually propose to use simulation, but in order to simulate our trees we first need to give them a formal semantics which allows simulating on them. It should be a formal semantics that is deterministic, so you always know what the next step to simulate is. For this we propose to use input/output stochastic automata. Input/output stochastic automata use clocks to control the occurrence of events: the clocks guard the events. Each clock gets assigned a value sampled from the probability distribution that is associated with that clock. All clocks count down at the same time, and when a clock expires it may enable a transition. Clocks can also be reset in the transitions, to new values sampled from their probability distributions. Input/output stochastic automata come with some rules for building them up, and these ensure that the final model is simulatable, so it is deterministic. They are compositional and they can be non-Markovian. To actually model a fault tree using the language of input/output stochastic automata, we use a symbolic language very similar to that of PRISM.
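To make the basic vocabulary concrete, here is a minimal sketch, in Python and purely for illustration — it is not the tool's input language, and all class and function names are made up — of a fault tree as a data structure, with basic events at the leaves and gates combining them up to the top level event.

from dataclasses import dataclass
from typing import List

@dataclass
class BasicEvent:
    name: str
    failed: bool = False              # current state of this leaf

@dataclass
class Gate:
    kind: str                         # "AND", "OR", or "VOT" (k-out-of-n voting)
    children: List[object]
    k: int = 0                        # failure threshold, only used by "VOT"

def is_failed(node) -> bool:
    """Evaluate bottom-up whether a node has failed in the current state."""
    if isinstance(node, BasicEvent):
        return node.failed
    states = [is_failed(c) for c in node.children]
    if node.kind == "AND":
        return all(states)
    if node.kind == "OR":
        return any(states)
    if node.kind == "VOT":            # fails once at least k children have failed
        return sum(states) >= node.k
    raise ValueError(node.kind)

# Example: the top level event occurs when both pumps fail, or the valve fails.
p1, p2, v = BasicEvent("pump1"), BasicEvent("pump2"), BasicEvent("valve")
tle = Gate("OR", [Gate("AND", [p1, p2]), v])
p1.failed = p2.failed = True
print(is_failed(tle))                 # True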
But actually, to build the trees, we give a simpler language, just a declarative language where you can define a fault tree similarly to the way you do it with the Galileo language, which is the most famous one for defining fault trees. To illustrate how we use simulation to analyze the reliability of fault trees, let's take this simple example, where the top level event is just the conjunction of the failures of all the basic events. So if they all fail at the same time, then the top level event occurs. Calculating the reliability up to a time horizon T then amounts to calculating the probability of all these basic events being failed at the same time. In the most basic case of simulation, we just use Monte Carlo: we run a lot of simulations and we write down in how many of those runs we reach the top level event, so in how many of the runs we were able to see all the basic events failed at the same time. Then we divide that number of runs by the total number of runs and we get our reliability value, the probability we were looking for. Of course, when the system we are studying is designed to be fault tolerant or resilient, the probability of actually reaching this event in a run is very low, and the number of runs you have to make to have some confidence in the results you get by Monte Carlo simulation is immense. So it's actually not the best approach for analyzing these kinds of trees, which are highly resilient. For this we propose another way of simulating, which is rare event simulation, and which Carlos is going to present to you now. So, there are many ways to do rare event simulation to overcome this problem. One of them is called importance splitting. Essentially it starts like standard Monte Carlo, and when we have a trace that evolves in time, when it approaches the rare event that we want to observe, we clone this trial, so we split it into trials that evolve independently from then on. Some of them may get truncated for different reasons, some of them may further approach our event of interest, upon which we do the splitting again. So eventually we will observe some of the events that we want to count. When all the simulations are done, then we can estimate the probability of the rare event by counting the number of successes and dividing that by the maximum amount of splitting that we could have had. The details are not important now; this is well-known theory. What matters is that the efficiency of the method depends crucially on how much we split at each of these thresholds, on where the thresholds are located, and on this notion of importance. So in this importance splitting way of doing rare event simulation, we have a function called the importance function that assigns to a state a value called its importance, which should be proportional to the probability of observing the rare event from that state. So the higher the importance, the more likely it is to observe the rare event from a given state. This function can easily be provided for some systems; for instance, in the AND gate here, we know that the more basic events are failed, the closer we are to observing a propagation up to the top level event. So by counting the number of failed basic events, essentially we have a notion of importance function. But already for priority AND gates this is more difficult, because we then need to also take into consideration the order in which things have failed.
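Going back to the standard Monte Carlo baseline described a moment ago, here is a minimal sketch of it in Python. It is deliberately simplified to non-repairable basic events with exponential failure times, so that each run can be sampled directly and checked against a closed form; the rates and time horizon are made-up illustration values, and the actual setting of the talk (repairs, general distributions) needs the IOSA-based simulation instead.

import math
import random

rates = [0.1, 0.2, 0.15, 0.05]        # failure rates of the basic events
T = 10.0                              # mission time (time horizon)
N = 100_000                           # number of independent runs

hits = 0
for _ in range(N):
    fail_times = [random.expovariate(lam) for lam in rates]
    # AND-gate top level event: without repairs it has occurred by time T
    # exactly when every basic event has failed by time T.
    if max(fail_times) <= T:
        hits += 1

p_hat = hits / N
exact = math.prod(1.0 - math.exp(-lam * T) for lam in rates)
print(f"Monte Carlo estimate: {p_hat:.4f}   closed form: {exact:.4f}")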
So essentially the take-home is that this importance function is hard to come by for general systems, and you usually need to resort to an expert, both in the system that you are modeling and in the modeling approach that you use, the modeling formalism. Our contribution in this paper was to come up with a notion of importance function that can be deduced inductively from the fault tree structure. This works for repairable dynamic fault trees; it doesn't matter what kind of distributions the failures and repairs have — they can be non-Markovian — and it can be used to estimate transient as well as steady-state metrics. So let me walk you through the construction of this importance function. Let's start from the simplest case, a fault tree which is just a BE, a basic element. We have something that is either failed or operational, so we can use as importance function only two values: we give it importance 1 when the BE is failed, and importance 0 when it is operational. That was very easy. Let's go a little bit more complex, to an AND gate. We know that we could just add together the states of these things, so count the number of failed basic events and call that the importance of my AND gate. But then, given the previous equation, this is the same as the sum of the importances of these basic events. So now we have written the importance of my AND gate as a function of the importance of its children. But this can be extended to arbitrary subtrees. In this case we have a voting gate; the voting gate counts up to k failed elements out of its n children and then it fails. So again we can write the importance of the AND gate as the sum of the importance of this child plus the importance of this other child. So we have the same structure as before. Notice however that from the AND gate's perspective, the failure of either of its children is equally important. So it is, in a sense, unbalanced that the failure of this child can add up to, let's say, one importance unit, whereas the failure of this other child can add up to k importance units, because they both contribute equally to the failure of the AND gate. So instead of just adding these things together, we have to scale these functions so that they contribute equally, in terms of importance, to the failure of the AND gate. To do that we use a scaling factor, which we are going to introduce next. This is how we presented the importance function in the paper; I'm going to present an alternative definition for this explanation. I just do this to show you that for each of these gates we are writing the importance of the gate as a function of the importance of its children, scaled by certain factors that level them all, so that each child contributes equally to the importance of the gate. The functions for the static gates are just functions of the children. For the dynamic gates we also have to take into consideration the state of the gate itself, because there is some information there about how close the gate is to failing. Remember, for instance, the case of the priority AND gate that I mentioned before, where we have to take into consideration the order of failure of events. When we do that here — this is an importance function for a binary priority AND — essentially, if the right child failed first, this value becomes minus 1, so the importance is decreased.
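As a concrete, simplified illustration of this inductive construction, the sketch below reuses the BasicEvent/Gate classes and the small example tree from the earlier fragment. The normalisation used here is only a stand-in for the paper's scaling factors — it is chosen so that every child can contribute the same maximum amount to its parent — and the OR and voting cases are my own simplifications, not the exact formulas of the paper.

def max_importance(node) -> int:
    """Largest value the importance of this subtree can take."""
    if isinstance(node, BasicEvent):
        return 1
    if node.kind == "AND":
        return len(node.children)     # all children failed
    if node.kind == "OR":
        return 1                      # one failed child already suffices
    if node.kind == "VOT":
        return node.k                 # k failed children suffice
    raise ValueError(node.kind)

def importance(node) -> float:
    """Importance of the current state, built bottom-up from the children."""
    if isinstance(node, BasicEvent):
        return 1.0 if node.failed else 0.0
    # Normalise each child to [0, 1] so that all children weigh equally.
    scaled = [importance(c) / max_importance(c) for c in node.children]
    if node.kind == "AND":
        return sum(scaled)            # the more children failed, the closer to failure
    if node.kind == "OR":
        return max(scaled)
    if node.kind == "VOT":
        return min(sum(scaled), node.k)
    raise ValueError(node.kind)

# With pump1 and pump2 failed as before, the example tree reaches its maximal
# importance: this run is as close to the top level event as it can get.
print(importance(tle))                # 1.0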
So essentially this is a way in which we can construct the importance function inductively on a tree, by starting at its top level event and propagating, depending on the kind of gate, each of these importances until we reach the level of the basic events. This is what Raúl called a heuristic to build the importance function, which we used in several experiments to see its efficiency for implementing rare event simulation. Before that, we constructed a tool chain which takes a repairable fault tree model, gives it an IOSA semantics as Raúl explained, and then also uses the structure of the fault tree to build the importance function and give it to FIG, a statistical model checker that can perform rare event simulation — importance splitting — to compute transient and steady-state measures. We used this tool chain on several case studies of different complexities and sizes, and I want to mention that the failure and repair times, the PDFs of the basic events, really use various non-Markovian distributions, just to exercise the tool chain. So what does an experiment consist of? For instance, let's take the fault-tolerant parallel processor, a standard benchmark in fault tree analysis, and let's say that we want to compute its unreliability, so the probability of failure of this system within a time horizon T. We estimate this value as a confidence interval, doing statistical model checking, in particular for rare events, and we know that a good algorithm is one which, given a time limit, gives us back a confidence interval which is very narrow: the width of this confidence interval is going to be very small. So how do we compare different algorithms, and how do we put our importance function into the picture? Well, essentially we are going to estimate the unreliability of this system using standard Monte Carlo on one side, and on the other side we are going to use different rare event simulation algorithms which use as a back end the importance function that we compute automatically from this structure, using the method that I just mentioned. Now, to see how the efficiency increases as the system becomes more resilient, we parameterize all our models. For the fault-tolerant parallel processor we increase the number of spares here: the more spares we have, the more resilient the system is and the lower the probability of failure, given a fixed time horizon. So for these three variants of the FTPP we tested all our algorithms and checked the width of the confidence interval built for a given time limit, a given runtime of execution. The algorithm that can achieve the narrowest interval in the most resilient case is the most efficient algorithm. We did this for different systems, each with its own parameterization; we studied transient and also steady-state properties of the systems. All the details are in the paper. I just want to focus on the results a little bit, to show you that, as expected, the more resilient any of these systems became, the bigger the gap between standard Monte Carlo and the rare event simulators that use our importance function as a back end, right? That's essentially what we wanted to observe: that our importance function is an efficient automatic way to implement rare event simulation. These are the results for the transient analysis, so for unreliability; we have similar outcomes for the steady-state analysis of unavailability.
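To give a feel for how an importance function drives the splitting, here is a toy fixed-splitting estimator in Python. The biased random walk below stands in for "importance of the current state" and is made up purely so there is something concrete to run; it is not the fault-tree semantics, and real tools such as FIG implement considerably more refined algorithms (RESTART and others).

import random

THRESHOLDS = [2, 4, 6, 8]             # importance levels at which a run is cloned
SPLIT = 5                             # clones created per threshold crossing
GOAL = 10                             # importance of the rare event
N0 = 10_000                           # independent runs started near importance 0

def run(level: int, next_thr: int) -> int:
    """Simulate one run from `level`; return how many descendants reach GOAL."""
    while 0 < level < GOAL:
        level += 1 if random.random() < 0.3 else -1    # rare to drift upwards
        if next_thr < len(THRESHOLDS) and level == THRESHOLDS[next_thr]:
            # First crossing of the next threshold: clone the run.
            return sum(run(level, next_thr + 1) for _ in range(SPLIT))
    return 1 if level >= GOAL else 0

hits = sum(run(1, 0) for _ in range(N0))
# Each successful path was cloned once per threshold, hence the correction factor.
p_hat = hits / (N0 * SPLIT ** len(THRESHOLDS))
print(f"splitting estimate of the rare event probability: {p_hat:.2e}")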
Then, as the systems become more resilient, the rare event simulation methods that use our importance function can build, for the same simulation time limit, a narrower confidence interval than standard Monte Carlo. I also wanted to give a live demo of the tool that we use for this tool chain, but we are already out of time, so I just point you to this tool demo paper, which includes an artifact that you can try on your own and with which you can reproduce many of the experiments that I showed here. So to conclude, we showed how to implement IOSA semantics for DFTs, I mean we showed that we have a tool that implements this, and we showed how to build an importance function inductively from the structure of a dynamic fault tree, which can be repairable, to perform rare event simulation on it in a way that is as automatic as standard Monte Carlo but much more efficient for computing metrics when we are talking about rare events, and we have a tool chain that implements all of this. As future work we wanted, for instance, to try other importance functions based on the fault tree structure; we actually did this already, in an MMB paper by Mariëlle and me. Among the other things we want to do in the near future, one in particular is to put information about the time limit that we have, for instance for transient analysis of unreliability, into the importance function, instead of just counting the number of failed elements in general. So this is a snapshot of the people that contributed to this, and that was our contribution to TACAS 2020.
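To make the comparison criterion concrete, here is a toy crude Monte Carlo estimator with a 95% confidence interval. The Bernoulli failure probability p stands in for "the system fails within the time horizon" and is a made-up parameter, not output of FIG; the sketch only illustrates why the interval becomes useless relative to p as failures get rarer, which is exactly the gap that importance splitting closes.

```python
import math
import random

def wilson_ci(hits, n, z=1.96):
    """95% Wilson score interval for a Bernoulli probability (robust for rare hits)."""
    est = hits / n
    denom = 1 + z * z / n
    center = (est + z * z / (2 * n)) / denom
    half = z * math.sqrt(est * (1 - est) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for p in (1e-2, 1e-4, 1e-6):          # a more resilient system means a rarer failure
    n = 100_000                        # fixed simulation budget
    hits = sum(random.random() < p for _ in range(n))
    lo, hi = wilson_ci(hits, n)
    print(f"p={p:.0e}  hits={hits}  CI=[{lo:.2e}, {hi:.2e}]  relative width={(hi - lo) / p:.1f}")
```

With the same budget, the interval width relative to the true probability blows up as p shrinks, which is why crude Monte Carlo loses to importance splitting on the resilient variants.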
Dynamic fault trees (DFTs) are widely adopted in industry to assess the dependability of safety-critical equipment. Since many systems are too large to be studied numerically, DFTs dependability is often analysed using Monte Carlo simulation. A bottleneck here is that many simulation samples are required in the case of rare events, e.g. in highly reliable systems where components fail seldomly. Rare event simulation (RES) provides techniques to reduce the number of samples in the case of rare events. We present a RES technique based on importance splitting, to study failures in highly reliable DFTs. Whereas RES usually requires meta-information from an expert, our method is fully automatic: By cleverly exploiting the fault tree structure we extract the so-called importance function. We handle DFTs with Markovian and non-Markovian failure and repair distributions—for which no numerical methods exist—and show the efficiency of our approach on several case studies.
10.5446/55010 (DOI)
Hello, my name is Sung-Shik Jongmans. I'm assistant professor at the Open University of the Netherlands and guest researcher at CWI in Amsterdam. My student Ruben Hamers and I have been working on runtime verification of communication protocols in Clojure. The plan for this talk is to give a brief overview of the tool that we developed, called Discourje. But before we get started, I really do want to emphasize that a large majority of the material in our TACAS 2020 paper was developed by Ruben, so he definitely deserves most of the credit. In general, my long-term research aim is to design and implement new foundations and tools to make concurrent programming easier. And in this presentation in particular, I'll concentrate on dynamic analysis of application-level message-passing communication protocols on shared-memory architectures. Now, the motivation is that in recent years several modern programming languages, like Go, Rust, and Clojure, have started to offer channels as a programming abstraction for shared memory to make concurrent programming easier. But at the same time, by now there's also evidence that channels have their own issues too. So in a nutshell, the aim of this work was to provide some kind of tool support for this. OK, now the problem can be described as a classical verification challenge. So imagine that we have a specification S and an implementation I, such that the specification prescribes the following elements. First, we have the concurrent processes that the program consists of. Second, we have the communication channels that the processes can use to send and receive messages to and from each other. And third, we have the communication protocols that need to be followed. So for instance, in natural language, we could specify that first a number needs to be communicated from Alice to Bob, and then a number from Bob either to Carol or to Dave, et cetera. So we specify a whole tree of admissible communications. Now, assuming that we have such an S and I, the question is then how to ensure that the implementation is safe and live relative to the specification, where safety and liveness can be understood in the classical sense: bad communication actions, according to the specification, never happen in the implementation, while good communication actions eventually happen. Now, in this talk, we'll look at runtime verification as a method to answer this question. And perhaps I should also clarify that my own background is actually in static analysis. So over the past years in particular, I've been working a lot on multiparty session types, abbreviated MPST, which constitute a behavioral type system to reason about safety and liveness at compile time. Now, potentially, this MPST approach is really quite powerful. But the trouble is that it's also a bit limited in expressiveness. So the reason we initially got into runtime verification as an alternative was just to explore how much more expressive we could get by using dynamic analysis. OK. Now, in more detail, here's a graphical one-slide summary of the runtime verification approach that we use. So at the bottom, we have the implementation of a concurrent program. And on this slide, it consists of three concurrent processes: Alice, Bob, and Carol. At the top, we have a specification. Now, to verify the implementation against the specification, we need to add two ingredients to this picture. First, we add a thin wrapper around the implementation called the instrumentation.
The only purpose of the instrumentation is to make certain events of interest observable. And in our case, the events of interest are, of course, the communication actions that the processes perform, so the sends and the receives. Second, we put the specification in a new runtime component of the program called the monitor. And the purpose of the monitor is to actually verify every send and receive that can be observed through the instrumentation against the specification, in an event-driven fashion. So more concretely, this works as follows. Imagine that we start executing Alice, Bob, and Carol in this example. Then at some point, for instance, Alice may try to send number four through the channel from her to Bob. And on the slide, this action is denoted by AB, exclamation mark, four. Now, right before the send actually happens, the instrumentation will quickly intervene, interrupt the send, and temporarily block Alice. Then, while Alice is blocked, the instrumentation will ask the monitor if the communication action is actually allowed by the specification. Now, upon receiving this request, the monitor will consult the specification. And if the send is indeed allowed, it will inform the instrumentation that everything is indeed OK. Besides, the monitor will also update the specification to its remainder, so to speak, as if it makes a transition. So we're executing the specification similar to a state machine, so to speak. At the same time, the instrumentation will unblock Alice and allow the send to actually happen. Now, sometime later, Bob may try to actually receive the value. And again, the instrumentation quickly intervenes and asks the monitor if the receive is allowed. The monitor will check the specification, see that it's OK, and inform the instrumentation accordingly, so everything is still fine. But again, sometime later, Bob may try to send value true to Carol. But now, this is apparently not OK according to the specification. So the monitor will inform the instrumentation of a violation. And in this case, the instrumentation will not allow the violating communication action to actually happen, but throw a runtime exception instead. Now, the key point here is that all of this is enough to ensure safety. But in contrast, liveness is a bit more difficult to guarantee using an approach like this. We do have some ideas to explore, but it's beyond the scope of our TACAS 2020 paper. OK, so here's a more precise overview of our contributions. Our paper consists of two parts. In the practical part, we first introduce a specification language for communication protocols. This is essentially a Clojure library to write specifications in a domain-specific language and to define monitors. Second, we present an implementation language, which is essentially another Clojure library to add instrumentation to Clojure programs. And finally, we describe non-trivial examples. And we also report on a number of benchmarks to evaluate the amount of overhead that our dynamic analysis inflicts on executions at runtime. OK, so that's the practical part. In the theoretical part, we formalize the specification and implementation languages into calculi. So the specification calculus is based on global multiparty session types, but more expressive. While the implementation calculus is essentially a miniature version of Clojure, including the channel library.
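A language-agnostic sketch of this monitor/instrumentation split might look as follows (in Python rather than Clojure, with a hand-rolled event format instead of Discourje's DSL, so all names and the spec encoding here are assumptions made for the sketch). The specification is kept as an explicit state machine, which is roughly how the monitor "executes" the specification.

```python
import queue

class Monitor:
    """Holds the specification as a state machine and checks events against it."""
    def __init__(self, transitions, state=0):
        self.transitions = transitions           # {(state, event): next_state}
        self.state = state
    def verify(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"protocol violation: {event} in state {self.state}")
        self.state = self.transitions[key]       # advance to the remainder of the spec

class MonitoredChannel:
    """Instrumentation: every send/receive first asks the monitor for permission."""
    def __init__(self, sender, receiver, monitor):
        self.sender, self.receiver, self.monitor = sender, receiver, monitor
        self.buf = queue.Queue(maxsize=1)
    def send(self, value):
        self.monitor.verify((self.sender, self.receiver, "!", type(value).__name__))
        self.buf.put(value)
    def receive(self):
        value = self.buf.get()
        self.monitor.verify((self.sender, self.receiver, "?", type(value).__name__))
        return value

# Made-up spec: first an int from alice to bob, then a bool from bob to alice.
spec = {(0, ("alice", "bob", "!", "int")): 1,
        (1, ("alice", "bob", "?", "int")): 2,
        (2, ("bob", "alice", "!", "bool")): 3,
        (3, ("bob", "alice", "?", "bool")): 4}
m = Monitor(spec)
ab = MonitoredChannel("alice", "bob", m)
ab.send(4)                   # allowed by the spec
print(ab.receive())          # allowed, prints 4
ab.send(True)                # not allowed here: raises a runtime exception
```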
Now, in the rest of this talk, in the interest of time, I will skip the theoretical part of the paper and only briefly summarize our practical contributions in a bit more detail. OK? So our practical work targets the Clojure programming language. And Clojure is a version of Lisp on the JVM. Now, there are several reasons why Clojure is interesting for us. First, Clojure supports shared-memory concurrency, and it has a core library for channels. So, well, that fits the premise of this project very well. Second, Clojure is dynamically typed, which makes it a good fit with runtime verification. Third, Clojure has a powerful macro system, which made it possible to embed the specification language in Clojure itself. So, well, this should be nice from a usability perspective. Fourth, in the yearly Clojure developer survey, Clojure programmers indicate that ease of development is generally more important to them than runtime performance. So they might be slightly more inclined than, for instance, embedded C programmers to pay for communication safety guarantees with a bit of extra runtime overhead. And finally, Clojure was the seventh most loved programming language in 2019. So it's really not the niche language anymore that it used to be. Of course, this is not extremely important from a scientific point of view, but for us it was, well, more interesting to conduct this research for a mainstream-ish programming language. OK. Now, to explain how our tool works, let's have a look at the simple tic-tac-toe example. In this example, there are two processes, Alice and Bob, with two channels between them. Furthermore, Alice and Bob both have their own private copy of the game grid. Now, at runtime, it works as follows. So suppose Alice goes first and puts an X in some space of her own grid. Bob cannot see this, right? So Alice needs to inform Bob about her move explicitly. So she sends the index of the space to Bob. Now, Bob subsequently receives the message and updates his own grid accordingly. Next, he selects and puts an O in a blank space of his own grid. Then he sends a message to Alice, et cetera. So in this way, Alice and Bob continue to take turns to play the game until Alice makes a winning move and Bob is informed accordingly. At this point, Alice and Bob both know that the game is over. So they close the channels to free up resources and then terminate. So this is how the program works. Now, the specification for this program looks as follows in our DSL. First, we define two conceptual roles, identified by A and B, to represent Alice and Bob. Next, we define an auxiliary specification identified by TTTCL. And we'll use it later in the main specification, and it has the following meaning. The DSL keyword indicates that we're writing a specification. The PAR keyword prescribes free interleaving of its operands. And the three-hashes operator prescribes the close of the channel from the first operand to the second operand. So if you put all of this together, then this auxiliary specification simply prescribes that the two channels between Alice and Bob are closed in no particular order. Now, the main specification looks as follows. It's a bit more complicated than the auxiliary one, but the general anatomy is still the same. So we again start with the DSL keyword. The fix keyword prescribes recursion, where colon-x is the recursion variable. Square brackets indicate sequencing. The arrow keyword indicates a communication through the channel from the first operand to the second operand.
That additionally satisfies a constraint on the message. So in this example, we have a type constraint, but in general, it can be any boolean predicate. The alt keyword indicates choice. And finally, the ins keyword prescribes insertion of an auxiliary specification. If you again put all of this together, then the main specification states that first, a message of type long, so a number, is communicated from Alice to Bob. Next, there is a choice to either close the channels and terminate, or to continue. In the latter case, another message is communicated from Bob to Alice. And finally, there's again a choice either to close and terminate or to continue recursively. Intuitively, this is quite a simple communication protocol, but actually, it's not supported by existing MPST tools. So this example, tic-tac-toe, has actually been our motivating example for doing this work. So let's briefly also have a look at a tic-tac-toe implementation in Clojure. First, we need a bunch of generic tic-tac-toe concepts. So for instance, we represent the game grid simply as a list. And we have some functions to find a blank space on the grid, to update the grid, and to check if the grid is not final; I mean, all kinds of basic stuff. Next, we import the core library of Clojure that provides channels. And this library is called clojure.core.async. Using the channel function of this library, we define two asynchronous channels with a one-capacity buffer, identified by C1 from Alice to Bob and C2 from Bob to Alice. And so now, the only thing that we still need to do is to actually define Alice and Bob. And if we first look at Alice, she's basically implemented as a thread. And the final action of this thread is to close the outgoing channel. But before Alice does so, she executes a loop. And in each iteration of the loop, she first finds a blank space on the grid and puts an X on that space. Next, she sends the index of that space through channel C1 to Bob. Then she checks if the grid is not final. If so, she tries to receive a message through channel C2 and update her grid accordingly. And if at that point the grid is still not final, Alice enters another iteration of the loop. If the grid is final, in contrast, then Alice breaks the loop, closes channel C1, and terminates. Now, Bob is defined similarly, so let's not go through those details as well. So what we now have on the slide is basically a full implementation of tic-tac-toe in Clojure. And it's important to emphasize that this is actually all pure Clojure. So we haven't used our tool yet to write this implementation. Now, to add runtime verification to it and really start using our tool, there are basically three steps. First, instead of loading Clojure's core library for channels, we need to load our own library. Second, we need to construct a monitor for the TTT specification from the previous slide. And third, we need to add instrumentation to the channels. And more precisely, we annotate the channels with the intended sender, the intended receiver, and the monitor to perform the analysis. Besides these three small changes, nothing else needs to be modified. And in particular, the implementations of Alice and Bob, including the generic tic-tac-toe concepts, can remain exactly the same as they were. Now, when we execute the program, the monitor will verify all sends and receives. And completely unexpectedly, we actually found an unsafe execution of the tic-tac-toe program in this way. So that was a good surprise to us.
OK, now, this slide shows a few other examples of specifications that we support. The interesting thing here is that they involve parametrization in the number of processes. And this is traditionally quite difficult to do with multiparty session types. Now, to give some examples, we can specify a parametrized ring network or a parametrized star network, and many other parallel topologies as well. But the details of those, well, they're not really important in this talk. The last thing I want to show is some experimental results. Because, well, conventional wisdom is that runtime verification may inflict serious monitoring overhead. So we also wanted to study to what extent this is true for our tool. Now, we had a look at four existing concurrent programs from a third-party benchmark suite called the NAS Parallel Benchmarks. And essentially, what we did is we just wrote specifications for the communication protocols in these programs. And we extended the existing reference implementations with monitoring. So then, once we had done that, we ran the implementations both with and without monitoring on a sizable multi-core machine, for increasing numbers of processes, to also investigate scalability. And we repeated all these runs 50 times to smooth out variability. Here are the results for each of the four programs. The horizontal axis shows the number of processes, while the vertical axis shows the slowdown of the monitored versions relative to the unmonitored ones. And I just want to highlight two observations. First, if we look at the middle two charts, then the overhead of using monitors can be less than 5%. And this was really encouraging for us to see, especially since 5% seems low enough to use this technology in actual production environments as well, sort of as a fail-safe mechanism. At the same time, the charts on the two sides show that overhead can also be substantially higher, up to 4.5 times with 16 processes. Now, this is definitely too much for production environments, but it should be acceptable just for testing and debugging. So we basically identified two possible usages of our tool. OK, so this concludes my talk. Here is, again, the slide with a summary of our contributions. Regarding future work, I think there are at least two interesting avenues. First, we want to investigate support for liveness in combination with a mechanism to automatically recover from violations. The second thing that's interesting to us is verification of specifications, because it's not always easy to get the spec right, actually. OK, so that's all. Thank you for your attention.
This paper presents Discourje: a runtime verification framework for communication protocols in Clojure. Discourje guarantees safety of protocol implementations relative to specifications, based on an expressive new version of multiparty session types. The framework has a formal foundation and is itself implemented in Clojure to offer a seamless specification-implementation experience. Benchmarks show Discourje's overhead can be less than 5% for real/existing concurrent programs.
10.5446/55012 (DOI)
Hello everyone, my name is Wytse Oortwijn and I'm going to talk about the work we did on the automated verification of the parallel nested depth-first search graph algorithm. This is joint work with Marieke Huisman, Sebastiaan Joosten and Jaco van de Pol. As the name suggests, parallel nested DFS is a parallelized version of nested DFS, which in turn is a model checking algorithm, so the context of this work is model checking. Model checkers are used to verify properties of reactive systems. In order to do that reliably, it is crucial that model checking algorithms are themselves correct. However, correctness of such algorithms is highly non-trivial, as model checking algorithms are often parallelized and heavily optimized to be able to go quickly through large state spaces. But this comes at a price. As the complexity of model checkers increases, so does the difficulty in achieving that correctness. Proving the correctness of model checking algorithms is a big challenge; as far as we are aware, no mechanical verifications of parallel model checking algorithms exist. To the best of our knowledge, this work contributes the first mechanical verification of a parallel model checking algorithm, parallel NDFS. The verification is carried out in the automated code verifier VerCors. We encode parallel NDFS in VerCors' verification language and specify all correctness properties as pre- and postcondition annotations, loop invariants, et cetera. VerCors then automatically proves correctness of the algorithm. Mechanizing the correctness argument of parallel NDFS was highly non-trivial. We had to rephrase the original handwritten proof quite a bit to make it suitable for proof mechanization. Ultimately, we verified memory safety, race freedom, and full functional correctness of parallel NDFS. Our VerCors formalization consists of reusable components that can be used to verify variations of the algorithm. We demonstrate this by also verifying two optimizations of parallel NDFS with little additional effort. In particular, we propose a fix to one of these extensions by adding a check that was missing in the original paper. The remainder of this presentation consists of three parts. First I will give some background on parallel NDFS. I will illustrate how the algorithm works and why its correctness is so difficult to establish. In part two I will go into our approach for mechanically verifying parallel NDFS with VerCors. I will also show the actual VerCors encoding and highlight some interesting parts. After that, in part three, I will briefly discuss specification overhead. The NDFS algorithm operates on automata, which are the central data structure in this talk. An automaton is a directed graph consisting of a finite set of nodes that are connected via edges. Exactly one of the nodes is marked as initial, in this case S1, which is indicated by the incoming arrow. Nodes can also be marked as accepting, which is indicated by a double border. Here S1 and S3 are accepting. Automata have many important applications. One of them is LTL model checking: checking whether some system model M satisfies an LTL property phi. It has been shown that, under certain assumptions, the LTL model checking problem can be reduced to the following automata-theoretic problem: whether no reachable accepting cycle exists in a particular automaton construction that involves both M and phi. An accepting cycle is a cycle that includes an accepting state, like for example the one shown here.
And the accepting cycle is said to be reachable if it can be reached from the initial state, like so. Nested depth-first search is a linear-time algorithm for finding reachable accepting cycles. This algorithm forms the basis of the SPIN model checker. NDFS first performs an outer depth-first search that searches for accepting states. And every time it encounters an accepting state while backtracking, an inner depth-first search is started from that state to search for any cycles. Let me show an example run, starting from S1. NDFS uses colors to indicate the status of exploration, where light blue means partially explored by the outer search. From S1, the outer search may go down and explore S4 and S5. It cannot explore further now, so S5 will be marked dark blue, which means fully explored by the outer search. S4 will be marked as well, after which exploration continues to S2, S3 and S6. S6 is now fully explored, and so the outer search will backtrack to S3. But since S3 is accepting, an inner search is started before backtracking further, to search for cycles. Now S3 has two colors, one for the outer search and one for the inner search. This nested search uses pink to indicate partial exploration and red to indicate full exploration. The inner search may fully explore all blue nodes, like so, before it finds S2 and thereby an accepting cycle. Here you see sequential NDFS in pseudo-code. Both the outer and inner searches are just standard instances of DFS, and indeed maintain their progress with color sets. We are interested in proving the correctness of parallelized versions of NDFS, where correct means sound and complete. The property of soundness is that if NDFS reports a cycle, then it must be an accepting one. And completeness means that if there exists an accepting cycle, then NDFS will report it. One naive way to parallelize NDFS is by a technique called swarming. The idea of swarming is simple: just spawn n parallel instances of NDFS, each working on a private set of colors. Different threads might then choose to explore different parts of the input graph, in the hope of finding accepting cycles faster. However, threads don't cooperate, as they each have their own color set. So if the input graph, for example, doesn't have any accepting cycles, then this swarming version might actually be slower than the sequential version. Laarman, among others, found a way to make threads cooperate, and thereby improve on this swarming approach. They proposed a parallel NDFS algorithm, which is the algorithm that is central in our work, and also used in the LTSmin model checker. The key idea is to make the red colorings shared, and to skip red states during both the outer and inner search. By doing so, threads avoid exploring parts of the graph that have already been explored by others. And this leads to speed-up. However, sharing the red colorings also significantly complicates the correctness argument. Not only do threads now cooperate, they can actually now hinder each other. By coloring a state red, a thread may actually block the detection of accepting cycles. Let me show an example of this. Suppose that we execute parallel NDFS on this automaton with two threads, T1 and T2. Suppose that T1 gets scheduled first, and starts exploring left. So it visits S1, S4, S5, S6, backtracks, then starts an inner search as it backtracks on an accepting state, which will explore S4, S5, S6, and finally color S6 red. Now suppose that thread 2 gets scheduled.
So we switch to the colorings that thread 2 can see, which is only red, since red states are shared now. Thread 2 could explore right, in which case it visits S2, S3, and that's it, since S6 will now be skipped. And since thread 2 backtracks on an accepting state, it will start an inner search. And this inner search will immediately complete, since red states are skipped. So as a result, S3 will become red, which means that this accepting cycle, S2, S3, S6, S5, will never be found. We say that this cycle is obstructed. Let us now go into the argument of why parallel NDFS is correct, and our approach to mechanize this argument in the VerCors code verifier. The original paper of Laarman et al. contains a handwritten, complicated correctness argument stating that not all accepting cycles can be missed in such a way. However, this argument is not an inductive one, and therefore not directly suitable for mechanization. So we first had to rephrase the correctness argument to make it inductive. I will not explain this new correctness argument in detail, but let me try to give an outline. We started by identifying a number of invariants on a color configuration. To give an example, we've already seen that any inner search can only explore states that are already blue or cyan. Most of the color invariants have been taken from the original paper of Laarman, but we also needed an extra one to be able to close our new proof. We showed that our color invariants hold by proving that every line of the algorithm preserves them. From these color invariants, we can prove that, under certain conditions, there must exist paths in the graph that have certain coloring patterns. We call these paths special paths, and they are an important ingredient for mechanically proving completeness. In particular, these special paths allow proving that every time an accepting cycle is obstructed, there must exist another accepting cycle that is not yet obstructed. This implies that not all accepting cycles can be missed, which is used to prove completeness. If we go back to our earlier situation, we can indeed see that there is another accepting cycle, namely this one. This accepting cycle cannot be obstructed and will eventually be detected if the algorithm continues to execute. After writing our new pen-and-paper proof, we continued by mechanizing it using VerCors. VerCors is an automated code verifier that is specialized in reasoning about concurrent programs. For the verification, we took an incremental approach. We started by porting an already existing Dafny verification of sequential NDFS to VerCors, as a stepping stone to further verification. We then adapted this sequential version to perform swarmed NDFS, to get some initial parallelization. This verification of swarmed NDFS was straightforward after having verified the sequential version, as every thread works on its own disjoint color set. We continued by sharing the red colorings, adding the necessary thread synchronization, to get to parallel NDFS; as expected, this step drastically complicated the verification. First, we proved that parallel NDFS is memory safe and free of data races. Then we encoded the coloring invariants and proved that every line of the algorithm preserves them. After that, we encoded all arguments and proofs around special paths and used these to close the correctness proof. Finally, we verified two optimizations of parallel NDFS with very little additional effort. We thereby demonstrated that our verification components are reusable.
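For readers who want to play with the algorithm itself, here is an executable sketch of the sequential nested DFS described earlier, with the textbook cyan/blue and pink/red colouring. It is not the VerCors encoding, the parallel version, or the exact pseudo-code from the paper, and the example graph is invented.

```python
class CycleFound(Exception):
    pass

def ndfs(succ, accepting, init):
    """Return True iff an accepting cycle is reachable from init."""
    cyan, blue, pink, red = set(), set(), set(), set()

    def dfs_red(s):                      # inner search: can we get back to the DFS stack?
        pink.add(s)
        for t in succ[s]:
            if t in cyan:                # closes a cycle through the accepting seed
                raise CycleFound
            if t not in pink and t not in red:
                dfs_red(t)
        red.add(s)

    def dfs_blue(s):                     # outer search: look for accepting states
        cyan.add(s)
        for t in succ[s]:
            if t not in cyan and t not in blue:
                dfs_blue(t)
        if s in accepting:               # start the nested search while backtracking
            dfs_red(s)
        cyan.discard(s)
        blue.add(s)

    try:
        dfs_blue(init)
        return False                     # no reachable accepting cycle
    except CycleFound:
        return True

succ = {"s1": ["s2", "s4"], "s2": ["s3"], "s3": ["s6"],
        "s4": ["s5"], "s5": ["s6"], "s6": ["s2", "s5"]}
print(ndfs(succ, accepting={"s1", "s3"}, init="s1"))   # True: s2 -> s3 -> s6 -> s2 is accepting
```

The parallel variant verified in the paper shares only the red set between threads, which is exactly what makes the obstruction scenario above possible.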
Let me show you the actual VerCors encoding of parallel NDFS. I'll not manage to go through everything, of course, but I'll highlight some interesting parts. Here you first see several thread-local fields, followed by several operations for updating node colorings. Going down a bit, you'll find the inner search procedure, together with its contract and loop invariants. In the contract you'll find the color invariants, for example that nodes cannot be pink and red at the same time. Going down further, you'll also find the outer search. Here you see the lock invariant, which protects the ownership of all shared information. These perm predicates that you see here are predicates of concurrent separation logic and are used to establish memory safety and data-race freedom. All properties about special paths are encoded as so-called lemma functions, like the ones shown here. Lemma functions are side-effect-free programs to which the lemma property is attached as a contract. There are quite many of them. And finally, here you see the main function, together with the soundness and completeness properties. Soundness states that if the algorithm returns positively, then there exists a reachable accepting cycle. Completeness states the converse: if there exists a reachable accepting cycle, then the algorithm will return positively. Let us now very briefly discuss the amount of specification overhead. In order to do that, let me zoom out a bit. Here you see the VerCors verification file I just showed you, but now zoomed out enough to make it fit on one slide. This file is about 1270 lines of code, after removing several comments and empty lines and such to make it fit better. Of these lines, roughly 240 are the encoding of the actual algorithm, so about 16% of the total. This means that the rest of the file, about 84%, is specification, which is rather a lot. We can actually distinguish different kinds of specification. Of the part highlighted now, about 460 lines are for encoding the properties of interest, so the ones for proving memory safety and full functional correctness. And all other lines of specification are for the encoding of the lemma functions and other properties that don't directly have to do with code execution, like for example the properties related to special paths and for proving completeness. We found that an automated code verifier really helped us in our verification. As code verifiers do a lot of work for you, you don't need to reason yourself about what happens after every execution step the program might take. However, I think that all the auxiliary lemmas and meta properties, the part that is highlighted now, might be handled more conveniently in an interactive theorem prover like Isabelle or Coq. Therefore, I think that a combination of concurrent code verification and interactive theorem proving could be a very interesting direction for future research. To conclude, we presented the first automated verification of a parallel graph algorithm, parallel nested depth-first search. We hope that you now have an idea of what the algorithm does, why its correctness is so difficult to establish, and how we managed to mechanize the correctness proof. Thanks a lot for watching, and I'd be happy to answer any questions.
Model checking algorithms are typically complex graph algorithms, whose correctness is crucial for the usability of a model checker. However, establishing the correctness of such algorithms can be challenging and is often done manually. Mechanisation of the verification process is crucially important, because model checking algorithms are often parallelised for efficiency reasons, which makes them even more error-prone.
10.5446/55013 (DOI)
Hello and welcome everyone. My name is Philipp and I will tell you about NITWIT, our interpretation-based violation witness validator for C code. First, a bit of background about witnesses and model checking. What you might be familiar with is that with a typical model checker, you have an input, in our case a C program, and you have a property. We are, for example, interested in reachability properties, so whether an error state is reachable or not, and you feed both of these, the program and the property, to the model checker, and in the end you will get a result: the model checker will tell you either that the property is fulfilled, that it's not fulfilled, or it simply doesn't know because of an error or a timeout. In recent years, there is an additional output available from many model checkers, which is called a witness. And what is this witness? You might be familiar with certificates in complexity theory. The witness in the case of model checkers basically tries to, or should, encode everything that the model checker learned while doing the verification, everything it generated, for instance invariants, or simply decisions it made to reach certain states in the program. So the idea is that with the witness in hand, retracing the steps that the model checker took should be easier, so the verification should be easier the next time around. There are currently two types of witnesses available. One is for reachability properties that are violated, so these are violation witnesses. And the other one are, of course, the correctness witnesses, for a model checker attesting that a property holds, so that an error state is indeed unreachable. A simple example for a violation witness would be, in the program depicted on the bottom, that the violation witness encodes the path shown in red towards the error state. So there are decisions that were made and these are then encoded in the witness. Our focus is of course on violation witnesses, as the title already implies. And for violation witnesses, we can say that they can restrict the state space of the program that is analyzed, they can provide resolution of non-determinism, and they can guide the search towards the error location. The problem is the "can" at the beginning, because what is shown on the bottom now is also a valid violation witness, which simply tells us that an error state is reachable at some point in the program, but nothing more. So regarding motivation, our situation at the beginning of developing NITWIT was the following. We were testing open source model checkers on some C code that we sourced from the automotive industry. The C code in question was quite large, so 120 kilobytes per file. We had a lot of properties, so in the end we were interested in verifying properties on around 1,000 instances of these code files. And we had a problem, because we got conflicting results. And of course now we thought, okay, what to do with conflicting results? As you might already have imagined, the answer is of course witness validation, because if you have these model checkers that produce witnesses, you can just use the witnesses to check whether the property actually holds, or maybe it's just a spurious error in the model checker. There is a problem, because back then we only really had two violation witness validators, namely CPAchecker and Ultimate Automizer, which are model checkers that could use the witnesses to enhance their verification attempt.
I said enhance, but we observed that the witnesses didn't really strengthen the tools, so both of the tools basically performed their standard model checking procedure using the witnesses as enhancement, but the time and complexity that it took them did not substantially decrease by using the witnesses. There were also some other problems that we encountered. So here are two examples. One is that we had CBMC, a bounded model checker, detect property violations, so CBMC said an error state is reachable, but CPAchecker on the other hand established correctness on the same property and program. So we encountered the problem that, at least back then, the witnesses generated by CBMC were not in the right format that CPAchecker or Ultimate Automizer would accept. Another issue that we encountered was that the tools simply didn't agree, so CPAchecker and Ultimate Automizer had different answers. And if these tools are also the only ones available for validating witnesses, do I really trust their witness validation? And as I said earlier, if they basically just use their normal model checking procedure, can the result really differ based on the supplied witness? And in our case, it didn't. Another problem is that the entire procedure, as I stated earlier, is very time and memory intensive, so if you want to validate the 1,000 properties that we were interested in, it would basically take you as long as it took to verify the original properties with the model checkers. So we came up with NITWIT, which stands for interpretation-based violation witness validator, and this is a new execution-based validator. So it's not, like for example CPAchecker or Ultimate Automizer, a full-blown model checker, and it only explores a single path through the program, because it just interprets and executes the program. Because of this, it's also quite fast and memory efficient, and because it supports a lot of the basic C99 standard elements, it's applicable to a lot of C programs, for example almost the entirety of the SV-COMP C program set. How does it work? How is it implemented? Basically we have, on the one side, the interpreter driving everything and working on the C code, and on the other side, we have the witness automaton that we generate from the supplied witness. And we use these two in lockstep: the interpreter goes through the program, and at every point in the program where we can derive control flow edges, or where, for example, we go into a function, we return from a function, we set a variable, or we declare a variable, we hand this information over to the witness automaton. The witness automaton will check if this is also represented in the witness, and if it is, for example if there is information on how non-determinism is resolved, so how a decision is made, this information is then fed back into the interpreter; the interpreter will resolve the non-determinism in its internal program state and continue execution. At some point the interpreter will either reach the error marking and return false, which marks that this is indeed a correct violation witness, or it will terminate or run out of memory or time and return the result unknown.
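Abstractly, this lockstep between the interpreter and the witness automaton could be pictured like the tiny Python sketch below. The event strings, states and assumption format are invented for illustration and have nothing to do with NITWIT's actual GraphML witness handling or the PicoC interpreter.

```python
class WitnessAutomaton:
    def __init__(self, edges, start="q0"):
        self.edges = edges          # {(state, event): (next_state, assumption or None)}
        self.state = start
    def step(self, event):
        nxt = self.edges.get((self.state, event))
        if nxt is None:
            return None             # event not constrained by the witness
        self.state, assumption = nxt
        return assumption           # e.g. a value resolving non-determinism

def validate(program_events, automaton, error_state="qE"):
    env = {}
    for event in program_events:
        assumption = automaton.step(event)
        if assumption:              # feed the resolved non-determinism back into execution
            env.update(assumption)
        if automaton.state == error_state:
            return "false (violation witness confirmed)", env
    return "unknown", env

# Hypothetical witness: assume the nondeterministic input is 7, then the branch
# 'x > 5' is taken and the error location is reached.
edges = {("q0", "call:__VERIFIER_nondet_int"): ("q1", {"x": 7}),
         ("q1", "branch:x>5:true"): ("qE", None)}
events = ["enter:main", "call:__VERIFIER_nondet_int", "branch:x>5:true"]
print(validate(events, WitnessAutomaton(edges)))
```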
There are some things that are quite nifty in our approach with the interpreter. For example, if you need to check assumptions, so one example would be state-based guards in the witness that restrict where you go in the program, or that resolve non-determinism, these are quite easy to handle in the interpretation approach, because you can just clone the entire state of the interpreter and then have the interpreter evaluate the guard from your violation witness in the context of the current program, as if it were an if statement, for example, in the program; extract whatever is, for example, resolved in terms of non-determinism, and then simply restore the original state of the program that we copied. Something that I should also mention: NITWIT is based on PicoC. PicoC is a C interpreter, originally designed to work in embedded systems, that is available for free online, but for our use case it didn't support enough of the C standard. So we added quite a number of features and functions. Of course, a lot of these are not strictly required in its original use case. So, for example, the callbacks to extract information about program states and where we are in the program, what functions are called, or also the non-deterministic types and function support, don't need to be part of PicoC, but of course they're required for our use case. And to give you a bit of an overview of how NITWIT performs, we have collected the results from the recent SV-COMP 2021. Here you can see all six validators that participated in SV-COMP 2021. Blue means that a witness was successfully validated as a correct violation witness. So the witnesses presented here are only the violation witnesses. And as you can see, for example, here NITWIT only shows blue and unknown or a timeout. So we don't have the green color for true, because NITWIT can never decide that a program is correct, because it only explores a single path through the program. So this is in stark contrast to, for example, CPAchecker or Ultimate Automizer. To give you a bit more information on how NITWIT performs, you can see time consumption plotted over the witnesses. What you can see is basically that, because we're using the interpreter approach, we have a very, very low overhead for program start and setup: you can see that, for example, the next runner-up in terms of startup time is the CProver witness2test program, also called FShell, which is two orders of magnitude slower in initial startup and overhead than NITWIT is. Of course, CProver uses a similar approach to ours, just that they use compilation in the beginning. So they first compile the program and then execute it, instead of interpreting right from the start. And that's a trade-off, because if you have a large program, the compiled program will be a lot faster than the interpreter will ever be. And you can clearly see that in how linearly it scales over larger programs, whereas NITWIT starts taking up more time quickly with larger programs. If we look at memory consumption, we can also see NITWIT has the advantage of the low overhead, but of course, with larger programs, its memory consumption grows as well. Here you can see an n-dimensional Venn diagram for coverage. So this tells you which witnesses were covered by what tool. We can see that, interestingly enough, there is something special for everyone. So every tool has a couple of witnesses that only this tool really covers. So every tool is a necessary and welcome addition to the pool.
At least that's what I would conclude from this. Looking at the numbers, on the left you can see which verifier generated the witness, and then the columns tell you which tool could validate how many of these generated witnesses. And one interesting takeaway, I would say, is that every tool is usually the best validator for its own witnesses. So if you look at CPAchecker, it generated 1,327 witnesses in total, as seen on the right, and CPAchecker could validate 1,216 of them, which is, of course, again a hint that the witnesses are not always strong enough to really make the validation possible, even for the original tool. And we can see a similar thing, for example, with Ultimate Automizer, which could also validate the most of its own witnesses. So to sum this up, we had a data set of 12,873 witnesses in total from ReachSafety properties in the recent SV-COMP. We could validate 8,100 of those successfully with NITWIT, which is the most among all participating tools. If we look at a bit of timing statistics, so for example only on witnesses that were successfully validated, we can see that with a median time of 16 milliseconds, NITWIT is blazingly fast. If you look at the average, of course, it's a bit slower because of programs that get larger and larger; remember the overhead trade-off with compilation. If we look, for comparison, at all witnesses, so also those that we couldn't validate, where we, for example, ran into timeouts, which is, I guess, our biggest loss factor there, you can see the median climbs slightly, but the average climbs a lot more. And the same also applies for memory. So in median, we only take about two megabytes, but of course, for the larger programs, as you can see in the standard deviation, we will take a lot more. So in summary, our contribution is a new validator for violation witnesses called NITWIT. It's available on GitHub. It's open source and free. It's quite fast. It's independent of existing model checkers, so if you don't trust them, you can use NITWIT. It borrows semantics from your compiler, so you can use different compilers to get an insight, for example, into differences between compilers. It supports 32- and 64-bit programs. There are a lot more options available using compile-time flags, and maybe in the future we will also implement witness refinement within NITWIT. Thank you.
As software verification is gaining traction in academia and industry the number and complexity of verification tools is growing constantly. This initiated research and interest into exchangeable verification witnesses as well as tools for automated witness validation. Initial witness validators used model checkers that were amended to benefit from guidance information provided by the witness. This approach comes with substantial overhead. Second-generation execution-based validators traded speed for reduced strength in case of incomplete and non-exact witnesses. This was done by extracting test harnesses and compiling them with the original program. We present the NITWIT tool, a new interpretation-based violation witness validator for C programs that is trimmed to be fast and memory efficient. It verifies a record number of witnesses of SV-COMP'20 in the ReachSafety category. Our novel tool exchanges initial compilation overhead and optimized execution for rapid startup performance. NITWIT borrows C semantics from the compiler used for compilation. This offloads this hard-to-get-right task and enables using several compilers in parallel to inspect possible semantic differences.
10.5446/55014 (DOI)
Welcome to this talk about our paper, Farkas certificates and witnessing subsystems for probabilistic reachability constraints, which was accepted at TACAS 2020. My name is Simon and this is joint work with Florian Funke and Christel Baier. We're all from the Technical University of Dresden. So the context of our work is the reachability problem in Markov decision processes. A Markov decision process is something like this. So this is an example. We have some states and possibly multiple actions in each state. And to each action we have an associated probability distribution over the states, which determines with which probability a certain successor state is reached when choosing this action in a certain state. We're interested in maximal and minimal reachability probabilities of a dedicated goal state, which we will assume always exists. So the maximal reachability probability is: under the optimal choice of actions in each state, what is the probability to reach goal? And the minimal reachability probability is: assuming the worst-case choice of actions in each state, what is then the probability of reaching goal? And we will look at threshold problems for these two values in the dedicated initial state, against some rational lambda. We will assume in our work that the MDP we consider satisfies this property: the minimal reachability probability of goal or fail should be one. Intuitively, this means that there is no scheduler that avoids reaching goal or fail altogether. Okay, so before going into our results, I would like to sketch briefly how one can use linear programming to compute these values, the minimal and the maximal reachability vectors. So here's an example of some states in an MDP with some concrete probabilities added. And what we will do is construct a linear inequation system. For each pair of a state and an action it will have a row, a constraint, which says basically that the value of the state should be at most the value one gets by choosing this action. And this value is basically a weighted sum of the values of the successor states. And we have such a less-or-equal constraint for each of the actions possible in state S0. And extending this to the entire state space yields an inequation system, which I call A z less-or-equal b. And intuitively, if z satisfies these inequations, then it must be a lower bound for the minimal reachability probabilities. Right, and now this can be used to compute these values as follows. Basically, the minimal reachability probability is the unique solution of maximizing z such that it satisfies these constraints. And the maximal reachability probability is the unique solution of minimizing z such that these constraints are satisfied, where we have swapped the less-or-equal with a greater-or-equal. We will use these characterizations in our work. And the first result that we have is that we introduce so-called Farkas certificates. The idea is the following. The table should be read as such: here we have a threshold property, minimal reachability greater than, or greater-or-equal, lambda. And this property holds in an MDP if and only if there exists a vector z that satisfies a linear inequation system. So this is basically the one we have seen before, together with the constraint that the value in z should be greater than, or greater-or-equal, lambda. So if and only if we find a z satisfying this, the property will hold.
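As a concrete illustration of these two LPs, here is a small numeric sketch on a made-up two-state MDP with explicit goal/fail transitions, using scipy purely for demonstration (the states, actions and probabilities are invented, not from the talk's slides).

```python
import numpy as np
from scipy.optimize import linprog

# One row per state-action pair: z_s <= sum_t P(s,a,t) * z_t + P(s,a,goal),
# written as A z <= b for z = (z_s0, z_s1).
A = np.array([
    [1.0, -0.5],   # s0, action a: 0.5 -> s1, 0.3 -> goal, 0.2 -> fail
    [1.0,  0.0],   # s0, action b: 0.6 -> goal, 0.4 -> fail
    [0.0,  1.0],   # s1, single action: 0.9 -> goal, 0.1 -> fail
])
b = np.array([0.3, 0.6, 0.9])

# Minimal reachability: maximise z subject to A z <= b (linprog minimises, so negate c).
res_min = linprog(c=[-1, -1], A_ub=A, b_ub=b, bounds=[(0, 1)] * 2)
# Maximal reachability: minimise z subject to A z >= b, i.e. -A z <= -b.
res_max = linprog(c=[1, 1], A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 2)

print("Pr_min(reach goal) per state:", np.round(res_min.x, 3))   # expected [0.6  0.9]
print("Pr_max(reach goal) per state:", np.round(res_max.x, 3))   # expected [0.75 0.9]
```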
So the two that I have shown here, the lower bound on minimal and the upper bound on maximal probabilities, can be derived from the constraints that we have seen before, basically using the fact that satisfying the constraints yields lower or upper bounds on the corresponding property. Now, here is where we use the so-called Farkas' Lemma. It says that every system of linear inequations has another system, a dual system, such that one is unsatisfiable if and only if the other is satisfiable. And we use here that, for the threshold properties that we still haven't characterized, their complements are already covered in this table. So we can basically apply a version of Farkas' Lemma to fill in the gaps. And this gives us a full picture: for each threshold property we have a characterization as satisfiability of a certain linear inequation system that is easily derivable from the MDP. So why are we interested in this? Well, as for certificates in general, the main point is that correctness of verification results can easily be checked independently. So if the model checker not only returns yes, the property holds, but also gives us such a certificate, such a vector, then we can easily check that indeed the property holds by simply inserting it into the inequation system, so simply checking whether the given vector satisfies the inequations. Another benefit of these certificates is that, mainly depending on the lambda, the set of viable certificates can be really large. So if I'm interested only in the threshold property, it may actually be more efficient to check this constraint system for non-emptiness rather than computing the actual value of the min or max reachability and then checking against the threshold. And a third point, which is interesting, is a connection that we found to so-called witnessing subsystems. The Farkas certificates, a priori, don't tell you much about the system; in a way, they are a mathematical certificate that the property you're interested in holds, but you don't really have any more information about why it holds and so on. On the other hand, these witnessing subsystems have operational meaning in the system. So it is interesting to see that there actually exists a link between these two. And this is what I want to talk about next. So what is a subsystem? What is a witnessing subsystem? Here again we have an MDP. And a subsystem of this MDP is any MDP that you can get by redirecting certain edges to fail. So here is an example, and you can see that, for example, the state T is now entirely unreachable. So the subsystem may have fewer reachable states than the actual system. And now we say that the subsystem is a witness to some lower-bounded threshold property, either for min or max reachability, if the subsystem itself already satisfies the bound. And this is grounded in the fact that the probabilities of reaching goal can only decrease in subsystems. Because of this, having a subsystem that satisfies the lower bound implies that the original system already satisfies the lower bound. Okay, so what is the connection to Farkas certificates? Let's define these two sets, F min and F max. Basically, they're just the Farkas certificates. So these are the conditions I showed you before, for some lambda in some MDP. For the lower bound on minimal reachability this is F min, and for the lower bound on maximal reachability this is F max.
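The "easy to check independently" point can be made very concrete: given the matrices of the inequation system, checking a claimed lower-bound certificate for minimal reachability is a single substitution. The numbers below reuse the toy MDP from the previous sketch; this is an illustration, not the paper's notation.

```python
import numpy as np

def check_min_lower_bound(A, b, z, init_index, lam, tol=1e-9):
    """Does z certify Pr_min(reach goal) >= lam from the initial state?"""
    return bool(np.all(A @ z <= b + tol) and z[init_index] >= lam - tol)

A = np.array([[1.0, -0.5], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.3, 0.6, 0.9])

print(check_min_lower_bound(A, b, np.array([0.55, 0.80]), init_index=0, lam=0.5))  # True
print(check_min_lower_bound(A, b, np.array([0.70, 0.80]), init_index=0, lam=0.5))  # False: violates z_s0 <= 0.6
```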
Now, what we show is that there exists a certificate, so a vector in this set satisfying these constraints, with K non-zero entries, so K positions of the vector are not zero, if and only if there exists a witnessing subsystem for the corresponding property, so either min or max reachability greater-or-equal lambda, with at most K states. So basically it says that the zero entries in the Farkas certificates correspond to states in the system that you can remove to obtain a witnessing subsystem. This new view on witnessing subsystems yields new algorithms. We're interested in computing minimal or small witnessing subsystems, and here we mean by this a witnessing subsystem with as few included states as possible. This problem, so the threshold problem corresponding to minimal witnessing subsystems, is NP-complete. And we show that the minimal witnessing subsystems correspond to vertices of the polytope of Farkas certificates. So we can compute them by enumerating vertices of these polytopes. And we have alternative algorithms that are based on mixed integer programming on top of the constraints defining the Farkas certificates. An interesting observation for Markov chains, which are MDPs with a single available action in each state, is that, of course, in that case the minimal and the maximal reachability probabilities coincide. So we can use both formulations, both sets of certificates, when we consider the threshold property that the probability of reaching goal is at least some lambda. And using the different certificate sets actually yields different results. Because the problem is hard, heuristics are important, and we introduce a new LP-based heuristic, which iteratively solves a sequence of LPs to find small witnessing subsystems. We want to find Farkas certificates with many zero entries, so vectors satisfying the constraints that have many zeros. What we do is solve a sequence of LPs where the constraint set stays the same, it's always the set of Farkas certificates, and only the objective of the LP changes. And it works as follows. We start with the objective all ones, the vector containing just ones. And we solve this LP, so it says minimize o times z over all z satisfying the Farkas constraints. So in particular, the solution is the Farkas certificate where the sum of the entries is minimized. After having done this, we save the solution vector of the first iteration, and now the iteration goes as follows: we define the new objective value of a state i to be one divided by the solution value at i in the previous iteration. And the point is the following: if the solution had a very small value at i in the previous iteration, we would like to push it to being equal to zero, and not just very small. And for this, we assign it a very high value in the objective function of the next iteration. That is the idea. And then we solve basically the same LP, just with this new objective function. And this actually works quite well. So here we have run some experiments. This is a Markov chain benchmark from the PRISM benchmark suite. And what we compare are four heuristics for computing small witnessing subsystems. The first two correspond to our LP-based algorithm, either over the max polytope or the min polytope. So as I said, we can use both the min and the max characterization for Markov chains, as min and max coincide. But the polytopes are still different, yes.
And we compare against Comics, a state-of-the-art tool that also computes small witnessing subsystems heuristically. So we fix a reachability query and we increase the threshold on the lower bound. And what we see is that one of our configurations, the one using the max polytope, performs quite badly: it has a lot of ups and downs, and in general it is much higher than the other approaches. I should say that here we plot the increasing threshold against the number of states of the subsystem that we get as a result. So we can see that this configuration is not really optimal, but on the other hand, the other one, using the min polytope, yields quite good results. Indeed, it is usually — almost always — below or equal to the results of Comics in its better configuration. Another interesting observation is that our approach is quite indifferent to the threshold that we use, which is logical, because the threshold is just one parameter in the linear program; the linear program does not increase in size with an increasing threshold. On the other hand, Comics uses an approach where iteratively more and more paths are added until the threshold is satisfied, so it's clear that for Comics increasing the threshold will increase the computation time. Okay, so another result we have in the paper is that, well, I told you that computing minimal witnessing subsystems is NP-complete, and we show that it is NP-complete already for acyclic Markov chains, so for a very restricted class of systems. On the other hand, it is in P if the underlying graph of the Markov chain is a tree. And since our paper last year at TACAS, we have worked on an actual tool that implements these approaches and also extends them. This is now available, it's called Switss, and there was a tool paper at FMCAD last year as well. So that's all I wanted to say. Thank you very much for your attention.
This paper introduces Farkas certificates for lower and upper bounds on minimal and maximal reachability probabilities in Markov decision processes (MDP), which we derive using an MDP-variant of Farkas’ Lemma. The set of all such certificates is shown to form a polytope whose points correspond to witnessing subsystems of the model and the property. Using this correspondence we can translate the problem of finding minimal witnesses to the problem of finding vertices with a maximal number of zeros. While computing such vertices is computationally hard in general, we derive new heuristics from our formulations that exhibit competitive performance compared to state-of-the-art techniques. As an argument that asymptotically better algorithms cannot be hoped for, we show that the decision version of finding minimal witnesses is NP-complete even for acyclic Markov chains.
10.5446/55017 (DOI)
This is a video recording of a paper presentation for the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The paper that I am presenting here was submitted to TACAS 2020, and it is about the verification of OpenJDK's LinkedList using KeY. My name is Hans-Dieter Hiep from Centrum Wiskunde & Informatica in Amsterdam. The research agenda behind the work is to study the correctness of real-world and widely used software. You could immediately think of the Java programming language, a widely used programming language in industry, and we in particular focus on the Java Collection Framework, because it is a library that is used by lots of applications. And in this work we have been looking at the linked list. How do we study the correctness of such software? For that we make use of the KeY theorem prover. It is a theorem prover that inputs Java programs that are annotated with the Java Modeling Language, so these are assertions put into the comments of a Java program, without modifying the Java program that you want to verify. The theorem prover is automated and interactive: parts of the work can be done automatically, however it is indispensable in certain cases to have the capability of interacting with the theorem prover. In particular, a good point of the linked list is that, with the OpenJDK effort, the source of LinkedList became public, so we could study it. And in general, this kind of study of the correctness of software requires the source code of that software to be public. So let's just immediately start with the news that there is a bug in Java's linked list. Well, this is not very surprising; I mean, there are more than 800 bug reports for the Collection Framework in Java. And we've been focusing on this particular bug for a number of reasons, and I'll come back to that later. First of all, what is a linked list? It's a doubly linked list, and it implements a number of standard methods that every collection in the Java Collection Framework implements, namely contains, size and add. An example of a linked list looks like this. Here you see a LinkedList object, which is empty. It has a first pointer to some internal node, a size field, and a last pointer to some internal node. And now whenever you add an object, say this object here, to the linked list, then internally a node is created, to which the first and last pointers point, and the size is updated to one. Whenever you add another object — say it's the same object, so the nodes point to the same object — then we create a doubly linked list out of the internal nodes. As you would expect of a collection: if you are given a LinkedList instance and you call the add method on it with some object, then afterwards, whenever you query whether it contains the object you have added, it should return true. But in actuality, you can create a new linked list, do some operations like adding objects to it, and then at some point you add an object and contains returns false. You would also expect that if you're given a linked list, whenever you query how large it is, the answer would never be negative; it should not be possible to have a negative size. But in reality, you have some linked list, you perform some operations on it, you assert that its size is not negative, which succeeds; you add an object, and now you assert that the size is not negative, but that fails. So what's going on here?
So let's dive a little bit into the source code of the linked list — and we can do this because it is open source. If we want to call contains, contains is implemented in terms of indexOf. And indexOf itself consists of a loop which goes through the nodes, the internal nodes of the linked list, from the first towards the last. It checks whether the item contained in the node is equal to the supplied object; if so, it returns the current accumulated index, and otherwise it increases the index. And here you see the problem: an integer overflow could occur. And that's the problem in the linked list as it is currently implemented: there is a mismatch between the cached size that is stored in the LinkedList instance itself and the actual size, being the number of nodes that comprise the linked list. They could differ at some point. And it turns out that this bug is already more than 20 years old. It was introduced at the moment that the linked list was introduced into the Collection Framework in 1998. However, the bug wasn't activated yet. It only became active at the moment that Java supported 64-bit architectures, because this now allows you to create more nodes than can be counted in a 32-bit integer — and this is the integer field size that the implementation uses. And that's also how you can reproduce the bug: it requires a lot of memory. So what have we done? We have proposed a fix that bounds the size, making sure that it never outgrows whatever can be stored by a signed 32-bit integer in the cached size field. And then we ask ourselves: can we now verify that the linked list is correct? Verifying the correctness of the linked list requires a lot of effort; in our case, it has taken us approximately seven man-months. The workflow we've been following is: you first try to come up with the specification, and this is done in the Java Modeling Language. Then, given the specification, you load it into KeY and you start verifying the program. If your verification fails for some reason, you can go back to the specification step and refine your specification — for instance, your loop invariant wasn't strong enough to show that, say, a loop terminates. Another possibility is that the verification fails because there is an error in the program, and in that case you can generate a test. The test demonstrates the presence of the programming error. After you have detected the error, you can revise the code, and after the code revision you go back again to the verification step. This is, in short, the workflow we've been taking. And in particular, it's important to say that the specification part of our work took way more time than the verification part: once the specifications are all set in stone, the verification can be done in a week. What the specification comprises are the method contracts associated to all the methods of the linked list. We have to formulate a class invariant, which holds at any point at which a particular method is not actively invoked. And we also have to introduce additional conceptual storage, which is necessary to do the verification but which does not exist at runtime; these are ghost fields and ghost parameters. Then, during verification, we can automatically generate and verify certain verification conditions. But sometimes it is indispensable to use interaction, where you can introduce new formulas which shorten the proof significantly, using a cut in the proof.
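Going back to the overflow bug for a moment, a small Python model (not the actual Java code) may help to see the failure mode: the cached size field behaves like a signed 32-bit counter, while the actual number of nodes is unbounded.

```python
def to_int32(x):
    """Wrap an unbounded integer into Java's signed 32-bit range."""
    return (x + 2**31) % 2**32 - 2**31

# Model of the mismatch: 'actual' counts the real number of nodes,
# 'cached' models the int-typed size field that can overflow.
actual = 2**31 - 1              # Integer.MAX_VALUE elements already added
cached = to_int32(actual)
actual += 1                     # one more add() on a 64-bit JVM with enough memory
cached = to_int32(cached + 1)
print(actual, cached)           # 2147483648 -2147483648  -> size() would be negative
```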
Returning to the interaction with the prover: we can steer quantifier elimination and we can perform equational reasoning. And as a human, you can put your intuition into the system way more efficiently than what can be detected automatically. So some proofs could not be done automatically. To show you a little bit about what we've done in modeling the linked list, I want to tell you a little bit about what the structure is. This internal node consists of a previous reference to a node, an item reference to an element, and a next reference to a node. And then the linked list itself is constructed by forming a chain of nodes. Now, a chain is a sequence of nodes such that the first node in that sequence does not have a predecessor, the last node in the sequence does not have a successor, and for all the nodes, the predecessor is as given in the sequence of nodes. So, for instance, for the last node, its previous field indeed points to the node that occurs before it in the sequence. And the same applies for the next field of the nodes: the next field points to the next node in the sequence. This sequence is a conceptual entity; it does not exist at runtime, but it is necessary for us to do the verification. So now a linked list can be seen as a first and a last reference, as well as its size. And it is either the case that the linked list is empty, and then both first and last are null, or it is non-empty, and then both first and last are not null, and there is a chain between the first and the last node. This can be modeled in KeY using JML. Now, during the proof of certain important methods, such as the add or the contains method of the linked list, it is necessary to add further information during the proof. This is information that is implied by this invariant of the structure. To demonstrate it a little bit abstractly, the key property of the linked list is that every such chain is acyclic. That means that whenever you start at the first node and traverse through next pointers, you do not encounter the same node again. So there is no cycle whenever you go from the first node through the next pointers, and the same thing holds, of course, also from the last node with the previous traversal. To show that this lemma holds, we can prove it by contradiction: suppose it does not hold; then we can derive a contradiction at some point. So suppose we have two nodes, i and j, in the sequence, and they are the same. Then you can imagine we have built a chain as shown: we have a node i and a node j. Whenever you move to the next node from i, you must obviously also move to the same next node from j, because i and j are the same. Doing this for any k up to the end allows you to conclude that all these nodes would have been the same. But then, if that is the case, by some rewriting you can find that there is a node at a particular distance back from this node k which should then also be the same. This can now be shown to be contradictory, because the last node has a null next pointer, while this intermediary node, since it is intermediary, does not have a null next pointer but points to the next node. And null is not equal to a non-null node. Therefore we reach a contradiction. And this argument, now explained abstractly, can also be performed within the KeY theorem prover. So, to conclude our work: we have fixed and verified a piece of real-world software, namely Java's LinkedList, and we have used the KeY theorem prover and JML for doing that.
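As a side note on the chain invariant described earlier, here is an informal Python model of its shape; this is only an illustration of the well-formedness condition, not the JML specification used in the verification.

```python
class Node:
    def __init__(self, item=None):
        self.prev = None
        self.item = item
        self.next = None

def is_chain(nodes):
    """Model of the 'chain' invariant: 'nodes' is the ghost sequence; the
    prev/next pointers must agree with the positions in that sequence, the
    first node has no predecessor and the last has no successor."""
    if not nodes:
        return True
    if nodes[0].prev is not None or nodes[-1].next is not None:
        return False
    for i in range(len(nodes) - 1):
        if nodes[i].next is not nodes[i + 1] or nodes[i + 1].prev is not nodes[i]:
            return False
    return True

# Tiny usage example with two linked nodes holding the same item.
a, b = Node("x"), Node("x")
a.next, b.prev = b, a
print(is_chain([a, b]))   # True
```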
If you want to know more about this work, please read the paper we have submitted. It contains lots of interesting details about the whole process, and I hope that you enjoy it. In case of any questions, feel free to reach out to any of us. And thank you for your attention.
As a particular case study of the formal verification of state-of-the-art, real software, we discuss the specification and verification of a corrected version of the implementation of a linked list as provided by the Java Collection framework.
10.5446/55018 (DOI)
Hello everybody, my name is Alex Dixon and I'm going to talk to you today about KReach, which is a tool we've built for reachability in general Petri nets. This is work that was produced for the TACAS 2020 conference and is joint work with my supervisor, Ranko Lazić. So, a quick overview of what we've been up to. We are presenting as our primary contribution here KReach, which is a tool for deciding reachability in general Petri nets, and this is based on the work of Kosaraju, which was published at STOC in 1982. We also include a suite of libraries in the Haskell language for writing and interfacing with vector addition systems with states, if that's something that's interesting for you. And we wrap up with some results, which are surprising in that even this 1982 algorithm is, on certain classes of coverability instances, competitive against what were at the time, in 2019-2020, state-of-the-art solvers. So let's quickly make sure that we all know what we're talking about when we say reachability problems. Our input is a Petri net, which is composed of places and transitions, as well as a flow function which relates those places to those transitions. The transitions can move tokens between places, according to this flow function. And we equip this net with an initial marking, which represents the initial arrangement of tokens, and a target marking, which is the arrangement that we're looking to attain. The output of this algorithm is simply whether or not there is some sequence of transitions which gets us from the initial marking to the target marking, according to the flow function. If there is, then we output reachable; if not, then we output not reachable. Note that I'll be using the terms Petri net, vector addition system, and vector addition system with states interchangeably; there are well-known results that show that all three of these formalisms are equivalent. At the time of writing the paper, the bounds had recently moved from comically far apart to interestingly close. As of STOC 2019, we now know that the reachability problem on general Petri nets is at least not elementary, and the upper bound is known to be Ackermannian. By comparison, we already know that the coverability problem, in which we seek to either meet a target marking or exceed it, is EXPSPACE-complete. So, Kosaraju's algorithm: it built on the earlier work of Mayr, Sacerdote and Tenney, and it is a complete algorithm, which is to say it gives us an affirmative or negative answer for deciding reachability, and it will always terminate given enough time. And the good news is that this algorithm can be implemented and tested, which is what we've done here. To the best of our knowledge, this is the first time that Kosaraju's algorithm has ever been implemented and tested on real-world instances. At the time, this was a proof of decidability, so there was no real impetus to provide complexity bounds. But we know it's got to be at least non-elementary, and we will see just how good or bad it is in practice shortly. So let's quickly introduce the algorithm and see how it works. It's essentially a search through a state space where the states are decompositions of the original vector addition system, or Petri net, that we put in. We compute such decompositions using a structural predicate of these systems, and that predicate is called theta.
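Before going further into the algorithm, here is a naive explicit-state sketch of the reachability question itself, in Python; unlike Kosaraju's procedure, this brute-force search is not a decision procedure for the general problem and only gives a reliable answer on small or bounded examples, but it shows what is being asked.

```python
from collections import deque

def naive_reachable(initial, target, transitions, max_markings=100_000):
    """Breadth-first search over markings of a Petri net.
    'transitions' is a list of (pre, post) tuples of per-place token counts."""
    initial, target = tuple(initial), tuple(target)
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        if m == target:
            return True
        for pre, post in transitions:
            if all(m[i] >= pre[i] for i in range(len(m))):
                m2 = tuple(m[i] - pre[i] + post[i] for i in range(len(m)))
                if m2 not in seen and len(seen) < max_markings:
                    seen.add(m2)
                    queue.append(m2)
    # 'False' is only trustworthy if the exploration was exhaustive.
    return False

# Two places, one transition moving a token from place 0 to place 1.
print(naive_reachable([1, 0], [0, 1], [((1, 0), (0, 1))]))   # True
```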
Eventually, as with all searches, we will either find such a decomposition that fulfills the requirements, in which case we know for a fact that the original VAS problem was reachable, or we run out of decompositions to check, in which case it was not reachable. So when I talk about decompositions, what I really mean is simplification of the vector addition system by removal of things which we know do not affect reachability. And we compute these decompositions on a generalization of VAS, which are called GVAS. A GVAS is, in essence, a sequence or a chain of these vector addition systems which are joined together with arcs. And we can annotate each of these individual vector addition systems with some metadata and some constraints on the configurations that may appear inside each of these component vector addition systems. There's a very simple process by which a vector addition system can be lifted, or augmented, to become one of these generalized VAS. And as mentioned just now, it has been shown that reachability in the original VAS V is exactly equivalent to reachability in any valid decomposition of the lifted version of the same VAS. Here is an example that's been lifted from the paper. On the left, we have two states, Q and R, two transitions, T0 and T1, an input configuration MI, which is 1, 0 in state Q, and a target 0, 0 in state R. And we lift this VAS into a GVAS in a fairly simple way. The most important change that we make is that each of these component VASs has to be strongly connected, which is to say you can reach any state from any other state. We note here that we cannot reach Q from R, so we have to separate the two states out into their own self-contained strongly connected components. In a slightly more complex example, this may generate more than one vector addition system sequence, because we need to take all possible paths through these strongly connected components, and that may include or exclude different subsets on such paths. So it will actually create a family, an exponentially large family, of input GVASs, depending on the input. Now let's talk a bit more about this theta condition, which is really the heart of Kosaraju's algorithm. There are two properties involved here, theta 1 and theta 2. Theta 1 is a global property of the entire GVAS chain, and theta 2 must hold on every small component VAS along the way. And depending on which of these two properties is violated, we're able to decompose the full chain in a different way, and we simplify it according to the strategy for theta 1 or theta 2 respectively. So theta 1 is this global property that I mentioned, and it relates to reachability in a very direct way. Theta 1 is slightly more relaxed: instead of pure reachability from the beginning of the GVAS all the way through to the end of the last element of the chain, we consider pseudo-reachability, so pseudo-runs instead of regular runs. A pseudo-run means that we're allowed to go negative in certain places in the vector; so we no longer care about being bounded below by zero, as in a normal VAS. But we also need to make sure that every transition in every vector addition system with states along the way must be able to be fired an unbounded number of times along such a pseudo-run. We are able to compute this particular property using integer linear programming.
So we construct a large ILP problem for the entire GVAS, the output of which is a semi-linear set, which compactly summarizes all possible pseudo-runs. And from this semi-linear set, we can determine whether some transition is always bounded along all such pseudo-runs. If there is such a bounded transition, then we know that theta 1 is violated; if there is no such bounded transition, then theta 1 holds. If theta 1 is violated, we can unfold a violating transition by taking the maximum bound on the number of firings that it may perform, which we got from the semi-linear set, and just render out every single possible number of these firings. But instead of encoding them as transitions, we now encode them as arcs between GVASs. And if we do that, then we no longer need that transition to exist. So we've vastly increased the number of component GVASs, but we have removed one transition from all of them. I have an example here. We know in this case that T0 can be fired at most one time, and therefore we can just render out the instance where it's fired zero times, which is the top one on the right, and the instance where it's fired once, which is the bottom one on the right. Notice that minus one, one now appears as an arc between Q and Q prime, both of which are duplicates of Q but without the T0 transition, which is no longer needed. Theta 2 is a property that must hold on each of the component VASs. It requires that each of these VASs has a path from the start to the end of the VAS along which all places increase, and the same is true if you reverse all the arcs — so, effectively, a path from the final to the initial state via which all places increase. To compute this, we compute coverability from the start to the end with all places increased by one. If we can do this, then we know that theta 2 holds. If theta 2 is violated, then some place has an upper bound on its value everywhere inside the VAS, in which case we can encode all of its possible values into the state of the machine via a product construction. As you might imagine, this significantly increases the size of the instance, but we've reduced the number of places that we care about by one. We started by testing reachability on some synthetic samples, like the example I've been using. Surprisingly, the example runs in linear time with the CVC4 solver, but exponentially inside Z3. The reason for this is not clear, but we included CVC4 with the benchmarks. Notice that the time is measured in whole seconds, even for a relatively small example like this. Again, at the time of writing there wasn't a large quantity of reachability problems out in the wild. So what we were able to do instead is reduce from coverability to reachability: we augment the coverability instance with transitions which wind the places down to zero after the target is covered, and then ask to reach the zero vector. The upshot of this is that we can test our reachability procedure on instances designed for coverability. What we found is that KReach is actually able to outperform some state-of-the-art coverability checkers for particular types of instances. We believe that these instances are ones where KReach does not need to decompose very deeply in order to rule out all cases via theta 1 or theta 2. However, as you might imagine, the more decomposition KReach needs to perform, the worse the outcome is — in fact, almost all instances which required more than two or three levels of decomposition could not be completed.
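As an aside, the coverability-to-reachability reduction just mentioned can be sketched roughly as follows; this is my own illustrative encoding (an extra control place plus token-removing transitions), not necessarily the exact construction used by KReach.

```python
def coverability_to_reachability(num_places, transitions, target):
    """Cover 'target' becomes: reach the all-zero marking in an extended net.
    A fresh control place gates the original transitions; a 'cover' transition
    consumes exactly 'target' plus the control token; token-removing
    transitions are always enabled, but removing tokens early can only make
    covering harder, so the reduction stays sound and complete."""
    ext = lambda v, c: tuple(v) + (c,)          # append the control place
    new_transitions = []
    for pre, post in transitions:               # original moves need control = 1
        new_transitions.append((ext(pre, 1), ext(post, 1)))
    new_transitions.append((ext(target, 1), ext([0] * num_places, 0)))  # cover step
    for i in range(num_places):                 # drain one token from place i
        unit = [0] * num_places
        unit[i] = 1
        new_transitions.append((ext(unit, 0), ext([0] * num_places, 0)))
    zero = tuple([0] * (num_places + 1))
    return new_transitions, zero

# Usage with the naive_reachable sketch from above; initial marking gets control = 1.
ts, zero = coverability_to_reachability(2, [((1, 0), (0, 1))], target=[0, 1])
print(naive_reachable([1, 0, 1], list(zero), ts))   # True: (0, 1) is coverable
```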
If you're interested in trying the tool yourself, you can grab it from this GitHub link, which includes both the Haskell source code, which you can compile and run yourself, and Linux 64-bit binaries, which include all of the benchmarks used and a copy of CVC4. Check the readme for usage guidance, but the tool itself is used either with the -r flag, which means run the reachability checker, or -c for coverability, and there are instances of both, including the synthetic examples used in the paper, included in the release. While the tool is fully functional as is, there's still plenty of work that can be done on extensions. One would be to build out some additional parsers for different interchange formats that are standardly used for Petri nets, including the PNML format. We may also investigate whether or not it's possible to pass the invariants from the ILP construction to dedicated coverability tools that can make use of them statically, and there may also be scope to introduce some new optimizations based on developments from the Ackermannian upper bound proof of Leroux and Schmitz from 2019.
We present KReach, a tool for deciding reachability in general Petri nets. The tool is a full implementation of Kosaraju’s original 1982 decision procedure for reachability in VASS. We believe this to be the first implementation of its kind. We include a comprehensive suite of libraries for development with Vector Addition Systems (with States) in the Haskell programming language. KReach serves as a practical tool, and acts as an effective teaching aid for the theory behind the algorithm. Preliminary tests suggest that there are some classes of Petri nets for which we can quickly show unreachability. In particular, using KReach for coverability problems, by reduction to reachability, is competitive even against state-of-the-art coverability checkers.
10.5446/55022 (DOI)
Welcome to this presentation for the paper "Practical Machine-Checked Formalization of Change Impact Analysis". I'm Karl, and this is joint work with my colleagues Ahmet at Facebook and Miloš at UT Austin. Before I tell you about our formalization, let me give you some context for this work. What we're considering is building and testing in software development. In a typical workflow, a programmer submits a diff, or a change, to a software system through a version control system. The version control system in turn invokes a continuous integration system, which builds the software and performs other tasks, such as running tests and other things. Typically, this continuous integration system uses build systems like Make, Bazel, Dune or CloudMake, and it may also use regression test selection tools like Ekstazi and STARTS. The key point of using these kinds of tools is to scale with the size of the change rather than the size of the code base: you don't want this to take a really long time just because you have a lot of code; you want it to take long only because the change was large. One way to deal with this is to perform change impact analysis. We take this as the activity of identifying the potential consequences of a change to a software system. After this analysis, we want to use the information to perform fewer tasks than we might have to do without it, so we save some effort. This kind of analysis is at the core of a lot of these build systems and tools. Usually, when you want to reason that your build system, for example, is correct, you will have to reason both about the domain you're applying it to and also about the general parts of change impact analysis. The question we asked in this work was whether we could separate out the reasoning on the change impact analysis, so that it can be done once and for all, and then a domain expert can solve the other problems — for example, for the Java language, where you might do test selection and argue that this is correct. Our contribution is that we present a formal model of change impact analysis, and this is done in the Coq proof assistant, so we provide a library of definitions and proofs. From this code in Coq, we also produce a verified tool called Chip, and then we evaluated Chip by integrating it with building and testing tools. So now we're getting to the part where we present our model, and the basic concept in our model is that of a component. A component can be thought of, for example, as a file name, and we think of the set of components before and after a change is enacted on a system. We call the set of components before the change V, and we call it V prime afterwards. Then we have a set of artifacts, and the intuition is that an artifact is the content of a file, which may, of course, change when a change is made. We track this with two functions, called f and f prime, that map these components to artifacts before and after the change is made. Then we have some kind of dependencies between these components, and this is stated by dependency graphs, G and G prime, which are just binary relations on these component sets. And then we have some set of checkable components that we can run some operation on. We call this operation check, and we assume that it's side-effect free. So this might be running a test case or invoking some kind of test-running program.
Based on these initial concepts, we can define some derived concepts, like a modified vertex: we say that a vertex is modified if its artifact changed, and we can check this by comparing these functions. Then we can define what it means for a vertex to be impacted. This means that it's reachable in the inverse graph: we take the graph, flip all the edges, and then we see whether the vertex can be reached from a modified vertex; if so, we say it's impacted. So of course, a modified vertex is going to be impacted. And then finally, we have the set of fresh vertices. These are vertices that were added in this revision and were not present before. And the key idea here is that when we take the set of both the impacted and the fresh vertices, and then run check on all the checkable vertices in the union of these two sets, then we have done all the work required. That's the main idea here. Let me give you an example. Here we have a dependency graph on the left-hand side, in the before state. It has six components, out of which two, namely five and six, are checkable. Then we perform a change, namely we change the artifact for this vertex one, or this component one. Then we flip the edges of the graph, and we see that one, three, and five are actually impacted by this. And in this case, it was only five that needed to be executed, or checked. So all we need to do here is to run check on five; we don't, for example, have to run check on six. So we might save a lot of time and money by doing this analysis. Okay. So, the correctness: of course, we have formal statements, which I will not show you, but the intuition behind them is this: if we only check impacted and fresh vertices, this is actually sound and complete. So we are checking precisely the vertices we need to check, and all the checks that we did not run would have had the same results as before. This, of course, we cannot prove out of thin air, so we have to assume some sanity properties. Specifically, for example, we have the property that the direct dependencies of a vertex are the same in both revisions if the artifact is the same. So if a vertex's direct dependencies change, then you also have to change the vertex itself. And this is typically what happens in most programming languages. We also have that a vertex with the same artifact in both revisions is checkable in the new revision if and only if it's checkable in the old revision. So basically, you cannot turn something into a checkable or a non-checkable vertex without changing the artifact. And finally, we have that the outcome of executing a checkable vertex is the same in both revisions if everything in its closure is the same. This, of course, sometimes does not hold in practice: for example, if you run a test two times on exactly the same system, it might return different results because of non-determinism and so on — your test is flaky. So we assume that this does not happen; this is a key assumption to prove correctness. Now we're moving to a hierarchical analysis. Usually, when you run Make, this is what you have in mind: you have a set of coarse-grained components, files in this case, and then inside these files there are fine-grained components, like methods or other kinds of structures, which are connected.
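Before going into the hierarchical case in more detail, here is a small Python model of the basic, flat analysis just described: impacted vertices are those that reach a modified vertex in the flipped dependency graph, and only checkable vertices among the impacted and fresh ones need to be re-checked. The concrete edge set below is hypothetical, chosen so that it reproduces the six-component example from the talk.

```python
def modified(V, V_prime, f, f_prime):
    """Vertices present in both revisions whose artifact changed."""
    return {v for v in V & V_prime if f[v] != f_prime[v]}

def impacted(V_prime, G_prime, mod):
    """Vertices reverse-reachable from 'mod' in G_prime, where an edge
    (u, v) means 'u depends on v'; the edges are flipped before searching."""
    rev = {}
    for u, v in G_prime:
        rev.setdefault(v, set()).add(u)
    seen, stack = set(mod), list(mod)
    while stack:
        v = stack.pop()
        for u in rev.get(v, ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def to_check(V, V_prime, f, f_prime, G_prime, checkable):
    fresh = V_prime - V
    imp = impacted(V_prime, G_prime, modified(V, V_prime, f, f_prime))
    return checkable & (imp | fresh)     # only these need to be re-checked

# Six components, 5 and 6 checkable, component 1 modified (hypothetical edges).
V = V_prime = {1, 2, 3, 4, 5, 6}
f = {v: "old" if v == 1 else "same" for v in V}
f_prime = {v: "new" if v == 1 else "same" for v in V}
G_prime = {(3, 1), (5, 3), (6, 4)}
print(to_check(V, V_prime, f, f_prime, G_prime, checkable={5, 6}))   # {5}
```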
The methods, of course, are contained in these files. So our model extends to this: we now have two sets of components, one coarse-grained set and one fine-grained set, and there's a partition mapping from the fine-grained components to the coarse-grained ones. And then, of course, we have two dependency graphs: we can use one graph to analyze the coarse-grained components and the graph shown at the bottom here for the fine-grained ones. So we have two key strategies to do this. We have the over-approximation strategy, where we figure out which files have been impacted and then we execute everything inside those files, regardless of whether it changed. And this is sort of what Make does, or typically what Make does. And then we have the compositional strategy, where we're a little bit more clever. We find the impacted vertices in the coarse-grained set U, then we figure out a subgraph on the fine-grained set V, and then we perform change impact analysis in this subgraph. That way we can avoid running check on the fine-grained components that did not change, while still getting a boost from our analysis of the U set. And the correctness statement here is essentially the same as for the basic model. Of course, there are some assumptions here that are sanity properties on this partitioning. For example, if we have two different coarse-grained components, and there are two fine-grained components that live in each of these and they are related, then of course the coarse-grained components have to be related as well. Again, if you're using the file system analogy, this would absolutely hold. You also have that if the artifacts for the coarse-grained components are the same, then they have the same partitioning. So again, this would absolutely hold in a file system world, but maybe not in other domains. And finally, if a coarse-grained component has the same artifact, then all of the fine-grained things inside it have to be unchanged as well. So you can't have a method that's changed when the file is unchanged, for example. Now let me give you some idea of what the Coq encoding looks like. I don't want to give you too many details, but it's around two thousand lines of specification and five thousand lines of proofs. It uses finite sets and graphs, and the definitions are quite similar to what I just showed you in terms of the sets and so on. But you have some notations, like connect here, which is transitive closure; that's a little bit different. And we use a predicate here to define whether something is fresh or not, and this is the union operator. To evaluate our formalization — so again, we prove these things under assumptions — we extracted executable code from it, we defined a general interface, and the result is the tool called Chip. Then we integrated Chip with three tools: one regression test selection tool, one regression proof selection tool, and a build tool, just to cover all bases. And then we compare the results, and the time taken to run the tools with Chip and without Chip, on a bunch of projects. So here is the list of projects we use. For example, here we use a set of Java projects for Ekstazi, which is a Java tool, and these are quite big projects.
You can see the URLs here on GitHub. Then for Coq, we used three quite big projects — or at least three big projects and one smaller one. And for the build system Tup, we used these projects, which use Tup as their build system. We used a bunch of revisions to run Chip on, and here you can see the results for Ekstazi and Chip. You can see that the time increases a bit: we went, for example, here from 188.67 seconds to about 195 seconds, so a bit slower. And you can see the change impact analysis time also went up a bit. But the results were actually all the same, so we selected the exact same things; this was some kind of validation. Overall, the time was not that much worse, so we did not have to pay too much to get the verified components. And here are the results for iCoq, the regression proof selection tool, and Chip. Again, we have the same story, where the time increases a bit, and sometimes a bit more, but overall the results were the same and the increase was in most cases not too bad. Finally, we have the build system. Here the changes are quite small in most cases, so we only report the change impact analysis time. You can see that the differences were quite small — in most cases this is milliseconds — and the results were all the same. So in conclusion, we presented this formalization of change impact analysis using the Coq proof assistant. We took the verified code and integrated it with build and test tools, and we validated, in some sense, that this model captures what we think it does. We provide all the code on GitHub. Feel free to ask questions after the presentation. Also feel free to contact me or open issues on GitHub if you have questions specifically about, for example, the Coq code, which I did not go into in all that much detail.
Change impact analysis techniques determine the components affected by a change to a software system, and are used as part of many program analysis techniques and tools, e.g., in regression test selection, build systems, and compilers. The correctness of such analyses usually depends both on domain-specific properties and change impact analysis, and is rarely established formally, which is detrimental to trustworthiness. We present a formalization of change impact analysis with machine-checked proofs of correctness in the Coq proof assistant. Our formal model factors out domain-specific concerns and captures system components and their interrelations in terms of dependency graphs. Using compositionality, we also capture hierarchical impact analysis formally for the first time, which, e.g., can capture when impacted files are used to locate impacted tests inside those files. We refined our verified impact analysis for performance, extracted it to efficient executable OCaml code, and integrated it with a regression test selection tool, one regression proof selection tool, and one build system, replacing their existing impact analyses. We then evaluated the resulting toolchains on several open source projects, and our results show that the toolchains run with only small differences compared to the original running time. We believe our formalization can provide a basis for formally proving domain-specific techniques using change impact analysis correct, and our verified code can be integrated with additional tools to increase their reliability.
10.5446/55025 (DOI)
I am Frédéric Lang from Inria Grenoble. This talk is co-authored with Radu Mateescu from Inria Grenoble and Franco Mazzanti from CNR Pisa. This talk deals with the verification of systems consisting of processes that run in parallel. The model of concurrency inherits from process algebra, where parallel processes are represented by labelled transition systems, that is, automata whose edges are labelled by the system's communication actions. Those actions are executed asynchronously, meaning that from an observer's point of view, independent concurrent actions interleave. There are operators enabling actions to be synchronized, renamed, or hidden, that is, renamed into a special internal action written tau. The properties to be verified on the system are expressed in the action-based modal mu-calculus, L mu, which is an expressive temporal logic that subsumes CTL, ACTL, PDL, and so on. Of course, verification has to tackle the famous state-space explosion problem. To do so, we rely on LTS reductions based on action hiding and bisimulations, such as strong or divbranching bisimulation. Strong bisimulation allows those states which have exactly the same future to be identified as equivalent, without making any distinction in the treatment of internal and visible actions. Bisimulation reduction consists in merging equivalent states, as in this example, where all equivalent states have been merged. Strong bisimulation equivalence preserves all properties of L mu; therefore, systems can always be reduced for strong bisimulation. Divbranching bisimulation is a shorthand for divergence-preserving branching bisimulation. It is a variant of the famous branching bisimulation which additionally preserves the cycles of tau transitions. Divbranching bisimulation allows the source and target states of tau transitions to be considered as equivalent, as long as they exhibit the same divergences and lead transitively to the same choices of visible actions. Divbranching bisimulation reduction is illustrated in this example, where we can see that much more compression is obtained than with strong bisimulation. When verifying a given temporal logic formula, we have to choose a reduction relation that preserves the formula. Based on this criterion, strong bisimulation can always be chosen. To reduce the risk of explosion, divbranching should be favored as much as possible. In previous work, a fragment of L mu, called L mu dsbr, was identified, which is preserved by divbranching. This fragment resembles L mu, but the modalities of L mu are replaced by restricted versions called weak modalities. This fragment subsumes other well-known logic fragments, such as CTL and ACTL without the next operator. Therefore, if we can prove that the formula can be expressed in this fragment, then we can reduce the system for divbranching bisimulation. However, if this is not the case, we have to resort to strong bisimulation. In fact, we will see that we can do better. The fragment preserved by divbranching bisimulation offers three weak modalities written in PDL style. Note that we have to distinguish here between action predicates that are satisfied by the internal action tau, which are written alpha tau, and action predicates that are not satisfied by tau, which are written alpha a. The first modality is called the ultra-weak modality. It expresses that there exists a finite sequence of actions satisfying alpha tau, which traverses only states satisfying the formula phi 1 and ends in a state that satisfies the formula phi 2.
The second modality is called the weak modality. It differs from the ultra-weak modality only in that the sequence ends in a state in which there is an action satisfying alpha a and leading to a state satisfying phi 2. The third modality is called the weak infinite looping modality. It expresses that there exists an infinite sequence of actions satisfying alpha tau, which traverses only states satisfying the formula phi 1. Note that as soon as a formula contains a modality that cannot be expressed in a weak form, then divbranching reduction cannot be used, because it may not preserve the truth value of the formula. From now on, modalities that are not expressed in a weak form are called strong modalities. There are interesting properties that combine weak and strong modalities, such as searching for the occurrence of two consecutive actions, or of a choice between actions, after traversing an arbitrary sequence of possibly internal actions. In these examples, we can see that these formulas can be represented in a form where only a subset of the actions satisfy action predicates contained in strong modalities. We will see that this can be exploited to refine reduction with the new notion of sharp bisimulation. From now on, we assume that phi is represented in a form that combines weak and strong modalities. This form induces a partitioning of the system actions between those satisfied by the action formulas occurring in strong modalities, which we call strong actions, and the others, which we call weak. Sharp bisimulation is parameterized by the set of strong actions. We show here its definition, for those who already know the definitions of strong and divbranching bisimulations. You can see here that sharp bisimulation actually combines those definitions by distinguishing how weak and strong actions are treated. In particular, under some conditions, a transition labelled by a weak action can be matched by a sequence of tau transitions followed by the same weak action, as in divbranching bisimulation, whereas this is not allowed for strong actions. If we now look back at our previous example, we can see that sharp reduction here, considering A as the only strong action, merges many more states than strong reduction, and almost as many as divbranching bisimulation. This is real progress, as formulas which have only A as a strong action can now be checked on an LTS with three states and three transitions, instead of the strongly reduced LTS, which here has twice as many states and transitions. Remember that such formulas cannot be checked on the divbranching-reduced LTS, since they are not preserved by divbranching. Sharp bisimulation has nice properties, which we have formally proven. These properties are not surprising, but they are useful and generalize known properties of strong and divbranching bisimulations. First, sharp bisimulation with respect to a set of strong actions A_s is adequate with the fragment of L mu consisting of the formulas whose strong modalities match only actions in A_s. Adequacy here means that the equivalent processes are those satisfying exactly the same formulas of the logic fragment. Second, strong and divbranching are of course instances of sharp bisimulation. For strong bisimulation, the set of strong actions is the universe of actions, including tau. For divbranching bisimulation, the set of strong actions is empty. Moreover, the parameterization with respect to a set of strong actions defines a lattice of bisimulation relations.
Third, the fewer actions are strong, the better the reduction is potentially. Finally, sharp bisimilarity is a congruence for parallel composition and hiding, which allows sharp reduction to be applied compositionally to processes and intermediate compositions. We applied sharp bisimulation reduction in several contexts. The first one was the RERS challenge in 2019. We participated in the category consisting in verifying CTL properties on concurrent systems. In this category, 180 CTL properties had to be checked on nine different systems of increasing size. We used the CADP toolbox developed at Inria to tackle these problems, and in particular its compositional verification tools. Most of the properties were not preserved by divbranching, so we first tried compositional strong reduction, but this was not sufficient to palliate state-space explosion for medium-sized problems. This is where the idea of sharp reduction arose. It turned out to be extremely successful, allowing all properties to be checked in only five hours on a standard laptop. We won all possible gold medals in this category. To give you an idea of the gain obtained using sharp reduction instead of strong reduction, I will take only a single example. This property of the challenge is not preserved by divbranching bisimulation, as it contains strong actions, namely A8, A34, A57, and A60. Note that the other actions can be hidden in the system without changing the truth value of the property, and also that, apart from the strong modalities that can be seen in this formula, it also has hidden weak modalities, encoding the CTL operators always finally and always weak until. The system on which this property was checked had 54 million states when generated without reduction. When using compositional strong reduction, the size of the largest LTS that had to be generated was about 10 times smaller. But when using compositional sharp reduction, the state-space reduction was dramatic, leading to a largest LTS having only 200 states. We also applied sharp reduction in other contexts. First, we revisited some experiments done initially in 2009 by Garavel and Thivolle, using strong reduction at that time. The green line on top of the first diagram shows the number of states of the largest LTS generated using compositional strong reduction, whereas the red line concerns sharp reduction. The blue line concerns our approach presented at FM 2019 and can be ignored here. Note that the vertical axis is on a logarithmic scale. This diagram shows that the gain between strong and sharp bisimulation is at least one order of magnitude. We also applied sharp reduction to examples of the RERS 2018 challenge. The diagram here shows that sharp reduction allowed us to solve problems that could not be tackled using strong bisimulation. In this talk, we presented a new family of bisimulation relations, named sharp bisimulations, which fills the gap between strong and divbranching bisimulations. These relations are useful to verify formulas containing both strong and weak modalities, which in general do not enable divbranching reduction. The reduction in number of states may be dramatic as compared to strong bisimulation, and often close to the reduction obtained using divbranching. The experiments presented in this paper can be reproduced using CADP and the material made available in a Zenodo archive. Note that our tools only implement a partial sharp reduction, which is described in the paper.
In the future, it will be nice to implement a full minimization algorithm. This is unfortunately not as simple as combining the algorithms for strong and divbranching bisimulation minimization. There are issues related to cycles of tau transitions, which cannot be compressed beforehand, unlike what divbranching minimization algorithms do. Another point that should be improved is how to help the user extract weak and strong actions from the property. We already identified formula patterns, in particular for CTL, but automated tools would be welcome. The smaller the set of strong actions, the better. However, for some formulas there is no unique minimal set. In addition, we have shown that extracting a minimal set is at least as hard as testing the satisfiability of a mu-calculus formula, which makes the automatic extraction of a minimal set of actions an EXPTIME-hard problem. However, sets of strong actions that are not necessarily minimal can still be useful. Finally, at RERS 2019, we also used the sharp reduction approach in the context of LTL verification, with good results, as we also won all gold medals in this category. But this approach remains to be formalized. Thank you for your attention.
We showed in a recent paper that, when verifying a modal $\mu$-calculus formula, the actions of the system under verification can be partitioned into sets of so-called weak and strong actions, depending on the combination of weak and strong modalities occurring in the formula. In a compositional verification setting, where the system consists of processes executing in parallel, this partition allows us to decide whether each individual process can be minimized for either divergence-preserving branching (if the process contains only weak actions) or strong (otherwise) bisimilarity, while preserving the truth value of the formula. In this paper, we refine this idea by devising a family of bisimilarity relations, named sharp bisimilarities, parameterized by the set of strong actions. We show that these relations have all the nice properties necessary to be used for compositional verification, in particular congruence and adequacy with the logic. We also illustrate their practical utility on several examples and case-studies, and report about our success in the RERS 2019 model checking challenge.
10.5446/55026 (DOI)
Good morning, good afternoon, ladies and gentlemen. Welcome to this TACAS 2020 presentation. I'm going to show you the main ideas of our fast algorithm for branching bisimilarity on labelled transition systems. Perhaps you remember that five years ago, TACAS was held as a physical conference in the Netherlands. At that conference, two authors presented a paper entitled "An O(m log n) algorithm for stuttering equivalence and branching bisimulation". The two other authors then showed that in a few cases it is not actually O(m log n). I don't want to be too negative about the algorithm: it was quite fast already, we will see that also. Only it handled a few corner cases suboptimally. The four of us together repaired the problem in the TACAS special issue of ACM Transactions on Computational Logic, by putting a large and ugly bandage over the problem. And actually, the algorithm handled branching bisimulation by a translation to stuttering equivalence. This translation blows up the problem, and so the time complexity becomes a little bit bigger. Today, I'm presenting a solution to both. We peeled off that bandage and simplified the handling of new bottom states. Our current article also handles branching bisimulation directly, so it is truly O(m log n). And that explains the title of our TACAS 2020 publication, "An O(m log n) algorithm for branching bisimilarity on labelled transition systems". Now, what is a labelled transition system? Labelled transition systems are often explained using coffee machines. This is the model of a coffee machine that can also provide hot water, the favorite drink of many Chinese. The gray arrows show internal transitions. For the user, the three states at the top and at the very bottom cannot be distinguished. So there is a non-trivial set of indistinguishable behaviors here. And when we minimize this LTS, the three states become just one, and this is what remains. Now let's make this model a little bit more interesting. At some moment in time, the coffee may run out. This is also an internal action: the coffee machine decides on its own when there is not enough coffee powder left to provide one more cup of coffee. Still, the two states on top can be distinguished by visible actions that happen afterwards. Therefore, we have two non-trivial sets of indistinguishable behaviors. And when we minimize this, we find the internal transition still to be in the minimized LTS. Branching bisimilarity is the relation that describes these indistinguishable behaviors. First, let me show you what a branching bisimulation is. If a relation describes these indistinguishable behaviors, and two states S and T are related, and state S has a transition to S prime, then T should have a similar transition to T prime, and S prime and T prime are also indistinguishable. For T to T prime, we allow that T first takes a few internal steps before it actually takes the visible transition with the same label as from S. But to avoid that the states that T visits in between jump too far off, we require that the last state Tn, just before it takes the visible transition, is also related to the original state S. Branching bisimilarity now is the largest of these relations. It is the union of all branching bisimulations, and it is the coarsest branching bisimulation. What is the smallest model that is indistinguishable? This question is answered by branching bisimilarity minimization. There are a number of algorithms to minimize a labelled transition system.
For strong bisimilarity, that is bisimilarity without internal actions, there is the algorithm of Kanellakis and Smolka, which was quite quickly accelerated by Paige and Tarjan to O(m log n). For branching bisimilarity, including these internal actions, Groote and Vaandrager found an algorithm shortly after Kanellakis and Smolka found one for strong bisimilarity, but it took much longer to accelerate it. More than 25 years later, we, in our previous publication, accelerated this algorithm — with the ugly bandage. I have to say that actually these O(m log n) algorithms, both Paige-Tarjan's and our earlier algorithm, had only almost complexity O(m log n). Valmari found a trick to make the algorithm truly O(m log n) on labelled transition systems in 2009, and we combine our earlier algorithm with the one of Valmari here. Additionally, we simplified the handling of new bottom states. All these algorithms that I've mentioned are based on partition refinement. The idea of partition refinement is to approximate an equivalence relation from above. You start with a coarse partition: for example, you assume that all states are equivalent, so you have a partition with only one equivalence class, or one block, namely all states are in one class. Then there may be some problems with this equivalence relation, so you repair a problem by refining one of these blocks into multiple sub-blocks. Once all the problems have been solved, you have reached the true equivalence relation. Now, to work efficiently, the faster algorithms remember the solved problems to avoid repeating work. They do that with a second partition. Paige and Tarjan use a second partition of the states to remember the solved problems, and Valmari actually uses a partition of the transitions. And that is what we are using here too. So we have these two partitions: a main partition of states into blocks — states in different blocks are shown to be not branching bisimilar, so the algorithm has found a proof that they are not branching bisimilar — and an auxiliary partition of transitions into bunches — transitions in different bunches cannot simulate each other; I will explain that shortly. And blocks have been split accordingly, so problems caused by these transitions have been solved. What does it mean that transitions can simulate each other? Remember the definition of branching bisimulation: we had a transition from S to S prime and a transition from Tn to T prime. If two transitions can be in such a diagram, then they can simulate each other. If the transitions cannot be in such a diagram, then they cannot simulate each other. In our algorithm, we refine these two partitions alternately. So we start with two coarse initial partitions. We first find a problem in the partition of transitions and replace it by a finer partition. That violates the invariant, so there is now a problem also in the partition of states, which we resolve by replacing the partition of states with a finer partition. Then there may be another problem in the partition of transitions that we solve. And again, a problem in the partition of states pops up, and we solve that as well. And so we go on, alternately refining the partition of transitions and the partition of states, until we find that there is no more problem in the partition of transitions. And then, by these invariants, the partition of states also has no problems anymore, and it contains the true equivalence classes. To make it efficient, we use the principle "process the smaller half".
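To make the basic partition refinement idea concrete, here is a naive, Kanellakis-Smolka-style sketch in Python for strong bisimilarity; it has none of the efficiency machinery just described (no transition partition, bunches, or smaller-half bookkeeping), so it only illustrates the "approximate from above and keep splitting" principle, not the O(m log n) algorithm of the talk.

```python
def strong_bisim_partition(states, transitions):
    """Naive partition refinement for strong bisimilarity.
    'transitions' is a set of (source, label, target) triples.
    Start with one block containing all states and keep splitting blocks
    whose states have different signatures (which blocks they can reach
    under which labels) until nothing changes."""
    partition = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        def signature(s):
            return frozenset((lbl, block_of[t])
                             for (src, lbl, t) in transitions if src == s)
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            return new_partition
        partition = new_partition

# Two states doing 'a' to a third one are bisimilar; the third is separate.
print(strong_bisim_partition({0, 1, 2}, {(0, "a", 2), (1, "a", 2)}))
```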
Because the actual refinement of these partitions is very similar to what we did in the previous publications, I only mention this principle: when we refine a block, we only process the states that are in the smaller resulting sub-block, and when we process a block of states, we also process their transitions. As a result, each transition is processed at most log n times, and that gives a total running time in the order of the number of transitions times the logarithm of the number of states. The main new part of our algorithm pertains to bottom states. What are bottom states? Look, in this block, the state on the left has an invisible transition, a grey arrow, to another state in the same block. That means it is a non-bottom state. Bottom states, like the two states on the right, have no such transitions. Now I can ask: this bottom state down here has a yellow transition; can the left non-bottom state simulate this transition? It does not have a yellow transition itself, but what it can do is take the grey arrow, the internal transition, and try again — and yes, that state has a yellow transition. What this means is that we can ignore non-bottom states when we decide whether a block needs to be split or not: non-bottom states simulate other states trivially. But what may happen is that, for some other reason, this block is split into two parts, and that non-bottom state over there becomes a bottom state. New bottom states can cause problems, because we do not yet know whether such a state simulates or does not simulate other states, so we need to resolve these problems. What I want you to remember is: only bottom states can cause problems in the partition of states, and when splitting a block, more states can become bottom states. To resolve these problems, we only need to register additional splitters to restore the invariant for these new bottom states. Splitters are a way to record problems that need to be solved right away. Registering these splitters only requires time proportional to the outgoing transitions of the new bottom states, so when we sum up over all states, we get a total time in the order of the total number of transitions. Now for the overall time complexity. I have already explained the time spent on new bottom states. The remaining running time is assigned to splitting the two partitions. For the partition of states, a state is processed at most log n times, and every time this happens we spend time proportional to the incoming and outgoing transitions of this state; summing up over the states, we spend time in the order of the number of transitions times log n. For the partition of transitions, a single transition is processed at most log(n²) + 1 times, and whenever this happens, constant time is spent on that transition. Now, log(n²) is 2 log n, so when we sum up over all transitions, we spend time in the order of the number of transitions times log n. Overall, everything fits within the number of transitions times the logarithm of the number of states, and that is the worst-case time complexity. So the algorithm is efficient in theory. But sometimes the overhead of an efficient algorithm is so large that it is slow in practice. To check for that, we ran the 31 largest benchmarks from the Very Large Transition Systems suite and three of our own. We compared our algorithm with three older algorithms, all in the toolset mCRL2. The traditional Groote–Vaandrager algorithm from 1990 has time complexity O(mn).
It takes a long time, but it is quite memory efficient. Blom and Orzan in 2003 devised an algorithm with an even worse theoretical complexity, but in practice it is much faster; this comes at the cost of using additional memory. Our previous algorithm, which was almost O(m log n), is much faster than either of these, but again at yet more cost in memory. I think the main problem of our previous algorithm is that it handles labelled transition systems by a translation to Kripke structures and stuttering equivalence, so it needs to store all the data twice. Today's algorithm, which is truly O(m log n), is the fastest of all and also needs about the same amount of memory as the Groote–Vaandrager algorithm, so it has no memory disadvantage. To get these figures, we ran every combination 10 times, and here we report the averages rounded to significant digits. In conclusion, the algorithm I presented is the fastest and most memory-efficient algorithm for branching bisimilarity, both in theory and in practice, and it is simpler than our previous algorithm from ACM ToCL 2017. Branching bisimilarity is not only useful in itself; it is also used for preprocessing if you want to calculate another behavioural equivalence, for example weak bisimilarity: you can accelerate that calculation by first minimising modulo branching bisimilarity and then computing weak bisimilarity on the quotient. There is a technical report version of our publication that contains a few additional details, and it is all implemented in the toolset mCRL2 from Eindhoven University of Technology, which you can download from the website www.mcrl2.org. I thank you for your attention and I hope to see you in person soon again.
Branching bisimilarity is a behavioural equivalence relation on labelled transition systems (LTSs) that takes internal actions into account. It has the traditional advantage that algorithms for branching bisimilarity are more efficient than ones for other weak behavioural equivalences, especially weak bisimilarity. With m the number of transitions and n the number of states, the classic O(mn) algorithm was recently replaced by an O(m(log |Act| + log n)) algorithm [Groote/Jansen/Keiren/Wijs, ACM ToCL 2017], which is unfortunately rather complex. This paper combines its ideas with the ideas from Valmari [PETRI NETS 2009], resulting in a simpler O(m log n) algorithm. Benchmarks show that in practice this algorithm is also faster and often far more memory efficient than its predecessors, making it the best option for branching bisimulation minimisation and preprocessing for calculating other weak equivalences on LTSs.
10.5446/55027 (DOI)
Hello, my name is Makai, and I'll be discussing our work on partial order reduction applied to model checking of synchronous hardware, in particular for finding deep bugs. We use one of the standard symbolic transition system models, where X is a set of current-state variables, X' is a corresponding set of next-state variables, and then we have the usual I(X), the initial-state constraint, and T(X, X'), a transition relation encoding the dynamics of the system. We use P(X) to denote a property, and a state s is a full assignment to the current-state variables. In this talk, we also assume that the symbolic transition system is functional, meaning that given all of the current-state variable values, there exists exactly one next state. In addition to the standard symbolic transition system model, we also use something that is not so standard for synchronous hardware: we define actions. These actions are predicates that hold when the system performs some operation, and an action might also have an enable condition, meaning the action can only be taken if the enable is high. This is built on top of the symbolic transition system: given a symbolic transition system, you can look at it and decide to split the different operations it can perform into actions. In a synchronous system, you can perform multiple actions at once, so we define an instruction set to be all the action configurations, that is, the power set of the actions, and enable conditions are lifted in the natural way: if you have actions a within your instruction i, that instruction is enabled if each of its actions is enabled. I'd like to emphasize the difference between asynchronous and synchronous systems within our framework. For those of you familiar with partial order reduction, you'll know that it is typically applied in the asynchronous case, which makes this distinction very important. In our framework, an asynchronous system would have interleaving actions, meaning you only apply one at a time, and such systems, for example concurrent programs, tend to have syntactic hints about any symmetry in the system. In a synchronous system within our framework, actions can be applied simultaneously; that's why an instruction is a member of the power set of actions, and so you have to consider every possible configuration of true or false values for the actions. We are very interested in adjacent pairs of instructions in this work, and it will be important that there is a much larger number of pairs of instructions than pairs of actions. Additionally, synchronous systems don't tend to have syntactic hints about any symmetry in the system; it is more of a semantic symmetry. Now let's consider a potential problem. Say we have a system with two actions. What happens if we unroll for a bounded model checking run up to bound K? There are many possible action configurations when you consider this unrolled trace: at any step, a0 or a1 could each be enabled or disabled, so across the unrolling you already get exponentially many configurations in K, and if you generalize this to multiple actions, it is exponential in the number of actions times K. This can be very hard on the solver, as it has to consider an exponential number of possible action configurations.
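As a small illustration of the action/instruction view just described, here is a sketch using the FIFO example that appears later in the talk; the state encoding, the action names and the enable conditions are our own illustrative choices, not the paper's encoding.

```python
# Actions with enable conditions, and the instruction set as the power set of actions.
from itertools import chain, combinations

def actions(depth):
    """Each action is (name, enabled(state), apply(state)); state = tuple of items."""
    return [
        ("push", lambda s: len(s) < depth, lambda s, v="d": s + (v,)),
        ("pop",  lambda s: len(s) > 0,     lambda s: s[1:]),
    ]

def instructions(acts):
    """The instruction set: every subset of the actions (the power set)."""
    return list(chain.from_iterable(combinations(acts, r) for r in range(len(acts) + 1)))

def enabled(instr, state):
    """Lifted enable condition: an instruction is enabled iff all its actions are."""
    return all(en(state) for (_, en, _) in instr)

acts = actions(depth=2)
for instr in instructions(acts):
    names = [n for (n, _, _) in instr]
    print(names, "enabled in empty FIFO:", enabled(instr, ()))
```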
In particular, this might be too much work, because you could have symmetric actions: when you start from the same state — say you start from some state s0 and you apply a0, assuming all the actions are enabled here — you get to state s1, and then if you apply a1, you get to state s3. Maybe you could have applied them in the other order, a1 and then a0, and still reached the same state. This kind of semantic symmetry can be very hard to identify. If you have a large number of actions and a lot of symmetry, this can be very difficult for a model checker, even though there are actually not too many states to explore overall, because there are many symmetric paths reaching the same states. Now, people know that this can happen in concurrent software, but can it really happen in hardware? Here is a simple example. Say you just have a FIFO implemented in hardware, with an operation pop that removes an element from the FIFO and push that adds an element to the FIFO. Consider two possible traces. You start in a state where you have one data element, d0, in your FIFO. In the first case, you pop, which makes your FIFO empty, and then you push the value d1, and you end in this state here. Instead, if you had pushed d1 first, then you would have a state in the middle which has both items, and then you pop d0 off. The final state is exactly the same, but there are two ways to get there. And if you imagine a bounded model checking trace unrolled to a large bound, this can cause a lot of problems, and we actually saw this in practice. So what would you want to do? The idea of partial order reduction is to reduce the number of paths. It was originally pioneered for explicit-state model checking and later extended to the symbolic case for concurrent programs, but it has mostly focused on asynchronous systems. With this same diagram, the goal would be to choose one of these paths and disallow it. Notice that we have only disallowed the second transition, because we don't want to rule out s2: if there were a bug in the state s2, this could be the end of the trace, and you would still hit it. And we are not ruling out s3 by removing this transition either, because you can still get there through this path; the other path was just another symmetric way of reaching the same state. It is also optional to include a guard. This gives you a more expressive partial order reduction: you only disallow a transition if some guard holds on that state, which allows more targeted, conditional pruning of paths. And that brings us to the ideal partial order reduction. If you had a system with a lot of symmetry, the state space might look something like this, almost a grid. Now imagine that for one symbolic step we consider each individual transition. You could add a constraint to the transition relation to rule out some of that symmetry: here we say that if the instruction i0 is enabled, but instead you apply i1, then you cannot apply i0 in the next transition. In some sense, you are just giving preference to i0: if you could apply i0, you have to apply it first, but you can still switch to i1 later. So the point is, if you are interested in some state — say your starting state is here and you want to reach this state — you cannot get there through this path anymore, or this path, but you can still get there through here.
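A possible rendering of the constraint just described, as it might be added on top of a bounded-model-checking unrolling, is sketched below. We use the Z3 Python API only to build the Boolean formulas; the variable names and the trivial guard are illustrative assumptions, not the paper's actual encoding.

```python
# Sketch of the partial-order-reduction constraint on a BMC unrolling of bound K.
from z3 import Bool, BoolVal, Implies, And, Not, Solver

K = 4  # unrolling bound

# One Boolean per step: "instruction i is enabled / applied at step k".
en_i0   = [Bool(f"en_i0_{k}")   for k in range(K + 1)]
appl_i0 = [Bool(f"appl_i0_{k}") for k in range(K + 1)]
appl_i1 = [Bool(f"appl_i1_{k}") for k in range(K + 1)]
guard   = [BoolVal(True) for _ in range(K + 1)]   # optional guard; True = always prune

def por_constraints():
    """If i0 was enabled but i1 was applied instead, forbid i0 in the next step
    (giving preference to i0), but only when the guard holds."""
    cs = []
    for k in range(K):
        cs.append(Implies(And(guard[k], en_i0[k], appl_i1[k], Not(appl_i0[k])),
                          Not(appl_i0[k + 1])))
    return cs

s = Solver()
s.add(por_constraints())   # added on top of the usual I(X0) and T(Xk, Xk+1) constraints
print(s.check())           # still satisfiable: the constraints only prune symmetric paths
```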
So again, the goal with partial order reduction is to rule out as many paths as possible without ruling out any reachable state: even with these transitions disabled, you can still reach any state that was reachable in the original system. Something to notice here is that if you do not have the option to apply i0, you have to be careful not to rule it out in subsequent transitions. In this case, this transition does not exist — there is no i0 here — so you might apply i1, but now you cannot disallow i0, or else you would not be able to get to these states over here. That is why the enable condition is part of the constraint: if you did not even have the option to apply i0, you must still allow applying it after i1. In that case, if you are interested in this state, for example, you can still get there through this path, so any reachable state is still reachable. This is the goal, and we want to apply it to our system. If you have a guard, that is just a more nuanced case: perhaps there is only symmetry in your design under certain conditions, so you can guard this constraint with some arbitrary condition, and that is the only time you disallow these transitions. This is a good time to recall the difference between asynchronous and synchronous systems in our framework. If you remember, asynchronous systems have independent interleaving actions, and a synchronous system has instructions: an instruction is some configuration of actions, where multiple actions can be enabled at once. The important thing here is that there is a large number of pairs of instructions, exponential in the number of actions. What this means is that it is very challenging to apply partial order reduction over all instructions: there are so many of them that enumerating all the symmetry over all the different pairs overwhelms the solver. Additionally, since you can apply multiple actions at one time, many pairs have no symmetry. Take our FIFO example: between pushing-and-popping and just pushing there is not really any symmetry; the symmetry exists between a single push and a single pop, depending on the state of the FIFO. So our solution is something called reduced instruction sets. The idea is to find a new instruction set: recall that the instruction set was the power set of the actions, where you consider applying multiple actions at once. It might be that not every configuration of actions is necessary to maintain the reachable states of the system. If we find a reduced instruction set that still allows reaching any state that was reachable in the original system — and we hope that this instruction set is much smaller than the original one — then we can apply partial order reduction by first limiting the instructions that can be applied, and then using partial order reduction on this smaller instruction set. It turns out that automatically finding a minimal instruction set is very difficult, because you need to show that the system with the reduced instruction set I_R can simulate the system with the full instruction set. So we focus on a special case: splitting instructions into sequential actions. The idea of sequentially splitting instructions is just the most straightforward way of reducing the size of an instruction. For example, say you had an instruction that applies a0, a1, and a2 simultaneously.
It might be that you could apply a0 and a1, which gets you to some intermediate state s1 — which was already a reachable state, because {a0, a1} is a valid instruction — and then apply a2 and get to the same final state. If this is true for any reachable starting state, then you can simply disallow the {a0, a1, a2} instruction and instead only apply a0 and a1 together, followed by a2. You have then been able to remove one of the instructions from your instruction set. So we have this high-level algorithm sketch, and the idea is to show that you can reduce your instructions to a smaller set. You start with some c equal to the number of actions, and it is a bit of a proof by induction where c is decreasing: you want to show that c simultaneous actions can be split sequentially into smaller instructions. In this case, we try to show that any instruction of size c can be simulated by i0, where i0 is some subset of the original instruction whose size is less than c, followed by i1, where i1 is just a single action, and still reach the same final state. If we continue this process in our algorithm and decrease until c equals 1, then we have found a reduced instruction set where only one action is applied at a time, and this is much more like an asynchronous system. So the big idea is that we want an algorithm for a targeted forall-exists decision procedure. That is, we would like to show that for all reachable states and for all instructions whose number of actions equals c, there exists a decomposition — with i0 a subset of i, followed by a single action a from the original instruction — such that applying i0 and then a reaches the same final state in all cases. Note that this algorithm does not need to return the decomposition; we are only trying to show that we are not ruling out any reachable states, so we do not actually need to know what the exact decomposition is, just that one exists. We need to know which instructions are kept in the reduced instruction set I_R, but not exactly how each instruction is decomposed and in what order. In particular, different instructions of the same cardinality might require different delayed actions, and to handle this, the algorithm uses counterexamples to determine this automatically. For example, consider two different instructions of size three: the first is {a0, a1, a2}, the second is {a0, a2, a3}. For the first, perhaps the best decomposition is delaying a2: you apply a0 and a1, and then if you apply a2 after that, you reach the same final state. The second instruction is also size three and also contains a2, but maybe delaying a2 does not result in the same final state; in that case you apply a0 and a2, followed by a3, and then you reach the same state. That is what I mean when I say you do not need to return the decomposition: knowing the difference between these two is not something the algorithm has to report. It does have to consider these cases, but the user does not need to know them; the user only needs to know that any instruction of size three can be decomposed into a smaller instruction followed by a single action. This is our sequential splitting technique; a brute-force, explicit-state rendering of this forall-exists check is sketched below.
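The sketch below is only illustrative: the paper performs this check symbolically on the hardware model, whereas the toy model, the action table and the function names here are our own.

```python
# Brute-force version of the forall-exists check: every reachable state and every
# instruction of size c must admit a decomposition into a strictly smaller
# instruction followed by a single delayed action.
from itertools import combinations

def apply_instr(apply_action, state, instr):
    """Apply a set of actions simultaneously; simultaneity is handled by the
    user-supplied model function `apply_action`."""
    return apply_action(state, frozenset(instr))

def splittable(states, all_actions, apply_action, c):
    """For all states s and instructions i with |i| = c:
    does there exist i0 ⊂ i and a ∈ i with apply(apply(s, i0), {a}) = apply(s, i)?"""
    for s in states:
        for instr in combinations(all_actions, c):
            target = apply_instr(apply_action, s, instr)
            ok = any(
                apply_instr(apply_action,
                            apply_instr(apply_action, s, set(instr) - {a}), {a}) == target
                for a in instr)
            if not ok:
                return False, (s, instr)   # counterexample: this instruction cannot be split
    return True, None

# Toy model: the state is an integer counter, actions add different amounts.
ACTIONS = {"a0": 1, "a1": 2, "a2": 4}
def apply_action(state, instr):
    return state + sum(ACTIONS[a] for a in instr)

print(splittable(states=range(5), all_actions=sorted(ACTIONS), apply_action=apply_action, c=3))
```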
The overall methodology of our technique is as follows. First, manually annotate the actions of the system: take a transition system representation and partition its functionality into different actions. You then use the algorithm to find a reduced instruction set, or just to show that one is possible; a more complicated system might need some user guidance. Then you restrict the system to only use instructions from the reduced instruction set. Finally, you check all the unique unordered pairs of instructions in the reduced instruction set — which is a much smaller set — for path symmetry, and for any pair that is symmetric, you can rule out one of the paths. If necessary, you can suggest guards manually, if certain conditions need to hold for the symmetry; our tool also has a mode where you supply guard hints and it tries to find a small subset of them to use. Our experiments use a set of parametrized hardware designs, such as FIFOs and arbiters, and we also evaluate on commercial library components, which are also FIFO implementations. We check data integrity, making sure that no packet is dropped or reordered, and all of these designs have injected deep bugs. These are parametrized designs, and we swept the data width and the depth of the FIFOs from 2 to 128 by powers of two, and ran with and without partial order reduction coupled with reduced instruction sets. Here are the first results, on the commercial designs. The line in the middle is where both would take exactly the same amount of time; regular BMC is on the x-axis and BMC with partial order reduction and reduced instruction sets is on the y-axis, and the dotted lines mark an order of magnitude improvement. You can see that for small run times, which happen to be the small parameters, it is faster to just run regular bounded model checking, which makes sense because our technique has overhead. But as the parameter sizes increase and the run times grow, you see a big speed-up — here even two orders of magnitude — and there are many timeouts for regular BMC but not with our technique, although some instances still time out for both. We see a similar pattern on our open-source designs: up to multiple orders of magnitude run-time improvement, and timeouts at two hours for regular BMC but not with our technique. This is also true for the arbitrated FIFOs; although they seem more clustered, the impact is similar. By the way, the number N here is the number of FIFOs, and the parameters are swept for all the FIFOs in the design. Thank you for your attention, and please feel free to reach out with any questions or comments.
Symbolic model checking has become an important part of the verification flow in industrial hardware design. However, its use is still limited due to scaling issues. One way to address this is to exploit the large amounts of symmetry present in many real world designs. In this paper, we adapt partial order reduction for bounded model checking of synchronous hardware and introduce a novel technique that makes partial order reduction practical in this new domain. These approaches are largely automatic, requiring only minimal manual effort. We evaluate our technique on open-source and commercial packet mover circuits – designs containing FIFOs and arbiters.
10.5446/55028 (DOI)
Hello everyone, my name is Claude Marché, and I am going to present our work on analysing the safety of software package installation in the Debian distribution. I am from Université Paris-Saclay, and this is joint work with colleagues from Université de Paris. Let me first explain what we mean by safety of software package installation. In the Debian distribution, a software package is a bundle that contains files to be installed on the disk, but also scripts that are executed to carry out the installation, and in fact also the upgrade or the removal of the package. These scripts are mostly written in the shell language, and one important thing to note is that they are executed with administrative permissions, meaning that errors in these scripts can have very bad consequences on the installed system — for example, damaging data belonging to other packages or to the user. This safety issue is of course addressed, first of all by intensive testing of package installation, but it is impossible to cover all possible package installation scenarios with tests. This is why we propose a form of formal analysis of the installation scenarios, in an arbitrary context. To illustrate what can go wrong, here is a failure scenario that we identified in our study. We start from a minimal installation of Ubuntu in a Docker image, and we create a file /etc/sgml. Then, when we try to install the sgml package, the result is as shown. The dpkg command, which is responsible for carrying out the installation, reports that the preinst script unexpectedly returned a non-zero status. The error is in fact harmless, but the Debian policy for installation scripts forbids such fatal errors: it requires catching possible errors and handling them in an appropriate way, so as to keep the system in a sane state. In general, what should be verified about installation scripts? As this example shows, we must check that scripts do not produce execution errors, that is, they should never return a non-zero exit code unexpectedly. But there is also a document describing the Debian policy for packages, which requires much more, namely properties such as: if the installation of a package fails for any reason — and this can always happen — and you then try to reinstall the same package and this time it succeeds, the result should be the same as if the installation had already succeeded the first time. So these are properties of the form: a failure followed by a success should be equivalent to a single success. In the policy document there is a list of scenarios described as flowcharts, as shown here. This flowchart describes the installation process for a package that is not yet installed: it specifies which installation scripts are executed, and when the process stops.
For example, here the preinst script is called with the argument "install", and if it fails, then the postrm script should be called with the argument "abort-install" — presumably to clean up the installation in case of failure — and the diagnosis of this failure should be reported with an appropriate message. In the document there are eight other scenarios of this kind, and some of them are significantly more complex than this one. Our goal in the work presented here is to verify whether packages correctly follow this policy and, in particular, these scenarios. In our project, the CoLiS project, we built a toolchain to carry out such an analysis. In this diagram we present schematically how a package is analysed. On the left, you see that the package content is extracted: first the static content, that is, the files to be installed on disk, and the installation scripts. From this data, and in fact for each scenario, we proceed with a symbolic execution of the scripts. This execution is symbolic in the sense that the file system on which the script operates is not a concrete one, but is represented by an abstraction. This abstraction takes the form of tree constraints. A tree constraint is an abstract representation that covers a set of possible file systems, possibly even an infinite set. The results are then what we call symbolic relations between tree constraints; these relations relate the file systems given as input to the file systems obtained as output. The symbolic execution engine is responsible for performing the abstract execution of the script: it computes an abstraction of all the possible executions and, from all of these, produces diagnoses. This whole setting for the symbolic execution of scripts integrates several previous works, which are summarised on this slide. We start with a parser for shell scripts, called Morbig, which is of interest on its own. We built a concrete interpreter for shell, and then a symbolic interpreter, which combines our dedicated concept of tree constraints with a symbolic execution engine. We also have a formal proof that the symbolic interpreter correctly approximates the set of concrete executions. That is the platform itself; we also need a formal specification of the atomic commands, such as mkdir for creating directories, the commands for copying files, and so on. These are formulated and specified in terms of symbolic relations between input and output. We put all this together into a global engine for executing the scenarios and producing the diagnoses. Let me show what this gives on my example, the sgml package. This is the content of the preinst script of this package. If the script is called with the argument "install", it attempts to create two directories if they do not already exist; one of them is the directory named /etc/sgml. The problem is, as you can see here, that there is no protection against the possibility that a file with the same name already exists. The test will report that there is no such directory, so the script attempts to create it — and this fails, because there is already a file with that name.
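To make the failure concrete, here is a toy model of that scenario: the preinst fragment behaves like `test -d /etc/sgml || mkdir /etc/sgml`, which fails when a regular file named /etc/sgml already exists. The file-system model and the function names are ours; the real analysis, of course, works on symbolic tree constraints rather than a concrete dictionary.

```python
# Toy concrete model of the sgml preinst failure described above.
class Failure(Exception):
    pass

def test_d(fs, path):
    """POSIX `test -d path`: true iff path exists and is a directory."""
    return fs.get(path) == "dir"

def mkdir(fs, path):
    """POSIX `mkdir path`: fails if an entry with that name already exists."""
    if path in fs:
        raise Failure(f"mkdir: cannot create directory '{path}': File exists")
    fs[path] = "dir"

def preinst(fs):
    """Fragment in the spirit of the sgml preinst: create the directory if the
    `test -d` check says it is not there (no protection against a plain file)."""
    if not test_d(fs, "/etc/sgml"):
        mkdir(fs, "/etc/sgml")

fs = {"/etc/sgml": "file"}       # the scenario: a regular file already occupies the name
try:
    preinst(fs)
except Failure as e:
    print("preinst exits with non-zero status:", e)
```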
Here is what our symbolic interpreter produces — you can see here a small view of the formula that expresses the symbolic relation between the input and the output. It is a logical formula whose variables denote nodes of the file system; the atoms state, for instance, that a node is a directory, or a symbolic link, or that one node is a child of another. Something more readable for humans is the execution trace considered here. At the end, the fifth point is that the test command checks whether the directory exists; there is indeed no directory with that name, so the test returns false, and then the execution of the mkdir command fails, because the target name already exists. This is one example of symbolic execution; let me now give a picture of the overall results. We ran our platform on a corpus made of all the packages of the Debian distribution as of October 2019. There are more than 10,000 packages and 28,000 scripts, which means more than 100,000 scenarios to handle. In fact, more than half of the runs are failures, but they are failures of the analyser itself, meaning that our platform is incomplete. The majority of these failures are due to the script using a command that is not supported, so our platform is simply not yet complete. But there are also 19 failures reported on the scripts themselves, which means potential bugs. We identified bugs at several stages of the analyser, not only at the level of symbolic execution. We identified parsing bugs, where the shell script does not conform to the POSIX standard for shell; we found, for example, incorrect options passed to the atomic commands, which is a rather surprising finding; and we can note in particular the five bugs identified by the symbolic execution itself. In fact, we also identified bugs in a system script called from the main scripts of dpkg. In total, more than 150 bugs were reported. So, we have a toolchain, developed in the CoLiS project, that has been quite successful: we provided formal semantics for shell and for the atomic commands, the toolchain is operational, and it allowed us to identify many bugs. There is also room for improvement because, as you have seen, the scenarios are still not fully supported by the platform. This study taught us a few lessons, and I would say that even if the whole platform is automated, human intervention is still needed: we still have to figure out the intended meaning of the policy, or the exact semantics of an atomic command. Also, when the platform reports a bug, filing a bug report is not automatic: we have to investigate and understand a possible scenario before actually submitting a bug report. I invite you to look at the paper in the proceedings for more discussion and the conclusions. Thank you for your attention.
The Debian distribution includes more than 28 thousand maintainer scripts, almost all of them are written in Posix shell. These scripts are executed with root privileges at installation, update, and removal of a package, which make them critical for system maintenance. While Debian policy provides guidance for package maintainers producing the scripts, few tools exist to check the compliance of a script to it. We report on the application of a formal verification approach based on symbolic execution to find violations of some non-trivial properties required by Debian policy in maintainer scripts. We present our methodology and give an overview of our toolchain. We obtained promising results: our toolchain is effective in analysing a large set of Debian maintainer scripts and it pointed out over 150 policy violations that lead to reports (more than half already fixed) on the Debian Bug Tracking system.
10.5446/55029 (DOI)
In this talk I will describe a new algorithm for the solution of mean-payoff games, obtained by integrating the traditional concept of progress measure with the notion of quasi-dominion, originally introduced in the context of parity games. Mean-payoff games are turn-based games of infinite duration played over an arena by two players, here called diamond and square. Each player controls the positions of the corresponding shape. Positions are labelled with a numerical value called a weight; alternatively, weights can be placed on edges. The game consists of the players choosing edges exiting the positions they control; these choices induce infinite paths called plays. The goal of each player is to force a play on which its winning condition is satisfied, and the two conditions are opposite to each other. The payoff of a play is the long-run average of the weights along the play. If this payoff is strictly positive, then player diamond wins, otherwise the opponent does; for this reason, in the context of mean-payoff games the two players are also called positive and negative. In the example, if the two players make the highlighted choices, we can identify two infinite plays: ab, with payoff minus one half, winning for negative, and afdc, with payoff one quarter, winning for positive. The decision problem is known to be both in NP and in co-NP, and the currently known upper bound is pseudo-polynomial, being linearly dependent on the maximum positive weight in the game. Mean-payoff games have applications in the context of formal specification, verification and synthesis of systems with quantitative requirements, where the weights can be interpreted as penalties or rewards associated with system choices. Our aim here is to show how the convergence to the solution of the most efficient known algorithm, small energy progress measures, can be sped up by using quasi-dominions, a notion originally introduced in the context of parity games. Small energy progress measures is a progress-measure improvement algorithm. It is based on the idea of computing a progress measure for a player, namely a local condition between adjacent positions that witnesses the existence of a winning strategy for that player. Concretely, a measure function associates to each position a natural number that corresponds to an estimate of the payoff of the best play starting in that position. The correctness of the progress measure approach relies on the following invariant property, which states that the measure of any position must always under-approximate the actual payoff that the positive player can enforce along some finite play starting from that position. For instance, in the example, the positive player can enforce at most a measure of value 4 for position a. Indeed, if the positive player chooses the move ac, the negative player can in turn choose the move cd, inducing the path acd with payoff 5, or the move cb, inducing the path acb with payoff 4. Since his goal is to keep the measure as low as possible, he will choose the latter. Hence the best play the positive player can enforce from a has payoff at most 4. Now, observe that a finite play whose payoff is greater than the sum of all the positive weights in the game must contain some positive-weight position twice, and therefore a positive-weight cycle. This, together with the invariant property above, entails that a position with measure greater than the sum of all positive weights in the game guarantees that the positive player can enforce a finite play that contains a positive-weight cycle.
Therefore, he can enforce a divergent infinite play from that position. As a consequence, every such measure can be substituted with the value infinity, signalling a win for player positive. The approach essentially works as follows. Players are allowed to change the measure of their positions by choosing moves: choosing a move, say (v, u), results in assigning to v the measure obtained by adding the weight of v to the measure of the adjacent position u. Intuitively, the positive player will try to increase the measure of its positions as much as possible by choosing the appropriate moves, while the negative player will try to keep the measure of its own positions as low as possible using the moves at his disposal. The process terminates when all the measures stabilise, basically when neither of the two players is able to change its measures. This is captured by the notion of progress measure. A measure function is a progress measure if each position of the positive player, player diamond, has a measure that dominates the measure granted by each of its outgoing moves, while each position of the negative player dominates the measure granted by at least one of its moves; in other words, μ(v) ≥ w(v) + μ(u) must hold for every move (v, u) of a positive position, and for at least one move of a negative position. If we assume for simplicity that there are no positions with infinite measure, then we can easily prove that a progress measure witnesses the existence of a strategy for the negative player that induces only plays that eventually get trapped in a non-positive cycle, all of which are winning for him. To see this, it suffices to take any strategy for player negative that chooses, at each of its positions, a move satisfying condition 2. Once this strategy is fixed, the game becomes a single-player game in which all positions satisfy the progress condition with respect to their outgoing moves. Let us now take an arbitrary cycle in this single-player game, for example the cycle from v0 to v3, and sum up the inequalities of the progress condition corresponding to the edges along that cycle. After rearranging the terms, we obtain the weight of the cycle on the left-hand side and a partial summation of a telescopic series on the right-hand side. The value of this last summation is the difference between the measures of positions v0 and v4, where v4 coincides with v0 itself. Hence the cycle has non-positive weight, and the infinite play induced by that cycle cannot have a positive payoff. Therefore, all the plays compatible with the chosen strategy are indeed won by player negative. If the progress measure contains positions with measure greater than the sum of the positive weights — which are winning for player positive — we can simply remove these positions from the game and apply the same argument to the resulting subgame. In conclusion, once a progress measure is obtained, the set of positions with measure greater than the sum of the positive weights forms the winning region of player positive, while the remaining ones form the winning region of player negative. A progress measure can then be obtained as the least fixed point of the monotone lift operator reported here, in which the positive player applies a maximal-increase policy, updating its measures with the move that grants the maximal value, while the negative player follows a minimal-increase policy. A small executable sketch of this basic lift iteration is given below; the example that follows it serves two purposes.
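Here is the small executable sketch announced above. It is a simplified, explicit reconstruction of the basic lift iteration as described in this talk — measures start at zero, only increase, and any measure exceeding the sum of the positive weights is treated as infinity, a win for player positive — without the quasi-dominion policy. The example game and all names are our own, and the encoding is only meant to illustrate the iteration, not to reproduce the paper's algorithm.

```python
# Basic lift-operator iteration for a mean-payoff game, as a toy reconstruction.
INF = float("inf")

def solve(positions, owner, weight, moves):
    """owner[v] in {'+', '-'}; weight[v] is an integer; moves[v] is a non-empty
    list of successors of v."""
    top = sum(w for w in weight.values() if w > 0)      # threshold for "infinite" measure
    mu = {v: 0 for v in positions}                      # start from the bottom measure

    def lift(v):
        vals = [weight[v] + mu[u] for u in moves[v]]
        best = max(vals) if owner[v] == "+" else min(vals)
        best = max(best, 0)                             # measures are natural numbers
        return INF if best > top else best

    changed = True
    while changed:                                      # Kleene iteration to the least fixpoint
        changed = False
        for v in positions:
            new = max(mu[v], lift(v))                   # measures only increase
            if new != mu[v]:
                mu[v], changed = new, True

    win_positive = {v for v in positions if mu[v] == INF}
    return mu, win_positive

# Example: a cycle between c and d whose weights sum to zero (no divergence),
# plus an escape move for the negative player to a neutral self-loop.
positions = ["a", "c", "d"]
owner  = {"a": "-", "c": "-", "d": "+"}
weight = {"a": 0, "c": 1, "d": -1}
moves  = {"a": ["a"], "c": ["d", "a"], "d": ["c"]}
print(solve(positions, owner, weight, moves))           # player negative wins everywhere
```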
On the one hand, it shows how a measure function is obtained, and on the other hand, it shows the weaknesses of the approach. The table on the right reports the values of the measure for each position in the game. We start with the measure function assigning 0 to every position. The first application of the lift operator simply assigns to the positive-weight positions their own weight as measure. At this point, though, the negative player is forced to increase the measure of position c by choosing the minimal increase it can get, namely by choosing the move cd. This increase in turn triggers an increase for position d. These positions will keep increasing their measures, and the negative player will keep choosing the move cd every time, as it grants the minimal increase in measure, until, after 2K+1 iterations, they both reach measure K. At that point, while position d still needs to increase its measure by one, c can stop, since it now dominates the measure granted by the move ca. The final measure function is a progress measure: indeed, c dominates the move ca, and d dominates its only move dc, and the entire game is won by player negative, with the winning strategy corresponding to the red arrows. It is worth noting that the number of iterations needed to reach the progress measure here is linear in the weight K, and that this simple example already exhibits the pseudo-polynomial worst case of the algorithm. The problem is that the minimal-increase policy adopted by the negative player forces him to keep choosing a losing strategy, namely the move cd, for a very long time. Now, getting rid of this minimal-increase policy is not immediately obvious, as in general a different policy may lead to unjustified measure increases, which in turn can lead to incorrect results, such as wrongly declaring as losing a position that is in fact winning for the negative player. Here is where the notion of quasi-dominion comes into play. If we define a positive quasi-dominion as a set of positions on which the positive player has a strategy to win all the plays that remain in the set forever, then the set {c, d} would actually qualify as a quasi-dominion, since the infinite play that remains forever in that set has a divergent payoff. This then justifies letting the negative player deviate from the minimal measure-increase policy and choose a move that escapes from the set, even if this requires a non-minimal increase of its measure. We only need a way to correctly identify quasi-dominions of the positive player. It is relatively easy to prove that any set of positions with positive measure is indeed a quasi-dominion of player positive. This follows from the observation that the measure of a position corresponds to a payoff that the positive player can enforce along some finite play from that position. This in turn entails that, if the negative player chooses not to escape from the set, then the positive player has a strategy such that any infinite play that remains in the set forever has a divergent payoff, as was the case in the previous example. In this example, given the current measure function, both {a, c, d} and {c, d} are quasi-dominions of player positive. Now we have all we need to describe how the algorithm works. The idea of the new algorithm is to exploit the previous observation to define a better increase policy for the negative player: essentially, positions inside a quasi-dominion are treated differently from those outside.
For the positions outside the quasi-dominion, we proceed exactly as the original algorithm does. For the positions inside the quasi-dominion, on the other hand, player negative discards the moves that remain in the set and always chooses an exiting move; if more than one such move exists, it chooses the one granting the minimal increase. Player positive instead always tries, if he can, to remain inside the quasi-dominion, where he knows he can win. In the example, after the first lifting round we can identify the quasi-dominion {a, c, d}. Position a can escape by choosing the move ab, and its measure takes the value K. As a escapes, the quasi-dominion reduces to {c, d}. Now, from position c, player negative chooses a move that leaves the quasi-dominion, increasing the measure of c directly to K in a single iteration. Finally, the measure of the remaining position d is set to K+1. Unlike the original algorithm, we reach the progress measure in two iterations, independently of the value K. This phenomenon occurs quite often in practice, as the experimental results in the next slide show. We implemented both algorithms in a C++ framework originally developed for parity games, which can easily be extended to deal with mean-payoff games as well. The table on the left shows the results on two concrete verification problems, obtained by converting parity games into mean-payoff games: the verification under fairness of the model of an elevator, and the language inclusion problem between a nondeterministic Büchi automaton and a deterministic one. The quasi-dominion approach significantly outperforms the classic progress measure approach. However, it is worth noting that, as a consequence of the translation from parity to mean-payoff games, the weights are exponentially distant from each other, and therefore these games may not be representative of the general case. On the right, instead, the figure displays the performance results on 2800 randomly generated games with maximal weight 15000 and up to 40 moves per position. The x-axis represents the size of the game, ranging from 1000 to 10000 positions, while the y-axis represents the solution time on a logarithmic scale. The gap between the two algorithms is often more than three orders of magnitude, showing that the speed-up to convergence is quite significant in practice. It is also worth noting that the new algorithm can be implemented in such a way that the time required to solve the game is asymptotically equivalent to that of the original algorithm in the worst case. Summing up, the notion of quasi-dominion seems to be general enough to be applied to other types of games, offering a speed-up in the convergence to the solution. In particular, the integration with progress measures required no additional computational cost: indeed, we developed an algorithm that matches the best asymptotic complexity of small energy progress measures. Experimentally, it significantly outperforms the classic progress measure approach, being orders of magnitude faster. Finally, we proved the existence of infinite families of games on which the combined approach can perform arbitrarily better than the classical algorithm. Thanks.
We propose a novel algorithm for the solution of mean-payoff games that merges together two seemingly unrelated concepts introduced in the context of parity games, small progress measures and quasi dominions. We show that the integration of the two notions can be highly beneficial and significantly speeds up convergence to the problem solution. Experiments show that the resulting algorithm performs orders of magnitude better than the asymptotically-best solution algorithm currently known, without sacrificing on the worst-case complexity.
10.5446/54923 (DOI)
Hi everyone, my name is Sreeja, and I'm presenting our work on proving the safety of highly available distributed objects. This is joint work with Gustavo Petri and Marc Shapiro. First, let's consider a centralized application that serves users across several geographical regions. All users from all locations are connected to a single replica, and thus the perceived latency for distant users would be very high. In order to lower the latency, the centralized object is distributed into three replicas across different continents, and the users are connected to the nearest replica. This ensures that we provide a low-latency service to the users, but this situation gives rise to concurrency bugs. I'll show you an example of how concurrency can violate an invariant. Let us examine an example of a distributed auction. Assume we have three replicas distributed across the globe, and different users are connected to different replicas. So we have three users — Sreeja, Gustavo and Marc — connected to three different replicas, in Asia, Europe and South America. One user, Sreeja (that's me), who is connected to a replica in Asia, has a painting to put up for auction. The green dot near the painting indicates that the auction is open to receive bids. So Sreeja opens the auction, and this state is propagated to both Marc and Gustavo, who are connected to the European and South American replicas respectively. Both of them are interested in the painting, and Marc places the first bid, for 100 euros. This update is propagated to Sreeja and Gustavo. Gustavo sees Marc's bid and places a higher bid, for 105 euros. Unfortunately, at the same time, the connection between the replicas in Asia and South America breaks down. Gustavo is able to propagate his update to Marc, but not to Sreeja. Meanwhile, Sreeja is happy with Marc's bid, which is the only one she has ever seen. She closes the auction, declaring Marc's 100-euro bid the winner. When this is also propagated to Marc, he observes that despite placing the lower bid, he won. This is a violation of the invariant of a distributed auction, which requires the highest bid to win. In the rest of the talk, I will show how to formally either prove that the invariant is satisfied or point to a specific concurrent execution that violates it; in the latter case, I'll show how to fix it. Now let us look into the trade-offs for distributed objects. The ideal world for distributed objects would have both high availability and strong consistency. This eases reasoning about safety, because the user only has to think about a sequential model of execution. But the CAP theorem tells us that this is in fact a dream world: distributed objects can have either high availability or strong consistency, not both. So, where should we compromise? We have to maintain high availability, because the whole point of distributing the service was to provide users with low latency; if the service is not available, that does not work. The next option is to compromise on strong consistency and opt for eventual consistency. Eventual consistency is a very relaxed consistency model which only assures that, once all updates have eventually been propagated to all replicas, all of them will be in the same state. We just observed in the previous example how this can violate the safety of the object. So what should we do? In our real world, we have to provide high availability while remaining safe. For this, we present a proof rule to verify the safety of highly available distributed objects. Our proof rule is modular.
I will explain how the specifics of propagating states between replicas help us replace the more complex rely-guarantee type of reasoning, without the need to consider interleavings. We have implemented a tool that automates our proof rule: it verifies whether a given specification is safe and, if not, provides counterexamples that help the user gain insight into the issues. Now let us examine our proof rule using the auction example we discussed. Let us take a closer look at the auction example and observe how the state evolves. Initially, all replicas are in the same state, without any active auction. When Sreeja starts an auction, the status of the auction changes to open, indicated by the green circle. This updated state is propagated to both Marc and Gustavo, and they merge the incoming state into their respective local states, so they also have the auction status as open. Now Marc places a bid of 100 euros, and the state is propagated to Sreeja and Gustavo, who then merge it into their respective local states. Gustavo places a higher bid of 105 euros, which is propagated to Marc, and concurrently Sreeja closes the auction, declaring Marc the winner. Marc receives the updated state from Sreeja as well, merges it into his own state, and observes that the state is unsafe, because the bid with the lower amount won. To address this problem, we first need to define safety. Basically, we must identify the ingredients of the object. For our auction example, the object-specific ingredients are that bids can be placed only when the status of the auction is active — so before starting an auction and after closing an auction, bids cannot be placed — and that when an auction is closed, there is a winner declared, and the winner is the bid with the highest amount. We specify this as the invariant. These are the object-specific ingredients of an auction application; let us see how we use this information to maintain safety. First, let us look at the updates happening on a single replica; for now, we are not considering the concurrent operations happening elsewhere. To maintain the safety of the local operations, we require preconditions. Consider start-auction issued in a state like this, where an auction is already active and there are some bids. This violates our invariant. So we strengthen the precondition of start-auction, requiring the status of the auction to be invalid: start-auction can only be issued in an invalid state, and it changes the status of the auction to active. This ensures a safe execution of a single start-auction operation. Similarly, for place-bid, to avoid placing bids for an already closed auction, we require that the status be active and that no winner has been declared yet. This enables safely placing a bid without violating the invariant. For close-auction, we need to ensure that there is a winner declared and that it is the highest bid; for this, the operation should be issued in a state where the auction is active and there are some bids, and the highest bid is selected as the winner. So now all our operations are safe with these added preconditions. Let us see now how concurrency comes into the picture. Going back to our state evolution, let us zoom in on the European replica, where Marc is connected. We can observe that, for a single replica, the state evolves either through a local update operation or by merging a remote state containing one or more updates.
In this particular case, Marc has issued only a single place-bid operation, and he observes Sreeja starting and closing the auction and Gustavo placing a bid through merge — he is informed of them by the propagation of the remote states from Sreeja and Gustavo. So, as far as a single replica is concerned, the only point where concurrency is observable is during merge. Now let us look at the merge function on a single replica. We have a local state and an incoming state from a remote replica. In this case, the local state has closed the auction, declaring a winner, and the incoming state has a higher bid than the winner. When we merge these two states, the winner is no longer the highest bid, and the state is unsafe. As we did for the other operations, we strengthen the precondition such that if the status is closed in either of the states, the winner declared should be the highest bid in both states. The primed state represents the incoming remote state. So if we have the same local state — the auction has been closed and a winner declared — and an incoming replica state with only a lower bid, that would still be safe. Now, a precondition for merge seems problematic: we cannot block merging a remote state just because the precondition of merge is not satisfied. The state of a local replica should allow a merge at any point in time, because a merge can happen at any time. Hence we call it our concurrency invariant. The concurrency invariant should be preserved to ensure safe concurrent operations; it is essentially the weakest precondition for a safe merge satisfying the object invariant. So now we have our global invariant as the conjunction of the object invariant and the concurrency invariant. We need to make sure that each update maintains this global invariant; this ensures the safety of all concurrent operations. So now we come to a proof rule for verifying safe and highly available distributed objects: if the initial state satisfies both the object invariant and the concurrency invariant, and each operation and merge preserves them, then the distributed object is safe in all executions. We use Hoare-logic-style assertions to verify the safety of each operation and of merge. Assume we are executing an operation. We start from a state sigma that satisfies the object invariant, the concurrency invariant, and the precondition of the operation; we obtain a new state sigma-new after applying the operation, and we require sigma-new to uphold both the object invariant and the concurrency invariant. In the case of merge, we require the states sigma and sigma-prime being merged to respect the object invariant and the concurrency invariant — here the primed state indicates the remote state being merged into the local state — and the state sigma-new obtained after the merge should also uphold the same set of invariants. Now let us come back to our auction application and try to apply our proof rule. We have the object invariant and the concurrency invariant, and we already discussed strengthening the preconditions of the operations to make sure that the object invariant is respected. So now let us look at the concurrency invariant. The concurrency invariant states that the winner is the highest bid in both states. Starting an auction has no impact on this condition, since it neither selects a winner nor updates the set of bids. But in the case of place-bid, we observe that a concurrent close-auction may violate the concurrency invariant.
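A small executable sketch of this auction object and of the merge problem just described is given below; the state representation, the helper names and the status encoding are our own (the real specifications are written in Boogie and checked by the tool).

```python
# Toy model of the auction state, the object invariant, the concurrency
# invariant (precondition of merge), and the merge that produces the unsafe state.
from dataclasses import dataclass, field

@dataclass
class Auction:
    status: str = "invalid"                               # "invalid" -> "active" -> "closed"
    bids: frozenset = field(default_factory=frozenset)    # set of (bidder, amount)
    winner: tuple = None

def highest(bids):
    return max(bids, key=lambda b: b[1], default=None)

def object_invariant(s):
    """A closed auction must have a winner, and the winner must be the highest bid."""
    return s.status != "closed" or (s.winner is not None and s.winner == highest(s.bids))

def concurrency_invariant(s, t):
    """Precondition of merge: if either side is closed, its winner dominates both bid sets."""
    for closed, other in ((s, t), (t, s)):
        if closed.status == "closed":
            if closed.winner != highest(closed.bids | other.bids):
                return False
    return True

def merge(s, t):
    order = {"invalid": 0, "active": 1, "closed": 2}
    return Auction(status=max(s.status, t.status, key=order.get),
                   bids=s.bids | t.bids,
                   winner=s.winner or t.winner)

# Sreeja's replica: closed the auction with Marc's 100-euro bid as winner.
sreeja = Auction("closed", frozenset({("Marc", 100)}), ("Marc", 100))
# Gustavo's replica: still active, but holds a higher concurrent bid.
gustavo = Auction("active", frozenset({("Marc", 100), ("Gustavo", 105)}), None)

merged = merge(sreeja, gustavo)
print("object invariant after merge:", object_invariant(merged))                      # False: unsafe
print("concurrency invariant before merge:", concurrency_invariant(sreeja, gustavo))  # False: flagged
```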
Similarly, for close auction, a concurrent placing of a higher bid also violates the concurrency invariant. We can observe that scenario in our earlier state evolution diagram. Here we see place bid and close auction happening concurrently, whose results merge into an unsafe state. This can be fixed in two ways. We can either weaken the invariant, such that the winner might not always be the highest bid, or we can coordinate to avoid unsafe concurrency. So now we are focusing on coordination. Let us consider fixing the auction object by using an async bid lock. All replicas acquire the lock in async mode when starting the auction. As long as the individual replicas hold the async mode of the lock, they may place bids. When Sreeja starts the auction, all replicas also acquire the lock in async mode. Mark and Gustavo can place bids as long as they have the async lock with them. So Mark places a 100 euro bid and propagates his updates to Sreeja and Gustavo. Gustavo places a higher 105 euro bid. When Sreeja wants to close the auction, she requests the wait mode of the lock. This request triggers the release of the async mode lock from all replicas. When Sreeja receives the lock releases from Mark and Gustavo, the state also contains all their bids. So, now aware of all the bids, Sreeja can safely close the auction declaring Gustavo as the winner, since his bid has the highest amount. We have implemented this proof rule in a tool called Soteria, which sits on top of Boogie, an intermediate verification language. Boogie in turn uses the Z3 SMT solver to discharge the verification conditions. Soteria requires the specification of a distributed object. It includes the state, represented as global variables in Boogie, and the object invariant, given as a function in Boogie with a special annotation @invariant. Soteria also requires a comparison function to determine the relative ordering of states. This is useful in checking the convergence properties of the distributed object. We have described the convergence properties in our paper in detail, so we suggest referring to that. This comparison function is annotated with a special annotation called @gteq, which is short for greater than or equal to; that is the ordering relation. The next component Soteria needs is all the operations, including merge, with their pre- and postconditions. Merge should be specially annotated with @merge to distinguish it from the other operations. The left-hand side shows the input format for Soteria. We have only shown the skeleton of the specification of an auction. It would also contain the supporting functions, the pre- and postconditions, and the implementations of all procedures, and of course the function bodies. So Soteria takes the specification as input. It first checks the sanity of the specification: it looks for any syntax errors and it checks whether the pre- and postconditions are in fact valid with respect to the implementation. It uses Boogie to do these checks. Then it checks the convergence properties. So in short, we are looking for the properties that ensure a semilattice, so that the object is guaranteed to converge. And then Soteria discharges the proof rule we just described as safety properties. If all the checks pass, the specification is safe. Otherwise, Soteria will provide counter-examples to guide the user through fixing the specification. The tool is available on GitHub at the URL shown here.
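Putting the pieces from the last few slides together, here is a rough sketch, continuing the hypothetical AuctionState above, of merge, the concurrency invariant, and the Hoare-style proof obligation for merge. The join shown here is one plausible choice made for illustration, and the obligation is checked by assertions rather than by Boogie and Z3 as in the actual tool.

```python
# Sketch only: an assumed join for the auction state.
def merge(s, s_remote):
    out = AuctionState()
    out.status = max(s.status, s_remote.status, key=lambda st: st.value)
    out.bids = {b: max(s.bids.get(b, 0), s_remote.bids.get(b, 0))
                for b in set(s.bids) | set(s_remote.bids)}
    out.winner = s.winner or s_remote.winner
    return out

def conc_invariant(s, s_remote) -> bool:
    """Concurrency invariant (weakest precondition of merge): if either state
    has closed the auction, its winner must be the highest bid in both states."""
    for closed, other in ((s, s_remote), (s_remote, s)):
        if closed.status == Status.CLOSED:
            top = max(list(closed.bids.values()) + list(other.bids.values()),
                      default=0)
            if closed.bids.get(closed.winner, -1) < top:
                return False
    return True

# Hoare-style obligation for merge, here checked dynamically:
# {I_obj(s) and I_obj(s') and CI(s, s')}  merge  {I_obj(merged)}.
def check_merge(s, s_remote):
    if invariant(s) and invariant(s_remote) and conc_invariant(s, s_remote):
        assert invariant(merge(s, s_remote))
```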
So to conclude, we presented a simple, modular, Hoare logic style proof rule for verifying the safety of highly available distributed objects. We introduced the notion of a concurrency invariant that can be derived from the specification, which is basically the precondition of merge. We presented a tool based on this proof rule; the tool uses Boogie, which in turn uses the Z3 SMT solver to discharge the verification conditions. As a next step, we are working on synthesizing efficient locking protocols. There we also consider the dynamic characteristics of the object, like the frequency of the workload on each replica, and also the characteristics of the underlying network over which the replicas are distributed. Thank you.
To provide high availability in distributed systems, object replicas allow concurrent updates. Although replicas eventually converge, they may diverge temporarily, for instance when the network fails. This makes it difficult for the developer to reason about the object’s properties, and in particular, to prove invariants over its state. For the subclass of state-based distributed systems, we propose a proof methodology for establishing that a given object maintains a given invariant, taking into account any concurrency control. Our approach allows reasoning about individual operations separately. We demonstrate that our rules are sound, and we illustrate their use with some representative examples. We automate the rule using Boogie, an SMT-based tool.
10.5446/54924 (DOI)
Please go ahead. First, OK, so, I haven't seen the presentation screen yet, but I don't know what you are seeing. The first talk is Runners in Action. It is an exciting talk this morning and it is the first talk we show as a recording. Andrej will answer the questions. Thank you. Hello, my name is Andrej Bauer and I'm going to talk about joint work with Danel Ahman on runners. Let's look at the typical situation of how programs are actually executed. Every program runs in some sort of runtime environment that controls access to actual hardware. This situation occurs in other scenarios as well. For instance, a user process runs inside an operating system which has access to the actual hardware. There is a user mode for the user process and a kernel mode for the operating system. Another example is a web page that runs inside a browser. Here too, we can think of the browser as a runtime environment for the web page. In fact, the situation is slightly more complex because these can be nested. You could have a web page running inside a browser, which is its runtime environment, running inside an operating system, which is its runtime environment, and the operating system itself may be running inside a virtual machine. There can be layers of such runtime environments. In our work, we looked at how runtime environments can be treated using programming language techniques. Here is the overview of our contributions. First of all, it has been known for a while now that runners, also known as comodels, are a programming language concept that can be used quite successfully to model state-like computational effects. In fact, runners are a fairly natural model for a particularly simple kind of non-nested, top-level runtime environment that only has state. So our first step was to generalize such runners to runners for any monad T, so that we can allow other possibilities. Such runners are quite general; even though they have some good properties, we still have to specialize them. So we identified a particular family of monads T for which the resulting T-runners give a principled and composable model of general, not necessarily top-level, runtime environments. We took these theoretical observations and designed, based on them, programming constructs that allow us to combine runtime environments and guarantee orderly resource initialization and finalization. We formalized all of this by giving a calculus, a small calculus called λ-coop, and we studied its properties. We considered a number of examples to showcase runners in action, and we provided two implementations. So let us first briefly review how computational effects are treated algebraically. We start with a signature sigma, which is a collection of operation symbols, and each operation symbol op has two associated types: A is the parameter type and B is the return type, also called the arity. Now, to model computations, we select a carrier set C; think of its elements as effectful computations. Then we model the operations, such as print, set, get, by maps which take as parameters the operation's argument and a continuation expecting a result of type B, and such an operation combines the parameter and the continuation into a new computation. Now, the situation with runners is somewhat dual. We still have the same sort of signature, but now we take the carrier set C to be the set of states, and the operations become co-operations; so this is where duality comes in. So a co-operation is a map that takes a parameter and the current state, and then it returns the result B and the new state. And now if you look at
this, you will see that this is really just a map into the state monad, because here we have the state monad with state C. So in order to generalize runners, we can just replace this monad with some other arbitrary monad T. So this is precisely what we do. We say that a T-runner is given by a signature of operation symbols, like before, a monad T, and the co-operations are now maps from A into T of B. While general runners still have some good structural properties, they are a bit too general to allow the sort of modeling that we would like to do. So we need to identify a particular kind of monad that will work well. So here's what we did. This is a bit complicated, but let's go through it. The monad that we are going to use to model the kernel computations, that is to say the runner, the resource management, has several parts. We still have a state. Think of this as a state-like resource. It could be just actual memory, or it could be open files or communication channels. We then allow some further operations that could be called by the runner. These would then be handled by some outer runtime environment. And the sigma here is a signature, which is a parameter. We can change it; it's not fixed. And then there are two kinds of failure. It's important to include failure because you can imagine that when you make a call to the outer environment, the call may actually fail. So we include two versions of this. First we have the recoverable exceptions that can be intercepted and handled by the user code. And then we have the irrecoverable signals, which kill the code off. But this is only half of the story because, as we said, we're going to have the user code. So think of operating systems: you have the process, which runs in user mode, and then you have the kernel, which runs in kernel mode. And these two have access to different computational effects. So we also need another monad for the user computations, which is similar, except that it doesn't have access to the kernel state. It doesn't have direct access to the kernel state, and it cannot do anything about the signals. So all it is left with are some algebraic operations, not necessarily the same ones as in the kernel, and exceptions. Now these can then be formulated in terms of a calculus, which we call λ-coop. It is a fine-grained call-by-value lambda calculus, which has, of course, values, so some values v of type A, and then it has two kinds of computations, user computations and kernel computations. And the types here record the effect information, so they correspond to the user and the kernel monads. But briefly, if you have a user computation m, then the judgment here states that m has a type A. So it's going to compute a value of type A, but it may also call operations from the signature sigma or raise exceptions from E. And then similarly, a kernel computation has a return type, and then it has its own operations, exceptions, signals, and the state, the type of the state that it uses. λ-coop has two central new programming constructs for combining runtime environments with user code. So let's have a look at these. First of all, we have runners, and you can think of them as handlers, because they can be related in a precise manner to a special kind of handlers. We shall talk about this. Essentially, it's just like a handler: it handles some operations, and each operation that it handles, it handles with some piece of kernel code. And then, corresponding to the handling construct, we have a using construct, which is
slightly more involved, which also takes care of resource initialization and finalization. So it's maybe best to look at this together with runners. A typical situation would be a program like this, which has several components. First, we have the runner. The runner is going to encapsulate the user code, which you see in the middle, and it will control its use of resources by handling the operations. Then we have the initialization code, which will initialize the runner state. And then we have a finalization block, which is going to take care of cleaning up the resources at the end and intercepting any signals and exceptions. So let's see how a typical scenario might work out. Suppose the user code calls an operation op_i and passes some value v. Then control is passed over to the runner, which is going to run the corresponding handler for the operation, k_i. Now this is kernel mode, which could call some further operations, and these would then be taken care of by some outer runner. Assuming that such operations come back, k_i can do two things. One of the things it can do is return control back to the user code, either by returning a value or raising a recoverable exception. If it returns the value, then the code will just continue from where it left off. If it raises an exception, then the code may intercept the exception using an exception handler. But if it doesn't, then the exception will propagate all the way to the finally block, where the corresponding handling of the exception will be executed. Notice that this exception handler gets access to c, which is the state of the runner, so that finalization can be done properly. For instance, if c is a file handle, then maybe this code is going to close the file handle by calling some outer close operation. The kernel code k_i may also kill the user code with a signal, in which case the user code is not resumed and control is passed immediately to the signal handler. The signal handler does not have access to the state, because a signal should be used when no further resource management makes sense due to some irrecoverable error. So let's compare runners and handlers a little bit. A handler for algebraic effects can use continuations in an arbitrary fashion. It may store them. It may call a continuation several times. If it implements state, then there is no control over how the state might be used, so you can have handlers that do strange things with state. And as far as exceptions go, handlers really only have the kind of irrecoverable exceptions, which just don't call the continuation. And then there isn't any sort of built-in way of finalizing code. If you want to finalize code, then you have to do that as a programmer with your bare hands. Whereas runners are more restricted than handlers and use continuations in a controlled way, because every continuation is used at most once, depending on whether a signal is raised or not. And the continuation is always used in tail-call position. That is to say, the last thing that ever happens is that the continuation is triggered. This is part of the operational semantics of λ-coop. Furthermore, the kernel state is passed around in a linear fashion, so it cannot be discarded. It always has to be properly finalized. And there are no worries about it being copied or anything like that. And of course, as we just discussed, there are two kinds of exceptions: the recoverable exceptions and the irrecoverable signals, with finalization. In fact, we have also
given a denotational semantics for λ-coop, and we proved a theorem that states, in a semantic way, that finalization always happens. So, speaking vaguely, the theorem states that if you have a well-typed using ... finally block, then its denotational semantics can be written as the composition of the semantics of the finally block and the rest of the construct, which means that finally will always happen and it will happen last. So this is a semantic way of saying that the design guarantees proper finalization. Now, what the finalization actually does, the code in the finalization block, that's the responsibility of the programmer, of course. But we can guarantee that it will happen. In contrast, handlers can be pretty tricky and they may intercept any kind of operations and prevent the intended finalization from ever happening. So we feel that the restricted forms provided by runners are actually quite useful and they go a long way. In fact, the only kind of handlers that are not taken care of are the non-determinism and probabilistic choice style of handlers. Those are non-linear. We do not, unfortunately, have time to look at examples, so I invite you to look at the paper, but let me just say what sort of things one can do with runners. Just like handlers, runners of course can be nested, and this leads to several ideas. One of the motivating ideas was to provide several layers of runtime environments, and you can think of these as sandboxing or virtualization, now done using PL techniques and even within a single program. Runners can also be used for various kinds of resource monitoring or access control. For instance, you can keep statistics on resources and things like that. And you can also augment raw resources with additional functionality. For instance, if the operating system provides raw I/O, maybe you can have a runner that adds caching or buffering or things like that to it. There are some examples of that in the paper, where we show how one would take random access memory and turn it into ML-style references. But runners can also be combined in ways that handlers cannot easily be combined. Namely, they can be paired. Think of this as a kind of horizontal composition of runners, where you take several runners, each offering its own resource capability, and then you pair them and you get all the resources available side by side. And one possible use for that would be to have a more principled and fine-grained control over the runtime capabilities of a programming language, so that you don't have to have just one monolithic runtime provided by the programming language, like the IO monad of Haskell or the one big implicit monad of the ML-style languages. But this way, it should be possible to express, using the effect system that we provide or some variation of it, more precisely what exactly is provided by the runtime or what exactly the program needs from the runtime. We have provided two implementations of λ-coop. One is a library called Haskell-Coop. This one is more readily available for experimentation if you want to see how these ideas might combine with something else that you want to try. And we also provide a prototype language called Coop, which is based on λ-coop but has several other features, nothing essentially different from the lambda calculus, which allows you to run these programs. And both implementations provide a number of examples, which are also described in the paper. Thank you for your attention. Andrej is here, am I correct?
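To make the runner and using ... finally ideas above a bit more concrete, here is a small Python sketch of co-operations as state-passing functions and of the using discipline with initialization and guaranteed finalization. This is only an illustration of the general shape; it is not λ-coop or the Haskell-Coop API, and all names here are made up.

```python
# Recoverable exceptions are returned to the user code; signals kill it.
class Exc(Exception): pass
class Signal(Exception): pass

def file_runner():
    def write(arg, st):                  # co-operation: (argument, state) -> (result, state)
        return None, st + [arg]
    return {"write": write}

def use(runner, init, user_code, *, ret, raised, killed):
    """Rough analogue of `using R @ init run M finally {...}`."""
    st = init()                          # initialization of the runner state
    try:
        def call(op, arg):               # how user code invokes an operation
            nonlocal st
            res, st = runner[op](arg, st)
            return res
        val = user_code(call)
        return ret(val, st)              # normal return: finalizer sees the state
    except Exc as e:
        return raised(e, st)             # recoverable exception: finalizer sees the state
    except Signal as s:
        return killed(s)                 # signal: no state, no further resource management

# Example: the finalization clause "flushes" the buffered writes exactly once.
use(file_runner(), init=list,
    user_code=lambda call: call("write", "hello"),
    ret=lambda v, st: print("flushing", st),
    raised=lambda e, st: print("error, still flushing", st),
    killed=lambda s: print("fatal:", s))
```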
Maybe I can start with my questions. One of the things is very nice: a computational way to express exception handling. But as you said, you can guarantee the user will write a finally block, but the finally block can actually even be empty, which means you don't know whether the user will write the finally block correctly. Can the user get correct recovery, like Erlang supervision? Am I correct? Sorry, can you just repeat the last sentence, that the user can...? It cannot be guaranteed that the user writes the finally block correctly; this relies on the user. Have you thought about that kind of correctness, like reversible computing? With an Erlang supervision tree, you really ensure that all state is recovered correctly, rather than just providing a finally block. So, if I understand your question correctly, the type system will guarantee that the finalization is actually valid code, in the sense that it is type correct and that it will happen. So, what the language guarantees is that the finalization blocks will be executed. But what the finalization blocks actually do depends on the user. For instance, suppose you want to do some sort of file handling; then the finalization block is presumably going to close the files. Now, if the user forgets to close the files in the finalization block, then of course that's going to be a problem. I suppose what you're asking is how to solve that problem, and I think the answer there is that at some point somebody has to understand what resource management means in any particular situation. And so we are modeling the part of the language which gives the tools to the person who actually has to implement the resource management. So, we're not trying to provide magical resource management support by the language. What we're trying to provide is tools that give you guarantees that your resource management code will actually be executed. We would envision that this sort of thing, these runners and the using blocks, is not something that all programmers would be writing every day. This is more at the level of designing some sort of new resource capability, and whoever is designing the new resource capability would then be writing down this finalization code. And so they are responsible for the correctness of the finalization code. I don't see how you would always do it automatically, because in the general case, I think the finalization code could be fairly arbitrary. Since you have implemented the Haskell library already, I want to ask: in many situations an exception happens due to a timeout. How do you integrate this timeout mechanism, or do you assume the timeout happens outside, because you are not modeling timeouts? Sorry, a timeout of what? A timeout while waiting for something, which then goes on to raise something, but you are not modeling timeouts. So, have you considered integrating it as a more primitive notion? That's an interesting question. It would be interesting to see whether timeouts could be done, and I think we'd have to combine that with some sort of asynchronous co-operation. There's some work that my co-author Danel Ahman and Matija Pretnar have done on asynchronous algebraic operations, so that would be an interesting direction to investigate, but we haven't done that. Thank you very much. Thank you very much. So, it seems it is time to move to the next talk. I just want to say, very wonderful slides and talk. Thank you very much. It's very clear. Thank you very much.
Runners of algebraic effects, also known as comodels, provide a mathematical model of resource management. We show that they also give rise to a programming concept that models top-level external resources, as well as allows programmers to modularly define their own intermediate "virtual machines". We capture the core ideas of programming with runners in an equational calculus λ-coop, which we equip with a sound and coherent denotational semantics that guarantees the linear use of resources and execution of finalisation code. We accompany λ-coop with examples of runners in action, provide a prototype language implementation in OCaml, as well as a Haskell library based on λ-coop.
10.5446/54926 (DOI)
Hello, my name is Andreea and this is joint work with my collaborators Amy, Nadia, and Ilya. This talk highlights the benefits brought to the synthesis of programs with pointers when the synthesis process is guided by read-only specifications. First a few words about the synthesis of programs with pointers. In a nutshell, a synthesis framework offers the means to write the intent of a program by means of specifications, which are then automatically translated into program code. One of the challenges in synthesis comes from the difficulty of navigating through the large space of possible programs. Another challenge is to come up with a specification mechanism that is expressive enough to capture the real intent while keeping the specifications concise. This second challenge has been addressed in the state of the art in synthesizing programs with pointers by building on top of a flavor of separation logic called synthetic separation logic. Synthetic separation logic is a deductive system which takes as input a separation logic specification in the form of pre- and postconditions and, guided by the shape of the heap, derives the intended program. The main benefit of this approach is that the end result is a program which is proved to be safe and correct by construction. SuSLik, a tool implementing synthetic separation logic, has been used to synthesize small to medium-sized programs with complicated pointer manipulation. Let's start by looking at one such example, the copy of a linked list. A linked list is described in separation logic by an inductive predicate parameterized by the root pointer to the list and the set of values contained within the list. If the root pointer is null, abstracted by zero in our definition, then the set of values contained by the list is empty and so is the corresponding heap. However, if the list is non-empty, then the head of the list is a node comprising two contiguous memory cells, where the first cell stores the data and the second stores the pointer to the tail of the list. Finally, the separating conjunction asserts that the head of the list and its tail reside in disjoint memory locations. Given this definition, the precondition of the copy method states that the argument r stores a value x that is the head pointer to the list to be copied. The postcondition asserts that the final heap, in addition to containing the original list, will also contain a new list starting from y with the same content as the original list, and that the pointer r will now point to the head of the copied list. With this specification, SuSLik synthesizes a recursive program that iterates through the list pointed to by x until it reaches the end of the list. Upon return from the recursive call, the synthesized program starts to allocate memory for the new list and to update the pointers. But wait, why are these two statements updating the tails of both lists? And what's with this spaghetti-looking final heap? What's happening here? Did we find a bug in SuSLik? Let's check the specs again. Given a list with elements in S, create a new list with the same elements. Yes, SuSLik does exactly that. So no, there is no bug. However, this was not the intent. Why do we have such spurious statements in our synthesized code? We should keep the initial list in its original form. How to do that? We should make sure that the initial list stays read-only in order to prevent the synthesized program from altering it.
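To see the difference in imperative terms, here is a small Python analogue of the two behaviours just discussed. It is not the C-like code SuSLik actually emits from its separation logic derivations; it simply mimics the tail-swapping shape described above, with hypothetical names.

```python
class Node:
    def __init__(self, data, nxt=None):
        self.data, self.next = data, nxt

def copy_read_only(x):
    """The intended copy: the input list is left untouched."""
    return None if x is None else Node(x.data, copy_read_only(x.next))

def copy_with_swapped_tails(x):
    """Also a correct 'copy' w.r.t. the loose spec (two disjoint lists with the
    same elements), but it rewires the original list's links as it goes."""
    if x is None:
        return None
    nxt = x.next
    copied_tail = copy_with_swapped_tails(nxt)
    y = Node(x.data, nxt)        # the fresh node keeps the *original* tail...
    x.next = copied_tail         # ...and the original node gets the copied tail
    return y
```

Both functions satisfy "given a list with elements in S, produce a new list with the same elements", yet only the first one keeps the input list in its original form, which is exactly what a read-only annotation is meant to enforce.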
Our goal is to show that by synthesizing programs with pointers via read-only specifications, the synthesis will be more effective. We expect the synthesized code to be shorter and more natural. It should also be more efficient, by reducing the search space and the time to derive a program, while being more robust too. Let's next understand a little bit more about how SuSLik operates in order to understand what can be improved in its synthesis process. At its core, synthesis with separation logic generalizes the Hoare triple, where the program is given, to a triple which quantifies over those programs C such that for any initial state satisfying assertion P, program C will execute without memory errors and, upon its termination, the state will satisfy assertion Q. This approach to program synthesis is grounded in proof theory, and it generalizes the classical notion of heap entailment to incorporate the possibility of transforming a heap satisfying an assertion P into a heap satisfying an assertion Q. The resulting program represents a proof term for a transforming entailment, while the synthesis procedure corresponds to a proof search. The derived programs are thus correct by construction, in the sense that they satisfy the ascribed pre- and postcondition and are accompanied by complete proof derivations. Let's look at an example. The Hello World of program synthesis is the pick method, which non-deterministically chooses to update one of the memory cells with the value stored in the other. The corresponding specification in separation logic states that x and y point to disjoint memory locations and, upon return from a call to this method, the memory locations that x and y point to store the same value, without mentioning which value that is. With this loose specification, the synthesis gives us a program where the value stored in the cell pointed to by x is copied into the cell pointed to by y. Of course, choosing to update the memory cell pointed to by x instead of that pointed to by y would also satisfy the ascribed postcondition, making this program a correct synthesis result too. This toy example was meant to highlight the fact that for the same specification, certain rules emit many alternatives, and therefore the synthesis must often rely on heuristics to choose the next goal candidate. The whole synthesis process stops when the first program that satisfies the given specification is found. Consider next a variation of this example where a pure constraint forces the cell pointed to by y to keep its value unchanged. If SuSLik were to first attempt the solution where the cell pointed to by y is modified, it would synthesize the entire program before realizing that the pure constraint, which forces the cell pointed to by y to keep its value, is not satisfied, and it would have to backtrack to the branching point and try another trace until it finds a suitable program. Our aim is to minimize the backtracking effort by reducing the branching possibilities, irrespective of the search heuristics the synthesizer operates on. We show how we do that by guiding the synthesis with read-only specifications. The high-level idea is straightforward: annotate each memory cell with a permission. A mutable permission is denoted by the constant M, while the read-only permissions are abstracted by ghost variables, also called borrows. Consider next an attempt to synthesize a call to pick given the goal on the left-hand side, which states that both z and t have mutable permissions.
At the core of the synthesis process lies a special unification method which non-deterministically chooses a substitution such that the client's spatial frame entails the callee's precondition. The synthesis continues with the current synthesis goal only if such a substitution exists. In this case, besides substituting the formal parameter of the method definition with the call's argument, the callee also borrows a mutable permission from the caller and gives it back upon return, via the previously computed substitution. An attempt to synthesize the call to pick in a state with insufficient permissions, where the caller is annotated with a borrow but the callee requires a mutable permission, would fail to generate suitable substitutions, since a mutable permission, which is a constant, cannot be substituted by a borrow, which is a variable. Returning to our motivating example, we parameterize the inductive predicate for a linked list with borrows. The instances of the linked list predicate now carry permission arguments to denote that the list pointed to by x has read-only permissions while the copied list is mutable. Now we attempt to synthesize the list copy again. This time, though, any attempt to synthesize a statement which mutates the initial list, now annotated as read-only, would fail, forcing the synthesis to change its goal to one which does not mutate the list pointed to by x. The resulting program contains no more spurious writes, thus offering the expected output. SuSLik now has a sibling with support for borrows, which we call RoboSuSLik, and we use it in our experiments to confirm our initial claims. To measure the effectiveness of the synthesis, we ran both tools on a standard benchmark suite containing examples with linked lists and trees. Using RoboSuSLik, denoted by blue bars, against SuSLik, orange bars, shows that the enhanced synthesis with read-only specifications offers either programs of the same size as those offered by SuSLik or shorter ones. The bar plots visually demonstrate that as the complexity of the problem increases, approximately from left to right, RoboSuSLik produces notably more concise code than SuSLik does. Efficiency-wise, we observe the same trend, where RoboSuSLik outperforms SuSLik as programs involve increasingly complicated pointer manipulation schemas. And finally, we look at the robustness of our enhancement. Since the synthesis relies heavily on a set of search heuristics when navigating through the available search space, a search space which increases dramatically for programs manipulating linked data structures, we asked ourselves: does RoboSuSLik always outperform SuSLik, irrespective of the employed search heuristics? For this evaluation we chose four of the more complex programs in the available benchmark and varied the properties captured by their specifications. In an attempt to stress the synthesis algorithm, we implemented six different unification strategies and designed seven different search strategies, measuring the number of fired rules for each such combination. A box plot in this figure, where shorter is better, corresponds to the distribution of about 100 data points, where each data point corresponds to running one of the tools with a unique combination of problem specification and search strategy. RoboSuSLik fires fewer rules in all the cases.
Moreover, with the exception of insert it is also more stable to the proof search perturbations and in some scenarios it varies a few orders of magnitude less than Suslink does for the same configuration. This is obvious if you notice that the y-axis is actually a log 2 of the number of fired rules. And with this we have shown that our initial claims hold. While as far as we know this is the first work on synthesizing programs with read-only specifications, there are actually many approaches which focus on the verification of pointer programs with read-only permissions. Fractional permissions proposed almost two decades ago are a popular approach to reasoning about programs that used shared memory concurrency. However, the tool support is still scarce due to the difficulty of reasoning about fractions. Different flavors of abstract permissions were designed to overcome these difficulties, but integrating any of these approaches to synthesis doesn't come without any friction. We have tried most of them, but unfortunately none worked. Please refer to our paper for more details on the related work. As we have shown, read-only specifications make synthesis more effective, more efficient and more robust. Thank you.
In program synthesis there is a well-known trade-off between concise and strong specifications: if a specification is too verbose, it might be harder to write than the program; if it is too weak, the synthesised program might not match the user's intent. In this work we explore the use of annotations for restricting memory access permissions in program synthesis, and show that they can make specifications much stronger while remaining surprisingly concise. Specifically, we enhance Synthetic Separation Logic (SSL), a framework for synthesis of heap-manipulating programs, with the logical mechanism of read-only borrows. We observe that this minimalistic and conservative SSL extension benefits the synthesis in several ways, making it more (a) expressive (stronger correctness guarantees are achieved with a modest annotation overhead), (b) effective (it produces more concise and easier-to-read programs), (c) efficient (faster synthesis), and (d) robust (synthesis efficiency is less affected by the choice of the search heuristic). We explain the intuition and provide formal treatment for read-only borrows. We substantiate the claims (a)-(d) by describing our quantitative evaluation of the borrowing-aware synthesis implementation on a series of standard benchmark specifications for various heap-manipulating programs.
10.5446/54929 (DOI)
Hello, my name is Sung-Shik Jongmans. I'm assistant professor at the Open University of the Netherlands and guest researcher at CWI in Amsterdam. And together with Nobuko Yoshida from Imperial College London, I've been working on exploring type-level bisimilarity towards more expressive multiparty session types. The plan for this talk is to give a brief overview of our first series of results on this topic, as published in the ESOP 2020 proceedings. Now, in general, my long-term research aim is to design and implement new theoretical foundations and practical tools to make concurrent programming easier. And in this presentation in particular, I'll concentrate on an improved method to statically analyze application-level message passing communication protocols. Now, the problem can be described as a classical verification challenge. So imagine that we have a specification S and an implementation I, such that the specification prescribes the following elements. First, we have the concurrent processes that the program consists of. Second, we have the communication channels that the processes can use to send and receive messages to and from each other. And third, we have the communication protocols that need to be followed as the program is executed. So, for instance, in natural language, we could specify that first a number needs to be communicated from Alice to Bob, and then a number from either Carol or Dave, etc. So we specify a whole tree of admissible communications. Now, assuming that we have such an S and I, the question is then how to ensure that the implementation is safe and live relative to the specification, where safety and liveness can be understood in the classical sense: bad channel actions, according to the specification, should never happen in the implementation, while good channel actions can eventually happen. Now, over the past decade, an influential approach to answering this question has been based on multiparty session types (MPST). They constitute a behavioral type system that is capable of automatically ensuring safety and liveness of communications. In more detail, here is a graphical one-slide summary of the MPST approach, by example, and it sort of works top-down. At the bottom, we have the implementation of three processes, Alice, Bob, and Carol. And, for instance, in the example implementation, Alice sends the number five through channel x, Bob receives value v through channel x and then sends true or false through channel y, depending on v, and Carol receives a value through channel y. Next, at the top, we have a global specification of the intended communication protocol among Alice, Bob, and Carol. And global here means that the specification prescribes every admissible communication action, so sends or receives, of every process, from their shared perspective. So, for instance, the example specification states that first a number is communicated from Alice to Bob, and then a boolean is communicated from Bob to Carol. Now, the idea is that the implementation and the global specification are written manually, possibly even independently of each other. And then, next, the global specification can be automatically decomposed into a number of local specifications by projecting it onto every process. And local here means that the specification prescribes every admissible communication action of one specific process from its own perspective.
So, for instance, the local specification of Alice states that Alice sends a number to Bob, indicated by the exclamation mark, and similarly, the local specification of Bob states that Bob first receives a number from Alice, indicated by the question mark, and then sends a boolean to Carol. The final step is that process implementations are compositionally verified against the local specifications by means of type checking. And specifically, the main theorems of MPST guarantee that if every process implementation is well-typed against its local specifications statically at compile time, then their parallel composition is safe and live dynamically at runtime. And in particular, this means that communication errors and deadlocks are ruled out by construction. So, potentially, this MPST approach is really quite powerful, right? In the example on this slide, for instance, if the typing context indeed entails that access used as a channel from Alice to Bob and y is used as a channel from Bob to Carol, then all process implementations are well-typed, so they indeed enjoy absence of communication errors and deadlocks. Okay. Now, one of the main open problems of the MPST approach pertains to limited expressiveness. So, the approach is applicable only to global specifications that can be decomposed in some semantics-preserving way, and this is often not the case. So, in more technical terms, the projection operator is only partially defined and it remains undefined for many global specifications. And in the rest of this talk, I will explain our new technique to support a larger class of global specifications according to this MPST approach. So, to introduce our contribution at a conceptual level, consider the following three global specifications in which the plus operator indicates choice. Okay. The first specification states that either numbers or booleans are communicated from Alice to Bob and from Bob to Carol, followed by some continuation. Okay. The second specification states that a number and a boolean are communicated from Alice either to Bob or to Carol. And the third specification states that a number is communicated from Alice to Bob and a boolean from Alice to Carol either in this order or in the reverse order. Now, the interesting thing is that these three specifications have different properties in terms of whether they are intuitively okay, in the sense that semantics reserving decompositions exist. And if so, whether they are actually supported by the existing MPST approach in the sense that their projections are defined. So, let's have another look at the example specs in terms of these properties. The first specification is both okay and supported. And intuitively, the crucial point here is that all processes enjoy a property called choice awareness. This choice awareness means that every process is informed about which branch was taken in a timely fashion. So, Alice is sort of trivially choice aware because she is responsible for choosing a branch in the first place, right? By sending either a number or a boolean. Furthermore, Bob and Carol are choice aware because they can infer which branch Alice chose by analyzing the message types that they receive. So, after the first two communications, Alice, Bob and Carol, they all know exactly which branch was taken so they can consistently proceed with the same continuation. In contrast, the second specification is not okay. And intuitively, the problem is that Bob and Carol are not choice aware. 
Regardless of whom Alice chooses, the unchosen one has no way to find out. So, in other words, if we were to decompose the specification, what should the result prescribe for Bob? On the one hand, Bob cannot assume that Alice will choose Carol and proceed with the continuation, because if Alice then happens to choose Bob, a communication error occurs. However, he also cannot assume that Alice will choose him and await her message, because if Alice then happens to choose Carol, a deadlock occurs. So, there does not exist a reasonable, semantics-preserving local specification of Bob, and the same applies symmetrically to Carol. So, the second specification is neither okay nor supported. Now, the third specification is quite interesting, because intuitively Bob and Carol are not classically choice aware: they never know which branch Alice chooses. And since the existing MPST approach requires choice awareness, this specification is just not supported. But it is actually okay, because it is no problem that Bob and Carol cannot locally distinguish the two branches: the communication actions that they need to perform are not at all affected by the choice of Alice. They're always the same regardless. So, this means that the decomposition in which Alice sends a number and a boolean, Bob awaits a number, and Carol awaits a boolean, this decomposition is actually semantics-preserving. Now, we call this phenomenon choice indifference. It is actually an alternative to choice awareness, and it's not yet covered by the existing MPST approach. So, the main contribution of our ESOP 2020 paper is basically the machinery to make all this work. Okay, here's a more precise overview of our contribution. Our paper consists of two parts. In the theory part, we formalize the core syntax and semantics of global and local specifications as a variant of process algebra. And we also formalize the notion of choice indifference. Finally, most of our efforts were geared towards proving that choice indifference is indeed a sufficient condition to guarantee semantics preservation through decomposition, and we do this up to weak bisimilarity. In the practical part of the paper, we present an implementation of our theory in a prototype tool that can automatically check choice indifference. And we also evaluated this tool in terms of performance, and we demonstrate that choice indifference, essentially because it can be checked compositionally, is much more efficient to check than brute-forcing weak bisimilarity. So, in the rest of this talk, I will briefly summarize these contributions in a bit more detail. Let's start with the theory. The syntax looks roughly as follows. As global specifications, we have atomic communications, binary operators for choice, sequencing and interleaving, and recursion. As local specifications, we have sends, receives, tau actions which represent a form of idling, binary operators, and recursion. Now, the projection operator, denoted by the harpoon, consumes a global specification on the left-hand side and a process name on the right-hand side, and it produces a local specification for that specific process. And in particular, if the process is the sender in a communication, then of course it should send. If it's the receiver, then it should receive. And if it's not involved at all, then it should idle. Now, the latter is actually a bit weird when you think about it.
Traditionally, in concurrency theory, taus represent internal actions by a process that are unobservable from the outside. But in our theory, we actually use taus to represent external actions by the environment that are unobservable from the inside. So, this is sort of the opposite interpretation. Still, these tau actions do play a crucial role in our formalization of choice indifference, as we'll see on one of the next slides. Now, the semantics of specifications are defined in terms of labeled reductions. And at this point, it is important to emphasize that we consider only synchronous channels in this work. So, as a result, reductions of global specifications are labeled with whole atomic communications. Reductions of individual local specifications are labeled with sends, receives, and taus. But reductions of families of local specifications are again labeled with whole atomic communications and taus. And this is because families of local specifications sort of represent parallel compositions of processes, whose sends and receives need to synchronize. Okay, so our main theorem then states that choice indifference, formalized in the CI predicate, implies weak bisimilarity, denoted by the squiggly equals signs. And it is important to note in particular that choice indifference can be checked separately for every process. So our work really preserves the modular nature of the MPST approach, where all static analysis can be done on a per-process basis. Okay, here's the formalization of choice indifference. It's a bit simplified, just to convey the general idea. There are basically two conditions. The first condition states that if a local specification can choose between performing some arbitrary action alpha and tau, then choosing to perform tau should not affect the enabledness of alpha. And because taus intuitively represent unobservable actions by the environment, this condition essentially means that from the local perspective of a process, no action in the environment can decrease its behavioral options. Okay, the second condition basically says the opposite. It states that if a local specification can choose to perform tau followed by some arbitrary action alpha, then alpha must have been enabled already before performing the tau. So, this condition essentially means that from the local perspective of a process, no action in the environment can increase its behavioral options. So, our formalization of choice indifference requires that these two conditions hold for all reachable successors of the initial local specification. So, in other words, in every reachable successor, the environment can neither decrease nor increase the behavioral options of a process. So, this process is indeed indifferent to any unobservable choice made in its environment. All right, so let's have a look at an example of a non-trivial global specification that is supported in our theory, but not in the existing one, namely a synchronized key-value store. So, the idea is that each of n clients tries to lock the store, and the first one to succeed can repeatedly read and write values, and finally unlock the store again, of course, to make it available to the next client. So, in a bit more detail, the specification consists of three recursive subspecifications. Recursion variable X represents the outer loop in which clients compete for access to the store. Y represents the inner loop in which the client that successfully locked the store repeatedly reads and writes.
And recursion variable Z represents an inner inner loop in which a reading client sends a number of read messages in sequence, but it can receive the corresponding value messages out of order, asynchronously. In this way, to make the next request, a client does not need to wait for a response to its previous request, which can, of course, be better for performance. This also shows that some form of asynchronous processing can be expressed in our theory, even though we have only synchronous channels; this is somewhat similar to the asynchronous pi calculus. Finally, the existential quantifier on the first line of the specification is an example of one of our convenience macros, which we define on top of the core syntax. This one basically expands to a large sum in which every summand is the same as the quantifier's body, except that every occurrence of the metavariable R is replaced by a process name from the quantifier's domain. Okay, that's the theory that I wanted to show. Now, we implemented our theory in a prototype tool, and it consumes as input a global specification. After parsing the file, this global specification is decomposed into local specifications. Then, next, the local specifications are checked for choice indifference, and only if all these checks succeed does it make sense to generate communication APIs for the local specifications. These APIs can be used to implement the processes in a type-safe way. All of this is very similar to how the existing Scribble toolchain for the MPST approach works, and the details are not very important in this talk. Now, the potential advantage of checking choice indifference, relative to brute-force bisimilarity checks, is that choice indifference is compositional, so it should be more efficient to check. To study the extent to which this advantage is really there, we conducted benchmarks with our prototype tool. So, for six classes of specifications, parameterized in the number of processes to also study scalability, we compared the verification times of choice-indifference-based analysis and brute-force analysis. For the latter, we encoded global and local specifications in the mCRL2 process algebra, and we used the state-of-the-art equivalence checker in the mCRL2 toolset to check weak bisimilarity. Here are some of the results. The horizontal axis indicates the number of processes, while the vertical axis indicates the relative speedup of brute-force analysis. All speedups are below the y equals one line, and this essentially means that brute-force analysis is always slower than choice-indifference-based analysis in our benchmarks. Because the scale of the vertical axis is furthermore logarithmic, brute-force analysis is actually orders of magnitude slower. Now, there are also a few bars missing in the charts, and this means that brute-force analysis failed to produce a result for the corresponding number of processes; instead, mCRL2 just crashed, whereas the choice-indifference-based analysis worked fine also in those cases. So, all in all, we believe that these are quite promising results. Okay, now this concludes my talk. Here is again the slide with the summary of our contributions. Regarding future work, there are at least two interesting avenues: support for asynchronous channels and incorporation of process implementations and type checking. Okay, so that's all. Thank you for your attention.
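As a small recap of the choice indifference conditions from the talk, here is a rough Python sketch that checks them over an explicit local transition system. This is an illustration only, with a simplified set-inclusion reading of the two conditions; the actual tool works on (projections of) the process-algebraic syntax, and all names here are made up.

```python
TAU = "tau"

def choice_indifferent(lts, init):
    """lts: state -> list of (label, successor). Returns True if, in every
    reachable state, taking a tau step neither disables (condition 1) nor
    enables (condition 2) any non-tau action."""
    seen, todo = set(), [init]
    while todo:
        s = todo.pop()
        if s in seen:
            continue
        seen.add(s)
        enabled = {lbl for lbl, _ in lts.get(s, []) if lbl != TAU}
        for lbl, t in lts.get(s, []):
            todo.append(t)
            if lbl == TAU:
                after = {l for l, _ in lts.get(t, []) if l != TAU}
                if not enabled <= after:   # condition 1: tau disabled an action
                    return False
                if not after <= enabled:   # condition 2: tau enabled a new action
                    return False
    return True

# Bob's projection of the third example: Alice's unobservable choice (tau) of
# ordering never changes the fact that Bob just awaits a number from Alice.
bob = {"b0": [(TAU, "b1"), ("Alice?num", "b2")],
       "b1": [("Alice?num", "b2")],
       "b2": []}
print(choice_indifferent(bob, "b0"))   # True: Bob is choice indifferent
```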
A key open problem with multiparty session types (MPST) concerns their expressiveness: current MPST have inflexible choice, no existential quantification over participants, and limited parallel composition. This precludes many real protocols to be represented by MPST. To overcome these bottlenecks of MPST, we explore a new technique using weak bisimilarity between global types and endpoint types, which guarantees deadlock-freedom and absence of protocol violations. Based on a process algebraic framework, we present well-formed conditions for global types that guarantee weak bisimilarity between a global type and its endpoint types and prove their check is decidable. Our main practical result, obtained through benchmarks, is that our well-formedness conditions can be checked orders of magnitude faster than directly checking weak bisimilarity using a state-of-the-art model checker.
10.5446/54930 (DOI)
Hi, my name is Siddharth Krishna and today I'd like to tell you about our work on verifying weakly consistent data structures. This is joint work with Michael, Constantin and Dejan. So what is a weakly consistent data structure? Let's look at an example. Here we have a simplified implementation of a concurrent hash map where we assume that the hash function is the identity. This means that we don't worry about hash collisions and we don't have a linked list of nodes at each bucket. This is, in a sense, just a concurrent array. We have a put method that takes a key value pair k, v and simply writes the value v at index k. So put 2 3 does this, put 5 8 does this. We have a get method that's given a key k and looks up the value associated with it. This is simply an array read. Finally, we have a contains method that's given a value v and has to determine whether there is a key value pair in the data structure that has the value v. It does this by starting at array index 0 and walking down the array, looking at each element to see if the value equals v. Note that it doesn't hold any locks. It sort of does this in a lock-free way. So the put and get operations of this data structure are atomic. They appear to happen instantaneously. In fact, their implementations are just a single read or write from the array table. On the other hand, contains is not atomic. Here is an example execution that illustrates why. Here you see a program that's a client of this data structure. It uses two threads. The left thread does a put 5 9 and then checks contains 9. The right thread does a put 0 9 and a put 5 8. Here is one execution of this program. First the left-hand side thread begins to execute and puts 5 9, which results in the state you see here. Then contains 9 begins to execute and it starts from 0. Now contains moves past 0 and reaches index 3 before the right-hand side thread is scheduled and puts 0 9 into the structure. This means that the contains doesn't see the 9 at location 0, because it's already moved past it. However, before it reaches index 5, the right-hand side thread puts 8 into location 5. So by the time it reaches location 5, there is no 9 at location 5 either. So contains reaches the end of the structure and reports false. Note that this return value could not possibly have been obtained if contains were an atomic or linearizable method. There is no point in time when the data structure did not contain the value 9, yet contains nonetheless returns false in this case. So you might look at the example execution from the previous slide and say that this implementation of contains is just plain wrong. And while it isn't atomic, it isn't entirely unreasonable either. In some sense, you can't rule out the existence of a key value pair with a given value in a data structure unless you hold an exclusive lock on it or use some other expensive method of synchronization. And contains does have some guarantees. For example, it is guaranteed to see the effect of any put that has completed before it begins. So prior work has looked into the question of what guarantees exactly methods like contains provide, and has postulated so-called visibility-based weak consistency criteria to describe methods like contains. It also used testing to show that many methods in Java's concurrent library behave like contains. So the question we address in this work is: how do you formally verify that an implementation like contains satisfies a visibility-based weak consistency specification?
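For concreteness, here is a minimal Python sketch of the simplified "hash map" described above (the hash is the identity, so it is really just a shared array). It mirrors the talk's pseudocode, not Java's actual ConcurrentHashMap, and the table size is an arbitrary choice.

```python
import threading

SIZE = 16
table = [None] * SIZE

def put(k, v):            # atomic: a single array write
    table[k] = v

def get(k):               # atomic: a single array read
    return table[k]

def contains(v):          # not atomic: walks the array without holding locks
    for k in range(SIZE):
        if table[k] == v:
            return True
    return False

# The interleaving from the talk: even after put(5, 9) has completed,
# contains(9) can return False if put(0, 9) lands after index 0 was scanned
# and put(5, 8) lands before index 5 is reached.
def left():
    put(5, 9)
    print("contains(9) =", contains(9))

def right():
    put(0, 9)
    put(5, 8)

t1, t2 = threading.Thread(target=left), threading.Thread(target=right)
t1.start(); t2.start(); t1.join(); t2.join()
```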
So here we present a proof method for verifying weakly consistent methods like contains. Our methodology reduces the satisfaction of weak consistency specifications to the problem of safety or invariant checking. Technically, we establish a simulation between annotated implementations and reference implementations that we derive from the specifications. We also have an annotation strategy that is systematic and potentially automatable. Our simulation can be checked by existing off-the-shelf deductive verification tools. We evaluate our methodology using the Civl concurrent verifier, and we verify models of Java's concurrent hash map and concurrent linked queue. And just in case you think that contains is a contrived or unrealistic example, here is a table from prior work that tested Java's concurrent objects, and I've highlighted all the methods that satisfy weak consistency and are not linearizable. So what are visibility-based weak consistency specifications? Let's look at specifications in the sequential, non-concurrent setting first. We usually think of data structures as implementing abstract data types, or ADTs. These are mathematical objects like sets or maps. And we can think of an ADT spec as basically a set of invocation sequences, where each invocation is a method name like put, a list of arguments like 1, 9, and a return value or return values. The set of invocation sequences tells you what the valid executions of this abstract data type are. So if you do put 1 9, put 1 8, and then you do get 1 and you get the value 8, that's a valid sequence, so this sequence belongs to the set of map invocation sequences. In the concurrent setting, we don't just have one sequence of invocations, we have one per thread — so essentially multiple sequences of invocations. Here you see a concurrent history where thread one does a put 1 9 and then a get 1 returning 9, and thread two does a put 1 8. And these invocations can actually overlap. So in this case, the get 1 doesn't see the result of put 1 8 and returns 9. The standard correctness criterion for concurrent objects is linearizability. A concurrent object is linearizable if all its histories are linearizable, and a concurrent history is linearizable if there exists a sequential history in the ADT with the same return values. So in this case, we have to linearize the get 1 returning 9 before the put 1 8 in order to get a sequential linearization that belongs to the map ADT. Note that the linearization must be consistent with the concurrent history, so you can only reorder overlapping events. However, our contains method isn't linearizable, and here's an example history for which we cannot find a linearization. This is the same as the example we saw earlier. And you can see here that there are three possible linearizations, because contains can be reordered with the two puts in the second thread. But in none of them is the return value of false consistent with the map ADT, because, as we saw earlier, there is always a value 9 present in the data structure. So this means that contains isn't linearizable. So weak consistency extends the notion of linearizability by allowing each invocation to specify the set of preceding invocations that it saw. This is basically a visibility mapping from each invocation in the linearization to a set of the preceding invocations.
So for example, the put 0 9 here sees put 5 9; the put 5 8 sees both of the puts preceding it — in general, puts and gets see all prior invocations — and the contains 9 here does not see put 0 9 but sees the others, which is why it can return false. And the weak consistency criterion is that instead of the entire linearization being admitted by the ADT, the projection of the linearization onto each invocation's visibility set should be admitted by the ADT. So here this allows contains to return false, because the linearization projected onto its visibility set is put 5 9, put 5 8, contains 9, and of course contains 9 is then expected to return false. And finally, we have a constraint on the visibility set of each invocation: it can't just choose any arbitrary set to justify its return value. So we have so-called visibility predicates, as shown on the right here, which describe these constraints on the visibility sets. For example, the monotonic predicate says that the visibility set of an invocation must include everything seen by any invocation that happened before this one. The absolute predicate says that the visibility set of an invocation must include every invocation that linearized before this one. The absolute predicate in some sense corresponds to linearizability, the standard atomicity/linearizability condition. A visibility specification for a concurrent object consists of one such predicate per method. So in our hash map example, the put and get methods have absolute visibility, and the contains method, as we will show, has monotonic visibility. So coming back to our tricky execution involving contains, we can see that contains's visibility set in fact satisfies the monotonic predicate, because its visibility set is {put 5 9, put 5 8}, and contains's happens-before predecessor in this case is put 5 9, whose visibility set is the empty set and is therefore included in contains's visibility set. So now let's see how we can formally verify that a given concurrent data structure implementation satisfies its visibility-based weak consistency specification. Let's start with proving linearizability. A popular way to prove linearizability is to annotate the code of the implementation with linearization points, one for each method, and to construct the linearization for every concurrent history by looking at the order in which the linearization points are executed in real time. For instance, the linearization points of put and get are at the points at which they write to or read from the table, as shown by the yellow arrows here. This helps us construct a linearization in this example that is consistent with the sequential specification. But linearization points aren't enough to verify weakly consistent methods like contains: even if you had linearization points and could determine the linearization order, you would still need a way to determine the visibility set of each invocation of contains. Note that the visibility set is dynamic and depends on concurrently executing puts and on exactly what order they write to or read from the hash table in. So the proof can be broken down into two steps. The first step is to compute visibility sets for each operation's invocations, and we show how to do this in a systematic way by adding annotations, like linearization points, that compute the visibility set for each invocation of an operation.
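For reference, the two visibility predicates just described can be written out roughly as follows; the notation (hb for happens-before, <_lin for linearization order, vis for the visibility mapping) is my own shorthand, not necessarily the paper's:

```latex
\begin{align*}
\textsf{monotonic}(i) &\;:\;\; \forall i'.\;\; i' \xrightarrow{\ \mathrm{hb}\ } i \;\Longrightarrow\; \mathit{vis}(i') \subseteq \mathit{vis}(i) \\
\textsf{absolute}(i)  &\;:\;\; \forall i'.\;\; i' <_{\mathrm{lin}} i \;\Longrightarrow\; i' \in \mathit{vis}(i)
\end{align*}
```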
We prove that if you follow our strategy, then the constructed visibility sets will automatically satisfy the visibility predicates, such as monotonicity. The second step is to prove that the return values of each invocation are consistent with both the visibility set and the linearization order. We do this by simulation, and we show how you can do this step in an off-the-shelf deductive verification tool using standard methods. So how do we compute the visibility set of an operation like contains? If you think back to our example, you'll see that contains sees the concurrent operations that write to locations it has not yet read, and it essentially misses concurrent operations that write to locations it has already read. So the key really is which memory locations are read and written, and we exploit this by storing auxiliary information at each heap cell: we store the set of invocations that have written to that memory location, and we construct the visibility set of a monotonic operation by adding to its visibility set the contents of this auxiliary set at each heap cell it reads. Let me show you by example. This is the same trace again, and here you'll see that the array also carries these gray boxes that denote the auxiliary visibility sets. So when the first thread puts 5 9, it also adds its own invocation identifier to the gray visibility set. When contains begins its iteration at location 0, it carries with it a visibility set that starts off empty, and as it walks down the array, it adds the contents of these auxiliary visibility sets to its own visibility set. So at this point, it's still empty. And this is why, when you have the concurrent put 0 9, since contains has already moved past location 0, it doesn't add put 0 9 to its visibility set. On the other hand, put 5 8 happens before contains has reached location 5, and so when contains visits location 5, it adds both put 5 9 and put 5 8 to its visibility set. And that allows us to justify its return value of false when it reaches the end of the array. So we have the following auxiliary state. As mentioned, each heap cell also stores an auxiliary set of invocations that have written to the heap cell. We also store the global linearization order and the visibility mapping, mapping currently linearized invocations to their visibility sets. We add the following auxiliary actions. We have a lin action, which is essentially the linearization point we saw earlier: calling this action adds the current invocation to the linearization. We have the vis action, which is given an invocation set and adds it to the current invocation's visibility mapping. We also have a helper function get_lin that returns the set of currently linearized invocations. We modify the heap load and store primitives so that store also adds the current invocation to the auxiliary set of writer invocations, and load not only returns the value stored at location x but also this auxiliary set of writer invocations. So our annotation strategy for adding these annotations to implementations is as follows. We assume that we know the linearization points for each method — there has been a lot of work on inferring linearization points, so I'm not going to go into that — so assume we have these lin actions, in orange, in the code. Our annotation strategy for adding the visibility annotations is: for absolute methods such as put and get,
we essentially also add a call vis(get_lin()) at the linearization point, and this signals that these absolute methods see every single previously linearized invocation, which is consistent with what it means for them to be absolute or linearizable. For monotonic methods like contains, we add a vis(get_lin()) at the start of the method. This basically signifies that contains sees the effect of any operation that has already been linearized at the time when it begins. Also, at every memory read that returns an auxiliary set of writer invocations O, we add a vis(O ∩ get_lin()) at that point, and what this signifies is that contains sees the effect of, for example, every put that writes to a location that it actually goes over. And we show in our paper that the visibility sets constructed in this way for absolute and monotonic methods satisfy their visibility predicates: they are absolute or monotonic. And the nice thing about the strategy is that it is simple and systematic and so is potentially automatable. Once we have visibility sets, step two is to prove that the return values are consistent with the linearization and the visibility sets we computed. This is essentially a standard concurrent proof, so what we need are invariants. For example, for contains, we need to figure out the loop invariant, which in this case turns out to be something like: for all indices i that contains has seen so far, the map as constructed by the linearization so far, projected onto contains's visibility set, does not have the value v at those indices. We have mechanically checked the proof that I presented to you using the Civl concurrent program verifier. We use Civl as it supports concurrency out of the box and can reason about arbitrarily many threads. And not only did we verify the simplified map implementation, we also verified a queue implementation, which is the lock-free linked-list queue from Java's concurrent library, based on the Michael-Scott algorithm. For the map, we verified the put, get and contains methods, and for the queue, we verified the push, pop and the monotonic size method. While we simplified both implementations, note that they have the same weakly consistent behavior that the full Java implementations have, and contains and size are representative of the two ways in which the Java implementations are weakly consistent. We had some restrictions: our mechanized proofs assume a sequentially consistent memory model and currently cover methods that have fixed linearization points. This isn't an issue for our case studies, since these Java implementations don't use any of the weak Java memory model behaviors and have fixed linearization points. But note that our methodology is not tied to Civl, so we could reimplement it in another tool that perhaps supports weak memory model reasoning or can reason directly about Java implementations. Also in the paper is a formalization of our methodology over a general notion of transition systems. We show that our forward simulation methodology is complete for certain types of specifications, and we give more details of our Civl encoding. Our full proofs are also available online. To sum up, concurrent objects with weakly consistent methods are prevalent in the real world. Our work provides a simple methodology to prove weak consistency for such objects, and our methodology can be implemented in off-the-shelf verifiers.
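Putting the annotation strategy together, here is a rough, sequential sketch of what the instrumented contains could look like, again in Rust and again with heavy assumptions on my part (how the ghost state is represented, and where exactly contains's linearization point sits, are illustrative choices, not the paper's):

```rust
use std::collections::HashSet;

type Inv = usize; // invocation identifier

// Ghost (auxiliary) state from the annotation strategy.
struct Cell {
    value: i64,
    writers: HashSet<Inv>, // invocations that have written to this cell
}

struct Ghost {
    lin: Vec<Inv>,                 // linearization order so far
    vis: Vec<(Inv, HashSet<Inv>)>, // visibility set per linearized invocation
}

fn contains(table: &[Cell], ghost: &mut Ghost, me: Inv, v: i64) -> bool {
    // Monotonic method: vis(get_lin()) at the start — see everything linearized so far.
    let mut my_vis: HashSet<Inv> = ghost.lin.iter().copied().collect();
    let mut found = false;
    for cell in table {
        // Every load also yields the cell's writer set O; add vis(O ∩ get_lin()).
        for w in &cell.writers {
            if ghost.lin.contains(w) {
                my_vis.insert(*w);
            }
        }
        if cell.value == v {
            found = true;
            break;
        }
    }
    // lin action: record this invocation and its visibility set (placement assumed).
    ghost.lin.push(me);
    ghost.vis.push((me, my_vis));
    found
}
```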
I've listed some avenues here for future work and I'd like to thank you for your attention and I'm happy to take any questions.
Multithreaded programs generally leverage efficient and thread-safe concurrent objects like sets, key-value maps, and queues. While some concurrent-object operations are designed to behave atomically, each witnessing the atomic effects of predecessors in a linearization order, others forego such strong consistency to avoid complex control and synchronization bottlenecks. For example, contains (value) methods of key-value maps may iterate through key-value entries without blocking concurrent updates, to avoid unwanted performance bottlenecks, and consequently overlook the effects of some linearization-order predecessors. While such weakly-consistent operations may not be atomic, they still offer guarantees, e.g., only observing values that have been present.
10.5446/54932 (DOI)
Hello everyone, I'm Yusuke Matsushita at the University of Tokyo. I will talk about our ESOP 2020 paper, RustHorn: CHC-based Verification for Rust Programs. Our work RustHorn presents a novel reduction from Rust programs to CHCs for automated verification. We explain Rust and CHCs later. Our method removes pointers by leveraging Rust's ownership guarantees: it turns a pointer pa into the pair of its current and final target values, ⟨a, a°⟩, using the technique of prophecies. It supports various features, including recursive data types and reborrowing. We proved soundness and completeness of our reduction and evaluated our method using benchmarks. So, a bit about the background. CHC stands for Constrained Horn Clauses, and CHCs are commonly used for automated verification. Suppose you have a function mc91, and you want to verify its functional correctness in terms of this test. Then you turn this program into these logic formulas, called CHCs. Here mc91 is a predicate variable that represents the input-output relation of the original function. The functional correctness of the original program is equivalent to the satisfiability of these CHCs, and indeed they have a solution like this. There are a number of automated CHC solvers, such as Spacer and HoIce; they can automatically find solutions like this. So by reducing a program to CHCs like this, we can perform automated program verification. However, when the program has pointers, verification is often hard. A naive approach is to model the memory as an array h. It's simple, but it easily fails in the presence of dynamic memory allocation. For example, consider this recursive function just_rec. If we apply the naive approach to this program, we get these CHCs. They are satisfiable, but the solution requires universal quantification; in particular, we need a quantified invariant stating that some memory region is unchanged. And automatically finding this kind of solution is very hard. So our RustHorn removes pointers from programs to perform verification smoothly. Now I explain Rust in a nutshell. Rust is a systems programming language that allows low-level, efficient memory operations and also provides safety guarantees through its unique type system. A key word is ownership. In Rust, when you want to update an object, you need ownership of that object, and the ownership cannot be shared. In order to handle ownership flexibly, Rust has an operation called borrow: a temporary transfer of ownership to a newly created reference. Here's an example of borrowing. First, we create an integer object and name it a. By borrowing from a, we create a new pointer, or reference, pa that points at the integer object, and at this point we determine the deadline of the borrow. Until the deadline, pa has the ownership of the integer object; here, it increases the integer by 10. After the deadline, pa loses the ownership and a retrieves the ownership, so a can read the integer object and observe that it has been updated to 11. So these are the basics of borrowing. Here's an interesting but basic example of borrowing, which is important in terms of verification. The function max takes two integer references pa and pb and returns the one with the larger target integer value. The interesting point about the function max is that the returned address is determined by a dynamic condition. The function test takes two integer objects, a and b, and borrows them until the same deadline.
It passes the created references to the function max, takes the returned reference pc, and performs an increment through pc. And after the deadline, it checks that the values of a and b are now different. So now I describe our method. Our motivation is to remove pointers in Rust programs for smooth verification. A naive approach is to model each reference just as its target value. However, by doing so, we cannot model the lender after the borrow deadline. For example, if we apply this naive approach to the previously discussed program, then we get CHCs like this. It seems that we can represent the max function, but when we want to check the assertion here, we realize that we don't know the values of a and b at this point. So we cannot model this Rust program appropriately in this approach. So here's our method. The key idea is to take the final target value a° for each borrow. When we borrow a to a reference pa, we prophesy the final target value a° and model pa as ⟨a, a°⟩, the pair of the current target value a and the final target value a°. When we update the target of pa, we accordingly update a in the model. And when we release pa with the value ⟨a, a°⟩, we set a° to a at that point. Okay, so let's see an example. The Rust program with max and test is translated into these CHCs by our reduction. When we borrow a and b, we prophesy a° and b°, the values of a and b at the deadline here. Then we pass the references to the function max, and suppose a is larger than b. Then we throw away pb here, so we constrain b° to b, and we return pa, so r equals ⟨a, a°⟩. Now the returned reference is named pc. We perform the increment through pc and then we throw it away, so the constraint is c° = c + 1. By doing so, we have complete and sound information on the final target values a° and b°, so we just need to check that a° is not equal to b°. By this reduction, we have a sound and complete representation of the original program, so we can successfully verify the Rust program. Let's see an advanced example with a recursive data type. The function pick takes a reference to a list, pla, and returns a reference to some random element of the list. The test function inputs some list, borrows the list, calls pick to get some reference pa, and performs an increment on that reference. It then checks that the sum of the list has increased by one. This Rust program turns into these CHCs, and interestingly, these CHCs have a very simple solution for pick, like this. And indeed, we successfully verified this verification problem completely automatically in our experiment. We also formalized a core of Rust and our reduction from Rust to CHCs, and then proved soundness and completeness of our reduction. The statement is that for any Rust function f that does not input references, the input-output relation of f in Rust is equivalent to the least solution for f in our CHC representation. The proof goes by constructing a bisimulation between Rust execution and CHC resolution, modeling each prophecy a° as a logic variable. Details are described in the paper. Now I talk about the evaluation. We implemented a prototype Rust verifier that uses our method, named RustHorn. It analyzes Rust's mid-level intermediate representation (MIR) and supports various features of Rust. The back-end CHC solvers of RustHorn are Spacer and HoIce.
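To make the running example concrete, here is a small Rust sketch of the max/test pair described above (the function and variable names, the concrete integer type, and the parameter-based formulation are my reconstruction of what the slides show, not a quote from the paper):

```rust
// Return whichever reference points at the larger value; which pointer
// comes back depends on a dynamic condition.
fn max<'a>(pa: &'a mut u32, pb: &'a mut u32) -> &'a mut u32 {
    if *pa >= *pb { pa } else { pb }
}

fn test(mut a: u32, mut b: u32) {
    {
        // Borrow a and b until the same deadline (the end of this block).
        let pc = max(&mut a, &mut b);
        *pc += 1; // increment through the returned reference
    } // deadline: both borrows end here
    // Whichever value was the maximum got incremented, so they now differ.
    assert!(a != b);
}
```

In the CHC translation sketched in the talk, dropping the reference that max does not return contributes, say, b° = b, and dropping pc after the increment contributes c° = c + 1; those two constraints are exactly what lets the solver relate a° and b° and discharge the final assertion.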
Then we evaluated RustHorn in comparison with SeaHorn, which is a CHC-based verifier that uses the array-based reduction we discussed earlier. We used 58 benchmarks written both in Rust and C; 16 of them were from SeaHorn's tests and 42 were made by us, featuring various use cases of borrowing. Here's an overview of the experimental results. RustHorn succeeded on various benchmarks; in particular, RustHorn successfully verified a number of very interesting verification problems featuring borrowing and recursive data types. Now I talk about related work. There are a number of studies on CHC-based automated verification of pointer programs. SeaHorn for C and C++ and JayHorn for Java do not use ownership, unlike RustHorn, but they easily raise false alarms. Another approach uses a fractional ownership model, which is different from Rust's ownership model, and it requires extra annotations on ownership, unlike RustHorn. Also, there are a number of studies on verification of Rust programs that leverage Rust's ownership guarantees. Prusti models Rust programs in a separation logic, and Electrolysis models Rust programs in a purely functional language, leveraging Rust's ownership guarantees, but they do not support some reference operations, such as splitting references, unlike our RustHorn. So here's a summary. RustHorn proposes a novel approach to CHC-based automated verification of Rust programs. It leverages Rust's ownership guarantees: it turns a reference pa into the pair of its current and final target values, ⟨a, a°⟩. It supports various features, including recursive data types. We also did a correctness proof and an experimental evaluation. Our ongoing work is to prove the correctness of our reduction in Coq, supporting unsafe code extensively; it is under the project name RustHornBelt, and it unifies our RustHorn and the existing study RustBelt. That's all for my talk. Thank you for listening.
Reduction to the satisfiability problem for constrained Horn clauses (CHCs) is a widely studied approach to automated program verification. The current CHC-based methods for pointer-manipulating programs, however, are not very scalable. This paper proposes a novel translation of pointer-manipulating Rust programs into CHCs, which clears away pointers and memories by leveraging ownership. We formalize the translation for a simplified core of Rust and prove its correctness. We have implemented a prototype verifier for a subset of Rust and confirmed the effectiveness of our method.
10.5446/54933 (DOI)
So welcome, my name is Ákos Hajdu and I'm going to present an SMT-friendly formalization of the Solidity memory model, a paper originally published at ESOP 2020. Solidity is the most prominent programming language of the Ethereum blockchain, so it is a rather specialized language; nevertheless, its memory model has some interesting aspects, so I hope that you can take away some general ideas from this talk. So first, let's start with a little bit of context. In blockchain-based distributed computing platforms, there is a bunch of nodes storing the same data and executing the same code. The main feature is that there is no distinguished central party that has to be trusted; rather, a consensus protocol ensures that they all share the same view, as if there were a single world computer that stores data and executes code. The code is called a smart contract, it is mostly written in the Solidity programming language, and these contracts look similar to classes at first glance. They can define data types — for example, a struct called Record with a Boolean and an array — and they can also define state variables; this is the data that is stored permanently on the blockchain. For example, here, records is a mapping. Contracts can also define functions, which can be called as transactions. For example, append gets a record at a given address, sets its flag and pushes the data to the end of the array. As I already mentioned, state variables live in the permanent storage on the blockchain; in contrast, parameters, return values and local variables live in a transient memory that only exists locally while the individual nodes are executing the transaction. And in an internal scope, it is also possible to define pointers to the permanent storage, as seen, for example, in the parameter of the internal isSet function. Formal verification has gained great interest in this field, mostly due to the financial consequences of bugs. There are tools that operate on the compiled bytecode of smart contracts, which has various formalizations, but these tools are usually limited to common vulnerability patterns. On the other side, there are tools that operate on the Solidity level, so directly on the level of smart contracts; they can check high-level functional properties and are usually based on SMT, and here a precise formalization is really required to ensure that they follow the actual execution semantics. And basically, we observed that the memory model lacks a detailed and effective formalization that could be used as a building block for automated verifiers. So based on the motivations presented before, we did our formalization of the memory model in terms of simple SMT-based programs: we use SMT types, we allow variables to be declared, and we allow basic statements and expressions from SMT, so that these programs can be expressed in any modern SMT-based tool and can be checked by SMT solvers by simply translating them into SSA form. So let's jump into the actual formalization, starting with an overview. As introduced, one of the locations where data can be stored is the permanent storage on the blockchain. This is where the state variables are stored. It has pure value semantics, which means that there is no overlapping, there is no aliasing, and so on. For example, suppose that in a contract we have a struct T with an integer z and a struct S with an integer x and an array of T's.
For example, if we have a state variable t1, it will have its own slot. If we have a state variable s1, that also has its own slot, including its own T instances. And the same holds for arrays: they all have their own slots, recursively for the inner members. The other location for data is the transient memory that is used locally during the execution of the transactions. This is where parameters, return values, and local variables are stored, and in contrast to the storage, it has pure reference semantics. For example, we can allocate a new S instance, with the T and the array of T instances inside, and they are all pointers. Then we can just point to, for example, the second element of the array, or we can allocate another S instance which shares the array. So basically, arbitrary aliasing graphs can be created in memory. Now, a nice property is that there is no mixing: in storage there are no pointers, and in memory you cannot store by value. There is one exception, though: in a local, internal context, it is possible to define pointers to storage. So for example, here, tp is a pointer to one entity in storage. And interestingly, these pointers can be passed around in internal functions, and they can also be reassigned to point somewhere else. So let's start the formalization with the memory, for which we use a standard heap model, defining a separate heap per type, because Solidity is a strongly typed language. Pointers are simply SMT integers, structs are SMT datatypes with their fields inside, and arrays are datatypes with the actual SMT array inside plus the length. So for example, if we have the struct T and struct S from the previous example, then these types can be encoded with the following SMT datatypes; you can see, for example, that the T array is an array from integers to integers — basically pointers. And then for each type, we have an actual heap to dereference the pointers. Now, interestingly, there are no null pointers in Solidity: if we allocate something, its members and elements are recursively allocated to default values. So let's see an example, allocating a new S instance with a T array of size 2. This will be a pointer, so an SMT integer. First, the two T instances inside the array are allocated and stored on their respective heap — and we use an allocation counter that is incremented for each allocation. Then the array is allocated in its heap, with the two pointers from the previous steps. And finally, the S instance is also allocated, with a pointer to the actual array from the previous step, so that in the end we get our pointer. Accessing an element can then be done in the opposite direction, basically dereferencing the heaps step by step: for example, to reach a member of sm we dereference the heap for S at sm, take the ta field, dereference the array heap, take the first element, and so on — members can be accessed this way. So a nice aspect of the memory model of Solidity, compared to traditional languages, is that its scope is limited to the execution of a single transaction, and this is what makes this heap model tractable: we don't have to reason about allocating permanent, global stuff. One thing that needs to be ensured, though, is that new allocations do not overlap or alias with previous allocations. So for example, if we have this function f and we want to allocate something inside, then we know that this should not overlap with the parameter sm.
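A minimal sketch of this per-type heap model, transliterated into Rust for concreteness (in the paper everything lives in SMT — datatypes, arrays and an allocation counter — so the representation below is only an illustration; the non-aliasing point discussed next corresponds to assuming that incoming pointers sit below the allocation counter):

```rust
// One "heap" per type: a pointer is just an index into the corresponding store.
type Ptr = usize;

#[derive(Clone, Default)]
struct T { z: i64 }

#[derive(Clone, Default)]
struct TArray { elems: Vec<Ptr>, length: usize } // pointers into heap_t

#[derive(Clone, Default)]
struct S { x: i64, ta: Ptr } // ta points into heap_ta

#[derive(Default)]
struct Memory {
    heap_t: Vec<T>,
    heap_ta: Vec<TArray>,
    heap_s: Vec<S>,
    // In the SMT encoding a single allocation counter is used; here the
    // lengths of the per-type Vecs play that role.
}

impl Memory {
    // new S with a T array of size 2: allocate the T's, then the array,
    // then the S, each getting a fresh pointer; everything starts at defaults.
    fn alloc_s_with_two_ts(&mut self) -> Ptr {
        let t0 = { self.heap_t.push(T::default()); self.heap_t.len() - 1 };
        let t1 = { self.heap_t.push(T::default()); self.heap_t.len() - 1 };
        self.heap_ta.push(TArray { elems: vec![t0, t1], length: 2 });
        let ta = self.heap_ta.len() - 1;
        self.heap_s.push(S { x: 0, ta });
        self.heap_s.len() - 1
    }

    // Accessing sm.ta[0].z: dereference the heaps step by step.
    fn read_first_z(&self, sm: Ptr) -> i64 {
        let s = &self.heap_s[sm];
        let arr = &self.heap_ta[s.ta];
        self.heap_t[arr.elems[0]].z
    }
}
```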
And this can be done by assuming that the parameter sm is less than the allocation counter, and also recursively for its members, like the array, and again recursively for each array element. So this requires quantifiers in the general case, but it is limited to a decidable fragment; this is work in progress. Now let's move on to the storage, which has pure value semantics, so we will not be using heaps here. For example, if we have the struct T and S as previously, they are encoded like this. The main difference is that now nested structs and arrays are not pointers but the actual datatypes; so the T array now actually stores T instances and not just pointers. Now, if we have these state variables — t, s, and an S array — they simply become variables of these datatypes. A nice property of this encoding, compared to using a heap-based model, is that it ensures non-aliasing and deep copy out of the box. This is especially useful in modular verification: otherwise we would require many framing conditions, and this way we also know precisely what is being modified, which makes reasoning more effective. Now one big question remains: how do we deal with the local storage pointers if we use these datatypes to encode the value semantics of storage? The key observation here is that storage is basically a finite-depth tree of values. For example, let's say we have this struct T with an integer z and struct S with an integer x and a t, and also a t array, and suppose that we have state variables t1, s1, and an s array. So here the storage tree looks like the following. At the root, we have the contract itself; the state variable t1 is simply a T instance; s1 is an instance of S, which has a T inside and also a T array, which again has a number of T's inside; and sa is an S array, which again has S instances inside, which again recursively consist of T's and arrays of T's. Now the main observation is that each element can be identified by a path in this tree, and by indexing the edges, a path is just an array of integers. We can assign an index to each state variable, then recursively to each member, and so on, and we can also use array indices. So now if we need to point to an expression, like for example sa[8].ta[5], then we have to fit this expression onto the tree, which we call packing. In this example, sa[8].ta[5] identifies the highlighted path with the indices 2, 8, 1, and 5. So the pointer is just an array from integers to integers with the values mentioned just before. The opposite direction is when we have a pointer, say as a function parameter, and we want to use it, like this T storage pointer. Then we have to deconstruct where it can possibly point, and we call this unpacking. Here we create a conditional expression based on the tree: for example, if we have some pointer to a T instance, then we first have to check the following. If it starts with 0, then it can only be t1. Otherwise, if it starts with 1, then it can be either s1.t, if the next element is 0, or s1.ta indexed with the second element (which is the array index), if the next element is 1, and so on. And since the tree has a finite depth, we can always deconstruct it into a finite conditional expression.
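Here is a small sketch of the packing/unpacking idea, again transliterated into Rust purely for illustration (the actual formalization builds this conditional expression in SMT; the member indices below follow the numbers used in the example above and are otherwise my assumption):

```rust
// Storage is a finite-depth tree of values; a storage pointer is a path,
// i.e. an array of integers indexing the edges.
type Path = Vec<usize>;

#[derive(Default, Clone)]
struct T { z: i64 }

#[derive(Default, Clone)]
struct S { x: i64, t: T, ta: Vec<T> }

#[derive(Default)]
struct Contract {
    t1: T,      // state variable index 0
    s1: S,      // state variable index 1
    sa: Vec<S>, // state variable index 2
}

// Packing: an access expression becomes a path, e.g. sa[8].ta[5] ~> [2, 8, 1, 5].
fn pack_sa_ta(array_idx: usize, elem_idx: usize) -> Path {
    vec![2, array_idx, 1, elem_idx]
}

// Unpacking a "pointer to T": a finite conditional over the tree shape.
fn unpack_t<'a>(c: &'a mut Contract, p: &Path) -> Option<&'a mut T> {
    match p.as_slice() {
        [0] => Some(&mut c.t1),                                          // t1
        [1, 0] => Some(&mut c.s1.t),                                     // s1.t
        [1, 1, i] => c.s1.ta.get_mut(*i),                                // s1.ta[i]
        [2, j, 0] => c.sa.get_mut(*j).map(|s| &mut s.t),                 // sa[j].t
        [2, j, 1, i] => c.sa.get_mut(*j).and_then(|s| s.ta.get_mut(*i)), // sa[j].ta[i]
        _ => None,
    }
}
```

The structural point the sketch mirrors is that, because the storage tree has finite depth, unpacking is always a finite case split — no heap is needed for storage pointers.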
However, it is possible to make assignments between the two locations, which can result in very different behaviors, like deep copy or just pointer assignment. The Solidity semantics can be summarized with this table; I'm not going to go into the details, but our formalization covers all the cases, with their different aspects. And these were just the key ideas — the paper has all the details and all the corner cases, most importantly formalized, so I encourage you to check it out. Let's now evaluate whether we achieved our goals. We implemented this encoding in our tool, solc-verify, which is a modular verifier based on Boogie and SMT, using the encoding that I have just presented. And we compared it to Mythril, which is a symbolic execution engine operating over the bytecode, and to VeriSol, which is again a modular, BMC-based tool, also based on Boogie and SMT, but which uses a heap-based model for the storage as well. We also compared to SMTChecker, which is the intra-function analyzer built into the Solidity compiler. Just a side note: these experiments were performed for ESOP 2020, so basically we did them almost one and a half years ago, and the tools, including ours, could have improved since then. We developed a test suite with around 300 tests organized into different categories like assignments, deleting, initialization, storage operations, storage pointers, and so on. Each test exercises a specific feature and checks the result with assertions. So for example here, the contract initializes a fixed-size memory array and checks whether it is really initialized to the proper length and the proper default values. This way, the tests should only pass if tools properly translate and run the analysis with the appropriate semantics. Now you can see the results grouped by categories, each tool having a bar with a color coding: green means correct, gray is unsupported, and red is a wrong result. And highlighted with these purple boxes, you can see that the bytecode-level Mythril is precise — it does not have to deal with many of these high-level features — but we still managed to find some bugs; see for example this issue on the slides. And you can also see that solc-verify comes close, especially compared to other SMT-based tools. We still have some unsupported features that we formalized but did not implement yet, like quantifiers, and there are some that we chose not to support because they are rare or will be removed in newer versions of Solidity. Another aspect is efficiency, and you can see that solc-verify, represented by the green bars, is one or two orders of magnitude faster than Mythril, which is represented by the blue bars. It is also faster than VeriSol. And with SMTChecker — it did not support most of the features at that time, so we couldn't really compare. So to summarize, I presented an SMT-friendly formalization of the Solidity memory model. There are two data locations: memory, which is modeled with a standard heap, and storage, which has pure value semantics, with the exception of pointers in a local context. The formalization is implemented in the solc-verify modular verifier, and our conclusion was that solc-verify is on par with bytecode-level tools, but at a lower computational cost. And since then, we have actually been working on more things, like verifying properties that involve Solidity events.
We also started to support quantified properties, and we are working on upgrading to the latest version, Solidity 0.8, because these experiments were still performed with 0.5, which was the latest back then. So thank you for your attention, and please check the paper, our tools, or our websites. Thank you.
Solidity is the dominant programming language for Ethereum smart contracts. This paper presents a high-level formalization of the Solidity language with a focus on the memory model. The presented formalization covers all features of the language related to managing state and memory. In addition, the formalization we provide is effective: all but few features can be encoded in the quantifier-free fragment of standard SMT theories. This enables precise and efficient reasoning about the state of smart contracts written in Solidity. The formalization is implemented in the solc-verify verifier and we provide an extensive set of tests that covers the breadth of the required semantics. We also provide an evaluation on the test set that validates the semantics and shows the novelty of the approach compared to other Solidity-level contract analysis tools.
10.5446/54941 (DOI)
I was a developer and maintainer of the OS X installer and so on and so forth, blah, blah, blah. But I'm just a plain user; actually I use Plone to find excuses to come here and meet you guys every year, even though I failed for two years in a row. Who here knows what a single page application is? Please raise your hands. Okay. Who of you were at the keynote earlier today? Okay. So just to make this presentation really short: basically Angular Universal is the solution. Thank you for coming. Let's go home. Because Angular Universal is the solution for every single problem you have, and some you have no idea about. It's really amazing. It's fantastic. And I never made it work. I remember Timo yesterday saying, oh, Eric did that. No, I did not. I'm not smart enough. Eric can do it. I can't. I tried. I failed miserably. So basically I like the idea; it's just important to say that I cannot do it so far. But let's go back to the basics. When we talk about SEO, rendering is part of the problem, right? Honestly, it's not the biggest part or even the most challenging part. Of course, it's kind of depressing when you're sharing your amazing web app on Skype and then you see what Skype tries to do — oh, let me show a nice preview — and crap comes out on the other side, or on Slack and so on. But there are more things. First thing is content. I worked at this small company in the past where they believed anything can be solved with marketing spending. So, okay, we create an MVP, we don't care that much about SEO, because we have money: by advertising you bring people to your site, everything is solved. But I come from this other place, you know, the Plone community. I used to have my own company and we dealt with customers that had no marketing money, but they required traffic coming from Google and other places. So, content. Content is king. Okay, so, to get better ranking on Google, you need to fill in all the inputs on your form. Someone earlier today in his talk mentioned this add-on called Collective Jackal — I never heard of it, I'm going to try it — but basically it tells you: oh, it's missing a summary in here, it's missing tags, it's missing some metadata you could put in there. So, if you have good content and you use Plone as you always do, putting in a nice title and so on, you get good results by default. Then we go for code standards. Come on. Everybody knows — even those SEO experts that you hire over the Internet, even those guys know — you put meta tags in and magic happens, right? So, you need to put additional tags in there. There was this crazy moment a few years ago with one of our customers: they were quite picky, saying you need to have all those tags, and honestly, I ended up with a file with 10K of meta tags. The page did not have that much content, but meta tags were covered. But, you see, page speed, being mobile friendly — everything counts nowadays for Google, and Google is kind of picky with that. And sitemaps and robots: you know, you need to tell robots they are welcome, we love them, we want them to crawl the site. And usually that's the kind of thing where you put your site live and never think, oh, I need to tell Google — no, Google is going to find me. Yeah, eventually. But you can make your life easier. So, okay. That was not supposed to be here. So, I got one from the board. So, am I allowed to say fuck or not? Code of conduct. Even so — in fact, they say fuck nowadays, so I don't think it's a problem. But this community is awesome, and Plone is amazing as a product.
And Plone was ready for SEO. When I say ready, it's not like, oh, it's perfect — but I had this friendly competition with our friends of Drupal in Brazil, and I said a few years ago, like 2012: Plone is so much better than Drupal out of the box. Let's play a game. Let's build the same site for a government initiative, one by the Drupal community, one by Plone. No add-ons, just content and a plain out-of-the-box installation. We basically ran circles around them. A bunch of links coming in — of course we played some dirty tricks as well, because we would not lose to them. But it works. It was perfect. And most people have no idea that you can go to the site settings and say, okay, here's your sitemap for free, here's Dublin Core metadata, add more stuff. Everything's kind of there; you just need to fill in the forms. And also caching, because that's also something: if you get a $5 machine on DigitalOcean and put an old Plone 4 site in there, you just go and activate the memory cache — don't worry about invalidation — and it works. It's amazing. It looks fast. If you put Cloudflare in front, it's even better. It's quite impressive how good Plone got in terms of performance. But we're not talking about good old Plone here, because now we like new shiny things. So we like Angular. First thing, why Angular? Anyone in here use React? Okay. Angular. I'll put it this way: David raised his hand. David Glick in there. Only React. That says a lot. I was told by multiple people, yeah, React is so much better, but it requires better developers. And until two years ago, I would say, oh, I hate front-end development and so on, because I'm really bad at it. And someone told me, oh, Angular has everything included, it's easy to get up and running, it's more comfortable. Things broke a lot between Angular 2.0 alpha-something and the final release. It was like: it's working. Oh, let's upgrade. It's not working. What happened? And then you go to the changelog and so on. But there are things you need to do. I'm not talking about content — come on, we know content is king. But semantics: you're writing your own templates now, you do not rely on Plone. Things that Plone was doing for us, you now need to do by yourself. So if you love div nested inside div nested inside div with things wasted in there, good for you, but it's better when you use those semantic tags we have nowadays. Also the basic metadata you can add. I remember on Angular 2 it was like: okay, you can set the title; for everything else there were like 20 different tutorials, and they would work for one really specific version and so on, and then you'd use them a few months later — not working, read again. Nowadays we have the Meta service; I think I implemented that a few weeks ago. So every time I resolve a route — and that's important, because I found another small issue with analytics — every time I resolve the route and get the data, I update stuff before I go to the component view. Otherwise, what happens with analytics? Analytics says: oh, another page view. The URL is about-us and the title is "Welcome to a Company", because it was the title of the previous page; everything else was lost. Honestly, it took me a while to figure out why every single page view on analytics was off by one for me — the title from the previous page and everything else from the new one. Twitter cards, Open Graph. That's basic, right?
Especially because when you share a link on Twitter or on Facebook or on Slack — Slack goes for the Twitter card and Open Graph tags — they look at this information to render your page preview. So in your Angular application, put this kind of stuff in. And schema.org — here I have an example. As we are a B2B company, we never cared that much about getting the best results on SEO, because people know us, because we provide services for everyone and their dogs as well. But schema.org markup about organization and person and so on — at my previous company, every single listing, and we had thousands of them, would have all the information: everything about when you could book the car, where the car is, and so on and so forth. And when I would go to the Google webmaster tools, it was like a dream: a bunch of lines in here, thousands of items for everything. But again, your mileage will vary according to your business. Now, rendering and speed. As I said, Eric can make Angular Universal work. He told me; I was not able to do it. I used prerender.io. I'm going to show the code; it's quite simple. By the way, the guy who created and maintains prerender.io is really nice. If you deploy and it's not working, send him an email and say, oh, please move to headless Chrome and do not use PhantomJS — two minutes later, everything is working. Oh, perfect. And by the way, I'm not paying, so I felt really bad. In any case, I used server-side rendering only for some user agents. And about what Eric asked before — oh, Google could penalize you — one thing you can do is basically say: okay, Google, pass through; but Skype bot, you're stupid, here's a prerendered page for you. And that's true, honestly: I forgot at some point to configure prerender.io properly. Google was fine with that, but I found out when sharing links on Slack and sharing links on Skype. AOT and lazy loading — it took me a while to make them work, but the difference is amazing: I got twice the speed, half the size for our Briffy.io website. Caching — I'm going to talk more about that and responsive images a little bit later. Sitemaps and robots, come on. Put robots.txt on your site. Who in here has an Angular application without robots.txt being served from the same nginx? Raise your hand. Sorry, I found out today as well — that's shameful, there's a bug in my Dockerfile and it's not being served. Sitemaps are quite important: basically, if you provide Bing and Google webmaster tools with your sitemap, you find out about problems and issues really early on — they complain about the quality and you can work on it. Otherwise, you rely only on people finding you and linking to your site and so on. This is such low-hanging fruit — at Simples in the past we would basically say: okay, let me take a look at your site, add us to your organization in the webmaster tools. What? Okay, we can help you, really. If you don't know about this, it's like step zero. How did we do it? We use Angular and prerender.io. Cloudflare is not mentioned here, but Cloudflare is really important for us. And then we use a bunch of solutions from our companies — things that we cooked up in some way or other. If you want to see code, I can show code; Rubine here can show the code. And we are going to open up a big part of it, so it's quite easy if you want to see it. An API gateway based on Varnish. Why? Because everything is running inside Docker on a Kubernetes cluster. Depending on the API call, I cache the response.
Because then the second call is immediate, and the third one too. If I change the content, what do I do? Invalidate the cache. By the way, if I change the content, I invalidate the cache on Varnish, meaning the API calls. I also tell Cloudflare. And third, I tell prerender.io to try to prerender me again, meaning the next time any bot arrives, there's a fresh new version. I'm going to show the code; it's quite, quite simple, and it goes a long way. The CMS is this tool called Plone — it's basically Plone out of the box. There's a policy configuration package, because I like to keep the configuration in a Git repository; the content types are basically Dexterity through-the-web creation. And that's it. It's not really hard to duplicate that. Then there's Thumbor. Thumbor is an image server. Who in here has heard about Thumbor in the past? Yeah, because you were at my talk last year, even though I was not there. If we have time, I'm going to do a little demo of Thumbor; if we don't, I have a lightning talk about content rules and Slack. What I'm going to do is that instead of doing one live demo, I'm going to do three, because I live dangerously. So Thumbor basically solves the problem of scaling, cropping and caching your images. You can basically serve them from a different location. There are a bunch of add-ons, including ones like: okay, if the browser coming to get the image supports WebP, give it WebP; or automatically apply some kind of compression if it's a mobile device; deliver progressive images; and so on and so forth. Everything is done by our friends in the Brazilian Python community. They built it because they run one of the largest portals in Brazil — it's a huge news portal and they serve millions of images, different images, every day. It's insane. And there are companies basically taking their solution and selling it to you like, oh, I have this amazing image solution. I know them. And a sitemap microservice written in Pyramid, really simple, that basically talks to Plone, gets the sitemap, talks to some configuration, gets the static routes, and delivers that to Google. This is the configuration for prerender. It's a simple configuration. This is the block that actually does the magic: try to find the file in the file system; if you don't find it, go to this block in here. I had to put this in here because of our environment, especially because I have a local service we need to talk to — so, do not send to prerender; set the token here. If it's a robot — and there's a list of user agents here — it is treated as a robot: set the prerender flag and escape the URL for prerender. If the user agent is prerender itself, please do not use prerender, otherwise you'll get into a crazy loop. If it's a static file, forget about it; otherwise make the call there. It's quite easy. And in here, if a request comes in for sitemap or sitemap00001, send it to this internal microservice that basically deals with sitemaps. Okay, second one, Cloudflare. We basically listen to events — if you were at the previous talk — and every time there's an event, we have a service that listens to it and purges stuff away: in here, purge on Cloudflare; in here, prerender.io. So the beauty of this is that for us we have 40 pages; if we change something, we can invalidate the cache manually anytime. For other companies that have, for instance, listings — they have 3,000, 4,000, and they change all the time — it's a different game. You need to have something like that just to keep everything working as expected. And we have this small service that runs Pyramid 1.9, latest Python.
And basically it's a view that has an index adapter to generate the sitemap index, and then a CMS adapter pointing to Plone. The static adapter is a bunch of small rules like, okay, /imprint — all German companies need to have an imprint on their site — and honestly, it's not something you change every single week, so we put it on the front end: the imprint and similar pages, even a fun page if you want to be funny. So: static adapter here, and a listings adapter. We did not implement that one, but at the previous company this would basically talk to the API, the main API of the business, and say: get me all the car listings, ordered in this way, and get me also the information about the last modification. Cache this from time to time, of course, and generate a new sitemap. Let me show you the code for this. It's a Pyramid app. Three different routes, configured here. The view is this one, the sitemap view: it's going to be text/xml, in some cases compressed; it generates the sitemap and calls the right adapter depending on the name. And for the index sitemap, by name, these are the adapters I'm using: static, CMS. And here, the index sitemap — you can see how concerned we were about coming back to this code; instead of having a template — come on, that's another file to maintain. Content sitemaps: we have two at the moment, and we just iterate and generate the sitemap index. For the static one, these are the routes we have on Angular that exist only on Angular, no calls to Plone. Same thing for Plone — for Plone we have different languages and so on and so forth, even though we never implemented that. And what we implemented there was a browser view returning the results from the sitemap.xml.gz as JSON. And then there are some rules to make the call basically use virtual host mapping, because I want all the URLs to be those of my public-facing website instead of the internal Plone service. And in here it's quite easy, but there was another case at the previous company where I had to do some mangling with the data, saying: okay, if the entry has this name, I know by default it's not this, do that, and so on — but massage the data and generate the sitemap. Okay, that being said, I want to say thank you. I want to thank my friends in here, especially Eric. Asko is not here, but Eric and Asko are part of my daily development cycle with Plone and Angular, because every time it's like: okay, I want to do this, I have no idea how to do it, I Google, and the best result is either from Asko when it's Plone or from Eric when it's Angular. To the point that I was joking the other day: why do I waste my time? I should basically look for anything Eric did and implement that. And Hoda knows this one: we are trying to break a monolithic Angular application into smaller things — we use this form library called ng-formly, and we implemented Material Design on top of it. Let's give it back to the community. Oh, God, it takes forever. But looking at what they did with the Plone REST API SDK, it's possible to be done. I have faith, even though I do not have time. And I'd like to thank you all for coming. I'm available for any questions. If you want to see code, Hoda or myself, we're going to be here for the sprints as well. Any questions? Thank you very much. Do we have any questions? Could you talk a little bit more again about the responsive images? Oh, Thumbor. Okay, you said responsive images, so... Okay, I'm going to do something I might regret later. Okay?
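Stepping back to the sitemap service for a moment before the demo: the adapter idea it implements looks roughly like the sketch below, written in Rust purely for illustration — the real service is the Pyramid app just described, and only the standard sitemap XML structure here is taken as given.

```rust
// Each adapter contributes one sitemap; the index simply points at all of them.
trait SitemapAdapter {
    fn name(&self) -> &str;        // e.g. "static", "cms", "listings"
    fn urls(&self) -> Vec<String>; // the URLs that go into its sitemap
}

fn sitemap_index(base: &str, adapters: &[Box<dyn SitemapAdapter>]) -> String {
    let mut xml = String::from(
        r#"<?xml version="1.0" encoding="UTF-8"?><sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">"#,
    );
    for adapter in adapters {
        xml.push_str(&format!(
            "<sitemap><loc>{}/sitemap-{}.xml</loc></sitemap>",
            base,
            adapter.name()
        ));
    }
    xml.push_str("</sitemapindex>");
    xml
}

fn sitemap_for(adapter: &dyn SitemapAdapter) -> String {
    let mut xml = String::from(
        r#"<?xml version="1.0" encoding="UTF-8"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">"#,
    );
    for url in adapter.urls() {
        xml.push_str(&format!("<url><loc>{}</loc></url>", url));
    }
    xml.push_str("</urlset>");
    xml
}
```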
It's not about the picture. The picture is not that bad. Picture 2011 Plone conference, right? In here, it's, I'm running on my machine, a Plone, a TEMBO server with an image from the file system. Okay? If it was in production, the image would be on S3 or any crazy other setup. It's possible to load them from another site. But in here, so basically same original size, okay? But I want to get a scaled version. Let's say 64 by 64. Okay? We resized this. But that was easy. Let's get something like 600 by 200. That's dumb. Because if I keep playing with this, like, okay, by... It's really useful, right? This is basically what we do with Plone. If we start playing with different sizes, at some point, we end up with this. There are many plugins that allow you to resize and crop on the client. But one thing these guys at Global, the guys that developed this, they found out is like, when you need to produce like 20 news articles per day with dozens of pictures, sorry, the journalist really doesn't want to. Let me try to crop. So what you do in here is you basically say, be smart, please. And now Tumble knows what's important. Actually cut my bunny ears not so smart. But if I want to know exactly what happened, in here I think I basically say the bug. And it's really hard for you to see, but there's a boundy box in here, another one in here, a third one in here. And then it's using open CV. It knows, okay, there's faces in here. What should I do? So it calculates an average in here and tries to find the best possible matches. What if we're not talking about faces? It goes to the second mechanism is focal points. When you take a picture, select some focal points, it goes for the focal points and crop accordingly. There's many types of filters. You'll not remember. Everything is configured by the URL. And in here you see it's written unsafe because I'm not using the signing algorithm, because otherwise it would take forever. But usually what you do in production, you come in here and you send a signature for the URL to avoid the nigh of service. Because someone could basically say, okay, let's try every single possible size and permutation. Also you can add filters in here or you can add filters by default. For instance, I want all images we produce to have a watermark, a really, really discrete watermark. John, you basically add that to the configuration, every picture comes with a watermark. Or I want to blur images because I want to play a prank on someone. I don't know. Or white balance or whatever. One of the things we use is basically we optimize PNG files. And also there's one thing more. I think it's this. 300 by 200. It also has an endpoint telling you what the hell is happening. So origin, face detection, and so on and so forth. Honestly, this, if you host content that requires images in large scale, this solution is like given. You have Docker configurations for it, download, just say, okay, I want to load the image from here. I want to save the images in here, cache in here, and go for it. You save a lot of time. So responsive images, if you detect you're dealing with the bio instead of serving the big one, you serve this my one. But it's not getting the big one and changing your generation, a new one in here with that. Okay. Thank you very much. Can you say anything about the page with the solution or is it just always you'll be calculated? Actually, if cache is everything. Yes. 
So for instance, the second time I access this, it will either already be a file-system version or come from this cache backend. There is a cache because we're talking about a solution that needs to handle lots of traffic. Okay. Thank you very much. Thank you.
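As a side note on the URL-based API shown in that demo: in production Thumbor URLs are signed rather than "unsafe", and the libthumbor helper can generate them. A hedged sketch, where the security key, image path and filter are placeholders:

    # Hedged sketch of generating a signed Thumbor URL with libthumbor.
    from libthumbor import CryptoURL

    crypto = CryptoURL(key='MY_SECURITY_KEY')    # must match the Thumbor server's key

    signed_path = crypto.generate(
        width=300,
        height=200,
        smart=True,                              # face / feature aware cropping
        filters=['quality(80)'],                 # assuming libthumbor's filters option
        image_url='uploads/bunny.jpg',
    )
    url = 'https://thumbor.example.com' + signed_path   # prepend the Thumbor server's base URL

Because the signature covers the size and the filters, a visitor cannot simply enumerate every possible size and permutation and use the resizer as a denial-of-service vector, which is the point made above about "unsafe" URLs.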
Plone provides, out of the box, a very good and user-friendly SEO story, but with the brave new world of single-page applications and headless CMS this could change. In this talk I summarize the last 3 years of challenges and integrations needed to make a headless Plone plus Angular application SEO friendly.
10.5446/54943 (DOI)
I think some of you missed the On My Bass song by Plone. I would suggest you Google it. The band that is also a CMS called Plone. Okay. So this morning I want to talk to you about how Triment improved their search. They were having challenges with search and I was called in to help with that. So who is Triment? Triment runs the entire public transportation system for the metropolitan area of Portland, Oregon. And just a few numbers. They have a budget of 507 million. They cover a pretty large area, lots of people, and they are a free-standing government agency that is not part of the city or any state or any or the state. They run a lot of vehicles, buses with lots of buses, lots of stop, lots of bus lines. The mass, which is the light rail, sort of like a long-distance streetcar, with again lots of vehicles, five rail lines, lots of miles of service and stations. The Westside Express service is actually a full-size train for commuters, which has a almost 50 mile service line with five stations and six vehicles. And then they run power transit. The Lyft, which is power transit service, and they run the streetcar, which is owned by the city of Portland, but it's actually maintained and operated by Triment. Some of you, probably, I know some of you went to Plycon in Portland this year and last year, and you will have experienced how awesome the public transportation system of Portland is. It is really one of the best in the US, in my opinion. In terms of trips, people take 101 million trips every year. In terms of coverage or size of the transportation system is the ninth per capita in the country, even though Portland is a relatively small city. It's the 24th largest city in the US, but in terms of transportation system, it's the ninth, which would put it on par with Atlanta, which is a very large city. And I doubt that Atlanta has such a good system. Talking about employees, we get to the point where we start thinking about users for our systems. So Triment has 2,900 employees. 90% of these work in the field. So they are bus drivers, train conductors, operators, supervisors, maintenance workers, construction workers. And so therefore they do need access to their intranet, but they don't access it very frequently. They don't have, unlike the 10% that is administrative, and they sit in an office and work in an office and presumably have access to a computer all along the restroom. So the majority of the employees have the Can-Re-Roll in-plome, and it's there authentically with a web server off. There are just about a dozen or maybe two dozen, roughly speaking, of content developers, which have the Editor of the Row. So that gives you an idea of the user base for Triment. So if it's not in the browser, we don't really know what to do with it. So let's talk about their IT infrastructure. So the public facing site, Triment.org, is not actually Chrome. So if you go there, that's not Chrome. Also not Chrome. You can imagine an organization of this size and complexity has a lot of IT systems, internal web apps and internal sites. Obviously they run on Chrome, but they do have Chrome. They do run Chrome sites. And before this project, they had over five Chrome sites, and I say over because some five sites were actually actively being used, and there were some others that were basically abandoned. So we got rid of them. Of those five, three were blogs. 
One was a knowledge base, sort of a document management system, and it is still the repository for all of their technical documentation manuals for all the technical hardware that they have. So for a train nerve, this is a paradox. You can find information about anything concerning trains, buses, communication systems, which is anything you want. It's just awesome. I talked about the other side, and then they have the internet, which is what the majority of us talk is about. And after this project, so there were two sites left because three blogs were merged into the internet. The knowledge base was upgraded from Chrome 3 something to Chrome 5, the latest one at the time, and the reason why they were stuck on three was because they had some Chrome 4 artists add-ons installed. So they could not upgrade, as you all probably are familiar with. And I would like to thank Nathan for the well-carned, fixed persistent utilities, saved the day many times. And the internet was already on Chrome 4.2 something or other, and so we upgraded to the latest. By the time we were done, there were actually four blogs, so they all got merged using lineage. So while this project was in process, the four blogs kept looking exactly the way they were before just by having a sub-skin for the sub-site in lineage, which is awesome. And then I'm going to talk a lot more about train nerve, so please stay. Alright. Now you know who Trimec is. The project was basically the main goal was to improve searchability of their internet, apart from upgrading the knowledge base. And there were three pieces to this. One was obviously, if it's not responsive, people are not going to be able to use it on their phones. So it's got the bear's perspective. But this was not really a re-thieving project, so we decided to just stick to basic bootstrap with just a few color changes. So they have an internal design team, and so their directive was to do not get fancy. Just don't make any different than bootstrap so that I don't have to do a lot of work with Diazo. Just give us a couple of templates that look like bootstrap and we'll get fancy. Another thing was the use of covers, collective cover, because the idea was news publications, magazines and newspapers and so on, know that they have developed an art to basically guiding you, the user, to what they want you to find. And collective cover is a perfect application for this. And so the idea was, well, let's see if we can surface the information that we want users to find in a dynamic way so that people don't have to create a bunch of links on pages manually and so on. And so I created, I developed a few custom tiles. At the time, collective cover did not have a calendar tile, no it doesn't, so I developed a calendar tile, and that is an example. So you as a content developer, you just create the events wherever they need to be on the site, and the page with the calendar tile just surfaces them and you don't have to do anything about it. Just one word here about this process of using covers and bootstrap. Lineage was great for this because I created some new landing pages using collective cover that were obviously themed with bootstrap and tested and worked with the whole composite composition process works with bootstrap. So, but they needed to be built by the content developers. They needed to do a bunch of work to populate the tiles just the way they wanted them. And these tiles had to contain a bunch of links to production content. 
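For the calendar tile mentioned above, the underlying idea is just a catalog query for upcoming events, so the tile stays dynamic and editors never have to maintain links by hand. A hedged, minimal sketch of such a query; the names are illustrative and this is not the actual tile code:

    # Hedged sketch of the kind of query a calendar tile could run.
    from DateTime import DateTime
    from plone import api

    def upcoming_events(limit=5):
        catalog = api.portal.get_tool('portal_catalog')
        brains = catalog(
            portal_type='Event',
            review_state='published',
            end={'query': DateTime(), 'range': 'min'},   # events that have not ended yet
            sort_on='start',
            sort_limit=limit,
        )
        return brains[:limit]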
So instead of doing this whole work on a, as usual, on a staging server, on our dev server, this all actually happened on the production server by using subsites with the niche. So the rest of the site was completely unchanged, still the old theme, nothing changed, but the subsites had the new theme, bootstrap, cover installed running in there. And so they could create these covers and then on the product, on the launch day, I could just turn on the theme for the rest of the site, copy and paste the pages, the cover objects over to where they needed to be and switch them to be the default landing pages for those folders. So really, really, that's one reason why I did it, that's one of my stars in the, in the contest that we have out there. All right. So let's dig a little deeper into TriNet, the TriMet's intranet. So the challenge, the challenge is, oh my God, our search results are useless, but we're gonna do it. And IT gave us mandate, no elastic search, they did not want to have this additional dependency and stack installed on these virtual boxes that they provided. So what are we gonna do? Again, let's look a little bit closer at that intranet, it was running on a phone 4.2.x. It didn't actually have a lot of add-ons because of the phone for artists, they were completely allergic to any add-ons, they just wanted a plain phone as much as possible. So they, these were basically the main add-ons that they had, if you can't read them, they are scroll, for blogs, content, well, porplets, web server, all, it's the XOR, which is only in one place and really, like, could do without it, but, and platformed it. It's basically, some burst, almost unchanged, it's just a little bit customized in the portal skin, it's custom. And in terms of content, it had, or has about 10,000 items. 70% of those were files and images, and the rest were basically, most of them were pages, blog posts, and folders. So this is not really a huge site, but it's, it's, it's where I think Sloan's default search starts falling down. Probably even, even before 10, before you reach 10,000, but by that point, it becomes useless, especially with this many files. So they used a bunch of work arrivals. One was to exclude files and images, which you can do very easily in the search settings control panel, just uncheck the box for file and image, and those, they will no longer appear. And the result of this was, for example, if I search for non-revenue vehicle, I, before I got 226 results, most of which were files, and most of which, the files, okay. So the snippets, the description under there does not tell you anything about the content, where Sloan's search found the keywords. In other words, you could have these words anywhere in a PDF, and you wouldn't know by looking at this, why, why am I getting this result? So they were really frustrated by that, and therefore, they turned up files and images, and now we have, for the same search, we have 28 results. And this is what it looks like now, for the same, for the same search. So this is just a little preview of more to come later. Another workaround is keyword stuffing. So somebody at some point figured out, what is this keyword thing? I wonder if I can trick the search by putting, you know, all the keywords that I think people want to find, want to use when they search for this particular page. I put them all in here, and I can get my results up, like there's this whole ICO craze. 
And so there were about 80 keywords, most of which were duplicates, where they were like singular and plural forms of the same word, and hyphenated and non-hyphenated versions of composite words, or capitalized in the lower case, which obviously doesn't make any difference, or stuff like that. So that was useless. And then link force. I don't actually remember 1995, 1995, like hosts, or excites, or alt-tubist, or even Yahoo, the way they looked. They were basically just, just, because they're actual, you know, the keyword search that you typed wasn't really that helpful before Google came along. So what they did, they had basically all these categories with links, and you clicked one link in there, and it took you to another place with lots of other subcategories and lots of other links, and that's how it worked, and that's basically, they figured out how to replicate this insanity. And, you know, when you are a user and you're looking for something in particular, having a whole page full of links isn't really going to help you. So, let's talk a little bit more about search. Let me get there. Okay. I don't know if you've ever taken, like, a bird's eye view of a Google search results page and realized how little effort it takes for your eyes to scan it and to immediately discard the things that you're not interested in, and to zero in on exactly the right thing that is the good match for you. And that, and they do, and it's done with that, a whole lot of fancy design stuff. Look at this. This is an individual search result, and look at the colors, look at the font sizes, and the spacing, and the font weights, and I think we can do worse than just copy what Google has figured out. And also, okay, so the search page at a micro level, at the individual search result level, has some metadata that we want people to see. So, definitely we want a title. We want a good descriptive title. Google puts URLs, I'm not sure if you noticed, Google puts URLs, and you'd think that in this day and age, a lot of people don't really know what to do with URLs, but actually it's vital. It's really, really important to know what you're clicking on. Like, just looking at the domain, it kind of gets a good idea of where you're going. So that is really important. The byline, Google doesn't show it, but in an intranet, I think it's very important, you know, having a date, last modified, and the author. The snippet, a.k.a. a description, a.k.a. a summary, which, you know, the thing that we can do in clone is different from what Google can do. But anyway, I think, when we decided that tags are useful to have in search results, and we'll talk more about those later. And then icons or thumbnails could also be very good hints that I implemented something for that, but then it ended up not being used for now, maybe later. So, okay, let's skip a bit of these. All right. This is a typical, well, not typical, but it's one of the worst examples of a client on the file. This thing only makes sense to the person who created it. Obviously, how could this ever deny blurred out the name to protect the user? So, title to title is really important, and just using the file name for a title is not as less than usual. A lot of people, a lot of times, even though the description field is not mandatory to save your content and clone, a lot of people think they have to put something in it, so just copy and paste the title. 
I mean, that, again, is another example of, I mean, give me some information about this thing, type 5-a-dometer location, you know, just repeat the title. So, but to be honest, this is not the user's fault or the content editor's fault. It's a little bit, a lot, a clone's fault that a clone doesn't give editors any feedback on the content quality that they're creating. So, you know, these examples that I've showed you, there's nothing to prevent people, or to give people any hints that there's anything to improve there. And even if they wanted to improve something, where, like, okay, say me as a content editor, I know I have created a bunch of files, and I did not give them any titles and the file names are approaches, but I created hundreds of them, and they're all local place. How do I find them? I mean, do I really have to go through folder contents and search from that way? It's, or even worse, description or other metadata that we would want to fix. So that's where we are falling down. We're not giving editors any help at all. And, well, just as an example, this is a screenshot of a search result as it looks now. So you see there's a tag, there's a violent, there's a URL without the HTTP in front because that's useless. And a good title and a good description. So that's a good tip. That's if all the search results were like that, we would still be starting working with. There's more. Okay, Adam Acroletta. No. I'll get there. Okay, fine. Adam Acroletta. Google, you had one job. Give me the picture of a guy without a tattoo. Actually, Adam Acroletta, search is about two jobs. Sorting and filtering because Google has, let's just start with, one trillion possible search results. It doesn't just sort them, it doesn't just give you a trillion search results sorted by the thing that you, the keyword that you typed. No, it also filters. But, okay. In Plone, the scoring algorithm that we use is something called Ocfee at the end 25. The end stands for best match. And not to scare you with this formula, but there is actually, even though, even if you don't have any idea what this means, and what this does, there is a lot you can learn just by looking at the formula and by looking at what it depends on and what it does not depend on. So, what it depends on is the frequency of each keyword in the particular document that is computing a score before. So, how often does this keyword appear? The length of this particular document, the average length of all documents, the ratio of those two, and then the total number of documents in the whole set and the number of documents containing this particular score. That's it. That's all this depends on. Okay, if you think about that for a minute, you realize that this scoring algorithm does not understand anything about the context of a keyword in terms of either the location in the site or the location of a keyword inside the document. It doesn't include any document that does not contain any other keywords. So, if you're using a synonym, for example, that's your own fault. If you misspell something, you can't help with that. And obviously it doesn't offer any suggestions. So, this is why a clones default search is, you know, it's fine when you install a clone site and you start creating content, you know, 10 pages, a few dozen pages, a couple hundred pages. Oh, search actually works pretty great. Now you get to a thousand, or 10,000 and forget it. 
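The formula on that slide is the standard Okapi BM25 ranking function ("BM" is for "best match"; the Okapi index inside Plone's ZCTextIndex implements a variant of it). Writing out the textbook form makes the point obvious: the score depends only on term counts and document lengths, nothing else.

    import math

    def bm25(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
        """Textbook Okapi BM25 score of one document for a query.

        doc_freq maps each term to the number of documents containing it;
        n_docs and avg_len describe the whole collection.
        """
        doc_len = len(doc_terms)
        score = 0.0
        for term in query_terms:
            tf = doc_terms.count(term)              # term frequency in this document
            n_t = doc_freq.get(term, 0)             # documents containing the term
            idf = math.log(1 + (n_docs - n_t + 0.5) / (n_t + 0.5))
            score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_len))
        return score

Notice that nothing in it knows where in the document or the site a term occurred, whether a synonym was used, or whether the query was misspelled, which is exactly the limitation described above.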
So, thinking about that, I thought, okay, I'll make my own custom sort, or I'll make an index that depends on these four elements in this order. So, for content quality, two items that have the same score and content quality, it will then score higher, one item that has tagged versus one that doesn't. And then for the same score, it will rank content that is not a file higher and then files. And then also last modified seems to be a pretty useful thing to sort on. And we'll get into content quality. So, that's about the sorting. And now it's about filtering because that's the second big job of the search engine performance. It's eliminating anything that doesn't have anything to do with what you're looking for. And so, what if users could do it themselves if clone doesn't help you? And we have a perfect solution for that, I mean, fast navigation. But first we have to decide which facets do we want to use. And that's where that's what our process entailed. So, in this project, we needed to decide and make a bunch of decisions. So, in terms of metadata, we said, okay, we want title, description, URL, tags, and so on. And there is a really nice add-on, collective Jekyll, which I like a lot and I also gave it a star in our contest. And let's see that. So, if in case you haven't seen it, it gives you a little viewlet that appears in the byline of content if you're editing. You see that red little thing there? That's a viewlet that collective Jekyll puts there and it has a little drop-down arrow so you can click on it. It says warning and when you click on it, it drops down this summary of content quality symptoms. And in this case, this page does not have a summary and that's the one that's read everything else is read. So, me as a content editor, I can just go in, give it a summary. And in this case, the summaries are hidden so you don't see it but now it's green. It says okay. So, this is what Jekyll gives you. And it's really nice. And I looked at the code, it's very actual. I like that. It's really well-written, I think. And it's really easily extensive. So, we decided, okay, let's start brainstorming what symptoms do we want to use? And priorities. Which ones are we going to fix first? So, we want to have really good titles. So, we want them all to be title-pays, no all caps titles, no all lowercase titles. We don't follow the Aiki style guide and be really clean about it. But then what do we do about acronyms that are in all and all in other double case? So, that's something that I handled in a particular Jekyll filter that I created. Then the summary, aka description. So, we want every content and to have a description, it has to be a complete sentence. It doesn't have, it has to be not the same as the title, or even contain the title as a sub-script. It has to be properly spelled with a capital letter and so on. So, that's another symptom that we, there is some stuff already in collective Jekyll, but we improved it. And then page ID, when you're creating a copy, the page ID always has a copy of in front of it. And so, that is, first of all, it's ugly, but moreover, it's actually like a symptom of work that was probably left undone, unfinished. And so, it's good to fix that. So, the other thing is, collective Jekyll does not do is, it gives you that little viewlet with a okay old morning. And also gets you a collection that you can use, but it computes all these symptoms on the fly when you're actually requesting a page. 
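A hedged sketch of how such a custom sort value can be put into the catalog with plone.indexer. The quality-score helper is hypothetical, and the indexer still has to be registered as a named adapter (in ZCML) and the index added to portal_catalog:

    from plone.indexer.decorator import indexer
    from Products.CMFCore.interfaces import IContentish

    @indexer(IContentish)
    def search_rank(obj):
        quality = compute_quality_score(obj)   # hypothetical helper, e.g. count of passing symptoms
        has_tags = 1 if obj.Subject() else 0
        not_a_file = 0 if obj.portal_type in ('File', 'Image') else 1
        modified = obj.modified().ISO8601()
        # Tuples sort lexicographically, so this orders by quality first,
        # then tagged-ness, then non-file-ness, then recency.
        return (quality, has_tags, not_a_file, modified)

Whether this lives in one sortable index or in several indexes combined at query time is a design choice; the point of the talk is only that the ordering criteria are made explicit.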
So, I created a custom index so that all that, those symptoms are persisted in the catalog and we can create reports. And so, reports we can create with faster navigation and here you see a widget in fast navigation that has the symptoms. And so, then the managers could give the task to their editors to go and, okay, let's start fixing all the titles. And so, people could, and there are other widgets down there that you don't see so that people could find all the things that they had to fix. And so, that's how the process worked. Then tags, we decided to do a control vocabulary, none of this folksonomy stuff. So, with Plum Keyword Manager, we eliminated all the bad keywords, duplicates and so on. And with the ASO, we removed the widget that lets editors add new keywords. Instead, they can only pick from the control vocabulary. And that's helpful. Let's see, let's do something. Okay, then we have to decide what facets do we want in our search page. You know, that's, and for those, we need to create some custom indexes. The division is one. It's based on the path, but it's not the same thing as the path. The categories, just default category index, the type. Okay, let's be honest. Users do not care about the content types that we create. So, we like our posterity and our type schemas and all that stuff, but users do not care. So, that's consolidate. So, all of these document folder, link collection form folder are all bundled into a page. A try-net web page type. But there are some types that they do care about. So, whereas Word Excel, Part-Paint, and videos and all of that other stuff is just a file to us, to the users, those are really important things to be able to filter out. We don't want to show images in the search results. And we want to keep log entries and events as separate types, so those stay. The last one is another fasted search widget. This is the last one, if I'm relative date widget. So, you know, we use this week, this month, this year, over a year ago, all. And so, this needs to be updated every day with a cron job. And finally, this is the result. And you don't see all the widgets on the left, but we talk about it. Oh, and I'll just say this is Bootstrap, so it's responsive. And like Amazon does, you don't see the widgets when you first load the page, but you can. They pop up when you click the button. And you can set them, apply them, and then the search results are updated. So, we talked about almost everything. Just want to give you a few takeaways. So, for Trimet, this meant they needed to decide on which facets they wanted. This was a process. They needed to decide on a control vocabulary for tags. And they needed to decide which kind of quality symptoms they cared about and prioritized them and worked on them on a schedule. So, for me, I had to create a bunch of indexes, I had to create the reports, and I had to create the search page, and so on. For Plum, okay, this is for us as a community, we need to get better editor feedback on content quality. And that's something that CASEL does, and I think we should definitely do something like that. We need to be able to give users a way to both filter, like, now the folder contents view in Plum 5 is great now. You can do bulk updates, but you still can't really do a lot with the metadata, like content quality of the summary, for example. You can't really do it. And bulk edits. So, think about content quality, it always matters. And, that's it. Thank you.
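Going back to the facet decisions above, the consolidated "type" facet is easy to picture as one more small indexer. A hedged sketch, with an illustrative mapping rather than the actual TriMet one:

    from plone.indexer.decorator import indexer
    from Products.CMFCore.interfaces import IContentish

    PAGE_LIKE = {'Document', 'Folder', 'Link', 'Collection', 'FormFolder'}
    OFFICE_EXT = {'doc': 'Word', 'docx': 'Word', 'xls': 'Excel', 'xlsx': 'Excel',
                  'ppt': 'PowerPoint', 'pptx': 'PowerPoint', 'pdf': 'PDF'}

    @indexer(IContentish)
    def friendly_type(obj):
        pt = obj.portal_type
        if pt in PAGE_LIKE:
            return 'Web page'
        if pt == 'File':
            ext = obj.getId().rsplit('.', 1)[-1].lower()   # crude, illustrative extension check
            return OFFICE_EXT.get(ext, 'File')
        return pt   # News Item, Event, Blog Entry and friends stay as they are

The relative-date facet works the same way, except that its values (this week, this month, this year, over a year ago) go stale, which is why a daily cron job reindexes it.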
TriMet runs the public transportation system for the city of Portland, Oregon, and the surrounding area. Over several years, TriMet's Plone-based intranet had accumulated lots of content, and the built-in search was not working very well anymore. For this case study, I will show how we solved the problem by focusing on content quality. Faceted search was helpful both in the push for content quality, as well as in the final search functionality. These ideas can help any Plone site, large or small, and should be considered for additional default features of Plone.
10.5446/54944 (DOI)
My talk is rather shorter than it has to be, so I'm expecting a lot of questions, or if you have questions in the middle of the talk, just raise your hand, shout it out, and then I'm going to repeat it, so it's going to be under recording, or otherwise they'll find it. So, yes, I'm currently still the release manager for the 2.12 and 2.13. There's now the release management team, at least Michael Puglitz or IceMaker from GoSet has also stepped up to be one of the release managers. Tressieberg also has sort of been the release manager for the 2.13 and lots of maintenance releases on that one. So we have a group of active people again who do actually do releases and take care of this. Ian Tvarger also has been doing lots of work, both on CMF and various sort of project and packages, he's broken out of the observe itself. So what I'm going to talk about is just an overview. I still have a slide from the talk I gave about 2.12 and just sort of further still, but actually fit in and what it is. A bit about the tool kit, about the for itself and a bit about CMF, what the status there, a bunch of changes which are just in the before, which isn't necessarily about Python 3 but sometimes related, one big topic there is obviously support and then a bit of an outlook about what's next. So this is a slide which looks almost exactly the same as it did eight years ago, except for one thing which I've left off. So this is generally sort of beyond the dependence on Python as a language, there's this notion that perhaps the zero degree user can really stand alone project, still managed largely by Jim Fulton, there's a bunch of contributors there as well and the component architecture reduces minimum core of the interface of the component which he can use to have a hard access to component architecture. Perimeter there to the side because it does depend on this component part, you can optionally use it as your experiment but it doesn't have to and it only depends on this tool with the small part. The tool kit is, you can think of that essentially as everything else which has no dot something but is it a dot app something, any project package which is in the dot namespace, basically that's kind of what the tool kit is. The tool kit isn't special because it's not a framework, it's a collection of libraries and projects and you can't really install the tool kit, you can't start it out, it doesn't really give you a server, it's lots of libraries which have all uses component architecture and all uses the system but you don't have to use the chip. The two releases, the release is sort of a weird concept for just the back of sort of libraries but we did two releases of one panel and one panel release, the tool kit and at some point we also had a release management team out of three groups and three people for the tool kit. One of the other consumers is the tool kit is GROC which does have some kind of app app and it just is well but largely it's also the tool kit. The one which I left off there is a project which was most recently known as Bloom Dream and before that it was unfortunately known as Zilx3 and that's a slide back that unfortunately has this name over, named something Zilx3 as a project which was not related pretty much at all to code base that was there before, it wasn't a major new version of what was there before but it was really a separate project. So at some point it split off and we got the Zilx tool kit as sort of a base layer for multiple actual frameworks. 
We're using those components and then different kind of actual installable frameworks on top of it called GROC Zilx2 and RootBrain. RootBrain pretty much as far as I can tell I think that many should do it over these many many years ago but it's basically has not gotten anywhere. GROC is still active so we had a bunch of sprints, a number of sprints last year, this year about Zilx3 and sort of re-surecting it that's what we called it, a base agent just doing active maintenance on it again and the release manager of GROC can be about Cobra, Pullman and Cobra and Gatreson still, they still work at a company still using GROC, there's still sort of interest and development going on GROC. It's also largely stable and sort of a true code base so there's a huge amount of future development going on but it's still a maintain code base and there are also interest in the birth of sprints because they were also interested in Python 3 and porting to Python 3 and if there's anything for those two frameworks can help each other it turns out basically on the Root2 lookable at the point where we started the Root2 it was largely already ported to Python 3. It's happened gradually over the last I would even say five years where people started porting some things especially sort of the component architecture since pyramid, depending on the pyramid, Python 3 support much much earlier so they were really interested in getting component architecture to have ported so gradually some people, one person in particular to shout out there is Jason Madden who's done an incredible amount of work. He's the current maintainer of G-Event double storage, likes to work on acquisition as well as porting the opens of app packages, huge out of them took him. There's other people like Mario is the hemias from programmers, Bill Nios who also did a bunch of work on the Zope Toolkit level as Tresever and a bunch of others and so he has a Zope Toolkit, he has some of the Rock, so depending on the base pieces it said well the things he hasn't come in in terms of Zope and Rock are pretty much done, he has a Zope Toolkit. One discussion that came up is like do we need to tag a new set of versions here like the Zope Toolkit release and we basically decided that the dozen meaning is any advantage. Base of things in the Toolkit is so mature these days that it just takes the latest version of all these packages and the only thing you get from the new version of these packages generally is you get Python 3.6 support, officially rather than Python 3.5 support at the most or maybe soon down to Python 3.7 support. But there's very little feature development in the Zope Toolkit that's basically all about making sure these packages continue to work as the latest versions of Python. So you basically said on the Toolkit level that's kind of done, you don't need releases anymore. There isn't, you don't need to synchronize feature development across multiple packages there but basically it's stable and it's true and that's what it does today and that's fine. On the other hand on the Zope level we have a whole bunch of more work to do because there's a whole bunch of packages that the Zope depends on which are part of the Zope Toolkit and these are things like, like for example, if you talk about it and talk about all the difficulties that involved porting it. And other things like acquisition and extension class and document template and access control and whatnot. 
But we have the reporters first in order to then say now that we have all the dependencies that are ported you can also get what's said for as a part of itself and then start porting things to Python 3 as well which depends on it. One decision or one thing we have to do there is by access the 4.x there is since this confusion was with Zope 3, we couldn't, we had two choices. We could either name the project still Zope 2 and make it power of the name and then it was Zope 2 version 2.13 and Zope 2 version 3.0. And that's a whole lot of confusing. So the way we choose to avoid that is to say the next version of Zope 2, the old full stack application server is going to be called 4. It's also confusing but there isn't any good naming scheme we can use here. What we basically said is if you say just the word Zope without saying anything else, it usually means the old full stack application server. If you put anything like toolkit in addition to the component architecture as an ecosystem of foundation or what not, it's clear what else you mean but if you just use the word without anything else, usually you refer to this old application server. And so if we choose to say well that's kind of a fault but also without having anything, we keep that thing. And otherwise then on top of it we have CNFs, the content management framework and then again some applications on top of it, like the content management system there are also other applications which there are also other CNS systems that don't even enter the CNF and then on top of that you have distribution these days like how to pass the CNS or various others built on top of that. So as I said Zope 2 version 2 got support for quite a newer version to pass fairly early on or much earlier than the others. As of today the Zope toolkit, the latest version in support, Python 2.7, 3.4, 3.5, 3.6, Python and the Python version which also corresponds to Python 3. The way that was done especially in Python support, Python still has problems running arbitrarily, C extensions and C code. You kind of have to do C, F, I, you have to wrap C code in special ways in order to actually really make it work and also get a performance or performance improvement out of PyPy. So the way this works is that all on the toolkit level all of these packages which in some cases only had C implementation on some functionality beforehand now have a Python reference implementation as well. So that's something like persistent has there, there's a Python implementation of persistent and so if you run on the PyPy what you get and what you use is a Python implementation of it and you just don't use any of the C code at all. That means if I have a handle it's dropped and do its different kind of drop and everything and you cross all the code and there's no C and there's no R in it. There's a status page by MaintainVermaris with EMEAS so if you can look at the long list of packages for each package which it says, this package supports the following Python versions based on the setup you buy through classifiers whether or not that package advertises that. 
It's compatible with something and then also has links to Kizopitapico and his current build status if it's on Travis CI and you can just look at that list, any thing which is in the GitHub, so foundation account, you can look at that and say, oh yeah, this is how, I'm depending on this one package, is that actually, is that kind of the toolkit or was it already parted and you can look at that and say, well, thanks, I can't look at the full status. We did move, there's a lot of documentation there but we did move the documentation as well from, they used to be Zopet.org and docs.org and all of those, anything which is on Zopet.org services, fairly un-maintained and sometimes these servers go down because there's really nobody taking active care of them anymore so we try to move anything we can to those services which we don't have to maintain. And so the documentation generally, these days you will find somewhere on the read the docs and for the Zotoket you'll find that on the Zotoket read the docs that I have. Next episode, so, and what we call it now, before, we did a beta release last month at the end of one of the last, this retransprints, we did beta release two last week, by now there's already been changes because we've been working on something, some fixes from the clone community, yeah, there's a box, there's a corner case we didn't text for and there's a bunch of changes there so we'll be able to make those beta release probably very soon so you can depend on it. Generally speaking, beta for us means there shouldn't be a, no feature should be removed anywhere anymore and it's no major feature should be introduced. It should now all be, all development that happens on Forza 4 is all about backfixes where the test coverage might not be good enough if somebody finds a corner case or we broke something and we didn't know about it. So the clone community has been really good at taking those codebase out and saying yes and it'll be out now actually testing against this, testing against it only on Python 2.7, but we are at least making sure that whatever existing codebase we have on Python 2.7 runs against Zotoket 4 there isn't any obvious problems with that. And now it's time to give that feedback and back it back to the rule we pressed and whatnot in order to tell us and say yes, I owe you. There's something you should do. On the Zotoket 4 level, we only support sort of Python 2.7, 3.4, 5.6 and we do not support Python. The reason for that is basically to fault. There's two issues there. One issue is that in respective Python there's a bug, basically one of the ways respective Python implement some functionality is to say I'm going to execute your Python code but I'm going to fill it around with the environment you've executed it. So it usually takes away or overwrite some of the built-ins. So to get active function or you can look up some type or screen type or what the built-in keyword it overwrites those specialized versions. And in PyPy currently you can't do that. Whatever you do, you can't pretend that you see you overwrite these built-ins or something else and PyPy still gives you the original bit. And so right now, whether or not this is going to be fixed in PyPy nobody can tell but right now in other release words, PyPy can keep, can do the guarantees we need on the respective Python level. The other issue is that we have wonderful combination of when you combine persistent objects and you combine them with extension class and acquisition. 
All of those essentially want to overwrite how the access attribute is quite an object. The persistent module wants to essentially say well I'm going to, this object comes from the database, I'm not going to load it, I'm not going to populate the instance dictionary, I'm basically just giving you a class here and then if you actually access sort of the.typilotity or.idea attribute I will get myself say oh I haven't actually loaded this information from the database so I'm not now going to do this and write it in and later I'm also going to do things like oh you've actually changed some of my content so there's a Py changed in attribute but it's setting something as well. Okay this has changed so if you've transmitted a few committed transactions at the end, yes I need to commit this back to the database. Acquisition on the other hand also wants to interfere with the process and say well if I have an access attribute I want to basically wrap this thing in acquisition or if I don't find the attribute of the class itself I want to load this up on the acquisition context and do some metrics there. And the way those are implemented is a combination of some of these objects as hope state, internal state that you can load up in addition to whatever the class itself has. Plus acquisition means unfortunately it's not possible to do that on the Python level at all. So you have to use C implementations or if you use a code for it you have to use a C implementation of the system and you have to use a C implementation of that extension class. Acquisition there is an uppercase persistence module and we haven't found before how that long and sort of how access is implemented in Python itself and we got through all that C code. I think there might be a version if you would basically replicate and combine and copy paste and adjust this entire code base and basically make it one giant version that knows about everything but there is no clean way to do that. So right now these things do require C code they're not wrapped in a single file or anything so you can use them on the PyPy to play PyPy as an unfortunate thing. So the history about how we got here and when we started working on this, there's.glocept.com where there was Michael Hoverts and Chef Alana from Gocept where he wrote all the narrative stories about sprint reports, came up with this little story about the world of two and the three wonderland and it provides a nice narrative story around it so it's not just about backfix, this is a status but it's a story you can read. So if you want to read about more of the history of how we got there, there's a bunch of clock reports, there's not much else on it so it didn't provide a category, you'll find those pretty quickly. Generally as well, as a set, if we don't the documentation we have that maintains generally on the docs so there's current data that we can use to do the installation. So we did actually update the installation documentation and how we operated it, how we installed it, there's a version which tells you here's PyPy install, here's the requirements file you need and then all the work was PyPy and it's officially supported as version Cfg for build out and it tells you what it says. Onwards to the Cmf, there is I think the Cmf 2.4 beta release, this targets for compatibility but right now it only supports Python 2.7. 
My last status is that on the Cmf level I think products that may have been used to product generic setup and products Cmf4 are all not thought of yet and some of those there are too far, there's just a lot of work, especially email, you can imagine emailing, binary encoding and text encoding and that's a big issue, I have to be able to try to tweak that, can't multi-part my messages, what do I actually have to encode in what way and what accepts that. Genetic setup is lots of XMLs and some of those come from the file system, some of them not, so that's the next step which has to be done and I don't know if we have seen that. But I think the for compatibility in general is already there, that's kind of finished and there are a bunch of new steps which need to be made, it can work on in order to get the Cmf as well to Python 3. The phone community already has started porting some of their packages which don't depend on this entire stack but some of these standalone packages to Python 3 but the problem with the Python 3 support is it has this large dependency tree and you kind of have to do a bottom up approach where you first, whatever will depend, whenever you want to port something you really need to port everything it depends on first. One trick a bit there is always testing dependency, so if you have a huge functional test layer and it kind of depends on your entire application logic then you need to port the testing layer and the application at the same time and you can't, sometimes hard to make, sometimes hard to make progress. One thing we did notice as well in the supporting effort is that usually we have bottleneck where, well this is a one library that now everything else that we want to port actually depends on it, so you have to port this one first and that unfortunately sometimes means you can't parallelize it to kind of 10 people, effectively working on this because sometimes it's one thing and everybody would have to open this one thing in order to make progress. But I think the phone community has already also started there, from what I can tell they're also going to target, sort of what the status is, is to see an F here first target before compatibility but just use price 2.7 support and that gives you sort of a longer run rate as an F, slowly work yourself up and work on prices recompatibility during maintenance cycle. Other things have changed, so this is sort of what I alluded to, if you go to PyPy, so distribution or project is what I call that, if you go to PyPy and essentially say what's the name on PyPy or if you say in the setuppy requirements file sort of what's the thing they want to install, then for the longest time Zope was called Zope2 and it's part of the name and then for Zope2 it's part of the name and then in some version like 2.13. For the 4 we changed that and so if you pip install Zope without anything else that now works and that now doesn't install Zope4 and so that also means you can't say Zope4 without anything in the middle because yeah that's really what you install from PyPy. 
The GitHub repository and the Zope foundation repository is always part of Zope without the 2 so it's kind of confusing but the thing going forward is going to be Zope and it's going to be sort of Zope4 and that's how you can refer to it because this name shows up in all the setuppy files and the requirements and you have to say what will depend on basically the empty meter distribution port Zope2, it doesn't contain any code, the only thing that it does is it depends on Zope itself. What that gives you is you can keep all your code you can still say that depending on Zope2 everything is going to work and you keep backwards compatibility with Zope 2.13. The event through a similar shift and the ZODB3 project was split up in multiple components and there was a ZODB3 release, 3.10 or 3.11 I forgot what the last number was, that basically also was nothing else and depending on the ZODB package and the system and transaction package and the VTREES package the problem there also was once you started depending on the VTREES package that immediately meant that your lost compatibility was basically the old ZODB versions because what you said is well the VTREES package only exists starting with version number, I think it was 4 and so you immediately did this update. So I would advise everybody if they have existing codefaces not to change that name then you were just depending on Zope2 to keep backwards compatibility with Zope branches and also 2.13 and only make that switch. I know that happens sometimes where some package then especially is used in the old maintenance branch or some version of what I have and then somebody updated this and this breaks in all real ways and something like that, a newer version of what you expected in there. So don't do this for now, hold off on this for now until you really say this is code that only targets of the world does not need backwards compatibility at all. The other thing that we did is largely to help with porting to Python 3. So a large part of Python 3 porting is figuring out what is actually text and what is actually binary content and Python 2 didn't really help you with this. There was a unit type but pretty much everybody just says ah this is probably kind of SKE so this is pretty cool. This especially is the problem whenever you read anything from the file system or what network sockets or anything like that. If I go back for reason you have to know is what I'm getting here now actually text even if it's SKE only text or is it binary and therefore whites. And the one thing where that happens a lot on web servers of course there's an incoming HEV request and now going HEV response and you have to know the headers are a certain encoding and content and the binary of both the body that's something as well. The risky standard has been around in Python 4.0 on the web server interface standard to solve exactly that problem and say here is a standard of code web server can talk to a back end application and it's clearly defined this is a way I'm not going to give you information about the HEV request and this is a way I expect HEV response to come out. It's slightly difficult but with the function call and turning something into dictionary and then there's the sort of literal white binary point that comes out. We said at that point we have this old Z server code that was a back server and in there there was something called the dozer. 
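To make that naming advice concrete, here is a hedged sketch of what it means for an add-on's setup.py; the version pin is illustrative:

    from setuptools import setup

    setup(
        name='my.addon',
        version='1.0',
        install_requires=[
            # Keeps working on Zope 2.13 as well as Zope 4: the Zope2 4.x
            # release is the empty meta-package that just depends on Zope.
            'Zope2',
            # Only once the code targets Zope 4 exclusively, switch to:
            # 'Zope >= 4.0b2',
        ],
    )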
This is code that largely was written in the days of Python 1.5 and maybe 2.0 has largely been unchanged but that's code that actually deals with opening a socket reading data from the network screen and then doing something like this and parsing the HTTP sort of request and response and we said we don't want to over support this. There is this risky standard everybody else is using this. Let's use that, get out of the business of being the one that opens up the network sockets here and just loses the standard and then they have a clear way of saying well this is the one way we expect this information to come in and then everything else afterwards is much clearer. So what we did here to not completely sort of loses the old project is to say basically split the code at this point as well into pieces and what's now also does not contain the Z server code and with that it also does not contain the HTTP or back down or other protocols. The only thing that supports is HTTP and we split all this functionality to the site but we make it so that it's compatible and so if you install the Z server project it only runs in Python 2.7 but it runs with Z4. So you can install it and say well I kind of want to go forward but I'm not ready yet to deploy my applications to the risky, I'm not ready yet to change the way I run my demons or how I control my drosses or this logging infrastructure I have which is really attached to the way things were done before. So you can install it and get all the functionality back or you also get that HTTP support and get that support back. But going forward, the only thing to do with the Z server project is going to be imported to Python 3. That codebase is just way too ugly and way too little tested, there's always little tests for it. So probably what's going to happen is if you have some more time generally you should probably search for risky. Risky also sort of means a way, sort of risky application means the way that the app starts, say here's a pricing function you need to call and some web server needs to call your pricing function, provide a dictionary with it, with information and do pricing calls so you don't really start Zilver anymore. What you start is a process which happens to be a web server like VATRES or Foonicon or Tornado or Apache or Risky but Zilver itself basically doesn't contain the Zilver CTL script and the ZDemon support for running things in the background. All of that is gone or all of that is in the Z server project. And Zilver itself just has a little helper script which is basically a copy of what Pyramid did, this is a p-server command which is a tiny file you can say I kind of want to have VATRES and here's a config file which tells VATRES here's the port number, listen on in here or here's the IP address, listen on and this is the RISKY application that I should talk to and this is what I want to start. There isn't any DEMON support in there, there isn't any sort of other sort of background support in there. If you want that, then either your web server, the Petching and RISKY might ask for that and do that part for you, you might be supervised by a DEMON tool, system D or ETC entity files or whatnot that can kind of act as a tool that you can do about your set that depends on the web server which is now sort of out of scope of what Zilver does. The other thing we did is that's continually a trend we did basically in Zilver 2.12 and 2.13 as well. The Zilver distribution itself under a lot of code in terms of product.com type. 
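The WSGI contract all of this moves to is small enough to show in full. Here is a hedged, minimal application served with waitress, the server mentioned above; the host, port and response body are of course illustrative:

    # A minimal WSGI application: the server calls a function with an environ
    # dictionary and a start_response callable and gets back an iterable of bytes.
    def application(environ, start_response):
        body = ('Hello from %s' % environ.get('PATH_INFO', '/')).encode('utf-8')
        start_response('200 OK', [
            ('Content-Type', 'text/plain; charset=utf-8'),
            ('Content-Length', str(len(body))),
        ])
        return [body]

    if __name__ == '__main__':
        from waitress import serve
        # No daemonizing here: running in the background is the job of
        # supervisord, systemd or whatever process manager you use.
        serve(application, host='127.0.0.1', port=8080)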
And over time we basically put some of those out into their own standalone repositories. The first one was Zsql methods which I think got moved out as early as then and some other things over time became their own projects. When I spoke so far about Python 3 support, what I meant is the core Zorg thing which is not Z server and if not any one of those is ported. Some of these things are ported, so XtronoMeset Python script is ported. Mailhost is yet to be done in temporary folder in session that kind of gives you session support if you want to have session information stored in the ZODB. As far as I can tell nobody did that for the past 10 years but it was still there and these things were still installed by default so that you set up and created the news of instance there were still persistent objects for those. So you might have them around but you don't actually use them. If you ever call request and then basically dot uppercase session that's when you access this kind of thing otherwise you basically don't. Type error log is another one that basically gives you the error log persistent entry in your control panel where you click on it and look at sort of the recent log messages. Logging is basically these days all the pricing built in logging module and if you have a risky server you usually contact and you want to lock things and where you want to lock things is kind of standard configuration for the pricing logging module that could be screen handed, you send it out, or set an error that might be you want to write this into an email or you can write this some other way but also kind of doesn't make sense anymore for ZODB to say we have our own solution to logging or how you can put logging information. Just reuse all the tools which were written for standard pricing logging module and use those to kind of work. Other things is in testing there's one library you can use a test browser that sits somewhere in the middle between functional testing and Selenium based testing where what this does is it emulates and tries to emulate the browser in the pricing code and then you can say go to some page, figure out the form, there's a form on it that fills in some form elements and then click on the send link and say send us off. This used to be based on a library called Mechanize. Mechanize was basically unmaintained for quite a while and nobody ever bought this twice in three. So what the test browser did was to say well this is another project called WebTest based on WebHop and that's actually maintained and developed and has a lot of those features and it does really the same job. So let's instead use that as a backend. But the thing is because it's sort of back up on WebTest it's fairly modern it only supports WISC because WISC is the one way that everybody supports it when it's set up your applications. So test browser only supports talking to WISC applications. You have to change the API of the test browser itself if you just use the public API and the sub-engine function support and so this well if you only use that it's going to continue to work but if you have any sort of test layer set up which is more complicated and probably you're going to run into this and say well I need to adjust something about how the way I construct my application here in order to then tell test browser to test against it. Other sort of nice things we did is if you use VATRS you get things like IP version 6 support for free. 
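For the zope.testbrowser change just described, the public API really does stay the same; what changes is how you point the browser at the application. A hedged sketch against some WSGI app object, where the form field names and the my_wsgi_app variable are illustrative:

    from zope.testbrowser.browser import Browser

    browser = Browser('http://localhost/login_form', wsgi_app=my_wsgi_app)
    browser.getControl(name='__ac_name').value = 'admin'
    browser.getControl(name='__ac_password').value = 'secret'
    browser.getControl('Log in').click()
    assert 'You are now logged in' in browser.contents

If your test layers construct the application in a more complicated way, that construction is the part you will have to adjust, as noted above.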
So you can actually listen on IPv6 addresses now, something we never did for Zope itself. HTTP/2 support is the sort of thing that doesn't make a lot of sense with the way Zope is structured. It's still very much: as soon as you get an HTTP request you work on it, then you deliver an HTTP response, and really you kind of want to close the connection afterwards. The way you usually get HTTP/2 support these days is to put a frontend web server in front of it, like nginx or Apache or something else, and that supports HTTP/2 and then proxies the connection back to your application, or Zope itself, as HTTP/1.1 requests. So we don't really have to support it. The way Zope is structured doesn't really support keep-alive and long-running transactions; the same goes for WebSocket support or similar — the way Zope is structured doesn't really lend itself to long-running connections. You want to minimize the number of open database connections at any one point, so doing long-running connections with any of the code you have in Zope really doesn't fit. Those things aren't in there and aren't expected to be supported; you have to put something in front of it or next to it that does that for you. Chameleon is a page template engine which has been around for quite some time; it's just a different engine for how you take page template code and execute the TAL and METAL in it. It was a separate project for the longest time: there has been a five.pt package which you could install, and if you did that it changed Zope itself as well to always use the Chameleon engine. It's just the faster one — it's usually 99.9% compatible with the old engine, just faster. There's no reason not to use it, and since we have a major release here we can make such a change, even if it might possibly break something very, very small. So we just took that code into core and said you don't have to install this extra package anymore. I think there's a new release of five.pt out which basically just says: the only thing you need here is Zope 4 — you get this functionality for free anyway. The other thing is the package zope.globalrequest, which similarly had a Zope 2 integration package, five.globalrequest: you installed that and you had the functionality of a global request. What that does is it stores the request object in a thread local. Many people frown upon that, but sometimes you write code where suddenly, somewhere deep in the application, you figure out: I kind of need to change something here based on the language the user wanted, and that really depends on the request. I could now rewrite my entire code so it passes this information all the way through the call stack, or I just say: this is backend code, and I'll just get the current request out of the thread local. So that was globalrequest. A lot of Zope code was basically written in a way where you either assume you can always get the request object from an acquisition chain, or this is just another way of always getting it, so it makes sense to just do it. So this is set up and integrated with the core now. You don't have to load any ZCML or register any event handlers and so on; it's just part of the standard publisher, so when a request is started it's stored in this thread local, and when the request ends it's cleared out again, and you can use this as well.
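A minimal sketch of that thread-local "global request" pattern, using the zope.globalrequest API mentioned above; the language-negotiation use case is just an illustration.

```python
# Deep inside backend code, look at the current request without having it
# passed down the whole call stack.
from zope.globalrequest import getRequest

def pick_language(default="en"):
    request = getRequest()        # may be None outside of a request
    if request is None:
        return default
    # 'LANGUAGE' is an illustrative request key, not a guaranteed one.
    return request.get("LANGUAGE", default)
```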
One last change — one of the minor changes — concerns the ID of an object stored in the database. It is part of both the URL and part of the physical path of the object inside the database, and so far it could only be an ASCII-only string, and even then a bunch of characters are forbidden in it. There were a couple of attempts over time to change that, but none of them really worked or they were never finished. Right now there's an open pull request that's already been approved, but there's one open question about it: the block list of which characters you really shouldn't allow inside an ID — should that include a slash, or a dot, and what exactly should you allow there? In the end the ID gets URL-quoted when it is sent to the browser and back, but what you're going to get is support for the full range of basically Unicode IDs for your persistent objects. So you can have a document with some Japanese characters in its ID — that's how it was named on the file system when the user uploaded it — and it's not mangled in some way; it really looks to the user like: this is what I saw before and this is what I'm going to see afterwards. The only thing there is, you only get that if you run on Python 3. Only there does the WSGI environment really tell you: yes, this really is text now; it's not just a type which might be one or the other where we are not sure what encoding it is really in. On the Python 3 side we can just ask Python's type system, which is actually helpful at that point, and it tells you: yes, this is a str, so it's not binary, it's really text. So you can be really sure it's a string, a Unicode object, and you can do the concatenations and comparisons on it. It might be possible to backport that to Python 2 as well, but that would be much more tricky and would involve some guessing about what a value actually is and how it is encoded. So that's a whole bunch of smaller changes we made. There isn't a lot else, except that Zope 4 was started five or six years ago, and at the time it started the general assumption among the developers was still: we should reduce the functionality and scope of what Zope does. So there was the usual dance of: if you deprecated some functionality in the past, you should really remove that deprecated functionality at some point — it shouldn't raise deprecation warnings forever, you have to follow up on that and do something about it. So there's a small number of things: there was the Globals package, where the main thing people did was import things from Globals, and removing that was one of the main changes. All of these things raised deprecation warnings in the past, so if you fixed those before, you won't run into problems; but if you didn't, then you might run into some of these small things where some code got either moved or removed. Where we go from here: we're going to do more beta releases over the next months, depending on the feedback and the severity of the issues that get reported.
Basically, the criterion for when we call it a 4.0 final is that we want to get the feeling that enough people have tried to test this against different existing applications to tell us: this doesn't look like it's completely wrong, we didn't miss some really big obvious topic. When we get the feeling that whatever we run into is edge cases, corner cases where one function slightly changed something — when we get the feeling that it's really down to point fixes and there are no major problems anymore — then we're going to call it a final. But it's open source, so it's done when it's done; we're not going to say it needs to be released on this date and then deal with whatever happened before or after. We're going to say: when we get the feeling that this is stable enough, and enough people have tried it and gave us positive feedback, then that's when we call it a final. I can't really say whether that's maybe autumn next year or maybe in three months — I'm not going to put a date on it. When it looks stable enough, that's when we call it stable. So where do we go from here? It's not quite decided what we name the next release. Everybody wants to use semantic versioning these days, so if there are backwards-incompatible changes it should be called 5.0, and if it's just feature additions it might be 4.1. There's one larger thing somebody is working on, which is to re-theme the ZMI, the management interface — that wonderful frame-based environment which looks like it was done in the 80s and probably was done in the 80s. Somebody said: here's Bootstrap, a CSS and JavaScript framework, here's a version of that, and basically the only thing we have to do is apply lots of CSS classes in lots of the right places; if you do that and ship the right CSS, it looks at least like it was done after 2010 rather than somewhere before. So that is a patch which is being worked on — there's a company who wants to contribute that — but the patch was originally based on 2.13, and there have been a bunch of changes since, so now they have to rework it, and whether this is done in the next month or in the next three months nobody knows; it's done when it's done. That said, we're not going to hold up a 4.0 release for it, but if that happens and they get it done, then maybe that becomes a 4.1 release. That would be nice — a ZMI which doesn't look quite as ancient as it does right now. It's a feature that shouldn't break things, because it's hopefully just CSS classes inside the ZMI screens, so that might be a 4.1 feature. For a 5.0 there isn't really anything big planned content-wise. Python 3 support was the one thing that everybody wanted: essentially everybody got asked by companies, with checklists which say, do you support Python 3? Or: if you want to host this in-house, the only thing we can support for in-house hosting is Python 3 — so if you can't do this, you're basically out and you can't go further.
So the one thing we have there is Python 3 support, but now that we have it — on the Zope level things are stable, mature, they haven't really changed for the last five years — there isn't anything where everybody says: this is the one thing we really need. So I don't know if or when there's going to be a 5.0 release. Right now most people are going to spend the next one or two years porting all their apps to Python 3, because of the 2020 deadline — that's when Python 2 support ends, so you kind of have to do something at the moment. So I'm guessing for that time frame most people will focus on Python 3 support, and maybe in the next two or three years there will be something else in the water where everybody says: we kind of need this in order to go further and to keep updating and maintaining our existing applications — without it we can't go on. But right now there isn't any big topic on the horizon like that. If you have any questions so far — do you have any now? — First of all, thank you, Hanno. You mentioned that ZServer is out, and that implies WebDAV is out as well. But WebDAV is just HTTP — as far as I know it should work over WSGI, so what do you mean? — It's the way the WebDAV support is written inside Zope. The main thing there, for me, was that most people don't use WebDAV. It is a protocol on top of HTTP, so you could do it via WSGI. But the way it was integrated into Zope was that if a request came in and it was a PUT request or a DELETE request or anything like that, then Zope assumed that yes, it's WebDAV. And in many cases you couldn't actually override this without monkey patching or anything else. And the main complaint I got was basically: but we want to build REST applications these days, and we want to have objects which actually respond and say, here's how I want to respond to a PUT request or an OPTIONS request or a DELETE request. And so far the code was written in a way that said: well, if it's one of those, you don't get a chance to implement that — this is WebDAV and I'm going to assume it's some WebDAV-encoded stuff. So it's more that I put it in the same bucket as FTP, as a different protocol we don't want to support, but technically it's for different reasons: most people don't use it, and it gets in the way of the one thing that everybody uses these days, which is just REST. — I actually had the same question about WebDAV, and just one word about it. I never used WebDAV, but recently a customer of mine brought it up because it was broken for them, and then I tried to use it, and it was surprisingly well integrated into the Linux desktop, and it was really nice to upload and download images through a mounted share in my file manager. So I would hate to lose the WebDAV stuff in the future. — The usual replacement for that these days is that you can drag a file into some area in your browser and that triggers an upload. That's the usual way, or you have some kind of upload or batch API where you can essentially do the same thing.
The problem with WebDAV was always that the built-in clients in the operating systems were all different, and none of them exactly corresponded to the standard — you had to maybe call an external editor, and there were various reasons for that — but WebDAV never quite worked out of the box, most of the code was never really maintained, and it was used by so few people. What you're used to most in applications these days is that you drag a file into your browser and that triggers some asynchronous upload, and I think most applications prefer that kind of feature these days. So if you have an existing installation with people who actually use this — well, it's still there in ZServer. You're also welcome to start working on it; it's open source code and nobody's preventing you from that. But we said: let's find a focus and get something out which most people depend on, and only do that. Afterwards we can make further progress and say, these are the things which only some people depend on, and those can follow afterwards. — That was just a comment. Is there a checklist somewhere of the open tasks which are necessary for getting a final release of Zope 4? — We manage this process with just GitHub issues and GitHub pull requests, so I think there's still a meta ticket open which says what we need to do until the final release. As mentioned, there's one pull request about the Unicode IDs — that's one open pull request we have — but otherwise the criterion is that we don't know of any blocking issues anymore. So the only thing that is blocking is the issues you are going to create once you test your own applications, and depending on how many of those come in, that's what's going to stop us. I don't think there's any open issue in the issue tracker that we would say is actually a blocker. So we don't have a known blocker; we just know that the test coverage is not bad, but there are so many edge cases — especially once you get to things like how you copy and paste content, and how exactly everybody monkey-patched a slightly different thing — that we need real applications tested against it in order to be really confident. But we don't know of any blocker that would keep us from releasing this tomorrow; we are just waiting on more feedback. — I just wanted to give an update on support for Zope 4 in Plone. We've been working on this at the Alpine City Sprint in February; actually just after the sprint we got all the tests passing with Zope 4. This is, you know, mostly on a lot of different branches of packages. Some of them we've been merging back to the same branches that are used in Plone 5.1; in other cases there are big enough changes that we have to stay on a branch until we decide it's time to create Plone 5.2. Since that sprint a bunch of the tests have broken again, mostly because we switched to CMF master and that changed some things. So we've been working this week on getting through those — I think we've gotten from 200-something failures down to 60-something — and we'll continue to work on that at the sprint. If we get all those fixed, then we'll move along to adding support for Python 3 in Plone, so that we're ready. Eric, do you want to say anything about the timeline for creating a Plone 5.2 branch, or should we leave that for later?
We'll create a branch this weekend, and we'll probably get an early alpha of Plone 5.2 out after that. — Am I really out of time? — We'll create a branch this weekend and then I'll tell you what we're going to do on the way home. — Right, that's how I understood it. It depends on whether my international flight has Wi-Fi or not. — So we're running out of time. Thank you all, and another thank you to you, Hanno.
Learn about the current status of Python 3 support for the Zope Toolkit, the Zope 2 application server and the Zope CMF and the road ahead. We'll also cover some related changes in the upcoming Zope 4 release.
10.5446/54945 (DOI)
Hello, everyone. The resource registry I'm talking about — the rewrite started in 2012, I guess, and I actually witnessed the birth of it — has come a long way since then. In Plone 5 a new resource registry was introduced, and it has been significantly improved over the years, and I would recommend you use Plone 5.1, because there are big improvements to the resource registry there, even though it's not finally released yet. But let's start. What is it, actually? The resource registry is basically a way to register JS and CSS resources and deploy them. You can organize dependencies between JavaScript resources and between CSS resources — in both Plone 4 and 5, actually — and the resources are optimized: they are concatenated, so that you don't make too many requests to the server to get the resources, they are bundled, and they are minified, so the size shrinks and not so much payload has to be transferred over the web. Just a second, I think I have to log in... OK. There's my mouse. Ah, here. Yeah. Okay. These goals are actually the same for Plone 4 and 5. Now let's look at the resource registry in Plone 4. I did not prepare a screenshot, actually — I mean, I did prepare one, but I have some problems with the setup right now. I guess most of you probably know how the resource registry in Plone 4 looks. It does its job: if you register a resource, it is just added to a list of resources. This list of resources is grouped into bundles, and the resources in a bundle are concatenated and minified after that. It has a simple architecture, which is a good thing, and add-ons can really easily register their resources, and they are immediately available after that. The resource registry also adds caching headers and optimizes the resources which are delivered to the client, to the browser, via those caching headers. The negative parts: there's no formally defined dependency between the resources. You just add your resources to a list, and the dependencies are made up, actually, of the order in which the resources appear in the resource registry. And it's really hard to keep the requests optimized. If an add-on injects a new resource which is only available to certain users — editors, say — and it adds this resource after one which is available for everyone, then the bundle is actually split at this point, because a bundle containing the restricted resource cannot be delivered to everyone. So it's really hard to keep those requests optimized, and that's not so good. The resource registry in Plone 5 is based on plone.registry, so each resource and each bundle which you register is added to the Plone registry, which you can configure with a registry.xml file, and it uses RequireJS for defining dependencies and modules, and it uses Less and Grunt. These three technologies have their strong points. RequireJS allows us to create bundles and to create formally defined dependencies between JavaScript resources. Less gives us an import system, more or less, which also allows us to define dependencies and to nicely structure our CSS code and use variables and everything which Less gives us. And Grunt is a high-quality build system with a lot of plugins which you can use. But all three of those technologies are actually getting a bit obsolete currently, because many projects use Sass instead of Less.
RequireJS is getting a bit overtaken by the JavaScript ES6 release — you can already use imports via standard JavaScript on the web — and Grunt is more or less superseded by webpack. However, we also still have support for legacy scripts like in Plone 4, so if you are not happy or not comfortable with using RequireJS and all this new JavaScript stuff, you can just write JavaScript code the way you are used to and register it more or less the way you are used to, and still have a plug-and-play experience. The registry also adds automatic cache management and caching headers. So my advice is not to bypass the resource registry but to use it, so you get those caching headers injected. The negative part: you have to precompile bundles if you want to use the RequireJS way of the resource registry. The complexity is higher, it's hard to debug sometimes, and, as I mentioned before, some of the technologies are already getting obsolete. But still, it works quite well; even though it's sometimes hard to debug — and I really had my bad times with the resource registry — in the end, when you get it working, it's really reliable, and it's in my opinion a really huge improvement for JavaScript and CSS development compared to Plone 4, and it works a bit like magic. What I mean by that is that you have the ability to compile bundles through the web, and this is really somewhat genius, even though I never use it — I always use the command line for compiling bundles. You can just press a button in the resource registry view to compile your bundle; all the resources the bundle requires are then downloaded to your browser and compiled in your browser with r.js, and after that the compiled bundle is uploaded to the resource registry again, to the Plone server, and stored there — so it's quite funny how this works. So let's look at some code. This is the interface definition of resources and bundles. Here we see which attributes a resource definition is made up of. You can define a url, which is just a base url you can use in the JavaScript and Less files; then we have a js attribute — this is the url to your JS file. Then you can define a list of CSS (Less) files. You can add some initialization code, and there are attributes like deps, export and init for legacy scripts. Those attributes map to RequireJS concepts: if you write a RequireJS configuration you can use very similar things. And here is the bundle registry definition. You have a jscompilation attribute — this is the url to your compiled JavaScript file. You have a csscompilation attribute, the url to the compiled CSS file. Then you have a last_compilation attribute: you record the date when your last compilation happened, and the resource registry can then decide whether it should deliver a new bundle or an already cached bundle. Then you can define an expression — a condition which, when it is met, causes the bundle to be delivered, and otherwise not. For example, the plone-logged-in bundle uses that: it only delivers the bundle if you are logged in. Then you can add a conditional comment — you have maybe seen those conditional comments in an HTML header, to include a special JavaScript or CSS file for Internet Explorer less than some version. And then you define a list of resources. And you can enable it. Then you have some booleans.
One is whether the bundle should be compiled — if you set compile to false here, then you have a legacy bundle, which is not compiled; we'll look at this a little bit later. So let's go to the next slide. Here is the registry.xml file of Plone, and it is really huge, so let's just search for an example registration of a resource — related items, for example. Here it is, the related items thing. You define a resource always prefixed with plone.resources, and then what follows is the name of the resource, in this case mockup-patterns-relateditems. This line here is the path to the JavaScript file; it lives in the resource directory, in the related items sub-directory, as the pattern's JavaScript. On the next line you can define Less files — you can define several, but most of the time we just use one. And then we have a url, and this url is used to resolve other resources by relative path within the pattern's JavaScript itself. Let's go on to a Plone bundle definition — actually, first I want to show you plone-logged-in. The default bundles in Plone are made up of a bundle resource and a bundle definition. The distinction between those two is that the bundle resource is a JavaScript file — we'll have a look at it a little later — which just defines the dependencies of the bundle. This is the bundle resource in that case, with the same kind of definition as for related items: you define the path to the bundle's JavaScript and to the CSS file, the Less file. And then this is the bundle definition itself. It just uses those attributes we have seen before in the interface definition file. Bundle definitions always start with the prefix plone.bundles and then the name, plone-logged-in in this case. You have an attribute merge_with, and this means that the compiled bundle is merged with other bundles that have the same merge_with value — so in the end even fewer JavaScript and CSS files are delivered to the browser with that technique. If I remember correctly, this was added somewhere in Plone 5.1. Then you have a list of resources — in this case just one, the bundle resource, as I call it. And we of course want to enable it. There are some situations where you don't want to enable a bundle, so that it is not delivered with every request; you can then, for example, manually register the bundle for a specific view. Then here we have an expression: this bundle is only used when member is not none — member is a variable which is available at this point — so only for logged-in users. The jscompilation and csscompilation paths are here, and this is the last_compilation date; you always have to update it when you compile the bundle anew. This bundle depends on the plone bundle, so it is injected into the header after the plone bundle, because the plone bundle defines some JavaScript resources which this bundle also needs, like Backbone and so on, and for example jQuery. This bundle also depends on those JavaScript resources — otherwise you would not be able to compile it — but you do not have to include those resources again here, and that is what the stub_js_modules attribute is for. If you list some stub JS modules, then those are not included in the compiled bundle; there is just a stub in the compiled bundle for those resources. jQuery is the most obvious example here — it is used almost everywhere in Plone whether you like it or not. Okay. Good. Let's continue.
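Before moving on, a rough illustration of the point that these bundle settings are plain registry records which can also be touched from Python. The exact record key layout ("plone.bundles/<bundle name>.<field>") is an assumption based on the prefixes described above, so treat this as a sketch rather than the canonical API.

```python
# Hedged sketch: toggling a bundle's 'enabled' flag via plone.registry,
# assuming records are named "plone.bundles/<bundle-name>.<field>".
from plone.registry.interfaces import IRegistry
from zope.component import getUtility

def disable_bundle(name="plone-logged-in"):
    registry = getUtility(IRegistry)
    key = "plone.bundles/{0}.enabled".format(name)   # assumed key layout
    try:
        registry[key] = False     # raises KeyError if the record is missing
    except KeyError:
        pass
```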
As I said before, the resource and bundle definitions in the resource registry map more or less one-to-one to a config.js, to a RequireJS configuration, except for the Less and CSS parts. Here is more or less the same configuration, JavaScript-wise, as we saw before in the registry.xml file; this RequireJS configuration lives in mockup. If you have a look at my presentation afterwards, you'll find all the files I present here linked in the header, and you can just open them by clicking, so I encourage you to look at the code — you can learn a lot from it. So, where is the mouse again... This is a resource, a pattern — a JavaScript resource as an example, the simplest one I found in the mockup project. You have a define call here; this is a RequireJS define statement. What it does is define a module, and you can add dependencies on other modules here, like jquery and pat-base — the base pattern which all mockup patterns extend. And here you have the initialization, the name and the trigger for the pattern, the init function which is always called, and at the end you return the module itself. But that's not really part of this talk. Let's look at a bundle resource definition; I'm using the plone-logged-in bundle here. Compared to the resource module definition before, this one uses require and not define. So it's not defining a module, it requires other modules, and each module you require here will be compiled into the end result. Here we have a lot of patterns which we depend upon, and here's some initialization code, and that's more or less it. You might want to customize this bundle — that's an example which I'll show you later. So, moving on. A legacy resource is just a resource which has this compile-is-false key — actually there are some other attributes involved, but the point is that compile: false makes a legacy resource. What I'm showing you now is a legacy bundle. There are two special things about it. One is that the compilation of the bundle is done like it was in Plone 4: each resource is just concatenated and the result is minified. The other thing is that it wraps the whole bundle in this part: it undefines define and require before the compiled bundle and redefines them afterwards. That step helps you avoid those RequireJS errors about mismatched anonymous define calls. The same also happens if you put your resource registry in development mode and click on "develop JavaScript" for your legacy bundle. This is done here: in that case, with the resource registry in development mode and the bundle in JavaScript development mode, each resource is injected into the header individually, and before and after each resource this undefining and redefining of require and define happens again. So, yeah, I think that's definitely new in Plone 5. And now to Less variable expansion. How much time do we have, actually? 50 minutes more? Okay, so I am good on time. Here you see Plone's Less bundle definition file; it imports the Less files from some patterns, and it's done like that: each pattern, or each resource actually, gets a special variable name which can be used in Less, and some other variables like the static path, the portal path and the bundle prefix are also available in Less files.
Some of them you can define through the resource registry view, or configure via registry.xml, and others are just injected automatically. The injection happens here: the paths are prefixed with the plone resources prefix, and here, for example, the site path is injected, then some other variables like the static path, and here every resource in the registry is iterated over, and this is where the patterns and the resources you have defined in your registry.xml are turned into Less variables. So this is the file where that Less variable expansion happens. Let's go on to the next thing. Okay, that was it — some examples. If you have questions, just ask. For example, if you want a custom plone or plone-logged-in bundle, because maybe you just don't need all of those patterns, or you want to include a few more patterns without defining a completely separate bundle, it's actually super easy. One thing you have to do is provide a bundle resource file, like the plone.js or plone.less I showed before. The other thing is that you can more or less override some specific parts of the resource and bundle definitions. Like here, plone.resources/plone: this defines a custom plone bundle, and here you define a bundle resource and a bundle Less resource, and both of them live in a plone resource directory. Compared to the browser resources which are provided by Zope, the difference is that plone resource directories allow you to customize the resource and store the customization in the ZODB, whereas a browser resource cannot be customized so easily without overriding it. And then here the bundles are customized: here's a custom plone bundle defined, and one piece of advice here — create those bundle files before you try to compile them, because otherwise you will get an error. So don't forget to touch those files; they can be empty, but they should exist, at least until this bug is fixed. Another thing: if you want that Plone 4 style add-on installation, plug-and-play experience, you have two options. You can just use the legacy way to define bundles — you maybe add your JavaScript and Less files to the plone-legacy bundle just by extending or overwriting some records in the Plone registry — and then you define a custom bundle like it's done here, plone.bundles/lazysizes. You can just look at this and see how it's done: this one adds a new bundle, and it uses two resources which are defined below, lazysizes-twitter and lazysizes. I think I should be able to scroll down a bit more, but anyway, you can look it up on the web. The other option is that you don't have to use an uncompiled legacy bundle; you can also define a new-style bundle which uses RequireJS and everything, but then you have to precompile it and add the compiled file as an extra bundle. However, it gets merged via the merge_with variable anyway into one JavaScript file in the end. If you want to add some resources or bundles only for specific views, then you can just use add_resource_on_request or add_bundle_on_request. Both of them are defined in Products.CMFPlone, in the resources package there. These are the two methods you can use, and this is an example of how to use them — this one actually defines a tile, because we were talking a lot about tiles at the conference.
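As a rough sketch of what such a per-view registration might look like (using a plain browser view here rather than a tile; the bundle name, template and class are illustrative placeholders):

```python
# Deliver an extra, otherwise-disabled bundle only when this view renders.
from Products.CMFPlone.resources import add_bundle_on_request
from Products.Five.browser import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile

class MapView(BrowserView):
    index = ViewPageTemplateFile("map.pt")   # hypothetical template

    def __call__(self):
        # For tiles (sub-requests) the talk recommends resolving the top-most
        # request first via the helper in Products.CMFPlone.utils.
        add_bundle_on_request(self.request, "my-map-bundle")
        return self.index()
```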
The tile definition is more or less the same as for a view, and here add_bundle_on_request is used: you pass the request in, and you pass the name of the resource or the name of the bundle as the last argument — just the name as a string. The top request here is a special case: a tile uses sub-requests, and we have a utility method in Products.CMFPlone.utils which gets you the topmost request when you are in a sub-request. Otherwise you might run into problems, because Plone's rendering machinery might not find this special bundle on the request where it looks for it. I think this is the last one: how to build a bundle. You can build a bundle through the web, as I mentioned before, by just clicking a button, but for production environments, where you have everything at hand, I would not actually recommend it — just use plone-compile-resources. It bootstraps a Grunt setup with the RequireJS compilation and the Less compilation and minification and everything. Just have a look at the help output of this script; it's quite interesting what you can do. You can point it at a different instance, you can point it at a different site, you have to name the bundle — that one is not optional — you can define a different compilation directory and so on, you can skip the generation of the Gruntfile, you can skip the npm install step and so forth. By the way, since I see the compile directory attribute here: when you define a bundle, the path in the jscompilation definition — ++plone++static for example — is the location where your compiled bundle is put. You can just override this with your own resource directory, your own plone resource directory actually, and then your compiled bundle lands at your own location. Otherwise, if you just compile it directly, it lands in Products.CMFPlone's static folder, which you might not want. Back to the bundle compilation. This is an example call: plone-compile-resources -b plone — then the plone bundle is compiled and stored at the location which is defined, for example in ++plone++static or wherever you configured it in your own project. To get this script you need to do something like this: just add this part to your buildout configuration, use zc.recipe.egg, depend on all your buildout eggs and request the script plone-compile-resources. After the buildout run you get this script in your buildout's bin directory. So, I guess that was more or less it. Oh, no — the future: webpack. Asko started a project to use webpack for compiling bundles, and in different discussions — community discussions and so on — we agreed that we also want to have this modern stack in Plone. You can already use it: his webpack setup uses the plone resource directories and compiles the bundles. I'm a bit out of time, so I will maybe skip that. For inclusion in core it's still quite early; we have to work on it, but I hope there will be some progress in the next year, maybe. There are also some PLIPs with resource registry improvements which are on hold at the moment, due to lack of time and maybe lack of vision — if we want to use webpack, then maybe those resource registry improvement PLIPs just have to be adapted. But that's not decided, so you can just read them.
I think those are two interesting PLIPs which, once they eventually get implemented, would further improve the whole stack. Yeah. That's all for now. Thank you. — Thank you very much. We have a little bit of time for questions. Wonderful talk, thank you; this is much easier than it looks. Do you have a feeling, an estimate, of the effort needed to move this to webpack? Do you already have an idea of what needs to be done? — I mean, Asko and the others experimenting with webpack are already at the point where they can use it in production, so there's no problem with that. But to have it as part of our Plone stack, there are some things to fix. For example, I would love not to use RequireJS anymore but to move on to a different technology; that would actually make the webpack integration easier, I think. Also, currently the legacy scripts and resources have to be treated specially, and for each legacy resource there's some special configuration in this webpack plugin. That is something which is not very extensible, so we have to find a way to automate it, I guess. I think there's significant effort needed to get to this point, and, as always, there need to be some projects which fund that development. I don't know when this will happen, I can't say — but the effort is significant, at least. — And if you want to go away from RequireJS, where do you want to go to? — My preference at the moment would be standard JavaScript (ES6) modules, because then the mockup patterns and the existing RequireJS components stay somewhat similar; I think that would be a good fit. — Thank you. Thank you very much. I think we have to stop now. We're going to do the group photo now at the entrance of the building; there are stairs there that make a good spot for the picture. Most of the people are already there, so I'm sorry, we'll have to finish. Thank you. Thank you.
Plone's resource registry has changed significantly with Plone 5 and its adoption of RequireJS and Less.js. Now we can define JavaScript modules and their dependencies and use the power of LESS to better structure CSS code. We can even use Webpack to bundle our resources. This talk demystifies the resource registry by explaining its concepts, outlining workflows to accomplish common integration tasks and by giving tips and tricks to make it work for your project. I will also give an overview of Webpack-based resource bundling and discuss the future role of the resource registry.
10.5446/54946 (DOI)
Thank you guys, thank you for coming here. I know there are plenty of very interesting talks at this time, so I really appreciate that you took the time to come here. Today we're going to talk about synchronous and asynchronous servers and see how they perform in different environments. But first, who am I? I'm a software engineer, I'm working at Skyscanner, with a lot of Python, and I'm organizing one of the local Python meetups here. If you are ever in Barcelona and want to send in a talk, please do — we really appreciate it, especially from people from abroad. I did other things for a long while, but now I'm working back in systems, and you can find me online as JoJo. This talk and all the code samples are available at the link below — I don't know if you can see it, it's a bit.ly link. You'll have everything there and on the slides, so if you want to ignore me and just read it, you can get it there. So what are we going to talk about? First, I want to start by introducing how the synchronous and asynchronous models work, especially in Python. Then we will see what is expected from these systems and how we expect them to behave. Then we will run some real benchmarks, and then we will revisit our assumptions and draw some conclusions. So how do sync systems work? Basically, with synchronous servers, what we generally do is put one worker per CPU or per thread. Each of these workers handles one request at a time. So when you have a request, it takes ownership of that thread and runs from the beginning to the end. We may lose control of the thread because the kernel takes it away from us, but no other request can actually get in the middle of that, so we always know that the whole context is owned by us at any given time. So, in this particular code, we start with request one and start running our logic. Then, when we wait for I/O, we make a syscall. While we are in that syscall, the kernel takes control away from us, and another request starts running on the CPU; then, when the I/O is done, we get back to request one. So the granularity of our execution path is decided by the syscalls that we make and by the kernel, whenever it decides that we don't need the CPU anymore or that another process needs it. On the other hand, an asynchronous server works in a very different manner. We usually have just one worker per CPU, and the only thing that decides when we switch from one request to another, while we are running code, is our own code. Basically, when there is an await in the middle of the code, the event loop — the reactor, or whatever is handling this — is the one deciding which task gets the CPU next. But if we don't await, the same request keeps the CPU forever unless we release it manually. This is a good and a bad thing. It gives us more control, but on the other hand it also means that the kernel won't be applying fairness principles to this. So if we add code that does a busy wait or something like that, as the previous speaker was explaining a bit before, we end up in a situation where all the other requests are blocked and we won't be able to deliver any results — and this is really, really bad. So for the benchmarks that we're going to see now, we have the following environment: I prepared three different containers.
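Before getting into the containers, a tiny illustration (not from the talk's repository) of the cooperative model just described: control only moves between requests at an explicit await point.

```python
import asyncio

async def handle(name, delay):
    print(f"{name}: start")
    await asyncio.sleep(delay)   # the event loop may now run another task
    print(f"{name}: done after {delay}s")

async def main():
    # Both "requests" are served by one worker; they interleave only at the
    # await above. A busy loop here would block every other request.
    await asyncio.gather(handle("request-1", 0.2), handle("request-2", 0.1))

asyncio.run(main())
```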
The first one is a very simple aiohttp application, then we have a Flask application, and then an nginx container that serves static web pages. This nginx container is using OpenResty, so it has a Lua engine on top of it, so it will be slightly slower than plain nginx. To perform the benchmarks I'm using wrk. This is a benchmarking and stress tool: it talks HTTP to servers and gives us metrics on how the servers perform. For these tests I used two threads and between 10 and 100 connections, depending on the test, and all the tests ran for 30 seconds. To gather memory and CPU usage I watched the container stats, and I ran everything on my laptop, so these are my laptop's specs, not real server specs. So let's start with OpenResty. OpenResty adds the capability of scripting the locations in nginx, so we can have different locations with different delays: if we access the 100-millisecond location, it will sleep for 100 milliseconds and then return the result. This makes it easier for us to test, with the same container, the different delays that our dependencies may have when they answer. With this, we will test how the Flask application and the aiohttp application behave when our endpoints have different response times, and we go all the way up to one second to see what happens if we have a really slow dependency. So let's go to the Flask container. It's probably what you're most used to. It's a very simple Dockerfile — how many people here are used to Docker? Oh, good, then I don't even need to explain this, basically. That's great. So basically we're running Gunicorn with 100 workers and the Flask app is started by it. Nothing really fancy here. We have two endpoints. The first one has an I/O bottleneck: it goes to the nginx container we have, and depending on the delay GET parameter that we pass, it takes more or less time to get the response; we're using requests to fetch this data from nginx. The CPU bottleneck endpoint basically does a lot of iterations and then returns. We use that to make it look like it's doing some actual real work, as if we were rendering a template or something like that, but this is easier to control, and we can define more easily how long we want it to keep busy. With aiohttp we have very similar stuff. The main difference is that, as aiohttp uses a different way of running, as we explained at the beginning of the talk, we only need four workers, because it wouldn't make sense for us to have more — it would be counterproductive; we just want one worker per CPU. Leaving that aside, it uses Gunicorn, and everything else is exactly like the Flask application. For the I/O handler, the code is slightly more complex. Instead of doing requests.get directly as with Flask, we have to initialize a client session. In this case we want to do up to 10,000 concurrent requests, so we have to configure that, and then we need to use asynchronous context managers to actually make the call. But if we forget about these minor implementation details, it's exactly the same code. Any questions so far? Is this clear? Can you read it? Good. For the CPU handler, the code is exactly the same; the only difference is the way we get the parameter, because Flask and aiohttp have slightly different syntax.
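To make the comparison concrete, here are hedged sketches of the two I/O-bound handlers described above. The nginx URL, parameter names and routes are placeholders, not the talk's exact code, and in a real service the aiohttp client session would normally be created once per application rather than per request.

```python
# Flask (synchronous): the worker is occupied for the whole upstream delay.
import flask
import requests

app = flask.Flask(__name__)

@app.route("/io")
def io_bottleneck():
    delay = flask.request.args.get("delay", "100")
    resp = requests.get(f"http://nginx/sleep/{delay}")   # blocking call
    return resp.text

# aiohttp (asynchronous): the worker can serve other requests while waiting.
from aiohttp import web, ClientSession, TCPConnector

async def io_handler(request):
    delay = request.query.get("delay", "100")
    connector = TCPConnector(limit=10_000)                # allow high concurrency
    async with ClientSession(connector=connector) as session:
        async with session.get(f"http://nginx/sleep/{delay}") as resp:
            return web.Response(text=await resp.text())
```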
So when we discussed synchronous and asynchronous servers with colleagues, I gathered some opinions on how we expect these systems to behave, and this is what we came up with: asynchronous servers theoretically use less memory; latency is smaller in asynchronous servers; sync code is simpler; async code is harder to reason about; and asynchronous servers can handle more requests per second, i.e. they have a higher throughput. So let's see how many of those are true. To do that, we take a lot of measurements. The other day I saw this on Twitter when I was preparing this talk — that's from someone at Google, so they agree with me. So let's start. This is not going to be fair — Flask is not going to have a happy ending here, and it's not a fair comparison in any way — so if you want fairness, please leave; that's not going to happen today. So, response time. Here we have five examples. The first one is a hello world: it does nothing, it's just a sample page with nothing in it, the simplest you can get. As we can see, the fastest one is nginx — no surprises here — then aiohttp, and then Flask. I was kind of surprised by this, because aiohttp is actually faster than Flask even for these plain synchronous-style requests. Then we have the CPU waster. The CPU waster is the busy loop with iterations. We don't have an nginx column here, because nginx doesn't let you add this kind of busy loop, but we have this example, and we can see that aiohttp is actually faster than Flask at responding to this request. When we start adding delays, we have the nginx columns as a baseline to know the maximum performance we can expect. We can see that the performance is similar, in the same order of magnitude, but aiohttp tends to perform a little bit better, except in the one-second case, where Flask is more or less the same. But what's the main difference? Because if we think about it — okay, the results are more or less the same — is it worth it? Let's see how many requests per second we can get. This was very surprising to me: aiohttp is actually faster than OpenResty with Lua. I don't know how to explain that; if someone has an idea of why this might happen, I'm open to discussing it — it was really interesting. With the delays, we can see that the performance is quite similar between aiohttp and Flask. So again, no real gains here, nothing interesting. But here comes the interesting part. In order to run aiohttp, we need four workers. In order to get this kind of performance with Flask, we need 100 workers. What does this mean for the memory requirements of our application? It means that Flask will take 25 to 50 times more memory than it would take to run the same application on aiohttp, for a very, very slight increase in code complexity. As we have seen in both examples, the difference in code is negligible; it's really easy to compare one code base to the other. Even if it is slightly more complex, the memory gains we get out of this are incredible. I was trying to plot the differences between them and it didn't look good, because aiohttp basically disappears at the bottom of the chart; it didn't even give us a clear picture of what's going on. The differences are so big that it's outrageous. So let's revisit the assumptions we had some time ago. Asynchronous servers use less memory: that's true.
Latency is not smaller in asynchronous servers, at least not in these tests. That was surprising to me: I always expected it to be the other way around, but these are the results I got. Sync code is actually simpler, as you can see — slightly simpler at least, so let's give some points to Flask. Asynchronous code is harder to reason about — that's true. It is harder to profile your application, it is harder to test it, it is harder to trace your code, because the workflow might be interrupted in the middle of a request and the logs are harder to read. And asynchronous servers can definitely handle more requests per second on the same machine specs; that's clear if we look at the RAM requirements we have here. I've gone through this very, very fast, I'm sorry about that, but we can spend some time on questions then, I guess. What are asynchronous servers great for? When you have slow dependencies — that's a no-brainer. Flask will be using 25 megabytes of RAM for every request you have in flight; you don't want that at all. Please use asynchronous servers if you need to fetch data from a very slow API or a database that needs to do a very long query — the memory savings will be significant. The same applies if you have many external dependencies: it's very easy to multiplex your requests so you can fetch them in parallel, speeding up your application. The previous speaker showed an example before where he was fetching four or five URLs at the same time. With Flask you would need to do this with threads, and it gets complicated — threads are expensive to create. If you do this with asyncio, it is much faster and easier, and at the same time you don't have to go to the kernel to ask it to create a new thread. It's a win-win situation; you will always be faster. If you want to aggregate data from multiple servers — again, a very good fit. If you're working with microservices that talk HTTP to each other, it's a natural fit. If you're doing lots of I/O of any kind, it also works very well, for the same reasons. Asynchronous tends to be considered bad for static content. That's not entirely true — a Python application serving static content with aiohttp is fine — but then again, you should probably be using nginx or something like that for static files anyway. If you're doing CPU-intensive work, that's not a good fit either. What you can do, if you want to use asynchronous servers, is offload that work to Celery or something else: you put the tasks there, you get your workers to do the work, and when the information comes back you display it — you can still do that. But if you block the loop, what's going to happen is that all your requests will suffer for it, and your clients will start seeing the delay, and you don't want that. You will have a lower memory footprint if you use asynchronous servers, most of the time, and you will get higher throughput with the same machine specs. On the other side, latency can be higher in some cases, not always, and chasing bottlenecks in asynchronous applications is harder, especially if you need to chase them inside libraries that are not well written. And that's all I have for now. All feedback is welcome. You have the code samples at this URL if you want to reproduce these results; there's the QR code. And if you have any questions, we will be happy to answer them. So we've got plenty of time for questions and even discussion — I have 50 minutes.
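To make the earlier point about multiplexing slow dependencies concrete, here is a small sketch of fetching several upstream URLs concurrently from a single worker; the service URLs are placeholders.

```python
import asyncio
from aiohttp import ClientSession

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.text()

async def fetch_all(urls):
    async with ClientSession() as session:
        # All requests are in flight at once; total time is roughly the
        # slowest upstream call, and no extra threads are created.
        return await asyncio.gather(*(fetch(session, u) for u in urls))

results = asyncio.run(fetch_all([
    "http://service-a/api",
    "http://service-b/api",
    "http://service-c/api",
]))
```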
Please say something — I guess there is a lot of room. — So for the Flask setup, did you use 100 workers or 100 threads? — 100 workers. — So how does Gunicorn do that? Is it just one process per worker? — Yes, it's not doing threading inside a process; it's just a single-threaded process for each of those. It might be slightly faster if you do two threads per process or something like that; I don't really know the details of how Gunicorn's threading works. — I think I read that Gunicorn is slightly slower than plain aiohttp as well, so you might also get better performance if you just went with the aiohttp server directly. — Yeah, definitely. The thing is, by running both with Gunicorn I was taking one variable out of the equation. I thought that having Gunicorn in front of both services means I'm not giving an unfair advantage to aiohttp, because there's an extra step in the middle for both; it makes it more fair, even if it's not completely fair. — If I may, I would like to add a couple of comments as well. First of all, regarding threads and the contention on the GIL, I think it's worth mentioning that the Python interpreter automatically releases the GIL when there is a network operation. That's important, because when you have just a few threads you shouldn't be paying for that contention in the operating system. The GIL is a pain, but it's a pain when you have many, many threads, and this part is automatically handled by the Python interpreter. I think it's also worth saying something about the memory; it's about Python itself as an interpreted language. We cannot really benefit from the copy-on-write mechanism in the kernel. Every time you fork a process in Python, the child still touches many, many objects — because every object's reference count gets updated by the garbage collection machinery, all of the pages that hold those objects, including the ones holding code, get duplicated. Therefore we waste a lot of memory every time we fork in Python. That's one of the reasons for the memory problem in synchronous servers with many workers. That's it. — I'm very good at this. Thank you.
We will cover the main differences between sync and async servers. After that we'll go through a few example scenarios and benchmarks showing their strengths and weaknesses.
10.5446/54948 (DOI)
My name is Maik Derstappen. I'm from Germany, from Leipzig, and also half living in Bucharest. I just wanted to show the current state of my work on bobtemplates.plone, and also I recently started to work on mr.bob itself. Once upon a time, there was a time when starting with Plone and creating Plone add-ons was a little bit easier than it was in the last few years. When I started with Plone, I did something like this. I was using ZopeSkel. It was not Dexterity back then, but I just did something like that to create a package. Then inside the package, I created content types. I had some content types and then I had some documentation where I could just fill out some pieces, and I had a working add-on. This usually didn't take more than 15 minutes or something, but ZopeSkel is not really maintained anymore. It had some issues and maintenance problems. Also, it was based on PasteScript, and that is not really a good base anymore. That's why the Plone community moved to mr.bob and created bobtemplates.plone. This was supposed to be the new way of doing things. It is smaller, the templates are easier to build and easier to test. The existing templates were really basic. No real option to add content types or other pieces like a vocabulary or a theme. It was really like take it or leave it, and then you had a starting point and could customize. You also had no list of available templates. In ZopeSkel, you could just show the templates you have. If the community provided new templates, you would just see them and try them out. This is also a thing I was missing in mr.bob. Also, who can really remember that? I mean, now I can, because I used it a lot and also had some variations, but it was always hard to remember. I had to go to the bobtemplates.plone documentation to copy and paste this crazy command. I had to fix that. I had a project like, I don't know, half a year ago where I wanted to create a lot of Dexterity types and then it was like, no, I'll fix that now. My vision: give me a tool which helps. I want to create different packages. I want to create an add-on. We already could. I also want things like we had in ZopeSkel, like give me a buildout skeleton for development, for developing a project, and maybe other things. More important, I want to extend the package with content types, vocabularies, a theme and whatnot. Also, when I started with new skeletons and creating a new package, I want to use best practice. I also want that when people are creating stuff using the tools we gave them, that they already have a really good practice. Of course, they can change it, but they have a good starting point. When I use the tools, I want already a lot of basic tests. Tests we usually have in all our packages, like testing if I can actually install this add-on and check if I can use the content type under certain conditions and stuff like that. When I create a content type, the template should already provide me with tests for that. I want, of course, easy usage on the command line. I want to install plonecli with pip, like everybody else. I want to say, it should give me a list of templates. This is also nested, so there's an add-on template. It has some sub-templates, which I can use to extend it. Then I have another template, buildout. Then I want to use it like this: plonecli create addon collective.todo, something like that. I also want to use it to extend. I go into the collective.todo package I just created. Then I do again list templates.
It just shows me the sub-templates, because it doesn't make sense to create a standalone template, like an add-on, inside an add-on. We only care about the sub-templates here. Then I just use the sub-templates to create one or more content types, one or more vocabularies, and also maybe a theme. It could be handy if I just want one package which is a theme, but also needs some content types, probably, to provide some structure and data which I want to use. That would be nice, wouldn't it? When can we have it? Most of it is actually done. The current state of bobtemplates.plone in the 3.0 version is we have these standalone templates. We have three currently. We have some sub-templates, which we can use to gradually extend a package. We also have some basic test structure inside the single packages, the single templates. The add-on template itself has some tests. The content type part has some additional tests, which will be added when you add a content type, and the theme and stuff like that. We have to extend this, so it's not rock solid yet. The structure is there. We can use it. We can extend it, make it better. But it will be already tested. We currently run all the tests on Travis with tox. Alexander helped me a lot with that. What we are doing is we are actually using this stuff. We are creating an add-on. We are creating a content type in the add-on. We also add a theme and a vocabulary. Then we run the tests on it from the package. We build the package, run buildout and all the stuff. As you would use it, you would create a package, you would create a virtualenv and buildout, and then you would run the tests, and they should pass. That's what we are doing on Travis. Basically, the packages should work if the tests are enough. We have these standalone templates. These are currently these three. We have the basic add-on, which was before, when you answered the first question with a type like basic. There was Dexterity and there was also a theme, but this was just a variation inside the template. We have a buildout, which is really a basic buildout to start. I use this stuff usually for project buildouts when I have some packages for this project. We have a theme package. There is also a sub-template theme later. You can get confused, but the theme package is what we had before. It's a full standalone theme package. It's based on Barceloneta. It also has a Grunt setup, so this is the whole stack. That's one way to go. I can show you another way. The sub-template content type gives you some options. You can use the XML model, so the supermodel, which is the default because it's extendable through the web. You can also decide, no, I don't want that. I'm a Python programmer. I don't want to write XML. I don't need through-the-web, so I just go with the schema and you're fine. You also have vocabularies, which will basically create a structure. You have a vocabulary and all the registration. You just have to fill out the Python method which actually collects your data, whatever this is. You can use it like a static list or you do a catalog call and create your terms. This is really handy then. The theme. The theme is like what Asko was showing us. He built these really nice packages, the theme site setup and the theme fragments. When you add the theme, it will add these dependencies to your Python package. When you deploy it with a zip file, of course, you have to have it on the server, too. Whoever maintains the server should provide this.
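Going back to the vocabulary sub-template for a moment, the generated code usually boils down to something like the following sketch. This is the usual Plone pattern, not the exact output of the template; the names and values are placeholders you would fill in, and you could just as well build the terms from a portal_catalog query.

```python
# Roughly what a generated vocabulary looks like -- the usual Plone pattern,
# not the exact bobtemplates.plone output. Names and values are placeholders.
from zope.interface import implementer
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleTerm, SimpleVocabulary


@implementer(IVocabularyFactory)
class TalkTypesVocabulary(object):
    """Fill in the part that collects your data: a static list,
    or a portal_catalog query that builds terms from brains."""

    def __call__(self, context):
        items = [
            (u'keynote', u'Keynote'),
            (u'talk', u'Talk'),
            (u'training', u'Training'),
        ]
        terms = [SimpleTerm(value=v, token=v, title=t) for v, t in items]
        return SimpleVocabulary(terms)


TalkTypesVocabularyFactory = TalkTypesVocabulary()
```

The factory is then registered as a named utility in ZCML so that schema fields can refer to the vocabulary by name.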
The examples already have some basic data set up, a really basic index.html, and some basic rules. There's also a rule in it which gives you the Plone toolbar. But when you first use it and install it, you will have the toolbar and you will have the complete Plone content. We just pull everything over, but it's unthemed. Usually, you will just remove this rule and then you have a bunch of commented-out rules, which you have to adjust to your theme. Usually, with this approach, you will go to a theme forge or whatever, or to your themer, and they will give you just a bunch of static HTML. You will put it inside and then you just map it. You have examples in it and you can use this. What you also get inside the theme package is some of these registry configuration examples, so that you can say, I don't want anything else than folders in the portal navigation, and also I don't want news items and images and files in the normal navigation, which I always find stupid, but it's your decision. So you need some configuration of Plone in your theme, and as we have these options now, you can build your theme. The future: we can of course extend that. I'm pretty sure that more will come soon. I already started with behavior, tile, portlet, whatever these pieces are that I think are useful, and we can extend that. So I think that's the structure there. I also refactored the Python code for that. Before, we had in bobtemplates.plone just one file which was called hooks. There was basically everything in it, and this was more like handling the questions we answered before. I want the Dexterity type, so okay. mr.bob is just pushing and rolling out the structure we gave it. And with Python, after that we can adjust that, and before it was more like cleaning up, like if it's not a theme, then remove the theme folder and stuff like that. As we are now more modular, we don't have this need. So we have single Python files for the single modules. We also have a base file which has some base stuff, like finding the package root and stuff like that. So extending that should be really easy. You can just follow the existing stuff, and the work you have to do for adding something is usually providing the structure, like files and folders you want to create, and then probably updating some files, and this is the hardest part, depending on what you want to do. But there are examples, like updating the XML files or the types XML. So it's not magic. This is an example of how you can currently create this. You might notice the bobtemplates.plone:addon if you ever used bobtemplates before. That was actually a bug before, because even though the name was suggesting a namespace, there was no namespace inside. So actually it was conflicting with other bobtemplates.something packages, and there are also some of them which don't have a namespace. So if you call a template bobtemplates:addon and the Pyramid guys do the same, then you will probably not get your add-on, because it's the same namespace. This is now fixed, so you have the .plone before the colon. As we are now in our own namespace, we can just call it addon in our case and not plone_addon as before. And the rest is the same as before. Then you go into your package and you call mrbob bobtemplates.plone:content_type or :theme or :vocabulary. That basically works. I just realized that there's an alpha version already released, but there's an error in one of the things.
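To give an idea of what those per-module Python files can do, here is a hedged sketch of a mr.bob post-render hook that tweaks a generated file after the template has been rolled out. The file name, variable name and XML snippet are made up for illustration; this is not the actual bobtemplates.plone code.

```python
# Sketch of a mr.bob post-render hook (not the real bobtemplates.plone code).
# mr.bob calls hooks with a `configurator` that knows the target directory
# and the answers the user gave.
import os


def post_render(configurator):
    """Append a types.xml entry after the files have been rolled out.

    The file name, variable name and the XML snippet here are illustrative only.
    """
    profile_dir = os.path.join(
        configurator.target_directory, 'profiles', 'default')
    types_xml = os.path.join(profile_dir, 'types.xml')
    type_name = configurator.variables.get('dexterity_type_name', 'Talk')

    if not os.path.exists(types_xml):
        return
    with open(types_xml) as f:
        content = f.read()
    entry = '  <object name="%s" meta_type="Dexterity FTI"/>\n' % type_name
    if entry not in content:
        # Insert the new FTI entry just before the closing tag.
        content = content.replace('</object>', entry + '</object>')
        with open(types_xml, 'w') as f:
            f.write(content)
```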
But I will work on that. So yeah, on mr.bob itself, as I mentioned, I want to have a list of existing templates. So I have a fork where I'm working on this, and if that goes upstream then we have an option like I showed before in my vision. So we are actually not that far from the whole plonecli, which basically installs bobtemplates.plone and mr.bob and maybe some other packages which will provide bob templates. Because then we have a central registry in mr.bob, so to your package, as in bobtemplates.plone, you add an entry point. This will register your templates with mr.bob, so mr.bob can list them and can use them. That is really handy then. In plonecli we want even shorter commands, as an example, and also I want auto-completion on the command line, so that I don't even have to use the list templates command. I just do this and it will show me what I can use. And I think we can reach that on the weekend. Yeah, we will sprint on that. So please join if you have ideas or just want to work on the command line client. I will build this probably with the Click module, but yeah, we will see. But also contribute to the templates. I mean, this is really easy. If you think the templates should do something else, like you read some best practice somewhere, whatever, and you say this has to be like this, then just get in touch or make a pull request. Let's just discuss on that pull request and it should be easy. Yeah, that's basically it. So I hope it's interesting and I will see some of you at the sprints. Thank you. Yeah, we can think about that. It's probably more complicated. I mean, one feature I want to have in mr.bob itself is the option to not override files. I mean, by now, if you use a sub-template it will warn you and you have to actually say yes, I know what I'm doing. The reason for that is, if you use this, it will just roll out the files and generate the stuff. So if you already have files, because you added them manually or whatever, it will override your files. So the way you should use it now is to work with a clean repository state. So if git status says everything is checked in, you're fine; you use the templates and you do git diff and you see the differences. You can fix it if it has overridden stuff. It would be nice if we had an override option or a not-override option. That would be the first thing, and updating, that's more like an audit. So we could add some functionality in bobtemplates which gives you a command which checks your structure, something like that. But it would totally fit in. So I mean, we have to write the Python functionality for that, because we will update the templates. So we have to keep the templates always in the best shape, and communities like Aurelia, for example, they say okay, just use the template and then compare it. I mean, this is easy. You have differences, you can just see what changed. But I can imagine that we create a Python command for that or whatever to point at that. Thank you.
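To make the Click idea mentioned above a bit more concrete, here is a minimal, purely illustrative sketch of what a plonecli wrapper around mr.bob could look like. It is not the actual plonecli code, and shelling out to the mrbob command is just an assumption about how it would be wired.

```python
# Purely illustrative sketch of a Click-based plonecli -- not the real code.
# It assumes templates are invoked by shelling out to the mrbob command.
import subprocess

import click


@click.group()
def cli():
    """Plone command line client (sketch)."""


@cli.command('create')
@click.argument('template')          # e.g. "addon"
@click.argument('name')              # e.g. "collective.todo"
def create(template, name):
    """Create a new package, e.g. `plonecli create addon collective.todo`."""
    subprocess.check_call(
        ['mrbob', '-O', name, 'bobtemplates.plone:%s' % template])


@cli.command('add')
@click.argument('subtemplate')       # e.g. "content_type", "theme", "vocabulary"
def add(subtemplate):
    """Extend the package in the current directory with a sub-template."""
    subprocess.check_call(['mrbob', 'bobtemplates.plone:%s' % subtemplate])


if __name__ == '__main__':
    cli()
```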
I'll show new bobtemplates.plone features such as subtemplates and the new, more modular structure. We also plan to create a plonecli (command line interface) which will use bobtemplates.plone but will be easier to use. It will make the process of learning to develop with Plone easier than ever before.
10.5446/54949 (DOI)
I'm Mark Pieszak. I'm the founder and CTO of DevHelp Online. It's a consulting, development and training firm. We're located in Florida right now, but we work with a lot of different startups and Fortune 500 companies, trying to help them upgrade their Angular projects and basically get their teams up to speed, things like that. Today we're going to talk about Angular Universal. Before I get into that, I wanted to give you guys just a general background and basically a nice history of the web. Let's talk about server-side rendering and JavaScript in general. To me it's like the future of the web, or at least for some of it. Anyway, like I said, I'm Mark. Check out our website, DevHelp Online. It's still in the works, so right now it's very bare-bones. There's a really serious photo of me. I'm actually a fun guy, I don't know why. I think I need to update that. I'm also part of the Angular Universal team. I work with a couple of guys from the Google Angular core team and a lot of open source people to kind of build upon and improve Angular Universal, which, if anyone's ever heard of it, is basically rendering Angular in Node. So we'll get into that. So just so you remember my name, because it's kind of silly, it's Mark Pieszak; it's just spelled funny. It's Polish, so go figure. You can find me online at @MarkPieszak. I try to blog as much as I can, hopefully more soon. But if you want to talk about anything Angular in general, just hit me up on Twitter, always up for a chat. All right, so story time. I kind of wanted to start out with talking about, in general terms, the web, right? Where we came from, where we've been, and at least where I see it going. And to me, it kind of, obviously, it started way before this, but everything started out kind of server rendered, right? So we had things like PHP, .NET, anything, JSP, whatever, right? And there were a lot of pros to that, a lot of cons. So kind of to get started, let's say, I'm just going to generalize the whole period of time and call it server-side rendered, right? Obviously, it's gotten a lot better since then, but I kind of wanted to have some laughs at how bad it used to be, right? So this used to be Google, pretty ugly, right? I mean, this was Amazon, if anyone remembers. And we used to do a lot of chatting. There was instant messenger. There were chat rooms. And some things have changed, but some things kind of stay the same, right? Everything's been replaced. Now, we have Facebook Messenger. We have Slack. And we've almost gone full circle, right? It's pretty strange, but it's the way the world works, right? So some of the things with server-side rendering that I thought were always really beneficial and that it did well: we didn't realize how good we had it back then when, let's say, you're doing a PHP app or whatever. SEO is as easy as just basically adding a bunch of tags, right? Just meta tags or a title. It was super simple, right? It was great for static sites, a lot of other things, right? But some of the things it was lacking, right, was mainly user experience. You know, to get to any page you had to do a full reload, right? Everything constantly. So it wasn't that rich immersive experience, you know? And there were a lot more server requests in general, right? So, you know, what did the web do? You know, flipped everything upside down and all of a sudden the cool thing was single-page applications, right?
So, whether you did a hybrid with a little jQuery or you went full react or Angular, you know, what have you, single-page applications now are cool, right? Kind of cool. So I'm going to call them client-side rendering from now on, like CSR. So if you've never seen these acronyms, don't worry. I think we made them up. So, but with single-page applications, basically this happened and we had this massive ecosystem which is basically a nightmare to work with and every few weeks you kind of felt like this, right? I mean, you're all, you kind of were never sure of yourself. It's like, am I making the right choice, you know? One day Angular, next day View, then there's React, then there's a million other ones. So you get a little frustrating. But some of the things that client-side rendering does really well, of course, is user experience, right? So that's kind of why everyone switched, right? So we got that really immersive experience, you know, everything you could immediately see when things were happening, right? The really quick page transitions. It's perfect for really complex web apps and things with real-time data. And you know, the best thing is really you just, it feels like things are happening, right? And maybe a modal moves, there's animations, right? You can't do that with server-side rendering. So a lot of good stuff. But some of the downside, right, of course, is no SEO, right? So typically when you, you know, view source on any front-end app, this is what you see, right? You just basically nothing. And it's not personalized for that exact route or anything, right? So basically nothing. And of course, it generally longer load time. So you get to that route, and then you have to load, you know, megabytes of JavaScript files. And then all of those have to run and process and then bootstrap your app. So you see this a lot. And that's also kind of a con, right? I mean, if someone is on a slower network, let's say 3G, and it takes five seconds for your app to start up, you know, Google's done experiments and stuff on this. Maybe if a page takes longer than three seconds, most people will actually leave. So you actually get a lot of bounce rate because of that. So once again, kind of not a good thing, right? So with all this together, you know, you basically just rewrote your app. Now you did it in React or Angular. All of a sudden you realize this isn't good either, and your boss is pissed, and then you're throwing your phone on the ground. You know, so what do we do from here? So the goal really, ideally what we want is we want the best of both worlds, right? We want SEO. We want that really quick paint. You know, you see the application right away. We want, you know, if I copy and paste the link to Twitter or Facebook, you want to see that preview with the image and metadata. You know, we want the user experience. We want it to be interactive. It almost sounds delusional, right, if it's possible. So, you know, can we get that, right? So the next kind of shift that I noticed in, like, the web in general was someone in, I think, 2013 coined the term isomorphic JavaScript, which sounds kind of like a weird chemistry experiment. I think it's a crazy word. I prefer the word universal, and I think that's why the creators of it called it that. I mean, the idea of it is basically you want it to run on both environments, right? The browser and the server. In general, like, the how it all works, right? 
You have your JavaScript code, and we want that same code to run on a server and on the browser. So that's basically isomorphic or universal, whatever you want to call it, which to me sounded like a fairy tale, because I was like, how could you ever do that? Like, there must be a million problems, you know, that's kind of how I felt. This was years ago when I first heard of it, and I thought it was a joke, you know, I thought there's no way that's possible. You know, I have to see it first, right, to believe it. But someone then explained to me what it really was. So it's basically the code, you have your code, you send it to the server, it's supposed to serialize it, right? So it takes your entire app at whatever URL or whatever portion of it you're at, creates all the HTML for that, sends it to the browser. The browser then in the background, since it has those big bundles, starts downloading them and bootstrapping, and at the end of the day, everything finishes, and all of a sudden you have a client-side app. Did everybody catch that or did it go too quick? We'll be going over it in a second anyway. So here comes Angular Universal. So I came to the project about two years ago, and Angular Universal was their solution to kind of handle this and make this work. Because if you remember AngularJS, which is the older version, right, it wasn't possible to do this because AngularJS was really tied to the browser, right? Everything was based on the DOM and things like that. The new Angular is very abstract. So that's why everything is done using all these abstract syntax trees and things like that. So you're able to create your app in different environments. So if you've ever used NativeScript as well, it's done similarly, right? So NativeScript takes your app and basically renders it on a mobile device for iOS or Android, and it's able to do this because of how abstract it is. So they basically create their own engine on top of Angular's. So Universal does the same thing. The founders, or creators, were Patrick Stapleton, you might know him as PatrickJS, and Jeff Whelpley, really good friends of mine. So they created this project because they needed this for their projects, right? And Jeff Whelpley actually has, I think, a way to server-render AngularJS that he has working in production. So if you want to find him on Twitter, bug him about it, I think it's online somewhere, I forget the link. So anyway, so these guys actually created it. And to me, it's kind of amazing because the whole project actually didn't come from the Angular core team itself. It originated with people like us, right? Just random open source people, and then eventually got moved into core this year. So some of the other people on the team: me, of course, right? Jason Jean, who doesn't put his picture online, Alex Rickabaugh and Vikram from the Angular core team, Jeff Cross, who also used to be on the core team, now he's at Nrwl, and Wassim Chegham from SFEIR, and a lot of other people, a lot of big contributors. But yeah, so like I was saying, it was open source, it was its own repo, and now it's actually part of Angular. So if you've ever tried to use it in the past, it's a lot safer now. It's a lot more tested. And it's actually within the same code base. So it's really production ready now. It was before, but not really. So kind of like we were just showing, the way we handle it, right, is you take your Angular app, right?
So that's your root component, everything inside of it. You're going to run it on Node using something called render module factory, right? This is just like an overview. Don't worry. We're going to take the whole app, basically turn it to a long string, send it to the browser. In the background, we load all of your Angular bundles, everything, and then platform browser does its thing, and then, whoop, and then boom. So everything you've already known, right? Now we have a normal single page application with Angular, right? And actually recently, Jason Jean from Forbes got Angular, the CLI integrated with Universal. So this actually makes it a lot easier to start. And if you even want to take a look at it now, if you're on a computer, just go to Angular slash Universal starter. Just make sure you go to the CLI folder, because there's two different versions on there. So now with the CLI, it's a lot easier. All of the crazy web pack stuff is hidden, and it's a lot easier to get started, right? And you can see there's two different ways to get it running and everything like, I'll get into those in a second. Oh, and if you already have an Angular CLI, has anybody used the CLI before? I haven't got a few. Awesome. So if you already have an Angular app written in it, you can check out the CLI wiki. It's actually a really long link. I tried to shorten it. Check it out there, because it has how you can get started with it easily. So before I get into how it works, I wanted to show you guys what it is. So you know what I'm talking about, right? So this is right here, a static, like this is a regular Angular app, this one, right? And what I'm going to do is I'm going to slow it down a little bit to do like a fast 3G, and just so we can see the transition a little better, and I'm just going to reload real quick. So this is kind of what you're used to, right? You get that loading screen, this is also a really simple app. So the fact that it even takes a few seconds is crazy enough, right? So you can see we get that loading, which is kind of ugly, right? Sorry I didn't style this very much. And you can see it took about like four seconds on a fast 3G just to finish bootstrapping the app. So like we were saying earlier, if certain people will leave a site within a few seconds, if they don't see something they can work with, start scrolling, interacting with it. So this is just a regular Angular CLI loading, everything client-side rendering, right? Right over here, we have the same exact app, but we're going to do fast 3G as well, and we're going to... Hopefully this is the right one. There it is. Okay. So we have it using Universal. In this case, I don't really have much going on in this app, so it took 100 milliseconds for it to get that initial paint. So basically the device... It's usually a little slower than that, but I'm obviously a local host. Yeah, so I mean, we basically hit the server, the server rendered it so quickly, and the device immediately got a painted app, which is great. And then you could see behind the scenes, it was then creating it, and it also took 3.5 seconds for it to become a client-side app. But anyone looking at your app or on a device or on the web, they got the impression and like the perception, right, that it's really quick, and there's something there. And I mean, no one's going to go to it and immediately start clicking on stuff. So even if it takes a few seconds, you're fine, you know? 
And more importantly, if we do view source for both of these, this is the client-side rendered one. And can you guys see that? It's obviously all truncated, this is like production, right? You can see there's really nothing in here. We have this app root, and I statically put in this loading text, right? We don't have... We have the meta viewport, but there's no title. The title is the original index title. We don't have any meta tags or anything, right? We go over to the universal version, same exact thing, but now you can actually see the whole app is... Oh, maybe you can't see it. There it is. Is that better? Right? So now you can see everything actually got rendered. So inside of this app root, you can see we took the H1s, the image, all the URLs, the code that we got from Node, and even some of the meta tags. You can see the title changed to Plone Fanpage Home. I don't know if I added any metas. I'll show you. This one has some. Bear with me. Here we go. So this is just a different route. So you can see in here, I actually loaded it up with all the meta tags. And it's literally just as easy as it used to be. There's an Angular service called meta. Just include it in any component. You can have it be part of the router, whatever. And you basically just update tags in there, and you say, for meta, name, Twitter card, I want it to be this. For Twitter site, I want it to be my hashtag, whatever. So you can see everything here. We got Plone Conf, Lazy Loading Page. Later on today, I'm going to post this whole example, so you guys can take a look and mess around with it. It's really easy to use. And, you know, does that make sense? Right? Pretty cool, I thought. All right, so behind the scenes, basically it's a lot of dependency injection, right? Which does anyone unfamiliar with dependency injection? Right? It's, you know, you're basically swapping things out. So you're saying every time I want to use logger, swap it out for super logger, right? And you don't have to change your code base at all. All your code can stay the same because it still reference logger, but behind the scenes is actually going to call super logger, right? And a lot of these things that I was saying earlier are possible because Angular's compiler and the renderer, they're not based on the browser, right? So we're able to swap these things out and say instead of creating the app for a browser, create it from Node, right? And, of course, a little bit of magic. So these are some of the things that are typical, you know, when you're making, when an app's being created, right? Like the renderer, you have, anytime you're doing HTTP calls, typically going to call browser HTTP, right? Platform location is anytime the route changes, things like that. And obviously the way all of your styles are rendered, right? Because on the browser, there's native, you can emulate things. So what we do for Universal is basically swap them out and provide something different, right? So anytime the app is about to render things, we use a server renderer, which is, it knows you're a Node land, so it creates them differently, right? And anytime you make an HTTP call, we're actually going to use Node's XHR to make the request in Node, but, you know, everything's wrapped in zones, right? Which is a part of the Angular library. And zones is basically like, it's just a wrapper around, let's say, timeouts, anything async. It lets you know when things are done, when things go wrong, right? 
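As a concrete illustration of the Meta service usage described a moment ago, a component can set the title and social tags roughly like this; the component name and tag values are just examples, not the demo's actual code.

```typescript
// Small sketch of setting the title and social-media meta tags from a
// component, along the lines described above. Names and values are examples.
import { Component } from '@angular/core';
import { Meta, Title } from '@angular/platform-browser';

@Component({
  selector: 'app-lazy-page',
  template: `<h1>Lazy loaded page</h1>`
})
export class LazyPageComponent {
  constructor(meta: Meta, title: Title) {
    title.setTitle('PloneConf - Lazy Loading Page');
    meta.updateTag({ name: 'description', content: 'Angular Universal demo page' });
    meta.updateTag({ name: 'twitter:card', content: 'summary_large_image' });
    meta.updateTag({ name: 'twitter:site', content: '@ploneconf' });
  }
}
```

Because these run during the server render too, the tags end up in the HTML that crawlers and link previews actually see.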
And we wrap everything around in that so we're aware of it, right? So if any of your HTTP calls make an error, like when they're done, we're Angular knows, and basically this is how the app knows it's finished rendering, right? Because let's say you get to an app and you make two HTTP calls, one for products and one for to see if they're authenticated, right? Because of this, we're able to be aware of it in Node when they're both done, and since the rendering, everything else finished, all the HTML, your HTTP is done, we say, okay, we're all set, and then we send it off to the browser, you know? So kind of a long story there, sorry. I don't know if anybody cares. You know, and then a lot of other fun stuff like document has been changed to domino, which is just an abstract syntax tree. It's just a different, it's like a JSON representation, not JSON, it's like a long object of your HTML, right? So think of your whole app as objects, you know? So you have like your body tag and then it has children, and that's all of your divs and everything in there. So, you know, like I said, things are different in Node, right? And the other two, same kind of thing, we're just making sure we know how to deal with styles and how to deal with routes changing. So sorry, that was a little behind the scenes, right? So this is the typical Angular app, right? You've got your app module, you have your main TypeScript file, and then platform browser. So basically with Universal, all we're doing is forking it, right? We have two different branches of it. So we're going to have an app server module, a main server, and platform server, right? And this is for our Node server. Everything in Angular app is basically your main component and your normal app, so you don't have to touch anything there. You're basically just creating a little bit different starting point for your app, right? Based on the environment. And this would be really similar if you, like I was saying, if you've done native script, that's done similarly. So just to dig in a little deeper, right? Normally you're using browser module, right? So an app server, we're actually going to import everything that your app module has. So you can see it's like an extension of your normal app. And then we're going to import server module, right? And what all server module is doing is a lot of that dependency injection we were just talking about. So in there, this is like in the actual code, these providers are like, there's like 30 things inside of there. They're basically just dependency injecting over top of everything normal, right? You don't have to worry about this. I'm just showing you in case you're curious. So this is all you need, but behind the scenes it's doing all the magic, right? I'm just trying to make it not as mysterious, you know? And so the other thing that's cool is with something like this, there's two options you can go, right? You can statically generate all of your pages at build time, or you can do them at run time, right? And so what does that mean? Like if you wanted to generate static ones, you could think of like a CMS, right? And your pages aren't that dynamic, or there's maybe only certain pages you want to generate. So in this case, I'm statically going to generate index, about, and contact us, right? And once you create them with Universal, they're just going to toss them in your disk folder, and you can literally just serve that up however you'd like. 
And those will be regular HTML pages, but they'll be fully rendered versions of your app. Actually rendering is done at run time, right? So in this case, we're actually going to have that node server, and every request that comes in for whatever route, for homepage, for about us, for products, it's going to do it at run time because let's say you're Amazon, you know, you can't statically generate bicycles, right? Every time it's going to be completely different. So this is usually the use case most people are going to go down, you know? And I'll get into this. This is kind of the gist of it. Don't use this code right here because it's kind of just mock. But you can see right here, so basically saying for every request, this is a node server, by the way, for Express, for every request, I'm just going to pass in that AOT Angular module, right? And AOT just means ahead of time compiled. I'm going to pass in that original index HTML page, which usually is very blank, or it just has your app route, the current URL, and when it's finished, I'm just going to pass the HTML to the browser. So really it's nothing fancy. This is kind of all you need. And then a few things, like I was showing in the previous slide, it takes more than a few minutes, but if you download the starter, all this will be hooked up for you. You don't have to worry about it. It's pretty well documented as well. So in there, you can kind of get a good idea of where to start, what's what, you know, what's the purpose of everything. So this is doing everything manually and with render module factory, but I'd recommend just use it. We created an Express engine, which kind of does some of this piping for you. You can find an Angular slash universal. And there's actually more libraries in there, and there's more to come. So this namespace, and this is actually where the old universal used to be, it's going to be, now it's going to be filled with like third party and open source modules, things that make your apps better, pretty much anything for universal, anything you'd want. So keep an eye on that. And using the engines, look, it's a lot cleaner. We pass in that app server module, and that's it. And you can see right here, we're actually passing in the same thing. So for every request, we're just rendering the engine. If you're not familiar with Express engines, don't worry about it, but it's doing what we did on that last page and some just behind the scenes. And we're actually passing in every single request and response. So why does this matter? What's nice is now, because we're doing that, inside of your Angular app, you can actually, using the injector, you can grab these things. So you can grab the current, during a server render, you can actually grab the request and response. So let's say you have cookies or authentication data or anything, you could pass in anything through Node and access it within your app. So let's say they're already logged in and you want the server render to say, welcome Matt or whatever. Using this, you can grab all that data and actually make it within your app. Everybody with me so far? Sorry if it's a bit much. I know it's a little crazy. It's worth it. So I kind of want these, some of these, the next few slides are basically, I wanted to go over typical gotchas and problems with SSR. So these will happen no matter how you do it. So no matter what framework or anything, one important thing to realize is that each one is going to render. 
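For reference, the Express engine setup described just above usually condenses to something like the following sketch; the bundle path for the AoT factory, the output folders, and the port are assumptions that depend on your build, so treat this as an outline rather than the starter's exact code.

```typescript
// Condensed sketch of a Universal server using the Express engine.
// Paths and module names are assumptions that depend on your build setup.
import 'zone.js/dist/zone-node';
import * as express from 'express';
import { ngExpressEngine } from '@nguniversal/express-engine';

// AoT-compiled server module factory; the exact import path is build-specific.
import { AppServerModuleNgFactory } from './dist/server/main.bundle';

const app = express();

app.engine('html', ngExpressEngine({ bootstrap: AppServerModuleNgFactory }));
app.set('view engine', 'html');
app.set('views', 'dist/browser');

// Serve the client bundles as static files, but never the index itself.
app.use(express.static('dist/browser', { index: false }));

// Every other route is rendered at run time; req/res are passed in so the
// app can read cookies, auth data, etc. during the server render.
app.get('*', (req, res) => {
  res.render('index', { req, res });
});

app.listen(4000, () => console.log('Universal server listening on port 4000'));
```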
So when you hit the server, it's going to render your app, it's going to do it one more time on the client side, right? So why does that matter? If you make an HTTP call and component, let's say component C here, it's going to trigger it again on the client. And the reason it's something like that might matter is because it's actually going to flicker. So first you're going to get all those bicycles, let's say, then they're going to disappear right before bootstraps or when it does bootstrap, and they're going to appear again. So not only are you hitting the server twice, you're getting kind of a bad user experience, things like that. And I'll show how to fix that in a second. If you use anything like window or document inside your app, it's not going to end well, right? Those things exist in the browser, but Node is just going to throw a real sweet error that doesn't really explain anything and just shut down. So no bueno. So how can we take care of some of these problems, right? So coming up in the newest N5.0 is actually going to be something called transfer state. So right now that's an RC, I don't think it came out just yet. But what this is going to do is automatic, you can either manually set things to transfer or using the transfer HTTP cache module, it'll automatically do all of that wiring for you. And it's as easy as literally adding these few imports. So for your app module and your app server module, you just include the cache module. You could see there's one for the browser and one for the server, right? Now when we make that call, it's actually going to send all of that data down over the wire in the index file as like a JSON blob. And it's going to reuse it on the client. So it's kind of nice. So not only did you save or, you know, you didn't make two round trips to the server, you actually got rid of the flicker and the user once again had no idea. They just think your app's crazy fast. So yeah, like another one I was saying, be careful with the window, right? It's only on the browser. But with dependency injection, you could take advantage of it in your app and do whatever you want. So right over here, I kind of just made a, this is kind of like a window, right? Very mock. You could put as much or as little as you want in here. And in here you can see I just kind of stubbed a few things, right? For the browser, I'm going to use dependency injection and provide the actual window. And you can see I'm doing type of window. You want to make sure it's not undefined. Otherwise, you know, you don't want the server to use this because like if I just had window there, once again, no one would have blown up. So this is kind of like a work around hack, right? So for the browser, we're actually getting the window. This whole server has actually got replaced with the massive window object with everything that's normally in there. For the server, you can see we're actually providing and using that mock. So because, you know, in the server, we don't want to touch the navigator. We don't want to do location. We want to make sure anytime someone does window, nothing happens, right? And that line's a little there. Don't worry about it. Right in here, another big, I'd say, pro tip is with Universal, we have something called is platform browser. So whenever you want in your code, you can actually make tests to see and make sure which platform you're in. So you can see here, I'm taking advantage of a couple of things, right? So one, we're making sure we're in the browser. 
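A rough sketch of the transfer state wiring described above, for Angular 5 era code; the module names come from @nguniversal/common and @angular/platform-server, but the details have moved around between versions, so check the docs for the release you are on.

```typescript
// app.module.ts (browser side) -- sketch, Angular 5 era
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { TransferHttpCacheModule } from '@nguniversal/common';
import { AppComponent } from './app.component';

@NgModule({
  imports: [
    BrowserModule.withServerTransition({ appId: 'my-app' }),
    // Reuses HTTP responses made during the server render instead of
    // repeating them on the client -- removes the flicker described above.
    TransferHttpCacheModule,
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}

// app.server.module.ts (server side) -- sketch
import { ServerModule, ServerTransferStateModule } from '@angular/platform-server';

@NgModule({
  imports: [AppModule, ServerModule, ServerTransferStateModule],
  bootstrap: [AppComponent],
})
export class AppServerModule {}
```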
And then I'm using that window service to do something crazy like use jQuery, right? Which everyone, you're not supposed to do with Angular, but people do. Apps sometimes need certain things, right? The world's not perfect. But by doing this, we can actually do whatever we want. So this code basically is completely ignored on the server. But in this case, in the browser, I'm going to do something to the window or to the body, sorry. So kind of cool. It's just a way of basically ignoring things, like hiding it from Node. You know? No, it's pretty strict. And also something new in 5.0 is you can actually use that document that is provided in the Node environment, the one I was talking about that's like an object, JavaScript object. Anywhere you want, you can actually use it like almost like it's the browser. So in here, I'm actually querying a selector. And so anything you normally could do, you could do, and this would actually work in Node. Which is kind of cool. So this kind of just shows what we were just talking about. So this is the same thing, component using it, the browser can that actually uses Window, but because of this, Node keeps running, we don't actually use Window in here. So far so good? Is anybody really confused or not yet? One other big gotcha is just make sure, you know, with any, this is the same thing with any framework, really be careful with set timeouts and intervals, right? Because these things are going to slow down your server render, right? If you put a set timeout or an interval that just never ends, you're never going to get a server render, right? It's just going to keep going in Node and never stop. So same thing, wrap them with this browser. If you want to be really careful, which I usually like to just use that Window service just to make sure you're using the right one. And this, like I said before, just ignores it in Node. That way you get that really fast paint. Because if, you know, let's say in your code you have a little set timeout of a second and then you do some kind of animation. There's no animations in Node, so, you know, might as well ignore it, right? And of course, if you guys want more, there's a bunch of more gotchas and definitely more documentation coming soon. We know that's one of our biggest problems right now. So check out Angular Universal if you want to see a little bit more on that. Let me see. Oh, yep. So in conclusion, you know, I think with all this you can get, I think the beauty of it now is we can get SEO, we got social media previews. We can, oh, why is it missing some of my stuff? Of course. There was a couple more in there. Sorry about that. But basically, now I think we can get the best of both worlds. We can, you have SEO, you have really fast initial paint of your application. That is still really interactive, right? So, yeah, rich user experience. You can post it on Twitter, all everything is there you want as long as you provide it. You know, just be really careful of what you do. So, you know, if you're used to doing kind of things, however, it's a little more strict, you know, but that's okay. The good thing is by doing this, you're at the end of the day is cleaner, it's more platform performance and more reliant, right? And at the same time, you can even take your app and move it to something like native scripts, which has done a lot of these things applied to native scripts. So, if you get used to these things, you can really easily make your app mobile as well. So they have server render, mobile, everything. 
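And here is a minimal sketch of the isPlatformBrowser guard pattern discussed above; the component and the browser-only work inside the guard are placeholders.

```typescript
// Sketch of guarding browser-only code so the server render never touches
// window or waits on a timer. Component name and the work done are examples.
import { Component, Inject, OnInit, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

@Component({ selector: 'app-chart', template: `<div id="chart"></div>` })
export class ChartComponent implements OnInit {
  constructor(@Inject(PLATFORM_ID) private platformId: Object) {}

  ngOnInit() {
    if (isPlatformBrowser(this.platformId)) {
      // Only runs in the browser; Node never sees the timeout or window here,
      // so the server render is not slowed down or broken.
      setTimeout(() => window.scrollTo(0, 0), 1000);
    }
  }
}
```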
And one thing I can't stress, be really careful with third party libraries, because if you pick, I don't know, Joe Schmo's random library, he could have window everywhere and it will just blow up. You know, and when something like that happens, you have to actually use Webpack and swap it out and that gets a little confusing. So try to find ones like, you know, ng bootstrap. Material is like almost compliant. Like there are certain libraries that are, they really do things carefully and you won't have any problems with, right? Oh, there they are. All right. That's strange. So with server side render, we also got rid of that loading splash screen, right? Which I think, I don't know if we saw in the demo. All right. So this is the server side render one. When I push F5, immediately we see the app, right? Right over here. Get that loading. So now that's gone. Which normally doesn't matter. If you have a quick internet or anything like that, it's fine. But you know, most of the world has 3G, right? If your customers are using 3G, the stuff should matter. It's going to be a lot better for them. They're going to have that perception. Even if it takes a few seconds for them to get that render, they're used to it probably taking 15 for some complex site to load. And a nice, very loose metric is typically it'll be like two to three times faster for like an initial paint. So depending on what you have going on. Just remember if you have, you know, an HTTP call that takes five seconds, it's going to wait five seconds for it to render, right? So doing some of those tricks we were showing earlier, you can actually, maybe make sure you ignore that call on the server, things like that. So we went from this to this ugly thing. But now we have all of our code, right? We got our titles, our metas. You can see down at the bottom even like, that's like some of the HTTP call that was cached and sent along with it. So like we were saying earlier, can we get the best of both worlds? I mean, I think yes, but it takes a little bit of work, but I think it's worth it. And thank you guys. Thank you.
The web has gone from back-end server-rendered pages, to client-side SPAs,... and back again?? Let's take a deep dive into how server-rendered Angular applications work, and why you might want to consider using it for your next application. Learn how to bring great search engine optimization, social media previews, deep-linking, and improved perceived performance by rendering your application on the server.
10.5446/54960 (DOI)
My name is David Glick and I'm a consultant. I work with Jazkarta and OddBird and do some things on my own. I'm very excited to be here in Barcelona because this is my 10-year ploniversary since I started working with Plone in 2007. Nice to see you all here. This talk is called "Nice blobs. It'd be a shame if anything were to happen to them." And it's a talk about an experience that we had moving a large number of blobs, mostly images, on a site to cloud storage, and how we went about doing that out of the ZODB. You're going to hear me say S3 a lot because we did use Amazon Web Services, their S3 storage product. But if you're not able to use Amazon, I think you won't be able to use the specific code that I wrote, but you may be able to use a lot of the same principles to build something very similar for a different cloud storage provider. This is an image from the site that we worked on. Washington Trails Association, WTA.org, has a site where if you're in Washington state, you can go there and learn about where to go hiking. They've been around for 50 years and they promote hiking and outdoor activities. They do a lot of work doing trail maintenance in the state. And they have this site where one of the main features is that if you're going hiking, you can log on to your profile on their site or in their mobile app and you can upload your own photos to a trip report that describes where you've gone hiking, what the state of the trail is, that sort of thing. So as a result, they have lots and lots of images. In May of this year, when we were facing this problem, they had about 650 gigabytes of images and that was growing by a few gigabytes a day. And the disk on the server where those images were stored was about 98% full and we were trying to figure out what to do. And the problems with this, of course: you have to keep taking the site down so you can resize the disk so that it can hold more blobs, and we were tired of doing that. It also meant that it was hard when we were working; we just did a big upgrade of the site, not to Plone 5 yet, that's coming, but to Plone 4.3. And we had all these blobs that we wanted on the staging server, but that's a lot of data to move around. So that was difficult. And then also difficult to deal with backups. We have an offsite backup that copies those blobs to a server on a different provider, but we only have one of those. We don't really want to deal with moving that much data to other servers. So the staging server was actually on the backup server, which didn't have as much power as we would have liked. It was limiting us in a number of different ways. So as we started looking at how to solve this problem, we had various requirements. We wanted to move the data to cloud storage of some sort. Amazon was the obvious choice for us just because we use it a lot for other things. We wanted to keep reasonable performance. Moving to cloud storage means that there's probably going to be more latency in loading files. We wanted to make sure that that wasn't really going to impact the user experience of somebody using the site. We wanted to make sure that it didn't change how Plone worked, so that if there were custom code or add-ons working with images, they wouldn't have to suddenly use some different API. We wanted this to be as transparent as possible to them. And finally, we wanted a smooth migration path. So we wanted to be able to make the switch gradually. We didn't want to have to take the site down, move all the blobs, and then bring the site back up.
So I'll talk a little bit about some of the things we considered for how to solve this. We thought about handling it at the app level. So we thought about just let's take the image content type, let's add a new field which is storing an S3 URL, and then let's arrange to have a job that runs after you upload an image. It'll copy it to S3 right to that field on the object and then delete the local copy of the file. But then we started thinking about how images are actually used in Plone. And we realized that it's not just getting the full copy of the image to display it to the user. You also need it if you're going to generate image scales. You also need it if you're going to figure out what the dimensions of the image are that actually reads the file and looks at the header and figures things out. And we realized that we didn't really understand all the places that Plone might work with images. What do add-ons do with images? So we really kind of ruled this out pretty quickly and started looking at what can we do at the ZODB level. Pretty quickly, I remembered that ZEO has a cache of blobs. If you've got a ZEO client running on a machine and it's connected to a ZEO server, it'll fetch blobs when it needs them from the server. It'll put them in a local directory on disk and then it'll open up the file and use it from there. So I figured maybe it wouldn't be too hard to modify that and say when you would go get it from the ZEO server, we could also add a step that would go get that from Cloud Storage. And also, we have this directory that there's already code in ZEO that manages the size of that directory. It keeps it from getting too big. You can configure how big it is. Great. So maybe all we need to do is modify that. And as I looked further, I realized that actually our old friend Jim Fulton had already created the thing that did just that, S3 Blob Storage and S3 Blob Server. He built this while he was at XOPE Corporation and it's sort of what I just said. It replaces the ZEO client storage, so the piece of ZODB that is in the client. It replaces that with something that's very, it's just a subclass that changes a few things so that it will actually download blobs from a URL that's configured. But as I looked at it more closely, I realized that it wasn't going to work great for us. It really seemed kind of experimental. There wasn't much documentation on how to use it, which, you know, isn't necessarily a deal breaker, but it means you have to look at the code more carefully. The thing that I was really unsure about is this, it uses the separate process called the S3 Blob Server that actually deals with fetching the things from S3 and serving them to you. And that thing is written in Scala, which I don't know anything about Scala. I didn't really want to learn about Scala or how to deploy it. So I looked at the options more carefully because of that. And also, it assumes that you're configuring the location of that server using ZooKeeper, which is something that does get used, they use the ZOPE Corporation, but we weren't using it. So I came up with something that is a variation on a theme. It works a little bit like this. It's called collective.s3blobs. And what it is, is it's this S3 Blob cache that lives within your ZooClient process and takes care of figuring out where to get the blob from. So the first place it'll look is the ZODB, either with ZO or without. It'll go load the blob like it would normally if it's there. 
If it's not there because you've moved it elsewhere, then we will look in a new file system cache that belongs to the S3 blob cache. If the file isn't there, it'll get it from S3, put it in the cache, and so it can load it from there as the file is used again and again. So we're prioritizing local storage. We're trying the ZODB first. That's great because it means that as you're making changes to your Plone site, those are just getting written the same normal way into your ZODB and will be accessed from there. And also, you know, your local or relatively local file system is presumably a quicker way to load that data. And then, yes, secondly, we'll try to fetch it from S3 and cache it, so if it's getting accessed frequently, we don't have to incur that cost every time. Like I said, for that cache we're actually reusing a lot of the code from the ZEO client cache for blobs. So it has the same properties: you can say how big it should be, and it will run a thread that cleans it up periodically to make sure it doesn't get too big. Also that cache, it's just a file system directory that holds blobs. So it can be shared between multiple ZEO client instances if you have a bunch of instances running on the same machine. And also that cache is only holding the things that come from S3, so you aren't wasting disk space duplicating things just to deal with serving the things that you can get locally. So this archive script has various properties you can pass to it. You don't have to move all of your blobs to S3. This command here, A is age, so this is saying move any blobs that are older than one day, and S is size, so it's move anything that's over about two megabytes. And D is whether to actually delete it locally or not, which you're generally going to want to do, but I used that flag when I was developing to make sure that it was working. So this gives you a fair amount of control so you can prioritize fast local access, and again, lower your disk use, and figure out where the right balance is for you. Like I said, when you create a new file, it gets accessed normally until you run the script, so you can wait a couple of days, make sure that all your image scales are generated or whatever, and then you can move it to S3. And finally, this means that you don't have to move everything at once, so you can do it like a progressive migration. Like when we rolled this out, we first did the very largest files just to make sure that it was sort of working and then we sort of went from there. To configure this thing, you write something like this in your buildout, which ends up generating the S3 blob cache section which goes into zope.conf. So, you know, S3 blob cache: it has a cache directory, which is where you want to put the blobs that are getting loaded from S3. It has the size limit, it has the name of the bucket that you want to get the blobs from. It's also actually making use of your Amazon Web Services keys, which are loaded from environment variables. And then there's this percent S. Percent S is whatever ZODB storage would normally be in use.
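Before going on with the configuration, here is a very rough sketch of the lookup order just described, ZODB first, then the local cache, then S3. The class name, key naming and error handling are made up for illustration; this is not the actual collective.s3blobs code.

```python
# Rough sketch of the lookup order described above (ZODB first, then the
# local file-system cache, then S3). Names and key layout are made up; this
# is not the actual collective.s3blobs implementation.
import binascii
import os

import boto3


class S3BlobCacheSketch(object):

    def __init__(self, base_storage, cache_dir, bucket_name):
        self.base = base_storage          # wrapped ZEO ClientStorage / FileStorage
        self.cache_dir = cache_dir
        self.bucket = boto3.resource('s3').Bucket(bucket_name)

    def loadBlob(self, oid, serial):
        # 1. Try the ZODB the normal way (local blob dir or ZEO server).
        try:
            return self.base.loadBlob(oid, serial)
        except Exception:
            # The real code would catch the specific "blob not found" error.
            pass
        # 2. Try the file-system cache that belongs to the S3 blob cache.
        key = '%s-%s.blob' % (binascii.hexlify(oid), binascii.hexlify(serial))
        cached = os.path.join(self.cache_dir, key)
        if not os.path.exists(cached):
            # 3. Fall back to S3 and put the file in the cache for next time.
            self.bucket.download_file(key, cached)
        return cached
```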
This is actually a feature I added to the Zope 2 instance recipe for buildout so that you can wrap your normal ZEO storage or file storage with something else that modifies it. So this thing is actually operating as a wrapper that proxies access to your underlying storage. So you have your S3 blob cache that can talk to client storage, which is part of ZEO, so that talks to a ZEO server, or it can just talk directly to file storage and get things directly from the disk. This can be really nice for local development. So you can point your local copy of the site where you're making changes at the same bucket that has the blobs on S3 and you don't have to deal with copying anything; it will just fetch things as it needs them, and it's read-only so you don't have to worry about overwriting things with your staging data. Any changes that you make are just going to get written to your local blob directory and you don't ever have to run that archive script on your development copy of the site. So this thing is working pretty great. We moved, like I said, gradually decreasing sizes of files. I think we got down to one or two megabyte files that we moved, which was something like 50 or 60 percent of their disk usage. And I had a hunch when we started that most of those files were going to be the original-size images that people uploaded to these trip reports, which we aren't actually really using very much on the site. I've tried time and time again to talk them into limiting the size of the files that people upload to the site, but they have good reasons for wanting the original sizes, so that they can print out poster-size images of the beautiful scenery that I'm showing you some examples of. So anyway, they've got lots of large files that aren't actually accessed, which means that at least in this particular site, a lot of the data that we're moving is only infrequently getting moved back into the cache and accessed. So that means in this case, there's not much impact on performance. Most of the images that are actually served are still thumbnail versions that are served locally. And it's also low cost because we aren't actually doing a lot of data transfer out of cloud storage. So of course, your mileage may vary with your site, but it worked really well for this sort of big archive of images. I need to talk about some caveats. Bucket security: how do you make sure that once you've got your blobs into S3, they aren't going anywhere? We're relying somewhat on just the durability that S3 claims to offer. Amazon is presumably not storing just one copy of the file. I was more worried about user error: what happens if somebody who has access to the bucket via the Amazon console clicks on the wrong thing and suddenly the bucket is gone? So we actually figured out that we could set up a special security policy for the bucket that basically says nobody's allowed to delete it. You can delete the policy and then delete it, but it's a barrier to prevent the accident. You can also enable versioning on a bucket in Amazon S3, which helps with changes to individual files. If a file accidentally gets deleted, there would still be a copy that you could restore in some way. And we also thought about backup strategies.
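The exact "nobody is allowed to delete the bucket" policy mentioned above is not shown in the transcript, but the idea maps onto a standard S3 bucket policy, and versioning can be switched on the same way. Here is a minimal sketch using boto3; the bucket name is a placeholder and you would adapt the statement to your own account and tooling.

```python
import json
import boto3

BUCKET = "my-site-blobs"  # placeholder bucket name

# Deny bucket deletion for everyone. The policy itself has to be removed
# before the bucket can be deleted, which is exactly the kind of barrier
# against console accidents described in the talk.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBucketDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Versioning keeps older copies of individual objects around, which helps
# if a single blob is accidentally overwritten or deleted.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```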
There's another feature of AWS, which is you can take a bucket and mirror it to a different region of AWS, so that if one region has some catastrophic failure, you would be able to switch over to the other one and have a reasonably up-to-date copy of your blobs. And finally, we haven't bothered to do this, but if you're worried about just the viability of Amazon or something, you could, of course, copy your data to a third party. I haven't written a script that will read through your bucket and pull the files down, but that wouldn't be a terribly hard thing to do. There are a few more things that I haven't implemented that I think would be useful additions to this tool. Packing is something that you can still do to your ZODB; it'll work just fine. What it won't do is deal with the blobs that you've moved to S3 and that only exist there — the packing isn't going to know anything about those. So it's not going to delete files from your blob storage on S3 that are no longer in use on your site. I sort of have some ideas of how to implement this. Either you keep track during the pack of which files are no longer referenced, and then you can delete them afterwards, or you can just scan through all the blobs in the bucket, try to load each one, and see which ones are no longer present in your Data.fs. And another feature which we sort of thought at the beginning we were going to do, but ended up not doing, was that once you've got the blobs in S3, you can actually serve them directly out of Amazon's CDN, which is called CloudFront. You just point to a particular URL, and then it'll... So this actually does require some app-level changes: in your view where you are displaying an image, you would have to find out what the correct S3 URL is and use that in that view. We didn't end up doing this because of some particular considerations for this client. They already had data transfer that was included with their hosting package, so there was no particular reason to switch to something where they would have to pay separately. Matthew also pointed out that there are security considerations you need to think about — in this case all the images are public anyway, so it didn't really matter if we were making them publicly available, but that's certainly not going to be the case for every blob in every site. So this is another thing that we didn't build for WTA, but it could probably be added for somebody else's use case. So finally, to summarize the status of collective.s3blobs: it's in production use, it's stable, it's not released, mostly because I've been busy trying to fix bugs and so forth this week. It's within easy reach of a release. It needs more documentation, it needs more tests, but it's a tool that I don't feel bad about recommending to people. So yeah, that's collective.s3blobs. I'm happy to take any questions. I think we have a little bit more time. Yeah. The question was how do you make sure that when you're writing changes and you've got multiple Zope instances that they all see the changes. This doesn't actually affect writes. It's not a write-through cache — well, I guess maybe it is. But the writes go into the ZODB, and the ZODB is the first place that all of the clients are looking for loading things. So it looks in the ZODB first, then it uses the file system cache, then it gets it from S3. So I don't think that's a problem.
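To make the lookup order just described concrete — the ZODB first, then the local file system cache, then S3 — here is a simplified, standalone sketch of the idea. It is not the actual collective.s3blobs code: the key scheme and method names are invented for illustration, and a real implementation plugs into the ZODB blob storage APIs instead.

```python
import os
import boto3


class S3BlobFallback:
    """Illustrative three-tier blob lookup: wrapped storage, local cache, S3."""

    def __init__(self, storage, cache_dir, bucket):
        self.storage = storage      # the wrapped storage (ZEO client or FileStorage)
        self.cache_dir = cache_dir  # directory for blobs already fetched from S3
        self.bucket = bucket
        self.s3 = boto3.client("s3")

    def load_blob(self, key):
        # 1. Anything not yet archived still lives in the ZODB blob storage.
        try:
            return self.storage.loadBlob(key)
        except KeyError:
            pass
        # 2. Blobs previously downloaded from S3 are served from the cache.
        cached = os.path.join(self.cache_dir, key)
        if os.path.exists(cached):
            return cached
        # 3. Otherwise fetch from S3 and populate the cache for next time.
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        self.s3.download_file(self.bucket, key, cached)
        return cached
```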
So the question is what happens when the user deletes the folder that has images? That was the slide where I talked about packing. Normally in the ZODB you do a pack, and it figures out, oh, there are these blobs in the database that are no longer referenced from anything, so we can delete them from the ZODB storage. Yeah. There isn't anything like that for the bucket right now, but it could be created pretty easily. Yeah. The question is: does our staging server, where we're working on things, point at the same bucket or does it use a copy? We haven't actually done any large-scale work on the site since we rolled this out, but I would anticipate that we'll probably point it at the same bucket, and then any changes to the blobs that happen while people are working on the staging site will just get written locally — we'll still have a copy of the Data.fs, so they'll get written to the local blob storage, but we won't run the script to modify the bucket on S3. So yeah, they aren't changing once they're written, generally. Have you thought about extending the archive tool so that it can archive not only by age or size but also by popularity — for instance, how often a page gets hit? Right. The question was, can the archive script, in addition to paying attention to size or how recently the blob was added, pay attention to popularity, how frequently it's being accessed? I don't really know how we would know which blobs are popular. I mean, the blobs are fairly opaque as to how they're actually being used in the site. But what does happen is, if it does get moved to S3, it'll end up in the cache and presumably stay around in the cache because it's being accessed. Although I don't make any promises about the details of that cache, because I need to go look at the code again. Is it useful for more than images? I think you'd have to think about how the files are being used and estimate what your costs will be in terms of how frequently things will actually be getting loaded from S3. But yeah, I think it has the potential to be. I mean, there are some large PDFs that I'm sure are included here, too. Any more questions? Feel free to find me afterwards. Thank you very much for coming to the talk. Thank you, and thank you for being with us here today.
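The garbage-collection gap that came up in the questions — packing does not yet clean up blobs that only exist in S3 — could be closed with a script along these lines. This is purely a sketch of the second strategy mentioned in the talk; is_still_referenced stands in for whatever check against the packed database you would actually implement.

```python
import boto3


def prune_archived_blobs(bucket, is_still_referenced, dry_run=True):
    """Delete archived blobs from S3 that the packed ZODB no longer references.

    `is_still_referenced` is a callable taking an S3 key and returning True
    if the corresponding blob record still exists in the database.
    """
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    removed = 0
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if is_still_referenced(key):
                continue
            removed += 1
            if not dry_run:
                s3.delete_object(Bucket=bucket, Key=key)
    return removed
```

Running it with dry_run=True first gives a count of what would be removed before anything is actually deleted.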
Managing directories full of gigabytes of ZODB blob data can become cumbersome. This talk introduces collective.s3blobs, a newly developed tool making it possible to offload selected blobs to cloud storage on Amazon S3. It will discuss how it works, how to set it up, and how to tell if it is the right solution.
10.5446/54894 (DOI)
I'm working with ECDC, the European Centre for Disease Prevention and Control, and we obviously have quite a different approach to the sort of research that is done here compared to academics working in universities. So some of what I'll be doing is framed within that context, but you'll see here at the bottom, if it's visible for you, that I've written pre-2019, in other words, pre-pandemic. I was working as an associate professor at a university in Sweden, so I also have a background in academia, so I understand where you're coming from in that sense. So I've got two bits in my talk, one pandemic-wise and one pre-pandemic-wise. So basically, to take a slightly different perspective on things here, I'm going to present two broad issues. One is to do with online vaccine misinformation and how we handle that. And it's very obvious that if we're trying to understand why people are not being vaccinated, a lot of material is going around online that we need to understand and look into and try to address. And we need to be looking at community perceptions, understandings, health behaviours and trust. And the key word there is also ongoing. This is not something you can just do once and then it's done, because things change very, very fast over time. If you don't have that kind of understanding, whatever response you're going to make will be essentially, as I say here, shooting in the dark. You can't address online misinformation unless you know the particular types of misinformation that are being disseminated, how they're being disseminated and, to some extent, who is disseminating them. In other words, are these individuals who are doing it unknowingly — they think this is correct, and they're just passing it on — or is this wilfully disseminated by actors who have a different agenda? So obviously, if you're going to try to understand this, because it is an online phenomenon spread through social media, social media must therefore be a core component of any approach to this work. I'm not sure if you're familiar with this word, the infodemic. This is a word that WHO coined a couple of years ago, which really describes the context we're living in, which is exceptionally challenging. It describes what could be defined as an overflow of information of varying quality that moves through digital and physical environments during an acute public health event. So in other words, what we've got during the COVID pandemic is an enormous infodemic of information, much of it misleading, much of it incorrect, and also much of it correct. But trying to find your way through the jungle is really, really difficult, both as a consumer who is online reading these things and as a researcher trying to figure out whether something is actually correct or not. Importantly, the infodemic includes misinformation — false information in circulation — and disinformation, which has been spread wilfully and deliberately by an actor for political or economic gain, one way or another. Importantly, within the infodemic you've also got a lot of information gaps. So it's not that everything is covered; there are actually, if you start looking at it, quite significant gaps. So here is a document that we produced earlier this year, which was looking at countering online vaccine misinformation in the EU. And within the context of your work, I understand you're looking for signals of new outbreaks.
That's not necessarily what this is all about. But if you have an outbreak, if you could say of misinformation, you can say that it can lead to an epidemiological event. It can lead to an exacerbation of the pandemic. If the information is circulating about the vaccine, for example, or about the apparent inefficacy or efficacy of masks, et cetera, et cetera, that can lead to different behaviors which can have an impact on the epidemiology. So it's not only looking for new outbreaks, it's also looking at what outbreaks within outbreaks. Here is an infographic that we produced, which shows some of the issues that you can be looking for within if you're trying to counter vaccine misinformation. So there on the left at around 10 o'clock, you've got monitoring misinformation on social media. You need to correct misinformation. You need to evaluate your strategies, which of course, methodologically is extremely difficult. If you're trying to understand you have an intervention which is targeting misinformation, what are you actually measuring to ascertain whether or not that has worked or not? So methodologically, this is a really challenging area. And that would be something if anybody had thoughts on that, I would really welcome some discussion. So you can talk in terms of the social listening cycle if you're trying to actually act upon what's out there. So first of all, and this is something that, excuse me, UNICEF has produced as a sort of schematic for how you can set up a system for listening to social media and to different media. Preparation, you've got to have a team who do the work. Sounds obvious, but it's often not present. When you're listening, which is I think the key thing, which I think is what Diana was talking about just now, you get your system going up, you're doing your social listening and now within the context of this, you detect your misinformation and you develop a log of the sorts of rumors that are out there that may be incorrect. You then try and understand it. You assess it, you synthesize it, you have your analytics, and then you figure out what can we actually do anything, what is actionable here. And then hopefully you react, you engage, and you then hopefully get an impact addressing the misinformation. Of course, this may seem very obvious to you perhaps, but obviously there are, and as Diana has just been telling us, incredible potentials, but also limitations, the potential being predominantly just the sheer scale. And to some extent, the representativeness of the data that are easily available and accessible. I say to some extent accessible representative because of course there are, as I say here, a number of populations that are not online. And we all think about, I think you can think about our parents or our grandparents who may not have the same level of digital literacy that perhaps we have. And so their presence on social media is not reflected and therefore the issues that are important to them are not reflected in the dataset. So if you're looking at, Diana was talking about depression in kids, depression in the elderly would be much more difficult to look at using social media analysis because elderly people simply aren't on social media to the same extent. Another issue, as Diana mentioned, was private groups which are closed. And I think it's also very important to recognize that where there are gaps in the dataset in terms of who is putting material out there, those people are invisible. 
And if you are invisible, if there's no data about you, there will be no response to the needs you have. So if this is the extent of the data that you're collecting, having no data can be very problematic if you're looking to respond. So therefore you really do need other complementary sources. And I'm also very keen to highlight the danger of constructing facts based on incomplete data — of saying, well, we recognize that this is not complete, but we're still going to state it as a fact anyway. It's very important that we really take that in. When we use these data, we must really, really work on acknowledging the limitations of the data. I won't spend long on this, but this is really giving us an indication of the sources that we can use for social listening. And if you see here, this is what we've been talking about predominantly just now, interactive open data sources, but we can also be looking at broadcast materials. A lot of this is also online and is therefore accessible. And then you've got your more old-fashioned but equally excellent qualitative methods — focus groups, observations and interviews — and quantitative methods. These are also invaluable for social listening, and then there are materials that can be collected through the health system. So I think very often we tend to focus on this corner here and we may not be utilizing the full range of data that are actually available. And I think that's something. Once you've got your data and once you've got your analysis, certainly from our point of view at ECDC, we then think, okay, so what? What's the next step? How do we respond to this? And we very often use the really useful COM-B behavioural change model, which refers to capability, opportunity and motivation to change or adapt your behaviour. If you frame any sort of intervention within the context of this, it's a very useful way of defining how you might want to act. Capability refers to your ability to engage; motivation, obviously — do I want to do this as opposed to something else?; and opportunity — what sort of external factors are there to get in the way of, or to facilitate, your work? Hang on, now it's not going down. So that's a little bit about that. Am I jamming up here? Yes, good. I want to move now to the second key issue very briefly, which is about trust. If you've gone through your process of collecting data, you've gone through your COM-B model and you now want to reach out to people and address the depression they may be facing, or whatever public health concern it may be — the vaccination, poor uptake, et cetera — you need to have messengers who are trusted. Now, this picture: I took this in Sierra Leone in 2015. I had a project there during the outbreak and I went around and took as many pictures as I could of posters and billboards and banners like this that were to do with the outbreak, just to see what messages were out there. And this was one of the big ones: Ebola is real, help stop Ebola. And so here you've got a picture of an old man washing his hands, which was predominant and very, very common at that time. Now, you wouldn't necessarily know it unless you knew Sierra Leone from that period of time, but this is actually the president. And the president of Sierra Leone was at this time very polarizing — you loved him or you hated him. So basically, where they put these banners really made a difference.
If you put a banner like this in an opposition area, you would have a counter-reaction, which was not very helpful. So they were trying to get people to believe that Ebola existed; it was difficult. So this went up. Did it facilitate motivation? For some people, no, it didn't. And so what they did, which was rather a clever solution to this, was to get this young man from a few years ago — you'll recognize him — together we can beat Ebola. Now you might think, what on earth are they doing with a football player in this kind of context? Sierra Leone is a football-mad country. And he has no stake in Sierra Leone whatsoever, except for entertaining people with his football. So someone thought, well, it would be good if we could get him and his face to say Ebola is real, because people like Cristiano Ronaldo in Sierra Leone — whatever club you happen to support these days, people generally quite like him. He certainly plays good football. So this actually had a fantastic impact. So using a messenger who was apolitical, who people recognized and actually quite liked, was really effective. One final point here: the purpose of my project was to develop messages. And again, this is part of the messages that I saw around in Freetown. This one says Ebola is real and it's a killer disease, and it shows the signs and symptoms. The problem here is, if you say it's a killer disease, people are going to say, well, if I'm going to die, I definitely don't want to go to hospital; I'll stay at home and die with the people I love. And so people did not go to hospital in the early stages of the epidemic, which of course amplified the outbreak enormously. So this type of image unintentionally backfired. What we did was to identify a set of issues that we thought were important. We investigated these, we discussed them with the community, we developed draft messages based on them, we went back to the community, and then we refined the messages and disseminated them. So there was an iterative process there, which is very, very important, because that way we were pretty confident the messages we thought we were putting out were the ones that were being received. And here's one of them. This shows an ambulance, because one of the big concerns that people had was to do with the ambulance drivers. They didn't like the ambulance drivers. There's a quote here: they drink alcohol and whenever they drink, they're running at high speed, and when they talk to them to reduce the speed, they will not listen; and when you say, drivers, slow down, they just keep quiet and move with high speed. This was a concern that people had when they went in the ambulance — they thought they were dangerous. So we had to set this straight, along with the Ministry of Health: trust the ambulance, it's the best and the safest way to go. Excuse me. Last slide. Looking forward, clearly we need to triangulate. We need to use multiple data sources if we're doing social listening, because no single type of data can provide the whole story. And we need to have a multidisciplinary approach to that. We need to look specifically at populations that are marginalized, vulnerable, offline or un-networked. If we don't, their voices are not heard and we cannot serve them. And it's also important that when you're doing social listening, this is not an end in itself.
The work, the findings, everything must be embedded in the health system. The findings can then be taken out and acted upon. And I've got a question to you. I'd be very interested in your thoughts on this. How will AI change social listening? That means that aims to support the response to outbreaks. And specifically, if you're looking at qualitative data, will we ever be able to do meaningful inductive analysis? In other words, will it ever tell us things that we don't know we're looking for, which is what qualitative research can do so well? And that I think would be a topic that we might want to discuss. So that's what I wanted to say. I'm now going to try and stop sharing. I hope it works. It seems to have worked. And I welcome any questions. And once again, apologies for the chaos there at the start. So please, the floor is open. OK, here we go. I've got a question from Eleanor Arsepska. Which sources do you monitor or listen to for the COVID-19 misinformation at ECDC? How do you deal with many EU countries and languages? And how do you coordinate actions to be taken by public health services at national level? Wow, big questions. OK. In terms of which sources we monitor, we at ECDC don't monitor misinformation specifically. What we do is to provide support to a member states in the hope that they can do it themselves because you've got 27 countries and it's way beyond the resources at ECDC to do social media listening in 27 countries. Essentially, this document that I referred to is providing some simple principles for countries to work with, whether it's to do and whether it's to do with the types of things you should do the thing for or how you analyze it or what you do with the data. So that's a, we don't do it ourselves, but we've helped to try and support the country to do it. How do we deal with the many EU countries and languages? Well, since we don't do it ourselves, that's not an issue. But it is an issue. I mean, in broad terms, the languages can be an issue with when we produce guidance documents, they are in English. Some of the work that we do is in some of it is translated into all all member states languages. And if I can think manage myself technically, I can find a website that shows you that it's about vaccines and vaccine information and it's available in all countries languages. How do you coordinate actions to be taken by public health services at national level? We don't coordinate at national level because that's extremely much the competence of the different member states. It's not our job. They don't want us to coordinate them. That definitely they wouldn't welcome us meddling in their work. They know their countries, they know their problems, they know their settings. So they're the ones who do the coordination. So what we do do is to provide as good technical support in that process and as good synthesis of the scientific evidence in any areas that they're working on, that we're able to do. And one of the things we've done a lot of during the pandemic is producing what we describe as rapid risk assessments. And then another one actually coming out tomorrow, the 16th rapid risk assessment. 
And these documents are extremely comprehensive, covering all of the different elements that you would probably want to know about as a national institute of public health, from the epidemiological developments through to what we know about the immunology, what we know about the vaccines, virological knowledge, risk communication — which is the part of the documents that I work with — and community engagement and so on. It covers all of these different elements. So that would certainly not be coordinating. But what we do do in these documents is present to the member states a whole set of options for response, like a smorgasbord. And we're in Sweden — if you know a smorgasbord, you have a table heaving with all these different wonderful dishes, and you, as the person who's coming for the meal, pick the different dishes that look good to you. So essentially what we do with these documents, from the ECDC point of view, is to present a smorgasbord of options for the countries to choose from. They know which ones will taste good in their countries; they know which ones won't work in their countries. It's not for us to decide that, because what might work really well in Poland may just completely fall flat in Portugal, because they're very culturally different, politically different, socially different. So that can't be our task. But our task is to provide options that they can then choose from. And I mean, I hope that on the whole they've been pretty well received, and they have helped, at least in the decision-making processes during the pandemic, as to how the countries should move forward. So, any other questions or discussions? I would really welcome anybody's thoughts on this question about AI and inductive analysis. If anybody has any thoughts on that — personally, I'm very doubtful, but probably there's someone here who's done it or has insights, and it would be really interesting to hear what you guys think. Over to you, Diana. If I may just comment on the AI: to understand topics and to understand the arguments for opinions is somewhat doable in computer science. So as long as you can collect enough relevant data, you can have algorithms that summarize tweets or forums and summarize the opinions and the arguments; even with simple kinds of word similarity and phrase similarity, you can see the main things, as long as you can collect enough relevant data from enough sources. That's my feeling. Do you have to guide the analysis? Do you say, okay, just go and do it? Or do you have to say, look for this and this and this? Because I think that's my point. It can be semi-automatic, in the sense that you can detect the relevant topics and then decide which topics you want to know more about, and the opinions about these topics and the kinds of aspects or arguments that come up, pros and cons — it's called stance detection. So there is work on that for various reasons. Let me give you a specific example. One of the things we found that really took us by surprise in the work in Sierra Leone was, again, going back to this political polarization in the country. We produced our draft posters and images and so on, and then we made them colourful — they were locally produced, and some were red, some were green. And then we gave them to people in different parts of the country: so, what do you think about this? And it just didn't cross my mind that the opposition colour was green and the government colour was red.
So if you give a green poster into a government area, people will respond to it differently than if you give a green poster to an opposition area. Now that to me is a really important insight and we could never have understood that if we hadn't been actually talking to people and listening to people as humans. So I think my question would really be, is there any way that AI could pick up something like that through reading transcripts? If you have transcripts of the qualitative interviews and transcripts and social media messages, it could detect kind of salient phrases and topics and with some manual checking of those, you could maybe figure out. Interesting. Okay. But Joan, there is one more question about the COVID pandemic. So different countries in Europe across the world, they have these different acceptance of vaccinations. Right? Some, like in Netherlands, I think almost up to 90% of people once get vaccinated, 92%. But in some places it's much lower. So some of these techniques you mentioned, do you think they could improve that percentage? Yeah, it's a very, very big issue right now. Exactly. I mean, there is overall coverage of vaccination, full coverage of people who are 18 and above in the EU is now about 72, 73%. So that's not bad. But there is, as you say, this incredible discrepancy between countries. I think the top vaccinated country is Ireland with about 91% and the bottom vaccinated country is Bulgaria with 23% fully vaccinated. So the range is enormous, even though the mean is good, the range is enormous, a big problem for those at the low end. So we are, of course, working with those countries to the extent that we're able to provide input. But unfortunately, a lot of it is built on trust issues. And unfortunately, politicians over time have not always been trusted in a lot of countries. And so if a politician from a certain country says, you must be vaccinated with a COVID vaccine, but that everyone knows for whatever reason that politician has a history, which maybe they question for whatever reason, for ethical or whatever reasons it may be. And we'll then have questions about whether or not they should really do what that politician says. And we've heard this from many people from a number of different countries. And that's therefore very problematic. What do you do about that? Who should be the messenger? It goes back to that issue of the old man washing his hands in the Ebola outbreak. Is he the right messenger? And actually, what I think we have really, really found very clearly is the people who, in surveys, no matter where you do it, when you do it, how you do it, the people who are most trusted with regards to vaccines are health workers. So if you get health workers trained to support not only the physical doing of the job, but also the ability to talk to people and explain to people about the risks and the benefits and listen to people's concerns in the proper way and take people's concerns very seriously, then you actually can have a really good impact. So a lot of the work is actually about training health workers to support the vaccination process themselves in a very, very active way. So the politicians, we want them to do the best they can do, but for some, they're not trusted. And so it's very difficult. But if we can get the health workers more on board so much for better, and we do have trainings that we're developing for exactly this purpose. So it's a long game. It's a slow story. 
But we fight the fight and move in the right direction slowly, slowly. We'll get there, I hope. More Christiane Ronaldo in Bulgaria, basically. Absolutely. Absolutely. We need to know who is the cultural icon who can turn people. Exactly that. And do you think it's more like this, basically the people? I mean, they have the more impact, more impact than some story. Is it really the model, the role model? So is it the story? What do you think? It's both. It's both. It's very much both. It's the messenger. It's the channel. In other words, is it social media? Is it a broadcast on the radio? Is it the newspaper? And is it the message? So the combination of the three is really important. The message, the messenger and the channel. And if you can put those three right, then you have a chance of good success. Or maybe I can ask a quick question. This is for talking about communication. And you have experienced in the last pandemic. If you have any example of how any national public Earth agency has implemented a training or a training for healthcare workers in this sense, is there something already going on in which country? Yeah, it's a good question. But off the top of my head, I don't have anything I can share with you. And I'm quite sure it's happening in different places, but I don't have it on the top of my head. And I wouldn't want to single out a country and say, I think it's probably happening there, which I think it probably is, because then someone will say, no, no, no, it's not happening there, but it's happening here. I better be quiet on that one. I'm sure it is happening, but I don't know the details enough to share. Right. Right. Because for example, I saw a following the Guardian on social media and they tend to put a lot of healthcare workers in their pictures with us, for example, saying how the vaccines change their life or that they're kind of using healthcare workers as a figure to build trust. Absolutely. Absolutely. And again, that creates a sort of norm. If the more I go out, it creates a sort of social norm whereby people say, yes, this is what it should be. And again, this is another problem with a lot of the coverage of the suboptimal, let's say suboptimal vaccine uptake. A lot of the coverage of this creates a sense that it's actually normal not to be vaccinated within some populations. And that's problematic. So in some sense, there's actually an over coverage of the suboptimal vaccination, which causes more suboptimal vaccination because it's seen as something which is normative. And so in other words, I think we need to be also careful about how we frame story making. What you're saying there is very good because it frames it as something which is a very positive thing, which is actually saving lives and reducing the pressure on health services. So that's something which I think is also the media has a responsibility to try and get that right too. Yeah, exactly. A relatively small proportion of the people who feel that strongly. But when they get all this big coverage, because it's exciting, you get a crowd of a few thousand people getting angry with the police, it's big media coverage. And then it plants a seed in people's heads. It's like, oh, maybe this is something I should also be thinking. And that's also very, very hazardous. So the media have big responsibilities to actually be careful how they cover these things. 
I think I mean, misinformation, we should be clear what it means if it's referring to factually or scientifically incorrect information, which is circulating largely online, but not exclusively online. And over the course of the COVID-19 vaccination campaign, there's been a lot of it, a lot of it about. And it has caused a lot of problems through creating doubt in people's minds, in particular about effectiveness of the vaccine and about potential side effects of the vaccine. So or I should say the vaccines, because there are several different sorts. So this has been very problematic. And you can see the impact of this, but when you look at the images of protests and so on of people who don't want to be vaccinated. So it is very problematic. And I think it's important to say that it has varied quite a lot in different countries. We have done a survey in six different countries earlier this year, looking at online vaccine misinformation, and it was not only about COVID vaccines, it was, excuse me, it was also about MMR and about HPV vaccine and about influenza vaccine. And those in six different countries from different parts of Europe. So we tried to get quite a spread across different countries. We found that the extent of misinformation in the different countries varies a lot, at least from the point of view of the people who we were discussing this with in the National Institutes of Public Health. So from their understanding, misinformation in some countries is really not a serious problem. But in other countries, it is a major problem. And in terms of how public health agencies are monitoring it, there's a big range of activities. I would say probably most countries, even within the EU, don't have a very sophisticated system of monitoring online vaccine misinformation. That's not to say there's nothing, but I think probably it's not as advanced and well-resourced as it could be in many countries. Trust is a very, very difficult, slippery animal because you take a long time to build it and it can take a very short time to lose it. And this is really, really, very challenging. And if you don't trust the people that are telling you to get vaccinated, you're really going to think twice. So why is there lack of trust in some countries? I think to some extent it has to do with the messengers of the information, the messengers of the message saying to people, get vaccinated. And if you're receiving a message from someone you intrinsically don't trust, whether it's a politician or whoever, you may question the message. So the messenger is a very important element to do with getting to get the vaccination. So there have been some, a lot of work has been done to try to identify cultural or sporting heroes or icons that actually facilitate people's thinking on this. They don't necessarily want to use prime ministers and presidents because people may not particularly like them. A lot of the work has therefore gone to using cultural or sporting people. And also to engage with people online who online are actually spreading this information. But it can also be about complacency. I don't believe that it's a problem for me. And earlier in the pandemic, we talked about really problematic that elderly people and medically vulnerable people equals not particularly problematic for young people. Now we're saying young people also need to be vaccinated. So that can be problematic for people to actually understand there's a bit of change. So complacency can be an issue. There can be access constraints. 
There can be a lack of a sense of collective responsibility or there can be a sense of collective responsibility. And then there can also be people and individual calculations in their head risk benefits of the vaccination against the disease. So there are a lot of different elements that can lead to people not being vaccinated. And trust isn't really only one of them. They have policies for removing misinformation and I can't give you figures of which social media company has removed how many pieces of misinformation. But I know they're active. But I'm also pretty sure they could probably be more active. I think they recognize it's a problem, but it would be nice to see more because it's it pops up so, so much. And of course, no one's saying this is an easy issue to address if even with the technology they have. But it would be nice to see less of it out there because it's rather like you push it down here and it just comes up over there. You push it down there and it comes up over here. So the social media companies are incredibly wealthy and they have obviously by definition the best technology in this area in the world. So it would be nice to see more, more effort from them to do that. That would be to shut down misinformation would surely be a good thing. I think it's also important and of course there is an issue between balancing censorship and taking away misinformation. And that's not an easy line. I realize that we can talk about you got to get rid of misinformation and there are some things which are clearly misinformation. But there are other things which may be a bit marginal. And then so slightly where do you draw the line and I can understand that's a very, very challenging, challenging area to think about. So I understand that it's not a straightforward issue at all, but it would be nice if with the amazing technologies you think it is.
John Kinsman’s work has been focussing on behaviour change interventions since 1996, when he joined the UK’s Medical Research Council (MRC) Programme on AIDS in Uganda as a behavioural scientist. Since then, he has worked as an action-oriented researcher on behaviour change issues: through much of the early 2000s, John focused on issues relating to HIV testing and counselling, and adherence to antiretroviral therapy in a number of African countries, while subsequently he worked on several WHO-designated Public Health Emergencies of International Concern (PHEICs). In 2019 John moved to the European Centre for Disease Prevention and Control (ECDC), taking up a position as their in-house expert on social and behaviour change. Since the emergence of the COVID-19 pandemic, his work has been focused exclusively on the response, with direct support to EU/EEA Member States as well as regular input on behavioural and risk communication issues into ECDC technical reports and rapid risk assessments. John has also led or been closely involved with projects on addressing pandemic fatigue in the population, examining Behavioural Insights research in the Member States to support the response to COVID-19, supporting socially vulnerable populations, preparedness and implementation support for the COVID-19 vaccines, and countering online vaccine misinformation. John presented his work on "Social listening and the use of qualitative data for monitoring health behaviours and trust", exploring the role of social listening via social media, and its related challenges, in support of the "infodemic" and COVID-19 outbreaks responses. See the following link for the ECDC publication on countering online vaccine misinformation: “Countering online vaccine misinformation in the EU/EEA”
10.5446/54895 (DOI)
Well, hello. The Pastanaga concept was born when I was reviewing some content and I realized that we were missing some parts in Plone. Technically, Plone is competing maybe with Drupal or some others. But on the front end, we're not competing with Drupal. We're competing with the experiences that the user has with the Twitter application, with Google Drive, and that makes things more complex. And we need to create something on top of that that we can manage. And together with Guillotina, I started thinking about how to manage this complexity, to try to simplify and, I would say, kind of give a path to the whole community, to have guidelines not only visually, but also about the user experience and even about how to mark things up. Okay? So this is the idea of Guillotina: it's trying to put in place the steps to make something great and orient everything and everybody to work together. Okay? And the main goal is to simplify and focus people. Okay? And sometimes I realize that, because we're maybe too oriented on development, we forget about the user. And the user is just trying to do small actions in Plone. Our users are not admins. They're just people editing the content. So what does the user want to do in Plone? It's mainly editing, maybe checking a folder and adding new content. So that's a concept that was born to, for example, simplify the toolbar to just three main actions. So, theoretically and visually, it's just about orienting the user towards the three main actions and reminding the user which is probably the main action in that flow. So the user doesn't want to think. The user wants to kind of follow the story and be oriented by the application itself. So one thing that we need to review in Plone is the workflows. The workflows — send back, retract, return — it's too difficult. And then using the happy-path action to orient the user helps them to know what's happening. So if a user wants to go to Rome, don't tell him the route, just tell him the final place. So the idea behind this new concept is just: if I'm in draft, I want to make it public. Don't tell me how difficult it is to go public. Just make it public. And at the actions level, the user doesn't need to read, just to follow paths. So an approach is to just show intuitively which are the actions. If I'm okay, I will click on the arrow. But if I'm not okay with the action, I will dismiss it. But that action is an icon that can mean several things. So what you are really doing is avoiding the stress on the user of, hey, go ahead, go ahead — having to understand if it's okay, if it's done, if you are agreeing. Just do it, no? And also I want to add some kind of reassurance for the user, so he has some kind of hints that let him understand the situation. So, for example, playing with colours to understand if he's editing a draft content, an intranet content or a public content, no? Or even, for example, this is a mobile toolbar: the colour on top gives you the hint of a draft document. Then, because it's complex inside, we need to give a bit of visual hierarchy. It means ordering everything, having a bunch of elements that we can reuse. And so we don't need to think, when we develop, what are we developing? So we have, for example, titles, we have legends, we have forms, and we maybe have 30 or 40 elements across all the code that we can reuse. And anyone trying to help the community can understand quickly what he has to use, no?
Another gap in our system is that, while everything is going super visual, no? — we have a complete lack of icons, no? So we are also trying to provide an icon experience. So I made an SVG icon set to provide any kind of icon that we need for any development, no? Then navigation, okay? Before you do an action, you need to understand what the consequence of that action is, no? So to do so, I tried to conceptualize with small hints that, for example, if the arrow is going down, the selection will happen there. But if the action is going to the right, the selection will lead you to some place, so the user knows that the view changes will happen there, while the history will happen in another place. Also, at the usability level, if we can kind of formalize things, all kinds of content can be similar, reusable, and oriented to have that kind of — I don't have the word right now — stability, okay? And to do so, I'm trying to generate patterns. Patterns that are not just visual — I mean, they're patterns at the accessibility level, at the UX level, and at the visual level, no? So, to create consistency, no? At the accessibility level, which is really important for Plone, we need to follow the standards and make everything reusable and accessible. So the idea behind the patterns is to make it easier to become involved, because you know everything available in the system, and to orient the community to help, no? So if someone new comes, he knows where the elements are, which elements there are, and how he can manage the elements to create new content. Then this is another trend that we need to take into account. Right now UIs, instead of giving all the possibilities that you have at some point, are trying to orient the user. So show the user only what he needs to know at that point, no? So if I need to edit something, just show the consequences of that edit, no? And if I need to remove something, just show me that, no? And so, for example, when I am editing a link — the moment I am editing a link — I only need the insert-link UI; I can type the link, no? If I want to create a new document, the moment I create a new document, I just need a UI — sorry, because it seems that it's really wrong visually. But the moment I am editing, I only need to save, add new content, or maybe change the state here. The contrast is not showing that you can edit the title and the content, but well. And so, for example, in this more complex UI, when you are selecting an element, you probably want to make it bold, make it italic. When you are editing a link, you want to edit a link. When you are changing a colour, you only need the colours. So the user only has the essential path of the consequences of the actions, no? If you are deleting an image, you have a context dialogue that leads you to confirm this, to remove the image, okay? So at the end, we need to focus on the user, no? And so Pastanaga was born to try to explore those conditions, to be added to Plone or Guillotina, no? So let me show you some examples, no? This is a simple login to Plone. And the user logs in, no? Now you can access the main menu and open the options, no? Only three options. Other options are hidden in the more-options icon, because you will rarely ever need to change other elements, no? So let's follow the flow. If you go here, because you are an admin, you get all the extra information, but this is not a common user. Okay, now I will open the history, which is not called history, by the way.
And here, the user can see the changes made — in the history, also. At the visual level, you can see that it became public just after the yellow. So you also have visual hints to understand what is happening, no? Also, even for the administrators, we need to order the site setup. This is an example I made. So it's all the site setup, with the extra content behind, no? And then at the desktop level, no? At the desktop level — the contrast is really bad on the screen — the user wants to open the edit toolbar; he gets the toolbar. He wants to edit the user; he sees the elements of his user profile. Now he is, for example, accessing the more options. This is the history again. And, for example, if you want to go to the folder contents, you get a simplified view of the folder contents. You can open some menus; everything is more in context. And these are the examples, for example, of the folder contents and the user view on mobile. We are reusing everything, really. Okay. This is the toolbar adaptation. So if I want the toolbar on top, I can drag it just there and it appears on top. Sorry, I skipped it. Then, to reduce the components, I created a bunch of visual elements to try to simplify everything that we have. So those are, for example, the form elements: checkboxes, radio buttons, toggles, the dialogues. Those are the calendar and colour pickers. Content rule settings — if you see, I'm reusing the structure, so it has a hint that it can be handled, it has the visual status, and here the edit or remove. So that kind of visual structure will be repeated across the whole platform. This is an add content rule. This is, for example, a really complex rule creation — which I cannot see anything of here, but well — there is the "if", and you see that the structure of editing and removing and dragging and dropping is there. And then, to do so, I need to create a reference for everybody — not only for developers, but for everybody. So we are trying to create that visual reference. On mobile, we are creating bigger elements than on desktop, because on mobile you have a finger to touch the screen, which is not the case on desktop. On desktop you can be more compressed and have smaller touch areas. For example, for the dialogues, we have the icons, we have how to build a warning, and then we are even providing the best markup. This markup is kind of ideal, with all the roles and aria-live and aria-level attributes and so on, trying to cover not just HTML5 or 5.1 but also WAI-ARIA, to make everything accessible. And okay, this is what I'm doing, but then how can I give the information to the community? To do so, we are trying to document it like this. So there's the description, the classes affected, the example code — and there are other kinds of elements — and here, for example, you have a confirmation dialogue with a code example, so the community can take this as a reference and reuse it. These are, for example, the explanations of buttons: when we have to use them, with which code, and the states of the buttons — whether the buttons have a tooltip, whether the buttons have shortcuts. Shortcuts, for example, are something that we are not using right now in Plone, but the simplest way of saving something is just typing Ctrl-S or Cmd-S on the keyboard. This is simpler than going with a mouse or with a finger to some place. So, well, this is the concept for Guillotina and for the new Plone, and I'd prefer to leave time now to have questions from you. Thanks for listening. Thank you, Albert, this was impressive. Any questions?
I have no question. Great. Eric. Yes, thank you for the great job. That's awesome. Just one question, at the moment, do you have any idea which could be the CSS framework we could use to implement it? Do you have any preference? I don't have any preference at any level. So I'm not thinking on frameworks but thinking on which is the, so if I have to do a dialogue, there is several ways of doing that but the best dialogue, so covering accessibility, having the contrast in mind, having the structure of the elements, it's what I'm proposing. So if at the moment to implement that, I know that we cannot go directly to that because that's crazy. But at least we need to orient to that and maybe we can select the semantic UI or whatever or even bootstrap but this is an example. I'm mainly using classes based on bootstrap because it's what I'm used to but it's just an example of what I say. Hi Albert. These icons and all this stuff, is it available somewhere now? Yeah, it's available but not everything and not in the same place. We need to organize really. The icons are in GitHub. This is still internal but some people have access but at some point everything needs to go somewhere. My problem, probably I need to change my way of working because I'm working on a sketch right now. But you don't have, yeah, I cannot give you contents in a sketch because you will never be able to use that so maybe I need to transform into Google Docs or something so the community can manage. There is something called Figma. It's basically like sketch and Google Docs put together. It's cloud based. It's something that the community needs to discuss but I'm open to anything. Okay, so we'll talk about it. Any further questions? Okay, yeah. I think you covered everything. I'm shorter than expected, yeah. Sorry for the contest because I'm seeing nothing, there is a lot of information missing on the screen. That's a projection. Okay. But well. Thank you very much. Thanks.
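The pattern documentation Albert describes — ideal markup with roles, aria-live and aria-level, covering WAI-ARIA as well as HTML5 — is not reproduced in the transcript. As a rough idea of what a documented, accessible confirmation dialogue could look like, here is a sketch; the class names are invented placeholders, not the actual Pastanaga classes.

```html
<!-- Illustrative only: class names are placeholders, not Pastanaga's own. -->
<div class="pasta-dialog"
     role="alertdialog"
     aria-modal="true"
     aria-labelledby="dlg-title"
     aria-describedby="dlg-desc">
  <h2 id="dlg-title">Remove image</h2>
  <p id="dlg-desc">The image will be removed from this document.</p>
  <button type="button" class="pasta-dialog-cancel">Cancel</button>
  <button type="button" class="pasta-dialog-confirm">Remove</button>
</div>
```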
Earlier this year, we presented the Pastanaga UI project to the Plone community in several forums. It aims to be a new, generic proposal for building a powerful and modern UI for generic CMSs. Pastanaga UI is not only about how things look and feel but also about overall UX and interaction: a new guide that provides common ground for creating elements across all views and forms, with a unified, coherent structure and a homogeneous way to create new components. Pastanaga also presents a new way to create and interact with your content via a brand-new editor, simplified and ready for modern times. Do you want to see a sneak peek of the Pastanaga UI and the Pastanaga editor? Do you want to help make Pastanaga happen?
10.5446/54896 (DOI)
So, I'm going to show you a little bit of the application server. It's one of the largest piece of software in the Python world that has not been Python 3 compatible two years ago from now. That was a huge problem because a lot of our projects depend on Plown and without a path to the future, it could be horrible. So, yeah, what was the problem? We started in Bristol. I think it was 2014 in Bristol. We had the first discussion about the roadmap again, how to move, and some of the quotes was it is impossible to get Plown to Python 3 because there are too many blockers in Zope. No one has touched restricted Python or access control and they are from the beginning. So, it's the really hard stuff. But, hey, when it is time, our third C-Clock, the author of 2001 Odyssey in Space, has made some fantastic quotes about it. When it is distinguished, what elderly scientists states that something is possible is almost certain right. When he states that something is impossible is very probably wrong. And if someone tells you that's impossible, well, try to change it, make it possible, that's about what it is. So, why should be restricted Python be the major blocker for Plown on Python 3? Well, if you look at the dependency graph, all of the underlying frameworks that we depend on, access control, document templates, Python scripts, the Z catalog, Zope 2, everything directly depends on restricted Python. And a lot of the other up-way framework parts of Plown that makes Plown Plown depends on them. So, yeah, we need to touch restricted Python. And, you know, every piece of Zope that was not adopted by Plown is literally dead. Not a lot of the Plown people touched those parts of Zope. Restricted Python has been around till the end of the 1990s. So, in the code I've seen, there was a Python version 0.8 something or so that it has started on. So, really weird shit and stuff, yeah, but hey, let's try to adopt it. And that's the fantastic thing in the Plown community. We have a diverse group. Everybody has a capability of doing something. So, we started it, or I started it. And, well, what was the problem restricted Python has almost had no documentation. The test coverage on its own was extremely low. And, you know, debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, you're not smart enough to debug it. Horrible stuff. Restricted Python for that time was probably one of the most advanced Python packages around. It took a long time to go into it and understand it. But, hey, the problem was created by ourselves where Zope leads Python follows. A lot of the Python 2 standard library are directly influenced by Zope. There are a lot of packages that has been gone now in the Python 3 world that are just left overs because Zope needed them. So, there are deprecated modules like the compiler package, which was 80% of the implementation on restricted Python, depending on, and there was no documentation at all on the insides of the compiler package. Horrible. The only way to discover the limits of the possible is to venture a little way past them into the impossible. And so, let's go there. It's fun getting into the new stuff and, or the old stuff. Archaeology is also, but, hey, what doesn't mean any sufficient advanced technologies in this thing for magic. So, if you're looking at the stuff we have in clone and Zope, there are so many fantastic things. 
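To ground one of those fantastic things with a concrete illustration: the snippet below is a hypothetical through-the-web Script (Python) of the kind Zope and Plone let site users write. It is plain Python, but it runs under RestrictedPython, so every attribute access and method call is policed by the security machinery. The cleanup task and the id prefix are invented for the example.

```python
# Hypothetical through-the-web Script (Python).  "context" is one of the
# standard bindings Zope injects into such scripts; the body runs under
# RestrictedPython, so it can only do what the logged-in user is allowed
# to do, and it can never touch the file system directly.
ids = [obj.getId() for obj in context.objectValues()
       if obj.getId().startswith('tmp-')]
if ids:
    # Guarded call: only succeeds if the user holds the "Delete objects"
    # permission on this folder.
    context.manage_delObjects(ids)
return 'Removed %d temporary objects' % len(ids)
```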
And we have a lot of things that we don't have, or other frameworks don't have. So, if you look through the web, we can write scripts in Zope that could be used by, or written by our users that are not destroying your server completely or something or can hurt that extreme. We have a fantastic workflow machinery. And so, that directly depends on restricted Python to only show you the things you're allowed to see. So, hell, there is a lot of shit in for someone who's outside. That's like magic. How can they accomplish that? And that's the thing that makes clones so special. And that's the thing we want to keep. So, yeah. But the problem is what is restricted Python? And especially what it is not. Because if you're looking, in the early days of Python, the first Python workshop at NIST, the SPAM 1, the first Python conference, in 1994 already has an S, their first topic, requirements for a saved Python interpreter. And Jim Fulton and Paul Everett were at NIST at the conference. That was the origin where the idea for restricted Python is originated from. So, there is the requirement for that. We have it for almost 15 years now. But Zelp and clone are the only ones that use that. So, yeah. Most of the biggest problems in software are problems of misconceptions. So, the people don't understand the real problem or the real needs of the users. So, that's where we need to figure out what to do. And, well, restricted Python is not a sandbox. That's important to know. Sandboxes most often don't work. You cannot escape the sandbox. So, if you're relying on a sandbox to think, to secure your system, there's always someone smarter than you that found a way to escape. Security by obscurity or by putting everything into one place is problematic. Doing security is a long process. There are different approaches to that. And Python has done a way that has been shown to work in another language also. So, what restricted Python is, is a limited, safe subset of the Python programming language. The grammar, the grammar that we love to work on. But it is so limited that you get it to be not anymore Turing complete. So, programming languages depends on being Turing complete to be able to do everything. So, if you limit that, you could do it and decrease the possible things that could be done with it. There are simple things that you would like to allow the people, but not everything. You don't need file access or something else. And there was another project that's have done that, that was the other project in 1995 with the Ravenscot profile file, which is a limited subset of the programming language order that makes it restricted and logical testable so that you can show that the program, a program in it could be run correctly. So, yeah, there is a way. And that's also the way to restrict the Python 4.0, the way to Python 3. So, well, if you want to start it, first solve the problem, then write the code. So, understanding what it is, think. And, yeah, the first read of the complete code says it uses the compiler module and its AST class. Well, go to docs.python.org, compiler package. There is not, there may be 100 lines, a short overview, but no details, nothing. It's not fully documented. There's no described upgrade path or something like that. Just say it's gone in Python 3. Deplicate it. And the other thing is, if you're looking into the code of restricted Python 3. They do manual byte code generation. Manual byte code generation? Holy fuck, what are they doing there? You have interpreter specifics. 
They just let it run on C Python 2. Oh, yeah. It's hard. But, well, if you think in some scientific way, it's not documented, it's not usable. If it's not tested, it did not work. If it's not checked in into version control, it did not exist. And if it's not repeatable, it's not science. So, someone has already found a way to shield it. So, there must be another way to do it. If someone has already done it, it should be repeatable. That's science. So, yeah, it's hard. And compiler knowledge is necessary to port it. Hell, we are such a diverse community that we have a specialist for everything. I have studied computer science. I have taken compiler lessons or written one of my thesis about compilers. Well, it probably will take time, but it's not impossible. So, let's give it a try. And, well, where did it start? In Plon OpenGarden 2015. Then we discussed it again and start looking into the code. And, yeah, the people already decided, if that's not possible, we need an alternative way. So, Plon Server, which now has become GioTiner, and some other ways to make a turnaround on that, probably get rid of restricted Python if we can't handle it. But, hey, if we can do it, let's do it. And so, reading and understanding the code is the necessity in the beginning. And, hey, any fool can write code that a computer can understand. Good programmers write code that humans can understand. That's all about it. You need to get down to the Zen in Python. Make the code readable. Make it understandable. So, now I have taken over the maintenance, but I'm probably not smart enough to do it from the whole of my life or something like that. So, someone else, if I'm probably gone, could take over the maintenance. So, make the code more easy, more documented, more commanded, and everything so that the community can work with it. So, yeah, what did I do? I started, and the first half year I was working on porting was just writing documentation. What I see in the code, just annotate everything, making the whole set, getting some feedback from some of the Zop developers that has already worked on it, and, hey, if you don't have any requirements or design, programming is just the art of editing back to the empty text file. So, it did not work that way. So, yeah, compiler AST. Hey, there is a new preferred way in Python 3 that already was established in Python 2.6. That's the new AST model. And AST could completely replace the compiler AST model. And the compiler function itself in Python accept AST as an input. So, you do not need to manually generate bytecode anymore. You just drop in the modified AST and get it worked. And you can get out of the compiler function the AST and get bytecode out of it. And that works from Python 3.6 up and Python 3.4 up in a very, very smooth way. The packages have already been there in the beginning of Python 3.0. Python 3.0 implementation from 3.3 to 3.4 changed a bit. So, we have tried to make it comparable with 3.2 and 3.3 in the beginning. It was hard, but they are already out of support. So, we skip that and make it easier. Python 3.4, 3.5, 3.6, 3.7 are going like a gland. So, that is not a problem. We can get to work on that. And, well, the first thing that was necessary was to get access to the foundation or the foundation repositories. And I am glad that Minabu organized the Plones Symposium in Tokyo, 2015. That was the first place where I met Pressiever and get access to the repositories and everything. So, that was necessary. 
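Coming back to that AST point, here is a minimal standard-library-only sketch of the approach: parse untrusted source into an AST, police or rewrite the tree with a NodeTransformer, and hand the transformed AST straight to compile(). RestrictedPython 4.x works along these lines but is far more thorough; the class and the toy policy below are illustrative, not its real API.

```python
import ast

class ForbidImports(ast.NodeTransformer):
    """Toy policy: reject import statements instead of emitting bytecode by hand."""

    def visit_Import(self, node):
        raise SyntaxError('imports are not allowed in restricted code')

    visit_ImportFrom = visit_Import

source = 'result = 6 * 7'
tree = ast.parse(source, filename='<untrusted>', mode='exec')
tree = ForbidImports().visit(tree)
ast.fix_missing_locations(tree)

# compile() accepts an AST object directly -- no manual bytecode generation.
byte_code = compile(tree, '<untrusted>', 'exec')
namespace = {'__builtins__': {}}   # drastically reduced builtins, demo only
exec(byte_code, namespace)
print(namespace['result'])         # 42
```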
And it is sometimes the simple things that take you to travel 20,000 miles. Yeah. And, hey, another thing is we have done a lot of code in the old Zope way. But we are now tending to do more and more coding conventions to get the code readable. And that is what the Zen of Python is about. It is the lessons learned by the people at Zope Corporation at that time from Zope and translated into the whole Python community. So, beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. And, hey, you have to read your code. And after two weeks, three months, six months, looking at your own code, it could be the code of someone else. You need to understand it again. If you have not written it the way that you can read and understand it, quickly again, you are lost. So, the first thing I tried to make was applying our new coding conventions on it. So, I did not change any of the implementation details. I just make it following our coding conventions so that it is more readable. The second thing is writing tests. And I really have raised the test coverage. I thought in the beginning it was about 18%, 18%, less than 20%. Test coverage over the whole package. We have raised the test coverage now to more than 95% of the complete code. We've done it completely tested. Every syntax statement that is in the Python language is explicitly tested. What happens if it is allowed or disallowed in Python 3? Or in Restricted Python. And we do a functional test of all of that. So, hey, in tests, the first principle is that you must not fool yourself. You are the easiest person to fool. Because you believe that the things you are doing is good and right. So, get another person into the system. Have them have a look at it. That's the thing you need. So, I make a decision together with two others and said we have to move. The test set up, Restricted Python did not depend on any other external package. It just depends on the Python standard library. But writing tests in the Python standard unit test library is not that straightforward. And PyTest and TOX give you a lot of easier stuff to start at. And it's actually at the moment the defective standard in modern Python packages. And TOX gives you the possibility to test multiple Python versions locally on your machine against the same thing. That's fantastic. And PyTest gives you the possibility to parametrize your tests. Not only with input values, but also, that's the advantage of Python, you can pass a function in. So, I do paramize the tests with the old implementation and the new implementation. So, we had the left the old implementation, complete impact, do just another file next to it with the new implementation. So, we can test if everything works like it should be. If it's on the one implementation or on the other. So, at least for Python 2, we know it is exactly the same. And that's fantastic. And, well, PyTest has the advantage you just can use the answer. You don't have all the time look up, is it now as true, is it as raised with something or so? Just easier. I give directly after lunch another talk about modern Python testing. If you're interested in the lessons I have learned on porting restricted Python, what we should do and get to get better best practices on developing stuff compatible. You can come there. That's one of the examples of the tests we have done in restricted Python. 
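Roughly, such a parametrised test looks like the sketch below. The two compile functions are stand-ins that both just call the built-in compile(), because the point here is the pattern of passing implementations in as pytest parameters, not RestrictedPython's real internals.

```python
import pytest

def compile_new(source):
    # Stand-in for the new, ast-module-based implementation.
    return compile(source, '<test>', 'exec')

def compile_old(source):
    # Stand-in for the old, compiler-package-based implementation.
    return compile(source, '<test>', 'exec')

@pytest.mark.parametrize('c_exec', [compile_new, compile_old])
def test_augmented_assignment(c_exec):
    byte_code = c_exec('a = 1\na += 1')
    namespace = {}
    exec(byte_code, namespace)
    assert namespace['a'] == 2
```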
So, we take both implementations. So, the new implementation, the second line is the old implementation we pass it in and just look, execute a statement and look if we get errors, if the result is also translatable or a hand load, we export it and see what it gets out. So, you can prove both implementations work the same. Perfect. So, yeah, it takes a long time. And doing it alone is very hard. I'm glad that during the up-and-city sprint, the people from GoSep and some other core developer joined in, stepped up and helped through, we were able to do it. And hey, if someone says to you, it is impossible, well, things that are impossible just take longer. It's the fun thing. You can get it work. That's the nice thing. So, at the beginning, all the people were saying, Zope is that. But we as Plon has not adopted is that. Hey, Zope is not that. The people came up again today. Or if you're looking at the conference, there are several people that originates from the Zope community, are here and doing stuff. And, yeah, we get it working during one of the Zope sprints in May 2017 to cut a first release of restricted Python 4.0. That is Python 3.0. We can already test that. Zope 4, the talk from Hano tomorrow, will tell about runs completely on Python 3. And it depends on restricted Python and access control. And you see on the right, we now have a test coverage of 98.58% test coverage. Python 2, 7, 3, 4, 3, 5, 3, 6, all working, all test passes. Each time we do something. So, yeah, we can get it done. And how has Ice Mac Michael Hovitz from GoSep said it? Welcome to the Python 3.0. Thank you him for his work doing it, helping me. And, well, for those people that are outside, or knowing some of the old stuff from Zope, PJAB has given a fantastic quote on it. Those who do not study Zope are condemned to reinvent it. We do see a lot of fantastic stuff in the Python community today. They tried to do stuff that has Zope done before. But they are still stuck in the path where they are not can give the equivalent of Zope some way. So, hey, on the other hand, they start and say, that couldn't be that hard. But, hey, if you think it's simple, then you probably have misunderstood the problem. You have to really think and look how it works. It is a long path, but it could do. And my wish is that we get the strength of the Python world. We have the Zope and Plon community into other frameworks. Get them to use the stuff we produced. And they are adapted by more people so they can work with it. Project Jupiter, Jupiter Notebook, which was originally iPython Notebook, is a fantastic environment for learning and teaching on Python, doing data science stuff in Python through the web, but you can kill your complete server through the web. So, you have to shield it. You have to do everything. And they have a sandbox. But, hey, everybody of us should be able, with our skills, got from the Plon world, to escape that sandbox and kill the server if we like. So, that's not the thing. So, if you do something, and there is TryJepeter.org, where you can get the server up and running with iPython Notebook, that means you have to secure that more or get things done. Django, Pyramid, and Guillotine have other security features, but they do not allow writing code through the web. Giving the people the possibility to write something. A user in a system. And even if it's just a God method or a short script, to move a file to another folder is something that empowers the users. We should be able to give them and help them. 
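For completeness, this is roughly how the released RestrictedPython 4.0 mentioned above is used from trusted code. Treat the exact import paths and guard setup as a hedged sketch and check the package documentation for the authoritative version.

```python
from RestrictedPython import compile_restricted
from RestrictedPython.Guards import safe_builtins

# Simple assignments need no extra guards.
byte_code = compile_restricted('answer = 6 * 7', '<untrusted>', 'exec')
namespace = {'__builtins__': safe_builtins}
exec(byte_code, namespace)
print(namespace['answer'])   # 42

# Forbidden constructs are rejected at compile time, e.g. names that
# start with an underscore.
try:
    compile_restricted("secret = __import__('os')", '<untrusted>', 'exec')
except SyntaxError as error:
    print('rejected:', error)
```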
So, I would like to see others using it. But, I'm probably also a person that sees, if we have something complex, like Plon, it is not easy to recreate from scratch. Django has said it correctly. A complex system that works is invariably found to have involved from a simple system that works. A complex system designed from scratch never works and cannot be patched to make it up to work. You have to start over with a working simple system. I like the ideas that Guillotine also takes, but the problem is if you're knowing the Pareto principle, so that the first 80% takes 20% of the time and the last, the harder 20% takes 80% of the time, Guillotine has started with the easy parts of Zope and Plon. They not even have reached the hard part. And that's the problem. At the moment, it's a fantastic framework for doing REST, but all the empowerment that Plon and Zope has done to the community is not there at the moment. And that's the thing that I like. Yeah, legacy is boring. The old new stuff is shiny and makes fun doing something quick, but doing something that lasts, but doing something that enables large-scale things is hard. And getting there is something that our community has shown in the past and can show in the future. We don't should let that down. So yeah, what are the lessons learned I have done? Don't take impossible to port or fix that moment as serious. It is possible. It probably takes longer, but you can do. You should adopt modern tools and framework for your help. So Talks and Titus is a fantastic thing. I said, if possible, we should do that in the Plon community and we should probably update our best practices for Plon. And if we can use Talks, we probably can easier also so for all of our add-ons that they work together with. Different Plon versions with different Python versions, and so that's possible. For all of you who are interested in the way to Python 3, I can recommend three other Talks. So Hanoz talked more about Zope on Python 3. Mike talked about modern Python testing, and Mike has worked a lot on sub-templates and Bob-templates-plon. And that's the package where the best practices for Python and Plon packages are currently defined for the Plon community. So give it a try and hey, every revolutionary idea seems to evoke three stages of reaction. There may be some up by the praises. It's completely impossible. Don't waste my time. Oh, it's possible, but it's not worth doing. Oh, we have done it. I said it, it was a good idea at all. And that's the way the people think. So thank you. We are done. We have it on Python 3 now, so that's the way to go. Thank you. Thank you. Thank you. Thank you. Any questions? Any other questions? Yeah. Okay. Philip Bauer already had a running Plon 5.1 on the 4 on Python 3. It has some errors, so it's just about fixing all the bugs with some of the print statements and some of the code or so. So there's a few things, but it just takes time. I guess if we work on it and get more people testing it, we could have a Python 3-compatible Plon next year. Okay. Hano is giving tomorrow a talk, but Zulp 4 is running smooth already. There is some stuff on generating the new startup templates and so on. So Zulp 4 has got rid of that server as the default handler for presenting the generated code. It's now using whiskey as the default. So you need to get all the stuff on the configuration side. So probably Zulp Recype, Zetzi Recype Zulp Server needs to be adjusted for Plon, but hey, that's possible. 
So there is some stuff to do, but it's just the normal stuff we do on those prints, isn't it? Any other questions? I guess that's it then. Let's thank our speaker once again. Thank you. Thank you.
Zope and Plone are fantastic systems that provide a lot of features out of the box. A primary focus of Zope and Plone has always been security and the empowerment of users, and one of the things that provides this is RestrictedPython. A lot of code in Zope and Plone has become legacy and remained unmaintained for several years; RestrictedPython was one of those core packages that got less attention and became the major blocker for the porting of Zope and Plone to Python 3. Many thought it would be impossible to port RestrictedPython to Python 3, but in May 2017 we released a version that is compatible with both Python 2.7 and Python 3.4-3.6, and even PyPy. Nothing is impossible within the Plone community! This talk will focus on RestrictedPython:
• what it is,
• how it can be used and by whom,
• what the problem was with porting it to Python 3,
• why things seem impossible and why that is almost always not true,
• thoughts on security in Zope/Plone and other frameworks
10.5446/54900 (DOI)
So it's more than ten years ago. Back then, developing with Plone was very much fun. You just used a browser, tried anything, changed everything, and you got instant feedback and instant errors from Plone. Of course, back then, when you had to migrate to a new Plone version you might have lost your changes and had to start again, but at least it was fun. So what if we could bring at least part of that fun back to the current version of Plone? So: you have a Plone instance running on some cloud service, set up easily by clicking a button somewhere. And then you have a feature request. For example, I have an event coming up, and from the event we get a lot of photos, and I want to publish those photos on the Plone site. And with a nice layout, like those fancy picture-wall layouts that are responsive, can show a lot of pictures with different aspect ratios, sort them out nicely, and reflow them nicely on different screen sizes. And of course, one more feature: I would also like the participants of that event to be able to submit their own pictures, so that I can review them and then publish them if they are any good. And now this is our fun scenario. I just created a clean Plone site on a cloud instance on Heroku. And I ordered those features from some small Plone agency and got a zip file back from them; it was maybe a few days of work. So I got the zip file, and now I go to my Plone site and I want to upload the zip file there. So it looks like I'm uploading a theme; I guess it could also be a theme. And I tick the checkbox so that it's instantly activated. And now when we go to the Plone site, well, it looks like nothing happened. It still looks the same, because the theme in this demo actually just reuses all the resources from the Plone 5 default theme. But there is something new here: I have a new content type called Wall of Images. So that was the one I requested from that Plone agency. So we get a wall of nice public domain images. And now I have a bunch of my own images I want to submit there. Luckily, in Plone 5 I can go to the folder contents and use that nice "upload all the files at once" feature. I select the files and, guess the right word in German, here are the previews, and now it starts to upload. I think this is not running the latest Plone, so I don't see the nice progress bar here. You can see in the background something is happening and the photos are disappearing from the upload pop-up. So now we are ready, we close the upload, and now we have our new public domain image wall. So we have a list of nicely laid out images, and when I zoom in and zoom out, the layout of these images is responsive. Now if I find the plus sign... yes, I found it. Okay. Well, if we had different screen sizes: on mobile, only one image per tile, and on a larger screen we see more images at once. Okay, the next step was that I want to make it so that my event participants can also upload their own images. So let's make it at least public at first, so they can see all the images. We have another browser somewhere. Here we see our Plone site as a not-logged-in user. We see our wall of public domain images as a not-logged-in user. Okay, so, as before. And now we have an extra workflow transition here: let's open this wall of images for submissions. Done.
And now when I go as anonymous user of the pages, I see this submit in the image link. I can select one of the images I didn't upload yet. And now I get the nice message. Okay. Thank you for your submission. It will be a review soon. And as you can see, I don't see the image here yet. Of course, we don't want to immediately publish all those images. Anyone can submit there. But as owner of the site, reviewer of the site, I see a portal of the preview list here. It looks nicer when you have wider screen. But it's one-one when you submit. And with the proper click, I can open it and accept it. And now as the visitor, I can see the new railway station image at the bottom. And all these features came with that theme that didn't really even add a new theme here. And that's kind of fun I'm looking for. For our university, we want to stay more competitive with the other vendors. We are like our own department. We offer services for other departments. But we would like to look competitive compared to Reval's Out of the University. So we are looking for cheaper and faster ways to deploy new features for blown. Like you see now. So now. So what did we actually see here? Yes, we did have a team which actually only replayed everything in Barcelona. But then we had resources. We had a major rich AES called JavaScript library that made a very nice layout of the images. And of course it can lay out anything that has fixed dimensions, not only images. So in blown, I can easily see it's layout in different content types also. Then we had a folder content type, wall of images, which accepted images. And now that our custom content player. And then we had our custom web template that rendered all those images. And something enough for a major AES to make that layout, activate the layout. And then it was the hard part was that how do we accept custom anonymity tab missions. So of course in on blown we can do custom workflows. So that package includes a custom workflow. And that's custom image content type. Because when you need to apply custom workflow, it's most easier to apply it to custom content type. We also have placeful workflow. So you can activate different workflow policies for different folders. But it's more advanced I think. And be a little bit more secure. We added also custom permission to only apply for that new custom content type. To be really sure that we only allow anonymous users to only add that content type. Even if they know some magic URLs in blown. And the last policy was that we added content rule that showed our customer the message. Thank you for submission. And it also redirect the user back to the folder page. So that user, because the anonymous user who uploaded the image cannot access the image. The image goes in private review state immediately. So we need to redirect the user back to the front page of the image. Also that user didn't get error. And we added that a review booklet that comes by default blown. To the root of the blown site. And what we didn't see, there is also internationalization. And localization messages. So if I would have created that site as in Finnish language. I would have shown Finnish messages and Finnish names for the content types. And Finnish messages for the content rule. So thank you for submissions. So now if we go with the trend of Salloway. For example you take the latest many years old professional blow developers handbook. For blown four. And according to these practices. Yes, we like to make everything in Python package. 
You would create at least one generic Python package for the front end resources. And other for the content types. You nice reusable test, well tested generic Python packages. And then a few more packages to make the generic components fit well with your customer team. And finally you would need to update your site policy package to meet everything together nicely. And of course when you have all these packages for those features. You still need to have the deploy workflow. The package everything and release everything. And finally restart your instances blown service so that users don't get any downtime. But with this fun theme approach. You simply have everything in that team package. And you upload it to your blown site. Replace the current team with the new one. The new version of the team. With integration for these features. And configuration for these features. And you can do the upload through by clicking the browser. And you can have some kind of robot automation for that using Selenium. Or I have created a small npm package called Ponteen upload. Can be installed with npm. So it takes a folder of the team. And then address of your blown portable route. And it makes a cheap package of the team. And then on the behind clicks through the blown site. And ask your username and password. And saves your cookie to make the rig upload easier. And then just final upload. And there is no step three. Because there are limitations. If you do everything only through team. You cannot do everything you can do with Python package. But you can do a lot. Of course we have configuratable settings in blown. Now it is we all have almost everything in blown registry. That has control panel you can edit. And also all the control panels are in blown site setup. Yes we can define custom content types through the web. We can do that. Next to the content is blown 4 and blown 5. And define fields and even more. Which is the map they have. And everything if you know how to do that. And our documentation is not really so complete and upgraded on that. But by looking for examples you get pretty far. And we have reusable content type behaviors. You can activate on your content type. Those behaviors are the ones you cannot develop through the web. But you can enable it if they are. Or ask someone to develop those to. And then use those by yourself. And we can make workflows through the web. Configure roles and permissions through the web. What we cannot do by default is to define new permissions. So I come to later how that's possible. And we can configure all kind of portlets. In blown current versions we have those context portlets. You assign into some folders and they inherit the top folders. But there are less and more content type portlets. That only are displayed on some content type. Some specific content types. And there are group portlets that are only displayed for users belonging to some certain groups. And I don't even remember how you can bridge all those features through blown sites. But they are somewhere there. So if you are going to blown user and groups from sites, you can set the group portlet. And go into content types in blown sites. You can find configuration for the content type portlet. That's fun. And one of the main feature users to really know how to use is content rules. So you can make a event trigger content rules on blown. That something happens like there is new content uploaded or content state changes. There are a lot of events there and custom packages that add more of them. 
You can have actions like send as. So make it or send email or move that object somewhere inside the blown. And a lot of more. And then we can define completely new weaves, templates. Those are displayed content on use. And even have support with Python scripts for those. You can write small Python scripts inside templates. But when you need something more processing, you really need to extract that into separate Python script. And finally, we can also include in theme all static resources we want. JavaScript and CSS. And then know how to configure them on. And then we can use Diaso to do and move in stuff around the page. And there have been separate talks about how to make blown theme. And probably I forgot something here. But a lot. Well, this is the most advanced topic here, I guess. What does it mean that those are rest of the templates and rest of the Python scripts? Well, in blown there are, as I said, the Python packages you install in blown and then restart blown. And those can do anything. But in blown you can also upload code to the browser. And anything you do to the browser are considered restricted or untrusted. And they have limited set of features you can do there. And I think Steve McMayhem said quite well that what is this restricted Python that powers those templates and Python scripts that can be included in theme or entered into blown. Previously, we added a lot of them into portal skin slash custom, I think. Many of you probably have done it if you have used blown to a program tree. Now it's a little bit frowned upon that you shouldn't use that approach. So it's not a sandbox, but it's kind of safe to use or doing stupid things. So you can accidentally, for example, delete files on your server by writing those scripts. And we're a little shy about the security of those scripts. My understanding is that we don't know any active exploits in those. So whatever you add to the web, okay, then you can run things in your blown site with the permissions of the user, like delete, delete, opts to the blown site, but they cannot access your server, your files system, like go there and delete the whole database of blown. But we don't really call them 100% safe because, of course, we cannot prove that, and there might always be something we don't know. But basically, they are the feature that makes blown really blown, that you can blow this one of the, I don't know if there are any other systems that can allow you to write real code, to do custom stuff without being afraid that that code breaks everything. But of course, as I said, it comes in some price. So people say it's not very pythonic. You cannot import, like import libraries you can do, but on a normal point you need to do things called restricted travels to find blown, like blown restricted APIs and do stuff. And those are not always complete, they are not up to date, and they are not anymore so well documented, they are scattered around blown ecosystem. So you kind of need to know from examples where you find things so that you can do extra things there. And for the last few years we have, with a lot of voice, blown API, how blown API makes it easier to program blown and automate blown. But when blown API is made, we thought that all these restricted Python things are past, we don't use them anymore, and they are just standing in a lot of way, even many of us still think they are fun. So that was taken account of blown API. 
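For comparison, this is roughly what the same kind of content manipulation looks like in trusted, file-system Python using plone.api. The calls below are from its documented interface; the folder id is invented, and the publish transition assumes the type has a workflow assigned (as in the scenario above).

```python
from plone import api

portal = api.portal.get()
folder = portal['public-domain-images']          # hypothetical folder id

image = api.content.create(
    container=folder,
    type='Image',
    title='Railway station',
)
# Only works if a workflow with a "publish" transition applies to the type.
api.content.transition(obj=image, transition='publish')

for brain in api.content.find(context=folder, portal_type='Image'):
    print(brain.getURL())
```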
So blown API by default cannot be used from restricted Python, nor is designed to be usable from that. So if there are ways how you can enable blown API to be usable from restricted Python scripts, but then there are real chances that if you have untrusted users, they can do bad things on your server. So if these are not well documented and they are all around, how do you know what to do? So this is my favorite book, Practical blown tree. I think it practically covers everything I saw before. Everything is expected to be contentized. So how do you make content rules, how do you make workflows, and how do you write Python script, how do you write blown baits template for the browser. So if you like the approach and still want to do this, you need to find this piece of this book. It's a killer. And if you can affect what packages available on your system, there is a very old package called product dogfinder that adds new depth in job management interface. And that depth kind of shows you what kind of public API each object or tooling in blown on job tool level offers. So that was the thing we used a lot at blown tool era, long 10 years ago. So if this is so much fun, why don't we speak more of this and recommend you use these features or have not done so for the last few years, is that I think probably almost all of the blown code developers have had to migrate and upgrade, but they have to have to fix a site, upgrade the site where previous developers have used a lot of these features and that have caused a lot of technical depth. And because you can, in blown you can add those script and templates on everywhere on the system and when you do upgrades, the next version of blown tools may get broken and it's a lot of work to find those and upgrades to fix those. And because most of the blown code developers have done that kind of work, they try to avoid doing that again and recommend not using these features because it's a technical depth that makes more expensive to do upgrades. But this topic of the day is how to go around these limitations because when we pack everything in theme, actually all the custom code is in one place, so you can find it and you can upgrade it for newer blown versions. So what we can do with these advanced teams, we can use everything listed before all the customization can be included in the theme, at least almost all of them, and they can be created through the web, creating new team and starting adding things there. And they can be custom sized using the blown team editor. And if you make a great mistake and you lose your old version, you can always go to try to find the right version through Job 2.0, well it's like kind of last resort but it's there. One of our best features. But of course because you work in team editor, you can always chip export your stuff on your local computer, work on there and re-import stuff like I did in the beginning of the talk, I imported team and got all those features. So this app supports both through the web development, so you can click things around and get started and then export everything on your file system and continue there and keep uploading new versions of blown. And once you have everything on your file system, they can be on your version control, just so you can easier to get older versions and see the changes. And this can even be testable. So if you go to my GitHub page, my GitHub profile, the most recent repository, contains this talk and that example team from the talk. 
And there's also a minimal setup for doing robot testing, robot framework testing with this team. So what it does, it makes a chip package of your team and then goes through using Selenium to the browser and uploads the team and then you can do robot framework test, access test for the team, the things how they should be. And that would make even Timo happy, I think. Of course this is not possible with it. Well, you saw me just creating the blown site, but you didn't see that in that blown site I had two special packages to make this approach possible. The most important package is called Collective Team Site Setup. It's a few years old package of mine. And that package, when you activate team with that package on, it goes through that team directory. And from that team it can run so called generic setup imports. So generic setup is stuff in blown that can export and import most of the global configuration of the site, including content types and workflows and content rules and portlets. All portlets and then port definitions and portlet assignments and the portals route. And all these can be installed when you install a team. And then there are a few extra features which you can have dexterity XML models in the team because it's very, a lot of dexterity, if you do it through the dexterity development, then export the content types, you get the dexterity XML model inside another XML file, which is really cumbersome to edit. So you can have those XML files separately on the team and make it easier to edit and manage them. And yeah, it can register your custom permissions. It's a little bit hacky, but it works, as you saw. And also it can register new localization message catalogs. I don't think it can overwrite exting messages, but it can add there. When you, for example, workflows need to be, workflow translations I think need to be in blown namespace, localization packages, you can just have blown message catalog there and it reads them there. You can see examples there in that example team, how it works. And there are one more feature, it can copy, well, teams are contenting in, fit like portal resources called folder inside blown site. And so teams like it can also populate other portal resources folders and team folders. And that's not much use right now by default. We do use it a lot in blown mosaic. The blown mosaic can have content layouts stored in blown portal resources, so we can populate blown mosaic layouts there directly from team. And when you uninstall team, it tries to clean up everything. So, okay, it only can do, it can, you can have like uninstall profile in team that tries to uninstall the registration by default it doesn't do nothing, don't think on that, but it does unregister all the permissions and message catalogs. So it should be safe to use. And this one I will show, so as I said, you can start developing on your site. But, and then when you want to code files, you want to somehow extract the stuff. So, now we create a new team here. And when we are below that team folder, this stuff is documented in the read me of the collective team site setup. I hope I find this, okay, I need to be careful to find all the letters from the keyboard. So, it's export site setup. Yeah, you get a form where you can check the stuff you want to export. The steps depend on what I don't say I installed, so they do use the blown scenario setup. Back into like an I want to export workflows and content types. Done, and then echo back to the team control panel. 
So, we have now here an installed folder there. And if you have ever created a blown profile, you see kind of familiar files here. Of course, this export everything from there. So, when you really want to make a reusable team package, with these features, you need to clean this up. And only include the types you want, like our new types here at the bottom. So, you can start from through the web, create the content types, the desktop editor, and create workflows using collective app workflow either, or just using the old approach through the management interface. And then export everything in your team, and then export the file system, and tweak everything in the upload pack. And the dexterity XML models must be exported separately. They are available to dexterity content types control panel, selecting it to dexterity content. Then they are, they used to be export button here. Yeah, no, I wonder where they have moved it. Somewhere here is the export button. Oh, yeah, front page. Yes, here. You can select and export it. And you can only export, it's enough now to export schema models, and then add them in your team. Special directory. And there's another package called collective tier fragments. This was our original tier fragment idea, made by Martin Aspelli. But it was never accepted in the blown up teaming core, so I made a separate package of it. And you can define a template in your team, and then can be used as a fragment, as originally designed in your digest rules to make it into small snippets somewhere in your page. But also, then it can be configured as a real use for content types. They can also, they either be defined as a default view of the content type, in that content type definition, external definition, or you can go to site and go to manage properties in, so, manifest interface and set, manually set the layout attribute to a magic word that matches the layout. And you can also have a companion in Python script. This is a trusted Python script, so if you have a lot of logic behind the team, so you can only have your matching name of the template, you can call like, like it was a real view with real methods. And that helps. The main purpose is to keep your template more clean, that you don't need to add a lot of Python in your template, keep them more readable. And the main benefit here, we know that usability is hard to make good user experiences. And usability is not a very exact science. There are a lot, a lot of science, areas of research, and they are mostly how you experiment with people, how you test stuff, how to make it better. And the core idea is you need more iterations, you need feedback. How to have fast iterations, how to get best feedback. That's the most tough in usability research. So what this approach allows is, allows to have faster iterations, and that means more iterations with the same amount of money, because you only have that one team package, and you can instantly upload it on your site without restarts. Yeah. So I think I saw some of the things from the team. So I did, for the example, I made an initial configuration through blown, then I exported it using the export site setup, and used it to export from the control panel, and it exported extra XML models, as I said, and then I moved it back to the file system and finished developing there, and finally packages and uploaded it. And this is how the layout of my team looks right now. So there are the usual stuff you know from normal teaming. 
There are templates and manifest and preview and data rules, and then the default, in blown file, each team can have a kind of default bundle. So default bundle, Java, Java, default bundle, CSS. And then out there, there are the bundles directory, has the static resources, I added, for the measurement, the team fragment, and install the contents from generic setup, and then there are the localization message catalogs, and well, I am politicized for that naming practice, so localize slash LC message slash language, and then actually I didn't want to invent something new, so I used the practice from Python packages, and then there are the XML models. And then we have Fibonius left, so I saw some highlights. So practically you can use any, any CSS bundles, JavaScript libraries quite easily this way. There are some gotchas, like I want to use Mesonet.js, which also need some images loaded to only make a layout after all the images have been loaded. And I use the official distribution often, and the official distributions are so-called AMD bundled. And what does mean is that they don't work by default in blown fire, because they conflict with the blown fire required CSS configuration, and make those work just as they are, without doing any kind of bundling or compilation of resources, which some of have thought it is difficult to do, you need to wrap them at the few lines at the top of the file, and one line at the top of the file, and one line at the bottom of the file, so that it doesn't find your required CSS configuration at loads. And then you need to have the normal artistic configuration for the bundle. You can see these completely in my team examples. And yeah, I skip content types, yeah, XML models, and views, as a normal blown views, and there are main filling, main filling content, and these are the content course for the wall images, which get it for all the images, and render them using the blown images up, to get the resolution we wanted. There's one, deep here, is the add-add content listings. So that's one of these many through-the-web APIs. The package got blown up content listing, and it documented on that package.readme file in PyPy. It allows you to easy access on the content of the folder, that makes it easy to iterate all the items and render them. And then this is the magic word that makes fragment wall images work at default view of the content type. And it also works by setting that magic team, fragment plus wall images, adding that as a layer property of the content object, it will make that view work. A message catalog looks like any message catalogs in blown Python package. And I even use InternetStats that I wanted and used to extract those messages from my template, like I do normal Python package development. And this is the magic to add the custom permission there. So in manifest.tsd we can define new permission. We need to have source name to permission, long term of permission. We need to have short term in dexterity model to make that permission control can the user add that content types. And workflows content rules can be defined. And finally, I mentioned that there are bundles. So that measure.tsd comes without CSS actually. So I have the custom CSS for responsive layouts in that style CSS. And then it has very small jQuery code in script.tsd that actually activates that measure.tsd. Yeah, that's everything. Thank you.
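As a footnote to the fragment discussion above, here is a hedged sketch of what such a "companion script" can look like. collective.themefragments pairs a fragment template with a Python file of the same name; the calling convention and the use of the plone.app.contentlisting view are assumptions for illustration, not the add-on's exact contract.

```python
# fragments/wall_of_images.py -- hypothetical companion script for the
# wall_of_images fragment template.
def images(context):
    """Return a listing of the images in the current folder, using the
    plone.app.contentlisting view mentioned in the talk."""
    listing = context.restrictedTraverse('@@contentlisting')
    # Keyword arguments are assumed to be forwarded to the catalog query.
    return listing(portal_type='Image', sort_on='getObjPositionInParent')
```

Setting the folder's layout property to the fragment's view name (the exact "magic word" is defined by the add-on, so check its documentation) then makes the fragment act as the default view of the content type.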
Plone ships with built-in batteries for building sophisticated content management solutions without writing a single line of new Python code. We present how to use these features to customize content types, workflows, permissions and user interface elements directly in your custom theme. We also show how to deploy all these new features instantly, without running buildout nor restarting instances.
10.5446/54903 (DOI)
So I'm Sally Kleinfeld. This is Matthew Wilkes, Alec Mitchell, and David Glick. You probably know them all anyway. And we're going to talk about a really interesting project that we're doing right now. All right. So first I'm going to talk about the program. Next slide. This program is called the Next Generation Social Sciences, also known as NGS2. It's a big program funded by DARPA, which stands for... Closer? Okay. Thanks. Thanks for the reminder. Which stands for the Defense Advanced Research Projects Agency. It is, in fact, an agency of the U.S. Department of Defense. It was originally ARPA, which was created in response to SPUTNIC back in 1958. We have to develop technologies and stay leading edge. And it was the agency that created the ARPA net, the precursor to the internet, the first network to implement TCPIP. So DARPA slash ARPA has had a long history of developing leading edge technologies in the United States and really worldwide. So I offer you a quote about the basic aim of the NGS2 program. Basically the desire is to scale up social science modeling and the experimental methodologies that are available so that you can potentially include thousands of people in experiments instead of the typically very small sample sizes that social scientists are often dealing with. The goal is to identify the primary drivers of social cooperation, social instability, social resilience. So to study what unifies individuals and what causes communities to break down, aka also known as probing the sources of social unrest. Thus the interest that DARPA has in it. So there are certainly benefits for this kind of scaling up these kinds of experiments in all sorts of fields, public health, economics, but also national security. It certainly is a useful thing to be able to more clearly understand terrorist communities and et cetera. All right so the initial focus of this program, they decided that they should do a sample, narrow in on one particular problem area, it's rather a large area, and that is to identify causal mechanisms of collective identity formation. So what drives the emergence or collapse of collective identities in humans? What unifies individuals, what causes communities to break down? And this was to be used as just an example that can be used to sort of take off with studying other complex problems like resilience in social networks or changes in cultural norms, all sorts of interesting problems that people like to study. Next slide. So now I'll talk a bit about Dalinger specifically, the system we're working on. Next slide. So one of these cooperative agreements, it was a big competition for soliciting work on this problem area, and a team led by the University of California Berkeley's computational cognitive science laboratory was one of four teams that were awarded cooperative agreements for the NGS2 program. So it's a very large grant, $5 million or something big. I don't even remember. So we are developing, the team is developing Dalinger, which is a software platform for laboratory automation and the social sciences and behavioral sciences. So a little bit more about Dalinger. It's a platform for crowdsourcing experiments. The idea is to be able to abstract the experiment into a single function call that can be inserted into higher order algorithms like you might want to progressively refine your experiment. The experiment itself is captured in software. You want to be able to refine the experiment based on the results. So these kinds of algorithms, this is the goal. 
We're not there yet, but this is the long-term goal of Dalinger. And in order to do that sort of thing, the whole process of doing the experiment is automated. So the system recruits participants, in our case right now, on Mechanical Turk, although the idea is to be able to have other recruitment platforms in the future. Obtains their informed consent, arranges the participants into a network, runs the experiment. The Heroku is where the actual, it's a web application essentially that the experiment is sort of, please, you guys, interrupt me when I say something that's wrong. You haven't yet. So run the experiment, coordinate the communication among all these users, record the data that's produced, pay the participants, because on Mechanical Turk you sign up to do hourly work or minute-by-minute work and you're paid by the time that you spend. So it pays the participants automatically, recruits new batches of participants, you know, contingent on the structure of the experiment, and validates and manages the resulting data. So it does a lot. How does it work? Basically, and we can, I'm just, I'm talking really fast and I apologize for doing so, but I wanted to just push through a very high-level explanation. These are all things that you can ask questions, we're going to try to leave a lot of time for questions and discussions. So the experiments themselves are modeled as directed graphs. You can sort of think of the experiments as if they were flown add-ons, not literally the same technology, but the same idea where you have this system and you can add on something else and run it. So the system, Dallinger, runs experiments. The idea is for researchers to create their own experiments, although Dallinger comes with a set of, I don't know, 20 or so sample experiments, each of which runs a sort of classic social science, you know, like the experiments are known by their bibliographic references, like Bartlett 19, whatever it was. It was one of Bartlett 1932, it was one of the experiments. So that's the basic idea. And basically all the teams, the four teams or the five teams that were funded by DARPA, coalesced on using a public goods game to pursue the research. Public goods as in there are, somebody helped me explain this, there's a set of stuff like food in the game or whatever, and you have some sort of group of people either in communities or not or whatever, sort of using and generating the public goods. Dallinger, however, is the only team. Dallinger, the team at UC Berkeley, entered the competition for this and GS2 program kind of ahead of everyone else because they had already started creating a platform named something different, but they had created a lot of the infrastructure to do this kind of automation before. So they were far enough ahead that they were actually able to develop a real time multiplayer game to do the experimental work in Dallinger and that is called Grid Universe. All right, so I'm in charge of demoing this thing, so you're going to get to possibly see a live demo. So this is what, if you were running an experiment from Python, this is what the call would look like and you can run this sort of thing in a loop with different configuration parameters. Oh yeah, let me expand this here. So the experiment in this case is called Grid Universe. We're pulling in three participants, though that's not actually being used here. 
There's a text configuration that provides configuration for the experiment and that sort of defines the public goods game, exactly what the... this is kind of a Pac-Man-esque game where you run around trying to collect little pellets of food, and the public goods aspect is about you can share that food with other players, share the points you gained from that food with other players or with the group of players that you're with, and you can change which group you belong to as you play it. You can plant more food as well and then you have to wait for it to mature, so you're not going to do it if there's lots of people from another team near you. And also things like when you move, it can leave walls behind you, like a kind of snake-like version of the game. So there are a bunch of different variations of the game you can create by setting configuration parameters. This is the configuration we're going to use for the demo. This is like a basic demo run that just says, you know, import the experiment which is a Python module, run it, take that data, analyze it, print out the results. I'm going to go into a terminal which hopefully you can see and run this experiment. This is, I'm going to run it with two ChromeDriver-based bots that are going to compete against me doing this thing. So a HIT is, like, Mechanical Turk lingo for when a user accepts a job, essentially. So here are the instructions for the game. Those are based on the configuration of the game here. So I'm going to say I'm going to begin this. And here I am in the game competing against a bot. For some reason only one of these bots showed up. I'm running around. I'm going to change my color because I don't want to be part of his team. I want to be my own team. And then I'm going to yell at him and go back to moving around. Oh, somebody else is about to join. ChromeDriver was slow. Which color are you? I'm the red one, but now I'm a blue one. So the reason for having bots in this is so that you can write a sort of Python function that defines how you think people will behave in your experiment. And then you can run it with these bots. You can compare it to people in the real world and see, was my idea of how people behave accurate, or do people actually go a different way? Are they more effective than the bots or not? The goal of the DARPA program is to be able to scale up experiments to tens of thousands of users playing the game simultaneously with mixtures of bots and humans or all bots or all humans. Right. And some of these experiments are chained together where you have a group of people participating and the results of that become the input for the next group of people. So there are all of these different network structures for experiments. At the end, they always ask some quiz about the experiment. This is the most basic one, but there are some specific sort of graphical questions they ask about how you feel about your group that you were part of during this experiment. I'm going to finish this up and then I can go back to my terminal here. Yeah, the survey is one of the fundamental, like, deliverables for the program. There has to be a survey. There has to be a measure of, like, what's it called, the DIFI, the academic lingo for the measure of the group's cohesion. So during that experiment, every action that was taken by the players, me and the bots, was recorded in a Postgres database as JSON data. 
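The "basic demo run" described here (import the experiment, run it, analyze, print) might look roughly like the sketch below. The function names, config keys and return values are placeholders standing in for the real experiment module, not the actual Dallinger or Grid Universe API.

```python
# A minimal sketch of the kind of driver script described above. Everything
# named here (run_experiment, analyze, the config keys) is a placeholder.
import json


def run_experiment(config):
    # Stand-in for the real call that recruits participants and runs the game.
    return {"events": [], "participants": config["max_participants"]}


def analyze(data):
    # Stand-in for the real analysis step.
    return {"participants": data["participants"], "events_recorded": len(data["events"])}


config = {"max_participants": 3, "time_per_round": 300}  # placeholder keys
results = analyze(run_experiment(config))
print(json.dumps(results, indent=2))
```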
I can export everything that happened into a zip file and then I can, if I want to, replay that experiment with a simple command line command. So there's this Dallinger command line that the experimenters use to load up experiments, run them, deploy them to Heroku, recruit people. In a second, we'll get a browser window. Just to interject here, another goal is to kind of integrate the system with the open source framework, which is a place for scientists and all kinds... Open Science. The Open Science Framework. Sorry. Thank you. To upload their experimental code as well as their experimental results so that their publications are backed up by public data. So you notice the replay is doing the same things that Alec was doing before. Right. So he replays the chats that were made, the color changes, everything about the game is available in the database to be replayed by the experimenter to see exactly what happened during the course of this. Or analyzed. And then because the experiment itself is a function call, this function call, if it weren't in debug mode, this function call would actually create servers on Heroku, create an application on Heroku with, you know, some potentially large number of servers, go out to AWS, Mechanical Turk, recruit people, run the experiment with potentially hundreds of people, collect the data, analyze the data. And then you could run this in a loop. It takes the results of that data analysis. It's like a genetic learning algorithm that you can parameterize. Look at the output data, change the configuration and code for the next experiment and then run it again. So potentially you could have this sort of thing run in a loop over the course of a week recruiting hundreds of participants at a time doing your experiment over and over and over across multiple groups of people with small tweaks to parameters. So as an example of why that might be helpful: I mentioned earlier that you can plant food, but in some circumstances you might not want to because people will swoop in and take the food that you've planted. With this you can tell the evolutionary algorithm, I want you to vary how long it takes for food to mature and then optimize it so that people try to plant food as much as possible. And it will start picking numbers and find out the function that describes how likely people are to plant food and defend it depending on how long it takes to grow and mature and be available to harvest. So that gives you the basic idea, the sort of thing that this particular experiment does that Dallinger is using to investigate. So Dallinger's grant covers two things. One is to create this generic experimental framework for doing experiments, but also to invent an experiment that's going to measure these kinds of questions about collective identity formation. So the Grid Universe game is what's going to be used for that latter thing, for doing those measurements. And there's just a link to the repository where all this stuff lives. So I'm just going to say a few words about what Jazkarta is doing as part of this. So we were awarded a five year contract to help build this system. Why us? Because we have expertise in the technologies that are used and that's like a whole alphabet soup of technologies which are up there. It's obviously not just Python, but a lot of things. Also because we have expertise in project management, classically not something that, I mean universities have people who know all the bits and pieces, but they don't typically run large projects. 
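The outer loop described here, varying a parameter like food maturation time and optimizing for how much people plant, could be sketched roughly as below. This is a naive parameter sweep standing in for the genetic algorithm mentioned in the talk; run_experiment, the config key and the metric are all hypothetical.

```python
# Rough sketch of the outer optimization loop. The real system would deploy
# to Heroku and recruit on Mechanical Turk; here run_experiment is a stub.
def run_experiment(config):
    # Stand-in returning fake counts so the sketch is runnable.
    return {"plant_actions": config["food_maturation_time"] % 7, "total_actions": 100}


def planting_rate(results):
    """Fraction of recorded actions that were 'plant food' actions (made-up metric)."""
    return results.get("plant_actions", 0) / max(results.get("total_actions", 1), 1)


best = None
for maturation_seconds in (5, 15, 30, 60, 120):
    config = {"food_maturation_time": maturation_seconds}  # hypothetical key
    score = planting_rate(run_experiment(config))
    if best is None or score > best[1]:
        best = (maturation_seconds, score)

print("Maturation time that maximized planting:", best[0])
```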
So we bring that to them. And we also have, one of the big selling points we had was that we have expertise in running and being part of a mature open source project, an open source community, namely Plone. Because part of the goal of this is to make this not only just simply downloadable, but to create a whole community around Dallinger that can use it and foster the same sort of community that we have. Only not as cool, of course. So our process: we did a discovery meeting in the fall. We generated a whole bunch of user stories, estimated them using planning poker. And we're in the process of implementing them in a series of iterations. We've been working essentially since November, December. And we will continue on into next year. And then there'll be a pause and then there'll be phase two, assuming that they like the results of what they see, et cetera. So our team is the people you see here, plus two people who are absent. Carlos de la Guardia and Jesse Snyder are also part of the team. This is the Jazkarta team. Because it is an open source project, albeit a small one and a new one, there are also people at UC Berkeley and other organizations who contribute as well. It's not just us. Absolutely. Sorry, I should have made that clear. We're doing work, but the people at Berkeley are working right alongside us on the core software. And then there are other people at Berkeley and at Arizona State University and I don't know where else that are actually writing experiments, getting trained. Oh, and also another... Finding bugs. Finding bugs, yes. Another interesting part of the project is there's a visualization team as well. Like a data visualization team, which so far we've not worked very closely with. But I'm very excited and interested to see as we go on what kinds of visualizations they'll come up with. Okay, what I'm going to do now is to launch into a number of sort of question areas. And I'm going to stop talking so much and let everyone else talk more. We just thought of a number of sort of topics that you guys might be interested in and we'll talk a little bit about issues around these topics. And then we'll have hopefully a lot of time for your own questions. We have about a half hour left, guys. All right, so we brought a lot of experience with Plone to this project. And we had learned a bunch of lessons. So anybody want to kind of chat about some of these ideas? You know, when you use another framework that's not Plone and they've done something really silly and you think, well, I wish someone would learn from the mistakes we made, like the amount of over-engineering in portlets that we realized after we'd all converted to it that it was a really bad idea. And if someone had asked us and said, we're planning to do this, we could have said no, but nobody ever asks an existing open source community. But for this, we had an advantage. So we could just go and say, no, let's just not over-engineer that. A lot of the sort of plug-in architectures that we've got are really basic and we're just intending to throw them away and rebuild them if we ever need to because we haven't sort of heavily invested in them. Do you agree with that, David? You look dubious. Yeah, no, I think so. Yeah, and that relates to the breaking backwards compatibility as needed also. We're not afraid to do that, and to remove old ways of doing things. You guys can probably relate to that. Let's see, there's another set of bullets about this. Anybody else want to talk about it? 
Yeah, just that we sort of have the luxury of starting, it's not quite from scratch because there was this prior system that they had built that we have been using as a starting point and making changes to. But to a large extent, we're getting to decide how things will be set up. But we're doing that with all of our knowledge of Plone and it's not the same sort of, it's not a web framework, but there are these parallels, like the fact that in Plone, we build a site that runs on a framework and in Dallinger, you build an experiment that runs in a framework. So there's things that we know about what things to make reusable, and that has been valuable knowledge. And then in terms of process, which is what this next slide deals with, it's because this is an open source project and it's being used in a variety of different ways by a variety of scientists and hopefully even more in the future. Having pretty comprehensive test coverage has been important. Having continuous integration processes, well-defined processes for development, for merging features and releases and whatnot has been really important. And then we're still working through that to some extent, exactly how we want to do some of these production release decisions and breaking change decisions. We've certainly applied a lot of knowledge that we have from working on more mature projects to that process. Go ahead. It's a bit different from sort of a normal commercial project because usually on a commercial project you are the only developers and you use your expertise and you get your way. Whereas with this, we're trying to create a community and we're helping out, we're helping build it, but it has to be a community decision. So although there's not that many people at the moment, we're kind of acting like everyone is on the framework team. We have long, long discussions, kind of like framework team phone calls. But over time, the community will get bigger and our role in that will become less. Right. So we're going to move on to the next topic, but before we do, anybody have questions or comments they want to make on the sort of lessons learned from the Plone open source community, and generally questions about that? I guess you miss Joel, too. So do we all. So for anyone who joined the community after Joel stopped being active, which was quite a long time ago now, which is sad, Joel used to do lots of Plone trainings and he would come to the conferences with long lists of things that people found confusing that we'd never even thought of, like having separate file types for Word and Excel documents when there's no technical reason to do that. But if you are trying to learn a new system, you think, well, this isn't a file, this is a spreadsheet. I want to go to spreadsheet. That kind of insight from someone who spends all of their time talking to end users and knowing what they want was really helpful. And that's something we have with Dallinger because Jordan and the other people who are on the more social sciences side of things are using this as their day job and we communicate with them a lot, but I miss it in Plone. If we had someone who was doing full time Plone training, I think we would be going a lot faster with good user experience. Good point. Let's move on to the more tech, the tech stack. So the question I'll throw out here for the assembled guys is what are the biggest similarities and differences between Plone and Dallinger in terms of the tech stack? 
I mean, it's got a component of it that is serving web pages. There's a Flask app that gets deployed to, well, to Heroku when you're running the real experiment, or just runs locally if you're debugging. And that serves up the web pages that the user interacts with. But it is a Flask application rather than Zope, so it's a relatively simple framework which... That decision was made by others before us, by the way. The Flask decision. Well, yeah, but I mean, we would have changed it if it didn't make sense. And I think that it serves the needs, and I think we were a little bit dubious about whether it would be extensible enough for some of the things that we want to do with it. So we haven't actually run into problems using Flask, I don't think. In terms of database, things go into Postgres through SQLAlchemy instead of ZODB; they're both good databases. Well, interestingly, we've been using more and more of the JSON stuff in Postgres because not being tied to a schema at development time is really helpful, especially for games like Grid Universe where you want to support lots of different options. It's hard to predict all the different things that one particular experiment might want to record: different events that might happen, different questions that might get asked in the questionnaire. So it's easiest just to have a questionnaire answers column that is JSON, as opposed to providing a way for the experiment to configure its own schema for the database. Yeah, in the past, originally the schema had quite a few sort of arbitrary property fields that experiments could use in whatever way they felt like, property one, property two, property three. Because you don't really want every experiment to extend the schema of the database, because then it becomes difficult to run multiple experiments off of similar databases and do analysis in a way that will work across experiments. Recently we've moved that sort of property mechanism and other things into JSONB columns, which allows experiments to have sort of arbitrary custom data that's indexable and searchable. And it seems to be working really well and that's what allows things like that full replay of an experiment like Grid Universe where there's a lot of stuff happening in terms of movement and actions that's recorded and can be replayed. One big difference is the fact that this is a real time system, it's not just HTTP requests going back and forth; we're using WebSockets. So you saw the demo: whenever Alec is moving or sending a chat message or changing color, that is a message that gets sent to the server over a WebSocket that gets relayed to all the other clients as soon as possible so they can display what's happening to other users. So in terms of the tech stack, that means we need to be running a server that handles WebSockets and handles asynchronous communication. So we're actually doing that using Gunicorn, which is a WSGI server; you can configure it to use gevent and then it will handle things using an event loop. Do you have any special questions about the tech stack? So this alphabet soup, any questions about any of those things? One thing in terms of deployments, currently it's tied to Heroku, but the plan is long term to allow deploying to different architectures, sort of university internal architectures. A lot of these universities have their own cloud stacks or shared clouds across universities. 
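A small sketch of the JSONB approach described here, using SQLAlchemy with PostgreSQL. The table and column names are illustrative rather than Dallinger's actual schema; the point is that arbitrary per-event data can live in one JSONB column and still be queried.

```python
# Illustrative SQLAlchemy model with a JSONB "details" column for arbitrary
# per-event data (chat messages, moves, questionnaire answers, ...).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Info(Base):
    __tablename__ = "info"
    id = Column(Integer, primary_key=True)
    type = Column(String, nullable=False)   # e.g. "chat", "move", "plant"
    details = Column(JSONB, default=dict)   # arbitrary payload, still indexable


engine = create_engine("postgresql://localhost/experiment_demo")  # placeholder DSN
Session = sessionmaker(bind=engine)

# Querying inside the JSON payload remains possible, for example:
#   session.query(Info).filter(Info.details["color"].astext == "blue")
```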
There are a bunch of different kinds of recruitment mechanisms that social scientists use. Mechanical Turk is a big one, but there are things like CrowdFlower and some others. That's PlanOut, the Facebook one? Facebook's one? That's, what is that one? PlanOut? It's a Facebook API for... it's for crowdsourced experiments or something, I don't know. It's on the horizon, we haven't used it yet. The project lead is integrating this Facebook API for something and we're not sure exactly what it is, but it seems interesting. But in general, we're slowly making different aspects of the system pluggable, from the recruitment process, where we have a few options but want to add some more services, to eventually the deployment options. In the end, the idea is that everything about the system can be configured in a simple sort of text INI file, and then the experimenter uses a command line to launch. There'll be a number of canned experiments; there are already a number of experiments that scientists have written separately, demos that come with the project, separate non-demos like Grid Universe that are sort of practical open source experiments that anybody can use and run. We're looking into creating ways to integrate existing JavaScript experiment platforms into the experiment building framework, which is currently Flask based, and we have our own kind of set of JavaScript tools for that, but there are some interesting frameworks out there for doing experiments using JavaScript that we're going to be integrating over time. So there's a lot more to do here in future phases. It's an interesting kind of space to be in because a lot of these university departments have taken open source kind of to heart, but they're not programmers, they're social scientists, and they write the kind of code that scientists write. So although there's lots of things out there that people are used to and that do specific things that we might need to do, those tools don't fit in with a high quality piece of open source software. So we have the difficulty of kind of moving people slowly from something they're used to into something that's reliable. I have two questions on the tech stack. The one is you talked about pluggability, and so one question is don't you miss the ZCA and how do you work without that? And the other question is I heard a lot about dynamic schemas and using JSON formats for that. So how do you handle evolving schemas as experiments evolve and how do you do cross experiment analysis then if you have several flavors of the same experiment? I'll take the first question. I haven't really missed the ZCA, but the scope of writing a particular experiment is basically writing some code that runs in your browser and writing some back end views. It's not necessarily needing to change how the system as a whole works the way that we often do with Plone, so that's probably why I haven't. There are some pieces that are inspired a little bit by that. There's the idea of a recruiter, so you can have a Mechanical Turk recruiter or you can have the hot air recruiter that just prints out the fact, oh hey, I'm recruiting somebody, but doesn't actually do anything. So that is basically a utility where you can configure which one to use. So there's some level of dependency injection happening. 
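The recruiter idea mentioned at the end can be sketched as a plain-Python lookup of a class by a configured name. Class names, the config key and the registry below are illustrative; the real package may wire this up differently.

```python
# Sketch of the pluggable-recruiter idea: pick a recruiter class by name
# from configuration. Names and keys here are made up for illustration.
class Recruiter:
    def recruit(self, n):
        raise NotImplementedError


class HotAirRecruiter(Recruiter):
    """Just prints what it would do; handy for local debugging."""
    def recruit(self, n):
        print("Pretending to recruit %d participants" % n)


class MTurkRecruiter(Recruiter):
    """Would create HITs on Mechanical Turk in a real deployment."""
    def recruit(self, n):
        print("Would create a HIT for %d workers (stubbed out here)" % n)


RECRUITERS = {"hotair": HotAirRecruiter, "mturk": MTurkRecruiter}


def get_recruiter(config):
    return RECRUITERS[config.get("recruiter", "hotair")]()


get_recruiter({"recruiter": "hotair"}).recruit(3)
```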
We are using the setuptools entry point support so we can reference classes in packages and get them by name and by type, which is kind of like a named utility lookup, but it's sort of a first class Python thing that people are a bit more familiar with and has less boilerplate to set up, so it's a couple of extra lines in your setup.py. And as David says, generally speaking these are spun up once on a Heroku server. You don't have to deal with having a big install with 20 different things and configuring which ones you want in a particular experiment run. I suspect if we ever get to the point where we want to allow third party packages to add their own recruiters or their own infrastructure deployment mechanisms or things like that, that we may need to look into having some global registry. And it might just end up being the setuptools entry points like we used for experiment registration. But that's something we haven't needed to think about yet right now. Those aspects are all encapsulated within the package, but maybe at some point we'll need to allow having external packages add recruiters or deployment mechanisms. Can we speak to his other question about repeating experiments if the schema is evolving? Well I mean that's the, so do you mean in terms of, like, because we've changed the database schema away from this sort of property based thing to more JSONB, that means, because it was early on in the process, you know, we don't expect people to be comparing experiments run across those two versions of the database. They're sort of on their own with that. But now that we've... That's the get rid of backward compatibility part. Right. But now that we've moved to using JSONB, like a sort of generic column for question responses, a sort of generic column for information from an event in the experiment, it becomes a lot easier to compare those things. If somebody makes a change to their own JSON representation of experiment data, then it's on them to figure out how to compare those things across different runs, but they probably shouldn't be doing that if they want to compare it. So scientists are a lot more cautious when it comes to what exactly was the version of the software. So when we upload to the Open Science Framework, we upload a zip of the actual deployment that ran on Heroku. And I think a lot of scientists would be nervous about the idea of opening up a dataset created with one version of the experiment with a different one that potentially has subtle differences in exactly how things work. So I suspect if we ever did have to compare people in two different versions, we would have to have something like the analyze function we have in Grid Universe that pulls out the data for that specific version. And then we would say, well, this was one with version one and this was one with version 1.4. So we have to use the version one and the 1.4 analysis tools and then compare the analyzed data. Right. Yeah. Yeah. So these experiments, once they are run and uploaded to the Open Science Framework or similar, if somebody else wants to run them, they pull that down and they would get the exact same set of code. And that includes your requirements file that says, you know, install this version of the software. Right. So in theory, if you were running the same experiment, you wouldn't have a schema change ever. If you do have a schema change, then you're effectively running a different experiment. 
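The setuptools entry point mechanism described here looks roughly like the sketch below: a registration in setup.py plus a lookup by group name at runtime. The group name "dallinger.experiments" and the module paths are assumptions for illustration.

```python
# Sketch of entry-point registration and lookup. The group name and the
# referenced module/class are illustrative.

# In the experiment package's setup.py:
#
#   setup(
#       name="my-experiment",
#       entry_points={
#           "dallinger.experiments": [
#               "my_experiment = my_experiment.experiment:MyExperiment",
#           ],
#       },
#   )

# Looking the registered classes up by group, similar to a named utility lookup:
import pkg_resources


def load_experiments():
    return {
        entry_point.name: entry_point.load()
        for entry_point in pkg_resources.iter_entry_points("dallinger.experiments")
    }
```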
And we might have problems with that in the future when APIs change or get deprecated, but that's a problem for the future. Yeah, really. So we have about 10 minutes left. So I'm going to suggest that we kind of breeze through the next couple of sections. So we have time for the fun technical challenges. Sure. I mean, just to say some of the things that we've done, like we're trying to support people who aren't, you know, full-time developers in getting started with building an experiment. We've drawn inspiration from the tools that are provided for full-time developers, you know, especially in a lot of JavaScript frameworks these days, you get the tool, you run a command to generate a skeleton. We've had something like that in Plone for years. So we have a cookiecutter template to create a starter experiment. We have some base templates that are, you know, like the first page of the experiment, the consent page, the instructions, the actual experiment, the questionnaire are all pages that you can either, you know, inherit the default or override to fill in your parts. And there's a JavaScript library that takes care of a lot of the communication between the front end and back end. So you can just make a call to that as opposed to thinking about what the API endpoints are. And then just a little bit about the Dallinger users, that is the experimenters writing experiments. Documentation is obviously important. It started out sketchy. It's getting better. It needs to get better still, I think. There's a Slack channel that we have. There's a general channel where we work on, where we have the development kind of discussions. And there's a help channel where new users can come ask questions. And all of the Jazkarta and the combined teams from Arizona State and Berkeley and everybody pitch in to help users in that channel. Anybody want to say something about local debugging? Local debugging is obviously important so that you don't have to go to the overhead in time of actually running something on Mechanical Turk and Heroku every time. Right. Also, we have this pain with Plone where you have to restart the server. And every few years people come up with things like plone.reload and there was roadrunner for a while for doing tests and things to shorten that loop. And we have exactly the same problem with Dallinger because we have to package up an experiment and then run it locally. So we've been looking at lots of tools. Try to make sure that as you're making changes, you can see them immediately without needing to restart the experiment. So let's move on to the question of code quality. So we've learned a lot from our experience in the Plone community obviously about testing and QA and that sort of thing. So we don't have 100% test coverage but we're progressing in that direction and we have automated tests that run on Travis and will break if we decrease our code coverage. And we also try to be diligent about code review, making sure that things go through a pull request process and get looked at by somebody else before getting merged. Okay, fun technical challenges. Let's talk about some of them. So yeah, we have a short list here. We can talk about that and if anybody wants to talk about something else, wave your arms and we'll take, okay. So maybe we should take some questions before we dive off here. Yeah, we just have a few minutes left. There were some questions. 
I just have one more question which is mainly, which is only partly technical, but that is about participant selection. So if you want to do a large social science experiment, how do you ensure that you recruit the participants in such a way that they are a representative sample of the population you wish to examine? And if you start by, if you, for instance, if you recruit through Mechanical Turk, then your sample will already be skewed because you are recruiting amongst Mechanical Turk workers, so how do you compensate for that? Or do you need to compensate for that? That's basically a question for the social sciences. If you know something about Mechanical Turk, there are some criteria that you can choose, but of course that's a skewed sample of humanity. So yeah, I don't know. That's a question for each experimenter and each experiment, how they handle those kinds of issues. This is where repeatability is really something nice that we have, because we have this pluggable recruitment system. You can use a recruiter that gives you a URL and you give it to people in a laboratory. So you could run an experiment on Mechanical Turk with a thousand people, draw some conclusions and then rerun smaller scale things in a lab or with other sources of participants to verify that your conclusions were accurate despite the fact it was a skewed sample of people. And also, I mean, it's worth comparing this to the status quo, which is that they typically only do that sort of local, very small study. So this is at least something that you can compare to. Yeah, right. Generally, social science experiments are run with university graduate students. So we've got, there's a lot more possibilities at least. The Mechanical Turk sample is generally broader than, like, the set of students who are willing to participate in a social experiment on a university campus. But there are filters that you can use when you're running a Mechanical Turk experiment. They can be set up in the configuration. Such as by region or what browser support you have, things like that. Yeah, I think age they can filter on too. I mean, I think there are some demographic things they're not allowed to filter on and demographic things that they are allowed to filter on. And then, like I mentioned, there are other tools that are more specifically oriented towards recruiting for social science, like CrowdFlower. There's, like, two other ones besides Mechanical Turk that allow much more refined ways to limit or define your sample. And eventually, we're going to be trying to integrate with those services. So that's the importance of having pluggable recruiters, to be able to address those kinds of questions. I think you had a question also? I think along his line, the filtering issue is how much control do the scientists have and is that part of the development as this continues? But all my other questions would have to go to Berkeley. Yeah, right. I actually worked with Dr. James Grier Miller who created the very first behavioral science department in Michigan State. Cool. And we did a lot of work together. Tom Griffiths is the name of the PI on this at the Cognitive Science Lab. Yeah, because I'm more interested in his falsification procedures. We could put you in touch with the source of all that kind of thing. But for sure, the answer to your sort of broad question is... Because there are different cognitive functions operating in the method that you're using against what they're testing as the tentative hypothesis. That's a whole other world. 
Yeah, that's the social sciences world. Our world is to make sure you've got a good pluggable system for recruiting so that people can control how they do that. Time's up. All right. That was a real fascinating talk as far as I'm concerned. So thank you very much. Thanks. Thanks everybody for questioning. Thank you.
A group of social scientists at the University of California Berkeley received a large grant last year to develop tools for rigorous social science research, initially focused on collective identity formation. Jazkarta has been helping them develop Dallinger, a tool to automate experiments that use large numbers of subjects recruited on platforms like Mechanical Turk. They chose Jazkarta because of our web development and project management expertise, but also because of our familiarity with large, open source software projects - which is a goal for Dallinger. Join members of the Jazkarta team (David Glick, Alec Mitchell, Matthew Wilkes, and me) to hear about how we've put the lessons of Plone to work setting up this new open source project. We'll leave plenty of time for Q&A and can also describe how the technology stack (Python, Redis, Web Sockets, Heroku, AWS/Mechanical Turk/boto, Flask, PostgreSQL/SQLAlchemy, Gunicorn, Pytest, gevent) has been working for us.
10.5446/54904 (DOI)
I'm Manuel Reinhardt, I work for syslab.com. I want to talk about Solr and how to use it with Plone. I'm assuming that at least some of you have heard of Solr. I'll briefly say what it actually is. It's based on the Lucene indexing and search engine, which is written in Java. Solr adds to that an HTTP interface. You can get XML and JSON over HTTP. It also adds some features like caching, replication. It has a nice web admin interface. And it basically makes a high performance search server out of that Lucene engine. It's made by Apache. So it's available as open source under an Apache license. So if you know a bit of Java, you can also hack around in the code. I said I wanted to talk about Solr and Plone. Why would I even want to connect those two? One big argument is speed. Solr is just terrifyingly fast when it comes to indexing and search. The portal catalog is also fast. You can also search. But it has its limitations, especially if you have a lot of content in your ZODB. You have a lot of text in your searchable text. Then at some point, it gets a bit slow. But speed is not the only thing. You also have some great features that Solr offers. Things like faceted search, hit highlighting, a complex query language, cross-site search, and so many more. I'm going to mention a few of those later on. So say you want to try it and do want to connect Solr to Plone. How do you go about it? There are several options. I want to mention three of them. One is collective.solr. Its approach is to index all indexes of all content, both in the portal catalog and in Solr, just by registering an additional indexer through collective.indexing. And then when you query the catalog, it checks whether the searchable text is a part of the query. And if so, it diverts the whole thing to Solr, and everything runs through Solr. And its response is used as the search results. There's also alm.solrindex. I can't say too much about this one. I haven't had too much experience with it. The approach is slightly different. It doesn't tamper so much with the catalog itself. But it implements a single index inside the catalog that acts as a Solr connector. So you don't store the indexing data somewhere in the ZODB or somewhere. But it constructs queries to Solr, and it updates the index there, and gets results from Solr. There's also Scorched, which is an even more low-level approach. It's basically a Python Solr API, which is more powerful. You have more options there, more things you can do. But it's also less convenient. You have to do more pedestrian work to get results that are usable in Plone. Built on top of that there is ploneintranet.search. That, as the name suggests, is used in Plone Intranet, or Quaive. It's based on Scorched. But it's tuned towards Plone or Plone Intranet. So it's more convenient to get stuff out of there, and you don't have to do as much yourself. But it still retains some of the great powers and features that the Solr query language has. All of these approaches give you access to some awesome features that Solr has. I only have time to mention a few. I encourage you to look online for what else Solr can do. One thing is boosting. A good search, I would say, is something that lets you find what you're looking for. Or in other words, which shows you stuff that is relevant to your search. And if you have multiple results, as you usually have, the most relevant results should, of course, be shown first so that you see them first. And Solr does that by default. 
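With the collective.solr approach described here, application code keeps using the ordinary portal_catalog API; because SearchableText is part of the query, the search is handed off to Solr behind the scenes (assuming collective.solr is installed and active). A minimal sketch:

```python
# Ordinary Plone catalog query; with collective.solr installed, queries that
# include SearchableText are dispatched to Solr transparently.
from Products.CMFCore.utils import getToolByName


def search_documents(context, text):
    catalog = getToolByName(context, "portal_catalog")
    return catalog(SearchableText=text, portal_type="Document", sort_on="effective")
```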
Results that come from Solr are by default sorted by a score that is usually calculated by things like how often the search term occurs in the searchable text of a document. But you can influence that score. You can specify, actually, what you think is relevant and what you think constitutes a high score. And that's just what boosting does. So you can, more or less, put arbitrary functions into Solr that it then uses to calculate the score. A very valuable real-life example is a date boost. For example, in a social intranet setting, maybe you've heard Alex's talk before mine, you're probably looking for stuff that has been added or edited recently, something that's active and that's not been lying around for ages. And to do that, you would want to assign a high score to a recent document and a lower score to an older one. I won't go into too much detail on that formula there, but you see that the modified date is taken and what it basically gives you is a nice curve that assigns a high score to a recent document and then it curves off nicely towards the older stuff. The next feature I want to, I almost say, to highlight is highlighting. You can see an example here. It's in German. Please bear with me. I think you can see the blue thing there; that, of course, is the title of the result, then comes the URL. And that thing there is a snippet from the searchable text. And the thing in square brackets is the search term that I searched for. So you don't only see what documents mention your search query, but you see the actual context, the bit of text that mentions the search term. It's basically what Google also does. If you type anything into Google, you get these small bits of text where you usually can already judge, yes, this sounds interesting, or no, this only mentions my search term in passing or it's just inside a boring list or something. So that's usually really helpful. And Solr can do that as well. It can deliver these snippets of text that contain the search term, with a bit of configuration. And you can configure how that's displayed. I chose to just put square brackets around them, but you can basically do arbitrary things there to highlight your search term. One warning I want to give there is if you're generating these snippets from the searchable text, which is a sensible thing to do, then it must be stored in Solr in full so that it can process it. So in portal catalog terms, it must be metadata. And that means it would be returned with every response that you get from Solr. So for example, for office documents or something that has a very large searchable text, that can get really big. So if you have hundreds or thousands of search results and deliver those searchable texts for all of them, that is just not going to work, because all of that is going over the network. And it's going to take much too long. But luckily, Solr lets you handle that rather easily. You can specify what fields to return. You just have to pay a bit of attention to what you're storing in Solr and what you want to return with every response and what you don't want. And finally, if you are setting up a Plone site for something like a larger company or something like a university, then usually there is more than just this Plone site that you're setting up. There may be other, even other Plone sites or other web servers, other document stores or whatever is there where people need to get information from. And the user might not always know where exactly is this information that I'm searching for. 
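The date-boost formula the talk alludes to is very close to the standard recency-boost recipe from the Solr documentation. A hedged example of passing such a boost to Solr is below; the field name "modified", the core name and the URL are assumptions about your setup.

```python
# Typical Solr recency boost: recip(ms(NOW,<datefield>),3.16e-11,1,1) gives a
# score multiplier near 1 for fresh documents that decays smoothly with age
# (3.16e-11 is roughly 1 divided by the number of milliseconds in a year).
import requests

params = {
    "q": "quarterly report",
    "defType": "edismax",
    "boost": "recip(ms(NOW,modified),3.16e-11,1,1)",  # multiplicative boost
    "wt": "json",
}
response = requests.get("http://localhost:8983/solr/plone/select", params=params)
print(response.json()["response"]["numFound"])
```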
Am I in the right place going to this Plone site? Well, they don't really need to care, because what you can do is index multiple sources into the same Solr. You can index other Plone sites with the approaches that I've mentioned, collective.solr, for example. You can even crawl arbitrary sites with Nutch, which is another tool from Apache, which works really well with Solr with just a bit of configuration. You can point it at some other website and it crawls it and indexes everything into your Solr. And what you do then to make everything available to be searched from your Plone search is to tweak. You tweak the query so that you search for all the paths and not just /Plone or whatever your Plone site is called, but for everything that Solr basically has to offer. And you tweak the results page that usually just links to your own Plone site, of course, because it assumes everything comes from there. And you do a little check where a result comes from and you put the right domain or host name in to get the external link right. And this is the same example from before. It doesn't only do highlighting, but also does the cross-site search. I hope you can see these different color icons here. These represent different sources of information. They have different host names here. So these are results that come from a Solr that has different sources indexed. And this is indicated to the user here with a little tweak of the results page. So do you get all this for free? Well, of course, there are some things you have to pay attention to. For one thing, everything goes over the network. Solr is an external component. You do HTTP queries. And you have to pay attention a little bit. It mostly comes down to what you save in index processing time versus the time the result spends on the network. And as I've said, Solr is really fast. So if you save enough time there in index processing, and you don't have too much time that the result needs to go over the network, then you've basically won. You've gained some time. So with a Solr in a local network, in the same data center or whatever, it should be fine. But as I said, you have to be careful that you don't send masses and masses of data. For example, exclude the searchable text from the response if it needs to be stored for highlighting. Another thing that comes from Solr being an external component is the transaction integrity. It's not always guaranteed that the transaction is atomic, because you have to communicate with the outside when you are doing stuff like re-indexing. There is basic transaction support in collective.indexing, and also in collective.solr. But in my experience, there are still situations where something goes wrong. Sometimes Solr does get out of sync. You have old stale data in there. Some update doesn't get through, that kind of thing. That's probably not an unsolvable problem. I expect there to be work done there in the future. But it's also not that often. So maybe it's OK in your use case to just re-index one or two items and forget about it. Another thing is that commits can be slow in Solr. Solr is optimized for search and for lookup of the indexes, and the trade-off that you have is that indexing can take a bit of time. So if you have some view that updates your content and you need to re-index, then you may wait a little while until the Solr index is updated and you can render the result and show it to your user. But what you can do about that, Solr has a feature for that as well, which is called asynchronous commits. 
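The results-page tweak for cross-site search amounts to mapping the indexed path prefix back to the host the result came from. A tiny sketch, with made-up prefixes and hostnames:

```python
# Map indexed path prefixes to external hosts so result links point at the
# right site. Prefixes and hostnames are placeholders.
SITE_HOSTS = {
    "/Plone": "https://www.example.org",
    "/intranet": "https://intranet.example.org",
    "/wiki": "https://wiki.example.org",  # e.g. a site crawled with Apache Nutch
}


def external_url(path_string):
    for prefix, host in SITE_HOSTS.items():
        if path_string.startswith(prefix):
            return host + path_string
    return path_string  # fall back to a relative link


print(external_url("/intranet/projects/report-2017"))
```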
So instead of waiting for the index to update, you basically just send your data to Solr. Solr says, OK, got it. And you can go on with rendering your view and showing it to your user. And in the background, Solr will update the index and do the commits and everything that it needs to do. But that, of course, means there is a small delay between your rendering of your view and the index being updated. So at the time you show something to the user, the Solr index might not yet be up to date. So you just need to be aware of that. For example, we had a calendar view that was generated from Solr, and that let you create an event. And after you created it, you got back to the calendar view, say the month view. And if you had asynchronous commits activated there, you just would not see your event. Because, yes, the data was sent to Solr, but what you were using to generate the calendar view was out of date, because it was still processing the update, and the event was just not there. That's not an unsolvable problem either. Maybe you just need to not use asynchronous commits in situations like this. Or you can do some trick. I mean, you know what event was just created, so maybe you just add that to the Solr data you're getting. So there are ways to solve this. So in summary, I've mentioned several methods of connecting Solr to Plone. I've only scratched the surface on the awesome features that Solr offers. And I've mentioned some manageable costs and risks, some stuff that you need to be aware of. So in conclusion, if you want to know, should I use Solr for my project, I'm tempted to just say yes. But I'll qualify that a little bit. If you subscribe to the philosophy that queries should be really blazingly fast, and indexing is OK if it's a little bit slow, then yes, you should definitely consider trying Solr, especially if you have a large site with loads and loads of data to index and to search. Or if one of the features that I mentioned, or that you can find online, appeals to you and gives value to the setup and the customer that you're working with, then yes, definitely do look into it. So that's all for now. As I said, there's a lot more to discover. Do check out the Solr user guide and wiki on the Apache websites. There's a good Solr training on the Plone training website. Or just find me, or one of my colleagues from syslab.com, or check out our websites as well. We do web solutions and intranets and risk assessment. And we've been using Solr for a while. So we're glad to help you get started or to hear stories about how you've been using Solr and discuss all that. Or if you have any questions right now, I'm available. Thank you very much. Thank you. Any questions? Thank you for the talk. I've been playing around with Solr myself for the last one and a half years. And what I've noticed is it really takes time, a lot of time, to really know what you're doing with getting to know this whole search engine and all the indexing. There are a lot of moving parts. And a lot of small things can go wrong. I followed the training. Can you give some advice on how to start with knowing this search server part? How did you get into this? That's rather tricky because it really depends on what you need for your use case. I thought that it was rather easy to get the very basic stuff running. And then it really depends. You can branch out and look at the query language, for example, and all the ands and ors and stuff that you can do there. Or you're more interested in highlighting or in boosting. 
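The trade-off between background commits and forcing an immediate commit can be illustrated with Solr's JSON update API directly. The core name, field names and URL below are assumptions about the setup; the commitWithin and commit parameters themselves are standard Solr.

```python
# Add a document with commitWithin so Solr commits in the background, or force
# an explicit commit when the very next view has to see the data.
import requests

SOLR_UPDATE = "http://localhost:8983/solr/plone/update"  # placeholder core/URL

doc = {"UID": "abc123", "Title": "My event", "portal_type": "Event"}

# Asynchronous-style: Solr acknowledges right away, commits within 5 seconds.
requests.post(SOLR_UPDATE, json=[doc], params={"commitWithin": 5000})

# When the calendar view must see the event immediately, force a commit now.
requests.post(SOLR_UPDATE, json={}, params={"commit": "true"})
```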
So it's really tricky to give general advice there, I think. But I found that the Solr wiki has a lot of information on almost all the topics that you can imagine. So I find myself there a lot of the time. And yeah, I think that's the most general thing that I can say right now. Yeah, maybe wait for the mic so that we have it. Yeah, I just recently used Solr, and I didn't know it before. But I used it in context with Django. But for there, I generated some integration tests where I actually had the same data every time and started the Solr instance and made the same search query and did some counting on the results to make sure that it was the same thing every time. So it didn't break the indexes and stuff like that. That's also one way to make sure that it works as intended. And a way to learn Solr while developing. Sounds good. Nice talk, very clear. I have a question about this asynchronous indexing. Can you turn it off and on by query? So for example, when you create your event in the calendar, you have synchronous indexing. And when you upload many, many files, you have asynchronous indexing, by specifying some request parameter to Solr. Not that easily as far as I know, because you have to have some commit strategy in place in the Solr instance itself if you don't do automatic commits but do them asynchronously. But if you have them configured and everything for the asynchronous commits, then I think you can also send an explicit commit-now command to Solr. That would be great. Yeah, because... Yeah, it can be handy. More questions? Maybe it's very basic. But when you make a search in Plone, you get the results for which you have permissions, right? How do you manage that with Solr? That depends a bit on the approach, whether you're using collective.solr or Scorched or something. But in the end, it's more or less the same. You have an allowed roles and users index in the portal catalog. And you do something like that in Solr as well. So it can just query, give me these results that these users are allowed to see. Is this native or do you have to implement it? You basically have to implement it like all the other indexes that you have. Yeah. How about unindexing stuff? Do you have any risk in that? Or does that just work? Unindexing, you mean removing from Solr? Yes. So any issues where you can have results available in Solr that are no longer existing in Plone? I remember having some trouble with that in the early stages. But I think by now, at least in collective.solr, it's implemented pretty well. So if you're deleting objects, then they're getting removed from Solr. But as I mentioned, on rare occasions, you have some junk lying around in Solr where there's no object anymore. But there's, for example, a sync view that synchronizes the portal catalog with Solr and that gets rid of these unwanted entries. Two additional answers to the two questions. The first thing, if you install Solr with collective.solr and the Solr recipe, you get all these indexes in Plone. So when Manuel says you have to implement it, it is implemented in Solr, but it is implemented for you. So everything is already in place and pre-configured. You don't have to do any coding anymore. And on Asko's question, normally if you delete something in Plone, it should be properly unindexed. But of course, it is an HTTP request. And if somebody, for example, restarts Solr in the moment when you delete an object in Plone, you run, of course, into a situation where it's not in sync anymore. 
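The permission filtering discussed here follows the same convention as Plone's allowedRolesAndUsers catalog index: store the roles, "user:<id>" and "user:<group>" tokens that may view an object, and filter queries on them. A sketch of building such a filter query string; the field name and token format are the usual Plone convention but treat them as assumptions about your Solr schema.

```python
# Build a Solr filter query restricting results to what the current user may see.
def security_filter(user_id, roles, groups):
    tokens = ["Anonymous"] + list(roles) + ["user:%s" % user_id]
    tokens += ["user:%s" % group for group in groups]
    quoted = " OR ".join('"%s"' % token for token in tokens)
    return "allowedRolesAndUsers:(%s)" % quoted


# Passed as an fq parameter alongside the main query, e.g.:
print(security_filter("jdoe", ["Member", "Authenticated"], ["staff"]))
```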
And that's where we have this Solr maintenance browser view, which is great. You can re-index stuff repeatedly. You can run the sync, which really makes sure that everything is in place and so on. And it's quite quick, even. Yeah, thanks, Alex. I have two questions. The first thing is you mentioned the collective recipe for Solr, which was, as far as I know, bound to an old version of Solr. So I think it was the 5.1 something or so, I think. And Solr actually is now at the 7.0 something version. So has there been an upgrade path? I know with the change in Solr 5.3, they've completely gotten rid of the old install methods and everything like that. And the other question is, with the newer versions, like 5.3 and so on, they introduced replication, SolrCloud, and everything so that you can replicate your Solr instances over several servers. So is there anything that we can use now for that with Plone so that you have a high availability setup that even makes it quicker for you? I have not followed the recipe very closely, but I had the impression that a lot has happened there. Some upgrades are there. I'm not sure about the current state, what versions are supported. But upgrading Solr itself, I found it pretty easy. We upgraded, I think, from 3 to 5 recently, and that was pretty painless. I don't think we've used anything from the recipe itself, but native Solr upgrade methods or something. And about replication and SolrCloud, that's definitely on my list. I haven't looked into that yet, I'm afraid. So I can't say anything about it right now. But that does sound exciting. Yeah. Can I chip into that question from my friendly neighbor here over here? Because I've been looking into that. I've been following the recipe and collective.solr a bit more closely over the last half year. And we have a bit of a chicken and egg problem here. Many people start using Solr a bit, with the recipe. And the recipe is, at the moment, supporting an old version of Solr, version 4, and version 7 has just been released. But it's a bit of a community chicken and egg problem. As soon as you start using Solr and you get more experience, then you want to do more. You want to upgrade it. And then also larger companies will start to use configuration management and install their own Solr instance using another system. And they won't use the recipe instance anymore. So we're a bit stuck at the moment in the Plone community in that we have a nice starter recipe to install a Solr server, but it's very old. And many of the advanced users go away from the recipe instance to set up Solr because they have their own advanced setup method. So this is a bit of an issue we have. We might have to talk about it in an open space or maybe at a sprint. OK, so maybe someone feels inspired right now. If you want to contribute to the recipe, please check it out. Please do. Have you used it in the same context with RelStorage? I haven't done that. But I wouldn't know why not. That should work. I was just curious if there are some strange issues if you use them together. No, I won't make any promises, but that sounds totally feasible. Yeah. I don't have any experience. I'm doing multi-site search with Solr and Plone. Like, multiple Plone sites going to one Solr instance. It sounded like collective.solr is made mainly for one Plone instance at a time. Yeah, you have to do... We do have multiple Plone instances in one Solr core, even distributed over different Solr servers. 
Yeah, in my experience, you have to do some tweaks. So maybe those can even be backported at some point into collective.solr to support that a little better. OK, if we had multiple sites going to one Solr, would those maintenance views still work, or would they kind of break the catalog? Yeah, if you have multiple sites in one Solr, like Alex and I were saying, would we still be able to use those maintenance views? Yeah. I think the only thing you need to do is make sure that what you index can still be recognized as belonging to a particular site. Also, you can usually see in the indexed data which site an entry belongs to, so whoever runs the sync or reindex can see it. I think what you are thinking about is, you sync, and it will see what is not in your local portal catalog and then throw everything away in Solr, right? All the other sites. I think there's a danger that that might happen, yes. But I don't know. I think we're definitely taking a look. That's a good point. Yeah, you have to be careful there. OK, so we've had many interesting questions. Let's thank the speaker, please. Thank you. Thank you.
Integration of Apache Solr into Plone is well-established for powering fast, scalable full-text search with highly relevant results. The many amazing features of Solr like faceting, hit highlighting and suggestions add real value for the user. Even when search is not explicitly the given task, Solr can be extremely helpful as a portal_catalog replacement to filter content for speeding up complex views and generating overview pages and navigation. In this talk I will relate my experiences and outline the ways in which we use Solr with Plone at Syslab.com. I will highlight the benefits of Solr as well as report some caveats and lessons learned over the years. Previous experience with Solr will help you get the most out of hearing this talk but is not a requirement.
10.5446/54906 (DOI)
.. Welcome to the second talk today here. We have Ramon from Germany and he will have a talk about the JSON API and the add-on that he wrote and how to use this for modern web applications. Please have a round of applause.. Thank you very much and welcome everybody. Thank you for being here and listening to my talk. My name is Ramon. My company is called Riding Bytes. Since we have another Ramon here, maybe some of you know me from the IRC. I am ramonski. The other one is bloodbare. Not to get confused. I have been a developer since 2004. The name of my talk is Using a Plone JSON API to interface modern web applications. This talk was actually planned as a demo. I wanted to show you just this little web application. But then I was asked to extend the talk. You have to go with me back in time. I will start with the back end and introduce you to plone.jsonapi.core and plone.jsonapi.routes, which was a JSON interface from before Plone REST API came out. Then I am going over to the demo, the web application I wrote for you. plone.jsonapi.core: everything started in 2012. Have you ever dreamed to have an application that communicates with Plone like a fancy app on your phone or tablet? Or maybe you were playing around with one of these modern web frameworks that were out there and wanted to have them interfaced with Plone. Then probably you felt like me a couple of years ago, where I was fascinated by the frameworks that came out in those days and the modern web applications that could be built on top of JSON interfaces. This was 2012. That was some time ago, a long time in technical terms. If I can show you, in those days Plone 4 was out, which had only one interface, something called the Web Services API for Plone. This was in 2011 or at the end of that year. That was an XML-RPC interface in those days. That was the only possibility to get the contents from Plone in some kind of neutral way. I was already playing around in those days with Flask, and this is the PyPI page for Flask. As you can see, Flask is fun, and I had no fun in those days developing Plone. Plone seemed to me clumsy, obsolete. It was something that was not modern. I came from the back end, but I wanted to do more front end. I wanted to do snappy apps and I couldn't do it. What I did then is to think about a way to have the same kind of interface that Flask provided. I named this package plone.jsonapi.core; that was the first draft of my own API. What might sound like a real complex piece of code is, at the bottom line, just a browser view. The browser view is registered as @@API. It is capable to traverse subpaths. Just keep in mind Plone is content centric. You type in a URL and you will end up at a content object. You end up with this path traversal. You end at the content you see. The concept of this API is to take this path and hand it over to the view. We have the API view and pass in hello world. This will not traverse to a content in Plone, but it will use this hello world path as a subpath to the browser view. Instead of traversing to the content, it maps this subpath to an endpoint. I am using Werkzeug, which is a really cool name; it is the German word for tool. It is a cool library which is integrated in Flask to provide the route mappings. I use the same library to have something like this. This is a code snippet of how you can register a simple view endpoint in Plone using plone.jsonapi.core. We add a route called hello, and we get another path segment which is passed as the first parameter to a function called hello which is below. It simply returns a dictionary which is transformed by plone.jsonapi.core into a JSON response. I will have a try. I am sorry. Can I just like... Damn. Give me a second. I want to click. I have to do it like this. 
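The route registration described here looks roughly like the sketch below, going from memory of the plone.jsonapi.core documentation; treat the exact decorator signature as approximate rather than authoritative.

```python
# Roughly the kind of snippet shown on the slide: register an endpoint named
# "hello" for two URL rules; the function receives context and request plus
# the captured path segment, and the returned dict becomes a JSON response.
from plone.jsonapi.core import router


@router.add_route("/hello", "hello", methods=["GET"])
@router.add_route("/hello/<string:name>", "hello", methods=["GET"])
def hello(context, request, name="world"):
    return {"hello": name}
```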
This is a code snippet showing how you can register a simple view endpoint in Plone using plone.jsonapi.core. We add a route called "hello", and the next path segment is passed as the first parameter to a function called hello, shown below. It simply returns a dictionary, which plone.jsonapi.core transforms into a JSON response. Let me try it. I am sorry, give me a second, I have to do it like this. I am calling this endpoint: as you can see it is the @@API view, and I am passing "hello/ploneconf". It returns a JSON document with "hello ploneconf", or "hello world". The runtime shown here is in seconds, and you can see it is fast, because we don't have to traverse any path: we hit the view, it maps to the endpoint and immediately returns the result. I thought: that is cool, that gives me all the freedom to add my own functions and expose some data as JSON. So what is this endpoint mapping all about? Basically you have a rule, and the rule maps to an endpoint, which is just a string; this rule maps the subpath names to the endpoint "hello". Sorry about that; I can remember days when Macs were just easy to use for presentations, but something has changed. Here we go, one more time. The endpoint is registered for a view function, and this function gets called with the context and the request. Since the @@API view is usually registered on the portal, the context is the portal object, and the request is what the user requested. If you want to return content, you have to find that content yourself and expose it. So step one was done: I had an easy way to register functions and expose JSON somehow. Why not go further and build something on top of it that supports all the Plone standard types, and maybe even custom content types? I started to code a package called plone.jsonapi.routes, which gave me a RESTful API to the Plone standard content types: create a document, edit it, delete it, cut, copy, paste it, whatever I wanted to do. How cool would that be? So I started with a set of route providers on top of plone.jsonapi.core. The base path is plone/api/1.0, followed by a resource, and a resource is just a content type. You say the resource is "document" or "event", and you get the JSON representation and schema introspection, the full content type exposed as JSON back in the browser. It supports the basic CRUD operations, create, read, update and delete, on basically any content. Because the Zope publisher was limited in those days to just GET and POST, I mapped all the writing operations to POST; PUT and DELETE requests were handled by the publisher directly and would never have reached my view. It is really simple to test, because with GET requests you can explore everything in your web browser; it is almost like an own UI on top of Plone. I decided on a two-step architecture. Step one is to search for content and return only the metadata available on the catalog brains. Step two is to wake up the full object and return the full object schema.
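A minimal sketch of what calling that hello endpoint looks like from the browser with plain fetch; the site id, the @@API view name and the route are the ones described above, and the exact keys of the JSON envelope are an assumption about this setup.

```javascript
// Calling the plone.jsonapi.core "hello" endpoint registered above.
// Assumes a Plone site at /Plone with the @@API browser view available on it.
const base = "/Plone/@@API";

async function sayHello(name) {
  // Everything after "hello" is handed to the registered view function.
  const response = await fetch(`${base}/hello/${encodeURIComponent(name)}`, {
    headers: { Accept: "application/json" },
    credentials: "same-origin",   // reuse the Plone login cookie
  });
  if (!response.ok) {
    throw new Error(`API call failed with status ${response.status}`);
  }
  // The view returns a plain dictionary serialized to JSON,
  // roughly {"hello": "ploneconf", "_runtime": 0.0004} in this setup.
  return response.json();
}

sayHello("ploneconf").then((data) => console.log(data));
```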
This is how it looks in practice. In step one I just say the resource is "folder", and on the plone.jsonapi.routes side that means: okay, this guy wants a folder. So I search the catalog for all folders and return only the brain metadata. Step two is to fetch the resource. But when do I know that the user really wants the full resource? When they say /folder/<uid>, for instance, to get the full information of just this folder. The UID comes from step one, because it is exposed by the catalog brain, and the UID is unique Plone-wide, so it will return the full object. Or, if I want a listing but with the full objects, I can even pass a request parameter, and I can do the same for folderish content to get all the children; obviously that is then a little bit slower. It supports all the catalog indexes. So when you say: I want the folder with the id "todo", I can do so by simply using the getId index. Or, like in this example, I'm passing in the sort_on index and saying sort on "created". Or getObjPositionInParent, maybe. Or I can use the searchable text, which is a really nice feature; I will show you in a moment, in the demo, what is possible with that. I can even do something weird like this: these are Zope publisher request constructs, where I can pass in the start date of an event and say, okay, starting from there. And even if I have a custom content type, it is supported out of the box; this just works as a generic API. So I want to show you how this looks in the browser, if my browser is still there somewhere. Here we go. Is it possible to see that in the back? I'm zooming in. So here: the plone.jsonapi.core @@API view, then this piece in between, which is basically just an identifier routing to plone.jsonapi.routes, and I say: hey, give me all documents. Here I get one result back. Page size is 25, pagination is supported, and then you see all the information that comes from the brain, and again the runtime in seconds. To get the full content, let's wake it up: step two, let's go to the API URL of the object. Click. Now the object is woken up and I have the full object information, like the text and other information which is only available on the object itself. So I have the document route; I skip this because we have just one document at the moment. I can also say: hey, give me all events, and I will show it to you, because here in this items list we have three events now, returned in 0.005 seconds. Let's pass in a request parameter, for instance limit=1, so it will return the items one at a time, and then you get this "next" URL. You can click it and you are on page two, then page three, so you have pagination directly available. Collections are also supported: we can wake up a collection, and this looks funny because you see the query which is stored in the collection. Of course you can also go up the URL; this collection lives in a folder, so now I am in the folder where the collection lives, and I can also say children=yes. Then I have the children, and it will return the full objects contained in this folder, including the collection, which happens to be this item. So that is plone.jsonapi.routes.
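The two-step pattern, written out as a client-side sketch: list content from the catalog brains first, then wake up a single object by its UID. The query parameters (limit, sort_on, children) are the ones shown in the demo; the resource segment and the response keys (items, uid, next) are assumptions about the payload shape.

```javascript
// Sketch of the two-step pattern: cheap brain listing first, full object on demand.
const API = "/Plone/@@API/plone/api/1.0";

async function getJSON(url) {
  const response = await fetch(url, {
    headers: { Accept: "application/json" },
    credentials: "same-origin",
  });
  return response.json();
}

async function demo() {
  // Step 1: catalog search, returns only brain metadata plus pagination info.
  const listing = await getJSON(`${API}/folders?sort_on=created&limit=5`);
  console.log(listing.items, listing.next);

  // Step 2: wake up the first hit through its Plone-wide unique UID.
  const first = listing.items[0];
  const full = await getJSON(`${API}/folders/${first.uid}`);
  console.log(full);

  // Folderish content: also return the children as full objects (slower).
  const withChildren = await getJSON(`${API}/folders/${first.uid}?children=yes`);
  console.log(withChildren.items);
}

demo();
```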
Browse an MF. So now you may ask, okay, I have this really a content type that behaves different. So PlonJSON API roots is built to be extended. When the request hits Plon at API view, it will get dispatched and it will say, okay, the user wants to have this content type. For instance, my content type. So there is this data provider which you can implement for your own content type and then return your own dictionary. For instance, you can make a schema, schema introspection, return the full schema and a little bit of extra information. Then you have something like a data manager. This is to set and get the data from this content type. So again, I can have a more specific adapter for your type and you just have to implement get and set. Then you have the content type and you say get the title. Then you can return it as a JSON serializable value. So this is also possible. Even you can go further. There is something like a field manager. So when you have a real special field like a records field and you have to handle it, especially then you can overwrite or create an own adapter for your field and have a field manager to get and set the value which I post. So this is also possible. There is also a catalog adapter to search and even a catalog query which you can overwrite. So subcomponent architecture is a basic concept of the API, of the JSON API, of my JSON API because nowadays, not to be confused with the rest API which came afterwards. Yeah, but these are the concepts of this API. Now the interesting part, maybe the part for what you are here. I built for you a little to-do app on top of Plone JSON API roots to demonstrate you how this can be used, how Plone JSON API roots can be used to build a simple application. And I decided to not create our own content type, but to use documents. So everyone of you knows Plone document, takes several clicks to add them in Plone. And I said, okay, my to-do item is a document and in a title there is what I want to do. And the description will be some kind of additional description for my to-do item. So app demo, I hope this works. Here we go. Again, browser. So this is Plone 5. The Plone JSON API roots works with Plone 4 and Plone 5. But Plone 5, yeah, it's special, but it needs to get more integrated in Plone 5. But anyhow, we can use it. I have this to-do app and there it says what needs to be done. So I'm going to put this in. Hold Plone Conf Talk. Enter. So this, what happened now? I typed in a to-do title and hit enter, and then something happened. And this something is, when I go over here, hold the Plone Conf Talk. In the to-do folder, I added a new to-do. I said, have breakfast. Do more stuff. And I can just like to demonstrate how quick it goes. Add some more and I refresh this folder. Yeah, and then I have just like created some documents, which is cool. So hold the Plone Conf Talk. Obviously, I sorted here differently than Plone, so always the newest ones are on top. I say, okay, I don't want to have these go back, reload. Then I just have wiped them out. So then I thought, okay, how to mark a content as done. And then I used simply the review state of this document. So I decided, okay, a content, a to-do item is active when it is published, and when it's done, I can click here, click. Then the workflow state has changed. Go back, so the do more stuff, I will reload. Do more stuff. It is like, it is a little bit invisible, but the state has changed. Maybe this one is better. Do more stuff. And I said, oh, no, maybe not. 
I will go back, reload. So it is published. Kind of cool. So this allows me to really add quickly, maybe CDO, to add quickly contents to Plone. Oh, of course, this is not there anymore, so to do this. So here we go. And even I can, like, go down there and do something cool. And have an additional text here. So we are going to delete the rest. Just like to just see this one. Go back here. Go to the to do's. So as you can see, Plone can be fast. Real fast. And this is something, yeah, I really like to have. And that was the reason why I started to have something like this API. Click, add demo. What I used for this is a framework called Backbone. This is already integrated in Plone 5. I've seen it. And I will just show you quickly two concepts of Backbone. That's a JavaScript framework where you can build applications like this. So Backbone expects a rest API, a rest URL to create, update, delete contents. So I will quickly go through with you how to create a model and a collection. So a collection is a kind of list which will hold the to do items, which I describe in a to do model. So this is the boilerplate, how to create a Backbone application. Because we are on, or because I'm on here on Plone 4.5, I don't have put post delete requests. I have to do emulate HTTP. So these information will come with a request header. But the API is capable to parse that out and do the right thing. Even if I say post and in the HTTP header, there is, oh, hey, that's an delete, actually. Then the JSON API routes knows, OK, this user wants to delete and not to update the object. But maybe this will change with when Plone Rest API is completely integrated. And we can make use of put delete requests as well. This is like how such a to do model looks like. This is CoffeeScript, by the way. This compiles to JavaScript. I don't know if you know that. So it's a little bit more easy JavaScript. Yeah, you have a Backbone model here. And the only thing you need to do is to specify the URL route. And there you see, hey, that's the Plone JSON API core route. And then document. Some same defaults, for instance, the parent path, I specified as Plone to do this, because I only want to have my to do this in this to do this folder. I'm able to have a toggle state which will do an HTTP post to this document resource and pass in a transition. And the transition comes from here. So if it is private, we say, OK, it needs to be published. If it is already published, I think I make it private again. So that is the functionality where you can mark it to do as, OK, this is done. And the collection, that is the list which holds all my to do items, to do models. Quite easy. I say, OK, please get the documents route. And there you see, I'm using here the path index. So it says, OK, give me all documents from the Plone to do folder. And I'm just interested in the first level of it. And please sort it on created. Since the items are returned by the API in the items key as a list, I have to do this dance here as well. So I say, hey, the to do models come from the items list. And I will show it to you how it looks like. So here we are with our to do app here. So this is now... Is this visible? So app. So that's my app. And I have app.todos. So obviously we have just now one to do item in there. Length, so there's one. I can also say to JSON, which will return the JSON representation of this to do item. And there we see that what I've shown you before, which is also returned by the browser for a specific document. 
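For readers who do not speak CoffeeScript, the model and collection just described look roughly like this in plain JavaScript. The Backbone calls (extend, emulateHTTP, parse) are standard Backbone; the route URL, the parent_path default and the transition names (publish/retract) are assumptions reconstructed from the demo.

```javascript
// Rough plain-JavaScript equivalent of the CoffeeScript Todo model and collection.
// Requires Backbone (with jQuery and Underscore) to be loaded on the page.

// Plone 4.x only lets GET and POST through to the view, so tunnel updates and
// deletes as POST plus the X-HTTP-Method-Override header.
Backbone.emulateHTTP = true;

var API = "/Plone/@@API/plone/api/1.0";

var Todo = Backbone.Model.extend({
  // Writes go to the generic documents resource of plone.jsonapi.routes.
  urlRoot: API + "/documents",

  defaults: {
    parent_path: "/Plone/todos",   // only create todos inside this folder
    title: "",
    review_state: "private",
  },

  // Mark a todo as done or active again by posting a workflow transition.
  toggleState: function () {
    var transition = this.get("review_state") === "published" ? "retract" : "publish";
    return this.save({ transition: transition });
  },
});

var Todos = Backbone.Collection.extend({
  model: Todo,

  // Listing endpoint, sorted by creation date; in the demo the results are
  // additionally restricted to the /Plone/todos folder via the catalog's path
  // index (the exact query-string syntax for that is omitted here).
  url: API + "/documents?sort_on=created",

  // plone.jsonapi.routes wraps search results in an "items" list.
  parse: function (response) {
    return response.items;
  },
});
```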
So there is some creators array. You have a description here. We have the effective date when it expires. And all the stuff you know when you code Plone. So let's go over here. So we just have one that is nice. So let's get the first item. So my model is app.todos first. So I have this model now in my head model. And now I can say set and I pass in a new title, like this comes from the JS console. Click. So changed. But no request yet done. So I have to do a model.save. So and then it will be posted. And then it should be if everything worked, something to do. This comes from the JS console. So this allows you or could allow you to have a complete offline application where you can add contents as you want. And when you have your connection back, then it syncs with the server. And suddenly all the contents are there. Might be a possible use case for instance for an offline iPhone application which can use this. So yeah. And the next thing is more apps. Yeah, I have some more prepared for you. So I did also a couple of years ago now already. This is, that's an electron app. So this works basically for Windows or this should run on Windows, Linux, whatever you want. The framework is XJS and it is using the JSON API which I showed you before. So you have the possibility to log in. So admin, admin. So now you are logged in and it is the same side we just, we have just seen. So here you see the blog contents. Let's search for this comes, oops. This comes here, this one. Here we go. You can also open it in Explorer. So that's to do with this comes from JS console. And now here we also have the possibility to for instance reject done. Let's have a look back into, here we go. So that was published. Okay. Obviously this should work. But however, is this. Oh, oh yeah. Private to item. Oh yeah, I have to, thanks. That's it. Yeah, but I'm confused. Stay private. Works. So yeah, this is possible. And as I told you before with the searchable text, you probably know this small search box on the top right corner of Plone, which allows you to do life search. This was really cool these days in 2004 maybe. But now it could be also that we have something like this. This is by the way a topping of picolimps on bootstrap. Actually, this is plon. And here we have this possibility to do a life search in plon. So we have something like a setup. Yeah, and then we can even you don't see it, but I can with the arrow keys of my keyboard, I can navigate the results press enter and it will go to this content. Complete new way how to navigate in plon. Control space, go back and maybe I want to go to the Happy Hills customer. So again, this is a way how to make plon fast and modern and more appealing to people that come. And here the whole search thing is boot with backbone. So it is fast and we can have modern applications built on top of it. What else can be done? One of my visions I have for this is a way to import and export or maybe migrate to Plon sites. I mean how cool would it be if you have a control panel where you can add a URL of a plon site which ships with Plon JSON API routes and then you hit the migrate button and then this migration goes, fetches the contents from the other plon site and migrates it because I have all the information. I even can set the creator date or the creator or when I do it in the right order, I can also resolve relation fields. This is all possible with it. Offline editing, I told you already it would be cool, a smart phone. 
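The browser-console session from the demo above (app.todos, set, save), written out as code; app.todos is assumed to be an instance of the collection sketched earlier.

```javascript
// Roughly the console session from the demo, assuming the Todos collection
// from the previous sketch is exposed as app.todos.
var model = app.todos.first();

console.log(app.todos.length);   // number of loaded todo items
console.log(model.toJSON());     // full JSON representation of one todo

// Change the title locally; nothing is sent to Plone yet.
model.set({ title: "This comes from the JS console" });

// save() POSTs the change through plone.jsonapi.routes,
// so the underlying Plone document is updated on the server.
model.save().done(function () {
  console.log("synced with Plone");
});
```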
Just think of a lab use case: someone goes out to take a sample and has their iPhone with them. That's a whole computer, with a camera and everything. They take a sample somewhere in the middle of the jungle where there is no internet, record some results on it, and when they come back to a city with connectivity, everything gets synced to Plone. These are the new use cases an API makes possible. That can all be done, and I want to say thank you for listening. Thank you very much.
The usage and the concept of the Plone add-on 'plone.jsonapi.routes' will be shown in the web browser to the audience. This will encompass basic content retrieval, as well as create/update/delete operations. Afterwards, a modern JavaScript web application that communicates with Plone over the JSON API will be demonstrated. A small excursion to the JavaScript developer console provides some deeper insights into the asynchronous data retrieval. Finally, ways to extend `plone.jsonapi.routes` will be discussed.
10.5446/54910 (DOI)
So yeah, I will talk about building bridges and the headless future of Plone. Working on what we call headless these days started 2014 in Zoranto in Italy. During the keynote on the first day, you already heard part of the story around how Plone REST API, Pasta Naga, and all the other components started, so I won't repeat that. I will try to provide you with, I hope that this talk can be more like maybe a roadmap than a vision because my ultimate goal is to bring our vision into reality. But first I would like to share a few observations. Mobile is overtaking desktop. I already mentioned that on the first day, that there are more searches on mobile since 2015, that there are in general more mobile access on web pages than on desktop. That was already reflected in Plone 5. Somehow it's mobile ready, but our idea is with Pasta Naga to really provide the best user interface, the best user experience for any device. The second observation is open source becomes mainstream or is mainstream. When Plone started, the community and the project was pretty unique because we were never controlled by the Plone community or Plone the software was never controlled by one company. Plone was never, I mean there was Plone solution at the beginning, but it was never really one company that was dominating the community. And that's somehow unique. We also have the Plone Foundation. But today if you look at the large open source projects, you have huge players like Facebook or Google that are open source stuff and they're building great communities, but it's like a different kind of open source community. But on the other hand, those like this involvement of those companies made open source becoming more mainstream. So just one example, GitHub just published a study about contributions last year and guess which project had the most contributors on GitHub? It was Microsoft. Visual Studio Code. I mean Microsoft, right? Open source. I mean, that's amazing. The third observation is we talked already a lot about that, but JavaScript is taking over. In the same study at GitHub, that's like the number of open pull requests by language. And JavaScript has doubled the time than the second language, which is Python, which is also really great. I mean Python overtook Java on GitHub. But JavaScript becomes more and more important. And if you're a web developer in 2017, I think there's no way around learning JavaScript. You can of course choose to become just a backend developer and then you can stick to Python. But I think if you're into web development and you want to develop the full stack, you have to learn like a modern JavaScript framework. That's just the reality that we have to face, at least in my opinion. The first observation is that the web is everywhere. Five to ten years ago, web applications started to replace desktop applications, right? I recall visiting my uncle, who's a doctor at a hospital, right? And he showed me like scans from the human body. And I was like, is that a web application? And he was, no, no, it's just windows. I click here and then it opens, right? But it was a web application. I mean, for like scanning, a huge amount of, I mean, that's a huge amount of data, right? But it was a web application, actually. So that was already like, I don't know, five years ago. And in recent years, we saw that web technologies taking over mobile, right? With technologies like Cordova, Ionic, React Native, NativeScript and things like that. T.O.B. 
reported in their index in 2017 that Swift is actually losing popularity. And in the report before that, they actually reported that Swift was like the highest ranking newcomer, right? And what they're writing is that the reason for that is that web technology is like taking over, like Cordova, Ionic, they didn't mention React Native or NativeScript strangely. But I think that's what you see, right? That web technology is taking over web applications, mobile applications. And also you have a third vector, which is like, desktop applications. So the web is coming back with technologies like Electron, right? You can, like, the same as you can wrap a web application in Cordova to distribute it as a mobile application. You can do the same with Electron to distribute applications on Windows, OS X and Linux. So isn't it just like a great time being a web developer in 2017? I mean, the web is everywhere, right? It's used everywhere. Isn't it a great time to be like a JavaScript developer because JavaScript is everywhere and the web is everywhere. So it's like, isn't that awesome? I mean, and open source is everywhere. I mean, major players are using open source. Like, some of the largest companies in the world are like publishing open source, right? And it's everywhere. So the times we're living in as web developers is really exciting. I mean, I wish I was in my early 20s and I could, like, hack day and night on all the cool stuff that is around, right? I mean, that's just great. I mean, but, I mean, we could, like, see it in another way, right? I mean, we could complain out loud and say, like, CMS is out there, the CMS.mark is there, like, blown is that. And when I hear that, it occurs to me that, like, I mean, our sector, like, digital transformation is everywhere, right? We're transforming entire sectors, right? We're making, I mean, think about, like, phone books or maps, right? I mean, the printed ones, right? I mean, they're just gone, right? And entire sectors of the industry are, like, completely changing, right? And usually if you, if people complain and say, hey, we're losing our jobs, right? Then what IT people usually do is, yeah, but it's, hey, it's more efficient, right? But on the other hand, if, like, we see in our sector, like, large changes, right? And we have to learn, like, new programming languages, new frameworks, like, every, like, a few years, right? Or sometimes maybe every few months, right? We start to complain about that, right? I mean, that's not the right thing, I think. I mean, we expect other people to, like, transform themselves for the better, right? But if it happens to us, I mean, then we can't just say, no, no, I'm not going to, like, do that, right? That sucks. So I really think that we're living in exciting times, and that's a great time, like, being an open source developer, web developer, and a JavaScript developer. So you could ask, like, hey, if JavaScript, Timur, you're, like, telling us that JavaScript is so great, right? I mean, why don't you, like, just jump on that and, like, yeah, go and try out a, like, JavaScript CMS or whatever, right? I mean, why are you, like, still sticking with Plone, right? I mean, that's a valid question that you could ask yourself. And I think Eric Proholl, I hope you went to Eric Proholl's talk, and if you went, I think he could, hopefully he could convince you already. So I will just briefly go through my list, and I will not try to duplicate too much from Eric's talk. So what holds me back, right? 
Why do I just, like, go on, like, and using, like, a JavaScript CMS or write my own one? So first thing is Python. I just love Python. I mean, it wasn't my first language, but it was the first language that I, like, truly enjoyed, like, programming. I think Python made, like, lots of right decisions. And I still miss Python in every single line of JavaScript code that I write, right? I mean, it's just, like, JavaScript sometimes feels so clumsy, like, and in Python it's so elegant and easy. But JavaScript is, as a language, is quite okay. I mean, I'm not using JavaScript for the language, right? If I could use Python in the browser, that would be, like, awesome, right? But that's not a reality. So I can live with JavaScript. It's fine. I'm using JavaScript not for the language, but, like, for the tooling, for the libraries, for the communities, for the speed, for the cool things that you can build with it, right? But, I mean, I would prefer to, like, keep Python, right? So, and right now I couldn't, like, imagine, for instance, to use, like, Node on the back end, right? Because I think Python is doing a way better job. It makes things far easier. So I will, personally, I want to, like, stick with Python and I will do that. Second thing is, clone the community, right? That's usually the first thing that people mention, if they're asked, what's great about clone, right? We all love the community. We all love to contribute to the community. And I think that's, like, what we have is pretty unique, right? In the last years I went to many other conferences, like Jenkins conference, CI conference, testing conferences, lots of JavaScript conferences. And every community is, like, different. But, I mean, there's no place like, clone, right? I mean, and, yeah, so, yeah, I just want to, like, share one observation about, like, the JavaScript community. I went to React Europe, I think, earlier that year. And it was, like, really an awesome conference. I was, I went there alone and, and I, usually when you go alone to a conference, you need to, like, you need to start, like, trying to, like, talk to people. And that usually takes some effort. But at that conference it wasn't like that. I was basically talking to somebody else after every single talk, right? And it was easy to, like, talk to people. Everybody was really, really friendly, really, really helpful. So it was, like, a great experience. And they had, like, I think, thousand or two thousand, like, attendees. So it was, like, a great conference. But then I went to the sprint, like, what I usually do, right? And there were, like, literally, like, 20 or 30 people showing up from, like, thousand or two thousand people, right? And, I mean, the sprint was nice and everything, right? But if I compare that to, to clone, right? I mean, our ratio is, like, far better, right? So people, like, show up to the sprint. And when I talk to even, like, other people in the Python community, oh, they always ask me, hey, Timo, how do you do that, that you just, like, invite people from all over the world and they will just come? I mean, some will even, like, pay for their flights and their accommodation and they will just come and work with you on stuff. I mean, how do you do that, right? That's magic. But, I mean, that's, that's how we roll, right? And that's, that's, that's a great thing about the community. Next thing is clone the software. I'm, I'm doing consulting since, like, eight or ten years and, and clone is not the only system that I see, right? 
So I see many other systems. And I think I won't go too much into depth on that, but I think there are things in, in our software stack that are still pretty unique. The way we handle permissions, workflows, users, Eric mentioned traversal in his talk. Those are still, like, those, those are, like, great assets, right? And I think there's lots of things that, that we want to keep. The next thing is clone the CMS. Eric, in his talk, sorry, sorry, Eric was, like, citing you so many times. But he said, like, clone is doing breadcrumbs since 2001, right? And that, that's really, like, catchy way of, like, putting it because we know, like, the CMS market since, like, fifteen years, right? And that's a huge asset. And you can see that, you can see the value in that if you look at the, at, at JavaScript, CMSs, for instance, right? Go out and try them out. I, I did that. And bottom line is, they're usually, like, they, they're usually have, like, really awful, like, user interface. It's really nothing that I could imagine to, like, show to my clients, right? I mean, they have, like, great widgets and everything, and they have everything that's great about JavaScript, right? But, but I mean, the user interface is usually really awful, right? If you look at the Hedlis CMS solutions, they're, they have really, like, nice things, like, like, libraries and everything that you could imagine, right? But they really lack, like, what we consider basic functionality of a CMS, right? So that's, that's not there. Like, even if I would want, I couldn't, like, jump off and, and, and, like, move to another system because I wouldn't have, like, half of what I have now, or, like, maybe only, like, 10%, right? So, we have lots of experience. The only thing that we have to, like, I think, like, prevent ourselves from is, like, becoming those, like, grand parts that's, grand parts that are sitting on the bench and telling, like, younger people that we did this better already and we know how to do that and you're all doing it wrong, right? Because those, like, new JavaScript communities, they have, like, lots of energy, and I think we could, like, we need to, like, take that energy and, like, enjoy that, right? And take some of, like, the old thing that we have and take the new, the new things as well. So, I mean, what do we now? On one hand, we have, like, a stable and mature content management system. We have the community, we have the foundation, we have the software stack, right, that we want to keep, right? And I think we all agree that we want to keep that. On the other hand, we have, like, a really fast-moving front-end technology sector with JavaScript, JavaScript fatigue, every, like, few months there's a new JavaScript framework. Personally, I think things settle down on that. I mean, you have, like, the two large frameworks React and JavaScript and React and Angular. They're pretty stable and they're also not that different any longer, right? I mean, even if you use Vue, JS, or something else, it's not too hard to switch any longer. Actually, like, React and Angular share a lot of things. It's really, like, which one you take, for instance, it's just more a matter of if you prefer a framework or a library or if you prefer a TypeScript over, like, plain JavaScript and those kind of things. But it's not, like, one or the other end, they're completely different, right? I don't think that this is true. So how do we handle that? 
On one hand, like, stable basis, and we want to provide our users with, like, something stable that they can rely on for, like, ten years, right? If you look at most of our clients, at least that's true for KIT concept, they're thinking in terms of, like, five or ten years, right? I mean, I sign contracts where they say we want to, like, do that for the next ten years, right? I mean, that's a lot of time, and lots of things will happen in that time. But if you look at the past of Plone, we were able to provide that, right, on, like, a stable and mature basis. So, again, how do we handle that? I mean, the answer is Plone Rest API. And as I said, that started, like, three years ago, and Plone Rest API is a restful hypermedia API for Plone. I won't go too much in depth on that. Rest stands for representational state transfer. The basic ideas that you leverage HTTP that you use HTTP accept status codes to communicate, it's basically the basic ideas really just using HTTP. So if you know the HTTP standard, then you know how, like, rest works, basically. And the hypermedia component is, like, a bit of an academic, like, concept, so you don't need to, like, understand that too much, like, in depth to use the rest API. The basic idea is just that you have hyperlinks that you can basically follow, like a human user would, right? You start with an entry point in your API, and then you see what you can do, which links you can follow, and that's basically it, right? So it's not too hard if you put it, like, a bit further than it becomes hard, but the basic idea is pretty simple, actually. And our idea with Plone Rest API is that that's actually, like, it's an abstraction layer, it's an API, of course, but that's somehow some kind of, like, bridge that allows us to separate the front end from the back end, right? So if we have a fast-moving front end and, like, a stable, and we want to have a stable and mature back end, we need them to communicate somehow, right? And we need an API for that, and that's what Plone API provides. So stability on the back end that we need, and the flexibility on the front end that we need today and that we will need in the future, because I think we can't assume that, like, in two or three years the JavaScript world will look the same as today, right? So what's the current status of Plone Rest API? It's stable around since three years. It's used in production. As I said, 4Teamwork is using that for the OpenGaver platform. CodeSyntax is using that. VNC is using that for their VNC Lagoon stack. We are using that at KitsConcept in production. Riedling is using that from Eric O'Andre, who just arrived this morning, and he gave a talk in Boston, essentially, last year about Plone Rest API. And I asked him, Eric, so, I mean, what are your wishes for Plone Rest API? What are you missing? And he basically said, nothing. All good. Great. And that was, like, last year, right? And since then, a few things happened. We had the Beethoven Sprint in Bonn, organized by KitsConcept. We added quite a few missing endpoints, sharing, vocabularies, copy, move, translation. And after that, we added, like, a history endpoint. We added expansion. We added a TUS endpoint. So I would consider Plone Rest API to be feature-complete right now. We're using that internally to build an Angular 2 application on top of Plone, and it has, like, all the functionality that Plone has. So I think we are really feature-complete. 
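What "just using HTTP" means in practice, as a small sketch against plone.restapi: content negotiation through the Accept header, hypermedia links (@id) that the client follows, and the expansion mechanism mentioned above to inline several components into one response. The site URL is a placeholder, and the response keys and expandable component names follow the plone.restapi documentation as I recall it.

```javascript
// Sketch of consuming plone.restapi: plain HTTP plus an Accept header,
// hypermedia links you can follow, and expansion of components.
const site = "http://localhost:8080/Plone";   // placeholder site URL

async function get(url) {
  const response = await fetch(url, { headers: { Accept: "application/json" } });
  if (!response.ok) throw new Error(`${response.status} for ${url}`);
  return response.json();
}

async function walk() {
  // Entry point: the site root serialized as JSON.
  const root = await get(site);
  console.log(root["@id"], root["@type"]);

  // Follow the hyperlinks of the contained items, like a user clicking around.
  for (const item of root.items || []) {
    const child = await get(item["@id"]);
    console.log("->", child.title, child.review_state);
  }

  // Expansion: fold breadcrumbs, navigation and workflow into a single request
  // instead of fetching each component URL separately.
  const expanded = await get(`${site}/front-page?expand=breadcrumbs,navigation,workflow`);
  console.log(expanded["@components"].breadcrumbs.items);
}

walk();
```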
The only reason why 1.0 is not out yet is that I was just too lazy, or rather I didn't have time to do it. There are really only a few things missing, and then we are good to go with 1.0. I also wrote a PLIP for including Plone REST API in Plone 5.2, so that will happen pretty soon, and I don't see any major problems on that front. So, with Plone REST API stable, we can get back to our vision. Our vision was not only building bridges between the front end and the back end, but also building bridges between different approaches. As I said, we have different kinds of clients: clients with a long-lasting strategy, but also the need to create modern solutions, and we have to make sure that all those possible projects are covered by our strategy. What do you do if you build a Plone 5 site today and you want to create a few new widgets? You can use Plone 5.x, it's a stable release, and you can use Plone REST API to build a modern solution on top of it. Actually, our main project is still on Plone 4.3 with Plone REST API and an Angular front end; we're about to move to Plone 5, but we're using the existing solution and just building something new on top of Plone with Plone REST API. You can do that today. And thanks to Nathan, we also had a PLIP for including React in core, so you can use React in core if you want. That's our stable branch. Then, if you want to go with a full modern JavaScript front end like React or Angular, you can build that on top of the existing stack; we could even start to implement Pastanaga on top of that. And the third branch is the long-term vision, which is Guillotina, because our idea is to somehow keep the APIs in sync. That will be incredibly hard, but it's doable if people want to put effort into it. So we have three branches, and the idea of Plone REST API is that it allows us to build bridges between those branches as well: you can switch from one to the other if you want. You can start creating React widgets in standard Plone, you can create libraries for React or Angular on top of Plone REST API, and at some point, if we have rich front-end libraries and applications, we could even swap Plone for Guillotina. That could be possible. So that's our vision of building bridges. But visions and telling stories are one thing. To be honest, every time we prepare a talk about all this, Victor asks me: should I really say that? Going on a stage and telling people about all those things that we have, or that we will build, means everybody will expect us to do it, or somehow make it happen. It's of course not just the two or three of us, but people will expect it to happen. So on the one hand we are using visions and stories to convince people to jump on board and help us make it possible. But I would also like to provide, as I said, a roadmap, a way we can make that happen.
And we already thought about that. So who here would like to have Pazdanaga out, like, today, right, and use it? Okay. So, I mean, how should we approach that? I mean, how should we make that happen? I mean, should we start to build it like that? I think, yeah, people are laughing at me, but, I mean, I think we have, like, a history in the Plum community, like, that some of the projects that we have, and that they're, like, around for quite some years, I think the main problem with them was that we were really building them like that. I mean, that's like from Henry Knieberg about his, from his blog was making sense of MVP, and I think everybody, I mean, yeah, and everybody wants to, like, develop like that, right? I mean, if you, the idea of, like, having a minimal viable product is that you provide your user with something usable right from the start, right? You don't build something over a longer period of time, but you build something small that has value to the user that they can use and, like, try out. And that fits very well with, like, the agile approach, right, that you iterate over time and improve the product. And I sincerely believe that if we want to make Pasadena AgaplonRest API a new front-end library, if we want that, I mean, I'm talking about, like, the full application, right? Not about the SDKs and stuff, right? What we have, for instance, for Angular 2, I'm talking about really the full application of, like, like, full clone in written in a modern JavaScript front-end. So if we want to have that, then I sincerely believe that we have to start with a minimal viable product. We have to think about how can we create something that adds value to users today, and not only users, but also companies, right? That allows companies to start with that, right? To say, yeah, we will build that in a new site which is maybe smaller in React, right? Because we have the basics, and we know what we have to, like, put on top of that, right? But that's better than having, like, a product that, that, where maybe, like, 30% of the functionality is not really working, right? Then it's hard to estimate. So I want that skateboard. So what do we need to, like, have that skateboard? Of course, we need login, right? Otherwise, the user can't login. We are a CMS, so nobody can, like, add any content if we don't log in. So we will need a login, right? I mean, that's obvious. Second thing is content editing. I'm talking about my basic content editing. Adding, editing, deleting, and navigating, right? That might sound easy, but as I said on the first day, I think it's not. Because what we're aiming for here is bringing Plone back to where it belongs, like, to make Plone stand out when it comes to the editing experience, right? I really want that people will start using our MVP and think, wow, that's, that's, like, the greatest, like, user experience that I had in the CMS, right? I know that, that, that this is, like, a high goal, right? That's not easy to reach, but I want us to, like, put as much effort into that, that user story as we can, right? And I also want us to, like, then iterate over that, right? If we have an MVP, then, like, I want us to, like, iterate over that and improve that. And really, like, the idea of, like, the idea of Alba to focus on the UI level, I want us to, like, focus on that, on those users, right? And that user story. So if we have that, then another thing that I would like to have is, like, image upload, right? 
Because that's something that, yeah, that you need if you want to, like, if you think even of a small website, if you just have, like, tags and no image upload that, that would suck. And I want to make it really easy that you can just drag and drop things that you have, like, a few options that you can see there, how to put them. And one thing that we have to solve on a technical level is this idea of, like, folder-ish content types, right? Because I want a problem that we have is we built our, the KIT concept block, we built that in Angular 2, right? I thought that's, like, I knew that this was a stupid idea, but I wanted to do that. And one of the problems is we can, we create, like, a medium-like editor, right? So you can drag and drop the image. And it works. The problem is just, like, the image is huge, right? Sometimes you have an 8-megabyte image or something, you put it there, and then, like, your block is broken, right? Because we don't have image scaling. Something that you take for granted in Plone, right? I mean, you just upload an image, and sure, I mean, you get all the scales, and that's all there, right? But if you build something from scratch, it's not there, and you have to, like, build that stuff, right? Of course, it's internal, like, in the back end, but you don't have it. So I want us to have, like, a folder-ish page object, and then, like, drag and drop that image there, and then Plone does its magic, uploads that to that page-like folder, and stores it, and provides the user already at the front end with the scaling, right? So those are, like, the main functionalities. But I think I know that we're talking about an MVP, and we should, like, try to focus. But there are a few other things that I think are essential, even for an MVP, to make it, like, production-ready and usable. One is performance. I guess you all know the, like, statistics, right, that Google reported that, like, 53% of mobile site visitors abandoned the page if the page take longer than three seconds to load, right? I already said that mobile is taking over, taking desktop, right? Actually, Dennis Maschunov, who will give another keynote here, has wrote a great blog post on smashing Magazine Why Performance Matters, and he covers there the psychological aspects of that. But I think that's really essential. And if you look at, yeah, at modern, like, page builders, they really put a lot of effort into performance. If you look, for instance, at Gatsby.js, that's a static site generator written in React, right? What you basically get out of that is if you build that, if you build a site with Gatsby, you get a PWA that does all the latest, like, performance optimizations that you could, like, imagine, right? So you get, just get that out of the box. And I think if you want to compete with, we won't, we don't want to compete with Gatsby.js because it's, like, just another sector, right? It's for smaller sites. They don't have, like, CMS features and all that kind of things. But I sincerely believe that we have to provide users with a great, like, out of the box performance, right? That's really essential. So we will have to use, like, Webpack and do, like, the usual optimization, right, bundle, to reduce the bundle size, use tree shaking, all that kind of things. I think we have enough experience on that front to build something good there. We will also need, like, server site rendering to increase the performance, which also has implications for the second thing that I would need is, like, XEO. 
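A sketch of those first MVP stories end to end: authenticate against plone.restapi's JWT-based @login endpoint, then create an Image inside a folderish page, with the binary data sent base64-encoded in the JSON body. The endpoint name and the payload shape follow the plone.restapi documentation as I recall it; URLs and credentials are placeholders, and for large files the TUS upload endpoint mentioned earlier would be the better fit.

```javascript
// Sketch: log in via plone.restapi, then upload a dropped image into a
// folderish page. URLs, credentials and the payload shape are assumptions.
const site = "http://localhost:8080/Plone";

async function login(user, password) {
  const response = await fetch(`${site}/@login`, {
    method: "POST",
    headers: { Accept: "application/json", "Content-Type": "application/json" },
    body: JSON.stringify({ login: user, password: password }),
  });
  if (!response.ok) throw new Error("login failed");
  return (await response.json()).token;   // JWT token for later requests
}

async function uploadImage(pageUrl, file, token) {
  // Read the dropped File object as a base64 string.
  const dataUrl = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
  const base64 = dataUrl.split(",")[1];   // strip the "data:...;base64," prefix

  const response = await fetch(pageUrl, {
    method: "POST",
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      "@type": "Image",
      title: file.name,
      image: {
        data: base64,
        encoding: "base64",
        filename: file.name,
        "content-type": file.type,
      },
    }),
  });
  return response.json();   // the serialized new Image object
}

// Usage, e.g. from a drop handler on the page editor:
// const token = await login("editor", "secret");
// await uploadImage(`${site}/my-page`, droppedFile, token);
```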
That's also something that I, that we saw on our, on our Angular 2 block from Kit Concept. We couldn't make server site rendering work. Like, that was, like, a year ago or so. I know Eric made that work and he said that it works perfectly, but that didn't really work out for us. We came up with a solution. But I think that's really essential for, if you want to use the new clone, Pasanaga, minimal viable product on a real site, Google needs to find that site. If you want to use that for, like, even a small portal, right, that's just essential. And we have to, like, make sure that this works. I think Eric and Andre will also give a talk about Angular 2 and how we optimize that, so that will be really interesting. Yeah, so that's my minimal viable product, right? I mean, the three, like, functional requirements, basic content editing, login, and image uploading, then XEO and performance. And I think that would, that's something that's, in my opinion, doable. And I think that would provide, like, a real value to users and also to companies that want to build something on top of that. So on GitHub, we had a list of user stories for that minimal viable product, so we basically know what to do, right? You can check it out at github.com, clone slash Pasanaga. So question is now how do we, like, bring that into reality, right? We have a vision, we have a roadmap, so how do we do that? We'll have an open space today at 12.45 to half past one. That's a slot right before the lunch. We'll do an open space, and we can, like, talk about how we could make that happen. Of course, we have, like, the standard things that we can do, right? We can organize sprints. We will have the sprint at a conference. Kit Concept will organize another bit open sprint in, yeah, early next year, I guess. There will be many other sprints, I guess. One good way is, of course, having real world projects, because, I mean, that's what drives open source communities as well, right? If you have a real world project and you can put billable hours on open source projects, if your clients allow you to do that. So if you have any project and you want to use Plonrest API or Plonreact or the Angular SDK, please talk to us about that, right? I mean, ask us questions, and we can, like, help you to make that work. And if those real world projects allow us to contribute something back to the community, that would be awesome. At Kit Concept, we have a few projects in the pipeline where we'll definitely do that. And, of course, contribute everything back that we do. Another thing is, of course, like sponsorship. That's something that we can maybe discuss at the open space if we can do something on that front. And, yeah, that was basically it. So I will briefly just summarize. So with Plonrest API, we have a stable and mature platform to build bridges between the different systems between Guillotine and Plonreact, Angular SDK, Plonrest API, and existing Plon versions. I think Plon's future is really bright if we combine the knowledge that we have and the stable back end platform and adopting, like, new frameworks and just reuse the stuff that is already out there, right? Just take our experience and look at the solutions that are there and adapt them to fit our needs, right? I think that's the way to go. Passenaga UI is awesome. I love it. I think that's a great opportunity and great way for us to really improve the user experience and how people see Plon. And I hope I could provide you with, like, a roadmap that we can make happen together. 
And that's doable. And, yeah, like I said on the first day in the keynote, the great thing about the Plon community is that you start with an idea and people just, like, come and contribute. And it's, like, an effort from, like, many people. And, yeah, let's get together. Let's do Plon magic together. And thank you for listening. Yes, thank you, Timo. Great talk. Do you have any questions? We have some minutes left. Any hands? Okay. A question for me. Yes. If it comes to Passenaga UI, is there a website where we can already see something? We have the domain. Yeah, that's one of the things that we're actively working on, assembling a web page that runs on Passenaga and Plon REST API. We're working on that. I hope we can continue that during the sprint. We have the GitHub page where there's all the information, right? But I think that's really important that we have that. That we show that we have a page where people can go where we show our roadmap idea, what we have to do, right, so that people know how they can contribute, right? I know that's not enough to go on a stage and say, hey, contribute, right? And then, like, people have to find and fiddle their way through. You have to, like, show people exactly what they can do to contribute, right? And that's the intention of the Passenaga IO website. But, yeah, we didn't have much time to work on that. So if you want to work on the Passenaga IO website, then come talk to me or come to the sprint or the open space. There's one more question. Yes, thank you. It's a very interesting way of building the future in Chrome. But I was wondering about, you were talking about search engine optimization. And now I've been working with the site building JavaScript myself. And, yes, the problem was that the search engines are simply not seeing it, so it's not being indexed any of the content. That's okay. In that case, we could live with it. But I was wondering how you actually do to make it work. Yeah, so I would really suggest to go to Eric's talk, because I think their solution is based on Angular 2. Eric, are you here? There. So Angular 2 or Angular 1? Two and four. Two and four. Yeah, okay. Okay. So, yeah, if you're on Angular 2, you can use server-side rendering. Both Eric and Eric Brahold made that work. If you are on React, it's also easy. You can just use server-side rendering. And if you're on Angular 1, it might be hard. Then you need some, like, pre-rendering service that provides Google with the content of the page. Because if you're using Angular 1, then Google will just not see that. I know they announced that they can actually crawl JavaScript now, but this is just not true. I mean, from my experience, if you have an Angular 1 application, Google is just not seeing anything. But, yeah, I think if you're using a modern JavaScript front-end framework like React or Angular 2, it's not too hard any longer.
Timo will talk about how the Plone community faces the exciting challenge of keeping up with new technologies, and which efforts and projects are in the making to turn the headless vision into reality.
10.5446/54911 (DOI)
Hello, hi. Thank you, everyone. Thank you for the introduction. As I said, my name is Tudor, I live here in sunny Barcelona, not so much today, but generally, and I work for a company called Ships to the media. There I do mostly JavaScript nowadays; I work with Vue.js, and I think it's an amazing tool. It's a framework that is really good, has some really strong points, and it doesn't get the attention it deserves. I'm going to start with a bit of history. By the way, who here does front end? Quite a few. A question for those people: how old are you? How many of these do you recognize? Because if you recognize the first two, you should look at your pension plan, because you're pretty old. The thing is, over time we had a lot of JavaScript frameworks. It started with Prototype and script.aculo.us, which, if you remember, was used to build these annoying applications that would slow down the page, then MooTools, Dojo, jQuery. jQuery is still widely used today. The problem with these tools is that they're very rudimentary; they offer just a bare minimum. Most of them offer some Ajax abstraction, some simple DOM manipulation, where you just inject something into the DOM or delete elements, event management, annoying animations (there are a lot of frameworks that do that), and shorthand methods, which from my standpoint is not really a good thing, because a lot of them provide multiple ways of doing the same thing. And there's one thing all programmers agree on, and that is that the previous guy made a mess of the code base. This happens regardless of the framework or the language you're using, and it happens because we're trained to write code, not to read code. Reading code is difficult; that's why, whenever we find a project started by somebody else, we have problems understanding it and feel we need to rewrite it, and then the project manager starts sweating. It also happens because frameworks provide multiple ways of doing the exact same thing: I'm used to doing it one way, and then I end up with a project that does it completely differently while using the same framework. So what are the missing pieces of the old-school frameworks? First of all, there's no real DOM abstraction or efficient way of manipulating the DOM. For example, in most cases the old-school, so to speak, Ajax applications rely on HTML being generated on the server; you fetch it via Ajax and you just insert it into the page. Also, there's no URL management or routing: you just navigate from one page to another with full page loads, so you cannot do a true single-page application. No state management, no support for reusable components, and, as I said before, no coding standards and guidelines. To give you an example of what I mean, this is a jQuery example, and these snippets pretty much do the same thing, but the way the events are handled is very inconsistent. I'm not saying that it's hard to follow, but if you're used to a certain coding style and then the framework encourages the shorthand method, it becomes very inconsistent, and having this kind of inconsistency is the path to full-fledged spaghetti code. Also, the DOM operations are very inefficient. For example, in this case, the code expects to get already formatted HTML from the server, so I, as a front-end engineer, have to go and do it on the server, or talk to the back-end engineer to do it for me.
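Since the slide itself is not in the transcript, here is a small stand-in for the kind of jQuery example being described: the same click handler wired up in three different styles, plus the typical "insert server-rendered HTML" pattern. Selectors, URLs and the save function are made up for illustration.

```javascript
// Stand-in for the slide: three ways of attaching the very same handler.
// Mixing these styles in one code base is the inconsistency described above.
function save() { console.log("saving..."); }

$("#save").click(function () { save(); });                   // shorthand
$("#save").on("click", function () { save(); });             // explicit .on()
$(document).on("click", "#save", function () { save(); });   // delegated

// The typical old-school Ajax pattern: the server renders HTML,
// the client blindly swaps a chunk of the DOM with it.
function refreshArticle(id) {
  $.get("/articles/" + id + "/rendered.html", function (html) {
    $("#article").html(html);   // replaces everything, even unchanged parts
  });
}
```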
And also whenever I update, it's very inefficient, because even if something really small has changed, I refresh the whole portion of the DOM. So that was solved by using virtual DOM. This was the idea. Basically, let's say I have an initial state, an article: ID, title, content, and the number of likes. I do an Ajax refresh, and then the article ID stays the same, the title stays the same, the content stays the same, but the number of likes has increased. It was like a really cute puppy or something. So technically in this case, I want just this part of the DOM to be updated here, just the two, to reflect the current state. I don't want to update the entire article. Why? Because input/output operations to the DOM are fairly slow. So sometimes when you do these massive updates, you can see the page like disappearing and appearing again for a split second, and it's annoying to the user. I want to have like a continuous interface. I don't want it to flicker. So virtual DOM was invented, or introduced, and basically what it does is: from the state it creates an abstract representation of the DOM in memory, then just diffs it and applies a patch to the actual DOM. In this fashion, the DOM is updated in an efficient manner. Only the elements that have changed will be updated. More bad practices that you find with old school JavaScript frameworks: for example, keeping information or state in the DOM. The DOM is not a database. It's not meant for that. It's a tree, which makes it sometimes slow to parse, especially for arbitrary information. And another architectural problem is, for example, if I have content that's loaded via Ajax and I need to specify event handlers for that content, where do these handlers go? Where do I put them in my application? Because this for me is a component. This should be part of this component. Do I put them in the same Ajax response and then I parse the response as JavaScript? Do I put them outside? What happens when this content gets removed from the DOM? Do the handlers also get removed? Short answer, I don't know. And after six to nine months, your JavaScript project, which is cool and amazing and is going to be the next best thing, looks like this and it's like, just go cut the red wire somewhere. Basically, what I want from a JavaScript framework is pretty much what I want from a backend framework. I want good separation of concerns. That's one of the most important features of a framework, because if it offers that, I can use it and my project will be maintainable in time. Otherwise, it's just going to become the spaghetti code from the previous slide. I want virtual DOM. I want operations to the DOM to be efficient. I want in-memory state management. I don't want to look for information in the DOM; I want to have access to it in memory. It's really fast. I want to have fast operations. I want routing, preferably using the HTML5 history API. I want to have clear coding practices. And Ajax: I want to have an Ajax abstraction layer, although that is not necessarily a deal breaker, because there are a lot of really good promise-based Ajax libraries that you can just plug in, which allows your framework to focus on what's really important while you can pick the best Ajax library for you. So this is where React and the other modern frameworks come in — frameworks that already addressed these issues and solved them.
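As a rough illustration of that diffing idea — real virtual DOM implementations diff element trees rather than flat objects, and the data-bind attribute used here is just an assumption for the sketch:

```javascript
// Compare the old and new state and touch only the DOM nodes whose
// values actually changed (here, only the likes counter).
const oldState = { id: 42, title: 'Cute puppy', content: '...', likes: 1 };
const newState = { id: 42, title: 'Cute puppy', content: '...', likes: 2 };

function patch(before, after) {
  Object.keys(after).forEach(function (key) {
    if (before[key] !== after[key]) {
      const node = document.querySelector('[data-bind="' + key + '"]');
      if (node) {
        node.textContent = after[key];   // only this node is rewritten
      }
    }
  });
}

patch(oldState, newState);
```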
In the front end JavaScript framework arena, now the main fight is between React, which is supported by Facebook, and Angular, which is supported by Google. So you have these two tech giants and two frameworks and I think 80% of the JavaScript engineers use one or the others. And there are several, you know, Ember, knockout, riot, backbone, if you like it really lightweight. And where does Vue fit in all of this and why should anybody consider Vue? First of all, it's really lightweight. It has the smallest footprint of when you compare it with the other two big frameworks with Angular and React. It's a two open source project. It's developed by a community, not by a tech giant. I don't know if you're aware, but until a few weeks ago, React had this problem with their licensing. Basically if you were using React in your application, you would forfeit your rights to sue Facebook for whatever. So this will not happen with Vue. I also think it has the fastest learning curve. So if you want to move to a modern JavaScript framework, I think Vue.js will be the fastest choice, will be the fastest to get you to production because it supports, it just uses things that you already know. So it doesn't use complicated JSX syntax. It just uses a more clearer syntax that people are familiar with. Separation of concerns in one file, basically Vue relies on components and components are one component per file. So if you want to change a component, you go to that file, you edit only that file, you commit it, that's it. And great performance per kilobyte. This is a metric I invented when doing the slides. There are some comparison graphs between Vue, React and Angular. There is the source. I didn't do that myself. And generally the smaller number, it's the better one. As you can see, in some cases Vue wins. In some cases, it loses. But it's in the same range as Angular and React when it comes to performance. There's no massive spike, but there's also no massive loss. Although when we look at the file size, Vue actually wins because it's by far the smallest. And for front end, file size, it's important. I want the download to be fast, especially for example, in my company, we're a multinational corporation and we have offices and we serve customers in countries where the internet connections are generally slower. So for example, if you live in Spain or Germany or the 50 kilos is not going to be that much. But if you live in Belarus, it can be slower. And also a larger file size means it takes more time for the browser to parse it initially. So small, it's better. And basically I'm going to go and talk about the architecture of a Vue application. And it's built on components. In Vue, everything is a component. It's very similar to React in some fashion. And basically what components are? Are custom HTML elements that have behavior attached to them. So imagine I create an element which is a menu item. And whatever this menu item does, it's incorporated within the file of the menu item. How it acts when it's selected. How it acts when it's deselected. How does it publish events telling that it was selected or not and so on. Everything is encapsulated in that file. That's making the components self-contained. They reside in.vue files which are compiled with webpack or browserify. Personally, I've never used browserify. For some reason I don't like it. Don't want to start the flame work though. Just when in doubt, use webpack. It can be mixed with regular HTML. So you basically write your HTML normally. 
And when you need something fancier, you can just pop in a Vue component there. And it will do its magic. And the Vue application, it's wrapped in a root component. So imagine it, to some extent, mimics the structure of an HTML page. You have the root component, which would be the HTML element or the body element if you want, and all the other elements go into this element. The same happens in Vue. You have a root and all the other components go into that element. So for example, I would have a simple page where I have a menu, some menu item components — some can be selected or not — I have a content and a footer. Very simple page. Basically, I'm just taking the wireframes of the page and putting them into Vue components. And this is how a component looks. This is an actual Vue component. In this case, it would be the middle one, the content component. Basically it has three main parts: a template, script, and style. In the template part, I would write the HTML for this component. And it uses a mustache-like syntax. So for example, I want to write the title here. I want to write the article. If it's loaded, I want to display this part. If it's not loaded, I want to display the loading message. And then I have the actual script part, which is the logic of the component. What the component does or what the component knows to do. And the most important part is these so-called reactive properties. What does a reactive property mean? Whenever that property is changed, it triggers a re-render of the template. For example, if I change the title, this part will be re-rendered. If I change the media URL, this part will be re-rendered. The thing is, it doesn't re-render the entire component. It uses virtual DOM and only re-renders the elements that have changed. For example, in my case, if I change the title, it will only change the content of the first header and the alternative text of the image. Nothing else in the DOM will change. Now, if you look at the syntax, for example, HTML attributes that link to a reactive property need to have these two dots in front of them. This tells the Vue engine that this links to a reactive property added to your virtual DOM. If that changes, I want this to be changed. If you don't put it here, it will be interpreted literally. Then, as with any other framework, I have some lifecycle events. For example, mounted is an event that gets fired whenever the component has been mounted into the DOM and has become visible. In that case, I would just perform an Ajax call. With the response, I would just put it in the reactive properties in my data. I would set loaded to true, the media and the article, and this will automatically trigger a re-render of the main visible part of the template of the component. I also have a style part, which is basically plain CSS, but it also supports out-of-the-box preprocessors for CSS such as Less or Sass. Again, don't use Less. Yeah, I know. Is anybody using Less? I'm just going to stop saying that. Note to self. Yeah. No. Okay. Basically, what can I use reactive properties for? I can use them to show or hide elements. By the way, show or hide means it's not necessarily hidden — it means it's completely removed from the DOM. It doesn't exist. We can also just hide them by setting visibility hidden or display none. There is something called v-show for that.
But v-if means if this expression here evaluates to false, everything gets removed from the DOM, which is good, because at some point, if you have a lot of browser pages and a lot of tabs open, a memory-hungry browser can get really slow if the DOM is really big. So I want to keep in the DOM only the parts that are actually required at the moment, and then I can just add and remove the others. I can also use the reactive properties to fill in HTML elements or to set values for an HTML attribute. And as I said before, changing any of the values here will trigger a re-render in the template. And there is also another set of reactive properties called computed properties. Basically, what do computed properties mean as opposed to the normal properties? These are functions, and they allow me to encapsulate sometimes more complex logic. So for example, in my case, I want to decide whether this is loaded, which means the title is not null and the article is not null. But I can add more complex logic, such as no error has occurred, or I can do whatever I want. This is a function that gets called and its result gets cached. So even if I use it multiple times in the template, this will only be called once. So you can do however complex operations here. And if any of its dependencies change, this will be re-evaluated and the template will act accordingly. So if at some point, after this has returned true — so I have a title and an article body — if I set this to null, this will be re-evaluated and that part of the DOM will disappear. It will go back to the loading part. So as I said before, this is how a template looks, a more complex template. Basically, computed properties can be anything. You can return a string, you can return a Boolean, you can return whatever you want, an object and so on. And also I can iterate, with a mustache-like syntax, through the items in an array. So for example, here I have a list of articles in related articles, and each article has a URL and a text, and then I create a loop and I display the URL and the text. And in order to listen for events — because it's really important, this is front end, so it's interaction with the user — events are probably the most important thing. Events are added in the template. You can say v-on:click and then you can call a function. And in the script part of the Vue component, I have methods — I can declare methods that I can call from the template. And now, for example, when the user clicks on this element, this method gets called, show related is switched from true to false. And this triggers the re-render of this — like, show related articles or not. And the same because this property has changed: this gets re-evaluated and will show one message or the other, based on whether it's true or false. And why HTML event handlers? Because for years we've been told don't do that, don't do onclick or onmouseover. It's so Netscape Navigator, it's so, you know, '98. Well, the thing is, it's easier to locate the handlers just by skimming the template. You just look and you see the click handler, okay, something happens here. I don't know if you've had the misfortune of looking into a really legacy JavaScript application where you click on something, a handler is called, but you don't know where that handler is set. Because it can be set on that element, it can be set in a different file, it can be set up the DOM tree and rely on event bubbling and so on. Which made it difficult sometimes just identifying what gets called.
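Pulling the pieces from the last few slides together, here is a rough sketch of what such a component could look like — the field names follow the talk's article example, but the exact markup is an assumption:

```vue
<template>
  <div>
    <div v-if="loaded">
      <h1>{{ title }}</h1>
      <p>{{ article }}</p>
      <a v-on:click="toggleRelated">Related articles</a>
      <ul v-if="showRelated">
        <li v-for="item in relatedArticles">
          <a :href="item.url">{{ item.text }}</a>
        </li>
      </ul>
    </div>
    <p v-else>Loading...</p>
  </div>
</template>

<script>
export default {
  data () {
    return {
      title: null,
      article: null,
      showRelated: false,
      relatedArticles: []
    }
  },
  computed: {
    // Cached, and re-evaluated only when title or article change.
    loaded () {
      return this.title !== null && this.article !== null
    }
  },
  methods: {
    toggleRelated () {
      this.showRelated = !this.showRelated
    }
  }
}
</script>
```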
I needed to use the debugger and see what event handlers are added to that element and where are they in my 30-something files application. So here, all the events that are related to this component are in the template of the component. So it's really easy to find them. It's also easy to test the underlying logic because it's completely separated from the DOM. I can easily unit test it. I don't have to mock events in my unit test in order to test the logic. So for example, if I want to test this component, I can just import in my test the script part and call this method. And then I can check that the toggle has changed correctly without actually having to fire an event. And one cool thing is they can automatically be removed whenever the DOM is removed. So view internally will do the event management for you. If you add an event listener, it will add it to the DOM. When you remove that DOM, automatically it removes all the handlers. So you don't have memory leaks. You don't have handlers listening for events from elements that no longer exist in the DOM. So for example, how would a minimalistic view JS application look like? If you remember, I said that everything is a component. So you have a root component. Generally in the view JS coding practices, you call that app.view. And then you have main, which is the bootstrap script. This is the endpoint of the application where you import the view, you import the application, and then you create a new view application. And you specify where to be added in the DOM. So basically you create an empty HTML page with a div or something where you want your view JS application to go. And then you bootstrap the application and you pass that element. And that element will be replaced with the actual view generated code. This is very similar to how React works, for example. And I can, in my application, in my root component, I can import and use other components. And I use them with an HTML-like syntax. So for example, I import the menu item, I import the content, I import the footer, and then I specify how the tag should look. And then I use it here. And components name are generally camelCaste. And these are, I think it's called kebabCaste or something where you have the minuses in between. So in the HTML part, I would have menu item with the dash, and here I would have menu item camelCaste. And it's like how do you communicate data between a component and its children component or from a child component to its parent? Given that each part of my application is an individual view component, sometimes they need to communicate with each other. So view implements a pattern with two-way data communication, where parents inject values via HTML attributes. You can send information to the children, and children push custom events to the parent using the view on custom event name, whatever you call it. To give you an example, in my app, I use the menu item. Okay? And now the menu item encapsulates all the logic on how it looks and how it acts. For example, the parent can tell it this is the menu item for ID1 and the label should be books. And then whether it's a link, whether it's a button, whether it's a checkbox that you have to check, the parent is agnostic to how with regards to the internal workings of the child component. So everything that is a menu item and everything a menu item does is encapsulated within the menu item component. So basically, I need to be able to inject data and get events out of it. And I can declare properties. 
For example, this menu item will have the properties ID, label, and whether it's selected or not. Again, the logic or what is selected or what is not selected is encapsulated in the menu item component. Basically, the set of properties is a contract between the child component and the parent. And it says, these are the things that I understand. If you give me these information, I can work with them and I can provide visual cues to the user on whether the element is selected or not or what the element is about. And at some point, I also need to send the information back to the parent. I need, like, somebody has clicked on the menu item. So whether it's a list item with a link in it or a button or whatever, the menu was selected somehow. I have logic in the component to make sure that it was selected. And now I need to publish that information. I need to send it back to the parent because the parent needs to act on it. So I can use whenever this was clicked, I can emit an event. I can say selected and I can send a payload of the event, what was selected, what component I am I. And here I can listen for that custom event in the parent and say, if this was selected, it was called process. And in the event object, I have the payload that was sent here. So this allows me to fully isolate one component from the components next to it or from the parent component. I have, and this makes it really easy to test. The component is self-contained. It has everything it needs to function. And sometimes I need more, when information changes from the parent, so let's say, for example, I get something injected from the parent via property, I need to do something in a component. Properties are also reactive components. So when you change a property, it triggers automatically a re-render of the template. But sometimes I need to do more than just re-render the template. If, let's say, in my content, whenever the parent specifies a new ID, I need to display the content for that ID. So I need to go to the server, do an Ajax call, get the article, and display it in the template. So basically, Vue has the so-called watchers. In Vue, everything can be watched. So you can add a watcher on every variable. It's like it was built by the NSA. So everything you can just spy on it. And whenever the article ID has changed, this function gets executed, where I get the current value and the previous one. In this case, I don't care about the previous one. And I do an Ajax call, fetch that value, and then when the response comes back, I populate the title and the article, and this will refresh the template with a new data. So basically, let's say this is my main application to give you a unified view on how this would actually function. We have the app.vue, and then I have some elements. And whenever one is clicked, this method gets executed. This changes the value of the article ID, which is a reactive property, which in turn changes the template here. And because this has changed, this is information that goes into the content component, which will be intercepted by the watcher and will trigger the refresh with the new information. So for example, in this case, again, if you see everything is encapsulated, however the content is being loaded, it's encapsulated in the content component. There's nothing from outside. From the outside, I just say load article one, load article two. That's it. How it loads it from where and so on, everything gets encapsulated in the content. 
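As a rough sketch of the parent-to-child direction plus the watcher just described — the child-to-parent direction would be the this.$emit('selected', ...) call from the menu item — with the endpoint and the axios dependency being assumptions rather than something from the slides:

```vue
<!-- Content.vue: the parent injects the article id as a prop
     (e.g. <content-pane :article-id="articleId">), and a watcher
     reloads the article whenever that id changes. -->
<template>
  <div>
    <article v-if="title">
      <h1>{{ title }}</h1>
      <p>{{ body }}</p>
    </article>
    <p v-else>Loading...</p>
  </div>
</template>

<script>
import axios from 'axios'

export default {
  props: ['articleId'],
  data () {
    return { title: null, body: null }
  },
  watch: {
    // Runs every time the parent passes a new article id.
    articleId (newId) {
      this.title = null
      axios.get('/api/articles/' + newId).then(response => {
        this.title = response.data.title
        this.body = response.data.body
      })
    }
  }
}
</script>
```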
How the response from the server is being parsed, whether it's JSON or XML or whatever, it's encapsulated here. So this makes it really easy to maintain a refactor. If I have a new type of response from the server, I just edit one file. And that's it. And at some point in a two-single-page application, you need routes. You need the stateful URLs. So for example, if I click on two, three buttons and I load new content in the page, I copy-paste the URL to and send it over WhatsApp or something to a friend. I expect that friend to see the exact same things that I see. So I need whenever I do an AJAX operation, I need to update the URL. I need to have stateful URLs. The URL should reflect the content of the page. And this is a client-side router. It comes as a separate package. It's not bundled with view. You need to install it and supports both HTML5 history API, which provides you with two URLs or uses the hash sign as a fallback, where you have your file hash and then the view URL. And installing the router is like a three-steps process. First, you need to create a file router, JS, where you define the routes. For example, when I'm in the route, this will be called the home route and will load the component home page. And then in my main, in the bootstrap script, if you remember, this is the entry point in view, I need to import this file and specify it here that this application needs to use this router. And in the template, I can add a router view. And whenever the user clicks on something and navigates to a different page, the component that's linked to that URL is loaded here. So you can have multiple pages being loaded by navigating without actually refreshing the page and hitting the server again. It also supports dynamic routes, which is, you know, article ID, for example, you can specify route parameters, nested routes, programmatic navigation. So, for example, you can navigate to a new route after five seconds or something. You don't need to wait necessarily for the user to click on something and allow adding watchers on routes. So, for example, if this kind of, you know, filling the component here when the route changes not powerful enough for what I need to do, I can just add a router on the watcher on the route. And whenever the route changes, I get the current route, the previous route, and then I can do whatever I want. I'm not going to spend much more time on the router because it does pretty much what any other server side or client side router does. If you use the router, you use them all. And I'm going to move to something called viewX. This is a view's response to Redux. Basically what it does is, this is the actual illustration from their website. I stole it from there. So, basically what it does, it provides a state store for the application because sometimes I have multiple components that need to react to a state change. So, if something has changed, I need to propagate that change into multiple components. And basically, viewX provides me with that. It has actions that are dispatched as a result. So, for example, let's say I click on... Okay. And this collection here uses some artees that who I used last time. I'm going to call Hello. Okay. Hello. Okay. This looks very hip-hoppy, so. Okay. Yeah, I don't know what to do with my hands now, so thank you for that. Okay, so I have mutations, and what does a mutation mean? It's a change of the state. Whenever I change the state, I call a mutation. 
And now, in the Vuex module, I have the state — I have three variables in the state. Whenever one of these variables changes, it means something has happened in my application, and I want the components to re-render. So for example, I have loading, which means I'm starting to load something from the server. So I want the components to display, like, you know, those spinners or something. I have an error, whether an error has occurred — sometimes I cannot fetch from the server, it's like the network is off and so on, and I want to display an error to the user. And my actual article, the thing that I'm loading from the server. And I have mutations. For example, when I start fetching, loading is true and there's no error. If the fetch was a success, I have no error, the loading is false, and I have an article. And if there's an error, I've finished loading — there's no more loading happening — and the error is true. And then I have an action, which is load. It receives an article ID. Initially it commits the fetch start, so I'm starting to display the spinners or whatever. I do the Ajax request. If the request is successful, I commit a fetch success. Or if there's an error, I commit a fetch error. And there are some limitations on the mutations. Mutations need to be synchronous, so you cannot have asynchronous code in a mutation. Why? Because this changes the state, and whenever you change the state, your other components start re-rendering. So you don't want to have half a state. You're either loading or you have finished loading. So it needs to be very atomic. Actions, on the other hand, can be asynchronous, and they end with a mutation. So I call an asynchronous operation, I finish it, and then I commit the change in the state. I have like five minutes? Two minutes. Okay, so I'm going to go really fast. I'm going to try to do the NAMI-NAM with the mic. So for example, I can use these helpers from Vuex and use the spread operator to map actions into the local scope of the components. So for example, whenever I call load article here, this will actually call the load action from Vuex. And then whenever the state is changed in Vuex — for example, let's say I have a new article — various components need to watch for that state change and act accordingly. So for example, in the breadcrumbs, I need to display the current node. In the menu, the current article is selected. In the content, I need to actually display the content. And in the main application, I need to maybe update the title tag of the page. And basically, I have another helper called mapState that allows me to import state — I have these three state variables — into my component's scope by using the spread operator. And then I can just add watchers; again, in Vue everything is watchable. And so far, Vue kind of delivers whatever I wanted from a framework. And, well, getting started with Vue... One minute, I promise. I know you're German. You can start the timer now. So to get started, it's really easy. Vue provides a generator tool that allows you to bootstrap projects really fast; you can just install it as a global package. And then you call vue init. You have multiple templates, Webpack and Browserify — remember, don't use Browserify — and then you specify the name of your project. And this creates a skeleton project with everything set up, tests, and so on.
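Pulling that store part together, here is a rough sketch of what such a Vuex module could look like; the endpoint and the axios dependency are assumptions for the sketch, not something from the slides:

```javascript
// store.js
import Vue from 'vue'
import Vuex from 'vuex'
import axios from 'axios'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    loading: false,
    error: false,
    article: null
  },
  mutations: {
    // Mutations are synchronous: they only flip the state.
    fetchStart (state) {
      state.loading = true
      state.error = false
    },
    fetchSuccess (state, article) {
      state.loading = false
      state.article = article
    },
    fetchError (state) {
      state.loading = false
      state.error = true
    }
  },
  actions: {
    // Actions may be asynchronous and end by committing a mutation.
    load ({ commit }, articleId) {
      commit('fetchStart')
      return axios.get('/api/articles/' + articleId)
        .then(response => commit('fetchSuccess', response.data))
        .catch(() => commit('fetchError'))
    }
  }
})
```

A component can then map the action into its own scope with mapActions(['load']) and read the state with mapState(['loading', 'error', 'article']), as described above.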
There are some DevTools that you can use in the browser, for Chrome, and they allow you to inspect the state and your components in real time. And some resources where you can look and find more about Vue and plugins. And thank you. APPLAUSE Was it one minute? Thank you very much. You're a bit over time, but if there's one or two questions, that's okay. Any questions? No? I want a question, okay. Can you take this microphone for answers? What other frameworks did you look at before settling on Vue? React. React and Angular 1. And jQuery. Last question. Hi. Good talk, by the way. So when you looked at React, did it have the licensing issues? When you looked at React, did it have the licensing issues you were talking about? If React had the licensing issues. Yes, when you looked at it. You mentioned in your previous answer that you looked at React, and I'm asking whether when you looked at it. It had. I think like two weeks ago, they changed back and they're using the MIT, I think, licensing model. What I didn't like about React was the JSX, because it made it difficult — for example, in Vue, I can just give the template to the designer and say, dude, this is pure HTML. You cannot put anything here that looks like JavaScript, it will work, whereas with JSX, I can't really give that to the designer. And also with a bit of webpack magic, I can move the template into a different file that I can give completely to the designer. Whereas in... I don't like designers touching my code. Okay, thank you, Tudor. Now... Yeah. Thank you.
Browsing Hacker News or the #javascript hash tag on Twitter may leave one with the impression that there are only two options to build a Single Page Application: Angular and React. Each with its own strengths and weaknesses, with its own vocal community and the tech giant behind it. Each portraying the other side as the Sith. Vue is the open source framework that brings a third alternative to the table, combining the strengths of both Angular and React while trying to weed out their weaknesses. The result is an easy to use, lightweight and versatile framework. In this talk we will explore Vue's architecture, see how components interact among themselves, have a look at the event model and in the end, how to wrap everything together in a SPA using Webpack.
10.5446/54912 (DOI)
The Pro user experience, I left out the question mark in the original talk submission. So the talk title is the Pro user experience, useful, usable and delightful, presented by myself. Let's start with a little bit about an interesting experience. Recently I had something called a chicken waffle. Anybody else has had that? Very interesting. It had raspberries, blackberries, chicken and a sweet waffle cone. And at the very, very end when you reach the bottom of the waffle, there is this bite of black pepper. So it was a very interesting, very different experience. And the thing about these things is that the only things we notice are experiences that are different. When something is what we expect it to be, it's, it happened. But if it's different, that's when we notice it. And that's probably why I noticed the chicken waffle. It's a little different from my normal menu. So I'm going to say a little bit about this concept of the customer journey and the concept of delight. And then that is going to frame the context of the rest of the talk. This idea of a customer journey is all the different experiences that a customer has as they interact with you, your service, your content management system, et cetera. And it involves people, processes and technologies. And as far as Plone is concerned, Plone would be about the technologies from the perspective of a content management system. So I'm just saying that just for context. There are many touch points. So this is where I have a little map here. And each one of those touch points is a customer experience. And even within your content management system, you have these experiences. So different is very important when you're, when you need to differentiate yourself from other possible platforms out there. I recently participated in the Google Summab code mentor summit. And as the Plone Foundation, we were present. And this was our list in there. I don't know if you can see it, but basically, it presents the Plone Foundation as building and maintaining the ultimate open source CMS, Typhon, slightly different there. And in fact, if you go to the Plone.com, you'll see the same thing. That's what we want to be, the ultimate open source enterprise CMS. So the question is, how can we be different? How are we going to pull that off? So I'm going to take a little tangent, but everything will come back together. This is a little bit about my trip and how I got here. I traveled down Norwegian air. And they had a little touch screen. And they had little details that were interesting. So for example, on the touch screen, there were little animations around the icons. And that just added a little extra, a little interest. And when the plane landed, they had these special lights. And the lights just lit up the roof right along all the way up. And they changed color and everything. Again, I noticed that, right? Because we tend to notice things that are different. I landed in Copenhagen. So this is Copenhagen airport. And the food in Copenhagen is amazing in the airport. It's not airport food. I'm used to landing in airports and seeing airport food, Burger King. But no, Copenhagen had some very, very interesting food. Turns out they've gotten awards for the food in the Copenhagen airport. Again, something different that I noticed. Also in the Copenhagen airport, the way that they displayed the time and the gates and everything, there was a little more detail than I was used to in other airports. So this doesn't capture it perfectly. 
But for some gates, it even told you how long it would take to get to the gate. Like seven minutes or eight minutes walk and things like that, which I didn't see in other airports. So that, again, was interesting and attention to detail. And as I walked to my gate — before I got there — in the bathroom, they made sure to have little hangers in there. You don't get that in every airport bathroom. So I was able to hang my bags while I used the bathroom. And as I walked to my gate, there was a little note to say how many minutes away I was from my gate. And even before I was even close, they even had the time on the floor. So as I was approaching my gate, I saw five and a half minutes away from my gate. All of this was very, very new, interesting, different, and attention to detail that helped Copenhagen airport stand out for me. I flew on Ryanair on the next leg. And that was also different. So they offered me this magazine. I thought, oh, interesting, a magazine — only to discover that the magazine was basically just a catalog of things they were selling. So I handed it back to them. I got seat 1A. So I ended up being at the exit. So what you're seeing there is not the emergency exit. This is where you would be playing. And those are my feet right by the exit. Also very different for me. Also, I had problems checking in. So when I reached there and explained that to them, they explained to me that it would cost me 60 euro, which I had to pay. So Ryanair was also a different experience. Oh, and that's my laptop in the corner, because I used it to charge my phone. They didn't have any place for me to charge my phone. So I put it in the pocket and used it to charge my phone. A couple years back, I came across this thing called Zope. And this is roughly how I came across it. I was at linux.com. And there were some — they offered open source ads. So different open source projects would advertise in the corner with banner ads. And I saw this banner ad. Well, I'm recreating it; it didn't quite look like this. And it said something like: it's a bird, it's a plane, no, it's Zope. That's interesting. So I clicked on it and downloaded Zope, started trying it out, reading about it. And that's how I got into Zope before I knew anything about coding in Python. I actually went and searched for how to use PHP on top of Zope. And I found something. And so for a while I was trying to use PHP on Zope. And then eventually I decided I would drop PHP and use Python. That was also a different experience. So let's talk a little bit about whose expectations are currently influencing us and how those expectations affect where Plone may have to go. So this is the useful and usable part. And what are these expectations? The thing about it is that, from the side of those who are designing these experiences, everything we think about in terms of our own designs is very biased. And until it reaches the reality of persons who are using it, we don't really realize how bad our systems are. In fact, one of the things that I did at the Google Summer of Code mentor summit — it was an unconference, so they had open spaces everywhere — is that I ran an open space for usability testing. And we did usability testing around Plone. And so I had Plone tested with three people who had never used it before. And it was interesting to discover the expectations. I asked one person to create a page. And they interpreted that to mean a website. And so they went through the Plone interface trying to figure out where do I add a website.
Somebody else who is from the Internet Archive — he works with the Internet Archive — he went to set up a page, went through and made progress, added a link, highlighted the link. And then he expected that when you highlight the link, it would remember the link you highlighted and put it as the default option when you start to put in your hyperlink information. Which happens in other interfaces. So these are all expectations that users now have. Another user expected that out of the box, a Plone page would allow you to drag and drop components onto the page. These were not — we didn't go out to look for these people. These were three persons I interacted with. And those were the type of expectations they had. So I'm just pointing that out. Interestingly, at the welcome, one user of Plone said very humbly — he said he's not an expert — but he said, you know, we really need to pay attention to the end user experience. And these are his words. He said, this is now a requirement for all projects. So it's really something we need to be very aware of. So who are our customers of this ultimate CMS that we want to create? They are the themers, the integrators and the content managers. And every time I speak to the customers, it just reminds me how much more we need to do. And this is not to say that Plone doesn't bring a lot of value. Plone does have a lot of great features, which we can talk about. So these are some of the people who were at my training. And I got lots of feedback about things that they wanted to be able to do. You know, why can't we just take a WordPress theme and put it on Plone? Why are things like adding a carousel so hard? Things that people expect to be able to just do. And I see some of them here now, Romana and others. So these are real people. And they are here at this conference. And they're here because they want to get as much as possible out of Plone and maybe influence the improvement of Plone. Let's talk a little bit about satisfaction. The concept of satisfaction tends to be related to your expectations, which is why I spoke about expectations. Expectations unfortunately change. And what was good enough to satisfy a customer in 2000 is not necessarily good enough to satisfy a customer in 2017. So every airport I went to, when I put my hand underneath the sink — from Jamaica all the way to Germany to Barcelona — the water just came out. Or you had something you pressed on and it ran for a little while and then stopped. None of that was delightful or surprising or unexpected. It might have been, you know, the water coming out when you put your hand in front of it 10 years ago, but not anymore. So expectations do shift. When I go in a car now, I expect that when I press a little button, the window will go up and down. When I was much younger, that was an impressive car. Now that's a normal car and it's strange to see cars where you go like this for the window to go up and down. This is a picture of the airport in Cologne, Germany. And this is nice, but compared to Copenhagen, it didn't have the type of information and detail that I got out of the gate information there. But I was satisfied. It was what I expected. Dissatisfaction — dissatisfaction is when your expectations are not met.
In the end, I just called a Lyft, which was their competitor. And they came very quickly. Interestingly, the Lyft driver was also an Uber driver, so I don't know. So when your expectations are not met, that's dissatisfaction. So people now expect a drag and drop experience when they're putting together their page. And so it's not just Plone — a lot of the open source platforms are still offering pages where you can't do that. So that could cause dissatisfaction, and people say this is not user friendly. 10, 15 years ago, just being able to type text and make it appear without knowing HTML was great. Now that's basic. And then delight. Delight is when your expectations are exceeded, like good food at an airport, or those lights showing up in the Norwegian airplane. Here's a quick — this is very abridged — a little background about Plone and where it came from. So there's this thing called the Portal Toolkit which was developed on top of Zope. And somebody thought we could make this better, a guy named Tres Seaver. So he created, or was quite involved in creating, the CMF, the content management framework. On top of the content management framework, these guys built what was pretty much a skin, because they didn't like how the content management framework default skin looked. And so they created something that looked better. If this reminds you a little of Wikipedia, it's because Wikipedia borrowed quite a bit of the CSS. And of course, in those days, this was a really nice interface, especially compared to this. So, Plone was one of the earliest systems to resolve URLs to code instead of files. And it inherited that from Zope. So it allowed you to decouple things. So you didn't have to FTP things to the file system and name the file the same thing as what was going to be served and so forth. And it shipped with a web UI. I mean, that was brilliant. And a lot of systems weren't doing that at the time. And then these guys — maybe I have my history wrong, but I think this is what happened — these guys later on came up with the Zope component architecture, which gave more power in terms of being able to build out cohesive systems in a sane manner. In fact, one of them said that those who do not study Zope are condemned to reinvent it. And we see a little of that when we look at other Python frameworks and stuff like that. This is Plone 2.1. And what were the features that Plone 2.1 had? Plone 2.1 had live search. So you type in the search box and it searches as you type. When I showed that to customers, they were blown away. They just added something and now you can just type and you get live search. It had collections. It didn't call them collections at the time, but that was a way of having stored queries. And that was, wow! And it had a WYSIWYG editor, which was brilliant. You didn't have to type some type of markdown or something like that when creating your pages. Before that, I had to train customers in what was this thing — structured text, I think it was called something like that. So those were particular features at the time, because the competing platforms didn't have that, or most of them didn't. But how do you stay ahead of the delight curve? To stay ahead of the delight curve, you have to start by meeting the current expectations and then you have to figure out how you are going to exceed those expectations. So back then, user expectations were a lot lower. So the green curve there is user expectations. The blue curve is the Plone experience and how it has improved over time.
And the intersection is about 2010. Somewhere around 2010, things like Internet Explorer 7 were thrown to the curb. And people started getting brave about what they made in web interfaces. You started to see things like inline photo editing and things like that. And some of the pioneers of that, certainly the ones who have been very successful, are people like Weebly, Wix and Squarespace. Now none of these platforms are taking everything into account as powerful as Plone. But what they do have is a user experience, a user interface that is nicer than Plone. And because a lot of these are not very expensive platforms, the expectations that people have start to go up. If I can get this with a free Weebly site, and this is supposed to be an enterprise CMS, then of course I should be able to get this with that. And thankfully, the Plone ecosystem is shifting and moving towards that. But what is happening is that user expectations are now higher than the experience that you're getting in Plone. And that's something to pay attention to. In fact, I actually did a blog post in 2010 about this very same thing. And in my blog post, I had this screenshot from Weebly. And basically, in 2010, Weebly had a system where you could edit, you wouldn't need Photoshop if you needed to crop, edit, prepare your photos. They even had a photo search system so you could find the photos you want and never leave the Weebly interface. And when I saw that, I said, hmm, things are changing. This was about the time when definitely we had, I would see in that there were systems that were certainly surpassing the Plone editing experience. By the way, this list here is a list, and you can just Google, just ask for just Google Weebly features. This is a list of what you get out of the box with Weebly now. There was a time when for Weebly, I would say to people, oh yeah, it's nice, it's easy, useful, but there's a threshold of things you can't do. So if you want to have e-commerce shopping, you won't be able to do it. But they added that. And then I said, well, you can't have hierarchical organization of your pages. They added that. Oh, but you can't have like secure pages, but they added password protected pages. If you need to run a forum with members, you can't do that. But now they have apps that integrate that allow you to do that. The point is, it's kind of a situation where to stay ahead of the curve is getting harder and trickier. And we appreciate this is an open source project. So it's not like a company that is able to bring together their employees and say, all right, this is what we're going to build. This is a drag and drop experience in Weebly. So inside of this interface, you are able to drag items, images, as well as composite components, like rich text component that may include an image. Or if you want to include a slideshow, you just drag it across and then start to add images to your slideshow. And this is stuff that they were doing since 2010. So it's not new stuff, but I'm not here to promote Weebly. So my thought on the light, we're not necessarily trying to get to the point where every single thing is blown is delightful. That's not realistic. And that's kind of, it's just something you add to your talk so that people might show up. But delight is when you have opportunities to delight, you do, right? But before you get to there, you have to get to the baseline of satisfaction. 
So we need to understand the expectations and look for ways to make sure that where we're below the expectation curve, we can come up to the expectation curve. And then there will be opportunities to add things, little details that people won't expect. And that's where the delight comes in. This is my current minimum satisfaction hit list. There are probably other things, but: a drag and drop website builder. We're getting there with Mosaic and I think it's the right direction. And even if Mosaic doesn't become V1, it certainly will influence V1. Inline photo editing. We have the capability to do that — there's Plone image cropping — but it doesn't allow you to do things like, you know, filters and things like that. Really, that's where we want to go. It's what people expect at this point. Responsive templates. So we do need to reach a point where we have a couple of templates that people can just click on and add and that cover most use cases. And easy publishing. What I mean by easy publishing is the full cycle of getting Plone running, whether locally or on a cloud somewhere, and then getting Plone into production very easily. Because again, when you think of competing solutions — and I've tried a couple of them — you can click a couple of buttons and be live. Now, a lot of those are hosted by individual companies and Plone isn't necessarily in that same setup, and I appreciate that. Plone also often is pitched at enterprise and, you know, education and all these things. But it is something that we should target. Because if we hit that target, it only makes deployment and things easier for enterprise customers. So, my closing thoughts: I think the goal of the Plone community should be to measure expectations and work towards meeting those expectations, which means having more conversations with the end users. Focus on satisfaction, not necessarily on delight. Delight is a nice extra, but it's not necessarily where you go first. Tick all the boxes that get to satisfaction, and then you can move towards delight. So, add delightful extras where possible, because those are always nice talking points. And just remember that we notice different. So, as you add things that satisfy the customer, you're also looking for the things that make Plone different. And we already have a lot of things like that. The fact that when you move a content item from one location to another location, it's smart enough to do an auto redirect, so that if someone tries to go to the old location, it redirects them to the new location. Even cut and paste already exceeds expectations — with a lot of other content management systems, cutting and pasting and moving things around is just not as flexible. So we certainly do have things that we can offer that still exceed expectations, but there are also things in the user experience that we need to work on. So, my hope is that as we move forward, we can help to make Plone different, but good — not the Ryanair way. Sorry — not different, bad. And these are a couple of references. The slides will be available if you want to look through some of the reading that has influenced some of my ideas. And questions. And here's contact information. This is my Twitter and email address. Thank you very much, David. Are there any questions from your side? What would be your number one thing to implement or to change in Plone to increase the user experience? I'd have to go back to my hit list.
If we get the drag and drop experience right, I think that would be huge. And secondly, making deployment easy enough for a drive-by user who doesn't know much. Because if you get that right — and if you look at it, everything: Google Sites, Weebly, Wix, Squarespace — all of them are already doing that. Thank you.
If we think of using Plone as a customer journey we can begin to break down the journey into key touch points. How can we improve each touch point so that they become more Useful, Usable and Delightful?
10.5446/54913 (DOI)
So again, hi everyone. It's good to join you for the first round of chats here at this year's Plone Conference. I'm Devin. I'm the VP of Venture and Analytic, a medical AI startup that is using deep learning to save lives by helping doctors improve their accuracy diagnosing their patients. I'm happy to be here with you to share some architectural patterns and best practices around designing a great API. Now there's obviously a lot that goes into an API. It's not just naming conventions around your endpoints or some layout structure. That's all very important, but that's just one piece of the puzzle. So I'd say that this talk today is focused more around how do you build an engineering ecosystem that encapsulates your API and that will allow your developers to be productive. So a lot of these kinds of patterns I'm going to share today are pretty popular just because they're really useful. So I expect you've probably heard of a good number of them. But for those that you haven't heard of, I would hope that whether you are a developer, a manager, or an engineering team leader, you'll gain some extra insights and context from this talk to bring some things back to your team and help them develop something that they both enjoy building and their users will love to use. So there's going to be a lot of different practices. So I'm going to go through each of them at a very brief, high level and then save some time at the end to dig deeper into anything specific that people want to talk more about. So when looking at an API, I'd say there are always two groups of people that you want to consider. There is your development team that's actually building the API, and the end users that are typically a third-party integrator or people that end up utilizing or consuming your API. Most companies nowadays, even if you have an internal API that's not part of your end product, I'd say it's important to have that mentality that your back-end team may be your developers, but you do have a user or a consumer — that is your front-end team. And when you try and get a bit more serious about these practices and building a good ecosystem around your API, even if it's just a purely internal one, that will hopefully lead to a lot more productivity and less frustration for your team. So when looking at things that kind of help make developers a lot happier, I'd say one is having an intuitive API. It's easy to ramp up and onboard new people. And in general, when people open up their code, it's something that they'd expect. People like it when it's hard to break your code — no one likes to break production — and also when it's easy to work with and extend. So as far as making intuitive APIs, part of that is, when a developer is opening up a code base, having the reality match their expectations, so that they're not going to be surprised. So I'd say part of that is readability, but another part is around being consistent and enforcing standards. So in general, I'd like to think that whenever things are standardized, things are inherently more streamlined and faster. So imagine you're trying to help out another developer on your team. If their environment is totally different than yours, you're going to waste more time trying to learn what their setup is versus actually solving the problem and helping them. So in general, let's say systems start standardized, and as developers have to make unique decisions of how to set up their system, their system gets progressively more and more unique. We call that a snowflake. In general, snowflakes are bad. So to start with a simple lighthearted example of what makes a system a snowflake, I'd say one is hard-coded variables. So you have some runtime variables that you need to set in your production environment, whether it's a connection string to a database, what have you. And whenever someone's pulling that code locally, they have to make some modifications to connect with their localhost database, for example. And so in general, having these hard-coded strings is probably bad practice. Even just because if you have all these hard-coded strings in 20, 30 different files around your code base and it's not centralized, it's hard to find all those different variables. If you have it in one place, it's easier to read and edit. And also, whenever you are keeping these sensitive pieces of information or these variables in a file, you'll notice a very common pattern where, if you have some production file with a hard-coded string in it and you need to make a modification to that string, essentially what you're going to end up doing is you either accidentally commit that hard-coded string update to the Git repo, or you have to do some finagling of resetting that string every single time. And a separate thing I would say, especially whenever you have private or sensitive information stored in these variables, is that it's important to keep that kind of thing separate.
So in general, let's say systems start standardized and as developers have to make unique decisions of how to set up their system, then their system gets progressively more and more unique. We call that a snowflake. In general, snowflakes are bad. So to start with a simple lighthearted example of what makes a system a snowflake, I'd say one is being hard-coded variables. So you have some runtime variables that you need to set in your production environment, whether it's a connection string to a database, what have you. And whenever someone's pulling that code locally, they have to make some modifications to connect with their local host database, for example. And so in general, having these hard-code strings is probably bad practice. Even just because if you have all these hard-coded strings and 20, 30 different files around your code base and it's not centralized, it's hard to find all those different variables. If you have it in one place, it's easier to read and edit. And also, whenever you are keeping these sensitive pieces of information or these variables in file, you'll notice a very common pattern where if you have some production file with a hard-coded string in it, you need to make a modification to that string. Essentially what you're going to end up doing is you have to either accidentally commit that hard-coded string update to Git repo, or you have to do some finagling of resetting that string every single time. And a separate thing I would say is around, especially whenever you have private or sensitive information stored in these variables, it's important to have that kind of separate. I like to think it's only a joke, but there are a lot of people that will leak sensitive or confidential information because they are keeping hard-coded values. Yeah, so just an example of if you search GitHub for removed password, there's 341,000 public commits, which I think is kind of ridiculous. And also, that's not all of the commits that happened. That's only the subset of people where they committed a password to production and then chose to literally name their commit, remove password. And here's an example of someone removing their password, go figure. So how do you get rid of these kind of passwords or other hard-coded information? And part of that is around configuration files and environment files. So imagine that you have some configuration file with a list of key value pairs that you can have default values that you check in to your code base. And then whenever a developer will take those files, they'll make a local copy and add any variables that they need for their local setup, but that doesn't get checked in. So then all those variables load in one time and you don't have to worry about all these things being edited because it's all in just one place and it doesn't have to get checked in. So taking this a step further, one of the things that is really useful is having separate, I'd say, environments and configurations for your app runtime object. So imagine if your development and your production system is totally different from your testing system. So when you're running unit tests, you want your test to run on a separate database than whatever you're developing on your staging system. So when you have these types of configurations as far as the Python level, it's really useful to delineate those. 
With that in place, it's easy to have your app use your local database when running locally, while the test system uses a separate testing database. Speaking of databases, an important topic that a lot of teams use, but probably not all of them, is ORMs. Essentially, an ORM is a way of translating your database tables into Python objects that you use at runtime. Say you have a user table: you don't only have the relational table, you also have that object defined and available in memory. Traditionally, without an ORM, you have to write inline SQL, which is a bit messy, and after you execute your query you typically have to massage the result to make it usable; with an ORM, the objects are standardized and the code is a lot cleaner to read. Once the ORM is in place, a separate question people have is: sure, I have all these objects defined, and that's great, but if my production database, my staging database, and every developer's local database have different schemas, how do I track and standardize that? Your code is easily versioned and you know exactly where you are through Git history, but traditional database schemas don't have that. This is where database migration tools come in; for Python specifically, Alembic is a great library for it. Briefly, Alembic tracks all of your database changes through things called migrations. If you have some Python object, say a text report, and you add an attribute field like an author, you can automatically generate a file that performs both the upgrade and the downgrade of that change. So whenever someone switches to a branch containing your code, they can instantly recreate your database structure as well. One quirk of Alembic out of the box that's useful to know: if you run the auto-generate command, the default is that each revision gets a unique random hash as its identifier. That's not very readable, and you don't really know what order the revisions are in; since they're random hashes, they sort essentially randomly, and the first revision might end up near the bottom of the listing, which makes jumping between version files painful when you're debugging. A way to improve the out-of-the-box setup is to change the file template to include something like a date, so the revisions are clearly ordered: here's version five, here's version six, and it's much easier to step through and understand how things changed.
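To make the ORM point concrete, here is a minimal SQLAlchemy sketch; the User model and the query are illustrative assumptions rather than the speaker's actual code:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(80), nullable=False)
    email = Column(String(120), unique=True)

engine = create_engine("sqlite:///dev.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

# Instead of inline SQL ("SELECT * FROM users WHERE email = ?") plus manual
# row unpacking, the ORM hands back real objects:
user = session.query(User).filter_by(email="devin@example.com").first()
if user is not None:
    print(user.name)
```

With models defined this way, Alembic's autogenerate can diff them against the live schema, and the date-based revision naming mentioned above is controlled by the `file_template` setting in `alembic.ini`.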
A separate point: if the database structure is standardized, what about the data in it? A typical approach is something like a seed file, which essentially captures the rows of an example database and lets you recreate them. What's really useful about this is that, from a totally empty database, it takes you from zero to a standardized state instantly. Once you have a standard seed, people can mess around worry-free: if something goes wrong, they can destroy the database and recreate it right away. And having standard, expected rows in the database makes testing a lot easier. There are plenty of ways to set up those files, whether it's a shell script or a Python file, but in general you take a database dump for each version, and as the database grows you create a new seed file. A separate problem I see from developers who juggle a lot of projects: why isn't this running, where are my packages? There's a common workflow where you try to run an application, you get an error saying you're missing a dependency, you install it, you try again, and you keep looping through missing dependencies, which is annoying. To resolve this, people use a requirements file: a list of all the dependencies your app needs to run properly. When someone hits one of those "you seem to be missing something" errors, they install from the requirements file once and everything gets installed, instead of juggling dependencies one at a time. A separate useful thing about most requirements files is that you don't have to use public packages: you can reference GitHub branches, even private ones, or local file systems. I've found local file system references particularly useful when you have two repositories where one depends on the other and you're iterating quickly. If every two- or three-line change in project two means committing, pushing to GitHub, and pulling the dependency over just to test, that's a lot of wasted time, the workflow is suboptimal, and your GitHub history ends up looking pretty gross. If you reference a local path instead, you can commit your changes locally without pushing, update your dependencies, and test from there. Also, on the theme of files seemingly not being up to date, a common problem is PYC files. Whether you're renaming files or restructuring a project, you'll sometimes notice your Python file seems to run but isn't the updated version. A good quick sanity check is to delete all your PYC files, which are just compiled temporary placeholders.
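Going back to the seed idea, here is a minimal sketch in Python; it reuses the hypothetical User model and Session factory from the ORM sketch above, so the names are assumptions:

```python
# Reuses the hypothetical User model / Session factory from the ORM sketch above.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def seed(session):
    """Reset the users table to a known, standard state for development and tests."""
    session.query(User).delete()          # wipe whatever is there
    session.add_all([
        User(name="Alice Admin", email="alice@example.com"),
        User(name="Bob Builder", email="bob@example.com"),
    ])
    session.commit()

if __name__ == "__main__":
    engine = create_engine("sqlite:///dev.db")
    seed(sessionmaker(bind=engine)())
```

Running it once takes an empty or broken database back to the standard state everyone else is working against.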
A separate point about making a system easy to ramp up on is the readme file. A lot of people have readmes, but readers run into the problem of: hey, I went through all these steps, and guess what, it doesn't work. There's an error, there's no documentation for it, I don't know what to do now, and it prevents me from getting up and running. One useful thing: if you already have all these steps describing how to get the application ready, why not automate them? Create a setup script, in Python or shell or whatever, that does everything in your readme, and people run it once instead of copy-pasting separate lines and doing things by hand. Moving on to the durability of applications: again, no one likes to break production. If you make your system hard to break, your developers will be happier, because no one wants to feel stupid or make those mistakes. A great answer for durability is, of course, unit tests. Before your eyes glaze over: yes, we all know unit tests are supposedly awesome and useful, but people still don't write them, even though there are plenty of great examples of how to. My goal here isn't to show exactly how to write every unit test; it's to say, if you're not using unit tests, please do, and here's a counterargument to the usual reasons people give. One: people say it's a waste of time, let's build features, this just makes everything slower, and it's boring. My counterargument: if you're worried about wasting time, remember you already have to verify your code works before you ship it to production, so the question is whether you want that verification to be manual or automated. As your application gets more complicated, manually clicking through the app to verify everything is a lot slower than running automated tests, which actually save you time and let you write more features. As for tests being boring: yes, writing tests is probably a bit more boring than writing features, but it's a lot better than being a human click farm going through every flow of the app. If I have ten different pages, every change to the code base means verifying all ten user flows; if three of them are broken, I go back, change a few lines, and click through all ten again. That is mundane, boring, and frustrating, so writing unit tests is the less boring option.
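As a small illustration of how cheap an automated check can be, here is a hedged pytest sketch for a Flask app built with a hypothetical create_app factory and TestConfig (both assumed, not from the talk):

```python
import pytest
from myapp import create_app          # hypothetical application factory
from myapp.config import TestConfig   # hypothetical test configuration

@pytest.fixture
def client():
    app = create_app(TestConfig)
    with app.test_client() as client:
        yield client

def test_health_endpoint(client):
    # One automated request replaces a manual click-through of this flow.
    response = client.get("/api/v1/health")
    assert response.status_code == 200
    assert response.get_json()["status"] == "ok"
```

A suite of these runs in seconds on every change, which is the whole time-saving argument.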
Another issue a lot of people have: if you're juggling a lot of different projects on the same machine, shared dependencies cause problems. Imagine two projects, each of which has the same dependency but at a different version. If you install everything into your global system, you have no way to track those conflicts, and you'll end up in one of three scenarios: there's a conflict and one of your apps fails; you get lucky, in that you're using the wrong version but it doesn't seem to break anything; or you assume you have a package that you really don't, and it's missing from your requirements, so although it works on your laptop, when someone else tries to run it they don't have that dependency and their system seems broken. A way to alleviate this is to have a separate container for each project's dependencies, which is what a virtual environment is: every time you install a list of dependencies, they're stored for that particular project, with no cross-contamination. There are standard utilities for setting this up. One important thing to keep in mind: whenever you install dependencies, make sure you see the little brackets in your prompt telling you the virtual environment is actually activated, because if it isn't, you're installing into your global scope, and that's problematic. Beyond letting people work on multiple projects, a separate benefit is answering questions like: can I run these dependency upgrades, and will they break my system? Having different virtual environments to verify each case is a useful way to test that things still work. Still on durability, and going back to databases for a moment: I've noticed a lot of developers sprinkle commits everywhere when they use an ORM. For context, when you save information to a database through an ORM there are two operations: a flush, which says, hey, I want to reserve this block of rows, and a commit, which actually saves them. Now imagine you commit inside a loop, hit an error partway through, and then commit some more afterwards. That's problematic for two main reasons. One, if you commit on every iteration, each commit is an extra round trip to your database, and those full back-and-forth requests add up and get slow. Two, if you hit a one-time error and the app crashes, the test record never gets saved, but all of its answers do; suddenly you have dangling, seemingly unrelated rows in your database. A good way to avoid malformed or partial data is to only ever commit once: add all of your answers, then add the test, and whether or not there's an error, either nothing gets saved or everything gets saved, which is a lot better than pushing some subset of the data. And there's a separate reason why flush is useful.
If you're handling foreign keys or relationships between objects, you need the ID of one object to put into another before committing. A good tool for that is a session flush, which says: for the rows I just added, reserve their spots in the database so I can get their IDs, but don't finalize them, they're not inserted yet. If there's still an error, everything is fine and I don't end up with malformed data. One last quick point on this: it's nice to have rollbacks, or to remove sessions, so that if a session gets into a broken state, say because of a malformed query, you can roll it back or remove it before a new request comes in. Otherwise, when the next request arrives and the session tries to execute a query, it just says, hey, I'm still broken, and nothing works. There are hooks you can add to your app lifecycle that clean this up between requests. Now, as far as making your application more flexible and easy to extend: briefly, there are a lot of ways to structure an app. You have the place where your runtime code lives, the production and development code that actually runs, and then your setup files and configuration. One useful habit when writing tests is to have your test directory structure mirror your application structure, so it's easier to step through the directories and verify which tests correspond to what. A separate point: app factories. If you're using something like Flask, there's the common Flask(__name__) call that creates your runtime object, and without a factory it's hard to delineate between your app module and your app runtime object. App factories are a great way to make that separation of concerns: you register your application, set it up, and connect it with all of your other resources, like your API routes or your database session. The pattern not only separates those concerns, it also lets your development runtime and your test runtime use completely different variables. To expand on the separation of concerns: a common problem in projects that don't use a factory is circular imports, where the app depends on the API, the API depends on the database, and the database needs something from the app. Once you're in that cycle of circular dependencies, testing the application gets a lot harder, and an application factory helps remove the issue. Another thing that's useful alongside application factories is blueprints: if you have a hundred different endpoints or routes in your application, it's very useful to semantically group categories of endpoints.
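To tie the session handling above together (flush for IDs, one commit, rollback on failure), here is a hedged SQLAlchemy sketch with hypothetical Test and Answer models:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Test(Base):
    __tablename__ = "tests"
    id = Column(Integer, primary_key=True)
    name = Column(String(80))

class Answer(Base):
    __tablename__ = "answers"
    id = Column(Integer, primary_key=True)
    test_id = Column(Integer, ForeignKey("tests.id"))
    text = Column(String(200))

def save_test_with_answers(session, answers_data):
    """Flush to get the test ID, commit exactly once, roll back on any failure."""
    try:
        test = Test(name="weekly quiz")
        session.add(test)
        session.flush()   # reserves the row and assigns test.id, but nothing is final yet
        for item in answers_data:
            session.add(Answer(test_id=test.id, text=item))
        session.commit()  # one round trip: either everything is saved or nothing is
    except SQLAlchemyError:
        session.rollback()  # leave the session usable for the next request
        raise
```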
Coming back to blueprints: imagine you have a bunch of admin routes and a bunch of publicly available routes. If these are registered separately, not only is there a clear delineation of intent and of who can use what, you can also customize the request lifecycle per group. In this particular example, every time someone hits an admin route it prints something like "this admin route was requested". What's really useful about these lifecycle hooks is things like different session management or different user validation per group, which is a flexible way to avoid treating every API endpoint exactly the same. A brief related point about handling inbound requests: it's useful to have a session lifetime that isn't infinite or fixed per token but works on a rolling basis, where every new request revalidates someone's session for, say, another 30 minutes. With something like Flask I've found this a useful pattern for making sure sessions don't hang around too long while staying convenient. And a point about debugging your code in a flexible way: a lot of teams debug by scattering print statements, which is problematic in a couple of ways. One, all those prints make the code messy. Two, you end up in a very repetitive loop: print some variables, hit an error, modify a couple of lines and a couple of print statements, run it again, hit the error again, and so on. That's a pretty suboptimal workflow. The Python debugger is very useful here: if you've ever used JavaScript development tools like the Chrome debugger, it's the same idea of breakpoints, stepping through your code line by line, continuing, or jumping between breakpoints. A debugger not only lets you pause the code and verify, hey, are these values what I expect, it also lets you modify a variable right there in the runtime environment and check that things behave as expected, so you can resolve the error interactively instead of going through twenty or thirty cycles of changing one line at a time. Moving on from making developers happy, let's talk about making your users happy. Users want the API to work, they want it to be sensible and easy to use, and they want it to be fast. On reliability, one goal is maximizing uptime, and there are two sides to that: before an error happens and when an error happens. As a preventative measure before an issue occurs: of course, don't ship bugs, but that's easier said than done.
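Going back to the factory, blueprint, and rolling-session ideas for a moment, here is a minimal hedged sketch; the route, the printed message, and the 30-minute window are illustrative assumptions:

```python
from datetime import timedelta
from flask import Blueprint, Flask, session

admin_bp = Blueprint("admin", __name__, url_prefix="/admin")

@admin_bp.before_request
def log_and_refresh_session():
    # Runs only for admin routes: log the hit and keep the session on a rolling window.
    print("admin route was requested")
    session.permanent = True   # with the lifetime below, each request re-arms the window

@admin_bp.route("/stats")
def stats():
    return {"users": 42}

def create_app(config_object=None):
    """Application factory: build and wire the app instead of using a module-level global."""
    app = Flask(__name__)
    if config_object is not None:
        app.config.from_object(config_object)
    app.config.setdefault("SECRET_KEY", "dev-only-secret")
    app.permanent_session_lifetime = timedelta(minutes=30)
    app.register_blueprint(admin_bp)
    return app
```

Because the app is created inside a function, tests can call create_app(TestConfig) and get a completely separate runtime from development or production.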
As preventative measures, then, you do other things like unit tests, or a progressive rollout, where deploying new code doesn't automatically affect 100% of your users: it might only affect 3%, you verify that it works, and then you progressively move more users onto the new version before rolling it out to everyone. And if a bug does get out there, how do you resolve it as fast as possible? Some tools I've found useful: of course, log all of your issues and failures, but in particular, something like a Slack hook, where any time a specific kind of error, say a 500, happens, your team is notified instantly so the ops people or whoever is on call can respond quickly. It's also useful not only to log information but to proactively check for issues: a cron job that checks every minute whether your endpoints are behaving as expected will hopefully catch a problem before an actual user hits it. Another way to make your API more reliable is versioning. The great thing about versioning, even though a lot of teams skip it, is that shipping new code doesn't have to break other people's systems: anyone who has adopted your API can trust that a fixed version won't change underneath them. As for how to achieve it, there are plenty of options, whether a separate deployment for each version or one big deployment; either way, for every version you have your configuration and your Git history, you track that this is version 1.0 or 1.1, and you automatically prepend that to all the endpoints. It's a useful way of not breaking what people expect to work, and of planning when you'll stop supporting certain features. As far as figuring out when to stop things, I find it useful to track analytics on your API. A lot of people put Google Analytics on their website or web app; why not on the API itself? The value is twofold. One, you know which areas of your code are most frequently used and most popular, so you can optimize those particular areas. Two, if something is deprecated and people are still using those deprecated endpoints, it's nice to send a friendly reminder email saying, hey, this is deprecated, we won't support it in six months or a year, just a heads up; they'll be less frustrated when it actually breaks because you gave prior warning. Adding analytics to an API can be a bit tough, but a very simple version could be as basic as a single counter incremented every time someone calls a specific endpoint: a key-value table where you look up the endpoint and bump it, so you know it was called 100 times.
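As a hedged sketch of both ideas, a version prefix plus a naive per-endpoint counter (the in-memory dict is purely for illustration; a real setup would persist this somewhere):

```python
from collections import Counter
from flask import Blueprint, Flask, request

API_VERSION = "v1"                      # could come from configuration / Git history
api_v1 = Blueprint("api_v1", __name__, url_prefix=f"/api/{API_VERSION}")
hits = Counter()                        # naive analytics store

@api_v1.route("/reports")
def list_reports():
    return {"reports": []}

def create_app():
    app = Flask(__name__)

    @app.before_request
    def count_endpoint():
        # request.endpoint names the view handling this call, e.g. "api_v1.list_reports"
        hits[request.endpoint] += 1

    app.register_blueprint(api_v1)
    return app
```

Shipping a breaking change then means registering a new blueprint under /api/v2 while the old prefix keeps working for existing integrators.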
Of course, there are more sophisticated ops-level tools that will also tell you things like how many times an endpoint was called in the past hour, which is useful. On the point of being user friendly, briefly touching on endpoint design: there are a lot of opinionated arguments about naming conventions and how to lay out your API. I'd say the most important thing is to be consistent with whatever system you choose and to be intentional about that design decision. No matter what you pick, a bunch of people on one side or the other will complain, and that's fine; they may even be right about some local optimization for a particular endpoint, but that optimization may not generalize to how your entire API works together. Some simple things I think are useful: adopt something like REST, and be aware that there is more to it than GET and POST; there are plenty of other methods that give people more ways to interact with your API. Also, a lot of APIs only ever return the default 200 response, when there are plenty of status codes available to tell your users why something happened; you can give a lot more context than a plain 200. On documentation: if it isn't documented, you should assume it doesn't exist. Everyone agrees that even wrong documentation is less bad than no documentation at all, because with none you're left in the middle of nowhere with no idea how to proceed. So take documentation a bit more seriously: people say it's boring, but you only have to write it once, and if you have a million users they'll read it many times, so it pays dividends to write at least a simple version. The last thing I'd mention is interactive documentation. I'm not sure if you've ever used something like Postman, but it's a very useful tool for keeping track of all your endpoints with example queries: you put a payload in, you see the response, and it's a great way to show quickly how someone is actually supposed to start using your API. Whether it's someone new onboarding to your team or a partner adopting your system, that's a lot more useful than just reading a readme. Postman can also be used for debugging: if you already have all your endpoints loaded and some erroring request comes in, you can copy-paste the payload of that particular error into Postman and verify whether it responds the way you expected.
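Returning to the earlier point about methods and status codes, a hedged Flask sketch with illustrative routes and an in-memory store:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = {}                                  # id -> report payload, illustration only

@app.route("/api/v1/reports", methods=["POST"])
def create_report():
    body = request.get_json(silent=True)
    if not body or "title" not in body:
        return jsonify(error="title is required"), 400  # say what went wrong
    report_id = len(reports) + 1
    reports[report_id] = body
    return jsonify(id=report_id), 201                    # created, not just 200

@app.route("/api/v1/reports/<int:report_id>", methods=["GET", "DELETE"])
def report_detail(report_id):
    if report_id not in reports:
        return jsonify(error="not found"), 404
    if request.method == "DELETE":
        del reports[report_id]
        return "", 204                                    # deleted, nothing to return
    return jsonify(reports[report_id])
```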
One last pro tip about Postman: a lot of people type in the entire URL of the endpoint they're hitting, and what I've found very useful is Postman's environment variables. You define, say, which host I'm supposed to be hitting, and whether you're testing production, your dev environment, staging, or a locally hosted API, you change that one variable once instead of editing a hundred saved requests. That's just a handy Postman feature. As far as speed, some quick points. To get transparency into what your application is doing, there are a lot of profiling tools out there. For those not familiar with profiling, it's essentially introspecting your code as it runs: what is being called, how often, and which parts of the code take the most time. There are command-line tools like cProfile and plenty of GUIs on top of them, and they're a quick sanity check on which areas of your code are slowest and where the low-hanging fruit for improvement is. Another speed point, on databases: ORMs typically aren't built for things like bulk inserts, but if you're inserting hundreds or thousands of rows at the same time, it's very useful to use the bulk commands, which can easily give a 10x speedup. Without bulk inserts, if you create a new ORM object and commit the session for each one, you're doing a hundred thousand round trips to the database: insert this row, insert this row, insert this row. With a bulk query, you create all hundred thousand objects and send one big request, which ends up a lot faster than many small ones. Last point, on caching: if you have common endpoints that always return the same response, it's useful not to recalculate it every single time. Cache the response in memory, and the next time someone makes the request, instead of recomputing it, you just fetch it and hand it back. There are a lot of caching libraries; if you're on Flask, there are caching extensions that make this easy. So those are all the points I have on improving APIs. Thanks, I appreciate your time, and we've saved some time at the end for any questions about areas you'd like to dig deeper into. Thanks.

Thank you very much. We have about five minutes for discussion. Are there any questions from your side? I have one. Yep. Let's say you're building your API directly from the schema, automatically, in whatever way. Currently, when a developer comes by and changes a model, and that reflects a change in your API, what we do is rely on code review to pick that change up and then increase the version of the API. Do you know of any better approach for automating that? Because you can automate database migration with Alembic; we use that.
But is there a way to also automate increasing the version of the API, or something like that? So Alembic itself will always bump revision numbers, but as for bumping the version of your API, part of the complexity is deciding how much to bump: is it a minor, medium, or major revision? Automating that is hard, because it's hard to tell whether something is truly a minor or a major update as far as your API version is concerned. But there are things like Git hooks, which may not be directly related to Alembic, where you run some pre-computed checks on the changed files and then ask in a dialogue: hey, it looks like you changed this thing, maybe you want to bump the revision number, have you considered that? The moment people are about to check that code in might be a useful time to introduce an automated "should we do a version bump?" prompt. Do you rely on code review to make that change today? In general, whenever you bump the API version, it's good for that not to be just a scrum-level conversation; it should involve engineering management, making sure as a team: does it make sense to bump a version? Because that decision potentially affects everyone, it's better made as a business-level decision rather than an "I'm doing this feature" level decision. Any more questions? Then I have one: do you have a recommendation for automatic documentation of your endpoints, things like that? Off the top of my head I don't have a specific example that does automated generation, but I know there are a few products out there that attach to your Git history and auto-create those readme docs. I don't know their exact names because we don't use any of them; we like to write our documentation by hand to provide more context, but there are definitely tools that will do that. Cool. Then thank you very much for your talk.
Flask is a very lightweight micro-framework for Python. Compared to other, bulkier frameworks, Flask is easier to get started with and is often more open and modular, allowing developers to use their preferred packages and dependencies. For many teams, this flexible modularity comes at a cost: developers implement suboptimal patterns that prevent their apps/APIs from fully scaling and create tech debt that lowers productivity. In this talk I'll cover an intro to the Flask framework, why it's a great option for some teams, and how it works. I'll share some real-world examples of best practices, common pitfalls (and how to avoid them), and design patterns that make APIs integrators love using and developers love building. Some specific topics I'll cover: Flask architecture, request lifecycles, security & auth packages, REST + CRUD API patterns, ORMs, separation of concerns, information hiding.
10.5446/54914 (DOI)
So yeah, I'm going to talk about front-end today. First question: what is front-end about? That's super simple. Front-end is about making better websites. OK? That's all it is. And we all want that. We don't want to build just a plain, boring website; we want good, very nice websites. That has always been the reason we have been using JavaScript since the very beginning: we needed to make things better, to give a better experience to the user, a better experience to the contributor and the editor, and so on. That is still true. But over time, front-end has become an actual domain in the software industry, and that's something new. We used to have one person in charge of the JavaScript at the same time as the website and the back-end setup and so on. Now there are people doing just front-end. It's its own domain: you can hire someone to be a front-end developer, you can learn front-end and its tools. That was not true back then. But the original objective is still the same: better websites. And when you see what you can do with front-end development now, it's a total game changer; you can build very powerful, very attractive websites. So when we say JavaScript is the future, that's not even what we're talking about: JavaScript is now. We use it now, we need it now, for any website or web application you might imagine. And the question is: what about CMSs and front-end? How is that going? Well, let me talk about Drupal. This year, around March I think, I read a very interesting article from someone in the Drupal community asking: is Drupal a burning platform? He was saying that nowadays Drupal is seen by many as the SharePoint of the JavaScript generation. What does that mean? Yeah, I use SharePoint because my company uses it, so I have no choice; that's not really fun. The same goes for Drupal, and probably the same for Plone: yeah, I build websites, but the thing runs on this CMS, so I have to use it; it's just the job. I don't want that. For instance, still about Drupal, the article mentioned that many Drupal freelancers don't use Drupal for their own websites; they use a static site generator. That means something. So to me we are in a fairly similar situation to Drupal, and I don't want to reach the point where people say: oh yeah, Plone, I know that, my father used it a lot. No way. So these people think they can work without a CMS. I read another article which was epic: a blog post from people at, I think, the New York Times, who manage content for their websites in Google Spreadsheets. One row per item, with columns for the date, the summary, and so on, in a Google spreadsheet. Me: what? That's crazy. And they say, no, no, it's really cool, because the UI is super simple, authentication is really nice, and the API is great, so that's all we need. I mean, no. Seriously, a spreadsheet cannot replace a CMS. It makes no sense.
Instead of something that provides excellent features, developed over decades by people like us, providing everything you might imagine, those front-end folks say: use a spreadsheet. It sounds totally stupid to me, and probably to all of you. But when you talk to front-end developers, they actually say: no, it's cool, I like this idea, I would do it myself. And when we answer: OK, cool is a good thing, but think about workflow, access control, content type definitions, media management, all of this; they don't care. All the great things we have been doing for years, they just don't care about. The problem is, they do need those features somehow. You do need media management. You do need to be able to validate content. So what do they do? They prefer to re-implement. They spend their time re-implementing what something like Plone could provide, because they obviously need the CMS features; they just don't want to use a CMS, so they rebuild the pieces to make it work. So what could we do? There are several approaches. The first is to ignore them. Yeah, I'm not sure. We could have done that ten years ago, but now it's not an option anymore. It's just not possible. We all agree that pure HTML, no-JavaScript pages are not an option anymore, right? We need front-end technology to build websites, so we need these people, and most of the time they are our colleagues, so we have to work with them anyway. It's not easy to make them work with Plone, but we need them to build the website, and we want to run the whole website with Plone. Another approach is to integrate front-end into Plone. That's probably what we have been trying to do in Plone 5. But it seems that managing the front-end tooling, and adapting best practices from the front-end world inside Plone, is difficult. It takes a lot of time, a huge amount of time. Just consider that by the time it is done, the classical JS toolchain has moved on by three generations, and we are still using the old one. It does not work. The result might be OK in the end, but it's not efficient: not efficient for us, and not efficient for the front-end developers. I have been pushing for a long time for things like Diazo, Dexterity, Mosaic, Rapido. All of those are very good steps that a front-end developer can use, because you don't need to know a lot about Plone to use them. You can go through the Plone interface and manage the design, or you can manage it from the outside and push it as a zip folder: your Diazo rules, your HTML, your JS. With the JS bundled into Plone, it can work; we can work with that. But the productivity is not there. If you ask a front-end developer to work that way, with Diazo and so on, they can manage it, but they are not as productive and not as happy as with their regular toolchain, using npm and all the good tooling you can imagine. They have this tooling; if we take it away and pull them inside Plone, it's not efficient, and they don't enjoy it anyway.
So that is not a good solution. Now, if you think again about the Google spreadsheet, maybe we have to stop calling it stupid and try to understand why they think it's cool and smooth. We need to stop criticizing their approach just because we have much better knowledge about how content management should work. If front-end developers like this kind of approach, putting data into a Google spreadsheet in order to use it on the front-end side, there are probably good reasons for that. So let's try to understand, and let's try to promote Plone as a very good spreadsheet. Let's say: imagine the best spreadsheet ever; it doesn't even look like a spreadsheet. Isn't that cool? We need to go to these people and explain: you could use this, it's much better than a spreadsheet, and it will be easy for you. That's what headless CMS is about, and we are not the only community thinking about headless CMS. The reason is that we want to provide something similar to a Google spreadsheet, but better, and not annoy the front-end developer with the rest of the stack: generating HTML pages, managing the JavaScript and CSS on the back-end, and so on. No. We just provide an API, and they build their views the way they want. They can use React, or Vue.js, or Angular, or whatever; I don't care. They can use the API. That's why we are making Plone headless, and I think it makes a lot of sense. Maybe you have heard of the ThoughtWorks Technology Radar. They evaluate different technologies, approaches, and methodologies, and for a couple of months now they have tagged "CMS as a platform" with a Hold sign: hold it, it's no longer a good approach. They say they see many organizations running into trouble because they attempt to use their CMS as a platform for delivering large and complex applications. That's something you can do, but we don't want to do it anymore; it doesn't play well with the rest. They recommend using the CMS as one component of the platform, running headless and cooperating cleanly with all the other services. So headless seems to be a good approach on the business side. But it's also a good approach on the technology side, because we have been trying to mix back-end and front-end for years now in Plone, and, yeah, mockup and the resource registry are difficult: difficult to use, difficult to maintain. We need to move. So if we move to a headless CMS, what is the competition? Because there are a lot of offers. There are commercial offers like Firebase; Firebase is nice, but it's mainly focused on building applications, it's not designed to store website content. You have also, maybe you've heard of it, Contentful; it's just a proprietary product, and no, I don't think I would use that. You have Cosmic JS, for instance, which is not much more than a kind of Django admin UI where you're supposed to manage your content. Not convincing to me. You have GraphCMS, which promotes a GraphQL interface to the CMS, basically.
GraphQL is excellent, and it's very handy to use as a front-end developer, that's for sure, but the CMS itself is very thin; it does almost nothing. And the same goes for the open source approaches. For instance, headless Drupal is interesting, and it does exist, but it's focused on content only: delivering content, modifying content, and that's pretty much it. You could use it to build a website that integrates content management, but you cannot really get full CMS features out of it. And one small remark: none of them, in the whole list I've just mentioned, implements breadcrumbs. Breadcrumbs, to my mind, are a small thing, but they are used very often. I tried to implement them once in Angular; it took me four days, and in the end it was not even working properly. And that's exactly why you become very productive as a front-end developer when you use a CMS that provides this: you get an endpoint for your breadcrumbs, you just create the markup you want, and it works. It is dynamically refreshed, it gets translated when you switch language, it respects navigation settings like excluded items, and so on. That is surprisingly difficult to achieve by hand. And Plone has been doing breadcrumbs since 2001, so we can promote Plone. Maybe you're not aware of it in the sense front-end people would see it, but Plone rocks. Plone is fantastic. You have all the good features you might imagine: hierarchical content, flexible content types, access control, workflow, content rules. All of that is wonderful for anyone who wants to build a website. And it's secure; we know we can sell Plone on that. And it's open source. That sounds like a very good, very interesting offer. And compared to other CMSs, the Plone core is extremely rich. Going back to breadcrumbs: in Drupal, they don't have breadcrumbs in core; there are extra modules, several of them by the way, to implement breadcrumbs. That makes it very difficult to expose breadcrumbs through an API, because you have to build your API on the core and then make sure it talks to all the different modules, which can all be very different. So they don't do it; it's not really possible for them. We can do it. We can deliver a super rich REST API, and we do have this API. So we are in a very good position here compared to the others.
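To make the breadcrumbs point concrete, here is a hedged sketch of calling the plone.restapi breadcrumbs endpoint from Python; the site URL, path, and credentials are made up, and the exact response shape can vary between plone.restapi versions:

```python
import requests

PLONE_SITE = "http://localhost:8080/Plone"   # assumed local site

response = requests.get(
    f"{PLONE_SITE}/news/october/some-news-item/@breadcrumbs",
    headers={"Accept": "application/json"},
    auth=("admin", "secret"),                 # or anonymous for public content
)
response.raise_for_status()
data = response.json()

# Usually a list of {"title": ..., "@id": ...} entries from the root down to the
# current context; render it however the front end likes.
items = data["items"] if isinstance(data, dict) else data
for crumb in items:
    print(crumb["title"], "->", crumb["@id"])
```

The front end only has to turn that list into markup; the ordering, translation, and navigation settings are handled by Plone.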
So now, if we say, OK, let's move ahead with the REST API: as I mentioned, we have an implementation which is excellent and which covers all the different aspects of Plone very easily; you can manage the registry, you can manage a lot of things, not just content, you can do everything with the API. Then, on the front-end side, the question is how to choose the JS framework. There are so many frameworks, and we have been considering this for a long time, even when we were starting Plone 5: do we need a JS framework, and which one will still be around in four years? Many of them change very fast, the tooling changes very fast too, and picking the right one is too risky. So not choosing a single JS framework is probably the right approach, though of course we need to focus on some of them. For now, we have been focusing on the two main ones, Angular and React. We would love to have a Vue.js implementation too; it would be great if someone wanted to sprint on that at the conference. Does it make sense to have several implementations? It does, because what we want to sell now is the API, the REST API. We need to be able to tell front-end people: you're free to choose your favourite framework. If we say, yeah, there's a fantastic API, but you have to use Angular and that's it, a lot of people will say: not for me, I don't like it. And the same goes for React: if you impose React, people like me who don't like React won't use it. So it's important to provide a broad offer. Of course we don't want to waste energy, because maintaining several implementations takes time. So the team making the Angular SDK and the team making plone-react are discussing a lot, and we want to reuse external components as much as possible. We don't want to maintain our own CSS framework, for instance; that would be stupid. We don't want to maintain our own widgets; it takes too much time. We want to reuse external components, and we also want to share elements between the two implementations. For instance, Pastanaga. I don't think you can see this slide well; I drew it by hand because I was too lazy to do it properly. The idea is to have a CSS framework which is not ours, something very well maintained by a large community; here I mention Bootstrap 4, although I'm not sure we'll pick Bootstrap 4. We use it to implement Pastanaga, and so we get one single Pastanaga npm package providing the implementation: the markup, the widgets, the CSS. Then we reuse it in the Angular SDK and in plone-react as a dependency, so both have the same dependency, plus their own specific binding for Bootstrap: ng-bootstrap on one side and react-bootstrap on the other, again just as an example. That way we reuse external components and share common elements between the implementations where we can. That's the logic of this approach. So now, let me talk about the Angular SDK, @plone/restapi-angular; yes, the very long and bad name is because of the German guy, I will not mention him. This package is an SDK: it wraps plone.restapi into services and components that can be used as building blocks to build a website. Here is a small diagram, also drawn by hand, so it's not really pretty, but anyway. You can see that every element of the page is a component that you can customize. You use them in your page template, Bootstrap or Material or whatever, and you say: here I put the breadcrumbs, here I put the navigation, here I have the view, which will change depending on the context. And by the way, it implements traversal. Traversal is a great thing that nobody outside Plone seems to understand; I have been talking about it with very smart people in charge of important frameworks, and they don't get it.
It's kind of funny, but all of them just focus on routing, and routing is not good enough, because with routing you define patterns. You say slash customer, slash an ID, display a customer, and according to the ID you get the right one; or slash news, slash an ID, you get a news item. But for websites you don't want that. You want to be free to have the folder structure you want: you might have a News folder with some news and also a subfolder for October, with the news for October, and you can move it around, and it must keep working perfectly for the webmaster, right? We cannot hard-code in the application, ahead of time, all the different paths and page URLs. So we need traversal, and we implemented it. When you load a page, the app takes the current path, asks the back-end for it, and gets back the context, and you can use this context. Then you map a view onto it: depending on the type of content you get as context, you have views declared, very much like you do with ZCA views. You register them and say: this is the default view for News, and this is the default view for Collection, or whatever. So it's very similar to what we have in Plone, but in Angular. We also have a kind of z3c.form-like library, able to produce a form based on a JSON schema. At the moment the REST API exposes all the content types as JSON schemas; JSON Schema is a standard for describing data structures, and we have enriched it a little, extended it, in order to define widgets, fieldsets, ordering, and so on, everything needed to build the form. The form generation library is very flexible, very much like z3c.form, because you can register your own widgets, you can control field dependencies, all of that. Everything z3c.form provides, I have been implementing in JavaScript, in Angular. It's very powerful, and we use it to run real websites. So let me show you quickly, without going too deep into the technical side, what the markup looks like. You can create a header with, for instance, the Plone global navigation: that's a tag implemented in the Angular SDK, and you just put the tag wherever you want. If you're not happy with its rendering, you can customize it: you can extend the default component, for instance, and provide your own template, and you can also override the logic if you want. Everything is very flexible. And you have the traversal outlet, which is where the current view of the current content is going to be rendered. That's quite simple to get. Here is an example of a view; it's a view for a page content type, for instance. It has a method named onTraverse, which is called every time we traverse to something. If we traverse to a content item and the traversal finds that the current view is this component, this class, it runs this code. As you can see here, all I do is enable or disable the local navigation depending on whether I have a text content or some items in the folder; that's just an example. You have the context in the data: that's what you get from your back-end, the actual JSON object.
The very JSON returned by the API is there, and you can use it wherever you need it. Another example here is how I display an image: in my view implementation I can use context.image, so if I have an image I display it, with an img src of context.image.scales.large.download. That's exactly the structure of the content returned by the API, and I just use it. I don't care much about how the context is set up; I just focus on what I need to do, very much like a page template. Same with the title and the description: you can see we have context.title and context.description, and we just render them. So that's how you create a website using the SDK. It's very easy; some of you were at my training yesterday, and I'm pretty sure everybody felt how simple it is, even if you're not a front-end developer. And if you are a front-end developer, you're right at home: it's super easy to do, the REST API is fantastic, so you get a good feeling about it. Now let's look at some real-life examples, because we are already using this in production on real websites. The first one is for a French territory that wants to offer a risk-management platform to all its citizens. It's in French. You can click around, and as you can see, it does not reload pages: when you click on a menu entry, the URL changes, but the page is not reloaded; it's all dynamic, and it gets the content from the API. So it's quite fast compared to a regular website, and it's running on a very small server. Sorry, give me a second. Behind it, it's just a basic Plone multisite setup; I actually have one single server with a lot of Plone sites on it. There is no custom Python code at all: I just installed Plone, installed the plone.restapi package, defined the content types through the web, and then I export the models and put them into my npm package, because I have a way to push a GenericSetup-style configuration from the npm package to Plone. So everything is managed from the front-end project only. And how do I build such a website? It's super easy. For instance, I customized the global navigation and the header, I created a view for the page content type, I created a view for the search bar. Say I search for a town: it's super easy to have auto-completion, that's something you basically get for free with a framework like Angular. Here is a city page: I made my own implementation of it quite easily, just by displaying the different fields I want. This date, for instance, and this risk, are fields of my custom content type, and I can just use them in my template. One more thing, quickly. We know we can do a lot with Python, right? And people say: fine, you can have a website with a JavaScript front-end, but when it comes to more complex features, like generating documents, it's going to be difficult. Well, if you think you cannot generate a PDF in JavaScript, think again. It is possible. I generate this with JavaScript only: I get my information from the REST API and I generate this form, which is an official one; it has to be laid out very precisely.
You cannot lay this form out any way you want; you have to do it their way. We have some rows, for instance, with fields, and they are dynamically filled into the PDF. So we can do that. Actually, we can do everything in JavaScript; just forget what you used to think about it.

Before that, I was going to generate these documents on the server side, and that's annoying, because I don't want extra dependencies to run myself. I thought I would need something like OpenOffice, plus generation code and templates, and it would be difficult to manage. I don't like that. Well, it turns out you don't need it, and that was quite amazing actually. You can just define your template and create the PDF that way, and it's quite fast, as you can see. That's also very interesting because here the CPU being used is the client's one. I'm not wasting resources on my server or in some container; I just send some content, and somebody else, even with an old computer, can process it to produce the PDF. So I like this.

What else should I say about it? The way the site has been built is all from the outside. For instance, to import all the initial data, there is a list of all the cities of the territory, quite a long list, with all the different districts, et cetera, and that has been done using Postman. You know Postman, the tool which is able to talk to a REST API server? Using Postman, I made a kind of scenario able to create a city, fill in the different fields, and create the sub-folders, a lot of them, with the different documents you can download. I made that scenario in Postman, then I ran it over a CSV file, and everything was created that way. You don't even need to be on the Plone server to do that.

The site is also refreshed regularly, because there are dates and data that change: risks occurring, risks which are validated or invalidated, et cetera. For that, people just deposit a CSV file on the backend. The back office is plain Plone, which is very nice, so editors simply attach their new spreadsheet with the list of dates for all cities, and the content is processed to update all the different pages. So that's all driven from the outside, and that's really good. It's also economical, because I don't need to deploy a lot of different Plone instances and so on; I just have one Plone somewhere, Plone as a service, right?

The next example is totally different. It's a contest registration site: companies can go there and register their candidacy for a contest. It's actually running on an existing Plone. The organization has a huge Plone site which they use for all their internal processes; it's an intranet, with a lot of documents, security, different permissions and workflows, a lot of things that describe all the processes of this organization. But they need to run this registration website for outside people, externally, and they still want to manage the information in their intranet: the contests and so on.
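The talk doesn't name the PDF library that was used, so the sketch below takes pdfmake as one plausible choice, just to show the general shape of client-side generation: build a document definition from data fetched over the REST API and let the browser render and download it. The field names (title, risks) and the function are illustrative, not the project's actual code.

```typescript
import pdfMake from 'pdfmake/build/pdfmake';
import pdfFonts from 'pdfmake/build/vfs_fonts';

// Embedded fonts so the PDF can be produced entirely in the browser.
pdfMake.vfs = pdfFonts.pdfMake.vfs;

// Build a document definition from content fetched over the REST API,
// then let the client's CPU do the work; no server-side process involved.
function downloadCityReport(city: { title: string; risks: string[] }): void {
  const docDefinition = {
    content: [
      { text: city.title, fontSize: 18, bold: true },
      { text: 'Known risks', margin: [0, 10, 0, 4] },
      { ul: city.risks }            // bullet list of risk labels
    ]
  };
  pdfMake.createPdf(docDefinition).download(`${city.title}.pdf`);
}
```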
So it's Plone 4; it's quite an old site, on Plone 4. I just activated the REST API, and I built an external website with Angular. It's super simple. You can also download your PDF form when you have finished your registration, and you can create your account: if you don't have an account, you're going to create one, and everything is done through the REST API. External users never enter the actual intranet itself; that's not possible for them, they just use this small application. So that's a way to spin off a little application for a given group of users, maybe existing users, out of an existing intranet, quite easily. That's also quite nice.

But websites can actually be totally different from this. So far we have been talking about regular websites, and you could do that in Plone itself, of course. But with front-end technology you can imagine everything: new UIs, new ways to display information. You can imagine mobile applications. I don't know if you went to the presentation today about building a mobile application; it was amazing. It allows you to run a mobile application on top of any existing site: just install the REST API on one side, take the app project, change your backend, and it's done, you have a mobile application. You can also imagine an Electron application; Electron can run as a standalone application on any computer. A lot of things are possible.

Look at this example. This is a Plone site, and it's a mind map implemented on the front end with Angular. It's just a small demo I made, and it's super simple to do: it just gets the information from the REST API, so you get the folder structure, and then we render it with D3.js, which is a nice library for this kind of thing, and you get this kind of feature. So this is another way to discover content. You can imagine very rich interfaces, very valuable ones, all using the same API at the end. There are tons of JavaScript libraries you can use to do crazy things: you can do a map, you can do 3D, you can do everything you can imagine. That's possible. And you are not limited anymore by the rendering capabilities of Plone; you are out of that, you just get the content from the API.

So the question is, and I'm almost done, I'm out of time: where are we now? Well, we are at this point. We have plone.restapi and the Angular SDK, which is quite advanced, and as I mentioned, we use it in production, and it does a lot. We have plone-react, which is excellent too, and which basically implements the existing Plone UI in React. That's Barceloneta for now, but the plan is to do it with Pastanaga, and it will cover the whole Plone UI: content editing, sharing, configuration, everything. And we want to maintain a multi-framework approach, so Angular and React are both provided, and maybe more frameworks later. And it's ready, you can use it right now. As I mentioned, I've been using Plone 4, so Plone 4 with Archetypes content is possible; you can do it with Plone 5; and with Guillotina as well, because Guillotina is compliant with the same REST API.

What have we learned? Well, we learned that there are a lot of great things the front end brings to Plone. It allows us to make beautiful and dynamic websites, it allows us to be more productive, and basically, to me, it makes Plone fun again.
It's so much fun to work with Angular on top of this great API, because the API really is great here. And in the other direction, what are the good things that Plone brings to the front end? Well, we have some very good old ideas. We have, as I mentioned, traversal. And pluggability. Pluggability is a wonderful thing in Plone, something we know a lot about, and it does not mean much to most people dealing with JavaScript frameworks. They don't get it, because they just focus on creating one application, and they don't care much about pluggability inside that one app. But if you create something like a CMS as a front-end application, then you need pluggability, to be able to add something without rebuilding everything and without changing the code of the core application. We are going to bring this kind of concept to the front-end world; they don't have that for now. And we also bring an excellent, excellent CMS API. Really: a friend of mine, who is a very good front-end developer, tried almost all the commercial headless CMSes existing at the moment, and they are all very limited in features. So there is actually a huge market here, and we can be the solution for those people. We can be that. We don't always realize how good this API is, how flexible it is; that's something we need to promote. And that's my final point. Thank you.

Thank you. If you have questions, maybe you can catch him outside and ask anything you need. Another big applause for him. Thank you.
In 2017, frontend development is everywhere, accomplishing great things and transforming the web. Still, many CMSes don't seem to embrace it fully. Plone now offers an Angular SDK which brings all the power of Plone to frontend developers. This talk explains the Plone Angular SDK's core principles and details some real use cases.
10.5446/54915 (DOI)
Before I get started: Paul was talking about his Batman role, and I have something to share. There's a game we used to play at conferences; we call it conference spinning. When you're talking with someone, there's a natural tendency to keep turning to keep the conversation open. So we'd start turning into people so that you're face to face, and they turn away a little bit more. Our game was to see if we could turn someone a full 360 degrees. We'd be across the room, and one of us would just put their hands up: yeah, I see it. So I apologize if I've ever done that to you.

So, I'm here to give the annual State of Plone talk. First of all, I want to welcome you all to Barcelona. It's really great to be here. Barcelona has been the site of so many awesome things that have happened to Plone in recent years. Plone 5's default theme is named after it: Barceloneta. Much of Mockup happened here. Mr. Roboto, our semi-friendly testing robot, was born here. The Guillotina project was born here two years ago. And a lot of plone.org happened here as well.

People want me to comment on Plone 5.1 for some reason, so I just want to make a quick run through what we are adding in Plone 5.1. It's coming soon; release candidate 1 is out now. We're adding a direct link from the Sharing tab straight to a group's member list. It's a small change, but it will make things a lot easier. We're adding collective.indexing into Plone core, which basically adds hooks for indexing, reindexing and unindexing, to make cataloging on large sites run faster. There are a lot of changes to the configuration registry as well: the ability to split the registry XML into multiple files, delete, add, and export/import options in the control panel, and also the ability to conditionally import registry records, which comes in handy if you're adding something that targets a specific Plone version. And we're also making changes to the resource registry: splitting apart the core JavaScript and CSS from the add-on JavaScript and CSS, and rebuilding whenever you add or remove an add-on. We're integrating more import actions into the control panel, so that's one less reason to go into the Zope Management Interface, which is always good. We're supporting hi-dpi image scales and adding enhancements to icons and thumbnails, and auto-rotating images based on their EXIF data. And we're doing some clean-ups as well: removing portal_quickinstaller, removing plone.app.openid, and replacing ElementTree with lxml in the transforms. People keep asking when it's going to be out; I say soon. Like I said, we're at release candidate 1 now, so it's nearly ready. And it was pointed out to me yesterday that we broke the undo form in Zope, which definitely needs to be fixed, because that is a super useful feature.

So today I really wanted to talk about stories. And I was hesitant to do this, because storytelling tends to be the thing where, once you've been in the community long enough to start feeling old, you get up on stage and just say, well, back in the day, blah, blah, blah. So I don't really want to tell stories today, but I want to talk about stories. And the reason for that is this thing called generational turnover. It basically boils down to the idea that people leave projects and other people come in, assuming you're doing it right. If you're not doing it right, then the project just disappears.
And this is something we face in the point itself. Right now we are really the second generation of contributors. There is a large number. I wanted to talk about this today to talk about all the people that left. A bunch of them showed up today. So I don't know how to deal with that. So we'll see how that goes. You can do the life cycle of a long-term contributor project like Plum. It'll look something like a bell curve. They come in and sort of sniff it to code base. And maybe make a few plug fixes here and there. And eventually they get hopes or there's a good reason for them to, their work finds out pay here at corporate contributor. You can make these fixes for us. And they start to really become more involved. And over time, I know we'll get to the point where maybe a framework team member, then they'll taper off and maybe run a conference or join the board and eventually disappear. Sorry, that was... So yeah. Over time, people become highly involved and they'll become less involved as things like economic pressures, personal lives, job changes will affect their ability to contribute. And the same thing happens with generations. So a group of contributors coming in will tend to look very much the same. We've seen this in Plum where there's an end in flow of major contributors to the project. So the research around it is really interesting. I kind of started delving into it when I realized that this book was happening with us about five years ago when we had a number of corporate contributors leave Hanover, Andy, Levy. And I really became interested in it, realizing that this is what's going on, why is it happening, and what happened and how can we deal with it. So the research around it talks about this idea of half of it, the point at which half the developers active during the period are no longer in today. They talk about the Demian project, for example, which has a developer half of it for around seven years. It's seven and a half years. And it's interesting that it's pretty similar to the human body, which they say it replaces itself every seven years. And really the project is basically replacing its own constituent parts, and it's a natural process. And that turnover is necessary because it stases, especially in a software project, the project comes stagnant and it loses its ability to innovate and move forward. And so this process of continuous renewal brings in new ideas, new perspectives and new skill sets, which really helps the project continue in the future. And normally this turnover is having to continually, this one generation is diminishing and the other one is coming in. The pitfall is really dropping in the time between the two. The smaller the overlap between those two generations, the better chance there is of losing knowledge that exists within the project. No single person knows everything there is to know about a particular project. So it's really important that you build a metronome to see across people, contribute to the project. And that's things like the history of the code, the history of the organization itself, and the cultural organization. And you'll see things like that gap between generations can resolve things like orphan code and orphan initiatives. And it's really difficult to recover that lost knowledge, since it's only partially recorded and scattered across places like comments, tickets and mailing lists. And once that project, once that knowledge really isn't shared, it completely disappears from the project. 
And there's different ways you can share that knowledge. Typically that's done through documentation of the code and the different organizational processes the project uses. And there's also stories. Joe Macbeth who wrote a book about working with open source communities, called the Army Community, says if we slice the team open, we can see a number of generations that bring us into free time. Each generation is a source of stories and also a source of mentorship. Each generation passes down stories, experiences and life lessons to the new generation. And that sharing is really vital to the project's continued existence. Plum's longevity is really tied into this idea of storytelling. Martin Spelly wrote a master's thesis called Plum, a model and mature open source project. And it's pretty old by now. But it's really interesting to look back through it and see some quotes from people in the early years. Alex Levy, Dilbert, Alan Runyon and Paul Ibrit. And it really gives a great idea of what makes Plum a work as a project. And a lot of them are really still haveable today, which is what I find so fascinating about it. Levy says another thing that separates the Plum community from a lot of other communities is the amount of face-to-face communication that they have. We can be best friends in real life, but we still argue agitatively about particular part of implementation detail about a planning agreement. Or as he goes, we want to meet you and make sure you're both the same. Which is why we have this. Paul Ibrit said, the Plum community is a tangible, warm, friendly and inviting thing to me. It's the lens through which I see Plum. In fact, it is Plum. The software comes and goes, changes it, etc. It's the people that make Plum what it is. So our generation really came into Plum. I view us as the second generation of Plum. And our generation really came in hearing stories about the castle sprints and the archipelago sprints where they had to carry this. They had a local subversion server. It wasn't connected to the internet. They had to carry it over the hill each night to sink it up with the mainland. So that they had the most recent code. Our generation has, we've had our own castle sprints now, which I'm so happy about. So we've been able to do these cast sprints in Castle in Austria. For instance, this year we went to Finland, we went to Cape Town, South Africa and Japan. Maybe the most exotic of all, Oshkoso's castle. We have our own stories. The shiny disco pants and badass battle boots. And the fact that there's now a reminder that no, you cannot bring a katana onto the plain. It's the Plum Ranger. Starting a current Plum member is a backer singer. And it's the URV that we rented from Python 2013 to shell people into downtown San Francisco since Santa Clara was so boring. This is really the sort of horrible evangelism that we do. It helps really solidify who we are as people within the community of the outlets. That's when Python attendee put it in made code with Django, which you hang out with Plum. The story will shape our future as well. There's a common term used when talking about open source communities called catberry. I never really liked that term. It shows men's and their difficulty of getting a group of dispassionate, unfocused people moving in the same direction. But I really can't think of a single time when standing behind and trying to go people to move towards a specific destination has ever really worked. I mean, there's always laser pointers. 
But if you have to pass, you know that as soon as you turn off that laser pointer, they will stare at the wall for the next 15 minutes and make you feel guilty. And they'll basically ignore everything else going on around them. There's a quote I read recently about shipbuilding that I thought was really applicable to what we do. And it says, if you want to build a ship, don't drum up the men and women together. Divide the work and give orders instead. If you want to build a ship, don't drum up the men and women together. Divide the work and give orders instead. Teach them as you year in for the investment in the sea. And really the best work we've done as a community has come from storytelling. From saying beyond the horizon is the thing and there's wonderful things that we will take as well. I tried to find the source for that quote. And the quote of Eskerek believes it's really modernization of the statements. And my French pronunciation is horrible, so I'm not going to attempt to insult everyone. But his book Citadel is not as distinct, but I feel it does a lot more appropriately. He says, one will weave the canvas, another will fellotree, another will forge nails, and there will be others who observe the stars and learn how to magnetically. Building a boat isn't about weaving canvas, or forging nails, or reaching the sky. Or reading the sky. It's about giving a shared taste for the sea, but a light of which you will see nothing contradictory, but rather a community of love. And I don't think that could describe us better. We have a history of using these stories in our project. Martin Spelly, he's typically my go-to scapegoat at these conferences. Since he's not here this year, I'll say nothing but nice things about him. He wrote a blog post called Pete and Andy Triplone 4. This is back in 2010, I believe. No, it was well before that. And basically detailing the first two weeks of using Plone for two year generators. And I love the read bits of it too, but the great thing is it sounds like current Plone. It basically was a set of user stories for Plone as a project. And not just code, but also documentation, the installers, the add-ons. And it really touched on features that would later become part of Plone Core. Things like data, so Mosaic, the theme editor, and Mr. Bob. While not everything came true, it was still something we could look at and decide that, yes, we are on the right track. This makes sense as a community. This makes sense as a feature. It's really starting from my own journey as a Plone Controller. He mentions the theme editor, the team's respect for it in here. And I took that and went off and built this project called Glowworm, which was a crazy piece of crap. But it did some stuff that we just weren't going to go into before. And later turned into a project called Banjo, which later turned into the dinosaur theme for the Recap-in Core today. Steven Ban followed that up a few years later with something called Enos Becomes a Plone Developer. We're basically following up the same story a few years later. Pete from the previous story is now a manager. It doesn't get to code anymore. Andy Neewer sure was a misillum. Not speaking from experience, but. And a new Python developer brought in to learn the system. And he really uses it to highlight a lot of what Cone has got to write in the intervening years. But he does also point out some gaps in things like our documentation, the installers, and the intelligent generation tools. 
And he used that as a launching point for a few sprints where we fixed those issues. And the stories don't have to be told with words either. At the Emerald Sprint in 2014 we had a designer with us, and he did a mock-up of a new design for the add-ons screen, our add-on installation screen. As soon as everyone there saw it, we knew we had to add it. It was a vision: I want that, go build that for me right now. It has changed a bit in the current implementation, but we have it now in Plone 5, and if you remember what that screen looked like in older versions, it was horrendous. It has really moved quite a bit forward.

So we really realized many of the stories told by prior generations. Martin's blog post talks about things like Dexterity, what's now Diazo, recurring events, and Mosaic. These were set forth by the initial Plone generation; they talked about them for years, and now, at Plone 5, we have integrated the last of them into core and realized their dreams. So it's really time for us to start writing our own stories, not necessarily for ourselves, but for the generations that follow. We've started doing that. Eric Bréhault has done a lot of talking about the hackability of Plone, and that's something we've taken on board. Plone 2020 has been an ongoing discussion of what Plone looks like if we move to Python 3 and how we get there. And we're beginning to write the headless CMS story, and this is where I want to bring Timo on to talk about it.

So, hey. As Eric said, I would like to share a story, or two, that I think can help to shape Plone's future. And I want to talk about Pastanaga. Pastanaga is a new user experience framework for Plone created by Albert Casado, who already created the Plone 5 theme. He also worked on the Plone conference website and provided us with all that material. And when Albert came up with that idea, it totally took me by surprise. I was talking to Victor, my co-worker, and we were always trying to convince Albert to do small things for us. He works full-time at HP as a user experience designer, but I'm always trying to get small pieces of work from him because he's doing great work, right? And then, at some point, Victor tells me: oh, by the way, do you know Albert has been working on a new UI framework for Plone? And I was like, wow, who's paying him for that? Nobody; he just started to work on it in his own time. And he had already created, I don't know, a few hundred icons for it and lots of screens, and I was like, what? Seriously?

I was so amazed by the simple fact that he started to work on that, because in the Plone community we're used to developers starting to work on great stuff on their own, right? That was nothing new. But the thing that struck me was that this crossed the boundaries of our core profession. We're a real community, but most of us are developers, right? And seeing that open source spirit, of enjoying doing stuff for free in your free time, spread beyond that was really amazing. Seeing that a UX designer can show the same amount of passion and involvement and care in this product, and bring in the experience that he gained from his clients, was really something.
So I was already amazed before I even had a look at it and I had a look at it and it was great, right? So I will give you a very short introduction because I'll be able to talk tomorrow about Hasan Aga and his play called Design Principles and Everything Far Better than I can do that. So I will just give you a sneak preview and please go to his talk. So I will just briefly cover three basic design principles. The first is design principle is simplicity. I mean, that's always something that you can do either way the same thing, right? You make a system approachable and you're the family by removing stuff. I think from users, but we all know that clone is a really complex piece, right? So that's not an easy task. But the good thing is that, I mean, that's our main challenge, but the good thing is that I'll let a lot of the people profile, right? And I just told me this morning that the reason why he started with Hasan Aga was that I think I asked him to do like small tasks and he was like trying that with clone. I mean, I guess you don't know that at some point you work with something at any of you. I think we can do better. I mean, it's okay, but I think we can do better, right? And that was like, I guess, the initial starter for you to get started. So we can take like all this experience from this like professional life and from like doing to a fight. We all put that into Hasan Aga life and I think you can see that. So the second main theme or idea is adaptive user interface. At first I used to turn mobile first, reflecting the fact that mobile is taking over desktop in recent years. In 2015 Google reported that in like 10 most important countries, including the US and Japan, mobile search is over to desktop. In 2006, in the Guardian reported that according to stats counter, mobile over to desktop will be like, right? To more and more people are using mobile devices, tablets or desktop, right? It's not like five or 10 years ago, we had like desktop moment, but you have to design your user interface to multiple devices, right? And the idea is really to have not to not go like mobile here, but have adaptive user interface and provide the best user experience for each of the devices that we have. So that if you have a mobile cell phone that you provide the best user experience that you can get and also on a desktop where you have more space, right? I mean, that's again, that's a major challenge, but I think all of this is up to that. The third thing is a focus on particularly like important user interfaces. And think about like our most like important use cases and also users. And that was an idea that that was like something that like resonates with me immediately because I am kind of an eye-opener experience about one year ago. I was writing a blog post for my company and I did what I usually do. I wrote the like plain text in my development editor and I copied it over to our blog post, right? And I was like, wait, what the hell am I doing here? The blog software that I was using was actually providing like great user interfaces, really like was a reduced user interface that made it really easy and fun to write stuff online, right? And then when I was like, do my usual stuff, it occurred to me that no, wait, I'm using like a system right now that makes it fun to like write stuff online, right? And then I thought about like, what the hell, I'm still in the corner, like I've been in the developers since 10 years, right? 
I'm building like a CMS and my workflow is like exactly the opposite of what we expect users to do, right? We expect our users to use TinyMCE and then I thought about like what was drawn, why was drawn to Plum, right? In the first place and it was like 10 years ago when I started with Plum, that Plum provided like a really great user interface and a great user experience. Plum was one of the first systems that had like what you see is what you get at it, right? I mean that was really like makes Plum stand out like 10 or 15 years ago, but other systems did not have that. And today we are using TinyMCE, that's the thing that's, I mean it's a decent editor, right? We're all working with that, that's fine, but every other system is using something similar, right? Most CMSs use TinyMCE these days or something similar, but Plum is not in that particular part, it's not outstanding any longer, it's just providing the same user experience that all the other systems do, right? So and what we're aiming for with PuzzleEye UI is to make Plum's UX stand out again, to provide really a great user experience because we think that like our editors are our most important like user group, right? That's what the, for many users that's at least my experience in the last 10 years of my profession, right? For most users like Plum is like TinyMCE, right? I mean that's where they spend like most of the time in that editor, right? And we sincerely believe that we have to put more, that we have to focus on that part and then we have to provide like a better user experience for those kind of users, right? So the question is how do we make that happen? I mean all that really created, I mean you will see that in this talk if you go there, we create really tons of materials already. And as Eric pointed out, we need to tell stories to make people do something, right? We're not like a company, right? We can just sign in and say, hey, you are going to do that and you are going to do that, right? That's not going to work because Plum, the software or the community runs up on passion, like the passion that we have for our product and we're doing that now a few times. We're enjoying that a lot, right? And so that works differently. And the cool thing about like Pasanaga is that all that did not only provide us with the UI and with Marx, but it also provides us with an idea. Pasanaga means carry a catalog, as you can tell from the image. And he told me when he actually showed me Pasanaga and Ryan told me about his idea, his idea is to have a carrier on a stick to drag people to use Plum, right? To use that Pasanaga UI to drag them to Plum. And that was an idea that resonated with me immediately and then started to think, especially since I started to work on another idea about like three years ago. That's kind of the second story I would like to tell. Like three or four years ago, I started with pulling up a RESTful API. For Plum because we started to use, in our company, we started to use more than JavaScript front-ends. And we needed to make the API for that, right? And it's a rental. And actually I started to like draft a solution with Simone back then. And we created draft an idea and we created like a PAP to show, to see if that worked with Plum and to do an initial implementation. And then I think even as a rental or write-up, we had Ramon jump in and we discussed about like REST API principles and how we can make that work with the Z-Publisher because the Z-Publisher is really old code. 
And so Ramon jumped in and wrestled with the ZPublisher really hard, and I was just sitting next to him thinking, okay, that guy is incredibly smart, I have no idea what he's doing, but I'll keep saying yeah, sure, all the time. That was my contribution to plone.rest: I wrote a few tests, I can do that much. Apart from that, Ramon did all the work. And I guess that was also the point when he started to think about Guillotina, right? Which Nathan will talk about later tonight.

But that was about it: we created plone.rest, and then we created plone.restapi, which provides the entire API. As I said, I had just created a concept to get things going, and because of changes in my personal life over the last two years, I became a father two years ago, I didn't have that much time to work on it, so I couldn't push it as hard as I would have liked to. But then Thomas and Lukas from 4teamwork came in, and they wanted to use plone.restapi for a OneGov platform that they provide to their clients. And they said: hey, what about adding Archetypes support? And I was like, hmm, we wrote down our design principles and ideas, and they said no Archetypes, we don't want to support the old stuff. But then I thought, yeah, sure, why not? If you do the work, I can live with that. And they did not only provide the entire Archetypes support, they also rewrote my crappy initial proof-of-concept code and improved it, and that work gives plone.restapi the flexibility it has today to adapt to different needs, which is really great. That was something I hadn't planned initially; it just happened. Those are the things that happen in this community: somebody provides an initial idea, and then people jump in and do great things.

And then, when we had plone.restapi, I think around the Beethoven Sprint, people started to build things on top of it. Initially I created an Angular 2 proof of concept on top of plone.restapi, which also really, really sucked. But then Eric came along, Eric Bréhault, and he started to write an Angular 2 SDK, which he gave a training about and which he will also give a talk about at this conference. He created a really awesome SDK, and what I've heard from the training participants is that it works really well for them, so that's awesome. At the same time, Rob Gietema and Roel Bruggink started to work on a React implementation on top of plone.restapi, and that's similarly amazing, because they basically built that thing in a week or two. We have been working on a comparable Angular project for a year or so, so I know how much work you have to put into building that functionality, and what they did within two weeks was just amazing. That was also something I hadn't quite anticipated: we just told the story, we set out the basic ideas, and then people did amazing things.

So that's where two ideas come together. On one hand, we work on the technical basis that allows us to use modern JavaScript front ends. At the same time, and I will go more into this during my talk tomorrow, things have changed in web development in the last three to four years. We see these JavaScript frameworks emerging everywhere, and there are amazing projects out there that you can just take and use. And from painful experience over the last years in this community, we know that it's really hard to maintain all of that on our own.
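To make concrete how small the consuming side actually is, here is a hedged sketch of a plain HTTP call against plone.restapi, which is essentially what any of these SDKs wraps: a normal GET request with an Accept header asking for JSON. The URL is made up for the example.

```typescript
// Minimal consumer sketch: any front end (Angular, React, or plain
// TypeScript as here) talks to plone.restapi the same way.
async function fetchContent(url: string): Promise<any> {
  const response = await fetch(url, {
    headers: { Accept: 'application/json' }
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();   // e.g. an object with '@id', '@type', 'title', ...
}

// Usage (hypothetical site and content URL):
// fetchContent('https://example.org/plone/news/my-news-item')
//   .then(item => console.log(item.title));
```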
And I sincerely think that we have to start to reuse over reinventing stuff. We have to become consumers of libraries that are out there. And the JavaScript rules, I think, gives us that opportunity. So we have two stories basically that come together. We have the technology and the libraries that we need to actually implement a really good user experience. We have the drop for the Pasanaga UI that allows us to create something. One wouldn't work without the other. If we would have just the Mars and not the right technology to do it, that would not work. And it would be easy. And the other would work way along either. So to sum things up, we have the Pasanaga UI from Alder that allows us to drag people into the direction, both people from the outside, to drag people from inside the community. And the thing that we want to, what we're aiming for is really to, what our vision is, is to make clones stand out in the CMS market again. That if people start to use clones, they're amazed by what it can provide. And this Pasanaga that allows us to drag people, allows us to do a few things that we had in mind, like anyways. One thing is that we want to modernize our front end and have this reuse over reinvent idea. Second thing is modernize our back end, which has been around, which Eric already mentioned, with Python 3 and so forth. If we would release a new clone version with just Python 3 and so forth, it might be hard to convince clients to pay a lot of money for us to operate to that system, if we don't provide them with more value to that. For us, they're developers, so it's cool. You can tell them over, hey, it's cool. If you can have it to Python 3, you say, hey, wow, that's great. That's awesome. It's up for. Perfect, great. But for clients, it might be a bit harder. So we need that Pasanaga for our clients and then add additional development to that, because they don't immediately see the value maybe on upgrade 2 to 3. For them, it's not just a number. Yeah, I think that with all those different approaches in the clone community and all those stories that we tell each other, there's a bright future for our future clone versions. We have all the knowledge in our community that we need to work together to make the clone outstanding again. That's not even close to correct. Thanks, Kevin. So wrap it up. I basically just wanted to challenge you all this week to share the story of the clone of Pasanaga Future. If you're here as someone new, find the people that look just like they have that just in the stair from looking too far into the ZO code. It's the bugger problems in their sites and ask them for some hints of race, because I'm sure they have them. A few beers probably wouldn't hurt. And I'd like to challenge you all, especially the core developers, to start writing the story of the clone's future. We have an opportunity to really map out where we go from here. Like I said before, we've realized the dreams of the previous generation and it's time to start communicating our own. Thank you.
The stories we tell, how they preserve our past and shape our future
10.5446/54916 (DOI)
Hello. Today I want to show you a little bit of what we have done over the last couple of years in the newsletter area for Plone. We have the add-on Products.EasyNewsletter; it has actually been around for quite a while, but I want to show you what you can do with it. You can use it to send emails to individual newsletter subscribers. That's the common way: people subscribe and unsubscribe on their own, as you probably know. There's another way too: you can also use Plone groups and users, and that way you also have the possibility to use existing Active Directory or LDAP groups, like in bigger organizations. This can be really handy to address a group of people and send them emails. And you can also use subscribers from external services, so you can actually pull subscribers from an external delivery service.

You can create content in EasyNewsletter the manual way, writing content in Plone like you would a page or a news item. But more interestingly, you can also aggregate content automatically. That means anything you have in your database can be used for creating newsletters, starting with news items and simply listing them, or, if you have more custom needs, you can have your own content types, create your own templates, and use all of that data.

You can use different layouts. There are output templates, which are basically the frame around the newsletter, defining colors and things like that, and there are aggregation templates, which define the structure of the automatically aggregated content: is it a normal listing, or something special? You can customize both. You can provide your own templates; that works for the output templates and for the aggregation templates, so you can write your own add-on containing just the templates, install it next to EasyNewsletter, and use them in EasyNewsletter. I will show you later. You can even customize them through the web, if you want, without an additional product.

We have quite a list of placeholders: dates, like the year, the month, the full date, the name of the subscriber, and so on. You can put these basically anywhere in the templates, and they are replaced right at the end, before the email goes out.

You can even filter or inject recipients and subscribers. For example, if you have Plone groups or Plone users or LDAP users and you want to implement some kind of filtering: maybe you use membrane for your users and they have a checkbox, "I want this newsletter" or "I don't". First you get all the users, and then you use an external filter. It has nothing to do with EasyNewsletter itself; EasyNewsletter doesn't know your structure, your membrane content type, its fields, and so on. So you provide a filter, which is an event subscriber, and that way you can filter users and groups, and also the recipients at the very end.

You can even send newsletters automatically, so you can set up weekly or monthly or daily creation of newsletter content. If you have a lot of content on your website, like a university, and you want to send out, say, a weekly list of the new seminars of the week, you can do that really easily. There is a view for it; you just call it from a cron job, or use cron4plone for it.

We also have some options to scale. We now have async support, which can work with Redis; under the hood it uses collective.taskqueue.
So if you want and you have like a big amount of emails like 5000 emails to create and to send out, you probably want to use a separate process for this. I would say up to 1000 it's okay to do it without. But it depends on your needs. You can even use external delivery services if you want. So you can create all the emails and set up all the stuff and use the internal knowledge and your internal data structure which you probably cannot do from like directly from the outside services. Usually they support like as-as-feeds or something like that. So you have basically a list of some news items. If you want to go more in detail in your data structure, you need an additional integration for that. So but you can still, if you have like 50,000 emails you want to send out, you probably don't want to do it from your server. So no matter if you're able to do it async from, it's just probably too much for your email service. It depends on your provider. And with this amount of subscribers, it's also more important to have like statistics and better bounce handling. So then services like this. Yeah. The version 3 runs basically on all main supported versions. So you can use it on 3, 5 and 5.1. I also set it up some here. Basically I have an example how you create the menu newsletter. So you basically go add a newsletter object which is the container. And in this container you will add as much as you want newsletter issues. This is some basic stuff you have to set up. So you have to set up your email address and you can also choose the output templates. We provide two output templates already out of the box. Here you have some personalization stuff you can set up. This is also like one of the placeholder which says then DM is the DMSS. There's the placeholder for the unsubscribed link. You can also add banner and logo images. The logo will be there for every issue. For the banner you can even decide on this issue I don't want this banner or I want a different banner. So you can add another banner for a specific issue you want to send out. Now we add an issue. This time it's totally manually so we give it a title, a description and then we just use the TinyMC to write the stuff down. This is not the best way to do it because TinyMC is not made for producing emails. Producing emails is actually a really hard job if you want to have rich emails like HTML with structure and all this stuff. Then you look in Funnaboot and it looks nice and then you go to some other email clients and it looks like crap. We did other generated templates using a structure which also made some use for example. That means basically a lot of nested tables and really old HTML and CSS definitions so nothing modern. But the problem is when you use the TinyMC to do some stuff then you will probably mess up the stuff. So we have to configure the HTML filtering a bit to allow more of the old stuff. You can here test the email, the newsletter. So by default you set it up your test email, you can always change it and then you will send out. This is an extra preview we just saw. The advantage is it's without clone wrapping it and also it shows you actually default placeholder like the unsubscribed link and like a dummy name so you can really see how it looks like when the people get them. In the first view you have only the presentation so unsubscribed link you don't have because it's the presentation you have on the website. 
So as you saw when I was, I think we also prevent the sending out by mistake so you have to enable it and then you can press the send button otherwise only the test button works. And here you saw that we have the archive with the already send out newsletters and on drafts you see the created but not send out versions. So the next example is the automatic generation of newsletters. So we basically use the ad aggregation sources that means you choose some collections, you already know collections so you can set up how many collections you want to find all your content. And we are iterating over these collections later to get all the items or the objects we want to show and the aggregation template actually then uses like okay I have an item I want to render it like this. Then we press aggregate content and this will actually fill in the issue so you can go and edit and see the whole content but probably not save it because the tiny emcee will strip stuff out. But you can see we have a nice listing so we have actually three lists. You have also the title and the description of the collections in it and we render the images and stuff like that. That's the default. We also can switch this. This time we do it directly on the issue but usually you would say it on the newsletter container because you don't want to change this on every issue. So we have another template and when we use this and auto generate again then you can see it looks a little bit different especially the pictures lists so that looks a little bit nicer for pictures instead of the just listing the stuff. So that's just an example. Now this template is a little bit more complex because it actually has for the three different types has different markup and has like conditions or it's an image so I ran out like this. In the future we want to make this a little bit more flexible but we need to drop the clone force of all probably for that. Changing the output templates. As I said we have also by default two output templates but if you want more you can see that the footer is like mainly wide and the area on the top also is like the color stuff and when we switch to the other one then we have slightly different color scheme and also the footer has different stuff in it. By default we did the footer as part of the output template so you just customize the output template and then put your stuff in because there's no good way to provide a UI for now to create the footer that it looks like nice. We also have a list of the collections in the top as you can see but it's your decision. This is just an example I did for a client. If you have other ideas then strip something out or add something. The last will be a quick look in the registry entries. We have some entries in the clone registry which you can use from outside. If you choose product easy newsletter you have these entries and there you can see we have an entry for aggregation templates for allowed content aggregated content types. For now we only support collections but it could be that you have another content type which works like a collection or like a list or like a list then you can extend this. Here you can see we have the title and the ID of the template, the same for aggregation templates and the ID is just used to traverse for the template and then we use the template for rendering. I will also show where the templates are. The first one is the old portal skins. There is the newsletter and inside this we have aggregation templates and also done the output templates. 
So as you probably know you can just click on it, customize it, put it in a custom folder that will work too. If that is a quick way to solve your needs you can just do it like this. Here is one example. You really can customize the content of the email. This is at the end. So just replace any PHP with Python for example. This is the actual data. The first name and last name this is like the receiver properties. So you can create something like if there is a last name then I want the salutation like this. If there is a last name then it is dear Mr. Mike and if there is no last name then say hello Mike. Something like that. So you can be really creative with that. For example you see this. First name, last name. And then just register it. It is a normal subscriber like an event handler and it will work. So we have docs, AdWords docs. There is this customization stuff described in detail. Some words about the future. We have on the list of course a migration to the dexterity. So far it is still archetypes but it doesn't matter that much. This will give us a little bit more flexibility. We are thinking about providing a behavior which we can enable on collections so that I can choose these aggregation templates on every collection. That means I can have different aggregation templates like for the cats. So I have like a completely different rendering for images for the collection which collects this and I have another normal listing for news items. This way we have just the template we need and not like a combination of templates. Because if you want to customize the second list of aggregated content you have to customize the whole template which is a little bit more complicated. Yeah. That only works on dexterity so in Blown 4 it is not that easy. I mean we will not go and do nasty things like Xima, Xtender on archetypes if someone wants to do that. We could still have the support for Blown 4 in the future but I would drop it. Yeah. One other thing. We are actually thinking about the manual content creation story also and the only real answer is providing an editor like Mircham and Cleveridge already have. There is actually one. It is named Musaico so sounds familiar. And this is open source so we are thinking about integrating this and combining this with the auto generator template so that you can actually, after auto generating the stuff you can customize it in like adding some blocks and some image parts and then just fill it in. And the markup you produce that way even with the manual creation is proven to work in the email clients. Because we can just figure out what works and then we can put it behind and you just fill out the content. The structure will be the same all the time and you just combine them. Of course this will be a bigger project. We have some interest from some clients to have this but if you are interested in that it would be really helpful if we have some more people begging this project. So if you are interested then give me a note. That's it. Any questions? What about automating sending of the newsletters? Is it possible to be a current job or? Yeah, it's the case. Maybe I wasn't here. No, we have, you can have like daily or weekly or monthly newsletters and what it does is it has a view. You call the view. The view will just use the collections to aggregate content. It will generate a newsletter issue and will send it out for you to all subscribers which are subscribed to that newsletter. 
So it's basically you have to define your criteria like find me all the news items from the last week and then it will send it out and if you call it again it will know I did already my job. And if the collection is empty it will also not send out. Okay, thanks. Okay, more questions? Okay, let's thank the speaker again. Okay, thank you.
We show how you can set up a newsletter solution based on Plone, using the Products.EasyNewsletter add-on in its newest 3.x version. Starting with simple setups, we will show the flexibility of EasyNewsletter in terms of customizing and extending the standard behaviour. We will have a look into the future of EasyNewsletter, including Dexterity support and a new editor UI for creating newsletters.
10.5446/54917 (DOI)
Okay, so let's get started. Hello everybody, and welcome to this talk about Angular performance tuning. I'm taking the liberty of doing this presentation sitting down, because it's easier with the microphone and the camera and so on. So, who am I? I'm a trainer and consultant focusing on Angular, and I've been focusing on Angular for quite a while. I'm also part of the Google Developer Experts program, which means that I have a direct connection to the Angular team. I have also written a book; it's the book with the ugliest cover in the world, but I'm still very proud of it.

So let's start with a picture, let's start with this computer here. Who knows this computer? It's an 80386, a good old Intel computer, the hero of my childhood. And the nice thing about it is this turbo button: you just press it, and the machine runs twice as fast as before. Okay, in reality its purpose was to make the machine slower, by pressing it again, to stay compatible with software written for earlier versions of this computer. But now the question arises whether there is such a dirty solution for Angular, and the answer is yes: there are some quick wins, some dirty solutions to make Angular faster. For instance, you can do bundling, you can do minification and uglification, and you can put Angular into production mode. All those things make Angular faster, and all of them are done by the Angular CLI. So when you use the CLI, you get all of this out of the box.

But there are other things you can do to make your application faster, and that is what this presentation is about. I will talk about lazy loading and preloading, about data-binding performance with OnPush, and about AOT and tree shaking, as well as caching with service workers. And if time permits, I will also show you how to leverage the server side.

Okay, let's start with the first topic, lazy loading. For some reason a good friend of mine told me that I'm very authentic when I'm talking about these things. I'm not sure what he meant, but I think it was some kind of compliment. Anyway, what we see here is the typical structure of an Angular application. We have this root module. Oh, okay, now the microphone is on too. We have this root module, we also have one or several feature modules, and we have one or several shared modules. And normally everything is loaded at once when the application starts, which of course does not influence the startup performance in a good way. And this is exactly where lazy loading comes into play. With lazy loading we can load just the most important modules when we start the application, for instance just the root module and perhaps some other modules, and the rest is loaded on demand when the user clicks here or there, when the user navigates to this or that part of the application. Obviously this improves the startup performance.

What does it take to get started with lazy loading? We just need a special route, a route that points to the module file, the file that contains our NgModule, and we have to append the name of the module class. In this very case, the name of my module file is flight-booking.module, and the name of the module class is FlightBookingModule, without dashes or dots. That is everything we need to do to get started with lazy loading.
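For readers who want to see what that route looks like in code, here is a minimal sketch using the string-based loadChildren syntax of that Angular era. The flight-booking file and class names come from the talk, while HomeComponent and the file layout are just illustrative assumptions.

```typescript
import { Routes } from '@angular/router';
import { HomeComponent } from './home/home.component';   // placeholder eager page

// The flight-booking area is not imported anywhere; it is only referenced
// by the loadChildren string (module path + '#' + class name), so the CLI
// builds a separate chunk for it.
export const APP_ROUTES: Routes = [
  { path: '', component: HomeComponent, pathMatch: 'full' },
  {
    path: 'flight-booking',
    loadChildren: './flight-booking/flight-booking.module#FlightBookingModule'
  }
];
```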
Now of course you are wondering which route is activated when we are triggering lazy loading with the router. And the answer is: every module, and so also a lazy module, can have its own routing configuration. And of course, when we are triggering lazy loading for a module, by default its default route is activated. This is the route without a path. We can even jump to a sub route within our lazy module. Here I have a sub route with the name sub-route, and to directly jump to this route we need the URL segment that is triggering lazy loading. The name of this URL segment is flight-booking in our example. It is this URL segment that is using loadChildren and that is pointing to our module in question. And after that we have to append the name of that sub route. Let's have a demonstration for this. For this demonstration I have brought a sample application. It is about booking flights. And the sample application is leveraging lazy loading. So let's put this away and let's have a look into the source code, into our app routes file. Here we see that the flight booking module is lazily loaded. We are pointing with loadChildren to the flight booking module file and to the module class. And because of this, the Angular CLI is creating its own bundle for this flight booking module, a bundle it puts the flight booking module into. And when we are loading our application, let's do this, we see that everything is loaded, everything but the flight booking bundle. When you look here at the list of the loaded files, there is no flight booking bundle. But when we are moving to flight booking, Angular is loading this chunk just on demand. So I think that proves that lazy loading takes place. And of course this is improving the startup performance. Lazy loading also comes with a drawback. Lazy loading just means that we are loading things later. We are postponing work. It doesn't mean that the work vanishes. We just do it later. Because of this, when we first click on flight booking, there is this loading indicator that shows us that we need to wait for one or two seconds until the lazy chunk has been loaded. Of course that doesn't matter much in this case, because here I'm leveraging localhost. But when we have a slow data connection, this could be an issue. And this is exactly where preloading comes in. The idea behind preloading is that modules that might be needed later are loaded after the application has started. And so when the module is actually needed, it is available immediately. By this means we get the best of both worlds, the best of preloading, the best of lazy loading I mean, and the best of not doing lazy loading. The user sees the first screen very fast. And after that, Angular starts lazy loading all the other things. And when the user clicks on this or that menu item, then the module is available immediately. What does it take to get started with preloading? We just need to register a preloading strategy. And this preloading strategy can be registered when we are setting up the routes for the root module. Normally you are calling forRoot for this. You are passing in your routing configuration, and the second parameter takes this options object. And this object has this property preloadingStrategy. Here I am going with the PreloadAllModules strategy. And as the name implies, this is preloading all the lazy modules when the application starts. Of course, you can also write your own preloading strategies that are just preloading a specific set of modules.
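A sketch of the preloading setup just described might look like this; PreloadAllModules ships with @angular/router, and APP_ROUTES is the routing configuration from the previous sketch.

```typescript
import { NgModule } from '@angular/core';
import { RouterModule, PreloadAllModules } from '@angular/router';
import { APP_ROUTES } from './app.routes';

@NgModule({
  imports: [
    // Second parameter of forRoot: preload every lazy module after the
    // application has started, so it is already there when the user navigates.
    RouterModule.forRoot(APP_ROUTES, {
      preloadingStrategy: PreloadAllModules
    })
  ],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```

Instead of PreloadAllModules you could also plug in your own PreloadingStrategy implementation here, one that only preloads a chosen subset of the lazy modules.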
Or perhaps just models the current user is allowed to use. So much for lazy loading. Let's switch to the next topic. Let's talk about performance tuning with the on-bush optimization technique. And for this, I just want to start with a demonstration. Let's switch over to my application. Let's search for some flights. I hope I'm online. That looks good. There is a validation error here. How it comes. Perhaps I'm not online. Let me double check. Oh, yeah. I went offline and my validation procedure is using the server. So let's reload this. Yeah, great. So let's search for flights. And here when we see the flight result, we also see a button delay. And this button is just delaying the first flight. This is what Lufthansa is doing all the time. When you look at the first flight, it switches from 329 to 344 to 359 to 414 and so on and so forth. Please also have a look to the other flights. You see that the other flights does not change. They just stay as they are. They stay with 514 when it comes to the second flight or with 814 when it comes to the third flight. But you have also seen the spring effect. What is the spring effect about? I'm not showing you something about animations, but I'm using here a 30 trick to visualize the change detection of Angular. This 30 trick just makes a component blink when Angular is checking this component for changes. And you see here, even though just the first flight changes, all the flights are blinking. That means that Angular is checking all the flights over and over again without a reason. Normally, that isn't an issue because Angular is very fast with this. But of course, when you have a lot of bound data, this can become an issue, especially when a lot of events arises. And because of this, we have this on-bush optimization technique. This is what Angular does on-bush do. With on-bush, you can tell Angular that some components shouldn't be checked at all. So Angular is skipping, for instance, the flight card components here when it comes to checking for changes. Angular will just check those components when Angular gets notified about a change within those components. So the question arises how to notify Angular about a change. And the answer is we can do several things. For instance, we can change bound data. Data that has been bound with a property binding, data that has been bound to an input property. But it isn't as easy as it seems because Angular is just checking whether the object reference of the bound data changed. Or to say it with an example, Angular is just checking whether the current flight is the same as the former flight, as the old flight. Angular isn't checking whether the first property changed, the second property changed, the third property changed. Angular just checks whether the object as a whole is the same, all the object has been exchanged by another one. Of course, this is about performance because checking all the properties would be too costly. You can also raise an event within the component. And a special case of this is you could also notify a bound observable within this component. And of course, you can change the detection manually, but I wouldn't recommend this. Don't do this at home. It isn't the fine English way to do change detection, and it may cause a lot of confusion. And most of the times there are ways to avoid this. Most of the times you can just solve your issues with the first three options. At least you should try very hard to avoid this option here. 
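To make the first of those options concrete, here is a rough sketch with a hypothetical Flight model: the parent exchanges the bound object instead of mutating it, which is what lets an OnPush child (the setting itself is shown right below) notice the change.

```typescript
import { Component, Input, ChangeDetectionStrategy } from '@angular/core';

export interface Flight {
  id: number;
  from: string;
  to: string;
  date: string; // ISO date string
}

@Component({
  selector: 'flight-card',
  template: '{{ item.from }} - {{ item.to }} ({{ item.date }})',
  // Only re-check this component when its input reference changes
  // (or an event / bound observable fires inside it).
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class FlightCardComponent {
  @Input() item: Flight;
}

@Component({
  selector: 'flight-search',
  template: `
    <flight-card *ngFor="let f of flights" [item]="f"></flight-card>
    <button (click)="delay()">Delay</button>
  `
})
export class FlightSearchComponent {
  flights: Flight[] = [];

  delay(): void {
    const oldFlight = this.flights[0];
    const newDate = new Date(Date.parse(oldFlight.date) + 15 * 60 * 1000);
    // Create a NEW object instead of mutating the old one; the changed
    // reference is what makes Angular re-check the OnPush child.
    this.flights[0] = { ...oldFlight, date: newDate.toISOString() };
  }
}
```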
So what does it take to put a component into this on-bush mode? You just have to use this change detection property within the component decorator, and you can set it to on-bush. This is everything you need to activate on-bush on a component level. And that also shows that you can decide from component to component if you want to go with this technique or not. Here I have another example. Here we have the thing we have seen within my application. I have my flight search component. This flight search component consists of flight cards. And the flight search component is passing down the flights to the flight cards. And when the flight cards are in the on-bush optimization mode, then Angular is just checking whether the flight has a hole changed. And when this is the case, Angular is doing change detection for the card in question. Otherwise, Angular is just skipping the whole component, the whole component, and every child's component of this component. And here we see another option to trigger change detection, which just needs to bind an observable to our template. For this, we are leveraging the async pipe. And in this case, Angular is checking this very component when this observable yields a new value. So let's have a demonstration for this. Let's switch back to our application. And here I have a good message. We are lucky because I just using immutable Seer. That means every time I'm pressing delay, I'm just exchanging the first flight for another object, another object that has all the data of the former flight, and of course, the new date. Because of this, I can easily switch to the on-bush optimization technique here. Let's do this. I'm switching to my flight card component. I'm activating on-bush. Let's wait until the compiler has recompiled everything. The application is now reloaded. Let's search for flights. Let's press the delay button. And now we see, hooray, just the first flight, just the changed flight is checked by Angular. And of course, in this case, that doesn't bring a lot of performance. But just think about cases where you have a huge amount of bound data and where each subcomponent has several other subcomponents. Nice thing. Okay, so much for on-bush. Let's switch to the next topic. Let's talk about a head-of-dine compilation. So perhaps you know it. For a matter of fact, Angular is just compiling your HTML template to JavaScript. Angular is doing this because of performance, because JavaScript can be easier evaluated over and over again when you compare it with the evaluation of HTML. It is just costly to evaluate HTML text-based templates over and over again. And for this compilation step, there are two approaches. You can go with just in-dime, which means that the compilation takes happen at runtime. That means when the application starts, Angular is just compiling the templates. Or you can leverage AOD, which means that the compilation takes happen during the build. In this case, you have another compilation step. And so you don't have to do this when the application starts up. And using AOD has a lot of advantages. Of course, there is a very off-fields advantage. You get a better startup performance, because you don't have to do all this stuff when the application starts. But there are also some not-that-off-use advantages. For instance, you get smaller bundles, because you don't need to include the compiler into your bundles. You don't need the compiler at runtime, so the bundles need to include it. 
So this is a huge improvement, because the compiler is a really big thing. It has about 300k, and so it has about half the size of the core framework. And another thing that isn't that obvious is that tools can easily analyze the whole code, because the whole code just consists of JavaScript, and JavaScript code can be analyzed far more easily than HTML code. And so these tools can find unused parts of your application, and they can remove those parts from the bundles. I'm sure every one of you has applications that are not using all the parts of the included frameworks. Normally we are just using a specific amount of the possibilities of the frameworks we have included, and because of this, removing all the other parts is a good idea. This is also called tree shaking. It's a nice metaphor of shaking a tree to make all the loose branches fall down. So when I talked about AOT and tree shaking about a year ago, I showed the audience how to configure the Angular compiler, how to configure AOT, how to make it work together with a build process like Webpack or something else. Nowadays I don't need to do this, because nowadays we have the Angular CLI, and the good message is that the Angular CLI is doing this for us. Who of you is using the Angular CLI? Okay, about a half. Who is using something else like Webpack or SystemJS? Okay. So when we are leveraging the Angular CLI, we can just do a production build. That means we are calling ng build with --prod, and now we get all these optimizations like minification, uglification and production mode, and we even get AOT and tree shaking. Behind the scenes, the CLI is using the AOT plugin for this. The AOT plugin lives within the npm package @ngtools/webpack. And very soon we will get another plugin for this, which is called AngularCompilerPlugin. And the good message is that you can use it with and even without the CLI. Those things are just Webpack plugins, so you can just plug them into your own Webpack configuration. Okay, let's have a look at a demonstration for this. For this demonstration I have prepared my example in several flavors. One flavor is using a production build, but not AOT. Let's just have a look at this version: production and no AOT. Let's start my development server. Okay, so here we have the no-AOT build. Let's go to the performance profiler of Chrome. Nowadays a browser is nearly a full development environment; we even have diagnostics features and tracing features. Let's clear the cache and let's reload the application. Yeah, it took us about two seconds to get started, a bit less than two seconds. Let's do this again with a warm cache. Yeah, we cut it down to about 1.5, 1.6 seconds or something. So this was the solution without AOT. And now let's do the same with AOT. Let's switch to my other compiled version of this application. And let's start the live server. Let's clear the cache. And let's reload everything. Yeah, it took a bit more than one second after we've cleared the cache. And now let's redo it. With a warm cache, of course, it should be even faster now. Yeah, not much. One more time. Yeah, so we cut it down to about 800 or 900 milliseconds. So this is the result I also got before. So nice. That shows that by leveraging AOT, we can cut everything down by about half. And this is the reason why just-in-time compilation isn't allowed at Google. Everyone at Google has to use AOT there. Awesome. So please use AOT instead of just-in-time compilation at runtime.
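For reference, with the Angular CLI of that era (1.x) the commands look roughly like this; flag names have changed in later CLI versions, so treat this as a sketch rather than a current reference.

```
# development build: JIT, no optimizations
ng build

# production build: bundling, minification/uglification, production mode,
# plus AOT and tree shaking in one go
ng build --prod

# AOT can also be switched on explicitly without the other --prod optimizations
ng build --aot
```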
Okay, let's talk a bit more about reshaking. There are some challenges when it comes to reshaking, when it comes to remove unneeded parts of the application. And the biggest challenge is that most reshaking tools are quite conservative. That means they are just removing code when they are sure for 100%. And as a matter of fact, very often they aren't sure. And so they tend to keep the code in the bundle. They aren't removing it just to be on the safe side. One solution for this is the Angular build optimizer. The Angular build optimizer has been created by the Angular team, and it rewrites the compiled code in a way that helps to make it more reshakable. So it helps those reshaking tools to find out which code is really needed and which code isn't. Currently, this feature is available via the current CLI, but it is an experimental feature, so it can change in future. And it can also have some side effects. And because of this, I'm using this new logo here. This logo has been introduced last week at Ancient Mix in the States. And this logo just shows that we are dealing with hot new experimental stuff. So of course now I could show you a demonstration for using the Angular build optimizer, or I could just show you this diagram. And I think this diagram speaks for itself. I think this is quite awesome. Here I managed to cut the bundle size to the half. Please notice that here I'm using Angular material. Angular material is a huge component library with a lot of widgets. And most of us aren't needing all those widgets. And by the means of the bundle optimizer, of the build optimizer, this code gets more reshakable. And so the reshaking tools can shake off all the unnecessary parts of Angular material. And this is what results in this better bundle size. Nice thing. Let's switch to the next topic. Let's talk about caching with service workers. What are service workers? Service workers are background tasks. Background tasks that live within the browser. You can think about service workers as about window services that are living within your browser, or about Linux demands that are living within your browser. A web app installs them, and those background tasks are activated and deactivated on demand. That means they aren't stealing CPU cycles all the time, but just when needed. So how do they help with caching or even with offline scenarios? Well, a service worker can do a lot of things, and one of these things is intercepting requests. It can intercept your requests, and then it can decide how to respond to these requests. It can decide to respond using the cache, or it can decide to use the network to fetch fresh data from your web service out there. Of course, there is the same arching policy, which is very important, because otherwise it would be a real security issue without that we would have an issue when the service worker from site A would have the possibility to intercept requests for the site B. With this possibility, it could just inject that code or something like this, and this is something we don't want to have. Because a service worker is just a script you can write, you have all the freedom to create caching patterns. Caching patterns like cache only, or network only, or try the cache first, and then use the network, or try the network first, and when the network is down, use the cache. And there are even some other caching patterns that help here. Using service workers is a bit difficult, because there is this low-level API that browsers brings. 
But instead of this, you can use a high-level abstraction, and such an abstraction is Workbox. Workbox is a nice API that has been created by Google, and you can leverage this API to get started very easily with service workers. And by the way, the next version of Angular, Angular 5, will contain an own service worker module. It is also a high-level abstraction for service workers within the world of Angular. So here in this demonstration I'm using Workbox, and here we see a simple example for Workbox. I'm just importing the Workbox script into my service worker, and then I'm instantiating Workbox. After this, I'm grabbing the caching strategies. I'm grabbing network first and cache first. I could also configure them, but here I'm going just with the default configuration, and then I'm setting up routes. The first route is pointing to my web API. It is using network first, and the second route is pointing to everything else. Please note that the dot is the regular expression for everything. So this route is caching everything else: also my program files, my HTML files, my JavaScript bundles and so on. And for this, I'm using cache first. And please also note that this isn't the browser cache. This is a cache we can control by ourselves. Okay, let's have a demonstration for this. And for this, let me start with the last example, the one that isn't using service workers. You may have seen that this example is quite fast. When we are looking at the network tab, we see that it just needs 800 milliseconds and something to load. But of course, we are using localhost here. We are using 127.0.0.1. And as I have found out during my research, this is a very fast address. And for some reason, it is available all the time. Just kidding. So let's just simulate something that is called fast 3G. Let's simulate a fast mobile data connection. And as we will see here after clearing the cache, fast 3G isn't fast at all. This takes some time. Here we are seeing my loading indicator. I'm very proud of it. And at the end of the day, we see it took us a huge amount of time. It took us about eight seconds to load everything, even though this is considered a fast mobile connection. So this is exactly where service worker caching comes in. And to demonstrate this, let me switch over to my service worker build. Okay. This is using a service worker. It has been installed by the web application. When you look at the application tab here and when you switch to service workers, you can see that here is a background task, a service worker, up and running. It got this nifty ID here. And we could also unregister it or debug into it. But I won't do this. I'm just switching to the network tab. And here I'm simulating fast 3G one more time. Let's clear the cache. And we see that was quite fast, but it can be faster. We are down to one second and something. In some runs it was a bit faster, but I think it shows it is a lot faster than the thing we have seen before. So that's awesome. But there is something that is more awesome. We can even go offline. We can even close our web server. And the application still works, because we are now using our in-browser cache, a caching region our service worker script can control by itself. So my smartphone is telling me that I have 10 minutes left, which is nice. We are right in time. By the way, using service workers is also a strategy, of course, for creating offline-enabled websites. It's a key technology for progressive web apps. Awesome. Okay.
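The service worker from the demo is only described, not shown, in this recording; a sketch along those lines, assuming the Workbox 2.x API that was current at the time, might look like this (the file name of the imported Workbox script and the API route are placeholders).

```javascript
// sw.js: the service worker script that the web app registers
importScripts('workbox-sw.prod.v2.1.0.js');

const workboxSW = new WorkboxSW();

// Grab the two caching strategies with their default configuration
const networkFirst = workboxSW.strategies.networkFirst();
const cacheFirst = workboxSW.strategies.cacheFirst();

// Calls to the web API: try the network first, fall back to the cache when offline
workboxSW.router.registerRoute(/\/api\//, networkFirst);

// Everything else (index.html, bundles, images, ...): serve from the cache first.
// The dot is the regular expression that matches everything.
workboxSW.router.registerRoute(/./, cacheFirst);
```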
So let's go to the last topic for today. Let's talk about server-side rendering. And first of all, the question arises, why do we need server-side rendering? Because we have all those nifty JavaScript frameworks that are doing rendering on client-side. And it gives us a good user experience. Well, it is all about pre-rendering the first page. Because this will just improve the startup performance, at least a perceived startup performance. And this is very important for customer apps. Customers are unpatient beasts. They are leaving your website when it loads a bit longer. There are some nice statistics. For instance, Amazon tried to delay the loading time of the page for 100 milliseconds. And they discovered that this decreases the gross amount for about 1%. And 1% of the gross amount of Amazon is a huge thing, I suppose. So this is very important for the consumer. The consumer needs to get something to look at within a very, very tiny amount of time. And here pre-rendering the first page helps. So with Angular 4 and up, we can use this nice method here. RenderModulFactory. It is a very easy method. It just gets the ModelFactory of your root model. The ModelFactory is what the compiler compiles out of your model class. And then you are passing in the contents of your index HTML. You are also passing in the URL for means of routing. And then RenderModulFactory is just spinning up Angular. It is creating the first page or the page that has been requested by this URL. And then the HTML for this page is sent to this promise here, to this then function of our promise. And here I can do almost anything with this string. I can put it into an file. This is called static pre-rendering. I could also send it down to the client. In this case, I have something that is called dynamic pre-rendering. So just think we are wrapping this method into a server process. And the server process is just creating the HTML and sending it down to the user. So as mentioned before, this is available since Angular 4. It was also available for Angular 2, but there it was a community project. Now it has been refactored and it is part of the framework itself. So let's have a demonstration for this, for pre-rendering this stuff. For this, let me go to the console. Let's go to my distribution folder. And here I have the main server bundle. It starts up. It listens to part 80.0.0. And now let's simulate a slow data connection. For instance, slow 3G. Let's clear the cache and let's reload this. Now it lasts. And after waiting some seconds, we are seeing the word server here. So this is the answer we get from the server. And after some seconds, also the JavaScript code is loaded. The JavaScript code kicks in. The JavaScript code is instantiating and initializing Angular. Perhaps the JavaScript code is even loading the data for the first page. And then the result of rendering on the client side is shown here. Let's redo this one more time. For instance, with fast 3G, we are reloading the page. We are seeing the result from the server. And after this, when the client side kicked in, we are seeing the result of the client side rendering. And this is exactly what pre-rendering on server side is about. It is just about bridging this gap between the point in time where everything arrives in the browser and the point in time where the client side framework is ready to display something on screen. So it's about bridging some seconds. But as mentioned before, those seconds are very important for consumer apps. 
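To make the renderModuleFactory part more tangible, here is a minimal sketch of static pre-rendering; renderModuleFactory does come from @angular/platform-server, while the AppServerModuleNgFactory import and the file paths are assumptions based on a typical CLI setup of that time.

```typescript
import { renderModuleFactory } from '@angular/platform-server';
import { readFileSync, writeFileSync } from 'fs';

// Generated by the AOT compiler from the server-side root module (assumed name)
import { AppServerModuleNgFactory } from './app.server.module.ngfactory';

const indexHtml = readFileSync('dist/index.html', 'utf8');

renderModuleFactory(AppServerModuleNgFactory, {
  document: indexHtml,   // the contents of index.html
  url: '/home'           // the route to pre-render
}).then(html => {
  // Static pre-rendering: write the result to a file ...
  writeFileSync('dist/home.html', html);
  // ... dynamic pre-rendering would instead send `html` as the HTTP response.
});
```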
Of course, all of this isn't a free lunch. Server-side rendering comes with some challenges. For instance, you have different conditions on the server side. You don't have a document object model, you can't access the local storage, and you can't access your navigator object. And to deal with this, you can create separate services: one service implementation for the server side and one for the client side. You can also use the renderer object of Angular. It is abstracting these differences away. It is doing the right thing on the client side, for instance manipulating the DOM, and it is doing the other thing on the server side, for instance writing into a string. And a huge challenge are third-party libraries, because those libraries are very often directly manipulating the DOM. And as mentioned, you don't have a DOM at the server side, so this can be an issue. This means you should check whether your library supports server-side pre-rendering. But there is also a good message. Beginning with Angular 5, you have the server-side DOM simulation. They are trying to simulate the DOM. That is working quite well, but of course it isn't perfect. So the hope is that more libraries work with server-side rendering, but it won't provide everything the client-side DOM can do. When you want to read more about this, I have a Medium series, and this Medium series contains several articles about performance tuning for Angular. You get all the slides afterwards and you will find this link in there. So let me come to the conclusion. I think we are very good in time at the end. We have seen at the beginning that there are some quick wins, some turnkey solutions like back in the times of the 386 where we had this turbo button. There is something like bundling, there is minification, there is the Angular production mode. And the good message about this is that the Angular CLI is taking care of this for free. You can leverage lazy loading and preloading to get a better startup performance. You can leverage OnPush and immutables as well as observables to improve the data binding performance. You can go with AOT and tree shaking to get a better startup performance. You have seen that with AOT and tree shaking we can really reduce the startup time by a third or even by 50%. And tree shaking helps you to shrink your bundles. And we can leverage caching and service workers to get a better startup performance, even when we have no data connection or a slow mobile data connection. In addition to that, as the time permitted it, we have also seen that server-side pre-rendering is a huge thing for consumer applications. It allows you to pre-render everything, and this in turn allows you to show the user something more quickly. Okay, so much for this. Here you have my contact data and the links to the downloads. Thanks for coming. Thank you very much. There's a question: the service worker version is so much faster because the bottleneck is the network, isn't it? Yeah, so in the service worker demonstration, the problem, the issue, is the network. And there I've simulated slow 3G, or I think fast 3G, at least a mobile data connection. And because of the fact that the service worker is caching everything, it can just pull my program files, my bundles, out of the cache. And this brings the performance improvement. Okay, caching. Yeah. But then there is no effect on the first load? No, there is not an impact for the first load. For improving the first load, you can do something like AOT and tree shaking and so on.
And for the second and the ongoing loads, you can leverage the service workers. But the cache is used without service workers by taking a load? Yes and no. It is so that when we are using just the browser cache, we can't control it. In this case, the browser can decide what it grabs from the network and what it grabs from the server side. It would be slower than we just relate on the browser cache. When we are using service workers, we can decide when to use which partition of the cache and my script is just using the cache all the time. And because of this, there is this game in the startup performance. Okay, so you can press 3 or 4 in the server. We are ready. We are ready to host it in the bundle change. Yeah. Yeah. That's the reason. Other questions. Okay, so if you have other questions later, just come to me. I am here at least for lunch. And thank you for coming.
An application’s performance significantly influences its acceptance and therefore also its commercial success. However, there is no simple adjustable screw for performance tuning in single-page applications: several influencing factors need to be considered. Angular addresses these aspects with its architecture and offers some possibilities to provide breathtaking performance. This session shows how to use these possibilities by means of a systematically optimized application. Attendees learn how to leverage Ahead-of-Time Compilation (AOT), Tree Shaking and Lazy Loading to dramatically improve application startup performance. The OnPush optimization strategy to speed up data binding performance in your solutions is also demonstrated. Service Worker for caching and instant loading is covered, as well as Server-Side Rendering to improve the perceived loading time.
10.5446/54918 (DOI)
Hello guys. Welcome to the Gila Tina talk. So since this conference, at least Eric kind of kicked us off with talking about stories, I'll try to say a couple stories about Gila Tina. So the name, we were trying to think of a name for Gila Tina because originally it was a Plonutserver and realized it was like a horrible name for the package because it, as we'll get into, it kind of wasn't ploned so much anymore. And it was really confusing for the rest of the community. So a co-worker of mine, I was asking to figure out a name and he thought of Gila Tina because we're really cutting out things and he thought it was like a clever name and I told Ramon and he liked it and he's like, but you mean Gila Tina? And I'm like, oh sure, the Spanish word for it. He's like, no, the Catalan word. Everything's Catalan. It's not Spanish. Not Spanish. And then, so Ramon really started the project. It was a lot of everything that's going on in Gila Tina was in his head or what he needed. And I kind of came in, I mean I helped out initially a little bit, came in and really took over after that. And so like last week I was talking to him and like asking how he liked what I've done with the project since he's kind of been too busy to actively develop it and I've taken over. And I said, because it was your baby and now I've kind of taken it over and he's like, well I'm happy to have a baby with you, Nathan. So Gila Tina's our baby and we're going to talk about our baby. So Gila Tina, a synchronous resource API to manage millions of objects. And a little bit about me. I'm Nathan Ben-Ghiem. I've been on the, I'm part of the Plon software foundation. Been on the framework team, UI team and I'm still on the security team. So I'm not on the UI team and framework team right now but I've been on them. And I currently work at Ona with Ramon and I'm a full stack engineer there. And Ramon, do you want to talk about yourself or I can talk about you if you're like. Well, I'm one of the organizers of Blon Conference and I founded Ona. It's a company that connects and search knowledge across all the enterprise sources. And well, I've been also, I'm also still framework team on Blon and Blon software foundation member. But I don't know if I'm going to be there for a long time. So I just want to give you a little background on where this project came from and how we made our decisions. So with advanced web technologies, it's obvious, you know, sort of like the old style of rendering web applications where you're rendering HTML is dying. Every web engineer knows React, Angular and wants to be able to use it. And they work with APIs. They don't want to work with generated HTML. And so, but Blon has a lot of experience of working with data and we do resource management early well. But the UI parts, we don't do so well, maybe, depending on who you talk to. And we love Blon. We love the way Blon does things. We love the way Blon organizes content and has the tree structure of content and how it maps to URLs. We love the security system. We love the community. So, but we wanted to be able to use it in situations where we had thousands of users simultaneously connected and we had to be able to scale. We wanted to be able to use it with microservices and Docker and all these newer technologies. And so how could we do that? How could we continue to use Blon but in a different context and different requirements? So, then let's talk about where Blon has come from and why it is where it is and why we felt we needed to give it to you. 
A long time ago, Zope was started with the ZODB and it was an object-oriented database and web application server. So the objects were serialized to a database like by pickling and those objects were retrieved and often mapped to URLs and you manage it like that. And that's how Blon works still. And then Blon, layer on top of that. Seven years ago, Pyramid was started and it was kind of like a fork of some of the original ideas with the Zope component architecture. And then two years ago, Blon REST API was started and that was an abstraction layer for creating resources for Blon 5. Blon 5 is still like 300 total packages to install. So it's still a big framework here and you're just kind of like adding this additional little API on top of Blon. We still have all the rest of Blon. And then one year ago, Blon server was created. I guess it's more than one year ago now. Blon server was created. That was a rewrite of the minimum amount of Blon that we could do. So a lot of the Zope packages were still in there and I think we pulled in like a couple of Blon packages but not too much. And it was so trying to use a lot of the same stack. It was still using ZODB and ZODB is not asynchronous. So that was still a problem. So then we decided we needed something different. We needed to drastically change the database layer. We needed to do full asynchronous. And that's when we, and what we were building was not really Blon anymore and we didn't want people in the Blon community to think that this was going to be a replacement for Blon. This is a, so we had to rename it and that's where Gilatina came from. It's an evolution. It's done in the spirit of Blon. But it is not necessarily a replacement of Blon. We were taking the lessons learned from Blon that we think it's okay to fork packages. We think it's okay to make compromises for the sake of performance. So not implement certain features because we want better performance. And you might have to implement those features and add on differently or whatever. We're not going to provide everything out of the box. And we are inspired by Blon and Zope and Pyramids Decorator based configuration syntax and Django's global application settings. And we love the Zope component architecture for how easy it is to extend and override. And it's a beautiful way of organizing complex projects. I love the security model. And JSON schema is heavily used as well. So with that and being okay with forking because we wanted to be able to fit that, those ideas more appropriately into Guiltina, these are some of the packages that we've forked to make it happen. So we've gone from, I mean, like I said, it's not a replacement of Blon. But just to give you an example, Blon has over 300 packages. Guiltina has like a dozen. And some of those are just like for debugging and stuff like that. They're not even necessarily used by default install. Yeah. And as I said, what it's not, it's not a replacement for Blon. It's not a re-implementation of Blon. It's not necessarily a Blon REST API compatible layer. It could be. There's a package Guiltina at CMS that Rona's played with that you could start building on top of it and building the replacement for Blon. But that's not our primary goal. We're primarily providing a performance, a synchronous REST API server. With that, Ramon will talk about some of the features. Okay. So when we started, we merged all these packages and we created the first version of Guiltina Blon server at that moment. 
And we were kind of saying, okay, which is the minimum features that we really need from what Blon has because we don't need everything. We just need to know exactly what's the minimum. So we choose the set that for us is, we think it's the minimum. So first transaction, for us, it's really important that each request, each operation to the REST API is transactional to the database or if there is any conflict, you know it and you can rely on the API to do any kind of data model application on top of that. Consistence writer. So at the same time, we provide a conflict resolution mechanism in order to make sure that if you have any conflict on the database, you can use it to resolve it. And it resolves it with better performance, either in the distributed environment where you have a lot of different pods running and you have objects in memory, you need to have a distributed cache system that allows to make sure that the transaction, either if it's a conflict, it gets written on the database. Well, about the database, we've been working for some months trying to get ZUDB close to our approach. We've been also discussing with Jim about how to do it, either with Reva storage and we couldn't find an easy way without writing from scratch ZUDB. So we decided to go to the approach of taking ideas from Reva storage, taking ideas about SQL, taking ideas about ZUDB and getting the best that we think about all these frameworks. So we are still pickling to the database what we are storing hierarchical information also on the database. So it allows us to optimize the processing of information. And well, we choose to databases to start and have drivers for them. One is Postgres SQL and the other one is CropRoachDB. If you know it, it's an really amazing database based on the idea of a spanner from Google. It's some ex-wugos that started to build it. It's distributed across the center, transactional database within the phases of Postgres. And it works really well with Gidudina. It allows us to scale it to multiple data centers without any problem of configuration, either performance. Okay, as you know, we love Plone, we love the Zoop. So the traversal, it's kind of an idea that's really hard to explain for people outside of the Blum community, the Zoop community. So traversal means that you are storing objects and that means it's part of the URL and you're traversing the URL and traversing different objects. But we wanted there to be the core of our URLs and it's still there. And we will see some examples. What it does not have is a position. So you can be relaxed. Okay, I really loved, from years ago, we did a talk about Plone, I don't know, which spanner conference that had a slide that was that content is the key. In our case, we are not talking about content. It's not an API only designed for content management. It's an API designed for resource management, object management. It's not, doesn't need to be content, as we understand title, description, text or whatever we are using on CMS. It's designed to hold any kind of schema. So in our case, the resource is the key. And, well, you have schema attributes, you have annotations, you can use inheritance, of course. We have static and dynamic behaviors. So you can define a field set of a bunch of fields that you want to attach dynamically to an object or you want to define that this object will have this shared bunch of fields. And in order to serialize and deserialize and provide user-friendly schema definition, we use JSON. 
We could use YAML, maybe, or something like that, but we choose to use JSON because the rest API was also using JSON. So all the interaction with API is just JSON payloads in and out. The content is defined using JSON schema. So you can get the, which is the schema of this object or this behavior, you get the JSON schema definition. Okay. The security model is one of the things I would really love because we've been playing with or working with blown for a long time. So we really like the way that we can define permissions on each object, inerity permissions to the children of that object, defining the permission, roles, groups, the way that it's indexing all this information. So we kind of say, well, why wouldn't just get all this knowledge implemented in a simple way that it's pluggable. And it's exactly more or less similar to the implementation that some may have a bit more simplified and not so many adaptions. But we added some simple things like allow single that it's a specific setting that allows to define a permission and an object and it doesn't get inherited on the children. So that's a bit more flexibility on that, the ZOA schema. But by the rest is equally to the ZOA one. Well, it's a good API. You can pause to create, patch to modify, delete to delete, get to get the content. And there's no strengths things on that. So it's really easy. And a sync.io. Well, at the beginning of a start in the project, we were, that was the kind of the subject that everybody was saying to me, are you sure you want to use a sync.io? Nobody likes a sync.io. And it's really hard. You need to write a lot of a sync words before each function. And okay, we've done a lot of that. There is a lot of interesting talks about the sync.io in this conference. So I'm not going to expand on this subject. But for us, it was clear that you gain more than the effort of learning how it works. And when you learn how it works, for most of the use cases that we are, that is normally our service, our API, or our server, needs to deal with a lot of external components, either if it's a cache, either if it's an indexer. So I sync.io for these use cases. It's really great to really have live microservice servers. Well, we also had the experience of being working with build out and blown for a lot of years. So we said, okay, we want that people is able to install Guillotine in less than half an hour. That it's in time, takes normal build out of it. So we said, just install Guillotine. And well, it's one package. You get it. You have an executable. You can start wherever you want. You have a configuration. You can modify YAML and just point to the database or the add-ons that you want to install there. Oh, there is also a Docker image that you can run. And then you can also run Postgres in Docker and everything gets connected. Cores. It's an API. Cores is really important nowadays for front-end applications. So it's really important that we support it by default and it's configurable and defined by the application. Web socket. As we are using a sync.io Web socket is a bit out of the box with our framework. It's really easy to connect to Web socket and do get operations of different objects through the Web socket in a transactional way. We are working on providing a nice Web socket protocol to do also writing operations. So we can start a transaction to the Web socket and send writing operations to you. So you avoid the payload of the HTTP request in case that you're interested. 
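To give a feel for the REST side described above, a hypothetical session against a locally running Guillotina could look roughly like this in Python, using the requests library; the root/root credentials and the db mount point follow a typical example configuration and may differ in your setup.

```python
import requests

BASE = 'http://localhost:8080'
AUTH = ('root', 'root')  # default root user of the example configuration

# Create a container inside the mounted 'db' database
requests.post(f'{BASE}/db', auth=AUTH,
              json={'@type': 'Container', 'id': 'site'})

# POST creates resources, here a folder and an item inside it
requests.post(f'{BASE}/db/site', auth=AUTH,
              json={'@type': 'Folder', 'id': 'news'})
requests.post(f'{BASE}/db/site/news', auth=AUTH,
              json={'@type': 'Item', 'id': 'first', 'title': 'Hello'})

# PATCH modifies, GET reads, DELETE removes
requests.patch(f'{BASE}/db/site/news/first', auth=AUTH, json={'title': 'Hi'})
print(requests.get(f'{BASE}/db/site/news/first', auth=AUTH).json())
requests.delete(f'{BASE}/db/site/news/first', auth=AUTH)
```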
There are some good examples in the Guillotina repositories about how to create a chat, for example, with Guillotina and websockets. I love microservices. I'm a huge fan of Kubernetes, for example. So we designed everything so that it can be run as a microservice. With a small amount of memory to run it, you can really start a bunch of Guillotinas and it scales really fast. We support TUS. TUS, I don't know if you know it, is an upload protocol that allows you to push files in chunks to the server. So you don't need to push everything at once, and you can resume the upload when needed. We built a lot of add-ons that we needed and some of them are open source. So I think we're going to go through them now. It's extensible, because we use zope.interface and zope.component, so we have adapters and multi-adapters. We have utilities and adapters. Yay! So you can override everything that you want. You can extend everything that you want. It's really easy. You have subscribers that are asynchronous subscribers. You can override a subscriber with an asynchronous subscriber that does whatever it needs to do with another web server. Cookiecutter is really great for providing templates. We have a template for starting a basic Guillotina package, and also templates for our configuration. We have a queue system embedded in Guillotina, so you can start this queue system and send it tasks to do. And it will take care of running these tasks. Each task has its own transaction, so they can do operations on the database, and the scheduling in time is managed for you. It's event-based, so you can subscribe to any object creation, any object modification, async or sync, however you want to do it. We kind of copied the idea of plone.registry, so there is a registry where you can have global configuration values in an internal dict, where you can access them, modify them, do whatever you want there. It has its own security settings, and you can have one registry for each container. We have a container in Guillotina; it's what we would call a Plone site in Plone. It's the basic container for an application on Guillotina. We are also huge fans of front-end development and React, Angular, and building single-page apps, and we know the problems that come with that. So we provide serving of JavaScript applications from Guillotina itself: you can point it to a folder where you have your JavaScript application and it will redirect to that folder all the subfolders or sub-URLs that you need. You can have single-page apps with HTML5 navigation, so you don't need to have your own Node.js front-end server or whatever. It would be great if someday we had, for example, Angular Universal working on Python, so then we could serve that directly from Guillotina as well. One application can mount multiple databases at the same time, so you can define multiple databases of different kinds, CockroachDB, Postgres, and you can mount as many databases as you want. We open sourced our S3 and Google Cloud Storage add-ons, so you can have blobs stored on S3 and Google Cloud Storage, and we also provide a storage layer for the database, so you can store blobs in the database itself. We also open sourced the Elasticsearch connector. It's one of the most difficult ones, and if anybody has a lot of experience with Elasticsearch, please, you're welcome to help on this package, because it's really hard to keep in sync. So we are kind of replacing the ZCatalog idea of how it indexes and stores information, but with Elasticsearch instead of inside the same process.
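As an illustration of the decorator-based extension points mentioned above (content types, async subscribers, custom services), here is a rough sketch in the style of Guillotina's configure module; the Flight type and its field are made up, and exact import paths can vary between Guillotina versions.

```python
from guillotina import configure, schema
from guillotina.content import Item
from guillotina.interfaces import IItem, IObjectAddedEvent


class IFlight(IItem):
    destination = schema.TextLine(title='Destination', required=False)


# Register a new content type backed by the schema above
@configure.contenttype(type_name='Flight', schema=IFlight)
class Flight(Item):
    pass


# Asynchronous subscriber: called whenever a Flight is added somewhere
@configure.subscriber(for_=(IFlight, IObjectAddedEvent))
async def flight_added(obj, event):
    # e.g. notify another service, push a task to the queue, index something
    pass


# A custom endpoint: GET .../my-flight/@summary
@configure.service(context=IFlight, method='GET', name='@summary',
                   permission='guillotina.ViewContent')
async def flight_summary(context, request):
    return {'id': context.id, 'destination': getattr(context, 'destination', None)}
```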
We have a distributed cache system, we use Redis for that. As you see, instead of trying to implement our own cache, distributed cache, or implementing our own catalog thing, we are more about okay. There is really good open source projects there that does this really well so why don't we just reuse them and we just provide a good integration and with the microservice architecture we can scale up this. And the swagger implementation, so our AP, our guillotine is able to render a swagger documentation automatically just reading the code and providing you a swagger webpage where you can respect the application and I think it's now neutral. Oh, yeah. As I said, everything is designed to work with microservice and we always have in mind so we have full support for Docker, Kubernetes, Nomad out of the box so you need to deploy it is in a large scale or a small scale. We have some examples on the repo about how to deploy that and it works really well and we can say that with 200 megabytes of RAM guillotine it's able to handle hundreds of requests. Yeah. You guys can do that. And I'm getting back to you. Okay. So how we do configuration is we do explicit configuration in Python using decorators. And guillotine is now like the minimum implementation of Plone that we could do and the only thing left really is far as Zope and Plone is Zope.interface. Our traversal, our URL structure is the first part of the URL is the database, the mounted database and then the container that you created and then the folder structure from there on. Similar to Plone. It's only for Python 3.6 and I think we'll always support Python 3.6 and above at least for the time being but there's some nice features in 3.7 that are coming out that we will utilize. Designed for millions of objects and has lots of optimizations and the sync is really nice authorization and authentication plugins for or at least has the framework for building your own and it has like JWT support out of the box and it does the basic authorization structure support so like it does basic auth and bearer tokens and with JWT and it's really easy to plug in and then provide your own user providers. And hopefully we can get some reusable JavaScript components from the Plone world with REST API that will work with some of the end points that we implement and you should try it. Trying to decide here. We will try it together because I'm going to show you the swagger UI that gets generated for a container. This is just a Docker running Postgres SQL and a simple guillotine server with guillotine underscore swagger installed. And all of this swagger definition is generated from the service configuration and it gives you some information on permissions and what it's doing gives you example values of the model objects that you can use for it and you can test it out and it shows you the curl and gives you the response. So this is just a simple container that I already created but we can also create new content really easily and gives you example data you can use so I say we want to add a new item. So one thing that is not yet implemented is the URLs generated are not clickable through the UI. That would be a great next feature to do. So if you want to find out what the API is on any particular object you just go to the authorize button on the top right enter in the new object URL and then click authorize. Then you get the API for that particular context and it's smart enough where it only shows you the services that you have permission for or that are on the object. 
It does all of those checks for you and you can behaviors. We have dynamic behaviors so you can see all the behaviors available and it gives you a payload of the actual schema for those behaviors. The sharing end point, there's a lot of stuff. I can't go through everything but we have duplicates, move, we have uploading, downloading files, the search, the end points are there but by default there's no catalog installed. You have to install elastic search integration. They are able to use those end points. The sharing end point is really nice. You get a representation of current roles and then you can modify the sharing on any object. Then there's the toss end points too. Why don't we create, so we are on a, okay so, we authorize on the container and we'll create a folder. Okay, created and this folder has a few different API end points. It has IDs because if you have a folder with a thousand objects you don't want to get it all on your default payload. I guess that's all I'll go through with that. I'll go through some of the configuration and some basic syntax so you can get a feel of what it's like. This is the configuration file. We support YAML. We also support JSON for configuration file if you prefer that. It's pretty simple and we have core support right from the configuration. A lot of stuff is configurable with configuration values and you can easily run it with running your own Docker with Postgres and then installing Gild Tina. It's just a quick example of how you can do it through a command line really quickly. We kind of already talked about this. Our data model is resources and containers with schemas and then static behaviors and dynamic behaviors. That's where all the data is coming from because the content objects have schema defined on them as well and the behaviors are like these dynamic reusable pieces of schema you can use. This is what created a new content type looks like. We still use schemas that are just like Zope interface. That's one part of Zope that we still have. It's a very simple decorator for configuring it. These are subscribers. When the object is added I want to do something and it's an asynchronous subscriber that works. Services, you can do it for particular content types and give them a name and method and permission. Now Ramon is going to talk about data science. We're running out of time so I'm going to go really fast because we only have three more minutes more. I added this part of the slides because one of the reasons is that we work a lot on data science, a lot of restoring, a lot of data and that's one of the reasons that Gild Tina exists also. There's been a lot of always fight between data scientists and engineers because one says I want this machine and the other says you can't have this thing or you need to use this stupid service. My experience talking with data science and doing a talk about video Tina to them, it's that they are really pleased to have an API based way to handle their data and to be able to run operations. I'm going to skip this one. Most of the things because we are working on a project called Gild Tina Hive is going to be distributed way of starting a lot of Gild Tina's that are not rendering API but have access to database to distribute the operations on top of all the data. You need to change the text from the title of all the content. You can have as a task on Gild Tina. 
It's really simple to define and you just throw that to Hive and it's going to run it and do all the operations in the background in a distributed way. Try it, create issues, contribute. We are really open to new ideas to move it forward in the future and I think that we have a long future on top of us with the Blum community and with the integration of other ideas on the video Tina. Please see if you are interested in a SYNCIO, go to the introduction to a SYNCIO talk that Nathan is going to do tomorrow, Friday. And Preguntes and thank you very much. Maybe we have time for a couple of questions, fast questions. Somebody? Nobody? Explain it so well. Great guys. Okay. Thank you. I'm curious about the advantages of using WebSockets for doing CRUD REST operations. What's the performance advantage or other advantages to that approach? You would just be doing, you wouldn't be doing a full HTTP request then because you would have the socket open. I think what else Ramona was saying was we don't have support for this but we have been talking about it is eventually being able to batch operation, do batch operation so you can send multiple requests at the same time and you don't have to keep doing full trip HTTP requests. If you want to create multiple objects, you are pushing 1000 objects and you want to do 100 instead of doing a transaction for each creation, you can queue 100 and do a transaction for all the work. Okay. One last question here too. Yes. So you spoke a lot about REST API. I just wanted to get a feel for whether you've looked into GraphQL. I'm just asking because I've worked with a front-end developer who is very excited about GraphQL. I don't know much about it but I'm curious. It's a good question because we are working on this subject. We think our approach is that the best is to have a combination of both. The REST API is really great for REST management operations and for growth in terms of more creating and updating. This is in our experience. We are working on having GraphQL in your face for searching and for introspection and the objects.
Learn about the exciting new REST Resource API powered by Python's new asyncio library. In this talk you'll learn about some of the amazing things you can do with Guillotina and how you can leverage it to build your next JavaScript web application.
10.5446/54921 (DOI)
I talked about this to much the same audience last year, and if you want to watch last year's talk, the videos are still online. So, in 2016 we started planning this move from Plone 4 to Plone 5; we also had a new brand coming and, on top of that, an organizational change. In this year's talk I will look back at how it all went, what we actually did and what we did not manage to do. Okay, I have used Plone since 2004, and so has our university. I am a Plone user and power user rather than a developer: I do training and support and work with the website, but I don't really program. So, today's topic is our big website renewal: a new organizational structure, a new brand, a fair amount of bureaucracy, a massive amount of content, and of course the move to Plone 5. Jyväskylä is a city in Finland, and I have come from there to Barcelona. Our university is one of the larger multidisciplinary universities in Finland: we have about 15,000 students and 2,600 staff members. We have been using Plone since 2004, and our public website has run on Plone since 2005. We now have six faculties, with departments and separate institutes, and around 90 Plone sites, many of which are actually content applications rather than plain Plone sites. Okay, that's who we are. And this is our main website; it gets something like 200,000 page views. During these 13 years of running the Plone site a lot of content has accumulated — about 200,000 content items by now — and we have around 100 editors. These are some of the reasons behind the website renewal: a new organizational structure, a new brand (this new university t-shirt here), and new requirements for the website. We wanted a new intranet, and we wanted to renew the content and the Plone sites themselves. Since we were renewing everything anyway, we decided to move to Plone 5, with Mosaic and Dexterity content types; we had been watching how Plone 5 was shaping up for a while, and upgrading was in the end a technical decision. This year I have really felt for the people, the content editors, who had to learn the new Plone and the new organization at the same time, because of course we want better content and a better site out of all this. Okay, then the people involved: we have the University Communications Unit. They are responsible for the whole brand renewal and intranet, and the whole renewal itself. Then we have content editors, lots of them around the faculties, and they have to do the actual work with the content and with Plone 5.
Then we have this ad agency, who designed this new brand together with University Communications, and they designed a new theme for our website. And of course, my own favorite, our Plone development team: we need to maintain the old sites and create new ones, and do migrations and so on. Okay. I don't know if you have seen this timeline before, but here it is anyway. So this was our plan a year ago: in October we would do some Plone 5 migrations and create new Plone 5 preview sites. In November the new theme, more migrations, create the new intranet site, and in December everything would be kind of ready for the year 2017. Then in January we would release the main website, we would release two new faculties, and the intranet. And in spring we would do some more migrations and tweak stuff, and in summer we could just enjoy the sun in Finland. But it just didn't quite go that way, mostly. So next I will tell you the timeline of what we did at different times, and then some of the lessons learned. So this was the new faculty site theme, Faculty of Humanities and Social Sciences, really visual compared to the old one. And it was actually released in the beginning of January, as planned, with migrated content and new content and the new theme. Okay, the release went fine, but we ran into problems. There were problems with caching: if you changed something, it didn't show immediately. At some point, especially with files, it was slow. It's weird. And for Plone 5 we used 5.0.4 or something; there were still some bugs, and the theme wasn't perfect when we released it. And the old site was still visible, so there were broken links and problems. So there were broken links and wrong content and search problems and stuff like that. And one thing we didn't anticipate was the huge shock about the new theme. Everyone was like, whoa, what's this? And what happened? So it really affected people. One challenge here was this new organizational structure. We combined three faculties into two new ones. And also people were moved from under the faculties into this university services unit. So those people who were previously updating the faculty sites were now under a different organizational unit. And some new people had to learn how to edit Plone content and generally do new work. One thing we didn't account for last year was that we just thought that we would migrate all the old content, upgrade to Plone 5, and that's about it, then make some new content. But faculties and departments really wanted to renew their content, since it had been there for 10 or 13 years. So it wasn't just migration. It made the process longer, but eventually it will enhance the content quality. We are still working on the last four faculties that should be released at the end of this year, hopefully. And we haven't migrated everything; we have migrated content when needed. But people are doing really good work with that, with new content. Okay. Solutions to these first problems: of course, we were tweaking the theme. And one thing I like about Plone 5 is this new theming system. Me, as a non-developer, could just go through the web to the theming editor and do some instant fixes there, and then the other guys could put the fixes into the file system later on. And there were lots of Plone 5 related issues; we contributed many fixes to Plone core also. We did more training and more guides. And this is really important, this communication thing.
We formed new groups for all the website editors in these new faculties, so we could easily contact them and give them instructions, and all the feedback concerning content was directed to them directly. Spring continued. In March we released another faculty site. It was a much smoother release, since we had fixed most of the problems. And then in April we released part of this intranet site, especially a thing called the Help Center, the university staff guides, which are really important for our staff members. We released other Plone 5 sites and new Mosaic templates, which I will tell you more about later. And we still had one problem: the old faculty sites were still visible in searches. We tried to tell the faculties that they should hide the old sites, but they still wanted to have them. We didn't have the authority as technical people to shut them down. So that was a bureaucratic problem. Okay, in summer we got some sun in Finland, of course, especially in July when we held this Plone Midsummer Sprint, which I will tell you more about tomorrow in my presentation. But we also released more Plone sites and created more preview sites. We usually do a preview site alongside the old site — you have the old site and you have the preview site, and the content editors can work there — and when it's finished, we just switch the address so that the new site shows. Okay, in summer we also had to do other Plone development and other releases not related to this website renewal, so it wasn't the only project we had this year. But when August came, a new semester came, and the priority was that we had to release the new main website when the new semester started. It would be released before the intranet or the other faculty sites. And we did a lot of work on the theming there, and a lot of work to make the Mosaic templates really usable for the content editors at the University Communications unit. But when the release day came, there were still things to be fixed in the content. The theme worked quite well at that point. So we released the main website with four main portals, and this is how it looks. There are huge images, and when you scroll down, there are lots of news and even more news and even bigger images, and it's all responsive and stuff like that. And at the bottom there is a huge social media integration and then the footer. And I think I will show you the actual page later. Okay, the release went well, and we started to get feedback. And the feedback was this kind of stuff, and this. Okay, so of course we made a feedback form on our website using PloneFormGen — that's the best add-on for Plone, I think. We asked what the impressions of the site were, if you had any comments, and which device you used. And, if you have a problematic page, please put the link in there. And this is the kind of feedback we got: email doesn't work — how does that relate to the university website renewal? but well, we got that. Someone thought it was too modern, too big images, just PR material and these slogans, and cannot find anything, and something else. And even more feedback: men are presented as researchers, students or leaders, women are just assies. So we had huge images, and if you look at this first page when it was released: men and our new rector, and then girls down here. And at the last minute this image here was changed to tell about this new brand. Originally there was an image of a wise elderly female, so it would have put the images in balance, but no.
Okay, so feedback: we had about 150 feedback messages through this form in a month. In our case I think that's a lot compared to our previous releases. And 75% of people thought it was bad, 15% thought it was okay and 10% thought it was good. But at first we forgot to put the selection of your role as a feedback giver there. We added the role field and it showed us that staff members think it's bad, students think it's okay and the external audience says it's good. So this is really interesting information for us. Okay, another thing about releasing university websites. There is an old comic from XKCD: things on the front page of a university website — things that are there, and things people go to the site looking for. And we have seen this image many times before and we have laughed at it. And this is what happened: we did all the wrong things, just as the comic suggests. So there's this letter from the president on the front page. Campus news and events, statements of school philosophy. Then we have alumni in the news, more news. We have the full name of the school in the footer. And the campus map — it's here, you can find it. But seriously, you can find all the important stuff there too if you just go to a certain place, like the campus addresses and stuff like that. This new main website was aimed first and foremost at an external audience. This is a big change for our university. Previously the front page has been serving students and staff members, but now it's supposed to look good to outsiders, to new students and new people who are looking to get work at our university. But we didn't communicate that to staff members, so the shock was really big. Well, the theme is radically different from the old one, maybe too radical. Feedback on search results was really useful, and after the release we finally deleted lots of old content. We improved search results. And it is interesting to notice that when you renew something, people start giving feedback also about the old stuff. So we got some feedback that was targeted at the old sites and things like that. Okay, next about the new brand. The brand is called Unity and Unique; JYU is the short name of the University of Jyväskylä, so there's some kind of wordplay there. Oh, okay, another comic. I found this one from The Oatmeal. I like that. But — there might be some bad words in there — this is a web design that goes straight to hell. Everything is cool in the beginning, of course. There's the client and the designer, and they are looking at the old site. And the old site is just awful; both laugh at how bad it is. And then there's the new redesign, the new website. It looks amazing at first, and everyone is like, yeah, this is the way we're going to go. Just a few minor changes here and there. And a little more minor changes. And even more minor changes. And so on. I won't show you the rest of the comic, but you get the point. So this was interesting, working as a Plone development team: we had to make the new theme work, but we didn't design it. And University Communications was communicating with the ad agency, so they were gathering feedback from different groups at the university. And the new theme is really different from the old one, so there was some resistance to it. The theme was already approved a whole year ago, also by some high-level people. But when we released the first faculty site, people really noticed it: oh, what's this? And we had to change some things — many things actually. Maybe changes in certain details.
And that created a lot of work for our developers. And next time we will need to have better documentation on what was decided and by whom, so we can say that no, this is actually what was decided, and this is the way we're going to go, so we don't have to do it twice or three times and so on. But in the end it does look like the original design, and we didn't have to water it down too much. So that's a good thing. Okay, about web design and lorem ipsum. This is the design for the news carousel, and it looks pretty nice. But when you go to Finland, your titles tend to be this long. So I don't like lorem ipsum in web design at all. Okay, so these were also the things we had to tweak in the theme to make it work in the Finnish language. Okay, a couple of images for you. This is the old university front page. It still has the picture of the president, and it has lots of links for university staff members and students, and it was mostly text-based content with really many links. And the new front page is somewhat more visual at this point. I probably could demonstrate it to you right here. Okay, this is quite a small screen. But as I said, it's responsive. So it comes from here. There are huge images and so-called visual navigation. A couple of blocks are reserved for news, and then there's also video, and these big blocks. And there's this news carousel — I think we have a long title somewhere here. Oh yeah, this one is two lines here. And then there's this "what's great about the University of Jyväskylä". Some large images here and there, and then even more images. And here's the social media integration. This is actually quite nice. And if you use it on a mobile phone or tablet, it's really responsive. And here's the one nice feature we ripped off from CastleCMS — thank you for that — the so-called focus point feature. So you have a large image there and you use the same image twice. You can select the focus point, so it doesn't show, like, only a person's hair or his feet, but it shows the face here. So that's something really nice. Okay. And another portal. And this is all Mosaic. So it's quite easy to... well, it's easy to update. And here we have more huge images. Some drop-down menus. Some of these features we as developers didn't like. So I think we have some understanding of what is good user experience and what is not. But some of these features were really wanted. So here are some drop-down menus. And there are carousels and stuff like that, which the actual users always want, even if as a developer you don't want them. So that is something to notice. And everything here is editable through the web browser, all these titles and all these things. I think in this picture the focus point isn't correctly placed by the content editors. Okay. Let's get back to this presentation. Yeah. What's the weight of your image? Many megabytes. Yeah. But it should load... The first screen should load quite quickly, and it eventually loads the other stuff below when you scroll down. But we did get some feedback that the site is slow. If you look at it on your mobile phone it might be a bit much. Yes. Here's a technical thing. When we were adapting the new theme to Plone 5 — if you want to know more about this — we got a whole new theme bundle: the HTML, CSS and JavaScript from the ad agency. And the idea was that we could just use that structure directly in Plone. And the theme was accepted originally, and the first version of the Plone theme used the theme bundle as it was, with only small tweaks. Everything was good so far.
But when we released the first faculty site, changes started coming. And in the end we had to do all the templates again ourselves. And there was a lot of work with that. But in the very end it was worth it, since the elements are now totally in our control. We can reuse many of the Mosaic elements in many different places. Okay. What you see is what you get is important with Plone and with website content. In our university we have always had both the front end and the back end themed, and we wanted to have that this time also. But we needed some better tools for content editors to create these visual front pages. Previously we had this add-on called portal view. It was our own development from many years ago. But Mosaic is much better, and it makes what-you-see-is-what-you-get easy. We have customized styles, so-called theme fragments. And we got some positive feedback on the Mosaic thing. And a couple of words about using Mosaic: at first this year we had to do lots of development and fixes, but they helped the whole community, I hope. Asko did amazing work on that also. We created these customized theme fragments, like a feed carousel and a hero image carousel and the social media integration, and we can use them on any Mosaic page now. We also have these different page layouts for a department page, a faculty page, the university front page and just a regular document page. And we're going to have a certain layout for project pages also. And we can do that with Mosaic. And this was one interesting and good feature: we have layouts that have some fixed areas and some customizable areas. So you can have the best of both worlds. You can keep the pages somewhat consistent, but still make them flexible and looking like your own pages. Okay. Here are some examples, only as images. This is the faculty page of Humanities. It doesn't show that well. But here's this huge carousel, and it is totally customizable and editable in the browser. You just edit the page and you have these rich text blocks over the background images. And the images can be changed just as easily in there. So that is something really nice. Here's an example of a news pick, if I show you the Faculty of Humanities site. Well, I actually could edit it here. Here's a news pick. There are huge images. Well, maybe I'll just do it the demo way since I still have some time. It takes a while to load the Mosaic page and you have to wait until it loads the whole page to actually edit it. But here we go. That's it. That's the research accordion here and I could just do some stuff here. And I hope I'm on the demo page now — just testing. And then you can add a link there and use it just as in normal Plone. And if I want to change the background image, here's the edit button. And you can find a new image here and select the display size. Well, this should be only one in this case, but that's quite easy. Then we have this RSS feed carousel, also directly editable here. We've got these feeds and we can change the order of them and change the title there and stuff like that. That's all quite easy too. And here we have these news pick items, which I can also edit. They show actual content from Plone. So if you create a new news item, you can show it on the front page, so you don't have to do it twice for the front page and the content. You just find the proper content. It has this certain lead image in Plone 5, and it takes it from there. You can select keywords and a title, and a mobile title if you want to have it shorter. And then there's this focus point thing, maybe on the hand or the flower.
And some more descriptions and settings for layout, default or grand, color filter, red or blue, and text align and stuff like that. So that's, in my opinion, quite easy. Okay, now I'll just save this and see how it works. I didn't change that much, but in the research accordion, yeah, it shows "just testing" here. Okay, back to the presentation. Okay, this was the news pick focus point. Then we have the department layout. There's this huge image carousel. That's also easily editable. You can select either images there or you can select content items there. And if they have a lead image, you can use it in the carousel. And if it's a content item like a news item, it automatically creates a direct link from the carousel to the content item. And you can select the color filter from the theme and you can set the title there and stuff like that. So that's really easy. Well, I got to do the demo. I could have shown even more, but I think that's enough. Okay, that's about Mosaic. Then we had this thing — well, we wanted to release the sites, and we needed to release especially the main site. There was lots of content and we knew that not all the content would be ready when we released it. So we needed to figure out a way to show the old site and the new site on the same domain at the same time. My colleagues, Jussi Tulaskevi and others, created this fallback director and Varnish combination. So if the address was found on the new site, it would show the new site; but if the page wasn't on the new site, it would automatically redirect to the old site. So we could release content in smaller pieces, and that works really well right now. Even if you're logged in as a content editor, it depends whether you are on the old site or the new site, but it all works. So this is really useful for the content editors. Okay, even more challenges. One was the intranet. We want to have one portal for all staff members in Plone. We already have a portal for staff members, but it's IP restricted for viewing — not a good idea. Now we will have an intranet which uses separate credentials for everyone. So if you want to see the intranet, you have to log in, and it will have an intranet workflow and just basic Plone stuff; Plone does intranets really well. And we already have departmental intranets on their own department sites, but we want to have all intranets on the same single site. And we did release the Help Center part of our intranet in April. And we wanted to release the new intranet this September or October, but we got some last minute changes. I will tell you more about that also. Okay, another challenge was with the search engine. We have Google Search Appliance. Its license will expire soon. We will change it to Solr and Elasticsearch. We will delete lots of old content. In September we finally got to wipe out the old faculty sites. They were online for nine months in parallel to the new site, so that was really awful. Finally they are gone. We need to educate content editors to create good titles, descriptions and so on. We have this tool called KeyMatch, with which we can make really good search results visible. And we have this thing called U-bar. It's on every page at the University of Jyväskylä. It's a top bar, and if you go to different services, even outside Plone, you can have the same bar there, and you can search and find the most important links in there. And there we have this search engine integration.
It shows people, it shows new courses, and it shows guides and all the website content. And we have to make that work with Solr in the future as well. Yes, our plan for the rest of the year was to release the intranet when we get back from this Plone conference, but after last minute changes, the intranet portal will be released after the faculty sites. So the faculty sites will be the most important thing next. Okay, and then we have to do the search appliance renewal. We will have to help faculties to publish their sites, and we have been asked to create a new Dexterity type for projects, especially research projects, also with a Mosaic layout. Okay, we get to the lessons learned section. Lots of lessons learned. I don't think this website renewal, at this scale, was quite well enough resourced. All the people had to do their old work and also do the renewal work. And what we could have really appreciated would have been a full-time Plone power user or admin, who could have helped with the content especially. I think we got the technical details worked out, and we got the theme and brand thing worked out, but working with the actual content with the new editors — that was something we couldn't help enough with. We did have some new people working, and that's a really good thing. There was a project and resources for the brand renewal, so that's a good thing. One learning experience: while there was this organizational change at the university, it was hard for people, but I think, in my opinion, it energized some of the people, so they got to do new stuff. And maybe they got to work with Plone, so it was a good thing for many people in that sense. Different day, different stuff. Yes, about communication — really important in projects generally. We did have public preview sites throughout the year for the new faculties. We did seminars for content editors. We had public pages and trainings and email info, but that's not enough. Using the Flowdock chat tool was really useful for us. Okay, here's the Flowdock chat. Use chat if you can. About priorities: you need to prioritize things in big projects. That's one thing to learn for next time. And doing things synchronously with departments — when they want to release, we have to be ready for that. And maybe some push from the university administration, some deadlines, would be good. About Plone 5, we got good feedback on it. Its toolbar is nice, the user interface is pretty clean, Mosaic is nice. We are not missing that many features, trainings are easy. There were a lot of fixes in the beginning of the year with Plone, but we are doing just fine and the theming tools helped. Cutting edge is good for sushi, maybe, but we are actually running Plone 5.1 beta 1.1 or something like that on our main production university website. It's running quite okay, so don't be afraid of using Plone betas. About migrations: most of our content was already Dexterity. We had to do some migrations and make some decisions on whether to migrate or not. We did some things the easy way and are still using FormFolders, which are Archetypes. About Mosaic, it's really a blast. People are using it and liking it. People are updating the Mosaic pages much more frequently than the previous portal view pages. Other notes: the design is heavy, there are lots of images, we can't do anything about that, and we still have this one program that doesn't work. To sum it up, we were renewing content; it was not just migration. There was a lot of work with the theme; it ended up fine, I think.
Also a lot of work with Plone 5, but now it works fine. Communication is important and Mosaic is a blast. It's been a really fun year, even though some of us are tired from the renewal, but we need to continue work with the intranet and search and the other faculties and so on. To answer the question, are we there yet? No, we are not there yet, but the world goes on. That's just fine. Okay, thank you. Any questions? Thank you very much. Do we have any questions? I have one. You are using Mosaic a lot on the pages and you have a lot of tiles. Did you experience any performance issues when having a lot of tiles with sub-requests? I don't think so. We have a lot of tiles there and the pages are long. The problems weren't with Mosaic; they were with something else in the winter, and we fixed those things. Did you get a lot of requests to develop custom tiles? Once you start with this, my experience with any kind of tile system is that people want this special thing or that special thing, and before you know it, you are developing 30 or 40 custom tiles. We have maybe 10 custom tiles, but hopefully not 30 or 40. Yeah, people are asking for them. We get the requests. So far, we are doing fine with that. You are using Mosaic to implement your page layouts. My question would be, do you impose any limits on your editors, how they can change the layouts? If you do, how do you do that? Yeah, we have limitations. We have certain predefined layouts, which have fixed places here and there and a couple of places where you can add your own tiles. So we limit things. But we also have this basic Mosaic page where you can put as many tiles as you want, so we can't limit that. I'm just curious if you're using Plone also on the admin side for new student signups, or is that APIed out? No, we are not using it for that. Is there any plan to integrate that, since Plone actually can manage those things? No, in Finland there are huge projects considering how to handle new students, so we are doing that. Can you say a little more about your decision process for moving from Google to Solr? I'm sorry if I missed that a bit. Yeah, about our decision to move from Google: I don't know, we have been kind of happy with the Google Search Appliance, but it costs — I don't know, somehow we did have Solr before, and then we had Google Search Appliance and now we're going back to Solr. That's a technical decision; if you ask us, we will tell you more about the reasons behind that. Okay, so remember my other presentation. Thank you. Thank you very much.
A year ago our university started a massive website renewal process - upgrading from Plone 4 to Plone 5, designing a new theme and moving to a new organizational structure. In my presentation last year I tried to anticipate the challenges we would face and how Plone 5 could help us. Now it's time to look back (and forward) on how things went, where we are and what we could learn from the experience. This is about Plone 5, theming, Mosaic, agile, and people.
10.5446/54841 (DOI)
Thank you. So proud to be introduced by you. So, today I want to talk about an alternative approach to Plone theming. I have some slides to explain what I'm going to talk about and what I'm not going to talk about. So, for those who didn't read this stuff. Is this correct? Any Japanese guys in here? Is it fine? Nice. Stefan Antonelli, that's me. I'm from Germany. This is my Twitter handle if you want to contact me or ask questions afterwards. I... Oh. You're not seeing anything. Pull the microphone. What do you mean? I should keep it. Is there anything to see for you? No. No. I see it on my screen. Not better. Yeah, better but... Okay. I cannot enter... Is it my bad? My computer... It's fine on my screen. I don't know. Any ideas? Really? Any ideas? Maybe you. Connection is there. Never had this before. Okay. Go for it. So, my Twitter handle if you want to contact me afterwards, and the GitHub repository. What is this talk about? It's an interesting topic. That's why I need to explain a little bit about it. I showed Max Jakob from the University of Munich a couple of weeks ago how we do theming at the moment, or how we think we should do theming to not waste a lot of time and to get to the end after a while. And he said I have to talk about that. So, that's why I'm here. We do theming based on Barceloneta. I don't follow the idea of introducing an all-new theme or introducing a completely foreign template. We use Barceloneta as the basis. We override some of the templates where we need some changes. It's extendable to use the resource registry if necessary. And, in my opinion, what mostly fails is that the mobile approach is a little bit missing, so we found solutions for dealing with it. The talk is for guys who are not willing to switch to this new front-end development age, basically. We are not talking about best practices in Plone. So, if you want to discuss that afterwards, feel free to contact me. I'm not talking about this whole REST API thing, Volto, React. I've been to the talks — very interesting stuff, please go there. As soon as you hire a front-end developer you will be happy. I am. So, how to achieve this? As I said, our theme is based on Barceloneta. Everybody knows how this looks. My idea, or our idea, is: lots of it is fine, but let's make it nice and shiny somehow. We make use of existing elements. Lots of the stuff is really useful, especially for add-ons, for bigger installations like at a university. If you deal with forms, Barceloneta is okay for doing that stuff. So, we drop stuff we don't need or we don't want to theme. And there are some corners in Plone that you really don't want to touch. In most of our layouts we drop the search box; we find different solutions for integrating a search instead. We also drop breadcrumbs. They are mostly not necessary. This is maybe really up for discussion. We drop columns entirely. In my opinion, columns somehow don't work on a mobile phone. You always need to find a solution for that. My solution is: we drop them, just don't use them. We only keep the main column, and this indeed makes everything in Plone work on mobile in one go. Next topic, the toolbar. No more toolbar, please. Who likes the toolbar in here? Please raise your hands. I don't see any hands because of the light over there. I don't like it. I think it's not easy to see. It's not easy to understand how it works in the back end. It's not easy at all for me to deal with it somehow. So, we drop it. You can use the manage-viewlets thing to disable stuff where you don't want to render it.
If you want to replace it, we have a solution for overriding it. I want to talk about that a little later. We still have to create a package. This is also not a talk about through-the-web theming. I think everything needs to go into a package. You have full control. You can make use of plonecli and mr.bob to create a package; it helps you to set it up properly. Keep everything together in a Git repository. That's the way I guess you should go. So, the package thing: we also have to create a theme inside the package. It's part of the package, as everybody knows. Add this as a regular theme. Nothing new to all of you. We copy over the default index file that we have from Barceloneta. We copied it over so we are able to change it easily. That's our idea. Copy also the rules. I'm not a big Diazo fan, so it's difficult for me to understand how it works. It's also interesting to deal with it. You can follow the concept completely. It's okay. You can do powerful things with it. For me, we copied it over and we changed only a few lines where we need to change things. I'll talk about it later. The most interesting thing is: do not register any resources, bundles, whatever. That's the stuff where we had the most pain, trying to understand how it works, trying to understand how it needs to be extended. Especially in the old days of Plone 5, we had no clue about where to change it, where to touch it, where to do the actual bundling. Is it a back-end thing or a front-end thing? There was no really good documentation. Now, two years later, I mostly understand the stuff. It's still okay to use it, but my idea is: if you want to introduce another person to Plone, don't show it. Before, I said we drop a lot of elements. Of course, sometimes you need a header, you need a footer, you need to style that a little bit — use overrides. There is collective.jbot for the lazy ones. It's part of each theme. It's easy to figure out where the original template is, copy it, and change whatever you need. So here it starts getting interesting. Theming means you want to add your own styling, you want to add your own JavaScript. So where and how do we do it? Use a static CSS file and put it into the theme, into the theme folder, add it as a static resource, and just add it to the HTML. That's our idea at the moment. I'll talk later about how to extend this to use it inside a bundle, but I tried it and it works. It works quite well. Each time you touch the file, you can reload the page and you don't need to bundle or update stuff. This snippet is added to the head section of the index.html. For the JavaScript part, this goes at the end of the body section, as you know from every Bootstrap theme, for example, when you do static HTML; that's commonly used. There are no dependencies. We are not heavily making use of JavaScript in Plone. For the simple things we do, like using a little jQuery or adding another library that's not conflicting with the default Plone things, it can be used like that. There are no dependencies. We rely on jQuery, which is part of the core. I wait for that. They're behind me. We rely on jQuery, which is part of the core. For our projects, there was no need to update it. It was good enough to deal with it. With the examples, I said you have to tweak the rules a little, because all the CSS goes at the end of the head section, which is apparently after your CSS when you follow the example. The other thing we need to change is to add the original JavaScript from Plone before your custom ones, to simply override it.
We figured out that the easiest way of changing, for example, the footer in Plone is to rename IDs and classes. Then all the existing CSS no longer applies, you can add your own classes, and you can add your own CSS to style those classes. That's just an idea of how you can solve that. In my opinion, it's better than overriding the existing styling. If there is stuff in the styling you don't like, you can override everything without touching the template, but I guess the CSS gets complicated after a while. What is the result? I'll show you a screenshot. This is an example I did. I called it plonetheme.tokyo. It's also available on GitHub. Try it if you like. The same concept you can see in Pastanaga: just dropping elements makes it a little cleaner, makes it nice. I love it. The whole navigation is hidden behind a burger icon. I want to talk about that a little later. I think it looks pretty good, even on mobile. The UI works for mobile as well. I also think you should not separate between desktop and mobile for the navigation. The user switches between the devices. I don't understand why you should force him to understand different navigation concepts for that. My idea is the mobile navigation should work the same way as the desktop navigation. There is no separation between the themes in this way. The luckiest thing is, as long as you use Plone's main column, it works on mobile. It works out of the box also for editing. You don't need to change anything. Just try it out. It works quite well. Quick summary for that. We still get to use Plone. No yays here? I'm so happy to still use Plone, because I don't want to switch to another framework. You don't need to know about the Diazo rules. You don't need to deal with the resource registry. This is my personal opinion. I don't want to argue about that. The bad things — the front-end guys can talk about that. Of course, this is not one of the modern, straightforward, fancy frameworks that look good or interesting, but those are completely different technologies. It's more like old-school theming, and I guess it will be around for the next five or ten years. Whenever you run a bigger site, you can hire a big front-end team to develop the theme, as you did in the past. The common technique works a little bit longer, with some tweaks pointing to mobile, for example. The solution is not so sexy. Carrots are more interesting at the moment. I've heard a similar quote in the Volto talk before. This fits also for me when I started with Plone 5. I also worked with Plone 4 and Plone 3 and also Plone 2. When I started with Plone 5, I have to say, it's really difficult to dive into this front-end thing. I love it. I love what I see in Volto. I love something like Angular, but it's a completely different technology. It's your decision if you want to learn it or not. I can see lots of back-end guys, Plone guys — they love the technology, the complexity, the security stack, everything that makes Plone great — but they don't want to deal with curly braces at all. So next question: to bundle or not to bundle? I guess this is where we can discuss afterwards a little bit. Since we have a theme package, you are free to create a bundle. You can do the regular theming stuff. Most of the concept works together with a bundle. I showed an example of just adding static files. Of course, you can change that to use Less files and JavaScript files as resources, bundle them together, make use of all the bundling stuff and put it into your package.
That's what we do in most projects, basically. The static example is just for if you don't want to. You're free to make use of external tools. In some cases, it's required: when you want to touch some of the Plone features and functionality, then you need to touch existing JavaScript or you need to touch patterns, for example. Then there is no way out. You need to create your bundle and override existing resources. But it's extendable. So starting without, and switching and turning it on later, is a common way. Okay. No questions so far. I asked a question. Wait, did he say no toolbar? Yeah, we try to avoid it. I was discussing with some guys and we came up with a solution that says: why not bring the editing features of Plone and the navigation story together in one thing. In lots of projects we have, login and editing are needed along with the mobile experience. This is where, in my opinion, the toolbar is great for editing Plone on a desktop. When you need to create a mobile site, enable the login for the mobile site, and enable the user to do something on your site, it's difficult to achieve that somehow. So we had the idea of creating something that is called a sidebar. And the concept of the sidebar is bringing editing and navigation together. We started a project that is called collective.sidebar. Before you analyze the code and talk about it, please be careful. It's very new. We're working on it. So it's not really for production use — well, we use it in production, of course, but it needs some love. It's under heavy development and we want to work on it during the sprint. And the idea of it is to bring the toolbar and navigation together. It acts as a drop-in replacement for the toolbar. So it's not a theme. You can add it and it disables the toolbar and shows the sidebar. We had the mobile-first approach in mind. So, as I showed in the screenshot before, we tried to keep it easy for integrators or for other back-end guys to theme it or to change it. So everything is in one template. There is only one template you have to override, and then you can drop stuff, extend it, change it, whatever you like. So I guess I'll show a little demo — without showing it, it's difficult to understand what I'm talking about. This is the site you have seen before. We added a handle for that. It's a burger icon. This is part of the Tokyo theme. If you don't use the Tokyo theme, or if you use collective.sidebar alone, it will add this burger icon to the regular navigation of Plone, to give you something to click on to open the menu. The idea is to add some more handles so you can integrate it into your theme without having to deal with it. So different classes should be available for a click handler. If you open it, at the moment it slides in from the left. We also had a project where we slide the stuff in from the right. So a different project had lots of features in it that are not part of the collective package at the moment. We're curious about the work on that. We want to extend it. We need some discussion about how to achieve the stuff, to respect different ideas on how to use it. But that's the starting point at the moment and we are going to extend it. So everything which you don't see here now is maybe part of it in the future. For example, configuration: my idea is to add lots of configuration settings to configure it for your needs.
Everything you know from the toolbar is part of the sidebar: workflow things, editing links, different display options, adding new content, as well as the search. So let's go to the front page. It looks a little bit different. At the end there is the navigation part. This is, for example, a setting. I got some feedback saying we have to move the navigation part to the top. You know, ideas — there is the collective package, have a look at it. There is an issue tracker. We add all the issues, or the stuff we want to add in the future, as issues for now. There is no roadmap documented, but we work on it and we are happy about ideas and feature requests for what we need to add to get this commonly used. One idea is, for example, to put the search in here on a desktop, to use the space that is wasted otherwise. The search feature, a site map, something can go in here. Interesting thing: it works on mobile, as promised. So if you open it on a phone it works like that. Okay. The code of the theme you have seen is on GitHub. I added it yesterday evening. There will be some improvements, I guess, but it can be checked out and used. Also the sidebar code is on GitHub. We have to make some releases in the future. So I think for now it will be: do a source checkout, tie it together, and then you can make use of it. Questions so far on that? I left some space for questions and discussion because I'm curious about feedback on that and how we can bring this forward. Very dark outside here. Not yet. We discussed that already. I like the idea of the bundling in Plone adding a cache key to the file URL, basically. This is something we need a solution for. We also discussed the idea of enabling it to do offline bundling. So the caching is an issue, and if you want to use Less or Sass instead of CSS, of course you have to use external tools and this needs to go back in. I guess it's possible to work with the Diazo rules. Somehow we need some type of cache key to not run into exactly that problem. Good question so far. Feel free to ping me afterwards. Maybe during the sprint there is time to discuss. Okay. No more questions? As far as I can remember we followed the order of the toolbar a little bit. I'm not sure about that. It's sliced out from a customer project of ours where we had a static part on top, static links that can be actions in the future, and Plone — like the site actions — they are already there. We can put them in, so that's extendable and you can add actions if you like. This is basically what you see. Settings is only here because I'm logged in as administrator. So it respects permissions, of course. I can also show that quickly. Then you don't see all the stuff, of course. The ordering itself — at the moment there is no special order. We were discussing that: make it basically configurable, to have the different sections and add some ordering in the back end where you can order the things. For now the only feedback we got is: navigation needs to go on top. This is something we will change quickly. Ordering the elements, I guess, could be achieved easily somehow. Something for the future. As well as collapsing the different sections by clicking on the headline. This is also a feature we got a request for. It's also not that difficult. We also had — I'm talking a little bit about the future — if you check the GitHub repository there is an initial list. There you can see some of the ideas we are willing to work on. We already have some of it working in a local project which is not published.
We will go over the stuff step by step and add functionality. We make it configurable when we add something. I'm looking forward to some discussions, to get different ideas and different feedback on it. This is basically the list. Ordering was one of the things you mentioned. Another one is a locking feature. You should be able to lock the sidebar so it's visible permanently. When you're heavily using the Plone site as an editor you don't want to open it all the time. There is a little lock icon you can click and it will stay open. This cover goes away and you can edit on the screen as usual. At the moment it's planned as an overlay. We need to discuss if this is the result of not touching one of the default templates from Plone. This comes out without adding a theme to Plone. When we decide to add it as a view next to the body, somehow we need to touch the HTML, and then it may conflict with an existing theme. At the moment you can use it as a standalone project. I can try to add this quickly to show. No, I won't. Without the Tokyo theme it would look like that. We added the handle next to the home button. Not the best design idea, but it needs to be changed in a custom theme anyway. It works the same way without touching the existing layout. This is the idea of the add-on. Any more questions? No. Kim. What about CastleCMS? You've got that where you're exposing all the actions in a flat way, which is good for usability. But because you're hiding it, it's less good. So I mean, you're kind of doing something that I think is useful for a lot of users. It lists all the things that are possible, which is good when talking to editors. So that doesn't seem like a good thing. I have to agree. But you have to think — you're an administrator. A regular user doesn't have a link to the site's configuration panel. A regular user is not allowed to add a collection, which is one link less. A regular user can maybe not change the layout, which drops a complete section. So anonymous users only see the static actions, or the site links, basically, and the navigation. In my opinion, I would drop the search as well, because it should be a configurable option to show the search or not. So when we release the first version, which is something like 1.0, then you can configure: should the sidebar be rendered collapsed, with only the navigation open for now, or do you want to open everything? There are some UI decisions you have to make. And I guess the only way we can go is to leave some of those decisions to the integrator that uses it for his project, or for his website, or for his intranet, or whatever. The idea why we use this as an overlay, or as a slide-in, is basically a technical problem, because showing it always means it's not working on mobile. And to me, it's important to have exactly the same experience on mobile and on desktop. And I have to agree it's maybe not the best idea to use it on a large intranet, where people expect some kind of navigation or menu next to the body. But for most projects we did in the past two years, this was the perfect solution. And yeah, open for discussion on that. So as soon as we touch the default HTML or the default templates of Plone to integrate it somehow, then we have more options. Better than this overlay, I think I like the push version of it. It makes it more part of the website itself when you push the body a little bit and show the sidebar instead of doing an overlay. This 1980s screen here is also a little small.
So on my notebook, which renders a fake resolution of around 1,200, which is lower than any modern display you can buy at the moment, there is enough space to render it without changing any of the default layout. So in the end, you have to make some decisions about what you want to support or not. Okay. We want to work on it during the sprint. So feel free to contact me, give me your opinion or support it a little bit. We have some questions regarding the navigation inside the sidebar. We have some questions regarding some of the actions — what is the best option to get them in without reinventing the wheel? So, for example, what you've seen for the workflow is code we copied over from the toolbar. For the navigation, this wasn't possible, so we didn't find where exactly Plone renders the add content types menu. Okay. Thank you. Thank you.
There are lots of use cases where you want to keep Plone's default UI for anonymous and authenticated users. An alternative approach to Plone adds a custom mobile first theme based on Barceloneta without diving into Diazo rules and resource registry.
10.5446/54844 (DOI)
So, today — my name is Yusei Tahara — I am going to introduce our CI testing system. First I will briefly introduce my company, then I will show the system from the user's point of view, then the technical details and our software, and finally how you can build and extend it yourself. About my company: our headquarters are in France, and I have come here from the Tokyo office. When we started the company, the question was how to deploy our software. Linux distribution package systems use RPM or deb packages, but they did not really work for us. Our software is a combination of various open source components — Zope, CMF, Python libraries, MySQL and so on — and with RPM each component is packaged independently, so it is hard to get exactly the combination you need and hard to reproduce the same combination later. Taking the Zope part from RPM packages, which seemed natural ten years ago, made it very difficult to reproduce the same combination of versions. Another point is that we often need many instances of the same combination: we want to create execution environments simply, so that development, testing and production are really the same environment. Building many such combinations with distribution packages made hardware management difficult. So we came up with a new idea, and I will explain the technical details. First, from the user's point of view: a test suite basically means testing your own fork of our software. In our setup there are two Git repositories, one with the software and one with your own buildout definition, so you keep your own test suite definition in your own repository. The details of a test suite are stored in a document on the server, so I can always check them there. For the user, using the testing system is easy: first you say you want to create a test suite, you register the repositories and validate the test suite, and after that, every time you push a commit, the tests run automatically. That is basically it. Now, this is an overview diagram of how the system works, and I will explain it step by step. There are three important entities. One is the test node: the test node is the computer that actually runs the tests. Then there is one more machine — have a look at the name, the SlapOS master. And the system is built with our open source software ERP5. First, about ERP5: ERP5 is a Zope application, like Plone, but the way the software is built is a bit different. The ERP5 buildout is very large and complex — I think it is probably one of the largest buildout profiles in the world; the ERP5 software release has around 450 sections. Which recipes do we use? The most used one is slapos.recipe.cmmi — configure, make, make install. With these recipes we build everything, Python itself, GCC and so on; we also use plone.recipe.command and Jinja templates. The Jinja templates are important for us: with Jinja we generate configuration files dynamically, and that is how we build the application. On top of that, we liked the idea of using the application itself to manage hardware. So one idea was a resource management system: it is implemented with ERP5, and hardware — computers — are handled as resources in it, so we can manage hardware resources with the resource management system. The key point is that there is a master node server. Computers, software profiles and software releases are registered there, and everything is organized in directories. The user communicates with the SlapOS master, and the SlapOS master tells the computers what to do. First you supply a computer; then, for example for ERP5, you first install the ERP5 software release on that computer. At that point the software is built, but no application instance is running yet — that is only the first step. Here is the difference from a Linux distribution package system: with a MySQL RPM you get one MySQL on the machine, but with this approach, once the software is available, you can create many MySQL instances from the same software, each in its own execution environment. That is what SlapOS does. The deployed instances are managed with supervisord rather than systemd. And the recipes that create the instances are important: they use the slap configuration. Normally with buildout, if you run it twice you get the same result; with SlapOS the software part stays the same, but each instance gets its own parameters. A typical application present here is, for example, a set of Zope clients that are part of the application.
Each of the deployed clients has a slightly different configuration. Using this SlapOS configuration recipe, the configuration parameters are fetched from the SlapOS master node and fed into the instance buildout. There are two user interfaces for this, a web user interface and a Python user interface. Here is an example with the Python interface: there is a request method. With this method you ask the SlapOS master node for a new ERP5 application environment and pass the parameters describing what you want. Here, for instance, I request eight Zope clients, single-threaded, and set the HTTP port to 3200; one group of four clients is configured with the TimerServer product and a timer interval parameter. These parameters end up in files: the instance buildout uses the SlapOS configuration recipe and the Jinja template recipe to generate the configuration files dynamically. In this way each application environment differs only in small configuration details, such as HTTP port numbers, while the software itself is exactly the same. So all ERP5 instances are built from the same software release, but each instance has its own configuration. That is our buildout solution: it is a two-level buildout. The first level builds the software and is static; the second level creates the application execution environment from the configuration parameters stored on the master node, and that part is dynamic. For example, if I have one ERP5 application environment and I change the number of Zope clients in the parameters from four to eight, a new configuration is requested; the SlapOS node processes its instances every few minutes, so the new configuration is applied automatically and you end up with eight clients. Finally, let me quickly explain how the test setup is done, from the administrator's point of view. First you take a new computer. SlapOS node itself is distributed as Debian or RPM packages, so you install that package on the new Linux server and register the computer with the SlapOS master, so that the master knows about this new computer. Then you supply the test node software on that computer. After the test node software has been built, you go back to the SlapOS master user interface and request a test node instance with custom configuration parameters; a new test node application environment is then created by buildout using those parameters. One of the configuration parameters tells the test node where the test master server is, and that is how the test node knows which test master to communicate with. Once the test node application is started on the computer, it connects to the test master, and inside the test master a test node document is created, corresponding to that test node computer. The demo video showed how to create the two Git repositories — the software repository and the repository with your own buildout profile for the test suite. To run the tests, the test node builds the test application environment; one command name is hard-coded: runTestSuite.
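The transcript does not preserve the code from the slide, so here is only an approximate sketch of what requesting a new application environment through the SlapOS Python interface might look like. The master URL, certificate paths and the parameter names in partition_parameter_kw are illustrative assumptions rather than the real ERP5 software release parameters, and the slapos.slap calls are written from memory, so they should be checked against the SlapOS documentation.

# Approximate sketch; URLs, file paths and parameter names below are invented
# for illustration and are not the real ERP5 software release parameters.
from slapos import slap

client = slap.slap()
client.initializeConnection(
    'https://slapos-master.example.com/',    # SlapOS master node (assumption)
    key_file='/etc/opt/slapos/user.key',     # client certificate (assumption)
    cert_file='/etc/opt/slapos/user.crt')

# Ask the master for a new application environment built from an already
# supplied software release, passing the kind of parameters mentioned in the
# talk (number of Zope clients, thread count, HTTP port, timer settings, ...).
partition = client.registerOpenOrder().request(
    software_release='https://example.com/erp5/software.cfg',  # assumption
    partition_reference='my-erp5-environment',
    partition_parameter_kw={
        'zope-client-count': 8,   # illustrative parameter names
        'thread-amount': 1,
        'http-port': 3200,
    })
print(partition.getId())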
pizzasを閉じてこのスクリップを試してみます5を試してみますこのランテストスウィートコマンドを試してみますこのスクリップを実際に行って実際にスクリップを始めますそしてテストノードがクローンドギットレポジトリを試してみましたユーザーでテストマスターノードを使ってテストノードをクローンドギットレポジトリをチェックしてみます新しいコミットがあって新しいコミットをクローンドギットレポジトリを試してみますランテストスウィートコマンドを試してみますランテストスウィートコマンドが実際に行ってランテストスウィートコマンドを実際に走り出す実際に試してみますテストマスターノードを試してみますテストノードのリザルトオブジェクトを試してみますそして試してテストレザルトドキュメントを試してリザルトラインズサブオブジェクトを試してみますリザルトラインズサブオブジェクトを試してみます実際に試してみます例えばERP5のテストスウィートコマンドは100のテストレザルトラインズを試してみます100のテストレザルトラインズを試してみますそしてここにDC1のテストマスターを試してみますここにDC1のテストマスターを試してみますそして実際に全テストレザルトラインズのドラフトステートを試してみますそして一つのテストスウィートコマンドは多くのテストノードを試してみますそして各テストノードを試してみますテストマスターのノードを試してみますそしてドラフトステートレザルトラインズを試してみます一つのドラフトステートレザルトラインズを試してみますそしてコレスポンディングテストケースを試してみますこのように大きなテストを試してみますパラレッドでそして2つのオプティマイズションオプティマイズションこのシステムでオプティマイズションを試してみますオーダーのテストレザルトラインズを試してみますそして一つのテストノードを試してみますまずはドラフトステートノードを試してみますそのため測定の状態を試してみますこれを試してみますプレビュースリーフェイルドテスト試してみますプレビュースリーフェイルドテストケースプレビュースリーフェイルドテストケースエネルギーを実施しますこのように続けてデ ナヂ デ ナ ダ ダ ダ ダ ダ ダ ダ ダ ダ ダ ダデブロッパーは、いったいフェイラーで、フィックスをして、次のテストの結果を見つけたいと考えています。そのため、前回のテストは、前回のテストではなく、他のテストでは、使われていることができます。それが、私たちのテストのシステムを使っていることです。それは、少し complex じゃないですが、このシステムをよく使うことができます。実際に、同じ技術を使っていることを使っています。他のテストのために、バーチャルマシンを使っていることを、私たちのプライベートクラウドで使うことは、多くのテストのシステムではなく、使われていることを使っています。ありがとうございます。ご視聴ありがとうございました。ありがとうございました。
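To make the two-level buildout idea a bit more concrete, here is a very rough sketch of what a software-level part and a Jinja-templated instance-level part could look like. It is not taken from the real ERP5 profile: the URL, part names and most option names are illustrative assumptions; only slapos.recipe.cmmi and the Jinja template recipe are actually mentioned in the talk.

    [buildout]
    parts =
        mariadb
        instance-template

    # Software-release level: compiled once per computer, completely static.
    [mariadb]
    recipe = slapos.recipe.cmmi
    url = https://example.org/downloads/mariadb-x.y.z.tar.gz
    configure-options = --prefix=${buildout:parts-directory}/mariadb

    # Instance level: re-rendered whenever the SlapOS master sends new
    # parameters (e.g. a different HTTP port or number of Zope clients).
    # The option names below are assumptions; check the recipe documentation.
    [instance-template]
    recipe = slapos.recipe.template:jinja2
    template = ${buildout:directory}/instance.cfg.jinja2
    rendered = ${buildout:directory}/instance.cfg
    context =
        key http_port slap-configuration:http-port

The important property is that the first part never changes once it is built, while the second part is regenerated by the instance buildout from whatever parameters were requested on the master.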
I introduce an original CI testing system based on zc.buildout that Nexedi uses everyday for running unit tests, functional tests and scalability tests. All code is published as free software.
10.5446/54845 (DOI)
Okay. Good morning everybody and welcome to our talk about Game of Plones and gamification. My name is Jörg. I'm from Interactive. We are a Cologne-Germany-based Plone Service provider. We have 13 people organized in scrum teams and our main business is doing development projects mostly based on Plone. And before I jump into our topic of my talk, I would like to introduce Johanna. Johanna is a scrum master at Interactive and she prepared a little warm-up game for us. So. Okay. Hi everybody. I'm Johanna or you could call me Jojo or YoYo whatever. It's fine. And I prepared a little game to start the day to wake up a little bit and I would like you to everybody to get up and come in front. And the first task of the game is to for you to self organize in two teams. Yeah. So team one should be standing here. Team two on this side. You should have the same size, group size. That would be great. Okay. Let's see. I would say this group is a bit bigger. So maybe one or two. Go to the other team. Please. Just another one. Come over. Okay. So I prepared some paper and pens over there too. The next task would be to each of you should write down one word that is overused on your day to day work. It can be technical or in context of project management words you use every day. So. And then fold it the paper together and put it in my beanies. Words you use on a day to day basis. Any word related. It needs to be work related. So the word should be work related. By the way, you can keep the pen if you want to. Okay. So you are ready. You're finished. Okay. Great. Okay. You finished too? This is great. So we're going to switch the beanies. Here you go. So and now each team has 90 seconds to explain the word. So one of you goes to the beanie, picks one word and explains the word without mentioning any form of this word. That's written down to your teammates. And they have to guess what it is. And for each correct answer, your team gets a point. And after this 90 seconds, the other team has to do it. Okay. So this team starts right now. Good. Next one. So I guess you got six right? Or was it seven? Six? Or seven? Okay. So now the other team, you can start now. Over. Thank you very much. You can go to your seats. Okay. So the intention of this game was to work on these words we overuse on a day-to-day basis and to learn, relearn to define them. Because often it is we use words, we know them because we use them every day, but our customers, they don't know what we're talking about. So that was a playful way to relearn definitions of some words. Yeah. And that is one. Oh, sorry. Yeah. Sorry. Yeah. So we played this game too. And we were thinking about more ways how we could relearn or motivate our employers. And Jörg is going to talk about that more now. Okay. Thanks a lot. Thanks for participating. So I think we had a little bit of fun. And at the same time, we learned something. So we're in the middle of gamification. Maybe a couple of words how I came to this topic, why I'm interested in gamification. I said already that we are a project organization. We're doing software development. We have scrum teams. And as a team lead, I often encounter challenges like how can I motivate a team over a longer period of time, for example? Or how can we effectively learn new skills, new techniques, new methods in our field of work? Or how can we keep the team spirit high over a longer time? So I came across this buzzword gamification already a long time ago. 
And I read a lot of articles mostly about bigger companies, what they did and how they improved their performance in marketing and sales in software development with gamification. And I thought that could be something for us. And let's just try it what we could do. So let's start with the question, what is gamification? If you look at the Wikipedia definition, it's pretty simple. Gamification is the application of game design elements and game principles in non-game contexts. And non-game context for us is our business context, of course. And so we wanted to see what we could do there in our business context. And actually, if I ask around, if you already came in contact with gamification, how much do you let, who came in contact with gamification already? Okay, so yeah, most hands or lots of hands went up. But I think every one of us already came into contact with gamification in some context. Mostly if you use your smartphone, if you have, for example, a fitness app, health tracking app or something, there are always elements of gamification in it. You do something good, you get points for it, you get rewarded afterwards for what you did. And that's one of the main principles of gamification. So in the characteristics in general of gamification are that you have a goal. You have players who want and who need to reach this goal. And then you have rules. The rules of the game, they give the framework, they tell you how to reach the goal, what to do, what the steps are to be successful and to get your reward at the end. Then you have a feedback system normally so that you as a player, you always know where you are in the game and what you have to do, what's still ahead to reach your goal. So constant feedback would be something very important. Then voluntary participation is a very important element of gamification. So you can never or you should never force somebody to participate in a game. It should always be voluntarily. And then of course there should be some desired outcome in the real world. It's not only about the game, like this little game we just did, it was not just to have fun, that was one part of it. But the real goal and the goal in the real world was to learn about something of our work. We wanted to explain those words. And last but not least, gamification should be embedded in the company culture. Like if your company culture is very strict and is not playful, gamification will probably not work. So there must be a certain alignment between what you do and your culture. So as I said, we are working in agile projects and mostly scrum. And in scrum we already came across gamification or it's already built in some sort. In scrum, if you for example think of team estimation game or planning poker, those are games that are used to estimate your user stories for example. So it's already game elements in this agile approach. And what you see here, these are pictures from one of our in-house scrum trainings. And that was with Lego. We built together, we had teams and we built together little houses and the whole process was structured in the scrum way and we did the whole feedback cycle and everything with Lego. So that was the very beginning, what we tried to do with gamification. And in agile work environments, it's pretty common to use gamification to help in collaboration and communication for example or to learn about agile techniques, about testing, about refactoring and so on. And also what we did in our little game before, strengthen self organization of teams for example. 
So when we saw that we are not the only ones doing gamification in our agile work. There are meetup groups everywhere. That's just one example from Munich. We also have meetup groups in Cologne and there are over 800 people for example meeting regularly and just playing games after hours. And then we thought okay, we should do that too. And now once a month or every two months we have that at Interactive. We meet with our team in the evening for one hour or two hours and we just play little games that are fun. That's one part of the thing. But then for the other hand we really want to learn something in the same time. So if you want to implement gamification in an organization for example in a software development company, there's a certain process you should follow from our experience from what we saw. And the first thing is you should define business goals so that your gamification program really has this deeper goal you follow and it's not just about having fun and about playing. You should then define the desired outcome, the desired behavior. Normally you want to change people's behavior in a positive way for example and you should exactly define what you want to reach, what your goals are to be successful. Then it's very important to get approval from your players, from your team so that they really want to play this game. As I said you should never force somebody to play with you. And then also you should analyze the structure of your team, what player types they are. There's a so-called Bartle test and there's a certain classification of people. It's mostly meant for online games but it can be used pretty well for our gamification approach too. So that you do a little online test with people and then you see do they like to explore things or are they more the killer type so that they want to shoot things or whatever. And they have in the Bartle test they have different categories of players. They're called killer, explorer, socializer or achiever. And then you know better how your team is structured and how you should design your gamification program so that the tasks or the quests you develop for the players are really aligned with their needs or what they want to do. Then yeah you should design the game in detail, the sign in the sense of really making up the rules and the story and everything. And then you need software, you need the platform. You need to do something to log all the quests, all the tasks you want to do. You need to give the points and so on. So you need the software and that's the main part for us. What we had to decide how we want to do that, everything else was pretty simple but then we came to this question how could we manage our gamification model or gamification program. So then when you start the program it's very important that you have this feedback cycle. That you cannot just let the program run, the gamification run and it will be good. You should always like in agile 2, like in scrum 2, you should have the feedback process. You want to know from your players how do they feel about the game. Is it okay and you for sure have to adapt after a while. You have to get new quests, you have to change the topic of your game and so on to keep it always updated, to keep it fresh and to keep people really motivated to play on. So we're in this process of deciding what software to use or what we could do, how to manage our gamification program and we looked around what's around and one big chunk of software is commercial software, mostly software as a service. 
You can rent the software, you pay on a per-player basis mostly, and then you get a fully functional ready-made software where you can just start. There are some examples there on the right side of the slide, software tools that you can use to start tomorrow with a gamification program. Then I found some add-ons and some open source tools which promised to do gamification in some sort, add-ons for Plone even, that you could use, but at the end we were not satisfied with the functions those tools gave us. So finally we thought why not use Plone, or at least consider Plone and look at the options we have, and yeah, we finally really decided to use Plone. What you need in such a gamification platform: of course you need players, you need to be able to create and manage users, players on the system; you need an administration interface where you can add new quests, where you can see what the players did and so on. You need workflows, so that you can create a quest but not publish it yet, maybe later you want to publish it, or you want people to be able to send in a quest for review, and after the review they get their points and then they get their reward, so that would be a typical workflow in a gamification platform. And you of course need a nice user interface, a user-friendly application where people want to participate and where it's not a hassle to log in and do something, but it's just something you do while you work anyway. So the best thing would be to have some interaction with other tools or software you use in your company, for example if you use a ticket system and creating a ticket or working on a ticket would give you a reward in the gamification platform, then it would be nice if the ticket system tells the gamification platform that a user created a ticket or updated a ticket so that points are rewarded automatically. So then, apart from those technical things, you need a good story. Every online game you know, everything you play, is embedded in a good story, and so we also thought about what we could do, and our first story, developed for the gamification program that is starting now at Interactive, is based on Star Wars, and we called it Plone Wars. And so we started with a little intro on May the 4th, Star Wars Day; we created this to introduce the program to our team. I apologize, it's all in German, but the idea is just to give this story around our program and to make people feel that they are part of something bigger than just a little small game. I guess we don't have to read all that because you won't understand it anyway, so I will skip that. Then the game design: when you really focus on your game, how it should look, it's all around the quests. The quests are the center of any gamification program basically, so you make people do something, that's the quest, the task, and when they do something they get rewarded for it. There's never a negative consequence; if somebody doesn't do anything there's no subtraction of points or something, it's always about rewarding good behavior or doing something good. Since we work with Scrum and we use the Fibonacci numbers to estimate our user stories, for example, we thought okay, our reward system could be based on that too, so we gave the points for quests; those points are 1, 2, 3, 5, 8, 13, 20, 40 and 100, as we do in our estimation games.
That's the point part you get points for your quests then as a player you have a visible status that's also one important point of a game design so that you like in an online game you have a badge for example telling you at what level in the game you are and how many points you have to reach maybe to get the next level and this telling you how many points you need would be the notifications you get maybe email updates you get notifications on the platform telling you where you are you get feedback if you did something good if you finished a quest you are told what you did you are told what other people did so that you have a comparison to your team mates there's a review process for quests so if you finished something you have to give it to review not to a manager but to mostly to a teammate so it's a peer review system it's pretty low level so that just somebody says yeah I saw that he finished the task and I agree so you can get the points because we don't want to have a whole lot of administration overhead so it's pretty low level then there's an activity stream kind of dashboard where you see what happened what you did in the last little while and what all the other players did and finally when you collected your points there's a reward system and you can exchange your points for immaterial or material rewards the reward system is important for the motivational factor of course you should more build your gamification system on the intrinsic motivation that's the more powerful one but apart from intrinsic motivation so just have fun to play and be at the top of the leader board for example I think you also need some extrinsic motivation so you need some real rewards for your points so that you get whatever a cup of coffee if you did something good so then we designed our quests and we have a distinction between normal quests that one person can do we have missions that's a quest that one teammate can give to another teammate it's not made up by the game administration but everybody can create missions and then finally we have team quests team quest should be done by several people there are some examples of our real quest that we have from our system and the appropriate points for that quest then status of our gamification process we designed our game with the story with everything we wrote our user stories we have we did 11 development sprints every sprint was one week and we developed our story around Star Wars as I said we celebrated Star Wars Day on May the 4th we started the first episode the first part of the gamification program in October and every episode is 30 days so we can reward people after one episode after one month and then you go on you collect your points for the next episode and at the end of the year for example we have an overall winner of the game then after our first episode we had a feature sprint because we got the first feedback from the players they found bugs of course they had new ideas what we could do and they found usability issues in the in the program so we addressed all that and so they saw all the players saw okay it's getting better and we want to do that further on too so that with every episode we enhance our gamification platform and finally when we think that we can give this piece of software to somebody else we will do so we want to publish it as open source software so that if you're interested you could maybe try this for your organization too and enjoy your game now some screenshots how are we with time are we okay okay so some screenshots 
from our system that would be the lock-in screen again this is all in German so it's just the visual impression I can give you right now but the the whole system is of course fully translatable back and in front and you could have in English Spanish French whatever that would be no problem so that's the kind of dashboard where you have your activity stream you have your status you have your badge you have your points etc then you have one part where you can choose your quests you want to do we have categories all our quests are categorized so we want people to learn new methods for example we want to enhance their social competence and so on and the quests are categorized in those topic areas then we have our leaderboard so that's the manager view as a manager I see the whole status I see all the players in the list and how many points they have the players themselves they only see the first three players and their own position but they cannot see the whole ranking because it might be demotivating if you're the last in the list and then you see that everybody is better than you so that would be the store basically so the reward system where people can see what they get for their point for the points they collected so yeah it would be for example you get 10 euros for Amazon or you get a soccer ball or you get one day home office or you get a cup of coffee or ice cream or whatever and the higher the points that are attributed to the reward of course the more you have to do to get this reward and in the same sense the same thing happens with the quests of course I didn't say that when we saw the quest for example if you just bring out the trash it's one point because it's nice that you do that but it's not a big thing but if you contribute to bring us a new customer for example there I would give 100 points for this quest that you did so there's of course a certain leveling and evaluation how important the quest is you just did so that's also an administration interface where I can add the rewards and that's the manage view for episodes where I can create new episodes we gave them names according to the Star Wars series in our case but of course you can think the whole gamification program with a lot of different topics it could be a soccer league or it could be a game of thrones or whatever you like and also this the software recreated is completely flexible as an administrator you can change the color screen you can change the logo so you can do the repranding of your gamification platform pretty fast pretty quickly that's the administration interface for quests where I as an administrator have an overview can create and edit and delete quests and that would be a normal clone form to add a quest and the basic setting page for the gamification platform so that was just a quick visual overview and then now we want to do we would like to do something you shouldn't do we want to do a live demonstration and I know that Murphy's Law is against us and probably the internet will fail and the network will crash but we will try it anyway and try to lock in the system and I will show a little bit how it works okay this is can you hear me okay so this is the lock insideucks um uh Okay. Here we go. Okay. So this is the lock inside. And I will show you my, hopefully, my dashboard. So, okay. This is my dashboard. As you can see, as project manager and scrum master, I don't get a lot of points. So my position is nine out of 12. That's not so good. Here you can see the points I made. 
I need to do to get to another level. I made 28 points. When you scroll down a bit, you have the activity screen with the latest, I guess, 10 activities of everyone. I can switch to my activities. So you can see last night, I finished the quest of wearing some interactive clothes. We have t-shirts and sweaters, so you can get points for wearing them. I got points for doing fitness three times a week. And I actually got a message from another player, but I don't know who it was. So, yeah. What I want to do now is to get some points. This is the dashboard for the quests. We have here some common ones and the ones about learning new methods. And here we go to the engagement quests. We created a quest for attending at a community event like Plorn Conference. So I'm going to get 20 points for each day. That's good for me. So I locked the quest, and now I have to finish it. I'm not sure if this quest doesn't need any reviews, but I'm just typing in attending Plorn Conference 2018. Day one. I'm going to click this one. And as you can see, I have 48 points now. I'm position eight. And when I scroll down, you can see in the activity screen that I locked it there. So when I go to the leaderboard, as Jörg already told you, you can only see the first three players and my position and the other ones are anonymous. And then we have a player list. I don't know if we didn't do a screenshot of that one. You can put in there a motto. You can put in the birthdays over here. You can also change your profile picture over here and your motto and your email address. You can see here your rewards. I don't have any yet. And you have here the activity, everything I did since we started. Okay. Now it's your turn again, I guess. Okay, thanks a lot. And that's about it. Let's see. Thanks a lot for playing with us and for seeing our gamification platform. If you have questions about the gamification approach or about software, don't hesitate to contact us. Ask Johanna or ask me about it and we would be happy to talk about it. Thank you very much. Okay, I guess we have time for some questions over here. Do you have any integrations with RedMind, Yankees, any tools? Not yet, but we use RedMind and that would be the next thing on our list that we have an integration with RedMind, yeah. Any other questions? Come on, the question everyone wants to know. What about the t-shirts? Can we buy some? You don't have to buy it. I think we will just make a list or something and everybody who is interested puts his name and his email address on it and we will send t-shirts. Okay? Okay, over there. So how much, was there any custom work on the clone done for this or how much work was it to use out of the box clone to create the gamification platform? Well, we did 11 weeks of work, but actually it was a project for our apprentices for people who start in our company and it was a great way for them to learn about programming and to learn about clone and so for sure it took them longer than it would have taken a programmer who has experienced with clone, but still it was okay. I found that time frame okay to develop it and clone basically gave us everything we need. We have the workflows, we have everything we saw, so it was more like creating some content types for quests and so on and then doing the styling and yeah, so not a big deal basically. Okay. I'm curious why you would, I mean if you had your apprentices do this, that's great. Are you planning to try to get clients to use this because this looks really useful? 
It wasn't the plan or isn't the plan for now. It was really something internal for us. We wanted to use it for ourselves. We will see how it works, but the next step would be that we give it to everybody who wants to use it to get more feedback on it because for sure it's still a first version and it would be good to get more people playing with it and we could enhance it. I don't know, maybe the future holds, the business of future community in it possibly, but it's not the plan for now. I'm very kind of you because I really think it's good the way it is already, but. Okay. Any other questions? Great. Then I just created a mating t-shirt list where you can write down your name, your email and your t-shirt size. It's here in front. It's very old school with a piece of paper. Yesterday I had to talk about privacy, so don't look at the other email addresses. Please, just write your email address down, but don't look at the others. Thank you.
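As a side note to the Q&A above, where the speakers mention that the platform was built mainly from Plone content types and workflows, here is a purely hypothetical sketch of what a Dexterity schema for a quest could look like. The field names, the category vocabulary and the interface name are all invented for illustration; they are not taken from the Interactive implementation described in the talk.

    # A minimal, hypothetical Dexterity-style schema for a "Quest" item.
    from plone.supermodel import model
    from zope import schema


    class IQuest(model.Schema):
        """A task that players can lock, finish and get points for."""

        title = schema.TextLine(title=u"Quest title")

        description = schema.Text(title=u"What has to be done", required=False)

        points = schema.Choice(
            title=u"Points (Fibonacci-style, as in the estimation games)",
            values=[1, 2, 3, 5, 8, 13, 20, 40, 100],
        )

        category = schema.Choice(
            title=u"Category",
            values=[u"common", u"learning", u"engagement", u"social"],
        )

A review workflow of the kind described in the talk (draft, submitted for peer review, completed) would then be attached to this type with a normal Plone workflow definition.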
We have often wondered what we could do to motivate our dev teams and enhance team performance in our day-to-day business. Our solution: gamification. This lively case study demonstrates the introduction of gamification into our company and Scrum teams of Plone developers.
10.5446/54846 (DOI)
I think that, does anybody who was not at the Barcelona Guillotina presentation? Okay, just a bit of recap. Three years ago, we started to do something that we named Plone Server; naming, it's always difficult. We tried to push an asynchronous model to see what we could do. We met in Barcelona, 15 Plone developers. We tried to see what we could do to rewrite from scratch a backend using ZODB and everything else. One year later, at the Cologne sprint, we decided to rename it, to rename it to Guillotina, and to use a Postgres database directly, and we are here now. Now, one year later, after the Barcelona Plone conference, Guillotina is really stable, really mature, and I will be really happy to explain what it is now and what it is going to be from our point of view. First of all, who am I? For the moment, I will do the presentation. I am co-author of Guillotina, a Plone Foundation member, a co-founder and entrepreneur. I like to start new companies and I am a full stack developer. I am also a magician, a dancer, mostly a Catalan over everything. What is Guillotina? I tried to write a sentence. An ecosystem, it's not a framework. I like to repeat that a lot of times. It is not that it is one package, it is a bunch of packages that are creating an ecosystem of packages that are used to work together and they are designed to scale and to manage resources with security, traversal and storage. That is the main goal of Guillotina. The two main authors and main contributors are Nathan and me. He is not able to be here but he is in the slides a lot of times. I really want to mention him because we have been working on this for the last two years together and it has been a pleasure. Most of the ideas are just him and me discussing and deciding what to do every time. Before starting, this is going to be a code talk, and before starting with the grey slides, I wanted to do a bunch of colourish slides. This is my dream of what Guillotina is, or what I want Guillotina to become, and what it is right now. First, we wanted it to be something that is easy to use, besides the fact that it is async and you need to write async and await all the time. That allows to develop ideas that are scalable and extensible, with options to grow from one user to thousands. We had a use case. We were working at the same company and we had a use case where we needed to grow a project from a really small use case to a really large use case with lots of users and a lot of data. We wanted to have a pipeline of components to host resources. Resources is the name we use for what Plone calls content. To split into smaller problems something that may be a really complex problem from the outside. My dream here is that everybody could use this to deploy large systems, to iterate and to deliver fast, mostly designed for startups where they start with a really small prototype and they need to grow and grow, adding new users and new resources. That it is always community driven. That no company owns, has the IP or the rights on the software. It belongs to the Plone Foundation and I'm really happy about that. And trying to keep it simple and small. As you see, my dream is not to replace Plone. So it's not a tool to replace Plone, to replace the back end of Plone. It may become that, but it's not why we created this thing. We created this thing to be a tool for creating projects. Because that is the most asked question. So let's start with the technical side of the presentation. First of all, Guillotina is based on the Plone REST API. For that, thank you to Timo and all the team that was building that.
We defined this API as the entry point. It's a valid API. It's a good API to manage resources and to manage content. And, well, it was easy to implement the back end with this API because we also are traversal and it's the main request for this kind of APIs. Then for the storage layer. Here after iterating over different databases, we went through MongoDB and we went through even RADIS we tried. We decided that the best database that fits our needs for the transactional model and for the kind of way we were willing to interact with the database layer was Postgres. And later on appeared CockroachDB. I don't know if you know. CockroachDB is an amazing database built on go. It talks the binary protocol from Postgres. So you can connect anything that you can connect to a Postgres to CockroachDB. And it uses a raft consensus algorithm in order to distribute the data across different nodes with replication. And we have, we've been having some deployments with nearly, I don't know, 1,500 requests, transactions per second with millions of documents inside the database. So it performs really well. It's a bit tricky, but and it's new. So you need to follow up the book fixing and the versioning, but it's really, really amazing database. And from our point of view, we only needed to change a few more things because it talks Postgres binary protocol. For the indexing, indexing the content and being able to search and do full tech searching, we choose Elasticsearch that we kind of like it. And we also are providing support for Postgres.Jsonb. So we can serialize on the Postgres.Jsonb as a JSONb field. On the catch layer, we are using Redis and memory catch. It's a really small memory catch. And it's mostly everything delegated to Redis, which is doing the invalidation. And I'm going to talk a bit more about that later. And about blobs, we decided to choose S3 and Google Cloud Storage as the main citizens of blobs on Guillotine. Everything is in a sync framework. We are able to stream files up and down to S3 and Google Cloud Storage. So we don't need to host the file and memory ever. And we are just streaming from the browser or wherever client you are connecting to Guillotine to the real storage. And we only have a buffer of 500 kilobytes. And we are just streaming the file up and down. And we also support that the base blobs that Cook Storage and Postgres also has in case that you don't want to use a cloud file storage. Well, these are the layers, the main layers that we needed to choose to reuse the data to the tools that we have already stable on the ecosystem. But now let's go inside the Guillotine architecture. I just speak from Guillotine the most interesting concepts. And I'm going to try to explain a bit of them. First, the configuration. We kind of decided that we don't want to store the configuration on the database. So everything is explicitly configured externally to the database in a YAML that you can provide when you are starting Guillotine. This config.yaml that you are sending may overwrite any of the configurations that you have on your system. And what we do is we merge this configuration with all the configurations from the different packages that we are loading on our system. All this information can be accessible from the code when you are importing app settings from Guillotine. 
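As a rough illustration of that configuration mechanism, a minimal config.yaml and the matching app_settings access could look like the sketch below. The applications key and the app_settings import are described in the talk; the exact nesting and the other key names are assumptions based on my recollection of the public Guillotina documentation.

    # config.yaml (key names partly assumed)
    applications:
      - guillotina_elasticsearch
      - my_package
    databases:
      db:
        storage: postgresql
        dsn: postgresql://guillotina:secret@localhost:5432/guillotina

    # Somewhere in application code, after startup:
    from guillotina import app_settings

    # app_settings is the merged dictionary of all loaded packages' defaults
    # plus the values supplied in config.yaml.
    print(app_settings["databases"])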
So here you get a large dictionary that with all the configuration that you have for your site, everything, the elastic search URL, the database URL, anything that you need, it's going to be on this large dictionary. Decorators, because maybe we suffered a lot about GCML, we decided to move to the pyramid way of defining the different elements that developer needs. So we have a long list of decorators to define nearly everything that we think that it's needed. So the first one, configure.service, it's to define an endpoint. You want to define an endpoint for a specific kind of content. This one. You want to define a new content type, a new resource type, a second one. A vocabulary behavior. We implement behaviors as the first class citizens of Guillotine. Add-ons if you want to register an add-on to be able to be installed. Just subscribers and utility, different languages, even the permissions, the roles, granting and granting all can be defined as a decorator. JSON schema definition, it's a way that all the input output from the API is done with JSON. So we needed to find a way to be able to define pieces of JSON that are going to be reutilized in different places of the response. Value serializer and binary serializer are two decorators to define how you want to serialize and deserialize a specific field. Imagine that you have an integer. How you want to serialize that in JSON. And from JSON, how you want to convert that to the value that we're going to use. So you can configure your own for your own types. And renderer, right now there is only one renderer registered that is JSON, but we are working on having more renderers to be able to convert any internal structure to Protobuffer or any other kind of format that you want to use. More things. What happens on Guillotine when we are running Guillotine? The first thing that happens is that we register all the configuration, the merge that I said about all the dictionaries of configuration. Then we register all the adapters, events, and utilities that Guillotine has out of the box. Then we scan all the packages that are on the key applications on app settings. That means that if your application is not listed on this key, it's not going to be loaded. Then we copy it from Pyramid also. That includes me, root function that we are calling on the package in order to do specific things that the package needs to do at pushstrapping them. We register all the adapters from this internal package. Then we load multiple databases. This is important. Guillotine, you can define on the configuration file multiple databases that you want to load. And you can also add new databases when the application has already started or removed them. Then you configure a static folder and JavaScript apps folder. These two are for static files, clear? And JavaScript applications. JavaScript applications is you know that single-page app needs to have this wildcard routing that everything that it's from one point to two, as the children of that point, are going to be needed to be rendered as the same application. We created this kind of mapping where everything that you are pointing is going to render the same JavaScript file. So we can serve one-page applications without gins or anything else. We just need the Guillotine. Then we create an error-seq for generating tokens and load the asynchronous utilities. I'm going to go there later. Security policy. Well, like Zop, similar to what Zop is doing, we have the code configuration. 
It's the permissions and roles that we write down with the decorators that I explained before. The user provided global configuration. So maybe we have a user that is manager. So this information comes from the user information. And then we have our local security policy. It's quite similar to the Zop one. So you have roles, permissions and principles, and the combination of all of them linked to users and groups. Maybe the difference from the latest versions of Zop is that we have something called allow single, which means that if you give a permission on a specific note on the tree, this permission is not going to be inherited in the children one. So you can assign it. On this specific note, I want that this user is manager. So the children of this note, this permission is not going to be inherited. You can also deny or unset. Guillotine is not opinionated about users. Doesn't have nearly anything about users. There is only one user. Is that root user? Nothing else. Why? Because we wanted that anything that provides a user information, groups or permissions is delegated to another package that can implement maybe, I don't know, an OAuth system or wherever. So we needed to create an interface in order to extract and validate the credentials from the user that is connected and to get the user that is connected to the system. First we get the credentials. We provide basic support, a WebSocket token and cookie so we can extract the credentials from any of these systems. Then we can validate. We have JSON WebToken and SaltedHash. And finally, we get the user. We have only the root user on Guillotine. And then we have three different packages that provides different databases that provides users. For example, Guillotine.db users stores the users as content. So each user is the membrane project from blown a long time ago. So it's the same thing, but really simple. IDRA IDP connects to an OAuth to provide that it's written in Go that you can apply on-premise that it's really cool. And authentication provides delegated authentication with Twitter, Google, Facebook and all these kind of social networks. More things. Special permissions that we have. Well, first, access content. We created this permission that is a bit controversial because we wanted to make sure that a user is able to traverse to an object. So Guillotine doesn't implement the security proxying from Zope. It's not wrapping the objects in a security model that you make sure that you can access or not a specific field or do whatever with the object. So our protection of the specific object, it's done at the traversal time, just making sure that you can access to this object with the access content permission. And then the view permission that you specifically need to run the view. If you want to modify, you want to add, you want to delete, whatever. So in order to edit specific content, this content needs to have access content and modify content. Another special permission that we have on the system, it's mount database. So you can dynamically decide, I want to mount this postgres that it's wherever on this Guillotine or I want to do file system database. There is another permission that it's get API definition. All the end points, as it's based on completely a REST API, we are not rendering anything on Guillotine. It's just providing REST API responses, a permission to get the API definition, and Guillotine a public to have an only Moose can access to it. 
All resources, whoever is used to blown knows that blown has dexterity content types and behaviors. And you have large dexterity content type and some behaviors that you are applying to it. We decided to copy more or less the same idea, but adding something new. It's called dynamic behaviors. The behaviors that blown has is what we have here called static behaviors. It means that on the class, when you are defining the class, you are saying this class is going to have this amount of content, this amount of behaviors. But dynamically, when there is an instance of that class running, you can decide to apply a specific behavior on that instance. And it gets added to the index. Also if it involves adding annotations or whatever. So you can have objects that has the static behaviors and suddenly you want that it has an attachment. So I add the behavior of attachment or I add a behavior of Dublin core, whatever. Fields. Well we support most of the Zopi schema fields. The only one that is noticeable here is Cloud files. This is a specific field which depending if you have S3 storage or Google Cloud storage installed, it will store the file on the database, on your local Postgres, or on a Google Cloud or on S3. You just always use this field and depending on what you have installed, it's going to use one support or the other. Then we have some more kind of hacky fields. One is called dynamic field behavior. And this is what we call through the API, new fields that get indexed. It means that imagine that you have a, that you want to have a mosaic kind of layout where you are defining new fields and you are defining these fields and you want that the content of this field gets indexed so when you are searching on full text search you are also finding the content of these fields that you are creating. So we created this behavior where you can create any kind of fields on a behavior and then gets automatically everything indexed. JSON field. It's easy. You will define the JSON field with a JSON schema and you can push any JSON data that it's validated by the JSON field. Bucket field. This is a need that we had on the last project with Nathan. Sometimes you have an object that is really, really, really large. Something that you want to store on a note, a conversation that you have on Slack forever that maybe it's kind of millions of messages on the system. You want to store all on the same object instead of nesting them on children. So we created something that it's able to group 1000 sub-objects inside an annotation and link to the next one so we have a pointer to the first and to the last and we can go through the history of that large list. It performs really well and it's mostly designed to really long objects that you want to store on one object. Patch field. This field, it's really not a field. It's a wrapper of a field. You use that as a wrapper of a list or a dictionary or an integer, for example. And it means that in order to interact with that field through the API, I don't know, do you know if Plong Rest API supports patch operations right now? Yeah? No, no, no patch in terms of the verb. Patch in terms of the JSON that you can, you have a list on a field and you will just want to add one element on that list. It's hard coded. So this is a way to, so you define the field saying, okay, this field that it's a list instead of if I want to interact through the API, I don't need to send again all the list if I want just to delete one element or add one element. 
So you can define operations onto other existing fields and then you just need to say, I want to append and this value. And then automatically, Python is able to just append this element on the list. You don't need to send again on the API. And we also implemented for integers if you want to increase or decrease the integer value or reset to the default value. Storage. Well, this is just, I'm trying to go over everything, maybe too much, but just to talk a bit about the storage layer. This is the schema that we have for the database. This is how we are storing everything on PostgreSQL and most of the we have two kind of records on the SQL schema. One is the main object, meaning the object that is going to be on the tree. And the other one, it's the annotation of one of these objects. There is only these two kind of objects. The first element, the first column, it's the object ID and kind of romantic. We maintain it a lot of names from Zoop to make it friendly. Transaction ID, when we are in the voting phase, which is the size of the object that we are serializing. Part. Part is a specific column that we are using when the size of the Guillotine grows a lot, which can be defined with an adapter. It means that you can define if one object goes to one partition or another partition of the database. PostgreSQL supports automatically partitioning of tables and mapping all of them in one view. So this is used for that, that triggering on PostgreSQL to split the data across multiple tables and to be able to scale on PostgreSQL. Resource, it's a Boolean that just shows if it's a resource, meaning a main object or a notation. Stuff, it does the reference to the object ID in case that I am a notation. All transaction ID is the previous transaction ID. Paran ID, it's used for the main objects to know which is the parent, the ID. It's the ID, the real ID, the ID that we have on the URL. We wanted to serialize as a specific column. Type what kind of resource is it. A JSONB field that we are mostly using on if we are doing indexing on the database and the state that it's the pickle of the object. We have also a registry. You can access requests that contain our settings, register an interface and get and set the value of the field. No strange things. Async utilities. This is something that I really like. Is that you can define utilities that are going to be bootstrapped at bootstrapping time and are asynchronous utilities. So they can run forever doing any input operation that you need during the cycle of the process and it's going to connect to different systems if you need. For example, this is a stupid use case that it's a work of utility and we define this on the config.yaml that we want to load that when we are bootstrapping. Then, we define the utility. Just inheriting from ising utility and configuring the utility that provides this utility. And then we have these two specific methods. Initialize and finalize. Which is going to be executed when we are starting and when we are shutting down the process. And both are synchronous. They have the generic loop so they are able to do operations like give me the credentials from Google Cloud Storage or make sure that everything is clean on the database layer or whatever. And then on the code, of course, we can also get this utility whenever we want using the standard get utility. Well, the view, at the end, is what's important. I think I wrote three examples of build definition. 
It's using the decorator config.service and here you define which is the context, which is the resource interface that we are going to apply this endpoint, which is the method, which permission is required to execute this view. Then we have the endpoint. In this case, you see that we have also URL dispatch inside the endpoint. So you can parameterize with multiple levels the endpoint inside the view to give kind of the traversal, the publish traverse method on ZOOP publisher to be able to go into different levels. Summary and responses. Most of the calls from Guillotine have a lot of documentation on the definition of the service because we provide a swagger out of the box. So all this information is serialized on the swagger information so you can check all the endpoints and make sure try them out and know which are the values that you need to send. Then you have the class where you have the request and the context. This is another example. This is the special thing about this example is the allow access through. This is the only way to skip the access content check on the traversal. So you can define, okay, I want to allow to access to this view no matter if the user has access to the content that it's going to be accessed. For example, the login endpoint needs to have this otherwise you're never going to be able to call it. Well, another example of how to do the explanation about responses. So when we have a 200, we are going to receive something that is a schema that is a reference to something we call application and we define it application with that decorator I explained it at the beginning where you're defining JSON blocks. Well the flow of the request is quite simple. We just do the traverse to the object. We check access content. We look for the view. We check there is some language translation mechanism that we check if you're looking for another language to get the multilingual idea. It's already implemented. We check the permission, the view permission we execute the view. Then we do the commit or the abort and we do the post commit operations. And one really nice feature that we are using a lot is that if you need to do something when this is already committed, so maybe I don't know, logging on some stupid place in a synchronous way, we have the features. You can define multiple features on your view that is going to be executed after the commit. And this is done after the response has already been delivered to the client. So maybe you want to lock something on lockstash or whatever, you need to do any kind of after commit operation that the user doesn't need to know if it worked or didn't work it so we can register all of them in the features. Cache invalidation. Well, there is a package called Guillotine of Redis Cache. This is quite a bit complex problem to fix. It's not perfect, but it works really fast. And at the end, all the Guillotine has a PAPSAP connection where they receive when an object gets invalidated and so they need to retrieve again from the database. And we also store the objects with the TTL on the Redis to get them faster, the pickles. Indexing. There is a lot of things to explain about indexing. We have the elastic search plugin, it's really cool and it works really well. At the end, we convert to JSON with a specific mechanism where you can define indexes with directives on the interface. And then an elastic search is in a post commit hook and on postgres send on the same commit. 
And then for security, on elastic search, we're using access roles and access users that we are defining for each object. Okay. All this backend thing, it's really nice. I have only five minutes, so I'm going to go a bit faster. We have a UI, thanks to EriBriho, that it's really nice. You can log in, get the database, get a container and being able to browse, like a GMI, but the GMI. You can create new content here and you can do whatever you want. Now I think it's down. But well, it's really nice. Tomorrow, we will show more about this on the Gliudina CMS because there is also a rich text editor. So which is the status? We know that there is five companies that are using it. We know deployments that think it's more than 20 million right now. It's nearly 30 million or 50 million. Objects that are deployed. We have 93% of code coverage. We are now in 4 to 11. We are releasing five by the end of the year. There is a lot of documentation. There is a lot of tutorials. There is a GitHub channel. There is two main repositories on GitHub called Gliudina Web and the Plone Repository. And I want to do a really short demo of something specific. There is a repository called Gliudina Processing that opens, if you run it, opens a Jupyter notebook where there is some kind of interesting examples. It's just for teaching Gliudina, not for production, of course. This first notebook, it's about using the API in order to push onto the system a CSV file with a lot of articles. There is, I think, this is a standard REST API, Plone REST API. We create a container. We install another add-on. We read a CSV file and we push all the articles as documents on the system. There is 2,579 articles. So this is using the REST API, no strings thing. If you want to check, it's in the Gliudina underscore processing. But the interesting one, it's the second one, the compute one, that we are able, since a Jupyter supports a sync.io and recently in the last month, you are able to start a Gliudina inside the Jupyter. You can create the application. You can connect. We use this context manager called Content API, sending which database you want to use. So you can do, it opens a transaction and getting out of it, closes the transaction. And this example goes through all the content and creates a machine learning model with TensorFlow and Keras to classify news between if it's political or a sport news. It's quite interesting. I think I have, I'm running out of time. So I'm going to just go into the last two slides. That's the roadmap and where I'm going to spring. We have two main problems right now. One is how to index the security information and how to be able to search the security information without the need of re-indexing the children when we are changing the security information of one node. And we have some proof of concepts of ways of organizing the data on the indexer. So we don't need to serialize everything to each object. And how to distribute more. We are reaching limits on the performance. And we want to be able to distribute the data or sharding more on different systems. We have a lot of ideas on mine. We are going to try to do something with Rust and Raft. Zero time. So thank you. Hey, thanks for the talk. What swagger version do you export to? And is it exported schema? Does it include the schemas of the content and all the validation fields? Or is this just a simple schema? I think it's the lightest one. Are you able to see which? No. Can you show me the YAML or the JSON file? No. 
Everything is automatically generated. So I cannot show you. I cannot access it. I can show you later. But I think it's the latest version of Swagger because we are using the latest version of Swagger, the client. When I use, for example, React Swagger library to generate a React client, then you need to provide me a JSON or YAML file. The problem is that this is traversal. It's based on traversal. So you need to change the path to define which endpoints you have on that path. So you cannot do that. You cannot do automatically generating code because the definition depends on which path you want to access to. What do you use between the code? Do you use an ORM or how do you get things into the data? The ORM is something we built ourselves based on persistent or persistent. We created something a bit different that is much thinner and that allows to work with a sync.io point of view. Anyone else? No. So a transaction started at the beginning of a request. What does the transaction system work and as well as the life cycle around persistence if you have ghosts and these sorts of things? The transaction starts after the traverse, after we get the view, just before executing the view and just finish committed after the view. So we keep it as small as possible on time. And about the state of the objects, we don't have ghosts. We are just the objects that are on memory are getting validated and are linked to the specific request that you are connected to. We have weak references to the objects but we don't have the concept that persistent or persistent have as ghosts. We are actually in the break right now so if you need to go to another talk. Thanks. So it's amazing what you've built but I'm a bit afraid, like I'm asking myself why. For example, you have support for automatic partitioning but the official documentation for POS-RSS until your tables are several hundred gigabytes long, you don't need table partitioning. You most probably don't need table partitioning. Pyramid can run with a sync G unicorn worker. So what's like comparing to pyramid, what would be the main selling point and if that is speed then why didn't you use go or Rust to write it? That's a really good question. Maybe because I didn't know Rust and go. So is the main selling point speed compared to pyramid? The main selling point is the security system from my point of view that we are using this traversal security system where we are able to use the same kind of traversal pyramid, doesn't have it as the same way as we implemented as powerful out of the box. I don't know if it has evolved a lot during the last year. And the asynchronous are your one of the really cool features is that you can connect everything events to web sockets for example. And so you are modifying an object, you are throwing a modification of an object that is an event. Events are asynchronous here. So you can plug a web socket to connect this and send to another place any kind of information. This kind of connections of asynchronous input output is really powerful. There is a project where they are using a lot this. And mostly we reached limits of scalability with pyramid, with Zope. We tried to go deep to tune it, we were not able to make them faster, maybe it was because we were not enough experts on that technology. And we needed something with a nice developer experience. So Python, it's a good tool.
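To make the decorator-based registration described earlier in this talk more tangible, here are two small sketches written against the public Guillotina API as I recall it; the module paths, the permission name and the matchdict accessor should be treated as assumptions rather than as the speaker's exact code.

    # Registering a resource type with a static behavior. Dynamic behaviors
    # can still be added later per instance through the API.
    from guillotina import configure, content, schema
    from zope.interface import Interface


    class IArticle(Interface):
        body = schema.Text(required=False)


    @configure.contenttype(
        type_name="Article",
        schema=IArticle,
        behaviors=["guillotina.behaviors.dublincore.IDublinCore"],
    )
    class Article(content.Item):
        pass


    # Declaring a service/endpoint with URL dispatch inside the service name,
    # of the kind described in the talk.
    from guillotina.interfaces import IContainer


    @configure.service(
        context=IContainer,
        method="GET",
        name="@hello/{name}",
        permission="guillotina.ViewContent",
        summary="Toy endpoint showing URL dispatch inside a service",
    )
    async def hello(context, request):
        # request.matchdict holding the {name} path parameter is an
        # assumption here; check the Guillotina docs for the exact accessor.
        return {"hello": request.matchdict.get("name", "world")}

Both declarations are picked up when the package is listed under the applications key of the configuration and scanned at bootstrap, as described above.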
Guillotina is mature and ready for production! The talk covers the main functionalities and coding examples to develop your applications on top of it: - Resources / Behaviors / Fields - Security Policy - Configuration strategy - Serialization and Deserialization - DB design I will also try to explain the future ideas that we are working on.
10.5446/54848 (DOI)
Okay, well, as I said, we're going to talk about how to organize scientific publications, particularly mathematical publications, because we work with a group of mathematicians. We have around 100 mathematicians in the place we work, and they work in different areas like computational geometry, data analysis, probability, algebra, etc. So, well, somebody might wonder what a mathematician does. Basically, they work on open problems; for example, this is a very nice problem, the art gallery problem, which asks: what is the minimum number of guards who together can observe the whole gallery? For example, let's say the gallery is this room. How many guards do you need to have all this space under custody? This problem is very old, so the answer, if you are wondering, is that you need at most n over 3 guards (rounded down), where n is the number of corners in the gallery. So they solve problems like this, and they modify them a little; for example, now the guards have superpowers and can see through walls — through one wall, through two walls — how many guards do you need then? Okay, well, when they have their result, when they solve that, they write a paper with the answer, and basically they use what is called LaTeX, which is a standard language for writing scientific papers. It is a document preparation system for high-quality typesetting, right? So they submit this work to journals, the journal assigns some people to review those papers, and then, if the result is correct, they make a publication with the result. This leads to a cite, a reference to this publication, each in a format called BibTeX. BibTeX is basically for bibliographic references — well, it manages bibliographic references; it is a flexible solution to the problem of automatically formatting citations in different styles. This language was developed in 1985, so before the web, or before the web was widely used, and it was mainly for printed journals. But the good design of this thing is that a BibTeX database is basically a text file with a lot of entries in there, and basically everything is a keyword-value pair. So we have some types, like an article, it could be a book, it could be proceedings — different types we have here — and every item has an identifier and a set of keyword-value pairs, like authors — the names of the authors — the title — the title of the publication — etc. So this is very important, because when somebody makes a reference to the work of other people, we have entries like those, and depending on the journal that is publishing this, that is the format in which they present it; so that's why BibTeX was very, very important. So the problem we face now is, now that the internet is, well, everywhere, how can a mathematician, for example, show all their publications so other people can find their work — students, let's say. If we live in Mexico, how can the students from other countries find what the mathematicians in Mexico are doing and decide if they want to go work with them? That's a problem, for example. There are some sites that collect that information, those BibTeX references, but unfortunately those sites are closed; you have to pay subscriptions to have access to those things. But if you or your university pay for that, you have access to all the bibliographies that are in there; you can export that to BibTeX and do some searches inside. There are many of those. This is one we use a lot because it is only about mathematics; it's called MathSciNet.
There's another one, more generic, called Web of Science. The difference is that in MathSciNet, every author has an ID, so we can identify exactly which are the publications of one person. That's not the case in that site, Web of Science; you just search for the name of a person and it can return results for all people that have a similar name, so it's not easy to find what the real work of that person is. Another one is this one, Scopus; it's the same, there's no identifier. Well, here you can have an identifier, but if the person who publishes those articles doesn't sign up for an ID, this thing is the same as Web of Science. Then we have Google Scholar, which I think is better than the others in some ways because you have an ID, but we have the problem that not everybody likes Google; not everybody wants to have an account there. And even if every one of the mathematicians had an account here, we have the problem that if we want to find how many publications some group has — the group of category theory, algebra, or anything else — we cannot easily find how many publications the group has. We have to go one by one. So that's a lot of work. So what does this have to do with Plone? We have to talk a little about the history of Plone and bibliographies. There was, in Plone 3 and Plone 4, a product called CMFBibliography which addressed this problem. At least you could load your bibliography into Plone very easily. I think it did a good job back then. The problem with this is that, as the site says, it only works with Plone 3 and Plone 4. I really don't know if it works with the early versions of Plone 5, but it's implemented in Archetypes, so we cannot use it anymore. And we know that Plone 5.2 is going to take Archetypes out. So we need to move on and search for other things. Another thing that goes with Plone is collective citation styles, which complements the bibliography product with the styles. The bibliography product is just to upload the data into Plone, and the style part is done by this product. Unfortunately, they never released a version, I think. I think it was developed by Jazkarta, right? But it stayed in an alpha version. And also, it's for the same CMFBibliography, which is Archetypes. When we started working with Plone, we faced a more general problem, which was to create a curriculum. A curriculum has many of the items that a bibliography has, like articles, proceedings, and those things, and even more things, like how many courses this person has given, how many conference talks they have given. And well, we implemented something based on CMFBibliography that resolves these things. We have, for example, the same functionality for articles. We can import from BibTeX and export to BibTeX. We search in those engines that publish bibliographies and show the results to the user, so they can choose if they want to add this to their collection of publications, or if they already have it and don't want it. The bad thing with this is that we also implemented it in Archetypes back then. But since then, we have been working on more things relating to bibliographies, like retrieving BibTeX files from different sources. We developed a small script called BibScraper, with the idea taken from a Perl script — there was a Perl scraper that used MathSciNet to download those BibTeX files. So we developed this script in Python, where we can give it the ID of the person whose publications we need, or a list of IDs, and the year of the publications.
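The BibScraper script itself is not shown in the talk; the sketch below only illustrates the command-line shape that is described — one or more MathSciNet author IDs plus an optional year — and the actual MathSciNet query is left as a stub, since those endpoints need a subscription and are not part of the transcript.

```python
# Hypothetical CLI shape for a BibScraper-like script (the fetch itself is a stub).
import argparse
from typing import Optional


def fetch_bibtex(author_id: str, year: Optional[int]) -> str:
    """Return BibTeX entries for one author ID (stub for illustration)."""
    raise NotImplementedError("query MathSciNet here with your subscription")


def main() -> None:
    parser = argparse.ArgumentParser(description="Download BibTeX by author ID")
    parser.add_argument("ids", nargs="+", help="MathSciNet author IDs")
    parser.add_argument("--year", type=int, default=None,
                        help="only publications from this year (default: all years)")
    args = parser.parse_args()
    for author_id in args.ids:
        print(fetch_bibtex(author_id, args.year))


if __name__ == "__main__":
    main()
```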
If we only want the publications of 1980, we do it like this. Or, if we omit this, we can get the whole bibliography from that source. This is done only for MathSciNet, because that is where we have IDs for every person. Another thing we have done is retrieve bibliographies from different sources — MathSciNet, Google — and, instead of storing that information in Plone, just keep the bibliography database in a file and show different results. For example, we want to know the publications from 1978 to 2018, only in this range. So this thing can tell you, for one person or a group of people, sorry. So that's a little of what we have been doing with bibliographies. But what about bibliographies in Plone 5? We definitely need to do something about this. We are working on something; it's just a proof of concept that we have. We found this package called bibtexparser, which reads a BibTeX database. It can also write: if you have the content in Python, a dictionary in Python, you can write it out in BibTeX format. It works with Python 2 and Python 3 and has very good documentation. So we can use that package for the BibTeX part. Obviously, we need to move to Dexterity; all that content has to be moved to Dexterity. And of course, for the styling part, use what Jazkarta was using, citation styles. I think it's the most generic; there are many styles there. And there's also a JavaScript library where, if you give it a JSON file — you transform the BibTeX to JSON — it shows it in the way you want. So that's the path we're following. The idea is that BibTeX is well defined. We have types like article, inbook, incollection, et cetera — almost 13 types. And every type has required fields — for example, the article has author, title, journal, year, volume — and optional fields, for example number, pages, month. The thing is that no type has exactly the same fields as another; there's always a difference in one or two fields. So what we're thinking is that we can implement this one, for example, as a content type with the required fields, and the optional fields as a behavior in Dexterity, or something like that, so we can hook more of those attributes in there. And well, this is, as I told you, a work in progress. We have the idea. We want to migrate that bibliography package to Plone 5 and then extend it to our needs for a CV, a more generic idea. So if you have feedback that we can use to make these implementations, it will be great. We just have the idea; we have made some proofs of concept, so I think it's going to work, but if you have feedback, it will be welcome. That's all for now. Does anyone have questions? Actually, I can... Yeah. I can just... Hello? Okay. I can just speak to why that citation styles add-on didn't really go any further. We at Jazkarta have a couple of clients who are heavy into bibliographies, and we wound up viewing the CMFBibliography route of having full bibliographic information in Plone to be somewhat of a dead end, because so many of our clients are using this thing called Zotero. Are you familiar with... You guys familiar with Zotero? So the new approach that we're thinking of... So yes, we're needing to upgrade all these sites and we need to do something about the bibliographies for the short term. We'll probably upgrade... By the way, my talk is going to be really short. It's going to be the next one. So I think it's okay to go a little bit over on this.
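Going back to the bibtexparser package mentioned a moment ago, here is a small, self-contained sketch of the workflow it enables (its 1.x API): parse a BibTeX entry into plain Python dictionaries, filter by a year range as in the 1978–2018 example, and write the result back out.

```python
# Parse, filter and dump BibTeX with bibtexparser (1.x API).
import bibtexparser

SAMPLE = """
@article{chvatal1975,
  author  = {Chv{\\'a}tal, V.},
  title   = {A combinatorial theorem in plane geometry},
  journal = {Journal of Combinatorial Theory, Series B},
  year    = {1975},
  volume  = {18},
  pages   = {39--41},
}
"""

db = bibtexparser.loads(SAMPLE)
for entry in db.entries:
    # Each entry is a plain dict; ENTRYTYPE and ID keys are added by the parser.
    print(entry["ENTRYTYPE"], entry["ID"], entry["title"], entry["year"])

# Keep only publications inside a given range, then serialize back to BibTeX.
db.entries = [e for e in db.entries if 1978 <= int(e.get("year", "0")) <= 2018]
print(bibtexparser.dumps(db))
```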
So we're probably going to upgrade CMFBibliography to work on Plone 5 as a very short-term solution, and we'll just have to be on Python 2.7 for a little while. For the long term, we're thinking of moving to a model where the bibliographic information is stored in Zotero and managed in Zotero, because so many academics use Zotero — but not mathematicians, who use BibTeX; all the other guys seem to use Zotero. And then Zotero has an API, so you can display that information in Plone but manage it in Zotero. So that's the new thing that we're thinking of doing, which is why we haven't done much else with citation styles. Other people have questions or want to talk about what they're doing? Do other people support people with mathematical formulas who have to use BibTeX and LaTeX and all that stuff? Because it's like a niche that doesn't seem to use Zotero. Is that true? Can you speak to that? Do your academics not use Zotero? They don't, no, they basically use Plone — what we have developed in Plone, yeah. All right, great, thank you.
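For the Zotero-based approach described in this answer, one option on the Python side is the third-party pyzotero client for the Zotero web API. This is a rough sketch only; the library ID, library type and API key are placeholders, not values from the talk.

```python
# Read the newest items from a Zotero library with pyzotero (placeholders only).
from pyzotero import zotero

zot = zotero.Zotero("1234567", "group", "YOUR_API_KEY")  # id, type and key are placeholders
for item in zot.top(limit=10):
    data = item["data"]
    print(data.get("itemType"), data.get("title"), data.get("date"))
```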
We implemented a proof of concept package that allows the management of scientific publications.
10.5446/54850 (DOI)
Thank you. Yeah, so I think most people were in the last talk, so you know who I am. Victor, take it away. Yeah. Who are you? Victor. There's only one Victor in the community, so it's me. Yeah, so, same thing. So these are the talks we have, and this is the extensibility one. Yeah, so what will we cover? The idea of this talk is — so the previous one was more or less what Volto is, what it looks like and more or less what the code base is. This one is more about, if I want to use Volto now in a website, what techniques should I use, how can I bootstrap my own product and how can I customize it to add some theming and custom views, and override other views and all that stuff. So that's what we'll be showing. Yeah, so you know how you can create your own site. All right, Victor will now take over. Yeah. We had Volto, and not a long time ago, when we created Volto, it was not really thought of as a library, so we needed a way, an easy way, to extend it. That's why we proposed it as a GSoC project, which Nilesh here did, and he will give a talk on that tomorrow. And he will explain how this way of creating the boilerplate required to use Volto as a library works, which we call create-volto-app. So the only thing that we have to do is yarn global add this utility that will create that boilerplate. And after we have installed it in our system, we can call it and we will have our Volto app in this location, right? It's very much the same as the create-react-app application, if you ever used it. So it's an easy way, an easy entry point, to create your own Volto app. So you only have to do this: cd to your recently created directory and then yarn start, and then you will have your Volto app. Yeah, after that we will have a bunch of files; Rob already showed you during the last talk. Yeah, it looks very much like this. You will have some directories by default, very much the same as the Volto ones, but empty, because they are ready for you to add things. We will cover all of them during the talk, and all the boilerplate required for the Volto app to work, right? So this will be the output of the create-volto-app command. So yeah, as I said, we'll have several of them, like the static resources one, where we will put our favicon or robots.txt and so on. Then, yeah, we also can, yeah. The most important thing that you want to do in our new app is to override components. You have by default what Volto has, and it will look exactly the same as Volto. Then you can start overriding a couple of things, a couple of artifacts. One of them are basic components, like the logo one here, or the SVG that accompanies the logo component, and you do it like you would do it in jbot, if everybody is used to doing that. You only have to grab the original one, the original file, then place it in your customization folder, which is here also, the customizations folder. You will just put it here using exactly the same folder structure that Volto has. That way Volto knows what to customize and when. So, having this tree of folders which maps to the same one that Volto has, you will customize the logo SVG component and you will have the logo customized with your own logo, and it's that easy: you only have to reboot the server and then you'll have it. And the same for every component, right? So it will be the same.
So you will have to go to Volto, grab it and copy it here, maintaining the exact structure that Volto has. Alright, so internationalization: as I mentioned in the Volto talk, we're using more or less the same machinery as we currently have in Plone. It is based on react-intl. That's a library created by Yahoo. React-intl is the React implementation of it, but there's also an implementation for Angular and for Vue and for basically everything, so you can use the same machinery. So what we did there was create a script which can be called with yarn i18n, and what it does is extract all the message strings from all the sources. So it's the same thing as you would expect from Plone, which is actually not really common in the JavaScript world, because in JavaScript you usually have to define your messages somewhere, but we actually have a nice extraction tool so you can't forget any of those. So what it does is extract all the translatable strings from all your sources. That will be generated into a JSON file. The JSON file will then be converted and synchronized with a POT file, which we can actually use in our translation system. The POT file is then synchronized, as you would expect, with the PO files, so all the new strings will show up, or strings will be removed when they are removed. And the final step here is that the PO files are generated back into a JSON file, which can be picked up by the front end again. The change from Volto when theming your own site is basically that we are extracting all the strings which are in your own app. And the last step here will combine all the translations from Volto itself with your own translations and will merge them together into one big translation file, which can be used to do all the translations. So how can you make text translatable in your own view? There's a really simple component which is called FormattedMessage. It's a simple React component, it can be imported from react-intl, and what you do is you render a component, as you can see here, and you provide an ID and you provide a default message, and that's it. Then it will be translatable. If you want to translate attributes or any other string which is not directly rendered into a view, there are some more steps to take there. What you'll have to do there is to define the messages at the top of your view. So you have a list of the messages you have to define, with the same ID and default message. Then you'll have a decorator for your class, which is injectIntl. What this does is it will add an internationalization property — an object, actually — to your class, which you can use to do some translations, and it has matching prop types, so you don't have to do the whole prop type validation. Then you can translate attributes; for example, if I want to translate the title, I can call the intl which was injected into it. There's a formatMessage function, and I will pass in the message I defined at the top of my file. Yes, still me. We just decided this morning who's going to do what, so we're looking at who's doing what. So, overriding views. As Victor said, overriding the views is exactly the same as you would override any other resource. As long as you match the exact folder structure which is there in Volto, it'll work. So, if you want to override an SVG, match the folder structure; if you want to override a view, just match the same folder structure, copy over the file, make your changes to it, and it'll work.
One thing we also added to the runner is it'll actually check if... So, it will go to your customization folder and look for all the files. Then it will look if any of the files exist in the VOLTA package itself. If it does, it'll override it. If it doesn't, it will give you a warning. So, it says, okay, you passed in this view, but there's no matching view in VOLTA, so you probably made... you misspelled something or... So, it actually... it's nice to debug, which usually have something that I really overwrite it. It's not working. Why it's not working? So, it's actually a nice thing to have. Then we have something which are custom views, of course. So, if we want to add another view which is not currently there, what we could do is write it in our own... in our team package. So, for example, the full view, as I said, is not part of VOLTA yet. So, if you want to add that, what we do is we go to the components folder. We add a folder called full view and we create our component there. So, the full view, just JSX, is put on that location. That's step one. So, what will the full view look like? This is a lot of code. If you're not familiar with React, then there's probably a lot to gasp at once, but I will go through it quickly. So, basically, it's a React component which does rendering of the header. It will loop over the items which are in my folder. It will show the title. It will show the image. If there's an image attached, it will show the description and it will show the text. That's basically what this component does. Then we have to make sure that we add it to the index.js file so we can easily import it. This is just your entry point. And the next thing is that we have to make VOLTA aware that your view actually exists. So, how do we do that? We have a config.js file and by default, it will look the same as this except for this part. So, what do we do here? Basically, we define which views are available. We import all the default views from VOLTA itself. And if we want to add some new views to it, we will just extend the property of the layout views in this case. And we say, okay, the ID full view, if the layout is specified like full view, we're going to be rendering the full view in there. And that's it. So, that means that if the backend says, okay, you have to render the full view, we go to the full view components which we created earlier. Yep, same thing as the views are the same stories for custom widgets. So, let's say that we have a custom widget that we want to extend because we, I don't know, we have our own add-on product that requires a specific widget for rendering the form in VOLTA. So, the way of doing things is almost the same like in views. So, we have our components and we will have our component widget there as well, which will live in this location. And then we will have the code for this rating widget. And, yeah, it's, I won't go also with all the code, but we'll do whatever I should do. And then we have also the registration of that widget. So, we have to add it to our index.js, the same thing as we did with the view. And then once we have this declared, we can, in the configuration, in the config.js, we can declare that we have a new widget and we have to do exactly the same as we did with the view. So, we have to import the default widgets from VOLTA and then add ours. So, the rating, we have to add, to map the rating key with our component and then that will be it. 
So, when the backend says, hey, this widget should have the rating component, then VOLTA will know that he will have to use this component for rendering, not any other. Of course, this, if there's any, so the rating ID should come from PLON, because we already have declared that this widget should use the rating component. So, PLON will return with the PLON recipe I that this widget should match the rating which then VOLTA will map, and VOLTA will know that he will have to render the rating widget instead of any other. And we also is nicking the PASTA-NAGAN icon system, because today is like SVG, SVG, all the things, and we thought that it's a better approach, we needed a better approach that we have now implanted with all the font icon system. And in fact, there's a way of doing that and we implemented it in VOLTA as well. So, the thing is that if we have an SVG and we wanted to use it as an icon, so we can use directly the import from ES6 and then this will be taken over by Webpack, which will apply some nice things that SVG is a package, an NPM package that filters SVGs and like kind of clean them, because the output of the SVGs directly from the authoring systems, like Sketch or like, how is it called, the Adobe one. Yes, they are not very optimized, so SVG will optimize this SVG on the fly so you don't have to do anything really, Webpack is doing for you. And then we have to use this icon component that we built and then use the SVG like this in your code. So, you declare the icon is going to be there and pass the name with the import that you just declared before. And then you will have your SVG there. One of the things, of the good things that is having SVG inline SVG icons is that you can style them with CSS, which is already huge. So, we have very much the same things that we have in font, using font icon systems, font-based icon systems, but you can also animate them, which is also a fancy thing. I don't know if we could find a good use for that, but you can do it. And there's a lot of geese about SVG today in the JavaScript community and we can do a lot of things with that. Another thing that you get with inline SVG is accessibility. So, you can set accessibility properties directly into the SVG. And also, in fact, our custom component, our icon, sorry, there's an error here. It's a typo. In fact, it should be the same like the icon. So, we call the icon component with these props. One of the props should be the name. I don't know why, but it's passed over. Sorry about that. And we can pass several props that will modify the behavior of our icon. We can pass directly our color. So, if we don't, we want to set up the color of that icon directly without using CSS, we can do that. And we can declare it in JavaScript like this. We can set the title of our icon, which could be static like this, or could come from the internationalization clone engine, which happens a lot of times. For example, for when you are rendering the toolbar and you are having lots of literals from that, that you have to map to icons. And then you can set the size, which by default is set to 36 pixels, yeah. But you can modify it. You can set the class name as well in case that you need, as we said before, you need to add fancy CSS to that without having to inline-stall in it. And you can also add a handler for what happens when, for example, when you click on that icon. All of them are props that are optional. 
And this will be the most simplest one, as I said, where it says, my customers should say icon, and then the name should be there as well. So, my fault. Sorry about that. Yeah. Yes. So, as I said, what you can also change are some of the other settings which are defined there. One of them is all the rich text editor settings. So, by default, the people saw the VoltaTalk, you have, I can show you though. That'd be better. If we go to a specific news item, and when we select some text, we have what you see is what you get editor tool by there. So, we'll have, by default, we have the bold italic one with the link item there. And then we have the block-styled items, like header, H1, H2, like the list, unordered list, block code, and call-outs, which are there. But if you want to remove some of the styles, if you don't want them there, or if you want to extend them, you actually want to have an extra one, you can do that. So, how does that work? So, the first thing we'll be doing is, for instance, if we want to have underline, which is not there by default, maybe for good reasons, but if you do want to have that underline, then you can do so. So, what you'll do is, step one, there's some helper methods which you can import, one of which is the create inline style button, that's part of the draft.js button package, and that will help you to easily create a button, so you don't have to worry about what it should exactly generate. Then we'll import the icon components, which Victor already mentioned, and we'll import the SVG. So, as Victor mentioned, we have a lot of the SVG icons, or actually all of the SVG icons, which are part of PastaNAC, and there are a lot of them. I think there are, I don't know, like a couple of... 250 or something. 250, I just heard. So, that's a lot of them. So, for most of the use cases, there probably is an icon you can find, and if not, then we could add them later, I guess. You can add your own as well. Maybe we shouldn't let developers do that, though, but... Or just add Hello Kitty in there. Yeah, so, the first thing you do is you create the inline style button. What you'll do there is... I think... Oh, this one is missing. Yeah. Training. Yeah, that'll work. No, it's not here. No, it's not here. Yes, I do. It is in... Yeah, so this is what the, for example, the bold one will look like. So, what you do is you provide the styling you want, and these are currently defined in Draft.js. Draft.js has a definition of all the available styles which they have. If you want to... So, they have bold, for example, like, as you can see here, had a two, had a three, an ordered list. These styles can also be extended. So, if you want to have your own custom style, which is, I don't know, like a fancy call out or a quote or whatever, you can add your custom one there. But by default, like all the normal ones are available. So, you specify which style you actually want to apply if you press the button, and then you specify what the button will actually look like. So, in this case, we'll say, okay, so the children, so the button, what it will look like, it will be an icon. It will have a name and it will have a size. And as Victor mentioned, you can also add the title and all the other stuff to it if you want to. Next step is to also set the... To add the button to the toolbar. So, what we'll do there is, in the Convex.js again, we'll have a variable called settings. The settings controls a property called... Really long name though. Rich text there that are inline toolbar buttons. 
Yeah, that's a lot. Yeah, Alex. Is there any plans to do it like we have current system that you can, via the control panel, just activate and deactivate buttons? Definitely. I agree. All developers but integrators, the administrators of the systems might don't have the capability. Yeah, no, definitely. I mean, so, yes, that's definitely on the roadmap currently. But for that to work, we either have to see if the current settings we have for TinyMC, if they match up with the settings we have, I guess with the buttons, it could very well be. So, we can actually read the registry settings already. So, that's one thing we could definitely do. So, yeah, we could definitely work on that. Yeah, I guess, yeah. And then it's also, I think you can also define on your schema, you can also define custom properties. So, you can actually say this field should, like this rich text field should only have, I don't know, Bolton, italic, for example. And I think we could read those values as well. Yeah. So, currently it doesn't work yet. Yeah, I know, yeah, I remember from TinyMC how it worked. So, yeah. Yeah, so for now, what you do is you have this property called rich text editor in I'm talking about buttons, which is a list of all the buttons which are available. And what we do is we add our button in there, and then we paste in the rest of the buttons. So, that basically means we want to underline button to be the first button. So, if you want to have it like as a second or third, this is just a list. So, you can actually just split these values and put it as a second or third item, whichever you prefer. All right. Then the last topic is the actions and reducers. If you don't know what Redux is, then this will probably go a bit over your head because it's a lot of stuff which is added in there. So, this is not easy to understand. But I'll try to explain it anyway. So, just a short end production of what Redux is. The state machinery which we use on the front end, it is a store which is more or less a store of data in there, which is just one object, a nested object with a lot of items in there. So, it will contain, for instance, all the navigation properties which can be used in the front end. It can contain all the content like the title and description, etc. It can contain all the breadcrumbs data and all of that. So, for the store to actually contain the data, we can fire some actions. So, actions are things like go get the breadcrumbs, get the content or add the content or delete something or update something. All those actions can be called and they will eventually end up in the store. I say eventually because there's some steps in between. The step called reducers. So, reducer is basically a function which takes in the current state of the store, looks at the action and combines the two and returns a new state. So, basically, if you have an action saying get me some items, it will handle that and it will put in a new data in the store. If the action needs some backend calls, we'll use something called middleware. So, middleware is something which is in between the action and the reducer and it can actually do some side effects. So, in this case, side effect is fetch me some data from the actual backend and if I get some data back, then handle that and it will fire a new action with the data in there. I hope this makes a bit sense, but as I said, it's quite complicated matter. All right, so if we want to... So, by default, Volto has a lot of actions and reducers there already. 
So, for all the common stuff to fetch navigation, to get your breadcrumbs to get... Well, basically everything is an action in Volto. But for some reason, you want to add your own actions and reducers. For example, if you're building your own form product or another add-on product and it has custom rest API calls, then you have to match the actions and the reducers on the front end to call those backend endpoints. So, we'll start for example, in this example, we'll have a FAQ module and what we do is we get the FAQ items. So, we'll start by creating an action type called FAQ. Next, we'll create an action. So, an action will look like this. An action is... What it will return is just a plain object. So, in this case, it will return an object with the type saying get FAQ. There's a request property in there. So, that means that this will... The middle where I pick up everything which has a request key in there and it will do something with it. And in the request, there's an operation. So, get can also be post or patch or any of the other HTTP methods. And there's a path in there. So, at this time, there's a path saying, okay, we want to do... Call the search endpoint and we want to look for a specific portal type FAQ. So, that means we only want to find all the FAQ items. Of course, we'll add the actions and reduce to our index files again so we can easily import it. And then the next step is to add our reducer. So, as I said, what does a reducer look like? So, reducer is a function which has... Which takes a state. By default, it's the initial state, but later on, it's the current state. And it will have an action passed into it. And what it does is it'll process that action and depending on... And then it will return the new state. So, for example, if we do the get FAQ call, the middleware will send a couple of actions depending on the state of the current backend method. So, it will first send out the pending method message. So, it will say, okay, this is pending message. We're still fetching the data. So, our state at the beginning looks like there's no error. We don't have any items yet because we didn't fetch any data. It's not... The loaded state is also false and it's also not loading at the beginning. So, when we do pending call, well, we keep with current state whatever the values are in there. We set the error to null because we don't have an error at this stage. We are loading now because the pending call is fired. So, we say true. And we can use this value in our UI. For example, if the loading is there, we can show a spinner, for example, or any other method indicating that it's loading. And we say loaded is false. Next, when we have the... When the success comes in, we get the current state what we have. We say there's no error because we have a success message. The result is there in the action and the result contains the items which we fetched on the backend and we apply it to the items. We say loaded is true and we say loading is false. So, this is the success handler. And then the last one is the fail. So, if a call for some reason failed, then we'll say, okay, we didn't have any items. We specify the error which comes from the backend, so that might specify what our error is. And we say we're not loading anymore and we're not loading it as well. If there's a... So, here we did a type of action.type to see if we match any of the actions which we have if we have to do something with it. 
If there's a known action which we don't know anything about, we just return the current state so we don't do anything with it. Can we look to the top one, please? Yeah. Very top. Okay. All right. Then we have the index file of the reducers. So, reducers, there can be multiple reducers, of course. So, all the reducers are combined into one big structure. In Voto, we have a list of all the default reducers. So, those are all the reducers, like as I mentioned earlier, the breadcrumbs and navigation and the content and everything. And if you add your own custom reducer, you can just add it to the list. So, we're now adding the FAQ reducer, which we just created also to the list. So, now we created all the actions and the reducers and everything. So, how do we actually use them in a component? So, the first step is we're importing the get FAQ from the actions. So, that's the action we created because we want to call it at some stage. Next, we are going to bind all the both the data which is in the store and the actions to our component. So, as I said, we have the state which is our global state, which is the store. And we have the FAQ, which is the reducer returned. And that one contains a property called items, which was empty at the beginning. And we'll later on, if we do any fetching of the data, we'll contain the actual data. Next, what we'll do is we imported the action and we will bind it to the store. So, we can actually call the action on the store. We'll have the rest of the view here, which is more or less the same as the summary view we saw earlier. Then we have the lifecycle method. So, this method is more or less an event. So, this one will be called when this component is mounted or when it will mount. This one will be fired. What we do is just before this one will be mounted, we're actually calling the action and to make sure that we're fetching the items from the backend. So, on our initial render, probably the items are not there. But on the second, just a couple of seconds later or milliseconds later, depending on how fast your backend is, the items will show up there. In our render, we have this that props the items, which we mapped from the global state, and we can just map through them. So, in our second call when the items actually come in, we can loop through those items and we can render them and we render the title and the description in this case. And that's what that looks like. Yeah, so as I said, we will be sprinting on Volto on Saturday and Sunday. People of any level can join, so we're trying to keep it as open for everyone as possible. We have some really good tasks to start with to get you up to speed to create a view, for example. And if people say, okay, I already have some experience with React and Volto and I did the training and then maybe you already know some more stuff that you can do, definitely there's a lot of advanced tickets there as well. But anyone can join if you just want to help out. On another topic, let's say you want to change or edit the training material or documentation or anything is always welcome to help out with Volto to make it as accessible for everybody as possible. I want to sneak one thing. Go ahead. I went to a mini kudos section. Go ahead. Improvising, yeah, because Davi Lima helped us a lot during the last Vitovena sprint on the extensibility. So I wanted to remember him and also, of course, again to Albert Casado for all these amazing set of icons that we already know. 
He's there, all the amazing set of icons that he made for us and we will for sure use in our applications. Thanks again. All the other people that help make Volto true. Cool. Any questions? Yeah. Let's wait for the mic. Sure. Could you go back to the slide where there were the various case statements? Yeah. This one. Yeah. So there seems to be some magic going on there. So my assumption here is that the user has submitted a request. Yes. Right? Yes. At the instant that request is sent, the pending state is returned to the user. Yes. So actually, so what you're looking at now is not magic because this is just really simple, but the magic you're referring to is actually the middleware. I should show you what it looks like. Although I might confuse you more than it'll solve. So this is the middleware. Okay. All right. I'll go through it quickly. So what it does is so middleware is in between the actions and the reducers. So we had the actions which is fired. Then we have middleware which can do something with the actions. Then we have the reducers which actually just handle actions. And at the end we have the store which saves that data. So in this case, we created an API. Middleware which actually looks for something specific in your action. In this case, it'll look for, it will get the action and it will fetch a variable called request which we added. And if it's not present, then it will do not do anything. So we'll just call the next reducer with the current action. If the request is there, so if I actually want to do a backend call, it will immediately send me a request. So it will fire a new action with the same type as we're receiving but adding the pending state to it. And this is the state actually which you saw in the reducer. Next up, it will, so you can also have multiple requests in the same item. So it checks if it's a list and if it is, then it'll fire multiple events. If it's just one, it will fire multiple events. If it's just one, it will do the request itself. And then when the request is done, we have to, the handler, it will have the result and it will fire the same type with a success handler and it will pass in the data. And we have the error, so if the backend call doesn't return something valid, if the backend code doesn't return something valid, it will call the error type with the type and the type fill, so that's what we're checking for there. Thank you. That's really good. So it looks really complex, but maybe it's a good idea. That's all without even changing the URL or refreshing the page or nothing. Yeah, yeah, definitely. Yeah, so it's all Ajax calls. Yeah. Yeah. Cool. Thank you. Sorry, this is probably a bit, I'm not that up with React, but so the server side stuff, right? So we do a lot of stuff with UK government, and they have this GDS standard. And part of that is progressive enhancement, which means that things have to work in some reduced form without JavaScript. Now, obviously, what you're talking about is a lot to do with the editing UI. And I'm not talking about the editing UI, but the server side rendering, what happens if JavaScript is turned off on the static pages, the pages of the website, not the editing UI? So if I'm just a normal user, let's see. Where is this setting? I don't know. Here? Yeah. In the middle, I think, oh, they're changing. And now it's in the console. Can we disable it, right? Can we? Anyone else? In the console, yeah. You can do the setting. Yeah. OK. That's all. Setting. Yeah. Let's see if we can just play. I don't know. 
You'll be there. I can search. Here. Sorry? You might want to keep the other tools open. Oh. No, I think it'll work. If the setting should work, right? We can do a call also. Hmm? Oh, it wasn't there. OK. Let's keep it open. OK. So this is the, I actually have to build the production one, because that will include the static JS. But if I, so the CSS is not here because this is the developer build. But if I run the production build, then it will include the static CSS, and then this part will work. But as you can see, I can just press all the items without JavaScript, and it will still render everything. So the exact same code, which is used on the front end, is also used on the back end. Is that still faster than rendering for the page tempers? I didn't do any benchmarks, but I think so. Yeah. Yeah. No, it's a hard question. No? No? I have to admit that I don't like the J-Bot way of overriding. And my question is, don't you think it's a better idea to extend the components and only overwrite the parts that you really want to overwrite, and maybe to split the JSX parts of rendering? Because I saw big JSX in the render functions. Yeah. One on this one? Yeah, sure. Thing is, well, there are several things to say about that. It is, although you see a class there in JavaScript when it's transpiled, it's not a class because JSX5 doesn't support classes. So that's a bunch of overhead there. So the whole React community themselves, I don't know if you know the news, but they are moving out of classes as base elements for components. So they even now announced a new way of doing a state in a non-class component, I mean in a function component. And the way that we were used to do extension like in Python, like, yeah, I got this class and then I extend it or I do an inheritance, I inherit from this class and they are half all the class. It doesn't work in the same way than the Python world. So something that seems to us so obvious to do is not that obvious in JavaScript. And we have to keep thinking that we are not talking about Python life unless we see classes there and extends and things like that. So, and also the fact that the React community is moving away from classes is something that we don't have also to have used from. And I'm not good. Yeah, so to add to your point of splitting up the JSX part with the other part, that's definitely something we could do. So some React projects use a special controller class and a few classes basically. So the few classes are really stupid so it only has properties and it will render and the controller class does all the logic. So in cases where you actually want to override stuff, we could split it up. And also if we split up more components into smaller components, then I guess it's also easier because then you don't have to override the whole specific class but just like a really small part of it and then it's more or less the same as you would do. But yeah, as Victor said, it's like JavaScript and Python are different environments so we have to deal with how it works. But yeah, there's definitely steps we can make that it will become easier. Okay, time for the next talk. We have time for two more questions. Or one. Yeah, next talk starts in three minutes. Oh, that's true. I thought it stops in... Okay, thanks very much. Thanks, Rob. Thanks, Victor. Thank you. Thank you. Thank you.
How do you extend a Plone-React instance? How to use Create-Plone-React-App to create a basic boilerplate package, how to override components, reuse reducers and actions and create new ones. How to integrate third party components. Demos and walkthroughs. The Pastanaga SVG icon system. How to add new icons.
10.5446/54855 (DOI)
I will talk about pyramid_services. About me: I am Atsushi Odagiri. Please call me aodag. I like web frameworks: Pylons, repoze.bfg, Pyramid. Pyramid is a web application framework. It was first repoze.bfg, renamed to Pyramid, and Pyramid moved into the Pylons Project. Pyramid uses zope.interface internally, and we can use the registry to create a Pyramid add-on. To use the registry, I create a sample utility. This is very simple: it sends a message to the Slack incoming webhook URL. To register the utility, I need an interface. This interface is very simple: the ISlack interface has a send method. This method takes a payload argument. Pyramid has a configurator, and config has an include method. This method includes a Pyramid add-on. A Pyramid add-on has an includeme entry point. And config has a registry. To register the utility, call the registerUtility method on the registry. But Pyramid has config.action to register into the registry in a deferred way. And Pyramid has an introspection system. The introspection system holds information about add-ons. To create an introspectable, call the introspectable method, and pass the introspectable to the action method. To use the utility, the Pyramid request has the registry, and the registry has a getUtility method. getUtility returns the ISlack utility. Assign the Slack utility to a variable and call the send method. And Pyramid uses a PasteDeploy configuration file. The Slack incoming URL is in the settings and is passed to the Slack utility: use the settings and include the Slack add-on. But registering a utility this way is very tiring and complex. pyramid_services provides a wrapper to register and use services: a config directive to register services and a request method to find services. To install pyramid_services, simply pip install pyramid_services. And to use pyramid_services, config.include pyramid_services. pyramid_services provides the register_service config directive: pass it the utility and the interface. And the find_service method is provided as a request method. To get the utility, call the find_service method and use the utility. But find_service uses the interface, and the user has to know it. For the application architecture, an add-on should provide a simple API. The send_slack function is a wrapper that uses the Slack utility. And many add-ons use the registry. The registry can be taken from the request, and Pyramid can add request methods. So I add the send_slack function as a request method, and the views use the request.send_slack method. For testing, provide a dummy utility. Why a dummy utility? Slack is an external service, and we do not want to access Slack in automated testing, so we provide a dummy utility for testing. This dummy has a message box in memory, and the send method only appends to the message box. To use the dummy utility, Pyramid has testing fixtures. Create the config, and the settings include the utility, but create a DummySlack and register the utility for ISlack. And I add the request method and apply the request extensions. This adds the send message method to the request. Then call the view under test and assert on the dummy's message box. Conclusion: Pyramid has a component registry; pyramid_services provides a useful wrapper for the registry; and Pyramid add-ons should provide a useful API. That's all.
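As a condensed, self-contained sketch of the pattern described in this talk — define an interface, register a utility as a service via pyramid_services, expose a convenience request method, and look the service up from a view — the snippet below is illustrative only; the slack.incoming_url settings key and the requests-based webhook call are assumptions, not the speaker's exact code.

```python
from zope.interface import Interface, implementer
import requests


class ISlack(Interface):
    def send(payload):
        """Send a payload to the Slack incoming webhook."""


@implementer(ISlack)
class SlackUtility:
    def __init__(self, url):
        self.url = url

    def send(self, payload):
        requests.post(self.url, json=payload)


def includeme(config):
    config.include("pyramid_services")
    url = config.registry.settings["slack.incoming_url"]      # assumed settings key
    config.register_service(SlackUtility(url), ISlack)        # pyramid_services directive
    # Convenience wrapper so views do not need to know the interface.
    config.add_request_method(
        lambda request, payload: request.find_service(ISlack).send(payload),
        "send_slack",
    )


def my_view(request):
    request.send_slack({"text": "hello from pyramid_services"})
    return {}
```

In tests, the same registration call can be pointed at a dummy that only appends to an in-memory list, exactly as the talk describes.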
Pyramid includes zope.interface and its own component registry. pyramid-services is the Pyramid add-on to use the registry easily. I talk about how to use pyramid-services and its internals.
10.5446/54856 (DOI)
There is this backpacker, a tourist travelling through Germany, and he comes to this nice beer garden in the southern part of Bavaria. It's quite a peaceful time, and there's an old man sitting there in Lederhosen, sipping on this one-litre mug of beer. And the tourist gets himself also a beer and joins the old man, and after a while the old man starts talking. Do you see this beer garden? I built it with my own hands. I went to the forest and got the best wood, and I treated it and gave it more love than my own child. And do they call me Sepp the beer garden builder? No. And then he points to a little hill and there's a chapel, a very nice small one, and he says, Do you see that chapel? I built it with my own hands, and I carried every stone up that hill. But do they call me Sepp the chapel builder? No. And then he turns around and points to the little lake, and he says, Do you see the pier? I built it with my own hands. I drove the piles against the tide and the sand and put in every plank. But do they call me Sepp the pier builder? No. But you fuck one goat... And if Plone Intranet is remembered only for the things that it doesn't do well, then I would prefer if people do not even see or feel that Plone Intranet is there, and this is what this talk is about. Plone Intranet is a tool that helps you to get a job done. You're not using Plone Intranet because you have to use an intranet, and it's like nobody remembers the hammer when he was building a shelf, right? Because nobody wants to think about the hammer when he's building that shelf. If you know IKEA, then you know building instructions like this, and when you want to build that shelf, then you are completely focused on that building instruction. That's your job at that moment, and it's not technology. And in that example, the hammer is the technology. You just want the hammer to function. So Plone Intranet is the hammer and the drill and the pincers. And today I'm going to tell you how you can solve real-world problems with Plone Intranet that really improve people's work experience while not distracting them from their real task at hand. And why is that so important? Because many applications miss the point that people have work to do that has nothing to do with computer science, and they are not trained to understand computers. Who has experienced a situation like this before? Hands up. Not all of you, I see. Great. And do you know people who still complain that Microsoft Word is changing its user interface with every version? And it's very easy to assume that everyone, especially in rich nations, is computer literate. After all, nearly half of the world's adults have a smartphone nowadays. But the OECD made a study in 2016 which says that 33.9% of the users don't actually know how to use a computer. And even 10% of these, nearly 34%, chose to do the test on paper instead of the computer-based test. About 5% of the adults failed the basic tests, which included using a mouse or scrolling through a web page. And 94.6% were at level 2 of 3 or below. At level 2, respondents had to solve a problem using an online form and some navigation across pages and applications. One example was responding to a request for information by looking through a spreadsheet and emailing an answer. That's level 2. So we are not talking science here. Only 5.4% reached level 3, where the problem-solving took place over multiple steps and operations. Writing a Word document and formatting it using tabs instead of spaces is level 3.
So when we are expecting our users to understand more than 3 checkboxes, we are talking to 5.4% of the normal population. And we are not even talking about accessibility issues here. I have another story for you. This is Alice. She is an emergency room nurse. On a slow Friday afternoon, she is about to leave the office. And when she wants to leave her station, the clerk looks up from the desk and asks, By the way, since you are passing housekeeping on your way out, would you remind them that room 12 still needs to be cleaned? And she says, no problem. And on a slow Friday afternoon, this is actually not a problem. That's the only thing she has on her mind. She can do that. And these informal ways, methods and processes in the hospital have developed over the years, and they keep that hospital humming. The humming works well in general. And in optimal times, this is working. This is good. People have it in their heads. There's no trouble reminding housekeeping to come up. It's no trouble to run a special specimen down to the lab. And even if these small adjustments are forgotten in time, people remember and call again. And it's kind of self-regulating. But we don't always live in peaceful times, right? So especially in a hospital, you also have situations like this. And they immediately expose stress under trying circumstances. And when the ward is full and it takes 12 hours for a room to be ready for the next patient, that impact is actually felt not only by one person, but throughout the whole organization. And when the number of small interruptions outweighs the amount of planned work done in any given hour, that impact is felt in slower progress, lower job satisfaction and potentially lower quality of care. In many situations, it's actually quite clear what needs to be done, but organizations differ in how they do it. If you look deep into that, it exposes the workflows that the organization is using. If you haven't formalized this, then you completely rely on what the individual people have in their heads. And if they are experienced people, it tends to work out well, because under stressful situations, they start to improvise. However, the moment you have to let go of one of these experienced people and you hire somebody new, that person of course needs to be trained, but this person perhaps even has different experiences, and your whole interlinked situation might crumble down. And it's in trying times when these systems implode. And luckily there are approaches to solve this, and if you have a way to monitor these processes and resources, you can forecast shortages. This is a bit like in software development: the earlier you find a bug, the cheaper it is to fix it. There have been situations in emergency rooms that are quite fatal, that are directly related to such stress situations. There is one really bad thing in hospitals: you get there, you have one problem, they fix it, but then afterwards you die because of an infection. This infection is completely unrelated to the original problem you had. And why do these infections happen? Every hospital has them. And this is because of a lack of hygiene, and there are solutions, very, very simple solutions to these problems. In 2001, a critical care specialist at the Johns Hopkins Hospital named Peter Pronovost decided to give doctors' checklists a try. And he didn't try to write down everything that a doctor has to do in his day-to-day work.
He was only focusing on these central line infections that led to a death rate of 11% of the people getting them. And he took just one sheet of paper and he put down some basic rules on how you have to disinfect. And these rules should be no-brainers, right? Every clinical staffer learns them in school. Wash your hands; after you've washed your hands, don't touch anything else anymore. It seems so totally silly to write that down. And he did it anyway, and he asked the nurses to check on the doctors. And in more than a third of the cases, the doctors skipped at least one step. After a while, the hospital authorized the nurses to stop the doctors if they skipped a step. And they reported cases where nurses just gently reminded the doctors, ah, doctor, didn't you just forget to wash your hands? But also cases where the doctors got a real body check: I want to see that you have enough equipment in your jacket, and so on. And when they did that, the infection rate in the first year went down from 11% to 0%. And in the following 15 months, they only had two more cases with infections. Afterwards, they calculated that these simple checklists had prevented 43 infections and eight deaths and saved 2 million in cost. And we're talking about an A4 sheet of paper of this size, and this has been adopted by the World Health Organization by now. And it's actually the same thing that pilots also do in planes. Did I actually pull up my wheels? Very simple stuff. You could say people should know that, they have been trained — but it's not enough. If you're interested in reading up more on what checklists can do, there's a fantastic book I want to recommend, The Checklist Manifesto. It's a great way to not forget trivial things, which we do under stress. It helps newbies a lot. If you get somebody in a new job, he is stressed 360 degrees all the way around for months. And just having a checklist that he knows he can trust — this is what I have to do — is fantastic, it gives a lot of peace of mind. Checklists provide guidance and security, so a checklist is a positively connotated thing. And also if you check a checklist, you have done documentation. You can document that you have done this stuff. So my talk is a bit about workflows. So when I talk about workflow, I'm not immediately referring to DC Workflow in Plone. A workflow in real life is defined as a set of tasks which is grouped chronologically into processes. And it includes the people and the resources that you need for these tasks, which are necessary to accomplish a certain goal. Quite simple. So what do you actually have to do if you want to digitize such a workflow? So actually you take the tasks, which are in order, grouped by process — that's checklists, right? And you need a way to adapt them quickly. If something changes, you need to make sure that people are using updated checklists; that can be done by templating. And you need a way to track time commitments. If people say, I want to do this task in a certain time, you need to be able to remind them. And you need a way to document and find them and archive them. That's the documentation part. There is a problem in forecasting and doing this stuff. It's only as good as the data that gets collected. So the first step if you digitize a workflow is you are pushing people who have done everything out of their brain before into doing more manual work. They have to document this. They have to physically go to a computer and press a checkbox. And this is out of their normal routine.
So it's bothering them. So you also need a way to make it positively connotated so that they want to do this extra work because they understand it is saving them time. So the benefit of forecasting using these digitized workflows must exceed the effort for the data collection. So you need to pick very carefully what you want to digitize. You cannot just go and say, okay, first we make an inventory of all the processes in our organization and then we digitize them and the project will take 12 months and then we'll done and then our company will run smoothly. That's now not how it works. Your people will kill you for that. So you need to first identify where can we actually achieve a big benefit, do that first and slowly get people on board. And there is a wonderful matrix on XKCD. I have lost my reference, which one that is, but I can find that out if you're interested. And who knows that one? Who has seen that before? One. Fantastic. I love it. I love it. I have it on my wall. I look at it every day. The top, the X axis, it shows how often you do a task. And the Y axis shows how much time could you potentially shove off? How much time can you save on that task? And then in the cells, it calculates over a period of five times, five, five years. How much time can you actually invest in saving time so that it pays off? Five years is a long time, but in an organization, that's, that's a typical lifetime of a process. So if you digitize a process and put it into your internal, it will probably live there for at least five years. So this is a reasonable calculation. Let's, let's take a look at some extremes. If you look at the top leftmost cell, if you have a task that you do 50 times a day and you can make it one second quicker, it will still pay off to spend one day of doing this optimization. And the bottom right, if you do something once a year, it's like preparing your Christmas party. And you can make that one day quicker. It would pay off to save a week to make that time saver. These are not that exciting actually. Even less exciting is the top right cell. If you do something once a year and you can speed it up for a second, you can spend five seconds thinking about it to speed it up. It's not interesting, but these are really interesting because that's, for example, something that you do daily. In a hospital, there are a lot of tasks that you do daily. And if you can just make it five minutes quicker, documentation, five minutes quicker, it pays to spend six days to do this. Or even better, if you have a daily task and you can reduce documentation by 30 minutes, it's worth spending five weeks doing this. And this is per person. Okay? If you have a hospital with 100 employees who are doing this task, you can spend 100 times five weeks and it will pay off. That is an incredible potential in digitizing processes. And now I'm going to show you a few ways how we have done that with Plon Internet in the past two years. One very simple thing is workflow. And you might have seen that before, but it's actually an evergreen in Plon Internet. That is the happy flow where we modeled a DC workflow into a simple metro map. And that way, people suddenly understood workflow. And I'm talking normal people. I'm talking 96% of the population and not 5.4. And if a computer scientist thinks about workflow, he is thinking about a state diagram, transitions and branches. 
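Picking up the time-saving matrix from a moment ago — the arithmetic behind those cells is easy to reproduce. A minimal sketch in Python, assuming the comic's five-year horizon; the three sample calls roughly reproduce the numbers quoted above (one day, six days, five weeks), and the numbers are illustrative, not taken from the slides.

```python
SECONDS_PER_DAY = 86_400
HORIZON_DAYS = 5 * 365  # the five-year horizon used in the "Is It Worth the Time?" comic

def payoff_budget_days(times_per_day: float, seconds_saved: float) -> float:
    """Total time (in days) you could spend on the optimisation and still break even."""
    return times_per_day * seconds_saved * HORIZON_DAYS / SECONDS_PER_DAY

print(payoff_budget_days(50, 1))           # ~1 day   (50x per day, one second quicker)
print(payoff_budget_days(1, 5 * 60))       # ~6 days  (daily task, five minutes quicker)
print(payoff_budget_days(1, 30 * 60) / 7)  # ~5 weeks (daily task, thirty minutes quicker)
```

And as the talk points out, this is per person: multiply by the number of employees doing the task before deciding what is worth digitizing.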
And if this, then that, and it goes this way and then it joins again, nobody in that world will understand us if we are starting talking about processes like that. But everybody understands a process like, first you do this, then you do this, then you do this, and then you're done, because that's how it works. And if you do this and it was not good enough, then you take a step back and then you do it again. And then you go forward if it's good enough. And that's this happy flow that we've modeled there. And this one is quite powerful because it combines two different things. Forcing people to comply to certain rules and allowing people to be flexible in a certain way. And if you look at the top bubble, that's the phase. That's a state in DC workflow. As you might know, a state in DC workflow is coupled with a certain set of permissions. The small vertical lines that are the transitions in our DC workflow and the little bubbles within the lines that are tasks. So if I have certain legal requirements that I have to adhere to, then I model them in states and transitions in DC workflow and I can force my people to comply. And I can force permissions to change. So in the first state, only one guy has access. If he transitions to the next phase, other people get access and he loses access. I can really have tight control over this. But within the phase, I allow people to create tasks themselves and by that self-organize how they want to fulfill the task. We use that for example for onboarding. Onboarding in this case means we have a new colleague who has to complete certain steps and preparations to be able to do the work. Usually you have manifested that in some way. He has to read the company handbook. He has to get a new computer. He has to get an account. He has to sign up for email. He has to report to human resources, whatever. That's a special case of workflow. It's a workspace with that happy flow and it contains all the material that this person has to read, to check, to print, whatever. It contains tasks and events that the new colleague should complete or attend to. And the whole onboarding workspace is a template. So if the company decides that we need to add more information or change something, they change the template and the next person being onboarded automatically has a new material. And here in the case manager, we can easily track. Here we have onboarding of Anja Falterer and she is 53% through the process. And the happy flow at the same time is a kind of a progress bar. And you can see, ah, how far has she gotten? Ah, she's already there. So she should have a computer already. Of course, there are also situations where you have need for such a happy flow with all these constraints. It's something you just want to plan ahead as you go. And that's what we call projects. Projects model unstructured processes. An unstructured process is something where you don't know in advance what will happen to that. You get a letter, somebody wants something, you don't have a process for that. You still see, this is going to take months to answer or to do. It's like when the minister sends a request, you will need to put that through different departments. A lot of work needs to be done. It's basically a workflow without security. And you start adding your faces and your transitions and your tasks as you go. At the same time, it's wonderful documentation of what's happening. This looks the same. There's no difference for people, but they can go and add their own faces. 
There is another wonderful little small application that solves real-world problems. It's just what we call the Absence app is to get to book holidays. We have this customer and some people fill out holiday request forms on paper. Some other departments have digitized the process. So they fill out the request on the computer and then they print it and sign it. And they all post it. They put it into snail mail, even though the administration is across the street, because this is the process that has been established many years ago. That means requesting holidays takes at least seven days. And if you are not granted your holidays, you will first know seven days later. And that more or less kills all possibilities for short-term holiday requests. If you say, oh, I'm burned out. I should take the day off tomorrow. Forget it. It's not possible. And a nice thing is that if they approve online, it will turn into an event into the central staff calendar, and everybody knows you're not there. It looks like this. You basically fill in a very short form. You also say who is your supervisor who has to approve this and who is a replacement. We had huge discussions about how to model the improvement hierarchy in the organization to find out who is able to approve for a certain person. At one point we decided this is not necessary at all. Because if somebody picks his friend to approve his holidays, and this goes through, he does that exactly one time. As everything is documented, nobody will ever attempt this. People will in turn be very precise about who to pick to grant their holidays, because if it is retracted, they don't have holidays. So why would anybody fake this? So people just pick their supervisor. The supervisor comes and approves, and all is fine. It's documented. And then you see your planned absences, and you see the state on the right, light green approved, pending, or rejected in red. And the manager has a similar view, and he can see what he has approved, what is still pending, what he needs to okay. And in the end, the thing just sends an email to the staff department, and they copy it over into their SAP system, and they are also happy, because we found out they actually don't need it on paper. And the nice process is the legal app sounds very dry. It keeps your contracts under control. It saves real money, because it does a simple thing. It records all the contracts a company has, and makes them searchable, and it notifies your termination dates. So you all probably have a mobile contract, and it auto-renews, and you need to quit it three months before the termination period. And nobody actually has a reminder set up, or very little people, I think, that they have to rethink their mobile contract once in a while. And by just recording your contracts there, you can see how many people in my company already have a mobile contract with that company. When do they renew? Can I make a volume contract instead? Can I renegotiate, because we are such a good customer. And it basically, again, looks like this. It's a small table with some data. It looks also very dry, because finance people usually only want to see letters, it seems. You can upload your contract data, and you have quite a powerful filter. And that way, the legal department in one month already saved the double amount of money that it has cost to develop this application. So I think that was the quickest return on investment I've ever seen. And any idea on which type of contract they saved that money? Paper purchase. 
Copy paper. Collaborative editing. I have attended an open space on editors. We are still looking for a nice online HTML editor. But the real match winner for us is collaborative editing. Things that you can do in Confluence or in Google Docs. But due to data protection rules, we usually cannot send our content to an external service provider. But using only Office, it allows them at least to edit Office documents collaboratively. And if you show people that they can open a document together and write the minutes together, they will never do it alone anymore. And this is really powerful. Because so far, somebody has written down the meeting notes on paper, and he had to reserve at least one hour after the meeting to actually turn the meeting notes into a digital document and upload and distribute it. And now he more or less, the note taker fires up only Office and starts writing. And if other people see typing mistakes, they just join in and correct them on the fly. And when the meeting is done, the notes are done. And this is so simple. And at the same time, so powerful, we had completely underestimated that. And when we came with that feature, it was immediately radically adapted. So now I've talked a lot of Plown Internet. This Plown Internet, quickly, Plown Internet is already on Plown 5.14. And we are sending our developers to the SOAP 4 Sprints so that we know how to upgrade. Now that Plown 5.2 Alpha 1 is out, we'll probably start next week trying to port. I love this approach, and I will definitely support this. We use DC Workflow Solar Search, Stellarie, Async, DocumentViewer, Previews. It's not a lot of add-ons, but very powerful ones. We have a few of these smart apps that I showed you for these mini databases that solve a lot of problems. We connect to external apps like the only Office editing, Zapier, to connect data of other providers. And a big selling point is that simple and easy UI. We have that design-first approach that you might have heard about. We have a dictatorship in design. So everything we do goes through that small design bottleneck, but it leaves us with a tight quality control for easy user interfaces. And that is the main selling point for Plown Internet, because it abstracts most of all these checkboxes, which most of the time are only implemented for very narrow border cases, but are massively confusing users in their day-to-day work. So what we hear a lot from our customers is that they like Plown Internet because they don't really notice it. They are able to do a job and they find out they need some document, and this document is in Plown Internet, and they just go there, get it, and forget it again. I want to give you a quick example for our Plown Internet project lifecycle. We had a project and on Day-N-Zero we installed Plown Internet as a basics for cooperative work. We had a small business team of five or six people which we trained, and we set up a training website where it explains the basics, how to use it. We did a brown bag session where we just sent to the central mailing list, come if you want, we show the new system, you don't have to use it, it's all fine, it's just if you are interested, take a look. And we had news ready, and on the tile section we configured a lot of tiles to external systems that they are already using. After one month we were through with the business team on how to import static content. 
So they have a huge website, very old BSCW website, where there are forms and big PDFs and stuff that they went through, filtered out, and we imported that, and suddenly they had proper search. They could see the previews right away, they don't have to download the files and open them to read them, and could easily browse through, and we saw that some people that attended the brown bag session were clicking through the site. After two months we ported the phone book, and we introduced the holiday app. And the big thing was we allowed the containers to make food ads in the news section. We got their own section, suddenly we had a spike in traffic on the internet, and a lot more early adopters. It's actually very nice if people come to your website because of food, because they are in a good mood. Suddenly they like what they see, perhaps the food was good, they immediately combine the food with the internet, which is good. And what happens now is people start stumbling over the work of others, they see what they are doing. We have profiles that shows what other people are working on, so you can discover content. And that's what social internet is about. And they started, they wanted to do the same, so they went to their training website and found out how to do this, or even better they contacted these people and said, how did you do that? That's all communication that didn't have to go through the business team and didn't have to go through trainings. So we introduced the travel cost reimbursement process, which is one of the most frustrating processes. And people started looking at the internet to see who's on holidays. The thing is if you have the holiday process on your internet, after three months basically 80% of your people have had to go to the internet to book their holidays. And the people started to use the app section as a main navigation entry point because they cannot remember the URLs of all the different services. But as we made them tiles, they just went to the internet and clicked the tile. And then five months into the project, we introduced administrative workflows like how to get third party research money. That's a bit more nerdy, more specific already. And we started to expose the travel cost reimbursement process that were happy flows, as you saw before. And here, now that's something for later. And we saw that people were onboarding other people's over lunch. So we saw that in the container, people actually told other people how they do it. We saw about 50% of user engagement by that time. And you know what happens in formal communication, somebody comes, oh, that new internet is such a shit. And I have to learn something new. I don't need that. It works perfectly currently. And that other guy says, why? It's so simple. I could throw away masses of paper. And then that guy who already knows how to use the internet is implicitly convincing that other renting guy because he has more information. That is so powerful and that's something you cannot orchestrate. Nine months into the system, because we exposed the travel reimbursement process to people, so we let them see the happy flows and where their travel reimbursement request is at the moment. We saw that people got a more humble attitude towards the administration. So before everybody was saying, ah, administration, lazy people, they never work on my stuff. I have to phone them every day. 
And once they saw all the steps that have to be done to get a travel reimbursement through, and when they got a notification every day that their travel reimbursement has preceded one or two steps, they got a humble attitude and said, oh shit, they are doing so much work. I'm embarrassed that I put that upon them with my travel expenses only to get 40 euros reimbursed. And the administration people started to lighten up. We also saw that email in started to replace long email discussions. You send an email, you get an answer, then you involve a third person and a fourth person and the email gets longer and longer and you have to read these horrible, indented threats to get an understanding. And as always you have to scroll down and read that email, then you have to go up and read that email and go up and read that email. This is not a natural reading flow and it's really, really stressing if it's not formatted properly. So people started mailing that into workspaces and then using the comment feature there. We saw 98% of user engagement. That's a fantastic number. It basically means though people had to book their holidays, so it's not really people actively working with everything and they had to do their travel expenses there. We saw that users are members in an average of 10 workspaces. And the former document repository had been shut down. We never expected that because people in the beginning said, it contains everything. 96 gigabytes of data. We cannot shut that down ever. It's unsupported for six years now. No security patches, but we cannot shut that down. And the central computation center has been only offering one day training per month to interested people because of the low demand. There was actually not enough demand for more training anymore. And the people are saying that they are asking their colleagues how to do stuff. So if you want to do something like this, how can you do that? I would just say get blown internet or quave. And if you now ask what is quave, then I would tell you there are two editions of blown internet. And then I have to explain why. I mentioned this design first approach and this is a very, very important thing for us. Because we have to make sure that it stays easy for people. And if you just install yet another add-on that was done for a completely different use case, you will crash blown internet totally. It lives from its user interface. And if you do work on a project-based work model, you can only work on your software when you have a project. And that is not sustainable for a stack like this. And we definitely want to avoid that it becomes unsupported. And internets are not software that are replaced after two years. So if we install internet, we expect that we have to support it for at least 10 years. So we need to make sure that this is a live software. So we have a version that you can actually subscribe to, to get full support. Still, all is fully GPL. And we decided to just make a delayed release cycle like closed betas. And every three to six months, we are pushing the complete stack that we have out to the open again. That is actually coming very soon. We wanted to do that for the conference, but we simply didn't manage. And that is blown internet the community edition. You can find data on that also on Quave.com. Quave is basically the name. We had a long discussion that for people not knowing blown, it's better to brand and to leave technology completely out of the discussion. So version 1.4 is just around the corner. We have new apps. 
We have tons of small improvements, speed ups, of course, latest blown five and the design first approach. And if you want to go for a customer, we have Quave social internet. That is what we call the supported edition. We promise full email support. We offer hosted solutions. We offer onsite support and support with all these different add-ons, LDAP, Active Directory connections, of course, migration support. That's the stuff that we only offer for the Quave solution. That's a subscription service. And basically that subscription helps us to maintain the software also if we don't have a software project. We also offer consulting because it is very talk intensive to introduce an internet. You have to explain a lot. You cannot just install and say, use the processes, be happy. You have to look at this and for example, show them the matrix that I showed you before and say, let's look at the process that actually give us money. Five more minutes. Five more minutes, super. Then I will tell you how to get in touch and close down. Go to the website. Just ask us for a free demo if you want to see it. Subscribe to the main list. Send a note. You can also get a free playground if you don't want to install it yourself or just send me an email. And if you want to get involved, we have a reseller model. If you don't want to concern yourself with all the technology and just let us do the hosting and the support, but you want to keep your customer. But you are also very welcome to become a consortium member where we are going to onboard you and to explain the whole stack and explain the design first process so that you over time become able to actually sell that on your own. Quive, that is smart collaboration. Thank you very much. Thank you, Alexander. We have four minutes for questions. So the workflow, the workflow kind of mini apps, right? You are calling them smart apps? Is that the same thing? The same name? That is probably the one thing that is a little bit special because it hooks into a DC workflow that you have to put underneath. It is an extended workspace. It is basically a normal workspace and we also show the workflow in that workspace. So my question is to develop a new workflow, how quick is it and can the client do it themselves at all or they have to basically come to you guys or how much work is involved? If the client knows how to configure a DC workflow, he can do it himself because it is basically nothing else than a content type that bases on a workspace and has a local workflow policy. So we found out that no client is ever able to do this and instead it takes about one week to one month to talk through them what their process actually is and then it takes two hours for us to click it together. Any other question? Okay, then thanks again, Alexander. That's one. How much does the subscription prices cost? For a small setup with up to 50 users, I think we have six euros per seat a month and then it gets less the more users you have. But it is important to consider that it includes complete hosting and support so it is a carefree setup. Any other details on the homepage? We have time for another question. What is the cost of the subscription in the hostings only in Europe? We have hosting possibilities in Europe and in the US with six feet up.
Plone Intranet has evolved over the past years from a platform for sharing content to an engine that protects people from repetitive work strain. While earlier talks on Plone Intranet were technical and design oriented, this talk now describes applications of Plone Intranet from real customer projects. It shows examples where saving a bit of time for everybody highly exceeds saving a lot of time for a few and what a system must do to get really popular.
10.5446/54858 (DOI)
Okay, I think my presenter is not here, so I'm going. I can do it. I have no problem. Alright, I'm going to introduce Ramon. Everybody knows him, right? He's been a long time contributor and co-author of Guillotina. And now he's going to talk about Guillotina CMS. Thank you. So, this talk is, first of all, a work in progress. Okay? We've been improving and developing Guillotina CMS as a layer to create CMSs with the same API as Plone has. But it's still a work in progress. We use it in production, and I really invite you to join, to contribute to it, to make it bigger and more stable. So, who am I? I want to use this opportunity to thank the Plone Conference because it's really lovely to see so many talks about Guillotina. We did a talk on Monday at the Talkie meetup, a training on Guillotina, a talk about Guillotina, a talk about the CMS. Later we have a talk about Guillotina in real case studies with Plone React. So it's really cool. So, what is Guillotina CMS? This slide is identical to the one I used yesterday in the Guillotina talk. It's an asyncio framework designed to scale and to manage resources with security and traversal like Plone, for the CMS use case. And mostly, yesterday I was explaining in my Guillotina talk that we have Guillotina as a framework that connects to databases, catalog, cache systems, whatever. That provides a good backend infrastructure. And in the Plone community, we have Volto, who is doing the front end in React. We have angular-traversal, ngx-schema-form, Pastanaga on the Angular side, which are components to build your applications with Angular or Ionic. And they are really cool. They work really well with the Plone REST API. So we wanted to fill the gap between the Guillotina system and all this front-end ecosystem that we have. So, we created Guillotina CMS. That is just an add-on for Guillotina that provides the needed endpoints to cover the differences from the Plone REST API to the Guillotina API. So, what are these differences? This is the list of the different sets of endpoints in the Plone REST API. For example, all the authentication system is already built into Guillotina. We don't need to care about that. All the content manipulation, meaning all the CRUD, doing the gets, the posts and the deletes of the content, is already built into Guillotina. No problem. All the history management — this is not in Guillotina. Guillotina doesn't do out-of-the-box history management. But this is implemented in Guillotina CMS. Batching, the same. Comments are still not in Guillotina CMS and need to be implemented. We didn't have the use case to need to do that. A plone.app.discussion-like API for comments could be implemented easily in Guillotina CMS. Copying and moving of objects is already in the Guillotina package. Portal actions. Portal actions is something that we are now defining specifically for each project, so we don't need to persist it in the database. So, right now, it's not implemented in Guillotina CMS, but could be easily implemented. What else? Workflows. I will explain later. Workflows are something that Guillotina doesn't offer. But Guillotina CMS is providing the workflows API from Plone, just with a different implementation and a different way of configuring it. Locking system. Right now, we don't have it implemented in Guillotina CMS. Sharing. Everything in order to share one resource with a user is already implemented in Guillotina. The registry, in order to configure specific things for each container, is also in Guillotina. Which types do you have? Same.
User management. Guillotina doesn't have user management out of the box. And Guillotina CMS doesn't provide any user management either. We delegate that to whatever specific use case you have. Out of the box, if you want to have something like a Plone site with Volto and Guillotina, you can use guillotina_dbusers, which stores the users as resources in the tree. Groups. It's exactly the same. The components, breadcrumbs, navigation — these are specific endpoints from the Plone REST API in order to provide the navigation and the breadcrumbs. This is already implemented in Guillotina CMS. All the serialization and deserialization is also in Guillotina. The search API — and I will spend a bit of time on the search API of Guillotina — is already implemented. TUS upload of files Guillotina already provides. Vocabulary management — being able to define which languages you have or any kind of vocabulary that you need — is already implemented in Guillotina CMS. Control panels. We are still not providing this option. It's something that, if somebody has the use case and the need of providing them, is easy to implement. Tiles are already in Guillotina CMS. For sending email, we have a specific endpoint. So we are covering nearly most of the things of the Plone REST API. But the way we developed this is that instead of going to Plone and checking what Plone does, we went to the Plone REST API and we checked what the Plone REST API does. We implemented it trying to follow that API. So we are not really focused on what Plone is doing by itself. So I'm going to cover just some of the most important things from this list that we implemented in Guillotina CMS. Vocabularies. Right now we have three vocabularies implemented: just languages, workflow states and content layouts. It's kind of simple. We use a decorator. We define that we want a vocabulary. And then we just provide the standard, Pythonic ways of going through an object. And in this case, we copied the list of languages from Plone, so I provide the same list of languages that Plone has. Link integrity. It's something where there is no API in this case, but it's something we needed to provide, and we already implemented it. It's a separate add-on, called guillotina_linkintegrity. And it's doing both things. It's making sure that if you are moving a content, you are still being redirected. And we are storing this information in Redis. We are also checking that every time you are touching the HTML code of a page, the URLs are rewritten. We also have the resolve URL, so you can reference an object by the URL. Constrain types. It's what Plone is using to say: in this folder, I just want to have documents or news or whatever. It's already implemented. It's really easy. There is no API for this in Plone right now, so we needed to create the API. And we hope that Plone then adopts this API as their standard API. And then we have a notification in order to be able to subscribe to a specific folder. It's also already implemented in Guillotina CMS. And here, as the last thing of these small items: behaviors. We have some differences from the Plone REST API in the Guillotina REST API. This is the biggest one and the most difficult one. We are not flattening the fields from the behaviors onto the main object. So when you get a page, all the fields from the Dublin Core are nested inside a key for the Dublin Core behavior.
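To make the nesting concrete, here is a rough sketch of the two serialization shapes being contrasted. The exact key names (the behavior's dotted name, the static behaviors key) are written from memory and may differ slightly from what guillotina_cms actually emits; treat this as an illustration of the idea, not the canonical payload.

```python
# plone.restapi flattens behavior fields onto the object...
plone_restapi_style = {
    "@id": "http://localhost:8080/Plone/front-page",
    "@type": "Document",
    "title": "Front page",
    "description": "A Dublin Core field, flattened onto the object",
}

# ...while guillotina_cms keeps them nested under the behavior's key and
# lists which keys are behaviors, so the frontend can tell them apart.
guillotina_cms_style = {
    "@id": "http://localhost:8081/db/cms/front-page",
    "@type": "Document",
    "title": "Front page",
    "guillotina.behaviors.dublincore.IDublinCore": {
        "description": "A Dublin Core field, nested under the behavior key",
        "creators": ["root"],
    },
    "static_behaviors": ["guillotina.behaviors.dublincore.IDublinCore"],
}
```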
Because we are really splitting the behaviors when we are serializing, and giving that information to be more explicit on the serialization and deserialization of the information. In order to make this easier for the frontend to understand what's a behavior and what's not a behavior, we are providing a specific key. It's called static behaviors, where you can list all the keys of the behaviors that you have on your content. So you know that that specific key is a behavior. Rob did an amazing job in order to implement this in Volto, so I really want to thank him that we have this feature already implemented in Volto. Content types — well, I think they are the least interesting kind of thing, because they are really simple to implement. But we are already providing documents, news, files, links and events, which are the easy ones. And we are working on implementing collections also. Oh, folders, yes, sorry. Folders are already in Guillotina, so that's not Guillotina CMS. We implemented two specific fields: the rich text field — because for some reason Plone needs to have on the API the encoding and the content of the text and what kind of field it is, if it's HTML — and the image field, in order to be able to have images on the site. So, workflows. What we did here — I've been a Plone developer for, I don't know, 10 years, 12 years, I don't know. And one of the craziest things I ever needed to do is the XML for workflows in Plone. When I got there, I was, come on, this is so crazy. I want to create my own workflow by hand and it's really complex. So when I was thinking how I would love to have workflows, I said, okay, I'm going to check this XML and I'm just going to try to represent something that feels comfortable for me. So I used YAML. And so now workflows are defined with a YAML file where you define which is the initial state, then the different states, the actions that you can do, which is the guard for each action and which permissions are set when you go to that state (there's a rough sketch of this below). It's a really simple approach. It covers most of the use cases that Plone has. In Guillotina CMS there is one example of this YAML for the simple publication workflow that you can check out. It's really large. And for me, the nice thing I wanted to maintain is that what we have in the set permissions is the same payload you will set on the sharing. So when you are doing a workflow, when you are publishing something, you are applying the permissions the same way you would do a sharing post to modify the permissions on that object. You could copy this payload and just use it on the REST endpoint. We still need to provide mechanisms in order to execute code when the action happens, or do more crazy things. But it covered our use cases for now. And at least you are able to publish and unpublish content. And it's also used internally. You have an event you can subscribe to. And so you could also write code for when something is published, to do whatever you want. Search. Well, we had a long discussion with Timo. It's a shame that he's not here. Because when I was facing how to implement search, I was facing the search URLs from Plone and the catalog — lists, different arguments. I needed to implement a lot of things in order to provide that kind of feature in Guillotina CMS. So I was discussing with him, kind of saying, search has changed a lot during the last years. Now, for example, engines like Elasticsearch and Solr provide aggregations.
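As promised above, a rough sketch of such a YAML workflow definition, parsed here with PyYAML just to show it is plain data. The overall shape (initial state, states, actions with a guard, permissions applied on entering a state, expressed like a sharing payload) follows the talk; the individual key, permission and role names are illustrative guesses, not the exact guillotina_cms simple-publication file.

```python
import yaml  # pip install pyyaml

workflow = yaml.safe_load("""
initial_state: private
states:
  private:
    actions:
      publish:
        to: published
        check_permission: guillotina.ReviewContent   # the "guard"
    set_permission:                                   # same shape as a @sharing payload
      roleperm:
        - setting: Deny
          role: guillotina.Anonymous
          permissions: [guillotina.ViewContent]
  published:
    actions:
      retire:
        to: private
        check_permission: guillotina.ReviewContent
    set_permission:
      roleperm:
        - setting: AllowSingle
          role: guillotina.Anonymous
          permissions: [guillotina.ViewContent, guillotina.AccessContent]
""")

print(workflow["states"]["private"]["actions"]["publish"]["to"])  # -> published
```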
You can do kind of really specific things on the search, like auto-completion or different kinds of things. So why don't we try to provide a search endpoint on the Plone REST API that is a bit more powerful? And we reached the consensus that is what I'm going to show. And then the Plone REST API will try to implement this same API, at least when there is collective.elasticsearch or Solr. So first of all, a list is defined on the query with a plus symbol. Instead of how Zope is converting the lists, we just define it like this and we are defining a list on the query. So if I want to search that the text is exactly that text on a field: just the field name, underscore underscore, equal, and whatever we want to define that we want to search. If we want to search that this text may be on the field, then we use "in" instead of "equal". If we want to search that it's not on the field, we just use underscore underscore not. If we want to use a wildcard search, the same thing. If we want to filter based on a keyword — for example, subject or language — we can use it directly like that. If we want to work with numbers and we want to say bigger, equal, greater: the same kind of modifiers. The same for the dates. Then, when you know what you are going to search, you want to say, okay, now I just want to get some specific fields. I don't want to get all the different index metadata fields that I might get when I'm getting a brain. So I can define which specific fields I want to serialize and which specific fields I don't want to serialize in the payload I'm going to receive. Then of course I want to sort, I want to define a batch size in order to do batching on the search results. Aggregations — that's something that is only supported if you have Elasticsearch or Solr — mean that you get how many objects there are for specific keyword elements that you want to filter on, and you can build a faceted navigation. And for the path, we have a specific modifier, underscore underscore starts, so you can define that your path starts with a given folder. Really easily — this is to escape the plus if you want to use a plus. Right now, if you want to search on a path that's a folder with a depth of two, this is how you do it now in Plone. And this is how you do it with this new API that's implemented in Guillotina CMS: path underscore underscore — I missed the underscore here and there also. And it's the same. If you want to search something that has a title and is a document, you can easily map things up and down. If you want to do a more complex search — it's published, has a specific portal type and a certain review state — you can map it to aggregations. We are also providing images, so we are using Pillow and plone.scale to be able to resize the images and to provide the different sizes of the images like Plone does, the thumbnail size and all the different sizes. So you just need to reference the resource, the images endpoint, the field name and the scale that you want to use. And the scales are defined in the configuration.yaml. We have a pubsub app. What it means is that you can subscribe to changes on one object. So we will see a demo about that later. And now we are going to see the demo. So the demo is going to be a bit... First of all, this demo I'm going to show you is using this configuration file. What we have here: we are connecting to a database, a Postgres database that I have on my laptop.
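Before the demo continues, here is a compact sketch of what those query strings end up looking like, with all the modifiers mentioned above in one place. The field names are made up, and the exact spelling of the sort and batching parameters isn't given in the talk, so treat this as an illustration of the convention rather than a verbatim copy of the guillotina_cms search API.

```python
# Build a query string using the modifiers described above:
#   field__eq / __in / __not for exact, contains and negated matches,
#   __gte for numeric/date comparisons, path__starts for path prefixes,
#   and "+" to separate the values of a list.
params = [
    ("title__in", "budget"),               # "budget" may appear in the title
    ("type_name__eq", "Document"),         # exact match on the type
    ("review_state__not", "private"),      # negation
    ("modified__gte", "2018-01-01"),       # greater-or-equal comparison
    ("path__starts", "folder/subfolder"),  # path prefix
    ("subject", "plone+guillotina"),       # "+" makes this a list of two values
]
query = "&".join(f"{key}={value}" for key, value in params)
print(f"GET /db/cms/@search?{query}")
```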
I'm loading these applications: Swagger, because I want to provide a Swagger definition, Guillotina CMS, Elasticsearch and Redis, and dbusers. The rest is much more standard. And I'm configuring the Elasticsearch here. And here I'm defining which workflows — you don't need to define all the workflows that you're going to use in your configuration.yaml. You can import them if you have them in a folder, or define default ones in your system. And here, for example, I have one that's private, I can publish, and then I'm able to retire. And the only difference is that I'm allowing Anonymous to access it and to see it. And at the end, you are defining for each interface which workflow you want to use. So, for example, for the standard IResource, I'm going to use the basic one. And for IDocument, I'm going to use the Guillotina simple publication one, which is a file I have in my package. So now I started Guillotina. Let me see where I have my browser. By default, Guillotina provides you with Executioner, which is done by Eric and Amatild, which is an amazing kind of ZMI, but for Guillotina, where you log in here, you see that we have one database, there is nothing here. So we are going to create one container, which is what a site is. And I'm going to call it CMS. Once I have the site, I can go to the add-ons and, okay, I want to install the users. Now you see that it created two folders, one for the users and one for the groups. And I'm also going to install Guillotina CMS. Both of them are now installed. I can go to the CMS here and I can decide I want to create a new folder. And in this folder, I'm going to create a page, a document. And now in this document — thanks to the release we did yesterday — you are able to edit and you see a rich text editor here and a full edit page of the document, where you can even go here and... So we have a minimal rich text editor — I think it's using Medium Editor — where you can also edit pages through Executioner. You can also go here and see which is the payload that we are going to send, and you see all the JSON that is going to be sent to the backend, in case you want to do the JSON by yourself. Now it's updated. So we have a way to edit in Guillotina CMS, kind of simple. Well, I can also create multiple sites, so now I'm going to create one called Web. And now I'm going to go to... So I have two versions of Volto here, one... This is the standard one, and this is 3000. Yeah. So this is Volto. So you see, now I'm in Volto, I'm seeing the same content I just created with Executioner. I'm connecting to the same one, and I can go here and I can create the folder, say... Oh, I need to set the title, sorry. So I have the folder, and now I create the document. Oh, I'm able to see the Pastanaga editor, amazing. As you see, I've been able to push an image, no problem, everything works. I have a Plone site and the backend is Guillotina CMS. And I can even go to the search. And, well, I'm receiving a lot of things, but, well, we are able to search for the content. So that's Pastanaga with Volto and Guillotina CMS. That's really great. We can even go to our Swagger here, and now we can check what we have. I'll change it to CMS. And now, if you have a frontend developer, you can go here and check all the different endpoints that we have for the root folder. And you can, well, go and check wherever you want. I missed also, of course — here we have the different states.
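For readers who want to reproduce the demo without the admin UI, the clicks above roughly correspond to the following HTTP calls. This is a sketch from memory of Guillotina's endpoints: the port, the root credentials and the add-on ids ("dbusers", "cms") all depend on your config.yaml and on the versions installed, so treat them as assumptions rather than guarantees.

```python
import requests

BASE = "http://localhost:8081/db"  # host/port come from config.yaml
AUTH = ("root", "root")            # the root password is set in config.yaml

# 1. create a container -- the equivalent of the "CMS" site created in the demo
requests.post(BASE, auth=AUTH, json={"@type": "Container", "id": "cms"})

# 2. install the add-ons on that container
for addon in ("dbusers", "cms"):
    requests.post(f"{BASE}/cms/@addons", auth=AUTH, json={"id": addon})

# 3. create a folder and a document inside it, like the demo does through the UI
requests.post(f"{BASE}/cms", auth=AUTH,
              json={"@type": "Folder", "id": "news", "title": "News"})
requests.post(f"{BASE}/cms/news", auth=AUTH,
              json={"@type": "Document", "title": "Hello Guillotina CMS"})
```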
We can change the state of the content, or you can change the display based on the layouts that Volto has. And finally, I'm going to do a really... This is going to be risky. We have another version of Volto that is on a branch called websockets, which we did at the Costa Brava Sprint in July. So it's a bit outdated and needs to be merged. But I'm going to try to see if I'm able to execute it. So here I have to... So I'm going to create a page, I'm going to call it demo. Okay, I have a page that's called demo. And here, if I refresh this thing, I should see the page with the demo in a moment. Sorry. Demo. Okay, I'm on the same page. I go here. All the connection between Volto and the backend is going through a WebSocket. It means that every GET request, POST request, everything that needs to interact with the API, is done through a WebSocket connection. That, plus the option of being able to use the pubsub app, allows us to interconnect. And here we are using the diff-match-patch library from Google in order to map differences on the fields. But it took us, I don't know, one day or two days to do this, to prove our concept. So the WebSocket connection is easy to be merged. Let me see. I wanted to show it a bit. WebSock... I don't know why. Under that power. Oh, yeah, thank you. Sorry, the screen is so small. I cannot even see it. Well, now it doesn't appear. I can assure you that we have the WebSocket and all the communication is done through there. I can show you later if anybody is interested. I don't know where I have my other... So I think that's all. Thank you. I don't know if there are any questions. So it might be a bit of a stupid question, but on the search you were showing a search by date. And how do you define which format of the date — the American format where you have the month first and the day after, or the European format where you have the day first and the month after? Here we are using the magic of python-dateutil, which is able to parse. It's this library that is able to parse and detect what kind of date you are sending, and it tries to be as smart as possible to detect that. There is a library in Python where you send a date in string format and it tries to guess which is the best match. If you put the year wherever you want, it works. Yeah, yeah, of course. How is it going to guess between the year and the month? If you look at python-dateutil, I really think it's a really good approach. Besides that, it's just a matter of defining that in your code. And right now, in order to deliver something fast, we are using python-dateutil. I was wondering what you modeled the search on, I guess the pattern — my brain is not working. Did you copy the way that you pass search parameters? Well, I tried to do a bit of research about how different systems are doing this passing of parameters. Mostly oriented to content management nowadays — Liferay or different other systems. Also, how Google is parsing these variables and how people are used to using these modifiers. And I just tried to reach a consensus that fits with Elasticsearch and Solr behind it. Any other question? Any plan to have the WebSocket thing connected to the Pastanaga editor? That's a question for this guy. So you can collaboratively edit using the Pastanaga editor, so you can see new blocks and so on dynamically. It's not difficult to use the WebSocket connection on the pubsub app.
It's kind of a lot of work to make compatible with what we have right now and also the collaborative edit on the same name space, I think. I remember a few years ago, I think Jarn was showing something like this and the problem they ran into is that the parsing of the HTML, the algorithm wasn't aware of HTML and therefore would potentially send garbage. Is that something that's been solved since? What? Yes or no? So fixing the HTML isn't fixed, but in the PastaNag editor we use in Draft.js, and then the surface use is JSON and the JSON with some text so we can actually do the divs on there. So we can do it. Okay, nothing else. Thank you so much.
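A small footnote to the date-format question in the Q&A above: the library being described is python-dateutil, and a couple of lines show how it guesses a format and how the day-first ambiguity can be resolved explicitly. The example values are mine, not from the talk.

```python
from dateutil import parser  # pip install python-dateutil

print(parser.parse("2018-11-09"))                 # unambiguous ISO date
print(parser.parse("05/07/2018"))                 # month first by default -> May 7th
print(parser.parse("05/07/2018", dayfirst=True))  # day first -> July 5th
```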
With Guillotina Framework ready for production and the Plone React becoming ready, Guillotina CMS provides the missing link on the connection between both. The talk covers how to use and create a basic project with Guillotina CMS: - Basic types and Behaviors, - Workflow, - Vocabularies, - PubSub, - Login
10.5446/54859 (DOI)
Okay, so it's a bit of a nebulous topic. I'm actually going to kind of talk about four things. So I want to kind of show off hands. I can sort of increase or decrease. I'm not going to get through all of it. So how many people kind of are most interested in say, like Al, I'm going to talk about our Rancher stack and how we scale and how we do multi-site. How one's most interested in that. Who's most interested in sort of practical examples of sort of theming, how we do that in perhaps a different way than you used to. Couple, few as well. And if I get time, I want to kind of do Prediform, which is our product, which we build on top of Plomino, which is a low-code platform for doing workflow. There's been a few different talks about very similar things. So anyone really interested in that? Okay, if I run out of time, maybe there's an extra lightning talk. Okay, so reducing the learning curve, I'm really going to talk about kind of one way of doing it. And this is coming from the perspective of an integrator, where you're hiring people to code or an organization or even if you're like, you know, a power webmaster kind of person and you want to get something up quickly. So talked about this a bit. People know that like, you know, build out Python, learning the whole Plone stack, you know, things like ZCA and stuff. It's quite a bit of time to get up and going. Things like, I think, you know, the headless kind of concept where you just have the REST API is going to reduce that learning curve. If that's all you got to know is the REST API and that's the standard. But you still have to be an Angular person, sorry, like a React person probably, I guess. You know, so there's, if you're hiring someone who's an Angular person, a React person, then you probably can get going pretty quickly. Otherwise, you're still going to need to pay a reasonable amount for an expensive developer. Whereas like some of the tech stinks, I'm going to talk about is, you know, you can train someone up, you know, in a couple of days if they know HTML, CSS and some JavaScript maybe, to do some reasonably complicated things. And there's also the option for non-programmers. So some of the stuff we've been doing with Protoform and Plomino is to make it so you don't have to write any code at all. It's got, you know, kind of a visual code builder and so on. So it's a low-code platform that could do reasonably complicated workflows without writing any code. So there's the sort of complexity maybe. And then, I mean, the thing is, like I said, the time may be quite short with React, but you're still going to end up paying a bit for those developers, right? So as an example, this particular one, right? So this is a theme that we got an agency to do. So we would deliver HTML, CSS, JavaScript, et cetera, right? We don't go and build things on top of blown things. We take a theme, we get someone else to do it. They do all the hard work with all the HTML and everything. And then we make it work, basically. So we hired a developer who basically did this job more or less, like, without too much input, you know, using some of these techniques. And it's in his first month of employment, right? So there's the other thing we kind of do is the lifts and shifts where we, you know, this is a theme that was, this was a site, complete site that was built on top of another CMS. And we just rip out all the HTML and JavaScript and everything and stick it using Diozo and make it all work. 
So I'm going to talk some examples about how we do these sort of things quickly. And the other thing we do is sometimes you can buy, you can buy themes with blown. How many people knew you could buy themes with blown? What you do is you go to Theme Forest and you buy a WordPress theme and you use Diozo and you make it work, you know? So this is our website. That's what we did. The same techniques work. And you can get nice themes that way, right? So what we don't do is we don't override things. We don't use variables in CSS. We don't use Barcelona underneath. We use it for the back end. We use the back and front end thing where we keep our public stuff themed and we keep the editing interface using Barcelona Netta. So the serverless kind of thing, right? So one of the reasons part of what we're doing with all this is we run multi-site. So serverless. How many people heard of AWS Lambda and stuff? Serverless, the whole serverless concept. So this is kind of a new paradigm. What it means is that you're only writing a little bit of code, right? You're not deploying an application server. You're writing just the code you want, right? And it's really simple deployment, right? You just upload it very simply. The biggest thing is that you don't know where it runs, right? It just sits there in the cloud and runs. It's like an endpoint that's there, possibly stateless. You can hook it up to a database but the middleware bit, the application bit is just sitting there running. You don't know what server it's running on. It's just running in the cloud. So you don't have to install a DB, maybe connect via APIs to some other DB. And you only pay for when it runs, right? It's kind of a per second billing type thing. So you can pay more than if you are running the thing constantly but you don't have to run the thing constantly, right? So the thing is that through the web what we have with Pwn is when you put the code in the database, you have a very, very similar thing, right? So you only write just the code you want if you're using theme fragments and Plomino and things like that. You're not if you take out Pwn and treat Pwn as your deployment mechanism then you're just writing the code that you want to actually achieve what you need. You can upload a theme zip or, you know, and the dexterity, you can just upload a zip. Again, you know, the code, you don't know where it's running, right? When you're running a multi-server, multi-zdo cluster, you can have as many servers as you want and that code's running over all of them but you've only just deployed it in one place and it's automatically distributed. So, okay, so there's still a problem of, that's why we call it DIY serverless because someone's got to install it for you but with things like Docker and Kubernetes and RDS and stuff, if you've got a standard Plone set up like a Docker image, you can actually do this without too much trouble or get someone else to do it for you and I'm going to show how we do it. Per second billing, well, it'd be kind of cool to implement that. I mean, all you really need to do is just kind of look at, add up all the number of seconds that everyone actually used for domains that they, that are running through your system and you can get per second, you can work out how much time they're spent in total. So, how we do the multi-site stuff. So we use mount points. We have a different database for every single customer. Obviously, we don't want to restart, clone or do any deployments to put a new sign up. 
So, what we do is we pre-allocate a whole bunch of databases, Zope databases, as mount points; normally it's about 20, but yeah, that's how we do it. We use collective file storage for that. We don't want to do the VHM stuff in nginx again, and we don't want to restart anything, so we do it with virtual hosting, right? And this is a neat little trick. The Virtual Host Monster doesn't support SSL; there's no way to say that I want to serve an SSL domain. So you can use this neat little trick. This is what goes in your Virtual Host Monster: we've done this underscore-SSL thing. The "redirect" thing there is actually a plug-in we have. Basically, we're taking the non-SSL domain and sending it to a Zope object which is a redirecter, which has rules that say send it to the SSL domain, or wherever we want to send it. So it allows us to do arbitrary redirects for any domain within Zope. And then on the HAProxy side, all you're doing is just adding that underscore-SSL suffix. So that allows us to use the VHM to do all our routing of domains to the right place. We use Let's Encrypt; we haven't quite hooked up the Let's Encrypt side of things to do it automatically, but we could. We support multiple Plone versions in our cluster, right? So we're running Plone 4, Plone 5; we may be running 5.0, maybe running 5.2, I guess, maybe soon. And we run all of that in the same cluster. At that point you need to work out how to get this site to go to Plone 4 and that site to go to Plone 5, and that's a little more tricky. So we wrote a thing called collective Zope Consul. What that does is take the VHM and the other things you want out of Zope and send them to Consul every time they change within Zope. Then we use consul-template, and that rewrites our HAProxy configuration. All it's actually doing is very simple, right? We could get it to do rewrite rules and all sorts of stuff, but all we're doing is basically saying this is Plone 4, this is Plone 5, and then it goes to different queues. Unfortunately, that does restart HAProxy, which kind of sucks a bit; it sucks that HAProxy doesn't have live reloading. It would be nice to fix that. The way we want to fix it, I think, is to do the classification in the step before, in Varnish, which does have live reloading, and just use a header. We just haven't done that yet. So then, scaling. We run Varnish, obviously, to help with scaling; we run one big Varnish in front of all of it and it does the logging. The nice thing about Rancher, which we're using, is that we can just scale the Plone instances and it will create new servers, and it actually doesn't have to restart HAProxy: we use our custom HAProxy code and it uses an API to go and add and remove servers from the appropriate queues, which is really nice. So scaling instances is super easy. We do do some stuff to help manage traffic. We do a bit of traffic classification; I think in our latest version we're doing a bit less of this, but we treat crawlers a little bit differently, because they tend to make the cache extra big, which kind of sucks. So we have one or two special instances there just for crawlers, so that we don't have things going through and ruining our normal Zope caches. Things like long requests: we try to minimise long requests, and we try to use plone.app.async and so on.
But where we do have them and we know they're there, we can have separate ones. This is all using HAProxy queues, by the way. You can have the same instances, and different groups of instances, all using HAProxy queues with different settings, and it basically separates your traffic out, so not everything is going to all instances. The big thing you're trying to avoid here is big, long requests blocking small requests and making some things basically unavailable. We've done this in the past where we've set POSTs aside to one or two instances, and that helps reduce write conflicts because you've just got fewer instances; like, if you've got particular applications that happen to be doing a lot of writing at a particular time because of government deadlines and stuff, then that became an issue. And all of this has failover, so we can just not run certain types of these Plone instances, these Plone Docker instances, and it all still works. The latest version doesn't quite have this, but the old version did: collective.xsendfile helps offload a whole bunch of things like videos and stuff like that. We did some work to make collective.xsendfile work with wildcard.media and so on, so that you can have really big videos and it's all getting streamed out of nginx, not out of Plone. We use Z at rest. So the thing is, the whole idea here with all the theme stuff is that we don't actually have to deploy new Docker images. We basically have a whole bunch of plugins that are very generic and get used across all our sites. We do make those plugins, we do have to fix them sometimes, but we don't have to do that very often. So we're using a collective tool we made; it's a tool we used to deploy things before, but now we use it to bundle everything up, to make it easier to take developed packages and put them into Docker. Most of the deployment we're doing is these theme zips, or a Plomino zip, or a few things like that, and I'll talk a bit more about that. That is pretty much all the plugins we're using, more or less; Plone 4 has a few more that we haven't needed for the current set of sites, and this is mainly from Plone 5, I think. The first set are mostly the ones I'm going to talk about, which are things that help us build client things. The second set are more things that give us functionality that we don't have to customise. And the last set are more infrastructure sort of things, I guess. It had to fit in three columns, so it's not perfect. Okay, so I want to talk about fat themes, which is something that David Bain, I don't know if he made up the term, I think he made up the term. It describes, I think, the thing that when we talk about theming, what I notice is that people are talking about different things at this conference, right? Like, people are talking about just changing CSS variables and doing everything by CSS. To me, that's not theming. A fat theme: one reason I think WordPress really works is that when you use WordPress, you're using a theme, and the theme has a lot more in it than just changing a few colours here and there. What it's doing is giving you shortcodes and content types and example content, and everything in the theme is often designed for your use case, right?
You'll buy an e-commerce one, or you'll buy a library one, or something like that. So the whole experience is much, much nicer, right? And this is not the way we've tended to think about things in Plone. What happens is that, really, if you're building a CMS, only one thing can win. Either the designer with the theme is in control of the whole experience, or your plugins are going to have control, or your editors have the layout and they can do everything. In Plone, we've kind of tried to have it all. And the problem with that is that if you do plugins first, then your themes become really hard, because you've got to take into account all these different places and things that the plugins can do. If someone goes and installs a new plugin and it installs some JavaScript thing, then it's probably not going to work with the theme. So it makes theming really hard. Editors-first also makes theming hard, right? Although we've got to balance this: we've got to make sure that editors can have the amount of control they need for smaller sites, while still maintaining the right amount of control as an integrator or a themer. Okay. So if you didn't see John's talk, go and watch the video; it's another CMS, and it does a lot of this kind of through-the-web stuff really well, and has a whole export format that brings way more with it and is more integrated than the way Plone does it. So it's an alternative way to look at it. So the reason why, and I've sort of talked about this in the past: Diazo is great. One of the reasons I like it is that it's a scalpel; you can just go and override the little bit you want. I hate this idea that you take a large chunk of internal Plone code when you just want to change one little bit, right? And that's what JBOT is: it's a massive hammer, and that incurs technical debt, basically. You're now responsible for working out, if Plone goes and changes some internal API, well, you've dragged all of that into your application and you're responsible for it now. You still get some of that with Diazo (people can change IDs and stuff), but there's less debt there. Diazo is a great get-out-of-jail-free card. If you've got the HTML there and you want to change something, you can fiddle with it. So if someone does a plugin that's almost right and you just need it to fit a bit better, you can move things around and do what you want. I don't like writing XML any more than the next person, but the concept is still a good concept. So here's a bunch of examples. Here's one that we did: Font Awesome icons. A lot of what we do is text styles: we use tile styles, we use character styles, we use various different things, linked up to Diazo rules. And here what we're doing is, we've got this quick-links box on the left, and rather than just make this one particular box, which is hard-coded, it's like, well, why don't we allow Font Awesome anywhere in the site? So we now have a character style: you just tell people to go and have a look at the Font Awesome site, pick the icon name, they select the text, they pick "turn Font Awesome name into icon", and it does the rest. And that's a little bit of Diazo, right? It's pretty straightforward. This is just a concept that you can use for almost anything you want to do, right?
So this is kind of like a cheap version of shortcodes. It would be nice if we had better ways to do this. So, carousels: another interesting one. We've done it a bunch of different ways in the past. There are things like collective.carousel. We kind of got burnt by using some of the plugins: they often come with their own JavaScript, and we're taking designs from designers who've already got their own JavaScript and everything, so we don't want the worry of the plugin's JavaScript as well. They often have schemas; this particular one is kind of interesting. You can't see it here, but they wanted to be able to mix images and HTML on the right-hand side, so we've got to have a slightly more interesting schema, and it doesn't fit most of the plugins. The way a lot of people do it is they use a folder with a bunch of images. Again, it's like, well, you can only have an image, a title, a description; where do you put the link? You run out of fields, right? You're basically reusing content, and I hate the idea that you've got content in the site that is not really content; it's basically part of one page, but it's sitting around, and if you do a Plone search you might come across those images and go, what's this? Plus the editing experience: you've got to tell people, go to this folder to change this thing which appears on another page. That kind of sucks. Multiple tiles, multiple portlets: we've definitely done the multiple-portlets thing, where each portlet represents a different pane in the carousel. We've used things like Portlet Page in the past; we didn't use that for this. Now, the theme fragment tile: you can actually create a tile using theme fragments. How many people know you can do that? How many people use theme fragments? It's super cool, right? You can create your own theme fragment tile, and you can create its configuration, and that all sits in your theme. There's no need to deploy any Python whatsoever to have your own tile type. That means you can create a custom interface for a particular user. Unfortunately, it didn't really work for us in this case, because at the moment you can't really use data grids, and if you want a carousel you're going to have multiple rows; you've got unlimited rows, pretty much, or rather unlimited panes. And relation choice doesn't work particularly well either, which kind of sucks. So both of those things need to be fixed. So here's a simple trick that you can use for almost anything: use an HTML table or a bulleted list. That's a data structure that's really simple to explain to a customer, and you can apply some Diazo in front of it and do things with it. We've done this in a whole bunch of places. So here's an example: here's our carousel definition, which is just a rich text tile. I forget how we did it; I think we use a tile style. You've got to apply our tile style and say this is going to be a slider. Then Diazo knows, and goes and takes this. I'm not going to show you what the Diazo looks like, because it's not pretty, but this works, right? You can put images over here, you can put HTML over here, you can put extra columns, et cetera, if you want to. So this was a really interesting one, right?
So customizing search is definitely a little more tricky, because, again, we're not deploying any Python packages here, right? What they wanted here, when they came to us, and this is an interesting one because they assumed that Plone would automatically work this way and it doesn't, which kind of sucks, was default descriptions. In Plone you have to type in your descriptions yourself, and they wanted to automate the descriptions to take the first paragraph. Okay, so how do we do that? Yeah, it's not a great idea (you get things like this), but it's also not a bad idea either. So what do we do? We use Diazo first to redirect the search to go to faceted navigation. Nowadays we're using collective.collectionfilter instead of faceted navigation, because, again, faceted navigation comes with a whole bunch of JavaScript that just gets in the way with dependencies and stuff, not that we use any of it in the front end, but still. Then what we do is we add our own metadata column called basic search summary. Not a lot of people know that you can have metadata or indexes in the catalog and then just create a Python script with the same name, and through the magic of acquisition you suddenly have custom indexes with custom Python code. We're going to get rid of acquisition, so we need to find a better way of doing this; it kind of sucks doing it this way, but, again, it's a nice get-out-of-jail-free card, right? We then use Plone's internal transformation chain to transform PDFs and things like that into HTML, and then just crop it, roughly along the lines of the sketch below. The actual code was a little more complicated because we filtered out various crap, but that's the basic idea, and it worked quite nicely. Except, of course, and this is the great caveat with all of this stuff, restricted Python is cool except that no one actually tests for it anymore, so almost half the APIs... all the old APIs used to work, and a lot of the new APIs don't, because no one's put the right security declarations in to make them available within restricted Python. Now, we use listing views. I'd like to replace listing views with something else, but we don't have a good solution at the moment; maybe you could do it with theme fragments. But basically, you want a custom listing. Listing views work with things like faceted navigation and collection filter, and can basically create views either for content or for listings, folder listings or collections and such. It'll create first-class things, so you can go to the display menu and say, I want a folder summary; you can create a folder summary with this. And Diazo. This is actually kind of interesting: it allows you to write TAL expressions to calculate certain things within the listing, but we didn't even have to do that, because it just picks out the metadata columns and you can just pick them. So it was already done for us. There's a bit of Diazo to make it look nice. If we go back, you can see that we were pulling in the size, we were pulling in an icon; these things don't come with normal Plone search, unfortunately, and the modified date doesn't come out, I can't remember exactly. But you can pull in lots of extra data and have a customised listing.
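To make that catalog trick concrete, here is a rough sketch of what such a Script (Python) could look like. The column name basic_search_summary is the one from the talk, but everything else (the fallback to a "text" field, the 300-character crop) is invented for illustration; this is not the actual production script.

```python
## Script (Python) "basic_search_summary"
## (placed e.g. in the portal root so every object can acquire it)
##
# The catalog has a metadata column with the same name.  When an object is
# indexed, the catalog looks the attribute up, finds this script through
# acquisition, calls it, and stores the result, so every catalog brain gets
# an automatic summary without touching the content types.

summary = context.Description()
if summary:
    return summary

# No hand-written description: fall back to the body text, converted to
# plain text with portal_transforms, and keep only the first paragraph.
text = getattr(context, 'text', None)
raw = getattr(text, 'raw', '') if text is not None else ''
if not raw:
    return ''
stream = context.portal_transforms.convertTo(
    'text/plain', raw, mimetype='text/html')
plain = stream.getData().strip()
return plain.split('\n')[0][:300]
```

A filesystem view would be cleaner, but the whole point of the trick is that this lives in the database and can be changed through the web without a deployment.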
Online help. This is something we did; I don't know why I didn't think about doing it before, right? We've got all these different custom things that are outside of the Plone manual. So what you can do is just write a little bit of HTML, stick it in your theme, put this Diazo in there, and suddenly you've got online help that goes with it and is deployed with your theme. It's kind of nice. You can see my comment in there: it would have been super cool if I could just point the URL that I'm using at the actual .rst file and get Plone to turn it into HTML on the fly. As it is, you've got to compile it before you deploy the theme. Feature tiles is one where we actually do use the theme fragment tile thing. Again, you can't use relation choice; why can't I just pick an image, right? With all this stuff, that's because relation choice doesn't work yet, it doesn't serialise or something, so we've got to fix that. So this is showing how you do a theme fragment tile: you basically have to copy and paste the XML schema, but you can just go to the Dexterity editor, create a bunch of configuration, copy it across, and it goes into a folder called fragments. Okay. This is even more complicated stuff. We do quite a few things to add customisations using content rules; they're pretty cool. So this is a case where we wanted people to have to fill out a form, but only fill it out once, and after that they have a cookie and everything. So we used a plugin we made, a custom login plugin, which basically allows us to customise the login experience on a per-content basis using content rules. We get it to redirect to the form, then we use EasyForm and a script, and then we use the token role plugin (also really awesome) to create a token role that actually allows people to read the thing, and it does all the cookie handling for us. This is one we're doing at the moment, which is super interesting. We've prototyped this: we're going to change TTW Dexterity, because things like autoform hints don't work with TTW Dexterity; you don't have a way of changing the widget, or hiding and removing things dynamically, and so on. We're going to add that to TTW Dexterity; we've tested it, and it worked quite well. We're also changing content rules, the sc.contentrules local-role one. Essentially the plan here is to have a workflow where you can do things like pick who should be the next reviewer. So you can say, okay, these are the people who should be in the review step in the DC Workflow, and you can do that in the form. The way we're going to do that is we're going to modify this to pick out a particular value from a field and make that the local role, and then we're going to mail them, and then we're going to modify collective.cron as well, so content rules can do things on content based on dates within the content. It takes a bit to get your head around. So here I was going to demonstrate; how much time have I got? Five minutes. Okay, so basically all that stuff is for when you're dealing with content, right? Now, it comes to a point where you're not dealing with content anymore. You're not doing a workflow that relates to content; you're doing workflow, or an application kind of thing, where you want something more like a relational database, where you want more flexibility. You want an app, right?
So that's where you might turn to, say, Django or Pyramid and have that running side by side or something like that. And that's where we use Plomino, and what we built on top of it. Plomino: this is kind of a skin we put on it for the product. But just imagine that you can just go and add a "service" within Plone and then you can start creating your app. So it's a placeful thing: you can have it at a certain place on the site; it's not going in the control panel, it's created right here. And then we can say, okay, which one do we want, and it loads up the forms. Again, an application. Some of the nice things you can do with Plomino are things like hide-whens, which basically allow you to hide and show things dynamically based on what you select. I just don't think this demo has that in it... it doesn't... oh yeah, there we go. So you can create a document; it's basically a demo, and you can see whatever you want about the specific object that you have in there, or you can add content to it, without having to resort to JavaScript. And this kind of works for us, because we have requirements that things have to work without JavaScript. So if we want to design this, one of the things we've built on top of Plomino is this big IDE, which lets us do things like, for example, a workflow editor. So it opens with a workflow editor. The workflow editor is kind of interesting in that it puts a layer of documentation and flow on top of Plomino. It's not changing Plomino, it's not forcing anything onto it; Plomino is quite flexible with regard to workflow, it's not like DC Workflow where things have to go through certain states. You're basically linking things together with actions. So this documents it and allows you to say, okay, here's how it's supposed to work, here's how things go together, and you can link actions and so on. This is the code builder. The code builder gets used in a bunch of places, but this shows you the code builder that runs on submission. If we have a look at the application form itself, we've got a visual editor; everything's quite visual. We can drag and drop these things around, we can edit field properties. And where you add behaviours and things, which in Plomino means writing code, here we can use the code builder: okay, we want a default value. That's a really brain-dead example, but anyway. So we've added the default value, and that goes and generates the necessary code. I think I didn't save it. Maybe I'll have a look at this one; this is probably more interesting, right? So this one has a hide-when: if the council, if this other value here, equals "pools", and we can pick these other fields and stuff and do different kinds of tests, then we're going to show it; otherwise we're going to hide it. And if we go and have a look at the code for that, you can see it's generated all the code for us (it's just plain Python, roughly like the sketch below). Before, in Plomino, you had to write the code, which was simple code, but here you don't have to write the code at all. The really cool thing about this macro system is that you can build your own macros really easily. So if we go back to... there's a close button on the IDE somewhere that I've lost. All right.
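For reference, the kind of formula the code builder generates behind the scenes is just a few lines of Python. This is a made-up example (the field name and values are invented), not the code from the demo:

```python
# Plomino hide-when formula, of the kind the code builder generates.
# Return True to hide the field, False to show it.
# Here: show the "pool details" section only when the editor picked
# "pools" in a hypothetical facility_type field.
facility = plominoDocument.getItem('facility_type', '')
if facility == 'pools':
    return False   # show the section
return True        # hide it
```

The point of the builder is that an integrator never has to type even this much; the macro produces it, and the generated code stays visible and editable if you ever need to tweak it by hand.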
So if we go back to the start here, all those macros are actually just another Plomino database with forms, so you define the configuration the same way using the visual designer, and you can add as many macros as you want; you can have validation macros, condition macros, all the things that are in the palette on the left-hand side. So we can have a CSV download macro, right? We've done things for XML builders, we've connected up to e-commerce sites, and all of these things can be written through the web. So it's a really extensible system. If we go to the designer, you can see all those different things, and all the things that appear in the palette section here come out of our templates here, right? So the radio button up here is just this thing, and we can customise that per site and say, you know, this template is always going to have this particular text on it, et cetera, et cetera. So that's that. I gave a similar kind of talk, what was it, five years ago in Brazil, and I talked about the idea that this kind of way of coding, okay, it's not best practice or anything, but it's super accessible, right? The very first time I got involved, we were playing this game called Falling, which is now open source; you can download it and print the cards. It's a real-time card game, which, if you've ever played a real-time card game, is kind of cool. Basically the concept of the game is that everyone's jumped off a building, and the last one to hit the ground wins. And the very first web application I built at that point was like, ah, this Zope transaction model kind of works the same way as the way the cards get passed around, this real-time thing. So I built a real-time card game with it, and about a year or two ago I rebuilt it again. I didn't get time to get it working again for this, which is a shame, because it's just been sitting there; I've been having some problems with OS X updates and Python. So, yeah, I wouldn't use Rapido anymore, it's kind of not maintained; I'd probably redo that in theme fragments. If you want something that's a little bit of an app, that's just sitting as part of your page or something, use theme fragments for that. Here's another example. We have this thing called collective trusted imports. Trusted imports: the idea is that everything external to Plone that you need whitelisted, you go and add it here. So if you need functions, and we need things like encryption, and we use regular expressions and stuff, then you can just do "import re", you know, in your restricted Python. Go and add it into trusted imports, and then we start to get things that have been vetted and so on. But this is just showing: I had to do this thing the other day where I just wanted to do a quick CSV export of all the members, and these are all the different APIs that exist in Plone to get member email addresses. There's a lot of them, and almost none of them work in restricted Python. plone.api doesn't work. So it kind of sucks. I got there in the end, with something along the lines of the sketch below; I think a lot of it was to do with CSRF protection, but, yeah.
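For illustration, a member-email CSV export that stays within restricted Python might look roughly like this. It leans on the old portal_membership tool rather than plone.api, and the script name and details are invented rather than being the exact script from the talk:

```python
## Script (Python) "export_member_emails"
# Rough sketch of a CSV export of member email addresses that works from
# restricted code: portal_membership is one of the older APIs that is still
# reachable from a Script (Python).

from Products.CMFCore.utils import getToolByName

request = container.REQUEST
mtool = getToolByName(container, 'portal_membership')

rows = ['member_id,email']
for member_id in mtool.listMemberIds():
    member = mtool.getMemberById(member_id)
    if member is None:
        continue
    email = member.getProperty('email', '') or ''
    rows.append('%s,%s' % (member_id, email))

request.RESPONSE.setHeader('Content-Type', 'text/csv')
request.RESPONSE.setHeader(
    'Content-Disposition', 'attachment; filename=members.csv')
return '\n'.join(rows)
```

On a big site listing every member this way can be slow, and depending on the Plone version some of these calls may still need to be whitelisted, which is exactly the kind of thing the trusted-imports tool is there for.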
That's it. [APPLAUSE] Any questions? Covered a lot of ground. Yeah, that was a lot, sorry. I don't have a question, more a statement. With the data grid, yes: the data grid field in the theme editor, it wasn't working, and I found a solution for that. It's a bit of a hack using Python through the web; I can pass that code on to you. The proper thing to do is to find a way to encode it so that it works; it does the encoding, it basically does the encoding through the web. I should thank Oshane and Altaru; some of the stuff I showed you, they did for us. It's been collaborative on some of those things, and I think Oshane talked about it in his talk as well. One of the really cool things is that, if we actually have a look at this particular theme here: we had it initially, and one of the things you want to do with the layout is use Mosaic to move things around, and they made it so that we could move things around. If we go back to the site here, you look at this front page, which is some of the stuff I was showing, and click on Edit. If you move those tiles around, they don't just move them around; they do things like take the Mosaic grid and turn it into a Bootstrap 4 grid, because the theme that we were given was a Bootstrap 4 grid. Again, that's the kind of stuff that Diazo makes easy. I think it would be interesting to mention: with how many staff do you do all of this? Two. We're hiring, by the way, in Sydney. Maybe in Bangkok. Hopefully again in London. Any other questions? Thank you Dylan. Thank you.
Real examples of building complex websites without having to teach buildout, packaging, ZCML, ZCA and other hard things. Will touch on how we do multi-tenanted Plone using Docker and Rancher.
10.5446/54860 (DOI)
So this is a talk about Elm, as the title says. And no, this is the moment where it breaks; I've rehearsed it five times. Okay, so maybe that's where we'll do it. Okay, cool. So what do they say about it? When I say "they", it's mainly the creator. So as you can read: "A delightful language for reliable web apps. Generate JavaScript with great performance and no runtime exceptions." And that's maybe a very important part. I think the wording is pretty delightful as well, and this is a recurring theme in the Elm community: using nice language towards each other. Why did I look at Elm? Actually, it started in the Plone community. I heard people like Rok, who had been using Nix for a long time, speaking of functional again and again; then I saw this Elm functional stuff pop up somewhere on my radar, and I was like, okay, that must be interesting, but functional... ooh, that's so frightening, I won't look at it. And then later, I think it was him again at that time, he mentioned that he was using it, and I thought, okay, this is now a sign that I should look deeper. Then there's some relationship with the Zope community as well. The main sponsor today of Elm is a company called NoRedInk, which has the same type of relationship to the language as Zope Corporation, Digital Creations, had with Python a long time ago. They host the Elm creator, and they pay him full-time, or half-time, or I don't know what, to develop the language. And I thought, hmm, that's another hint; that's again interesting. Then I watched some more talks and got more interested, and found out that they also have a BDFL, where the B part is the most important. Evan Czaplicki (sorry, his name is hard to pronounce) insists a lot on empathy, and what he means by this is empathy at all levels: empathy for the users, empathy for the developers, empathy towards the creator of the language as well. So the B part is very important. And in another talk it was mentioned that we have with Elm the same Python paradox that was mentioned by Paul Graham a long time ago, if you remember: a famous blog post where he explained that Python developers were actually interesting people because they had looked outside the box, and that it was good advice to just go to Python, 15 years ago, to get people who look outside the box. And they were mentioning the same thing: since they had published the fact that they were using Elm, they had so many people applying for jobs, because there are a lot of people who would like to use Elm in a job but cannot. And so all those things lined up and I thought, okay, this time I really have to look at it. My name is Godefroid; you can find me on GitHub and on Twitter. Goals of the talk: I will try to explain things that are not that obvious, I'll do my best. I will explain the awesome things of Elm, but there are pitfalls; there are no silver bullets, as we all know. Most of us have gray hair, so we don't need to be convinced anymore. And you'll get some references for later. I will try to avoid getting into very detailed syntax, and to avoid speaking of functional programming concepts like monads and currying and whatever; that's not the goal at all, and actually it's supposedly not needed in Elm. The goal is to avoid all that jargon. Very little code; I'll show some, at least, so that you can have a vague idea of the syntax, but not much more. So, what is Elm? And then let's see some code.
Explain the Elm architecture quite deeply, because that's the interesting part in my opinion; then go quite quickly over the awesome things, how I use Elm at work, and caveats. Let's go. So what is Elm? A functional language, as I already mentioned. Very important: it's statically typed, which means it's nothing like what we're used to from Python or JavaScript; the compiler will have your back. And it has type inference; in other words, we don't have to annotate types everywhere. Obviously it's recommended for documentation, but there are a lot of places where you can avoid it. I click too quickly. All data is immutable. That's very important. The longer I've been in IT, the more wary I've become of mutable state. I think buildout was the thing that taught me first: being able to get rid of a Plone installation, check out the buildout again, run everything and start from scratch, in other words without any state, was so good. And from that I understood more and more why we don't want data to change under our feet. And stop me if I'm speaking too quickly. All functions are pure: no side effects. So this is really important: whatever inputs you have, the outputs are guaranteed to always be the same. We all know that if we have a method that's calling a web service, we might have an issue there; the data that comes back might not be the same. That's what you might call a side effect, a web side effect. And in Elm, we don't have null, undefined or exceptions, which can be pretty hard to manage in JavaScript, as we all know. Let's compare JavaScript and Elm, because Elm is a functional language that compiles to JavaScript, which is something very important that I forgot to mention. So if we are in JavaScript, we have I don't know how many tools to use: npm for packages, Webpack to assemble the application, React for the nice unidirectional flow of data, Redux to keep your state, and then TypeScript, Flow, whatever; I don't want to know, actually. And in Elm? Everything is built in, so one tool, you've got it all. And by the way, if it's not clear: if you use JavaScript, you will need to take care of the dependencies of all those tools and hope that they work together properly. Just for the people who are still a bit skeptical, I guess some of you know that Redux actually took some of its ideas from Elm, from its model of unidirectional flow, the view/update/model. A short side note. When we speak of functional, and I said we want it statically typed, we want stuff that's unbreakable. And what do I mean by unbreakable? This is something we all know; we have seen it I don't know how many times, and this is obviously broken. This is questionable, but at least it's not broken. The business analyst and the developer should speak to avoid this type of mistake, but at least it's not broken. And it could actually be something that's desired, if you're describing, I don't know, a painting or whatever. Functional style: JavaScript and Python pride themselves on having some functional constructs, and sure, you could use them. Like we used to say, I can do object-oriented style in C. And yes, you can; we all know what happened. First, you have compromises, and second, under load and under stress, the developer immediately drops the good practices, and so we lose the discipline and the functional programming practices in JS.
When you have a functional language, the compiler won't let you do anything else, which is quite nice. Okay, let's go. I will show you a very, very basic plus/minus counter. So first, Elm is modular, like we have in Python, like we really like. And as you can see, it can also declare what it exposes, and it can import from other modules. And then here are the four things, let's say, that we need to describe when we write an Elm application. In this case it's a counter, so let's use an integer as my model, and the initial value will be zero. Then, what can I do with this application? I can increment or decrement, and see how nice this sort of typing is, because this will be checked: those Increment and Decrement are actually types, and they are validated by the compiler. Then, when I get one of those messages, what should I do? This is very, very complex. And finally, the view, which will use the model to display, to turn it into HTML, and where you can declare which messages will be sent. And apart from the fact that you need to get used to the syntax, I think that people who have done HTML should be quite at ease with this syntax. Finally, I'm initializing the program, a beginner program in this case, no side effects at all. And there's a mistake here in the slide: model should be equal to initModel, not model. So, model = initModel, view = view, update = update. And that's all I need to give to Elm to make it work. We have defined a model, we have defined messages, and we wrote two functions, update and view. Let's hope that this switch will work. And... imagine, this is crazy. Thank you for your applause. So, again, the stuff that's not showing up here. As I said, an initial state is passed to the program; the view, which is how to turn the state into HTML, into a UI; and update, which is the function that knows how to react, as I said, to messages. View takes a model in; it's a function, a pure function. It spits out HTML, actually a description of HTML that the Elm runtime will manage. Always new HTML: because data is immutable, we are not fiddling with an existing model; this is HTML that is pure new values. And that's it. So that's quite easy to understand, in my opinion. Update: we get the message, because this is the hint that will come from the runtime, and we get the model. Based on the message and the model, we get a new model. Once again, the old data are immutable, so this is a copy; it's not the same model that we are fiddling with. Let's try to show this with a bit more detail. So, let's say that I clicked in the browser; the browser emits an event. That's taken care of by the Elm runtime. And it will, well, not "send" exactly, it will call the update function with the message that was fired, and the model. Update will return the new model. A new model, I insist. And then the runtime passes the model to the view function, the function that we defined. The view function can then define what the HTML will look like, and which messages should be wired in, let's say. And then again, the Elm runtime will turn that into DOM. So we never touch the DOM ourselves. And obviously, they have a virtual DOM, like React, like Angular, and so on. And maybe something less obvious is that because the language is totally controlled, the virtual DOM can be much more optimized: they control the data in and the data out, which means they can really choose the best constructs in JS and make it very low-memory and very performant.
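Since the Elm code on the slides does not survive into this transcript, here is the same plus/minus counter idea written out in Python rather than Elm, purely to make the model/update/view loop concrete. Nothing here is Elm API; the "runtime" function is a toy stand-in for what Elm's runtime and virtual DOM actually do.

```python
# The Elm architecture, sketched in Python: an immutable model, a set of
# messages, a pure update function and a pure view function.

INIT_MODEL = 0                       # Model = Int, initial value 0

# Msg = Increment | Decrement (plain strings here, real types in Elm)
INCREMENT, DECREMENT = 'Increment', 'Decrement'

def update(msg, model):
    """Pure: given a message and the current model, return a NEW model."""
    if msg == INCREMENT:
        return model + 1
    if msg == DECREMENT:
        return model - 1
    return model                     # in Elm, a missed case would not compile

def view(model):
    """Pure: turn the model into a description of the UI (a string here)."""
    return '[ - ]  %d  [ + ]' % model

def runtime(events):
    """Toy stand-in for the Elm runtime: feed events through update/view."""
    model = INIT_MODEL
    print(view(model))
    for msg in events:
        model = update(msg, model)   # a new model; the old one is never mutated
        print(view(model))

runtime([INCREMENT, INCREMENT, DECREMENT])
```

The shape is the whole point: events become messages, messages and the old model go into update, the new model goes into view, and nothing else is allowed to happen.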
Any questions until here? Oops. So, that's all very fine: I showed something with no effects, but obviously we want to speak to APIs, we want to store data, we want to send messages through sockets and so on. So this is what we've seen until now, except there's something new. When we are not in a beginner program, the one I showed just before so that you can play and get used to the basics of the Elm architecture, we get an update function that returns not only a model, but also a command. And a command is actually a declaration of a side effect that you want the Elm runtime to apply for you. So you don't call the server yourself. And if that makes you smile, it's actually like when you ask the OS to write a file: you don't open the file descriptor yourself and so on. No, no, no; you just send some data and you hope that the OS will do it properly. In some cases, it doesn't happen. It's the same here. So you describe what you want to do, and the loop goes on. Later, the runtime will execute the command and take care of it. And this is totally asynchronous; you have no control over that aspect, which is actually quite important in a UI, because in a lot of cases we have UIs that are asynchronous because we want a quick reaction while complex stuff is happening in the back end. In the background, I mean. And so once the command has had an effect, messages will be produced, and they come back into the runtime. Some time later there's a new event: the server has finally answered your query. So now you will have an error or a success, but anyway, that comes back later, and this is again asynchronous. So you get a new message, one that you have defined yourself, from the command. And we are back to what we've seen until now: okay, there is a new message, the message is passed by the runtime to the update function, and we can go on. So with this, you have seen all you need to know about the Elm architecture. There are no more concepts. Obviously, you will want to learn, you will need to learn, all the nice functional constructs. In some cases we are lost because we don't know how to do a loop, or proper conditionals, and stuff like that. But at the architecture level, this is all you need to know to write an Elm application. So once again, because repetition is so nice: the update function in that case, which is the one you will use because you will want side effects, obviously. The update function gets messages from the runtime, gets the model, the current state of the application. You make your own computation, you create the new model, and, if needed, new commands that you pass to the runtime. Any question until now?
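And to make the command idea concrete in the same Python-flavoured sketch as before: again, nothing here is Elm API, and the message names and the fake server answer are invented for illustration.

```python
# Extending the toy counter sketch with "commands": update now returns the
# new model plus a description of a side effect for the runtime to perform.

FETCH_COUNT, GOT_COUNT = 'FetchCount', 'GotCount'

def update(msg, model):
    """Pure: return (new_model, command); command is None or a description."""
    if msg[0] == FETCH_COUNT:
        # We never call the server ourselves; we hand the runtime a command.
        return model, ('http_get', '/@@counter')
    if msg[0] == GOT_COUNT:
        return msg[1], None            # new model built from the answer
    return model, None

def perform(cmd):
    """Toy runtime side: run the effect and produce a new message.
    In Elm this happens asynchronously inside the runtime; here we just fake
    the server answer so the sketch runs on its own."""
    _, url = cmd
    fake_answer = 42                   # pretend this came back from `url`
    return (GOT_COUNT, fake_answer)

model = 0
model, cmd = update((FETCH_COUNT,), model)
if cmd is not None:
    msg = perform(cmd)                 # later, the runtime feeds this back...
    model, _ = update(msg, model)      # ...into update, and the loop goes on
print(model)                           # 42
```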
How the compiler can actually help you once it has more knowledge about the code than just interpreting it. And dynamic typing like we have in Python or JS. So... Oh yeah. I do this. That's better. So this is the proper code. So let's imagine I had a typo. So this is the type of hertz that you will get. So this is obviously a basic one. But if you... What I like is how precise it is, how much information you get. The hints are pretty cool. Actually, no. Oh, if I did not fix the other one, that won't work. So this is one of the really nice stuff about Elm. It's the fact that when you branch over values, it will never allow you to miss one of them. So I've had to reset message inside my message type. Now the branching does not take care of it anymore. It won't compile. It will absolutely insist that I decide what should happen if... So I could decide to do nothing, obviously, but at least I have to decide to do nothing. Right? It's still not a silver bullet because if I do something... Oops, what's going on here? Oh, thank you. This compiles even though it does nothing. I added a new message. I did not change the HTML, so thank you. There's a new message. It won't ever be emitted because no events can actually produce this. I would need to wire it inside the HTML for it to be emitted somewhere. That's the role of the compiler guy. He is the body programming. Thank you. What did I do again? Sure, sure, sure. What is he expecting? Text risk. So, who can guess with me? Thank you. Okay. So this sort of workflow we just saw is very interesting. It's the fact, okay, now I have a new feature. I add one or two more messages. And then I will change the render compiler again and again and again until it works. Because it will find all the places where I should have taken care of that missing type, missing branch. Let's see this way. Oh, there's one thing I forgot to show. So this is a basic code. This is some code of the only code I have in production. And let's introduce a mistake there as well. That's not easy. That's not quite. So here you get into less pleasant error messages. It's still quite precise and you still get a lot of information. But that might be a bit more frightening. That's what I call worse scenarios. Once your application gets a bit more complex, you will have a lot of issues with the typing, as usual. Which is why we always don't cast in Java. This way we don't need to fiddle with type anymore. Right? So it's the same type of problem we get here. What I found out eventually is that quite often by reading the documentation, once you have the syntax in brain, you can go quite quickly to understanding what's wrong with your types. And so this takes some more habits. So that's why I call it a worse scenario. Excuse me. I'm going to add a browser. Yep. So you have the... Yep. It's all JavaScript. So you don't see the HTML. Yep. So if you want to fine tune the last bits of pixels, that will need some time. There's, as I said, no silver bullets. Sure. Yep. Did you... I still have some time to... So... So what's nice, what else do we have? That's quite nice. Because Elm knows about types and knows a lot about the APIs that you describe in your modules, you won't be allowed to choose yourself the package version when you publish to the package repository. And it will actually... Oops. One more slide. This is not a show-up. So the major, minor version will be changed by itself. So if you do not touch the API, it will generate itself the version numbers. 
So in other words, when you look at the version number of a package, if it's only a minor change, you know that, semantics aside, you can actually update to the latest minor version, and this is enforced by the compiler itself and by the Elm package manager. Sorry for the missing slide. And another one. Oh my God. This one is not missing. So how do you do it? You just... and you saw me do it, and here's the answer to your question: you saw me running elm make. And you get JS files that you can either make work by themselves or include into an existing application. So you can put an Elm application inside React, inside any Plone page template or whatever. It does not need to control the whole page; it will control the element that you ask it to control. You can also, if you want, because as I said you might have this need to make React and Elm live together, use some helpers to make it work with Webpack. And I've heard of crazy companies that actually had Elm around React around Elm. And it seems it's really manageable. I did not do it myself, but the Elm community claims that it's better to migrate your application incrementally rather than start from scratch, because there's too much to learn. Because... thank you. And so, as I said, you can use the Elm Webpack loader if you need to integrate your Elm files into your application; I did not say it, it was written there. How do I use Elm at work? Well, I wish I used it more, to be honest. But this is really the important answer: gradually. It's quite a change, because a lot of our reflexes have to go away. Like, we always speak of components and objects; this is gone. The basic piece of reuse is the function, and you have much less that you can hide in a function. You can still use types, and you are actually encouraged to, but that's, for instance, one step that you have to make to learn Elm. And so the idea of saying, okay, I'll start from scratch, is not a good idea. I found that out when I tried to do a "PloneElm", like plone-react: there's too much to learn. I got to some point, but I think it's actually much better to do a small part of an application, to get used to the architecture. And the code I showed you in the worse scenario has actually been in production for almost one and a half years now, and I've never heard about it since. It's something very small, a widget somewhere in a Plone site, and it works and does the job, and I haven't heard about it. When the new version of Elm came out, migrating it was quite quick and easy, because rather than adding hundreds of features, they get rid of features, trying to make it simpler and simpler. Those are the two slides I knew were missing, so let's skip them, because I don't know why they show on my machine and not on the screen, and I had no time to dig into what's happening. Caveats. Elm is not a general-purpose language, so you will use it to do UI and nothing else, for now. A lot of people would like to do something else with it, but it's really optimized for doing UI; a lot of the patterns are built with that idea. And actually, does a general-purpose language even exist? When we want performance in Python, what do we do? We go to C. Or we go to Rust if we are very fancy. But do we keep Python? Maybe not. So just keep in mind that this is one more tool in your toolbox. No fiddling with the DOM: if you start to fiddle with the DOM yourself, you are fighting with the Elm runtime that does it for you, and you'll have issues.
So if you need to fiddle with the DOM somewhere, you should rather do it in JavaScript, and then use the proper way to interact between JavaScript and Elm. Because obviously they have a story there, which I won't explain because it would take us too far, but it's a story that's really nice in the sense that it guarantees that as long as you are in the Elm runtime, in the Elm world, all promises are fulfilled. So there's no null or undefined that can come in and enter from JavaScript into Elm; they do what's needed to prevent that from happening. And the widget I mentioned in Plone is actually a widget that interacts with JS code. So that's not that hard, and it works. I just said this, so I don't need to repeat it. That's the worst part, by the way: when you go back and suddenly you find out that, oh, shit, I forgot to test for None or for null or for undefined or whatever. It gets on your nerves, because it's so nice to have the compiler do it for you; I've said that I don't know how many times, and it's worth repeating again. So there are things that you still cannot do, because it's still a young language and a young community. If you need some very exotic web API, you might need to go through JavaScript, because the Elm runtime does not wrap them all yet to make it easy. So that's one reason. A broken runtime: very, very rare. NoRedInk had their first runtime error ever something like three or four months ago, and it was due to a Chrome extension that was fiddling with the DOM behind their back; so it was not even their code, right? And they have, I think, more than 150,000 lines in production or something. It's quite big. And those mistakes, what I mean by this is, okay, I forgot a branch. And you avoid smothering your code with tests, because once you use this type of strongly typed language (and I could not go into this) you can really design a lot of your application with the types, actually, which means you need far fewer unit tests, because the compiler does the unit testing for you. As you saw, refactoring is awesome. Actually, during that project I mentioned, the small widget I have for Plone, I had made one of those mistakes, a semantic mistake. And I personally experienced the fact that yes, just changing my model and following the compiler until the end, the thing was working again and the semantic mistake was gone. Just following the compiler errors until there were none was good enough. I did not need to change anything else; I just changed one of the types and followed it through to the end. And this is really awesome, really, really awesome. And it makes it clear that if you are a team, it's much easier for someone to dive into this type of code. I will go quickly over these, because the slides will be published. If you listen to some of those talks, what you'll find out is that those people think very, very, very deeply. It's not impulsive stuff like I usually do; it's stuff that's thought through really deeply, and I think it's really, really interesting. Which also means that the sort of impatience I have, when I want a new release now, won't be satisfied, because they want to think deeply before they make a new release. So that's another caveat: in some cases, the fixes are slow to come. And I really want to thank Mario Rogic. He made those slides for another presentation; I tuned them only a little, and he was nice enough to allow me to use them for this talk. That's it. Thank you.
So say I would want to rewrite an existing component like a pattern slip or a mockup component into Elm. Would that be a natural fit or would it be troublesome? If you want to redo a component, you'll be in trouble. You should rather decide to replace that part of your application that uses that component. And hopefully you should start with a simple one, right? And totally get rid of it. The reasoning of Evan is to say that usually you start with something that seems similar and pretty soon your two versions of the supposedly same component are diverging for UI reasons. And so they say just use functions. Just do views. And if you have slight difference, do two subviews. Let's say this way. Do two helper functions that will show the two variations. And don't try to add a myriad of options like we do usually. And then we get to two to the power of n cases that we should test for our component. And that's obviously not tested. Actually, I can't. Other can. I can't. Does that answer your question? It's so true that you start to gain annoy by other languages when you start working with Elm. And I'm more and more annoyed with Python and I'm trying to load annoyance with mypy. But I don't know what I should use on the back end. What do you use on the back end? On the back end, I'm still doing Python until now because I thought I would not be capable of all that stuff. That was interesting to see that I was actually. What was a really strange experience last month is that I found a bug in the M compiler, not in the compiler itself, in one of the part of the tools, had never seen a Haskell and had less than three hours managed to send a pull request with a fix to that. Because the syntax is quite similar. Reading through the docs of Haskell and so on. So I think my next step will be digging into Haskell further because I was so overwhelmed. I had never seen a single line of those three hours included installing the compiler, understanding the slight difference and all that stuff, three hours. Okay, I'm an old monkey but I was still surprised. I was still really surprised. So I think that if we get rid of all the jargon, there's a lot that could be done in Haskell. That will be my next step. But I've not done. I mean, I just fixed a few lines. Yeah, Matt. When you set the messages, you had increment and decrement and then you just added reset in. Is that just how you define messages or is there implicit declaration happening behind the scenes? That was actually a type. And those are all types, right? And actually what I did not show and that's missing in this presentation is that you can also include some values into those messages. So it could be a reset with a value actually. I could put an input field there with the value and the reset could actually hold the value. Does that answer your question? Yeah, that's what I wanted to know. Okay. Can you show? Oh, sorry. I'm going to ask a question. Sorry. Can you? No, I can't show. Dylan needs it. Okay. But I can show it to you later. Yeah, it's working. That's usual. Can I detect? It does not detect. So I would let Dylan... While you're setting that up, are there... I love seeing eye candy. So is there a full UI example available to... Just go to Novrating and you see the type of application they use. I did not explain what they do. They teach English to classes. So it's an application used by teachers for their students. Actually, I'm quite young students. And so I would not use it to build a website part today. I think it would need some time. 
But their application for teaching languages has a very fine UI, so just use that application to get an idea — anyway, it would be fun to look at it. Yeah, that's... it's purely Elm; they don't have much that animates there. Okay. And then secondly: is it a good fit for use as the front end to a back end such as Plone, or any other kind of thing that can provide an API? I think so, I really think so. The experiments I did speaking to the Plone REST API worked well. As I said, the issue was that I was trying to make too big an application as the first step; getting the data in and out was fine and the thing did work. And that's actually how they use it, so it's meant for this. One of the caveats that's not mentioned in the talk is the fact that you have to write JSON decoders, because JSON comes with all that dynamicity that Elm does not want. So you have to write them, and that can be tedious at the beginning, but the community has come up with a tool that does most of it — how do you call it? — you give it JSON and it will produce some Elm for you, and you can fiddle with that instead of writing it all yourself. Yes, deriving it from the database data type. Exactly. Okay.
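As a rough, minimal sketch of the two things touched on in this Q&A — a message variant that carries a value, and a hand-written JSON decoder — the Elm code might look like the following. The type and field names are purely illustrative and are not taken from the NoRedInk or Plone widget code.

```elm
module Example exposing (..)

import Json.Decode as Decode exposing (Decoder)


-- A message variant can carry a payload, e.g. a reset with a value.
type Msg
    = Increment
    | Decrement
    | Reset Int


-- JSON from a backend has to be decoded explicitly into typed data.
type alias Person =
    { name : String
    , age : Int
    }


personDecoder : Decoder Person
personDecoder =
    Decode.map2 Person
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)


-- Decode.decodeString personDecoder """{"name": "Ada", "age": 36}"""
-- evaluates to: Ok { name = "Ada", age = 36 }
```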
The talk will present the Elm language and ecosystem. Elm is a strongly typed, immutable, functional language that compiles to JavaScript. It competes with modern JS frameworks for building client-side UIs.
10.5446/54867 (DOI)
Thank you all for coming. This session is where the Plone community and the Japanese JavaScript community meet. It is not a silent session — please feel free to reach out. First, let me introduce the organizers of this meetup. Hello, I am Yoichiro; today I am giving an introduction to React Native; I am a CTO. Nice to meet you. Hello, I am Kazuhiro Hara, a full-stack web developer; I have helped with Plone front-end work, together with Manabu Terada's company. Nine years ago I was already working with CSS, JS and CMSes for some universities and public institutions. And now the speakers, please come up as well. I am Eric Bréhault, a Plone contributor; I work with Python, I am a front-end developer, and I have used Angular 6. Next is Timo: I am from kitconcept; kitconcept has been doing large projects for five years — Angular 1, JS, Angular 2, and front-end development. Thank you. I am Rob, a front-end developer; for about twenty years I have worked with Plone and JavaScript, and now I am building with React. Yes, I am Victor; I now work as a front-end developer, I have built large projects, and I am one of the Plone front-end people. And now let me introduce the front-end people and speakers from the Japanese side. My name is Tadashi Koiwahara; the slides are up. Next, Arai-san, please. I am Masataka Arai, a community organizer; I normally use Angular and Django. Thank you. Hello, I am Rina, an IoT engineer; I have talked about a muscle-training interface — IoT muscle training. Thank you. I am Tadashi Koiwahara, a freelance front-end developer; I would like to talk about creative work — I make creative motion and video builds. I am looking forward to it. Thank you. I work in virtual reality; I am Egashira, please call me Hiro. Today's session is about immersive WebXR; my main role now is XR engineer, and my main tool is Unity. Nice to meet you. Thank you. First, one question — please, everyone, answer: what is the most popular JavaScript library for Plone development? Among the frameworks, which ones are people building Plone front ends with — Angular 7, or other front-end frameworks? Who here is doing React? Who is using React? The next question: how important is a dynamic UI to you?
Q & A session.
10.5446/54870 (DOI)
Well, welcome to my talk. Thanks for coming. Yeah, we're going to start. All the photos that you will see except one were taken by me, some during my previous stay in Japan eight years ago on my honeymoon, and others during this one as well. So, yeah, after this the presentation will really start. You all know that we are no longer Plone-React: we are Volto. And this is a preliminary icon; you might already have seen it in some presentations made by the almighty Albert Casado. You have probably been in the other talks that we already gave; don't miss the next ones, especially Nilesh's. He will explain one of the main pieces of tooling that we have right now in the Volto ecosystem, which is create-volto-app, and which we will show in a minute. So, the idea is that Plone loves React, right? But not only React — we also love, I'm sorry, Angular, and Vue, and whatever framework comes out in the next ten minutes. And it's not about Plone 5. This is the only image that is not mine, right? But who cares? I mean, the idea is that we have a strong community, lovely people there, and we are not going to fight. Everybody is doing great things, and it's not going to stop with Volto. So we really have to continue this effort on front-end development in all the other possible ways. So, Plone loves the modern front end — all of it. But Volto especially loves React, right? And this is another version of the logo that Albert provided us, the colorful one — I love this one especially. Yeah. And let's talk also about Pastanaga UI. Pastanaga UI was another concept that was developed by Albert. The first implementation that we had was done at the last Plone conference, in fact, and was in Angular. Later on, in November last year, we started to implement it in Plone-React itself, in the proof of concept that had already been done by Rob. I will not show everything about Pastanaga UI because you already know all about it, but we started to implement it on top of Semantic UI. Semantic UI is yet another CSS framework, right? And we chose it for very specific reasons that I will go over in the next slides. First of all, because there is already an implementation of Semantic UI on top of React, which is called Semantic UI React. You have the URLs here for both things. So we have, on one hand, Semantic UI itself, which is kind of a Bootstrap cousin, we could say, where all the JavaScript is done with jQuery, à la Bootstrap. But we have a good React implementation of everything that Semantic UI brings you, and you can find it at react.semantic-ui.com. We are using that for implementing Pastanaga on Volto. One of the main things that Semantic UI endorses is something that Frederick Brooks — who was, and still is, because he's still alive, a computer engineer and architect from IBM; he designed a lot of things and wrote a lot of books, one of them being The Design of Design — calls progressive truthfulness. I have my cheat sheet here. The idea of progressive truthfulness is that the best way to build models of physical objects is to start with a model that is fully detailed but only resembles the final thing. It's fully detailed already, and then what you do is take incremental steps towards what you want in the end.
But you start with this thing that is fully detailed and has everything that you will need in the final version, yet only resembles it — it's kind of there, but it's not fully there. And that's important for Semantic UI, because it's its whole foundation. We'll see later how Semantic UI endorses that. Why Semantic UI React? Semantic UI, mainly because of the component architecture, which maps directly onto React, I would say — and not only React, of course, but other major frameworks too. So it is divided into a lot of small pieces, and each piece is a component. You have a lot of flexibility in customizing those components, and you can also extend them. And the most important feature there is theming, and we will see why in a while. It supports theming very, very well — I've never seen something like it in the industry, like the theming Semantic UI has. And we'll see it in a while. So, Semantic UI is built on Less. Yeah, not Sass — sorry, you have to deal with it. Yeah, I love Sass too. I mean, I was also somewhat disappointed when I learned that. But the thing is, the building blocks that make the theming possible just cannot be done with Sass at this moment, and we'll see exactly what it is that Sass lacks. And as far as I know, there's no way that Sass is going to do that in the short term, so we won't have a version of Semantic UI in Sass anytime soon. But it's fair enough — both things are close, we already have Less in Plone, and it's not that bad. Of course Sass is quicker because of libsass and everything, but it's not a big deal, not the end of the world. So we have components in Semantic UI. We have a full site CSS that is applied first. Things are applied first from the blue section, then the other ones from left to right, right? They are applied in a CSS cascade manner. We have the site CSS and then the reset — or the other way around, I don't remember now, but you get the idea. Then the elements and the components: all of these are available already in Semantic UI and are applied in this particular order. It is divided into elements, collections, modules and views, and from that you already get a good idea of what Semantic UI is composed of. Those are building blocks that we can already use in Volto. There are Semantic UI components you can reuse and extend, and you also have the Volto ones, so you can build your own site with all of them. And we, in fact, built Volto on them as well. So, okay, yeah, fine — but show me, right? I mean, let's see some code. So we have the Volto app, which — I don't want to spoil things, but this is something you get by launching create-volto-app. Please go to Nilesh's talk and he will teach you about that as well. Then you only have to specify your app name here — maybe it's not that simple, and I don't want to show too many console thingies — but you only have to provide your app name, and it will create something like this, with the generic structure that we will need for our app. So we have actions, components, constants, customizations, helpers, reducers, where you can put all of your custom elements. Then, what interests us in this talk is the theme directory. We will come back soon to this theme.config, which is very important in the Semantic UI configuration and is one of the entry points. But the main thing I want to show here is how Semantic UI is composed.
So this is for you to see these blocks that make up Semantic, which you can see here. For example the views: you have the card view, the comment, the feed. Okay, we'll come to that later. So, after this: what is the component anatomy? The component anatomy consists of three parts. For each component you have the Less variables, the basic style definition, and the React component itself. We have these three things available, and we should replicate them if we create new components as well. And we have a main entry point for all this. We have to tell webpack — if you have any knowledge of that — where our Less files are, and this is the entry point that Volto uses to look into that and start loading things. It is located in @plone/volto, in the client startup — you can see it in Volto's client.js. Not here... here. Here you can see the entry point. Okay. This instructs webpack to load this Less file and then pull in all the theming engine and everything else. And it looks a bit like this, right? It's standard Less import definitions, and it loads the small pieces that you already saw in the previous slide, one by one, in the cascade they are supposed to be in. And that's about it. You can add more here at some point, but we can't yet — don't worry about that, it's fair enough for now. Then we have another artifact, which is the theme.config. Here in this theme.config we find a mapping of Less variables, because within this flexibility that I told you about you can, in fact, theme each of the components with a different theme. Here, all the components are based on Pastanaga, but you could say: okay, I love the Pastanaga way of doing, I don't know, pop-up messages — but I like better how another theme does alerts. Semantic UI has several themes already, and maybe we like the alerts from the GitHub theme. So you could go here and say: no, my alerts, my message component, will be like the GitHub theme one. And you only have to touch this, really — it's very flexible. Another thing the theme.config has is where the theme lives. For now, by default, you can find the themes inside Volto, so it points to the Volto themes folder where the Pastanaga theme lives. We will also see it now; in fact, I can show you. Anyway, in Volto we have the theme folder as well, and inside the themes folder we have the Pastanaga theme, which as you can see is also divided into the same folders that represent the types, the categorization of the components. For example in globals you see the site and the reset, which you can override — we will see how to do that later as well. Another thing is the site folder, which we will also see in a moment. A lot of things are recursive, right? I can't show you things without doing recursive things. We will see what that means, don't worry. The point is that the theme.config is the place where you configure how Semantic will behave for you. But basically everything here is fine by default; you don't have to come here right away and start changing things. You should just know that this is the point where you customize things and do things differently from the default. You can also configure where your fonts path is and things like that. Well, let's continue.
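To make the theme.config idea more concrete, here is a small sketch of what such a mapping can look like. The component variable names follow Semantic UI's usual theme.config layout, but the theme names and folder values shown here are illustrative, not copied from Volto's actual configuration.

```less
/*
  theme.config (excerpt, illustrative) — every Semantic UI component is
  mapped to the theme it should take its variables and overrides from,
  so themes can be mixed per component.
*/

/* Elements */
@button  : 'pastanaga';
@header  : 'pastanaga';

/* Collections */
@message : 'github';     /* e.g. borrow only the GitHub-style messages */

/* Globals */
@site    : 'pastanaga';

/* Where packaged themes and the local site theme live (values assumed) */
@themesFolder : 'themes';
@siteFolder   : '../../theme';
```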
The theming engine, which is the important part, works a bit like this. We have three parts that are loaded each time we go through the entry point I showed you; Semantic will pull in these three artifacts. The first one is the defaults. Semantic UI has the building blocks — if you remember the progressive truthfulness, we need the full definition of our building blocks, as detailed as possible, but one that doesn't yet have all the styling; they are the building blocks for where we are going, for our final goal. So we have these definitions. These definitions live here: if we go to the Semantic UI Less source, there is a folder called definitions, and in it, for example, the site.less. Here we have all the default styling for the site that doesn't really belong to any of the components, but is defined: things like default tags, the body, the headers, and so on. This is the body definition by default. And what else? Yeah, we have a default theme as well. That theme lives here, in themes, in the default theme; it holds the overrides that apply on top of the default definitions, and everything in this theme is empty. So if you want to make a new theme, you have to start from this default theme, because it's the theme that is empty. When we started with the Pastanaga theme, which we will come back to later, we started from this default theme and built upon the default Semantic UI definitions. So those are the defaults: we have the definitions with their Less files, and we have the default theme. You can find them either on GitHub or in the code; they are important when you are building your theme or the custom CSS for your site, because you need the source — the defaults — to know what you have to override, and it's handy to have them at hand. Then we have the packaged themes. We made a theme out of Pastanaga, and what you have to do, as you have already seen, is use the other artifacts, which are called the variables files and the overrides files. Those files override the default ones in a CSS-cascade way. For example, here we have Pastanaga, and we have the site variables for Pastanaga that override the default ones. For example, this is the variable that sets the font name, which is pulled directly from Google; we have the header font, and more things like the text font size, the default font size, and so on; the primary color from the branding, the secondary color. We have the default text color as well somewhere — which is here — the default text color for the whole site; we can set it up from scratch. And this lives in site.variables: the main, general variables that we are going to use throughout our site, and we can set them here to define how our theme is going to be. Okay, the same for the overrides files. For example, this is the overrides file for Pastanaga. This file overrides the default ones, as in CSS: it comes later, so you can override previously defined directives — I mean, CSS declarations. So here, for example the body: in Pastanaga we have a flex layout on the body, so we override it here, and so on. So, more information about variables: we have more than 3,000 theme variables, okay?
Yeah, which is a lot, I know, and it's kind of overwhelming. But I can also see that for someone who is not that much into CSS, it's probably a good entry point: someone who doesn't know how to write proper CSS can still theme by setting variables. So it's also a downside — I mean, there are a lot of good things, but I really have to search for the right variable, and it's kind of a compromise, I don't know. I'm lazy — I can't help myself, I am lazy — and instead of searching for which variable controls what, I'm guilty of simply overriding the CSS, whatever. But the idea in the Pastanaga theme is not to do that and to use the proper variables, so that people without much experience can come here and theme Pastanaga properly using only variables. And that can be done in Semantic UI. It's kind of like the old base_properties, but I don't think base_properties back in the day was such a bad idea, whatever. So yeah, there are more than 3,000 of them. And the overrides, as I said, are Less files; they just have a different suffix — I mean, extension — but they are overrides, they are Less files, and you have to set that up correctly in your IDE, otherwise it won't treat them as Less. So, the Pastanaga UI theme: we made a Semantic UI theme out of Pastanaga, and you can find it here. It's still not complete; help is very welcome during the sprint to finish it and polish the rough edges of the Pastanaga theme. And finally we have the site theme, which I think is the best thing we ever had. So: we have the defaults, okay; we have a theme already, and Pastanaga is a theme; but if we want to make a project, we don't need to build another theme. We build what is called a local site theme. On top of everything else, you have yet another layer that overrides everything you set up before. So this is the full cascade, and it goes in this direction, and the rightmost one is the one that wins, either using variables or using overrides. And I think it's great, and it's a great idea that we only have to take advantage of. So, okay, extending Pastanaga UI the Semantic UI way: it can be done. We'll have to drop in a file that we can call extras — and I'm not saying that this is where you should put your customizations; no, you should put your customizations in this site theme, which I have to show you. So we have the Volto app here, with the theme.config, where we don't have to touch anything. And we are going to create a globals folder, the same globals folder that we have in the Pastanaga theme. This is the Pastanaga theme, and as you can see, we have to comply with the categorization of the Semantic UI themes. So if we want to customize the site variables or the site styling, we have to make a globals directory, and inside that directory we have to have a site.variables and a site.overrides, for example. In the site.variables we have changed the font name, and we set the color to red. And in the site.overrides we are setting this small piece of code: we are saying font size 54, why not, and a 1rem margin. So let's see how it looks. So, I don't know — yeah, it's already started. As we introduced new files, we have to restart the Volto process, because the watcher is not that smart. But once we finish... Okay, we have our Volto site here, and here it is.
We have our — what was it called — our new font, and it was set to 54. And we can watch it live, because we have the nice reloader, and we can set it to 84. Okay, we should have the... yeah, of course: we should have the developer tools open, otherwise it doesn't happen. No, no, it's an H2, in fact. H1? Oh yeah, because this is the main site; here it's not, because it's the Plone site. It's something we have to correct — yeah, the non-Plone-site ones are H2, because it should be H2. Okay. And that's it. And this is where your local site theme should live: in the theme directory of your Volto app you start to put your Semantic UI customizations. So, quickly: extending Pastanaga UI. What I mean by this is, for example: we've seen on the interwebs a nice drop-down component that needs the react-select component, and you say, oh, that's awesome, I want that in my Volto site. It has its own CSS, hopefully also built on Less, and it has the component, so you only have to use that in your Volto app, which is fairly easy. But then you have to deal with the CSS, right? So you can create this extras.less and put it in src/client, à la Semantic UI. You do the import of the select there, and then in this select file you only have to maintain these headers: you set this header, setting the type, which is 'extra', and setting an element, which is called 'select'. Then there is this import (multiple), which is one of the Less features that Sass unfortunately doesn't have, and which makes the whole theming work. Then you place your third-party Less or CSS files here, and then call this mixin, the loadUIOverrides thingy. And that's it: you have your react-select Less integrated into your theming engine, which is kind of cool. I have this duplicated for some strange reason, whatever. Same for the variables: let's see whether react-select has variables — and in fact it has. Then you only have to put the variables coming from your third-party component here, and that's it; the Semantic UI theming is going to take care of it. So, getting to this point, I also want to talk about the dev workflow that you get with Volto. We tried a way of doing things that we like the most, and it helps the client see things right away by building a prototype in the middle. And this is something that with React you can do very quickly, in fact. So you have mockups from the agency — the design agency or the designer that does the styling, sorry, the look and feel for you. Then you make a prototype that doesn't work, but that resembles and matches the mockup, without doing any wiring at all. It is something you can even show to the client with, I don't know, fake data or something. Then you get the approval from the client, and then you wire it back up with Volto and make it work with the real thing, with Plone on the back end. And it's something that really worked for us. It's very handy, and the client can see results right away, very quickly. I really recommend you do that, because React allows you to do it — I mean, it's that easy. So, a little word on tooling, quickly. Use Prettier — it doesn't matter which IDE you are using, use Prettier — which has a nice plugin called prettier-stylelint. And also use stylelint. Stylelint is the next big thing in linting CSS and Less.
And it has two plugins that I like the most. The first one just loads the standard rules for linting whatever styling preprocessor you are using — here Less, of course, but if you have other projects you can work with those as well. And also stylelint-config-idiomatic-order. This is something that mdo from Bootstrap started to enforce in Bootstrap itself: okay, I don't want just a bunch of declarations in my Less classes or IDs in my styling; I want them ordered in a particular way. So first goes the layout, then go the fonts, then... I don't remember now, really, because this plugin does it for you. So when you save, all the declarations are ordered in the pre-established way, which is kind of cool, and you always have the same thing in the same order in your CSS files. And you only have to do this to install them, and in package.json you have to enable the idiomatic order and the prettier-stylelint like this. And that's it, you have it. I don't think... yeah, I think that's it, more or less. So, final thoughts. I was really wary at first, coming from Bootstrap and other frameworks — yeah, just another framework, right? But in the end I really enjoyed working with it. It's very enjoyable and the developer experience is not that bad. However, it has some drawbacks, which I had in my notes — and since I can't see my notes because of the projector, let me get them. Yeah: Semantic is not perfect. I mean, the idea of the theming is unbeatable, I have to say, but Semantic itself is not perfect. It's far from being semantic at some points, so you get some what-the-fucks along the way. At some points it also has too much specificity, because of the variables. So yeah, you have a variable for the description of the cards, of the card component, and that variable says: yeah, this is your font size. What does it mean? It means that you have to override that every time you have a card and you want to change that font size. Yeah. Well, thanks to modern IDEs it's also fairly easy to overcome. Then, again, the theming engine is brilliant. And when you work with Volto you only have to take care of this: you don't have Diazo, you don't have templating, you don't have the "I have to override this in a jbot, but I can't because it's a navigation portlet and I have to write Python for that" — and a lot of things like that you no longer have to think about; you only have to deal with React and Semantic UI, right? So you are freed from a whole list of dependencies and technologies that you otherwise have to keep going back to, which is also fine. So, questions? I just want to understand, right: you get an agency theme, you do that prototype thing, but then to do the final thing, you're basically throwing that away and using all these styles to...? No, no — well, the question is how we do the prototyping. It's React, right? You write your React components in Volto, but you don't have to wire them. What does wiring them mean? It's wiring them to Plone with actions and reducers so you get the actual data from Plone and make it appear in your prototype.
You just don't do that yet, because it takes some time; you only do the component and the styling with fake data that you will later remove when you do the proper wiring. So you don't have to throw anything away and everything stays in the same place: you have your components folder, you have your styling folder, and everything is fine. The only thing is that it's not wired to Plone. Well, for a client that's fair enough, and then you wire it to Plone with actions and reducers and everything. You don't have to throw anything away; everything is there, only it's static. And that's it. Last question — last question, because we are running out of time. One of the nice things that we've had in Plone for a long time, long before living style guides or anything, was a page called test_rendering where you could go and just see how everything looks. Does Semantic UI or Volto have something like that? No, but there's something else in the JavaScript world, which is called Styleguidist, and Storybook — both are fine, and their purpose is precisely that: you have a gallery of components, and along with the gallery you get the description of them, so you have the reference guide on one side and you can build things around that. It's something we already looked into, but we still have to find the time to do it, especially for the Volto components. The Semantic components themselves are fairly well documented on the Semantic UI React website, which is cool, and you even have live examples — you can modify them and play with them. Definitely we have to do that. So thank you so much, Victor, and it's time to change.
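As a concrete sketch of the local site theme customization shown in the demo: the file locations follow the talk, but the font and the variable values below are purely illustrative (Semantic UI's real site.variables do define names such as @fontName, @fontSize, @primaryColor and @textColor).

```less
/* theme/globals/site.variables — wins over Pastanaga and the defaults */
@fontName     : 'Rubik';      /* any Google font; pulled in by the build */
@fontSize     : 16px;
@primaryColor : #007eb1;
@textColor    : rgba(0, 0, 0, 0.87);
```

```less
/* theme/globals/site.overrides — plain Less, applied last in the cascade */
h2 {
  font-size: 54px;
  margin-bottom: 1rem;
}
```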
How theming is done in Plone-React using Semantic UI. Introduction and basic concepts of this CSS framework, and how to effectively build a theme on Pastanaga UI (or build a completely new one from scratch). How to add new React components to the theme and apply the correct patterns. Apply a proven development workflow to build a theme using Create React App.
10.5446/54872 (DOI)
Thank you for the introduction. And before I continue, I should thank everyone that supported me in coming here to PloneConf 2018. I was supposed to start out with "Konnichiwa" — I'm ashamed, but I already got the introduction. How can we sprint with Plone? That's basically what my presentation is going to be about — but before I tell you what we're doing, let me tell you about the state of Plone in Jamaica. Plone lacks presence, and there are barely ten Plone developers in Jamaica right now, so it's extremely scarce. Yet we're doing a lot of work, building a lot of websites. So, are we onboarding new developers — Plone developers, that is? Well, when we get a new developer, most likely a Python developer, they say there's too much to learn, too many things to do in Plone. For a developer who doesn't know the stack it's a lot, especially someone just coming out of university, fresh off the press. So, as a software development shop — Alteroo, which is the company I work for, David Bain's and my company — we realized that Python developers don't grow on trees, and Plone is not like H2O, where you can say it to a chemist and they get it. So there's a transition from being a Python back-end developer to a Plone back-end developer. And to start off new projects, it takes some technical effort to install, deploy and just develop Plone websites. And during this learning curve — of course, in any project, once there's a learning curve — the project loses momentum. So what are we doing about all this, how are we addressing it? Of course, we create roadmaps for developers, designers and even our clients, the site owners, to develop, deploy and maintain the site and also to get users engaged — it's all about digital customer experience when it comes to the clients. So we provide documentation and guidelines, we use collaborative tools for communication and for mockups — we're using Figma for mockups — and we do design sprints and brand sprints to get the customer the right type of logo, the right type of design, and the workflow for the client's customers to be engaged on the website, on social media and on other platforms. So we want developers to sprint, like Usain Bolt, by giving them a platform and workflow so they can go out and run as fast as they can when it comes to coding and developing effective and efficient software. So we developed roosite.launchkit, which is basically a CLI tool that helps you deploy a Plone site: it will create a GitHub repository and set up all of the necessary tools that we use for development. We also have predefined CI/CD — continuous integration, continuous delivery — configuration inside the launch kit, so that when we build out a new project we have Robot testing and continuous integration set up, and I will show you some of that later in the presentation. We also have a documentation kit which we use for documentation and user manuals, and it works with Read the Docs and Sphinx for auto-documentation of code. All right. We let designers be designers — focus on design — by giving them, as I mentioned earlier, Figma, which we use for mockups, and we build out UI components. These UI components we use as Mosaic styles and tiles, which are dropped directly inside the theme; everything Dylan will touch on later in his presentation. You probably noticed this is actually using the upgraded theme editor from my Google Summer of Code project here. This is Plone 5.1. This is an example of a predefined Diazo rule, XSLT.
What it's doing is basically transforming the Mosaic grid to a Bootstrap grid. This is something Mike was asking about in the community: can you get the Mosaic grid transformed to other grid systems? He had some problems with the transformation, in that when certain Mosaic styles are transformed, our classes would get lost along the way. I added a couple of new pieces of code to it which keep the basic layout and the basic classes, IDs and so forth through the Mosaic-to-Bootstrap transformation. This is an example of the transformation; this one is for the footer. What happens is that anywhere in the footer I put f-col-2, f-col-3, f-col-4, and so forth — you can just assign which column you want — and then the Diazo rule and XSLT will basically pick up the f-col and drop the footer portlet directly into, or stack it under, whichever column, as you can see there. It was first in the second column — no, it's in the first column; it's a GIF, so it started over. I touched on this earlier, so I actually jumped the gun: this is the Mosaic and Bootstrap XSLT code for the transformation. The tool that we built out, and which you have probably heard about before, is the Gloss project. It basically allows you to not write XSLT or Diazo: all you need to do is put some classes like gl-title inside your theme, and it will do the transformation for you. This project was done by David Bain, my employer and another Jamaican. As I mentioned earlier, we're using a lot of Mosaic; Mosaic is the way to go for most designers in Jamaica. Another thing I posted in the community is theme auto-deployment: deploying a theme directly from GitLab to whichever Plone website you want. This is basically the configuration file for the CI/CD inside GitLab; it can be modified for Jenkins or whichever CI/CD tool. I'm going to do a quick demo of it. The demo didn't go so well last time, so hopefully it's better — no, it will be better. This page basically lists all of the versions of the theme that were deployed to the Jamaican Developers website; currently a rollback version is there. If I go and click this button right here, it will launch a new build for the theme and push it directly to the website once the build has passed. Let me just show you how it will look from the theme manager section. In the theme manager you see the version number of the theme, and you can also find it if you do a page inspect, where you see the version attached to the resource files or static files that the theme is using. The reason it was done that way is to bust the cache, which fixes any caching issue the site might have. Another concern that people ask me about is: what if someone can change the configuration file? GitLab allows you to lock the configuration file so that only owners and maintainers can make a change. Environment variables like the password and so forth that plonetheme-upload uses to push to the site — all of those are encoded and used as environment variables which are only accessible by the maintainers of the project. Let's go back and check whether the build finished. It's still building. I'll just continue to talk while that's running.
Another thing we are implementing is faceted search and filtering of results with indexable JSON data; we're using DataGridField for the JSON data input. This is basically the code for it. We have an index that we added to Plone's indexing mechanism. We created a utility, which is basically the vocabulary for the index that we created — "profession" in this case. We added the profession to the catalog, so you can use the search bar to search by profession, for example "software developer". We also added it to the registry so it can be used by the Collection content type. You'll notice you have two operations, any and all: in the search filter you can use "any", which means any of these tags, or "all", meaning all of these tags must match for the result set. The whole point of doing all of this is the digital customer experience: we want to keep the customer engaged, and we integrate with social media and other tools. Some of the tools were actually built by a Google Summer of Code developer, Shiran. We are trying to let site owners — clients — do as little work as possible and focus on the things they need to focus on. We have jamaicandevelopers.com, which we use as a directory website. We are doing a lot of events, pushing a lot of Python programs and talking about Python applications and open source with all of these companies, well, at the events and with the companies we have there. I'm trying to run through this. In Jamaica, people use Java and Python, but their preferred language is Python, based on statistics I actually collected a day ago: I created a survey and 100 people answered. A lot of people don't know about Plone in Jamaica. Right now we are running 20 to 30 Plone sites for government and other institutions in Jamaica, and of course we are also doing international projects. We want more projects to come in so that we can publicize Plone more and push it to more developers, because the more international projects come in, the more work developers can get and the more things there are to learn. This is my ending slide. Before I end, let me go back to the CI, and you'll notice that it built successfully. On this page, if you notice, these tabs, the menu options, are blue, and this is just ugly and big. We go to the site. If I refresh the page, the menu items will turn green and the text will become a bit smaller. I'm going to check if it actually finished the deploy. I actually deployed the wrong one: it's version 1.12.9 I was supposed to deploy, which has the latest code. That's it. If questions and answers give me enough time to show the change, then I will show it. That's it for now. Thanks for sharing. The GitLab CI deployment — is it running one site per Docker image? It's basically totally separate, outside of Plone, and it just pushes once any change is done. I do an automated rollout for that using a DigitalOcean system. Is it multi-site? Are you putting all those sites on a single, the same, Zope instance, or is it separate? Right now it's just one site I'm running. For other sites, we've done projects with multiple sites on one single instance, which push multiple different themes to different sites. Okay, time's up. Thanks again to Oshane. The next talk will be given by Mike Derstappen. Mike is a long-standing member of the Plone community; he's also a member of the Plone Foundation and the CMS Garden, and he's going to talk about plonecli. Welcome, Mike. Okay. I want to give you a little bit more detailed overview of what you can do with plonecli — or, let's say, what plonecli can do for you.
First, what is plonecli? Basically, it's a user-friendly front end for mr.bob and bobtemplates.plone, or any other add-on that provides bobtemplates for you. The main advantage is that you have much shorter commands, and you have auto-completion, so you're not forced to remember how to use it: you can just start, and it will tell you. What we also gained in the last couple of years is some sort of modularity, so now it's possible to create a package and enhance it step by step. The image is not really fitting, right? Okay, it's a little bit too big. Basically, what you can do is create content types. It will ask you some questions — we added a couple of questions over the last years to give you more control over what's being created — and you have several options. What we basically did here is create two content types: one is a container for to-dos, and the second one is the to-do item itself. And after that we also create a view to represent the to-do items we have, or will create. As you can also see here, you get Git support included, so everything you do with plonecli is automatically checked in as commits. So you can undo steps you did: let's say you created a view and later you decide you don't need it; you can just revert the commit, and the different places — the code and the tests — are all gone. The only thing left is to customize a few places to your needs. For the view, for example, currently it's just registered for folders, as an example, so you can change this to the interface of your content type; that's a manual change. So I think there was one change left. What we also created in the last year is a small snippet plugin for Visual Studio Code. You can use this to create your schema, because plonecli will not create your schema — that's not the right place to do it. You want to do it in your editor, or you want to do it in Plone with the schema editor and export it. In this case it's the XML, but you can also do this in Python. The Python schema snippets are not complete yet — I'm still working on them, but you're free to help, to customize, and to put some more helpers in there. I would really like to see this also for Sublime and Atom or whatever editor you prefer. Yeah, so now we're starting the whole thing. This basically was a fast buildout — I skipped over this a little bit. Then we try the whole thing out. I think I sped up the last part a little too much. Okay, let's get to the facts. We have several, let's say, standalone templates. You can use plonecli create to create an add-on. This is the most used template, the one you will need; it's the base for all the other sub-templates we have. You can also use something called buildout, which basically creates a folder with a buildout configuration, meant for project buildouts or for testing purposes. There's one theme package template, which also creates a Plone add-on with an integrated theme. This is already deprecated, because we implemented a sub-template called theme_barceloneta; you can use that in the add-on you create with the add-on template, which gives you more flexibility. Here we have the commands you usually use: after you create your add-on package, you go into the package folder and then you can enhance it step by step. You can add as many behaviors as you want, content types one after the other, and then you can add some portlets. The first theme sub-template is a rather simple one.
It's meant for when you get a theme from your designer, or from ThemeForest or wherever: you just take it, put it in the theme folder, and then you start to integrate it with the Diazo settings. There are a lot of example Diazo rules, but deactivated. It's up to you to integrate your theme, but there's no Barceloneta stuff, which you will not use anyway when you have a completely different theme. The other way around: let's say you're okay with the default look of Plone and you just want to customize it — then you use the second one, which is theme_barceloneta. It basically ships Barceloneta and you can customize it. The other templates are view, viewlet, and vocabulary: you can create views, viewlets and vocabularies with plonecli as well. plonecli always tells you what is available — and this can change. You always have this overview, which also shows you that these templates are nested: they depend on the add-on package, or the add-on template, let's say. Let's look at it in a little more detail. How does it look when you use a template? For example, for the behavior, we just call it like this. The first thing is always a check with Git: we check whether the current state of your package is clean. If it's not, you can override that, but that's up to you — you shouldn't do it, because you're not safe otherwise. Commit your changes before you use plonecli; plonecli will support you in doing that and taking care of that. You should not accidentally start using the scaffolding and overwrite your stuff. Then, in this case, we just ask for the class name for the behavior — that can be anything you want — and you can give a description. Then we also run the Git commands to make a nice commit with a meaningful commit message, so that you can later see in the Git log what you have done. When you have done all this, you also get some useful information: in this case, for example, you get the names you can use, the identifiers, to actually use the behavior. You can put this in the XML file of your content type and you have the behavior there. The same applies to the vocabulary, for example: you don't have to think about how to do the lookup later. The content type template is by far the most complex, or most flexible, one. You have a bunch of questions. Some are simple and have been there for a long time, like the name of the content type and the description. You can decide whether you want to use the XML model: for example, if you want to use the schema editor in Plone to define what fields your content type will have, you can do that, then export the XML file, put it in your package, and you have your schema. You can also say no to that. In this case you will still have an XML file, which you could use in combination with loading it into your Python interface. For example, you use the schema editor in the Plone interface for writing the schema down when you sit together with your client, and you are still able to do your advanced stuff in Python: you just put model.load with the name of the XML file on the interface, and from there on you work in Python. Or you just ignore or delete the XML model file and define your whole schema in the Python interface directly, the good old way. We also have some other useful questions, like: will the content type be globally addable?
This is especially useful because when you have a container, like the to-dos container we added before, you want the item to be addable only inside this container and not on the whole site; otherwise you mess up your add menu, with 20 or more content types in some cases. It's also important for testing: we generate the tests so that the necessary container is created before we actually try to add the content type to it. We have some examples here. Here you see the parent content type — you are also asked for it — and the information will be used to update the FTI XML files of these types. And in the test we create the parent container first, and then just use the Plone API to create the to-do item inside that parent container. Here you have a list of other useful commands plonecli provides. You can get the list of templates, as you have seen; you can get the versions of plonecli itself and also of the bobtemplates. The most used commands are create and add. What you see depends on where you are: if you are outside of a package, you see create in the list; if you're inside a package, you will not see create, because it's not useful there — inside, you have the add command to add more features. Then you have some commands like build, serve and test. Build basically creates a clean virtual environment and runs the buildout commands for you, so that you have the multiple steps in one. plonecli usually also shows you which commands are being run when you use these. You can also stack them: you can say plonecli build test serve, and it will run the build, run all tests, and then just start the instance in the foreground so that you can actually use it. You can also have your own bobtemplates packages for plonecli and mr.bob — you can actually write your own packages. You don't have to put every template you want to use with plonecli into bobtemplates.plone; that would be a mess. You can have custom templates you use just for your projects, for your customers, for your own needs. Or you can add some community stuff — for example, the migration stuff we had for Transmogrifier; maybe it's a better idea to put that in a separate package. How this works is that we are using Python entry points. Basically, you just put this in your setup.py. This is the global name — we have plone_ as a prefix for all of them — then this is the name of the template, and this is the dotted-name path to the registry file. Inside there's a method with this name, and it looks like this: this is the registry object; it just has some configuration. We have the template name, and we have an alias for plonecli: in plonecli we just say create addon — it's plonecli, so of course it's a Plone add-on. This applies to all the other templates too. If you use mr.bob directly, you use the global name with the plone_ prefix, because we are not the only ones who have add-on templates. This is the add-on entry there: it's a standalone template, so it just has the path to the template — this is what you would put if you called mr.bob directly; you would name the template like this, but it's hard to remember — and this is the alias. When you have a sub-template, you have this additional thing, like "I depend on the add-on template"; this information we use to show the nesting, so that we can actually see which template is meant to live in which standalone template (a rough sketch of this mechanism follows below). Contribute. You can customize. None of the templates are perfect.
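As a rough illustration of the entry-point mechanism described above: the group name, package name and registry attributes below are illustrative, not copied from plonecli's or bobtemplates.plone's actual code, so check those packages for the real names before relying on this.

```python
# setup.py of a hypothetical community templates package (sketch)
from setuptools import setup, find_packages

setup(
    name="bobtemplates.migration",
    packages=find_packages(),
    entry_points={
        # assumed group name; plonecli defines its own entry-point group
        "plonecli.templates": [
            "plone_migration = bobtemplates.migration.bobregistry:plone_migration",
        ],
    },
)
```

```python
# bobtemplates/migration/bobregistry.py (sketch) -- describes the template
# to the CLI: where mr.bob finds it, the short alias, and what it depends on.
class RegEntry:                  # stand-in for the real registry object
    def __init__(self):
        self.template = ""
        self.plonecli_alias = ""
        self.depend_on = None


def plone_migration():
    reg = RegEntry()
    reg.template = "bobtemplates.migration:migration"  # mr.bob template path
    reg.plonecli_alias = "migration"                   # used as: plonecli add migration
    reg.depend_on = "plone_addon"                      # marks it as a sub-template
    return reg
```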
We live from the community, so please have a look at it. Discuss ideas and issues on the forums — what we can improve there to make everybody's lives easier and to establish best practices. Add missing templates, fix a small bug, or add just one question to a template if it makes sense. Please also build your own packages and publish them if they are of any use to the community. Just some ideas for the future: REST API support for the add-on template — in the future the REST API will be included in Plone, but for Plone 4 it's not, and for Plone 5 packages it's also not there so far; I just want to say "okay, my package should depend on plone.restapi". Then also some REST API templates to create services, serializers and deserializers. Some more settings for the content type template. A better option for the parent settings in content types: for example, if you have already created some content types and you want to give the parent name, you have to know it, but I would like to have a choice list or something. An option to set the interface for the view — we had to make those customizations manually; it could also be that I get a list of all available content types, and when I create a view I can already select which content type I want to use it for. And some more options for the interfaces. And last but not least, to make these selections of options possible, some sort of user interface which should work on all platforms. But this is just an idea; we will see. So if you want to sprint on it, do it — I'm here the whole weekend and I would be happy to help you. Any questions? One more thing, just a short update: bobtemplates.plone runs on Python 3; there is at least a branch with a pull request that you can review. plonecli still needs a couple of fixes in the subprocess handling, but that shouldn't be too hard. Yeah, that's what I want. There is a migration question: Chrissy did a pull request to add migration templates, like what she had in her training. Okay, but it's more basic. What we would like to do is this: we have a lot of old Archetypes content and we want to use this to create the new Dexterity types. And then my question is whether we can use the bobtemplates as they are now, or whether we have to tweak them before they will work with Python 3. You have to switch what? I mean, it works on Python 2, of course, everything. Yes, you can switch now and create your new stuff in Dexterity, no matter if you are still on Python 2. Yeah, but my question is whether the bobtemplates themselves will create code that is compliant with the Python 3 version of Plone. That should be the case; we will probably work on that. Yes. Okay, so the templates are fine. Yes, exactly — just the creation itself has more issues. Okay, great. There might still be room for improvements, but as this is the main tool to create add-ons, we will take care of that. Okay, thank you very much, Mike. And give some applause to Mike.
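For reference, the typical workflow Mike walks through — create an add-on, enhance it step by step, then build and run it — looks roughly like this on the command line. The commands follow the talk; prompt answers are omitted, and the exact template names available on your version can be listed with plonecli's list-templates option.

```bash
# create a new add-on package and step into it
plonecli create addon src/collective.todo
cd src/collective.todo

# enhance it step by step with sub-templates
plonecli add content_type        # e.g. the to-do container, then the item
plonecli add behavior
plonecli add view
plonecli add theme_barceloneta

# build, test and run -- commands can be stacked
plonecli build test serve
```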
In this talk, I am going to demonstrate a suite of tools and techniques that simplify the process of moving from an idea to a deployed product. The presentation showcases how we at Alteroo, the only development shop in Jamaica that uses Plone, use our current stack to get things done quickly. Parts of the stack that I am going to demonstrate include the following:
- Roosite.launchkit for starting new projects.
- Mosaic styles and tiles.
- Auto-deployment of Plone themes with rollback functionality, powered by GitLab CI/CD.
- Predefined Diazo and XSLT rules for content and theme manipulation.
- Faceted and filtered results using collective.filters.
- Indexable JSON values by turning DataGridField values into metadata.
The vision is to make it easier for future developers to start, build, update and deploy with as little backend change as possible, while keeping the ability to do so if needed. Plone CLI: the new way of creating Plone extensions. plonecli lets you create a Plone add-on and add features like content types, views, viewlets and portlets to it. It makes it easier and faster to get started and also gives you a good structure for code and tests.
10.5446/54819 (DOI)
Yeah, good morning, everyone. I would like to give you a small introduction into how to build a progressive web app based on the Aurelia JavaScript framework and the Plone REST API. First of all, this is all a prototype we built over about the last half year. We got some funding from the German government for this project, so this allowed us to spend some time there and build this. The whole project is about empowering small farmers, basically in Zambia and India. It's to help them, to give them the right information and help them improve their lives and make more out of it. The goals for the project: it has to be a mobile app, because what people use in Zambia and also in India is basically mobile. As I said, the target market where we want to start with the project was India and Zambia. Target platform: Android mainly, and also PC or whatever device you use where you have a browser. No iOS for now, because it's not important in that market from a market-share perspective. It might work, but some of the technologies will just not work yet on iOS. It also has to work offline, because especially in Africa the coverage is not that great. So you have to have the possibility to get some information when you are connected, like in a central place where they have Wi-Fi, and then take the information and read it later. About the technologies: we decided to go completely browser based. We also decided to go the way of building a progressive web app instead of creating an app for an app store or a native app. We realized this with the Aurelia JavaScript framework, and for the backend we just used mainly the Plone REST API. So why a progressive web app? First of all, it runs on many operating systems. Basically the only thing you need is a modern browser. It allows us to make fast changes and fast deployments. Especially in the beginning, this is really key, because you want to fix something, you want to add some features, and you want to roll it out and talk about it with the people. If you go through app stores, this takes days, and if you have a bug, fixing it also takes days, and stuff like that. It just slows us down. It's easy to build: you just need web technologies. So if you know JavaScript, HTML and CSS, it should be enough. And with the modern technologies we also have offline support, and if you need them, push notifications are all possible by now. Why the Plone REST API? I don't think I have to talk much, but yeah, it's a solid storage for the structured content we have. It fits really well for us there. We are also using the workflow and permission system. And most of all, the REST API is really nice, really flexible and powerful. Especially in the beginning, we also used just the normal backend to handle the content and add the content. This will also be integrated in the app, but especially in the beginning this saved us some time to go to the market. Why Aurelia? I don't know if you have heard about it, but you have a lot of JavaScript frameworks out there, for basically every taste. The reason why I like it is its really simple and clean, HTML-based templates. So it's in this way a little bit like Angular or Vue.js, for example, but you have to write less code than, for example, in Angular. You have some good conventions, so you don't have to register every template and every view component together; you just give them the same name so that they fit together. You can always override this, but usually you don't need to do that. In that respect Aurelia is similar to what Angular does.
It's a full-featured framework, so you have the full functionality, including routing, animations and so on. And for what it does, it has a relatively small footprint. And because of nice conventions, it usually stays just out of your way. You can have components which are just your code; there's no registration which hides the one or two lines of code in smaller components you might have. The community might not be the biggest if you compare it to Vue.js, for example, or React, but there are thousands of people using it, and the chat room and forums are full of people exchanging ideas and helping each other. And I really like that they have a kind of big core team, so it's not like they are hiding in a room and releasing something at some point. We have a lot of core team members there. It's a little bit, I would say, the same philosophy we have in the Plone community: it's open, everybody can contribute, but they also have some really smart minds who are laying out the roadmap and have a vision of where to go. Last but not least, they really do care about release management. That's not the case with every JavaScript framework, so that's also important. Yeah. Let me show. Maybe a little bit smaller, I don't know. Okay. Basically what we have built here: in farming, and this is international, you have the farming cycle. So you start at some point with planning, preparing, seeding, growing and so on, until you store or eventually sell your products or use your products yourself. So we picked this up, and then you can dive into this, depending on where you are. If you are in the growing phase, you have some needs specific to that. And then you can dive into different topics until you finally reach some content for this specific spot in the category tree. In the back, this is basically a nested folder structure, like a category tree. And eventually you will come to the articles. These are then pieces of information which might tell you how to, I don't know, heal your plants when they have some diseases or you have pest issues, or for the storage, for example, you can find information on how to store your plants, because one of the main issues they have is food waste, because they have nothing to keep it fresh. So in some areas they basically go at three o'clock in the morning to harvest the crop, and at six or seven it's on the market already. They have to sell it the same day for the price they get there; they have no other choice. And this is really an issue, because sometimes when they decide what they are going to plant, they look at the prices of the last year: tomato was really high, so everybody is growing tomato. You can imagine what happens when they all have the tomatoes on the market at the same time. So they are almost losing money. And we try to give information on many different levels there. So far this is a prototype. The next steps will be getting in touch with more NGOs. Some of our team members are going into the field again in a couple of weeks with the prototype and talking to NGOs, so that we can actually have a lot of content inside the platform. You also have related services. So depending on the categories you are in, you have services. By now it's basically a small introduction and a pointer to the website of the service. Sometimes it's an app, so you can have mobile apps where you take a picture of your plant and the mobile app tells you what disease the plant has.
So this could be the solution, or you can find storage providers which are specifically addressing the market in Africa or India or wherever you are. Later on we will work on integrating some of the services more into the platform, but for now this is it. A little bit from the technical point of view: all this information, when you click on it, is relatively fast. The reason for that is that we cache a lot. The plan is actually, and this is currently not perfectly working, that you can pre-fetch all the contents, or later on we want to say, okay, give me all the content in this category or something like that, or just the articles or just the services, so that you can go offline after that and it is just served. But it's also for speed, because what we are doing is we always serve from the cache first. So no matter how fast your internet connection is, if you had the content already, it's just there, and in the background it's automatically updated. We also have self-registration which will register users in the background. What we also have integrated is a community forum. We have a Discourse forum which will use the login from the app here, so at the end it's the Plone user which will be logged in in the forum. And the nice thing about Discourse is you can also access every single piece of the forum via its API, because Discourse itself is just a JavaScript app. So we will later deeply integrate the category tree with the topics inside the forum, so we have sort of the same structure and we can show ongoing discussions in specific categories. Okay, yes. A little bit about how to build a progressive web app. This looks kind of funny. Okay, first you have to use HTTPS, because of things like push notifications; that's just a requirement. What you should use anyway is a manifest file, because you have some options, some functionality here; you can basically use it for every website, even if you don't want to build a PWA. You want to use a service worker, which gives you the possibility to have these caching functionalities; you can even cache writes. So if you're not online, you can still save data, and as soon as you are online it will get to the backend. Optionally you can use Workbox; it's basically a small helper from Google which makes it even easier to set up a service worker. You can use a webpack plugin and two lines of JavaScript code and that's it. You just have to decide what kind of caching strategy you want to go with, and push notifications if you need them. The manifest file: you can define basically some metadata in a manifest file. You define a short name for your app, a normal, longer name, icons, the orientation, where you can say, okay, is it flexible or should it just stick to portrait or landscape mode, a background color for the splash screen, and even the title bar on Android; you can configure this stuff. When you have the manifest file, the browser will allow you to save the website as an icon on your phone, or now even on Windows already, and Linux and Mac are coming soon. So you basically can save the app and use it as an app. And for this, especially on the phone, you need something like the short name: if you have the title and it's a little bit longer, you don't want this text on the icons on your home screen. It looks a little bit like this. So you have a splash screen; I can show it a little bit bigger, but after a short loading period you end up in the app itself. This is an example of a manifest file.
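(A manifest along these lines is what such a slide typically shows; all names, icon paths and colors below are made up for illustration and are not taken from the actual project:)

```json
{
  "name": "gaipa - information for small-scale farmers",
  "short_name": "gaipa",
  "start_url": "/app",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#ffffff",
  "theme_color": "#2e7d32",
  "icons": [
    {"src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png"},
    {"src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png"}
  ]
}
```

The browser picks this file up via a link tag in the page and uses it for the install prompt, the home screen icon and the splash screen.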
So, yeah, you see you can define the short name. You can have multiple icons in different sizes. You also have a start URL; in this case it's just the app itself. It depends a little bit on whether you want to separate your app from the backend. And on the display, this is standalone, so it will look like an app: there's no browser bar, no location bar. You can allow that. There's also a minimal mode, but it's not supported by all of the browsers, so you can play with that. The service worker is the really, really powerful part here, because you gain, as I said, offline support and this caching functionality which makes the look and feel of your app really fast. And with the Workbox plugin it's that easy. I mean, some configuration depending on whether you use Webpack or some other stuff, but that's really not much. And this is basically saying, with a regular expression, everything which is coming from gaipa.org/app, which is the backend, gets cached, and the strategy we are using here is stale-while-revalidate. So it's: first serve me the content from the cache, so give it to me fast, and then go to the backend and update your cache. Yeah, and the last line is then initializing this. You can have multiple of these routes, so you can handle static images differently than the JavaScript or the backend calls and so on. I could even cache, for example, if I had integrated the forum by now, I could also say, okay, cache all the requests which are going to the Discourse forum, so even that could be usable offline. So, a few examples of how to work with Aurelia. As with many of the JavaScript frameworks, you have a CLI to get you started and give you some skeleton functionality. Basically you just install the Aurelia CLI, then you have au run --watch inside your project, and this will give you the normal browser watching and reloading functionality. You have the option, when you create a project, to decide if you want to use Webpack or some other bundler, if you want to go with normal JavaScript or if you want to use TypeScript, and different testing options. So you have to answer some questions, but you can also just stick with the defaults. So Aurelia allows you to stick with plain JavaScript or with TypeScript, whatever you like more. And also with the bundling and installing packages you have some options there. When you want to create a new component, you can also use the CLI. It's not doing that much, because there's not much to do for this: to create a component, you basically create the HTML file and the JavaScript file, like we are doing here with the simple top bar. So the JavaScript file looks like this. That's all. There's no import, nothing. So this is a basic component. Of course, you can have more functions defined there. For example, you have a bind function and an activate function. These are hooks where you can do stuff at a specific time in the whole lifecycle an Aurelia component has. So here we just defined some variables. And in the template, every template has these template tags, like in the web components specification. And then you use normal JavaScript string expressions there. So it's pretty standard and clean. Here's an example of how to iterate over a list of values. We defined the friends here, so we have two friends, and then you just repeat over it. If you like page templates and know how to use them, you might like this too; it's pretty much the same. Also, I like the cleanness of this. Yeah. This is a little bit bigger.
I should go here. Example. So here we actually also have some imports; it's not that Aurelia is doing anything without importing anything, but for simple stuff you don't need imports. What we are doing here: for accessing the content in the backend, the stuff we're getting from Plone, we're importing a service, and we're injecting the service with a decorator. It also goes here in the constructor, and then we put this in this variable. From there on, we can use the content API to get our content. Here we have the activate function. This gets called early when the component is being activated. This is the right place to extract some parameters and put them in variables so you can use them later, like here. Bind is when the binding is done; binding means binding the JavaScript definition with the front-end HTML template. So what we are doing here is basically defining a path where we want to get data from, and then we call the function getServiceData. And this is using our content API, just calling getService with the path. Service is not the best naming here, because service here is the service we saw in the app, so it's the service content, not the service as a piece of functionality. This is just using HTTP fetch against the backend. It's modern; you can also do it the slightly older way, but this is pretty much standard. The component for this, the HTML part of the component, would look like this. We basically create an article tag, we put in the title and the description, and service is the variable where everything is in. Most of this is pretty simple to understand. This is for when you have HTML content, so the sanitizer will strip out some of that stuff. And this is how binding works. What I really like about Aurelia is that you read it and you understand it. It's like, okay, there's an href and this is bound to this variable. There's nothing cryptic; in Angular, for example, it's some symbols and you have to know what they're doing. Here it's kind of clear. By default, you have either two-way or one-way binding, depending on where you use it. If you use it on an input field, it makes sense to have two-way binding. If you use it like here, it's one-way binding. You can always be specific, so you can say one-way, two-way, once. So we have ways to override the defaults there. Here we bind the title attribute of the tag. That's it. Here's a small example of how a service looks. By default, when you define this JavaScript class here, everything is a singleton, so creating services is really easy; you don't have to do much. What we are importing here is basically the inject function to inject the HTTP library from Aurelia, in this case the fetch client version. There's also an HTTP client, because fetch still has some limitations, but fetch is way easier to use and the preferred way to go. So what we are doing here is injecting the HTTP client. This comes in with this name, so we assign it to this.http. And then we define a method which we can later use, and this will just do a fetch, in this case of a simple JSON file; it could also be a URL. What you don't see here is that you have one small configuration part where you say, okay, my default backend URL is this, and you can make some settings to configure once, for the whole app, how you talk to the backend. You can have multiple backends if you want. But in this case, for every call, you just do it like this.
Yeah, and then you get a promise back and return it. Yeah, enough about Aurelia already. They have really good documentation and there are also some books out which are worth reading. Some examples for the Plone REST API. The good thing is that the Plone REST API in its current state is pretty much ready to do everything you want. There are not many situations where you say, okay, I cannot do this with the REST API; there are just a few places. You have access to all the content. You have access to most of the other pieces, like the control panels and similar areas, searching, switching workflows, whatever you might want in Plone. Most likely you can do it with the REST API directly without doing any customization. But it's also really flexible. Basically you have these three things you most likely want to touch: serializers, or the other way around, deserializers, or maybe you want to create some services or components; there's a little bit of a naming clash there. The serializer was the first thing I had to touch. Usually the API will give you the values of the different fields of your content in a useful way. But sometimes you have specific needs and you want to customize this. In my case, for example, I wasn't happy with what the REST API was giving me for a vocabulary field: it just gave me the token. But on the client side, I don't just need a token, I also need the label or the name to show it. Of course I could go to the backend, get the vocabulary, take the token and ask what the actual title for this token is. But that would need extra code on the client side and another request. So it would be nicer if my choice or multi-selection fields would just give me everything, not just one string. And it's pretty easy to do that. Most of the code you see is boilerplate, so it's usually almost the same. What I left out here is just that I'm getting the vocabulary with the token and then I extract the title and give it back. So the term in the end will not be just a string, but a little dictionary, a JavaScript object at the end, which has the title and the token, so that we have both and don't need to ask the backend for more. So you have this serializer here, and the only thing left is that you have to register this adapter. Services, on the other hand: we have things like the content. So if I have an article or a page or a news item in Plone, I will call a URL and say, okay, give me this news item, and I will get the data for the news item. This might look like this, in this case for the front page. But what I do not have here are things like the breadcrumbs or navigation or workflow actions. If I want to have those, I can use, and I cut it down a little bit so that it doesn't take too much space here, but for example with the breadcrumbs, you have a list of accessible components or services you can use. And to get the breadcrumbs for the current context, you would just call this URL and it would give you the breadcrumb data like this. But this would need another request and you have to write some extra stuff on the client side to do that. Sometimes that's what you want: maybe you have a lot of services and you want to fetch them only after you already show the content. But for simplicity, it usually makes sense to include this. So I want to have both together.
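(Before the expand feature comes up: to make the serializer customization just described a bit more concrete, a field serializer override in plone.restapi can look roughly like the sketch below. This is not the speaker's exact code, the names are illustrative, and you should check the plone.restapi documentation for the current interfaces; as mentioned above, the adapter still has to be registered in ZCML.)

```python
# Rough sketch of a field serializer override for choice fields,
# assuming plone.restapi's IFieldSerializer adapter pattern.
from plone.dexterity.interfaces import IDexterityContent
from plone.restapi.interfaces import IFieldSerializer
from zope.component import adapter
from zope.interface import implementer
from zope.publisher.interfaces.browser import IBrowserRequest
from zope.schema.interfaces import IChoice


@adapter(IChoice, IDexterityContent, IBrowserRequest)
@implementer(IFieldSerializer)
class ChoiceFieldSerializer(object):
    """Serialize a choice field as token *and* title, not just the token."""

    def __init__(self, field, context, request):
        self.field = field
        self.context = context
        self.request = request

    def __call__(self):
        field = self.field.bind(self.context)
        token = getattr(self.context, field.getName(), None)
        if token is None:
            return None
        # Look up the human readable title for the stored token/value.
        term = field.vocabulary.getTerm(token)
        return {"token": term.token, "title": term.title}
```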
The Plone REST API gives me the expand functionality, so I can just name this service, or multiple services comma separated, with the expand option in the URL. What it does is it will still give me the list of components as before, but the components I choose to expand will actually have the full content. So it's the same as calling the URL, but it's already included in my request for the page or for the news item. This is really handy, but it's not doing this all the time for all the services, otherwise we would have to load a lot of content even if you don't need it. This combination is really powerful, because you can combine this with custom services to do whatever you want: for example, getting some related content based on categories and also expanding this, so that you have everything in one request. On the client side, this makes your code much simpler, because you just say, give me the article, and the backend says, okay, you have the article and you have everything you need, like the navigation, the breadcrumbs, whatever you want to show. If you have other parts of your application on the client side, you can also choose not to expand, because that's your decision when you ask for the content. You can just have the article without the services, so you always have the options. To create a service, it's up to you how to do this in the backend. We will probably also add this to plonecli so that you have a structure everybody knows. I like this way: I created just a services folder in my backend package, and then for every service, like the related articles, I create another subfolder, a Python module, and put the code there and the registration. This is the registration in the ZCML file. There are two parts: the first is just an adapter, which is doing the actual work, like searching for related content by a category with a normal catalog search. The second is the registration of the service for the REST API, which basically takes the adapter, uses it and gives back the information. This is the code. It's a little bit long. The constructor is not that interesting; here you basically have a little bit of boilerplate for how to call the service. The actual code you write yourself is here; until here, it's just boilerplate. What I'm doing here is just taking the current context, reading out the field, which is basically just some text, some categories, and then I do a catalog query for a portal type named chapter which has the categories, and at the end I just iterate over this and create my JavaScript object, which will in the end be the JSON I give back. What I basically do here is emulate roughly the same functionality you have when you call a folder: you have the listing, you get just the previews, like the title and the link to the actual items. That's what you get here. Okay. This is the second part, the actual service, which is basically using the adapter and also setting the expand parameter, which will in the end reach the lower part of the code here, where you actually do your stuff. So that's how it looks at the end. You have a new entry in the components list which, when you use it with the expand feature, will extract and expand all your content, whatever you put in there. That's it. Any questions? Do you think people using Aurelia usually use regular JavaScript or TypeScript? I think it's pretty much half and half.
I would guess internally the core team is switching more to TypeScript, because for the framework itself it allows you to give more auto-completion in IDEs and stuff like that. But if you want to use it for your project, that's up to you, and they will keep it this way. They are currently working on the bigger next version, where they are doing a lot of improvements under the hood, but the main principles will stay the same. There's no big change in the direction or anything. They just try to make things more modular, so that you can even have smaller footprints at the end, depending on what you are using, and make it even faster. This is all a bit fast, but they have some ideas. So there's a big refactoring for the next generation, but it won't have that many breaking changes for the users. Any other question? Yeah, you get this as JSON. The only thing on the client side is you have to add some CSS to style it, because usually the styling is done in the backend. So basically what you need is some template rendering on the JavaScript side and then getting the JSON from the REST API. What I didn't mention is that you can use plugins to inspect the JSON: just make the call and look at what the API is giving you back. So you see just the JSON, you can play with the parameters and see what is actually coming back, and then you start to put this in your JavaScript code. But yeah, it's really simple to start. The REST API is not doing anything which is complicated to understand, but it's still flexible. It's not the SOAP/WSDL kind of thing where you have a lot of boilerplate and a lot of complex stuff. So it's really straightforward. And if you're thinking about starting with client-side stuff, it gives you more flexibility. You have another abstraction layer. You have the user interface completely in your hand; it's not tied to the backend. So this is powerful, and it's of course fast. It's the same backend, but it's way faster than doing everything in the backend. And the cool thing is you can integrate things. With the JavaScript app in the front, I have another app which is written in Ruby, like the forum; I can integrate them and glue everything together. And I could have other backends, other services, and you just bring everything together and the result is just your app. The user will not know what's in the backend. That's the cool thing there. Any other question? Then, thank you. Thank you.
Empower small-scale farmers - for the gaipa project, we built a progressive web app (PWA) based on the Aurelia JavaScript framework and the powerful Plone REST API.
10.5446/54821 (DOI)
Good afternoon everyone. I'm Kumar Akshay. I'll be presenting my Google Summer of Code project on Plone CLI and bobtemplates.plone in general. I'm a senior student pursuing electronics and telecommunication engineering; I'm doing my bachelor's in India, and I like to participate in algorithm-focused coding challenges as well as contributing to open source. So this is the basic structure: there were three evaluations that the project guidelines were based on. In the first evaluation, I worked on improving the content type sub-template in bobtemplates.plone, and there were a few minor updates in Plone CLI as well. In the second evaluation, I added update_locale to Plone CLI and the mr.bob default profiles, as well as a few other commands in Plone CLI. And in the third evaluation, I worked on adding the view, viewlet and portlet sub-templates in bobtemplates.plone. If you don't know what bobtemplates.plone is: it's actually a wrapper over mr.bob, which uses Jinja templates to render the profiles and directories for generating boilerplate code for different Plone templates. There are two terms in bobtemplates that the creator, Maik, uses. In general, templates are things like add-ons, theme packages and buildouts, and probably many of you are already familiar with this. And there are a few sub-templates, which means they are stacked on top of templates; that's why they are called sub-templates. These are the different kinds of sub-templates. The one I worked on was content type, and content type was already there at the time, so I improved its functionality by adding a few features that I will talk about, plus portlets, views and viewlets. Let me start with the content type first. A content type, as we all know, is like folderish stuff that we can keep nesting inside other content types. For the base class of a content type, a Dexterity content type, we can have two classes: one is Item and one is Container. So when you run plonecli add content_type, you can actually see several questions popping up. The first question is the content type name; you can have white spaces and letters. The next question is the content type description, which is very obvious, you can write anything. Then you can choose whether to use the XML model, that is, plone.supermodel, or, if you choose no, you have to use the zope.schema model. The next one is the Dexterity base class, the one I talked about: you can choose either Container or Item. With Container you can actually keep going in a nested way: you can have a parent container and a child container, and the child container can be bound to the parent container content type or it can also be globally addable. With Item, it's just an item. The next question is whether the content type is globally addable. Suppose you are choosing a configuration in which you have a parent container content type and then a child content type, and you could also go deeper. For that, for the parent content type you have to set globally addable to true, which means you can add it anywhere, and for the child content type you have to set globally addable to false, and then it will actually ask you the parent content type name. The next question is if you want to filter content types; you can choose either yes or no.
And if you want a class name here as well, that will generate a class for the content type. And if you want to activate default behaviors for the content type. So in this content type sub-template, I actually worked on improving it, like generating the prerequisite code, I mean the basic boilerplate code which is necessary for this and which can support several configurations based on the answers to your questions. For example, if you are choosing the content type base class to be Item, then it should not ask the filter option. We also worked on improving test coverage for all these conditional configurations, and it also has some robot tests which are generated on their own. And yeah. The view sub-template is actually just a browser view, and it basically asks three or four questions. You can see the whole workflow here. plonecli add view triggers the set of questions that you need to answer for generating the view sub-template. The first question is: do you want a Python class or not? A view can have a Python class, it can also just have a template file, or a combination of these two, but you must have at least one of these for it to actually work. If you choose yes, then you need to give the Python class name in camel case. And this is the view name that you actually want in the URL part, so it can have underscores or dashes. In case you want a template as well, you can state that. And that's pretty much everything in the view template. If you visit it on localhost, you can see the demo view; this is the URL that you gave, and this is the default message you see after installing the package. And this is the viewlet sub-template. It's actually pretty similar to the view, and the questions are also very obvious and the same as for the view: you are asked whether you need a Python class or not, what the viewlet name is, and if you need a template file or not. For the viewlet, it will use a predefined viewlet manager just above the content title. So you can see here the default message after you activate the package. As for portlets: portlets come in a wide variety of different kinds we could use. In this one, it just generates a weather portlet. It actually asks you a couple of things: in the questions, it just asks you the name you want to give to the portlet, and it will generate a form. Inside the form, you have just one text field in which you enter the place, I mean the city and the country code, and after that it's actually using the Yahoo API to fetch the weather data. It will show it on the left side, and you can configure it on the right side as well, and at the bottom too. So plonecli is a wrapper over bobtemplates.plone. You could actually just use bobtemplates.plone as well, but the commands for using it are a bit messy and it's hard to remember those commands. plonecli makes it easy to actually have a command line where you can just type three or four words and create everything. This is a normal walkthrough of plonecli. First, we have to type plonecli create and then the template name, either addon, buildout or theme_package. And then the package name; in this case it's collective.todo. After that, you get asked about the author name and email.
So it will fetch the data from your git config by default. The next thing is your GitHub username, and whether you have a description for the add-on. It will ask if you want to initialize git; if you're using some other version control, you can type no, or if you're using git, then it will initialize it. And the Plone version: by default it will be set to 5.1. It will ask you whether you want to activate git auto-commit or not, and it will then commit every change in the project. This is the normal walkthrough of plonecli. But you can also have several default settings in the plonecli config, using plonecli config. You can actually save these predefined answers so that it doesn't ask every time. When you run plonecli config, it will ask you everything once: your author name, email, GitHub username, and the Plone version. There is also a question: do you want to disable git, in case you are using some other version control? And: do you want to always initialize a git repo, in case you are always working with git? And the last question: do you want to enable auto-commit, without being asked every time? This is also very handy in case you don't want to bother with anything else. It will actually generate a mr.bob config in your home directory, and every time before asking those questions, it will fetch the answers and it won't ask any question which has already been answered in this profile. So in case you are doing this again, it will just ask the description and the add-on name, that's all. For internationalization, we developed an update_locale command in plonecli. It's very handy and you don't have to bother about installing anything else except gettext. You can actually update the locales as well as sync them, all with this command. The workflow is pretty easy. You just need to make a directory for the language that you are translating into. So, assuming we are in the root directory of the Plone package, we need to go inside the locales directory. Right here you can see there's an update.py which is actually being used in our virtual environment for generating scripts, so that it can be used on Windows, Linux and macOS without any issue. The previous version was using update.sh, which would not work on Windows. So here you just need to create a directory; for example, for German we write de, and for Hindi it's hi. Here I have already activated my virtual environment, so I just need to run the update_locale command. Otherwise you have to use it like bin/update_locale. Then it will just ask for your email address and the syncing process will run. It will create .po files, and you can easily get .mo files from this by running buildout. And, okay, yeah. If you go inside, you can see the .po files there and all the text extracted inside them, using gettext. There were also a few minor improvements in plonecli. For example, you can see the plonecli version as well as the bobtemplates version using plonecli -V, and plonecli test runs the different tests just like we use in Plone. There are a few other improvements like an increase in unit test coverage in plonecli as well as bobtemplates.plone. You can see all the details on the wiki page of plonecli.
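(For reference, the Dexterity content type code that these sub-templates produce boils down to something like the following sketch. This is illustrative only, with made-up names, and is not the exact generated boilerplate:)

```python
# A hypothetical content type as produced by the zope.schema (non-XML) option.
from plone.dexterity.content import Container  # use Item for non-folderish types
from plone.supermodel import model
from zope import schema
from zope.interface import implementer


class ITalk(model.Schema):
    """Schema for a hypothetical 'Talk' content type."""

    speaker = schema.TextLine(title=u"Speaker", required=True)
    duration = schema.Int(title=u"Duration (minutes)", required=False)


@implementer(ITalk)
class Talk(Container):
    """Optional content type class, only generated if you ask for one."""
```

The globally-addable, parent-container, filter and behavior choices end up in the type's Factory Type Information under profiles/default/types/, not in this Python code.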
So this was my first GSoC experience and it was really fun. Of course, there was a huge learning curve at the beginning, since I was a Django developer back then, and coming from Django to Plone was really a good learning curve. And of course the constant support from my mentors; it would not have been possible without them. And of course the community. It's my first open source community, and I got to see very friendly people, always eager to help us and to work through our stupid questions as well. So I don't know what happened here, but he's sitting right there, Maik, and he was the creator of bobtemplates.plone as well as plonecli, and the two other mentors were Colp and Alexander. There is another talk on 9th November about the present and future scope of plonecli, so you can see everything else about plonecli there. And these are the project links; it's plonecli and bobtemplates.plone on GitHub. That's it. Thank you. Any questions? Yeah. So plonecli is a wrapper around mr.bob, right? But the script output and the messages still show mr.bob there. I think that confuses people; do you want to update the message? So, the first part: mr.bob is a separate package to install. It uses Jinja templates to render and .ini files for asking questions. mr.bob is a totally standalone package. bobtemplates.plone actually uses it to ask questions related to Plone packages and templates, and to render the configuration for each of the different sub-templates and templates. And plonecli is just a wrapper around this whole thing. It's there for convenience, so that you don't have to remember all the long commands for creating a package. If you look at the Bob templates, it's like four or five words combined and then you actually generate a package. And the questions would be the same: whether you just use bobtemplates.plone or you use plonecli, the questions are of course the same, but you just have to type less with plonecli. Does that answer your question? Okay. Anything else? Okay. Thank you very much. Thank you.
Command Line Plone Tools is a GSoC 2018 project that I successfully completed this summer. In this talk, I'll share my experience with the community.
10.5446/54822 (DOI)
Thank you. Yeah, don't try to pronounce my last name. It's difficult even if you're Swedish. I should never have changed it. I work for a company in Boston called Shoobx. Not Shoebox, but Shoobx. And we do a lot of cool things that involve legal documents; long story short, if you have a Delaware C corporation, which is basically what everybody has in the US, you need Shoobx. If you're going to do a startup, you definitely need us. You might not know it yet, but you do. The official blurb is the one down here. This apparently is designed to sound good to the type of people that start companies. To me, it says nothing, because that's not me. So, legal documents: of course, with our system, you can sign them with a pen, print them out, sign them, scan them in and upload them to the system. You can do that. But one of our benefits is that you don't have to do that. You can create the legal document in our system and sign everything electronically. People will get notified saying you have to look at this paper and read it and sign it, and it will get sent to the lawyer who then has to approve that everything is fine, and all that kind of stuff. Really long, big, complicated workflows for legal documents. And you can customize them and things like this. So we have loads of documents, and many of them are generated through this system. And we have our own template language, unsurprisingly called Shoobx templates or SBT, which is really ReportLab RML plus Zope Page Templates, plus a little bit of extra magic and app-specific stuff. Both templates and documents can have revisions. And of course, it would be nice to have a way of showing the differences, because there are loads of documents and loads of templates and several revisions of each template. So a text diff wouldn't do. We can't just run it through a text differ; a lawyer wouldn't understand anything of that. We need this kind of graphical diff with red and green text. It should be easy to read and semantically meaningful. That means that if you replace a word, the diff should show that you have replaced the word. It should not show which characters have changed in the word, because that is not readable. The first effort we made at making a diff of these templates worked, but it had less than optimal results. We often got things that were marked as being inserted or deleted when they shouldn't be; I'll show an example later. This was implemented by someone other than me, and I'm told that it took a month or so. So there are clearly faster programmers than me at work, because the work I did definitely did not take just a month, because diffing XML was trickier than we thought. So why not use somebody else's library? Well, there was one called xmldiff and it seemed to work, but it was unmaintained, which was the reason we didn't use it from the start. But since it was harder than we thought, we decided to take over the maintenance of xmldiff. The last version was 0.6 at that time. It's not only a library, it's also a command line tool. So if you're on Linux and you install a command called xmldiff, this is what you're going to get: you're going to get xmldiff 0.6. So I was tasked with implementing document diff based on this, and that was not difficult, but it also didn't give nice results. And here you have a good example.
What you can see is that instead of inserting a new paragraph 3, which is what actually happens, because of the numbering it will actually update the old paragraph 3. So it will delete almost all the text except an O and a D, and insert new text instead. And then it will insert paragraph 4 with the same text as what was there before. And it's actually worse than this, because then it deletes paragraph 5 and reinserts it, the old paragraph 4, as the new paragraph 5; it deletes it as number 4 and then inserts it as 5. Only then does it start actually showing the numbering difference for the rest of these sections. So the output was no good, but that wasn't the only problem. We also figured out that there was a memory leak in the C code, because there were C optimizations for this. And I haven't done any major programming in C since the 90s, so it was not obvious to me where that memory leak was. The Python code was also very fond of two-letter variable names, like typical C one- or two-letter variable names, so it was hard to read. And the internal data structure was a hierarchical list of lists, with the parent list contained as one element in the child list. So you had circular references between lists in the hierarchy. If you wanted to print out just one item, because the parent was part of that item and all the children, of course, were part of that item, you ended up printing out the whole structure anyway, which could be thousands and thousands of these lists of lists. And there was some infinite loop somewhere in there, maybe because of this data structure; we're not sure, because once I figured out there was an infinite loop, well, I had already decided to try something else, because it was also really hard to improve the matching and fix the problem from the previous page. So I ended up scrapping all of xmldiff and writing a new library, which we, after some discussion and deliberation, decided to call xmldiff. So we have then released this as version two. It's not a separate package called xmldiff2; we quite brutally just deleted everything in the GitHub repo and stuck my stuff in there once it was reasonably finished. And we released that as version two. The current version is 2.2, because obviously there were a lot of bugs and problems in the early versions. It's almost entirely incompatible. It's pure Python, we get better matchings, it's easier to use as a library, and it supports formatters; I will explain later what that is.
So that is basically the longest list you can find where all the items are in the same order in both of these lists. So for a text file, you'll run the longest common subsequence, or LCS as it's usually just called, on the lines. And from this we then get a list of lines that match. Then we make an edit script, and an edit script is a list of edit actions that will turn the old file into the new file. For a text file in this kind of line-by-line system, the edit actions are basically just delete lines or insert lines. You could also have move in theory, but discovering moves is not necessarily particularly easy, so often you just skip that. It also depends on whether you want to output it, because whether it's a move or a delete and insert doesn't matter for the output: if you have a visual output in a normal diff command, it's going to look the same whether it's a move or an insert and delete. If you want to store this in a compact format, if you want to actually store the differences, for example in some sort of version control system, then you might want to implement the move. And then we use this edit script to make a nice looking output like this. So that's not so hard, right? But if XML was this easy, I wouldn't have a talk. So let's talk about XML instead. First of all, let's talk about the matching. Here we don't match lines, we match nodes. The node matching is tricky for XML. Scientific papers on how to do hierarchical diffing, which of course is what you're doing when you diff XML, generally treat hierarchies as nodes that have a value and children and nothing else. That's all there is. So xmldiff 0.6 did something very clever here. It converted the complex XML nodes into many simple nodes. For example, these two nodes here, a para node and a b node, get converted into six simpler nodes like this. You have a node-type node with para as a value, an attribute node with section, a value node with three, which means that the section attribute has the value three, and another node with text, and then another text. So every node now only has a type, a value and children, and comparison is now easy. But being clever is always dangerous in computing, because what happens if we update the numbering? Well, these two nodes down here stop matching. And although these nodes have the same value, the children are completely different. So the similarity of these nodes gets to be 0.5, which is right at the cutoff, so these two also get set to be not the same. And that means we have this situation, because the top node has three children that are different, well, two out of three children are different. So again, that top node won't match. And the result is what we saw before: we get things like this. So you need to look at the node as a whole, not as independent pieces, to get good matching. I make a string out of these nodes, the attributes and the texts; I just make a string like this. And then I use the standard library's difflib to get the similarity ratio out of that, which actually uses the longest common subsequence method. If the node has children, I also take that into account, in equal measure to the difflib ratio. But if it doesn't have any children, I ignore that. And this works, but there's a lot of room for alternatives here.
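(A minimal sketch of that baseline idea, not the actual xmldiff internals: flatten a node's tag, attributes and text into one string and let difflib compute a similarity ratio. The helper names below are made up for illustration, and the extra child-based part of the score for nodes with children is left out:)

```python
from difflib import SequenceMatcher


def node_as_string(node):
    # Flatten tag, attributes and text content of an lxml element into one string.
    parts = [node.tag]
    parts.extend("%s=%s" % item for item in sorted(node.attrib.items()))
    parts.append(node.text or "")
    parts.extend(child.tail or "" for child in node)
    return " ".join(parts)


def leaf_similarity(old_node, new_node):
    # Ratio between 0.0 (nothing in common) and 1.0 (identical strings).
    matcher = SequenceMatcher(None, node_as_string(old_node),
                              node_as_string(new_node))
    return matcher.ratio()
```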
I, for example, tried to deal with attributes separately from text, so you can have weighting; you can put in and say, I don't know, text should be more important than attributes and children and things like this. But it did not significantly improve the matching. It matched as well or as badly as it did before. So I stayed, for now, with this. This may change in the future. I ended up here basically by trial and error. And then there is again this overall matching procedure. We now know how to compare two nodes, whether they are equal or not. But should we use the longest common subsequence again to match the nodes? Well, you could do it. You could flatten the whole tree into a list of nodes by traversing it and then just use the longest common subsequence. But that will leave a lot of nodes unmatched that could be matched. So you can't just do that; it would get very bad, and the diffs would get very big. And we want them to be somewhat compact, because it should be possible to store them. Finding the best possible match over two big sets is usually called a stable marriage problem, and there are various algorithms for that, which basically compare every node with every node and try to find the best possible matching over that. But doing that would have been crazy slow. So what I did is what I call single iteration best match. Maybe there's a better name for it, but that's what I call it. I simply go through each node from one tree and find the best match for that node, and then I remove both nodes from possible matchings. That means that the best match from the other node's point of view may actually be a different match. So we could still improve the matching here by using a stable-marriage type matching. But this seems to be good enough, and it was also slow enough, so I stayed with that. Some documents could take one and a half to two minutes, for one of the biggest and slowest diffs that we had, and we needed it to be seconds. So I didn't try the stable marriage versions. And then we have edit actions. With a linear file we got just insert and delete, or maybe also move. But XML needs more. We need to be able to delete, insert, rename and move nodes. But we also need to delete, insert, rename and update attributes. We could theoretically move attributes as well, but then we have the same problem that it's not easy to find moves of attributes. And we need to update the texts, both the text inside the node and after the node. As we saw with the previous example, the text that is inside the paragraph is actually mostly hanging after the b node, and that means that that text ends up on the b node, not inside the paragraph. So those are two different actions. And we also need insert comment as a separate action, because comments are just almost nodes. One problem I discovered when doing this is that with lxml, which we are using at the bottom here, you can't actually delete a top-level comment. If you put a comment in your XML file before or after the root node, there's no way to delete that in lxml. A very funny little piece of information. Of course, we had that problem; of course, we have top-level comments in one version and not in another. Well, now we have a list of the node matchings. So we go over the tree again, node by node, and make this edit script. If the node has a match, then we have to look at what the differences are between the new version and the old version of the node, because unlike with text, it's not exactly equal; it can be just a half match where some things have changed in the node.
And then we have to give out these edit actions for updating text or updating attributes or deleting attributes and things like this. If it does not have a match, we insert it, because we actually iterate over the new version here; that's the easiest. And lastly, all nodes from the old tree that don't have any match should be deleted. And that's how we then have an edit script. And from that, we generate an output. And this is basically how the output looked from the old xmldiff when you used it as a command line tool: it just prints out the edit actions. So there were several comments, or bug reports, on xmldiff saying we can't diff our files because it just never finishes, it just eats up all the memory, because they got into this infinite loop with the memory leak, which meant that xmldiff would just slowly chew up all the memory and then crash. And then we figured out that they could actually diff it if we used the pure Python version, because that didn't have memory leaks and we could fix other stuff. And with my new version, we could also diff it and get the output. And then people just go, well, I don't know what to do with this information, because this is the output they get. We don't want that, really. We want something like this, right? Here we now have a span with a class that says delete to remove the three, and the four is inserted with class insert. So then we can format this with CSS to get the right sort of colors and strikeouts. But XML doesn't really have any spans and things like this. So instead what we do is that we have a formatter; this is what we talked about, formatters. So one formatter just prints out the edit script as before. Another formatter will give XML output like this, with special diff tags in a special diff namespace that say what has happened. And here you go: the attribute has been updated from three to now four. And we have a diff delete tag and we have a diff insert tag. And then you have to format that into whatever output your XML should be, with XSLT or something else, but XSLT would be the standard way of doing it. That can be complex if your format is complex. We, for example, had this fun little problem. We have XML that looks like this. This is basically a sort of app-specific switch statement. So if the variable called expenses is set to borne-by-own, then this should be shown; otherwise, if it's reimbursed, then this should be shown, things like this. And when we show the templates in a GUI mode, of course, both need to be shown. And they need to be shown with this information plus some other information that I skip now because this is simplified, otherwise it doesn't fit. And we have this as a part of an XSLT. It calls a function, and that function is a Python function. You can do that in lxml: you can stick in Python functions and pass them in and let the XSLT call them. And this function makes a title from the field, from this app term tag and the app option tag; it creates a title from that. However, if the app option is not a child of the app term, this function breaks. And that can happen after diffing. And how can it happen after diffing? It's a node mismatch again. In one version, a section of the document might be inside these app term and app option tags, and in the later version it's not, it's inside something else. And this leads to a mismatch, because the attributes and content of the nodes are very similar, so we get a match. And the end result is XML that looks somewhat like this.
This is pretty a lot of information here. But here we say that the app term has been renamed. So this section should just say section there. So this section tag used to be an app term. And one of the attributes have been deleted. And this whatever tag has been moved from inside the option to outside. And the option has been deleted. And this happens for the other option too. So delete and insert here. That means move. And now the app option is not inside an app term. And that should say section here too. Sorry, I didn't see that before. This is the last things I changed this morning. So how to fix this? Well, we have to either fix the function to not break in this case, which is what we did. Or you have to fix the XSLT to go, okay, here's an app option. But it has a delete tag. So ignore it. Don't do anything. But most likely you don't do these kind of advanced stuff with XSLT and your XML. Most of you here is actually using XML in this specific implementation called HTML. You might have HTML documents that you want to diff. And you can do it with this. But then we come to the next problems because we have formatted text inside of HTML and also inside of RML, which is what we're using. So if you add a little bit of formatting to some text, you get some very big effects. So here we just added a beat tag here and an i tag over there. And this means that the old P had a contained text that said this is formatted text. The new one has contained text this. So that's not going to match. These things are not going to match anymore. So what to do? Well, what we do is that we replace tags inside of formatted text with unicode characters before the diffing. And this means that the nodes will now still match. Because it's these unicode characters, they weren't there in the old version, but this is still similar enough to match. And then after we've done the diffing, we'll unreplace this with the new tags and add the proper insert and delete attributes on those tags. And these characters are from the private user area in unicode. So we're allowed to do it. It won't end up by mistake being Chinese characters or Japanese characters. So yeah, this is how it then ends up afterwards. That we have a B and diff insert. Actually, if you set this up correctly, the formatting will say insert formatting. So that you specifically know that it's formatting. And then you can use another color for that, for example, or maybe yellow background. So you don't have to show the deleted text with one formatting and the inserted with another. You just show the text and show that the formatting has changed. That's what we do. But as I mentioned before, it's very, very slow. The worst diff case we had took one and a half minute or more. So I went on trying to improve that. And one of the biggest speed ups I found was implementing a simple shortcut. If the match is 100% between two nodes, then stop looking for something that is better because you're not going to find it. I also added a flag to choose between three different ways of calculating how different two nodes are. They're all a part of diflib. We now have accurate, which is the one I used in 2.0. That was very slow, which is using the LCS algorithm. And we have fast, which is a first one that's good enough. And we have faster, which is not that great. It doesn't give very good matchings. But it's very fast. I also added a fast match option, which uses the longest common subsequence as a first step to find matches. 
So I still go through all the nodes that are unmatched node by node to find new matches and better matches for those. But the first thing I do is to flatten the trees to lists and use LCS on them to find the first set of matches and remove that. That won't give very nice matches always because it won't look for the best match. It will just find a match that is in the right order. So if you move around nodes a lot, it's going to be terrible. But it speeds things up. So by the end, I had gotten down the time for a typical XML doc for maybe 20% of the time. Our worst case that we had took eight seconds under XML diff 0.6 with all its C optimization and took 100 seconds under XML diff 2. But I got that down to five seconds with default parameters. I think 20 something if you use the accurate matching but five with the default one, which is good enough. With less accurate matching, you get down to two seconds. And if we also use this fast match LCS algorithm, we're down to one and a half seconds. So it means we get better matching and faster diffing even though we're pure Python compared to XML diff 0.6. How do you then use it? You can of course use it from the command line. It has a help. You can look at it. But that's not so exciting. You want to use it from Python, right? And it has a simple API that's designed to be easy to use. So here's one example. You just do diff 2 files. So you have a function here. And this function can either take file names or it can take open file objects. There's also a function to diff XML text, unicode or binary text. And there's a function to diff LXML trees if you already have LXML trees. And the result you get in that case is an edit script by mode of Python objects. So you have different Python objects with the parameters that makes the edit script. But of course, you usually want this XML output. You want the formatter. And then you specify that. So you import the formatting, say XML formatter here. And here I've defined that the p tags, those have formatted text. And the formatting tags inside of that is i and b. Everything else that is not formatting tags will not get this format inserted, but just a normal insert. And it will also be treated as one whole tag, including the contents of it will be treated as a tag and replaced. So if you change something there, it gets deleted and inserted because it's not formatting. Instead of just changed. And then you diff it and just pass in the formatter here. And then you get some XML out instead of this list. Well, we're using this. So we're not stopping. We're not going to let this be unmaintained. We're trying to fix bugs quickly if they show up. So far we've succeeded. We have some requests for more options when matching, like being able to more accurately change if we should let the text be more important than the attributes and things like this. But we don't have much information to go on there. So we don't really have their use cases. So we would just be fumbling blindly if we implemented that. So we're waiting for more people to have more opinions. We're more flexibility when matching. It would be nice to have this stable marriage algorithm. I don't know how long it would take. Thanks to the other speedups I made, maybe it won't be that slow. That's all I have. Questions? We have a microphone for questions so they can be recorded. Does your algorithm also handle XML where the order of child notes is not important and can change between versions of a document? No, it doesn't. 
If they change between documents, they will get moved. But if they're the same, that's the only thing that will happen. They will just get moved. Thank you.
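For reference, the command line output shown earlier and the formatted XML output both come from the same small Python API that was walked through above. A rough usage sketch, following the API as described in the talk (check the xmldiff documentation for the exact signatures and the full set of formatter options):

from xmldiff import main, formatting

# Plain edit script, returned as a list of edit action objects:
actions = main.diff_files("old.xml", "new.xml")

# XML output with diff tags instead, treating <p> as formatted text
# whose <b> and <i> children are formatting tags:
formatter = formatting.XMLFormatter(
    text_tags=("p",),
    formatting_tags=("b", "i"),
)
print(main.diff_files("old.xml", "new.xml", formatter=formatter))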
Change detection and diffing in hierarchical structures is a big sinkhole of issues. This talk will talk about those issues, the implementation of the xmldiff library and how to use it to make pretty diffs of hierarchical documents.
10.5446/54826 (DOI)
Welcome everybody, I am here to present the add-on which we have built this summer as a part of Google Summer of Code 2018. Though its title talks about letting Plone play with the IFTTT ecosystem, the presentation is really about how Plone can be integrated with other platforms out there, like any social media platform or any web-based services. So the motivation for our project is to rule out all those tiresome manual jobs for site admins, such as: if there's a new item published on our site, then tweet about it or make a Facebook post about it; or let's say there is an event, let's say this Plone conference itself, then we need to add it to our calendar or send a notification to subscribers or to the event host about the event, or any such manual jobs where site admins have to go, if there's anything, then do this or do that. So the motivation was really to automate this process, so that if anything gets triggered somewhere, then all the related stuff is done automatically and you don't have to worry about it. There's also one more thing: if there's content being edited, then log the user name and what they have changed on a spreadsheet, so that you can go back and look at who has changed what, with all records. So the current solution for this is Plone's content rules. As of now, Plone content rules are the nearest solution to this, but the problem is that when you are trying to integrate with other social platforms, you have to write additional scripts, which can essentially expand the horizon of their triggering and their actions, and which is very cumbersome for non-techy people who don't really know how to write scripts. Basically we are trying to make it easy for all non-techy folks, so that they can just go pop in, use this, this click, this click, and it's all done. So our add-on also has a very easy to use and flexible UI, which I will show in a while. So in content rules there are three things: trigger, conditions and action. A trigger is something like: if this happens somewhere, and all these conditions are satisfied for this content, then do this action. And in theory these are the three functionalities that we are again using in our add-on. So the solution that we are proposing is the add-on named collective.ifttt; it's under the collective repo, it can be found there, and IFTTT is essentially pronounced "ift". Let me take you through IFTTT first. So IFTTT is basically a free web-based service; the acronym itself stands for "if this then that": if something happened here, then do all this stuff. For example, let's say you got a new Instagram post or a new upload, then save it to your Dropbox automatically, you don't have to do this manually. Or let's say you are nearby your home and you want to send a message to your friend or family that okay, I have reached this place, then it will be sent automatically, you don't have to do this.
So these kind of stuff which if provide this kind of flexibility this that if provide and we essentially try here to integrate if with Plone so anything happen any trigger happened at Plone it itself be published or you know anything could be happened on all other social platforms it has plenty of social platforms as you can see there like each and every day if it is integrating new and new platforms it's not just social but all the health applications all the social all G-Suit applications everything is there. So this is about if let me take you through the add-on itself now a small demo of it as I said the add-on is available under the houd of collective organization this is the GitHub page of our add-on it has a well written documentation and yeah let's go through it. So the easiest way to know about and to be flexible with if it is using the Plone RSS feed. So RSS feed so basically to use let's say there is a new RSS on your website and you want it to be published somewhere else so just sign up for if make a new applet there is a whole bunch of documentation given on given here and you can also use if documentation it's also well very well written. So here is an example where I've used the Times of India RSS to create a Google spreadsheet so every time if there's a new news item has been published on their website it's all logged in my spreadsheet. So there's a demonstration of it like this it logs all the data like when it is being published what is the title what is the URL we are and everything like that. So this is just a basic example with RSS. Now let's see how can we use this to integrate it integrate inside integrate with Plone Sites. So the first step is to save the ift secret key so when you sign up for a sign up at ift websites you got a private ift secret key which you need to when you are you will get a secret key which is essentially needed when you are sending information to ift so that they could figure out is that a real person and is that the legit person for this event. So you can get the ift secret key easily there and there's a when you install our add on there's a new configuration available under the add on configuration called ift configuration and you can simply register your secret key there. Now as I said this whole add on is on top of the content rules so essentially every time you can go there on content rules and create a new ift event but to make it a more make it more easier to use for people we have created a bunch of content triggers which can be used in a Jiffy and you know and admins don't really know what's there underneath and what is content rule they don't have really they don't need to go through all of it. So let's have a look to it in theory we have created three triggers ift triggers in theory these triggers basically differ in what information are they sending to ift so essentially ift except three payloads the first two payloads of ift trigger are always the URL of the content and the title of the content and the third trigger is essentially what admins want to know so the first trigger that is ift content trigger send the description of the content that what has been what is the description of this content which has been changed. The second trigger essentially provide the name of the user who has changed that trigger and the third trigger essentially sends the date and time of the event. 
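Under the hood, each of these triggers ends up as a small HTTP POST to IFTTT carrying at most three payload values. A rough sketch of such a request, assuming the standard IFTTT webhooks endpoint is used; the event name and key below are placeholders, and the exact request the add-on builds may differ:

import requests

IFTTT_KEY = "your-secret-key"          # the key saved in the IFTTT configuration panel
EVENT = "plone_content_published"      # hypothetical event name used in the applet

payload = {
    "value1": "https://example.org/news/my-news-item",   # URL of the content
    "value2": "My news item",                            # title of the content
    "value3": "A short description of the news item",    # third, trigger-specific payload
}
requests.post(
    f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}",
    json=payload,
    timeout=10,
)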
These triggers there's also an option of manage ift triggers so you can see what all triggers are available on that particular URL and you can easily delete or change this triggers at any time. So let me let me now demonstrate a simple if trigger on this news website I have already created a content trigger so let's say if a new news item has been published then the description along with title and URL of this content will be sent to my Slack channel where all the people enrolled for this site can see the new news. Let's have a demo to it. So as of now as you can see the state of this news is still private so we don't see any trigger to be happening but as soon as you make it publish yeah you got a successful trigger notification that it has been triggered and also you can see it has it has it is now showing in the Slack channel as well. So it's it's automated and there's no worry you can just all of all of the people who have signed up for that can see the new news item and for single content you can create as many triggers as you want so let's say you want to tweet about it you also want to make a Facebook post or you want to make it as a RSS or Slack everything can be created there's no limit how many triggers you want you can create or also if you want to log who has created so you can create different triggers with n number of possibilities. So this is a short demonstration to this similarly we have user trigger where information of the user who has changed the content will be recorded and a similar event trigger where the event of information with start date and time can be recorded this event is can actually be used to collaborate with calendars so that if there's a new event it will directly be added to your iOS calendar or Google calendar and a pop up will be there so it's yeah one another gem that we have integrated in this add-on is easy form integration so nowadays easy form is getting popular with blown five so you can actually add another if trigger here let's say you create a new easy form for let's say question survey or a question answer or a query query site a query page where people will get in they will add their about information like who are they what are their email information and stuff like that and they post their query and you then let's say you want to record all of this in a Google spreadsheet so while creating this this easy forms you have an option to define actions when the form itself gets submitted so under this you have an option for if trigger where you can configure that what all fields from the form itself you want to be the data from these fields you want to be triggered or you want to send to the if so that the data then itself be processed and recorded somewhere and used elsewhere so this is another gem that we have added to this add-on in the documentation documentation itself we have some working examples and app of applet and triggers so I have here I have created a yeah on my demo site I have created a Google spreadsheet example where the information of who who is who is who has made the changes to this particular content are is recorded yeah so this is pretty much about the add-on yeah at last as I said the this this triggering is all done like on the clone website the trigger event is handled by content rules and in here itself if you go to content rules you can actually change let's say because we want to be keep as simple as it for the admin so that they don't have to go under underhood and check whatever it is but if let's say tomorrow 
someone wants to then they can just go here all the content rules are available here all the content type information so you can actually check for the conditions of what content type it is or what workflow transitions are there all these conditions can still be made here so and under the option of add lock add action you have the if trigger option so if someone wants to play more with it they can just go here and do all those changes that they want yeah so this is all about it also we have a pi pi release to our package and it's available there the version 1.0.2 has been already released and we have also working demo environment to play with yeah this is the pi pi release that we have yeah so like this is the management interface of F2 website where you can just see what all applets are there and you can just turn it on and turn it off you will it is it's also highly integrated with your mobile mobile devices you can check notifications there itself and all the pop up will be there itself yeah so this was all about our add on the contributors to this add on our it's me Shreyaanj Sallie who is here then Kim neck who helped us with the demo site as I said this add on was developed on with the with Google summer off code 2018 timeline and yeah at last we have asko who helped me with all the technical stuff that we need to know off yeah so thank you any questions yeah yeah to have it automatically put to face, but I think that the idea is great. So I was curious, were there any limitations using the ecosystem that you get that would make it very well if not, this one would not be useful for us. Not as of now, we have not heard any of such thing. And also if it is integrating multiple applications of each and every day, so you could have endless number of possibilities to integrate with any kind of surf. Social media is one of the thing, otherwise on if it's self, people are using it to even turn off the lights and switch off or even, yeah. Even turning on your refrigerator, changing the temperature and stuff. So they're using it for all kind of stuff. The one thing which is, which still need enhancement in our add-on that as of now, Ploan can send triggers to it so that other websites can see what's happening here. But one thing which is lacking is if let's say tomorrow, some another website has a new, let's say, news and you want it to be published on the Ploan org itself, then it's not really accepting, but I've already created this issue on our GitHub page and it's like an enhancement, but it's like it would need another Google Summer of Code project, the whole timeline. It was not possible to integrate this stuff at this moment in this time period. We were only able to do this, yeah. Authentication is not really that easy with it. That's great that I have, that was my next question. What is this thing that you wanted? Yes, so this is exactly what you wanna know. Getting triggers from other websites or other platforms and perform some action on your website. This may even help to transfer data, like a whole bunch of data or just single set of lines or single set of things. Is there another thing that you can do? So the limitation is if only accept three payloads, so it's upon you, like the payload string is not limited, but it's just that three different payloads and they'd use these three different strings to publish on, they again have to give this particular strings to other platforms, so they should know of that what is it. 
So there's no limitation with the length of the string, but you only have three payloads with it. I guess the URL to that image is possible, and then there's more processing to it, maybe on the other platform or in IFTTT itself, because they are essentially sending the encoded stuff and not really the thing itself. So yeah, basically it would be a string. So there's some more processing required, but most of the other platforms themselves have this processing and styling, and there's a lot of options on those platforms to accept what is coming in here. Yes, essentially everything is a string, right? When it's encoded, you get a string. It's just that you have to decode it and process it to get the stuff on your page then. [Inaudible audience question.] So far I have not seen any scheduling of posts. It's just that IFTTT is standing in between; as soon as you send data to it, it will send the data in a nice manner, as it is accepted by the other platforms, and then it has to be processed on the other platform. So I guess, I don't know, I have not seen any scheduling stuff, but there is a possibility that IFTTT has some processing and settings, you know, so that only after this time it sends it to Facebook or Twitter or stuff like that. Yeah, on Plone itself there could be an option to publish this at this time. Yeah, there is a possible way: on the Plone site itself you have an option of publication date, right, for when it should be published. So in the workflow transition itself, as I showed you in the demonstration, unless and until I make it published, the trigger didn't happen and it was not posted on my Slack channel. So on the Plone site itself you can configure the publication date or timing, and at that particular moment this content will be published, and as soon as it is published it will be triggered to IFTTT and then the stuff goes on. Okay, take your time. Are you sure? Okay, anyone else have any question? Okay, sorry, the last line again. [Inaudible audience question.] Oh yeah, there is an option for LinkedIn connectivity. So, let's say for the three payloads you are choosing a title, the URL and the image content itself. And when you post it to IFTTT, they forward it to LinkedIn, maybe as a post; there is an option with LinkedIn itself. There are multiple applets already available. You could just select one of these and use it with your blog. So all this content, basically these three payloads, will be sent to IFTTT, and IFTTT then processes it to LinkedIn and it can be posted as well. You might have to look at their business model; as of now, I don't really know. Any questions? Any other question? Thank you very much.
Everything about the collective.ifttt add-on with a demonstration. It's a Plone add-on which enables any Plone site to play in the IFTTT (pronounced /ɪft/) ecosystem by allowing you to create IFTTT applets. This project was developed during Google Summer Of Code 2018.
10.5446/54829 (DOI)
Does this thing, hello, hello, okay. I'm gonna need my hands. I'm not so much gonna criticize Orchid. I'm gonna criticize Plone more, I guess. Actually, what I'm going to be doing mostly is to show the Orchid UI, the Orchid admin content-oriented UI. And this is the level of this talk is set as high because it presupposes a lot of familiarity with the way things are done in Plone. Okay, how about this? Okay, we've switched mics. Okay, I'm going to be showing a lot of screenshots. So you might wanna come closer if you wanna read what's going on. Okay, so. The Orchid CMS, just to give a little background on what that is, it's been around since about 2008. It's completely open source, BSD licensed. It's under the.NET Foundation, Microsoft, the salaries of some of the people who have worked on it over the years and the lead guy all along, Sebastian Rose. Orchid is currently on version 1.10 and there will be no Orchid version two. We're changing to Orchid Core. This is kind of a ZOPE 3 moment. It's a rewrite reconceptualization. And it's also the first version that is completely cross-platform. Just to put it in perspective a bit more to compare them, I looked for usage statistics on built with and filtered to the top 10 case sites on the internet and blown as a handful of them and Orchid has a handful more. So it's also a very niche CMS I'd say, but it does serve some serious sites. Does see some serious use. So I'm showing Orchid now. I've been working with it for about three years and I decided to show it now because it's cross-platform. I'm running it under Ubuntu. I would not feel comfortable really dragging a proprietary CMS running on proprietary tech into this conference. It's also interesting to show it because because of the fact that I faced a similar challenge in rewriting as blown has been going through from Python 2 to Python 3, the effort to revitalize ZOPE project as well. Okay. And also blown is confronting the move to the headless CMS paradigm. And in that as well, Orchid core specifically is built to work as a normal CMS as a headless CMS. And that's what I call a decoupled CMS where you have basically pure ASP.NET templates that have access to the Orchid content, but that is not wired into the Orchid CMS rendering process at all, your Android and rolling your own code. And it feels a bit like ZOPE 3 in that it's a framework for building modular applications and the CMS is basically a set of modules that you enable. You can build any other kind of modular application on it as well. What struck me looking at the Orchid environment was that compared to the blown add-on world, there was an Orchid gallery, but everything there seemed to be out of, completely out of date and maintained. There was, it was much more sparse than the blown add-on ecosystem. That said, the blown add-on ecosystem also contains a lot of things that fell by the wayside. You're not really sure which version or fork or of a package is really the live one. In the case of Orchid, the problem is more that the things that disappear under the waterline, it's not so much a problem of sifting through stuff, it's more a problem of locating where the live stuff actually is. So the community is much smaller, and initially this was a major concern for me, like how can you do anything with a small amount of people? But I came to realize that a critical mess can be small as long as it's people with real commitment to stick around for a long time. 
And in the case of Planned, a lot of its progress has also been due to a small group of people involved over the long term. A couple of places where Orchid has used a couple of big sites, there's a software as a service, I think there's a.Nest website where you can create your own Orchid instance. It hosts about 4,500 Orchid instances on one, or what they call a tenant, on one Orchid instance. That's like, it's a similar model to multiple clones inside of the site, but that's a lot. It also runs weblogs.asp.net, which is about 1,500. It's got the core team make the design decisions, for example, they have strong opinions about what type the theming is, we'll see that a bit later. They've got one content type story, they've got one import-export story, they've got one way of doing layout. It takes a lot of wondering about do I use this or that out of the picture. One thing I like about that process is they have weekly core team meetings that are recorded and put on YouTube, and by now there's up to 290 of them besides other tutorials and things. But that's a nice way of allowing people to catch up and follow exactly how technical decisions get made. So now I'd like, the ground I'd like to cover looking at Orchid is the import-export story, content types, layout, theming, workflow, and two things that they call shapes and parts that I'll describe more as we get to that. So to start with import-export, in the plan world, it's a very, very patchy situation. There are CSV solutions, there are ones that are built around JSON, there's transmogrifier, is it really import-export or is it more of a migration tool? There is database level, dumping of pickles, there's Plomino, which had a bunch of very nice ideas where you could, for example, replicate Plomino content between different sites. Orchid wraps everything into one way of doing it. Your, with Orchid Core, the import-export story is described as deployment plans. The idea is that you actually formulate an import-export recipe as a way to deploy to other Orchid sites. And that one import-export format encompasses site settings, content type definitions, the layout placement of things in the UI, and of course the content itself, and its model you pick and choose which ones which form parts of an import. Orchid one is using XML, sort of a very fast and loose undisciplined, it's valid XML, but it's schema-less. So that means it's actually quite easy to write. Orchid Core is moving to JSON, which is more suitable for the way that they're using its simple key values. And because it's so not fussy and consistent, I've been able to easily generate the input XML or JSON from CSVs or whatever other sources I got. Let's have a look at what it looks like in Orchid specifically. So in the Orchid admin UI, you added a deployment plan and then you pick what you are going to import and export. So here you see the options that you have, like site settings is the name of the site and all that kind of stuff. You've got the content definitions, you've got all the templates that are defined in the site. Orchid has default ways of rendering all its basic content types and widgets and so on, but you can always override that in a specific site. Okay, so next let's step over to content types. We've got content types. A content type, you can add parts, which is a bit like behaviors in dexterity. You've got parts on the content levels while you can actually create parts, similarly to how you can create types. 
So it's like as if you could create behaviors through the web. However, parts with more functionality would be implemented as file system modules. The fields and forms are also content instances, content items. Then just to have a quick overview of how Orchid does layout. Normal content item like a page, you can build on normal content items like page that lets you do layout in the content area of a page already. Then you can define layouts that are used in specific contexts, a layout defines zones. You can place widgets on content types anywhere, you can use them in the layout of a page, for example, but you can also assign widgets to zones. So if you have a widget that is not related to specific content, but like a footer or something that is relevant across wide areas of the site, you can figure it will show up in some zone. And then everything that becomes part of the page is rendered by a template. So you've got templates for layouts, for content items, for parts, for fields, for widgets. So you can override that as granular a level as you feel like. And you can do that writing razor code or writing liquid razor is sort of C-shop scripting embedded in HTML and liquid is a much simpler cipher sandboxed sort of string interpolation like way of templating which it does loops and so on, but it's not a full language, it's cipher to give to editors. So let's look at that in practice a bit. Looking at the layers UI in the Orchard admin, you can assign or create layers there. So in the footer, you can create rules for the footer zone there. And layers defined by rule which can say, for example, this thing appears when someone is authenticated or when someone is not authenticated or on a specific URL. You can add widgets to zones. Widgets can be static, can be dynamic bits of templates that are evaluated upon rendering. It can be complex, you can add a container widget that contains a bunch of other widgets in layout relating to each other images, whatever. So here I'm adding a container widget. So after adding the container, I can add widgets on the container again. And in that little toolbar at the bottom there, you can see I can set them to, take up a certain amount of horizontal space that can flow left, flow right, or take up the whole width and so on. So just with that, I can do a layout. And that's two widgets, one aligned left, one aligned right taking up half the space. So what I like in Orchard is, all the Orchard functionality is built up from content in the CMS. So what that means is, for example, menu items are content instances. In this case, the site that we're looking at now is, let me switch to that thing here. Okay. The demo Orchard site that we're looking at is an agency homepage that looks like this. And to build this, this is basically a landing page site. And to build that, they define a landing page content type. So, and build that layout and gathering of content on that. So it's a theme specific content type then. I said that menu items are built on content. So here you can see the menu content item and on the menu content item, we are adding individual menu items and they are of the content type link. And you can drag and drop these around, make sub menus. And we'll see a little bit later where how that link content type becomes eligible for being in a menu, namely, we assign a stereotype to it. Let's look some more at content. 
So, Orchard is a bit like, if you look, it reminded me a bit of WordPress in the sense that in the admin, you go to the content listing and then you get a listing of content items and compared to Plone, this felt very backwards to me. In Plone, we have got a hierarchical tree of folders. It's much, much, you can get to the content, you can organize the content much, much nicer and sticking everything in a flat list. But it turns out that this isn't really what you get there because in Orchard Core, to some extent in Orchard 1 as well, but differently, we're like, okay, in Orchard Core, you can add collections or bag parts to your content type. And when you do that, you can add sub items. So these are actually contained inside and there's service that these are. So on the landing page, we've got one collection called services and it collects instances of the service content type. Looking at the service, this is in place. So I expand the service on the demo page and I want editing that service. This is a custom content type. We'll look at the definition of that content type later. But here you'll see it's got an icon class field added in there. And we're gonna see where that icon class field is used in the template in a minute. We'll look at templates now. So that landing page in this theme is implemented using a custom template. That, this reminds a bit of J-Bot in the Plone world where the name of the path to a template determines how it's gonna be used and what context it's gonna be used. Except that in orchard, this name-based lookup of templates is baked into the framework from the bottom up. So all lookups of views go through or look for templates in a hierarchy of places, according to these names. So in this case, it's named content-dash-dash. So it's gonna apply for rendering a piece of content and the content type that it's gonna be used for is landing page. And then there follows a razor template. And there you can see this, where are we? The LORAM Ipsum text that you see there, you can see in the template here. And then follows a loop of a service content items. Down there, okay, in the previous slide, this was a couple of commits ago when they didn't have highlighting turned on on that field. Actually, it's gonna be looking more like this. And there you can see the icon class field being used to render the shopping cart icon on the service. And this is what it looks like in the loop on the page. Okay, let's look a bit more at how content types are done. So I mentioned before that menu items because they get a menu item stereotype assigned. Widgets content type becomes available as a widget by assigning it the widget content type. You can see here the create new landing page button is on this screen, that is the only content type that you can add instances of directly. All the other ones are things that you use as part of layout or in the context of a menu or as a widget and so on. Let's see what it looks like when you're editing one. When you're editing a content type, you have an area where you add fields and then you have an area where you add parts. And adding parts is a bit like picking the behaviors in dexterity except here you can go and edit the part and add additional fields to it, change labels and so on. And you can actually add parts more than once and then it's a named part, you give it another name. So here I'm adding a new field to the service content type. 
The field types are gathered from whatever module defines fields conforming to the correct interfaces and so on. They all have associated templates. You can override those templates one by one. So here I'm looking at the service content type. It's got that edit template option. You can edit the template for use in different contexts. If you hit that, it volunteers to create your template for you. Now you can see it's named for that content type service. So now it's content service instead of content landing page. OK, let's look a bit more at parts. This is like Dublin core metadata, some of them like auto, like alias and auto root in common are things that add the basic functionality of all content parts through the web we're going to have. But then others you've seen like bag makes it a container, flow gives it the ability to flow widgets relative to each other. In the case of this landing page, it doesn't have any fields of its own. All the fields that show up on the landing page come from the parts that are on it. OK, so this gives you an idea already of how theming is approached in Orchard. The designer doesn't really believe in the fact that you can take one site and stick another theme on top of it in terms of template and layouts and colors and everything. For him, it's much more intertwined. It's editable in the admin for a specific site, but you're going to do some work turning, say, some random bootstrap theme into something that works with your content model and your application. OK, this is like, let's look at modules. This is like the add-on page for blown. As you can see by the scroll bar, there's a lot of modules available in Orchard because a lot of the basic CMS functionality is implemented in terms of modules. There's a lot of infrastructure stuff that is automatically turned on. But a lot of other features that are baked in in-plan or standard in-plan like workflows are implemented as modules that you can turn on an office you like. Templates as well, the ability to add templates. I mentioned how you can override templates in a granular way. Here is a quick look at what kinds of overrides you can do and to reiterate the overlap between functionality and theme. They've got a theme called the blog theme, and they've got a theme called software as a service. So that enables multi-tenancy. It's really a different kind of site. OK, let's have a quick pass through the workflow in Orchard. Workflow in Orchard is very different from Workflow in Plown. Workflow in Plown is managing the state of content through managing content through various states. And managing the access rights and stuff of a content item in a state. And in Orchard it's more like conventionally, when this happens, then that has to happen, and another thing has to happen. This diagram you see is the visual workflow editor in Orchard. So to do some workflow in Orchard, you could, for example, do a contact form, contact submission form. And again, to do forms, you need to enable the form module. So we filter the add-ons and see the form module, enable that. To enable a contact form, you're going to submit the form. So then you search for workflows and enable HTTP events on workflows. Once you've done that, suddenly your list of content types contains a new one, namely form as a widget. To create a form, I'm not creating a special separate thing. I can just stick it on a page. I create a page and I add a form widget to it. I set the HTTP method of the widget. I set how it flows. 
I pick inputs that I add as widgets on that page. Now I've got my contact form. Then you go and look at it and define and build a workflow. So here I've defined the contact form to do a post. So my initial workflow activity is to react to a post submission. When that's done, in this case, I've got a content item representing that form. So I'm binding the form submission to that state. I validate form fields. So I do a bunch of form field validations in parallel. I gather that up and see whether any of them fail, display errors, or send email and display. Thank you, Page. So on the headless CMS front, Orchard Core comes with GraphQL. And it bundles a very nice GraphQL editor called GraphQL. It autocompletes. So you can start with a blank query and just hit Control Spice to see what do I have here. And basically dig down through your content types definition and structure. So if you've got, in this case, articles in your page, you can see what fields we have on our articles and include the ones that you want in your GraphQL query. In this case, article reference posts and posts can be either blog posts or articles. So now you're traversing from an initial article instance to related content instances and gathering them up into one GraphQL response. Here we're looking at the same thing in Postman, which obviously, the GraphQL is a very, very, very, very very, very good machine. So if you're doing the graph QL editor inside of, Orchard is great for formulating, exploring things, formulating how you want to tell your client to query. But then the actual querying is going to happen from another thing, like Orchard. I heard like Postman, for example. So to sum up what we have here is, what I like about Orchard The entire stack of functionality, what you can do in the CMS is build up from content types, parts, instances of content types. One thing that I didn't spend enough time on really is this concept of shapes. So everything, a content type field, everything in which it gets rendered as a shape before it is rendered, before it's finally rendered. And the shape is basically like a JSON structure, it's a bag of hierarchical bag of values. So when you're working with content in a template, for example, you're dealing with a shape of that content. You don't get rendered HTML that you need to wrangle, you get key value storage that you can go through. So content types, parts that are part of content that you can assign to your own content types. And all of this stuff around tripable, exportable, importable. A couple of days ago I presented the hacking plan training day, which was basically about building this same kind of stuff in Plone where we were looking at dexterity, ambidexterity, Plone form gene, contrasting a bit with easy form, looking at workflow manager down to the ZMI to hack workflows in a more granular way. I was very proud of getting Plone import, export working, grabbing the Plone conference talks from the live website, turning that into CSV and importing that into my dexterity based new blank Plone site via CSV, except it only worked halfway, it didn't do all the field types, it took quite a bit of guessing to see how I needed to do the CSV. So basically we had to, it was like building a ship while you're on it, it was like pulling things together from all over the place. There seems to be a lot of consolidation happening in Plone right now with things like Plone REST API and archetypes getting dropped finally and so on. 
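Once a query has been worked out in the editor, it can be sent from any HTTP client. A rough Python sketch; the /api/graphql path and the queried fields are assumptions for illustration (they depend on the Orchard Core setup and on your own content types), not details taken from the talk:

import requests

query = """
{
  article {
    contentItemId
    displayText
  }
}
"""
response = requests.post(
    "https://example.org/api/graphql",
    json={"query": query},
    timeout=10,
)
print(response.json()["data"])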
So I think what impressed me about Orchid is it's the same kind of problems that Plone has been dealing with and the solutions that it comes up with are, make a lot of sense from a Plone perspective as well and it seems like Plone is sort of drifting in that direction as well, but it has more legacy, it's got much more of an issue with design and leadership than in the case of Orchid, we've got one guy who's been doing it like for going on ten years. And that's basically what I wanted to get across, I hope, I hope, it was interesting. So if you've got any questions, please ask. Yeah. It's the workflow editor. Any module can add activities, for example, when I enabled the HTTP workflow module, suddenly we've got HTTP redirect popping up here and we've got stuff like add a form validation error or validate form field and so on. So depending on the kind of site you have, you will enable modules that give you the set of activities and events and stuff that are relevant for your workflow. And let's say, let's add one of these and then you wire them up together. Yeah. Can you write the templates like razor templates, like sandbox, or how does the code work? razor templates are dangerous, but liquid templates are sandboxed. You just enable that module. You can enable liquid templates and razor templates separately. Oh, no, no, no, you drag them around. So this one goes here. Yeah. Yeah. So, yeah, basically, if you're using a deployment plan that includes a bunch of, I mean, the deployment plan is going to specify which modules get enabled and it might include templates in the UI and so on. So pretty much all this is through the web, right? I've... Well, I presented this, I presented this through the web story because I wanted to contrast, I wanted to contrast the same kind of thing in Plone. But it's... for example, writing razor templates and overriding things like this is... That's not something that... I mean, that's something an integrator would do. But like, you wouldn't do that, say, when you're adding a Plone instance on.nest, just to make your own home page, you wouldn't be doing that. One of my earlier slides there had line community open source versus corporate open source and that is an issue because, like I said with what you see is you find there are big users and there are all kinds of interesting modules, but a lot of the time, they're big commercial sites, nobody ever bothered to actually publicly release something even though they don't mind releasing the module if you meet this guy. So the fact that it's sort of coming from a more corporate background does play a role. I don't think it compromises the project, the design of the project. And there's one guy leading the project, but he doesn't implement even the majority of stuff himself. There's a lot of people helping and it happens in a normal open source community way. There are a couple more companies doing commercial services on Orchard than in the case of Plone, but I don't know if that's significant in the Orchard ecosystem. Looks like I'm done. So catch me later. Thank you.
I'm currently working with the Orchard CMS. It addresses many of the same issues as Plone: flexible content type definitions, workflow, fully customisable layout and theming, third-party module development, all with a strong through-the-web story. In many cases, I find the Orchard solutions form an interesting contrast to the Plone way, specifically in terms of content type definition, widgets, templating and layout. I'd like to look at Orchard from a Plone point of view: what is easier, what is lacking, what lessons can we take from another player in the same space. I will be looking at Orchard Core, the fully cross-platform rewrite that is in beta now. It uses a .NET document database working on any RDBMS (we use postgres and sqlite). Orchard is an open source (BSD 3-Clause) project, .Net, C#, started around 2009.
10.5446/54830 (DOI)
Hi folks, that thing is on, right? Yeah, so let's start back in 2004 when I started to do Plone. The journey started for me with Plone 2.0. I was a university student in my first years and I had a pet project. I was really into the American authors of a specific magazine and I was really interested in translating those articles into German, so I got together with a few people and we started to create a website. At first we created an HTML website, but over the time we added more articles, like every day or so, and that became a problem, so I started to look around and I started to do PHP and MySQL and started to build like kind of my own CMS, right? Then it occurred to me that I was like reinventing the wheel. I was running into the first walls in my career. Many more would follow; I will share a few others with you. And I started to look around for like real content management systems, right? So I somehow ran into Plone and also Zope, and back then in 2004 I was really, really impressed by both Zope and Plone. What you could do in Zope was that you could like add a wiki with just one mouse click, right? That was a thing back then, right? I know I'm old, but that was something that impressed me because I was used to like having to set up PHP, MySQL and like doing the database connection and all those kind of things, right? And it did not stop there. In Plone you could do like many things through the web. It was easy for me to pick up because you could just like search for something, for a string or so, and then just copy that over and get started, right? So that was something that appealed to me. So what made the young Timo, a young university student, pick up a new programming language (because I was used to like Java and PHP but not Python), a new database like the ZODB (I was used to relational databases) and overall a new CMS, right? The thing is that Plone was really the hot thing back then, right? It was really like standing out. If you compared it with the other PHP and MySQL based systems that I looked into, Plone had, as one of the first systems, a what-you-see-is-what-you-get editor, right? Epoz, who recalls that? That was the first Plone editor. A few, right? Many, many followed. But that was something pretty unique, right? If you looked at, say, TYPO3 back then, they had like plain input fields, right? Where you could like maybe add formatting or stuff, but no like what-you-see-is-what-you-get, right? The theme by Limi was pretty cutting edge back then. Do you recall like the navigation with the CSS tabs? When I saw that first, that was pretty advanced back then, right? It was so advanced that like one of my friends actually copied that for his own website and even Wikipedia copied those tabs, right? So that was really like a cool thing. And we had like lots of other optimizations, like for instance search autocomplete, right? That was also something where Plone was earlier than most other systems. But that was like a long, long time ago. And time flies over the past 15 years. I came for like the fancy stuff but I stayed for other things. Eric in his keynote already mentioned that, so I will quickly go through that. The things that we like about Plone, right? It's scalability, it's maintainability. We basically allow upgrades from Plone 1.0 or even maybe 0.1. I don't know. I never tried. But we offer upgrades from all Plone versions, right? That's also pretty unique.
If you compare it to other systems, you can easily scale Plone. I once shared the office with a TYPO3 agency and I once asked them, hey, how do you like scale your systems, right? And what they told me was, like, they were looking like that at me and were saying, we buy a bigger server, right? And like Plone has all that baked in. Our security record is really outstanding, right? We all know about that. We have the best security track record of all CMSs. No zero day exploit in 15 years. That's really awesome. One of the other things that are really valued over time is the unique set and combination of permissions, workflow and traversal in Plone. As a consultant, I saw many, many teams, and good teams, struggling really hard with building something like what we have, what we take for granted in Plone and Zope, right? I saw teams reinventing the wheel over and over again, right? So that's really also unique in my opinion. The number of available add-on products, right? We have thousands of available add-on products. When I go to a client and I don't know what to expect and they ask me, hey, can Plone do that? Can Plone do that? I usually either know that add-ons exist or look them up, and usually Plone never failed me, right? There are still tons of add-ons available for everything. Maybe they're outdated, maybe you have to like migrate them to Plone 5 or whatever, but they're there, right? People did that before. Another thing that I also mentioned and that everybody mentions is the Plone community, right? I mean, we all love being part of the Plone community. It like sends us all across the globe, right? And meeting nice people; I'm always looking forward to Plone meetings to meet like nice friends and like people that became friends over the time, right? Another thing that's really important is the Plone Foundation. We have a stable legal entity that governs the Plone community. So if we start a new project, like Ramon with Guillotina, or maybe with Volto, or also an existing project, they choose to become part of the Plone family, right? For a reason. If you start a new project from the start, from scratch, like a JavaScript project or whatever, and you're not Facebook or Google, then you have to put a lot of effort into those things. We usually consider that to be like boring, being on a board and doing like all that stuff. But that's really important for Plone. Though over the last like 15 years, that's like lots of time as I already mentioned, lots of things changed. So to summarize, we have a unique set of features in Plone and a unique community and a good basis, right? But we also have a large code base that makes it hard to adapt to changes, right? Philip and others put a tremendous amount of work into Python 3, right? And again, I said that before, but thank you very much, Philip, for your work. That's highly appreciated. Though with that, we saw how much work it was, right? So that's one problem. The other problem is we're kind of like growing old as a community, right? I mean, it gets harder and harder for us to like attract new developers, right? If you look at, for instance, at least if you compare our community with like JavaScript communities, right? They easily have like a few thousand attendees at JS conferences, right? So I think there were some fundamental changes in web technology, which I would like to share. Some of them I already mentioned last year, so I will quickly go through them, right?
One of the important changes was that mobile overtook desktop in like around 2015, 2016. In 2015, Google reported that in ten countries, including the US and Japan, more people were using the search, Google search from mobile devices than from desktops, right? If you look at a regular website these days, usually more people access it via mobile. Another thing that's related to that is SpeedosKing. There's this research about what time people accept, what waiting time people accept when visiting a website. So the research shows that after three seconds, you usually use one-third of your users. After another two seconds, you lose another one-third of your users and then it goes on from there. And keep in mind that this is independent of your device, right? It doesn't matter if you are in a fast internet connection or if you are on 3D somewhere, wherever, right? So I recommend to go to WebHTestOrg and just try your regular website and see how long it actually takes to load on 3D, right? And if you manage to be under three seconds, that's already not bad, right? But you might already lose one-third of your users. There's also Google added like a penalty to slow sites and downrank them if they're slow, right? So people do not expect slow websites any longer. Another important change is that JavaScript is everywhere. GitHub does a report every year on the most successful projects, right? And JavaScript has more than doubled the number of commits than Python in terms of pull requests. In 2018, web development means JavaScript. I will come to that later. Another development is that the web is everywhere, the web technology is everywhere. Five to ten years ago, web applications began to replace desktop applications, right? Things that were like written for Windows or Linux. They were replaced by web applications because like it doesn't matter which device you use, right? About like two to three years ago, web technology started to take over mobile development with technologies like Cordova or React Native or PWAs. Today you can write native applications with native speed and the native look and feel with web technology, right? That's a major change. Another field where web technology starts to get adopted is actual desktop applications. So you can write desktop applications with web technologies. Examples are Visual Studio Code, for instance, Adam Editor or Slack. So the web is literally everywhere. Another thing is open source became mainstream. I think Plone is also is still pretty unique in terms of like we are, we have small to medium companies and we don't have any large company that dominates the community, right? Which is awesome. But if you compare that to like usually successful projects, for instance, in the JavaScript world, there are companies like Facebook, Google, Microsoft behind them, right? And open source really became mainstream. When I started with open source in 2004 and other started way earlier, it was somehow a niche for like maybe nerds. But today open source is really big business, right? So one of the things that I forgot to mention, I mentioned those things last year in my talk in Barcelona, but one of the things that I forgot to mention that also changed is actually the expectations of our users. I was focusing on the technical terms because at heart I'm still like a developer even though I spent most of my time these days in other things. But I tend to focus on the technical stuff. 
But I left out the expectations of the users, which of course changed along with those technology changes. So what do users expect these days? They expect a blazing fast loading website like Google or Facebook, right? They expect a search like Google, right? If you talk with people about search, they always say, yeah, it's easy, just do what Google does, right? Easy peasy. When you build an application or an intranet application, they expect it to be like Facebook, right? So Facebook is actually a competitor to things like our intranets, right? So we're even competing with those companies. So page speed and loading time are incredibly important. Érico gave a talk about how he used Gatsby.js to speed up his website and also the development process. That's an example from our kitconcept blog. Gatsby is a static site generator, and the two cool things about Gatsby are that, first, it has all the latest optimizations for page speed included, right? Everything you could imagine is there out of the box. You get a page speed of 98 or 99 from PageSpeed Insights by Google out of the box from Gatsby. The second thing is that it's built in React. So you can use the rich React ecosystem both for generating the content and on the front end, right? So we started using it for the blog page and I will briefly show you how I think a modern website should work. You see that it loads the website instantaneously. I recorded that yesterday evening in my hotel room and the Internet connection there is really, really crappy. And you see how fast that loads, right? With other screencasts that I did, I had to wait, but that one is really, really quick. And you can add nice animations to make it fancy for people, right? So that was the first group of users. I think this is what people today expect from a modern website, right? The second group, which is maybe our most important target group, are the editors, right? The editors that are working with Plone on a regular basis, every day. They're basically our ambassadors, right? If they like Plone they will go around in their companies and their organizations and tell everybody how great Plone is, right? If they dislike Plone they will tell people how badly Plone sucks, right? And that will maybe lead to a company or organization abandoning Plone, right? An intuitive editing experience is super important in my opinion. I talked about Epoz and Kupu back then, right? And I talked about how cutting edge that was back then. But if you look at editing today, we're using TinyMCE, and TinyMCE is fine. It does its job, right? But it's not that we're standing out from the other systems; everybody's using TinyMCE everywhere, right? Or other systems. We are just as good as anybody else, right? But if you look around a bit, then you see solutions like Medium.com. Medium.com is a blog platform founded by one of the Twitter co-founders, and they put a tremendous amount of work into their UX to make it really, really user-friendly and super easy for people to start blogging. And I think they reinvented in-place editing. So I will show you how easy it is to create a blog post, right? You just say create a new blog post and you start typing immediately. The first line is actually the title. The second line is the description. Then you have this add button and you can add images or tweets or YouTube videos just like that.
If you have an image you can change the appearance. If you want to add an image caption you can just click in there. It's super intuitive, right? So in my opinion this is how a modern editing experience should look. Unfortunately this is closed source, right? There are open source clones of it though, but I will come to that later. Another thing is Gutenberg. Gutenberg is a project by WordPress. WordPress used to have lots of page composition tools, right? They have something like Mosaic but with far more features, and they have three or four of those, right? And it's really impressive if you look at it. And Gutenberg is an approach to have, in I think WordPress 5, a unified core add-on product for page composition. And it's quite nice. I mean, it's not as intuitive as the Medium editor. It has far more features, as you can see here. But it's not bad. It's written in React and it's open source, right? So you can see you can add an image here, you can upload it. Basically the same that you can do with Medium, right? A few different choices, but basically that's it. So you can do the same and you can choose how the image appears. The third target group that is important is developers. I mentioned that earlier, right? If you want to attract new developers, you have to compete with all the other systems that are out there. So let's have a look at the development experience that modern JavaScript developers have. That's create-react-app. But it's similar everywhere; it doesn't matter if you use React with create-react-app or the Angular CLI or the Vue CLI. It's all the same. What you basically do is install Node and the Node package manager, then you say npm install whatever you want, or yarn install. And then you run yarn start. That will start your development server, and it will automatically fire up the browser on the left side. Now it's there. That's a demo app. And now you can edit the code inline, and once you hit save, it will automatically show up on the left side. But that's only the start, right? You can, of course, make the editor auto-save, and then it will automatically do that. And it actually works with auto-save if you want it. But the really cool thing about create-react-app is that it has something called hot module reloading, right? So the problem is, if you have a more complex application and say you're not on the front page, but you navigate to another page, and then you open an overlay, right? And say you want to change something in the overlay. What usually happens if you reload the application is that it goes back to the start, right? But together with Redux and React, it allows you to go exactly to that state that the application had. So the application can reload and keep the state that you had. So if you go to another route and open an overlay, the reload will keep the overlay open, because that's part of the state of the application. And you can even do things like time traveling, so go back and forth, right? So that's pretty cool. But we are basically competing with that developer experience, right? So most of what I showed here is actually open source, right? Except the Medium editor, which has clones. So we have all those things at our hands. We can provide users with a state of the art user experience like Google and Facebook do, right? Because they're basically open sourcing their stack.
Both Google and Facebook are completely open sourcing their core technologies, right? Because they're not making money with web technology but with ads, right? With all our data. We can provide a state of the art editing experience. For the Medium-style editor, Facebook published Draft.js, which we use for the Volto or Pastanaga editor. The Gutenberg editor that I showed is open source and is written in React. We have the ORY editor, which I didn't show, but Nathan started to work on integrating that; it's also a full page composition tool. So we have many different libraries to provide users with a far better user experience than we have right now. We can also provide developers with a state of the art JavaScript development environment with Volto, right? Because it's based on React and it does what I just showed you. So wouldn't it be great if we could take all those libraries and combine them with the stability of Plone, right? And the reliability and the maintainability of Plone. So take the best parts of Plone and combine them with the stuff that I just showed you. We had JavaScript, or we have JavaScript, in Plone, right? Our first start with JavaScript, Ramon is already smiling. So we had our start with JavaScript, right? It was Mockup and the Resource Registry. Who here likes working with the Resource Registry? Still smiling? No, just kidding. Just kidding. I promise it won't stick around, right? I will tell a bit of the story about the Resource Registry, right? I have a confession to make. I'm standing here and I've been telling you for a few years that you should all move to JavaScript. But I was really reluctant to make the transition myself from Python to JavaScript. One thing is I really love Python, right? And I was always skeptical about all those JavaScript approaches. The first JavaScript approach was KSS, right? Who recalls KSS here? Who tried it out? Okay, I never tried it. I never liked it. I never liked the approach. I never thought that this would work, right? And there were really great concepts and really, really smart people, right? Really smart people were working on that. But I was always reluctant. The same was true when I met Rob in Munich at the sprint, right? He came to me enthusiastic like he always is: I have a new thing, and this is Mockup and Patternslib, or whatever it was called back then. And he wanted to build new widgets for Plone, right? And I did what I always did, I was skeptical about it. I was like, I don't know. I actually did a project with Rob, and he was with Rok, sorry, did I say Rob? With Rok. Rok, Rok Garbas. So I did a project with Rok a few years earlier and he also already dragged everything, the new stuff, in there, right? It worked out fine and he's a super smart guy. So all good. But I was kind of a bit skeptical. And the problem is that we started to pull in JavaScript libraries and frameworks from all over the place, right? And if you have lots of different libraries, you can't just ship them individually like we used to in the old Resource Registry, right? You will end up doing 300 HTTP requests on your website. So that won't scale, not even with HTTP/2, right? So what you need to do is bundle things.
The problem is that if you use a bundler like we did in Mockup and Patternslib, and which we do today as well in Volto, you can only do that externally, right? It's something that you run on the command line. And that will break your through-the-web editing and the add-on installations, right? Because every add-on has to install JavaScript or CSS. You can't do that through the web, right? Because you need to run the bundling to get the bundle back. If you don't bundle, then you get a slow application. So it's not working, right? And Ramon and Victor were the first folks to run into that problem when they implemented the Barceloneta theme for Plone 5, right? So we sat together at the sprint, right? And Victor and Ramon were struggling hard with implementing Barceloneta. And we didn't have any way out of that, right? Because, I mean, on the one hand, there was Rok. I think his idea was just to run the bundling externally. But then we couldn't build the theme while maintaining the through-the-web and extensibility story and the add-on story of Plone. So we got into a heated discussion, kind of a fight. And in the end, Ramon, I think with help from Nathan, came up with the idea of running the bundling process in the browser, right? And my first reaction when Ramon comes up with something is, that's undoable, right? But every time I say that, he proves me wrong. So in the end, he did it, right? And that's something that you should keep in mind when you complain about the Resource Registry, right? Ramon and Nathan and Victor, they saved our asses when it comes to Plone 5, right? I can't imagine that we would have been able to deliver Plone 5 if we hadn't come up with such a solution, right? And then, at the end, when we had reached a consensus on how to do that, and Ramon had come up with a solution, right, and saved all our asses in Plone 5, Rok mentioned at the end of one of our discussions, yeah, you know what? I saw this new library called React, and I would like to rewrite Mockup in React. And I was like, what the fuck? Sorry. I was like, really, no, you can't be serious, right? I mean, we will never be able to ship anything. I'm not sure, if I were able to time travel, whether I would travel back and say, Rok, you were right, React would have been a great choice. It was way too early. React had really been out for, like, a week or so. But, yeah, I wonder how that would have turned out. Anyway, I think the time was not ready, and if you look at what we have in Mockup and Patternslib right now: we have Bower, which was declared dead, like, two years ago. Require.js was just the emerging thing back then. Now it's totally dead. Nobody's using it. And our entire stack is totally outdated, right? So I think the time was not ready for that. The Plone community was certainly not ready for that, because I can't imagine that we could have sold external JavaScript bundling to the Plone community back then. And JavaScript was also not ready, right? Because all the tools that we had back then went away, right? So we went with the Resource Registry. Though we now have the situation that, at kitconcept, even though we have developers who are really experienced and fluent in React and Angular and jQuery and whatever, we are scared to touch Mockup patterns, right?
Because the technology is just so old, right? So usually what we do is rewrite that thing from scratch in React, because it's faster than touching existing Mockup patterns. So that's the problem. Though, as I said, I was really reluctant about all those JavaScript thingies and really opposed to them, until I hit a wall myself in a project, right? And that was actually in 2014. We built an application in Plone, a large application, and the client wanted to use the content that they created in Plone for telephone support. So they had telephone support folks, and they had people on the phone, and they had questions, and they had to search and browse in the application. Search was easy. We plugged in Solr, so we had blazing fast search. But browsing was a major problem, right? It was just not fast enough. Imagine you have somebody on the phone and you need to browse through extensive content, it needs to be instantaneous. So what I always did was lazy loading stuff with jQuery, right, with an Ajax call, to make a better first-load user experience. But that did not work out, right? So I recalled that there were new frameworks that put code in the front end. And I was the technical lead back then in the project, and I had a team of people that were inexperienced with JavaScript, and I had a few days to make a technical decision that would save the project, right? So in the end, to make a long story short, we went with Angular, Angular 1, and writing custom endpoints in Plone, right? That's pretty easy, writing custom endpoints. You just create a browser view, then you set the content type to application/json, return JSON, and you're good, right? So that solved the problem for our client. We were able to provide the client with an application that was loading really fast and that allowed them to browse through it really quickly. Then something happened that's widely called JavaScript fatigue, right? The feeling that you have a new framework every single week, right? That scares people away from JavaScript. The problem is that front end technology will always move faster than back end technology. In my opinion, it came down to this: we now have three major and stable front end frameworks in the JavaScript world. So it's not that bad any longer, I think. Though front end technology will always evolve faster than back end technology, right? So what we want is a stable back end and a front end technology that can adapt to the new things that will happen, right? So, after that project, I started to build plone.restapi to decouple the front end from the back end, and to be able to expose the stability and the security of the back end to modern front ends, and have the speed and the modern libraries on the front end. That worked well. We just released plone.restapi 3.5.0. It's stable; as I told Eric, it's boring technology. It exposes everything that we value and it allows us to use the most modern frameworks.
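For readers who want to see what the custom-endpoint approach described above looks like in practice, here is a minimal sketch (the view and field names are hypothetical, not the code from that project): a Zope/Plone browser view that sets the JSON content type and returns serialized data, which an Angular or React front end can then fetch with a plain Ajax call.

import json

from Products.Five.browser import BrowserView


class ContentAsJSON(BrowserView):
    """Answer .../some-folder/@@content-json with a JSON payload instead of HTML."""

    def __call__(self):
        # Collect a small, serializable summary of the context and its children.
        payload = {
            "title": self.context.Title(),
            "items": [
                {"id": item.getId(), "title": item.Title()}
                for item in self.context.objectValues()
            ],
        }
        # Tell the browser (or the JS client) that this response is JSON.
        self.request.response.setHeader("Content-Type", "application/json")
        return json.dumps(payload)

plone.restapi, mentioned above, generalizes exactly this idea into a complete, standardized REST API instead of hand-written per-project endpoints.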
Then we had a large intranet project where Plone was just one of several different systems. The client asked us to build a unified UI for all the different apps that they had. That was not only Plone, which provided DMS and CMS functionality and social intranet functionality, but they also had a mail application, calendar, chat, you name it, right? That project was quite large, but it worked well for us. Angular 2 worked well in our project. We had a few scalability issues. The Google release policy in the alpha phase wasn't that good, because they changed and replaced all the stuff during the alpha and beta phases, but in the end it worked fine for our use case. Though there was one problem that we were pretty aware of. Plone has two major use cases. One is intranets, the other is public websites. Angular 2 worked well for the intranet use case. If you have a public website, that's a different thing. If you have a public website, what happens with an Angular 2 app, or with any client-side JavaScript application, is that you bundle all the code for the front end, and that bundle will be huge, right? You can try to cut it down, but it will still be huge. So what happens when the user first accesses the site? The browser will load the HTML page and then it has to load the entire bundle, and that takes time, right? It has to load the bundle and then it has to execute the bundle, and that takes lots of time. In an intranet application, you can easily hide that, because people need to log in and you can use that time to lazy load the bundle. But for a public website, that's just impossible, right? Another problem is search engine optimization, or whether you are actually found on Google. We started to build our company blog in Angular 2, because the Angular folks were saying server-side rendering works and you can do it. So we were really enthusiastic and I started to build the blog in Angular 2. I wanted to build it and ship it. So I did. But over two years, Victor and I both tried really hard to make it work. And we didn't, and we even had access to the Angular framework team and to people that are really experienced. But we couldn't make it work. I know that Eric Bréhault still tells us that this is possible, but it didn't work for us, right? So we somehow lost a bit of faith in Angular 2. It became clear for us that in Angular development, the priorities are Google and their internal usage first, and then the other users, right? I still think that it is a good choice for intranets, but back then it wasn't a good solution for our public website. So we didn't have a solution for public websites in Plone, but of course we wanted to use modern technologies and a modern UI as well. So it all started with a joke. In Bucharest, Ramon, Nathan and I gave a talk about headless CMS; I recall Ramon hacking on Angular 2 around the breaking changes in alpha, whatever. And we were giving a presentation and, well, we weren't really doing a pillow fight, and, Paul, that slide is just for you. We're all friendly folks, right? But different developers had different preferences, right?
And Rob always had a strong preference for React, right from the start, right? So we were discussing and joking about it, and on the stage I announced that Rob would implement a React front end, right? We were working on an Angular project, and I announced that Rob would start on a React project, and it was a joke, right? But at the Beethoven Sprint in Bonn, Rob and Roel actually started to build Plone React. Plone React does not have the problems we ran into with the Angular 2 project, where we hit a few scalability issues. I won't go into detail, but the unidirectional data flow kind of solved that problem. That was one thing that dragged us towards Plone React. Another thing was that server-side rendering works. I still don't know if that was just because Rob is a really good developer or if it just works out of the box, but it worked, right? What? Okay. So, yeah. It just works. One other thing that really impressed me: I was managing a large team working on Angular 2, right? And we were making good progress, but I was really, really impressed by the progress that Rob and Roel were making with Plone React. They basically built a prototype in one or two weeks that had lots of features, and I wasn't aware how much work that was, right? Another thing that really impressed me in React was the upgrade from React 15 to 16. They basically rewrote the thing from scratch, and the upgrade was really, really smooth. And when you compare that with the alpha and beta phase that we had in Angular 2, which might be unfair, but that was what we experienced, that was really something that impressed me, right? So when one of our clients approached us to do a public website, we just went with Plone React. The client asked us to do a portal for volunteers who help refugees to learn German or talk to the authorities. So it's basically a combination of a learning management system and a content management system, right? Our client wanted to allow the editors not only to manage content, but also to manage multiple choice questions and flip charts and all those different kinds of things, right? So we went with Plone React, and financially, I have to admit, that project was a disaster, because of course we had to put way more effort into it. But from a technical point of view, it was really great. I mean, it worked out, and the fun that we had in the project was really awesome, right? Seeing your developers being happy because they work with good technology that allows them to build things fast, that was really good to see for me, even though I didn't have the chance to get my hands dirty in that project. That was something I liked very much. So the project went fine, the client was happy, we delivered the first project, all went well. Though we had one problem: reusability. Plone React is an app, right?
And in the React world, people are used to building a product or an app, right? It seems nobody is used to building a CMS-like application that allows you to theme it and to override stuff, right? If you talk to people in the JavaScript world about that problem, they don't have it, right? Because they just build either a product or an application, but not something that you need to theme or where you need to override things, right? So that was a problem for us, that Plone React wasn't a library, right? It was a thing where you do a git clone, then you customize it, and that's it, right? Then you have to manually copy over stuff if you want to upgrade it for your client. So we needed to have Plone React working as a library, and we needed to be able to override components. As you can imagine, we managed both. I recommend going to Victor's and Rob's talks about theming Volto and the extensibility story. So, when another client came to us and wanted a public website, it was one of the leading suppliers of bakery products in Germany. So it's a business-to-business use case. They wanted a relaunch of the company website, which we had worked on for the last seven years or so with the old Plone 4 website, I think, and they wanted to do a makeover. They have an extensive database and a Solr-based search. So a larger website, and we went with Plone React there as well. Victor and I will give a talk about those two projects if you want more insights. To solve the extensibility and reusability story, Victor created a poor man's version of what we have now, to make it work so that we could use Plone React as a library. In that project, we were able to use Plone React as a library for the first time. Then, at the same time as we worked on zeelandia.de, we had a Google Summer of Code project from Nilesh to build something like the create-react-app that I showed you earlier, which allows you to just call one command and it creates some kind of skeleton, like a theme and policy product, right? And we wanted to have something like that for Volto as well. So he worked on that, and in the end it worked out quite nicely. He will give a talk about that, so I recommend going to that talk as well. So, coming back to the expectations that I talked about earlier. We successfully built a large intranet application with Angular 2. We successfully shipped two public websites with React. And I would like to go through the expectations I raised earlier and see whether we are there yet, right? So, exposing Plone's core assets, which I talked about: we have a stable and mature core, which is reliable, which is secure. Thanks to Philip and others, it runs on Python 3. I think we're all good there. As said, Plone React and plone.restapi are boring technology. So, check, in my opinion. The second group is users. We can have blazing fast loading times.
Either with Gatsby.js, which was another Google Summer of Code project, where you can combine Gatsby.js with Plone, or also with Volto, right? It's super fast to navigate between pages because Volto has server-side rendering. And of course we can use the latest UI and widget libraries that we have, which is Semantic UI in our case. Again, go to Rob's and Victor's talk if you want to know more about it. The other thing is the editor experience which I talked about. We have the Pastanaga editor, which is based on Draft.js from Facebook. We have the ORY editor, which Nathan started to integrate into Volto as well. And we have the Gutenberg editor, which we had a look at, but three individual developers came back and said that the code is not flexible enough and the code quality is not good enough for our use case, right? So I will briefly show the editing experience from the Zeelandia site. It's just a quick sneak preview. Please go to our talk or to Rob's talk if you want to see the full thing, right? So that's your logged-in user. You go to the add menu and here you have... oh crap, sorry. So that's the logged-in menu. You go to the add menu and you see basically the same thing that you see in the Medium editor, right? You can type right away. You can type in a title. The first line is the title. The second line is the description. Then you can just hit the return button after that and you get to the next field. So you can paste or enter content. If you want to add a photo or a YouTube video or other content, you can just click that add button. It will upload the picture. That takes a while, it's four megabytes or something, that picture. You can change the appearance of the picture. You can drag it up and down if you want. And we will add far more... oh yeah, you can do inline styles. You can add headline styles if you want. That's all inline. Same as both Medium and Gutenberg. And if you hit the save button, you basically see the exact same view, right? So this is in-place editing how it's supposed to be, right? The other group that I mentioned is developers, right? I already showed you create-react-app and what it can do, right? create-volto-app is basically the same. We can do the same, right? So we already have that. So check. So basically, I think we're at a great point in time for Plone, right? I mean, we have a huge set of existing libraries and open source frameworks at our hands in the JavaScript world. We have a basic application, which is Volto, which allows you to use the latest frameworks and libraries. We got the stability of Plone exposed via plone.restapi. So we got the stability, we got the experience as a community. We have the foundation and all the good things that I mentioned earlier, right? So we have all we need to get started, right?
So if you want to start today: as I said, go to Rob's talk about Volto, to Victor's and Rob's talk about extensibility, to Victor's theming talk; Victor and I will talk about the two Volto use cases that we have. Ramon gives two talks about Guillotina, if you want a super-scalable back end for Volto. Nilesh will talk about create-react-app and AJ about Gatsby.js, right? So lots of cool things to learn. Rob, Roel and Victor gave a training on Volto during the conference. If you're interested in a Volto training, please come talk to us. We can also imagine giving trainings if people are interested in that. If you want to try out Volto, go to volto.kitconcept.com. You need to log in; log in as admin admin. It's a Docker image, so you can do whatever you want, it will be purged. Don't try to register; we did not add a mail server there. So just use admin admin. Try it out. Let us know if you run into any problems. Though it does not run on the latest version of Volto, it's a previous version, but yeah, give it a try. Let us know. So we are right now at a stage with Volto where we're still looking for early adopters. As said, we did two or three projects with it, if you count the Angular one in, and we are looking for people and companies to join our efforts, right? So if you're a company or a developer who wants to pick up Volto, please come talk to us, talk to Rob, talk to Victor, talk to me, or to anybody else who's involved. We're really happy to help you. If you want a Volto training, also talk to us. We're happy to share everything we have. Rob and Roel uploaded their training to the Plone trainings site, right? So you can just go there and try it out. If you're a client and you're interested in doing a project, please talk to us or to other companies that are interested, right? We as a company, and we as developers, are really interested in other companies and developers picking up Plone React. And that was about it. Thank you. Thank you for the nice presentation. I think it's very good to see how the future evolves in the development. But in the last few months, I've talked a lot with other universities and with other larger companies. And the first feedback I get from the universities that are now using WordPress is that the first thing they will deactivate is Gutenberg. Why? Because in corporate identity based solutions, there's one thing: you should never allow the editor to modify the layout of the page. Plone has been used mostly in environments where corporate identity, large enterprise usage and so on matter. A better user experience than we have now is important. But things like Pastanaga, the editing experience and everything, are just one element to improve. The other thing is to make a consistent improvement of the overall system, because we have so many features like collections, the workflows and so on in the system that also need to go in line with that. So the question for me is, how will that go with Plone and Volto, or will Plone then no longer be the choice for larger companies? What's the target audience you're aiming at? Okay, let me split that.
So the first thing is regarding corporate identity, and to be honest, I couldn't agree more on that, right? When I work with clients, I usually try to reduce the amount of things that regular editors can do as much as we can, right? And that was also one of the reasons why we started with a really basic text editor at first, right? Our first idea was to create just a Medium-like editor, right? One that just allows you to create paragraphs, images, YouTube videos, nothing more, right? And if you come to our use case talk, you will see that for Zeelandia, I actually showed them all the features that we have and can provide them with, right? And they basically said, oh no, you know what? We don't need that. The front page doesn't need to be editable, right? At all. I mean, we will just show the news there; in the end we made the text editable, but not the entire page, because they were working with a design agency, right? So if they actually want to change the content on the front page, they have to ask the design agency first to get a new iteration, and then they can change it. So they don't need it, right? And for the other parts we built quite complex widgets, but they all came from the design agency, right? The design agency gave us a long example of a page, right, with all the elements, and we just created exactly the same elements that they had, and that was lots of custom development, which we couldn't push upstream because it was custom for that theme. But I believe that we don't need all the flexibility that a solution like the ORY editor or Gutenberg provides, that we have to add that to Plone, and maybe we shouldn't even, right? So we're all on the same page. The second part of your question is about large organizations, right? At kitconcept, we have a few universities as clients as well. We know their use case. And when I go to a university or other larger organizations that have existing Plone sites and have to run like 600 instances or 300 instances, I don't go there and tell them, hey, jump on that, go with Volto, right? That's not the right case. They have to wait longer. They have to be more conservative on that. Though I think that especially things like Gatsby, right, universities could push very hard, because that was something that helped a lot at the university, being able to publish subsites, right? They can just put a site out there and they don't have to maintain it after they have built it, right? So I think that even universities or large organizations can actually use parts of what we have here. And one of our plans was always that we have React in core, right? So you can basically grab components from Plone React and integrate them back into standard Plone if you want, right? It's not that hard, and we are open to doing that at kitconcept. And in our group that's pushing Volto really hard, we want to focus on Volto and Plone React, right?
And I try to convince all new clients to go with Volto, but that doesn't mean that we don't do an old-school Plone project if that's the better fit for the client, right? So in the end, every client and every developer and every company needs to decide which way to go. But what we want to offer with Volto is a way to really build cutting edge, state of the art websites, which might not be what a university needs, right? At that point. I think in the future they will, because things will evolve further, and in five or six years there's no way, in my opinion, that universities can still ship old-school sites, right? But they have more time, right? And they have different tradeoffs. So there's no easy solution for that. And Volto is just one option that you have, right? Or Gatsby, or all the other things that I mentioned. So just to follow up on that: I totally understand that for big high-paying clients their branding is often important. Then you have the smaller sites, where you've only got one or two editors and they kind of want control of the layout and so on, and don't want to have to come back and keep paying for it each time. So, I mean, I know there are two competing ways of doing that. Is there a plan to make those two different ideas work within it, or is it only ever going to be, let's make it simple and locked down? Absolutely. So, I think I had that slide last year. I really love that idea of agile development, that you first build a scooter and then a bicycle and then a car, right? And that was always my idea. I put those pictures on the wall at our office when we worked on Volto, right? I wanted to have the Medium editing experience first, then being able to add more things like images and place them. And then, at some point, also allow things like Gutenberg does, right? More complex parts of the website, right? So it's totally on our roadmap, but I'm not 100% sure if we really want that, right? It depends really on the use case and on the clients. We could build Gutenberg, right? We could try to match that, but actually that's not what I'm aiming for. I think that user friendliness is more important than features. And the Gutenberg editor, I think, is a good example of an editor that has all the features, but that's not what, at least personally, I'm aiming for, right? I think for Plone we need something in the middle, between maybe the university use case and a super complex page layout, right? So it's definitely on our roadmap, but we have to iterate and reevaluate all the time to see where Plone fits in. Okay, we have to make this a little bit quick because I think we're out of time. I actually have another question, or I had the question before, but you didn't answer it. The one thing is the editing experience. And as I said before, it's important to get a better experience than we have today, but editing is just one thing. Plone is about content management. So it's all the other stuff.
So, giving the keywords for a content type, setting from which date to which date the content should be presented, we have all that. Doing the workflows and everything. And even though the content management features are in there, having them in a way so that the user experience of managing them is good is also one thing we should improve. And is there a vision on your side for how to get that to the people in an easier way? That's super hard. I mean, imagine having this in-place editing that I just showed for the page, for an event, right, where you have to choose the date. How would you do that in place? Or the publication date, right? Well, I mean, you don't even show the publication date, right? So where would you have that in-place editing? The only way to do that is to have settings like we have and to be able to edit those. And this is what we have, right? The first iteration that Rob built was basically the same editing experience that we had in Plone, right? And we matched that. I think Volto pretty much matches the core functionality of Plone. So we have all that, and we're open for suggestions on how to improve it, but that's a really hard problem. And I think that's independent of the question of whether you want to use Volto or Plone, because, as I said, we matched the functionality already. Thank you very much.
When Plone 1.0 was released over a decade ago, those were different times. There were no smartphones, JavaScript was nothing more than a way to animate elements on a website, and open source was more a niche for nerds than a business. Fast forward to 2018. Mobile is everywhere and JavaScript became the predominant programming language for the web. JavaScript-based web technology is used to build not only websites and web applications for the browser but also desktop apps, native mobile applications and virtual reality apps. Open source became mainstream and some of the world's largest corporations make large parts of their web stack available to the public. We live in exciting times for web developers! Though, Plone is a mature open source project with more than 500k lines of code, a wide variety of add-on products and a huge ecosystem of companies, developers, and projects. Therefore, the modern web is a huge challenge for us. More than four years ago, we started to develop both a vision and real-world software to bring modern web technology to Plone. The starting point was the development of plone.restapi in 2014, to allow the use of modern JavaScript frameworks on top of Plone. Since then, apps and SDKs have been developed for Angular, React, VueJS and others. The project that gained the most traction lately was Plone-React, a new frontend for Plone written in pure ReactJS. Plone-React implements a new UX/UI framework for bringing a better user experience to our editors and users. At the same time we worked on implementing Guillotina, a new, plone.restapi-compliant, super-scalable, asynchronous Python 3 web framework that can be used as an alternative backend for Plone-React. Exciting times for being a Plone developer! This talk will present the current status of the tireless work of many individuals in the Plone community over the last years. It will present a common vision of how those projects can come together to shape the future of Plone.
10.5446/54831 (DOI)
Hello everyone. This talk is about my first Zope programming. I am Atsushi Odagiri; people call me aodag. I have been using Python since around 2000. I have mostly used web frameworks such as TurboGears, Pylons, BFG and Pyramid; Django is fine too, but I had not really programmed Zope itself before, so today I want to go through Zope programming. First, what is Zope? There is the Zope application server, the Zope Toolkit and the Zope Foundation. The Zope application server ships with the Zope tools, the ZODB, ZServer and the ZMI, the management interface. Zope is an object publishing environment: it traverses the URL to an object and publishes that object. The URL specifies an object and a browser view. There is also the Zope Component Architecture, which came from Zope 3. In this talk I will introduce both the Zope 2 style and the Zope 3 style, using Zope 4. Zope 4 runs on WSGI and on Python 3, and it is very easy to set up. In setup.py you give the name, the version, the README.rst as the long description, and the license. A Zope product is conventionally a package inside the Products namespace, so it is a namespace package. Because the product contains page templates, assets and JavaScript, you set include_package_data, and install_requires lists Zope itself. Then I use virtualenv: create a virtualenv for the project, activate it and install the project into it. Next, the WSGI server. First run the mkwsgiinstance command; it creates the ZODB and the configuration files. Then the runwsgi command runs the Zope instance. Now let's start. First I create a model class, the tutorial's BankAccount class. It has a balance attribute, and the balance starts at zero. To manage it in the ZMI, the class inherits from SimpleItem, and SimpleItem requires an id attribute. Next, pages. Zope uses Zope Page Templates, and Zope 4 uses the Chameleon template engine. Page templates use TAL, TALES and METAL, and Chameleon adds the dollar-curly-brace notation as a shortcut for expressions. The template shows the balance property through an expression on the context; it is loaded with a PageTemplateFile and used as the default view. Then access control. There are three kinds of access declarations: public methods can be called from the web by anyone, protected methods require a permission, and private methods cannot be called from the web at all. The declarations are made with a ClassSecurityInfo class attribute, for example declarePublic or declareProtected, and the InitializeClass function enables them on the class. Next, a constructor for the ZMI: an add form displays the form, and an add function creates the BankAccount in the context; returning OK becomes the response body. The constructor and the model class are registered with registerClass in the product's initialize function, so the Zope application server picks them up from the Products namespace. Then the BankAccount gets a deposit method, which adds an amount to the balance. The deposit method can also be called through a web form: when a REQUEST is passed in, the method knows it was called from the web. The method is protected with a security declaration, and the permission only permits users who may use the display form.
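To make the Zope 2 style described above more concrete, here is a rough sketch of such a BankAccount product (a reconstruction, not the speaker's exact code; class names, the permission string and the constructor name are illustrative): a persistent SimpleItem with a security-declared deposit method and the registerClass call that hooks it into the ZMI add list.

from AccessControl import ClassSecurityInfo
from AccessControl.class_init import InitializeClass
from OFS.SimpleItem import SimpleItem


class BankAccount(SimpleItem):
    """A tiny persistent object holding a single balance."""

    meta_type = "Bank Account"
    security = ClassSecurityInfo()

    def __init__(self, id):
        self.id = id
        self.balance = 0  # every new account starts at zero

    security.declareProtected("View management screens", "deposit")
    def deposit(self, amount, REQUEST=None):
        """Add an amount to the balance; callable from Python or from a web form."""
        self.balance += int(amount)
        if REQUEST is not None:
            return "OK"  # when called through the web, this becomes the response body
        return self.balance


InitializeClass(BankAccount)  # applies the security declarations to the class


def manage_addBankAccount(dispatcher, id, REQUEST=None):
    """Constructor used by the ZMI add form."""
    dispatcher._setObject(id, BankAccount(id))
    if REQUEST is not None:
        return "OK"


def initialize(context):
    # Called at startup for packages in the Products namespace; registers the
    # class and its constructor so it appears in the ZMI "Add" drop-down.
    context.registerClass(
        BankAccount,
        constructors=(manage_addBankAccount,),
    )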
Does the form work? Here is a short demo: register the product and add a BankAccount through the constructor form; the object is stored in the ZODB. So far, that was the Zope 2 style. The second style is the Zope Component Architecture, the style that came from Zope 3. Here you use a ZCML configuration file, an interface, and a simple view with a template. The ZCML configuration file registers a browser view for the interface, and the browser view implements a __call__ method. In the URL, the view is addressed through the @@ view namespace, because the view is an adapter: the view adapter wraps the BankAccount. The BankAccount itself can stay a simple Python class, and the browser view together with a template file renders it. That is Zope programming.
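For the second, Zope Component Architecture style sketched above, a browser view plus its ZCML registration might look roughly like this (again an illustrative reconstruction, not the speaker's exact code; the template file and permission id are assumptions):

from Products.Five.browser import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile


class AccountSummary(BrowserView):
    """Render a BankAccount via a page template instead of a method on the model.

    The view is reached through the @@ namespace in the URL, for example
    http://localhost:8080/account1/@@summary
    """

    template = ViewPageTemplateFile("summary.pt")  # hypothetical template file

    def formatted_balance(self):
        # helper the template can use as ${view/formatted_balance}
        return "%.2f" % self.context.balance

    def __call__(self):
        return self.template()


# The matching registration in configure.zcml would look roughly like:
#
#   <browser:page
#       name="summary"
#       for=".bankaccount.BankAccount"
#       class=".views.AccountSummary"
#       permission="zope2.View"
#       />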
Let's start Zope programming. I introduce two programming styles with a simple application.
10.5446/54833 (DOI)
I'm from Cologne, Germany, from Interactive. And this would normally be the point in my presentation where I do some advertising, how great we are, what great things we do, but I will spare you that. So it's an ad-free presentation. And also a code-free presentation, not a lot of code to see. And it's my very first talk at the Plone Conference. So thank you very much for being patient with me. Okay, so before I jump into the subject of privacy experience with Plone, I would like to address the question why privacy actually matters. And my very first answer to this question would be that it's an important part of user experience. When we talk about user experience, normally we talk or think about usability in the first place. How usable are our products, our websites, our applications for the user? How is the ease of use of our website? But of course, there's a lot more to user experience. User experience is about the emotions and about the attitudes of a user towards our product. So there are a lot more dimensions to user experience than just usability. And in my opinion, privacy and privacy experience is one of the major components of user experience. There's also, of course, performance, design, security, and a lot more aspects to user experience. But today, I want to concentrate on the privacy experience. So my talk is mainly about user experience. It's not about legal stuff. And that's normally what interests me in my projects: how can I keep my user happy? Or how can I make my user happy? And in the same sense, if I make my user happy, normally my business metrics are happy. So normally in projects, that is our major goal, to have a happy user. Okay, so users care about user experience. They care about privacy, sometimes more, sometimes less. And for sure, right now, they don't get around the topic, since large-scale data breaches appear almost every day in the news. Stories of data breaches reach us on a regular basis. And those are just some numbers of recent data breaches, with huge numbers. The most recent one, for sure, is the Facebook scandal, where 50 million accounts were compromised. We had the Facebook Cambridge Analytica scandal, which resulted in a loss for Facebook of over $130 billion in market value. And we all probably remember the Yahoo scandal, or the several Yahoo scandals with compromised data. Or also, just one example from last year, from 2017, the Equifax data breach, with huge amounts of data released to the public or compromised. Just to give you an idea, over 146 million people's names and social security numbers and so on were compromised, so more than half of all Americans were affected by those data breaches. And privacy as a subject is now, of course, a global trend. I mean, I'm from Germany. I'm from Europe. And 2018 was for sure a year where privacy came into our minds and our discussions basically every day, with the new GDPR regulations from the European Union. But after all, it's a global trend. I just found this map. It's about privacy regulations worldwide. And the colors represent how heavily those parts of the world are regulated. Red is heavily regulated. This orange tone is still robust. And then yellow and green are parts of the world where regulations are not as strong. And you see that many parts of the world are already heavily regulated. And as I said, for Europe, 2018 was a major step with the new GDPR. And I think many people here are from Europe.
So I don't need to go into the legal details of that. Maybe, who's from Europe here? Yeah, so most of you know what I'm talking about. Who's from North America? Okay, still a few people. And Canada, for example, is one place where they also had recent amendments to their privacy policy and some amendments to their legislation. And also in the United States, it's becoming more and more a subject of everyday discussion. But also here in Japan, we have regulations now, amendments to existing regulations from 2017, which gave a pretty good privacy standard on the web, or for online users. Also in Australia, it's pretty good. So in Europe, it reminded me a bit of the year 2000 bug. We didn't talk about anything else anymore; say, starting in March or April 2018, all companies had to deal with GDPR. And there were a lot of jokes about it. And you could only take it as a joke, I think, because with everybody you talked to, it was GDPR all over the place. So finally, I had this feeling that I couldn't hear it anymore. But the year 2000 bug was a problem that went away once the year 2000 passed. And now it's just in our memory that something happened, but that's of course different with privacy issues. The regulations stay, and the topic remains a current topic, and we have to take care of it. So there are different approaches to the privacy experience, to the privacy issue, around the world, not only on a legal level, but also on a cultural level. And I just pointed out some here in this graph, some sliders. And you can think about your country or the region of the world where you belong. For example, do you have an opt-in culture or rather an opt-out culture? Personal data ownership: is it more the individual, the person? Does the data belong to the person? Or is it more that data belongs to the service provider? Or, in your country, is privacy regulated rather through hard law, through regulations, or is it more soft law? Does privacy actually have its own regulations, as a matter of law itself? Or is it just part of other laws, of commerce laws or copyright laws or something? Is privacy regulation decentralized or centralized? Or what's the culture in your country: do you trust the government more to protect you from data breaches? Or is there more trust in businesses? And if you put all those sliders more to the right side, or completely to the right side, you would probably have an American view of privacy. If you put all the sliders to the left side, you would probably have a more European view of the privacy issue. And other regions of the world might slide from the left to the right. So, talking about the United States, to give you an example of what changed in the culture or in the view of privacy: it started, or well, it didn't start there, but the Facebook scandal was for sure one major blow where people saw, okay, something is wrong, and we have to address the issue even on a legal basis. And some people in Congress said, Wild West times in social media are over now, and we have to regulate it. So for example, if the Democrats take over the House, which I guess will be decided today, they want to introduce a new Internet Bill of Rights. And you don't have to read the fine print here; it's just the idea that they really made a legislative proposal now, which looks a lot like the European approach. In some parts it's different, but still the direction, the way they want to protect online users, is more or less the direction the Europeans went.
So the question is, we have all these different approaches to privacy on a cultural level, on a legal level, and what follows for us as a global open source community? And I think problems could arise because we structure our work with different cultural approaches. We write code with the different legal backgrounds of our countries, and we assume that everyone we code with works the same way we do, but probably that's not the case. And at the same time, I think we have a huge responsibility, because we power a large part of the open web. Depending on what statistics you look at, and this is just done with BuiltWith, you see that between 50 and 70% of all websites are powered by open source solutions, with WordPress, Joomla, Drupal, and so on. So I think we have a huge responsibility as a community, and the question is what we can do to ensure that we are the good guys and that we stay the good guys. And I think one thing is awareness, so that we are aware of this responsibility, and that we are aware of privacy as an important issue that we have to take care of, and that we know that users want privacy and they want to know what we do about their privacy. I think another thing would be that we take privacy as an opportunity rather than a legal threat and something that we just have to do. It could even be a business opportunity and a differentiator towards other software solutions, towards closed source software, and we could distinguish ourselves with this approach. But then the question is, are there any universal privacy standards away from the legal perspective? We saw already that on the legal side we don't find common ground, so is there something else out there, universal standards? And if you look at other open source communities, other open source CMS communities, we see that the WordPress community, for example, tried to establish such universal standards. In June 2018 they published those 11 principles for privacy, put them up for discussion, and said, look, this could be one way to have a common view on privacy among all open source communities, maybe. And again, we don't need to read the fine print here, but I gave the source here and it's really interesting. If you are interested in the subject, look at it, and maybe it would be a starting point for us, for the Plone community, to work on that. Another interesting thing they did: they developed a privacy impact assessment for site owners, for developers, for product owners, where you can look at your project and your software and see what privacy issues you have, what standards you meet and where you should still work on it. And I think that's also something very interesting we should look at. I put it in small print here, so we don't have to go into details, but just as a hint, there are already very good discussions and fundamentals in other communities we could build on, and that's one of them. So the WordPress community is by far the most active on privacy issues. They started with a GDPR compliance team and they wanted to make the WordPress core GDPR compliant, which they did, and they really tried to embrace privacy with a 360 degree view and look at all the issues that you encounter when you look at privacy. And then they developed a roadmap of what to do, and it's also very interesting to look at all the subjects they collected that concern privacy. They then reached compliance in the core. They have a privacy notice, basically a page where site owners can just fill in the blanks and have a privacy notice.
They have the data export function that the GDPR requires, a data erasure function that the GDPR requires, and so on. They also integrated the privacy by design approach that the GDPR asks for; I will get back to that later. The interesting thing is that they also provide documentation for developers, for site owners, for everyone concerned with privacy, on their website, including guidelines on how to write code and how to make your add-ons GDPR compliant. Those resources are very interesting. I put the links on the slide, and anyone who is really interested in the subject of privacy has a very good starting point with the WordPress community. The screenshot shows their export personal data tool, so that users of a WordPress site can export their personal data and take it to another system, and this one shows how personal data can be erased from a WordPress website, which the GDPR also requires: the right to be forgotten.

A short look at Drupal. I went to the Drupal Europe conference this year in Germany, and there were six talks about privacy experience and GDPR, so in their community it is also a big subject. They also started with a GDPR compliance team, worked on making the Drupal core compliant, wrote documentation, and they have more than 20 extensions or add-on modules to make Drupal compliant. I found that very interesting in comparison to the other communities. Joomla started the same way: they did the GDPR compliance work first and then developed a whole privacy tool suite, with something they call a privacy dashboard, a health check and several plugins to help you ensure privacy for your website, plus documentation for developers, something they share with the other communities. Really impressive is their collaboration space, where 580 interested people joined to work on the add-ons and privacy tools they develop. And one thing they did really well in my opinion: they used it for marketing. They called a whole release, the 3.9 release, the privacy release, so they could tell the public that they did something, and they knew how to communicate this to the world.

So let's turn to Plone and see what privacy experience we offer our users. I started by googling "Plone plus privacy", then I went to plone.com, to plone.org, to the community forum and so on, just to see what is out there. On plone.org I only found the privacy policy of the site, and it looks like this. I think it would need a little bit of love to show users that we really care about privacy; this is more like "I pasted the text somewhere, that's it, take it". I think it was even made with a free privacy text generator. That is the only thing I found on the website. In the Plone documentation there is also nothing; even in the section for Plone developers, a search for privacy doesn't give any results, so we don't talk about it at all. On plone.com the word privacy is not found a single time either. Then the forums: I searched for privacy, and there I finally found something interesting.
I found a discussion there and the collective.privacy tool that was proposed, and I think with that we are on a good way, but I can't tell you much about it. Matthew will say more in his talk on Thursday about privacy best practice in Plone, so something similar to what I'm talking about, but he will of course show collective.privacy in detail. That is a good start, I guess. So let's look at what else we can do and where we are.

Privacy and Plone development: what has been done? We have collective.privacy. Probably a lot of us have developed our own code snippets; at least providers in Europe had to publish cookie consent messages and so on, so we had to use third-party tools or develop our own code. There is probably more, but nothing else I am aware of. So let me add some suggestions for discussion. The first thing is a kind of assessment, maybe following the assessment from the WordPress community I showed before: that we really look at what the status of privacy is in the Plone core, write it down and make it public. We talk about security and how strong Plone is on a security level; we should focus on the privacy part in the same way and then publish it, because from my experience that is what people want to know. In the GDPR discussions earlier this year, my customers asked: which cookies does Plone set, how is user data stored in Plone, is there a possibility to access user data and export it, and so on. That is what our customers ask, but there is no documentation about it. So we could do this assessment and then write it down.

I think we could also follow the example of the other communities and give some guidance for developers: how we should develop for privacy and privacy by design, which principles to follow, and maybe there could even be a certification for add-ons stating that they follow privacy standards and principles. And of course we could develop more features, for example for GDPR compliance, again following the other communities: data export features, data erasure features, privacy notices. For example, we have this default accessibility page in Plone where we say what our standards are in the core; we could do the same thing for privacy, a default page saying that Plone adheres to the following principles and follows the following rules. Then, of course, the question is what about all the plugins and all the themes: how can we make sure that they follow privacy principles? The first thing would be to give people guidance and documentation on what to look at. It is a big project. If you look at WordPress, for example, they created a roadmap; it is a process over several months or even years, nothing you can do in a one-time shot. So if this is something we really see as important for Plone and for the community, we could have a roadmap and work through the different issues. On privacy by design, I just want to show these seven principles without going into detail here. It is a framework that was developed in Canada back in the 1990s, so it is nothing new, but it came back to our minds with the GDPR.
This is also a really good starting point if you want to work on privacy for Plone. Then let's switch to communication and marketing. What has been done there? Well, I don't know; I didn't find anything in my searches and I didn't hear anything, so maybe somebody has an idea or can add something I'm not aware of, but I think we haven't done a lot in this field. Here are some suggestions. We should talk about privacy. I'm trying to do it now, and I want to encourage you to do the same at other conferences and other venues, and to make clear to the public that privacy matters to us and that we do something about it. Talks at conferences, documentation on our website and so on would be a starting point, and, as I said before, something like the accessibility statement, but for privacy, in an empty Plone site would also be a good thing.

At the beginning of my talk I spoke about cultural and legal differences between countries and parts of the world, and I think for us as a community and in our communication it would be good to make clear that we see privacy as something positive. When I showed the example of the privacy notice on plone.org, your first feeling is probably that this is not important to them. We should change that. We should have a different cultural approach, a positive one, and really see privacy as something that can be good for us and, as the main point, for our users, not only as a legal constraint. Use transparency and privacy as a differentiator, so we can really show that we are the good guys.

Then, privacy in Plone community work: what has been done? Nothing that I am aware of, so here are some suggestions too. In the first place we need people, people who are interested in this subject. I can only propose that, if somebody in the audience is interested, we use the time we have here in Tokyo and meet in open spaces, or use the sprint days, and work on parts of the privacy experience I mentioned, or other parts you are interested in. Maybe we can talk about that in an open space format in the next days or during the sprint. Also, the WordPress community has communicated pretty openly that they are interested in working together with other CMS communities, and there is an ongoing exchange between WordPress, Joomla and Drupal. In Germany, Plone is also represented in the so-called CMS Garden, an organization where open source CMSs have joined together, and that would be an opportunity to bring our forces together with them and really work in the same direction on privacy issues.

So I think it is clear that we are the good guys. Let's show it to the world and start the work we have to do. If we have to prioritize, I would say let's first do things that are visible to users: what users of our communication tools and of the Plone website see first, and also what they see when they install an empty Plone site. There should be some hint that privacy matters to us; that should be the first thing we do. Then we should of course assess and document everything that is privacy relevant, so that people know what they have when they work with Plone. And then we could also give Plone a whole privacy branding.
It really is a differentiator: people would see that with Plone they get a system built by people who care about privacy. So, thank you very much. I think a few things are open for discussion. For example, I would be very interested in your opinion on those universal privacy principles; that could be something we also work on and contribute to. I would be interested in your view on positioning Plone with a privacy branding. And of course I would like to know whether you are interested in working on privacy issues in the next days, in open space formats for example. If you have any opinions on that, I would appreciate your feedback. Thank you very much.

If you have already worked on privacy declarations for your own or for customers' websites that are understandable and easy to read, could we reuse or translate some of them, even if they were in German? Because part of it is also how to write it: it has to be understandable and not just legalese. Do you already have experience and material that we could use?

If you want to write a privacy statement or notice for a website, you really have to look at that specific website. It is not only Plone as the system that counts, but all the add-ons and third-party systems connected to the site. We relied on lawyers and on legal texts that we then used for our customers. But all our customers who did it themselves came to us and asked about Plone: they asked about the basic system, how user data is stored and how privacy is handled in general in the core. That is something we should think about. But I don't think we can guarantee that an individual website will be GDPR compliant; that is not our job. A legal notice from our side would only concern the Plone core, I think.

There are some websites that try to automate those privacy statements a little by looking at the stack and the plugins you are using and combining privacy statements from each of those. Do you think there's scope to do that within Plone, like having some system where each plugin has a privacy statement, maybe?

I think that could be a good idea, yes. But we have a whole large ecosystem of add-ons, and I think you will never have privacy statements in all of them, and maybe you give a false pretence, a false sense of security to a site owner, if you tell them that all the privacy notices from the add-ons are collected in one place and their site is therefore compliant. It could be something you tell add-on developers, or even a kind of requirement for the future, that there should be a privacy statement saying what happens to data, whether the add-on transfers data to third parties and so on. But I don't think that can be the only or the main solution to make a site compliant.

Another question: if you just have a plain Plone site, you are not collecting any information, you don't have forms, but you do have the users who are using the system. How important is it to do anything to make it GDPR compliant, and if your site is not based in Europe, how much should you worry? I think you should always say something about privacy, wherever your website is hosted and wherever your target audience is.
I'm not really focusing only on GDPR compliance here; I wanted to show with my examples that privacy matters everywhere in the world and that users everywhere care about it. So I would see it from this more universal perspective: we should be able to say, look, if you use Plone you have a system with strong security and a system where we take care of privacy, and it shouldn't be focused only on the European GDPR issue.

I was wondering how the technical nature of Plone and the ZODB plays into privacy. When you edit anything, old versions of objects remain in the ZODB and stay recoverable; besides that, other functionality is built on being able to reference older versions of content. So if you want to give someone all the information you have about them, you are probably only giving them the current versions of everything, and if you want to delete something, can you delete all the information you have about someone without messing with editing histories or object version graphs in the ZODB?

I'm not a developer, so I'm sorry I can't answer technical questions. I'm more interested in the approach to privacy and what we could do, so I can't say anything about versioning or the technical implications there.

My understanding of the legislation with regard to versioning is that there is some ability to say, well, here is our backup policy. There is the right to be forgotten, but there is some leniency with regard to versioning and backups. So if you say, okay, we deleted the data, but it is still in backups for the next six months, you are not completely lost; you don't have to wipe all your backups, otherwise pretty much everyone would be in trouble. Don't take this as legal opinion, it is just my understanding.

I don't think we have to wipe all the backups either. It is probably not necessary to keep backups as an archive for half a year anyway; you only need them for a shorter time period for restoring data, and that is totally fine. Technically, though, there are some things we could build, because there are requirements from the GDPR, and it also makes sense for other regions, to make it possible to find data for specific users and delete it, and to make that easier. Right now it is almost impossible. The ZODB is usually packed after one week and then the deleted data is gone, which is fine, but we need some ways to identify user-specific data, like form data or other things we have in the database, and there are some gaps there. I have not seen much so far, so we could have some core functionality for that, and we could also look at other systems, because there is a lot going on; in WordPress they did a lot.

Exactly, that is what I showed with the WordPress examples: they already have those tools, and that should or could be something we work on too. Thank you very much; we are running out of time. I'd like to see you again in about one hour. Thank you again. All right.
Privacy experience is about building trust by giving individuals transparency and control over the processing of their personal data. Beginning with an overview of the changing data protection and privacy landscape as well as of different cultural and legal views of privacy, we will share some ideas on how to improve privacy experience in Plone marketing and Plone development. We will share some insights from other Open Source CMS: How are they working to improve “Privacy by Design” for their communities? How can we contribute to privacy in Plone and the open web community?
10.5446/54773 (DOI)
Yeah, I guess we can begin now. Welcome to the second day of the React and Volto training. We didn't completely finish the React training yesterday; there are a few points left, but that won't take too much time. So today Alok will start the session again with the last few points of the React training, and after that I'll take over and show you how to work with that nice piece of software that is Volto. As yesterday, if you have any questions, post them to the chat, and in case you're coding along with us, let us know when you're done with the specific sections of the training so we have an overview of whether we should go faster or slower. Enjoy the training. Over to you, Alok.

Okay, so we'll do a quick introduction to routing; it will not take much time. Yesterday we went through React and Redux completely, and today we start with routing, that is, how to set up client-side routing in our React app. For client-side routing we need a dependency called react-router-dom, so you have to install it. The router package mainly gives you two things: one is the BrowserRouter and the other is the Route component, which you can see we are importing from react-router-dom. The BrowserRouter is the provider; you have to wrap your app component, the main entry point of your app, with it. Then you can set up your components with Route. A Route says: if the user goes to this path, for example slash plus a slug, then this is the component React has to render. So this is the boilerplate: you wrap your whole app in the BrowserRouter and then define the routes. A Route takes mainly two properties, the path and the component. The path is the route the user visits, and the component is responsible for the view: if you go to slash, we show the FAQ component, and if you go to /faq/:index, the FaqItemView component. I will explain in a moment what FaqItemView is. You can also see the exact prop: if you do not provide exact, the router considers every path starting with slash to be a match, so exact means the path has to be strictly equal to this value for the component to be shown; otherwise it falls through to the other routes.

So, to show the routing we will create another component, FaqItemView: when you click the view button of a particular FAQ item, it shows you another view which contains the question and the answer.
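Roughly, the setup Alok is describing could look like this. The component names FAQ and FaqItemView and the /faq/:index path follow the training example; the store import and the rest of the wiring are just an illustration of the pattern, not the literal training code:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { BrowserRouter, Route } from 'react-router-dom';

import store from './store'; // the Redux store built in yesterday's session (assumed name)
import Faq from './components/Faq';
import FaqItemView from './components/FaqItemView';

const App = () => (
  <Provider store={store}>
    <BrowserRouter>
      <div>
        {/* "exact" makes "/" match only the root, not every path that starts with "/" */}
        <Route exact path="/" component={Faq} />
        {/* ":index" becomes available as props.match.params.index in FaqItemView */}
        <Route path="/faq/:index" component={FaqItemView} />
      </div>
    </BrowserRouter>
  </Provider>
);

ReactDOM.render(<App />, document.getElementById('root'));
```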
The way it works is that /faq/:index uses the FAQ item index; if you have five FAQ items, that will be 0, 1, 2, 3 and 4. Think of it like a blog: you see a list of posts, and when you click on a certain post you go to a page that shows the whole content. This is similar. So you just have to wrap your app component with the BrowserRouter and define which path renders which component; it's a simple thing. Then we write a simple view for the FAQ item: go to the components folder and create a FaqItemView.js file. In it we just show this.props.faqItem.question and this.props.faqItem.answer, so when you go to /faq/0, 1, 2, 3 or whatever, you see this view with just the question and the answer.

Now there is a problem, and this is also the exercise: we get to this view, but how do we access the FAQ item? React Router provides a match prop to every component that is rendered through a Route inside the BrowserRouter. So when you wrap your app in the BrowserRouter and pass a component to a Route, like our FAQ and FaqItemView components, React Router attaches some extra properties to that component in addition to your own. One of them is match, and from it we can read props.match.params.index. It is the same idea as on the server side, where you pass parameters in the route and then read them from the request object when you do backend work. So if you visit /faq/0, props.match.params.index gives you 0. And you know that we store all our FAQ items in the Redux store, so how do we access the items from there? You remember from the previous session that we can use the connect function. In the mapping function we have the state and the component's own props: we read the FAQ list from state.faq, we also get props.match.params.index, we convert it to a number, and we pick the FAQ item we want out of the state. That way the FAQ item is passed into the view as a prop, and then we can render this.props.faqItem.question and this.props.faqItem.answer. That's all you have to do. I think it is clear how we set up client-side routing, so I'll continue.
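A small sketch of what that FaqItemView could look like. The state shape (state.faq as a list of objects with question and answer) follows yesterday's Redux example; treat the exact names as an illustration rather than the literal training code:

```jsx
import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { connect } from 'react-redux';

class FaqItemView extends Component {
  render() {
    return (
      <div>
        <h2>{this.props.faqItem.question}</h2>
        <p>{this.props.faqItem.answer}</p>
      </div>
    );
  }
}

FaqItemView.propTypes = {
  faqItem: PropTypes.shape({
    question: PropTypes.string,
    answer: PropTypes.string,
  }).isRequired,
};

// Pick the right FAQ item out of the Redux store using the ":index" route parameter
export default connect((state, props) => ({
  faqItem: state.faq[parseInt(props.match.params.index, 10)],
}))(FaqItemView);
```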
Now that the routes are set up on the client side, there is another feature we need: links, so how do we put links in our React app? React Router also provides a Link component, which you use for navigating to different pages. When you click such a link, React Router switches to the other page, matches the route, and then renders the corresponding component; that is the whole flow. You use it almost like an anchor tag: where an anchor has href, the Link has a to prop. You import Link from react-router-dom and write a Link whose to prop is the path you want to go to, for example /faq/ plus the index, and whatever you put between the opening and closing tag is shown as the child content of the link.

So in our FaqItem component from before, the list of FAQ items that already has a delete button, we import Link from react-router-dom and add a new view link, so that next to deleting an item you can also view the whole FAQ item. Think of the blog analogy again: instead of a read more link you have a view link, and clicking it shows the whole content. When you click it, the app goes to that route, React Router matches it against the routes we defined in the app, and the matched component, FaqItemView, is rendered. If you inspect the page in the browser console, you can see that the Link tag is converted into a normal anchor tag. So this is the way you define links in your app.

Then there is another feature we want. Think of an app with a login button, where after the user logs in you want to redirect them to their profile or to the homepage. How can we achieve the same thing in our React app? Currently, when we are on the view of a single FAQ item, we just want a way to go back to the previous page. You could also do that with a Link, but here we want to show how to redirect a user programmatically. So in the FaqItemView, the full-content view we created, we will add a button that calls a method named onBack when you click it, and in that method we do the redirect by accessing the history prop. As I told you before, React Router provides a bunch of extra properties to the routed components, and one of them is history. You can simply call this.props.history.push with whatever path you want to redirect the user to, and you can call that from anywhere, from a click handler, from a setTimeout, from componentDidMount and so on. And then you are good.
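For instance, a list entry with such a view link could look roughly like this. The surrounding FaqItem markup and the onDelete handler are placeholders standing in for whatever the training component already has:

```jsx
import React from 'react';
import { Link } from 'react-router-dom';

// One entry in the FAQ list, with a link to the full item view
const FaqItem = ({ faqItem, index, onDelete }) => (
  <li>
    {faqItem.question}
    {/* Renders as a plain <a> tag pointing at /faq/<index> */}
    <Link to={`/faq/${index}`}>View</Link>
    <button onClick={() => onDelete(index)}>Delete</button>
  </li>
);

export default FaqItem;
```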
So what we did in the FaqItemView to implement the redirect: as always we first define the propTypes, so that the component is used correctly and you get a console warning if something is missing. Then we define the onBack method, which does the redirect, and we add a button whose onClick is this.onBack. When you click the back button, onBack is called, we take the history prop provided by React Router and push the path we want to redirect to. We also need the constructor with props so that our React component initializes its props, and we bind the method to the button's onClick. That's it for the redirect; a small sketch of such a component follows a bit further below. Did everyone follow how to implement that? Nice. That is the end of my React part, and now Jacob takes over. Yesterday I wasn't coding while explaining, but today Jacob will share his screen and code along, so if there is any problem, just ask. Thank you very much, Alok.

So, the plan for today is to take a look at Volto, which will be the new default frontend for Plone 6. It's very exciting. The first thing I want to do, for those of you who haven't been in touch with Volto at all, is to give a really quick overview of what it is and what the difference is to Plone 5. For that I'll quickly share my whole screen; I hope you are able to see it. At the moment the quickest way to check out Volto, how it looks for users and editors, is to go to volto.kitconcept.com, where we host a pretty up-to-date version of plain Volto for people to try out. You can log in there with admin/admin and take a look at the general structure of Volto. The biggest difference, apart from the completely new-looking UI compared to Plone 5, is the introduction of blocks, which is very similar to what the people over at WordPress did with their new Gutenberg editor. I'll show that really quickly: when you have a page, as an editor you can simply click the edit button directly on the page, and the page opens in the edit view. There you have, on the one hand, rather simple text editing with a little bit of additional styling functionality for the text. And then come the blocks: when you have your cursor inside one of those text fields and hit the enter key, a new text field appears with a plus button on the left. Clicking on that gives you an array of different blocks that you can add to the page very easily, for example the image block, where you can upload images. By default there is a quite useful selection of blocks, and when you are working on your own project you will probably also want to implement some blocks of your own; how to go about that is something we'll look into later.
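Here is the small sketch of that back button promised above, before we dive deeper into Volto. It only shows the redirect part, leaving out the question and answer rendering from the earlier sketch, and it illustrates the pattern rather than the exact training code:

```jsx
import React, { Component } from 'react';
import PropTypes from 'prop-types';

class FaqItemView extends Component {
  constructor(props) {
    super(props);
    this.onBack = this.onBack.bind(this);
  }

  onBack() {
    // Programmatic redirect back to the FAQ overview; "history" is injected by React Router
    this.props.history.push('/');
  }

  render() {
    return <button onClick={this.onBack}>Back</button>;
  }
}

FaqItemView.propTypes = {
  history: PropTypes.object.isRequired,
};

export default FaqItemView;
```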
So, this is the typical starting point. The way Volto works is that it relies entirely on Plone's REST API implementation. When we open the network tab on that page, reload, and filter for XHR requests, we notice a bunch of requests going to /api, where Volto retrieves the necessary information from the Plone backend and then processes it to display it in the Volto frontend. Did someone say something a second ago? Okay, just a bit of lag. This means that when you are extending the functionality of Volto you can use any information that is retrievable from Plone's extensive REST API implementation. It also means you can only implement functionality in Volto that is available through the REST API; otherwise you need to extend the REST API implementation as well.

So this is the general overview of how Volto looks and how it works. Now we're going to look at how to bootstrap your very own Volto project and start configuring and customizing it. For that we'll be following roughly the Volto hands-on training. I'm saying roughly because it is a bit outdated; there are a few keywords that are not up to date with the current Volto version anymore, but I'll tell you when that is the case. I'll paste the link to the training into the chat and into Slack for everyone to check out. In the quick start section the training recommends cloning the GitHub volto-hands-on-training repository. We will not do that, and I really advise you not to do it either if you're coding along, because it is very, very outdated; that is a Volto version from about two years ago. What we'll use instead is create-volto-app, a neat little tool that some of my colleagues at kitconcept and some other Plone contributors wrote to very easily set up and bootstrap a Volto project. The training says you should set up yarn, which you probably already have. To install the tool, run the npm install command shown there; to make sure I'm on the latest version I add the @latest flag. Then, anywhere on your machine, you run the create-volto-app command followed by the name of the project you're working on. I'll name mine after this training; name yours whatever you want. Hit enter, wait a few seconds or minutes depending on how fast your system is, and this bootstraps a not-yet-customized but working Volto project for you. Here we go.

Okay, someone got an error. How did I update? I updated using npm install with @latest. Anthony, can you quickly check which node version you're on? If you're using npm, I'd recommend switching to node version 12. Node 14 should also work fine, I'm not entirely sure, but maybe try again with node 12. My history of commands: we started with the npm install, then the create-volto-app command, then we went into the project with a simple cd; I checked my node version with node -v, I'm using 12, and nvm use is how you switch versions.
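To make the REST API point a bit more concrete: content in the Plone backend can be fetched as JSON by sending an Accept header for application/json, which is what Volto does under the hood. A minimal sketch; the localhost URL and the front-page id are assumptions matching the local setup used later in the training, and the fields printed are just examples of what the serialized content may contain:

```js
// Fetch one content object from the Plone backend through plone.restapi
fetch('http://localhost:8080/Plone/front-page', {
  headers: { Accept: 'application/json' },
})
  .then((response) => response.json())
  .then((content) => {
    // For a page with the blocks behavior enabled, this typically includes
    // the title plus the blocks data that Volto renders
    console.log(content.title, content.blocks);
  });
```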
It doesn't have the permission; you can use sudo. I honestly have no experience with that setup. Do you have access to npm? You could also copy it from the GitHub repo into your packages and then you have it. Maybe you can discuss that in the chat while I continue with the training. What I did here is open our freshly bootstrapped project in my code editor, and I'll give you a quick rundown of what you see here. At the top we have the locales folder; we won't focus on that today, but if there is time left at the end we can take a look at it. In node_modules go all the dependencies we use in the project. We will quite often go into the Volto version that lies inside node_modules and copy files from base Volto, and for convenience there is this omelette symlink pointing over to the Volto directory in node_modules. There is the package.json, which we don't need to touch right now. The public folder contains static resources like favicons and the robots.txt. Then we have the src folder, where the main structure of our Volto project lives; that is where all the different views and blocks go. And there is the theme folder for the styling: to be more precise, in the Volto case we do not write plain CSS but use the Less CSS preprocessor language.

Now that the console is inside our Volto project, we can start our Volto instance with just yarn start, and in parallel we can start up our backend. Those of you who are already familiar with Plone can start a default Plone 5 instance; if you don't want to set all those things up, just use the docker run command on the slide to run a Docker container with all dependencies already installed. This is what I will do now. It takes a few seconds, but it is usually rather quick. While we're at it we can check whether the Volto start process has already finished. Now that we have started both the Volto frontend and the Plone backend, we can check out the Volto frontend on port 3000 and the Plone 5 instance on port 8080. I think to most of you the Plone 5 instance will look very familiar, and the Volto frontend is the new thing.

Let's take a look at the goal of this training, what we want to achieve. The idea is to at least roughly rebuild the plone.com landing page, but do that in Volto; we will try to achieve something that at least narrowly looks like this. The first step of the training where we start to touch code is the theming engine that Volto uses. Volto uses the Semantic UI React package for its design and theming, and in this little graphic we see roughly how that works. At the bottom there is the Semantic UI default design; if you go to react.semantic-ui.com you see the default theme they ship. In Volto those defaults are in turn overridden by the Pastanaga UI theme. Someone asked about the Docker container: I came across that issue a few weeks ago, but not with the Docker container. Is anyone here more familiar with Docker and knows how to upgrade to the latest version of the container? I think the one I have is not the latest version; I think it is something like docker pull plone:latest, but I'm not sure. I will continue for now; this part should be really quick to catch up later for those who are stuck at the moment. I was talking about the theming engine: at the bottom we have the default Semantic UI definitions and the Semantic UI theme, and over that we have the Volto theme with Pastanaga UI.
When you look at the theme.config file in the omelette, you can see that a lot of those variables are set to the Pastanaga UI theme and not to the default Semantic UI theme. On top of that, what we will be doing today is creating our own overrides for the site theme. We start by changing the font of our site. Inside our theme folder we create a folder called globals and in it a file called site.variables, and there we change the default Semantic UI font by setting the variable for the general font name, @fontName, to Open Sans. We save, take a look at our default Volto instance and inspect the text over here; we reload this first and then restart our development server, reload the page, and when we look at the font-family down here we see it is now using Open Sans as the default font. Back to our tutorial: as additional info, you can set any Google font available there and the online version of the font will be used. In case you want to use more than one font, or a font that is self-hosted, you should define it as usual in CSS and set the @importGoogleFonts variable appropriately.

Next, for our general CSS rules we will be using another file, the custom overrides: we create a new folder called extras, and inside extras goes a file called custom.overrides. Now, I think that should be very clear to everyone. A little bit of feedback is always appreciated, but I assume anyone who got Volto and Plone running should be good by this point. The next step is adding the additional CSS, or in our case Less code, into that custom.overrides file. For that, the training already provides some code we need for the header of plone.com, this part over here. We quickly copy it, put it into custom.overrides, save, and when we go to our instance here and reload... you see, that is the problem with live coding: things that work every time tend to break at the exact point where you try to show them to other people. It did say on the previous slide to restart the development server to make it aware of the file; thank you for making me aware of that. That is the thing with files in the theme folder: when you add new files there you need to restart your Volto process, but usually, after you have added site.variables, custom.overrides and a few more files for your fonts or similar, you don't need to restart again while modifying your theme. There we go: we have the black background and so on. Rather straightforward. Then we also have the additional styling here for the navigation menu, which I put in as well. Depending on how you set up your project you might have a separate file for the CSS of each of your components, or, as we are doing today, just one big custom.overrides file. One thing I find rather handy when working on the styling is to put a comment on top of your rules, to at least make clear to other people what they belong to. Question: can you create whatever-name.overrides and it will get picked up? No, you need to have custom.overrides, but if you add more files you can import those in custom.overrides, with an import statement and your file name. Was someone talking a few seconds ago? I didn't get that; I think we lost it in the middle. I just checked the chat; there was one question coming up.
When you put this into version control, which files go into your Git project? Basically everything that is not inside our .gitignore; I don't see anything else that shouldn't go into version control, and a boilerplate .gitignore is created by create-volto-app. Then, at the end, we add this CSS, take a look, and we see we already have the new colors and the new style here; it is instantly picked up by the file watcher and applied to our home page.

The next step is the logo, and this is where we will be leaving the theme part for now. The Volto customization engine uses a concept known in the JavaScript world as component shadowing. It works the following way: in our src directory we have a customizations folder, which is, except for a readme file, completely empty at the moment. Into it we can drop any component from the Volto core package that we want to customize, and amend it in whatever way we want. When Volto starts building the site, it goes over your customizations, checks which of the files in there are overrides for Volto components, and uses those instead of the default ones defined in Volto. So we need to check where the logo is located in the base Volto directory: we have the omelette symlink in our project, and in its source directory, under components/theme/Logo, we see we have the Logo.svg. We now want to override that, so we need to match exactly the same path, with the source directory as the base, inside our customizations folder: inside customizations we create a folder called components, inside that theme, inside that Logo, and into the Logo folder we need to drop the new logo. If we were using the old hands-on repository, it would be in the training resources folder; we do not have that here, so I'll upload the new logo you'll be using to the Slack channel and you can quickly download it and paste it into this new Logo directory. It is very, very important for the customization to work that you match the very exact same path that Logo.svg has inside the omelette, relative to src, that is components/theme/Logo, and the file name needs to match as well; this is also case sensitive. If a customization does not work for you for some reason, the first thing to check is whether all the spelling there is exactly the same as in the Volto base. And when we add a new file to the customizations, the same thing applies as for new files in the theme directory: we again need to restart our Volto server to make the file watcher aware of the new file. Here we go: when we reload our page, we see our logo has now been replaced with the Plone logo. Do we have any questions or problems regarding this? Vince is asking about using a different logo: you can use any logo you want; the only important thing with this override mechanism is that in the end it is called Logo.svg, so you can't use logo.png or something like that. There are other ways to insert other logos, but the easiest way is to do it like I did.

So the next step in the training is to customize the header component. For that we can again go into our omelette, grab the old Header from Volto, make a copy of it, and inside our customizations, inside the theme folder, we create a new folder called Header to match this path over here. We paste our old Volto header in there. Same game as before.
Restart the server, and while it's doing that we can take another look at the training here; I can make that a bit bigger for you. This part here with the warning is completely outdated and no longer relevant, since we changed all imports in Volto to use @plone/volto as the basis. When you're reading the training you can ignore that whole warning box; we'll be updating the training rather soon. Here in our component we have the classic JSX code Alok taught you about yesterday, rather simple syntax. We have the Navigation here and pass props into it; those props are the pathname, and they are coming from the Redux middleware Alok told you about yesterday. The same goes for the SearchWidget, but for our use case we'll just quickly replace the JSX of the old header with the snippet we already have ready here in the training. Paste that in here, save, and, assuming our build process has finished, when we reload the page we notice that, just as on plone.com, the navigation is now aligned to the left of the site. Down here there is a little bit more explanation about customizing the header component, but nothing I haven't told you already.

Next page: in here we do the same game again, but for the footer component. So we check our base Volto repository, grab our Footer.jsx, copy it, mirror its path in the customizations, paste the file, restart the Volto process, and while that is restarting we can already start editing the footer component over here. As we can see in the footer of the original plone.com page, we also have the logo at the bottom, and we can use an import statement to import the Logo component, in this case from Volto itself. The component shadowing engine recognizes that the Logo component has been modified in the customizations, so this import from the @plone/volto package also takes all the customizations you made to the respective component into consideration. We copy the whole chunk of code from the training and replace the complete footer with the new code. What you might have noticed is that here we are using the functional way of declaring components; we talked yesterday about the two ways to define components, and I think the header uses the class variant while the footer uses the functional one. To make our footer look even more like the one on the plone.com main page we also need some CSS, because as you can see here we have the wrong background color and a bit of wrong alignment. The training provides that CSS, or rather Less code, and we paste it into custom.overrides as the footer section and save. We should be able to see that change in a second... hmm, it is not showing up. Do you have any idea why this is not working? Can you check the CSS in the console and inspect the element to see whether we have the same rules or not? I think what happened is that we restarted the process too early, before the new file was there, so it didn't re-read it. So make sure the file is already there when you restart the project; I guess that was the error I made here.
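A rough sketch of what such a customized functional Footer could look like. The Container and Segment layout and the exact markup are placeholders; the point is the functional component style and the Logo import resolving to our shadowed logo through @plone/volto:

```jsx
// src/customizations/components/theme/Footer/Footer.jsx (sketch)
import React from 'react';
import { Container, Segment } from 'semantic-ui-react';
// Resolves to our customized Logo because of component shadowing
import { Logo } from '@plone/volto/components';

const Footer = () => (
  <Segment role="contentinfo" vertical padded inverted color="black">
    <Container>
      <Logo />
      <p>Powered by Plone and Volto</p>
    </Container>
  </Segment>
);

export default Footer;
```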
But now we at least have the rough alignment of the footer we want; it is not exactly the same and we might want to make some changes there, but for our case I would say that is good enough. We continue: the next part is the breadcrumbs. For our case, as we are only working on the start page for now, we can just use this bit of CSS to hide the breadcrumbs and not have any more trouble with them. Put it into your custom.overrides; here we go, the breadcrumbs are gone.

So, next part, and this is where things start to get really interesting in my opinion: how to implement new blocks in your project. One note first: if you want to customize existing blocks, that works exactly the same as with all the other theme components. You go to the omelette, or a base Volto checkout you might have on your computer, find the block you want under the blocks in components/manage/Blocks, copy that path into your customizations, apply the customizations you want, and everything should work as intended. But what we want to do today is mainly adding new blocks to your project. For that we will be using the documentation here in the training, but there is one very important difference I need to make clear. The training still says Volto tiles in many places: we are in the process of replacing the term tile with block everywhere, so bear with us, because in some places the terms are not yet updated, especially in code in both Volto and plone.restapi. In this training the term tile is still used extensively, but with the current Volto version we are using, the term tile has been replaced by block pretty much completely; I don't know of any remaining appearance of the tile keyword in Volto. So every time there is something with tile or tiles in the training, we need to replace that with the term block.

Starting with that, we need to check that our blocks behavior, not the tiles behavior, keep that in mind, is enabled for our content types. We log into our backend on port 8080, using the same login credentials as for the frontend, and in the site setup we go to the Dexterity content types, select the content type we want the blocks behavior to work with, in our case mainly the Page, and check whether the blocks behavior is enabled. Indeed it is; in case it is not for you, just tick that checkbox and save, and the blocks behavior will work for your pages.

Now we can start coding our first block. Blocks are all structured in the same way: they have a View component and an Edit component, which means that, depending on whether you are in the normal view mode or in the edit mode, different files are used by Volto to display the block. You can see that with the image block, for example: the upload widget shown in edit mode is not shown in the view mode. The block we want to create is the main slider block for plone.com, which should be reminiscent of that section up here with the news items in it. For that we leave the customizations folder, because we are adding completely new components to the project, not customizing existing parts of Volto.
So, inside of components: the folder structure is not really important to Volto, but it is usually good to have a structure that fits your purposes and makes keeping an overview of your components easier. For our case we create a blocks folder, in which we will create all our blocks, and inside the blocks folder one folder per block we want to create, in this case the MainSlider folder, with all files related to our main slider block in it. As I said a minute ago, we need a file for the view, View.jsx, and another file for the edit variant of the block, Edit.jsx. For now we just use dummy code, only to check whether our configuration of the block was successful. These are very, very simple React function components; for such simple, non-interactive components we don't even need a render method, only the return, and that's enough.

In this part of the block definition you can see that we get passed a few different props by Volto. Those props carry the information we need to fill the block with the content coming from the REST API: we get an id, which is just an identifier for the block and every block gets a new one, the properties of the content object, and the data of the block. The most important part for editable blocks is data: every piece of information you type into the block in the edit view is stored in the data prop, and that is what you use when you want to render that information. A few more things are listed here that get passed to the block as well; some are just information, others are functions you can use to do different things with the block, and the training still lists them with the outdated tile naming, so read onChangeBlock instead of onChangeTile. Most important is onChangeBlock, and we will get to that later.

The next thing is registering the new block: the block has this View and this Edit component, and we have to make it available in the blocks chooser. For that there is the configuration file, config.js, in the src directory of our project. It already contains the imports for the default blocks, default views, default widgets and default settings, so everything coming from Volto core is already imported there, and we can start to amend it. We create a new JavaScript object called customBlocks; make sure not to use the old customTiles keyword. Inside that object we create one more object for each block we will be using throughout our project. Our first block is the main slider block, which has a few properties: the id is used for internal handling; the title is what will be displayed when you add the block; the icon is the icon you see next to it in the blocks chooser; the group defines in which group it appears in the chooser, and per default we have text, media and common; the mostUsed boolean defines whether it shows up in the most used category; the restricted property is, as far as I'm aware, not in use yet, so it does not do anything, and we should usually just set it to false so the block is not restricted once the restriction mechanism for blocks is implemented. And then, in those last fields, we have the view and edit components.
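The dummy View and Edit files mentioned above can be as small as this. The placeholder wording is just an assumption; what matters is the structure, a function component that receives the block props and returns some JSX:

```jsx
// src/components/blocks/MainSlider/View.jsx
import React from 'react';

// Rendered in the normal view mode of a page
const View = (props) => <div className="block main-slider">Main slider view block</div>;

export default View;
```

```jsx
// src/components/blocks/MainSlider/Edit.jsx
import React from 'react';

// Rendered while the page is in edit mode; "data" and "onChangeBlock" are unused
// in this dummy, but they are what a real edit component works with later
const Edit = ({ id, data, onChangeBlock }) => (
  <div className="block main-slider">Main slider edit block</div>
);

export default Edit;
```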
At the moment those are still marked as red because they are undefined; we need to import them. The security part is also not functional yet as far as I'm aware, so it just works fine as it is; for the future we plan to make it possible that only specific users can use specific blocks, but that is a tricky thing to implement and not quite ready yet. To get rid of the missing imports we add the import statements: first we import the slider SVG from the Volto icons. For the Volto theme there is already a wide array of icons; if you want, go to the omelette, open up the icons directory and take a look, there is already a slider icon we can use. Then we want to import the View and Edit components. For this tutorial I'll do it a little differently than explained in the training, because in bigger projects we might import tens or hundreds of different components into our config, and that gets messy very quickly. For that we use index files in Volto: inside the components folder we have an index.js, where we import MainSliderViewBlock from './blocks/MainSlider/View', do the same for the Edit part, and then export both again. This makes life much easier later in the project, because we don't need to find the exact path to a component every time we want to use it; in our config we can always just say import MainSliderViewBlock and MainSliderEditBlock from '@package/components'.

Now the only thing left for customBlocks is to insert our new customBlocks object into the blocks configuration so it is used alongside the default blocks. This is done using blocksConfig; here the training again has a lot of those tiles keywords that we need to change to blocks. One more thing, for the internationalization part, which we can look at later: we import defineMessages from react-intl and create this messages object, which will help us translate the main slider title later on. Now that we have added all that, we can reload the page, we are already in the edit view, and, at least if I did everything right here and you did everything right, we should have the main slider in the chooser. There we go.
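Putting the registration together, the index file and the config could end up looking roughly like this. The titles, group and other values mirror what was just described, and the spread syntax keeps all the default Volto blocks while adding ours on top; take it as a sketch, not the verbatim training files:

```jsx
// src/components/index.js
import MainSliderViewBlock from './blocks/MainSlider/View';
import MainSliderEditBlock from './blocks/MainSlider/Edit';

export { MainSliderViewBlock, MainSliderEditBlock };
```

```jsx
// src/config.js (only the relevant parts)
import { defineMessages } from 'react-intl';
import sliderSVG from '@plone/volto/icons/slider.svg';
import { MainSliderViewBlock, MainSliderEditBlock } from '@package/components';

import {
  settings as defaultSettings,
  views as defaultViews,
  widgets as defaultWidgets,
  blocks as defaultBlocks,
} from '@plone/volto/config';

defineMessages({
  mainslider: { id: 'mainslider', defaultMessage: 'Main Slider' },
});

const customBlocks = {
  mainslider: {
    id: 'mainslider',
    title: 'Main Slider',
    icon: sliderSVG,
    group: 'common',
    view: MainSliderViewBlock,
    edit: MainSliderEditBlock,
    restricted: false,
    mostUsed: true,
    security: { addPermission: [], view: [] },
  },
};

export const settings = { ...defaultSettings };
export const views = { ...defaultViews };
export const widgets = { ...defaultWidgets };

// Keep every default Volto block and add our custom ones on top
export const blocks = {
  ...defaultBlocks,
  blocksConfig: { ...defaultBlocks.blocksConfig, ...customBlocks },
};
```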
We have created our first very own block. We can see here this is the text that is given from the main slider edit component right here. When we save our page here, we can see at the bottom bottom this small text and the main slider view component that we got from here. Okay. I think now that we have implemented our first very own block. This is a good time to go into a small break. If that's okay with everyone. At least I could have one. Then I'd say we go into break here I will use that to go on helping some people who are having some trouble here and take a look on where I can help them. And I'd say we will continue in 15 minutes so we can see that in the European time that would be 1648 I think. And see you then. Okay. Michael is having problems with importing. It's a capitalization error main slider. It's a camel case. I'm going to try again. Okay. Yep. Might be the case. And then Anthony has the problem page change to all unknown blocks. I already encountered that yesterday. I think this is probably cost inside of that syntax. First thing you should do is make sure that you really got rid of the tile keyword. In all instances there. And that you have the syntax correct. And then one thing can I begin Jacob can you go to like line number 46 or some some above. Okay, so you must have the same like if you define like men's slider then your ID should be like the same like the men's slider. That's where also you will receive the error. In this case I think what he did was overriding all the default blocks so also the text block was transformed into unknown block. I accidentally did yesterday and I tried out the training again. Then we have Adrian I have another content. Same as Anthony. Okay. Adrian has fixed that Anthony please let me know if you. Yeah, I'm good I had missing a s in blocks config on line 68. Great. Then we had the question from Mike about the spread operator Mike are you still around. Yes, I'm still here. Okay. So, what we are using here is the yes, six spread operator with in very simple words can be just as to demonstrate the base functionality here in the console. So we create. I think that it is just like it copies the value from the given area or object or any I travel object in JavaScript. Yeah, and I created a very simple example here. The approach. One. Work. We have an object using the content from a but adding something in here we can let be equal. And dot dot dot a comma the thing we want to insert there for example, value B. So the string to. Okay, bad idea for me. The dot dot dot syntax is not available in your browser because it can't use the six. I get what you say. Yeah, I do understand thanks. But in our use case here. We create a new object called blocks. And that consists of the content of default blocks dot dot dot default blocks comma, in which we add object called or modify if the blocks conflict object is already there. And we have the same with the following content in here. So dot dot dot blocks, default blocks blocks conflict and will be added in there. Plus additionally, the contents of custom blocks. Yeah, thanks. Thanks. Okay. I'll also be away for three minutes we'll be back. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Can you show me like what you want to say, like, what's the error, like it's not working. Okay. Mitchell. Suppose to add ever I think that yes, if we created the main slider view block and main slider edit block, then you can see it in both editor. 
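Coming back to the spread operator question from the break, since the live console demo did not work out, here is the same idea written down (a generic ES6 example, not project code):

// ES6 spread: copy the properties of one object into a new object
const a = { one: 1, two: 2 };
const b = { ...a, three: 3 };
// b is now { one: 1, two: 2, three: 3 }

// the blocks configuration uses exactly this pattern:
// take all of defaultBlocks, but replace blocksConfig with a copy
// that additionally contains our customBlocks
// const blocks = {
//   ...defaultBlocks,
//   blocksConfig: { ...defaultBlocks.blocksConfig, ...customBlocks },
// };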
If we select the add button and then go to the common used or the group which you assigned to the block then you can see it. Yeah, I'm not seeing an ad button anywhere. I see a save and may I share my screen. I'm just showing me what you should be seeing. I'm just seeing what it's supposed to look like. I'm trying to add the new block that we just created. Got it. Yeah. Just that ad button doesn't exist. Never mind. Responsive design fail. And there's main slider. Yeah. Just had to open up my window a little bit more it was a responsive fail. Okay. Yeah, when they're responsive failures and also just feel free to add an issue for that and you should track off. That's quite all right. I can't with any good conscience hold it against you. There's responsive fails all over my website. Yeah, but the more responsive fails you're aware of the more you can fix right. Yeah, I can share my screen. So I have the code here right. I'm still trying to figure out how to see other people's screens. Anthony, if you go to slash log in and then you can go to view option for login credentials. I can see your code. Okay. I can't know how to successfully compile but I don't see the option on the comments. Man slider title man slider. Yeah. Why are these warnings? Okay, so you have this style config you have to change this to to block config. Online number 47. Tiles config. Okay. 47. Make it to block. Ah, thank you. Now I think we are good. Like everybody. Okay. I think when everybody's black. I'm there are no problems. Yeah. Jack up there are some questions like which content types would I customize in control panel. I think document content type would be the one but if everything worked as intended, you should not need to do that. And we are not going to have to make a page document. Exactly. But if the Volta front and worked for you before. Exactly. That is what it is. Everything when your Volta page looked okay. But you don't need to do anything that means everything is already set up correctly here for you. This is at the block. The blocks behavior is added when you have installed. Go to the site set up. You need to have I think good concept Volta. To be installed. To be honest, I'm not too familiar with classic content. I think there are enough people around here who are who can get very good insight on that. I'm having the same problem with not being able to see the plus button. How do I get that? I opened up my screen. Made my browser window wider. I did that and refreshed. I am logged in. Anthony, are you sure you are logged in? Yes. I'm sure you are in the edit mode. Yes. Can you maybe... So the plus button only appears when you have those empty text blocks. That might also be an issue. How do I add a text block? You can add a text block when you have selected a block. I made that clear enough in the beginning. Hit enter. Then a new empty text block appears. Then you should have the plus button. Yes, I got it. Thank you. Anthony, it's like winning mosaic all over again. Yes. I keep teasing people about we need columns in Volto. We need Volto and mosaic to get together. All right. Next one with no plus button. Let's go through this one last time again. You need to be logged in to be sure that you are logged in. You need to have this sidebar here on the left side. If you do not have that, then you are not logged in. Which means you need to go to slash login. And login with the credentials admin and admin as password. After that, you need to go to the edit mode of the page. 
By clicking this pencil icon on the top. Then to add a new block, you select wherever you want to add that block. Then you can change the case. The bottom, the most last block at the bottom. Hit the enter key. And then a new empty text block will appear at the bottom. Next to that, there should be the plus sign for you. And as we already encountered, I think with Michael. This plus button is covered by the sidebar. In that case, you can either increase your window size as I am doing right now. Or you can minimize that sidebar by clicking on this orange, pinky, colored bar to move that sidebar into the left. And if you done, still don't have the plus button next to the empty text block. I guess your Volta was broken. Okay. There we go. You're supposed to get a non-block main header as the text when you add a new item block. You're supposed to get what I have here. The text, I'm the main slider edit or I'm the main slider view component. What when you're getting the unknown block error for only for the main slider block, that means something has is wrong with your block configuration either in here in the config, the custom blocks object. For example, one error that is often happening is that this string and this word, they need to exactly match. That's some cause of the error. Could be some cause of the error if that's not the case. And the other thing could be that your syntax down here is often could be that you forgot those dots. It could be that you use tile instead of block in this instance. I got it. It's a main slider is camel case in the ID and not in the other one. Okay. So it's working for you now. Yeah. Thank you. Let me just quickly get rid of all the other blocks. So don't have that this right. Then I'd say we continue with the tutorial now. I hope now for everyone who was coding along the block is not there and working. So next, now we want to add some more functionality to our main slider component. As we already saw in the clone.com over here, it works like a classic website slider component. And to make that a bit easier and react. We will be using the help of a new library will be adding to a project, which is called react slick. And for that the slick carousel library. So we add those two libraries to our project by going into our console over here. We stop our development server and paste this command in there. And wait a few seconds for it to install. There we go. Now we can start the process again. I'll just wait for you. Maybe a minute or so because I don't know how fast your machines are respective internet connections are. So that everyone has that installed. When we continue. Okay, I think most people should be good by now. If not, you can also already start with these additions. Now they will just be that it will just take a little bit more time to have them properly working. What I already told you is that in the last files over here because the Bova writes that is less styling file, you can also import other files. In our case, those two were installed with our new packages we just installed. I don't exactly know how the path resolution here works exactly unless but I think this thing here points it to the node modules and then it will import the right files from the node modules. So now what we are planning to do now is adding the proper code into the view component of our main slider, which is rather extensive. I'll copy that over and then we'll have a look at it together. We replace the whole file with the code from the training. 
The first thing we notice is that the slider image dot pnv that we are importing here. We got an ESLint error if you have installed ESLint in your code editor and telling us that this file is not there. To resolve that we add the image we want to import here, the slider pnv and that is again something I'll send you via Slack. So that would be that one. So what you need to do is download that image and send in Slack and paste that into your main slider directory. After you've done that, reopen that file. The error should be gone. Taking a look at our page, this already looks a bit different. Let's say not good, but at least different. We might want to add a little bit more styling to that. But before we do that, let's have a quick look on our code here. So we have a few imports here at the top again. As always we import react to properly create react components. Then we use the slider component coming from the library request installed. Then we use the link component from react router DOM, which I talked a bit about react router in the beginning of the training. So the link component basically works as a normal HTML a tag component, but for internal routes. So whenever you want to use something to navigate internally on your page, you need that. So we import the button component from the cement API to add better looking buttons. The import the icon component from Volta. That is a helper component that Volta uses to make it easier to add SVDs to your page. So we import a few resources like the slider image I just uploaded to the Slack. And those SVDs coming from the Volta core repository. So as you can see here, we can outside of our main view function also define a few more separate react elements, which then will be reused inside the view component. So we define those next error and previous error components. So here they receive class names, style and on click as their props and will be rendered by the slider component. So this slider thing we imported from the react slick library offers us a few configuration options. For example, those next and previous error options where we can add custom error components, we could several other things down here we use that link component. I've talked about to link to another page. This case is slash five. And same applies here. To get that block a bit better looking, we just copy over this huge bunch of CSS code. I won't get into detail over that. What exactly what does there. That's not the focus of today's training. But I also add the main slider comment here again. So for other people working on that, they know what the second of CSS code is for. Looking back at our page. Let's see. I was starting with the point. I don't know why but it seems that for some reason the. Okay, no, I'm seeing what's happening. That's also something you now need to do inside the custom overrides inside the code here copied over. There's still the tire keyword and some instances, please replace that with a block keyword. So there are three appearances of the tire class. And replace that with block and same thing applies to here. There we go. That already looks better. There with me. Find out. Where our slider content has gone. I think what we need to do is restart. And because we again added a new file in our case, the slide item of the. The. There we go. All right. Now we have our. Slider. Looking pretty neat to be honest. Also give you a few minutes more to implement that feel free to write your questions and problems into the chat. 
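The full View component is copied from the training material, so only a reduced sketch is given here to show its general shape: the custom arrow components that react-slick renders, and the Slider itself. The icon file names, slider.png and the /blog route are assumptions based on the description above.

// src/components/blocks/mainslider/View.jsx (reduced sketch)
import React from 'react';
import Slider from 'react-slick';
import { Link } from 'react-router-dom';
import { Button } from 'semantic-ui-react';
import { Icon } from '@plone/volto/components';
import sliderPNG from './slider.png';
import rightSVG from '@plone/volto/icons/right-key.svg';
import leftSVG from '@plone/volto/icons/left-key.svg';

// custom arrows: react-slick passes className, style and onClick to these
const NextArrow = ({ className, style, onClick }) => (
  <Button className={className} style={style} onClick={onClick}>
    <Icon name={rightSVG} size="50px" />
  </Button>
);

const PrevArrow = ({ className, style, onClick }) => (
  <Button className={className} style={style} onClick={onClick}>
    <Icon name={leftSVG} size="50px" />
  </Button>
);

const View = () => (
  <div className="block mainslider">
    <Slider dots nextArrow={<NextArrow />} prevArrow={<PrevArrow />}>
      <div className="slide">
        <img src={sliderPNG} alt="" />
        {/* Link works like an <a> tag, but for internal routes */}
        <Link to="/blog">Read more</Link>
      </div>
    </Slider>
  </div>
);

export default View;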
How can we help to get the documentation updated. Say again. Yeah, we actually. I'm already have a get brown. And we have a lot of things that we need to do to get the documentation updated. And the thing is we want to wait for the. The. So until we haven't updated this thing here. We still have the tire key word here in the training. But in general, if you want to help updating the trainings. And this is a. Collective. Collective. Or is it. Training. GitHub.com slash. Plown slash training. This is the repository where all the trainings that are accessible. My training. And then you can. Lay in and when you want to help update stuff there. Just create a pull request. Ask the respective editors of those trainers. To review those. And. Get your amendments into them. I just appreciated that. So in our case that would be here to hold your hands on. To the slack. You want to use that for. Yeah. Yeah. So. Everyone finished with the. Main slide up block. Or is there some, some more help necessary. What we could have some people could. Give their time. Did you put that CSS in the. Custom overrides. Exactly. My tail showing up. Overrides. And then what I also did both in the custom overrides. And the view. I replaced the. Tire keyword with a block keyword. To. Get that in line with our current. The main text. Okay. I fixed it. I didn't know we did it in the view. Did you manually delete all the other blocks? I missed that. Yeah, I did that. Okay. You can do that one. You can get a. A little bit. T. Yes. Not complicated. Okay. I think we can continue with the next step which is removing the title block because as we can see on the plo.com website there is no such thing as just one single headline that there is. They use this main slider instead. When we go to edit we will notice that we do not have this delete button for the headline block. This is her default set to not deleteable and this is what we want to fix now. For that we go into our constant blocks in the config. So again keep in mind this whole tiles thing is outdated. We have the blocks here. And in there under the default blocks we set the required blocks array to an empty array. Before that was looked like this. But we want to have it empty. So there are no blocks required in the default. I asked this way. You know how blocks are stored on the plo.com object. Will it be possible to add and remove them programmatically? Yes. I can show that to you. You don't need to get sidetracked. I was just wondering if you knew things. It's pretty straightforward. I can show that to you. Please I'm curious as well. When we go to the edit of the page in our old plo.5 interface we have this layout tab over here. And in there the blocks are stored as a JSON schema. At the moment that is empty. I guess the page is not up to date. I'm a bit confused for that. Let me... What the hell happened here? I'm going to wait a second. So we have that. We have the front page. The front page works a little bit different. For the front page I'm not entirely sure where the blocks are stored. But for every other page... Let me do that really quick. I create a new page. I add some text. I also have our main slider there. And hit the save button. And when we now check our plo.5 interface go to this test page. Go to edit the page. Go to the layout. You can here now see that the blocks are saved in this JSON schema. So for the home page the data is stored in two properties on the long side root. So you can go with slash manage. 
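Going back to the required blocks change for a moment, as a sketch it is just this small addition in the blocks object of src/config.js:

export const blocks = {
  ...defaultBlocks,
  requiredBlocks: [],   // the default contains the title block; empty means every block can now be deleted
  blocksConfig: {
    ...defaultBlocks.blocksConfig,
    ...customBlocks,
  },
};

Now, back to the question of where the blocks of a page are actually stored.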
And then in there properties you're going to see blocks and blocks layout. So slash manage... Okay now... Properties. With the sort management interface. Yeah, properties. And here we go. Those. Right? Exactly. Okay, let's go back to our example here. We have the two JSON objects here. First the blocks. Here we have the idea of the block. And then after that there's always in the curly braces the information for the block. So each block has add type information. And for example a text block contains. And then there's never more information about the text. Don't get into detail here. And here again. Just type main slider. Main slider doesn't contain any other information. And in the other in the blocks layout this is only used to set the order of the blocks of the page. So in this items array there this is a list of the ideas from up here in the right order for the page. And then there's the list of the items that are in the list. Interesting. Thank you. Okay, I hope that answers your questions. I should order here is only for roles. There's no way to do like, like columns too. Or that doesn't support that. It does not support that. At the moment for column like layouts. And then we have a concept called grid blocks. So that each block itself supports several columns. That is something that is in the works of also. But let's get back to removing our headline for now. After we set this required blocks array as an empty array. We now should be able to delete the headline block. And exactly. That's what we're going to do now. And I'd say this is also the point where you can, if you haven't already. Delete all the other text blocks that were there per default. To only have the main slider. There. So can I just say something quickly about the columns and grids and stuff there are that is there if you go and look at the add ons. You'll find that there are people building add ons just like we build in this blocks as customizing. So these people who've built columns and grids and all sorts of other things as a. Exactly. And especially I think the folks over in Romania. Tiberio. I actually don't know that company name. The other one. Yeah, all the web. Exactly. Those folks have been working quite a bit on column layouts and also we here at good concept have been doing that. Let me. A quick look if we have that already implemented. Okay, we haven't we haven't open sourced our solution yet, but that will be coming at some point of time when that is more polished. We're still working on that and fine tuning some things there but at least for the future that will probably in the several solutions to those column layouts. And then we'll have some of the web. Yeah, if anyone has a link to the order web. I think we should go to column Adam, feel free to share that with us. I think everyone should now be at the on the same state here with only the main slider block present on the star page. And we can continue with our next thing, and that will be creating content types and creating views for those content types. So what we do first is again very familiar to those of you who already have worked a lot with normal clone, adding a new content type. And then we can create forward via the clone interface using the Xerity content types. Adding a new content type in our case that will be called success story. So I want to have the blocks not tires behavior enabled for that content type. So after we added that we edit it, go to the behaviors tick box of the blocks behavior enabled. 
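To make that stored structure a little more concrete, a page with one text block and one main slider block is saved roughly like this (the ids are made-up UUIDs, and the inner shape of the text data is only sketched):

"blocks": {
  "0521c558-1111-4e50-a4ec-2a1a9d0e1f6a": {
    "@type": "text",
    "text": { "blocks": [], "entityMap": {} }
  },
  "6f1a2b3c-2222-4e50-a4ec-2a1a9d0e1f6b": {
    "@type": "mainslider"
  }
},
"blocks_layout": {
  "items": [
    "0521c558-1111-4e50-a4ec-2a1a9d0e1f6a",
    "6f1a2b3c-2222-4e50-a4ec-2a1a9d0e1f6b"
  ]
}

Each key in blocks is a block id, @type says which block it is, and whatever data the edit component stores sits next to @type; blocks_layout.items only records the order of those ids on the page. And back to our new success story content type: in the behaviors tab we ticked the blocks behaviour.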
And the lead image behavior also enabled. Save those settings. Now we go. Next part is creating for the view for on your content type for that inside our components directory. We create a new folder for the views. We add a new file with this call success story. For the views. Those don't need different files for the edit and the view view. So it's enough to create one file per view. It might make sense if the views are more complex and contain composed of several files to also create a separate file for the view, but for our use case. It's enough to have that one file for you. For that we again add some dummy content into our view. Rather straightforward on the success story view component. Happy days. And also import and export that. Inside our index thing this again deviates from the content of the training that something either I or someone else should be adding the future to the training. So, um, success story view from the success story. And that to your export object. Close the index day as so this was the index days inside the components again. And go back to our configuration file for that. We finally no longer have to fiddle around with those tire keywords, because we're working on the views. The views object. We add a new object called content types views. And in there we have the success story content type. And the default views. We get the error here because I forgot to import that at the top. But that's pretty straightforward. We just add that here at the bottom to our import statement. And then the naming down here. If everything went right here. You should now be able after reloading the page of course to add a new success story. And indeed, that is there. And as it has the blocks behavior enabled in the management interface, we can type in any title and also some text. And if we want, we could also add any type of block that there is in the auto. And we can also add a new save. And we see as we created the view. That's the wrong file. And the success story view. We haven't added anything for the view part of the component that the text you just typed in won't be displayed, but at the moment only the success story view component text. That's what we wanted to change. We don't want to build rebuild the whole blocks rendering engine for the view ourselves. And instead what we want to do is we just get the default view from Volto with already has all the logic to render the blocks, the separate blocks inside of it. We just get that from the Volto core. And instead of returning the stiff with this text, we just return the default view. We replace the stiff with default view. And after we've done that, and go back to our page here. Back to back from Paris before. And after we have our need little blocks back again. For the success story view. We enabled this lead image behavior so we can upload images to there. But for the time being, those aren't displayed. That's also something we now want to take a look at and change. So, what we do is change up the markup for our success story view to be able to both display the blocks at the bottom and on top of that, the image. What we use for that is the beginning this small syntax those empty HTML tags are known as neck fragments. And we have some something to do with the internals of react how it works. And one of the things in react is that you can only you always have to some kind of tag tag that wraps your component. You don't want to have a diff or something like that wrapping your component. 
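Before going on, a quick recap of the view registration described above, as a sketch (the content type id success_story has to match whatever short name the Dexterity control panel generated for your type, so treat it as an assumption):

// src/components/views/SuccessStoryView.jsx, dummy view for now
import React from 'react';

const SuccessStoryView = (props) => <div>I'm the success story view component!</div>;

export default SuccessStoryView;

// src/config.js (excerpt)
import { SuccessStoryView } from '@package/components';

export const views = {
  ...defaultViews,
  contentTypesViews: {
    ...defaultViews.contentTypesViews,
    success_story: SuccessStoryView,   // content type id mapped to its view component
  },
};

So much for the registration recap; now back to those empty fragment tags.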
Don't have any wrap up that you can just use those empty tags and the component will work well anyway and they won't be run won't be rendered in the final HTML. So, that we then can add the image. Take, just as anyone working with HTML, what you really do with the difference we have a class name. For that you use a need image as class names for the art. Take the use this year content.image caption and how this content of it works. I'll explain that in a second. And we get the URL for the final limits. So, now we have a few arrows here. One thing is, we do not have the content of that. So we need to define that first. We do it like this. This means inside of props that is already a object called props or content is already there. But what we do here is, we assign that to a new variable just called content to make accessing that a little bit quicker. You can alternatively also just props content in here. As well. And the other thing that's missing is the flatten to app URL function. That's a function provided by Volta to take the whole image URL that's coming from content.image.download. From there we would get localhost 8080 slash clone slash front page slash image and so on this flatten to app URL function that we import from the Volta help us. We move the first part of the URL to reduce the URL to just slash page slash image slash download to make that relative URL. When we're done with that, we should be able to reload the page. I just created some arrows in the process here. So this is because we haven't uploaded yet. Let me think of a quick fix for that. That's also a problem with the training. Let's do it this way only shown image if content dot image is available. Don't worry, I will quickly paste this into the cat. To fix the error. So we're getting an error because it's trying to retrieve this content dot image and content dot image caption. But as we haven't uploaded any images, the request goes into the void and the image tag graph is what I did here is to say, okay, only render this if content dot image is available. We should probably also do the same here. The all tech only content dot image. All right. So, to upload the image we want to have that we would then go to the edit view and use the sidebar on the right to upload an image. I think the image we want to upload here's again something I need to paste to you in this like this. What. Probably specify what image they want to have there. Just upload any image you have on your machine to there. I don't have anything that we could be faced something for you. You can use this right toolbar in the edit view to access the standard clone configuration options that are in the clone of us. So it's called this block highlight that is picked the fact that it looks like the highlight. Save this thing and there we go we have this lead image thing up here. So back to the front page. So, I think now to be clear how to create your own views for the content type. You create a file for the view add in whatever markup you need for that view. And add that to the wall to configuration down here in the views of it. That's working for everyone. I'd like to continue but wait till I just get some feedback on how people going at that. Okay. Okay. I guess. Okay. This would have been the image for the success story. Send that to the slack. Go back to success story place. Not with me. Go away. Okay. On the image we're getting the image can we get the different sizes like thumbnail or mini. Yeah, you can just add dot. 
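Pulling the pieces just described together, the view might now look roughly like this (a sketch; it assumes the lead image behaviour stores the image under content.image and the caption under content.image_caption, as described above):

// src/components/views/SuccessStoryView.jsx
import React from 'react';
import { DefaultView } from '@plone/volto/components';
import { flattenToAppURL } from '@plone/volto/helpers';

const SuccessStoryView = (props) => {
  const content = props.content;
  return (
    <>
      {content.image && (
        <img
          className="lead image"
          src={flattenToAppURL(content.image.download)}
          alt={content.image_caption || ''}
        />
      )}
      {/* DefaultView from Volto core already knows how to render the blocks */}
      <DefaultView {...props} />
    </>
  );
};

export default SuccessStoryView;

As for the question about thumbnail or mini sizes: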
You can just add a parameter and you should be getting the image scale. So all the information about the page you get as always in this request. Okay. Yeah. You would have to not go to image dot download but to image dot scale dot for example, There is a title. Yeah, I see. There you go. Okay. Thank you. Let's lose for large for our use case. There you go. Okay. Let's just quickly also at the CSS for that component. I missed that. There we go. This is now should be the term in this is office page. I'm not entirely sure which page of clone.com we were we are actually trying to rebuild here. I guess. Maybe this layer just not even available anymore on clone.com. Probably not the whole idea to show the how you customize that content view is great. Thank you. Yeah. Yeah, I think you get the twist of it. Yeah. And when you're done just at this bunch of CSS again to make that look a little bit more pretty. So, let's go on to the next one. And that's again a block. So, for that, I have, I have some small exercise for you to do on your own. If possible, just create a new block called highlights block with the view and edit part that both just say hi I'm the highlight edit view and I'm high I'm the highlight view block and add those to your project. Let me know when you're done with that. And when you have questions, always feel free to ask. Okay. So, I still have some problems with success story. Okay. Yeah, I show you the success story. So, let's try to replicate. Yeah, please don't, please don't do that. You need to import the default view from at clone, both of components. The idea is that we use the default Volta view that already has the blocks rendering engine implemented to show the blocks in your success story view. So, delete your default view dot face x, again, and use this line of code to import the original Volta default view to use that. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Yes, I know I was talking to the team on the office. Okay. I'm a bit confused on what we're supposed to do with config.js. Okay. And that is blocks highlight. And if everyone's okay with that, I can continue my sharing and show you how you should have done that. And hopefully, they clear up some problems people might be having. So, yeah. So, to begin with what I started with and what you also should have done in the beginning is create those added and view jsx components just as with the main slider in their own folder in the blocks directory. And then, of course, with that simple text in there. So, what we already did for the main slider and the success story of you import them to our index file and export them again. And then go to the conflict.js. And then we add here. And in there, we at first also import those two blocks, just as we did with the main slider. And then we use the same configuration boilerplate as for the main slider for the highlights blocks so inside of our custom blocks object. We create another object called highlights. And then we use the same image. Has the title highlights ID highlights. For our case uses the same icon as the main slider block. And then we use the same image to the common group. Use the imported react objects for the view and for the added respectively and leave the rest as it is. For this here for the blocks object on here we do not need to touch it anymore because we already added the whole custom blocks object down here. Did that, did that help. Yes, I'm nodding my head. So for our front page. We now should be able to add that highlights block. 
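For the highlights block the registration is the same boilerplate again; as a sketch, the extra entry inside customBlocks might look like this (HighlightsViewBlock and HighlightsEditBlock are imported through the components index exactly as for the main slider):

highlights: {
  id: 'highlights',
  title: 'Highlights',
  icon: sliderSVG,           // reusing the slider icon for now
  group: 'common',
  view: HighlightsViewBlock,
  edit: HighlightsEditBlock,
  restricted: false,
  mostUsed: true,
  security: { addPermission: [], view: [] },
},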
Just as we did with the main slider block. Is that is everyone now at the same state as I am. Could you could at least some of you come confirm that in the chat. Or vocally. I think you can go ahead. I can. Okay. Yes. Good. So, now that we have the block at least registered with water and can add and remove that. We now should continue to add some styling to that. And the idea is to have that block you look like this part of the clone page. And for the sake of some publicity. We don't want to add all those links and text here manually will just use images for that. And so now first, I quickly upload the two images you want to use here. To the slack again. And the highlights. As your highlights, you and edit components. Like this. And you can copy this whole mark up into your view. There's X. For the highlights block. In here we use another helper from semantic you I react. And we have a good component. That's a semantic UI. UI component that makes it a bit easier to create responsive quick layouts. And your components. And we are looking at our. We should be able to see. This three column grid layout. Those. The custom overrides. As always all styling just goes in custom overrides. There we go. So, could you put CSS for each component in the components folder. And imported into that JSX, or does it have to be in. Yes, you could. Okay. You could put. I think you should. I don't know if you can, but you should not put the CSS into the same folder. That's the component that might. Might be breaking, but it could also be possible. I honestly have not tried that, but I think if you fit the path from here. I think it's not exactly that should also work. At least what we're doing at kid concept. What I think is not really the optimal solution is just dump all the CSS in the custom overrides file with. And as the benefit of having all your CSS in place and don't having to look around for stuff, especially if it is CSS code that might be used by several blocks. On the other hand, you end up with like 2000 lines of CSS files, which is also not the best thing to handle. There is no generally accepted rule on how to do that best. Okay, when you want to split it up in different files, I would recommend that you do it like the Volta core does it. Because there when you have a look at the theme file and to the pasta Naga. In the folder, you have a bunch of less files for all different kinds of components. But as you can see, this can also get very, very bloated very fast and you can have discussions what CSS has to go and what folder is definitely when it's called that is used by several components. In the end, you need to kind of figure that out. What works best for your project and for your use case on the fly. I think I can continue a bit more in here. I also want to have these parts down here for that I also have images prepared for you that I will upload just like now, which are what images do I need highlights blown up in the highlights news. No, never mind. I already added those. Sorry, let's look at that step later. Okay. The next thing we want to do is make our highlights block a little bit more dynamic dynamic in regards of success stories for Jacob. Can you just wait for a second. The video has this like, can you show me your. 
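The markup copied over for the highlights block uses the Grid component from semantic-ui-react to get those three columns; as a reduced sketch (column contents shortened, class names assumed):

// src/components/blocks/highlights/View.jsx (reduced sketch)
import React from 'react';
import { Grid } from 'semantic-ui-react';

const View = () => (
  <div className="block highlights">
    <Grid columns={3} stackable>
      <Grid.Column>{/* highlights teaser images */}</Grid.Column>
      <Grid.Column>{/* recent success stories will go here */}</Grid.Column>
      <Grid.Column>{/* news and events column */}</Grid.Column>
    </Grid>
  </div>
);

export default View;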
You have those indexed as files they can also get very bloated and at least what I like to do is to group those imports by topic so I have here a group of all the blocks and if I'd had would add more views I would add them here and another group so you want to check the indexed file for something. Okay. So, the mirror Michael are you both. Yeah. And for that, we want to show the recent success stories just as this recent blown launches column does on the phone.com website. For that, we create a small component called recent success stories that face X and our highlights directory. We go paste this react mark up in there. And then we go back to the view of our success stories or highlights component. And in the recent success stories column, which is here the middle one, you add this recent success stories component. Instead of the body here part. It will complain again because on person, the component is not imported. So let us quickly import it here at the top. And on the other hand, this ID variable is again undefined, you do the same as we did with the content in the success story view. Here right above the return statement of the view component. We add this small declaration. And then we add the already set with a successor review alternatively you could also just directly access the props here at the bottom. And then we add some metal personal taste and how you structure your code. Okay. I think most people should be done with that. If you're done, your front page should now look like this. So for the middle column here for highlight block, that should be this list of success stories paragraph. Okay. Yeah, I'll continue now. Now, this is where stuff really gets interesting because we will now be querying actively querying the plan West API for informations. I'll quickly copy over this code into my recent success stories. And from here, explain to you what we are doing. Before I do that, maybe. Okay. So, people should be there now. So what we're doing here. First thing that's new is we import an action from the default actions that are already implemented and flown. In our case, the search content action. Then we import the use dispatch and use selector hooks from react redux to be able to dispatch that action to the ball to Redux and finally through the middle where to the rest of API. And then here we again have that link component. Here we again assign the ID object to props dot ID. And then we assign the search separate requests object to the results of the use selector function we imported here from redux. And then that then in turn, selects the requested state from the Redux store state search sub request so in the Redux store there will be search object with sub requests object. And this is what we want to get. And we assign create the new function dispatch. And assign the use dispatch we import from here to that. And finally, we say our results of our query shall be the search sub requests, if available, this syntax here with the question mark in the inside the query has the effect that the rest of this thing here will only be searched for or use if it's actually there if we don't have this question mark in here. And this, this part here is not available in the object because for example, the rest API query hasn't finished yet. We would get an error. This way, the request will only be the object will only be filled as far as here if the data is actually available. This syntax here actually is not an array. This is the JavaScript object. 
And this is the record notation, which means the results are always for this, this part here will be replaced by whatever value is assigned here in the ID thing so if ID would be number one, this here would be replaced by number one with the syntax. And down here we have the react use effect hook. This is a little bit reminiscent of the old react lifecycle methods I talked a bit about yesterday. So let's summarize it really quick. This is something that the stuff in here inside of these brackets here will be executed every time the component start changes or the component reloads or remounts. So every time the complete component loads, the search content action from Walter here is dispatched to the Walter Redox engine with the following parameters. Those are parameters from blown rest API. So that's search content action will query I think the clone search endpoint on path route with a parameter sort on created meter data fields all and the limit portal types only success story. And the ID as indefinite parameter to save it in the right ID inside the red ox state. And then this part here is the dependency array. That's something that goes a little bit deep to explain now. So down here, we have the final markup what we display so if results is available. We can type over the results and return each result as story and then we create for each story and I with a key of the idea of that story and then put their link with a story title. And see if that works. And as we can see, it actually does I created that one success story called title. If I add another success story. Now, called. Let's grab something from clone.com. Yes, they are. And then we can add it to our front page that will appear over there. And then we can quickly show you in red ox how that works for that I have installed in my brother red ox development tools with our nice little lead on to debug and get a better understanding on how red ox works. We reload the page. We see other with users and actions being called in our case. We trigger the search content reducer. And then we can see the state of our whole store. And as we defined here, the search sub requests variable is here reminiscent of the third sub requests. And this is the ID of that request. This is the sub requests mechanic is used if in case you have several blocks that query the same API endpoint, you can then have several sub requests from that API endpoint with the respective IDs of the block. And there you can see here we've got the information for our two success stories we queried. Are there are are there any questions on that topic because I know this is a little bit more complex. The only thing I look at is passing the review state part of the query, but I think that like if our success stories are published or private or the workflow happens there. When they are the review state is private. That's work that the progress API takes care for you so when you're an anonymous anonymous user or user that does not have access to that resource, the blown rest API will not give that to you. This is part of the security mechanisms and her and the phone rest API. Yeah, Miro asks where to get more information on the blown rest API. I'll give you guys documentation for that. And here are all of the endpoints and how they work. And if you have specific questions regarding the blown rest API, feel free to ask the developers of that create issues in the blown rest API. Repository. Probably there is a forum also on community on your Sorry, say again. 
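To condense the whole recent success stories component into one place, a sketch of what was just explained (the exact query option names and the SuccessStory portal type are taken from the description above and should be checked against your own setup):

// src/components/blocks/highlights/RecentSuccessStories.jsx (sketch)
import React, { useEffect } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { Link } from 'react-router-dom';
import { searchContent } from '@plone/volto/actions';
import { flattenToAppURL } from '@plone/volto/helpers';

const RecentSuccessStories = (props) => {
  const { id } = props;
  const dispatch = useDispatch();

  // one sub-request per block id, so several blocks can query the API in parallel
  const searchSubrequests = useSelector((state) => state.search.subrequests);
  const results = searchSubrequests?.[id]?.items;

  useEffect(() => {
    // dispatched through the Volto Redux middleware to the plone.restapi search endpoint
    dispatch(
      searchContent(
        '/',
        {
          portal_type: ['SuccessStory'],
          sort_on: 'created',
          metadata_fields: '_all',
          b_size: 5,
        },
        id, // stores the result under search.subrequests[id]
      ),
    );
  }, [dispatch, id]);

  return (
    <ul>
      {results &&
        results.map((story) => (
          <li key={story['@id']}>
            <Link to={flattenToAppURL(story['@id'])}>{story.title}</Link>
          </li>
        ))}
    </ul>
  );
};

export default RecentSuccessStories;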
There is a forum on community.clone.org That might be the case or a topic in front of me. Might be the case. I think there doesn't seem so. I think best way to get in touch with the blown rest API guys would be to use guitar but you can also ask because I'm sitting with a specific question. I'm sitting in the same office with a maintainer of blown rest API so usually if I have questions about that I just go to the next room and ask Timo. Okay. You can always ask from community. I can find Jacob. I can find a content thing working for every one of you. I stopped it to follow up to pay more attention. So, I am like, some point behind. We are almost. I think we're almost done with this training here. Last thing is to move the markup of your highlights block. For that you can just copy from the training. And paste that in here. I think you should do the same. I think it does not even matter. So, give me a minute to send them in this like those are highlight logos. Highlights small blown. Con. And. Yeah, that's it. Those two images also go into the highlights folder. Same game again. Reload the page. And what you're seeing on my screen should be how your page looks in the end. I think. Let me take a look at the. I think there's one. One part left. I don't know. I, this will probably take another 20 minutes, at least, I guess. I think it's about how to make your blocks editable and add the logic for that in there. Pretty interesting, but would also take a little bit more time. It's already pretty late. Are you guys willing to also do that with me or would you say that's enough. This will be the last, the final chapter of our training. I'm going to get a bit of feedback on that. Yeah. Mike is out. Michael, what about you? Would you be willing to stay another 20 to 30 minutes for the last chapter? Adrian. What want to do. We really appreciate some feedback. By the way, how many people are we left anyways, 18. Okay, yeah, then. Give me give me two minutes. I need to go to the bathroom real quick. And then we'll continue on that. I'll try to kind of rough through that. Yeah. Okay. I'm back. I'd say I'll just also do this part just to have it in the recording so the people who want to drop out can do so. And then just keep going to get that done. And if you're still interested and want to take a look at that, just take the recording that will be online hopefully tomorrow or at some time this week. When we added a block. The configuration for the block is almost entirely done via the sidebar that we have here on the right side of our pay. For example, if I add this image block and upload. And some example image over here, I can configure that on this right sidebar. And what I want to show now is how to create your own very own sidebar to configure whatever block you want to configure. And then we let me quickly. Okay, this, this part is also not specifically for the project we're currently working on will create that's creating separate block as an example for that. I really quickly create a new block as we learned this will be the teaser block at the view as X and edit as X really quick and then. And that the markup we are provided with in there. We create a separate file for the sidebar. Don't worry if I'm rushing through this really quick or explain when I have all the code in place and my editor. Go. This whole thing. That one goes to the view. Lock quickly. Config. To the X box. Paste. Let them to our important list. And. We need to replace this whole tire mess with the block. 
Again, same for uppercase. Tire. This uppercase block. Okay. Think we're good for explanation. What we have here is the entry point for the edit component of our teaser block. And then we have the source content from another page basically and you and the added view you are able to display at change what content you want to display there for example, the reference some other page and show the preview image and the headline and the description of that page. And then we have the case here. The edit component here consists of two sections. One hand we have the teaser body. This is something I quickly created here. And that shows the markup of the finished block. And the other thing we have is the sidebar portal, which we import from clone core and add our custom sidebar teaser sidebar we created over in the teaser sidebar.js file. And display that there the teaser sidebar gets past the data, the block and the on change block function that we get from the blocks engine here inside of our props for the props there are. This is a little bit different notation as we had before instead of this, we here directly define what props we expect and can use them directly in the components without doing writing like props dot something. But the concept works still the same. Now we take a look at our teaser sidebar and also a man if any appearances of the tire keyword on here. I don't think so. The teaser sidebar only consists of a bit of markup with this headline showing this is the sidebar for the teaser. And then we will see that in a minute, how that looks like. And then we have the teaser data part the teaser data part is a small component created to contain all the information that then other configuration information that then is displayed actually in the sidebar. And we also paste that directly inside of this teaser sidebar component, but to have it more clean. We move that into a separate component for our case on which we'll have only a five minutes. And we're meeting with us close. I'll quickly do the thing with here again. I think I can do it in five minutes. We should have the thing working now when we reload our page. We go to edit and add our teaser block. We need to have the teaser. I think that if you do. No, no, there's still some import with tire. I think that's it. I hope when you go through the recording a little bit slower and pause on what I'm doing, you might get a little bit more of an idea. What's going on here. At the moment, I'm just doing that to have it in the recording so people can check that out afterwards. So we know at that teaser component and go to the blocks that exactly there we have implemented this content browser. And from there we can add some content we want to show in the teaser block and that then in turn will be displayed here and you can click on that and get to there. And this object browser that you open this defined here in the teaser data using the open object browser function from the clone core. So I think we can take another look at the training resources if you're more interested in that. I think I can wrap the training up for now. I think I have a minute or so left until the meeting stops so I'm very grateful for you to have you with me. You learned a lot. Understood everything. If you have more questions feel free to text me in the clones like I'll help you. And if you need to rewatch something, there's the recording. Thank you very much. Let's see whether the meeting will close on a thank you very much Jacob. Nice to meet you.
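For those rewatching this last, rather rushed chapter: the core idea of the teaser block's edit component is the SidebarPortal, roughly like this (a sketch; TeaserBody and TeaserSidebar stand for the two files described above, and the selected prop is passed in by the blocks engine and controls when the sidebar is shown):

// src/components/blocks/teaser/Edit.jsx (reduced sketch)
import React from 'react';
import { SidebarPortal } from '@plone/volto/components';
import TeaserBody from './Body';
import TeaserSidebar from './TeaserSidebar';

const Edit = (props) => {
  const { data, block, selected, onChangeBlock } = props;
  return (
    <>
      {/* the block as it will look on the page */}
      <TeaserBody data={data} />
      {/* everything inside SidebarPortal is rendered into the right-hand sidebar */}
      <SidebarPortal selected={selected}>
        <TeaserSidebar data={data} block={block} onChangeBlock={onChangeBlock} />
      </SidebarPortal>
    </>
  );
};

export default Edit;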
Chapters:
0:00 Intro
2:02 Continuation of Part 1 React Training: React Router
15:14 What is Volto and how does it work?
21:00 Bootstrapping a Volto Project using create-volto-app
36:19 Training Project goals
37:20 Theming in Volto
45:37 Header styling
1:00:54 Footer
1:08:12 Hide Breadcrumbs
1:08:56 Introduction to Blocks
1:32:35 Break + some troubleshooting + excursus on the ES6 spread operator
1:57:50 Blocks - Main Slider
2:27:10 Views - Success Story
2:52:48 Blocks - Highlight
3:44:09 Blocks - Edit component
So I've just typed in the chat and you can feel free to type in the chat if you don't want to talk. I'll just give another two minutes for introductions. So, two minute timer. Hi, I'm Jen Jennifer graph and I'm just outside Philadelphia. Okay, welcome Jen. Welcome Linda. Anybody else I see a Laura. So welcome. We don't have polling capabilities. So what what you can do is just on a scale of one to 10, just put a number in the chat where 10 is I feel like an expert. And then one is I feel clueless when it comes to editing with clone. So if you're, if you feel really comfortable with content management with clone and stuff like that you can put a 10, in which case you help me in this presentation. And if you're maybe feeling a little less comfortable you can give me just a gauge with if there's somewhere between one. I'm really new and 10 I'm a super expert. So just put your number in the chat, and that will help me to kind of gauge the audience. Whoa, my face. So you're going to be helping us. I don't know. I don't know about that might be an exaggeration. We'll see we'll see. It will come out as we move forward right. Okay, so we have an interesting range I'm seeing from two to eight. So you guys know who to call on. There's Matthias and there's Lee, I say Lee or I'll say Lee because that's what I see here. Right, so we have a good idea of the ranges. So I'm going to bring up my presentation now. I am just for safety. I have I'm going to have to connect a second computer, which I would go I'm going to connect to my phone's data. So I'm going to do that shortly. The reason I'm doing that is if internet goes down. I have a little bit of redundancy because my phone is on a different connection than my main internet, but the quality may not be as good. So, just making sure you're aware of that. Okay. So let me share my screen and share my presentation. Okay, I think I shared the wrong screen. Okay. I'm sure I shared the right screen. Are you seeing my screen now. Okay, great. I see a thumbs up there. What are you, what are you, you should be, hello. Not seeing the screen. Oh, I saw a thumbs up. I thought, I thought that make you say in the screen. Just give me a second here. Okay, can you see the screen now. Yes, yeah, some of us have seen it. Yes. Okay, great. Great. Great. All right, thanks, Mattias. So there's some very basic assumptions that I'm making for this presentation for this workshop. I think they, I think everybody here passes the assumptions just the fact that you can log into zoom. So, but it doesn't hurt to make sure that we're using the same vocabulary. So, but it doesn't hurt to make sure that you have different words for these things. So let's just make sure. So here's a quick review of tools we use on the web. The browser typically has an address bar. You put addresses in there and addresses are sometimes referred to as URLs, and the content of the site shows up inside of the browser. So this is very basic. But that's pretty much the anatomy as it as as we experience websites. So for today, what we want to do is just get a little background of clone demo the first steps. Go through some of the principles and concepts of content management. We're going to look at user and group management, managing users groups and permissions. And then we're going to talk about some tips and tricks. Okay. So I'm trying to do this presentation in about chunks of one hour. So I'm going to set my time again. 
And I'm actually setting it for 50 five minutes so that I have a kind of five minute warning. I am capable of going way past the time. So I need these timers to keep me on track. Okay. So I'll give you a little background of clone, although based on the responses, I think most of us know that loan is an enterprise content management system. It allows non technical people to create and maintain information. And that can be for websites or intranets. I apologize I changed the layout of the slides at the last minute and I see that I got caught with a little gremlin there. So typically the experience of using clone involves getting content into clone and content could be text. You know, like what we have here. And it's most commonly referred to as a page when you bring in text in its basic form into the plan you usually add it to what is called a page. You're going to use the word page. Most of the times. But in clone you'll also see referred to as a document. And that can cause some confusion so I prefer the term pages. So typically, you'll take take some text and you'll put it into the phone. You might also have some images. And you can take those images and put them into the phone. Okay, so this is this is illustrative of the idea. So any content that you have. You can get it into the phone. As a page as a picture. And there are other ways of getting it in which we'll talk about. So basically, at its first level is a storage system. It stores your pages, photos and images, documents which implone their effort to us files, and just to reduce confusion I will talk about documents as files, and news and events, videos and audio files. So, the whole idea is you can have different types of information, which you're able to add to be your own site. And all of this information together referred to as content, which is why we use the phrase content management. I'm going through this just to make sure we're on the same page. And we'll probably spend another two minutes just making sure we're comfortable with that. Right. One of the one of the overarching principles with clone is the ability to group your content. And the different ways of grouping it but the most basic way of organizing your content is into folders. So, clone allows you to take your content and then place that content into folders. And based on the purpose of the content or the category of the content, you may have different folders for different content, but you also may have different folders because it's for, for example, a different department. And you may have a university, just as an example. So, clone is able to take your content, and then individually render a piece of content. For example, if you have a page, you could display a single page with clone. And this is what this is representative of an example of a single page with clone. You may be displaying a list in page. The difference between the page on the left and the page on the right has to do with their, with what is being presented. This is a single piece of content. Whereas this view here has several bits of content displayed as a listing. So if you're comparing it to your computer, this would be the file browser showing all the things that are available. Whereas over here might be an individual file. Notice that we tend to have a title and a summary for individual bits of content. That same title and summary shows up in the listing. So you're really encouraged to have a description which becomes your summary. 
If you do not have a description, then your listings will look empty because we'll only have the title. At Altaroo, which is the clone service provider that I run, we prefer a clone because of its flexibility and rich feature set. One of the things that I've found is we don't always do clone projects, but there are some projects because of a need for complex permissions and workflow and organizing content into hierarchies. So clone is really useful in those cases. For some sites, you only need something simple, just a published and private state. And maybe you don't need hierarchy. Everything is a flat structure with everything on the same level. Then that's not as important. You can get away without using something like clone. We've actually been building clone sites for just about 20 years. And some of our customers have been with us for more than a decade, some 15 years and so. And the same clone site and we upgrade it, and we update the look, and they keep going. So some of these sites, this example here is actually a site that was retired because the organization decided to standardize on a different content management system. And that's the only reason they were fine with their site actually. And this was a customer that we worked with for 10 years or more. So just understand that clone is great for the long haul and it works with you and you can really build on it. Finally, just a little bit of information. Again, some of you may be aware of this, but these are probably the top three websites that you want to become familiar with if you're learning clone. There's one other community dot clone dot org, which is great if you want to ask questions and interact with people in the phone community. Okay, I am going to just ask if there are any questions at this point. This would be a good time for you to ask questions and I will ask a couple questions as well. I'm curious, the types of problems that you may have at the moment that you're trying to solve with clone. But, but before that, does anybody have any questions, I'll make a note of them. If it if this is too much to answer once and get back to it. Sorry, go ahead. So far. Okay. Okay, so no, no questions yet. Please write your questions in the chat. I will. Each time I take a little break, I will peek at the chat. So if you have any questions, please put them in the chat so that we can address them. So I have a question and I'll type it in the chat. What problems are you trying to solve with clone now. So, all of you here are here because you have to interact with a clone site. That's, I think that's a reasonable assumption. If not, just tell me now. So I'm aware. And for some of you, you already know the types of problems that you're going to need to solve in terms of what do I need to present. How do I need to present this information, what type of information. So, please feel free to again type in the chat or you can unmute and just share with us the problems that you are currently finding that you have to address. Okay. Welcome by your teeth. Not sure if I'm pronouncing that right or if that's a shortening. Brenda. Okay. Welcome Brenda Brenda Brenda your teeth. She says she has a site a new site and adding content has been difficult. And she hopes to gain new skills to learn best practices, which is great. And this is useful to you. Brenda. And, but they are some looking to use latest version of blown 5.2 up for faculty portal at my new university in Abuja. Okay, great. So that's important. So, it's interesting. 
I'm always discovering places where blown is being used. So, a lot of people didn't know that blown is using Jamaica. And, but here's is letting us know that it's been used in Nigeria. And, of course, we know it's using the states and, and other places as well. Okay, any, any more comments about things that you've been trying to do with clone and, or you hope to do with clone. If not, we'll move on to the next section. So just give a little bit more time. Okay. So, let's get to the demo and first steps with clone. So, in this section, just remind me I need to have my laptop ready in case we have a pocket. And so, unfortunately, we have to be careful about that. And in case there's internet issues. Okay. So, we're going to go to demo dot clone dot be. And if this screen share will continue to work. I'm not sure if it will. Let me know if you're seeing my screen. It should be at the clone demos page now. Okay, great. Thank you, Brenda. So this is a close to default clone site. What they've done for the clone demo sites is they've added content. They've added multi lingual capabilities. So the demo site is actually a better showcase of what prone is capable of than the default clone once you've just installed it. And for that reason, I like using the demo site, rather than the brand new installation of clone. Also, it makes it easy because I can just send you here and everybody can go here and browse, especially for virtual training that's great. So this demo site is actually running clone six. I would prefer to use clone five two, but if we can't get a copy of my to settle for clone six. I think what we need to do is just say, I think there's a way to change the URL. See if we can do that really quickly. Right. So I've so we'll continue using clone six. Just bear in mind that clone six is still in beta. But it's, it's the closest to the stable one, clone 5.2. So, so let me jump back to my presentation. So what I want you to do is head to the demo folder. So there's a special folder called demo. And underneath the demo folder, you're going to see that there is a page called a page. And we're going to click on it. One of the principles of clone is that it supports this idea of listings, as well as individual pages or individual content. What we're looking at here is a page, but there's a little bit more to it. So just to make sure that you're clear on what's happening. I'm just going to annotate this a little bit. My annotation software is taking a little while to load. Great. So, what exactly are we looking at in front of us here? Well, the actual content is in this area here. On the left is clones navigation. On the right is a portlet. And sometimes you can have multiple portlets. So, this whole area here allows you to have multiple portlets and think of those as shared things that can show up next to your content. I'm presenting by a window. So if you hear vehicles driving fast, it's a compromise between getting good daylight and getting vehicles making noise. The joys of working from home. And on the other side, these are also portlets. Sometimes you'll find portlets in the footer as well. The actual content is right here. So this is the actual content. This sidebar area here is sometimes called left portlets because it holds the left portlets. This one here is called the right portlets. And we'll talk about that a little bit. So I just want to make sure you're comfortable with that idea. Please, if you're having any concerns, just put them in the chat. I'll keep my eye on the chat. 
Sometimes it might take me a minute or two to notice it. I noticed that, for example, Matthias had mentioned that he's actually looking to integrate his own system with a learning management system. And that sounds interesting. That's not what we're going to look at today, though. But nonetheless, it sounds interesting. Right. So, I'll take that back to the presentation. So a couple of questions. For those of you who may be following along and maybe actually went to the site, I'll help you cheat a little bit by pasting it in the chat. Question. So, I'll tell you a little bit about the URL in the address bar of your browser when you go to the demo and our page. I really want you to pay attention to these things because Plone does something special with the URLs. They tend to reflect the titles of your folders and your pages. So even if you look in the chat, the message I just sent. The Power user can deduce a lot about the site by just reading the URLs. So, in this case, the URL, the pattern that we see is the name of the site. And in this case, it's the English version. And then it's in a folder called demo. And it's a page called a page. Okay. Secondly, just just for the sake of it, let's let's browse around a little more to see if we can observe that pattern throughout the site. So, if I go to a news item, and I look at the URL, it's going to say a news item. Similarly, I'm going to put it in a special folder and a page inside of that folder. Again, the titles tend to reflect and the URLs tend to reflect where you are. That's really useful for somebody who needs to kind of get a feel for where they are in a site. And so these URLs are great for Power users. Okay. All right, so let's look at the next exercise. As you navigate around, by now you should be able to see the structure on each page. And you'll notice that they tend to have title and a description and some text, the main text. Okay. The, that's very common. So if you, if you go around a clone site, you should see this kind of approach. Now, if your site has a theme applied to it, sometimes that can affect the way it's presented or sometimes certain aspects of clone are hidden by the theme that you're using. This is roughly how content is displayed. And as I said before, URLs in-plone reflect the way that content is organized. So for this exercise, I would like you to guess a little bit about the content by reading the URL. Can we have a volunteer who doesn't mind speaking and just saying what they think about the content based on the URL? Okay. Everybody's shy or finding a hard time finding the unmute button. Okay. I'm not seeing the chat because my screen is full screen. So let me see if I can get back to the chat in case someone sending me a message. No, no messages. All right, well, let's, let's discuss what we're looking at here. So we already kind of looked at the first URL. The second URL says to me that the type of content is probably news just based on its name. So it's a title of the news from 2017. And this is the title of the news item. So it's encouraging people to submit talks for the 2017 conference. This one here tells me that this is FBI.gov, which by the way, the FBI does use clone. They have an area for services. And they, one of their services, or one category of services, laboratory services. And in this case, the service is biometric analysis. If my theory is correct, then I should be able to click that link. And it should take me to the FBI website. 
And what should load up is something to do with biometric analysis. And indeed, that is what we're seeing here. Great. So this idea of reading the, the URLs is very helpful. A lot of the times with clone. Okay. Right. So let's make some distinction between how content types are displayed versus how content types are listed. So in this exercise, we're looking at the idea of standalone pages, images and listings. These are very different concepts in clone and kind of understanding this can sometimes help you with your content management. So we're going to go back to the demo clone site. And let's see what we can discover. So we're back at the demo clone site. Here's a news item. For this news item, it has the title, it has a description or summary, and then has the entire news item. If we go to the demo folder, the demo folder has a summary view or a listing view. And the same news item shows up here. And here is the description. And if we click through here, it takes us to the full news item. So one of the concepts I mentioned earlier is this idea that listings in clone are a way of kind of giving you a summary of different bits of content. And so I think of this as a listing page, and I distinguish it from pages or content individual bits of content. Hopefully you would have noticed that even for the news item, you could find news item in the listing view, but you could also, you can also find that same news item by going directly to it. So, so it's found in different places. So the image, if we go to demo and just take a peek at the image file, you notice that there's a preview of the image in the sidebar. In addition to that, there is the actual image itself. In this case, this image has a title but no description. So as a result, in terms of the structure we're used to, there's the main content, the main body. There's a title, but no description. So what we can do is we can try and add a title and see what happens. So I just, I'll just write that down as an experiment that some of you can try. What happens if we add a description, or if you prefer the word summary to an image. So that's an experiment that we can try out. And another thing to point out here, most of this is managed by clone. So you don't have to, you know, worry about placing an image in the navigation, the navigation will preview the image automatically. Okay. Presentation is loading. So give me a second. So for this exercise, I'm going to ask you to log in and log out. And we're going to use, if you may already have a clone website, and what you'll discover is if you put slash login at the end of the URL for your website. An unmodified clone site would provide you with a screen where you can type your login name and password. If you don't have a clone site to work with at the moment, you can use the demo on that we're using. I'll just send the link again. So what I want you to do is just experience the login experience. Now, if you're using the demo, you'll see that there are some ready made logins and also they have a shortcut, which you may not see in every clone site because maybe your theme is different. So what I've done is they've given you the ability to log in as different types of users. So for now, if you're using this site, you can log in as a manager and click login. Let's do that. The way that I suggested, which is typing slash login. And normally you type your login and password, but they've made it a little easier for you and login. Okay, once you're logged in, what you're going to see is a sidebar show. 
Okay, now, this is unmodified on themed site. But I would say for 70% of the time, the sidebar will look like this. Some people place their sidebar at the top. And if your manager has done that, you'll see, instead of a sidebar, you'll see a top bar. So just so that you're aware of that, that is something in site setup where you can move the sidebar. And definitely that was so in all the versions of clone. Let me just double check if that's still the case for clone six. So if you move that set toolbar position, you can set it to the top. Great. So, if you prefer to have your toolbar at the top, you can do that. Still does the same thing. But now, all your tools are above your site rather than running along the left of your site. But that's not the default. The default behavior in clone is for the toolbar to be at the side and specifically the side left. So let's just save to get that back. Okay, so I'm assuming my working assumption is that you're working along and you've tried to log in. But send me a message if you have any questions about logging in. This is a good time to ask those questions. Okay. Now, to log out of a clone site, you can use, you can navigate to your, your profile, your, your little profile icon and click log out, or you can just add slash log out to the end of your website URL. So there's more than one way to do it. So just as an example, let's go to the demo site. And if I want to log out, I can go log out slash log out. And that will log me out. Or I can be logged in and use the option at the bottom here. Either way works. Most importantly, you just want to make sure that you're clear on this idea of being able to log out, whether you prefer to put slash log out, or otherwise. Okay, any questions. Remember, you can put your questions in the chat. If not, then I'll move to the next item. So here are a couple of experiments that you can try out. What if you're already logged in. What do you think would happen if you went to the slash login address again. Anybody want to hazard I guess, maybe type in the chat. What do you think would happen if you're already logged in, and then you try to visit slash login. No thoughts. Okay, so so Linda says it's going to boot you out. I believe it takes you back. Brenda says it takes you back, not clear on what you mean by takes you back in this context Brenda. Oh, back to login. All right, well let's see. So we're going to test this out. So I'm going to put slash login. And I guess this is partially check it if you're paying attention because we did this already. Let me just confirm. I'm actually logged in now. So let's just confirm that I'm already logged in. So not good example. So let me log in first. And let's log in using the by appending. Okay. So I'm going to log in by appending and I'm going to log in as manager since that's what we've been doing. And how do we know we're logged in. We see this little sidebar here. Let's pretend that I visited somewhere else on the site. So I'm going to head to demo. So we're fully logged in. No question about it. And then I'm going to go to slash login. And indeed, what's interesting here is it does provide me with a login page. But in addition to that the sidebar is still visible, because I'm already logged in. So that I would say that's a parking plan, but it's it, it makes a point that it faithfully takes it to the login page. So let's log in. All right. All right, what happens when you put the wrong credentials. 
It's good to be familiar with what it looks like when you put in the wrong credentials. So let's log out and log in with the wrong information. Just to see what does blown do. If I try to log in with the wrong information. I'm going to go to slash login. I'm going to put password that is definitely wrong. And the result is a screen that says the login has failed. Okay. And then if I actually put in the correct information, it should let me in. All right. So quick question. Usually at this point we'd have sweeties to give out or something like that, or candy. I suppose depending on where in the world, the pilot sweeties are candy. But I can, I'll give you a virtual high five or something. Can you state two ways of logging out. At least two ways of logging out. Feel free to unmute. I don't mind hearing people respond. Two ways of logging out. I think the first way you should was using the URL right just type in lockout at the end of the URL for the site. Right. So that's your. So I'll type back in the chat. Yeah. And the other one would be using the profile link in the control panel. Right. So, so the other way is. So one way is to play in the URL. But the other way is to just go to your profile link as you said, and just click log out. Yes, so you get the virtual high five. I like you knew this long ago. All right. So let's move on. We're going to. So by now everybody should be really confident about logging in and logging out. They should understand a couple quirks about logging and log out system. So we're going to move on unless there is a question. All right. So I'm not hearing any pressing questions are seeing any pressing questions. All right. So now we're going to look at preferences. Okay. Excuse me a second. Just need to change the setting on my phone. Okay. Right preferences. So within prone underneath your navigation underneath your profile menu. One of the options is preferences. And if you go into that section, you'll see that you can change certain settings about your, your users are about yourself. So for example, Jane, this person's name is Jane Doe, and their email address is Jane Doe at example.com. If they prefer to use a different email address for password recovery and things like that, they could change that. You can also change your password. And there are a couple other preferences that you can change. And that is found on the preferences. Okay, let's just look at that quickly. How would you go about managing preferences so let's actually do a quick demo of that. All right. So we're at the site. And we're going to go to preferences. Right. Oh, I see a question that Matthias was asking, will we get to cover theming in prone. Unfortunately, this course does not cover theming. If time allows, maybe I can show you a little bit about theming. Yeah, that would be awesome. If I'm not mistaken, I watched one of your themeing videos on YouTube and it was just great. Yes, yes, yes. That was fun. It's not as, I mean, it's a little trickier than I made it seem. That was for a demo, but we'll see if time allows, we'll see if we can do a little bit of theming. So let's see if we can get to that. So here we are on the preferences. And as you can see, you can change your things like your editor by default, we use tiny mce. You can have other editors installed, like if you prefer a different editor and you've installed it as an add on your site administrator has installed it, you see it as an option here. And each user can choose their preferred editor. 
Can also specify your preferred language and your time zone. Of course, you'll only see time zones that are available for your site. So they're not going to show you every time zone. You can change your password. This is a demo site. I think I should not change a password. So, but on your own site, you would be able to change your own password. Okay, great. At this point, we are about 12 minutes from the schedule break. And I've actually reached the next section, which is going into content management. So, if you guys want to take a five minute stretch. We're ahead of time, which is good. So maybe that would give us a little time at the end to go through things that may be out of the scope of the course, but you might want to get into. We'll see if we can do that. Okay. So I'm going to set my timer for another five minute break. All right. You'll hear the time I'll go off if you're nearby because I want to remain unmuted. Okay. So let's set the timer again. Get into the next section. Hopefully everybody took advantage of the break. Let me mute my laptop. I took the time to have my laptop set up in case there are any issues. So I'll quickly switch machines if we have any issues. Okay. When it comes to the loan. This is a very simplified representation of what I call the content life cycle. It starts with the creation of content and it ends with the deletion of content. Of course, more sophisticated life cycle would allow content to be in more than just published state. So, Plone has the capability to have things in a state called pending. And depending on how you're working with your content, you could archive content. And some sophisticated workflows allow you to do things like that have things in the states associated with the item. So for example, a lab management system where you're tracking samples. There may be several states that are related to the stages of the samples. So then the samples might enter as submitted for or registered. And then the samples may be tested. Then the samples may be approved. And then the samples may be released. And then the samples may be released. So one of the most important stages would be a type of content life cycle. And Plone is perfectly capable of doing workflows like that. But in the simplest form, you have create, edit, publish and private, which I don't have here. And when I think of content management, a comprehensive picture includes the management of users. So it's not just the ability to manage content. That's a simplified way of looking at content management. But the users, what type of permission do those users have in relation to the content? Can they access it? And so on. And then what information do you have about the content, which is not necessarily content itself, but information about the content? So from the perspective of Plone, things are arranged by folders. And then information about the content would be things like what type of content is it? Is it a news item? Is it a page? All of those things are information about the content. And then you're also able to manage things like when. So when is this content, when was this content published or when is it scheduled to be published? And finally, as I hinted before, the users are about who can do what. You know, who can view, who can edit and things like that. So these are key ideas of content management. And in that sense, Plone is a very robust content manager. And they're tradeoffs. There are some content managers that might be easier to work with, but far less comprehensive. 
And in a sense, they may let you down at certain points where you need to be able to do fancy of things. So here is a key concept that you need to be familiar with. And we've kind of hinted at the idea of content types. That's about what type of information. If you want to think of it as kinds of information, Plone is able to know and sort information by the kind of information. If you're logged in, the default available content types will look something like this. And I need to point out a couple of things here. So let's, let's point them out. I have not spoken about collections yet. Collections are a special type of content and we'll get to those. So from my perspective, the basic content types are events, files, images, and when we say files, what are we talking about? PowerPoint presentations, Microsoft Word documents, Excel documents, PDFs, common files that you'd share. Even MP3 files and video files are all considered to be files in terms of the basic usage of Plone. So images are special files in the, they're still files, but you're able to view them. And so JPGs, PNGs, and I don't think you can add SVGs as images in this context. So the basic bitmaps like GIFs, JPGs, and PNGs, or if you prefer, PNGs, GIFs, and PNGs, and, and, yeah. The link content type is literally for you if you needed to have a catalog of important links. This would be a great thing to use. So let's say you wanted to start a site that was about gardening and you wanted the top gardening links around the internet. You would add them as links. And then in a list interview, you'd be able to see all those links of the top gardening sites. So that's the purpose of the link content type. The news item allows you to post news. It's, it is commonly used for announcements. Some people use news items in the same way that they use blog posts. Yeah. And then finally, there is the page. And the page is probably considered the simplest type of content. It's usually text. You can use rich text. You can add images to this rich text. And most generic ish content will be added as a page. Okay, so that's a, that's a tour of the different content types. Let's see if we can talk about the ones that I consider a little more special. So you'll notice that I did not mention collections. And I didn't mention folders. I didn't consider them basic content types. And only because I think these are a little more special because a folder allows you to access a container for other content types. A folder can also be a way of managing your listings. So you can create a folder called 2017, which contains all news for 2017. In that 2017 folder, you would have all news items for 2017. And then you could view that folder. You could view it as a listing. So in that sense, it's kind of special. A collection is a special query. That's the best way I explain a collection. It's a special query in the sense that what you're really saying is let me query for a particular set of criteria. So that could be I want all news items that were published this week. You could create a collection with that predetermined query. And then when someone visits that collection, it would show you all news items that were published this week. You could have more sophisticated queries. You could query for a particular word in any content, in which case it would do a query across different content types. And you can query based on tags. So you could find all content that has a particular subject that's tagged with that subject. 
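For the collection idea, here is a rough sketch of how such a saved query could be created in code with plone.api. The query format comes from plone.app.querystring; the exact operation names can differ between Plone versions, so treat the "selection.any" operation below as an assumption to verify on your own site, and the title is made up.

```python
from plone import api

portal = api.portal.get()

# A collection is essentially a saved catalog query (illustrative title).
recent_news = api.content.create(
    container=portal,
    type='Collection',
    title='Recent news',
)

# Assumed plone.app.querystring format: "all News Items, newest first".
recent_news.query = [{
    'i': 'portal_type',
    'o': 'plone.app.querystring.operation.selection.any',
    'v': ['News Item'],
}]
recent_news.sort_on = 'effective'
recent_news.sort_reversed = True
recent_news.reindexObject()
```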
So content collections are a good way of curation throughout the website. So if there's something that you want to present to people and people are adding new content regularly, that might be relevant to that collection. The collection will automatically update when someone adds something that matches that query. So it's really useful in that sense. So a collection is a smart query and it's an aggregator. You know, just different names for the same thing. All right. So we're going to do an exercise called spot the content type. Just reminding you of the different types of content. We have news items, pages, events, files, folders and collections. These are the default content types. Now, on many prone sites, people create additional content types. And we can talk about that a little later. There is a tool that allows you to define new content types. So for example, if you were having a gardening site again, perhaps you might have a content type called gardening tools. And when you want to add a new gardening tool, you could go to your site and say add new gardening tool. And it would have different fields for that gardening tool. But these are the default types. And again, from my perspective, a collection is a special type, because it's not really about presenting its own self. A collection focuses on presenting what it has found based on a special query. All right. So I have a couple URLs for you. This is a very basic unthemed website. But it's a very good case study of prone being used for sharing content. So try to visit that website. I'll visit it for you as well. And let's see if we can make out what's going on. All right. So if you were to make a guess, what type of content type are we looking at here? So we're looking at a prone site. And it's called, we're looking at a particular URL, which is calendar slash 2016. Okay. I'll go and highlight that for you at the top. So you should see. Pardon me. Event type. All right. So I'm hearing events type was that Linda. Linda put it in the chat. All right. So what we definitely have votes for events. Good. Any other thoughts? All right. So this this website actually. This is just a simple page. This is a page with a table. And then the table is filled in manually. So maybe that was a trick question. But this is just a page. Okay. So that's an idea of what, what you could do if you wanted to put some information up quickly. All right. Let's look at another page. So this one. This one, there's some interesting hints there. This one. What do you think what type of content is this? All right. Linda says event. Is Linda going to say event for all of them? Actually, this one is an event. So high five for Linda. This one is indeed an event. So this one is a little bit of a help. So you can see that the content in blown. Have additional fields, which allow you to include contact numbers and so on. Right. So the URL here is a little bit of a help. It's these events are stored in a folder called events. Now your events don't have to be in a folder, but you can see that there's a folder called events. So in this case, there's a folder called events. And you can also see in the breadcrumbs here. I don't know if you can see my mouse going up and down over these breadcrumbs. That indeed there is a folder called events. And inside of that folder, we have an event called secondary and tertiary swimming championships. As you can see, it has additional information. So where is this event going to be held is going to be held at National Aquatic Center. 
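Since events carry extra fields like start, end and location, here is a hedged sketch of creating one from code with plone.api. The field names match the default Event type in plone.app.contenttypes, but double-check them on your version; the title, dates, venue and folder id are made up, and naive datetimes are shown only for brevity (your site may expect timezone-aware values).

```python
from datetime import datetime
from plone import api

portal = api.portal.get()
events = portal['events']                      # assumes an "events" folder exists

# Default Event fields in plone.app.contenttypes: start, end, location.
swim_meet = api.content.create(
    container=events,
    type='Event',
    title='Swimming Championships',            # illustrative data
    start=datetime(2018, 1, 26, 9, 0),
    end=datetime(2018, 1, 26, 17, 0),
    location='National Aquatic Centre',
)
print(swim_meet.absolute_url())
```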
That event was two years ago, January 2018, and it even includes a time zone, just in case you're not sure, so you know that this is happening in the Jamaica time zone. Okay, next one. My desktop is slowing down, I don't know if I have too many things open, so that may be a good reason to switch to the laptop. Let me see if I can switch to the laptop quickly. Give me about a minute to do this switch. So we're talking about content management, we're looking at a few URLs, we had looked at events, and we were in the middle of playing spot the content type. So I'm going to try and stop sharing from my desktop, if it will allow me to stop sharing. All right, so I will continue sharing from my laptop, because my desktop is getting a little bit slow. And everybody sees my screen still? Okay, great. So where were we? We were looking at this next one. So what do you think is going on here, if you had to hazard a guess? What is this URL? I'm going to paste it in the chat. All right, Linda says an image. Let me scroll down a little more, maybe that will help. You still think it's an image? Okay. Next guess, page. Brenda says information about the item. Matthias says an event. No. All right, so this one is a little tricky. One of the capabilities that Plone has is the default item. So in this case, this is a page which is set as the default item of the water polo folder. For this website, what we've done is we've organized it into different categories, and for each one of them we have pictures, but these pictures are embedded in pages. So I don't know if that counts as a trick question, but I just wanted to highlight it, and I'll type it in the chat: folders can have default items. When you set a default item, what happens when you go to that folder is that the item that has been set as the default becomes the view of that folder. So that's a little tricky, but it's a really useful thing to know about. Because if you have your site organized by folders, you may have a folder full of news items, but maybe you don't want people to see a listing of news items. That's up to you. You can place a page there as your default item, and when people go to that folder, they'll see the default item. So folder default items are a useful concept to be aware of. Let's continue. What do you think this is? So, number four. All right, I'll click the link; I think when I click the link, it might be a giveaway. Any thoughts? So let's follow that and see what we see. Oh, my screen is too big now. Let me see if I can make my screen smaller. I made my screen a little bit smaller, no change really. Are you on a phone right now? Just wondering. No? Oh, wow. Let's see if I can change the way that I'm sharing, perhaps if I share my entire desktop instead. So the problem might be the resolution on the laptop. So you're still having issues seeing the screen. Let me see if I can manage with the desktop; I can switch back if anything. See if it will cooperate.
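Going back to the folder default item idea for a second: if you ever needed to set that from code rather than through the Display menu, a rough sketch with plone.api might look like this. The folder and page names are made up, and it assumes the default item is stored as the folder's default_page attribute (which is how the through-the-web "select a content item as default view" option records it); check your own version before relying on it.

```python
from plone import api

portal = api.portal.get()
folder = portal['water-polo']            # example folder id

intro = api.content.create(
    container=folder,
    type='Document',
    title='Water Polo',                   # page that should act as the folder's view
)

# Assumption: the default item is recorded as the folder's
# `default_page` attribute (the id of the contained item).
folder.default_page = intro.getId()
folder.reindexObject()
```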
Hopefully it was just a temporary thing. Okay. Okay. Okay. So we're back on the desktop seems to be behaving now. So hopefully that will be better for those who may have been having issues. So yeah, this content type here is a file. And depending on how your clone site is configured. When you click a file, sometimes it shows you a default view before it actually shows you the file. So, as you can see here, when I went to the file initially, it actually showed me. Kind of page with a link to the file. It gave me a little bit of information about the size of the file and the name of the file. And then by clicking on it, I was able to view that file. Okay. So we've seen a couple different types of content. Think that's it. Oh, one more. Let's go to this one and see. All right, so this is results slash 2017. And here are a bunch of results. What would you say 2017 is what kind of content type. No thoughts. Yes, Linda you're on a roll now. This is indeed a folder. Congrats. You can get two high fives for that. This is a folder and the. It allows us to place information from 2017 into that folder. And it's also paginated. So you see 20 items at a time and you can go to the next set of items. All of these functionalities is built into clone. And you can do that. Okay. All right. So let's move on. So, we're still talking about content types. And one of the characteristics of content types is every content type has its own special types of fields. So an event will have different fields from an image. And so with an image you're able to upload an image file. Whereas with an event you're able to say where the event is going to take place. You're able to say the time of the event the start time and the end time. You don't need that fun image. And so as a result, you'll have different fields for different content types. Here's something interesting. And I don't see this in every content management system. You can, depending on the content type display the same content type in different ways. And we can look at that. There's a special menu called display associated with items. And we look at that. And the next possible is for you to have a special way of showing stuff in one case that you may not want to use in another case. All right. So here's a very important concept. I've said it before. And I've put it here again, because if this is different from say another content management system that you might be used to. And whereas there are some content management systems that do facilitate folders or maybe simulate hierarchy and folders. And this is fully about a folder based approach content can literally be arranged into folders the same way you would on your desktop or laptop. So the folder content type allows us to have folders and folders and folders and arrange things how we want. That's a really important concept. Another concept that sometimes we overlook because we don't realize that we're doing it this way. At least people in the prone community. Lots of the management, the content management is done in the context where the content is. And the content managers, they have an admin screen where you might go to add a new event or add a new. I don't know image. Within context management. If you wanted to add an image to the water polo folder, you would navigate to the water polo folder. And then you would add the image. This idea of doing things in context is important. You navigate to where you need to do your editing. So that's a really important concept. 
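The scripted equivalent of working in context is simply to name the container you are working in. As a small, hedged illustration with plone.api (the folder ids are examples), you create content directly inside the folder it belongs to, or move something that landed in the wrong place:

```python
from plone import api

portal = api.portal.get()
water_polo = portal['water-polo']        # example folder ids
images = portal['images']

# "In context": create the new item directly inside the folder it belongs to.
photo_page = api.content.create(
    container=water_polo,
    type='Document',
    title='Match photos 2018',
)

# And if something was added in the wrong place, move it into context later.
api.content.move(source=photo_page, target=images)
```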
So, three key concepts that we talked about in this section: content types, and the idea that Plone knows that this is a file because it was added as a file, Plone knows that this is an image or an event, and the content types have different fields, because they are different. The folder-based approach: it's important to be aware that things are arranged into folders, and Plone takes that to the next level. And then finally, in-context management, the ability to edit things where they exist. You navigate to the item and then do your editing, and 90% of the time, that's the way that you manage content in Plone. Okay. So with all this talk about managing content, we should actually talk about adding content, based on how people self-rated. Many of you have at least added some content, so this would be somewhat of a review, but maybe there are gaps in your knowledge, and maybe as we go through this we can fill in some of those gaps. This is a good time to take a little confidence rating in terms of adding new content to your Plone site, on a scale of one to ten, one meaning you're unsure and ten meaning this is really simple. Can you put your ranking in the chat, how do you rank yourself for adding new content to a Plone site? Okay, so I'm seeing fives, sixes, an eight. I don't know if other people have tuned out, but right now most people are relatively comfortable with adding new content; nobody here is a two or three. All right, so this is definitely a bit of revision here. You can add news items, you can add pages. Again, I use the word pages. Let me show you something really quickly that's important to be aware of, because it can be a little confusing. I'm going to head to our Plone site, and are we logged in? We are. Let us add a page. Everybody sees very clearly that I'm adding a page. So I'm going to click page, and we're adding a page, and it's loading up. I'd like to point out something that might confuse you if you're on an older version of Plone. Most of us will not be using Plone 6, we're probably using Plone 5, and that might be the case for the next two years, depending on the policies of your organization or how easy it is for you to migrate and upgrade. If you're on an older version of Plone, when you select add a page, you will not see this. Instead, you will see it say something very confusing if you're paying attention: you know for sure that you clicked add page, but for some reason, older versions of Plone have decided that you're adding a document. The reason for this is that in older versions of Plone, document was the term used in the code. And in fact, even on Plone 6, you will notice, if you take a peek at the URL up here, that even though you said add page, what comes up in the URL (and I just added it to the chat) is a document, because inside of the code a page is still called a document. However, in terms of what it really does, a page is probably an acceptable name, and it's the name that's used in the user interface now. So I'm just pointing that out. When speaking with people in the Plone community, the words document and page tend to be used interchangeably, and people who have been using Plone for a long time don't even realize how confusing that might be for less experienced users. Okay.
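You can see the same naming quirk from code. In this small sketch with plone.api the title is made up, but the type name "Document" is the internal name Plone uses for a page:

```python
from plone import api

portal = api.portal.get()

# Through the web this is "Add new... Page", but the internal
# portal_type has long been "Document".
page = api.content.create(
    container=portal,
    type='Document',
    title='My first page',
)

print(page.portal_type)   # -> 'Document'
print(page.absolute_url())
```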
I'll just point that out while I remember it, so that everybody is on the same page, no pun intended. So here's a question that you might hear. It's about copying and pasting HTML from other pages: can I copy some HTML or rich text and just paste it directly into my Plone site? And another question: how do I upload a document and link to it at the same time? I'm going to make it clear that when I use the word document there, I'm talking about a file, a Word document or an Excel file or a PDF. So the question, put that way, is: how do I upload a PDF file and link to it at the same time? I think these are common content management issues, so let's address them. Let's talk about the first one, cutting and pasting HTML, and we're going to use the website. So we're adding a page here, and we'll call this the demo page. Here's a question, I'm getting a question here, I'm noting the question, so feel free to send me questions. Talking of folders and context: is it right to think of the home folder as the root of the folder tree? I want to make a note of that. In fact, I have a little piece of paper, but I need a little more paper to keep track of some of the important questions. Give me 30 seconds to grab a piece of paper. Even better, I now have my clipboard, so I'll make some notes on my clipboard. So that question I'm just going to write down on the clipboard so that we can get back to it. I'll close my virtual background somehow. So that question again: is it right to think of the home folder as the root of the folder tree? I'm going to say yes, but I'm going to leave it on my clipboard and we can get back to it and explore that in a little more detail. All right, so let us start filling this document in. This is my cool document, it has a write-up about trees from Wikipedia. This looks more like the summary, so let's put this here. This will be my title, trees from Wikipedia. And let's head to Wikipedia and search for trees, see what we get. I'll take the Wikipedia list of trees. Since I'm in the Caribbean, let's see what trees are in the Caribbean basin, if any of them are growing in my yard. Let's learn about Avicennia germinans. This is a mangrove. That's not growing in my yard, I'm not by the sea. So this is rich text. It's rich text because it has hyperlinks, it has bold text, it has italic text, and it has images. I actually studied mangroves in university. One of the things about mangroves is how they grow their seeds. Am I able to paste this? I'll just do command V to paste. They actually grow their seeds on the tree itself; the seeds start to grow while still attached, which is called vivipary. It was very interesting. But more interesting for this Plone presentation is that the rich text carries across faithfully. Let's look at the source code. If you're familiar with HTML, you see that what it has really done is taken the rich text, which is stored as HTML, and carried it across faithfully. And we'll just call this black mangroves. So, as you can see, you can actually move rich text across into Plone, and it works pretty well. Even the citation, which uses a superscript, is carried across faithfully, and these links actually link back to Wikipedia and should work fine. Okay. So Brenda says she wants to be where I am. Is that because it's warmer here now? Yeah, I'm not complaining.
It is, it's the weather is great right now. It can get super hot, but right now it's really nice. We're actually at the beginning of our tourist season. So we'll see how that works with COVID. So adding a page will always put the link in the toolbar above. Ah, yes, that's actually a really good question. So, if you add a page in the root of the site, as Matthias was pointing out, let's click contents here. Content is a really great way to see the structure of your site. The page that we just added is actually at the root of our website. That's considered a bad practice. At least I consider it a bad practice. So, the recommendation instead is to take your page. I'll just cut it and place it in a folder. If your page is placed in a folder, then you never have that issue. It will follow the folder structure and you'll notice that it doesn't show up in the navigation bar. There is another way to do that. There's another fix for that. And some of you may be familiar with it. Maybe Matthias and some of you, anybody who gave themselves six and upwards. But I'll demonstrate that. So, let us say you actually want that page to be at the top level. But you don't want that page to, what's the word? You don't want that page to show up in the navigation at the top. Okay. Well, in that case, not sure if I'm having some type of error here, but let's see. Oh, site just went down. I didn't plan for that. All right. So, we're about to go to our next break. In the break, I'm going to have to just spin up our clone site. So, our break is supposed to be five minutes from now and I set a timer. So, given that if this site doesn't come back soon, I'm going to have to just spin up a site that we can work with. And then we'll just go back. Okay. I don't know. Maybe I should speak to the guys who set this up. Maybe it detects when you add a page. And I want to point out that if you add a page, one of the options you can do is set the default settings. And under settings, you can say exclude from navigation. That's just a little trick if you definitely want it to be at the top level but not show up in the navigation. So, now, would that apply for four of us as well? Yes, if you add a folder and you don't want it to show up here, you can just go to settings and say exclude from navigation. Great. Great. So, trees. Something about trees. I'm still in my clipboard. Oh, yes, this is great. Great. So, wait, that's not supposed to be. Did I not tell it to be excluded from the navigation? Settings. So, what's going on? Because I'm at the trees page now, and this is another setting that I'm going to need to change. But if I'm anywhere else, it should not show. If it does continue to show, we've just found a bug in prone six. So, as you see, it doesn't show in the normal navigation. If I navigate to that page, it does show. There is a way to get around that as well. And we might as well just spend the next two minutes looking at that because that's important. So, right now, a couple of things. One, this document, this page is private. It's red because it's private. If you go there and you're not logged in, you will not be able to access this page. But if I do go to the page, it does show up in the navigation bar. There is a setting that can control that behavior. It's on the navigation, on the site set of navigation. I don't remember which one of these it is, automatically generate tabs. So, this is, this setting will disable the whole automatic generation. Then there is generate tabs from items other than folders. 
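The same exclude-from-navigation trick can be applied in code. This is only a sketch: it assumes the standard dexterity "exclude from navigation" behavior, which stores the checkbox as an exclude_from_nav attribute, and the page id is made up.

```python
from plone import api

portal = api.portal.get()
trees = portal['trees']          # example: a page sitting at the site root

# Assumed behavior field behind the "Exclude from navigation" checkbox.
trees.exclude_from_nav = True
trees.reindexObject()            # update the catalog so navigation notices
```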
So, by default, Matthias, you were asking about folders: the "generate tabs for items other than folders" setting only affects tabs for items that are not folders. Depending on the version of Plone you're using, there's also, or there was, an option: show items normally excluded from navigation if viewing their children. Now, this has a side effect that if you're viewing an item, it will show up in the navigation. If you don't want that to ever happen, you're going to have to disable that particular setting. So, now that I've disabled that functionality, if I'm remembering correctly, what will happen is: going to slash trees, I'm navigating to it, but I could have just typed it in the URL, and it should not show up here. You notice I'm at trees, but now it's not showing. I'll just show you once more where that is in site setup. I know it's there as far back as Plone 5.0, on the navigation control panel; I can't speak for Plone 4. Show items normally excluded from navigation if viewing their children: if that is unchecked, then you don't have that side effect of something popping up into the nav bar when you don't want it to. Okay. So, I'm going to quickly save. We are now upon our next break, so let's just mark where we've reached. I was just about to get to this point here, which we kind of got to anyway: it's really recommended that you organize things into folders, because that solves the problem without having to do what I did a while ago. So that's a good point for us to take a break. We're going to take another five-minute break. Quick poll: does anybody need more than five minutes? That's a bad question. Ten minutes? Alright, we'll do ten. So, the ten-minute timer is on. Just waiting to see a hand or hear something or see something in the chat. Okay, great, so I know for sure at least two people are back. So, one of the features of Plone is this ability to automatically generate the navigation. But if you put everything at the top level of your site, you've experienced the situation of your navigation just starting to grow and grow and grow. The solution to that is to organize your site into sections, which will be your folders, and then within those folders you can add your content. If you follow that practice, then your top-level navigation will be steady; it won't start doing strange things. Some people just use themes where the navigation is fixed, and if you have a theme that doesn't change the navigation, then that also works, although I don't like that approach, because it means that a content manager can't control things that maybe it's okay for them to control. Okay. Let us talk about publishing content. Oops, I think I pressed one too many times. All right. Most content on your website already supports a publishing workflow, and for Plone the default setting is a workflow that is private or published, meaning your content is either private or published. When it is in a private state, there is a red color to indicate that it's private, and when it is published, we tend to use blue. There are a couple more things I need to say about published and private, but let's head to the demo site and take a look at the content that we have there. Oh, and let me remember to set my timer so that we don't run over time. Okay, great. So, if you're putting up baobabs, that's definitely a tree that's known from Africa. So I don't know if that was Matthias playing around with it.
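Before we click through the publish menu, here is what the same private/published workflow looks like from code, as a small sketch with plone.api. The folder id is an example, and it assumes the default simple publication workflow where "publish" is a valid transition.

```python
from plone import api

portal = api.portal.get()
folder = portal['baobabs']                       # example folder id

print(api.content.get_state(obj=folder))         # e.g. 'private'

# Invoke the workflow transition, same as the "Publish" menu entry.
api.content.transition(obj=folder, transition='publish')

print(api.content.get_state(obj=folder))         # now 'published'
```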
Sorry about that. I was just playing around with my own login on the demo site, I didn't realize it. Okay. So, first, I was looking at the navigation, how you could support multiple levels of navigation, you know, and the ability to hide content. Your volume is not so great, Matthias. So basically the demo only works one month? Okay, so I thought that they would be different. Can you hear me? You might be able to hear me. Matthias is just one of the managers. Early in the call I was hearing you okay, but it's really coming and going now, I'm not hearing you well. It wasn't so bad earlier, but now it's not going too well. I see you wrote a question in the chat about Volto, so perhaps we can talk about Volto as well; I'll make a note of that as something to address. But, you know, when it starts to sound robotic, and I'm getting that, I'm thinking maybe there's a bandwidth issue. You can still type in the chat, and sometimes Zoom goes in and out, so maybe we'll try again a little later. Yes, I assume that you were doing some experiments with the navigation based on what we were talking about. I don't know if there's some delay. I think I should move my mic closer as well so people can hear me. Okay, well, let's move to the next thing. We're about to look at publishing, and we can actually use what Matthias has added here. So again, we're going to do this in context. I'm going to this folder, and the folder has a state of private at the moment. Because it's private, I cannot see it unless I'm logged in. If I want persons who are not logged in to see it, I will have to publish it. I'm going to publish it by invoking a transition called publish, and that will cause it to go into a state called published. Now, as I mentioned before, this is the basic default setup of Plone. You can go really fancy and you can have lots of states for the workflow that you use at your organization. So what's going to happen here: the content of this folder happens to be another folder that's private. So let's try to access this as somebody who's not logged in. Let's see, if I open an incognito window, will you see it? No, I'm going to have to change my share before you see my incognito window. So let's do a new share and open my incognito window, which you should now see. Let's paste the URL. Now, if you compare what's going on here, you're going to notice that the incognito window does not show the folder. If I go back to the original one... okay, I think that's what we want, trying to share both of them at the same time; I think you're seeing both of them now. So if we go back to the original, this is what somebody who's logged in will see. But I cannot see that folder in the incognito window, because I'm not logged in. In fact, even if I know the URL of the other folder, which happens to be bookies, and I go and type that in, because I am anonymous and it's not published, it's going to give me a login screen. So from the perspective of the anonymous user, I am not allowed, and it's going to prompt me to log in. So that's important. Let's just make a note about images. Images have a very special publishing story: images do not have workflow. So if I go to an image and I pay attention to the sidebar on the left, there is no state. Images don't have state; they're neither public nor private or whatever.
So at this point, someone should say, well, then how do I hide an image from somebody? How can I prevent somebody from seeing an image if images don't have state? Let's see what's possible. I am going to copy using a different way of copying, maybe you've used it before: there's an actions menu, and because I'm looking at this image, remember what we said about context, I can copy that image. The image is now copied. Let's go over to the bookies folder. I'm going to copy it into the baobabs. I think I heard it properly; I'm not sure what was going on before, it was probably my network. Now I'm going to paste, and the image is now in the bookies folder. Remember we said images don't have state. Let's go back to the bookies folder. You might say, oh, that's the view, but what if I want to see the image itself? I think you should be able to see the image if you just put in the URL of the image directly; it shows you the image alone. Maybe I'm able to do that, I don't remember exactly what it does in this case, but if you want to see the image, you can still get that image, even if it's in a private folder. Okay. What's that? That's a security concern. It depends. If you're absolutely concerned about keeping your images private, you can do something about it. Let's say you want to make images private: you can enable state on images so that they follow workflow and they can be published and made private. In fact, I think that's something you can do really quickly. It's a tangent, but let's see if we can tangent quickly. There is a content types control panel in site setup. If you head to site setup and then you go to, it used to be called dexterity content types, I'm trying to remember if that's what it's still called, it looks like building blocks right here. You can actually control the workflow, how the workflow behaves on different items. So, is it here? It might not be here. I think there's another menu called content settings; it's actually the content settings menu. I'm kind of very surprised at this behavior, this is concerning, right? Because I understand Plone supports something called acquisition, right? Yes. And with a folder, you can then override that in a subfolder, you can make a subfolder private. My assumption was that from that level going down, everything in it, the subfolders of that private one, would not be visible, right? Right. What I'm getting here is that if we know the direct link of an image in a folder that's private, we can still access it from the web. Yes. So that acquisition is not really kicking in there. Right, but you have to know the exact URL. But you can enable workflow. Notice that images by default have no workflow; you can actually tell all your images to use a workflow. Now, I am not telling you to do that. So it's a performance issue? Pardon me? Could it be for performance reasons? I missed the last part. So might the reason for that behavior really be to improve performance? Yes, that's correct, that's what I was about to say. The performance can be impacted if, every time you're accessing an image, the whole Plone machinery has to spin up and check the permissions on that image. So instead we start with no workflow on images.
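If you do decide to put images under workflow despite the performance trade-off we just discussed, the change the content settings control panel makes essentially boils down to assigning a workflow chain to the Image type. Here is a hedged sketch using the portal_workflow tool; the workflow id shown is the default simple publication workflow, so verify it exists on your site before running anything like this.

```python
from plone import api

wf_tool = api.portal.get_tool(name='portal_workflow')

# Images ship with no workflow chain at all; give them the same
# private/pending/published workflow that pages use.
wf_tool.setChainForPortalTypes(
    ('Image',),
    ('simple_publication_workflow',),
)

# Existing images keep whatever security they had until their
# role mappings are recalculated.
wf_tool.updateRoleMappings()
```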
So that's a little tangent, but I wanted to point it out because two things. One, the images can actually become hidden because of the fact that they're located in a private folder. And two, if you actually want to control the workflow on images, you can by just assigning a workflow to them. Okay. So that's a little tangent, but I think it's worth being aware of. Okay. So that's a little bit about images, a little bit about publishing. So that publishing concept, actually, the contents view is a really nice way of dealing with publishing because you may want to do bulk operations. And this is where you do those bulk operations. So let's say the, do you want it to change permissions on the demo folder? There is an option here to change the state. I said permissions, but I really meant the state and you can retract or send back. Let me just say both of these effectively do the same thing. They make your content private. So somebody is going to ask me, why would we have two different ways of getting something to the private state? I will leave you to think about that for a little while. I will write it down on my click board. And maybe you'll come, you'll tell me the answer when we're discussing it later. The other thing is because demo is a folder, you can tell demo that you want everything in demo to become private. And what that will do is it will walk through the entire demo folder and make stuff private. It's what I'm doing now. If I click on demo and enter the demo folder, everything in the demo folder is now private. Okay. Except for files and images, which do not have any workflow. So it's not just images, other files as well. And it is for performance reasons. All right. So just in case we want to actually see these things, let's go back up the tree. And let us change the state. By publishing it, include contained items and apply. So it's going to walk the entire demo folder, descend into it and change everybody. Now, if I did not check this box, I could make the demo folder private while the content of the demo folder is public. Whatever state I'm changing it to. This time I didn't check the box, which means all the content inside of demo is now untouched by the change that I made to the demo. So this is a nice way to do bulk operations. And you can also do it this way as well. If you can check multiple things here and publish them. So it's a great way to make quick changes. In fact, I'll show you a little write up about these changes. Let's see if I can find it quickly if not, I will move on. So prone didn't always have this capability. But it was something that I thought would be useful. So back in the day, I actually would write little blog posts about things that I'd love to be able to do in prone. And one of them I had asked about was how do you make changes bulk changes. At the time, it wasn't a thing. Not seeing it here. This is not the post, but if you spend a time searching you'll probably find it. Okay. Any questions about publishing content? Okay. Great. Wait, was that a yes, no from Lynn? No, I think the yes was earlier. And the no just came out. All right. Okay. So these are some tips that I find really useful. And these are tips that some of them might be useful even outside of prone. So one thing that I really find useful if I'm taking text from say a word document or from some other source, I, a lot of people will use command V or control V to paste. But sometimes it carries across formatting that you don't actually want maybe just only want the text. 
So if you, instead of doing control V or command V, if you do command shift V, or control shift V, the shift modifier removes the formatting as you paste. So our Wikipedia example would look something more like this. So I'm going to copy. And then I say add new, let's call it a new site in this time. But what happens if instead of pressing control V or command V, what if right here I press add a shift to that. So instead of command V, I press command shift V. You'll notice that all of the formatting is gone. So I just faithfully paste the text. Now, I think, in my case, I would say five or half of the time this is what I want. And when it's those that half of the time, I am very happy to be able to do this. And then the other times if I actually want, for example, a hyperlink, sometimes you actually want the hyperlink to remain clickable. So I just do command V or command shift V. Sorry, command V or control V. So let's give this a world. I'm going to paste here. I did command V in this case or control V. And therefore it's kept all of the formatting and my hyperlink, whereas beneath here. Actually, I could just demonstrate it right here. Same link in my data copied, but with shift. That's how I find that really useful. That's not really a clone thing. That's just how our operating systems are designed now. But because you're, you might find yourself doing content editing. This is a really useful thing to know about. All right. So, next, we are going to move on. I think there was one other tip. Oh, no, that's the same tip. I'm going to move on. All right. Earlier on in the, in the presentation, we spoke about this idea of linking to a file. Let's see how you might go about doing that. And again, I suspect those of you who have done this for a while will probably be familiar with this. But a clone has a nice way of doing this. So let's look at how that's done. And by the way, for there are a lot of newer content management systems that don't get this right at all. So let us say I don't, I export this article as a PDF. And I want to then upload that PDF to my clone site. So I'm doing that now. I'm actually going to download it as a PDF. You should be seeing something on your screen of me in the process of doing that. Give me 20 seconds. I know what's going on. There are better ways to do PDFs. All right, I'll just do it as a screenshot. It's going to be faster right now. So let's say you've taken a screenshot and you want to then share that screenshot in your clone site. See screenshot of the article. Good. It allows you to upload the screenshot while you're right there in context. So the screenshot that I just took. I can locate it. It's going to be on my desktop. Here's a screenshot that we just took. I'm going to open it. And click upload. Now, I'm going to click insert now. I've created a link to the screenshot of the article. Oh, yeah, I forgot to give it a title. So we'll just call it something about trees. All right. So the picture that I just uploaded as able to upload it in context while I was editing it and then link to it. And just a quick review. The link creating the link is just a matter of highlighting text or you could type the text ahead of time and then highlight it. And that's it. You're able to link. I find that that's a task that is fairly common. And I've seen people actually upload the file first, then try to link to it by searching for it, which works. But this is a great way to do it. 
The only other thing I'd point out if you're doing this is sometimes when you're uploading the file, you don't want it to be in the same folder. So you notice the screenshot that I uploaded, all of them are the root of the site. Again, you can solve that problem by using folders. So in this case, the way that we solve that problem is to add a new folder. And then you have to be a new folder, but we just need a folder and we'll call this. I like to have a special folder just for my images. And I will exclude it from navigation. And I could even keep it private because the links would still work. But I want to publish it. Or some people have a folder called assets or something like that. So now I'm editing this article and I'm adding another image. I can actually add that image into the images folder. Here's another link to a file. So I want to use the same screenshot, but this time I want to make sure it ends up in my images folder. So see if we can find that images folder. I think I'm going to have to get rid of that first. And see if we can search for images. Found the images folder. A screenshot is ready to go. But now instead of it going to the root of the site, it's going to end up in the images folder. screenshot. It's good to add a title. That's good for accessibility so that people who are using special browsers or they're blind or something will have an idea of what the item is. So the contrast here, the first time I did upload it uploaded in the same folder by default. But the second time I did upload it uploaded to the images folder and you can see that in the URL up here. Okay. All right. Great, great, great, great. All right, so let's move on. Let's make sure you're seeing the screen. Okay. Just trying to get an idea. Have you had to do bulk uploading? Yeah, so you might have had a bunch of images that you wanted to upload or something like that. Yes. Yes. Okay. I've been a fight with my virtual background so I'm turning it off. So can you tell us just how you've gone about doing that, Matias, just since you're familiar with it, might as well allow you to explain it. Well, I was using a much older version of clones. It wasn't any different from say, uploading other files. No, it's, I'm not sure I remember because I'm relatively new to non-five. Okay, all right. Okay, no problem. So you prefer me to just go through it then. Yes, please. Okay, sure, no problem. All right, that's fine. So let's do that. Let's head back to our demo site. And let's say we have a bunch of images that we want to upload. So I have a couple images that I think are fine to upload. So let's, let's do that. Okay. So if you're doing any bulk operation, you're probably going to be doing it from the contents view. So that's, that would be the key takeaway here. If you're changing several files, and you want to change their rename them. If you want to change their state from, you know, from workflow from private to public. So what you can do is where you can do that type of heavy lifting. The other thing you can do the contents view is you can upload a bunch of files. So you can drop and drop. Exactly. You can drag and drop. So I am going to go to my desktop where I have a couple of files. I'm going to select them. And just drag them over. And there you go. Then when you click upload, you see a nice progress bar as the items upload. So this is great if you need to upload a lot of images or a lot of files. 
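For completeness, the same kind of bulk upload can be scripted when you have a whole directory of files on disk. This is a hypothetical sketch using plone.api; it assumes the standard Image content type (whose image field is called image) and an existing images folder at the site root, both of which you would adjust to your own site.

```python
import os
from plone import api
from plone.namedfile.file import NamedBlobImage

def bulk_upload_images(source_dir, container_path="images"):
    """Create an Image in the given folder for every image file in source_dir."""
    portal = api.portal.get()
    container = portal.unrestrictedTraverse(container_path)
    for name in sorted(os.listdir(source_dir)):
        if not name.lower().endswith((".jpg", ".jpeg", ".png", ".gif")):
            continue  # skip anything that is not an image file
        with open(os.path.join(source_dir, name), "rb") as fh:
            data = fh.read()
        api.content.create(
            container=container,
            type="Image",
            title=name,
            image=NamedBlobImage(data=data, filename=name),
        )
```

For a handful of files the drag-and-drop upload shown in the demo is far more convenient; a script like this only pays off for recurring or very large batches.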
You know, for example, if you're managing a website with race results and they're stored as PDFs, you may want to do that. So now my images folder has all of these different images that I just uploaded. Okay. All right. So that's, that's, that's something that I find really useful in blown the ability to do bulk uploading and just bulk editing in general to change things. All right. So when you're cutting and pasting, you actually saw me do a little bit of that already. It's one of, it's a huge feature. And the truth is, I haven't seen it in a lot of other content management systems. A Plone is 20 years old. There are new content management systems on the market and a lot of them don't have this capability. And it really becomes useful if you want to move folders around and things like that. So you can cut and paste, but you can also copy and paste, depending on what you need to do. So let's say that I want all of this content to go into the demo folder, all of this top level content here. That is currently showing up in my navigation bar. I can actually manage all of that from the contents view. So I'll go up to the open level in the site to home. Right. And all of these things. My feeling is that they're cluttering the root of my site. And I really want them to be in the demo folder. So I'm going to cut and paste. Actually not cut. Yes, cut is what I want to do because I want to move them. And then I'm going to go to the demo folder. And I'm going to then use the paste option. I'm going to paste now. And all of the stuff that was in the root of the site is now in the demo folder. All the things I want to move. By the way, this article about trees. You will recall that in this article, we had links to files. And you might say, oh, these links are going to be broken now. But if we click the link, you'll see that it has updated. So it knows it's smart enough to know that because we cut and paste and move things around that the links should also be updated. And that I mean this late this. That's that's part of what blown calls link integrity. And it's a huge feature, especially if you have 500 files to deal with and you want to rearrange the site. The other thing that I've seen in other content management systems as well is if I once had a path that no longer exists, like something about trees. It's smart enough to know that it was moved, and it will redirect to the new location. So even though I put something about trees, it did what is called a 303 hundred redirect. And it moved me to the demo folder where I had now moved the things. So that gives you a little bit of confidence when you're moving things around in blown. You have one less step to do, you don't have to worry about, you know, the possibility that you're going to have to now update all the links, because it will do all of that for you. So I find that really useful. Okay, so that's cut and paste. And now we move to the display menu. The display menu is actually, I mentioned it earlier in the workshop. It's what you use to change the way that an item is displayed. Most commonly people use the display menu to change how things are displayed in a folder. Okay, and I'll show you how that works. And then right after that, we're going to look at setting the default item of a folder. So whereas this is used largely on folders, it's not the only place you can use it. So if you're watching with me now, you'll see that I am going to the demo folder. The demo is a folder. And the demo folder currently displays things in this manner. 
What manner is this, by the way, if you go to the display menu, you see it, you see the display type tick, and it is now used, it's displaying using what is called summary view. Okay, what does this mean? It means that the listing of the folder. It shows a summary of the description for items, the title of an item, and the ability to read more. For news items, which can have images, it will show you a thumbnail of the image in the news item, the description, and the title. So it will vary depending on what is being summarized. And this is really a summary of the contents of the demo folder. But maybe you don't want a summary view. Maybe you want an event listing. And if you change it to event listing, what will happen is it will only find the events. And then it will show them here. Now, if the event has gone already, then it won't show at all. You can go to past events if you want to see that event. Okay, so this is another view. Let's look at some other views. There's the album view. What the album view does is it finds images in that folder, and then presents them as an album. The styling is not very pretty. If you want to do a pretty album, you have to do some theming. So I'll give you a peek at a site that we did. Where we have a prettier album. It's using the same mechanism, but we did some custom theming and stuff so that it will look a little prettier. So this is a blown site. And within it, there is a photo album. I'm trying to remember where the photo gallery is. It's in the newsroom. And it's using all the functionality of the photo gallery. But it's styled totally differently. Same thing. This is just a folder with our album view. And we change the style a bit. Oh, and we added this capability so that we can actually go through them. In terms of, I guess we can talk a little bit afterwards about what it takes to get something themed to this level. Starting with something like this. Let's change the display again. There is a tabular view, which basically takes all the items in your folder. And displays it in a table. And then finally, there's another concept that that is mentioned in this slide, which is setting an item as the default page. And to do that, from the contents view, you can just specify what item within your folder will be shown by default. So if we go back to the demo here, I'm in contents view for the demo. Let's say that I want to always show. Typically, you'd want to show a page. So I'll use this page. And I'm going to say set as default page. So that actually overrides the display menu. So now my default item is item a page rather than one of these. Let's view it and see what happens. So if you go to slash demo now, what you will see is the contents of a page. This is really useful because you may have a section called about us. And you want that when people go to the about us. And then in the end of that folder, there is a right of about us. You don't really want them to go to the about us folder and see a listing of content. So that's, that's where that becomes really useful. There's a question here. And I don't know how long ago it was asked because I didn't look at the chat for a little while. How do you add an image to the toolbar. I assume that we're talking about this toolbar here on the left that I'm scrolling up and down on is that is that what we're talking about, Brenda. Great. So these images are actually. Actually, I have to get back to you on that. This is blown six. So it might be a little bit different. 
But I will tell you how this, how this works. Firstly, the toolbar, its purpose in life is really for navigation. Not the type of navigation that normal users would experience. This is for people who are logged in. There are ways to add items to the toolbar. But I feel like Plone 5.2 may not have this, but let me, I'll show you in the site setup. There's something called actions. This may not answer your question 100%. You can add an item. I don't think you can add an associated image. So, just for example, you may recognize some of these things as user actions. Those show up here. Okay. Portal tabs are actually the nav here. Then you have object buttons, cut copy paste, delete, rename, URL management and so on. Those are associated with individual content. So when you're at individual content, it has these options. Object actions. These also show up as you're browsing content. So contents, history, sharing, rules, syndication, import. Some of them are hidden, for example, syndication. But this object actions is probably the first place I would go to do this. You also have something called document actions, which allows you to do something similar. So I want to add an object action. I want to do something silly. I'm going to add an object action called visit. I'm going to add a new one. So again, this is not the complete solution to getting the image in there, but it will give you a new menu. So I just created that action. I don't know if I, I may have done something bad by putting a space. No, I added it to document actions. I actually wanted it to be an object action. So I'm going to add an image. This is Brenda. Can you hear me? Yes, I'm hearing you Brenda. Okay. Maybe I wasn't making myself clear. So I'm talking about the main page on the toolbar up above where people select whatever action they want to do. Can you add an image as a background to those? Is that something that we would do in theming? So when you say toolbar, you're talking about this here. I'm not talking about our toolbar as users. I'm talking about navigation. Navigation. Yes, I'm sorry. Okay, right, right. Clearly I was going the wrong direction there. All right. The short answer is yes, you can. It's involved. Yeah, I'm just going to go back to the title. You can see I'm creating, we just launched our site and it's all basic and they did create a theme for us, but I can't figure out how to apply the theme in a way that I can manipulate what the content is. Yeah. So, I'm just going to go to the toolbar, just write it on my keyboard. Yeah, and this might be more in depth than what you're showing today, but I just wanted to see if there was an easy way. So, I think the problem is the easy way, right? Right. It's very doable, right. I don't want to say it's horribly complex, but it assumes more knowledge than you might need for some other systems. So, you definitely have to know your CSS, your HTML. That's probably non-negotiable. But then on top of that, you have to become familiar with the Plone theming system, which is something called Diazo. I'll just show you really quickly. This is a high level overview. This is not in depth at all, Brenda. But, what we have here is the ability to create a new theme. Okay. So I have that in my new system that they set up. So the option to add this theme to my site. But when I do hit activate and then go back home, it's not in there.
So I'm not sure where to activate it or why it's not showing on the main page. Right. Right. So I don't know if that's just beyond my abilities and I have to have Plone do that, but I just, that's kind of where I'm at. Yeah. So, if you take a little peek here, this is actually how a theme works in Plone. Right. So what you may have been given in your theming control panel is maybe a theme that somebody created for you already. Is that, does that make sense? Yes, that's correct. They created one for us. Yes. Right. Probably uploaded it as a zip file or something and then it unpacked it and then you had your theme. Yes. So, depending on how they installed the theme, you may or may not have the ability to modify it. So if it was a theme that you created by copying it, or if it was created by uploading, you'll see a button that says modify theme. Otherwise, you'll only see a button called inspect theme. Here's the difference. Inspect theme means that it's sitting down on the file system. It's not part of your Plone data. It's actually being read from the file system and you don't have the ability to modify it. If it says modify theme, then you do have the ability to modify it. So that's the first question you need to ask. Do you have, I have, I do have the modify theme. I'm looking at my site right now. Great. When you make a modification, the suggested approach is you pick update, at minimum you pick update. What happens there is any changes that you make are reread by the site. Okay. Now, sometimes when you've done your changes, the old version is cached. So you may have to clear the cache as well. Okay. So I wanted a quick modification. As I say, there's a lot that you can do. And I'm mindful of the time. So I'm kind of putting it on a sidebar, but I think I can show you in two minutes, at least something. So the biggest win in terms of making changes is probably to change the theme file itself. And then make modifications in other places after that. So if we preview this, actually nothing is previewing. So we leave that up. What we have here is the structure of your site. Okay. We have the toolbar. We have your content header, your main navigation. Okay. You're probably interested in your main navigation. Right. Yes. Right. And there's a main navigation wrapper. I'm going to add something here. And I wasn't prepared for this. So if this doesn't work, you know, don't kill me. Okay. I just wanted to see if there was a little instruction on it, but I know it's probably totally off of everyone else's. Well, there's a couple other people who said they're interested in it. So it's a little out of scope, but I understand. So I'm going through this part at least. So I've changed, I've just added the word hello. Right. What should happen when I go back to the control panel? And I've saved the theme and everything. I'm going to click update. And I don't remember, which is why I said don't kill me if this doesn't work, is I don't remember if the rules remove the contents of the wrapper in the theme. No, they don't. So there you go. So on every page now, we have inserted a little extra bit of code, just the word hello. And this will be on literally every page now, because it's integrated into the theme. I see. Okay. So it gives you a clue into the way that Plone does theming. What Plone actually does is it starts with a plain HTML file.
This is like the people who wrote it are going to be mad at me for describing it like this. They won't be mad at me, but I call it injection. You what you have is a theme. And then you inject the clone content into parts of our file. So like, for this site here. I'm sorry. Car driving by sharing their music. So for this site, what we did actually was we started with totally different theme. We designed it, and then we injected clone content strategically where we needed to inject it. So it's, it's a very different way of looking at female. You just you start to think of it as, oh, this is my HTML file. Let me replace the menu with my menu. Let me add. Let me change it to dynamically have news here. And the secret clone female and it's the last thing I'll share for now. It's all in something called the rules file. What's really happening when you're when you're doing theming in clone is you're taking anything, a really nice HTML page. And then let's do something sneaky. Right, so you're taking a really nice HTML page. And then your there are a list of rules, which will say replace this here at this before here, drop these things. And this is the part why I say, yes, you need to know your HTML and CSS. But there is this extra thing called the Diaz or rules, which control how things get displayed. So, so it's two, two bits of things. So just just to show you the last this is the last thing inside of the rules file. Your theme, the actual HTML is right here. You notice that tag that says theme and then href. We could actually put something else as our theme. So now, now we're being naughty. I'm, I'm, I'm linking to a totally different website. Good. I have to unlink very quickly after I do this, because anybody looking at the site is going to be like what's going on here. And I'm going to go back to my control panel. This is not going to work absolutely, but it will work slightly. On the advanced settings. I'm going to set it to read network. That's because my theme file is now located on somebody else's server. And the rules are still here. Hopefully we haven't lost anybody. When I click update now, what it's actually doing is it's going out to that other website and pulling it in all the HTML from that website. And then it's going to apply rules to that HTML to implement the theme. It's not going to work very well because we haven't tweaked our rules. We're going to have a different look for a site. If I did everything properly. Right. So it's, it didn't quite pull in everything properly. But you can see that site has changed quite a bit. There's one other thing that I should have done that I did not do for this to work. And that is again on the advanced settings. I need to set my absolute path. Now this is a very, this is not how you do it in real life. This is how you do it when you're demoing it to people to say, here's what's possible. And I guess we have once once we've wrapped up all the other content that I'm probably show you at least how to do this trick. And it's a good starting point to understand how phone does in it. What did I not do. I'm finding that working in clone is very, it's user friendly but it's definitely trial and error just like how you're saying, well, let's see if this works. Let's see if that works. Right. So, I wouldn't call it trial and error. But the best way to describe it is it has more moving parts. So, if you're familiar with the moving parts, you can quickly correct path and get things working. 
So like, I know what needs to be done and given another 10 minutes would be fine. Now, if you're not familiar with the moving parts, I fully agree. That's fully trial and error. Very painful. Yeah. So, so yeah. In fact, one of the talks, I'm actually doing a talk for the conference which is targeted squarely at newbies. And it's called asking questions. I mean, they have a fancy title for it, but it's really sharing an approach that I've used over the years to learn Plone and other things. Hopefully it will be useful to other people. Okay. Thank you. I'm going to fix this back before I forget to fix it back, because someone is going to come on the site and be like, what's going on here. So I'm going to activate the old theme and delete the new one. And I'm going to go back for that in a second. Okay. Okay. Yeah, one of the things with Plone is they give you all the tools to shoot yourself in the foot, which means that you also have all the tools to do a lot of amazing stuff. Okay, so where are we. So we're talking about the display menu and just in summary, the value of the display menu is not just for folders. I only demoed it for folders, but for other types of content, you may want to display them in different ways. Some of this may be provided for you by your developer or integrator, and they may have installed additional display views. You can change those display views from the display menu. By default, folders come with several display views, whereas news items don't have any extra display views. If you want to, you can add new display views, even to news items. And we also talked about setting an item as a default page. That's a really important thing. If you have a folder called about us, you want to have a default page when people land on the about us section. Okay, publication. All right, we're actually supposed to be at the break. I just forgot that we're looking at some extra stuff. So let's take a break. We can take a longer break. I know people might want to do lunch or something. So, give me an idea of what works. 20 minutes, half an hour. Okay, let's take that break and then come back to the next section. All right. Linda says 30 minutes and nobody is saying no. So we'll give you a 30 minute break. And then we'll come back and we look at restrictions. Restrictions allows you to control what people can add to different sections. So you may have a news section where you only want to allow people to add news. So we're going to look at that in the next session. Okay, everybody, after your break. Oh, I'm not sharing the right screen, am I. Let me fix that. Actually, the proper way to do this would be to download the entire theme, but that's, that's not. We're doing it a kind of bad way. So let's get back to this. Let's make sure that we revert to the proper theme. Activate. Okay. Okay, great. So we spoke about a concept called restricted shares. Well, sorry, restricted content. And it's the ability to set restrictions by section. Now to be able to do this you must be logged in as a manager. So, I am a manager at the moment. The role that is most common is site administrator for doing things like this. So a manager includes that capability. All right, so here we are on a folder called baobabs. We want to only allow images here. Under the add new menu, if you're logged in with the right permissions, you'd see something called restrictions. So if you click on restrictions, you will have the ability to select your restrictions manually.
And then you can just specify that I only want to be able to add images in this folder. So what this does is it changes what's available in the add menu. So now when a user comes to this folder, and they click add, they're only allowed to add images. So it's really useful in situations where you want to restrict what people can add to a particular section of the site. Okay. So that's basically setting restrictions. And you saw that screen already so we can skip through that. We're now going to move into user and group management. Just a quick reminder of where we've been so far. So we've looked at what is Plone. We've done a demo of logging in, password and preferences, content management principles, folder management and publication. And that includes restrictions. And now we're looking at user and group management. And then we're basically done. I have a section reserved here for tips and tricks. We can use that to go through other things that you have questions about. So, let's get into users and groups. Right. So, let's talk about the users and groups control panel. So the users and groups control panel is one of many control panels on the site setup. So to access it, you would typically go to your profile, go to site setup, and then go to users and groups underneath there. You will be able to add new users and add new groups. And that's what we want to explore now. The approach basically... This slide has a little bit of layout issues. So bear with me. These are the steps. And for some reason step two is on top of step one. Let me see if I can fix that really quickly. Oops. I changed the layout this morning and clearly it didn't quite kick in everywhere. So this was one of the slides that didn't quite believe me. So I'm just tweaking that a little bit. All right. Nonetheless, these are the steps that you typically go through when you're working with groups. Creating users is a little more straightforward, but the principles behind groups can get a little dodgy if you're not paying attention. So these are the typical steps: you're going to want to create a group. You can set sharing on a given folder and then add that particular group to that folder. If it sounds a little confusing, that's okay because we're going to demo it. For example, if your group is a group for diving, whoever is part of the diving group would get all the permissions and access that the diving group has in different areas. It's best to demo this. So let's say that we have people who are allowed to add tree pictures to the website. So I'm going to go to site setup. I'm going to create a user. And I'm going to create a group. In fact, I'm going to create two users. We'll probably only use one of them. And then we're going to have a group called trees. The group called trees will be people who can add trees to the website. So let's create a user. And you can feel free to use this user. Tree person one. So we'll call this tree one and we'll give them the password password, p-a-s-s-w-o-r-d. Looks like I typed that wrong. Right. So this user, we will just add them as a normal user. So now we've added our user called tree one. Tree one is going to go into a group, and I'll add tree two after this, but let's add a new group called trees. Okay. So the group is going to be called trees. I prefer to use lowercase letters for the actual group ID. And people who can upload tree pictures to the folder. All right, I'm describing it but I haven't actually made that true yet. We'll soon make that true.
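If you ever have to set up many accounts at once, the same steps can be scripted with plone.api. This is only a sketch mirroring the demo; the group name, user ids and the throwaway password come straight from the walkthrough and would obviously be different on a real site.

```python
from plone import api

def create_tree_contributors():
    """Create the trees group and the two demo users, then put them in the group."""
    # The group for people who can upload tree pictures.
    if api.group.get(groupname="trees") is None:
        api.group.create(groupname="trees", title="Trees")

    for user_id in ("tree1", "tree2"):
        if api.user.get(username=user_id) is None:
            api.user.create(
                username=user_id,
                email=f"{user_id}@example.org",  # made-up addresses for the demo
                password="password",  # demo only; never use this on a real site
            )
        # Group membership is what will later carry the sharing permissions.
        api.group.add_user(groupname="trees", username=user_id)
```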
So we've now added a group called trees. And let's add tree one to that group. So we're going to search for tree one, we'll just search for tree. It should show everybody who matches. So tree person one is added to that group. It's going to be useful when we're ready to log in as tree person one. Let's now create another user, tree two, tree person two. So I'm going to the users tab. I'm going to add new user. And this user is tree person two, this is tree two, and also their password is password. Now notice something: when we were adding tree one, there was no group called trees. But because we've created a group called trees, while we add a new user we can actually add them to the group called trees. That's useful. So we're going to register now. And we now have tree one and tree two. Now, what did we promise about the trees group that both of these persons are part of? We gave it a description that basically said that it's a special group for persons who can add tree pictures to the baobabs folder. So we're going to go to that folder. Okay. And that follows on from the demo. And what this allows is, for the trees group, I'm searching for that trees group, we're going to give them the ability to add, edit, review. Give them the full capabilities. And that means that anybody who has been added to that group can now add content to the baobabs folder. Let me log out, and you can try this too by visiting the site. I'll send you a link directly to this here. You can try adding your own tree pictures. Where are we? Oh, I put it in the chat here. I'm going to try that out. Let's make use of restrictions, or, we've already restricted it to images. So now I'm going to log in as one of these tree persons. So let's log out. I think I'm showing the wrong screen. Right. This is the right screen. So let me log out. The expectation is that when I log in now as tree person one, with the password password, I should be able to navigate to that folder and add content. If I go to the demo folder, I can't add content to the demo folder. I don't have that permission. But if I go to the baobabs folder, I have the ability to add an image. And that's because I'm part of the trees group and the trees group has gotten that sharing capability. So now I can add my picture. I should actually get a picture of a tree, shouldn't I. So let's Wikipedia. About a tree that's actually in my yard. Right. This tree has so many different names. Even in the Caribbean. But we call them sweetsops. Right. Do I have a nice picture of the tree? I guess this will have to do. All right. So let's add that. So because I am, because I have the permissions to add in this context, I'm able to add my picture of my sweetsop tree. Great. And it gives credit: tree person one added it. So feel free to add other stuff. I've pasted the URL in the chat. And the passwords. Usernames are tree two and tree one. So do try that out. Okay. So that's just a quick overview of how these settings work. So I'm going to move on. If you want to learn a little more about Plone's workflow system, managing sharing and permissions, you can find that information at the Plone website. And you can find that information in the chat. Okay. Right. So I did reserve a section to go through analytics. However, this is a bit out of scope. So I'm going to go through some of the tips and tricks, and anything else that you might want to go through. I have one tip that I want to show you.
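The sharing screen and the image-only restriction used on that folder also have scriptable equivalents. A rough sketch, assuming the baobabs folder sits inside the demo folder (adjust the path to your own site); the roles mirror what the sharing tab labels "Can add", "Can edit" and "Can review", and the constrain-types interface is the one behind the restrictions menu.

```python
from plone import api
from Products.CMFPlone.interfaces import ISelectableConstrainTypes

def open_baobabs_to_trees_group(folder_path="demo/baobabs"):
    portal = api.portal.get()
    folder = portal.unrestrictedTraverse(folder_path)

    # Sharing tab equivalent: local roles for the group on this folder only.
    api.group.grant_roles(
        groupname="trees",
        roles=["Contributor", "Editor", "Reviewer"],
        obj=folder,
    )

    # Restrictions equivalent: only allow Images to be added here.
    constraints = ISelectableConstrainTypes(folder)
    constraints.setConstrainTypesMode(1)  # 1 = use the explicit lists below
    constraints.setLocallyAllowedTypes(["Image"])
    constraints.setImmediatelyAddableTypes(["Image"])
```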
And then if you have questions, this is a good time to go through tips, tricks and questions. So, one thing that I find people need to do a lot of the time is to embed media and things like that. Say you have something on YouTube that you want to show. Let's see, YouTube, Plone. Let's see what we get. We'll probably get the Plone music video. No, this was a classic screencast. This was a long time ago. An introduction to Plone. And then we have a new feature. It'd be interesting to see how well it has aged. So let's say in this news item here. And I'm going to need to log back in as somebody else. Not tree person, cause I can't edit. So, there's a feature in the TinyMCE editor that allows you to do external embeds. Hello. I'm going to be introducing Plone. Plone itself is an odd name, certainly in some languages. It also happens to be the name of an electronic music producer from the UK. And the Plone that I'm going to be talking about. So let's see if we can embed this video. So we're going to go to share. And there's an embed option when you click share. So I'm going to choose embed. Let's copy that. And then head to our news item, which we're going to edit. And most people, I guess intuitively, your first thought is, oh, I just paste it there and it will embed. And that will not work. So if you do something like this and go paste, then you'll see that it's just going to show the code. And then you'd say, oh, well, the smarter thing to do is to go to the source code, as it's called. Let's paste that. And then say, okay. And that looks like it will work. So let's click save. And that actually works. But there's another way to do this, which TinyMCE provides inside of Plone. And it's the way that I prefer to do it. So I'm going to go into, and then I'm going to go to edit. Sorry, insert, and insert media. And you'll see it has an option to embed. And then I give it whatever embed code I need for my particular thing, I can just click embed. And then click okay. And that works. And then I'm going to go to the source code. And I have a little preference for doing it that way. But if you prefer to go into the source code, you can do it that way as well. Sometimes the source code does not work for me. Sorry. Whereas I found that this works pretty nicely. So again, that's insert media. And then I'm going to go into the source code. Okay. So there's another question. How do you add an RSS feed? Oh, that one is a little trickier, but I'll show you that. And it's something that we haven't looked at. I'm going to write that down. Here. RSS feed. Let's just see. Is there anything else we need to cover? Okay. So, I think we've covered everything. So let me give you the list of what we're going to cover from the questions. There was a question that Matthias had asked about the home folder as the root of your folder tree. And the answer is yes. And we looked at that a little bit as we continued through the workshop. So that's something we already looked at. We can also briefly look at Volto. There was a question to, you know, can we look at Volto? The RSS feed. That's the feed. So we look at that. And then there's some things we looked at already, default items in folders, the display menu. Maybe we can look at paginated content. So let's look at the RSS feed since that's the one that's on the radar now. So we're going to answer that question, Brenda. Okay. Let's go. So let's find an RSS feed.
Do you have an RSS feed you want to share with us or should I just steal one from my blog? Post comments. All right. I think this is an atom feed. I think it should work. So let's see. I think we can do that. And there's a difference between an atom feed and an RSS feed, but I think Plone can manage it. So let's go to the home page. And what we're going to do is, we are, I see Brenda says she tried weather. So maybe we'll try that afterwards. So we're going to go back to one of the questions we already had. We'll go back to the one called manage portlets. And let's put it in the right column. So we already have news; we'll have an RSS feed underneath it. So there's a special portlet called an RSS feed portlet, and we will paste this. Now this is an atom feed. If it doesn't work, I'll just have to find a proper RSS feed. So let's try it out and see. So it looks like it may have found it. It's treating it like RSS. So let's see if it works. If I go to my homepage, there we go. There we go. Blog comments. So that's pretty useful. I can also go back and manage my right column, because maybe I want the news to show up before the blog comments. So I can change the order. Actually, I can't change the order. Just a note here. This portlet is associated directly with the default view of the homepage. That's my signal to say that we have 20 minutes left. I'll set another signal. But maybe you don't want your RSS feed to only show on the homepage. Maybe you want it to be there on other pages. So let's do that with the weather. So weather.gov.rss, which is what Brenda has shared with us. All right, so first of all, we need to locate the feed, because this is not the RSS feed. This is a page about the RSS feed. So all right. How about hurricanes and tropical cyclones? So here we go. Here are the feeds. So let's just take the first one, since it's there. And this is indeed a straight RSS feed. So this should work quite well. I'm going to do this a little differently, because my blog comments, if I go to another page, they're not going to be there, because they're associated directly with this homepage. So you notice I have news items, but I don't have my RSS feed. So I'm going to go to the website. So this time, I'm going to make sure that I'm adding it to the site and not just to a page. I'm still going to the manage portlets and right column, but I'm going to pay attention to what it says up here. You are managing the portlets of the default view. If you want to manage the portlets of the container, go here. So I'm going to do that. The container in this case is the entire website, as opposed to just one page on the site. So I'm going to go to the site root, and then I'm going to add an RSS feed, and we'll call this weather. I'm going to paste the URL. So it's important that you get the real URL, otherwise this will never work. And then we should be able to go to any page and the feed should be there. And indeed it is. So that's how you do RSS feeds. It feels like RSS feeds have fallen out of favor. Maybe there's a new generation of users who don't know about them and are now doing development and content management, but they're still here and they're still useful. So thanks for that question, Brenda. Yeah, it's useful for us because we live in a weird climate. So we'd like to put that on our page for our people who visit. Remind me of where you're based. Albuquerque, New Mexico. Ah, okay.
We're right in the middle of a high desert. So we're at about 5,000 feet, but we're a dry climate. So it's very interesting weather here. Lots of sunshine, but we still get all the seasons. Okay, okay, okay. As in how extreme is the curiosity? So like in one day we could have snow, sunshine, wind. It can feel, you have to dress in layers. It's very, very interesting, but most of the time it's sunny. So, but very dry here. We have very, very limited moisture. 20% humidity here is very high for us. Okay, impressive, impressive. I guess at least without the humidity, it doesn't get sticky. No. Okay, thanks, Brenda. Ah, so the only thing else was Volto, and then there was, I had some talking points, but we can skip those, because we only have 15 minutes left. So how many, have people looked at Volto? Just out of curiosity. How many people have seen Volto? Not really, okay. And nobody's saying yes. All right, so Volto is the future of Plone. And what they've effectively done is they've taken all the power of Plone, and then they've put a modern interface on top of it. So, I'm just Googling to find a demo of Volto. So the experience, the editing experience, feels more like what you expect when you're using some of these tools like Medium, I don't know if you've ever blogged with Medium, or Ghost, or try to think of other tools, or try to think of other things. I guess, like when you're using, maybe not so much, but when you're using Gmail, you kind of expect things to load, and you don't have to wait for them to reload, and things like that. And so that's the experience that Volto brings. I don't remember the password. I think it should be listed somewhere here. Admin, admin, good. So this is Plone, but what they've done is, they're treating Plone as a black box, and Plone provides what is called an application programming interface, or API, and then Volto speaks to Plone, and gets all this information from Plone. So when you log into Volto, it's an interface that could be running on top of anything else, but it's running on top of Plone, and it was built to run on top of Plone. So what's different is this principle of the editing button is always up here. It's, you get the familiar toolbar, and you can go to your preferences as you'd expect, but you notice it doesn't load a new page. It just provides you with settings for your preferences, and then you can save them. You can view folder contents just like in Plone, because Volto really is a layer on top of Plone, but it's a nice experience. So let's close that. If you're editing a page, the editing experience feels more like, I'd love to say medium, and assume that everybody knows what medium is, but it's a more modern editing experience. So one of the things that's common to this type of experience is that things are presented in blocks. If I press Enter, a new block is created, and the plus sign allows me to add things. So for example, adding a video in this case is a matter of getting the URL for the video. I don't have to go searching for the embed code. It will figure that out. And then I just click and my video is added. The video is treated as what is called a block, so I can change the layout so that I might want the video to be floating to the right. That's a lot easier to do with Volto. And Volto pretty much is going to be the editing experience when Plone 6 is released. So in a sense, this is just giving you something to look forward to in terms of Plone. Right, so we have about 10 minutes left. 
Is there anything you'd want to go through in those 10 minutes? I mean, there are other things that we can look at with Volto. Okay. Well, in that case, I'll just wrap up. I will say that... Well, thank you for participating. I trust that what we've gone through here has been valuable and gives you a little more grasp of managing Plone. I've been using Plone for a while and I really enjoy showing people how to use it. At the same time, I also use other systems nowadays, just because you kind of have to choose the right tool for the right job. I will say I do think Volto will be very useful, and already you can see where you can get a lot of stuff out of Volto. And thank you again for coming to this presentation. And I will be presenting in the full conference on asking questions and how that can help your future self. That's roughly the title of the presentation. Okay. Well, if there are no other questions, I'm going to be signing off. I need to stop my screen share, stop my video. And yeah, enjoy the rest of your day. I hope the conference goes really well for you guys. Okay. Bye.
0:00 Introduction & Welcomes 6:11 Assumptions 7:27 Agenda 7:54 Part 1 - What is Plone? 16:58 Part 2 - Demo and First Steps 21:40 Demo: Exercise 1 - Initial Look at Plone 30:54 Demo: Exercise 2 - Content Structure 32:27 Demo: Exercise 3 - Navigation 35:49 Demo: Exercise 4 - Standalone Pages, Images and Listings 40:34 Logging in and Logging out 42:59 The Logged in Experience 43:47 Logged in: Location of the Toolbar 45:38 Logging out 47:49 Login/Logout: Exercise 5 - Logging in and out 55:40 Managing Preferences 56:45 Part 3 - Content Management 1:01:30 The Content Lifecycle 1:05:19 Content Types 1:11:52 Content Types: Exercise 6 - Spot the Content Type 1:34:49 The Folder-based approach 1:37:51 Adding Content 1:49:22 Adding Content - Pasting Richtext/HTML 1:52:40 Moving Pages between folders 1:56:48 Navigation Settings - Managing What Shows in the Navigation Bar 2:00:40 Using Folders 2:03:11 Publishing Content 2:09:27 Publishing Content - Private vs Public Content 2:12:25 Publishing Content - Images have not state 2:17:05 Publishing Content - Controlling workflow of content types 2:19:44 Publishing Content - Using the Contents View to change state 2:24:16 Tangent (Blogging about Bulk Operations in Plone) 2:25:39 Publishing Content - Editing Tips: Linking to Files, Images, Pasting Text 2:39:06 Publishing Content - Editing Tips: Bulk Uploading 2:43:55 Publishing Content - Cutting and Pasting 2:48:03 Publishing Content - The Display Menu 2:53:30 Publishing Content - Setting the Default View/Page 2:55:10 Tangent (Managing items in the toolbar) 2:58:08 Tangent (A quick look at Theming Control Panel) 3:19:20 Setting Restrictions on Sections 3:22:06 Review of what we've covered 3:23:20 Part 4 - Managing Users and Groups 3:24:55 Creating Groups and Setting Sharing Permissions 3:35:48 A word about Plone's workflow system 3:36:20 Tips and Tricks 3:43:10 Questions: Adding an RSS Feed Portlet 3:51:05 Questions: A Quick Look at Volto 3:56:28 Wrap Up
10.5446/53411 (DOI)
So far we have not talked about conspiracy theories very much, so I quickly want to mention again what I mean when I talk about conspiracy theories. These are non-mainstream explanations for political and societal events which allege secret but intentional actions of mean-intending groups who are sufficiently powerful. And to the left you can see some very famous ones, like that the Apollo moon landing was fake, or conspiracy theories surrounding COVID-19, which I will talk about a lot more in this presentation. And intuitively we might already know or assume that believing in such conspiracy theories has an effect on society, and we might also think of some examples during the last years where this became obvious in some ways, but we wanted to put it to a scientific test. So we wanted to examine the effect of such conspiracy theories on society, here in the cases of norm adherence, institutional trust and social engagement. And as I mentioned, in a second study set we also looked at mechanisms as well as potential ways to mitigate them. But why is that important? So we heard it briefly this morning that we have a lot of correlational research and some research featuring experiments, but really what we are missing is research that establishes causality, in conspiracy theories but also in belief polarization. So what is really an effect of the conspiracy theories, and what might be a third variable that causes both conspiracy theories and the effects? So kind of to disentangle those two. And I also think that only if we know what the effects are and how the mechanism really works, this might help us to create better interventions. So this is kind of the reason we conducted these studies. But there are already a lot of things that we know. So we know that conspiracy belief is linked to non-normative or harmful social behavior such as intentions to engage in everyday crimes, support for human rights violations, the acceptance of violent political attacks or illegal demonstrations, the support of non-normative political action against terrorism, as well as distrust towards government and the powerful, lower social engagement, lower political engagement. So the list of how conspiracy theories might have negative social, societal effects is already long, but as I said most of this research is correlational, and some of this is also experimental, where people are confronted with the conspiracy theory. But really, of the research mentioned here only this one paper, which is still a preprint, has longitudinal research. So we are really lacking this whole area of research where we look at the longitudinal effects that these have, in order to really disentangle what's cause and what's consequence. So to close this gap we conducted a study during the start of the COVID-19 pandemic where we looked at the effects of COVID-19 conspiracy theories on these variables: physical distancing adherence, hygiene measures adherence, the support of governmental regulations, institutional trust and social engagement. We did so in three studies, the first correlational, the second experimental, the third one longitudinal, and in the first and the third one we measured the belief in a political COVID-19 conspiracy theory, and you can see the items here. For example, that powerful people are using COVID-19 in order to crash the economy, or the belief that COVID-19 is just one way of the government to restrict the power of the small people. And then participants rated whether they would agree or disagree with those items.
Then we measured the adherence to physical distancing. We asked, for example, whether they would meet other people or not. The adherence to hygiene measures, for example to cover the mouth when coughing, and the support of governmental regulations, for example whether they would support school closures or not. We also looked at institutional trust. We basically asked them how much they trusted four institutions, for example federal ministries, and we asked them about their social engagement, so for example whether they would go shopping for members of the risk population. Okay, what did we find? The first study was cross-sectional, so basically correlational research that we did on a quasi-representative sample from Denmark. And there you could see that, on a correlational level, believing in such conspiracy theories was related to a lower adherence to physical distancing guidelines, lower support of governmental regulations, lower institutional trust as well as lower intentions to engage socially. What's interesting though is that it did not seem to have an effect on the adherence to hygiene measures. Of course I have some ideas, but no scientific explanation of why that's the case. Yes, so in the second study we did an experiment where we basically exposed students from Tübingen to a conspiracy theory, also in that COVID-19 context, and you can see some of the text that we gave them, which read that doubts about the legitimacy of the measures taken by the federal government to contain the SARS-CoV-2 pandemic have been increasing. We now know that the coronavirus is far less dangerous than initially thought. And then the typical questions that are raised by conspiracy theories: why have the measures been introduced despite our criticism and maintained for such a long time, and who could benefit from all this? So we tried to make people doubt what's happening and really to make them think in these conspiratorial terms and thoughts, and well, it was also, to us as researchers, kind of frightening to see that already reading this text had an impact on the students, and I mean these are all students from the university, right? Because reading this text led to lower reported willingness to adhere to physical distancing measures. We did not measure hygiene measures in this experiment, but also lower institutional trust, lower support of governmental regulations, but it did not have an effect on the willingness to socially engage. But again, this just happened after reading a text that we ourselves designed to make them think conspiratorially. Full disclosure, we also had a debriefing in the end and that helped to offset this. But again, the real heart of this was to look at the longitudinal effects, where we asked, again, students from Tübingen, two times in the beginning of the pandemic, so it was in March and in May 2020. So it was an eight-week difference, and we asked them about the political corona conspiracy theory as well as those five variables at T1 and at T2, so eight weeks later. And then we looked at whether the belief in this political corona conspiracy theory would predict the behavior at time point two while controlling for autocorrelation. Basically, what's also somewhat frustrating is that only 137 people participated at time point two, so the power was low, to be honest, for these regressions, but we still thought that the results are meaningful enough and important enough in this setting to talk about them.
So what we did find is that the belief in the political corona conspiracy at T1 significantly predicted lower institutional trust at time point two, and it also marginally predicted lower support for governmental regulations at time point two. We also found an effect on physical distancing, though that was not significant. But again, if you compare it to experiment two, you might argue that believing in such a conspiracy theory still has an effect on those individuals, especially compared to those other variables: we again did not find an effect on hygiene measures, and also we did not find an effect on social engagement. So regarding the correlation found with social engagement in the correlational research, it rather seems like this is a third variable causing both, rather than a longitudinal effect of COVID-19 conspiracy theories. So to put it all together, talking about the societal effects of COVID-19 conspiracy theories, we found that believing in such a conspiracy theory had an effect on institutional trust, and one might argue, at least from what we found in the experiments, that it also had an effect on physical distancing and the support of governmental regulations, for which we did not have a significant result in the longitudinal research. But since we did a longitudinal study we also took the chance to look at the mechanisms, and specifically we were wondering whether institutional trust would mediate the relationship between believing in such conspiracy theories and norm adherence as well as social engagement. And what's missing here, because it's part of norm adherence, is also the support of governmental regulations. So we did mediation analyses on the cross-sectional research as well as the longitudinal research, and looking at the cross-sectional research we found a significant mediation, in the sense that trust in institutions mediated the effect of conspiracy belief on physical distancing, governmental regulations as well as social engagement. However, we did not find this in the longitudinal research, and you can see that none of these confidence intervals are significant. Though, if anything, it's somewhat close looking at the support of governmental regulations. I also want to disclose that we looked at the reversed order, so we checked whether the opposite could be the case, that trust leads to conspiracy belief and that leads to physical distancing, governmental regulations and lower social engagement, which was not the case. So trust did not predict the belief in conspiracy theories at time point two. All right, but I promised that I would also talk about ways to potentially mitigate these effects. Specifically, we were wondering whether inducing reasoning would help to mitigate the effects that conspiracy belief has on norm adherence. Why did we think about that? Well, we know that conspiracy belief is related to intuitive thinking versus analytical thinking, and there's also research showing that encouraging analytical thinking reduces belief in conspiracy theories. So there seems to be a way of reducing conspiracy belief, and we thought, well, maybe if we manage to somehow induce systematic thinking, that might also help to mitigate these effects of conspiracy theories, in this case on norm adherence. Here we measured conspiracy belief by the belief in six conspiracy theories. Two examples are the Apollo moon landing and the death of Princess Diana, and we looked at norm adherence. We had social norms like to not talk during a movie, to not lie to a friend and so on.
And we had a second study where we measured the willingness to commit everyday crimes, which arguably is also non-normative behavior, for example to pay or to be paid in cash in order to avoid paying taxes, or to hide or not disclose faults when selling second-hand items. And key here is that we asked participants to indicate their norm adherence either after this reasoning intervention or not, and the intervention was pretty simple. We basically asked them: what is the reason this behavior is considered normative? So we asked them to write a text, why do you think it's normative to not talk during a movie in the cinema, and why do you think it's normative to not lie to a friend, and then they had to write a text. And here you can see, this line is the no-reasoning condition, so what happens if people simply rate their norm adherence right away, and there you can see the more people believe in conspiracy theories, the lower their norm adherence. But once they think about why it's important to adhere to this norm and why it's normative to show this behavior, this negative relation was not there anymore. So whether or not someone believed in conspiracy theories did not make a difference. We found a similar pattern for the willingness to commit everyday crimes. So to sum it up, we found that conspiracy theories have an effect on society in the short and long term; in the long term specifically, conspiracy theories have an effect on institutional trust. We also found that the effects can be mitigated through systematic thinking, though I also want to mention a limitation here: looking at other studies as well, the reasoning might only work if the right reasons come to mind. So in this case the participants always thought of reasons why it's normative to not talk during a movie. This might be completely different if they think about it and then come up with reasons why it should be necessary to talk during a movie. And thirdly, I think we all agree on this, that more research, especially research examining causes and effects, is needed to really untangle those effects. So I want to say thank you to Kevin, who is here in the audience and is one of the collaborators on this research, as well as Kai, who was here this morning, as well as the collaborators from the University of Copenhagen.
Conspiracy theories offer unvalidated explanations for important societal events. Research has shown that believing in conspiracy theories is connected with negative societal consequences, such as decreased social engagement and less trust in authorities. This research has mostly used cross-sectional designs and rarely identified the underlying mechanisms. The current research set out to address both of these limitations. In one experimental and one longitudinal study, we examined the effect of a belief in a Political Covid-19 Conspiracy on attitudes and behavior in the context of the Covid-19 pandemic. Believing in a Political Covid-19 Conspiracy had detrimental effects on institutional trust and support for governmental regulations. Unexpectedly, trust did not mediate the impact of conspiracy beliefs on the detrimental societal effects. In a second set of studies we provide evidence for the role of a focus on differences as a mechanism in the context of conspiracy theories.
10.5446/54730 (DOI)
All right. Hello. And welcome to my talk. I'm going to start sharing my slides here. There we go. And this is on my multi submission importer for easy form, which was a challenge of a title, but I've been introduced. I'm a net developer for six feet up. And I work primarily in Planner right now, but I also do Python programming and such from the time being. And what really happened here is I had a very specific problem that we are trying to solve. We had existing forms in a site. There were quite long forms and we needed to be able to mass import or submit multiple versions of this. And I tried a couple of different things like data grid field view wasn't quite as friendly for our clients and users. And the person filling out the form wouldn't be the site admin necessarily. So when we had registration forms and we need to import multiple registrations or multiple instances, but have it act like easy form, there wasn't quite a ready solution. So I did some research and some digging around and started thinking about what would the solution need to entail. And for that solution, we wanted to make sure that it was something that the site admins could facilitate. They could give it to somebody outside of the site and that they could import and mass. We wanted to make sure that it would work with just about any template that we had because the clients were allowed to create their own registration forms. So they would be changing this from time to time and it had to be flexible. We also wanted to make sure after working out some of the import steps that the site admins could preview the data, make sure nothing needed fixing or adjusting. And then once again that it executes all the actions of easy form. And that was incredibly important to us. So let's take a quick demo of what I ended up with. There we go. So right here I've got my Burdconf registration form. And if I go into actions, I have a new action called import forms from CSV. And that takes me to a page. And what I can do is I can download my CSV template. And I can also import my CSV. So an example of a downloaded template, and I just downloaded it ahead of time, right here, is it just gets all the fields and throws it into a CSV. So you have all the header rows that you can look against. And then I could fill this out. And I have an example of one filled out. For my Burd conference, I've got Bluestj and Big Red, the cardinal registering for my conference. And this is a filled out CSV. So I can choose this file, import that CSV data. It's going to fill this into the saved data view, which I just borrowed so that I could do this easily and more quickly. And then once I feel, huh, this is great. I think that also should be big as red cardinal, save my adjustments. Oh, this is missing and false. No, you're not approved. No, you're not actually approved either. And then I can go ahead and import this data. And what this is going to do is actually save the data into my saved data adapter or whatever actions I have in Easy Form. It'll just process it through those actions. And then also, we've got, this is an example of kind of the mail. My mail hogs working a little weird right now. But this is an example of the mail that you would get when that's submitted through. So in this case, I have mail for Cardi Red, me and Blue Boy because I had confirmation sign. So that's the example there of how it works. So what I want to go through is kind of how I got to this point. 
And like, I really didn't know that much about EasyForm and how it actually worked before I got into this. So I had to do some digging and I kind of figured out, okay, so I knew EasyForm uses Dexterity. I then kind of realized, okay, so that works on z3c.form and then plone.z3cform, and I kind of dug down and drilled through the pieces to get an idea of what actually supports the forms and the processing and everything, and then had to work my way back to figure out how to build this. So first, generating the CSV. And that was actually really easy because, and that's just a preview of that page again, EasyForm already had a handy dandy API for getting the schema and getting the field order. And so all I needed to do was point to that and then get it into a CSV that could be downloaded. So here I have just a snippet of code. So I made my download form CSV view, and that was that page with my two buttons. And I made a choice to just use the two buttons because it's one page that my clients go to and it's one page where these users can look for everything. So they don't have to remember where to search for things, and that rides along with the form. And then my button just points to this download form CSV when you click on it. Oh, and then what I'm doing is basically just making a call and getting that request. I take a date timestamp because I figured people kind of just lose track of forms; if it doesn't have a "this is the day I downloaded this", in case they make changes to the form, the title automatically has that date-time on it. And then I went ahead as well and just made sure that I set the headers, so it knows this is a CSV, put on the file name, and then just used the csv module there and the DictWriter to write those field names that I pulled from the get-schema and get-fields-in-order calls. And once again, that comes right from the EasyForm API. So that was pretty nice and easy to do as far as getting a CSV downloaded. And once again, that's just an example, in a slightly easier-to-read format, of what an example CSV for that form would look like. And that way I now have a file that the site admins can send to whoever needs to facilitate the group imports, and then they can fill this out. Now we get into actually uploading the import and getting that data back in, because that was the biggest challenge I think, getting it in and then processing it through EasyForm. So once again, same page, but this time we have the import CSV and the import CSV data button right there. And what that button actually hooks up to is my preview CSV import view that I have registered here. And one thing, and I haven't dealt with uploading data as much for forms, but this multipart/form-data enctype, if you've never worked with forms and uploading, is very important for making sure that file attachment gets into your request. Now at this point I knew I didn't want to recreate something from scratch, so I was going to try and reuse and recycle as much code as possible from things that already existed and then just override what I needed to do. So lots of subclassing and inheritance to try and get this to work. So I looked through, once again, the code base of what I was working with, and I said the saved data form looked like a pretty good model of what I wanted to import; that's the saved data adapter view in EasyForm. So I said, why don't I subclass that and just pass my data into that. And I noticed that it actually inherited from the CRUD form.
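As a rough illustration of the CSV template download described above, here is a minimal sketch of such a browser view. get_schema is named as in the talk (collective.easyform exposes a helper like that), but the exact import and the field-ordering call should be treated as assumptions rather than the talk's actual code.

```python
import csv
from datetime import datetime
from io import StringIO

from Products.Five.browser import BrowserView
from zope.schema import getFieldNamesInOrder


class DownloadFormCSV(BrowserView):
    """Serve an empty CSV whose header row matches the form's schema fields."""

    def __call__(self):
        # Helper named as in the talk; adjust the import to your EasyForm version.
        from collective.easyform.api import get_schema

        fieldnames = getFieldNamesInOrder(get_schema(self.context))
        out = StringIO()
        writer = csv.DictWriter(out, fieldnames=fieldnames)
        writer.writeheader()  # template only: a header row, no data rows

        filename = "%s-%s.csv" % (
            self.context.getId(),
            datetime.now().strftime("%Y%m%d-%H%M"),
        )
        response = self.request.response
        response.setHeader("Content-Type", "text/csv; charset=utf-8")
        response.setHeader(
            "Content-Disposition", 'attachment; filename="%s"' % filename
        )
        return out.getvalue()
```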
So this gave me an idea of where I could start with this edit form factory. That's what I could customize to make the form what I wanted and have it do what I wanted to do. So now I needed to make this intermediate screen so that we could review the import and actually pass the data into this item here. And based on what I learned, I knew I could build that structure here. So I ended up making my own class here, a CSV import CRUD form, and that inherited from the saved data form. And this view is the view I referred to earlier, and in the plone.z3cform layout module it actually has wrap_form. So it just wraps your form in a Plone view so it could be in the visual context of the Plone site. So that was nice and easy, to just pop that into that. And then I had to override a couple of templates, because the native template for the saved data form of course talks about saved data, and I didn't want to confuse my users. So I edited this just so I could get the title like "review import" and some of the text that goes along with that. And then this form, really, what it does is it renders all of the sub-forms. So it renders all of that information, the sub-form being each form submission, or each row in this case. Also I have the import edit form, and that's going to be my custom edit form so I can override certain parts of the form that the saved data form would usually be bringing in. So now I've spotted what format I need, and the biggest thing was, what does this need from me to get this fed in? How do I need to format my data? And I looked and saw get_items seemed to be what fed the information into the actual table there. So now I have this get_items format here in this description. I'm like, okay, now I know how I need to format the stuff that I get out of that CSV to start feeding that back into that saved data view. Oh right. Nope, other way. So I actually, ahead of time, just put a couple of code bits together so it's a little easier, I hope, to display on this. But here we go. Review the import. So I talked about my class already, and that was my snippet there. And so what I'm doing here is I'm importing my CSV and getting that data and putting it into this and returning that. And then this is actually getting that data. So let me see. There we go. That calls upon this to actually get the data out of the file. And so I go through and I actually run a little cleanup-values function, because it's user input. So I needed to keep control of what the users were putting in and kind of clean that up and make sure it was going to jibe with what I got on the other end. So I had this cleanup value here. And once I cleaned it up, I was returning the format that I needed this data to be in to go into that get_items function. And this is an example of the cleanup values. And what it really does is just kind of make sure, and this really helped me figure that out, that the datetime fields were going back in the way they needed to, and the sets in particular. So any of the multi-choice text selection bits were going back in properly, and cleaning up all that information so that when it gets read it would check the boolean boxes, make sure the drop-downs have the right values, and select the items in the set that need to be selected. And that also took advantage of the cleanup that was already in the EasyForm API. So I actually just use this directly within my cleanup values to make sure that the sets are formatted correctly.
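To make the get_items shape and the cleanup idea concrete, here is a small, self-contained sketch of parsing the uploaded CSV into (row id, data dict) pairs; schema_fields here is assumed to be a simple mapping of field name to a type label, and the per-type cleanup is deliberately simplified placeholder logic, not the talk's actual cleanup code.

```python
import csv
from datetime import datetime
from io import StringIO


def parse_csv(file_contents, schema_fields):
    """Turn uploaded CSV text into the (row id, data dict) pairs that the
    review form's get_items() is expected to return, keeping only columns
    that actually exist in the form schema."""
    reader = csv.DictReader(StringIO(file_contents))
    items = []
    for index, row in enumerate(reader):
        data = {
            name: cleanup_value(value, schema_fields[name])
            for name, value in row.items()
            if name in schema_fields  # silently ignore unknown columns
        }
        items.append((str(index), data))
    return items


def cleanup_value(value, kind):
    """Very simplified per-type normalization of a user-supplied cell."""
    if kind == "bool":
        return value.strip().lower() in ("true", "yes", "1", "x")
    if kind == "set":
        # multi-choice cells hold separated values, e.g. "red;blue"
        return {part.strip() for part in value.split(";") if part.strip()}
    if kind == "datetime":
        try:
            return datetime.strptime(value.strip(), "%Y-%m-%d")
        except ValueError:
            return None  # unusable dates get dropped instead of breaking the import
    return value
```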
So now this is my new get items where I have my I check if it has a file attachment or I check to see if I hit the edit button or if I had to reapply changes at that saved out of you. And then if it doesn't have that and it's missing the file attachment it throws them back to the import forms base so just a little checking in case the file disappears or something happens there. And then next reviewing the import part two which is actually handling the data once it's in there. And so I had my review import table that's my big view. And then I needed to make a couple changes to the actual table that was in the form as well just to get the buttons and some of the cues that I needed in the right spots. So that's back to my code. So there are two classes in particular that I needed to override the batch class class to methods functions batch I needed to override because I needed to export this particular variable here and make sure that it was not trying to read what already existed and save data doctor but reading what I was feeding into it. And then I had to edit get data and please excuse my nesting that needs to be flattened out a bit. But this was also once again just making sure that I'm iterating through the items setting up my field names and I'm actually checking against the fields that are already in the schema. So if in the CSV there's something that is in the column that doesn't exist in the schema, it's just going to ignore it. So it's not going to throw errors, it'll only load things that match the titles that are in the schema already. So this is doing those checks and then once again making sure that my sets are just correctly. So back to this side. And now the biggest thing is the data is uploaded. I've got it formatted. It's in the table. We're ready to hit submit and actually submit this to easy form. And so I briefly over read this button handler. And this is actually easy forms handle submit. So this is how what it does when it submits a page. So I kind of read this and observed it and said, okay, I need to do something like this. But in my own context. So this was my entry point. So I was already using that import edit form. I made that custom form and I was going to override its apply, save changes button with my own button. And that's my import form data button. And I kept a lot of what was already there. But in particular right here, this is where I'm starting to iterate through the rows. And so for each row in that data is one submission of my form that I wanted to go through. So right here, I have a function in my import class that's called submit form. And so as I go through the rows in the form, it's going to process this through that submit form action and submit it as if it was actually in easy form. And then just count the rows so I can say these are how many rows were imported just a little feedback for the users on how that's going. And I made it I made it to what my real chief objective goal was at this point, I wanted to submit this form. And so I this is my submit form submit form function there. And I just actually imported the easy form process actions code right there and just push it straight through that process. So I didn't have to write anything extra just connect the two pieces and let it roll. So now that was kind of just the beginning it was my first foray really diving into the forms and just kind of manipulating and pushing things into place to get what I needed to work in that context. And it taught me a lot I learned a whole lot. 
And I think the result was pretty cool is pretty useful for our use case. And of course, there's plenty of room for expanding, because it'd be nice if they could you know have a clear button or cancel button or add or delete rows and such but I felt it was a pretty nice stub for starting to get through this processing. Being able to submit forms in mass without having to repeat it one by one so really accomplishes the goal of passing this information to the users, letting them just fill this data out and then giving the site administrators an easy way to facilitate registering all these people. Well, whatever mailer actions or save adapter actions they have or custom scripts, it'll just go ahead and process that just mimic that process. Alright, so thank you for coming along. And I guess this is my my time for questions. If there's any questions. Yes, thank you very much for the great talk and I really enjoyed it that you showed it in depth how you made this adapter and also and we have indeed one question from Anthony. It's what happens during the data cleanup if a value is unusable such as a string that can't be changed to daytime ETC. Okay, so what I ended up doing is let me go through here and get back to my cleanup. So what I did in this cleanup is what will happen most times and for date field in particular is it'll just throw the value out. So this is that this was the real importance of having that intermediate screen where if something like this happened, they would be able to join the two ends together and kind of say, okay, well, that's not right. Or this is kind of off or something like that. So I try and look at it and I try and format it but if it doesn't format it, it'll just throw it out so it won't cause an error. So that way if things are malformed, it won't break the whole process for the users. And this was kind of the stub of starting to go through and trying to catch some of those instances and of course with data cleanup, it can be a lot depending on how many things you're trying to accommodate. But this was based on the forms I had what I could start to accommodate or what I knew I had to be able to accomplish for them. But I think date time and date were probably the trickiest ones as far as throwing values go. Anything else like if the sets weren't formatted correctly, it just wouldn't highlight the item in the set. And for like these boxes here, if the value was missing and I already fixed this one, it would say missing colon and what the value was. So I'll actually just re-import that. Pull that up. So it'll tell you that value was missing from that list and then that gives that administrator a chance to say, okay, I can fix this based on what they have there. So that's what they can do in those cases. There's another question. I wonder if this could be a pull request for a new easy form feature. I think it's definitely possible. I think when I started this, and this is something that it's really easy, especially if you ever have imposter syndrome, I thought this is just a small thing I'm doing in the corner of my world. And when I presented this to my team, they seemed really interested and that's how I ended up giving a talk on this in the first place. So I would definitely like to get this out there because I think there's so many great minds in the phone community and we could actually really take advantage of this. So I'm glad to get a stub out and see where it goes from there. Okay. So Fred asks, this is something that can be added to the add-on itself. 
I'm not sure exactly what this refers to or what would be needed for doing that. Can you... do you get what's meant? Yeah, I think it's something that could definitely be put into the add-on. So when I did it, of course, because it was in my own little add-on, most of this is actually just written, I think all of it actually, in my views file. So it's definitely something that, now that I think about it, I would have definitely liked to put into its own view or its own API, or give it its own little niche. But I think if we could create some of the API endpoints or something like that to make this a little easier, so it's maybe a little less hacky, I think it could definitely be integrated and just be part of EasyForm. I think the cleanup of the values would be the biggest thing, cleanup and error checking. Okay, so there are no more questions on Slido.
I needed a way to mass import form submissions into Plone's EasyForm, so after a bit of exploration and some creativity I built a custom CSV import tool. Through an action on the toolbar, a user can export a CSV with the appropriate form fields, update it, and import it back to that form. Each row represents an EasyForm submission and mimics the submission process for that form. Even the form actions get processed. In this talk, I will walk through my process, examining how EasyForm handles submitted information, following the trail deep into CRUD forms, and piecing that newfound knowledge together to create the tool.
10.5446/54733 (DOI)
Yeah, so hello to the track two. The next talk we'll have here will be about building a collaborator. Yeah, not a native speaker. Collaborator news platform was blown and it will be presented by Ericho Andre. Please note that we will have questions, all the questions taken to GC afterwards. So please just put them into the slide on the right of your window next to the video. So then go ahead and have fun. First, thank you, Yanina. And here we go. Let's talk about building a collaborative news platform with blown. First of all, I'm Ericho Andre. Most of you know me or saw my name in one of the previous talks. I'm a Brazilian living in Berlin. I've been working with open source for quite a long time. I'm a fellow of the Python Software Foundation. And I'm one of the seven members of the Plan Foundation Board. In the past, I worked for Microsoft. So every time Timo mentions Microsoft, it hurts me. Thank you, Timo. I worked in many other companies, including Simbis Constitutia, that was the main provider of blown solutions in Brazil. And also on Rocket Internet where I was CTO for two different companies there. And right now I work at Pendact. Pendact is a collaborative news platform. It's the idea of facts are more important than opinion when you're forming your own opinion. It was launched in March of 2020. It's still better. We are still evolving the platform. And we've got some pretty good results. But first, I would like to talk about my lovely co-founders. I have Ashley Winker. It's her design wizard. She's amazing. She lives in Vienna. We have Christopher Young. That's the genius behind the idea. We worked together in a previous company. And he basically had the idea of Pendact. He's also based here in Berlin. And I'm Erico, CTO, and I'm based in Berlin most of the time. You can follow me by the footer on the Pendact. Usually I change to where I am at the moment. It's either Brasilia, Sao Paulo, or locally Sorrento next year. Right? So planning Pendact, the idea was to build a too long data-read news platform. Because we consume too much news. And we want to be informed, but not necessarily. We want to read every op-ed, every opinion article in the New York Times about something. We want to understand first and then, okay, which sources for this information. Then click one and then go analyze. The idea is that short cards are better than articles for you to get this first idea. And of course, like the Wikipedia, you go to the primary source from there. The idea is to have the cards submitted by our community members. We call them contributors. The card should include metadata that allows us to kind of cross-reference them. So I want to know everything about Joe Biden. I click and I see a list of everything that was written about him. And users can follow tags, people, organization, vocation, so on to form their own personal field. And a recent idea, but it's important to us, for each new card, we plant a tree. So you submit a card, it's published, we plant a tree. Some of the technical requirements, we needed a collaborative workflow. This is something that after working for years with content management, you know workflow is important, workflow is the key. You need to have permissions for people to submit their cards and other people to review the cards. And eventually a third group of people to schedule and publish the cards and so on and so forth. Also, the whole permission control to set who can do what and when. 
And we needed a platform that would help us to leverage metadata and categorization as much as possible. Of course, it needed to be SEO friendly, because even though we are a startup in Berlin, we do not have like a Rocket-sized truck of money to invest in search engine marketing. So we expect organic growth, and for that SEO is really important for us. And of course, it needs to be open source. We truly believe that being transparent, being in the open, is the best way to go. So, the Pendect tech. First thing, content management. We decided for many reasons, one of them, of course, being that I am the CTO and I'm building it, to go with something that's familiar, but Plone was not actually my first choice. In the beginning, I decided to question my own knowledge about CMSes and say, okay, how hard would it be if I built something with Pyramid instead of going with a full Plone? And I considered Substance D and I considered Kotti. And at some point, I was considering developing a simple API on Pyramid itself. But in the end, when I started adding the features that would be needed, it became obvious that instead of spending a lot of time building the solution, we could have something really fast and go to market with Plone, because Plone brings most of the features we wanted out of the box. And what was not there, we were able to adapt instead of building from scratch. And of course, I was using my own experience: back in Brazil, back with Simples Consultoria, we had many news portals as our customers, as our clients. So it basically means that we were able to build news portals with Plone easily. And of course, this is a very friendly community. It's easy to ask stuff and get answers. And we have really smart people in here, including Matthew Wilkes, who just published a book. The book should be there, completely messed up, but it's a really good book about Python, so you can go there and take a look. Also, I decided that if we were going to do lots of metadata and lots of categorization, we should try to find an automatic way of tagging stuff. Not only the hashtags or the tags in the Plone subject field, but also specific categories. So we decided to approach DBpedia. They have a solution called Spotlight. It's a REST API service where you basically send a text there and it gives it back to you with some markup, basically annotating the original content. And this is something I wanted to play with for a long, long time, since a group of friends did that in 2013 or 2014 in Brazil for globo.com. And that was an idea that was in the back of my head, so we went for it. We use SPARQL to query DBpedia to also get the "about" and abstract for everything we tag our content with. I'm going to talk a little bit more about that later. And we have a bit of everything else. And in here, I start with Cloudflare, Nginx, Varnish, HAProxy and Ansible. And I start with a screenshot of Jair Bolsonaro, the Brazilian president. Not because I'm Brazilian, but mostly because this card brought us the biggest Reddit effect so far. We had, in a few hours after submission, between 120,000 and 200,000 different users. So imagine we have our usual amount of users, and then we publish this and it goes to the front page. It goes skyrocketing, and I found out only the next day when I was looking at Google Analytics. We have all sorts of monitoring, but in the end, nothing special happened, because we had Cloudflare doing the static resources caching and we had Varnish doing the caching for everything that's dynamic.
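For a sense of what that Spotlight round trip looks like, here is a minimal sketch against the public DBpedia Spotlight REST endpoint using httpx. The confidence value is just an illustrative default, not necessarily what Pendect uses.

```python
import httpx

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"


def annotate(text, confidence=0.5):
    """Return the DBpedia resources Spotlight recognizes in a piece of text."""
    response = httpx.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    resources = response.json().get("Resources", [])
    # Each entry carries the DBpedia URI plus the matched surface form and types.
    return [
        {"uri": r["@URI"], "surface": r["@surfaceForm"], "types": r.get("@types", "")}
        for r in resources
    ]


# e.g. annotate("Qantas lays off 2000 more employees in Australia")
# should come back pointing at .../resource/Qantas and .../resource/Australia
```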
It was really easy to support the amount of new users coming to us. We actually survived 12 different Reddit effects already. This was the biggest one, but we had one about the Belgian government offering free train rides for their citizens to kind of bring back internal tourism to Belgium, and we also got a few tens of thousands of users onto the site. We use Thumbor to generate all the images and the scaling of images. Thumbor is an open source platform. I was expecting a hard time integrating that with Plone, but it was actually just replacing one of the browser views, so adding a bunch of code solved the problem really easily without having to integrate too much. We have Sentry, Mailgun, IFTTT and Zapier. When we publish something, we push it everywhere else. And now we have DeepL, Scrapinghub and Archive.org. I'm going to talk a bit more about that when I talk about the new service we implemented. So, building Pendect: toolset and add-ons. First things first, I've been developing with Python 3 since I left Simples Consultoria back at the end of 2014. So every single thing I developed afterwards in my career was based on Python 3. I use other languages like PHP and Ruby and a lot of different languages, and a lot of JavaScript, but every time I came back to Python, it was Python 3. So I got used to Python 3, and I admit that I love f-strings. They solve a problem for me, because every time I need to do string interpolation, f-strings are basically the way to go instead of doing the old .format and so on and so forth. I use a lot of type hints, mostly because it's a way for me to understand what I'm doing with my code. So every time I'm writing, I'm putting type hints in there from moment zero. And another thing I started using a lot is dataclasses, right? I use dataclasses as a kind of contract between functions and systems. So for me, it was a better way than just passing and returning dictionaries everywhere. It's simple, and of course, as I use PyCharm, it speeds up development a lot and avoids a lot of errors. Important: type hints are not available for Plone, right? So one of the things I did was to create a facade of plone.api, adding some new methods to be used by my company, and for all the methods I needed to change, I added type hints. So, for instance, to implement creating a user with the UID as the user ID: plone.api does not do that by default, even though the REST API does and the login form does. So I had to refactor what plone.api implemented and add something there, and for that function, for that matter, I have type hints indicating that, okay, it's going to return you a MemberData. So it becomes easier for me to work. And of course, Black, isort, code analysis and so on and so forth. And many, many flake8 plugins, thanks to the people behind them and to Gil Forcada for doing that. When it comes to Plone add-ons, I'm going to start with the easy ones. collective.z3cform.datagridfield, the DataGridField, to deal with many sources for one piece of content. It could basically be a JSON field, saving as JSON like Volto saves the blocks information as JSON, but I wanted to give people the ability to edit easily, and I did not want to write my own implementation of a widget. So, DataGridField. I used collective.sentry, and I believe Andreas Jung worked on that. Thank you.
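As an illustration of the dataclasses-as-contracts and typed plone.api facade ideas mentioned above, here is a small sketch. The class name, field names and the "card" portal type are invented for the example, not Pendect's actual code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

from plone import api


@dataclass
class CardInfo:
    """Contract passed from the submission handling to the content creation."""
    title: str
    source_url: str
    tags: List[str] = field(default_factory=list)
    summary: Optional[str] = None


def create_card(container, info: CardInfo):
    """Thin, typed facade over plone.api: call sites get hints, and there is
    a single place to adjust if the creation logic ever changes."""
    return api.content.create(
        container=container,
        type="card",  # assumed portal type for Pendect's cards
        title=info.title,
        description=info.summary or "",
    )
```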
I used content rules; that was something I developed a few years back for a previous company I worked at, so that basically every time someone does any action, it sends information back to Slack. This is something that for me is quite important, because most of my day I spend on Slack. So, okay, we have a new user or we have a new submission; it's easier to reach me there. And we have souper.plone. That's like one of the hidden gems of the Plone community and the Zope community, because there's the Zope version of it. And I thank Eric Beho for giving a talk a long, long, long time ago at the Plone conference in Brazil about that, and that kind of stayed in the back of my mind. A few months ago, I said, okay, we need to implement something, and I was close to implementing a small API with Pyramid, and then, okay, maybe it's overkill. I implemented the same thing using a soup inside Plone. Super. Content types: oftentimes we use the default ones, Folder and Document, mostly for content pages like content guidelines, about us and so on and so forth. Folder to organize some things. But imagine Collection. Collection we use for grouping some of the content together. But we also use something called category, which is a folder with collections and so on, and we also use a folder with some sane defaults. So every time you click, for instance, regional news, you'll see a listing of everything that's regional news, and then you click on regional news, Americas, and then everything that's Americas. So we organize the content inside the categories and subcategories and we have a collection listing all of them. And there's the card, that's basically a news item with a behavior that implements the extra categorization. And adapting Plone, we went for, okay, we know how Plone works, so how to play with it. So first thing, we have the concept of my feed, when you go and follow some content. Let me, yeah. So, okay, I want to follow, for instance, news from regional news, or specifically from the Americas, specifically from South America; I want the American news as well. So you go there and follow. We'll talk a bit more about that. And then we have a list of content. So the dashboard was turned into my feed. And we have the contributor profile, basically leveraging the logic from the author page. And we adapted the image and image-scale browser views to support proxying to Thumbor. Content rules: we have actions for Slack, email and webhooks. We adapted a bit of those. And we added one trigger, that's when a principal is added or removed from a group, because every time I create a new subcategory, I want to create a group of contributors that can submit something there and also editors that can review content there. So, besides that, new features: we have aggregator pages. As I said, we added the support for the categorization using DBpedia. And with that, we added some additional fields. We added people, location and organization. So, for instance, this card you see on your screen, "Qantas lays off 2000 more employees": when someone submitted that, when Charlie David submitted that, I made a call to DBpedia saying, oh, this is the content you need to take, and it came back saying, oh, organization: Qantas, location: Australia. And we added the group of people that are involved; if there's someone prominent in there, it would be tagged and grouped in there. Right. And we have pages for each one of those categorizations. So, you see in here, this is actually pendect.com/organizations/qantas. Right.
And we have a group for Joe Biden and so on. And the name that appears there is one-to-one with the name that appears on the Wikipedia page. Okay. And to do that, we also implemented the concept that I can follow stuff. You see in here, we have Qantas and we have "follow" in there. Everything about these organizations is going to appear in your feed. Right. And to implement that we use souper.plone. So it's possible to follow categories, contributors, tags, people and so on and so forth. And every time you do this, we go and add an entry to the soup, and when you unfollow, we remove the entry and so on. So we can even do some reports based on which organizations users love to follow and so on and so forth. And the first approach I was considering was to implement a simple REST API to deal with that and save the data in a PostgreSQL database. But souper was way easier to implement, and because it's already in Plone, I do not need to wait for a REST or a database call to get it and say, okay, you are already following this page, and so on. Also implemented but not released yet: the voting, like saying, oh, this card is relevant or not, and the bookmarking, so you can bookmark some cards. It's all there, it's just not on the user interface yet. So, a few months ago, I stumbled on the problem that, okay, every time I want to do something like, for instance, ping archive.org, the Wayback Machine, to store this page, this card, as soon as the card is published, right, I'm going to do that with Plone right now. First of all, I started doing this stuff just like that, simply listening to some events and doing that synchronously. But archive.org is the greatest example of why that's not possible. So, when you go and do that on archive.org, it takes something between 45 and 60 seconds to ping you back. So, it was clear that I would need some kind of async solution to do that. And the approach was to develop a special microservice, even though it's not a microservice because it's not so micro, but it's something that was going to run on a different process, and Plone would basically send a message like "do something". If it's synchronous, wait for the answer; otherwise, do something and I don't care. So, right now we have everything from translation, auto-summary, archiving, and even the call out to DBpedia Spotlight implemented on this microservice that was developed with FastAPI plus httpx. Okay. Of course, each endpoint has its own dependencies, Python dependencies. Ideally, we should have one microservice for each one of them. But as I'm using DigitalOcean's App Platform, every new service is going to cost us five bucks a month and we do not have a volume that justifies that. But the moment we decide to go for a Kubernetes scenario or even a bigger deployment, it's easy to do with this application, because it's easy to basically split it into smaller pieces. Some of the lessons learned during this: first of all, I have an upgrade step every time I need to deploy something and there's new code that changed configuration or needs something there; there's going to be an upgrade step. And over time I added catalog indexes, I added information to the user schema profile and so on and so forth. So, first of all, always be aware of the registry configuration, stuff that you store in the registry where you have the full value. Do not forget to add the purge equals false, because that already hurt me in the past.
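Two quick sketches of the pieces just described. First, what storing and querying follow records with souper.plone roughly looks like; it assumes a soup named "follows" whose catalog factory (with "user" and "target" indexes) is registered elsewhere, so take the names as placeholders rather than Pendect's actual code.

```python
from repoze.catalog.query import Eq
from souper.soup import get_soup, Record


def follow(site, user_id, target_uid):
    """Store one 'user follows target' record in the 'follows' soup."""
    soup = get_soup("follows", site)
    record = Record()
    record.attrs["user"] = user_id
    record.attrs["target"] = target_uid
    soup.add(record)


def following(site, user_id):
    """UIDs of everything this user follows, straight from the soup catalog."""
    soup = get_soup("follows", site)
    return [r.attrs["target"] for r in soup.query(Eq("user", user_id))]


def unfollow(site, user_id, target_uid):
    soup = get_soup("follows", site)
    for record in soup.query(Eq("user", user_id) & Eq("target", target_uid)):
        del soup[record]
```

Second, a stripped-down version of one microservice endpoint: FastAPI receiving an "archive this URL" message and httpx talking to the Wayback Machine. The endpoint path and field names are invented for the example.

```python
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ArchiveRequest(BaseModel):
    url: str  # the published card's public URL


@app.post("/archive")
async def archive(req: ArchiveRequest):
    """Ask the Wayback Machine to snapshot the URL. This can take 45-60 seconds,
    which is exactly why it lives outside the Plone request/response cycle."""
    async with httpx.AsyncClient(timeout=120) as client:
        response = await client.get(f"https://web.archive.org/save/{req.url}")
    return {
        "status": response.status_code,
        "snapshot": response.headers.get("content-location"),
    }
```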
First thing, I would like to thank Philipp Bauer and everyone else involved in the trainings, because right now the Plone training materials are the de facto documentation for Plone. They are the most up-to-date, and even though from time to time I need to go back to docs.plone.org, most of the time the information on something I need to learn comes from the training materials. I am going to do my one second of ranting here: I fucking hate the resource registry as it is today, and the JS development and so on. And as I mentioned in the past, everything is done already with webpack, an outside solution. I never got the time to properly understand the resource registries and how they work. It's working, but it's not ideal, but yeah, it's there. I wish it were now simple and working, a single solution; everything that we had in the past that did the same thing is gone. So it's something where, if someone is willing to develop a new solution, I would love to help. But be careful with WebP images, because they're not for everyone. Okay, that was a lesson I learned with Thumbor, because Thumbor basically reads the request and says, okay, you support WebP, so I send you WebP, right. And I have Cloudflare in front saying, every image, you basically cache. Then we had users with iPhones: the first time it works, the second time, on the iPhone, people do not see the image. It was a pain in the ass. And as I prefer the benefit of the Cloudflare caching to the idea of reducing the size of the images a bit, I killed WebP for us. Some future steps. First, contribution: we have this feature in beta for some users where you basically put in the URL, we generate an auto-summary, and if it's in another language, we already translate it and then generate the auto-summary. And it's going to be available for everyone; I'm considering implementing that either with React or Svelte. It's something I'm going to decide at the end of this conference. We are planning card trends, something like the Twitter trends, where you basically see what is trending and so on and so forth. Search improvements: we want to move to Elasticsearch. I've been saying that for a few months and never got the time, but last week I was able to play with it. And we want to move to RelStorage, because right now we have a ZEO setup. When we go out of beta, first quarter next year, we want to use Plone as a headless CMS, Volto, and mobile applications for Pendect. We want to implement more features in terms of quality control, so integrate LanguageTool to do the basics of grammar and spelling checking, do a bit of auto-tagging, something that I wanted to do before launching but was not able to. And that's the important part: basically user management. Right now it's using the users folder, and it's already getting slower and slower; we need to move away from this. And, of course, join us. We have a lot of cute cards, and for every card we plant trees. And I would love to see you all there, helping us like Eric already does, and thank you, Eric. And that was it. Very, very glad to be here. I'm glad to have this conference. Thank you all. I ask you to follow me on Twitter, it's @ericof, and follow Pendect, it's @pendectHQ, if you want to get in touch. These are my contact information and Pendect's information. And really important: my presentation is already on Speaker Deck, at speakerdeck.com/ericof, "Building a Collaborative News Platform with Plone". That was it. I'm going to join you all on the Jitsi pretty soon and answer your questions. See you soon. Thank you, Erico.
As he mentioned, we will be in the face-to-face session in Jitsi. So if you have any further questions or discussions, just meet us there. And I'll see you soon.
In this talk, Érico will present tools and solutions used to build and maintain Pendect.com. Plone is the core of a solution that integrates with Thumbor, DBpedia, ElasticSearch, IFTTT and Archive.org
10.5446/54734 (DOI)
Hello everyone, welcome to track three day one talks at the Plone Conference 2020. So the first talk that we're going to have here today is going to be by my co-worker Annette Lewis and a partner of Six Feet Up, Jeneanne Donnelly, who is from Sandia. And the two of them together are going to present a talk about a multi-conference solution that Six Feet Up had put together for Sandia. And they will give you some more details on that and talk about how that was all built. So Annette, you may go ahead. All right. Let me get my screen share up. All right. Hello, Plone Conference 2020. So once again, I am Annette. I'm a developer at Six Feet Up, a Plone and Python developer; I've been working with Plone since 2013 in various forms, and drifting into Python has always been a great fun experience. And my co-presenter today. Hi, I'm Jeneanne Donnelly and I've been working with Annette, and they have been really instrumental in helping us come up with this multi-conference system. So I work for Sandia National Labs and I do training and event management. I've been doing that about 26 years. Yeah. And so today we're going to be presenting on how we collaborated to build the multi-conference system that we came up with. So first I'm going to let Jeneanne talk about the business case and how this really came to be. Thank you. So Sandia National Laboratories, we are a national laboratory that works with the Department of Energy. And the group that I work with is international security programs. So what we do, there's three different components within that group and we do trainings and conferences worldwide. We do them domestically as well as anywhere you can think of in the world. We work with our partners there. So next slide, Annette. So in this recent fiscal year, we handled 230 global events with over 6,200 participants in 107 countries, and that's actually a slow year for us. With COVID we had to actually cancel a good portion of the year, so this is actually down from what we normally do. We're one department with eight people, and this is just what we support for our department. So next slide. So what we do in that process for these events, we have to gather a bunch of information from our participants overseas as well as domestically, which involves us doing registration. So prior to using the system, that was all being done via email. So we would send forms back and forth. People would have to fill them out. Sometimes they'd fill them out by hand. Sometimes they would type them. Sometimes you can't read the information. Sometimes it's wrong. So we needed a system that was going to work better. So we started using Plone for registration probably, I'd say, seven to ten years ago. It's been moved around from certain people to other people. So here we are. So now we're using registration, everything comes through email, we have spreadsheets, and we don't have to do that manual updating anymore. Next slide. So what we really needed with the current system that we were using, we wanted to update it so that we could increase our efficiencies and reduce mistakes. The email traffic, as you can imagine, is unbelievable when you have hundreds of participants per event. So that's all coming through, and it kind of clogs up your email. And we wanted to be able to provide a better look. We're a national laboratory, so we want to have a professional look. We want people to be able to register and be able to get their documents and all of that in a more professional manner. Next slide.
So now it is back to Annette and she is going to explain. All right. So we had somebody go down there and do a training and just listening to their situation. We saw an opportunity to help them and solve some of these problems and started working towards a prototype. So for a starting point, we actually started with the PlonkConf 2016 policy, which is available on GitHub, just to show them some of the features and some of the functionality and say, this is a potential of what you could do to help organize your conferences and your events. And we felt that they could really benefit from that. So the first thing we did was come up with a prototype based on that add-on and built upon its functionality to adjust some of the specific tasks that we heard they were trying to accomplish. So like one of the things that we did was add the conference content type and that would be a sub-site, so each event could be its own sub-site. And then preconfigured these sub-sites with some folders and your speakers talk, some of the content types, a bit opinionated to help them get an organized site structure that could be consistent between the groups. And then also made some registration templates based on the client's needs of forms that they would need for registering people so that they could have this already pre-built and ready. And one of the big goals, of course, when coming up with the prototype, was thinking, if I was a conference organizer, what did I wish we had? And listening to them and saying this is what they wish they had and then kind of coming together and making something that could automate repeatable tasks and leave time to manage other aspects of the event instead of having to just build a site over and over again. Now I'm actually going to throw this back to Jeanine again because once we got this prototype together, we handed it over and then it was their time to evaluate it and actually see how it worked in the wild. Sorry, I guess it would be helpful if I actually unmuted. I apologize. So we have approximately 20 sites on a new server that is now using this new system. So what we have found, the system works great and actually we've added a lot of things for usability for our users who don't really have a lot of experience using this kind of system. So really our goal from my department, our goal was to be able to provide a system that you would be able to use on your own. Like we set it up initially and then we hand it over to you and instead of having to keep coming to us, you can do it yourself. So everything is set up for the user to make it as easy as possible. They don't have to know coding. They don't have to know how to change things up. They go in and they basically go through a list and they can pick their own image that they want for the top of their page. They can turn on videos. They can turn on a section that we call capabilities where they can talk about the things that their departments do. They pick their own registration form. We have a number of different kinds of trainings that require different information. So based on that, we had to come up with different forms because we can't ask people for their personal information if we don't need it. If I don't need your passport information, I don't want to ask for that. So we have different forms based on what kind of training courses we're doing that ask for specific information. If you're coming on site, there's different information. For our US citizens, there's different information. 
If you're coming from a foreign country, there's things that we need to do for you. So they're able to pick what they need and it populates on its own, which has really made that feature for them really awesome and they're able to use it. They get emails from the system every time someone registers. It will download to a spreadsheet. So our initial push out has been really well received. And what we're finding now is as people are using it, they come back to us with different kinds of things that they need, a little tweak here, a little tweak there that increases the system and it makes it better for everyone. So we're able to add those in and we're able to use it across the board. Regardless of what group you are, the whole system gets pushed out to everyone, which has been really useful and helpful for everyone. So we're still making changes, which is great. That was what we intended. So back to you, Annette. Right. So one of the really great benefits about hearing all of this feedback is that really makes this a collaborative process for us. So after getting some feedback and we have the test runs and as more people use it, we get to sit down and think about how we can actually expand this system and make it work even better every time and keep going forward with that. So now we're getting to what we call the multi-conference core. And now as more people started to hear about it, more people started to want it, different stakeholders and different groups became minchested in this. So we started working with different groups and having regular exchanges of ideas and we wanted to listen to the client. We wanted to distill out key items to tackle and then also suggest features that we really felt could benefit them. And then once again, on the team of people who are working on this project, several of us have experienced planning conferences. So we could really pull on that personal experience as well and add things to solve challenges that we have run into to make a smoother experience for the end users who are going to get this product. Also a huge benefit to us working with them have been that we've been demoing for groups and doing training for groups. And as we allow someone to do a training, one of the things you do as a trainer is just you listen to what people say. You listen and you watch. You see if they have any difficulties, if there's anything that they're struggling with, if there are anything that they're really excited about and kind of capture those questions and comments. And then we want to review them to see if we can distill out more helpful features or documentation or anything to help them have an easier time and to enjoy using this product even more. And then what was very interesting working with Sandia is with working with multiple stakeholders, that stakeholder was often representing an entire group. And so they would bring us back feedback from an entire group. So instead of working with one or two people, we were working with a large pool of people. So we had all kinds of different views and perspectives on how this could be used. And then we could take that information, bring it back to home base, discuss it and figure out what works best. And always keeping in mind, we want to keep things flexible. We want to make sure that they could still manage this very easily. And we wanted to keep a lot of the through the web capabilities so that they didn't have to dive into code and their users could just make a site very quickly. 
And of course, with multiple stakeholders, the one thing we always have to be conscious of is diverging features. Sometimes we get an idea that's really cool, but it's going to affect the group, especially since this system gets pulled out to the whole organization. We wanted to make sure that we weren't just discussing with the stakeholders, but if something diverged the stakeholders, we'd get a meeting together and discuss with them and come on to a consensus on what would be the best way to tackle that feature. So now let's actually go into the live demo and just take a look at what the system actually looks like. Other way. So here I have the multi-conference event management system. And this is actually just the homepage view of it. And let's see. Right here, I wanted to point out a couple of things that we have is this is just a homepage that we have built in for them. It's actually built in Mosaic. And we built a set of tiles so that they could make their own homepage and everyone can have a site and kind of design the front end without us having to come in and customize everything. So we gave them some options of colors and background colors and a very variety of different tiles that they can kind of piece together a look for their own page. But then once you're into the site, the chief thing that we wanted to do was make sure the event management was easy. So if you go to add new, we've added a new conference event type, content type, and you can set your title for your conference. And we have a set of registration forms, as Jeanine mentioned, different groups have different requirements and different needs. So we actually are using a vocabulary here that detects what forms are in the root of the site. And then we'll let you choose from any of those forms when you're creating your new event. So I'm going to choose my birdcon registration form here. You get a banner image because it's a sub-site and it makes up your own little site layout so you can select an image. Let's use this guy. And then we actually added in this time zone feature because as we talked to our clients more, we realized not all of them were in the same places and not all of the events were home to the same time zone. So this allows them to designate a time zone clearly on their event, along with using the event behavior that's already built into Plum. It's great when Plum has a behavior that you can just stick onto a content type because you don't have to recreate that code again. And then each event can have its own contact person for email, name, and event stop. And we actually feed that into this when we create it. So that should be enough for me to create an event. And so now I've got my header image. So I've got my own little sub-site. And it's opinionated so that we have the speakers, talks, classes, and document folders are already pre-made for you. And then right here, you can see my BIRV.com registration form was copied, the template for the base site into this site. And that allows them to create their own forms or customize their forms so that they can have one similar form for everything and then just move through the different sites where they need it. Also I have this about folder. And in the main site, we'll go back for a moment, there's a templates folder. And we actually preload this with some predetermined content that's helpful for them in the events and the types of things they plan. But anything in this templates folder will be copied into your child site when you make it. 
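The templates-copying part of the setup described above could be handled by an event subscriber along these lines; this is a hedged sketch, with the "templates" folder id, the folder names and the subscriber registration (for the conference type and IObjectAddedEvent) assumed rather than taken from the actual add-on.

```python
from plone import api


def conference_added(conference, event):
    """When a new conference subsite is created, seed it with everything in the
    site-wide templates folder plus the standard pre-made folders."""
    portal = api.portal.get()
    templates = getattr(portal, "templates", None)
    if templates is not None:
        for obj in templates.objectValues():
            api.content.copy(source=obj, target=conference)

    for folder_id in ("speakers", "talks", "classes", "documents"):
        if folder_id not in conference:
            api.content.create(
                container=conference, type="Folder", title=folder_id.capitalize()
            )
```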
So that way, if you have specific information or payment details or lunch, you can always just copy that template. And the new site creator doesn't have to recreate it again. And then once you do that, you have your form. You can fill that out. I'm actually going to flip over to my BirdCon 2020, which I've actually prepared ahead of time. And some of the other things that you can do is we added an action to view attendees. So that way you have attendee management. And right here, I've already registered twice, Blue Jay and Tufted Titmouse. And what I can actually do is if I said I had a private conference or I needed to manage the group of users, you can go ahead into the registration form and actually approve these users. And I can go ahead and apply those changes. And that way, I can head back to this view attendees page. I need to go to the main from the main of the site. And now they're approved. And I can sync this with my attendees group. So you can actually manage content. So if you had like a members folder or an intranet or something, this would actually go ahead and sync these to that group automatically. And now at the top level of the site, I can see these members in these groups. So we really try to automate a lot of this process to help them manage their things more easily and to help them make a predetermined skeleton to keep them on brand. Going out of my demo, another thing that we really tried to do is make it easier for the user management, not just from the conference admin level, but from the site admin level. So we added the conference admin, or conference organizer, role. And this is a new role to kind of limit what these admins could do. So that way, they could add a conference admin who had the power to create a new event. They had the ownership of their event once they created it. So in that sub site, they could act like an admin, but they could still only see the rest of the website as if they were a normal visitor. So they could only read pages that were public. They couldn't edit content. And that way, they can control the security of these people moving through these sites. So you didn't have to give them the entire site ability to be able to modify a site. And I talked about the conference creation a little bit in the pre-configured sites. And one of the things that happens is we have a number of events that's happening behind the scenes. So for the sub sites, we're actually using lineage child sites for each of the sub sites. And one of the reasons we chose lineage in particular is lineage has a couple extra add-ons as well for a sub site. For example, there's the lineage theming add-on that allows you to apply an individual Diazo theme. So if they wanted to theme their site a little bit differently, they had that ability there. We've also got a couple of event subscribers and that's what's putting together our content as far as copying those folders over. And something that I think is really pretty cool that we did is when you copy over the registration form, the email address and the contact information that's there, those mailers are actually configured to send mail to the conference contact. So the users don't have to go into the registration forms and reset their mailers and do all of that. When you edit or save that conference site for the first time, the mailers for the form are set and it will contact the person who is set as the conference contact. Another thing that we're using is that with the event behavior, that means we can use the event aggregator.
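The "copy the templates folder into the new sub-site" behaviour described above could be handled by an event subscriber along these lines. Folder ids and the mailer handling are assumptions for illustration, not the real implementation:

```python
# Sketch: when a new conference sub-site is added, copy everything from a
# top-level "templates" folder into it.
from plone import api


def copy_templates_into_conference(conference, event):
    """Pre-populate a freshly added conference sub-site."""
    portal = api.portal.get()
    templates = portal.get("templates")
    if templates is None:
        return
    for obj in templates.contentValues():
        api.content.copy(source=obj, target=conference, safe_id=True)
    # The real system also rewrites the copied registration form's mailer
    # recipient to the conference contact entered on the add form; that part
    # depends on collective.easyform internals and is omitted here.


# Registered in ZCML roughly like:
# <subscriber
#     for=".content.IConferenceEvent
#          zope.lifecycleevent.interfaces.IObjectAddedEvent"
#     handler=".subscribers.copy_templates_into_conference" />
```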
And so if you go to the events page, the conferences show up if you want them to or we can turn them off. Now registration is one of the biggest things that we were focusing on when we made this. So the registration templates are made using easy form and we're using generic setup to import them back into the site. And we picked that specifically over using like a Python schema because it's imported as like an XML schema, which means the users can edit that through the web very easily. So they can go into the settings and they can modify things and they can customize the forms but we provide them the root. So if that form suits them, they can just use it as this. If they want to modify it, they can copy it and make a new form and then once again that becomes available when you create a new event as a format you can select. We also allow them to, as I said, view that attendees registration list and send emails if they need to. There's some data save adapters that are put in to save some basic data that they might need for their day to day operations. And then some of that attendee approval, attendee management part there, which is not required but could be useful, especially if they have a private conference that they want to just keep to themselves. So why the subsites in particular? Because yes, we could probably make folders and just let each folder be its own place but we really liked the sub-site because it gives each conference organizer the feel that they have their own website. So instead of making a number of sites per year, they can all be housed within the parent site but they can all have their own kind of look and feel but still be on brand. And that's really great for consistency of the brand of that particular group or organization that makes sure especially in this that it's done in an improved and prescribed way that is required of the parent organization. And also it allows the returning attendees to see consistency and have a familiar experience. So if they return, it's not a whole new thing every time they get a consistent experience, especially if it's like a training event or something that's an annual event, any kind of repeat event, they get to feel a sense of familiarity when they come back and not have to learn a whole new system every time. The nice thing I've also really enjoyed with the sub-site is that you can delete them, you can archive them, you can copy and paste and reuse them if you wanted to. And since it lives in the parent website, it's easy to find it in the future. And I know in some projects I've worked in the past where you make lots and lots of websites, it's really easy to lose track of how many sites you have out there. Whereas if all the events are made within the parent site, you can look at the contents folder and you can see these are all the events we have here. So you've got that running history that you can publish, archive, privatize, use as a template, you have that there, it's easy to find. And someone new and boarding onto the team could say, oh, this is where it all is. I don't have to go track it down or find the paper trail and try and figure out where did this come from. Now one of the things that we did, and this has been a big part of us collaborating back and forth is with the conference landing page. So some features that we've been building in aside from those preconfigured folders, which is to help us make a guide for the philosophy of how to make that website, is we've gone ahead and we've added some constraints. 
I'll actually go back here. So to help make it easier to organize and make sure that information comes into places that seems to make sense, we've added folder constraints. So the speakers folder, you can only add a person or a key noter. The talks folder, you can only add a presentation. Classes is only training classes. And I think documents is only files. And that helps someone who might not be as familiar with building a website to follow some good organizational philosophies on how to organize a website. And this also helps us with programming ahead of time and being able to say, well, speakers will be in the speakers folder in pretty much most cases. So we can search in this folder if we need to build some type of feature that shows all the speakers or such. Now I haven't added speakers to the site, but we also added the ability to list speakers on the home site. So there's a little button here where you say display the speakers list. So if there's speakers in the site, it'll display a list of speakers on the website. And then we also have abilities to show an agenda on the main site based on the talks that you have. And you would just use this collection and do a search for whatever exists, and it'll make that agenda. And so that way, without programming knowledge, these conference admins can start to really customize some of the things that end up on their website without having to try and configure anything or having to come back to us. So those are some presets that we put in for building out their conference. Another thing that's really great about the templates folder in particular is that once again, if there's any content that needs to be pre-approved, so like if there's needs to be pre-approved banner images or pre-approved documentations or forms, that template's going to bring that over. And then the conference admin who has the owner rights inside the folder will have the ability to use that content in their website or delete it if they really don't need it in that case. One of the other really cool things that I've really liked that we've put in is this vocabulary's tab. And this is a per site management of these details. So that way, each conference admin can say what type of talk duration, so whether it's 30 minutes, long talk, short talk, half day, training class durations, and also like if there's any level types or audience types or all of that can be set right here. And once this is set, if I go and say I need to add a presentation, so add a presentation here, those values are what are going to show up in these forms. So they can customize all of this right at that conference custom type. And then all of that will show up available for them to use when building out their sites. So for the look and feel, and what we ended up doing here is that we have a base theme and a diazle per site. So the base theme is going to take care of most of our look. And that way, they have a theme that's pretty nice and looks great between all of the sites. But then we have a diazle site, diazle on each one theme that they can customize a little bit more if they really need to. But then we also brought in a site settings control panel. And that control panel in particular, going back to site settings, has a bunch of content settings that helps them interact with the theme without having to go into the theme. So here they can set the title of their site. And then they can set some footer content that's going to show up in their site as well. 
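The per-folder constraints described above ("speakers can only contain a person or keynoter", and so on) can also be applied in code when the sub-site folders are created, roughly like this; the type names are illustrative:

```python
# Sketch: restrict what can be added inside a given folder.
from Products.CMFPlone.interfaces.constrains import ISelectableConstrainTypes


def constrain_folder(folder, allowed_types):
    """Limit the addable content types inside a folder."""
    constraints = ISelectableConstrainTypes(folder)
    constraints.setConstrainTypesMode(1)  # 1 = use the locally configured settings
    constraints.setLocallyAllowedTypes(allowed_types)
    constraints.setImmediatelyAddableTypes(allowed_types)


# e.g. after creating the pre-configured folders of a conference sub-site:
# constrain_folder(conference["speakers"], ["person", "keynoter"])
# constrain_folder(conference["talks"], ["presentation"])
# constrain_folder(conference["documents"], ["File"])
```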
They have the ability to turn on and off the login link if they don't need that. They can add a site logo, their own site, Babacon. And this is for the site level. And then they can actually make some changes to their colors and such. So if I am going to pick something pretty tame here, tame but should show up. I do that. Now I've got my purple header. And so we've added some values that we, going back and forth were some of the values that they would like to be able to customize. And so that allows them to have this basic theme, not have the coding knowledge, and then be able to pick different colors, different acts and colors, and really start to make this look a little bit like their own site and their own brand. Without having to, once again, getting to code. So it's always been a focus on flexibility, ease, management, but especially when you're using something that has a base template system, we always want to make sure they have the ability to customize and feel like, oh, this is my site and this is my own individuality. Now going back into the front page, which I talked about a little bit before. So we talked about having a mosaic layout for the front page. And we provided two options. We actually have this ad new front page view, which is an opinionated front page. And it just has a set of predetermined sections that they can fill out that would give them a nice front page. And it looks very similar to this. And I will open that up. Here we go. And it's got different sections that they can fill out and they can enable or disable these. But then we felt after we did the section, maybe someone might want to reorganize them or change the order or maybe they need two video sections. And so to give them that kind of flexibility, flexibility, we made several mosaic tiles that they could use. And so they can use from these different tiles to kind of give them different feels. And in this case, I used the call out images to just do images and cue text titles. This is a rich text tile. And this is our banner tile. But I could also add forms. So I can embed a form into the site. I can add an iframe, which is great if they need a map or need to embed a specific piece of content. You've got the video embed if they want to embed videos from Vimeo or Nets using a lot of poems based features and just putting them into tiles that they can just click and select and drag and drop. This case, I'll try the call up boxes, which I will do a title. And we actually allowed them to use icons here. So if they know I'm going to use linear home, which is one of the icon names, but they can determine from this list of linear icons, the icon they would like to use. And that icon will show up at a box. And only the first one is required in each frame. So they can use one box and they can use up to four different boxes here. And let's drop that right there. So that's my linear house. And then we gave them these color formats. And I'm actually going to use this dark accent background color. And now I've got that dark blue. And that's just an example of some of the customization we gave them. But this once again, throws back to our site settings because we actually allowed them in site settings to pick two colors, the dark accent and the light accent. So that blue is not really driving with my purple. So I think I want that to be darker. So I'll save that. And I'll head back to my homepage and now it's going to match my color. So that way they can really start to customize. 
We've given them some predetermined colors, but they can pick colors to stay on their brand as well. So the biggest thing, especially with this particular thing, and especially when we can collaborate so closely and have so many meetings and so much feedback with our users and users and it's really awesome to be able to talk with not just the stakeholders, but to be able to train and demo for some of the end users that are going to be creating sites. Is that I get to hear directly some of the things that they hope for. And that also makes a challenge because we have to define that balance between standardization and flexibility, especially since we're automating so many things. And when you automate something, you really kind of expect it to be named a certain way or in a certain place because you need to grab or talk to or find that. So in this approach, we've taken almost a modular approach where we've provided lots of different building blocks, lots of different pieces that are predictable that they can customize to an extent. But then we also have some of those flexibilities. So once again, like adding new forms, we've made it so that we can just search for the forms and that's a flexible aspect or the extra folders and the templates folders. And that's flexible, but we've added the groundwork for where those should be and where those need to be in the sites so that we can find them and have all of our automated processes go against that. So I think definitely coupled with the trainings and getting feedback and having the end users talk back, we've been able to build a pretty ideal solution that covers a variety of use cases for a number of teams and the number of users. Back to Janine. Excuse me. So, okay, so we've been using clone as a registration site for probably 20 years now over at Sandia, but mostly in our international programs. So in the last couple of years, we've been able to kind of expand and get it out within the corporation. So we're trying to expand and get it out corporately because we have so many groups outside of us who are trying to do conferences and trainings or have annual meetings that are just kind of going out and setting things up in places that we're really not supposed to. So what we're trying to do is keep everything in-house and kind of pull those groups in and give them a means to have something that they can go to that's quick and easy, that's already set up, that meets all of our security requirements, that we, you know, is behind our firewall and takes care of all of those things that they really need. So we've kind of come a long way and in some ways COVID has been good for us because it has given us the time and it's given us the funding that we needed to kind of take this to the next level that we've been trying to do for probably about five years. So for us it's been a little bit of a mixed blessing to kind of have the time to sit down and go through it. With our normal schedule we would have never been able to do this. So we cannot thank Annette and our team over at 6 feet up enough. So Annette, I'm going to hand it back to you. Yeah, it's also, it's been great to see this through because I started with 6 feet up in March. So I came like right into the middle of this project, but it's amazing over the time how many different things we've done. And once again, like the fact that I even have my, one of my stakeholders here to present with me, I think really speaks to being able to collaborate so directly and so strongly. 
And I know we might be doing some of the groundwork and the legwork and the physical work, but I really feel that we, we did build this product together and it's really awesome to see it going into the wild and this organization and seeing all the different people using it and all the use cases. So I really appreciate the opportunity as well, Jeanine, to be able to work on such a project. And I'm pretty sure all of us at 6 feet up have been really excited to see this going out into the wild and look forward to listening and just seeing what else we can do to help you and make this process even easier for you in the future. Thank you. I just got some new changes today and that we'll talk later. I'll see you later, I guess. Okay. But yeah, that's it. So thank you for coming to our talk again. If you ever want to contact us, we have our stuff up there, our information. But once again, I'm in at 6 feet up, web developer and this is Jeanine. And yeah, Chrissy. All right, so stick here for just a minute because there were a couple of questions that we had in Slido. First of all, Karl had a couple of questions for Jeanine asking, do you have any feeling or numbers on how much time has been saved with the multi-conference system lab wide? Wow. I can't really give you a good number on that, but what I can say is in a fiscal year by myself, I handle about 160 training events on my own with a minimum of 35 participants, not including trainers or interpreters, and my big events are easily over 100. So I would say that I probably, it's probably three to four full-time employees that we don't have to use that I'm able to handle those events on my own just by using this system. So it's really saved us huge amounts of time. And now that it's kind of spreading and we're getting more access, our upper management is starting to use it for their meetings. It's really kind of getting out there, which is great. So it does save us huge amounts of time. We're even able to have participants send us their documents within the system so that it's not coming through our email. So it kind of goes through this way. It's much more secure than when they send us unencrypted personal information. So I would say, yeah, at least three to four people that we don't have to have on staff on a regular basis just based on the system. And do you know how many departments are using it currently? Right now, my department is the biggest user. Like I said, we have eight people in that department. And we have started spreading out to other groups, our executive protocol group. They handle a large number of events. So there's probably, I would say, five to six other groups right now. And we're working on expanding that out corporately to everyone to make it something that's available to them. Great. Thank you. There was one other question there from Alec asking, how is the block transform order configured from an add-on? So I'm not sure if that question was specifically about Mosaic. Me neither. OK, well Alec, how about, so here's what we're going to do next is in Loudsform, just beneath here, you'll see a link to join a JITSE room. It's going to say, watch together and or talk to the speaker join, click the join face to face button. And you'll be able to ask Annette and Janelle any more questions that you have. So I mean, that's one question that we can talk about some more, get more information there. So go ahead and join that. And I'll see you all at the next talk.
In this talk, we explain the work Six Feet Up did for Sandia National Laboratories building a Multiconference Event Management System in Plone 5.2 and how the client requirements and planning informed the engineering and development of the Multiconference solution. Sandia National Laboratories conducts thousands of trainings and large-scale conferences annually world-wide. While they were already using Plone sites, they could greatly benefit from a repeatable solution that allowed users with little technical skill to create and host their own event websites. The Multiconference system allows end users to easily create pre-configured, yet customizable conference sub-sites housed within a parent Plone site. Within a sub-site, Conference organizers can manage attendee registration and approval as well build out the details of their event with built-in contents-types and automated processes to aid them. We will also discuss the collaborative process between the Plone developers and the end users to conceptualize an ideal solution for their use-cases, and provide details about the functionality developed to meet these needs.
10.5446/54737 (DOI)
Okay, first a little background about us. Our university has about 2,700 staff and six different faculties (IT, the humanities and so on), and I work in the digital services. In 2003 this virtual university initiative was started, and that was also when Zope and Plone were first taken into use here. This is Jussi Talaskivi, who still works here and who was also involved in adopting Zope and Plone back in 2003. One of the first things built with Zope was Moniviestin, which was and still is a video publishing platform for lectures and recordings, texts and films and everything like that. Another one was TUTKA, a research and publication data system, also built with Zope, and it is still running even though the university organisation has changed around it. Moniviestin had this killer feature that we could publish videos ourselves, which at the time was quite something. That was special; well, it was the first one. But this change was a bigger one in 2005. A couple of faculties wanted to renew their websites. And we said that, well, we have this thing called Plone. So would you like to try this? And they were like, okay. Let's see how it goes. And we managed to create the three faculty websites in just two months maybe. And other faculties were after that like, hey, we want that too. And then the other faculties also joined in. And soon after we also published the university main website built with Plone. And we also had this LDAP integration, so that it would be easy for the editors to log in. Couple of years later, 2009, yet another major Plone implementation called Koppa, which means a basket. It's a learning management system for our university. And it has two simple, very simple use cases. First one to show public course material for anyone. And then the other one was to show course material only for those students who have enrolled into a certain course. And we did integrate it to our study information system, at that time called Korppi. The first version probably ran on Plone 3. And soon after we updated it to Plone 4. 2010, we created another Koppa, another learning management system for the Open University, which had another 15,000 students. And they wanted this very simple feature or useful feature that students can also return assignments. So they can watch the material, return assignments, and that's it. And it was also built in a couple of months, and it was very useful that we could just deploy an empty Plone site and the editors could start adding the contents there while we still built the theme and the other features. At first it ran on a Plone 4 beta or something like that. Moniviestin, the video publishing platform, more features, Plone 4, mobile playback support back in 2010. That was kind of new, and HTML5 video. We tried to be on the bleeding edge with that video platform then and still do.
Other universities wanted Moniviestin also, so we created another installation for them. 2010, even more, a portfolio. This is a student portfolio site where they could add their skills and objectives and create different presentation portfolios. And you could drag and drop tiles there and add custom content types and export the portfolio and so on. And we haven't developed this in many years but it's still running just fine and someone is still using it. 2011, a payments system. Some managers at the university wanted to sell pens and mugs, but our quite more ambitious designers wanted to create a whole platform to sell digital products, and we did that. And there's also PloneFormGen, so we could create custom forms to enroll to certain events and then pay for it. And of course it has integration to our finance systems and the payment provider in Finland. And sales throughout the years are like 10 million euros and over 100,000 transactions gone through that payment system. Pretty nice. And you should check out Kim's presentation earlier, but shortly, we had and we still have faculty and departmental intranets in Plone. So info that's only for that department. So we create a folder and set the intranet workflow there. That's kind of it. It's very simple with Plone. We have LDAP groups and Plone groups, and manual groups are really flexible and can be used in combination with the automatic groups. And this was also like a natural growth in adoption, not a strictly led project, and they are still in use. Here's just a simple Plone 4 site created in 2011 and it has custom content types. And last time we checked it was still running. So it's there. 2012, a new theme for the university main website and all the faculty sites, an update to Plone 4, and it has responsive design, which was kind of new at that time, to have responsive websites. 2012, student well-being service. This is based on psychology research and there's material to cope with anxiety, depression, stress and feeling down. And this Plone site has two main functions. First was self-study material for logged-in university users. And the other one was these coached courses with anonymized user IDs for students and student coaches. And in this case, security was still very important since students write their own personal journals about their well-being. And it is multilingual with four languages at the moment. First Plone 4 and then Plone 5. I'll catch my breath and let's move on. 2013 something. We have used digital forms all the time that we have had Plone and there are thousands of forms generated. And those forms are generated by the content editors themselves, not by admins or technical professionals like us. And forms are very, very powerful. So, a first step to digitalizing your own processes. So, use them. And we did develop one add-on for forms from which people can export their data easily to Excel or Open Office and so on. Okay, this is one of my favorites. It's a portal for selecting minor subjects and it has the faceted navigation add-on, thanks to the guys at EEA. There's a custom content type and basically there was no programming needed since we could do everything through the web. And it took a couple of days to build it and it's been running ever since. Okay, Moniviestin, you remember our video publishing platform. 2014 we created this lecture capturing in certain rooms and halls. And also integrated this to our study information system so we could automatically schedule lecture capturing.
And we do have touch screens for manual use, but this very much made our video production more effective since we could do it automatically. And just publish it then in Moniviestin. Or hide it if we wanted to. Koppa, the learning management system: 2014 we created this feature that students can return assignments. 2015 we had electronic exams, development time from a very experienced Plone developer maybe two days. And since then we have had thousands and tens of thousands of electronic exams or these assignments returned. 2016, plagiarism detection integration number one, and 2019, plagiarism detection integration number two. So using the same system and just integrating new features there. Another favorite of mine, Human Technology. This is a scientific article peer review system. And here you can see the design of the workflows that they had in mind when we started this project. And they asked, can you do this? And the answer was of course, sure, yes we can do it. And it has over 15 different and combined workflows. There are content rules combined with maybe 150 different automated messages, many roles, semi-automated messages and so on. We are using Plone portlets for showing state and role-based instructions that can be easily edited through the browser. We use Plone comments for discussion and every other Plone feature too. What we didn't have to do is any theming there, so it looks kind of like plain Plone. It has collected 45,000 objects and many scientific articles through it. So that's nice. It wasn't easy though. 2015. Okay, there was a case that we needed customized forms to collect data in a phone survey. That would be really easy to use and easy to export the data to Excel. So we used the Plomino add-on and last time we checked, a couple of weeks ago, there are thousands of responses collected. And we haven't touched that site in many years aside from some updates, but it's very robust. Okay, moving on. Each one, teach one, a pair program for students from different countries to learn different languages. Created first on Plone 3 and then Plone 5 and it has custom content types, custom workflows, permission management, all the things that come out of the box with Plone. FTPI something, it's a multi-school course listing and enrollment system. It has integration to Haka, which is a Finnish identity provider. It has custom content types and the build time was like a month and it still works. University of Jyväskylä main website 2017. This was a major renewal. We had a new visual theme and we use Mosaic a lot. And I mean a lot lot. So there are these drag and drop free-form grid layouts and lots and lots of customized tiles, hero banners, embeds, content listings, RSS feeds, Instagram and Facebook embeds and so on. It's responsive all the way and we had at first these ready made rigid templates, but then users wanted more and we let them loose. Let's not talk about that anymore. Plone 5 migration at that point and the content amount was over 100,000 pages and we migrated everything, even if we maybe shouldn't have, but that was what the users wanted. So just serving the public. Uno intranet 2018. So this is and still is an intranet for staff and students. It has news and events and instructions and memos and search and occasional commenting. And all the basic intranet stuff that you would want, dashboards and so on. It has also the ultimate news and events generation machine where you can create the news item and pinpoint it to a certain audience.
It has a lot of mosaic views and it's bilingual, with integrations to many different systems. Also automatic groups in there. Okay, ohjausasiakirja. Oh, I've been talking fast, so I can slightly slow down. A supervision document for doctoral students. This was built on top of the collective.flow add-on by Asko Soukka. And this whole thing combined with Plone provides a full-featured digital workflow engine and solution. And it's mostly based on Plone out-of-the-box features. And so there are workflows and permissions and Dexterity customised content types, and everything you want and need. And I have a presentation from a conference in 2018 with more information about this. Another example of this collective.flow: this is salary claims for short term employment. There are these digital forms and workflows and roles and permissions. In this case, additionally, Robot Framework integration. So in the end of this process we generate and send a generated PDF to the external salary system and the robot then inserts the data there, since the external system doesn't have any good API, so we just put the robot to do the dirty work. And 1800 claims submitted and delivered so far, so it's a massive improvement over the old paper process that we had. Multilingual, 2003 to 2020 and forward. It's absolutely a mandatory feature for a university like us. In Finland, there are two languages, Finnish and Swedish, that are like mandatory, and then everyone uses English also. That's why we need it. And we have had two kinds of translation types. We have the separate language folders in Plone. Or then we have used LinguaPlone or this new plone.app.multilingual for one to one translation. Very important feature. Okay, here's an example of Plone as a content management system in 2020. So someone has added content back in 2004. And when you come back in 2020, it's still there and it's still working through a couple of migrations. So if you put content into Plone, you can be sure that it's safe there. Now we finally get to 2020. Oh finally, pretty fast actually. And I go through our major products that we have presented here. So Open University Koppa, this learning management system. It still looks the same as it did in 2010. It's fine. It's there. It now has the remote exam, it has the integration to the plagiarism detection software. And there are 67,000 returned assignments since 2015, and older ones were removed. So maybe double the amount, and thousands of courses and tens of thousands of students that have used it since. Then this university Koppa. This is kind of funny. This year finally EU and Finnish law enforced the accessibility more strictly. And when I went to Koppa to check if it's accessible, since we haven't touched it (that's the same theme since 2012), the accessibility checker said no errors and no contrast errors. So it's almost vanilla Plone and it's accessible since 2012. So that was really nice. And last year we integrated it to our new study information system. And we have 7600 courses and 67000 assignments and over 100000 pages and files as study material there. Moniviestin, the video publishing platform. It still looks the same as it did in 2010, running on Plone 4, it seems. But this year we did an integration to automatic Finnish speech recognition and subtitle generation because of the accessibility laws. So we have 19000 videos, and when we upload new videos, automatically or manually, and they usually are in Finnish, the machine recognizes the speech and generates subtitles.
That's pretty awesome. And we have various workflows for public and external videos. And we can of course edit the generated subtitles in the browser, as we have been able to do since 2014 maybe, when we first implemented that feature. And as it comes to the other Moniviestin sites that we have for other universities, there are maybe 30,000 other videos, so the amount is maybe 50,000 videos. Most of which are very long. University website this year. Another theme upgrade with accessibility in focus and even more mosaic features and mosaic pages there. It's nice. And also this year, Volto. As you might have seen in this conference, we have been giving a couple of talks about how we use Volto in our university. We now have one public microsite, but one major installment, which is used as a back end for the Open University study guide. It's really fast, easy to use. And next year, we are going to develop this new portal, also built on Plone, for international students. So those were the cases, and then I want to summarize all of this in a couple of slides. I still got time. So we have had some hurdles throughout the years. We have too much content and nobody ever deletes the old content. We have too good search engine optimization; people are asking why do these people from Australia keep contacting me about my venomous spider research. Well, it's because your Plone page at the university, if you ask me, comes first in Australia too. Someone wants a small pretty website with a carousel. Maybe Plone hasn't always been the best choice for that. Sometimes heavy transactions have made sites slow, but we have fixed that with caching, but caching is another problem. And just because we can say yes, we can do everything, it has generated maybe some additional work for us because we just have to do it then. And for some developers with a short time at the house, the learning curve can be quite steep. But we have had lots of good developers that have been able to do Plone stuff, so it's not impossible. Yeah, that was one slide with the hurdles, and then a couple of slides with the things that Plone excels in. Content management: the whole folder page structure is very powerful, easy to understand. And basically using Plone has been easy, since the content editors have been able to create hundreds of thousands of pages, and not that many problems with it. Flexibility: as demonstrated, we have integrated Plone to anything and made it into almost anything. And it's been very, very robust. So this integration to automatic speech recognition is to Plone 4 and it can be done in 2020. Permission management is absolutely fantastic. And there has never been any situation that we couldn't have handled with Plone. We have had very, very difficult feature requests, that we must do this, but not this, and this one, but not that one, but everything can be done with Plone permissions. Security: that's one thing that's very good in Plone. And the commitment from the community is really good. I hope my voice still works for these couple of last minutes. Workflows: any combination of workflow transitions and states and permissions is possible. And any number of workflows can be used together. It's not always wise, but hey, you can do it if you want. Accessibility: as demonstrated, Plone is accessible. And that's really important these days. Theming and configuration without actual programming: some people can do pretty much anything in a browser through Plone, and some CSS. And that has been fun.
You don't always have the developer on hand. So this is nice. CSS classes on the body, so every page has its own classes for folder structure, role, content type and so on. And it's nice to have a lot of different types of content to use. I think that's very nice. And Plone has readable URLs. Some systems just don't have that. So appreciate it when you have it. And the license cost has been like nothing. Nice. That's it. Wow, 28 minutes go fast when you talk fast. I wonder if we have time for questions or should we head to Jitsi. Yeah, Rikupekka, thank you very much for the talk and the insights, and that's a wow, so many Plone sites in one place, I think that's really great.
I was showcasing through various projects the different usage and benefits of Plone at our university. This year I could revisit few of those, and come up with a bunch of other projects where Plone is used successfully. I wouldn't go into too much technical details, but more like highlight the core aspects of Plone through different cases: Flexible permission management, overall robustness, content management usability, possibility to integrate to other systems, some TTW-tricks where you don't need programming skills, workflows, content rules etc.
10.5446/54739 (DOI)
Hello everybody, this is the last talk in track number two for today and welcome everybody and I'm pleased to introduce somebody who most of you already know very well, Sally Clanfeld from Jazz Carta fame and she's here to talk to us about color writing with orchid data. Take it away Sally. Okay, thank you for the O. All right, I'd like to start by just saying a word about why I'm giving this talk, because this talk is about a very simple site. No new technical innovations that might be of interest to developers. But it's a really perfect example I thought of, of the kind of thing that flown is really good at. So, this is going to be a really great solution for organizations with small budgets. Kind of a case study of a of a system or a website that is a perfect match for clones capabilities. Let me just hang on here. We also found this a really rewarding site to develop for the developers who was who were Alec Mitchell and Jesse Snyder. It was really, really refreshing. Instead of these huge sites that require a lot of customization and complexity and oh my gosh if it's a site that's you know, five 10 years old you're really struggling with, you know, historical things that you've developed and trying to trying to work with that this was a fresh start, a simple site that was just really a pleasure I think for them to develop because everything just went exactly as according to plan. And for me it was a real pleasure also because of my interest in the problem domain. For some of you know, before becoming Python developer I was an ecologist and a botanist so this, this was a project that was really, really fun for us. All right, I'll start off by describing naoc the organization that the site was developed for. So naoc stands for the North American, conservation, excuse me North American orchid conservation center. And it's a coalition of organizations that you see pictured here, dedicated to conserving North America's diverse orchid heritage. So now fosters and supports efforts to preserve orchid habitats. They work to restore orchid populations. Native orchids where their populations have declined. And they promote public interest in orchids and their preservation through outreach and various educational programs. One such educational resource that now it has created is the go orchids plant identification site, which fosters an interest in orchids and empowers citizen scientists to identify them and learn more about them. So we started developed go orchids in 2013. It was cloned from the open source go botany site, which we had developed for the native plant trust as part of large NSF grant. Both. So, so these are, these are very specifically plant identification systems. And both go botany and go orchids are built in Django. And it also has a field work and laboratory work side. This is a, this is a, a poster diagram of the, of the life cycle of orchids. And they are, and they are, is, is developing national national collections of orchid seeds, and the symbiotic fungi that orchids require to grow, which are going to be available to support conservation efforts. And collections are managed at a number of different collaborating institutions, and lots of data is recorded for each sample that that is collected. And this is the relevant side of NAO as far as this project is concerned. Just a second. Okay. So, let's talk about the, the, the, the capabilities that now committed for this site. So, simply, they needed a system to manage all the information that was in their orchid samples. 
So they had originally started capturing their data in a spreadsheet, recording all this sample information. And they had already outgrown that; as you can imagine, this was a lot of data to shoehorn into a sort of simple solution. And they needed a system that could capture data about each sample as it went through their process. And they needed data from the field about the orchid's habitat, population and many other details, then data from the lab: the results of seed propagation, fungal cultures, where the samples and cultures are stored, etc. And the data management system was not all that they needed. They also needed collaboration features so that people from partner organizations could contribute and disseminate information about meetings, research, etc., and could also contribute the data about the orchid samples to the system. And then, the specific collaboration features that they actually needed: the collaborators had to be able to enter information about the samples and submit them for review; the admins in the system had to review the sample data and publish it so that the collaborators could see some but not all of it. Anonymous users, on the other hand, should not be able to see sample data at all. So in Plone terms what they needed were roles, workflows and protected data fields. And supporting this kind of collaboration on sample data, the system also needed to support general collaboration features, such as sharing documents and posting news items and events for their community of collaborators and partners. And the sample data and general collaboration features had to share a common access control structure. And to top it off, although they needed quite a bit of functionality, the budget available for the project was minimal: pennies, as this slide is trying to convey. All right. So why did we think Plone was a good fit? The NAOCC staff were already familiar with Django from their experiences with Go Orchids, and like many scientists, they were also familiar with relational databases, so they were kind of expecting us to propose some sort of web database like Django, but in fact, Plone made much more sense. All the collaboration requirements that NAOCC had came out of the box with Plone and required only minimal customization: member roles, workflows, fine grained placeful access control, permission sensitive search, collections which could be used for making reports; all these things are the kinds of things that Plone is great at, and that just came with Plone. The sample management capabilities, on the other hand, were quite easy to add with a custom content type to capture the sample data. So, a sample content type with lots of different fields of many different types. And then there's a very rich structure to these sample data objects that could contain images, files, fungal analyses, that kind of thing. So whereas, if we had started with a web database like Django and added the necessary collaboration features, that would have been expensive, starting with Plone and adding custom content types and workflow was easy. So we needed a brief discovery phase to define the needed features in more detail. We used a spreadsheet to define in detail this custom content type that we needed for samples, with lots of columns that define the detail information about each field, for example, its name, help text, type of vocabulary, whether it should be searchable, etc.
And that content type definition included specifying what data fields are highly sensitive and can only be seen by admins. For example, given that some orchid species are rare and endangered, the locations that the samples came from must be hidden from anyone except the admins. So we mocked up a simple custom view for the sample content type, just using a Google Doc, something that the developers could work from; the out of the box Dexterity view is pretty useless, so that was something we felt we had to do. Okay, I have just realized the reason I'm stumbling here is that I had actually started presentation mode earlier before I did this talk and then I edited the talk, and I'm getting the old talk, so I'm going to share my screen and then share my screen again, and I apologize for that folks. That's the way it's got to be. Stop sharing. Hello. This is me in person. I'm going to get out of this presentation mode. I'm going to get into the presentation mode that I actually want to be in. I had no idea that Google Slides would not just catch your latest changes if you started presentation mode earlier, but obviously it doesn't. So, anyway, here we are at the place that I actually want to be. And then share my screen again. I am no good at this. You're doing great. I think. This one. Okay. I am actually heading where I want to be. Okay. So I talked about the custom content type that we mocked up a view for, and we also defined a custom workflow that we needed for samples and other collaborator content, similar to the normal Plone workflow. So like the normal three state, you know, draft, submit and published workflow. But in this case the published items are invisible to anonymous at all times, never visible to anonymous, partially visible to collaborators and fully visible to the admin and the owner. Okay. So, that's the lead up to discovery of what we needed to do, and here's just a brief tour of the implementation. Okay. So for the implementation. First we created the collaborator role, which was like a slightly specialized member. We created the custom workflow using that role, to be used on samples and any other type of content that would have the same visibility constraints as the sample data: content that when published would only be visible to people with the collaborator role. Next, the theming: that's the out of the box Barceloneta theme, customized through the web, adding the NAOCC logo and colors. We did not have the budget to do a custom theme. And Barceloneta was straightforward and has good mobile responsiveness, so that suited our purposes. We wanted to make sure that people could use the site on mobile devices in the field, including for data entry in the field, and Barceloneta allowed that, and Plone's nice mobile responsive editing UI allowed that, so great. We also created a skeleton site structure with kind of high level directories where the samples would go, reports, and a general collaboration area; this is looking at the site as an anonymous user. And here you see the same page logged in, and now you see those top level folders for samples and collaboration. So we implemented this by setting up placeful workflow on these collaborator-only parts of the site. Placeful workflows can be set up by clicking policy on the state menu. And you get a form that looks like this and you can specify what workflow to use in that section of the site. So we implemented this sample content type.
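The placeful workflow setup just described can also be scripted, for example from a setup handler. This is a rough, from-memory sketch of the usual CMFPlacefulWorkflow calls; the policy and workflow ids are invented and the exact API should be verified against the installed package:

```python
# Sketch: attach a collaborators-only workflow policy to a folder and to
# everything below it.
from plone import api


def apply_collaborator_policy(folder, policy_id="collaborator_policy"):
    """Point a folder and its children at a collaborators-only workflow policy."""
    placeful_tool = api.portal.get_tool("portal_placeful_workflow")
    if policy_id not in placeful_tool.objectIds():
        # The policy's workflow chains (for example the custom three-state
        # collaborator workflow) are then configured on the policy object or
        # through a GenericSetup profile.
        placeful_tool.manage_addWorkflowPolicy(policy_id)

    # Add the local policy config object to the folder and point it at the policy.
    folder.manage_addProduct["CMFPlacefulWorkflow"].manage_addWorkflowPolicyConfig()
    config = getattr(folder, ".wf_policy_config")  # CMFPlacefulWorkflow's config object id
    config.setPolicyIn(policy_id)
    config.setPolicyBelow(policy_id)
```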
Here's an example of one being viewed by an admin. It's a pretty big beast with a lot of tabs on the data on the edit screen for entering 50 plus actually almost 60 fields of data, including geolocation data, the geolocation data. Some of those fields like geolocation are restricted and can only be viewed by admins like I say this is a picture of it from an admin mode. And these examples, as I said, are folder-ish and can contain regular clone images and files so people can upload photos of the orchids that they've taken in the field and other PDFs or Word documents or whatever they're capturing about information about the orchid as it moves through their process. And we also created another fairly simple content type called fungal information, which can be held within the samples. So this was sort of a page-like content type that we developed to hold the results of fungal analyses, including DNA analysis, and the genus and species of the fungus, if known, usually these species are unknown, but it has fields for that. And this fungal information content type also has a field for internal notes that can only be seen by admins. So this idea of allowing the staff of NAOQ to really, you know, have all this in-depth information that's not necessarily available to all the collaborators. So this is a continuing theme here. We also set up search indexes for many of the sample metadata fields to allow searching and filtering on that information. Most importantly, this allows NAOQ to create collections that can serve as reports on the sample management data. We also called them reports in the UI, but they're just normal collections. And we made some examples to demonstrate the possibilities and serve as templates so that NAOQ could create more. And here's what the all-sipropedium samples report, aka collection, looks like. We used tabular views on the reports to display the data of interest, and also that, of course, links to the individual samples. We also created a CSV export that can be used on collections. Here you see it on the actions menu. This is so staff can download the report data and do further analyses. We also created a CSV import of samples, which I was going to just explain a little bit more about the export. The export behavior is to export whatever data fields have been defined as the table columns on the collection into a CSV file, and that allows staff to define what fields they want to see so that they can export them for the further analysis. Right, and then we also created a CSV import of the sample data. I mentioned that the staff had already started to capture their data in these in spreadsheets. So they had, you know, a fair number, hundreds of sample data items already captured and needed to get them into the system. So, so, yeah, so we created an importer that would allow them to do that. Okay, so summing up our main lessons learned from this project. And this was definitely a big win for this use case. It allowed us to create a robust secure highly functional and specialized site in very little time. And I think Alec and Jesse really enjoyed developing the project they were never fighting the framework, but always taking advantage of it to create what was needed. And that could be done quite, you know, efficiently and quickly with just using straight up blown best practices. Surprisingly, the CSV import was the most time consuming item to develop because it required just a lot of back and forth trial and error, finding clever ways to parse things in different formats. 
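The admin-only fields such as the sample's location map naturally onto Dexterity read and write permission directives. A minimal sketch, with illustrative field and permission choices rather than the production schema:

```python
# Sketch: one public field plus one field that only managers can see or edit.
from plone.autoform import directives
from plone.supermodel import model
from zope import schema


class ISample(model.Schema):
    """Fragment of a hypothetical orchid sample schema."""

    species = schema.TextLine(title="Species", required=True)

    directives.read_permission(collection_location="cmf.ManagePortal")
    directives.write_permission(collection_location="cmf.ManagePortal")
    collection_location = schema.TextLine(
        title="Collection location",
        description="Hidden from everyone except site administrators.",
        required=False,
    )
```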
You can just kind of imagine. The staffs had to put some effort into massaging the data in their existing spreadsheets so that it was consistent and importable. So that was, you know, a not unexpected, but definitely time consuming part of the project, everything else was really quite smooth. I want to say thank you to a big thank you to NAO and the director their director Dennis Wiggum for funding this because they agreed to fund this as an open source project, so that we could share a sort of pre configured with a sample custom content type literally an example that is a sample with it with other organizations that might want a similar sort of collaboration plus data management system for for any kind of scientific or research purposes that that it might match. I also wanted to shout out to Julian McGinnis, the data massager extraordinaire who answered millions of questions from us and provided many iterations of sample data for us to try our imports and on, and also to Alec and Jesse who did such a great job developing the site in such a smooth and efficient manner. So thank you to all of you. And now opening it up for questions. All right, thank you Sally. That was great. Rocky. Maybe I should stop sharing. Okay, let's see are there any questions in slide on that we can address right now. No, I don't see any. So, I've already shared the jitsie link in the slack channel for for track two, but if you want to just click the join face to face button the blue button down below the video window in the louds form you will also get to the jitsie room where Sally will join you now. Thank you so much Sally. Okay, thanks for everybody. See you tomorrow.
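The CSV export of a collection's table columns described earlier could be implemented as a small browser view along these lines; this is an untested sketch with the view registration omitted and the field handling simplified:

```python
# Sketch: dump the collection's configured table columns to CSV.
import csv
import io

from plone import api
from plone.app.querystring import queryparser
from Products.Five.browser import BrowserView


class CollectionCSVExport(BrowserView):
    """Download the current collection's table columns as a CSV file."""

    def __call__(self):
        collection = self.context
        columns = list(collection.customViewFields or [])

        # Run the collection's stored query against the catalog.
        catalog = api.portal.get_tool("portal_catalog")
        query = queryparser.parseFormquery(collection, collection.query)
        brains = catalog(**query)

        output = io.StringIO()
        writer = csv.writer(output)
        writer.writerow(columns)
        for brain in brains:
            # customViewFields hold catalog metadata column names, so read
            # them straight off the brain.
            writer.writerow([getattr(brain, name, "") for name in columns])

        self.request.response.setHeader("Content-Type", "text/csv; charset=utf-8")
        self.request.response.setHeader(
            "Content-Disposition", "attachment; filename=report.csv"
        )
        return output.getvalue()
```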
The North American Orchid Conservation Center is a coalition of organizations dedicated to conserving the diverse orchid heritage of the U.S. and Canada. NAOCC needed a system to capture data about orchid samples, with collaboration features to allow project participants to view and contribute information. Data and collaboration features had to share a common access control structure. One approach would have been to build on a web database platform like Django, but this was a low budget project and adding the necessary collaboration and access control features would have been a big undertaking. We had a trick up our sleeve - Plone, which has collaboration features galore and makes it easy to create custom content types to capture specialized data. With a short discovery process and just two weeks of development, we were able to create a system that provides Plone's usual features (member roles, workflows, fine-grained access control and permission-sensitive search), plus custom content types that capture 50+ data fields, photos and files about individual orchid plants and the symbiotic fungi that live on their roots, a CSV import of the existing data and a flexible reporting capability.
10.5446/54743 (DOI)
Welcome everybody. I am pleased to introduce our next speaker on this track. You know him well. He's a longtime member of the community and you may have seen him yesterday already at another talk. Fred van Dijk and he's gonna tell us about exporting a plot site to Word with results and lessons learned. Take it away. Thank you Fuvio. So welcome to my talk. Yesterday I used the same project we did for the last one and a half years to explain some nifty tricks we did with collective collection filter and now I'm going to focus on another part of this project which I'll explain further. I like to ask you a question you can post it in Slack or on the Slido which I now have opened which helps a lot to get some feedback at least because you're talking in the void with doing the presentation. Would you like to hear more later in this talk about the project difficulties and middle managed stuff or would you like to see a lot of code exporting nitty gritty thingies with Python doc x which is the module we used. Just throw some stuff in the in the Slack channel or the Slido channel. So yes to introduce myself quickly I'm Fred van Dijk. I'm working for Zes Software from Rotterdam in the Netherlands. We've been working remote for customers for many years but now we're of course fully remote. My direct colleague is Maurits van Reis which you will probably know is our semi-new second release manager and this talk will focus on the export functionality for which Maurits did most of the difficult technical work and finding out and I was nagging him tinkering, gluing and polishing stuff later to deliver it to the customer. This is part two of a talk I did already in Ferrara last year where I explained more about the project and we were still struggling and working on the site structure. I will point that out a little bit in this talk in the first part but then I will move to the export to Word because that's where the the fun started at the beginning end of last year beginning of this year. So we'll have a lot of details if you're interested in those and I'll do a conclusion with lessons learned and some generic stuff. So the scope of the project we did. We are working and providing support for the Flemish Environment Agency which is a kind of in parts a little brother of the European Environment Agency but of course the Flemish Environment Agency is focused on Flanders, the Dutch speaking part of Belgium and they do a lot of the lower level executive work to manage water and also manage air pollution, air cleanliness in Flanders. They also have some committees and one of those committees is the CEVE which organizes the water management and then you have to think about water flow through all of the country. Where are the sewers? How do we check for pollution? How do we should we watch out for pesticides in the water? Everything they do the executive part. So one of those committees which is called the CEVE organizes a kind of they have to write a water management plan every six years and every six years they write along a large plan they collect information from lower government bodies, they have a feedback round and then they make a new plan of okay how should we manage the water flow in smaller rivers in ponds? Where should we work on the sewers? It's a very very detailed very big plan. They have now made a new website for that so every six years they have to do this. We managed normal publication websites but they asked us two years ago a new plan is coming up can we do this more digitally? 
They tried it once like ten years ago and now we were like can we do this again? The website is now live you can just if you want you can visit it. It's as GBP which is the Dutch abbreviation for don't be scary Stroomgebied beheerplannen. Somebody asked if I'm online I think so. So maybe you should check the the floating streaming. So this is the live it's already live they are now in running the consultation round and they're trying that they're waiting for feedback on the plans that have been written. So the project for us is now partly finished we've already did this and the big question was can we have a website with all our plans? And can we also because this is a huge plan and it's organized around water basins so this is Flanders and a large part of the plan is organized around water basins which is probably a small river is flowing through here and this is a kind of technical way to limit the different areas. Can we also export this whole website to a Word document or to any document with a linear text flow because we have to present it to government and government has to a higher government has to approve this plan. So we have some experience with exporting website content to another format. The previous export we have a lot is with PDF export and that has some challenges mainly a pixel perfect layout PDF is like okay write it to a virtual document and the main issue we had there with a large other environmental website project was for example a table of content indexes other stuff most of these add on we have some add-ons that provided collective senders PDF we created a very large one is EAPDF which was used on the environment agency website but they all depend on an underlying tool called WKHTML to PDF which says you first generate your whole site in one huge HTML document and only then for example such a tool can generate indexes and table of contents and other things for you and we didn't we had that nasty experience so we were checking is the grass greener on the other side on exporting something more semantic structural and then let the other tool do the formatting and layout which is something you can have with Word. Well then summary I'll spoil the summary the grass isn't exactly greener on the other side but we did manage it so I spoiled the end but I'll talk you to the rest so this project 2019 and 2020 we focused on the structure of the website because you need one huge tree and not a forest and the problem with the default clone site is that you have a folder is folder item and you have a page item and you can have multiple pages in a folder and subfolders and that doesn't really nicely translate to one single tree where you can run through all the branches and every branch is heading in the linear document. 
I will show you now what we've created here so we've created one new content type which is called a section and actually this matches to the folder is to the folder is document which many clone integrators also use and that really helps us so let's go dive into one so water basin specific part we will now go to one water basin and this water basin is structured around introduction who is who pressures which are also ecological and pollution series the situation of the water basin and the plan the vision and actions they want to take and then I can somehow scroll through it see some this is linked to to some documents and here we have a menu so I can go to for example introduction okay introduction thank you then we have some specialties about this one it's about folders I don't know even though English translation but the Netherlands is full of folders we have canals and as you can see I can quite easily go through the structure the trick is that this for example I will jump back to the main iso back and we only have three views here which is a text view with a subsection where it will generate menus there is a view that says okay I'm on this section but all my children's sections should be text blocks and you normally use this one at the end at the leaves of the of the whole content tree or you can make a longer menu so I will show you what happens this one is now set to subsection menu so it generates all the subsections which are in here as items here I can show them if you go to the contents you see we have five subsections here and the view in this case just shows the five items and it uses the icons which are actually an extra field on the section as menu items so I can go to Ken's marking and Ken is marking again we see this same similar thing this text here is when I go to edit is the main text that's a rich text there's this image which is used as the pictogram for the upper tree and here below we can dive in for example here and we can dive even further and what you could do here is for example create three sections and then say display show them as long items I won't do it like here now so for example you see here you see four sections these are now rendered as a menu but if I would switch the view to a menu it would render them like a longer menu and if I would switch them to text blocks it would become one big story and with these three views on only one content item the section we can create this whole website at least we can create all these plans here and we have this this yeah it is necessary condition to generate one long linear list so the thing I've already skipped here is that for editors it was a bit confusing at first because this linearity of the whole website demands that they shouldn't insert their own navigation they should use the navigation which I just showed you which is either this listing view at the leaves of the whole site structure or they should use these blocks and things go a bit wrong if you for example let's go to here pressures if you would here start building your own navigation in the text section of a section if people would start here look for other nice info yada yada and they would put a link on it and this would get inserted into the word document which is really strange to read for somebody who reads the document version of the website so that was a kind of finding a balance between having nice pictures by using the image the images on the content types and by pressuring the content editors please don't build your own fancy HTML navigation 
and other stuff because it will and that was a struggle because it was like oh but if I can't express myself in the website and make it fancy and make it online then yeah just forget about the word expert that's not too important for me but another person in the organization would say look we need this word export do generate it okay so that's the whole system which is underneath here we need this in word and also another extra fancy one was can we exclude sub trees from the expert export the idea was that in this whole tree of nodes of sections you could one at one time say oh here's a nice graph you could say look this section here that's very interesting as background information and we might have some stuff from specialists here but can you please exclude it from the exported word document so what we did is you can say here it's core or its background information if you would flip the switch then it would switch everything behind it also to background information and it wouldn't get included into the word document unfortunately it was a kind of functional requirements but in the current live website it's not used but it works okay now to the meat of the thing and the proof of the putting let's go to iso back and and now I can say under the actions export section as a word document there we go this is a nice little trick where we don't have any async support now one of the soap threats is actually generating the word document we pull on the site route for the status we combine that with the user who created it oh and now it says info document has been created in the folder document exports and the last trick to do this so I created one this afternoon to be sure it was generated but the trick here is that we finalize the word export as a normal file content item in the document exports here okay now we can have this item is it really a document I will show it to you open it with word and here it is this is the whole iso back in section from this part of the website with the whole structure there let's see do we have questions yeah I'll answer that one later Paul so this is the whole document there's one thing we are not responsible for the layout in large parts and for example the table of contents is this nice little trick where you say right click to update compute and word will compute it and here we have our whole document I will now go into some more details later we'll come back to this document I've stored another version locally so our thinking was okay so you see now it works on the live site we thought in the project let's first generate the basic structure we had that and then let editors create more content catch errors they have when they generate the word version and they hopefully do that regularly and then we can catch all the minor details and the caveats and then optimize and specialize it the problem one was that editors didn't start inserting content until like three months before the website had to go live because they were dependent on all kinds of other external agencies and the second problem was that of course because of COVID-19 we didn't really have any context anymore with it with the editors so after we finished the basics it was a lot of remote work and fixing things so how did we pull this off how did we do regenerate it we are using two modules Python doc x we use beautiful soup both well maintained projects but especially Python doc x is stored in functionality there are many pull requests in the GitHub repo but the maintainer is conservative to add them 
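Going back to the section tree and the background-information switch just described, the first step of the export is to flatten the whole site into one linear sequence of sections with their nesting depth. The real code is not shown at this point, so this is only a sketch; the portal type name and the name of the background flag are assumptions.

# Sketch: depth-first walk over the section tree, yielding (heading_level, section)
# pairs in reading order. Subtrees flagged as background information are skipped.
# "section" and "background_information" are assumed names, not the real ones.

def iter_sections(container, level=1):
    for obj in container.contentValues():
        if obj.portal_type != "section":
            continue
        if getattr(obj, "background_information", False):
            # Skip this node and everything below it.
            continue
        yield level, obj
        # Sections are folderish, so recurse into their children.
        for item in iter_sections(obj, level + 1):
            yield item


def build_export_plan(root):
    # Flat list for the Word export, plus the UIDs needed later to decide
    # whether a link in the rich text is internal or external.
    plan = list(iter_sections(root))
    uids = {section.UID() for _level, section in plan}
    return plan, uids

The UID set collected here is what later decides whether a link in the rich text becomes an internal Word reference or a footnote pointing outside the site.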
The last official python-docx release was in 2019. BeautifulSoup is beautiful, and as I will show later we used an example from the internet for a rich text parser to parse the actual contents of every section. python-docx has a special trick: it first creates an empty Word document, kind of like the normal.dotx template you have in Word, and on the basis of that empty template document it allows you to iteratively add elements: headings, paragraphs, tables, page breaks and so on. You can find this in the python-docx documentation; there's a huge user guide with all these things, and it is very extensive for the basic things. So here's a small example of how you would do this in docx. Now the question is, okay, how do you actually do this for this Plone website where we only have sections? We create a document object from a template, we find all sections in the whole site and build them as a tree, and we collect all of their UIDs as they exist in Plone. Then we walk the content tree. We start adding to the document the first page, the document info and the table of contents, and then we loop over all the sections: for each one we first insert the heading, whose level depends on the depth it sat at in the tree, which is calculated by this big main loop, and then for every section we do a rich text parse over the text field and create the actual content part. While we do all this we keep some separate lists of things we've inserted and other stuff, and at the end of the document we write some indexes, for example an index of images used or of references to external files, and we have a special list for error logs. And then in my presentation I have a long, long list of many, many details of all the things we found that were issues and had to solve before we could get this to a stable state. Who wants to see a bit? Because I've got about 10 to 15 minutes left now. We were warned, because we found this page on the internet from someone, I hope it's still live, somebody who had used Python and BeautifulSoup to actually parse HTML content and generate something out of it, and he had this big warning like: okay, I switched to PDF. Well, PDF is where we came from, and the grass wasn't green with PDF either, but we still went ahead. So docx is actually low-level XML, and it's a huge spec which is used by OpenOffice and others; it should be standardized, but still they do things differently in the code. And now I can switch to what we have here. So this is our main, maybe too big, yeah, this is fine. Here we have some special stuff: the JavaScript, the tokens, a timeout to create the doc. We create a document, we create this whole document contents here, and then here is the main loop where we first find all content in the website, we parse out all those UIDs because we have to make internal links, we have support for generating from the root or from a subtree somewhere, and then we have a parser where we add the heading and feed every section into the parser. You see we feed the object text, and we also feed the raw value of the text field, the unprocessed rich text value, for some stuff. Then we handle the attachments, the warnings and so on, and we write the Word document and we're done. Of course the devil is in the details, which is in the rich text parser, which you see here with some recognizable stuff to parse HTML, and this is where my earlier warning comes from.
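To make that description concrete, here is a stripped-down sketch of what such an assembly loop can look like with python-docx, reusing the hypothetical build_export_plan() from the previous sketch. It is not the project's code: the template path, the helper names and the error and attachment handling are simplified placeholders.

# Minimal sketch of the document assembly with python-docx (illustrative only).
from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn


def add_table_of_contents(document):
    # Commonly used raw-OOXML trick: insert a TOC field that Word fills in
    # when the reader right-clicks it and chooses "Update Field".
    paragraph = document.add_paragraph()
    run = paragraph.add_run()
    begin = OxmlElement("w:fldChar")
    begin.set(qn("w:fldCharType"), "begin")
    instr = OxmlElement("w:instrText")
    instr.set(qn("xml:space"), "preserve")
    instr.text = 'TOC \\o "1-3" \\h \\z \\u'  # headings 1-3, rendered as hyperlinks
    separate = OxmlElement("w:fldChar")
    separate.set(qn("w:fldCharType"), "separate")
    end = OxmlElement("w:fldChar")
    end.set(qn("w:fldCharType"), "end")
    for element in (begin, instr, separate, end):
        run._r.append(element)


def export_sections(root, template_path="organisation_template.docx"):
    plan, uids = build_export_plan(root)
    document = Document(template_path)  # heading styles come from the template

    document.add_heading(root.Title(), level=0)
    document.add_paragraph("Generated from the plan website")  # document info stub
    add_table_of_contents(document)
    document.add_page_break()

    attachments, errors = [], []
    for level, section in plan:
        # Word only knows heading levels 1-9.
        document.add_heading(section.Title(), level=min(level, 9))
        raw_html = section.text.raw if section.text else ""
        if raw_html:
            parse_rich_text(document, raw_html, uids, attachments, errors)

    # Appendix: list of linked files, plus whatever ended up in the error log.
    if attachments:
        document.add_heading("Linked documents", level=1)
        for title in attachments:
            document.add_paragraph(title)

    return document

The interesting part hides in parse_rich_text(). A heavily simplified version of the BeautifulSoup approach might look like this; it only handles a few of the tags TinyMCE can produce and ignores images, tables and links, which is exactly where most of the real effort went.

# Simplified sketch of translating TinyMCE HTML into docx elements.
# Links, images and footnotes (which would use `uids` and `attachments`) are left out.
from bs4 import BeautifulSoup, NavigableString


def parse_rich_text(document, html, uids, attachments, errors):
    soup = BeautifulSoup(html, "html.parser")
    for element in soup.children:
        name = getattr(element, "name", None)
        if name is None:
            continue  # stray whitespace between tags
        if name in ("h2", "h3"):
            # "Faux heading": a bold run instead of a real Heading style,
            # so in-section headings stay out of the table of contents.
            run = document.add_paragraph().add_run(element.get_text(strip=True))
            run.bold = True
        elif name == "p":
            paragraph = document.add_paragraph()
            for child in element.children:
                text = str(child) if isinstance(child, NavigableString) else child.get_text()
                run = paragraph.add_run(text)
                child_name = getattr(child, "name", None)
                run.bold = child_name in ("strong", "b")
                run.italic = child_name in ("em", "i")
        elif name in ("ul", "ol"):
            # These style names must exist in the template being used.
            style = "List Bullet" if name == "ul" else "List Number"
            for item in element.find_all("li", recursive=False):
                document.add_paragraph(item.get_text(strip=True), style=style)
        else:
            errors.append("Unhandled tag: <%s>" % name)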
and this is actually the rich text parser part where there's really the devil is in the details to fix c tags see images see lists see other items so this is very quickly an overview of the code one thing I'd like to show in the website is the story about the template because but one of the difficult parts we found out is that when you base docx export on the template and you want to insert into the document let's see where my word where's my word there's my word if you want to for example have these these are styled for the organization one and if you want to style these correctly there's a match match between the normal dot from the organization and there's the normal dots which is from docx and those internal names in the XML are important so what did we do site settings or content settings so I have for example we can upload in a special folder in the website we can upload an organizational normal dot and for each of these normal dots and we can have several you can choose which one to use we can map a title and a footnote anchor which is the docx kind of fixed ID of heading to the internal organization created template so we could with this we could experiment with a number of templates and our webmaster for the project could upload hits his own document so here you have the list of templates where we could go to multiple and here our webmaster could experiment and choose a different one and we could also here set some margins and other stuff for the images which I hopefully have some time for so now I'm going to very quickly go through many many details that we read into you can write low-level XML and that was needed because a lot of so you have this ad heading at paragraph and at other stuff but you don't have for example a table of contents and you need to insert some raw XML in the document stream which you can do with our XML element I've already explained the problem with styling of the headers is that you need to say I have a heading one heading two heading three heading four but maybe in your template that's not called that way and XML docx is actually a zipped archive of a number of XML files and the headings are in a separate XML internal links internal links are in and links to appendix documents what we did was if somebody in the website let's go for example who cares here if somebody creates a link in here to another document we have to check if it's inside the website and create an internal link in the word but if it's a link to any external website we decided to have a footnote inserted with then in the word document the link to the website so for example this links to Stromgebietniveau okay that's somewhere else a document that's in the management plan and not in the specific parts okay close it so these links and I will now jump to the word document are all visually here so here you see for example footnote 5 then there is here somewhere a link footnote 5 to another part and we have to try to see if it's and here you see we had still an error in this one where it still links to a PNG so somebody put a link to a PNG and it doesn't recognize in this case that it was the same website that's probably because I'm running this now in development mode and I actually generated it this afternoon so here you see okay interactive map interactive map so here you see all these references what we did as an extra requirement if somebody uploads a PDF or another kind of document in the website that we collect those and we create at the end you see and this is only one one water 
basin word document at the end we create a huge here's the list of added documents so in those we one two and three these are all PDFs which somewhere in the documents gets linked to and which is also in the final word document this was also all the footnote and all the other stuff we had to generate those using low-level or XML elements because they are not really available as a high level method concept in python.x passing of the rich text field is tricky editors could do all nice of things and tiny MCA is not that restrictive in the output of HTML so we strip many of the tiny MCA formatting if you look in the website and I start edit I edit this page then you will see we've limited the layout to only two headers blocks are only for graphic links so we try to limit the amount of stuff that editors could do. Listings, intersection headers there are only two subheadings but they are not in the table of contents if you don't want them in the table of contents you can't use at heading and you have to create a sort of faux heading and I'll get back to this lessons learned the photo blocks engine would have been ideal to minimize all this messy HTML to limit the horrible stuff editors can do in a normal tiny MCE rich text widget so image scaling never upload in an image the full item the full size in directly into the word document with add picture because then you will maybe add a blob for four or five megabytes we kind of pushed to 150 dpi which means your pick your image only needs to be 104 pixels and we use an image scale to convert all images to this one at maximum of thousand pixels and in the website in the image insert one we only use two sizes half width and full width image alignment and this is from one of the slide who questions yes a lot of the python doc x limitations are not python doc x limitations but are actual word limitations word doesn't have any concept of float ref left right and doc x only allows you to insert an image as an inline shape and when you want to so align an image in word on the right you would have to first create it to a floating shape let's see which one document I now have so here I did the trick because this is there's actually another document which I just opened which is this one which still has the ugly one the only way to fix this was to have this is the first output from the clone website from our export and what we did was we ran a macro and the macro converts the inline shape to a floating shape and we could pass left or right info so we should we could improve this later but then the macro runs for four or five minutes and then you finally get this one and there you see it's either half width or it's not totally but then you will have to do some manual readjustment as an editor or the image was let's see here we have full width images and that limited and made it somewhat useful because what actually happens is if you align an image to the left of the right then word is actually creating a floating shape calculating it from the margin and moving it with the kind of relative absolute positioning in this line so that was one of our hardest tricks that we couldn't really do here escape your strings if you insert a footnote with an URL with this ampersand then that's an invalid character in XML so we had a long search for why our export broke something that was actually rather easy was from my presentation from last from yesterday where we have dynamic images so we have these nice high charts images in the website these are actually separate 
content content type and this content type uses a back-end export server to generate PNGs and SVG for and every graph I would have loved to have inserted the SVG into the word document because of the size the problem is Python Doc X doesn't support it and the only workaround I found is that you can convert SVG to some obscure Microsoft vector format from 15 years ago and that one should be able to be inserted into the doc extreme but we didn't have any time for it so we converted all our graphs to PNGs and then inserted it I've already shown you the trick of asynchronous generating the document it would could take four to five minutes we didn't choose plan up async but we made a kind of trick where we pull background generation so that was a lot of things I've skipped some stuff but I hope you get the idea that it is doable it is workable you've seen it working but there's a lot of limitations we managed to export the whole structure to words photo would have been a good match if we had more budget and time and photo had been a bit further when we had to make this choice halfway 2019 photo matches because you have this folder is document which is matches to our section and every feature you could put that into a separate output function more lessons learned if I would start now again Python Doc X would scare me would scare the shit out of me because it's in maintenance mode and there hasn't been a release since early 2019 but it's the same situation as a lot of our PDF export stuff because that depends on this nice tool which also hasn't seen a release for the last two three years but still I think this project was great even though people didn't use all of the functionality this could save government bodies a lot of work if they would be able to handle the kind of tree structure you need there and what does an intermediary format is also great because you can create a semantic export and then you can let editors do the tinkering and the other stuff as the kind of DTP I shouldn't say this do desktop publishing in word but you can all export these problems to word then the final remarks are should we open source this code we want to it's not a legal problem but we didn't have the time yet we only this project is like two three months now open and done but also we have a lot of restrictions in here which I think okay should we just dunk this code into the collective and then let other people suffer or should we first clean it up and explain more it's still fragile we had the webmaster of this project post edit a lot of content to fix some of the export issues so that's it if you want to see the website again it's SGBP our post to link later on slack it's not it's not easy to pronounce in English it's a Dutch site but that's it thank you for your attention back to full view thank you Fred I want to see the claps in slack now but there are some questions in Slido and yeah I'm picking them up as well why don't you pick can you see them yeah I'll move to my laptop screen here would this work for iterative documents question from Paul yes well we didn't have we didn't have to so we couldn't give them the support this year to have external agencies also login for a section of the website but you could activate iterate on on the section and then have section it would become a bit more problem problematic if you would have whole sub trees that are versions but for individual sections you could just use normal workflow on it and say okay let an editor post this and have a kind of check before publish 
from a final editor yes a lot of the limitations of dog eggs are actually limitations in the whole word and the whole dog eggs format dog eggs just has to struggle with that and the maintainer I think did a hell of a job to block all kinds of experimental pull requests from other people but it kind of let's see a PB server publish and yes well that that we also looked at that two years ago but that project depends on an external service to convert all kinds of things and there are also two or other options where you kind of first generate your whole website into a kind of intermediary format like a you also like a huge HTML that you can feed into WKH HTML to PDF for PDF but it got very complex you have to run I think also an office server that and we we went for because we had this this very structured thing and we saw some merits in using dog eggs to generate the document as one big stream so we did consider using PP server or other similar solutions but we choose for this one yes thank you that's the name Armin it's the EMF format which Microsoft invented like 25 years ago in which was a kind of precursor to modern SVG stuff so what we found out that you could generate first convert SVG to EMF and it should be according to a pull request on the Python dog X GitHub repo you should be able to insert EMF as a shape into the stream and then you would have a vectorized image in Word I think those were the questions all right thank you Fred not sure if anybody can hear me but I just want to remind everybody that you can join the the Jitsie channel by clicking the join face-to-face button in blue down below in the center inter-center column below the video window in loud swarm yes I'll move there too then we can discuss if this is useful for the people I talked with people in Ferrara and we should continue talking about this online great okay thank you very much thank you full video have a nice remainder of the Blancons
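Two of the smaller details mentioned in the talk, downscaling images before handing them to add_picture and escaping strings that end up in raw XML, can be sketched as follows. The target sizes and helper names are illustrative, and fetching the downscaled bytes from Plone's @@images scaling view is left out.

# Sketches of the image and escaping details (illustrative values only).
from io import BytesIO
from xml.sax.saxutils import escape

from docx.shared import Mm


def add_scaled_image(document, image_bytes, full_width=True):
    # image_bytes: an already downscaled rendition (for example a max. 1000px
    # wide image scale), so the .docx does not grow by megabytes per picture.
    width = Mm(160) if full_width else Mm(80)  # roughly full vs. half text width
    document.add_picture(BytesIO(image_bytes), width=width)
    # python-docx only produces inline shapes; left/right floating alignment
    # had to be fixed afterwards with a Word macro, as described in the talk.


def footnote_url(url):
    # Ampersands and angle brackets must be escaped before the text is pushed
    # into raw XML, otherwise the export breaks on URLs with querystrings.
    return escape(url)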
At The PloneConf 2019 in Ferrara I presented a project website for a government department where they could collaboratively edit and publish a 'country required 6 year plan' for water management. One of the unique questions was if that plan could be exported to a Word Document as well. In 2019 we were still planning and converting our prototypes to a first version when I held that talk. Now the website is live and we have finished the Word export component. We didn't reach 100% export functionality, but we got reasonably close to be useful and used in a production setting. This talk will focus on the Word export component using python-docx and challenges/lessons learned.
10.5446/54744 (DOI)
Back, we are here to learn about form block and form builder tools from Yanina hard who works at the work bank in Germany and I've as many of you, I use while I use plone form Jen and easy form and now to be able to do something like this with Volta is something that we all are looking forward to learning about. So with that, here's Yanina. Yeah. Hello, folks. I will Start my presentation. I wanted to talk to you about form block or form builder, especially in a Volta and I like mentioned that he is working on one approach. I follow another approach and I wanted to give you some insights about the line of thoughts we had and then I will tell you something about my approach on This Team and Yeah. Okay. Starting now. Shortly about me. I'm a developer since 2010 at the work bank in Bochum and I'm currently working on my bachelor degree in technical computer science at the FH in Dortmund. And to be honest, the project I will show you later on in this presentation is my actual bachelor thesis. So, yeah, I hope that the form builder code I will publish is afterwards will be great to use. But it's just a batch of thesis. At the moment. And I'm a contributor for Plon since last year. As I Met all of those who were at the Beethoven Sprint in Bonn. Yeah. So, The first thing about the form builder I I learned was That we Must have a baseline to to build on and we had a lot of discussions, although with With Timur and Victor about what we want to do or where we want to go with us. And we mainly decided that we wanted to follow the same path. Many decided that we wanted to focus on non technical users, Not develop us so that someone who uses Volto, especially Volto can easily get into this form building. So we wanted the focus on editors and we don't want to Build a form framework. We want to build a form builder. And we got to example use cases like the easiest way to just a simple contact form in the side. Like five or six fields and just the form action emailing to someone. And and we had to use cases thought through about complex forms with especially grouping of form fields or more than one form action and so on. But Yeah, That were the first discussions and as we all know, There isn't just one way of doing something or as a saying says every road leads to Rome. I guess in German. It's a bit more accurate because there it is many ways, many roads lead to Rome. So we had to think about Criteria's we want you to have in our add on or to respect and our add on. And so we thought about different criteria's we should Include For example, we had to discuss complexity. We want to implement like do we have Validation. Do we have the purpose of multi page forms. Do we have one or more form actions. So sometimes you want to email the form and Also, you want to to to save the form. And The The other things that we have to do is The other things are the flexibility. So you have users in mind that or at least Often happens to me that I have a user in mind that is as intelligent as me or as As who works like me. I guess that's the right word. It works like me. So if I click there and there it will do this, but From experience users will use everything you give them in any kind of way you didn't expect. So the flexibility is a big, big point. And how much do you want to give to the user to do. And other big criteria was user interface. 
So if you have someone who learned to work with Volto, you don't want to give them an add on that Is doing different is working different than the things he has until now and we had different approaches in mind shortly after the discussion of the criteria and two first ideas we had On the one hand, the form folder content type Did that before I mean easy form and phone form again are well down products in the phone world, I guess. And it would include just An own content type to add and edit forms. So you have the possibility of build complex forms and Can can do, for example, complex things with with Form actions. The problem on the other hand is the inflexibility and Also, also the User interface will be Not as intuitive as the rest of Volta. So if you are a user and you have Volta blocks, you can just create a page and do 50 blocks and everything is cool. And then you add the form folder content type and the usage is Mostly not completely different, but it's different As before. So on the other hand, we came to the conclusion that form fields as regular blocks would be an option. That is the Approach A lock took over. So it definitely means every field is its own block. And you can, for example, mix blocks with a Form field blocks. So you have a really powerful tool and you have to learn usage from Volta. It's just For, for, for, for my opinion, I guess it could be a bit of an overload in the At block Widget Although it's definitely a cool idea. Another thing I thought about was You don't want to have always have every page wrapped with a form. So you will have to have a logic behind all of this. If there is a form field in your site, then you have to wrap it with a form. And you have to explicit that actions to that side. I would prefer the form fields more than the form folder content type, I guess, but that's just just my opinion. And Another saying says a compromise is an agreement by which both parties get what neither of them wanted. And so my approach I will show later on was a form block approach. That would mean you don't have to change the user interface because the form block is like every other block, just one small thing in this block at widget. You can click on it and then you can configure your form on the site. If you have the use case that you want to have different forms on one side, it would be also doing that. And my idea behind that was a bit of expandability because from the time I worked with Plone is every client, every user is different to the one before. So you have to make some adjustments, some fields more, some fields less, or special fields with special IDs or something like that. And so I try to make my approach expandable so you can mostly add fields easily. On the other hand, you can't mix the form fields with other blocks. So if you want to have a form which has text in it, you will have to define an own kind of input field or something like that that would display this description, for example. And I guess I will go into that later. And I guess this approach won't be able or at least won't easily be able to make complex forms like multi-side forms or it could be overhead to make complex form actions like having more actions than one. At the moment, my approach has just one form action per form. So I guess you could expand that. But on the other hand, I think that's a good idea. I think that set that just happened. 
I will city call for,pl delegations because in some Trans claws I worried what the тотual I didn't really have the right C, but the other way to allow for information will are configure all of that. All that said, I have to thank to TBRU for the great Voltoad-Ons training. And I will have to reflect all half of my code. So please don't be shocked about what I will show you because I guess after I refactored the code with all the information I got from the training, the usability would be a bit easier than it is now. So all that said, I would just like to give you a little sneak peek of what I have done so far. Hopefully you all see my local Volto now. And it's just the instance I'm developing on at the moment. So I will create a new site and save the trolley. And now you can add the form-branches and the at the form block as a normal block. I guess mostly I would put it in common and not in most use, but for developing, I guess it's fine there at the moment. And then you'll see at the right in the sidebar that you have fields to like buttons and you can just click there and you will see one field is popping up in the block. And you can just click on this. I may should have done it in English. Sorry, but this is just saying edit. So yeah. And now you can, for example, set the label and you see on the right hand, it will automatically change. And also if you click field required, you will get the red star to get the required tag. I have some common attributes at the moment, like the max length in the text field. But if you add, for example, I will just put both there and make the edit. The text area, for example, has a text area. The text area, for example, has a max length or rows and calls. So if you want to make it bigger or smaller, that you can actually do that for every field itself. And you have the, for example, the int label with, well, number input, where you can, for example, say the max is 1000, the min is 10, and you have a step width of 10. And then you can save it and you will see the required attribute. And if I just inspect those things, there is a call, max length and rows on this one. I guess the calls is wrong. I will have a look at that. And for example, the number will just start with 10 and will recognize the steps. And last but not least, we want to send, for example, the form. And we can add an action setting here. At the moment, I just have one action button for the whole form, because it's easier to implement, but the possibility to do more actions would be possible, I guess. And you have the text for the submit button. You could just write something else and it would, yeah, it's not in the view form until now because I just implemented it today with the button. But I guess you get a little sneak peek until now. Wrong one, right one. Yeah, that's just as a sneak peek. In summary, I guess there is no universal remedy on those add-ons because as I learned in my development time, every user needs different things. And so I personally look forward to have different kinds of add-ons for form building. So you can choose between them for every project. So for example, if you have a project where you know there will be many forms or complex forms, you would use a bigger add-on. And if you have just two contact forms or something like that, you can use a smaller add-on. But also, this is just my opinion on this matter. Yeah, as I said before, code refactoring will take place because the Volto add-on training had got me so much more insight. So again, thumbs up. 
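To make the demo a bit more tangible, here is a purely hypothetical sketch of how the data of such a form block could end up being stored in the page's blocks field: one block that carries all of its fields plus a single action. Every key and field name below is invented for illustration; it is not the actual storage format of the add-on.

# Hypothetical shape of the form block's data; all names are invented.
form_block = {
    "@type": "formBlock",
    "fields": [
        {"id": "name", "type": "text", "label": "Your name",
         "required": True, "maxLength": 100},
        {"id": "message", "type": "textarea", "label": "Message",
         "rows": 6, "cols": 40},
        {"id": "amount", "type": "number", "label": "Amount",
         "min": 10, "max": 1000, "step": 10},
    ],
    "action": {"type": "email", "submitLabel": "Send",
               "recipient": "info@example.org"},
}

Keeping everything inside one block is what makes the approach easy to extend with new field types, at the cost of not being able to mix arbitrary Volto blocks in between the fields, which is exactly the trade-off discussed earlier in the talk.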
And it's still a lot of work to do and I'm really looking forward to publish my code. After I finish my Bachelor's thesis, but hopefully it won't be much longer for this. And I hope that this approach is good enough for some of you and I hope some of you will come back to me with ideas or also with helping hands maybe, I don't know, I don't know, I don't know, I don't know, I don't know, I don't know, I don't know, I don't know. But I hope that this helping hands maybe, I don't know, yeah, I guess that's for now for me. Thank you for your attention. I hope I can answer your questions if there are any. And yeah, looking forward to the face-to-face later on. Thanks, Janina. That was really interesting. I think you've made a lot of us think about how your work can lead to good form packages. And we actually do have a question in the Slido. The question is, can't we reuse the existing VALTO dexterity schema editor and use easy form as a backend? I guess so. We had a discussion with, or in our discussions with Antimo, we thought about the kid concept form again as a backend. I don't know if anyone knows that. But yeah, I guess it would be possible. That was actually like, there's, you've spurred some discussion. I can see comments in the Slack too. Are you planning to sprint or organize a part of an open space to the backend? I don't know. I don't know if anyone knows that. That add on. But yeah, it would, I guess it would be possible. Are you planning to sprint or organize a part of an open space to talk about this and work it out with others? I can't attend the sprint sadly, but I will definitely come back to the community. If there is an open space track I can use, I would be glad to join and glad to host it. I love the spirit. I love your spirit. Definitely. We have open spaces. There's a Google document for Friday's open spaces that I pinned in the hallway track in the hallway channel and the sprints also. So yeah, maybe, maybe that's something to think about that. At least you can get the discussion going on Friday. And even if you can't make the sprint on the weekend, we can start more discussion that will go on past the conference. So it looks like there are no other questions in the slido, but please join everyone in the face to face. And thank you again for showing us your work and for joining the Plum community and jumping in with both feet first. Thank you.
A short presentation on my work on a block for building forms in Volto, and on other possible approaches.
10.5446/54745 (DOI)
Yes, welcome everybody. I will be talking today not so much about technical things but more about the user experience, about how we work. I work for a non-profit called the Clean Clothes Campaign that works on the garment industry, and we rely quite heavily on Plone, and we had a very tough year this year, as everybody did, but for garment workers it was especially tough. So the Clean Clothes Campaign, what is it? Well, it is an organization that consists of a worldwide network of other organizations. The headquarters is in Amsterdam. It is a global network of trade unions, of women's rights organizations, labor rights organizations and other groups, and we concentrate on a lot of things in the global garment sector. We have been at this for a very long time. We have existed for 30 years now, 31 to be precise, and our work is in the garment and footwear sector, and that is a sector that has a lot, a lot, a lot of issues. The supply chains are very unclear. There is not a lot known about them; that is beginning to improve, but still, and it is infamous for paying really low wages. So what do we do? We campaign, we lobby and we also take direct actions. And we have been heavy users of Plone for a very, very long time as well. Currently, this is what our setup looks like, more or less. Our main website uses Castle, which is a flavor of Plone, and which I will tell a little bit more about. That is for our public-facing website. Then we also run an intranet, and we are using Quaive, which is a social intranet, and we use that for the network-wide communications. As I said, there are 300 organizations on it and we produce a lot of documents internally. To keep track of that and also to keep them safe, because we work in a high-risk environment, we get a lot of pushback from governments and from corporations, and we need a very extensive security model, and Quaive helps us do that. We also use a lot of campaign mini sites, and we tend to base them on Gatsby, some of them with the Gatsby Plone connector, some just plain Gatsby sites. We also have a few other plain vanilla Plone sites, but the Castle and the Quaive ones are the main ones that helped us during this pandemic. A little more about the garment industry. It is one of the largest industries in the world. It is also extremely damaging: it contributes about 10% of the world's pollution, it contributes more than international aviation to things like CO2, and it is accelerating at a frightening pace. There are many more garments being put into landfill than are actually being worn. It is also heavily gendered. There are an estimated 60 to 80 million workers globally; the fact that this is an estimate already tells you that the supply chains are very opaque, not very transparent, but we do know that this is the best estimate that most people come up with. Most of them are women, up to three quarters or 80%, mostly in the global south, in Asia, in Latin America, and also increasingly in Eastern Europe again. It is highly profitable, but it pays poverty wages, and it is still extremely profitable even within this global pandemic. When the pandemic hit, it hit hard. We internally describe it as the COVID catastrophe, because what happened? Garment brands have all the power in this industry. The suppliers, the factories, have no power, and the workers don't have any power whatsoever. The first thing that the big garment brands did was basically cut their orders.
That means wage loss that we calculated for garment workers in the first three months alone was about 6 billion US dollars. I remember these people have very, very little salary to start with. But also other problems really pop up. For instance, Sri Lanka, 50% of all recorded COVID cases take place in garment factories because they are crowded, they are not well ventilated, and people were forced to show up to work. We've also seen a lot of union busting under the cover of COVID, where factories use this situation to basically sack people that try to form a union. 77% of the workers in our most recent report from two weeks ago report that they are actually hungry now. I mean really hungry. That means eating rice two times a day with nothing, no vegetables, and let alone any protein. 75% of them are currently in debt to keep themselves alive. That's why we started this pay up campaign because what the big garment brands did, they immediately started cancelling the orders. Even the orders that they were already ready and were waiting to be shipped out. So immediately they cancelled about 40 billion US dollars in cancellations. So we started campaigning together with our partners and up to dates we have 16 billion recouped where the brands finally gave in and started and agreed to pay for the orders that they had already outstanding. Which is not a bad going for a bunch of activists, but as you can see we're still way short. Actually we've recouped a little bit more because this morning just the news came in that Amazon and Arkea have finally caved in and are paying up. So we're happy about that, but there's still a lot of work, roommates are done. And then we had to align our response. As I said, we are 300 organizations spread all over the globe and getting all of them aligned and doing finding strategies that work for all of us is really, really, really tough. Especially if in a situation where there are different rules on staying at home orders, staying at home orders work in Europe, if you don't have a home to stay in as some of our southern partners that doesn't work at all. But we all flocked to our internets. The number of active internet users was up by 250% before some people were not that active, especially from the global south. But now we are. We do a lot of evidence gathering, which is also we need to get the evidence out of countries in a safe way. We need to do a lot of strategizing, and we need to prep our campaigns to take on these multi-billion dollar brands. So what have been our main focus? Well, first of all, is our website, our publicizing website and our blog. Back in March, we started, we immediately saw rising visitor numbers. And we started a live blog. The live blog has become one of our main tools of information dispersal, and it's intensively read by both the government industry and by the press. We updated three times daily before we never did that. We were, yeah, a mundane NGO, and we did a couple of updates a week. Now we had to do it three times a day because so much information was coming in that needed to be brought out to light. And we immediately needed to build some campaign subsides. So that, yeah, also had to take place. And we had a lot of new editors. We normally work with a limited number of editors, but yeah, we had to get more people in. And they are spread all over the globe. They are from Hong Kong, they are from Mexico, they are from Portugal. People that just help us to keep those updates going. So we had new editors. 
We had absolutely no time for onboarding. I gave them all about a 20 minutes introduction and was like, okay, here's our front page, have fun with it. Which was scary. Also because these people have various skill levels. Some of them are quite digitally savvy, others, all let's say, are not. And so that poses a challenge and mistakes happen. But yeah, we have to keep going. Time waits for no one. But yeah, there is one thing that I do wish that we could revisit once again, back in the days of clone 3. We had undo. Nowadays, that's hidden in the ZMI. I've spent more time in the ZMI than I did in the past five years to undo some of the mistakes that some of the editors did. Like throwing away an entire folder structure with 20,000 content items in them. It's all recoverable, but it would be so much nicer and it would also give those editors much more confidence if we have a big fat undo button there again. That would be awesome. So yeah, that's my plea for, yes, to have that undo button again. Next up, what do you do if you have a live log that needs to transmit lots of information? Some of the information comes directly from us as text or email comes in. So we can put it on the website. But some is also from Twitter, from YouTube, from QQ, from various other social media. You use page composition. So yeah, we did. We are using CastleCMS that has up to now the best mosaic experience that you can find to build those composite pages, which was extremely useful for the live log. But as you do, once you have a tool, you abuse it. So as our live log grew longer and longer, we had to split it out in months. But it basically means you get up to 100 tiles per page. We even had months where we had like 500 tiles on a single page. And that poses some questions because it can take up to a minute before the whole damn thing has loaded when you press the edit button before you can actually start editing. And you need to convey that to all those editors. And then, yes, we started, of course, to embed all the things. Because soon you find out that simply having just the normal page with a few YouTubes is not enough anymore or a few Twitter feeds. We needed to connect all our stuff together and really, really quick and everything connects to everything. So just to give you an idea of what we're using and what we had to basically stitch together really, really, really fast. Of course, we start with the clone. That works better probably. Social media. As you do, as we are a worldwide organization, we have a lot more social media than the ones you probably have heard about. In Asia, they tend to use quite some others. We had to stick in air table because we had to do have an air table for various reasons. We had to have large databases that we could then also show and search live on the website. We used Flourish to make pretty graphs of those large databases. And they have to be embedded. We had to connect our intranet to only office because one of the things that you really want to have is have simultaneous editing. And most people use Google Docs. That is not a good idea if you're working in Myanmar or China or other places where the government has less than friendly intentions and you also want to keep your data slightly to yourself. So only office is a fantastic project that sort of mimics that functionality. It's not perfect, but it's good enough. But yeah, you do want to store stuff so we had to connect it to our phone site. 
We're using CiviCRM, which is a CRM system that keeps all of our information about all of our friends, all of our enemies, all of our lobby targets, all of the parliamentarians, basically everybody that we interact with. That needs to be stored somewhere as well, so that is in CiviCRM, and that also needed connecting. We use various other things, such as ResourceSpace, which is a place to store all your digital assets like pictures and videos, because we get hundreds of pictures a day coming in and you need to tag them all and make them available to writers and other news media; so you use ResourceSpace. We use Jumbo, which creates an interactive, newspaper-like publication that needs to be embedded. And we're using BigBlueButton, which is sort of like Jitsi: it's video conferencing, but on your own server, again because for some conversations we simply do not rely on Zoom, because we work in high-risk areas where we really need to know that everything we say will stay on our servers and not on somebody else's server. And we're using, basically, quite a lot. Why can't I see this? Oh yeah, Nextcloud is of course also one of our favorites, which we use to store other kinds of office documents where Plone is not the right solution, because you also want calendars and lots of other things. Let's get back to discussions about Plone. One of the things we ran into is the age-old discussion about whether content items should be folderish or not. As said, we're using CastleCMS, which is quite opinionated about that. It's opinionated about a lot of things, in a good way. For instance, each page is a Mosaic page, but also all content is folderish. There's no question: they made a choice, and we found that especially the new editors get that directly. You don't need to explain it to people; they were like, yeah, of course it's a folder, what are you talking about? So that was a good choice, but folders in and of themselves are still useful. They are useful as a mental model, just to put things in the right boxes. We like to put things in boxes, cats like to sit in boxes, and people like to put things in boxes. Folders still have a use even if everything is folderish. What we learned is that the biggest advantage you gain by having everything folderish, but also folders, is that we've had this whole discussion backwards all the time: the biggest win is that folders are page-ish. That means you put a folder somewhere, but the folder can have content of its own: HTML, listings, searches, blocks, everything. So you get rid of this whole default-view index_html kind of thing; it's just on the folder. Some other learnings, what you find when you work globally. Internationalization. PDF generation sucks. It works quite well for English, Dutch and a variety of other languages, but it doesn't work that well for a lot of other languages that we also need. The PDF generation for Nepali, for instance, always fails. PDF generation for Kannada always fails. You might say, oh, Kannada, that's a weird language. Well, it's only one of the languages of India, and it's only 57 million people that speak it, but there happen to be a lot of garment factories there, and 57 million people is more than speak Dutch. So we needed to provide translations in Kannada, and the PDF never comes out correctly. Fact of life. Colors. Colors are culturally specific. So you make green buttons for good, red buttons for bad, and nope, that doesn't work. Icons.
Exactly the same. We devise nice icons, and they make absolutely no sense in a different cultural context. So you have to really be careful about how you do it there. Another thing. Not all people have last names. There are 20 million people in Indonesia that only have a first name, and I asked, like, oh, can I see your passport? And yet they only have a first name. That's official. Binary genders. That also doesn't work. There are quite a number of countries that allow for third, fourth, or even fifth genders also legally. So you need to make sure that your systems work for that. Pain, cry, the language is a virus. It's a virus. That was Laurie Anderson. If you work internationally, you quickly learn that language indeed is a virus. So what were our main stumbling blocks? Mobile first. It's a lie. It's fake news, people. We are always told that you should design mobile first. Well, we asked our designers to do mobile first, and they did. But really have those people actually tried to work their own applications on an $80 tablet in a tropical sun, because then that is mobile first. If it works on your $1,000 iPhone in perfect conditions with perfect Wi-Fi in a Western country, great. But that's not mobile first. Mobile first means go where the users are, and my users are on an $80 tablet in the tropical sun. And then you need better things than dark gray text on a light gray background, because it's completely unreadable. We need secure alternatives for G Suite, Zoom, etc. And as I said, Big Blue Button is great. Only Office is great, but they could very well be improved. And one of the things I cannot simply explain the difference between a word file on an internet and a page. People just don't get it. For them, they're the same. They're exactly the same. You cannot explain the concept to somebody who has never worked a computer before by one thing is something different than the other. And that's not their fault. That's our fault for not making that easier. But overall, I do want to say thank you to Plone, because in the end, it has been a lifesaver for us. 300 organizations up against an overwhelming adversary in the form of this virus, but mostly also in the form of really difficult conditions. Plone has held up to a lot of stress, a lot of difficulty, a lot of internet dropouts. It creaked sometimes and that drove us slightly mad. But somehow it works. We have managed to connect our network of activists globally as good as bad as it goes. You have some good days and some bad days, but overall, it works. We overcame all the barriers of distance, language, culture, gender, you name it, and kept on working. We still gather evidence from risky areas, bringing them out of the country, making sure it's end to end encrypted. We can devise strategies and basically, clean clothes, we're taking on a multibillion-dollar industry who are vastly, are superior in terms of numbers of money that they have, of lawyers that they can throw at us. But we are basically booking campaign wins. We're countering recognition. We're really sitting at the table. They're listening to us. Most of all, our network has remained operational and we're more determined than ever. I would say that is a big win for our use of clone. It has been better tested. While running on clone and this whole array of other free software, you can actually work quite effectively. Basically, I want to tell people also at this conference, know that we're putting your work to good use. There are others who are also doing. 
I recommend that you look at the presentation that Kareel Yusof gave yesterday about using clone as an anti-corruption tool. That was also quite impressive. But overall, I would say a big thank you to everybody that has helped to develop clone, that still keeps developing clone. Know that we're using it. We've hardly ever paid you. But we do put it to good use. So all I can say is keep up the good fight. We are doing our bit. Thank you all very much.
Clean Clothes Campaign is a non-profit dedicated to fighting for garment workers' rights. As the pandemic struck, the garment supply chains were particularly hard hit. As heavy Plone users (in all its various incarnations, from Quaive to Castle to Volto) we had to deal with a rapidly changing environment: four times more users on our intranet, many of whom are not very tech-savvy and rely on mobile phones as their sole computing device, and a massive increase in both use of and visitors to our public websites. Plus an increase in part-time editors, from many countries and with a wide range of skills, but mostly novices. This talk will take a user-centered view: what worked, what problems came up, how did onboarding go, how did the new editors find their way around? Can we find pointers here on long-standing issues like "folderish" vs "non-folderish"?
10.5446/54746 (DOI)
Hello. Today I'll be speaking about the Green Party maps. At first it looks like a map of the United States Green Party, but to relay it's a hierarchical object model of the entire organization. My name is Christopher Losinski. I live in Katowice, Poland, and you can find the slides at forestwiki.com.slashslides. That's forestwiki.com.slashslides. So let us begin. What are we going to talk about today? First we're going to discuss why use a hierarchical model. Then I'll do a demo of the Green Party maps application. Once you understand the application, we can talk about the details of the hierarchical model. We'll talk about the user's experience, a little bit about the data, and some concluding remarks. So why do you want to use a hierarchical model? Well, the basic principle in human factors is there should be no more than about seven items in the category. So typically people use org charts, they're hierarchies. But it's not just org charts. The largest hierarchical data set I know of are the awesome lists on GitHub. So here we have an image from Awesome Pone. When you organize them as a tree, it's just easy to find what you're looking for. It's easy to understand them. And the other piece that's newer is JSON schema. I'm sure you're all familiar with Zop.schema. JSON schema does something very similar, but it's, first of all, it's in JSON. And secondly, it's a tree. And so it's very good for modeling complexity. So on the left-hand side, we have the JSON schema. And then we use that to automatically generate a GUI and validate the data on the client. You can then submit that data to the server. And from that, you can validate on the server just to make sure there's no funny stuff going on and then update the database. And in particular, for people, we use JSON schema for modeling people and different individuals will see different branches of the tree. We'll get into this in quite a bit of detail. But first, let's go take a look at the application. Here we have the United States Green Party Map logo on the upper left, a bunch of options. You can watch the introductory video. It's organized as a tree. You can think of it as a file system, but we all know it's on ZODB. You can click into it before we do that. You can see all the contact information. On the map, we show all the United States local parties and green parties. We can also, we also have lists of just the national candidates, meaning the congressional candidates. Okay, Zoom going on in. Let's zoom into California. So California is still showing all of the November candidates. And here, California also has a map. You can see this very nice search filter for which organizations you want to show, which politicians you want to show. And they have the contact information. They also have lists of politicians and candidates. Let's drill into this map further. We're going to go into Santa Clara. And here we have Jake Tonkel. So Green Party of Santa Clara. So at the county level, you're only seeing the candidates you can vote for. So you see different things at different levels of the tree. California, we saw all the candidates for California. At the county, only the ones you can vote for, they could even endorse independence. And they would show up at this level, not at the higher levels. Again, you have the contact information for Santa Clara. You can see you can drill down politicians can't parties in Santa Clara. Let's drill into Jake Tonkel. Now, what Jake Tonkel has done for his logo is he's replaced over this image. 
And then we have acquisition of logos. So all the tree, all the pages under this branch of the tree all get his image. Very useful. Again, contact information for him. We can go down and he has, first of all, he has a virtual meeting. And then he has all this other content here. I'm not going to, oh, we can expand a little bit if you want, lots of content. And all of that is shared content. So basically what we do is we take one branch of the tree and using traversal and proxy objects, we can show that in every politician. So every politician has rich shared information. Okay, so let's go into his online into his virtual meeting. So here we have another content type, which is a virtual meeting. And he has two videos there, which he's planning on showing. And we can click into his YouTube video, which is yet another content type. And we can see it doesn't have much information, not much description. Maybe we want to add some description. So this is a content management system very much like blown, we get these wissy week editors, you can paste it, do bold, italics, strike through, save and view it. And there it is. But whoops, this is the production website, we really don't want to have it there. And again, this is the ODB. So what we can do is look at the history and restore it. If you had deleted something, you could also use transactions to recover the deleted. And now it's all gone. Perfect. Okay, so let's see where we are on the tree. We're seven levels deep. Here we have the Howie Hawkins map server, Green Party US map, Green Party of California, Green Party of Santa Clara, Jake Tonkle, Team Tonkle virtual meetup and Jake for District 6 video, we're seven levels deep in the tree. And yet you just have an intuitive understanding you know where you are. For seven levels deep at seven items in every category, that's like five million items, huge complexity, nobody's lost. The problem is with security that there can be a lot of people entering data here and they can step on each other's toes. So for example, for the Jake Tonkle virtual meetup, Jake Tonkle may be too busy to manage it. And so he can assign somebody else to be the editor and they inherit security for this whole branch of the tree. Remember, this is done based on pyramids, views on objects rather than zoca to permissions on methods. And so security is a lot simpler here. Lots of use and implement. Okay, so that's Jake team Jake Tonkle. That's his meetup. Let's take a look at Jake Tonkle himself. Let's actually go ahead and edit him. So here we have the JSON schema, you can see different branches. Introduction, this is what actually goes on the Jake Tonkle object itself. It has a child, translatable content. This could be English or you could if you want to do Spanish version, you can add it. Connect. This is where you find all the social media. You can also choose which social media you want to track. And then there are several other branches that remember people and politicians can be very complex. So they can either be a candidate, or they can be an elected official or they can be a party officer, or they could have been those in the past. So we also have a branch for history. And so when a candidate loses or wins, the candidate focus move to the history. When the elected official is over, that gets moved to the history. Same thing with party officers. This is a very complicated tree of well-defined JSON objects. Okay. Going back up so view. So here you have Jake Tonkle, Santa Clara County. 
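The JSON-schema-driven editing described a moment ago — one schema per politician, split into branches that each become a panel of the edit form, with the GUI and client-side validation generated from it — can be illustrated with a small, made-up example. This is not the project's actual schema, just a hedged sketch of the idea:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Politician",
  "type": "object",
  "properties": {
    "introduction": {
      "type": "object",
      "properties": {
        "name": {"type": "string"},
        "biography": {"type": "string"}
      },
      "required": ["name"]
    },
    "connect": {
      "type": "object",
      "properties": {
        "twitter": {"type": "string", "format": "uri"},
        "facebook": {"type": "string", "format": "uri"}
      }
    },
    "candidate": {
      "type": "object",
      "properties": {
        "office": {"type": "string"},
        "election_date": {"type": "string", "format": "date"}
      }
    }
  }
}
```

Each top-level property ("introduction", "connect", "candidate", plus in the real application branches like "elected official", "party officer" and "history") maps to one tab of the form, and different users can be shown different branches.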
When you by the time you get up to California, California is a big problem because they have to manage all of these candidates. And they have to manage all of the local parties. And so what they do is they have a management view. Looks remarkably like a ZMI. So like a file browser here, you can actually see the object class, the class of this object. It allows you to manually sort. I better put them back. They're not seen will be upset at me. You can sort the order of things. It's JavaScript enabled and you can copy, paste, rename, retitle, edit. Very basic ZMI. Going back up to level one more level of the tree. So two more things. One is people may be interested in seeing what's new, what's changed in the green party either at the national level or state level. So there's a news option. And here you can see the 10 most recent items that have been added. And if you're actually, that's very efficiently computed. If you're actually the administrator, you want to see if anybody's made any edits also. So we have another progress view. And this shows you any ads or edits. And you can click in and view them. Also, when people submit things anonymously, you have to approve them. So then you get a red flag. Okay. So that's the application. I'll show you one. Well, that's the application. Let me now take you back to presentation. So just remember, we showed you national maps and state maps. They had contact info links. So that's the directory news sites for any branch on the tree recommended voting lists for the local counties. It's a contact management system. I didn't show you the discord box. And I haven't shown you the instant backup websites yet. Get to that in a minute. But now that you understand the application, let's take a look at the details of the HACO model. So a lot of these ideas come from blown. First of all, we traverse to an object. Nothing new there. Root slash map slash map slash California. But the problem is, if you rearrange the structure of this tree, maybe you add one local party under county, you have city, you rearrange things. Then you change the URL and the end user can no longer find you with the URL. And so we also have something called canonical URLs. So at the root slash or root, one level down map, two levels down the way of California and three levels down the way of slash Santa Clara. And so there are two ways of accessing Santa Clara. You can either access it by traversal or from the root. You can use canonical URLs to jump straight to. And which makes this into a graph database. And so we tell everyone it's a hierarchical model. Really, it's a graph model. Just primarily, we hide that. There are a number of other reasons why it's a graph model. The other piece we use is Pyramids views on objects. So in the upper left hand corner, we have the map view. Next, we have the edit view. After that, we have the news view. And then we have the supervisory view I showed you a minute ago. We also have a lot of views for manipulating the tree. So on this left column, we have all the ones for manipulating the tree. And on the right one, there's a whole bunch of different content types that we can add. The management that the operators of the system can add. Okay, now let's take a more detailed look at the hierarchy. So at the root, we have a root object. Didn't really look at that much. One level down, we have the green part of the United States. Two levels down, we have all the state parties. And three levels down, we have local parties. 
Actually, you might have like New York City. And then within that, there are additional three different parties. Okay, going back up to the green part of the United States, that's where we put the president and the vice presidential candidates because everyone in the country can vote for them. At the state level, ideally, we put the governors and senators because everyone in the state can vote for them. And then at the local levels, we're going to have all the other candidates. So here's the basic tree. But remember, they're politicians hanging off different parts of this tree. And then off politicians, we have the JSON schema and they can have events and videos. So it's a really complex search tree, but this is just the backbone of the tree. And so here, remember, I was just saying about the JSON schema, adding on the politicians here, you can see the connect branch for all the social media and other links to the candidate. A very complex hierarchy, but hierarchy works great. So let's talk about the instant and backup websites. It's a tree. It's really easy to hide all but a small branch of the tree. And then you can create a separate website just for that person. So here we have the branch for Howie Hawkins. And actually, during the US presidential first disastrous election, so embarrassing, the website went down and it was actually down for two days. But so in just a few minutes, it was possible to create a backup website for Howie Hawkins. But to do all of this, I mean, in some sense, it looks really simple, but there's a lot more complexity underneath the hood. So I'll show you just half of the some of the content types. So like any content management system, you have to have pages with WYSIWYG editor, markdown pages, particularly for the data scientists, links, YouTube videos, online events, online organizing, we didn't see those logos, banners, files, basic content management type stuff. There's also a whole bunch of map types. So there's Google Maps, but I worried about Google Track and everybody. So I went to OpenStreetMaps. I use Mapbox. I suspect they track everybody also. They're also GeoJSON maps. So these are like the states or California used to show the GeoJSON boundaries of a state. Organizations, these would be parties, local parties, caucuses, politicians can either be candidates, elected officials, party officers. And then map organizations are things like California, the United States, which are both party, but they're also a map. Okay, so those are the application specific ones. I'm not going to talk much about the hunger line skin and presentation classes. Let's now switch to the users. The data is hand curated. Okay, so here's our data curation team over on Basecamp from 18 people. I believe maybe 19 now. Actually, at first, I thought the most of the data entry was done by four or five people. It turns out one guy did 70%. And we did crawl data wherever possible, but even when we crawled data, the data that we crawled had also been hand entered. So it's a very manual curation process, leads to very low noise. It's very useful. It's a directory, any node on the tree for an organization or politician has all of their contact information, including events paid link to the events page, maybe a donation page, the social media. And the data is really loved. I asked my boss what he thought about it. He said, I'm ecstatic. I asked Holly Hart, very reserved head of the CCC Co-Chair, coordinated campaign committee. She said the CCC really likes this. 
And lots of other people love what we're doing. And people are really upset if it's wrong. So we added a candidate to a particular state, and that state, a party, did not like that candidate. And so, oh boy, did we hear from them, rather nasty messages. Okay, no problem. We removed the candidate. And that candidate was even more upset that we'd removed her. So people really care about this data. And of course, it's very labor intensive. The big problem with hand curated data is finding volunteers. So the awesome list managed to find volunteers. We managed to find volunteers. We managed to find enough volunteers. Either that, it can generate lots of good jobs for people. The data entry has to be managed. So the project manager made a list of all the states and as each state was completed, he checked it off. And then when one of the states had an error, we reported it, we unchecked that state and the appropriate person went and fixed the data. So it has to be managed both at that high level. And a detail level, we have this management view of which items have been changed more recently, just so we can audit and keep an eye on everything. Okay, now let's talk, let's about the end, we finished speaking about the user experience. Now let's talk about the data. So the data is low noise. First of all, it's hand curated. So that gets rid of a lot of errors. Then the users give feedback. They really care. And if there's a problem, they tell us about it. And then there are a number of things we can do to improve the signal to noise ratio. So here we have both the California and the New York maps. So those states can just link to those maps. And either on the state maps or national maps, we can either sell candidates or parties. Actually, we have a lot more filter options now. And so you can get a view of the map that just shows the data that somebody needs. So very low signal, very good signal to noise ratio. And we have this new filter option so that there's a long list of things so you can filter down exactly to what people want to see. It's very low noise. It's also very small. So not counting the images, the whole thing fits on 10.5 megabytes. And so I don't know if you remember the flopticles, maybe a lot of people don't never even use one. It'll fit on half of a flopticle. That's not counting the images. And so it's really fast for those who are not from the S. This is Roadrunner. Always too fast for a while, they require you to catch her. And why is it fast? Well, it's so little data, you just cache everything in memory. And on top of that, we have caching web server so that the anonymous user just gets instantly served web pages from very fast. And of course, because it's so small, you don't need much of a server. So even during the elections, we only had a four gigabyte server, maybe two CPUs, 80 gigabytes, $20 a month, either on line node or on digital ocean. If you're going to be efficient, you also want the code to be small. So for not counting the third party libraries, which, you know, all shrink wrapped, just downloaded from GitHub, there are only 12,000 lines of Python code plus some JavaScript. So massive code reuse leads to very small code. And so then you only need one developer. And remember, there are six different applications in here, national and state maps. There's a directory, instant backup, websites, news, recommended coding list. That's a whole content management system and a discord bot. That was in Ruby, so we're not counting that. Okay, some concluding remarks. 
Before the map, the green party never really knew what the strongest states are. Now you look at it, clearly there's a big gap in the middle of the country. We call that the red states. I can't help it. I met one of those red state guys. That's another story. The state maps are very useful. My boss's boss, he's the, he is the co-chair of the Illinois Green Party. And he said until he used the maps, he never really understood the state party. We also did some analysis of the most active state parties. No big news there. Let me spend a little bit of time speaking about the software tools. I have 10 minutes left here. So first of all, we all use a file system. File systems were initial, hierarchical file systems were initially used in Maltics in 1965. Actually, I used Maltics. And since then, actually, the functionality has reduced in Maltics with Linux. But it's so much nicer to store stuff in an object database like the ZODB because a file system just has files and directories. And files don't have children. In directories, they don't have attributes. They don't have methods. You can't send them a message. So let's take a look at the best example of us, the B3 images. So in the ZODB, the B3 images, they have children, which are the thumbnails. So if you go to slash social media image, you'll get the map of the entire country. But if you need a thumbnail, you just go to social media image slash 400 wide slash 200 w or slash 100 w, and you get the appropriate size thumbnail. And you don't have to store these ahead of time. What the ZODB will do is if they don't exist, it'll generate it for you at runtime. And so that's just great because I'm not a graphics designer. I never know why these things are supposed to be. I just plug in the size I need, and it gives it to me. Perfect. JSON schema is typically JSON as a single file. But if you actually look at it, let's take a look at it. If you take a look at the JSON schema definition, it is this enormous thing. It's because people are complex. And all of this information is needed. All of it's very efficient. All of it generates the user interface. But to edit this file, it was just a nightmare. And so what I did is I broke it up into a JSON folder. So here we have the top level of the JSON tree. It doesn't show any of the attributes. It doesn't show any of the children. Where are the children? Well, it's in a ZODB. They're in the child objects. So here you can see all of those different, each of these basically corresponds to a panel on the user interface. Introduction actually is for the object itself. Content is either the English or the Spanish, the editable content. Contact are all the social media links. You only get it if you're a current candidate. Elected official only get it if you're an elected official. Party officer, you only get it if you're a party officer. And so in each of these, you can click into an edit too. So each of these are much more reasonably sized, easy to edit this stuff. So by not using files, but by using a JSON folder, it made the whole JSON schema stuff really easy to do. I'm actually going to start offering classes in JSON schema. Okay, going back to... Okay, so that's JSON schema. I have a few more minutes. So let me show you one more thing. The other thing that's really nice is pug. So HTML is also a tree. And so pug is the leading templating language in node.js. And here you have what you do is you use the indentation to define the structure of your HTML. 
And then it generates your HTML and here you can render it. And so there's a very nice pug editor. And if you have a syntax error, because indentation isn't consistent like in Python, it gives you a flag right away. So back to my final slide. Thank you very much. If you have any questions, please contact me. My name is Christopher Lozinski. You can reach me on Twitter, Python links. If you want to visit the map, you can go to map.howie2020.tech. And the map was built on top of the forest wiki. So you can go to forestwiki.com or forestwiki.com to see the slides. Thank you very much.
Green Party Maps is a hierarchical data model of the United States Green Party. The software is currently running on the website of the United States Green Party presidential candidate Howie Hawkins. It includes both national and state party maps. The maps show candidates, parties and caucuses. It is also a directory linking to all relevant social media pages. Any branch of the tree can be turned into its own instant website by hiding the rest of the tree. The software enables states to generate recommended voting lists. It is spreading through the US Green Party. Green Party Maps is built on top of the Forest Wiki, which is built on top of the ZODB.
10.5446/54747 (DOI)
Welcome everybody to track two of day two of the Plone conference. We are getting started here with Philip's talk on growing pains. Philip, I'll let you take it away. — Hey, thank you for having me, and it's a pleasure to be at this amazing online conference. I want to talk to you about growing pains. I'm not talking about myself, even though I might have experienced those many years ago; this is about the issues that you face when your project grows in all kinds of ways: the code base grows, the database grows, and the problems grow with that. Let's start. I'll cover a couple of symptoms, six altogether, and I'll try to discuss the causes and some of the remedies that you might use to fix or heal these symptoms. Symptom number one is a huge database. Cause number one is, obviously, what most of you probably guess: a huge number of revisions and versions. No, it's not content — it's actually only revisions, and the main remedy for that is to just get rid of them all. The good thing is that most clients, when you ask them "can we upgrade to Plone whatever-new-version, and then you will lose all the revisions", after some deliberation mostly say yes. And voilà, your database shrinks from 80 gigabytes to 10 gigabytes, for example. There are some handy lines of code that you can use to just purge the whole history storage, and after you pack your database all the old versions will be gone. Why is this a good idea? Because, for various reasons, there can be a lot of revisions of content that doesn't even exist anymore; there are ways this just happens during normal operations. If you don't want to get rid of all the revisions, you can still clean up your database using collective.revisionmanager, a handy add-on that lets you inspect the number of revisions per object in a nice table and remove or purge revisions, except for the last one, two, three, four or five versions, whatever you choose. Also, a good idea is to disable versioning of files. You actually don't have to disable it — it's off by default — but if someone has enabled it, switching it off again is a good idea. One client of ours had a cron job running every night that imported some Excel files, I guess, into the Plone site, and they were wondering why their database was so big. So there was your answer. You can also change the versioning policy to not create a revision on every save; that is usually a good idea. If you already have this problem, you can just switch that policy, and then users have to check a box if they actually want to save their change as a new revision, so you only get revisions when major changes happen. But you'll have to educate your editors for that. The next cause is that you simply forgot to switch on packing. Just do that: you need the zeopack script that is part of the ZEO server setup, and you definitely should add a cron job for it — probably not nightly, but weekly is a good idea.
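The transcript does not include the "handy lines of code" for purging the whole history storage, so as a reader's aid here is a hedged sketch of the kind of snippet meant here. The portal_historiesstorage tool is standard CMFEditions, but the internal attribute names are implementation details that can differ between versions — test this on a copy of your database first. For the packing cron job, a weekly crontab entry along the lines of `0 3 * * 0 /path/to/buildout/bin/zeopack` (path and schedule are placeholders) is the usual approach.

```python
# Hedged sketch: purge all CMFEditions revisions, then pack the database.
# The internal attribute names are CMFEditions implementation details and
# may differ between versions -- verify against your installation and run
# this on a copy of your database first.
from plone import api
import transaction

histories = api.portal.get_tool("portal_historiesstorage")

# Drop the version-control repository that holds all old revisions.
histories.zvc_repo._histories.clear()
# Drop the shadow storage that maps objects to their revision metadata.
histories._shadowStorage._storage.clear()

transaction.commit()
# Afterwards run a pack (for example the bin/zeopack cron job mentioned
# above) so the freed space is actually released from the filestorage.
```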
Now, the one you probably guessed first — unused content — actually doesn't make up that many gigabytes in my experience, but there is still a good chance you have a lot of it. Obviously you just need to delete it; the hard thing is to find it first. There is a handy example browser view called statistics in collective.migrationhelpers that tells you, I don't know, there are 500 events from 2005, 600 events from 2007, and it shows you in which folders most of the content is, so you have at least some pointers to go and find and delete things. Looking by hand does not always help, because some smart editors might have disabled the inheritance of permissions and you may not actually be able to see those items. Cause number four is the SearchableText index. That can be quite a big object if a lot of data is actually indexed, so one option is to remove the SearchableText index and replace it with a Solr or Elasticsearch integration; we often use collective.solr for that. Actually, I never removed the SearchableText index because it was not that big in our cases, but I've heard of cases where that is absolutely helpful. Also, you don't have to index all your files: if you have converters for all these file types, then all these file types will also be converted into text, and you probably don't need all of that. Okay, moving on. You can have large blobs. Obviously, if people upload big files, this will blow up your database. Just an example: a website that the Plone community of Germany set up had an ISO file for a Plone installation that contained a full Linux. That was pretty big. And another client of ours had Windows installation files, also ISO files, just uploaded into their Plone site. That doesn't really make sense, so don't do that. A good way to prevent it is to limit the upload size of files, and to get statistics so you can remove or replace too-large items — if you have the right statistics. Okay. Last but not least — but that is very rare — aborted uploads, because Plone has some smart logic to keep uploads that you abort in an annotation on the portal. Just check the annotations on the portal for the file upload map. Okay, let's go a bit faster, because the really interesting things come further towards the end. Symptom two is a slow site. It happened to me, so maybe it happened to you as well. There can be unneeded full renders of content. The most common cause is to call an object in Python, or to call an object in a template, which is much easier to do by accident: you just have to write tal:define="foo context/foo", where context is a folder and foo is a page in it. This will render the full foo object and assign the result to the variable foo, even though you just wanted to see whether it exists or not. So in a path expression this needs nocall: — or, even better, just use python: expressions, please, so you make sure that you're not calling foo. Please don't wake up too many objects; this makes your site slow. Always try to use brains and metadata. The difference is huge: even with dexterity, listing 3000 brains in my last project took me 0.2 seconds, while listing the 3000 objects behind those brains — waking all of them up — took two seconds, and that is not an acceptable operation for one single page load. The same happens in Volto when you use the search endpoint with fullobjects, because that wakes up the objects. Obviously, most views that are in Plone itself are paginated, so this doesn't happen, or it shouldn't happen unless you write bad code. But still: just use brains. They're so much tastier.
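To make the brains-versus-objects advice concrete, here is a minimal sketch (mine, not from the slides; the portal_type and sort index are just examples) of the cheap pattern versus the expensive one:

```python
# Minimal sketch of "use brains and metadata" vs. waking up objects.
from plone import api

brains = api.content.find(portal_type="Event", sort_on="sortable_title")
for brain in brains:
    # Cheap: catalog metadata only, no object is loaded from the ZODB.
    print(brain.Title, brain.getURL())

    # Expensive: this wakes up the full content object -- avoid doing it
    # for thousands of results in one page load.
    # obj = brain.getObject()
```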
The third cause is: no caching. I actually wondered why a couple of my clients' sites were slow, and I realized that for some reason caching was not enabled — either I forgot it, or someone else switched it off, or whatever. So just switch on the built-in caching; it already gives you bang for zero bucks. If you want more, add Varnish; there's really good documentation for that. Manage your ZEO cache — that is more of a science, and I'm not going to get into it: how many objects do you have in your database, how many objects do you want in your ZEO cache? There is a magic number there; ask in the community forum. And in your methods, use memoize if you have stuff that is called very often, by different requests or during the same request. Speaking of the same request — no, this is a different cause: hardware. Please, if it's slow, just throw some more hardware at it. That is an obvious solution, but mostly the one that doesn't give you a lot of results — even though your consulting time is maybe way more expensive than just buying eight additional gigabytes of memory. If you have enough memory to keep your database in RAM, that is a good thing. Now, slow code — that is really tough, but there are nice helpers. Obviously you need to profile your code; reading your code and understanding what makes it slow would be best, but if you don't see it at first glance, or even at the tenth, profiling is your solution. A very handy toy for that is py-spy. You just run it, pass it the PID, the process ID, and you can start and stop it again. It runs in Python 3, but it can even profile code that is running in Python 2, and it's really handy: if you have one browser view and you're wondering why it takes so long, just start py-spy, render that browser view, and it'll probably tell you why it's so slow. Also, obviously, don't call methods multiple times from templates; assign variables instead. You may also have slow data sources. The Internet is obviously much slower than your database itself, so you should decouple the important stuff using Redis or Celery — your choice of async implementation — or something as basic as lazy loading images if they come from outside, or even if they come from Plone, to speed up your site. Okay, getting fancier: conflict errors. Conflict errors happen when two requests work at the same time and one changes an object while the other changes the same object but expects the state from the previous situation. It is complicated, but it is also simple, because there is built-in conflict resolution in the ZODB. But I see many buildouts — including ours, bad me — that by default didn't enable it: the ZEO server does the conflict resolution, and it needs to have all the application code available to actually be able to do that; otherwise your transaction will be aborted and your data will be lost. Another cause for certain conflict errors can be long-running requests that change data. Everything that takes long would ideally not write to the database at all — then you have no chance of a conflict error. If you do something that takes long, you should do intermediate commits. Everything that takes very long is mostly something administrative, like importing data, so it doesn't really hurt to have more intermediate commits.
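As an illustration of the intermediate-commit advice for long administrative jobs (this is my sketch, not code from the talk; the "imports" folder and the Document type are assumptions):

```python
# Hedged sketch: intermediate commits in a long-running import, so each
# transaction stays small and the ConflictError risk goes down.
import transaction
from plone import api


def import_rows(rows):
    portal = api.portal.get()
    container = portal["imports"]  # assumed target folder
    for index, row in enumerate(rows, start=1):
        api.content.create(
            container=container,
            type="Document",
            id=row["id"],
            title=row["title"],
        )
        if index % 100 == 0:
            # Commit every 100 items instead of once at the very end.
            transaction.commit()
    transaction.commit()
```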
You should prevent crossfire from external data sources, for example cron jobs, or just switch off editing. Obviously, for normal operations it is not an option to disable editing — and again, your choice of async operations; talk to us about that, probably. Okay, getting fancier by the minute: POSKeyErrors. They're actually really simple. The ones that show up are mostly "blob missing" errors — I think that's a subclass of it: a blob is missing. Obviously you forgot to copy all the blobs during some rsync operation. You can use experimental.gracefulblobmissing if you still want to go ahead; it'll create dummy files for all blobs that it is trying to touch. You can also simply write a simple browser view that goes through all the content, checks if there is a file field, tries to access the file and the blob, and if there's an error just logs it, and then you can delete that item. But yeah, they're not that interesting. There can also be cases where you have ZEO clients and the syncing doesn't work well; that is really tough, talk to Alessandro about that. And now for the really interesting part: ModuleNotFoundError, and everything that comes from that. I could simply read you the whole blog post that I wrote about it, but I only have half an hour left and that would probably take one and a half hours, so I'm not going to do that. So: the code to unpickle some data is missing in your database. There are a couple of things you can do. You can either ignore these errors; that is obviously an option, for example if the lifespan of the project is not that long and normal operation just works, even if packing fails. If your database will be deleted by the end of the year anyway, those few extra gigabytes will probably be much cheaper than the time you spend removing the broken objects or cleaning them up. So ignoring these errors, as long as the page still works and has a limited lifespan, is totally acceptable. Or maybe a migration is planned anyway to a new version, and you're planning to do an export/import migration, or you do a migration in place, which also fixes many of these issues because you will encounter them then, and the time you spend will be absorbed in the budget for the migration, probably. So the next thing you can do is a migration. Option two is to fix these issues with a rename dict; I'll show an example for that. Option three is to work around with an alias_module patch. And option four is the most interesting one: actually finding out what is broken, why it is broken, and trying to fix it. So, option two — obviously there's no slide for ignoring, because I just ignored that slide — is the zodbupdate rename dict, whatever you call it. Here's an example for that. I think it's mostly stolen from zodbupdate, or from Zope itself, which has a couple of these. This is basically an entry point in your setup.py where you map an old code location to a new code location. And the good thing is the new code location can just be an interface, like in the second entry here — can I see my mouse? whatever — the second one; a sketch of what such a mapping looks like is shown below.
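The slide with the actual mapping isn't reproduced in the transcript, so here is a hedged reconstruction of what such a zodbupdate rename mapping typically looks like. The package name, dotted paths and class names are illustrative; the entry-point wiring follows the zodbupdate documentation, but double-check it against the version you install.

```python
# setup.py of your policy package -- hedged sketch
from setuptools import setup

setup(
    name="my.policy",
    # ...
    entry_points={
        "zodbupdate": [
            "renames = my.policy.zodbupdate:rename_dict",
        ],
    },
)
```

```python
# my/policy/zodbupdate.py -- hedged sketch
# Keys and values are "<module path> <class name>".  Mapping a vanished
# marker interface to zope.interface's plain Interface effectively
# neutralises it, so the old pickles can be loaded and rewritten again.
rename_dict = {
    "collective.oldaddon.interfaces ISomethingGone":
        "zope.interface Interface",
    "collective.oldaddon.content OldDocument":
        "plone.dexterity.content Item",
}
```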
So it will replace every instance of IPersistentExtra that was imported from App.interfaces with a plain zope.interface Interface, which basically is nothing — which is not true, but it's not that much. So it will work again. See zodbupdate for a couple of examples that are actually really useful. Option three is to work around with an alias_module patch. alias_module is something David Glick, I think, came up with — pretty nifty. In your __init__.py you can basically patch your Python path so that, for example, the solr index processor interface from collective.solr.interfaces no longer throws an ImportError or ModuleNotFoundError or an AttributeError, depending on how it's used; it'll just be nothing, because it's an empty object, a BBB object. And the same example with a slideshow descriptor and a SimpleItem. The important thing is that you need to use the right alias: it's either an interface, a SimpleItem or an object; in rare cases there are more complex things. Check plone.app.upgrade: that module has an __init__ file with a ton of these, and they are run every time you start up Plone. And now the important one: finding out what is what and where the broken objects are, and then fixing them. You should use zodbverify to find everything that is actually broken. Pick one at a time, don't do all of them at the same time, and then use zodbverify to inspect that one object, find out where it is, and then remove or fix it. I'll do a live demo. So, this is a client project. It's an older version of the database, so it still has errors. What you should use first is zodbverify; it's a Python package, you can put it in your buildout, and you have to use the checkout from pull request 8 — it's still not finished, I just don't have time, but it works fine — and run it against your database. I will do that just now. I pass the database name to it. It scans the database for everything that cannot be unpickled, it logs a lot of error messages, and at the end of those error messages it shows you a summary of which errors appear, how often, and for which objects. It'll take another second until it's finished. It should be done any time now. Sharing your screen during a presentation takes some... yeah, we're done. Okay, here. This is the final report: zodbverify done, 145,000 objects, 45 of which could not be loaded. And here are the exceptions — this is basically the result — and now you need to pick one of them. And, very important: you will forget what you just did, because you're excited, well, because you're debugging at the moment. So make notes and write upgrade steps. Don't just hack the fixes away, and store your terminal log in a safe place. I forget what I did all the time. Maybe it's the age, but I'm pretty sure the same happens to you. So pick one of these issues. Let's pick one. Okay, let's see, 45 are broken. What do we have here? Oh, I know that one: fourdigits. Okay, let's check for that fourdigits one. Let's pick one. What you do now is you ask zodbverify to please inspect this object, and you should also pass the -D debug flag, and just press enter and wait for zodbverify to do its magic.
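For reference, the zodbverify invocations used in this demo look roughly like this — the file path and OID are placeholders, and the flags are those documented in the zodbverify README, so check them against the release you install. The instance-script variant shown last is the one he switches to a bit later.

```shell
# Scan a whole filestorage for objects that cannot be unpickled:
bin/zodbverify -f var/filestorage/Data.fs

# Inspect a single broken object by OID and drop into a debug session:
bin/zodbverify -f var/filestorage/Data.fs -o 0x2e1f9d -D

# The same inspection run as an instance script, so the Zope app and the
# Plone site are available in the debugger:
bin/instance zodbverify -o 0x2e1f9d -D
```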
So now it has loaded my object; it is obj. It is obviously something in the Plone registry, an integer field. And the interesting thing — and that's not really a problem with the ZODB, it's the architecture — is that we don't know the name of that object, because, well, it's an object somewhere in the Plone site, an instance of a plone.registry integer field. So we guess it's in the registry, but we don't know where it is used and under which name. So we say continue, and then we get a representation of the pickle, and we can look at that. This is pretty fancy, and it will give you some hints. For example, we can see: okay, this is from fourdigits.portlet.twitter. I've seen that a couple of times, because it probably did not uninstall properly in projects. So, let's skip ahead here. First you inspect the object. Then you get the error message, which is a ModuleNotFoundError during unpickling. Then you get the pickle — in some cases it won't work to display the pickle — and then you should find out where that object is, which means you should find out where it's actually referenced. When you press continue, zodbverify does exactly that: it creates a reference tree for reverse lookup. It's a huge dictionary, and it tells you where this object that we're looking for, this plone.registry field, is referenced. It tells me: okay, this is referenced by a bucket, and the bucket is referenced by a tree, and if it has a name it shows the name as well — in this case it doesn't have a name, at least none that it could find. The tree is referenced by the registry, I guess, and the registry is referenced by portal_registry of the Plone site. So here you see it even finds out what the name of the object is; obviously these buckets have no names that the script could find, but we can see where we're going with this. Now you can decide between various approaches, but I will do something special: zodbverify can be invoked in various ways. The obvious one is that you run bin/zodbverify and pass it the database, but you can also run it as an instance script, so bin/instance zodbverify, and again the object ID — where is my object, here is my object ID — and the -D flag. Now the same thing happens as before, zodbverify is loading the object, but in this case it is loading the object inside a running Plone instance. So I actually have a Plone instance available — the startup looks a bit different, you might see — and after a while, yes, you see the object is the same as before, but app is no longer None; it is now the Zope app, so app.Plone is my Plone site and app.Plone.portal_registry is my registry. So now, instead of trying to dig through those trees, I know how the registry works. Let's assign a variable for the registry records and inspect it: a list comprehension over records.keys() for keys that contain "fourdigits" — let's see if that works. Yeah, here we are. There are obviously settings from that darned portlet that was installed in that site. Let's use that as a variable — these are the keys, and we can access the records with the keys — and now we can loop over these keys and delete the record for each of them. I'd say that would work. Yeah.
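The commands typed in the debug prompt are hard to reconstruct exactly from the recording, so here is a hedged, cleaned-up version of what that cleanup amounts to. The "fourdigits" prefix and the site id "Plone" are the values from this particular demo — substitute whatever your own zodbverify run points at, and try it on a copy of the database first.

```python
# Hedged, cleaned-up version of the debug-session cleanup from the demo
# (run via something like: bin/instance debug, where `app` is available).
import transaction

records = app.Plone.portal_registry.records

# Registry records left behind by the badly uninstalled add-on.
stale_keys = [key for key in records.keys() if "fourdigits" in key]

for key in stale_keys:
    # Deleting the record drops the reference to the broken persistent
    # field object; packing the database later removes the object itself.
    del records[key]

# There is no request/transaction machinery running for us here, so
# commit explicitly.
transaction.commit()
```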
So now I actually deleted these objects — well, the references — but I'm not interacting with a running request, so there is no transaction; I have to do a transaction myself: import transaction, transaction.commit(). And now I can still continue; the zodbverify script is still running. It will show me the pickle, because the objects are still there; I just removed the references to these objects from the records. The storage for the records is actually _records, I think, records just points to that. But still, here's my pickle. It's building the reference tree of the ZODB again because I changed something, I did a commit — it's a JSON file in your temp folder. So it goes through the whole ZODB and builds this reference tree, and the output now, compared to what we saw before, is nothing, because the object is no longer referenced. What does that mean? It means that after we pack our database, these objects are actually gone. If we run zodbverify now, we will still get the error messages, they will still show up — I'm not going to pack the database now — but I guarantee you that these objects are gone after packing the database. So I guess this concludes my live demo. Let me switch this on again. Yeah: hack it away, that's what I did, and check that it's gone. As I said, the broken object still exists, but it should not be referenced anymore. There are a couple of things that are hard about this. Let me quickly go back to this part: it takes some practice to read this output properly. The tip is: the first find is usually right. When zodbverify goes through the database to find references to an object, it finds multiple, but the first one is usually the one that is most interesting. Looking them up is hard because it follows each reference, and there are circular references, so it stops at level 50 — I guess that's what I coded in — and it stops at root objects. There are a couple of root objects that are referenced, obviously the Zope app and the Plone site, where it stops and doesn't traverse anymore, because Plone is referenced in so many other places that you would get lots of circular references. But usually the first one is the most important one. And a lot of things — annotations, intids, relations — are all stored in buckets and BTrees, and these usually have no name and are hard to inspect in pdb; that is not very straightforward. So you have to do some educated guessing to find out what it is. But the most common culprits are — let me show you, where do I have that — yes, this is the blog post I was referring to; at the very end it has lots of important information on how to deal with OIDs and stuff like that. And there are the frequent culprits, and the frequent culprits are intids and relations. I will have a lightning talk about relations. If you have a migrated database, a lot of these error messages that say module not found for Products.Archetypes or ATContentTypes are actually relations, and you should get rid of these. It's all documented here, and you can use collective.relationhelpers to get rid of them — a small sketch follows below. These are, I'd say, 95% of all broken objects. The more interesting ones are the other ones. Okay, last but not least — I still have a couple of minutes left.
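For completeness, here is a hedged sketch of the relation cleanup he points to. The helper shown is, to my understanding, the package's headline function, but verify it against the documentation of the collective.relationhelpers release you install.

```python
# Hedged sketch: cleaning up legacy/broken relations after a migration
# with collective.relationhelpers.  Check the package documentation for
# the exact API of the release you use.
from collective.relationhelpers import api as relapi

# Stores all intact relations, purges the relation catalog and intids
# (dropping broken leftovers from the Archetypes days), then restores
# the good relations.
relapi.rebuild_relations()
```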
I'm actually here. I will be done early is bad code. So I could talk for hours about that but this most of that is is so obvious that I don't have to explain anything to you when a project grows. You will or when you inherit a project and it's it had a long lifespan. You will encounter all of these issues and there is no easy remedy for that except for being really careful when you write code. So when your lifespan is short you can just delete all of that that's fine, but it's terrible to have unreadable code that is read written by people who think they are so smart that it has to show in every line and that is just insulting and bad and it will bite you or whoever comes after you and it's really bad practice to write code that is overly complex. If your code is untested is if it code is actually unused. Make sure it is used. If it's undocumented if it's not maintained. It's not maintained. For example, if you have a dependency that has not not be it's not maintained anymore. I'm not sure if anyone tried to install collective flow player in the couple of last couple of months. They will probably have encountered the issue that hot choir. I don't know how to pronounce that is no longer available on GitHub on pipe. So it is just it's a good idea to keep that up to date. Yeah, complicated and unreadable is not completely the same but very related to each other overly complex obviously also similar and basically just too much code. Don't write code if you can convince a client to not write code. So if you can convince a client to not want a feature because they will probably only use it once or not use it ever. That is a win every line of code that is not written is a good line of code. So with this stupid pun. Let me say thank you. Obviously, yes, we are for hire, even though we're really really busy. And I'm happy to answer a couple of questions that might have been asked already. I have no way of seeing any questions so full view. If you want you can read some questions or should I just move to jitzy right away. I will just check the lots warm platform if there are questions. Oh yeah, they're awesome. I see them. Okay, Marcus, how to limit the upload size in not a web server. Yeah, obviously my first answer would be do that in engine x or Apache in in clone itself. That's a good question. I know that in archetypes there was a max size equals something for file fields for dexterity I'd have to check but it's probably pretty simple to find it out. Find it out. Find out if you want I can I can check. Rafael would like to know if you can find my slides. Yes, I will upload them. I will upload them and put the link in the slack. I guess there is also some pre pre arranged way to link the video and the slides together. So yes, I will upload them and Paul asked, do you have any good resources to effectively read flame graphs. I'll yeah. Yes. I used I used gone. Okay, let's move to the next question because I have to look up. I wrote down a nice library that visualizes flame graphs. But I forgot how it's called. I'll tell you in the chat later. I'll look at us. But actually, I like reading the lines. The lines are absolutely fine because it shows you the Python path to the method that is taking too long and or and how long it takes. I'll stop sharing my screen by the way this makes no sense to look at Catalina's as nice as it is. And it just if you it has four columns, by by default, and you just click on, I guess you use three or four. 
And then you can use the same method and show you which one it is. And from there you can, in most cases, just estimate and guess what the problem is. Yeah. But profiling as is a whole science there can be a lot more as quick issues. Oh, I see more as things no current transaction TPC, TPC abort. Not yet. No. I'm pretty sure asking the community forum there are so many smart people who are who know they can read pickles in raw probably. I think only Jim Fulton can do that. If all code isn't zero. So where do you update to have to stop all sites in this zero zero. What I don't get the question. Sorry, I don't understand the question. Well, you manage your zeal clients with whatever you like. I usually use supervisor. And you can stop multiple instances at once if that's the question but I guess it was not. There's that was it I think is there another question not a question. No. Did I miss any questions. Other than that it is 1545 let's head over to the to the face to face meeting in Jitsi and I'll talk to you soon thank you for having me and it's a blast being at the conference. I buy live long and prosper. I'm not sure if you've ever seen this. You've never disappointed me with my moderating skills. Stop the street now.
As a project grows and changes it experiences growing pains. I will discuss some strategies to prevent and reduce these issues and treatments to cure them if your project is already infected.
10.5446/54748 (DOI)
Thank you so much, Andy, and welcome everybody. So we are going to start, and we're going to start now. Okay, this is the town hall of Manresa, my city, so I just want to welcome everybody to my city; it's a really nice city, quite cold these days. And it's really great to be able to be at this conference — to all the organizers, the board, and everybody involved in making this conference possible in these circumstances, I really want to say thanks to everybody. This talk is the Guillotina talk. We are going to go across all the changes that we did this year. It's not one of the best years in Guillotina's life, because we've been really busy with other situations, but we have a lot of things to say, and also the roadmap. And mostly we are going to focus on use cases, because one piece of feedback I've been receiving that I think is very interesting is that people don't understand what exactly Guillotina is. What is it for? I even heard that, because it's based on Postgres, it's an SQL engine where you use tables and columns — and it's not like that. So let me go through all these points. Let me share my presentation. First, who we are. I'm Ramon Navarro, co-founder and CTO of Flabs and Iskra, a Plone Foundation member and a member of the Guillotina framework team. And I also invited — Hi, I'm Jordi, a Guillotina team member and also a contributor to some of the related projects. Cool, thank you, Jordi. So first we're going to start with the basic headlines about what it is. First, Guillotina is Plone REST API compliant. It means that most of the things that you can do with plone.restapi, you can do with Guillotina, so most of the projects that you build on top of that will be able to use Guillotina as a backend. Second, it's a tree-traversal, hierarchical-security, Plone/Zope-like architecture. You store objects in a tree, you set security on the tree, and it's inherited down the tree. That's easy; those are the same basic lines from Plone and Zope that we are used to. Now, what it is not. We have reduced all the layers from Zope, CMF and Plone into just one package: one package that has the schema, the configuration of adapters and subscribers, the definition of the database and transactions, how we serialize to the database — all in one package. So it's easy to follow and understand, with no more than one layer. There is no server-side rendering; it means we are not rendering templates. Well, we do have Jinja to render something if you want, but our focus is to be an API, and we are a really fast API. It's YAML-based global configuration. In Plone we are used to storing configuration in a local registry of components on the container, on the tree; we still have the registry of configuration, the same as Plone, and we have content stored in the database, but we don't have templates stored in the database, and everything else is propagated through a global configuration, so all the containers in the same process have the same configuration. We lose some functionality from Plone — the option to customize each container through the web — but at the same time we win a way of being much more productive and of being deterministic about what the configuration of that environment is. About our underlying layer: it's SQL and cloud services. We can store files on a file system, on S3, or on Google Cloud Storage. We can have PostgreSQL as the database, or CockroachDB as the database, and all the tools that already exist around these underlying layers help us to do backups and to scale the application wherever you need.
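As an illustration of the YAML-based global configuration and the pluggable storage layer described above, here is a hedged sketch of a minimal config.yaml. It is my example, not a slide from the talk; the key names follow the Guillotina documentation, but check the docs for the version you run.

```yaml
# config.yaml -- hedged sketch of a minimal Guillotina configuration
databases:
  db:
    storage: postgresql
    dsn: postgresql://guillotina:secret@localhost:5432/guillotina
applications:
  - guillotina_myaddon   # illustrative add-on name
root_user:
  password: root         # change this outside of local development
```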
We can have Postgres as a database, CockroachDB as a database. And so, all the tools that already exist around these underlying layers help us with backing up the application and scaling the application wherever you need. Oh, yeah. And one of the most important things is that we keep telling ourselves: don't reinvent the wheel. There are a lot of things that have already been done. For example, we implemented workflows. And we said, we don't need to reinvent the wheel; we just take the basic concepts from workflows in Plone and implement them with our interfaces. For the web server, right now, we are using uvicorn. And we thought, okay, we can change our request to use the standard one because we don't need to maintain our own request, so we decided to do that in Guillotina 7, and we are going to talk more about that later. It's going to be Starlette. So basically, that's our main focus: trying to do fewer things in order to do more, and to be more productive in developing our apps. So, 2020, what we did this year: we released Guillotina 6, as Eric Steele already said in the keynote. It was a great effort from Jordi Masip, Jordi Collell, Nathan, and a lot of other contributors to push for an ASGI interface so we can plug in uvicorn and Hypercorn easily. So we don't need to keep the aiohttp dependency that we were keeping in order to provide the HTTP server. We added workflows, a really basic implementation, but just enough to cover most of the use cases that we are facing. We have Jinja templating in case somebody wants to render something; there is also somebody who wants to implement it with Chameleon, so we will see if that implementation is finished soon. We have a full OpenAPI validation system where we can push all the information from the API into Swagger. We have a flow to provide user registration and resetting the password with a mail validation, integrated vocabularies, and a really nice feature, the new ZMI, that Jordi is going to explain later. We did some performance improvements and a lot of bug fixing in order to provide more stability and more reliability in the framework. There is also, as you can imagine, not only the Python package: it's also an ecosystem of different frameworks and software. Grange is one of them. It's built by Eric Bréhault, and it's an Angular toolkit for developing applications. It's not a UI that you can use by itself, but it provides most of the needed things so that you can build your own applications on top of Guillotina. And the good thing is that it's also compatible with Plone, so you can build applications that work with both environments. It has JSON-schema forms generated automatically, Pastanaga UI widgets, traversal rendering with Angular, and all the CRUD support for editing and getting all the options. There is also an effort to use Volto on top of Guillotina. You can ping Victor about the status. He said that nearly 80% of the functionality is working. Of course, you cannot create content types through the web, because we want to enforce that configuration is done through the YAML configuration file. So, for example, we still need to support some of the things from the Plone side that are missing; we need to develop a driver for that. We also have streaming support, support for fields with non-standard values, and Postgres fields to store a field of an object in its own table. Okay, now, to explain what guillotina-react and the new ZMI are: Jordi. Okay. Thank you.
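Since the headline change in Guillotina 6 is the ASGI interface, here is a minimal sketch of what "being an ASGI application" means. This is not Guillotina's code, just the smallest possible ASGI app; the point is that any ASGI server (uvicorn, Hypercorn, and so on) can serve it, so the framework no longer has to ship its own HTTP server.

```python
# asgi_demo.py -- a minimal raw ASGI application.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    body = b'{"hello": "world"}'
    # ASGI responses are sent as two events: the start (status + headers) and the body.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)
```

You could also start it from the command line with something like `uvicorn asgi_demo:app`, and swapping in Hypercorn requires no changes to the application itself.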
So, I found this job management thing, something critical. And I say, you can do something like this for Hidotina, and I start doing something. And then I feel that perhaps it should be more appropriate that I can extend and we can extend and we can maintain. And that's how it evolved. Right now it's a management and it's cheaper by default than Hidotina, but also it's a library that you can plug on your own project. And there are some decisions around to be able to use it like this. Like there is no routing because you can plug on whatever router you use on the app, there is a, but, and also there are some job ideas. There is a traversal, there is a registry to register views to write components to be able to customize everything on the context. And also, it's working around the context. Okay, let's move. Okay, let's move it. So, here we are looking at the TV and then we can create containers to use more or less alongside. And then we have the classic tool to match tabs, you can customize the tabs, you can have one, you can customize the visuals, materials, install norms, but no, you can install a user folder that one is provided with Hidotina, you can buy a card and we can check on the site that we have a user folder where we can start with users. For example, this model is something you might have read for every content that you have. You can use like so, around the context, and there is an enforcement around the context, only attacks the craft, you can get it in that context. Every content type, it has its own properties, you can add them, and you can have behaviors in a little. For example, this content type doesn't have any specific behavior, we are just timing a little batch now, and then we can just start buying and what it's still a bit simple, but we are using for all of the CMS, we are extending a bit by bit, so it's almost a component of the batch we are using, that's the reason perhaps it's still a small and it's still not. For example, here we have permissions, I just want to say it's a bit different, and it's not showing the models with the permissions, but we are just making this item public accessible. You can open it from the internet. There is also a search in press, you can put a catalog, in the final search you can put a catalog, it's not really a search, it's a little bit more specific, and there is batch actions for the people you can use. For example, a little bit of folder, a little bit of movie, to a normal one. That's it, follow up, you can delay, you can move, everything. Later we show you more for the people you are following as a follower, so we will be more focused. Thank you so much, Verdi. And yeah, as I said, the DINX not only CMS, it's a framework, it's more of a community of people, also inside the Bloom community, that we are really warm and happy to be in this large community. And we decided to organize it with ourselves and define it, which is the communication channel, and how we are going to organize the site where we are going, and what are the features that we are going to implement, and who is going to review the process. So we have our Github channel, Bloom.org and there is also once a month, there is a framework meeting, that it's mostly Nathan, me, Jordi, and then there is also code owners on the Github repo, so any request from any of us, it's validated, that's one of these three people, Jordi, me, and Paul. So we decided to organize ourselves to be a proper team inside the Bloom community. 
So this is a bit the state of the world we've been doing this year, as you see, there's not a lot, but we are still happy about it because we now go through using the projects and production, we have really great results, and that's what we are going to show you now, which are the use cases that we've been doing that are closer to the Bloom kind of environment and a less scale than maybe the ones that we've been showing at the end. So, for example, the first one, it's one that I'm quite happy because it's a cultural project. And there is a music festival, more than we have to attend this, professionals in this, just for the year for other professionals to come to see shows and choose which of them are going to hang, or the one that's a contract for the next year. So, they have, we've been doing this project for a long time, we have a lot of art, alongside all these clothes, we have artists, people send proposals, we have more balls, and all the process to buy tickets, to comment about the different shows, etc. But this year was a difficult one because the pandemic didn't allow them to do most of the things physically and on the physical test, so maybe we decided to be more digital and we needed to know more about digital. So, we needed to build something to provide a platform for professionals to be able to follow up and to be more integrated on this festival, besides they were not being able to come to see the show by itself. So, they also wanted that between the professionals, it was an option to chat, like a group chat or one-to-one chat, where people were being able to reconnect and ask for more information about different shows, etc. And they have a proper side. So, what we decided to do is we decided that the Latinas for building chats, the sync ayotes, really powerful. We decided to have a shared system that we're going to cross all the flow side and the Athena so we can request from the user, where is the flow side and the Athena one. Then we implemented a sync or a special service and Athena to be able to push information into the browser and we created a chat system. And that was a really great solution because a lot more than 1000 people, especially the professional people asking for details of show that they've been seeing. We also implemented a way of being able to organize digital streams that were streamed in professional services to digital streams. So, it was a way of following the show. And for example, we have this conference, this platform. So, what was the end? The first thing we did was we had the all flow side and the movie side, a lot of content or more than five years of information there. Then we have a guillotine with an end to push notification and user preferences chat and we worked so good and all the sync ayotes. And we also have a new feature that is the one that is going to handle all the proposed flow flows. So in the future we can replace all the flow infrastructure with Athena one. Because at the end, on top, we already moved the professional space to a React proxy web app and we knew management interface with Athena React as you saw from the phone chat report. For the web page, the CMS used to be more than used to the phone. We were waiting to use photo or WordPress depending on what the client site is. The underlying layer, the users as you see it's an app that was one of the first to work. So, just to see how does it look like. This is a pharmacy web app that you can log in as a professional. 
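The chat and push-notification setup described above is, at its core, an asyncio fan-out: each connected browser holds a subscription and the server pushes messages into it. Here is a small sketch of that pattern in plain asyncio. It is not the festival's code, and in production each queue would feed a websocket rather than a print statement.

```python
import asyncio

class ChatRoom:
    """In-memory fan-out: each connected client gets its own queue."""
    def __init__(self):
        self.subscribers = set()

    def subscribe(self):
        queue = asyncio.Queue()
        self.subscribers.add(queue)
        return queue

    async def publish(self, message):
        # Push the message to every subscriber; a websocket handler would
        # be sitting on the other end of each queue.
        for queue in self.subscribers:
            await queue.put(message)

async def client(name, room):
    queue = room.subscribe()
    msg = await queue.get()
    print(f"{name} received: {msg}")

async def main():
    room = ChatRoom()
    listeners = [asyncio.create_task(client(n, room)) for n in ("ana", "bob")]
    await asyncio.sleep(0)           # let the clients subscribe first
    await room.publish("show starts at 21:00")
    await asyncio.gather(*listeners)

asyncio.run(main())
```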
And here you can check all the information from the last edition or just present information for the new edition. You can see all your messages that you have across all the years with all the different. This is the demo users so you can see them all chat. Then you can also see the notifications that the organization decided to send is the notifications are as put notifications also the system. And you can see all the tickets that you bought and your QR code to access the different shows that were physically done. Then you can present for this year and then you can prepare your own proposals. You can be on artist, your own representative of artist. And there is a lot of information that is the professional career. There is a lot of information that is being asked in order to present your proposal here. And there is a lot of validation process of this flow forms are done with one of the favorite react libraries called forming and for people likes to create a lot of libraries for doing everything. And that's that's not. So, this is an example. And just the conditions that we release. We ended up thinking is that it's really important but we will reduce this. We need something a high level friend like maybe that the bottom that does a lot of things that have been passed the questionary or you need to choose the CSS design framework like materials or whatever. And you really focus on choosing the good, the good feet to be approachable to be used case, in order to define exactly how much time or what do you came from using. And one of the things that we do is we have a library that we use car one this thing was react libraries called Jason schema form, which automatically renders forms about its forms to just with schema that it's quite useful to have a big scheme as that and one of the boards from one of our different front of the law process that several recipe I blown and you know what are amazing to develop fast. The folder we eat the items, the way that your nest information is to be well organized to the way that front of people can use that helps from the developer. There is another use case that journey is going to explain this. I just launched a business and there Allah Saudi has had all extracted working. So, I'm sorry, and they are not available. The new series is a wine and donuts, and we have five companies, we have around 6,000 brodus, more than 70,000 products. Then, one of the companies, they have all the products, with expert methods, they see great external methods, and honestly, one of them is the technology, the R&B, the store, the marketing star, all the products, all the products, everything makes sense, and everything, from 12 years. And this is one of the reasons they can scale, or buy some signals, because we have to have last year, like last year, we have to have like 15 machines to have all the traffic, and also they are able to scale on computers, because it's super hard for all something like this. Then, we start on the project to be able to have more teams and to start evolving, and we start looking at this year at fitness, TMS, and speedos, from my experience, it's the best way to evolve on computers. And then, we follow Medellina, doing something, it's on the same family, and so, we're grown and I said, okay, we have to go with it, and we're starting working with Medellina. 
Right now, all of our content is migrated to Medellina, and we just started there on the 3rd, and also all the contents are like the items for the home birds, the the all of everything related to content, it's a story from Medellina, and we see it. Then, we can check out the application, also expected from the Zocla, but as an API, and we have informed and it is project from another team that it's working, but we still can't show, and we have to still evolve it because it has a lot of screens, with a lot of functionality. Then, we just created a new project, Medellina, that it's not using the storage, it's just a framework, because we need to use posters, or really business storage, and we use Medellina, which is an example, we just have to have a piece to do, and it was quite well, and we are building a new one, and we use Medellina. That's a bit the same as the last chat, but okay, so we can continue and go to the next one. Okay, let's see a small demo. This is the demo, this is the work, the home page, you can see there are a lot of logs, models, and almost everything is going from Medellina. The public website doesn't go down to posters, or information, it just starts to posters or other processing, and then I can read that. You can see we have complex lines, there's a lot of components, there's a lot of components, and notes, and it's hard to optimize, that's one of the things I need to combine with, with the truth, it's the best fit, because we have a lot of problems, we need as a service at rendering and service at work, but right now it's working quite well, and we come above super fast, and fit those super well done, and we're ready fast. Okay, here we go. Okay, right, and that's the music. This is our CMS, that we do, that it's just to show you that this is a, a key to the DINNAC, that we personalize it with some more things inside, we spot some products and management, we create those, that it has track logs, some areas, we go those, from here, but we just do this way, because we have a lot of different programs, and we don't want to be fast in things, and we don't want to be on a wall, so we're ready to go. Okay, let's move to the next one. Cool. Okay, so another one that we are kind of, really proud of, is the DINNAC, which is a big data project, this is a network of electricity company, it's a company that works for electricity company, and they work about gathering information from the consumption of electricity, and they need to provide reports to the final, the final consumer about what they've been using, how they can spend less money on electricity, or how they can reward the cost. And the main problem that they have, is that they have a lot of data that comes from a new cluster, where they are analyzing what's happening, and that they don't know how to show them, to the user. Right now, they create VDF reports, and they send that VDF reports to them. It's quite great that it's in the reasoning direction, they cannot control the final customer's seas, so we decided to create a VDN environment for them, that gets all the data from the aggregation that is on the cluster, and finally, we have a front-end application, where we are, that allows us to define the reports, and how they want to see the reports, and different kinds of reports that they are going to see. These reports then can be generated to VDF, if they want to be sent to them, and then to mail to the final client, or they can just create web applications, that are embedded on the QT website. 
So at the end, the final user can see in real time, how that is being aggregated on their distribution. So this application is built with two key details. One for storing the report design information, that is CSS, or is this the different layouts, that we are going to see structure on them about that, and then there is a real application, that is just connected to this VDN app, that is storing all this information. Then there is a data, where here it might be more scary, because the song that we are telling this company, has millions of thousands of privates, at the risk of a lot of information, that stores all the yearly, monthly information, for each user, and serves this information, to the screen that gets rendered on the QT website. There is a security process, with service account, that allows us to provide, that nobody can see permission, that it is not its own, and that the utility can delegate the screen to the service account. So here we will see a bit, the demo of it. It is a simple materials front-end, we have application, quite easy to develop from the right perspective. The editor that you are seeing, is one is called React page. React page is what was called, or editor a long time ago. It is a good editor, quite complex, but allows you to create different blocks, and block and play different things, and create a lot of different information. For example, in this case, we are going to drop a new graphic, that we want to put just on top of this banner, where we can define which are the keywords, that we want to display, then we can edit existing layer, existing reports, and define new layers. A bit similar to the way that Volta is doing things, just with this type of Volta is React page. And all the data that you are seeing here, it is generated through your app, through these variables, that you are seeing here, they are saved, this is the model of utility that exists in JSON fields, that is saved from the other cluster. Besides defining the reports and storing them, we can generate them, we can define specific CSS, or each of these elements, and then you can define these labels, which is the one that we want to use. And finally, you define the service tokens, you are delegating these tokens to the utility, so they can embed on their websites, their own clients, web of each. Quite simple, a React application, that allows a lot of forward to the client, to define, to allow the final client to define how they want to see their report reports, and define which elements they want to see on their consumption analysis. One of the conclusions that I would like to point out, is that A3's web React page work really well with Degutina, like the other environment, because it's stored JSON, at the end we have a JSON field, that you can store wherever you want that, and then you can go to the web site, and then you can see the results, and then you can see that it's not a good idea, and it's not a good idea, because it's not a good idea. So, I think that's a good idea, because we need to have a lot of transitions to evaluate it, but editors are hard, no matter where we are talking about, there is a lot of work, and you need to adapt things to your own needs, it has its own cons and its pros. And there are a lot of users in this project. And we told to these users, uh, that we felt like I wasn't a good one. So to really do the best that I could, and understand what the job does same with another case, Ages around the binds. 
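Because the report designs are stored as JSON (whatever the React Page editor emits), it helps to validate a layout blob before persisting it in the JSON field. A small sketch with the jsonschema library, using a hypothetical layout shape, could look like this:

```python
from jsonschema import validate, ValidationError

# Hypothetical shape for a stored report layout; the real schema would
# mirror whatever the block editor actually produces.
LAYOUT_SCHEMA = {
    "type": "object",
    "required": ["title", "blocks"],
    "properties": {
        "title": {"type": "string"},
        "blocks": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["kind"],
                "properties": {
                    "kind": {"enum": ["text", "chart", "banner"]},
                    "metric": {"type": "string"},
                },
            },
        },
    },
}

layout = {
    "title": "Monthly consumption",
    "blocks": [{"kind": "chart", "metric": "kwh_per_day"}, {"kind": "text"}],
}

try:
    validate(instance=layout, schema=LAYOUT_SCHEMA)
    print("layout ok, safe to store in the JSON field")
except ValidationError as exc:
    print("rejected:", exc.message)
```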
We saw to the point, using the JavaScript of the past, that, how to respect and combine. So considering the data that we have today, Okay, so I'm going to go fast because it is on the front. This is another project that was a clone site project that we've been having for a long time. Complex workflow, more than 20 steps in the law of security for industrial wall. And in this case, we didn't use, we didn't use the Athena, but what we use is grants to build application that allows us with Angular to provide a nice UI that it's easy to use for industrial environment. Lots of numbers, lots of tables, a lot of columns, a lot of security or a lot of information. So we ended up with a grants application on the Macdonalds. Lally, another project, it's a site software created for musicians, so they push their music to Lally and people can download cards to give to people to download their music after the concert. It's second by the Athena, but the front is grants. Cool. So we could use grants on the clone site and on the Athena. Thank you. This is an amazing job. You can see the Lally project. It's quite simple, musicians can register on the site and they push their music, they subscribe to the service and they can generate a unique QR codes so people can download the menu that they use. I'm going to go. Okay. So I'm going to go to my new website, my new startup that is an inside engine software service that's its focus is to get all the possible inside information about your information. We don't gather all the information, we just select what is the proper information that they use to and we analyze and give the last lens. In the center, Athena manages the core information of all flags, it has connections with Nats, it's giving you a similar factor, it has multiple indexes, file storage, auto storage, house system, we are using Athena as an operations and training, machine learning models as a domain, the rest API as a cluster, a scheduler to a schedule task to be done on the governance cluster, even with GraphQL and Adrian, we integrate the property to provide a nice GraphQL interface so front of people can build easily application with them. Our conclusions. Jason, it's really great that the rest of the internal we are using for the buffer and DRTC, because it makes us a faster introspection and faster speed, in order to connect multiple servers, different languages, like for example, we use Rust and Python a lot, and works quite well. Streaming and even driven architectures are really powerful and in any software that needs to scale should at least be able to use that. Rust is an amazing language that integrates really well with Python. I think that your site cards in order to provide kind of a site card for the Rust application that connects with the Sympio are really, really great. And GraphQL, as most of you may be testing, it's great for applications and to provide a query language that frontends doesn't depend on the rest of the. So I have a few minutes, so I'm going to go through the faster. The G7 is coming. We are starting to write the branch from the G7. We still continue with the idea of less is more. We wanted to create properly with a starlet. We want to pickle. Pickles also G7 and look about for objects, having a lock system, a GraphQL experimental add-on or a door, and we are going to create also a relational system and join me once in the great servers. There is no brand new engine engineering or general engineering with..? 
have a plan, we can make some things, more of a communication, more examples, because it's so easy. You see, and perhaps I could put on a few videos from the United Kingdom of Oto to make out about the world-wide of the world of installation and also to add a little bit of a little bit of a flow, or perhaps it's like a reaction or something like that. But United just won't be, and they will be gone. Good. And what does it have? But in the United Kingdom of Oto, there is a big mission that Victor wrote here, some features that they want to add into Oto, and provide much more support on the United Kingdom. And in the United Kingdom, there are some differences. We have a number of behaviors, for example, or searching for this big hit brand and a bit powerful, in some cases, that long one. So we need to adapt and create this layer that provides this interconnection, also the sharing of the products. And well, all the beauty has seven milestones. This is the sixth one, it's all about the gift cap, all this issue that Victor and John wants to help us to this milestone, is one and welcome. And I really want to say thanks to everyone, thanks to everybody that you've been giving to be with me. It's a good, amazing community and I really appreciate you. Thank you, Jordy, also for talking on the positive kind of thing. So that's a great, and the thing that that's a lot. Hey, great. Thanks Ramona, Jordy. That was a fantastic informative discussion on Guillotine and the roadmap and all the things that you guys have been doing. I was really excited to be able to attend and I'm really glad that you guys were able to present to us this time.
We are going to present a group of real-life projects that use Guillotina: e-commerce, CMS, big data, and a Plone/Guillotina mix-in. We are going to introduce a new ZMI replacement for Guillotina written in React: guillotina-react. We are also going to present the roadmap for Guillotina 7.
10.5446/54749 (DOI)
Are you seeing my presentation? Let's go. So I will talk about this institutional website. I talk about it as my longest project I have. It was interesting because I started with one guy and in like six months we did all the design web design, programming, database designing, migration and stuff in a period of time that where we don't have much like presentation frameworks like today so it's good to have this kind of technology now for me like backend developer I don't like much to do web design and front-end so these frameworks are very helpful are very the way I think they should be for developers for people that don't like to think about the front-end much. So I am like to refer to myself as a human being which was born in Florianopolis, Santa Catarina, Brazil. It's the south of Brazil. It's an island. I was born in an island so I'm very proud of saying that and I like to tell people that I was born in Florianopolis so I like to work as a teacher. It's a beautiful wonderful professional and sometimes I do develop also. I prefer to work as a teacher but I am I think as a teacher full-time even when I am developing and when I am working with people to develop I like to think as a teacher. I know how to build real beer. Sometimes on phone conference we have the lightning talks. I had one I had one I think in Italy and Japan. I talked about how to build beer for people and they love it. I love to surf. I love to do nature and I like to hide my bike then my car. So technical steps. The first surfscode I saw it was like in kind of 1988-1985 I think or before I can't remember. My uncle was playing with some computer. This computer was I don't know the brand but the model was CP 200. It was a kind of keyboard that was connected to record player, cassette player. The young people will have to research and Google about it and the monitor was a black and white tiny very tiny screen and he wrote a lot of code a lot of code a lot of code and he showed me this code and this code was just to say to say my sister's name on the screen several times just it. I was like wow you wrote it. It was very fun to see it. Then I went to very very very very like I was in I entered the university in 1998 I think the first university I have in chemical engineering. So this this university had Pascal as a programming language and I have my first experience with programming language in the university was cool to do some tasks for the teachers. It was cool. It was a fun moment at university for me. I didn't like it much chemical engineering like the subjects which was very hard for me but it was very very easy for me the programming language or computer science related technology. So I went to calculus and you know I don't know how to translate but it's numerical and algorithms and this kind of subjects I was very good at. So I left this chemical engineering graduation and start programming professionally. My first job I was working with Clipper summer for a small company that was in a room in the house of a friend. So it was very nice to start like this. After that I started to think as a professional to go on on programming and database. I started with the base database and this database was kind of wow that's so nice to work with date and save it somewhere and organize it and it was so simple. So simple but for me it was like magic. Then I got passion for tech. I did the graduation in computer science. 
The name is Informatics Bachelor and at the end of my graduation I started to know more about free software Linux Python and then frameworks but first Python. Then I came from MS-DOS and Microsoft DOS US and after Microsoft DOS we had like Windows and we started to do desktop software. Then with this desktop software I moved it from Windows to Linux. Still using desktop software. So I started in Python frameworks for desktop not for web. After this kind of experience in Python I moved it to web and my first technology in web was with PHP. The free software. First experience with web because I had experience with Delphi and Lotus Notes and some these two proprietary systems. Also I had a background on databases so Oracle and Postgrease, MySQL and Zodib of course. My first Plon site was personal for me. Just for me I was using Plon 2 I think and I really liked it to use Plon but I didn't have my job already have these other technologies and I wanted to use Plon but I do not wanted to leave my job so I started using Plon personally using for my personal website. So what about this project? Let's start talking about this project. We had at the first website made with lots of notes for and before that I can remember and I can remember the domain name we had. I couldn't find any screenshots so the first version I have documented it's this website with lots of notes for. The thing with this website was that people could only change pages if they were a developer. They had access, they know where to put things, they know where to change things so it was not easy to spread to all the organization to use the websites to manage the content of the website. The people around the organization had to ask for the developers to change things and when developers are busy the things are where change it very often and very delayed and some people didn't like it and were not happy. So this is what we had. I had got from Wayback Machine, the archive, the first version. It was not so ugly because you see it was missing, it is missing some GIFs and maybe some CSS also or not. I don't know, I can't remember if this site had CSS so this was the website at that time. Okay, move on. 2002, a new design made with for lots notes dominoes. There were some improvements in the content so people had more systems. People started have more power to manage their content. They had places where they could interact with some system that at the end the information went to the website so they could change things without asking for the developers, without waiting for the people that are like doing other systems. So everybody had more time. Oh sorry for the typo. I need some improvement. It's wrong. That's not great. Not big deal. Okay, this is the website. You see, also it is missing CSS and images and stuff. It was a kind of a little bit better, way better than this, but it was kind of a new phase to do lots notes. Okay, a little bit more systems but still the background was lots notes. Then there was a migration. It was smooth and easy. We had like not much rich text in the lots notes database and we had not much... we had the easy way to extract data using CSV files and using this CSV files we imported the data into a new database structure and well the website, the code was created using Dreamweaver. I don't know if I could say the name but it was an appropriate generator of scripts and web design and it's new PHP. It news how to generate PHP so we had like a ton of PHP files creating a beautiful website. 
For that time it was beautiful and it was made by an intern and I took care of managing the page. LMP, Structure, Linux, A-Page, Postgrease and PHP. It was not lamp but so I created like the database structure and helped it to migrate data and this intern used Dreamweaver to generate these PHP scripts and we created admin interface to manage content so we create a kind of way of managing our content. So in 2004 we migrated to this new version. It was like the start of this project I am told in telling you. We used Lamp, free software. There was one admin interface and the web itself. The admin interface we created by hand and the web we created using the website we created using Dreamweaver. It was very fast because each URL is just one script and it connects to the database, close the connection and close so it don't keep the memory. It don't do huge queries. So it is very very fast. It was very very fast but it was tons of repeated code. We have kind of the same way of declaring the same things everywhere and we have like with each change we have to fix everything everywhere because it was generated by Dreamweaver and as we didn't kept using Dreamweaver we had to deal with by hand so we have to keep changing, keep changing everything everywhere until we had courage or time to have factor and try to create functions and try to organize something. So we had, even there we had lots of repeat our scripts to deal by hand. So when we have to create a new item, new menu item for the website, we needed to develop. We needed to create an item in the website so a web page with a layout and everything similar to the other pages. Then we have to create the admin interface. So the admin interface, a new table, new script, new crewed, new insert, new delete, new update, new everything. Every new new item, any new idea we have to develop everything again. So this is the reality of this version we launched at first. This is the face. It's kind of beautiful with the way back machine kept several images so it's kind of beautiful. It's beautiful at that time. It's beautiful. Organize it and stuff. But behind that we had several scripts doing stuff and it was kind of cool. It's okay to maintain. It was not a huge thing but it was not well organized. It was generated by automatic generator, code generator and we didn't organize it. So we kept that way sometimes so it was kind of hard to maintain that way. But it worked. It was fast. People had access to information so it was good. So the work done with the website creates a culture. So the website kept growing in number of scripts. So every new script got a new copy and paste, a copy of an old script and did some changes and generated a new functionality in the website. So when you work with the new code you already copied, you started to improve it. So you had old scripts and new scripts mixed. So after some times you got several ways of working, of several people that work on that script and then the scripts went to be kind of old. That script is different. You have to deal with that script. Then how that script is old. So let's work on that script and make it looks like the new ones. And then we tried to refactor and stuff and we kept doing this. So we kept having always old scripts and new scripts. Every time we copied a new script we have two versions to maintain. So it started growing. We started that problem that we can deal that was okay. Start growing and growing and growing. And we kept that way for some time. We got a new design. 
This is the website. We already created some servers like streaming. Audio streaming using free software, video streaming using free software. And nowadays it is made by some proprietary system but at that time we had low money and low time. And what we did with free software quickly it was okay to have. The backend architecture was almost the same. This version was almost the same. We have just more scripts. So we didn't evolve the backend. We have over just the front end. So this is the new design. You can see we have more new items. You see it is missing some CSS also. It was a little bit better than this. As you can see we had more services also. Portal of transparency which is a portal of public data. We had three systems there. We have mail to other systems in legislation system and contact systems. What more. That's it. So the red button says red. Audio online and TV online. Radio online and TV online. It's the way we streamed was clicking the button and seeing the radio or the TV. The radio or the video streaming online. We didn't kept the videos. We just share a stream online. So in 2013 we did another web design change and did not have any relevant change in the backend. We kept growing, kept changing people. People interns quit their contracts and then we have had new interns. So we kept having new ways of writing code. We didn't have for this team at least. For this team we didn't have a huge culture of guidelines, creating guidelines for developers. So the developer just look at the code and copy and do the same thing and sometimes improve it and sometimes not. So we kept having code, mixed kind of coding styles and without a guideline. We have, it's okay to have different styles but when people follow a guideline at least we like Pep8. We at least have something in common. So this is the website. As you see it's different in the web design in front end. You see there is the radio online on the left right corner of the screenshot. There are menu items, almost the same menu items, something more, something less. The menu items at the top, the top bar, were changing, were effectoring. Okay, but the backend is almost the same. The backend is the admin interface. We had to copy things and create tables and create stuff and the front, the website is a new menu item, a new script, a new, at least we have a common template. We don't have to deal with templates everywhere. It's all by hand. We didn't use it any framework, any PHP framework. So this is coming to an end. We launched it in August 2020. We are using now, right now, the portal model from Interlegis. The portal model lives in Interlegis since a long time. I believe. I knew Rodrigo Ferri. Many of you don't, but in Brazil we have a contest. It's not a contest, it's an award. The award is named Dona Alice Tremeia. It's for the person that shows more passion and contribution in the year. So the Brazilian community decided to change the name for Dona Alice Tremeia. It's called the award. Now we have Dona Alice Tremeia and Dona Alice Tremeia award. It's the biggest award in the Brazilian community. Jé is one of the most, the biggest contributor of the portal model. Jé has passed away. We still use his contribution. So thank you, Jé. Jé started working in 2004. He started working on the portal model. I don't know the dates. I don't know. But I think that the portal model exists for 16 years. We are using it right now. It's very cool. It improved a lot. But it still uses platform 4 and Python 2. So we need to change. 
It's needed to change. The portal model needed to evolve for Python 2 as soon as possible. So we need it. Our portal is customized by the Fork Content. It's a company that says they do content management solutions for content.com.br. They did a great work. They did a great work. I know André did a great work. They did hard work. Very hard work. So to do a portal customized team, the team is free. It's free software. It's available for everyone. You can use it. Everyone can use it. Okay. So what do we have here? What do we have now? We are using a platform customized. And the work that Gia did, the work that people from Interlegis did, and the team that the Cuma Municipal bought for content are free to use. The public money spent on this software is public. Everyone can use. If someone, some company needs the team, the company can have the team. If the other city hall needs the team, it can use it. So Interlegis hosts almost 1000 portals. So the work from these people, the work for Plom, the Plom people, Gia and many others. André and many others, many others are saving money for people. They're saving lives because people don't have to spend millions on a content management system. The city halls don't have to pay money to host their portal. So their portal is hosted just for Interlegis. The money is spent only one time, not several times. Instead of paying money several times and losing lives, we spend money one time and save lives. Yes, we save lives. Every money counts to have a place to put someone that have co-ing. Every money counts. Every penny counts. So thank you. Thank you, Plom guys. Thank you, Gia. Thank you, André. Thank you. So this is the portal right now. It looks way better. It's way better to work with. The people from IT have just to deal with users' permissions and some information organization, but the people from the communication department do the organization of the information right now. They know how to create their own homepage. They do it by themselves. They learn and they do by themselves everything. Almost everything. They don't know how to admin the server. They don't know or it's not their responsibility to deal with users and permissions. So it's important for them just to deal with content and visual communication also. You know, communication is also visual. It's not just text or it's also audio, video and images. Images and this disposition, it's communication. So I think the portal is way better. It's a kind of institutional portal right now. It has a real CMS right now and I am very proud of it. I don't know if the longest project of my life is the PHP site or the portal because since 2004 I would like to have this in camera. So that's it. That's my work. That's what I did. I don't know the value of that. But the project is still alive. The news, the old news are here. They are alive. You remember that website that had a lot of customizations and web design? So here it is. We took off all the web design, menu and food and that's it. This is the website, the news system. We used that system to put it in the portal. It's a kind of iframe. It's called Windows with Zeta. It's kind of cool. Windows with Zeta. Zop. Cool. What do you think? So people, after all I said the thing that I have to do is thanks. Thanks to my mom, Maria Angelou, my father, for out supporting my life. To my uncle, Miguel Angelou, he showed me my first source code. It inspired me. It was not a relevant system or saving the planet system but it inspired me. 
Thanks to Dorneles Treméa; he supported me for being a shaman in Python Brazil 2010. Without his support, I would never have been a Python shaman. Thanks to Rodrigo Ferri for all the work on the Portal Modelo and for being an amazing friend. And I miss you. Thanks to all of you for all the life advice, and to other friends from the Python community. Sometimes I tell him the things I go through. He knows almost everything. Not everything. No. He knows something that nobody knows. Thanks to Érico. He's always, always motivating me. He wanted me to be here, and I'm glad to be here. Thank you, Érico. Thank you. And thank you to all of you that are here right now, spending your time, giving your time to hear this story. Thank you. Thank you very much. And thank you, organizers. You made my year better. This is one of the best things that happened in my year. Thank you. And we thank you, Hamilo. It's a pleasure to have you at the Plone conference again. We have been mates at the Plone conference for the last two editions; we basically stay together. You're a really good friend, a nice human being, an amazing human being. And I posted the links to Portal Modelo on Slack. And I would like to invite all of you now to join us in the face-to-face session. If you're watching this talk live, click here.
What is a lifelong project? Is it a project? The behind-the-scenes of an institutional website that lived for more than 18 (eighteen) years while being maintained by the website team. The talk will tell some anecdotes about my career and about the chronology of the project until it was substituted by the Portal Modelo, a product based on Plone, created and maintained by Interlegis / the Brazilian Legislative Institute (Instituto Legislativo Brasileiro). It may tell some lessons learned, lessons that may be learned sometime, or not. :)
10.5446/54754 (DOI)
Hi guys, I want to give you a quick pointer to an interesting project we started almost two years ago during the Plone Tagung in Munich, and continued at a sprint at the beginning of this year at the Plone Tagung in Dresden, before the mess started with Covid and all. We were lucky to have that event in real life this year before everything went down, and we made a bit of progress there: we improved the single parts, and what we actually want to do is bring back an add-on listing, an add-on catalog for Plone, so that you can actually find Plone add-ons easily. Nowadays it's not really possible to do that. You have some places to look, like GitHub or PyPI, but you will never find all the existing add-ons, and to solve that, a couple of people started this project. So let's have a look. What we basically want to build is a search engine where you can do full-text search, filter for different Plone versions, and also filter to show just add-ons and not core packages, or just themes; then you get a listing of the matching add-ons, and you can click on them and see some details. The whole thing is built on three legs. The first leg is the Python package aggregator, which aggregates information from PyPI and also from GitHub. So we will have the information from the Python packages and then also some living data from GitHub, like the activity and stars and so on, and the contributors, so that you can have a guess whether you want to use that add-on or maybe not. All the information is stored in an Elasticsearch database, and on top of that we built a small filter API; it provides a really simple API. And then we have a first prototype written in Svelte as a UI to query the API. And this is basically what you see here, roughly. We will use all the classifiers; we also have some new classifiers, introduced a couple of years ago, to make it possible to filter for add-on or theme, for example, and we have had the classifiers for the versions longer. The aggregator can be used for every Python package, so it could basically scrape all of PyPI. For our use case here we will stick to what's for Plone, so we will actually aggregate only those packages which have the classifier Framework :: Plone and then go from there. That's it for now. We will probably sprint during the conference or after the conference on that. If you are happy with Elasticsearch and want to get your hands dirty there and help a bit to make that a reality, you are more than welcome; or fix some API calls or do some front-end work with Svelte. Just ping us and we will be more than happy to get you on board. Okay, enjoy the rest of your conference. Bye bye.
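The aggregation step itself can stay very small. The sketch below uses the public PyPI JSON API and the Plone trove classifiers to decide whether a package belongs in the listing; the package names are just examples, and each returned dict is the kind of document that would then be indexed into Elasticsearch.

```python
import requests

def addon_record(name):
    """Fetch one package's metadata from the PyPI JSON API and keep only
    what the add-on listing needs; returns None if it isn't a Plone package."""
    info = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10).json()["info"]
    classifiers = info.get("classifiers", [])
    if not any(c.startswith("Framework :: Plone") for c in classifiers):
        return None
    return {
        "name": info["name"],
        "summary": info.get("summary", ""),
        "version": info["version"],
        # Version classifiers look like "Framework :: Plone :: 5.2".
        "plone_versions": [
            v for v in (c.rsplit(" :: ", 1)[-1] for c in classifiers
                        if c.startswith("Framework :: Plone :: "))
            if v[:1].isdigit()
        ],
        "is_addon": "Framework :: Plone :: Addon" in classifiers,
        "is_theme": "Framework :: Plone :: Theme" in classifiers,
    }

for pkg in ("plone.restapi", "collective.easyform"):
    print(addon_record(pkg))
```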
A new project to aggregate, and search, Plone add-ons.
10.5446/54755 (DOI)
Hello, my name is Steve Piercy and this is my lightning talk. It's called Deform and Friends, or How I Learned to Stop Worrying and Love Web Forms. First of all, I am not going to talk about tools for simple, user-created forms. It's not my market; I don't make any money from it. But as professionals, we learn, understand, and care about the details of web forms. I enjoy working with the collection, processing, and presentation of structured data, and making order from chaos. But to make that happen, you must have a really good interface. There has to be structure to the data, organization. Data has to be valid: it has to be an email address, or it has to fall within a specific numerical range, or it has to be a decimal. As for security, we want to prevent CSRF. We want to make sure that only people who have permission can view this, and only those people can write to it. To solve all these complex problems, I use Deform and friends. Deform is a Python library for generating HTML forms. It uses Colander for serialization and deserialization of schema nodes and for validation. It uses Peppercorn to maintain the structure of data in HTML. And finally, it uses Twitter Bootstrap forms for design. Deform, Colander, and Peppercorn are all projects underneath the Pylons Project. Now let's talk about data, specifically nodes and structure. A node is the most basic element or attribute of an object. A schema is a collection of nodes, or schemas, or both, supporting infinite nesting. So you can have a schema inside a schema inside a schema, turtles all the way down. Thus, schemas support structured data. And here's how that looks in code. Here, we can define a schema in Python using Colander. Let's say a person has a name and an age. Next, people is a collection of persons, and it's a sequence, in order. Finally, schema is a collection of people. We now have structure to our data. It maps well to either relational databases or the ZODB. Now with HTML, submitting forms is basically just sending name-value pairs. It's flat. Peppercorn allows Deform to treat the form controls in an HTML form submission as a stream instead of a flat mapping of name to value. So in this example, we can see that this date object consists of a day, a month, and a year, and all of that together is a date. That's pretty cool. Well, let's go ahead and take a look at the demo. Here we have our person object. Let's add a person. And let's try to enter some data and see what happens when we try to put in something bad. Let's add data. Hmm. "Too old" is not a number. Let's try again. Hmm. That's too great. Well, let's try something a little more reasonable and sensible. Hey, that validated. And when we look at the data that got captured, we see that the first person in order is Steve, Boogie is the second, it's an ordered list, and people consists of that sequence. That's cool. But wait, there is more with Deform. Not only do we have just this one example, we've got a long list of widgets. All the standard ones, checkboxes, radios, whatever, but also combinations of sequences and mappings. Not only that, but we're getting ready for Deform 3.0. And in this version, we are now using Bootstrap 4.5. And we've got all these other cool things, including an unofficial Deform demo where you can submit your own widgets. Here's one more thing that's really cool about Deform: validating between fields. Here this is saying that at least one of those two fields has to have a value.
So when I submit it, it says, oops, can't do that. So I put in one value and it passes validation. Now with that, that concludes the demo. We are going to be sprinting. Here's more information to find out about that. And I hope you will join us.
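For reference, the schema from the talk plus a cross-field rule like the one in this demo can be written in a few lines of Colander and Deform. The Person/People part follows the example described above; the two phone fields and the "at least one of them" validator use made-up names, just to show how a schema-level validator works.

```python
import colander
import deform

class Person(colander.MappingSchema):
    name = colander.SchemaNode(colander.String())
    age = colander.SchemaNode(colander.Integer(),
                              validator=colander.Range(0, 200))

class People(colander.SequenceSchema):
    person = Person()

def at_least_one_phone(form, value):
    """Schema-level (cross-field) validator, run after deserialization."""
    if not value.get("home_phone") and not value.get("mobile_phone"):
        raise colander.Invalid(form, "Enter at least one phone number")

class Schema(colander.MappingSchema):
    people = People()
    home_phone = colander.SchemaNode(colander.String(), missing="")
    mobile_phone = colander.SchemaNode(colander.String(), missing="")

schema = Schema(validator=at_least_one_phone)
form = deform.Form(schema, buttons=("submit",))
html = form.render()  # Bootstrap-styled HTML, ready to drop into a template

# On POST, the controls are validated back into structured Python data:
#   appstruct = form.validate(request.POST.items())
# which raises deform.ValidationFailure when, e.g., both phone fields are empty.
```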
because who doesn't like making webforms?
10.5446/54756 (DOI)
Hello. Here you have the Forest Wiki. It's a modern version of Zope 2. The biggest difference is that it uses Pyramid's security on views rather than Zope 2's security on methods. That simplifies things enormously. The next difference is that this has a modern, JavaScript-enabled ZMI. You can sort the order of items in the ZMI. You can select all, rename, retitle, cut, copy, paste, delete, all the usual things. The next difference is in the content types. For wiki pages, we have both WYSIWYG wiki pages and Markdown wiki pages. For the look and feel, we have HTML, CSS, and JavaScript, all with the syntax-checking Ace editor, and then the usual file folders and images. What's much more interesting are the advanced content types: CoffeeScript, Pug, and then HTML, CSS, JavaScript, and TypeScript folders. Let's take a look at these content types. First, we have a WYSIWYG editor. Next, we have a Markdown editor. Then we have a syntax-checking technical HTML editor. If you were foolish enough to submit bad markup anyway, the Chameleon page templates on the server will also do validation. Maybe the most interesting objects we have are these BTree images. They look like a regular image, but say I need a thumbnail. All I have to do is type in 100 wide and I get my thumbnail. It traversed to that object, it didn't exist, and it generated it for me. Where does it store it? It stores it as a child of the image. You can do that in the ZODB; you can't do that with a regular file on the file system. We have images, HTML, CSS, JavaScript: they all have children. Here we have a CSS object, another folder, syntax checking. Here we have JavaScript folders. First, a syntax-checking JavaScript object. Of course, JavaScript folders are really a tree of JavaScript objects. I do a lot of work with JSON. JSON Schema is really good, and I'm about to start teaching JSON Schema classes. Then, let's take a look at Pug. Pug is the leading templating engine for Node. It is much like Haml: it tosses away the opening and closing tags and uses indentation to generate the HTML. That's really brilliant for Bootstrap menus. Here you can generate the JavaScript and render the JavaScript on the client if you wish. Because they are valid HTML and page templates, you can also then interpolate the values on the server in Python. It works well on the client and on the server. What can you do with all of these wonderful content types? Here I built the Green Party maps for the US presidential election. It looks like just a map. It's actually six different applications and growing, with only 12,000 lines of Python code. If you have any questions, please contact me at forestwiki.com. Thank you.
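The thumbnail trick is a nice illustration of traversal: a path segment that does not exist yet can be generated on the fly and kept as a real child of the image. Here is an illustrative sketch of that idea (not the Forest Wiki source); in the real application the children persist in the ZODB and the resize would use an imaging library.

```python
class Image:
    """Traversal resource: /photos/cat.jpg/300 yields a cached 300px child.
    Pyramid's traversal simply calls __getitem__ on each path segment, so a
    'missing' child can be created on demand and kept as a real child object."""
    def __init__(self, name, data, parent=None):
        self.__name__ = name
        self.__parent__ = parent
        self.data = data
        self._children = {}

    def __getitem__(self, key):
        if key not in self._children:
            if not key.isdigit():
                raise KeyError(key)            # becomes a 404 under traversal
            thumb = make_thumbnail(self.data, int(key))
            self._children[key] = Image(key, thumb, parent=self)
        return self._children[key]

def make_thumbnail(data, width):
    # Placeholder for a real resize (e.g. with Pillow).
    return f"<{width}px version of {len(data)} bytes>"

cat = Image("cat.jpg", b"\x89PNG...")
print(cat["300"].data)   # generated on first traversal
print(cat["300"].data)   # second lookup reuses the stored child
```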
A modern version of Zope 2
10.5446/54757 (DOI)
Plone and Webflow. Both platforms are for building websites and web experiences, but they approach things in different ways. I'm hoping that this may inspire things that could happen in the Plone community, and I also want to bear in mind that these two platforms have different motivations, and that may account for some of the strengths and weaknesses. Okay, so Plone: we know it is an enterprise content management platform, and perhaps the two big things that tend to be emphasized are its security and flexibility. You can make it integrate with almost anything. Webflow, on the other hand, focuses very much on visual web design, and content management has been added on at a later date. So it has really strong design tools, and it has an emphasis on no-code development (I put "development" in quotation marks because there are some limitations to what it can do). Let's look at Webflow's strengths. The top things that I like about Webflow tend to be associated with what you can do as a designer, and also some of the editing experiences. Here's the Webflow interface, and in this case I'm working in an existing project, so I'm using pre-created symbols, as they call them. You can think of these as blocks of visual code, and I'm just placing those symbols onto the screen to create my layout. I can add other things as well that are not symbols, such as built-in sections and what they call containers, and I can put in a heading and a paragraph. Now, while I'm doing all of this, I want to point out to you that this thing is responsive, so at different screen sizes I can change the styles to get what I want. Webflow supports the concept of content types; it actually calls them collections, so you can have a collection of news items or a collection of profiles. In this case these are basic documents, and I'm previewing how each of these documents will look. There's also the capability to create interactions, so what you see me do here is create a simple interaction and test it out, and all of this is visual; you can do really fancy interactions like what you see here and here. So there's a lot of flexibility when it comes to designing the visual experience, and here are the Plone advantages. A common task like linking to an attachment is actually pretty challenging in Webflow, whereas in Plone it's pretty much standard: if I want to have a link to an image, I can upload that image and then link to that image. Some of those things are a little tricky in Webflow. So I call that CMS attachment handling, but Plone is also way more flexible for forms, and there are plenty of other areas where Plone really shines. Here's a site that we built in Webflow. It's basically not more than four pages and simple, whereas this one we designed in Webflow but then implemented on top of Plone, so it uses Webflow's interactions but a lot of Plone's functionality. I would say, to wrap up, that you should use Webflow when you want to do simple designs and rapid prototypes. Go for Plone when you need more robust CMS functionality.
Strengths, weaknesses & trade-offs for these two platforms
10.5446/54758 (DOI)
My name is Steve Piercy and today I'm going to demonstrate how to create a Pyramid project in PyCharm Professional. Let's get started. From the file menu or this here welcome screen, select new project. For the project type, select Pyramid. Let's give our project a name and a location. PyCharm automatically fills in the virtual environment location and the project name. For template language, I'm going to select Jinja2, and for a backend, I'm going to select SQLAlchemy, using an SQLite database. Click create to start project creation. Right now, PyCharm is creating a virtual environment. It will then install packaging tools and upgrade them. After it's done that, it will install cookiecutter and its dependencies. Then it'll run the cookiecutter, create a Pyramid run and debug configuration, and finally pop open its readme text file. Now PyCharm is pretty cool. It runs the first three steps in this text file for you. So we can pick up where it left off by running these commands in the terminal. So we'll go ahead and do that. This will install our project in editable mode and then install testing requirements. Next, we will generate an Alembic revision and then upgrade to that revision. Then we will initialize our database and populate it with data. Then we will run tests through pytest, and we should get two tests to pass. Hey, that's pretty cool. Two of them passed. That's successful. And then we can start our project and view it in a web browser. Let's do that. Hey, that's pretty cool. That was easy. But hey, you know what? Maybe you don't like using the terminal so much and you like using pretty things. So PyCharm has you covered. First, let's take a look at this database. To open it up, you just double-click it. You can inspect schemas. You can see the tables. You can see the columns in a table, and you can actually see the data inside of your database. That's pretty cool. All right. Well, what about running the project? Well, you can do that too. Underneath our run and debug configurations, we can check and make sure that we open up the browser when we start the run configuration. So I'll do that. Let's run it and it should pop open a new window. Woohoo. That's pretty awesome. That's cool. Let's stop our configuration. We can also do some debugging if we like. But to do good debugging, we always have to set a breakpoint. So let's set a breakpoint right here and start running our debug configuration. Okay, that gets going. Pop open the window. Oh, hey, we can inspect the request. What's inside there? Lots of goodies. That's pretty cool. Let's step over this line of code. And then we can see, oh, it generated a select statement, an SQL statement. That's really cool. We can inspect that too. But hey, let's keep going. Let's actually run the statement and execute it. And we can see, oh, there's data there. Hey, that kind of matches up with our model. That's really cool. Okay, okay. I'm super excited, but I'm going to stop right there. But wait, there's one more thing. Let me show you how you can run your tests within PyCharm. We're going to add a pytest configuration in the project directory. We'll copy that and paste it just to be right there and click OK. And then we'll run our tests in pytest. Look at that. We executed two tests. We can also run just one test at a time if we want. That's pretty cool. Woohoo! All right. I can't get too excited. But anyway, I just wanted you to see.
I'm out of time, but go to this web page, this GitHub page, for more information and to get a discount of 20% off PyCharm Professional. Use the PLONE20 coupon code.
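The testing step in this talk ("we should get two tests to pass") refers to the tests the cookiecutter generates. Here is a minimal, self-contained sketch of the general shape of a Pyramid view test driven by pytest; it does not reproduce the generated tests (those also wire up SQLAlchemy fixtures), and the view and project name are placeholders made up for illustration.

```python
# Minimal sketch of a Pyramid view test run by pytest; 'hello_view' and the
# 'myproject' name are placeholders, not the cookiecutter-generated code.
from pyramid import testing


def hello_view(request):
    # Stand-in for a project view callable (the generated project keeps its
    # views in a views/ package and queries the database instead).
    return {"project": "myproject"}


def test_hello_view_returns_project_name():
    testing.setUp()
    try:
        request = testing.DummyRequest()
        info = hello_view(request)
        assert info["project"] == "myproject"
    finally:
        testing.tearDown()
```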
How to use PyCharm to get you a working Pyramid project, fast.
10.5446/54759 (DOI)
Hi, my name is Tiberiu Ichim. I'm a Plone and Volto developer working with Eau de Web Romania. I'm a core Volto contributor and I've been developing websites with Volto for a little more than a year. We've been working this summer on a new Volto-powered website for the Biodiversity Information System for Europe, and Volto Slate is one of the many open source products that were developed as part of that project. Volto Slate is a replacement for the default rich text line editor, the one that's used for the text blocks. And this is important: it is not a replacement for the whole Volto Pastanaga composite page editor, it's just a replacement for the paragraph editor. When you install Volto Slate, it's not that different from the default editor. It has a few more buttons, but it looks and feels more or less like the default. Some things to consider before we continue. Volto is a big evolutionary step for Plone, and that's not just because it integrates with the latest of the front-end development world, but also because it allows us to work with concepts and solutions that were not in our reach as Plone developers. Before Volto, we didn't consider a rich text editor as one of the special building blocks of an application. But the Pastanaga composite page editor places it right in our face. Here's your text. Add anything to it. You can build your page around it. And that's a huge potential just waiting for us. I'm going to show you just two examples of the potential that this brings. First is Volto Slate metadata mentions, which is a product that allows you to insert Dexterity metadata fields in text blocks. The second example is Volto Slate data entity, and this is a product that provides integration with tabular data coming from CSV files or a REST-exposed SQL database. So the editors are able to integrate text with values coming from a database or metadata fields, and in their final render, the values will respect the styling given to their placeholder text. Let's look at Volto Slate in action and see what its immediate benefits are. Slate itself is just a library that you can use to build an editor. It doesn't provide a default editor. So naturally, Volto Slate evolved from the start as a rich text editor designed to fit Volto. That means we can split paragraphs into separate blocks, join two separate blocks into a single paragraph, and split lists, for example. A lot of work has gone into the copy-paste support for Volto Slate. We understand what Volto is, so we know what type of block you should get when you copy-paste something, and we provide a framework to extend these capabilities. Right now we integrate with two types of Volto blocks, tables and images, but that's something that you can extend. And there's a ton of really, really small details that matter, like consistent up and down arrow key behavior. And when you click in a text box, the cursor will stick to that clicked position. I think it's important to clarify why we chose to implement a new editor instead of improving the existing one. And there are multiple reasons for this. With Slate, we get a better plugin framework, and this is really important for us. You can see from this code fragment how easy it is to override the built-in behavior. Plugins are just wrappers around the editor, so they have access to the default behavior, and they can override that behavior.
So with Slate, we get a library that's meant to be used for a custom, pluggable editor, and that's in contrast to draft.js, from my experience, where it's kind of difficult to build custom plugins, and it's kind of difficult to connect them to the editor, which is why Volto needs to use another third-party library, the draft.js plugins. Slate has a simple DOM-like storage for its values, so we were able to implement the paste support quite easily, as it maps very cleanly to parsed HTML nodes. Draft.js, meanwhile, stores its data in a very elegant format, but it requires a special dedicated library to be able to render these values to the final output, and this introduces its own set of problems and bugs. And because of the simpler data model, look how simple it can be to render the Slate values to the final output: just traverse the tree and use the original React components. Okay, so here are the bad parts. Right now we don't have any migration of any kind, so it's not possible to migrate existing content to Volto Slate, but Volto Slate can coexist with draft.js on a Volto website. Also, right now it's not possible to completely remove or replace draft.js in Volto. We already have an HTML parser inside Volto Slate, as we use it for the copy-paste support, and the algorithms are very straightforward thanks to the tree-based storage model. So it was just a matter of there being no reason to do it until now. Thank you.
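To give a feel for the "just traverse the tree" claim above: Volto Slate renders its values with React components, but the tree walk itself is language-agnostic. The following is a minimal sketch in Python of walking a Slate-style nested value (element nodes with children, leaf nodes with text) and emitting HTML; the tag mapping and node shapes are simplified assumptions for illustration, not Volto Slate's actual renderer.

```python
# Minimal sketch: serialize a Slate-like value tree to HTML by recursion.
# The real Volto Slate renderer maps node types to React components instead.
TAGS = {"paragraph": "p", "link": "a", "bulleted-list": "ul", "list-item": "li"}


def render(node) -> str:
    if "text" in node:  # leaf node: plain text
        return node["text"]
    tag = TAGS.get(node.get("type"), "div")
    inner = "".join(render(child) for child in node.get("children", []))
    return f"<{tag}>{inner}</{tag}>"


value = [
    {"type": "paragraph", "children": [{"text": "Hello "}, {"text": "Volto Slate"}]},
]
print("".join(render(node) for node in value))  # <p>Hello Volto Slate</p>
```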
A replacement for Volto's line editor, that can bring the full power of React components to editing.
10.5446/54764 (DOI)
So hello everybody. Servus from my side, from close to Munich. My name is Stefan Antonelli and I'm happy to introduce Peter. Hi, I'm Peter. I'm based in Zurich, part of the Blue Dynamics Alliance. And I think I've been doing Plone since 2004. So, a little mess with my slide notes. So what's new in Plone 5? We had this beautiful new theme. We had Diazo theming as a default. We switched to CSS compilation with Less. And we had a brand new Plone theme, Barceloneta. So, Stefan. Yeah, let's talk a little bit about the history, basically the story behind Barceloneta LTS. Everything started, or the whole story started, in Tokyo a couple of years ago now. I had the idea of a clean, simple theme for Plone 5. It was during the conference, and that's why I called the package plonetheme.tokyo. You can check that out as well. The theme was based on Bootstrap with the idea of reusing stuff. I want to use components. I want to use the stuff without writing tons of CSS all the time. The package is basically a proof of concept and had tons of overrides to identify what we need to touch when we want to switch the template story to any CSS framework, and the decision at that moment was Bootstrap. There was the issue with navigation and editing on mobile, and that's what we solved with collective.sidebar, which is a small package that gives a solution for navigation and editing on a mobile screen. This is how Tokyo looks. At the moment, the screenshot is from 5.2, and we are going to update that in a branch with version 2 for Plone 6. The good thing is basically we just have to delete a lot of stuff. All the overrides are gone, and then we have a clean theme like that with almost no CSS and almost no templates. That's a lie; let's say five to six templates to get exactly what we see here. So, we get there. We had some community discussions about the stuff. There was a Plone Tagung in Berlin, there was a Plone conference sprint, there was also the Alpine City Sprint. And of course some bottles of wine helped during those discussions to find solutions, and everybody tried to use Bootstrap somehow, mostly without success or with real problems; not much fun with that work. So our first idea was to map variables from Barceloneta to variables from Bootstrap. The result was that there is a lot of stuff defined in Plone which is also defined in Bootstrap. So we came up with duplicated stuff and thought about what the solution for that is, because we didn't want to reinvent the wheel. A possible solution was touching all the templates and changing the markup in the templates to Bootstrap. The Plone conference in Ferrara came up. There was the first PLIP, basically, for modernizing the markup to Bootstrap, Bootstrap 4. Now we are on Bootstrap 5. A few weeks later we made a PLIP for modernizing the default theme as well, so everything should fit together somehow. So, we started with a new task, which is to update all templates. We know it's a lie, we never touch all templates, but the important ones. And a big major task, and a little bit tricky, was to tackle the form lib, z3c.form. I'm not very experienced with that, but luckily we have some people helping us with touching the form lib, which means forms are on Bootstrap 5 after that work. So the story, or all the work, is to enable us to write templates and reuse components without writing CSS when you want to create a template. They said I had to go to Ferrara, they said I'd have great discussions there, and that's how it ended somehow. But go to Ferrara, it's a nice, lovely place.
So, "make things easier" is the headline. By that we mean development, I mean developing templates, templating. It means creating a clean theme for Plone without the mess, without touching a lot of files, without editing a lot of CSS or overriding a lot of CSS. That is basically what we mean with "make things easier", somehow. Creating a modern website or web app is a complex task; frontend stuff is complex since there are thousands of devices and resolutions. There are a lot of things out there that need to be handled somehow; you can look at a website with your VR headset or with your PlayStation, and you cannot support each device separately, so there is a solution or a framework needed for that. It should be easy to use components that just work, where you don't have to care about them. So, users expect things to work. And that's the idea of Bootstrap: there are widely known UI patterns for features like buttons, forms, hover effects, whatever; they just work out of the box, and you can now use those UI patterns. So from a developer's perspective, there should be one way to do things. The idea is we use a framework, we use standards for that, and Plone or the ecosystem, all the add-ons, can rely on CSS, on some stuff that is there, and can write nice and shiny templates without touching CSS or having a lot of work to get that up and running. And when you are working with Plone, you're a fan of Python, you are a fan of systems. You should not concern yourself with designing stuff in the first step; of course design is important, but the first step is to do the templates and get the stuff to work, and the second step is to make it nice and shiny. By the way, Peter is later on showing something that is nice and shiny. So as a developer you don't want to think about the markup. That's where Bootstrap comes in: you can just use components; for almost everything there is a component, especially forms and buttons, and a lot of custom stuff is still possible. The good news, or the very good news, is that there is documentation. We don't need to reinvent the stuff, we don't need to rewrite documentation. The Bootstrap documentation is now our documentation, where you can see what works in Plone; you can fully rely on that framework so far. I will hand over to Peter now. Yeah, so we already heard what's new in Plone 6. We have Volto as the new default UI. Then, we still have the Classic UI, but we updated the markup with Bootstrap 5. It still has the same Barceloneta look and feel, but we updated it. So this is 5.2. And the next slide: this will be a slightly updated version of Barceloneta, which is fully based on Bootstrap. We know that we don't have any through-the-web theming anymore, but you can still just add your quick and dirty CSS styles. And in the future, or near future, you can also override CSS variables or custom properties, and follow the path of Bootstrap 5 there. There are already a few CSS variables that you can change, but not fully yet. And it will be modernized. What I already talked about is that we have a new JavaScript story, or not so new; we will get rid of old stuff. And one point that maybe wasn't mentioned is that we updated, or were able to update, jQuery finally to version 3, which is the latest version. Then let's go over to Bootstrap. Bootstrap is still the most popular front-end framework. It's well documented. It has tons of examples. It's tested and maintained.
And honestly, we were missing out on a lot of fun. While doing Bootstrap, it was so easy to change stuff, create stuff; I enjoyed it a lot. So what is also new in Bootstrap? They improved the overall look and feel. They simplified a lot of the classes that you can use. They updated and extended the color system; you have color palettes where you can choose from a wide range of colors. They added the custom CSS properties that I mentioned. They also have their own SVG icon library, which looks really nice. And they ditched, or kind of came to the conclusion, that they don't need jQuery anymore, and it's now pure JavaScript that they use. What it also means is that they are finally dropping support for IE 10 and IE 11. Bootstrap 5, this is not correct anymore: just this week they switched to beta. We didn't go to version 5 right away; we started with Bootstrap 4 in the process of upgrading the Classic UI. But just two or three weeks ago we switched to the alpha 3, and we will switch to beta 1 this week or at the sprint. Yeah, Stefan. Luckily that's just a small change in one line, as far as I remember. So let's talk a little bit about features. I don't know why we didn't do this earlier, but what is in the package now when we switch? The most important thing is that Plone core templates use Bootstrap 5 markup. As I said, in Barceloneta, the Plone 6 branch of the Barceloneta theme, we started with tons of overrides. They have been moved to their own packages now. So we have lots of LTS branches; check out the core dev buildout for the current state. There is also a PLIP config. I don't know if I have the links in my talk; we can provide the links afterwards. But if you want to check out the package, you can test it out easily with a newly created package. We will talk about this later this afternoon. What I want to say is that all major templates have been touched already. This means editing forms is working out of the box without custom CSS, without custom stuff. The views for default content types are already in. Listings work; the tabular listing, which is responsive now, is one that I really like. The control panel has been updated with lots of effort by Peter. This is in right now. There is a screenshot Peter showed already. The idea was not to redesign Barceloneta. The idea was to give almost the same look and feel, but just update all the markup in the templates, so the markup works together with the Bootstrap default CSS that comes from your own compiling, or that comes when you download it, basically. So this is a small site, classic.plone.de, for the current state. It's not updated right now; we will update it, I guess, after the talk, where you can see the current state. This is work in progress, so expect some issues, expect stuff to change. Almost each week we do small changes on different things. So this is not an official demo. Bootstrap components as documented, that's what I said before, work like a charm. So this is a little bit of a demonstration of the Bootstrap documentation now. I really enjoy what you see there: you can copy and paste, they provide code examples and a little copy button, you can copy that code snippet, paste it somewhere in Plone, and it really just works. We have tried that; I guess we have a screenshot later on. That's basically the case for all the components. We really enjoyed it, I really liked it, to copy stuff and paste it, and I mean, no Plone documentation prior to that did exactly that.
Some of the core components are already updated; we changed the breadcrumbs a little bit. We used cards for the portlets and the tabular listing. That's what you see here. The portlet on the right side now has no custom CSS, no custom markup; it's just copied and filled in with the variables from Plone. You literally can copy and paste; that's what we mean with that accordion example. I guess we had to turn off the HTML filtering a little bit for the example, but basically no custom tweaks, and that's not fake somehow. I will pass over to Peter. So what is the new Barceloneta? It's basically an opinionated set of Bootstrap variables that makes Bootstrap look like Barceloneta, and since we updated our markup to that, it works through the whole theme, CMS, whatever you want to call it. We try to use as little of our own CSS on top as possible, because we just wanted to use the full feature set of Bootstrap, and I think we did a pretty good job making it look like Barceloneta did before. You can really change every aspect of the theme by variables. Doesn't matter if it's colors, font sizes, spacings, grid gutters, whatsoever. What's also nice with Bootstrap: they have some overall properties that go along through the whole theme. So if you want to have shadows, just turn it on. If you want to have rounded corners, turn it on, or gradients within your components; yes, just turn it on and let it go. Stefan. So this is what our variables look like now; it's still work in progress. You will remember or recognize some of these things. We'll quickly show you how the variables work. So this is the next screen. So this is what it looks like out of the box. Change the primary color: everything throughout the whole theme turns orange. Next we turn on the shadows: components get shadows where needed. We're turning on rounded corners. And what you see in the next screen is also that all the edit forms already have the same look that you defined in the variables. And then the next slide again: the example component from before just takes up the colors from the variables and works. So yes, we just added a little bit of CSS for our own components; that would be the navigation, because it's a little bit more complex than all the examples that are out there on the web. Also stuff like control panels and some smaller things; we added CSS for that. So we're coming to the theme, the theming workflow. So Barceloneta will be an NPM package. You can base your own theme on that just by including the NPM package in your package.json and running the compiler. We will build a bobtemplates.plone template for the new theming workflow. I will show you later what this will look like. And you still will be able to do your quick and dirty customizations even on top of that. Questions came up about whether there will be any Diazo. Yes, it's still there. It will work as before. I think we did some optimizations in the rules.xml that will allow you to make easier modifications within the content area, which was pretty hard before. I think, Stefan, that was it for Diazo. Your turn with the icon resolver. Yeah. I came up with the idea of supporting more than just the default icons. It was in the past a little difficult, or a little tricky, to change icons that are used by the Plone UI. I don't mean adding icons and using them in an add-on or stuff; that is not a problem. But what if you want to change the Plone appearance, or the content type icons for example, itself? And that's why we came up with an icon resolver. This is more than the first idea I had, basically.
So let's start. First, we have Bootstrap Icons now. When the Bootstrap alpha, I guess, came out, there were also new icons. They are more line-art icons, a big difference to the Glyphicons, and we decided to just use them because they are Bootstrap, they are there; why not use them? So if you want to do something like that, check out icons.getbootstrap.com for the icons, and the good news again is that all the icon names you can find in the documentation are also available in Plone with our icon resolver. They are stored via GenericSetup. So there is basically an XML; it's part of plone.staticresources. There we define a name for an icon, that's the Plone name, and assign the path to the SVG to it. That's basically stored in the registry. That's the straightforward way of making it possible to change it later. All the Plone icons are registered that way, all the Bootstrap icons are there, and on top of that we registered a couple of mimetype icons for the file content type. So when you insert a file with a PDF, it shows something that looks like a PDF icon, and you can change it and you can touch it, so that's all on the customization story. So how to deal with custom stuff? Override the XML. The idea is to provide a best-practice package for Font Awesome. After a while, you have the option to touch each icon, to override the path to your own SVG or whatever you want to go in there. The icon resolver itself works similar to the imaging story. The icons method is available via the main template. It has a tag method and a url method. You pass in the name of the icon, you can modify the class and alt tag, and you get back inline SVG from Bootstrap, as long as it is an SVG. When you have a PNG registered, you get back an image tag with that image reference. So this is also work in progress. We have used it for a couple of weeks now, and the Plone defaults we need for the templates, for listings for example, already work. So that's the idea of the icon resolver. It's 2020; we should really insert icons as SVG. That gives you the option to style them or to manipulate them, and the icon resolver is the idea for handling that. We updated the templates to make use of this already, so check out the test rendering page from the core dev buildout for an example of how the code would look and how the icons are done, basically. Next is the showcase with Peter; there you can see some bits of that already. So this is the updated overall look and feel. We also updated a little bit the structure of the content types. We put the lead image on top, because we thought it's the most common use case nowadays. We scroll down on a typical page. We added a viewlet for contributors and rights. We also restyled the tags down there. And we redid various listings. This is what the tabular listing looks like at the moment. There's more work to be done on the listings, but I think this is the new, fresh way to do listings as we use it. Next up is the event view. We changed the event summary to be more prominent, also keeping in mind that most people now use websites on a mobile phone. And we also redid the related content on the bottom here. These are already styled because we updated the z3c.form widgets. One side note: if you're ever looking for a z3c.form widget, they're now all in plone.app.z3cform and not scattered all over those packages. This is what the control panel will look like, or is looking like. Yeah, complete overhaul.
We skipped the kind of portlets on the side and use the navigation, the dropdowns. We also updated a lot of, all of, the control panels. They all follow the Bootstrap markup now, and theming them was really easy. This is the users and groups panel. And the last one is a mobile view. Next, I'm giving a talk on how to theme Plone based on Barceloneta. Then Stefan will give a talk on how to do that from scratch. And Mike will show you how to bring in a custom theme that you got from somewhere else and make it work with Diazo and the new way of doing it. So we weren't the only ones that were working on that, so a big shout out to all those lovely people that helped on various sprints. And one side note: besides Plone, a lot of work was also done in Mosaic. There is also a branch for Bootstrap there, and it already works quite well. So, questions in the face-to-face afterwards. What I want to mention is that besides the sprint on the weekend, which will primarily be on Saturday for us at least, we will continue our weekly Classic UI sprints every Wednesday, and that will continue on January 13th. Thank you all so much. Thank you all. Thank you for listening. I'll pop over to the Jitsi now for questions, I guess. Back to the moderator.
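For readers who want a concrete picture of the icon resolver idea described in this talk, here is a rough sketch. The registry layout, names, and helper functions are assumptions made for illustration; this is not the actual Plone implementation, just the pattern of resolving a registered icon name to either inline SVG or an image tag.

```python
# Rough sketch of an icon resolver: names map to registered resource paths;
# SVGs come back inlined so CSS can style them, anything else as an <img> tag.
# All names and paths are illustrative, not the real Plone registry entries.
ICON_REGISTRY = {
    "contenttype/document": "++plone++bootstrap-icons/file-earmark-text.svg",
    "mimetype/application/pdf": "++plone++mimetype-icons/pdf.png",
}

FALLBACK = "contenttype/document"


def read_resource(path: str) -> str:
    # Stand-in for reading the static resource behind the registered path.
    return '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"></svg>'


def icon_url(name: str) -> str:
    return ICON_REGISTRY.get(name, ICON_REGISTRY[FALLBACK])


def icon_tag(name: str, css_class: str = "", alt: str = "") -> str:
    path = icon_url(name)
    if path.endswith(".svg"):
        svg = read_resource(path)
        # Inline the SVG so it can be styled and sized with CSS classes.
        return svg.replace("<svg", f'<svg class="{css_class}" aria-label="{alt}"', 1)
    return f'<img src="{path}" class="{css_class}" alt="{alt}" />'


print(icon_tag("mimetype/application/pdf", css_class="icon-inline", alt="PDF"))
```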
The story behind Barceloneta LTS