doi | transcript | abstract
---|---|---
10.5446/30805 (DOI)
|
Shorthand is a method of writing derived from the spelling system and the script of a particular language. In order to approach the speed of spoken language, the written characters and ligatures are made simpler, the orthographic rules are simplified, mainly phonetic writing is used, and abbreviations are used. Gregg shorthand is a system practiced in the US. We have learned it from the shorthand manual of Gregg Simplified, a fine book, the cover of which you can see at the left side. It is a fine book in the tradition of Donald Knuth, with the exercises solved, as you can see. There is one of the exercises we would like... Can you control and make the whole screen for the presentation? Oh yes. That's View, full-screen mode. Okay. Gregg shorthand was released in 1888; the current version is the Centennial Edition. The dictionary below is the reference book for our implementation. Next. Given a text, say the sentence "Writing Gregg Shorthand with METAFONT and LaTeX", or the English translation of the motto below, which means "there is no disputing against hobby horses", the system tries to write it in Gregg shorthand, so to say, to take shorthand notes. The result is a shorthand record, whose characters are in principle glyphs from a PostScript Type 3 font, a vector font or a bitmap font. I would like to show you the system live. So perhaps I would like to hear an English sentence from you. Is anybody in the audience who can write Gregg shorthand? No one? Oh, I am very lucky. [Audience:] "Once upon a time there was a family that lived happily ever after." Is it with comma or without comma? No comma. That lived? Happily ever after. We should try it with that sentence; it is the world's shortest fairy tale. So I have to use the mouse, and then perhaps I make it a little larger so you can see it. Okay. What's the problem? So: "Once upon a time there was a family that lived happily ever after." Oh, that's very good. Thank you for the sentence. There you are. And you can do this not only with this sentence, but with other sentences. Of course I have transcribed the Father William song from Alice in Wonderland; it's on my page, and if you go there you can try another sentence if you want. This is the Alice in Wonderland page; this is only a snapshot of the screen. And you may tell me that you cannot read shorthand, but you can go with the mouse over the words and you can read them. Okay, that is the whole demo. Next I will show you the Voynich manuscript done the same way. Does anybody know the Voynich manuscript? So perhaps a few words about Gregg shorthand. This is a synopsis from the Anniversary Edition of Gregg shorthand. When I say "we", I mean the system and me; there are only a few of us, just me together with METAFONT. So here is a synopsis of Gregg shorthand. This is the method by which the phonemes are written: simplified letters derived from ordinary Latin cursive. Perhaps I will try to teach you the Gregg shorthand that is used there. So let us look at the table; I have an example. This short horizontal stroke means N, and a little larger one means M. So a stroke like this...
A short upward stroke means T, and a little larger diagonal stroke means D. So these are the kinds of signs; this is how the consonants are written, how the words are written. Small circles are the vowels: the small circle is the sound E, and the larger one stands for the A sounds, like the sounds in "had", "calm" and "came". There are no markers; there are no diacritical markers in the new edition. So if you want to write, say, this word (I don't know what the word means, but it is an English word, of course), then you take the D and connect it with the left loop of E; the same outline would serve for "deal". [The audience supplies the word.] Yes, thank you. So, we need a notation: how can we write this with METAFONT? I name the consonants in principle as they are pronounced, and the vowels likewise; where a sign exists in a left and a right variant, there is a prefix, a plus for the left variant. So this is our glyph alphabet. From the view of the system there are two kinds of signs: the circular signs and the other signs. These signs are straightforward. You can see a few ligatures there too; all of these are built as glyphs. So there is nothing new here beyond the Gregg alphabet; these are only ligatures. Now, how is the whole thing written? You cannot call METAFONT embedded in TeX; the glyphs have to be generated beforehand, one for each record. Then there are such things as this: H is written as a small dot over the vowel. You can see how "heating" is written: H as a small dot, then connected with E and T, and after it there is another dot, which means the suffix -ing. So we must have a notation for the suffix as well, and suffixes are given within their own brackets. This is the meta-notation, defined in Backus-Naur form; a self-devised meta-notation, I must say. There are two kinds of joining of these elementary signs, which are shaped in METAFONT: joining with circles and joining without circles. Of the joins without circles there are three kinds. The first two kinds come from signs that are already curvature-continuous where they meet, and there the join is obvious. As an example you see D and N connected, and you see how one continues into the other; we have heard such a word before. Joins such as P and R are very common in German. It is very interesting how ingeniously Gregg has chosen the shapes of the signs; it was confirmed, I would say fifty years later, by the Prague linguistic circle, in the works of Czech linguists, how well they are made. The third kind of joining without circles, which I use for some of these, is curvature-continuous; there I use Hermite interpolation for Bezier curves. But this works well only in such special cases where the tangents are parallel and the curvatures have opposite signs. So perhaps a slide about Hermite interpolation for Beziers. The problem is: given the two endpoints, the tangents there and the curvatures there, find the two inner Bezier control points. There comes out a system of polynomial equations which can be solved explicitly only in special cases. One such special case: if the tangents are parallel, the cross term on the left side is zero, and then you can simply solve it. If the tangents are not parallel, you have in general a problem with zero, one, two or three solutions; this is very interesting, and it was solved in 1976. Another interesting case where you can solve it is when one of the curvatures is equal to zero: then the quadratic term on that side vanishes, you can get delta zero directly, and then delta one explicitly.
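For readers who want the formulas: this is my reconstruction of the system being described, in the standard cubic Bezier setup; the notation is mine, not necessarily the slide's.

```latex
% Curvature-continuous Hermite interpolation with one cubic Bezier segment.
% Given endpoints b0, b3, unit tangents t0, t1 and signed curvatures k0, k1,
% seek inner control points b1 = b0 + d0*t0 and b2 = b3 - d1*t1.
% With d = b3 - b0 and "x" the scalar 2D cross product, the endpoint
% curvatures of the cubic give two equations, each quadratic in one unknown
% and linear in the other:
\[
  \kappa_0\,\delta_0^{2}
    = \tfrac{2}{3}\left( t_0 \times d \;-\; \delta_1\,( t_0 \times t_1 ) \right),
  \qquad
  \kappa_1\,\delta_1^{2}
    = \tfrac{2}{3}\left( \delta_0\,( t_1 \times t_0 ) \;-\; t_1 \times d \right).
\]
% If t0 and t1 are parallel, the cross terms vanish and each delta follows
% directly; if one curvature is zero, one equation becomes linear, so the
% system again solves explicitly: the two special cases named in the talk.
```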
So much for the equations; now, how do I do the curvature-continuous joins with circles? It is the following. We shape our circular signs as two splines, joined in the middle, such that the curvature at both end points is zero. Then we approximate the circle in such a manner that the curvature at most points is what it should be for a circle, namely plus or minus one divided by the radius of the circle, but in one point the curvature is zero. If I want to connect curvature-continuously, that is a little more than what METAFONT does by default. The simple way to do it: I translate and rotate the circle so that at the join the curvature goes to zero. So I start with curvature zero, go up to the curvature of the circle, and come back down to zero; the stroke likewise starts and ends with curvature zero; so at the point of joining, the curvature is zero on both sides. This is how you can connect a K with an A. The other 62 cases are done in the same way. The only problem: it is always the same, about ten lines of METAFONT, so in METAFONT this is no problem; but in all these cases you do not know in advance whether it is a right or a left circle, and if you are to connect them you must know it beforehand, so you must compute it beforehand.
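Stated as a condition, the trick with the vowel circles amounts to the following; this is my paraphrase of the argument, not a formula from the slides.

```latex
% A true circle of radius r has constant curvature kappa = +-1/r. The vowel
% glyph is instead drawn as a near-circle whose curvature runs
%     kappa = 0  ->  kappa ~ +-1/r  ->  kappa = 0,
% so that it is flat exactly at the attachment point. Since every stroke is
% also shaped with zero curvature at its ends, each stroke-circle pair meets
% with matching curvature:
\[
  \kappa_{\mathrm{stroke}}(\mathrm{join})
    \;=\; \kappa_{\mathrm{circle}}(\mathrm{join}) \;=\; 0 ,
\]
% which makes the K-A join, and the other 62 cases, curvature continuous.
```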
So much about the circular signs. Now I would like to say a few words about phonetic writing. Shorthand is written phonetically, and the problem with phonetic writing is that it has to be learned, by the shorthand learner and by a system such as ours. We use the Unisyn multi-accent lexicon from the University of Edinburgh. A lexicon entry, as you see, has the following fields: the spelling of the word; a variant ID, since one spelling may have several entries; the part of speech, whether it is a verb or a noun; the pronunciation field; and also the frequency count, for of course one reading is more frequent than the other. Cases such as "live" and "live", homographs where you must distinguish between two pronunciations of one spelling, are much rarer than the cases where we have homophones, different spellings for the same sound. Take "right": here you have four different spellings for the same sound of a very common word in English. Another example, though not so common: "lo" and "low". But cases such as "I" and "eye", or "in" and "inn", are very common, and they are another problem; they are resolved with the phonetic lexicon. And then there is the so-called indistinct vowel, the schwa: what is it? I must back-transform it, to say, in the word "data", which spelled vowel it stands for. This back-transformation is just one of the two parts of the linguistic problem, and it is done. Let me summarize: the grapheme-phoneme relation in English is more complex than in other alphabetic languages, and there have been attempts to reform English spelling. In 1968, in their book The Sound Pattern of English, Chomsky and Halle maintain that the pronunciation of a word is predictable from its spelling. Easy: if you take the word "asked", or the word "acted", and if you know the pronunciation of the root "ask" or "act", then you can tell how to speak it. They give a set of so-called rewrite rules: if you have the past tense after a voiceless sound, then you speak "asked" with a T; and, in another example, if the root ends in a voiced sound, the ending would be a D. What we have to do in our system is to go in the other direction, from the pronunciation to the spelling, and this is done with the same kind of rules. These rewrite rules are context-dependent, and it is a very interesting development in computational linguistics that these things can be done with finite-state automata. There is also a book about it, you can buy the book, and there is software, which we use.
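The past-tense rule cited here is usually written as a pair of context-dependent rewrite rules; the following is my rendering in the style of The Sound Pattern of English, not a formula from the talk.

```latex
% Past-tense "-ed", realized according to the voicing of the root's last sound:
\[
  \text{-ed} \;\rightarrow\; /t/ \;/\; [-\mathrm{voice}]\;\_\;\#
  \qquad \text{(ask} \rightarrow \text{/askt/)}
\]
\[
  \text{-ed} \;\rightarrow\; /d/ \;/\; [+\mathrm{voice}]\;\_\;\#
  \qquad \text{(live} \rightarrow \text{/livd/)}
\]
% Rules of this shape, read "rewrite X as Y in this context", can be compiled
% into finite-state transducers; run in the reverse direction, they support
% the pronunciation-to-spelling step the system needs.
```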
There are also abbreviations. We define an abbreviation dictionary; the most frequent words, a dozen or two of them, already make up a quarter of the text, and there is also writing words together, phrase writing. I have not yet been able to put all 800 entries into the dictionary, but that will not be a problem; both phases are already there. Let me summarize: there is a text-to-Gregg program. If you take the sentence "there is no disputing against hobby horses", you have to tokenize it. There are two kinds of tokens: tokens which have entries in the abbreviation dictionary, and ordinary words. "Hobby" is not in the dictionary, so you have to find its pronunciation, and from the pronunciation a METAFONT program is built, and from the METAFONT program the glyph. My German friends will recognize the motto. On the left side of this chart of writing systems you can see logographic writing, and on the right the alphabetic pen writing systems. You can also see the lineages: from the old Roman cursive writing on, there is one line leading to systems like the German DEK and the Czech system; there is the Gregg system, and there is the English Pitman system. Alphabetic systems write letter signs together and try to join them smoothly. Most shorthand systems write only the consonants and indicate the rest with marks; in Pitman these marks are the vowels. Not so in Gregg: the vowels are written out, as circles. In order to be able to speak about writing shorthand we must have a language, a system-specific meta-notation like the one from above, for all these shorthands. Do not confuse this with machine writing: of course people also write shorthand with machines, and they were in use in the courts in the US. You can also see down here that there are adaptations of Gregg; there is one adaptation to Irish. And you can see on the record here that Gregg was here in 1932, and he foresaw, of course, how sounds of English, such as in the word "came", are not covered as in Latin. And perhaps one last thing: the Tironian et, the abbreviation for the Latin word "et", which means "and". It is the Irish sign; it comes from the first shorthand system in the world, the Tironian notes, from 43 BC. It is used in Irish, it is in Unicode under its own number, and it means, of course, "and". And I would say: authors, please use shorthand writing, and all others, please beware of the dark. Thank you.
|
We present an on–line system which tries to convert English text into Gregg shorthand (O'Kennedy, 1990), a phonetic pen writing system used in the U.S. and Ireland. The given text is at first tokenized (tokens being punctuation marks, words and common phrases). For each of the tokens a METAFONT glyph is generated on–the–fly. These glyphs are (smooth) combinations of shorthand graphemes corresponding to phonemes obtained from a pronunciation lexicon (Fitt, 2006). The shorthand text is then set with LaTeX.
|
10.5446/30806 (DOI)
|
Yes; actually, Peter, with his staff, really convinced me for the first time in a long time to make a holiday in a country where no wine region is available. But let's go to the topic of the talk now. My previous talks, one after the other, had lots to do with the Babel of TeX: what to do about how to convince TeX to handle different languages, hyphenation and these kinds of things. I'm now targeting a different part of the authoring environment that we have, namely index creation. To create a register in a book or whatever else: most of you probably think it's quite easy. You just have to collect some kinds of terms, then you have to put some page numbers behind them, sort them a bit, and that's all you need, don't you? Well, I thought so as well, and we have MakeIndex for that. That's sufficient, isn't it? So, why do we want to have another, different system? Okay, my part here is: why do we want to have another system? Actually, the system was first presented to the TeX community ten years ago, at EuroTeX 98, and I would like to take it up again; this is the recapitulating part of the talk, to tell those of you who don't know xindy what xindy is, what it does for you, and what is available now. xindy itself is an index processor, just like MakeIndex. The special thing is that you get lots of multilingual support out of the box, nothing to do except naming which language you would like. So we have, out of the box, support for dozens of languages, some of them in several variants; overall we have 53 different ones, ready to use without any special work. Just name the one you want and you can use it out of the box. For those of you who are English speaking, this might not be an improvement over MakeIndex, but perhaps some of you create indexes for things other than pure English. Another advantage that we have compared to MakeIndex is that we support several encodings. First of all UTF-8, which means that for the modern engines, as others call them (well, if you look at Aleph and Omega, maybe not so modern, but look at the current crop of TeX engines), it is possible to enhance the authoring environment for them as well. Because currently you often have the problem, if you use for example XeTeX, that you might have your document in UTF-8, but how do you create the index afterwards with something that does not process UTF-8? You also have something which is called markup normalization, which I come to later, and we support all kinds of location references. [Audience:] The previous thing about encodings: are you talking about character sets or character encodings? [Speaker:] I talked about character encodings, actually; so the byte sequences. But what I do want to mention here: if you create an index, most of you have just page numbers. If some of you want to create an index for something different, for example a Bible, where you have book names with numbered chapters and verses, or music pieces which have names in them, then your locations, the things you reference, are more than just page numbers. Or they are highly structured: not just some sequence of numbers, but combinations, first a book name, then an arbitrary letter, then some numbers, and so on; and you want to handle this and create ranges over these structured things as well. Then xindy is something for you, because these are the possibilities that we have in the system. This actually needs a bit more work than just naming things. Here you have to create some kind of configuration, and there is a Lisp-like style language to tell xindy what kind of location structures, what kind of locations you have; a sketch of such a declaration follows below. This style language is also used to configure the output markup. So we have one style language, very similar in purpose to the MakeIndex style files. Well, perhaps the big difference is that you will receive styles that actually work. If you look currently, for example, in TeX Live at all those MakeIndex styles, you will see, if you really look into them, that there are lots of clauses in there that MakeIndex cannot parse at all.
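A minimal sketch of such a location declaration, assuming xindy's define-alphabet and define-location-class clauses; the attribute spellings are from my memory of the xindy documentation, and the concrete structure is invented for illustration:

```lisp
;; hypothetical structured locations of the form "Genesis 2:11"
(define-alphabet "book-names" ("Genesis" "Exodus" "Leviticus"))
(define-location-class "bible-verses"
  ("book-names" :sep " " "arabic-numbers" :sep ":" "arabic-numbers"))

;; ordinary page numbers, with ranges formed from runs of 2 or more pages
(define-location-class "pages" ("arabic-numbers") :min-range-length 2)
```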
Oh, yes. And actually one of the most important contributions that I think we make is that we created a theoretical model of index creation. I'm also in contact with people from ExTeX, and with these guys who are currently talking over there, I mean Taco and Hans: if you ever plan to implement something new in Lua, I would say the theoretical model behind xindy might be the most lasting contribution. Even if the tool itself disappears and something else comes, the reflection, the analysis of what an index is and what must be done to produce an index, is, I think, something that we should retain. So let me go into the talk itself and what's going on here. Well, the languages that we have: if anybody wants to produce a publication about the Klingon homeworld from Star Trek, it's possible to do so, because we even have Klingon; but that's a unique case. And here you can see why UTF-8 alone is not sufficient, because I'm having a different encoding here, including this one. But if you look at the list of languages, we recognize that it's actually not a really wide selection. We have a strong focus on European languages, and that's quite frankly because it's a community effort, and people with languages from the Asian part of the world, from Africa, also the Middle East, didn't appear in our community up to now; we only had some people from Israel who came and supported us. And so for those languages we were not able to create something that you can use out of the box. Actually, we have someone currently contacting us about supporting Sanskrit, and I hope that we will be able to make something together. And what you do when you create your index: you just tell xindy, when you call it, "I want to have this language." That's almost all you have to do; but sometimes it doesn't go that smoothly, because sometimes naming a language is not sufficient. Index sorting, index creation, is not just shuffling some letters around; there are lots of cultural peculiarities to be obeyed, to be looked at, to create an index. Let's take my mother tongue, German.
We actually don't have one sorting, one index-creation convention; we have two of them. And the two of them are: one for us, and one for you guys, well, for you non-Germans. What we have are these umlaut characters: ä, ö, ü. These umlauts are actually real characters, characters in their own right, and in domestic books, if the book is made for the domestic market, we sort them like an AE, or like an OE, or like a UE. This is what we call DIN sorting; DIN is our German standards organization. That's the way it's done for Germany. But this would not be a good thing in phone books or in some kinds of dictionaries, anything that would probably be read by foreigners, because they wouldn't know, if they see this funny-looking A with two dots on it, that they would need to look it up under AE; they wouldn't find it. So for books or publications that are produced for the international market, we simply treat them as normal letters with accents on them. So what's actually happening here is that you have to tell xindy: I want to have German with DIN sorting, or German with Duden sorting. Both of them are available, of course, but it is something that we can't decide in advance. If you have an authoring environment that just says "this is German", that might be sufficient for many typesetting things, but it's not sufficient for indexing, because for indexing you have to have an even more precise configuration. And there's also similar stuff elsewhere; in Greek, for instance, you have polytonic Greek and monotonic Greek. And then we have these nice French people. I always have two suspicions. The first is that they do this just to make it more complex and difficult, so that the cultural complexity doesn't suffer. The second is what I present to you now; I would like to know if there's actually any other example of this type. If you have these four words here (I think they are even pronounced rather differently), and you sort them, then first you start sorting quite normally: you sort from left to right, just as always, but you ignore all the accents. So these four words sort the same. And then you still have to bring them into an order, because that's what sorting is about. Sorting means that you have languages which are made up of alphabets, that is, of pieces that make up a word, and you sort according to those pieces, and then you order the pieces. And we usually think about sorting from left to right, or from right to left if the script goes the other way. No: that's not sufficient, because sometimes you sort first in the one direction and then in the other. In French, for example, where everybody thinks that French is quite a normal system, what happens is: if everything sorts the same, then you sort from the back to the start, and you put the letters without the accents first and then the letters with the accents. What this means here is that this word is placed first, because its E has no accent and you look at the back of the word first; otherwise that other word would come first, because there we have the accent on the O. So what we have here are, I call them, language rules, which one should obey in a system that creates proper indexes for different languages.
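On the command line, this per-culture choice is just a name. Something like the following, using the texindy wrapper and the module names as I remember them from the distribution:

```sh
texindy -L german-din   -C utf8 book.idx   # domestic market: ä sorts like "ae"
texindy -L german-duden -C utf8 book.idx   # international: ä sorts as accented a
texindy -L french       -C utf8 book.idx   # accents compared from the back
```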
I just said that indexing, or rather the sorting when indexing, means having some alphabet: a word is made up of characters, and you reduce the sorting of whole words to sorting over those alphabet pieces, which you then combine. As I just said, the combination is not as simple as going from left to right or from right to left; you might have some juggling around, but in the end it boils down to this: sorting words means sorting characters first. But then we come to the point: what is a character? And that's actually not so easy in practice, because you have, for example, in Spanish, letters like LL or CH, which are actually one letter. If we all had Unicode systems, this might not be a problem, because in Unicode that LL can be a character in its own right. So there might be, in some far, far away future, an editor which really gives us this LL character, so we know: this is the character LL. But up till now, people use the normal system and they type two times an L on the keyboard. And so what we have to do then is a more complex character recognition. This is arguably less of a problem for non-Latin scripts, because they had to deal with this on the computer from the start, and so they have more kinds of markup or direct Unicode support in the editing systems; they have a distinct representation of such a character. Not to speak of languages like Khmer, where characters are complex elements with lots of sub-character parts around them, which also contribute to sorting them. And that's just the start: in Asian languages we have phonetic sorting, and we have sorting that may count the strokes of the glyphs. Well, there is actually an ISO standard even for this, and this ISO standard covers everything that I have told you up to now. But what it doesn't cover are things like author indexes or registers of places, because in name indexes we have special rules for how names are sorted. You have all this nobility stuff, with "von" and "de" and so on, and sometimes these are taken into account in a register and sometimes not. Or you have rules for location names where you have "Saint" in front, and sometimes the "Saint" is written out, and sometimes it is abbreviated to "St."; and this really depends on your actual project. And I don't even want to start on what happens when you have to combine different scripts into one index. If you have an author index and you now put some name in it, transliterated or not, it doesn't matter, then something appears that isn't there in the original alphabet. So you have something which is properly called a partial order, not a total order anymore, and one has to decide for a specific project what the order will be. There is no possibility that for these kinds of special indexes something is done automatically; the project has to decide "how do I want to do this now", and then we need a means to write that decision down. And this brings me (I have to take care now that I don't overrun) to our two main means of configuration, how we enable you as the authors to configure the system to handle these kinds of special needs. The first one we call markup normalization. Markup normalization is the realization that quite often, as an author, we need to put special hints to an index processor about how we want something sorted.
The thing that you might all be familiar with: if you sort something where you want a special representation in the index, you just add the TeX macro, the LaTeX tag, into your \index. So, for example, writing a book about TeX, you want to index METAFONT, but you want to have it with the logo. So you index \MF, but it shall be sorted as "Metafont". What you do normally is, every time you have your \index{\MF}, you add in front of it that in reality it means "Metafont". Well, that's quite bad, especially if you want to produce indexes automatically. What we do here now is, first of all, we give the user the opportunity to say: even if it appears in the index this way, we actually mean it that other way. This is what we call markup normalization. So if there is a \MF in the input, we really mean "Metafont", please. Printed, it is \MF, by the way; in the final representation afterwards, it's back to the logo. It's just the point of how we sort the index; no need to have all these annotations. We produced an introductory book, and we had none of these special markups at all, even though we heavily use indexes, for all kinds of indexing. The other thing that we do is that we know quite a lot of the standard TeX tags that turn up in index entries. And those we don't know about, we simply ignore. So if you have something in the index entry that says backslash-something-or-other, well, it doesn't matter; we just ignore the TeX macros, and you don't need to do anything about it. But if there's something in there that looks like an accent, then we know about it, and then we take it into account. And that's part of the basic configuration that we deliver here. So after this markup normalization, we know what we want to sort, and then it comes to the point that we have to establish these characters and so on. And here we can say: I have a character that is actually many characters long in the input, and I have to explain where it is sorted. So we have a special notation for these cases, a kind of low-level notation. It says: okay, we have something like an "ä" here, and this is sorted like "ae", or sorted after "a". And you can, in theory, write lots of these low-level declarations: sort rule, this is sorted like this, or this is sorted after this, or this is sorted before this, and so eventually create your complete sort order this way. And you have to add a few of these clauses in cases where you have special characters in one index which don't appear in the normal basic language. But most of the time, when one creates a sort order for a new language, one doesn't want to go through the hassle of telling all these different rules. What we have here is a preprocessor where you establish the classes of characters that sort similarly, and this preprocessor produces those kinds of rules afterwards. Well, that's about multilingual sorting.
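In the style language, the two mechanisms look roughly like this; a sketch assuming xindy's merge-rule and sort-rule clauses, with the particular rules invented for illustration rather than taken from the talk:

```lisp
;; markup normalization: a \MF in the raw index really means "Metafont"
(merge-rule "\\MF" "Metafont" :string)

;; low-level sort declarations of the kind described
(sort-rule "ä" "ae")   ; DIN style: the umlaut sorts like "ae"
(sort-rule "æ" "ae")   ; a special character occurring in just this index
```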
The other point is what we said about encodings. Encoding means: what's in the raw index file. What one has to be careful about is that this actually doesn't mean what's in your TeX file; it means what is output by TeX to the index file. And there can be a big difference. If you have Latin-1, classical TeX, and you use LaTeX, and you say \index with a word with an "ä" in it, then what's output is not the "ä"; what's output is something which is called the LaTeX internal character representation, and that is something completely different. So what appears in the raw index file that we have to take, and in which we have to recognize the characters, is not simply Latin-1 or Latin-9 or whatever else; what happens to be there is LICR. Whereas other macro packages more or less let these characters through; with Omega you suddenly have something like "hat hat", two-hats characters, appearing in it, which is still not as recognizable as the "ä" afterwards. You still have to know these, let's say, bit patterns that you find in the raw index file, and you have to look at them. And that's what we call encoding in this context: not the encoding of the actual document, but the encoding of the stuff that is created by the authoring system in front of us. Which might not be TeX alone, by the way; I have ambitions for XML or HTML, where you have yet other encodings afterwards. So: if you happen to use a system where the raw index file is in UTF-8, well, okay, just use it. There's a special option when you call it which says "this index file is encoded in UTF-8", and it will just work, because all the languages currently have a definition for UTF-8. I have to say that we don't have a generic encoding model here where every language can appear in every encoding. That's also not practical in a pragmatic sense, because if you go beyond UTF-8 to traditional encodings, you typically have one preferred encoding per language. Cyrillic is most of the time in, what is it, ISO 8859, one of them, I think 5, or something like this, yes. And so it doesn't make any sense to have the configuration for Cyrillic in Latin 1; you're using the Cyrillic code page there, of course. But UTF-8 is available everywhere here.
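To make the LICR point concrete, here is roughly what lands in the raw index file; reconstructed from memory, so the details may differ between LaTeX versions:

```latex
% In the document, with inputenc loaded (latin1 or utf8):
\index{Mäuse}
% What LaTeX writes to the .idx file is not the byte "ä" but its LICR form,
% something like:
%   \indexentry{M\IeC {\"a}use}{3}
% An index processor has to recognize such patterns before it can sort.
```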
And now we come, toward the end of the talk, to the point: where is it available? And here I have something which is actually an announcement, though I wasn't able to upload it yet: I made a new distribution in the last few weeks, and for the first time xindy is now available on Mac OS X and Windows as well. It still needs a local Perl installed, but I have heard that this is not too much of a bother. That is just needed for some kind of driver scripts, and I'm actually thinking about rewriting those driver scripts in Lua, since Lua will be available in TeX Live and so on in the future. So this might be something which enables an easier installation in the TeX authoring environment. And for those who happened to follow us over the last few years: we had lots and lots of problems by using a base system and creating some kind of plug-in for this base system. I've simply got rid of this. No, I don't want to mention the adjective that I'm just thinking about. Okay, so we have a website; of course, every project has a website, doesn't it? And as I said, a few days after the conference I will upload the new distribution. It will not be in TeX Live, I think, because TeX Live 2008 is too far into the production process already; but we have the previous version in there, which runs on Linux and Unix. So those of you who happen to run Linux or Unix can use xindy already, and those with Windows and Mac OS X systems have to download it. But in TeX Live 2009, everybody will have it there. Documentation is available on the website, but, I have to say, not in the best shape. The best documentation available is actually in the LaTeX Companion, second edition, chapter 11. So you should all buy the second edition. So, this is my last slide, and I won't keep you long. Just to mention what's currently on the slate. On the slate are, I would say, two kinds of things: one kind I really want to do, and the others are rooms for improvement, where if other people come in and contribute ideas or even code, that's great, but I don't know if I will get to work on them myself. What I want to do is: I want to support Unicode completely, not just UTF-8. Because, quite frankly, what we currently do is use UTF-8 as the input encoding and convert it to some arbitrary rubbish which we use in between. And that's nonsense. What we want is real Unicode inside, which means actually 32 bits of representation per character. The days where you had to scrimp on bytes of representation are over; get over it, that's it. TeX is a tiny engine by now. If there ever was a time when TeX was large: you know, TeX is now one of the smallest programs around. For example, I currently have a project where I'm producing 4 million pages, from 1 million documents, in one hour, on one system; and it's blindingly fast, and it doesn't use any storage to speak of. And it's all the same here with indexing: I don't want to care whether I have to use 2 bytes or 1 byte or maybe 3 or maybe 4 or even more bytes per character; it doesn't matter. Just use, let's say, 4 bytes for now, or 8 bytes in between; it's sufficient. And the other thing I want: I said that we have this preprocessor which creates the language configurations; I want to make this better integrated, and better documented. And here I even have some people who are starting to help me, and that is great, because, as in so many projects, we don't have many people who are actively developing, perhaps 3 or 4 currently. And so more people who use it and more people who try it are really appreciated. Well, that's what I wanted to say. Thanks for your patience.
|
Xindy is an index processor. Just like MakeIndex, it transforms raw index information into a sorted index, made available as document text with markup that may be processed by TeX to produce typeset book indexes. Unlike MakeIndex, it is multi–lingual and supports UTF–8 encoding, both in the raw index input and in the tagged document output. xindy draws its strengths from five key features. Internationalization is the most important feature and was originally xindy’s raison d’être: with the standard distribution, xindy knows how to handle 53 languages and dialects correctly out of the box. Markup normalization and encoding support is the ability to handle markup in the index keys in a transparent and consistent way, as well as different encodings. Predefined encodings are not only UTF–8 (to support XeTeX); also supported are LICR, the encoding that is output by standard LaTeX to its raw index files, and TeX/Omega’s low–level output of (Unicode) characters. Modular configuration enables the reusability of index configurations. For standard indexing tasks, LaTeX users do not have to do much except to use available modules. Location references go beyond page numbers. An index entry points to a location in the main text. While most index processors can work only with numbers, xindy features a generalized notion of location references that can be book names, law paragraphs, URLs and other references. Highly configurable markup is another cornerstone. While this is usually not as important for LaTeX users, it comes in handy if one works with other author systems besides TeX. While development of xindy has been dormant for quite some time, the last few months saw a flurry of renewed energy and new work to get xindy into the hands of its potential users. The distribution has been streamlined and is now available in standardized source form, thus paving the road for a future acceptance into TeX–Live.
|
10.5446/30774 (DOI)
|
Good morning everybody. Can I assume that you all hear me? I have to start by making a few admissions or a few confessions. The first thing isn't a confession; in fact I'm actually very proud of it. This is actually my name. It's not a nickname and it's not a joke. I'm very proud of it. It's a very old name that goes right back into Irish mythology. Finn McCool was a giant. Somehow I seem to have got that wrong: my family, my immediate cousins and even my brother are actually quite tall; I followed my mother's side of the family. Finn McCool was a giant who had a fight with another giant in Scotland or England or somewhere over there. He stooped down to pick up a clod and he threw it at the other giant. The hole left by the clod that he picked up formed Lough Neagh in Northern Ireland, which is the biggest inland freshwater lake in the British Isles. And out in the sea, we are told, the clod formed the Isle of Man. That's my name. My confession: yesterday I was asked to describe what my connection with LaTeX was, and I very simply said that I'm a LaTeX user, and I left it at that. In fact that's all I am. I'm just a LaTeX user. I work entirely on my own. I live in a very rural area; I look out of my window at cows. Yesterday was the first time that I ever physically met anyone who had used LaTeX or knew what it was. If I mention latex, as I pronounce it, to any of my work colleagues, they all think it's something to do with pornography. So again you'll have to forgive my pronunciation. I'm going to pronounce words wrong, and I'm really sorry about that; it's because I'm not mixing with other LaTeX people. My second confession is that essentially I really don't know what I'm talking about. I'm a fumbling, lazy, groping amateur. I try things, they work. I won't be at all surprised if before the end of the week, or even before the end of this talk, somebody tells me: oh, this is much easier by doing that, and you could have saved yourself all that work. That's how it is. First of all, the context. As Peter described, I'm very keen on boats. I do a lot of sailing. We spend a lot of time on this old boat. It was built in 1913 up in Northern Ireland. We use it around the Irish Sea and on the west, all year. My family and I... this boat is part of my life. I'm as passionate about this boat as I am about more or less anything else. The other part of the context: this is a typical boat scenario. This is myself here, scruffy as ever, a daughter on my left, another daughter down here, and an occasional caller. We get together and we make music. It's a great, great joy to be in the middle of nowhere, surrounded by nothing, and to play some music together. You'll notice some sheets here; these would be rough manuscripts of some sort. About twelve months or so ago I had finished writing a very small book, again connected with boats and waterways. We were trying to restore a canal in Northern Ireland, the Ulster Canal. I wrote a guide to the Ulster Canal, and I used LaTeX to do that. I really enjoyed the process very much. About twelve months ago I was itching for another project, and I had two ideas. One was a cookbook for boaters, where I describe boating-type recipes and boating approaches to cooking on the water: barbecues and methods of cooking. The other one was to produce a tune book of traditional music. It's not classical music; it's traditional tunes that are in some way connected with boats, water, rivers, lakes, the sea, whatever. Traditional music is learnt by ear. That is the essence of it. It's not generally written down.
There's not a strong tradition in the folk music world of writing down tunes; they're just copied by ear from one person to another. Traditional players are not strong sight readers. Because of this I had to take on a few very definite parameters. The first one was an integer number of tunes per page: I can't have page turns in the middle of a tune. I'm aiming for two or three tunes. This is just another typical book of Irish tunes. I need nice big bright fonts, very clear and with minimal clutter. I was going to have tunes that are normally played in sets, which might be two, three, possibly even four tunes grouped together and played one after the other. I needed strong cross-referencing from one tune to another. Ideally I would like to produce the tunes of a set on the one page, where those tunes would be played together. I haven't got there yet, but that's one of my objectives. I need snippets of text along with the tunes. Those snippets of text might be historical notes, or approaches to playing the tunes, or maybe an alternative name or something like that. I need that piece of text to be tied to the tune, so that if I move the tune around the book, the piece of text will follow it. I needed a very easy way of keying in the tunes. Most of the tunes I take down from the internet, but sometimes I can't do that; sometimes I have to take them out of books. A few of them I've written myself, some I just get from friends, and I have to key them in raw. So I need a fast way of keying the tunes. I needed good control of the output, and I needed good community support for whatever tools I was going to use. And I needed high quality layout. Now at this point I'm going to jump a bit. Oh, no, I have to hit Escape here first. Yeah. Ultimately I ended up using LilyPond. And the essential message that I want to try and get across to you here, although I am not a representative of LilyPond, I can't pretend to be a LilyPond ambassador at all, they don't know me from Adam; the impression that I get, and the message that I want to get across to you, is that the LilyPond people are as concerned with the beauty and the presentation of the scores as the TeX and LaTeX people are with the presentation of text. So to illustrate this: this is just a page from the LilyPond user manual. We can see three characters here. This is the flat symbol; I don't know if you're familiar with that or not. The one on the left is from a relatively modern, probably Windows-type piece of computer software setting that character. The one in the middle is hand-set, from a 1950 German book. And the one on the right is LilyPond. I think you can compare them yourselves. The LilyPond one tries to emulate the 1950 one as closely as it can. Notice the rounded top of the b, and the square top of the b on the left-hand side. Traditionally, music was set by hand. The characters were engraved by hand onto plates, and then inked and pressed from that. The process of producing music is actually known as engraving, not typesetting, although I myself would very often use the word typesetting. So we're talking about engraving here. The LilyPond people suggest that with the proliferation of PCs, and with the Microsoft products that we have at hand, everybody now thinks they can typeset music. And the LilyPond people disagree with this entirely. They suggest, particularly for classical music, that the presentation of the music must be as clear and as beautiful as possible.
Because orchestral musicians trying to play a violin part in a very, very complex piece of Wagner or something, with the score sitting two meters away from them, need it to be 100%, not 99%. Any distraction that's on the piece at all is likely to distract them from natural playing. I'll give you an example here: they make the point that the spacing around the up-stem and the down-stem notes is wider here in the LilyPond setting than it is in the other one. So I just want to point out to you that they are very concerned with quality. Excuse me. [Audience:] I don't know if I'm the only one who doesn't know what to do with this. Can you give me a thirty-second intro? [Speaker:] I jumped ahead of myself a little bit, and I'm going to jump back again now, okay? Another apology that I have to make to you is that I've never actually read a paper at a conference before. This is my first time ever to do this, so you'll have to bear with my amateurish fumbling again. I don't know how that managed to wind back on me. Okay, so there's the objective. There are some more needs. I need MIDI production, because I make the assumption that the people who will use the book will want to be able to hear the tunes as well. Do you know what MIDI is? No? This is brilliant: I know something you don't know. MIDI is the Musical Instrument Digital Interface. There's a standard format for storing music in files: if I put a MIDI file on my computer, I can play it. There is also a standard for data exchange down the wire, so one instrument can control another instrument and also transfer files and all that stuff. You can hook up a MIDI interface from a keyboard to a PC and suck the data straight in from the keyboard. It's a wonderful system. You probably know more about it than I do; I just listen to the stuff. I needed indexing by tune type: whenever I'm looking up a tune, if it's a tune called Out in the Ocean, I want to know whether it's a jig or a reel or what it is, because then I know what kind of tune it is. I wanted the process to be as automatic as possible, and that takes me to Perl and so on; a lot of the tunes I pulled down from the net. So, what programs are out there? By far the most popular programs in use out there, amongst everyone, are Finale and Sibelius. They are fighting neck and neck in the Windows world. They're both commercial products. Their output is okay; we're talking about the same kind of quality of output as you get from Word. NoteWorthy Composer is shareware: you send off and you get it on a floppy disk or CD or whatever, and you can use it as long as you want, with some restrictions; if you want to use the full system you have to pay them 35 dollars or something. It's very cheap. It uses a proprietary file format to store the music in, so for me that's out. I'm not too sure about Finale and Sibelius; I think they might use XML, or MusicXML, I'm not too sure. Cakewalk is another one. I rejected all of those because I'm just not a Windows person. I'm a very traditional Unix-type person, and they just don't suit me at all. I may not be treating them fairly, but I dismissed them fairly early in the process. In open source we've got MusiXTeX. I looked at it briefly; I read a paper, which I think was presented at this conference a few years ago by an Italian chap, is that right? But it struck me as being horrendously complex and way, way over my head completely. So I knocked that on the head.
I did read somewhere, in the standard LaTeX textbooks, that PSTricks can be used to produce music, but again, it just seemed to me to be too cumbersome for what I wanted to do. And I settled finally on LilyPond. Now if you ask anybody in the classical music world of publishing, they will tell you that LilyPond is the boy. It produces very, very high quality output; even Finale and Sibelius will output LilyPond code. They're fine, Finale and Sibelius: they've got very nice GUI cut-and-paste, you can drag notes about on the screen and all this sort of stuff, and yes, you can connect them to a MIDI keyboard and suck the data in from the MIDI keyboard. But when it comes to producing the scores, they're horrible. You've seen that from the flat that I illustrated there. Okay, so LilyPond then answered a lot of my requirements. It is very, very actively under development. It was originally started by two guys from a Dutch quartet; I can't pronounce their names, you'll get them in the documentation. And they started, I think, round about 1996 or thereabouts, and they've been working on it for a good number of years. It's very much open source, very much out there in the community, and it's very much being contributed to by a lot of people. Very strong community support. I subscribe to the LilyPond mailing list. I also subscribe to the TeX lists: texhax, is that it? And what's the other one called, the TUG one? I subscribe to both those mailing lists for LaTeX, and certainly the traffic on the LilyPond one matches, if not exceeds, the LaTeX ones. Very, very active. A lot of it is way over my head. One of the problems with music typesetting is that music itself has an awful lot of technical terms in it, particularly for classical musicians, and a lot of them are in Italian, which I don't understand; I can't even pronounce the words if I don't understand them. It's really difficult. Ease of keying: I'll come back to that in a minute. Partial compilation: if I produce a big long piece and I'm working on it, it's obviously getting longer and longer as I work on it, but I can tell LilyPond to ignore the first part of it and just compile the last quarter that I'm working on, and that saves time. There's a LilyPond mode for my beloved Emacs, which is wonderful. But best of all, the most important thing to me, and where the crunch comes: I've got a classical Unix command-line interface. There's no GUI, none of this eye-candy stuff at all. It's really, really old-fashioned and absolutely beautiful. It incorporates the Scheme programming language; now again, I confess that I haven't got that far as to actually dabble in Scheme yet, but it's in there, and you can do all sorts of things with it. Again, very, very high quality output. And the documentation... I'll come back to the documentation now; first I'm going to go back again to the high quality output. I have to do that, and then I go to bookmarks, and I go to a demonstration. I just wanted to put up this demonstration for you as an example of what LilyPond can do, and to give you some idea of where things could go wrong. I mean, this is beautiful. This is excellent. Wonderful and great. But you can see that lesser software could very easily screw up the output. You can see the number of parameters that are being handled here. It really is very complex; I would suggest, dare I say, probably more complex than typesetting text. There are a lot of parameters there.
There's a lot of space to be handled and a lot of symbols to be positioned correctly, so there's loads of room for it to go wrong. But it doesn't. LilyPond doesn't. It's a rock-solid system. I haven't yet found any bugs in it, and in reading the mailing list, very seldom do I come across reports of bugs. It really is a very, very solid, typical Unix-type product. Why does it keep going back to there? This is another apology: I should have presented the slides using LaTeX, but I just haven't got that far yet. Sorry about this. Okay, now, the documentation is excellent. It really is very good. And again, I'll just jump back to Firefox if I may. I want to show you... Now, for some reason, I have to close down this OpenOffice rubbish thing; I can't seem to be able to jump from one to the other, it just doesn't respond. I just go to bookmarks and I go to... This is part of the learning manual, the online learning manual for LilyPond. I can simply click on an example, a musical example, and it pops up the code behind it. So that's something that LaTeX people might think about in their online documentation. Right? Okay. That's a musical example, yeah? You'll notice as I scroll over it, I get the pointer coming up, yeah? I click on it and it pops up the source behind it. Yeah, we're coming back to the scores again. But that's just a thought for the people who document LaTeX. I didn't do this last night. Okay. I got as far as the documentation. The documentation is excellent, and I put it here on my list of pluses for LilyPond, and I've also put active development on my list of pluses for LilyPond. But on my list of negatives I've had to feature the documentation again. Because despite the fact that I've got all these snippets of music and the code behind them, examples upon examples upon examples, it's still very, very difficult to learn. It's still very difficult to grasp the overall picture of how the code is structured and how the system works. It's a bit like Perl. You know one of the great strengths of Perl is that there's more than one way to do it? Larry Wall keeps telling us this. Okay? We've already had this comment this morning, where you write a program this year and you come back next year and you can't understand how it works. Well, LilyPond's a bit like that as well. I'll be coming back to the code in a minute. There's more than one way to do it, and the code can get really quite complex and quite hairy. The evolving documentation, yes. The documentation for LilyPond is a monolith; it's just the one source for it. I'm not too sure there are any books out there on LilyPond; on the site, there's only the one source. Whereas for the documentation of LaTeX, as we know, there are lots of books, and there are lots and lots of different easy-to-read HTML files and PDFs and all sorts of things. I would compare the learning curves of TeX and LilyPond; I would say they're in or about the same, or LilyPond, maybe, is a little bit more difficult. It's not easy. The end product is very high quality; the command-line interface is a notable strength; but it's not easy to learn. Okay. So on we go. Okay. Ease of keying. You know what a bar is, in music? We write music in chunks, and the chunks are the bars. You'll see this in a bit of the code. It's got a bar check in there, so if you accidentally put an extra note in a bar, it'll tell you. It's got octave checks.
So if you do ridiculous jumps from middle C to a super-high C, it'll complain at you, and it'll tell you that. It's got a facility called relative c. In the initial versions of LilyPond, you just had to key in the notes a, b, c, d, e, f, and you had to tell it where on the stave those notes were. Now, more recent versions have got this facility called relative c, where the notes are all positioned relative to c. And again, it will warn you if you go outside there. The duration of a note is simply given by a number: this d4 is a quarter note. And again, in the earlier versions, every note would also have to be followed by a duration, which might be a 4, 2, or 1, or 8, or whatever. But now, once you've stated your duration once, LilyPond automatically attaches that duration to the following notes. So you can key really, really quickly. Much, much quicker, I would suggest, than you can with the mouse, dragging notes about and dropping them on the stave. Musical phrases: this would be the equivalent of a \newcommand in LaTeX. I can just define fragments of music, and I can use them later in my code in LilyPond. Wonderful. Now, within the LilyPond family, there are a number of utilities. There's a utility for converting from MusicXML to LilyPond. There's a utility for converting from ABC, which is another music format, to LilyPond. And in particular, there's this item here, lilypond-book, and it's designed specifically for writing books. It's very closely coupled to LaTeX. It'll produce output for LaTeX, Texinfo, HTML, and the rest. But of course, the one that I'm most concerned with is LaTeX. In the early days, when LilyPond was first developed, the output from LilyPond was only LaTeX. That's what they used. But for some reason or other, they moved away from LaTeX, I think about five years ago or thereabouts. They just abandoned the LaTeX route, and now they go straight to PostScript or to PDF. But lilypond-book gives you these options. And again, it's driven from the command line. Okay, now, as I said, LilyPond was designed by classical musicians, for classical musicians. There's very, very little LilyPond code out there for traditional musicians. The vast majority is held in what's known as ABC notation, and I'll come back to that in a minute. There are, literally, out on the internet, thousands and thousands and thousands of tunes in ABC format. You can use this search engine here, key in a few words, and bingo: within seconds, it will bring up maybe hundreds of tunes that feature those words. Incredible. And the tunes come from all over the world. This is my main source. My main source is the internet. I pull down the tunes in ABC format. I use this utility here, abc2ly, to convert them to LilyPond. And then I get Perl, which goes along, gathers the whole lot, and builds, first of all, the LilyPond file, and then a LaTeX file. So essentially that's what I'm at. I'm going to take you out now, and we'll actually take a look at how it actually works, out at my beloved command line. Well, we go back to Emacs, first of all, and there's a piece of LilyPond code. Okay? We've got a header, which... I'm tempted to say that it's like the preamble in a LaTeX file, but it isn't, really. It just does what it says. And I could take that header, and I could move it down here, into the score section, and it'll still work, and it'll still come out. That's the Perl-like bit: there's more than one way to do it.
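To make the relative-mode and duration rules described a moment ago concrete, here is a minimal sketch of the kind of lilypond-book source being demonstrated. The tune is invented and the syntax is from the LilyPond 2.x era, so treat it as an illustration rather than the speaker's actual file:

\documentclass{article}
\begin{document}
A hypothetical jig:
\begin{lilypond}
\relative c' {
  \time 6/8
  \key d \major
  d4 e8 fis e d |  % the 8 carries over to the notes that follow it
  a'4. fis |       % the ' picks the upper octave; each | is a bar check
}
\end{lilypond}
\end{document}

Because of relative mode, only octave changes need marking; because durations persist, only changes of note length need typing; and the bar checks make LilyPond complain if a bar does not add up.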
Next, I've got a music object called melody. See it? Just a clump of music. Is there a light pen thing that I can do? Yes. Here you see my melody. So all these notes here are all relative to middle C on the piano. I don't need to stipulate their exact position on the stave. You'll see here that for MIDI I told it which instrument to use. Okay? These are repeats: here, \repeat volta 2, see? It doesn't seem to stay on this. You have to hold it down. Hold it down? You can use the mouse. It's still not usable. That'll be fine, I can use my mouse. Anyway, you get the gist of it, yeah? It's just plain ordinary ASCII code. Then if I move down to the bottom here, I've got the key section, which is the score section, and this is where the action is. The score section says: layout, give me a printed layout. And the MIDI section says: give me a MIDI output, and use... look at this, this is a horribly complex piece of code here: tempoWholesPerMinute equals blah blah blah 64. That means use a quarter note and make it, I think it's 60 to the minute; I'm not too sure about that. Okay, so off we go. I'll compile that one. It's simply: lilypond example0. It's very fussy about versions. I don't know if you noticed at the start, there was a version statement in the source code. And again, this is a function of the active development. It's so active that you have to keep a very close eye on what version you're using. Now, why this here? I'm not going to worry about it. There's my output. I think it comes up with that fail message because my version is too old. I think the version in my source code was 2.6-something, wasn't it? Whereas I'm actually compiling with a 2.11 compiler. So it's saying that the version is too far back. It just can't recover from that error. But it does its best. It's a very rugged product. Nothing terribly exciting about that tune. There are no slurs in it or funny things. So it's well within the capabilities of LilyPond. What we'll do now is we'll play that tune. That's enough of that. I don't think Knuth would be very impressed with that. What we'll do is... Can you change it to instrument equals organ? I can try that. What I will do is I'll Irish-ify it by speeding it up a little bit. Okay? Here we go. Just compile it again. Yeah, the "too old" message again. And again, play it. There you are. So, a trivial composition. Okay, so now what we'll do is we'll take a look... Oh, that example that I played there was a pure LilyPond example. A standalone LilyPond example. Now I'll take a look at a LilyPond example that's incorporated within LaTeX. Okay? Now: a document class, article, really very, very simple. Begin document, a piece of LaTeX, some LilyPond code, and some more LaTeX. Yeah, very, very simple. Okay, this is where lilypond-book comes into play. So what I say is: lilypond-book. I tell it to give me LaTeX output. I tell it output equals book. I tell it to use psfonts. And I tell it to process example1.lytex. We'll do that; first clear out the old files. Okay, off we go. It's running now. Off it goes. This is a very old PC. This is actually my kid's PC that I'm using here. It's absolutely ancient. So it has put all the files into a directory called book. Rather a lot of files. You'll remember that was only a very small snippet of music. There it is. Very, very small. But look at the number of files.
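For reference, the two-step build being demonstrated might look like the following minimal sketch: a LaTeX file that pulls in an external LilyPond tune, with the commands given as comments. The file names are hypothetical and the flags are reconstructed from memory of 2.x-era lilypond-book, so they may differ in detail from the speaker's exact invocation:

% Step 1: lilypond-book --format=latex --output=book --psfonts example1.lytex
% Step 2: cd book && latex example1.tex && dvips example1
\documentclass{article}
\begin{document}
Here is the tune, kept in its own file:
\lilypondfile{tune.ly}  % lilypond-book replaces this with the generated graphics
\end{document}

lilypond-book writes the processed .tex file and the generated graphics into the output directory, which is why the file count jumps the way it does here.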
What does it do here? Well, I'll explain to you what it does now. What I have to do is run latex again now, on that example1.tex. So I run latex again. Well, before I do that, I'll let you have a quick look at the LaTeX file itself. Now, this would probably make more sense to you guys than it does to me. Yeah, okay. Okay. Can I go on? Now, I'll compile that. Of course, it compiles. There's the DVI. Well, that's my output. Okay. Very, very simple. So this is where I have featured a piece of LilyPond code directly in my text. Well, obviously, that's not very practical. What I want to do is store the LilyPond code in a file and let it be sucked in. Yeah? So, I'm going to... Oh my goodness. Well, very quickly then. Yeah. Here, I've just got a small file. See it? Okay. But that small .ly file is actually the source for the first tune that I played you. It's a lot more complex. So I run lilypond-book on that. Now, you can see there are a lot more files now. So you can understand that when I come to write my book, the number of files here in this directory, and in the output directory, goes up very dramatically. I'll compile that. And xdvi. Okay. What it does is it actually creates a graphic for the header, the title, and each one of those lines. It creates a separate EPS file for each one of those lines, and it creates an EPS file for the whole tune. So the impression I get is that when LaTeX comes to actually put it all together, if it can't fit the whole tune on the page, it only uses the first two or three line graphics; if it can fit the whole tune on the page, then it uses the whole big graphic. That's my understanding of it. Okay. I haven't got that much time left. So very quickly, I just wanted to show you the end product, and it'll show you how far I've got. Okay. There's my title. Nice table of contents. Nice, nicely laid out text. And a typical tune, yeah? I do have some problems with white space. I mean, this is not very clever. So what I have to do there is a manual page break about here, or else include something nice like a graphic here, or another tune. But remember, I want the tunes coupled together in particular sequences. And I've got a nice table of contents out of it along the way. So all I can tell you people is that I'm a typical LaTeX user, but I'm a very happy LaTeX user, and I thank you all very much indeed for making these tools available to me. It really is a joy for me to use this. I have automated this whole procedure, where I can just pull an ABC file down from the internet and my Perl daemons run over it. The ABC code is full of errors, but my Perl code can recognize most of those errors and fix them. And then it converts it to LilyPond and builds the LaTeX file. And the whole thing is sweet, isn't it? Okay. Thank you, Joe. A couple of extra questions. Sorry, can you speak up just a second? Yeah, yeah. It's not always there. I think it has to do with the way that that particular piece of code was called. Do you see it there, look? If I call it without those parameters, the red box is not there. No. I think it's there for... it shows the graphic's bounding box. Yeah, yeah. It's there for engineering reasons. Also, this is changing the point a lot, isn't it? Just repeat that, please. It's illustrating what you were saying earlier, yes and no. Yes, yes. As a suggestion, you could have it basically clickable, to play the MIDI at that point. Oh, yeah. I could just put it in a PDF and ship all the MIDI with the PDF to the consumer. Yeah.
Just click on that and the MIDI will be played. I do plan to do an online version of that. Can you embed the MIDI in the PDF? Yes. You can, right? No? Right, yes. Sorry, just... I wanted to put in a plug here which boils down to an appeal. The appeal is that I would like to see the LaTeX and the LilyPond communities perhaps working a bit closer together. I can't help but feel... I didn't mention this, but when I posted to the LilyPond mailing list about my intention of putting this book together, their suggestion was: forget LaTeX and just use LilyPond itself. But I was just... I was scared of that. Yeah, and I also use LilyPond a lot. It's very usable on its own, without LaTeX, and there's plenty about it on the internet. I can't help but feel that there is a divergence there, which I think is a bit sad. It could be because I suspect that amongst the LaTeX community there's not a large number of professional musicians. A lot of people like music, but professional musicians are few. Sorry, I'm suspecting that amongst the LaTeX community there's not an awful lot of professional musicians. There are a lot of people who like music, but people who actually read or write books of it are probably few, so maybe we should encourage them. There's a piece of evangelism to be done there. It is a pretty rarefied and specialized field. Right, Joe, thank you very much indeed. My pleasure. APPLAUSE
|
The author is an active Irish traditional musician. He is also a keen inland boater. He is having a lot of fun composing a book on “Traditional Music for Boaters”. In this paper he describes his successes and frustrations using Lilypond, Lilypond-book, LaTeX, and ABC musical notation. Lilypond and LaTeX have a lot in common. Neither are WYSIWYG, neither demand GUIs. Both compile simple flat files to produce beautiful graphical output. Lilypond’s original manifestations produced output directly for LaTeX, but of late users writing books have been encouraged to use Lilypond-book. This looks for Lilypond code within LaTeX source files and produces graphics and associated instructions which can then be processed by LaTeX. Most joy has been gained from automating these processes under Linux and Perl.
|
10.5446/30775 (DOI)
|
All right, good morning. I'm Steve Peter, as this slide says. This morning I'm here representing the Pragmatic Programmers. I'm not technically an employee but an independent consultant, or contractor, or whatever term you want to use, which has all the joys that you might think that entails. Like, two minutes before I left for the airport to come here, I had all kinds of questions from authors to answer, and had to say no, I'm going away for a week. Sorry, folks. So who are the Pragmatic Programmers? Sounds like the beginning of Mythbusters here. The Pragmatic Programmers came about when two programmers, Andy Hunt and Dave Thomas (who is not the founder of Wendy's, but a different Dave Thomas), both of them programmers working out in industry for years and years. Andy worked at AT&T; Dave worked for AT&T at one point, in different groups. They got together and started comparing notes, and realized that every time they got called to a consulting job, they were solving the same problem over and over again. And they decided that they would try to put their knowledge into a book, which they did, and they called it The Pragmatic Programmer. They still got called to do the same thing over and over again, because people weren't reading the book and following the advice. But if you haven't read the book and you do programming, I do recommend it; it's a very good book. The part about it that I liked best when I first read it was where they talk about documenting your work: if you're going to document it, you might as well make it look good, and there's a system out there called TeX, and you should use it. When I read through that, I thought, I like these guys. A couple of years after they wrote The Pragmatic Programmer, they both discovered the Ruby programming language and immediately fell in love with it, and said that it reflected exactly the way they thought about problems: there was no need to conceptualize how they might solve something, then translate that into some programming language, and then implement it; they just sort of thought directly in Ruby and could do it. Ruby was created by a Japanese programmer, and at the time all the documentation was in Japanese. So the two of them sat down, went through all the source code, and wrote the book called Programming Ruby, which is now in its third edition, and which details the language and how to program in it. We use a lot of Ruby at the Pragmatic Programmers, as you might expect. I can't say that my mind works exactly like Ruby. I'm forced, after I think about it much, to realize that the way my mind works is actually more like Perl, which is sort of this mish-mash: it works great at one point, and then you go back to figure out what you did six months later and you can't figure it out. Unfortunately, that is the way my mind works. But I am a linguist by training, and Larry Wall is a linguist. After these two books came out, they were very popular books, and Andy and Dave, being pragmatic, decided to buy back the rights to the books and start their own publishing enterprise that they called the Pragmatic Bookshelf. If you haven't read any of the Pragmatic books, I urge you at least to go to our website. I would show it to you now if I had connectivity, but I don't. And I would have shown you pictures of some of these if I'd had them when I was writing this last night, but I didn't. So you'll just have to go with the brown on brown instead of black and white. But go and look at the website, or pick up some of our books.
I don't get any extra money if a lot of these books sell well, but I do enjoy typesetting. A number of years after they started working on this enterprise... I should back up and say that The Pragmatic Programmer they typeset themselves. It started out (both of them had been at AT&T, or at least Andy was at the time) with the original Pragmatic Programmer written in troff. And as it grew a little bit larger, they started having some issues with it, and then took it to the publisher. And the publisher said, we don't really like the way this looks; have you considered using this thing called TeX? Our designer knows TeX and can come up with something. And the designer created a layout for them in TeX that they then used. And when they started the Pragmatic Bookshelf, they took that with them and started enhancing that tool chain. That's part of my problem these days: I inherited that tool chain that started years and years and years ago. And we heard in the first talk about legacy code and what you do with these legacy formats. That's something that I face all the time. But anyway, a couple of years after they started this, I got an email from Andy saying, hey, we need somebody with some TeX experience who knows XSLT; can you help us out? So I got in my car, drove to the bookstore, and bought a book on XSLT. I flipped through it and said, yeah, I can do this. And thus began my adventure with them, which continues to this day. We'll go into that. What tool chain do we use? We use a bunch of things. All of the source that the authors write is in XML, in our own DTD called Pragmatic Markup Language, or PML. We use Unix tools like make. We use lots of Ruby, although we do have a few scripts in other languages: we've got a couple of Perl scripts in there, a couple of Python scripts. Nothing else yet, but we'll probably be adding more at some point. We use XSLT, which I've now come to know; I don't know if I love it, but I deal with it. We use TeX, and we produce PDF through a couple of different routes. All of the books that we sell, we sell in either print versions or as PDF versions, and they take two different routes to PDF, depending on which one we need. Our print versions ultimately go through Acrobat, because of the requirements of our printers. They throw hissy fits every time we try to do something different, so we generally take the last step through Acrobat. But the ones that we sell as PDFs online go through the normal (or I shouldn't say the normal anymore) DVI, then PS, then ps2pdf route. The books that we sell as PDFs aren't digital-rights-managed. We do have certain information that's encoded into the PDF, and that's why we do that last automated step. But there's no big encryption that goes on. That's one reason why we have to stick somewhat to the tool chain that we have. Now let me show you a demo of what we have here. All right, here is a typical file that an author might have. I should probably hook up my mouse so that I can do this more efficiently. What you can see here is what a typical PML file looks like: fairly stock XML. And then you can see why I haven't been writing any books on TeX recently; I've got Arn Swaggild doing this book here, and it has been occupying my time recently. But we take the XML, and the nice thing about using these mainly open-source tools is that all of our authors can download this. This build system works brilliantly on any Unix-based system. So it works wonderfully on Linux, on Mac OS X.
It can work on Windows, but you really have to bend over backwards. It's much better if you're using some sort of Cygwin-type environment, which is essentially running Unix on your Windows machine. But it works well on Unix-based systems. So when I get ready to build a book, all I do is type make all. Of course, it goes through, pre-processes with a bunch of scripts, and then runs TeX, runs BibTeX, and then runs TeX another two times to resolve everything. And you can see here it's running dvips and ps2pdf, and bingo, we get output, which looks something like this. Now, our pre-processing we use for coloring code. Let me get you some code here. Nothing exciting. I'm not going to scroll that; I've got my screen set wrong. But you can see here the keywords in code are always colored. We can also, by throwing a different switch into the make process, make a screen version of this. This is going to be full color then. And that's a dummy color that I inserted there. But here's what our PDF versions look like. So they're colored, they've got hyperlinks here, so you can go directly to a section, and the URLs work, if we were online. And you can see down here (let me blow this up), you can see here our code is colored, literally. All right. Our source is generally organized into two different directories: a book directory and a ppstuff (Pragmatic Programmers stuff) directory. The authors generally work out of the book directory. I do most of my stuff in two directories in ppstuff. We have a tex directory that has all of our fun stuff in it. Most of the stuff that I'm modifying comes out of our style sheet, which is called pragprog.sty. So that's where I spend most of my time making TeX changes. I won't really show you anything there; it's fairly standard. We use memoir as our class file, but it's fairly standard TeX. There's nothing earthshaking there. Our XML comes out of our DTD, and then we've got different formats that we can create from that XML. So we can go to HTML; that's useful for some ebooks that are coming out that require HTML as their input file. We go to LaTeX; that's the one that I do most of my maintenance on. And then we have a couple of different ebook formats. The ebook space right now is in flux, but it's becoming much more important to us. I'll present in a little bit one of our problems that has to do with ebooks. But generally, one of the two principals of the Pragmatic Programmers is very keen on keeping our XML as the canonical source of all of our books, which we can then do the conversions from, with TeX being the main route that we're using right now. But we don't have to keep that in the future. So what are some of the issues that we have using this tool chain? One thing to remember before I present the issues: there are solutions that you can immediately see to some of these issues if we abandon the XML as the canonical source. But my hands are tied on this; I can't really abandon that. The first problem we're having is with hyperref in general, and until recently, our URLs have been a pain. I think (knock on simulated wood-grain finish) that I have solved that problem. But it does keep popping up: the URLs weren't breaking properly, and sometimes in the sidebars, if people read the book in Acrobat, the sidebars end up with TeX code in them that then displays. And they say, hey, what is this \unvbox stuff that's sitting here?
I can click on it and I get to something, but it's hard to get rid of. But right now, our biggest problem is with ebooks and reflowable PDF. Reflowable PDF means that you can change the size of your margins and the PDF will reflow automatically, as opposed to having to pan back and forth. At this point, we're getting at least an email a day asking us when we're going to support all these different ebook formats like Kindle and various other ones. It's increasing in frequency, and I'm getting a lot of pressure to be able to produce reflowable PDFs. I can't do that right now with stock TeX, and I need a solution. And I'm hoping that the collective brilliance in this room will provide me with a solution by the time I get on the plane. If not by the time I walk out the door this morning. So what can we use? Is LuaTeX going to provide a solution? No. Is XeTeX going to provide a solution? No. Is XeLuaTeX going to provide a solution? The problem is with the idea of reflowable PDFs. What about LuaTeX? Well, yeah, but... I have a solution for you. Okay, we've got a solution. Yes, it's very easy: instead of doing full reflow, you can make several copies with different size screens. And when they change the size, you allow them to snap to the next grid level and substitute the copy you want. But a lot of ebook formats aren't going to allow me to input several different versions. Like, if I put it into, say, Kindle, it won't allow me to put in multiple inputs. I need to look at how the Kindle format works there, right? Yeah... I think it's possible to do it. All right, if we can do that, great. So these are kind of engine problems. Then there's the question of what format we can then use. And that kind of ties in; it's orthogonal, but it's related. And we can use basically anything. And, you know, even plain TeX is fine, because I'm really the only one who has to deal with the TeX aspect of it. Until very recently, Dave dealt a lot with the TeX, too. But Dave's spending most of his time these days writing Ruby on Rails, and I'd rather have him do that than me, and I'd rather do the TeX. Do we do some new format that's not any of these, that then incorporates this? Now, the carrot to place in front of this group that's in front of me: with these different formats and different engines, we can provide some funding to do some of this. It would be in dollars, which I realize is funny money these days. But if need be, we can come up with euros, or rubles, or whatever we want. Monopoly money works well, too. And if we can't get satisfaction out of this, we may have to switch away from TeX. We do know that XSL-FO, as much as it pains me, does work and does produce reflowable PDF. And since we've already got XML in our tool chain, it can be done. It doesn't look as good as TeX. But if we can get ebooks out of it, we may have to use TeX for our print and on-screen PDF and then use XSL-FO to do the reflowable ebook formats. I'd prefer not to do that. So maybe that might be an additional inducement: please help us keep TeX in this. But that's basically what I want to present to you. Are there any questions? And hopefully, are there solutions? We've got another solution. It's not about solutions; I have to talk to you about this afterwards. I have some ideas, but here are the questions. The question is about tools. First, what do you offer authors to produce the XML? Do they just type in an editor like Emacs, or with TextMate?
We don't actually... we've got a few things that we offer for Emacs users and for TextMate users. We've got some macros in there to help them out. But we generally let the author use whatever they want to type the XML in. What do you use? I go back and forth between Emacs and TextMate when I'm authoring, depending on what I'm doing. And the second question: what do you use to convert from XML to all these formats? XSLT. Yeah. I think it's Xerces. What? Xerces? I think that's what we're using. I rarely get into it; I just write my XSLT files and then run everything through. But I think we're using Xerces. Okay. Hans. I'm just going to start with two remarks. First is, if you want the reflow, why don't you just use HTML for these people? Or alternatively, just generate a different PDF in the right size, a different form of it. Why do you want the reflow? Well, because some of the ebook formats require it. Some do take HTML as input, and those are fine. But some don't, and there's no fixed aspect ratio; it depends on what the user sets, so it has to reflow. Look at one of those ebook things: if you just generate it in a nice format fitting the screen, that doesn't solve the problem if somebody is using a mobile phone and then wants to change the size. You need to reflow so you don't have to scroll. But if you have a book which has a little bit more design, with page headings and titles and whatever, how do you reflow that? If you have a designed book... Right. I mean, that's the general ebook problem, which I can't really solve as a book designer. But I can't just throw this to the support department, because they're going to get tons of emails every day saying, hey, I spent $30 on this ebook and it looks like crap on my reader. Granted, it looks like crap; the question is how to make it less crap. Let's continue that half in a private conversation, please. John. I don't know what engine is used to reflow in PDF, but I'm sure it's not TeX. And it would be wonderful if it were TeX or something related to TeX. And if that were the case, then the whole problem would almost certainly be a lot easier, because there'd be a close relationship between what we're doing here and what we're doing there. And whilst this doesn't help you (because the answer I'm giving you is: if I wanted to do that, I wouldn't start from here), we have had an enormous missed opportunity over the past 20 years, in that we have not packaged TeX, or an extension of TeX, in such a way that it can be used, for example, as a reflow engine for PDF. So this problem is actually of our own making, because we have not addressed the issue of packaging the engine, TeX or an extension of TeX, so that it can be used for this sort of purpose. We've missed that opportunity. There's an old saying that in every problem there's an opportunity; here's the problem, and I think there's an opportunity in it, which is one that we've missed. Now I do have a question, which is: there's clearly a lot of interest in what you're doing,
and you're bringing in fresh and interesting problems coming from real opportunities out there. So, would you like to organize a session where people who have a strong interest in this can get together? Sure. How do we do that? We'll work on that in a bit. All right. You said that you were producing your PDFs via the DVI-to-PostScript route. Do you consider changing that, though? Yes. The reason why, as of right now, we're still doing it this way is because the original Pragmatic Programmer and the original Programming Ruby rely heavily on PSTricks for certain things that are on the page. And every time I suggest that we get away from that, there's strong interest in it, and then the very next book, somebody says, oh, I want to do that. And I really haven't had the time yet to reimplement all the stuff that we're currently using with PSTricks into something that I can use with pdfTeX. But yes, I would like to get away from that. It strikes me that there are re-implementations of PSTricks that work with pdfTeX, though the drawing support breaks down. Right. Yeah, I agree. There is some experimental stuff that can be used? Yes, we have some that can do it. Can you please look at this? Yes. I think you said that you have this other solution that can produce reflowable PDF. Yes. But then you said that you want to stay with TeX. My question is: is that a purely religious statement, or are there practical reasons? There are aesthetic reasons. XSL-FO produces wonderful reflowable PDF; it just doesn't look as good as TeX. Yeah. Isn't that simply a property of the reflowable stuff that you produce? No, because I can take XSL-FO and produce a non-reflowable PDF that looks similar to, but not quite as good as, the TeX output. Which is much the same as what I'm suggesting: to have it reflowable, you're not going to get the typographic quality. Probably. I mean, reflowable PDF is a kludge, but it can be done. Right. It's worth knowing how it's done. Yeah. And part of it, too, is pure laziness on my part, in that you saw the number of XSLT files I have to have already, and I would much rather be able to just maintain the PML-to-LaTeX file and get everything from there. And actually, unfortunately, that's what I spend most of my life doing. That's a practical reason. You know, and now I have to maintain an HTML one too, because the Kindle stuff comes off of HTML. Has anybody used a Kindle here? It's utterly unusable for programming books. The Kindle provides you with two fonts, a sans-serif and a serif. There's no fixed-width font. So you can't do code listings. And it converts everything to black and white. So any screenshots you show get pixelated beyond belief. I can show you some horrible shots; we pushed one book through just to see what it would look like, and it practically turned my stomach. But one of our other, non-technical team members said, eh, you guys are too picky, it looks fine to me. So it may be the case that I'm too much of an esthete to be able to do that. I don't know if this makes sense: if you could get XSL-FO, so to speak, off the back end of TeX, would that solve your problem? Sure. I don't know how you would do that, though. Now the second question is: could you do any better than that in terms of quality of the output? Is it possible? In other words, does the optimum route pass through XSL-FO in terms of quality of the output?
I don't know. I don't know. I would have to try it and see what came out. That's a pragmatic answer. Yes, yes. Now, a question for the Germans in the audience: I read a lot of very different books on these devices, but you have to keep changing the pages. Do people really read on one of those? Yes, yes. The reason why: at last year's TeX conference I mentioned that I was in Moscow for several months. I brought a bunch of books with me on my computer to read, and I spent a lot of time reading on my computer. I guess maybe I'm just too old, but I agree with you: I don't find it an enjoyable experience. I much prefer having... I'll have you know I use the Info format: press space and just change the page. Apart from that, when you're all the time scrolling up and down the screen, you lose your position. Yes; well, with my eyes, I was having to have it blown up extremely big. I don't enjoy it, but the young folks seem to enjoy doing it. I do know plenty of people who have read tons of books on their cell phones. I can't do it. I mean, when I had my old... what was it? It was a Palm, I guess, at the time. I read Zane Grey on it, and that's the only thing I've ever been able to do; something light like that I could handle, but anything more technical just gets to be a nightmare. And especially if you've got a programming book, or something like a Photoshop book, where you've got to have screenshots in there. Once you get to these little screens, it's impossible. I am an old fogy, I guess, is what it comes down to. All right.
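Stepping back to the toolchain described earlier in this talk (memoir as the class, hyperref for links, colored code keywords), a minimal sketch of that LaTeX side might look like this. The package names are real, but the keyword macro and the sample text are invented simplifications; this is not the actual pragprog.sty:

\documentclass{memoir}
\usepackage{xcolor}
\usepackage[colorlinks=true,urlcolor=blue]{hyperref}
% a toy stand-in for the pre-processor's keyword coloring
\newcommand{\kw}[1]{\textcolor{blue}{\texttt{#1}}}
\begin{document}
\chapter{A Sample Chapter}
The \kw{def} keyword opens a Ruby method definition;
see \url{http://pragprog.com} for the real books.
\end{document}

In the real pipeline, markup of this kind would be generated from the PML source by the XSLT and Ruby pre-processing steps, not typed by hand.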
|
In this talk, we present the toolchain used to produce the award-winning Pragmatic Bookshelf titles and examine some of the pleasures and pitfalls encountered using TeX, XML, XSLT, Ruby and other open technologies.
|
10.5446/30777 (DOI)
|
Like in Metafont, it is possible to define the coordinates of control points, and macros compose the paths, complete closed paths. Another approach is that I can, by some simple macros, define a path in an explicit way and calculate the coordinates of all control points. There is a crucial thing for Type 1, and probably it is also true for OpenType: it is required to remove overlap, because the final result depends on the PostScript printer or viewer. METATYPE1 generates the font in text form, and the t1utils convert the result to the final Type 1 font. For Ugaritic and Old Persian, I used their standard Unicode numbers. From these sources, FontForge can then generate both OpenType and TrueType: cubic and quadratic approximation of the outline curves are both possible because of the simple design of the cuneiform wedges. In conclusion: there is a sample text for students, and I have a reference in Wikipedia for my older version. Now, as I described, I plan to publish this joined, Unicode version, and maybe to define glyphs for other languages and variants according to historical period. I would like it if the METATYPE1 package were extended with the possibility to produce the OpenType format directly, because on the MetaPost level one can define everything; and there is the question whether it is too difficult to develop programs that could produce OpenType automatically from the MetaPost sources directly. So I have to thank the METATYPE1 authors, Jackowski, Nowacki and Strzelczyk, and of course the author of the FontForge editor, George Williams, and the other developers of free open source software. And thank you for your attention.
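As an illustration only: once such a Type 1 font exists, using it from LaTeX could look like the following minimal sketch. The font name is entirely made up; it is not the actual name of the cuneiform fonts described in this talk:

\documentclass{article}
% 'cuneif10' is a hypothetical TFM name standing in for the real font
\newfont{\cuneiform}{cuneif10 at 14pt}
\begin{document}
{\cuneiform ABC}  % the Basic Latin block of the font; cuneiform signs sit in other slots
\end{document}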
|
The cuneiform font collection covers the Basic Latin (ASCII) block and glyph subsets for Akkadian, Ugaritic and Old Persian, with a current total of about 600 cuneiform signs. An extension for other languages is planned (Neo–Babylonian, Hittite, etc.). All cuneiform sign forms visually correspond to uniform “Neo–Assyrian” shapes. Fonts are produced in two steps. With METATYPE1, the package developed by the authors of the Latin Modern and TeX Gyre fonts, we can generate hundreds (or thousands) of glyphs to assemble a Type 1 font with many glyphs, but with no predefined encoding. The older variant of the cuneiform font collection, made 10 years ago, consists of several separate Type 1 components. A relatively small number of mostly simple and repetitive elements is described by METAPOST macros in three variants. In the second step we construct OpenType using FontForge, the free font editor created by George Williams. Cubic and quadratic approximation of outline curves are both allowed because of the simple design of cuneiform wedges; therefore both TTF-flavored and PostScript (CFF) flavored formats may be generated. We use FontForge scripting facilities; it is also possible to write commands in its internal textual format (SFD), directly or with some pseudo-automatic tools. Unfortunately, the glyph repertoire does not correspond to Unicode, because more than 300 glyphs do not have their own Unicode numbers, and, on the other hand, my fonts cover only about 20% of the Unicode Sumerian–Akkadian cuneiform range (cuneiform signs and numeric signs). The paper describes the cuneiform design and the process of font development.
|
10.5446/30779 (DOI)
|
This talk is about SyncTeX. You've already seen SyncTeX in action in TeXworks on Monday. Today, I will be talking more precisely about SyncTeX itself. SyncTeX is available in pdfTeX and XeTeX in the next TeX Live distribution, and also in MiKTeX. It is not activated by default. There is a new command line argument and a new TeX primitive, and if you set them to 1, SyncTeX is activated; if you set them to 0, it is deactivated. Let us see the context in which SyncTeX was created. First, there was TeX, of course, then pdfTeX. VTeX (Visual TeX) is commercial software by MicroPress, available on Windows. It claims to have been the first engine with embedded abilities to synchronize between the TeX input and the PDF output. Textures is also commercial software, on Mac OS, by Blue Sky Research. It has an embedded ability to synchronize between the TeX input and the DVI output. From the macro point of view, there is the srcltx package by Aleksander Simonic, the author of the WinEdt shell on the Windows platform. This package allows one to synchronize between the TeX source and the DVI output with a good DVI viewer. Ico Obadi wrote vpe, to synchronize from the PDF output back to the TeX input. Finally, I wrote pdfsync, which is a package based on ideas by Piero D'Ancona, an Italian fellow. This package allows one to synchronize between the TeX source and the PDF output. It is well known to Mac OS users, because quite all PDF viewers on Mac OS support pdfsync. Finally, there is iTeXMac, the front end I wrote for Mac OS X. This front end contains a PDF viewer, so I know how to write a PDF viewer that does strange things. Let us now say a few things about pdfsync, and how it works. First, given a word in the TeX input, for this word you have a file name and a line number. This is the input record. Once everything is properly typeset, this word appears in the output, at some page number and some location on the page. This is the output record, and synchronization is just the link between the input record and the output record. And how was pdfsync working? In fact, this link was made by a unique tag. Why a unique tag? Let us recall how TeX works. TeX reads the input file, then parses the text one line after the other. It expands the macros, and when there is no syntax error, it makes internal computations: it breaks lines, puts lines into paragraphs, puts the paragraphs onto the pages, and then, when the pages are full, it ships out the pages. And the exact locations of the text and other material on the page are known only at shipout time. But in the meanwhile, TeX has completely forgotten what were the file name and the input line of the origin of the material. So, in order to track this information throughout the typesetting process, I had to use a unique tag. And the information was stored in special nodes that were created automatically at every math formula, every display, every paragraph. But using special nodes is extremely dangerous. There are mainly two problems with special nodes. The first problem is that adding special nodes to the list can break the line breaking mechanism. In fact, the layout of the document can be different depending on whether or not you are using the pdfsync package. This is exactly a problem that was encountered by the srcltx package and by the source specials option of the TeX engines. The second problem is that pdfsync is not compatible with some extra packages.
For example, it was not compatible with the preview-latex package, and David Kastrup had to lend a hand and make some corrections to pdfsync in order to have both packages work together. But unfortunately, this kind of change was not possible for other, unfortunately useful, packages. This is the second problem. The third problem concerning this design is for developers. In fact, the link between the input and the output is not a one-to-one mapping. In fact, it is a multi-valued mapping: for one input, you can have different outputs. And the developer who wants to support pdfsync in a viewer must choose the correct mapping, which is not so easy. So let's see now how SyncTeX addresses these problems. And the best way to do this is to see a demo. This is a lightweight PDF viewer I wrote while developing SyncTeX. The example is a very well known one. So let us scroll to an interesting page. This one, for example. And if I double-click here, you see in red the point where I clicked. And in the last part of the window, you have page 46, which is not very interesting. And here we have line 522, and just below, the file name, which is math.tex. This is the input. What is really the blue region? In fact, the blue region is just an hbox. And here you have all the hboxes that TeX creates. You also have the vboxes. And this is all information that SyncTeX writes, records. It is not sufficient to synchronize. In fact, there is also additional information. Let me zoom in, in order to see things. Okay. Yes, we can see it. You have two numbers in front of each box. The first one here, 110, is the unique number associated to each file. And the second one is just a line number. But here you can see that all the boxes share the same line number, 532. It's bigger than the 522 that was here. Why? Because those boxes were created when TeX applied its line breaking mechanism. They were created too late. So this information is useless for synchronization. Okay. Let's forget the hboxes. I must find something else. Fortunately, TeX also automatically creates kern nodes for inter-word spacing, and also glue nodes. And those nodes are created very early in the typesetting process, so the SyncTeX information is accurate for them. You've got the right file tag here, and here the line number is correct. So here I can use those nodes to synchronize. What is the magic behind the scenes? In fact, SyncTeX writes an auxiliary file containing a lot of information. Let's go to the top. First, you see the list of all the files. It is a LaTeX document, so there are many, many files opened. And each one is recorded with a unique number. Using this number, I can recover the file name. Sorry, I am trying to scroll smoothly. Yes, the interesting part starts here. And let me zoom in. Okay. This is the information stored in the SyncTeX output file. You have, for example, the magnification and the offsets, which can be different depending on whether your output is DVI or PDF. And then, for each page, there is a record here. The record starts for page number one. And you can see that page number one starts with a vbox. The first number is the file tag. The second number is the line number. And after that, you have exactly the geometry: the position, then the width, the height, and the depth of the box. You can see that inside that box, there is another vbox, inside that another vbox, and then an hbox with its parameters.
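Schematically, the .synctex file being scrolled through here looks something like the following sketch, written from memory: the numbers are invented, the annotations after the percent signs are mine and not part of the format, and the exact preamble keys may differ between versions:

SyncTeX Version:1
Input:1:./math.tex
Input:2:./chapter2.tex
Output:pdf
Magnification:1000
X Offset:0
Y Offset:0
Content:
{1                                  % begin page 1
[1,12:1000,2000:30000,10000,3000    % vbox: tag,line : x,y : width,height,depth
(1,12:1000,2000:30000,8000,2000     % hbox, same convention
k1,12:5000,0:400                    % kern record
g1,12:9000,0                        % glue record
)                                   % close the hbox
]                                   % close the vbox
}1                                  % end page 1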
And here, if I double-click correctly here, you have the whole hbox. And this hbox contains a kern. Let's see how the two parts work together. Is it round bracket for hbox and square bracket for vbox? Yes, yes. And k for kern, g for glue, a dollar for math nodes, et cetera. Let's see something here. Okay... ah, it's not a good one. Zoom out. Okay. Yes. That was not very interesting. Okay, this is the hbox. And this is where you can make the link between the SyncTeX information and what appears in the window. Well, this was not a concrete implementation. Let's see a concrete implementation, in iTeXMac2. This is different from TeXworks, because I added a layer to have a better synchronization. This part is beta, so it doesn't work entirely as expected, but it works quite well. So here is the beginning. If I select the word here, at the top you have a red arrow that points to a word. And it is exactly the same word at the top and at the bottom: at the top, the preview, and at the bottom, the text. And in the opposite direction, if I click on the M of the command here, you have the M of the command here. This is really fine. Well, there are limitations. If I try to find "different"... okay, I find it. I have it at the top and at the bottom, but if I try to synchronize from the PDF, okay, it doesn't work here. Just because the word "different" contains a ligature: the double f is a ligature, so I have to take this into account in order to synchronize in this direction. It is not complete, but it works quite well. It is much better than a rough implementation of SyncTeX. Let's turn back to the presentation. So, SyncTeX in TeX Live. All this started last year, in fact, when at EuroTeX, Hàn Thế Thành said that he planned to implement pdfsync support in pdfTeX. And last year, he sent me an email with a first attempt to support synchronization in pdfTeX. He sent me the source and asked me to write a PDF viewer to support this. Unfortunately, this first idea was completely unusable. So I learned to code in WEB Pascal and made changes. And after many nights, I had a new version that I submitted to the pdfTeX developers list, and they agreed that it would be usable. Then Jonathan took the patch from pdfTeX and translated it into something that XeTeX could understand, and he made it work with XeTeX. And in return, he sent me a change file; and fortunately, at that time I didn't know what a change file was, so I learned what it was and how to write a change file myself. And finally, we ended up with the actual TeX Live implementation. More detail about this implementation: there are two parts. We can say, first, that this is a segmented implementation. All the SyncTeX sources are split into different files, and each file is targeted at one task. First, you have memory management. Remember that SyncTeX must record the file name and the line number, so we need extra memory. In this task, we declare the extra memory. Then we have to deal with this memory. The main thing is when things should be copied, and what should be initialized or not. This is not extremely complicated, except that I had to browse all the TeX code to recognize every piece of code concerned by SyncTeX. And finally, things might be different depending on the TeX engine used. The second point of the implementation is that it is also a shared implementation. SyncTeX is meant to be shared by all the engines. So it is not reasonable to have one piece of SyncTeX in pdfTeX and the same piece of code duplicated in XeTeX.
So you have to share the code. And actually, all the SyncTeX-related code is gathered into one directory, and absolutely no SyncTeX code lies in the engine sources. The second point is that SyncTeX must not break the typesetting process. SyncTeX is an add-on, so it must be possible to build the engines with SyncTeX or without SyncTeX. For example, imagine that a developer wants to add a new feature. If this feature interacts with SyncTeX, things might break. So he would have to develop the feature and at the same time adapt SyncTeX to his feature, make changes, revert, and change SyncTeX again. It is much more comfortable to build the engine without SyncTeX, develop the new feature, and, once everything is done, adapt SyncTeX to the feature, and not the opposite. And actually this is done by a makefile, and I learned to write makefiles. You can see that there are only three locations: this is the makefile to build pdfTeX, and there are only three locations where SyncTeX appears. This is a makefile to build with SyncTeX, and if you don't want SyncTeX, you just replace "with SyncTeX" by "without SyncTeX", you rebuild, and you have an engine that contains nothing related to SyncTeX. The next step is SyncTeX support: how can third-party software support SyncTeX? There is now a SyncTeX parser library that I wrote in C. This is just the code that I extracted from my PDF viewer. It is a C library that is actually embedded in TeXworks. It is also embedded in Sumatra PDF. Sumatra PDF is a lightweight PDF viewer on Windows. The author is Krzysztof Kowalczyk, and it is William Blum who added the support for synchronization. In fact, he sent me an email a couple of weeks ago, and he said: I was frustrated because, on Windows, no PDF viewers were supporting pdfsync. So I worked hard, and I committed pdfsync support to Sumatra PDF. This is very good. I have good news for you, but also bad news. The bad news is that pdfsync is dead, and the good news is that SyncTeX is there. In fact, I sent him the SyncTeX parser library, and two days later, he had made a patch for it. Another method to implement SyncTeX support is simply to parse the SyncTeX output. If you run pdfTeX or XeTeX with -synctex=1 on the command line, or set the synctex primitive to 1, then you have a text output file that you can parse with regular expressions. Actually, AUCTeX and WinEdt do parse the SyncTeX output: for AUCTeX, because some people don't have a PDF viewer supporting SyncTeX; for WinEdt, because some users really want to use Acrobat instead of Sumatra PDF. But in fact, WinEdt and Sumatra PDF work well together. Finally, this is not the correct method. The SyncTeX output file should not be parsed directly. In fact, there is now a new command line tool, also called synctex, which is an intermediate controller between the text editor and the PDF viewer. The text editor asks synctex to ask the PDF viewer to show the correct location. With the next version of the synctex tool, synchronization might be possible on Unix boxes as well, provided that the viewers adopt the synctex command line tool. Finally, let us summarize. The benefits of SyncTeX: it is more precise. More precise than what? More precise. The fact that it is deeply embedded in the TeX engine is really a benefit. There are no longer special nodes, so there is no problem with line breaking, no problem of package incompatibility, and there is no more package needed for synchronizing. As a side effect, it is the same for DVI, XDV and PDF.
There is no difference between them. It does not rely on an external format, so you do not have to manage a package for plain TeX, a macro for LaTeX, a macro for ConTeXt. Finally, it is much easier for developers, except for the one who wrote the SyncTeX parser library. Finally, what comes next? First, the column number. In the input record, I was talking about the file name and the line number. But what about the column number? It might be useful for display equations, where there is no word to click on. But in fact, I am afraid this is something very difficult to implement, because TeX knows nothing about the column number. I am afraid the TeX source would have to be adapted in a very intrusive way. This is something I would not support. Also, in iTeXMac2 I demonstrated that you can already synchronize with words, with characters; that is the route I would take. The second point is the use of the SyncTeX data. In fact, we saw that in the SyncTeX output file we had all the dimensions of the boxes; in particular, we had the depth of the boxes. If you produce some PDF output and you embed it into HTML, you can get the vertical alignment right using the depth of the boxes given by SyncTeX. There are many other things to say, but it is time. I am grateful to the pdfTeX, XeTeX, TeX Live, and iTeXMac2 developer teams, at least the people I had contact with. They were very, very patient and efficient. Thank you.
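For completeness, the activation mechanism described at the start of this talk looks like this in practice: a sketch assuming a TeX Live engine built with SyncTeX, with hypothetical file names:

% From the shell, the new command line argument:
%   pdftex -synctex=1 paper.tex     (or: xetex -synctex=1 paper.tex)
% Or equivalently, the new primitive, set from within the document:
\synctex=1
\input paper-body   % hypothetical content file
\bye

Setting the value to 0 instead deactivates the recording, as described at the start of the talk.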
|
In this presentation, we will focus on SyncTeX, a new synchronization technology now embedded into all the major TeX engines available in TeX–Live. This new design is so efficient and robust that old packages like pdfsync and srcltx are now completely outdated. But this is not the only advantage: SyncTeX also brings PDF synchronization to the whole Unix world.
|
10.5446/30783 (DOI)
|
[The opening of this talk is unintelligible in the transcription.] One of the things that came out is that documentation is difficult, and there's some level of documentation that's missing. The LaTeX Companion addresses a lot of it, but almost a higher level is still missing, like: how do you start? You need some sort of overview of how things work. So I will do some of that in the written paper, but it's not obvious where to go from there. [A further long stretch is unintelligible; recognizable fragments mention the original two-page design, that it was working around 1991, and PostScript output.]
[Largely unintelligible.] ...but subsequently he would not do things for me, which is how I came to be faced with the task about two years ago. There were 1,400 lines of code, and I didn't realize all of the problems I was signing up for by taking on the task; at that stage all the fault handling was weird, and you didn't have packages... [A further stretch is unintelligible.] ...it was mixed in many ways, and I had to understand why, because introducing things was all too difficult; I hadn't understood the degree of the threat... [The rest of this passage is unintelligible; recognizable fragments mention line counts of roughly 300 and 450.]
[This passage is unintelligible in the transcription.]
[The beginning of this passage is unintelligible.] I had to ask somebody how I could do that. I felt stupid, because I've used quite a number of programming languages over the years, and I could not do that. So in terms of the acceptability of LaTeX and where it goes from here: you have to use what people want to use, and you have to build on skills. Saying "you use normal programming here, but when you move over to LaTeX you change your mindset completely and do something utterly different" means there's a huge barrier imposed on new people coming in, whereas a lot of the free software world has worked because the barriers were reduced; and ultimately, if you go back far enough, an awful lot of those barriers were reduced because C became the lingua franca, and if you know C then you can read most other languages. Well, you have to re-evaluate that when you get to TeX. So it would be lovely if LuaTeX would let you say "if x equals y" and change that to "if x not equals y". So, the experience: did it work? Yes, it worked very well. We did three different books based on more or less the same style, with three different authors, and one of them keeps on sending emails, which is a joy: thank you for not forcing me to use Microsoft Word, and thank you for letting me use LaTeX. We had to do a lot of weird stuff for an index. The latest book that's been done is a book about ten different DNS servers, so it's really ten books in one, which makes the index very messy: you have ten indexes more or less intermingled. We were able to put superscripts on entries to say which program each relates to, and that was just a tiny macro change, whereas it would have been a nightmare in any other system. So the things that worked well for us: in spite of its not being XML, there's a huge amount of semantic content in the LaTeX source that you write. It does let you separate content from format in lots of interesting ways. And then there's a completely different book about sustainable energy, which again is LaTeX-based. So it has been the best thing we ever did. I was worried moving from Microsoft Word, which we had chosen initially because its document format is essentially graphical, but what we've done has been maintainable. We've used lots of new packages, we've had no unsolved problems, and in fact the style file just gets smaller and smaller, which is wonderful. There are only two things I think we've lost by moving away from Microsoft Word. One is annotations: it's great to be able to take somebody's book, write comments in it, and have the comments appear in the output but refer back to the source. That works in Microsoft Word because it's a WYSIWYG system. Related to that is live diffs, so that you can have two versions of a document, the one the author sent you and the one that you sent back, and they can go through accepting or rejecting individual changes; that needs live diffs. And then the worrying thing, after what Steve Peter was saying: are we blowing in the wind?
Should we actually be spending all our effort on XML? That's what I would like to get out of this, and it would be a great shame, because LaTeX has been so wonderful for producing books. It really is a dream. And that's it, really. [Comment from the audience:] Yes, just a quick comment on the first question you asked, about comments and differences. For differences there is a great package, latexdiff. It's on CTAN; you can explore it; it does exactly what you need. Yes, I heard it mentioned yesterday. And the second question, about comments: if you want to put comments in PDF there are several free tools which can do this, like PDFedit and so on. So you can add comments to the PDF and then exchange them. That's really great, but I'd like to add the comments to the DVI and have them end up back in the source. [Comment:] Why don't you just use \marginpar? Put in a \marginpar, and this will be your comment. You can write a comment which will be a \marginpar in your draft version and a no-op in your production version. And that would solve it for the moment. Okay. I'm a beginner, so that's all I can say. Good, well, I'll do that. And thank you for your help earlier. All suggestions are very welcome; this is our business. Okay. Thank you very much.
|
We recently started re–using LaTeX for large documents — professional computing books. Years ago (1987) I had personally used a LaTeX 2.09 custom class (although it wasn’t called a class file then) for a book I was writing. The class file was written by a colleague, because I found it impossible to understand the low–level TeX mechanisms needed then. In 2006 I wanted to use this class for a new book of my own, and also as the basis for all our camera–ready books submitted by other authors. I had the choice of converting it to a new .cls file, which much of the documentation suggests is the thing to do, or writing an “add–on” .sty file for the standard book.cls. We decided to try the .sty approach, as there were several packages that seemed to do most of what we wanted. We found that the .sty approach was straightforward, and much easier than expected. The resulting style is much shorter, easier to understand and maintain, and is much more flexible than our old one — we can use most standard packages to add extra features with no effort, because we still provide all the hooks that add–on packages rely on.
|
10.5446/30787 (DOI)
|
[The opening of this talk is unintelligible in the transcription; the speaker introduces MathTran as a web service for putting mathematics on the web, used from the web browser.] So, what is MathTran? MathTran is a web service. [Partly unintelligible; recognizable fragments compare it with mimeTeX, a C program that roughly speaking emulates TeX, and with Google Charts, which can also render TeX-notation formulas as images.] MathTran serves PNG images which carry the TeX source from which they were generated, together with the dvi and log outputs due to that source. [Partly unintelligible; the speaker discusses the start-up cost of running TeX for every request.] ...you have to increase the speed by a factor of ten for this example. So, all users of the public MathTran server share a single long-living instance of TeX. TeX is running on a machine over in Milton Keynes, waiting for somebody to do something. Everybody shares that instance, and that's how we avoid having to start TeX up for every request. [The rest of this passage is unintelligible.]
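A minimal Ruby sketch of recovering the embedded metadata from a MathTran PNG. It assumes the metadata lives in standard PNG tEXt chunks; the keyword names MathTran actually uses are not given in the talk, so the sketch simply dumps every keyword/value pair it finds.

    # Walk the PNG chunk structure and collect all tEXt chunks.
    def png_text_chunks(path)
      data = File.binread(path)
      raise "not a PNG" unless data.byteslice(0, 8) == "\x89PNG\r\n\x1a\n".b
      pos = 8
      chunks = {}
      while pos < data.bytesize
        length = data.byteslice(pos, 4).unpack1("N")
        type   = data.byteslice(pos + 4, 4)
        if type == "tEXt"
          keyword, text = data.byteslice(pos + 8, length).split("\0", 2)
          chunks[keyword] = text
        end
        pos += 12 + length   # 4 (length) + 4 (type) + data + 4 (CRC)
      end
      chunks
    end

    png_text_chunks("formula.png").each { |k, v| puts "#{k}: #{v}" }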
[The beginning of this passage is unintelligible; the speaker discusses making the public service secure against hostile input.] I wanted to offer rewards for breaking it, but I wasn't allowed to, because it's public money. So, I'm going to have to scroll this down, which is going to be another miracle if I manage it. Here we go. Okay. So, to get that to work, well, you could write everything out longhand, but I've defined a custom programming environment where def means the real \def command, wherever I put it, and backslash frac means the token that the user can get. So, in this example here, \frac is the user's frac command, the control sequence frac, whereas def without the backslash in front of it simply means the def primitive that we already know about. So, we take all the primitive control sequences, move them somewhere else, and scrub them out from here, so nobody can get to them unless they can get a backslash, because they all begin with a backslash. Then we have a custom environment that allows us to write here and here as we please, and the dirty tricks at the top don't work. I shall show you that. Okay. So, here it is, the MathTran server, waiting for some input. It's waiting for a formula. Let's do it. Alpha. There it is, all the way from Milton Keynes. All right. Alpha sub 1. There it is. Alpha sub 1 sub 2. Do you see, we've got an error message at the bottom? Yep. Can you read the line number? I can. It's 1148. Right? Let's change it. You can see the line number is now 1233. Okay. It was 1148, now 1233: some other people have been using it. It's not just me who's using it. Let's just see how busy we are. 1233... 540? It's restarted. Occasionally it has to restart; we have to work on that to make it better. Let's try again. 640. So, people are using it as I'm talking, and it's all a single instance. If you try and do something dirty, like \global \let \alpha \beta, what do we get? We get undefined control sequences, because, as I told you, I put them somewhere else. They're not there, so you can't get them. Can you get them from somewhere else? Well, you'd have to change the cat code of underscore, and I make sure that only one character, which I think is null, allows you to access that. So, as long as the input stream doesn't contain a null, it's okay. And it's very easy to filter the input stream and say: if it contains a null, there's something dirty going on, and I'll throw it away. So, I think it's secure. Okay, so anyway, let's go back to this. Sorry, let's go back to here. So, that's the programming environment. People ask me, do you support LaTeX? And I say, well, I will as soon as we have a secure version of LaTeX. Plain TeX was something like 2,000 lines to make secure; I could do it in a couple of days. Making LaTeX secure is a bit more work. Now, let's move on a bit. I'm going to race through. Okay, sorry. Next slide. Forward. Oh, yes, expanding user-defined macros. This is going to be a very quick aside, but it came up. So, we've got this rich programming environment.
I think this is of broad interest, way beyond what MathTran does: it makes it easier to code tricky things. It compiles the source into macros. The current version uses something called Active TeX, which I did a long time ago, and which makes all characters active while loading macros. I want to redo it in Python, because I think that's a good idea. Well, anyway, yesterday a problem came up between Frank and Manjusha about expanding user macros. What we do is use existing technology to read the whole file one token at a time. If the token happens to be \newcommand, we say: okay, let's make it do something like its normal thing, and we'll store some information. If we subsequently come to one of those commands... well, sorry, I'm not explaining this well; I'm going to just skip it and say that it can be solved fairly easily using this technology. There's already something existing called tex2tok which, so to speak, copies all the tokens from input to output (it's a cat program), and a fifty-line extension will solve the problem. I am, of course, skipping over some details. This is another aside, which is... [Moderator:] Because we started on time, if we start on time for the next one, you have five minutes. Oh, of course. How long do I really have? [Moderator:] If we keep on track, you have five minutes; if not, then you have ten. Let me start it late, then. Okay. I guess this had better go on this side. I was at the Python conference, and this is what they're interested in. They want something... oh, no, sorry, that's something else. Let's move on. Okay. This is how to put math on web pages with MathTran: you simply write an img element with the right alt text. That's all you do, except you need to put in a bit of JavaScript. And if there's a TeX-to-MathML service, and you want to use that service instead, you simply use different JavaScript. So you've got an immense amount of future-proofing here, because you're only writing down what you need to write down, namely the TeX formula, and how we're going to get there. So let's move on. [Question:] So this means that whenever a user accesses this, it makes another call to MathTran? That's right. We're serving about two million images a day at the moment. Sorry, no: that's our maximum capacity. We're serving about one million a month at the moment; we could serve two million a day. [Question:] I thought there was a problem with JavaScript. Wouldn't I need to allow you to run programs on the same site? No, no, JavaScript just doesn't have that difficulty. You can get the JavaScript from wherever you like. Maybe you shouldn't, but you can. Okay. Let's show you a few... okay, let's move on to the demonstrations. So... come on, come on, come on. Okay. Do you see the button forming at the top here? Let's just type away, and do you see that as I type, I get a preview? Do you see? Okay. The next thing we've got is a help system here. Let's go to the index of equations. Here are some equations; they are generated on the fly. We can take this equation here and edit it as a template. Here we've got material, and again as I type, things happen. So if I want to change the equation to 40, there we go. So this is something that students can use, and I think this is something that the Open University was willing to put some resources into. So there's that. We've also got a proof-of-concept editor here. This is a WYSIWYG editor, such as you can get in many places.
And again, if I go in here and start editing, I can see what I'm doing. So we can make a plug-in that uses the web service and goes into this sort of editor. This is a proof of concept that you can do it. And I think the final thing I want to show you is a demo, if I can get there. Oh yes, all right. This is all a bit bald, but... let's put in some math. This is mostly Christoph Hafemeister's work, done in the Google Summer of Code. So we'll type a backslash, and we get some help. And we'll type an A, and we get a list of all the commands that begin with A. And we'll say that we want alpha. So we select alpha, and the alpha goes in. And you see... okay, we've got the backslash alpha there for technical reasons; I won't go into it. We can make it disappear. And then if we want to put in a beta, we'll type backslash B, and then we find a beta. So this is jolly nice, I think, for somebody who has not used TeX before. And if we go back to this one here, you see I've got a little help system here, complete with an index. So here are Greek letters, for example, in the index. Now if we go to what Christoph did, which might be here... no, it's not, it's here. Okay. When we're getting the information up here, we can also... so here, as we scroll up and down, we can put information on the right-hand side about all these commands. So let's move on to see what else we've got. We've got this. This is the blog, which is worth looking at. I'm racing against time. So we've got two slides left. When do we stop, Cheryl? [Moderator:] You can finish your two slides, minimum. And then questions? Yes. I think I missed one, actually. So I showed you autocompletion, and I think one of the important things with the autocompletion is that we want to align with TeXworks and other systems: we want to share syntax and semantics for help files, so you can author for one system and it goes everywhere. And I'd really like somebody to be able to learn TeX in a web browser and then move on to TeXworks or whatever without a great big bump; I want it to be the same environment across. So we've got that. And this is actually a LaTeX3 goal, stated by the LaTeX3 project in 1997: to be improved by adding an effective interactive help system. That's what I want to produce, and I want it to work with TeXworks and other things. And the final thing is what to do next. Please help with help. At the Open University they want... well, I believe that they will want, let's put it that way... to put resources into this, because they have thousands of students every year doing mathematics, and they need to be able to put mathematics into their forums and so forth. So there are resources there. And one of the things that is quite interesting, I think, is Scott Pakin's excellent LaTeX symbol list. If we could get an online version of that, I think that would be quite nice. I don't know how to get everybody working together, but getting the help information together is actually a starting point for a web-based online TeX course, which I think is a good goal for the community. I'll repeat my final point, which is that I'm working two days a week on this sort of thing. I can't get proper cover internally, because TeX skills are so special.
So if you've got TeX skills and you're interested in doing consulting work for the Open University, or if you've got an interest in TeX and you've got Python or JavaScript skills and want to do some consulting for the Open University, you should approach me and I'll give you my card. Cheryl, can I take some questions now? Okay, one or two questions, and then we'll let them off. Any questions? Yes. [Question about supporting French and other languages.] Ah. JavaScript is not my favourite programming language, and there's already quite a bit of difficulty with the various variants. I would like to produce something that is international. Yes, you have to have international JavaScript. I don't know how to do it, but we have to do it: we have to have French and German interfaces, and we have to have help in all the languages, at least for some things. Another question. Frank. [Frank:] I can see how you secured the plain TeX version. I wouldn't say it was easy, but it helps that plain TeX is plain TeX. I don't believe that there's any chance of doing a similar thing for a system like ConTeXt or LaTeX. In other words, the statement you made, that you would like to offer that kind of service at the LaTeX level, would mean that you have to take away actual functionality that LaTeX offers, which is having complex structures that you can't really take away without effectively taking away LaTeX. Do you think there is a chance, or do you think those two conflicting goals make that kind of interface impossible for anything further than, say, individual formulas? I think it is possible. I think that some things may have to change, and you might not get absolutely everything. But the main thing we want it for is the mathematics, of course, and the mathematics is possible; I think with the mathematics one ought to be able to do it. I would like the TeX macro programmers to think: there's actually this new platform, the secure platform, and if we can write code that works on both the secure and the ordinary platforms, that would be a plus, because the secure platform does give us something. One more? No. I'm sorry. Thank you.
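A minimal Ruby sketch of the embedding approach described in the talk: an img element whose alt text carries the TeX source, with JavaScript left to upgrade it later. The endpoint URL and the "tex:" alt-text convention here are assumptions based on the talk, not a documented API.

    require "cgi"

    # Hypothetical helper: render a formula as a MathTran-backed img element.
    def mathtran_img(tex, base: "http://www.mathtran.org/cgi-bin/mathtran")
      src = "#{base}?tex=#{CGI.escape(tex)}"
      %(<img src="#{src}" alt="tex:#{CGI.escapeHTML(tex)}">)
    end

    puts mathtran_img('\alpha_1 + \frac{x}{y}')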
|
In 2006/7 I developed and set up the public MathTran web service. This was done with funding from JISC and the Open University. It provides translation of TeX–notation formulas into high–quality bitmaps. In April 2008 it served 48,000 images a day for a growing range of sites. After tuning, the server could provide about 2 million images a day. It takes about 10 milliseconds to process a small formula. MathTran images contain rich metadata, including the TeX source for the image and the dvi and log outputs due to that source. This makes it straightforward to edit and resize such images, or convert them to another format, such as SVG or PostScript. MathTran, used with JavaScript, makes it considerably easier to put mathematics on a web page. In particular, the author of the page does not need to install any special software, and does not have to store thousands of image files. The MathTran project is now focussed on the authoring of mathematical content. It has produced a prototype instant preview document editor. Funded by the 2008 Google Summer of Code, Christoph Hafemeister is developing JavaScript to provide autocompletion for commands and analysis of TeX errors, all integrated with an online help system embedded in the web page. Separate work is focussed on developing MathTran plugins for WYSIWYG editor web–page components. This talk will present progress and prospects. It will also discuss some of the broader implications for the TeX community and software, such as - People using TeX without installing TeX on their machine. - Help components for web pages. - Integration with third–party products. - Standards for TeX–notation mathematics. - Learning and teaching TeX.
|
10.5446/30910 (DOI)
|
So once we've identified what it is that we want to build, we need to roughly (and the key word here is roughly) decide what we want it to look like. One of the worst things you can do is pin yourself down to a very exact API too early on. When you're refactoring towards a new API in an existing system, it's very important that you have good tests. And then the final step is actually implementing it: we build those objects, we use them internally, we compose them together manually where we need to, and then the DSLs will come from where we find duplication or pain. So before you can design a great API, the first step is to identify the API that you're missing in any large legacy code base; and make no mistake, Rails is just a large legacy code base. All the strategies that you use everywhere else in your code still apply here. You can find plenty of concepts that are duplicated across the domain. Some of the smells we'll look for are methods with the same prefix, code that has similar structure, or, the big one that you find in Rails a lot, multiple modules or classes that are overriding the same method over and over and over again and calling super. So one of these concepts that we found in ActiveRecord was the need for modifying the type of an attribute. Say, for example, you have a price column on a product table and you would like to represent that as a money object instead of a float. It might look something like this. We're overriding the reader and the writer, checking to see if anything's nil. (We have to sort the lights out so the slide doesn't wash out. Is that better?) So we're checking at both stages to see if the value is nil, but we're basically just wrapping the value in a money object in the reader and extracting the amount from the money object in the writer. And when I see people write code like this, even with really experienced developers, everybody's always wondering: if I do this, am I going to break Rails magic? And it actually kind of depends. You probably won't break anything, but there are some things that might work unexpectedly. For example, we have a before-type-cast version of our attributes, and when you override the writer you're actually doing some casting before handing the value to ActiveRecord, which thinks this is the value from before casting. The form builder expects certain values to be in a certain structure, some validations expect things to be a certain way, and this might just work a little bit differently than you want. Dirty checking also relies on the before-type-cast value. Even if you don't break things, there might be other modifications of this behavior that are just flat-out impossible today. You might want control over the SQL representation of your money object: if you add in a currency and store the value as a string instead of a float, you need to parse the parts out and combine them back together when you go to the database. But the really hard one right now is that you might also want to be able to use these objects as values in a query. Being able to pass them into a query is incredibly useful. So Rails overrides the types of attributes internally all over the place right now, and you might be wondering: if this is so hard, how does Rails do it? And if you guessed "with a giant pile of hacks", you would be right. A giant pile of hacks.
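For reference, a minimal Ruby sketch of the hand-rolled override described a moment ago. Money here is an illustrative stand-in value class, not from any particular gem, and price is assumed to be a float column.

    Money = Struct.new(:amount)

    class Product < ActiveRecord::Base
      # Reader: wrap the raw float in a Money object.
      def price
        amount = super
        Money.new(amount) unless amount.nil?
      end

      # Writer: extract the amount before handing it to ActiveRecord.
      def price=(money)
        super(money.nil? ? nil : money.amount)
      end
    end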
So we're going to be looking at some Rails internals now; try not to be afraid. And I'm not going to give you the chance to escape: I'm closing the doors just in case anyone tries to run. Don't worry too much about the specifics of the code. I know the code on here is really small, and the very, very specific details of what it's doing aren't too important here. This is a feature that's on by default, where we convert time values that you pass into ActiveRecord to the current time zone. It's one of those things that we just sort of do, and most people don't know that it's there, but it is on by default in most applications. So we're just overriding the writer here, but we are modifying a bunch of different behaviors. The first thing we're doing is converting to the current time zone. Then we're basically completely re-implementing dirty checking: the second and third lines in this method are one-to-one copy-pasted from inside of the dirty module in ActiveRecord. And once we've done that, we're jumping through even more hoops so that we can maintain the before-type-cast version of the attribute. So in this case, we're looking at the common concepts and some common smells. We're overriding the writer. We're duplicating a lot of code. And this is a relatively small behavior change, but it has to jump through a lot of really complicated hoops in order to do it correctly. It's also important to note that code written this way introduces a lot of subtle bugs. A lot of other modules may be trying to modify the type of this attribute in very unexpected ways, and the bugs are hard to detect once behavior is scattered all over the place. Another place where we modify the behavior of the type casting system is with the serialize macro. Here, ultimately, we're overriding the method that gets called internally to perform the type casting, instead of overriding the reader and writer explicitly. Unfortunately, this module wasn't this simple when we got started. Here's some more code from the file. And more. And I think you get the picture. This module literally overrode every single method in ActiveRecord containing the word "attribute", and there are five more slides of this code that I left out for brevity. So in this case we are not explicitly overriding the reader and writer; here we are duplicating code from other parts of ActiveRecord, we're jumping through a lot of hoops, and we're overriding literally everything. And this file was the cause of so many bugs in 4.2 and earlier. One of the things I didn't show is that this macro actually ends up directly modifying the values in the columns hash, which is problematic for reasons that we'll get into a lot later. Another example is enum, where we're representing an integer as a known set of strings. Here we're overriding the writer method. We're also overriding the reader method. We're also overriding the before-type-cast version, and there are several others. And once again, enum was a large source of a significant number of bugs, disproportionate to the size of the feature. So we've found our missing concept: attribute types are overridden everywhere. And one of the things you might be thinking is: well, if we want to do this so much, maybe other people want to be able to do this too. So let's talk about what type casting is. Type casting is when you go through and explicitly convert a value from one type to another. Here's a very simple example where we have a value, which is a string, and we want to convert it to an integer: we call to_i.
In ActiveRecord, what we do is not actually type casting, it's type coercion, which is the same thing done implicitly. So here's an example using ActiveRecord. You have a user model; age is presumably an integer column in the database. We look at that and decide that whenever you assign a value to the age attribute, we're going to convert it to an integer. Now, the reason that we do this is because ActiveRecord was originally designed to work with web forms, where you're going to assign strings to attributes, and having to cast these manually would be a pain. Not just for integer types: something like date can be significantly harder. We didn't want to have this code littered all over our controllers, so ActiveRecord's type coercion was born. The cases we handle today are much more complicated than that, but if you go through the history of how this evolved, everything can be traced back to that original limitation. Now, in 4.1 and earlier, the only way you can have a coerced attribute is if it's backed by a database column. We want to be able to hook into this behavior and modify it. So now we're getting to step two: we're going to roughly identify what we want it to look like. And this is the simpler case. We're going to have a product model, and we know a few things about our API at this point. We're going to need to call some method; in this case we'll call it attribute. We're going to need to give it the name of the attribute and some marker for what type we want it to be. Now, this is very similar to what you might find in DataMapper or Mongoid, which have similar APIs. But we're going to avoid over-specifying the API at this point, and the really nebulous part is going to be how we pass in the type directly. So at this point, all we know for sure about our implementation is that we're going to need to introduce a type object into our system. But presumably that alone is not going to be enough for a reasonable implementation. We don't want something that's just a little bit less crappy; we want something that we can really be proud of and know we will be able to maintain in the future. So we're going to start by composing the objects in our system manually. The only one we know of is our type object, but we're going to be looking for places to extract collaborators and compose them, to make our lives easier. Before we start introducing the API, we need to say a few brief words about the fact that there are some rules you need to follow. Rule number one of refactoring is: have good test coverage. Rule number two is: have good test coverage. Rule number three: see rules one and two. So on the next couple of slides (again, the code will be very small; the specific details of it aren't important here) there's a giant case statement. Like many parts of ActiveRecord, it's a giant case statement switching over a set of symbols. And this is the entire type system in 4.1: we call a bunch of class methods based on a symbol that we had earlier derived from the SQL type. And you'll see at the top of this there's a very small comment: "Casts value (which is a string) to the appropriate instance." And, I mean, you can think of a lot of ways to pass in a value that isn't a string. That was one of the most misleading comments I've seen.
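To give the flavor of what's being described, here is a paraphrased Ruby sketch of that 4.1-era case statement: one big switch keyed on a symbol derived from the SQL type. This is simplified and self-contained, not the literal Rails source.

    # Paraphrase of the old column type casting: one case statement per value.
    def type_cast(type, value)
      return nil if value.nil?
      case type
      when :string, :text then value.to_s
      when :integer       then value.to_i
      when :float         then value.to_f
      when :boolean       then %w[1 t true].include?(value.to_s)
      # ... and so on, for every known column type
      else value
      end
    end

    type_cast(:integer, "42")  # => 42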
So we know that we're going to introduce a type object, and we know that type casting currently lives on the column. First step: let's give the column a type object. We add a constructor argument, we pass in nil everywhere, and we just run the tests. And that was the very first commit that went in going towards this. It's a tiny, tiny step, extracting out more and more from what we know, which is that we need a type object and where it's going to live. By injecting it into the constructor of the columns and finding where the column objects are being constructed, this also points us at the other portions of behavior that we're going to need to modify. Where are we looking at the SQL types for the columns? Where are we constructing these? If we're injecting the type object so that we can modify it later, these surrounding bits of code are all going to have to change. So we go through our system and replace all of these case statements, which are all over the place in the column class, and we slowly move these methods onto the type objects. At this point, we have introduced a mapping system into our connection adapters, which we're not going to look at in detail, but it takes over the responsibility of looking at the SQL type string and deciding whether to build an integer type, a string type, or a timestamp type. So we have a place that we can start moving all of these case statements to. We go through our system one by one, and we just remove each case statement. In each of these diffs, we're removing a giant case statement and adding another method to the delegation block at the top of the file. This is what a simple type object looks like at this point in the refactoring: this is the string type, which has almost no behavior attached to it. Now we've refactored our system into something that's a little bit easier to override, and we can start looking at actually implementing the API that will let us hook into this. The simplest case we could start with is changing the type of an attribute from string to integer. So let's write a test. This is what the test might look like: we create a model with a schema that will create two attributes with the same type, we say that we want to change the type of one of them, and then we test the coercion behavior. We've actually written the first invocation of our API, so let's take a little bit of a closer look at it. We're starting with the simplest thing that could possibly work. We know we're going to have a type object, so we just pass the type object to our method directly. We could use a constant or a symbol or some other marker for the type, but for now we're going to keep it very simple and very explicit.
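Roughly, the first test described might look like this in Ruby: two string columns, one of them overridden to be an integer. The class and column names are illustrative, and the snippet assumes a table with those two string columns already exists.

    class OverloadedType < ActiveRecord::Base
      # Pass the type object itself; no symbol, no DSL.
      attribute :overloaded_string, ActiveRecord::Type::Integer.new
    end

    model = OverloadedType.new(overloaded_string: "1", plain_string: "1")
    model.overloaded_string  # => 1   (coerced by the overridden type)
    model.plain_string       # => "1" (still a plain string)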
This actually turns out to be a design choice that sticks with us through the rest of the refactoring, and there are a lot of benefits to giving an API an actual object. You can understand what's happening much more easily, and the API becomes much simpler, and I'm not just talking about from an implementation point of view. When you give me an object, you presumably have an inkling of what behavior can be modified by this API: the object you gave me has a known set of methods on it, and I presumably cannot change the behavior of anything that isn't calling one of those methods. And every DSL that you add has a cost. When you do add a DSL, you want to try to avoid adding DSLs on top of your DSLs on top of your DSLs. There's a lot of cognitive overhead for what can get modified and where: you basically have to memorize every DSL that you introduce into your system, and understanding plain Ruby stops being enough. The line between being helpful and being too magic is very, very thin. So we can come up with a very serviceable implementation early on by overriding the columns hash, the same thing serialize does internally. But it feels wrong: we're not changing the schema or the structure of the model. However, we want to take small steps if we possibly can, and we want to get to a working implementation of our API as quickly as possible. But if we just try to modify the columns hash directly, we run into another problem. This is how we look up the columns and the columns hash inside of ActiveRecord::Base, and when you call either of these methods, they go and execute a query immediately. That means we can't actually use this inside of any class macros. It's very important that you be able to load your class into memory, load the definition, and not need a database connection to do it. For example, on Heroku when you deploy, your assets are precompiled; that loads up the environment, which loads all of your ActiveRecord models into memory, but you won't have a database connection. So we need our implementation to be lazy. And when you find that you need laziness in your system, I find it very important to separate the lazy form from the strict form and have both available. So here's roughly what the code looks like at this point. At the top here we have our attribute method, which is the lazy version. Below that we have define_attribute, which is the strict version. And then, after the schema has loaded, we go in and override all of the columns that we want to modify.
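A Ruby sketch of that lazy/strict split, in shape only; this is not the actual Rails source, and all names here are illustrative.

    module AttributeSpecifications
      # Lazy: just record the intent; safe without a database connection.
      def attribute(name, type)
        pending_attribute_types[name.to_s] = type
      end

      # Strict: apply immediately (requires the schema to be loaded).
      def define_attribute(name, type)
        attribute_types[name.to_s] = type
      end

      # Once the schema loads, drain the pending definitions.
      def apply_pending_attribute_types!
        pending_attribute_types.each { |name, type| define_attribute(name, type) }
      end

      def pending_attribute_types
        @pending_attribute_types ||= {}
      end

      def attribute_types
        @attribute_types ||= {}
      end
    end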
Now, unfortunately, for most of our cases we're not just replacing the type of an attribute completely; we want to modify the existing type. Serialize might be backed by text, it might be backed by binary. So we really need decorators. But again, this needs to be lazy: we can't go and get the current type at the point of the call, because we don't know the current type, because we haven't gone to the database. Now, decorators are not an API that's going to be public in Rails 5. However, when you are building these layers on top of each other, make your internal APIs just as nice to use. As a maintainer, I want to be able to understand the system and have the same simple, composable APIs available to me internally that users have in the public-facing API. There's a lot of code that I'm leaving out for brevity, but the attribute type decorations collection is going to be an actual object in our system, not a hash, even though we're calling merge and other hash-like methods on it. It keeps track of the order that decorations are defined in, and other complicated things. And one thing to note in this design: when you're designing a class macro, one of the important things is that it's idempotent. If you call it a second time with the same arguments, it should not modify the behavior multiple times. So we're passing the name of the decoration in as an argument, just a name for the thing we want to decorate with, so that we can differentiate one decoration from another. That way, if we use this for serialize internally and you call serialize twice, you don't convert a thing to JSON and then convert that to JSON again; you replace the original decoration. So this is what using this API starts to look like as we consume it internally. We give it a block, we give it a name, and we look for any attribute that we previously defined as one whose time zone we would convert. We then create a new type object that wraps the original and, in its cast and deserialize methods, does the time zone conversion. Now we can do the same thing for serialization. However, in this case we're not basing it on whether it's a time column; we're basing it purely on the name. When you call serialize, it's serialize :foo, and you might say JSON instead of YAML. So we can pull this out again: this seems like a common pattern, wanting to decorate purely based on the name instead of based on other properties. So we pull this out into another API internally. It's the same thing, but it takes the name of the attribute instead of a block to compare with. And this is what serialize looks like in 4.2. The entire file has basically been deleted. I wanted to put the diff of this file on a slide, but it was so huge with all of the red that the dots were one pixel tall and it filled the entire slide. And this is what the type object that we extracted from it looks like. There's code there, right? It's not zero code, but it's significantly smaller than what was there before. It turns out that most of why those internals were implemented the way they were was just because you had to know about every possible method that can affect type casting. So we're building our APIs on one another: we built a simple implementation of taking a type and an attribute and replacing the original type with the new one; on top of that we built a thing we could use to decorate an existing type; and on top of that we built an API that represented a common pattern for doing so.
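A Ruby sketch of those internal decoration APIs as just described. These stayed private to ActiveRecord, and the exact names and signatures here are assumptions reconstructed from the talk, not a public contract.

    class Product < ActiveRecord::Base
      # Matcher-based decoration, as used for time zone conversion: the
      # lambda picks the attributes, the symbol names the decoration.
      decorate_matching_attribute_types(
        ->(name, type) { type.is_a?(ActiveRecord::Type::Time) },
        :time_zone_conversion
      ) do |subtype|
        TimeZoneConverter.new(subtype)  # wraps cast/deserialize
      end

      # Name-based convenience built on top, as used by serialize.
      decorate_attribute_type(:preferences, :serialize) do |subtype|
        ActiveRecord::Type::Serialized.new(subtype, YAML)
      end
    end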
Once we introduce an API into our system, it should be universal. We were modifying the columns hash internally, which implies that the columns hash holds a lot of additional information that is useful to type casting; at this point it really doesn't separate the idea of a typed attribute from the database schema. So what we had to do in ActiveRecord was go through and introduce internal APIs that abstract that information away from the columns hash, so that eventually we could separate it out, and so that there was a single, canonical way to access the type for an attribute. Now, we're not going to look at all the diffs for this, because it took about a year and required rewriting a lot of ActiveRecord and a lot of Arel. But this is what the schema inference code looks like in Rails today, on master. We're no longer defining all of the behavior of ActiveRecord based on the columns hash. We have a single method where we load it up, we loop over it, and then we just call the public API. So when ActiveRecord automatically determines the shape of the attributes and their types from the schema, it's just doing something that we're giving you the ability to do as well. And we also started to find several other objects we could introduce into our system that made the management of state in ActiveRecord much easier. This is one of them. It's called Attribute, and it handles the memoization and the transitions between the various states that an attribute can live in. It manages the types. We found that these objects started to be known about everywhere, so we introduced a collection object to handle the transitions between them, and that is the thing your attributes actually live in now. And most methods inside of ActiveRecord very quickly changed into small one-off things that just delegate to this other object. In a lot of ways, it feels like ActiveRecord internally has become a really bad implementation of the DataMapper pattern hiding behind a layer of ActiveRecord interaction, which I think qualifies it for worst gem non-gem of all time. And the general idea that we're going for is to remove all of these modules upon modules upon modules that are just overriding behavior over and over again. We found a common behavior that needs to be modified frequently, so we pulled out a new object in our system. When we need to add additional behavior on top of that, we can just use a decorator. We can use the object-oriented principles that we all know and love. And when you have an object, again, it has an interface; you can figure out what it can possibly change. An API looking simple, or having simple invocations, is not the same thing as it being easy to understand. Here's the pathological example. I have a product, and product belongs_to user. If I change the user's name and I save the product, did the user's name get changed in the database? Raise your hand if you think you know the answer. Trick question: it depends on whether the product is a new record. But that's sort of my point. belongs_to? I wouldn't even think that modified save if I didn't just know it. There's absolutely nothing here that indicates what it could change. Certainly, if I see that I'm calling the user method and there was a belongs_to :user, that's fine. But if I want to see what could possibly modify save, where do I look? Do the docs for save list every possible class macro that can modify it? Do I have to go look at the docs for every class macro of every module on this class to see if it might modify save? It's also important when developing these APIs to have a contract. So here are a couple of things that I think should be universally true for attributes in ActiveRecord, which are not true in 4.1 and are true in 4.2 with this refactoring as the API. I should also mention that this API exists, mostly finished, internally in 4.2; it's not going to be public until 5.0, but most of this work went into 4.2. One of the things that we want to be universally true: when you assign a value to an attribute and then read that value back out, it should never change based on saving and then reloading from the database. If you assign the same value to a model as what's already there, the model should never be marked as changed. If you just call new on a model and don't give it any attributes, it should never be marked as changed. And for any possible value of an attribute, when you pass that value to where or find_by or any of the finders, you should get that model back.
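That contract, restated as minitest-style assertions in Ruby; the model and attribute names are illustrative, and a working database connection is assumed.

    require "minitest/autorun"

    class AttributeContractTest < Minitest::Test
      def test_contract
        product = Product.create!(price: "3.14")
        value = product.price
        assert_equal value, product.reload.price    # survives a DB round trip

        product.price = value
        refute product.changed?                     # same value: no change marked

        refute Product.new.changed?                 # fresh record reports no changes

        assert_includes Product.where(price: value), product  # any value is queryable
      end
    end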
So this is the point where I was supposed to have the big conclusion and the aha moment. I don't really know how to end this talk, so, questions. All right, go ahead. [Audience:] Thank you. I was curious what the performance impact is now; it seems like you're using a lot more new objects and stuff in terms of typecasting. So the question was: what is the performance impact? It looks like we added a lot of new objects. This was actually a very common concern that came up a lot during the development of it. If you saw me at RubyConf this year, you might have seen that I was in the hall the entire time, because right before RubyConf we had gotten a report that 4.2 was twice as slow as 4.1, and I was fixing that. So I have another branch where I removed the objects and replaced them with singletons, and I saw no difference in performance. We did introduce the new allocation of the attribute objects, but we removed several hashes which were string-keyed, and we can't actually guarantee that we're getting frozen strings coming in. So by replacing the multiple hashes with this attribute object instead, we were able to reduce the number of string allocations, and it comes out to be about the same. There's other low-hanging performance fruit that is becoming more possible because the internals have changed to this new structure. Dirty tracking can be moved to this attribute object, which knows much more about whether or not things could have possibly changed, so we can do fewer checks, stuff like that. [Audience question, partly inaudible, from the maintainer of a third-party adapter, about the contract the adapters must satisfy.] So the question was from the maintainer of the Oracle enhanced adapter, and it was: what is the contract that gets published to the connection adapters? There are a couple of different things that we introduced with the connection adapter. The first one is a method that we need to be able to call to look up the type for a given column object. That's actually the only method that we were calling for this from ActiveRecord::Base. And then on the SQLite, MySQL, and Postgres adapters we introduced a type map object and have a consistent internal structure for how that gets populated and how the lookups occur. But that's all internal to the adapter object itself. I think that's everything, but I honestly would need to have the code open in front of me to go into more detail on it. Claudia? [Audience question, partly inaudible.] The question was: I felt that passing the type object was a much cleaner API than passing a symbol into this DSL; are there other APIs inside of Rails where I think the same thing is true? I think associations, definitely, because that modifies so much behavior in really unexpected ways. I think we could gain a lot by describing that more in terms of objects, especially when you get into the ways gems want to add new behavior to that. But that's never going to happen. [Audience:] You showed us an example of modifying the attribute of a record that belongs to another record; can you go back to that slide? Are we going to forbid changing the parent model's attribute in this case in future versions of Rails? No. The question was: are we going to change this behavior that I think is really confusing? To answer that: no, we are not.
It would be a really disruptive change, and it's not painful enough to warrant going through a deprecation cycle. [Audience:] So I have a question. You showed an example where you can pass a money type as the type for an attribute. If I create a money type in my system, is that an API that I have access to? Yes. The question is: if I want to create a money type in my system, is that an API that I have access to? Yes. The API you have access to is creating a normal Ruby class. The API of this object is three methods. There is a convenience class that you can inherit from if you want to; it's called Type::Value, and it gives you things like a single method to override if you don't need separate behavior for form input versus database input, which is the case for a lot of simpler types like integer, where you're just always converting to an integer. It calls a single method by default, and it also has a method you can override where nil is filtered out for you by default. But it's really easy to just make this a class that inherits from nothing. It has three methods, which are cast, serialize, and deserialize; that is, from form input, to the database, and from the database. The contract is called out in the edge API docs; you can find the documentation for the attribute method, and also look at the class Type::Value, where all those contracts are defined. [Audience question, partly inaudible, about whether schema introspection still happens when you declare attributes explicitly.] The question is about the example where we had a model; let me go back to that example real fast. Looking at this example, do we still have to define a simple string or integer on this model? No, the introspection does still happen; it's not replaced. Declaring an attribute explicitly will just override the introspection. For example, one of the things we can deprecate is this behavior we have where a decimal column with zero precision is treated as an integer for performance reasons in Ruby. We can just deprecate that, because if you want it to be an integer for performance reasons in Ruby, you can now just declare that. I'm personally going to exhaustively define everything on my models, because I hate having to go to schema.rb to see what methods I can call on the object. I am experimenting with a workflow where you turn on this auto-magic schemaless thing when you first start creating a model. Then you test-drive and add attributes to the model, but you never create a migration; it just magically saves to one table that can fit everything, and it mostly still works. When you're done, you run something like rails g diff migration, and it looks at the model, diffs it against the schema, and comes up with the migration required to bring the two in line with each other. That's probably not going to be done in time for Rails 5. I hate the dance of "do I modify and re-run this migration, or do I add a new migration," and I hate having to sit down up front and think of every attribute I'm ever going to have. I'm trying to look at ways that I can use this API to eliminate that.
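Back to the custom-type question for a moment: here's a minimal sketch of such a type, assuming a hypothetical Money value class. cast, serialize, and deserialize are the three methods described, as they're named in the 5.0 API.

```ruby
class MoneyType < ActiveRecord::Type::Value
  def cast(value)        # from user/form input
    Money.new(value.to_d)
  end

  def serialize(value)   # going to the database
    value.amount
  end

  def deserialize(value) # coming from the database
    value.nil? ? nil : Money.new(value.to_d)
  end
end

class Product < ActiveRecord::Base
  attribute :price, MoneyType.new
end
```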
[Audience:] At Heroku, we really like databases, and we really like constraints in databases. Do you think that in the future this work opens up areas where we could add better support for that? Yes. Okay, so the question was: at Heroku, they really like databases. I hear Heroku Postgres is pretty cool, and you should check that out. Specifically, if they like database constraints, is there any chance that this work will be used to better support validating things at the database layer? Hopefully. I would love to see us actually treat a unique index on the database as the canonical way to do that, while still being able to present the user-facing error that you get from the uniqueness validation in Rails. If you're not familiar with the uniqueness validation in Rails: it cannot actually validate the uniqueness of anything, because it does not take a lock on the database. The data can change between when it goes to check whether the value exists and when it tries to save the value. The database is really good at validating this sort of stuff. I would love to see more of this put into the database, and hopefully one day we'll get to the point where that's the standard way. Woo!
|
The most useful APIs are simple and composable. Omnipotent DSLs can be great, but sometimes we want to just write Ruby. We're going to look at the process of designing a new API for attributes and type casting in Rails 5.0, and why simpler is better. We'll unravel some of the mysteries behind the internals of Active Record. After all, Rails is ultimately just a large legacy code base. In this talk you'll learn about the process of finding those simple APIs which are begging to be extracted, and the refactoring process that can be used to tease them out slowly.
|
10.5446/30656 (DOI)
|
Thanks for coming out late Thursday, guys. It's been a long couple of days. I'm Nick Merwin, a founder of coveralls.io, and I hope you've seen some of our badges around town, in GitHub READMEs, et cetera. We're kind of all over the place, hopefully. Today we're going to discuss how I went about figuring out how to get Coveralls, a SaaS app, into the hands of customers that wanted to run it within their corporate networks. It was pretty uncharted territory for me, but I hope this general overview, and what we'll see later with a really basic prototype, will help you get some confidence that you could also take your app to the next level by offering a hosted version to a customer base that maybe you never thought was possible. So in that sense, it's going to be partially a business-model discussion and some general sysadmin stuff, all viewed through the lens of Rails, since Coveralls is a Rails app. All right, so why would you want to set your app free into the dark world out there, outside of your cushy deployment environment? Currently, let's say your app is hosted on Heroku or a VPS; Coveralls is on DigitalOcean. Say you have a subscription-based service where users pay by the month, maybe with a few tiers and usage-based add-ons, and it's coming along, accruing users, and it seems like your potential customer base is covered. But you might be neglecting an even more lucrative, and maybe even preferable, customer base: the enterprise, quote-unquote. This could be monolithic companies where dev teams have a much harder time getting upper management to embrace cloud-based tools, or perhaps the issue is just that setting up a five-dollar-a-month service fee takes a mountain of vendor-approval paperwork. So let's chat about how it can make sense to offer a hosted version of your app in addition to your cloud service. There are three tenets that I ran across in figuring out why you would want to do this. The first one is security, and it's pretty obvious: we all worry that when you use a cloud app, you essentially give up control of your data. You assume that the devs on the other side hopefully used bcrypt on your password, but you never really know for sure, and you just hope there's never a real data breach. But beyond passwords, it could be any sort of data that your company's security department has forbidden from leaving their networks, and in fact that could preclude the possibility of them ever using your app. This could be a medical company that wants to use your data-mining tool but can't upload anything to the cloud due to HIPAA compliance issues, or maybe it's a dev team that wants to use your source-code analysis tools but can't risk sharing or leaking proprietary code. There are also general business apps, CRMs, internal tasking, chat, scheduling, that hold plenty of info those companies wouldn't want to share with a competitor. By running a hosted version of your app within their networks, they can be more confident that they won't ever have their accounts hacked or their data compromised out on the open internet. The second one is a bit more esoteric: reliability, and this is just a worst-case thing. If the app that you're giving them is a DevOps tool that's part of the build or deployment pipeline, they're not going to want any unplanned maintenance downtime.
So for instance, with GitHub or Travis, if something goes down and you don't have control of it, it blocks your pushes or your builds, and that could screw up your deployment. Whereas if you're running it internally, your dev teams and your sysadmins can always coordinate to make sure everything's good to go when it comes to crunch time. Lastly there's cost, and the traditional subscription model, where you just pay by the month, doesn't really work once you've handed your app over to a customer. Oftentimes those potential customers are also less price-sensitive and could be willing to pay a premium for your hosted service and some sort of white-glove support on top. But once you deliver the app to them, you have to assume that it won't be able to phone home to check the subscription status. So instead of a time-based subscription, like a monthly fee, you're probably going to need some sort of seat-based usage or something else beyond just monthly or yearly billing. And once you're thinking about seats, you can do seat packs as tiers, as in the number of active users that can access the app while it's living within their network. You could also build some sort of self-destruct or shutdown mechanism that forces the customer to purchase or renew the license after a set amount of time, like six months or a year, just to make sure they come back and keep a valid license. So there's a lot of opportunity to work out business-model kinks beyond a traditional SaaS model. Before we go into how we figured this out, let me quickly mention Coveralls, what it does, and why we took it to the enterprise. Coveralls is a tool for code coverage tracking and notifications. We can annotate your pull requests with success or failure statuses based on changes in your coverage percentage, or just send messages to your chat or email with updates. This helps dev teams make sure they don't deliver untested code into production. On the site you can see line-by-line coverage of the code base. The cloud version for open source is free, but we charge for private repos. And the rub comes because, while we don't actually store source on our servers, we do store private-scoped OAuth tokens for GitHub. For some companies, that's just a non-starter: they can't allow us to see within their private repos. And some of them are already using GitHub Enterprise, so they're not using cloud GitHub to start with, and the two are not interoperable. So what we did was gather interest from a potential customer base over a matter of months, and it became clear that there was enough interest that we should try to make it happen. That's what brought us here. There are some pretty big hurdles I found in converting a general cloud Rails app to a fog one, living below the cloud layer: delivery and installation. Let's talk about delivery first. The best user experience for your customers is the least amount of work to get your app up and running in their environment, obviously. Three downloadable files is, I think, the most consolidated way to achieve this. First, the virtual machine the app is going to run on; that could just be a virtual Linux box, as I'll show you. That's typically between 800 megabytes and a gigabyte to download.
The next one is the packaged-up app files themselves, and that's decently sized too, between 50 and 100 megabytes or more, depending on how many gems you have, because all the gems need to be bundled and vendored into the app package itself, as we'll see. And lastly, a license file that's specific to the customer and is generated for every customer. Out of those three, your customers would probably only need to download the VM once in a blue moon, when you do big upgrades to the underlying system or have new dependencies. The app package is something they would download whenever there are updates, and we'll get to that. The license file they'd download about as often, like when they go from trial to active. So actually, let's talk about those incremental updates for a sec. With a standard deployment, obviously, when your code is hosted up on Heroku, getting features and fixes out to your users is as easy as a git push. Not so in this case, when your app is running completely out somewhere in the wild where you have no access to it. That's where the app packages come in. It's a smaller download with just the most up-to-date bug fixes, features, et cetera, and it's a much quicker download than having to re-download and configure an entire virtual machine. One last thing about the license files: it makes sense that this is where you'll probably need to build a secondary app, in addition to your main cloud one, that just services those customers, charges them subscription fees, handles trial accounts, and all that. So they download the license file, and then the other two files, from that secondary app. All right. Installation-wise, the networking config is the first thing they're going to see, and since you don't want them to have to log into the VM themselves, the best way around this is to build a tiny menu-driven Ruby app that gets presented on boot, and I'll show you how we did this. That way, the customer won't need to know any specifics about Linux networking. In addition to that, external access just has to be assumed to not be a possibility. Hopefully the reason they're purchasing this hosted version is so they can be completely confident in the security behind it; they won't give it any external access, and it'll be totally walled off in a sandbox in their network. So the assumption that there's no way to access it from the outside is probably valid. The last one is just process management: how are we going to get background jobs and the server itself to stay alive and start at boot? All right. More of these hurdles. Product support: normally you can see immediately when a user hits a 500 error. You get an email, and you can go onto your error-tracking service, Airbrake, whatever, and take a look. In this case, there's no way to know when that happens; there's no way to get pinged. You're just going to get an email from your customer saying something's not right, what's going on. So you need to provide them a way to send you details about the exception, and we're going to look at how to keep those exceptions somewhere so your customer can send them to you, attached to an email or something. The same goes for logging.
If something's really funky, you're going to want to be able to see if there are any weird parameters coming through the routes, especially when they're not causing an exception per se. Lastly there's resource management. If it seems like things are running slowly on their site, there needs to be some way to address that. I'm not going to go over that in this talk, because I feel like it's more of a sysadmin issue, but that's something for homework, I guess. So lastly is the intellectual property question, and this is a big one. Because you're giving them a VM, they can unpack the VM and mount the disk and look at everything you put there. Even though it comes in one nice file, it's still extractable: you can just load up another VM in VirtualBox or VMware and then mount the disk that you delivered them. That means they're going to be able to look at everything as though they were a root user. So for database access, unless, I don't know, there might be some way to get around this, you should just assume that they're going to have access to the database itself. That's the bigger question here, along with code obfuscation. Since it's Ruby, there's no way to really, fully protect your source code once it's being executed in somebody else's environment; it has to actually work, after all. You can write a code obfuscator as a deterrent, and that's what we've done for Coveralls Enterprise. That prevents people from immediately reading it if they just mount your disk and look through your source file tree. But it's probably safest to cover this with your license and say something like: the license you're purchasing from us covers only the use of the software; modification or redistribution are not permitted. So it kind of comes down to legalese as your last line of defense here. If this keeps you up at night, maybe the best thing to remember is that the customers for your app should be most interested in getting updates, bug fixes, new features, and support, rather than breaking into your source code, reading it, copying it, and spreading it out on the internet. How to actually achieve some level of code obfuscation could be its own talk, but we're not going to go over that today. Okay. Let's get into the nitty-gritty. This is a pretty hectic diagram, but it's basically showing the general architecture of how the app runs within our VM, which is Ubuntu. We chose Ubuntu because it's widely used and relatively easy to configure. The main components that live within it are: the network config app, which is the first thing customers will see when they boot it up. It lets you do things like select static networking or DHCP, set the name servers, reboot, shut down, et cetera. It's just a simple little starter app. Then, behind our web server (we'll go into why we chose Passenger), there's a preinstalled app that I made in Sinatra. It's a really simple app that facilitates the unpacking of the package file containing the actual app, or app updates, and the license file. And behind the web server, your Rails app will live as well. As for the Rails app itself, there are three components, I think, that set it apart from the standard cloud-hosted app you started from to get it to this point.
So the license file reader is going to read and cache your license when your Rails app boots, and then it's checked on every page view. You can then do things like enable or disable features, or lock the app down. The second one is data import and export, because when customers need to download a brand-new upgraded virtual machine, say with new dependencies for new features, you're going to need to give them a way to get their data out of the old one and into the new one. That's things like dumping the database; if your app has uploads, you're going to need to pull those together if they live on the VM; and anything else the users might have uploaded or changed that has state on the VM. The third one is support package generation and downloads. We'll show how we can use Rails' rescue_from, in the most general terms, to collect all the exceptions that happen, in controllers at least. They can be archived into a temp directory, and when an admin for the app, as it's living on the VM, needs some support, they can hit a link that generates an archive file, which we can also encrypt, and then email or Dropbox it over to us. Okay, so, setting up the environment first. For development, I think it's easiest to use VirtualBox. It's free and pretty simple, and it's really easy to export an appliance from your machine as an OVA file, which is just a tar or zip with a .ova extension. That's an easy thing to link to and serve from your CDN. Also Ubuntu, just because of how widely it's used; hopefully we can trust its built-in security settings and standards. We get that out of the box, and our customers get that confidence out of the box. When you provision it for your Rails app, you obviously want the minimum number of dependencies and touches, to keep the download size smallest. If you don't need to install something like ImageMagick, just don't do it; there's no point. It's pretty easy to end up with a two-gigabyte-or-more virtual machine when you probably could have kept it slimmed down to one gigabyte. Also, when you're doing development on the virtual machine, it helps to have two separate versions of it: one that you use just for packaging up your app, because you need the full app sitting somewhere on the virtual machine so that you can run bundler to vendor the gems into the app itself, ready to be packaged up. It's not something you should do in your local development environment: if you're developing on OS X and you vendor your gems in that environment, they're not going to run once you're on Linux. All right, next let's talk about networking configuration. That's a little screenshot of what a customer will be prompted with when they load up the virtual machine, the first time or every time, really. You can still SSH into it; it's not completely blocking everything off. This is just hooked into a file in Ubuntu called the TTY, the teletype, and it's pretty simple to set up. Really, the bare minimum to make this work is just some system calls that write to the network interfaces file or the DNS file. There are also shutdown and reboot options. So here's some of it; I don't know if you can read that, it's pretty small.
I have it in a text editor too. So yeah, like I was saying, it's a menu-driven app, and it's basically collecting information from system calls, displaying it, and then letting you input things using gets. It's a very simple Ruby app, and it can reboot the entire machine. Pretty standard, simple, no rocket science here. So the next part of the puzzle is the server itself. I chose Passenger because it seemed the most dead simple: put your app in a directory and Passenger just starts serving it. Once you touch the tmp/restart.txt file, Passenger loads the new code, and it doesn't really care if the code isn't there to begin with; it's not going to completely blow up, it's just going to 404. That means the loader app, which we'll talk about in the next slide, can easily extract the packaged-up Rails app into a directory that's already been preconfigured for Passenger to serve the app from. As you can see around line 14, that's where we're running the loader app. As soon as you boot up the VM for the first time, those error pages will, if the Rails app is not found, redirect you to the setup page. The setup page is where you can upload your package file and your license file, and they'll be extracted into their eventual resting place. The installer app itself is just a simple Sinatra app that takes the package and the license file and puts them where they'll live. You want this preinstalled on the VM for distribution, because it's going to do the work of getting Rails up and running. So let's go over a bit of what's actually happening in the Sinatra app. I'm using a simple encryption library called Gibberish, which is a nice abstraction over, I believe, OpenSSL, and then there's this horribly insecure shared key up there, asdfasdf. If you do have your code obfuscated, it doesn't really matter that the key is right there; it's best to assume that if people are going to be looking into your code, they're going to figure out everything about it anyway, so you don't really need a huge key. What it does is: first it decrypts the license file to make sure it's valid, and it installs that license file. Say you're upgrading your license; it can just put that in place and your Rails app will start picking up the new license, maybe it's gone from trial to live. Then, if the package file is present, it decrypts it and places it in the directory Passenger is preconfigured to load it from, touches the restart file, and redirects you straight to root, and you'll be good to go. And that index shows just how simple it can be; this is the bare minimum of what it takes. So, on to tweaks to the app itself. The license file: in a regular customer subscription app, all the customer specifics live in the database, and that's checked on every page load. But we can't do that here, because there's no way for the subscription side, the sales app you're selling them licenses from, to have any effect on the Rails app running on their side. So when they get a new license file, it contains an encrypted JSON file, and it lives in the temp directory and gets loaded on boot every time.
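For flavor, here's a stripped-down sketch of a loader app along those lines. The shared key, paths, and form field names are placeholders, not the actual Coveralls code:

```ruby
require "sinatra"
require "gibberish"

CIPHER   = Gibberish::AES.new("asdfasdf")  # deliberately silly shared key
APP_ROOT = "/var/www/app"

post "/install" do
  # Drop the (still encrypted) license where the Rails app reads it on boot.
  File.write("#{APP_ROOT}/tmp/license", params[:license][:tempfile].read)

  # A new or updated app package is optional.
  if params[:package]
    File.binwrite("/tmp/pkg.tar.gz", CIPHER.decrypt(params[:package][:tempfile].read))
    system("tar xzf /tmp/pkg.tar.gz -C #{APP_ROOT}")
    system("touch #{APP_ROOT}/tmp/restart.txt")  # tell Passenger to reload
  end

  redirect "/"
end
```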
We also want to add the Rails secrets, well, all of them, and for the simple example app I'm running Devise too. So the secret token and the Devise secret can be read from the license file on boot, and you want to do that because you don't want one customer to be able to tamper with another customer's cookies. It's a kind of extreme use case, but still, best practices. All right, let's see. This is a really basic module that demonstrates how to read and write the license file. It's using the same encryption key as the loader, and this would obviously be obfuscated, hopefully somewhat, just as a deterrent, but it gives you an idea of how simple it is. It's the first thing to get loaded in the initializer, so the rest of Rails can use it when it's booting up. From here, we can have calls out to it all over the app: checking the trial period, disabling functionality, displaying messages that link out to the web if they need to upgrade their license. Another big one here is the seat limit, and that's something that can be checked, say, in a user validator: validate seats, and just do a count against the license's seat number. In that case, for administrators, you'd also want to provide a way for them to manage the users, as though they were using your cloud app, so they'd be able to deactivate or delete users to free up seats if they needed to. All right, next is the support package. This is a really simple implementation of an exception tracker, and it puts exceptions in little encrypted files, with a cleaned backtrace, into your temp directory, ready for download by the admin when the time comes, when they start hitting 500s. That just encrypts everything back up and tars it, ready to be emailed to you or Dropboxed, depending on how big it is. Then there's the data import and export mechanism. This is just a dead-simple pg_restore and pg_dump; those are Postgres commands, and this would be in a controller. The first one purely takes whatever you throw at it and attempts to shove it into the database, and export just gives you an entire dump. Nothing really tricky here, but it's definitely an important part of being able to upgrade between virtual machines. For background jobs, I'm not going to get into it too much, but we used foreman export, which lets you take your Procfile and generate Upstart processes. Those can be used by Ubuntu, so they'll be executed on boot and stay running, hopefully. I think the next step would be for your app itself to show the background job status in some sort of dashboard. Of course that wouldn't be something you would show on your cloud app, but it's something to think about.
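Sketching the license module and the seat check it enables; the JSON field names and the key here are assumptions:

```ruby
require "gibberish"
require "json"

module License
  CIPHER = Gibberish::AES.new("asdfasdf")  # same shared key as the loader
  PATH   = Rails.root.join("tmp", "license")

  # Decrypt once at boot and memoize; checked on every page view.
  def self.data
    @data ||= JSON.parse(CIPHER.decrypt(File.read(PATH)))
  end

  def self.seats
    data["seats"]
  end
end

class User < ActiveRecord::Base
  validate :seat_available, on: :create

  def seat_available
    if User.where(active: true).count >= License.seats
      errors.add(:base, "seat limit reached: deactivate a user or upgrade the license")
    end
  end
end
```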
So lastly, when all of your pieces are in place, what's the best way to make it actually distributable? Obviously you want to pare it down to a single script, so it's just like a git push: a script that deploys to your CDN and maybe notifies your users that there's a new version ready for download. Some basic things happen here. This is a super basic rake task that just grabs the version, vendors the bundle, which throws everything into your vendor directory, then tars up everything that's pertinent, excludes what's not, and encrypts it all using your shared key. And that's it. Some other things it could do: obfuscation would happen at this step, if you needed to run a post-processor on it, and the last step would be actually uploading it to your CDN and then notifying your users. I keep a version file in the project root just so that all these scripts and random things can use it, because each download of the package is going to have some sort of version attached to it that your users will want to be able to identify and move back and forth between. So that was pretty much it for what you can get up and running as a prototype. Beyond that, once you actually have it in customers' hands, if they're already using AWS they might want it as an Amazon image, which can be spun up without having to download anything. It's pretty trivial to convert an OVA file to an AMI; Amazon provides some command-line tools to make this simple, so you could put that into your packaging script too. Actually, no: this would be a different script that would run purely on your VM when you're ready to cut a new version. And once you have an AMI listed on Amazon, it's pretty trivial to just hit the public button, and then it's searchable in the public registry. Some more things that would be next steps beyond what we've covered. Resource management: there should be an easy way for your admin to see how the VM is performing, and this could be part of the support package too. Clustering: perhaps you could set up a mechanism to run multiple VMs at once for better performance. A mail server, which is not that tricky but could be important, since in your cloud app you're probably using SendGrid or Mailgun or whatever, and those aren't going to be accessible from the VM, so your admin should be able to specify a mail server. Incremental VM updates: you could also include bash scripts that change pieces of the VM around when you import a new package file. When that gets untarred and migrations are run, those migrations could also install, say, new dependencies that were stored inside the package, because you can't just run an apt-get install from there; you have to assume that your VM doesn't have any access to the outside world. We talked a little about the SaaS app that would run in the cloud, in addition to your main one, to sell license files and facilitate all that. And the last one is enterprise sales, but that's just a joke, because who knows about that? That's the big question mark, which I have no idea about either, and it's definitely uncharted territory.
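Circling back to that packaging task: a bare-bones version might look like this, with the paths, exclusions, and key all placeholders:

```ruby
require "gibberish"

task :package do
  version = File.read("VERSION").strip
  sh "bundle package --all"  # vendor the gems into the app itself
  sh "tar czf /tmp/app-#{version}.tar.gz --exclude=.git --exclude=log --exclude=tmp ."
  cipher = Gibberish::AES.new("asdfasdf")
  File.binwrite("pkg/app-#{version}.pkg",
                cipher.encrypt(File.binread("/tmp/app-#{version}.tar.gz")))
  # next steps: obfuscate, upload to the CDN, notify customers
end
```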
So I think we're running out of time, so maybe we should just do questions. Yeah, the question was: what are we doing for obfuscation? That's still kind of a work in progress. Right now it's a Ruby C extension, and the way that GitHub does it is they've compiled their own Ruby to do it, so it's at a lower level than just a gem extension. I'm figuring out how to do that right now, but it's not something that's widely discussed. Yeah, the question is: how do we reduce the number of patches that we need to ship, since there could be a bug that's specific to one customer, but we don't want everybody else to have to come and download a new version all at once? Yeah, there's really not a good way to tighten up the feedback loop beyond having to package up and release brand-new versions for every little bug fix. I think that's more of a customer relations question. If it's a small bug that's being fixed just for their environment, maybe it can wait for a bigger point release. If it's something that's business critical and just doesn't work for them, and they're a paying customer, then of course, no matter how small the change is, a new version has to be released, and everybody will see the new version up there. Yeah, it could be incremental like that, where the migration in the package actually does the updates. But that would probably be for smaller things; maybe you can include a .deb that can be loaded in that's not too huge. For OS-level changes, that would necessitate cutting a brand-new VM and asking your customers to go download a new one, then do the whole data export and import dance. Scaling support teams? Well, we haven't really had to scale ours yet, because we haven't had so many customers that it's become overwhelming. I feel like we're going to need to scale in traditional ways, but it's yet to be seen what extra hurdles we're going to have with supporting multiple versions of the package and the VM, and having this kind of asynchronous support flow where we get emails or Dropboxes of packages of logs and 500 errors. So that's definitely uncharted territory too. There are two pretty simple little apps up there that are just prototypes, for a demo called Enterprise Communications that just lets you post; it's like a simple little blog app. But it shows off some of the license file reading, and pretty much all the screen grabs from the presentation were from that app, except for the app loader, which is the little Sinatra app. Anything else? All right. Thanks for coming, guys. Thank you very much.
|
When a Fortune 500 company wants to use your app but can't risk sharing sensitive data in the cloud, you'll need to package and deploy an autonomous version of it behind their firewall (aka the Fog). We’ll explore some methods to make this possible including standalone VM provisioning, codebase security, encrypted package distribution, seat based licensing and code updates.
|
10.5446/30658 (DOI)
|
So, I'm Aja Hammerly. I'm a developer advocate at Google. I tweet at ThagamizerRB, and all the code that I'm going to show in this talk is on GitHub. I'm Thagamizer on GitHub, and it's in my examples repository in the RailsConf 15 folder. And since I'm showing you code, I have to say that all the code in this talk is copyright Google and licensed under Apache v2. So, first things first: I'm not an expert at ops. I'm actually not an expert at anything; I tend more toward jack of all trades, master of none. But I've spent some time racking servers, and I've had the bloody knuckles to prove it. And I've built things that look like this. But my skills have not gotten to the level of this yet. And when I try to do something that's all professional-ops fancy, I end up building things that look a little bit like this. And why is that? Because I'm fundamentally lazy. I'm passionate about some things: solving problems, building sites, algorithms. But I'm lazy about deployment and operations and maintenance, and I find a lot of that stuff to be fiddly and not a lot of fun. And I'm here to tell you right now that that's okay for most people. What do I mean specifically by lazy? I mean doing the minimum that you absolutely have to for something to be successful and stable. I want to make systems that are lazy, so that I can be lazy, so that they'll maintain themselves. And one way I've found to do that is with containers. So what the heck's a container? According to several sources, a container is a way for multiple applications to securely share hardware with a minimum footprint, and frequently this means less of a footprint than using virtual machines. To me, a container is also an environment package. It's a way of wrapping up your operating system, your build dependencies, any libraries you need, possibly your application code, all into a nice little bundle; wrapping up your application with its execution environment. This takes me back to the old days, when I was making little applications that I was giving to my friends on floppy disks, and I had to make sure I included all the DLLs they needed to run them. So how do you use containers? Well, I use Docker. There are lots of container frameworks, lots of ways to do containers; Wikipedia will give you a list and a nice little chart of all of them. But I like Docker because there's a great community and ecosystem that go along with the container format. And many common web frameworks, including Rails, which is why we're all here today, have container images that are freely available on Docker Hub, which is like RubyGems for Docker container images. So Docker sounds awesome; how do you use it? I'm going to do a quick demo using a basic Rails app. And by basic Rails app, I mean very, very basic, built solely with the scaffold. This is me. And this is a to-do list app. Standard Rails stuff. It's got a title, a very, very long title; this is me telekinetically typing. It's got some notes, a due date, and then there's a completion integer where you can put in the percent complete. And I'm sure most of you in the audience could whip something that looked a lot like this up in a couple of minutes. But let's actually walk through all the steps. So, to make this application: rails new to-do. There are a lot of tutorials out there for using Docker with Rails, but almost all of them stop here. They don't actually have any sort of models.
And a Rails app without models is kind of boring. So let's make a simple model for a task. It has a title, some notes, a due date, and a completion parameter, which is just an integer. And to make it a little more exciting, I'm going to use Postgres as my production database, but I'm going to use SQLite in dev and test. This is probably familiar to a lot of you; it's something you've done multiple times before. And if you're going to use Postgres as your production database, you need to change the default Rails Gemfile. Here's the Gemfile I ended up with, which is probably incomprehensible at this size. In red are the two pieces I had to change, and you still can't see them, so let's zoom in a bit. So: create a new group for production and put the pg gem in it. And then I need to move SQLite, which is in all groups in the default Gemfile, into the development and test group. So that's great. And since I changed my database setup, I also need to change my database YAML file. Those are the lines I need to change: set the adapter to Postgres for production, and then set my username, password, and host from some environment variables that Docker sets for me, and I will come back to this, I promise. So now I need a Dockerfile. A Dockerfile is just the list of things that need to happen to set up your environment. And I'm lazy, so I'm going to write a minimal Dockerfile. I can do a Dockerfile for this application in three lines. Really, just three. The first line tells me what image I want to start from. Most Docker images are built from other images: the Rails image is built from a Ruby image, the Ruby image is built from an operating system image, and turtles all the way down. I'm using the Rails image because it's really close to exactly what I want, and therefore I don't have to know anything about ops, or how to install packages on an Ubuntu machine, or anything like that if I don't want to. Then I'm also going to specify that my RAILS_ENV is production, because this isn't set in the Rails image that I pulled. And then the CMD, this command line, is the entry point for my application. This is what happens once my container is built and my app needs to start up. You'll see that I'm calling a shell script there. So let's see what's in that init.sh shell script. It's, again, three very simple lines: I export SECRET_KEY_BASE, because Rails 4; then bundle exec rake db:create; and then I start up the Rails server. Yes, I'm using the default server. You could use Unicorn, you could use something else. Lazy. But there's nothing really complicated going on here; I hacked all this stuff together one night at hack night, in not very long at all. So now let's build it. We need to build up our image with Docker, and an image, a word I've used a couple of times, is just a template for a container. So I'm going to do a docker build; the dot on the end means use the Dockerfile in this directory. And this builds it. So let's see what that looks like. So that's done, and it went really, really fast. I'm going to hand-wave this a bit: one of the reasons it went really fast is that Docker caches all the intermediate steps in creating the image. Each of those steps is a layer, and if I only change something at the very end, in those last layers, it can use the cached versions of all the ones before that.
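Reconstructed from her description, the two files might look like this; the rails:onbuild tag, the secret, and the bind address are my assumptions rather than her exact slides:

```
# Dockerfile: three lines, starting from the official Rails image
FROM rails:onbuild
ENV RAILS_ENV production
CMD ["sh", "init.sh"]
```

And the init script it calls:

```
# init.sh: set the secret Rails 4 requires, create the DB, start the server
export SECRET_KEY_BASE=change-me-to-something-long-and-random
bundle exec rake db:create
bundle exec rails server -b 0.0.0.0
```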
Which means it doesn't have to repeat all of the steps, like pulling down packages from your distro's package management system, or building gems from scratch if they have C extensions. It can skip all that if I'm not changing any of it. So now it's built; let's run it. I'm going to run through these commands. I don't know what people's experience with Docker is in the audience, but I'm assuming you have about as much experience as I had three months ago, which is this much. So, docker run. First we do the database component, because we need the database up and running before we start the web. This command creates a new container named db, and it uses the Postgres image to do that. These two lines set some environment variables for that Postgres container, and they set the password and username for the Postgres database. All of this is documented in lovely detail on the page for the Postgres image. I'm following the instructions. It's lovely, especially if you're lazy like me. And then this -d means we're going to run it daemonized, so I'm not going to get the standard out; I'm going to let it run in the background and do its thing, and I'm not going to interact with the container in any way. Now I'm going to start the web server up. Again, we're going to run a container, named web, using the Thagamizer to-do image that I just showed being built. Again, we run it daemonized. This line maps ports; that's what the -p stands for. What I'm doing here is mapping port 3000 on my container to 3000 on the host machine. In this case, that could be localhost on a Linux box; on my Mac, it's a Linux VM. And then this link line is magic and lovely. Link makes it so that my containers can talk to each other. In this case, I'm linking the db container that I just made to the container I'm starting up now, the web container, and I'm linking it with the alias pg, for Postgres. What link does is create a secure tunnel between my web container and my database container. It also puts a bunch of handy environment variables in my web container so that I can connect to the database container. That Postgres username, Postgres password, and Postgres host address, all of that that was in the database.yaml file, those are set because I used this link flag on my Docker startup. So let's watch it. That's the database starting up. That's the web starting up. They're up. There are lots of ways to set up containers, lots of ways to set up virtual machines for testing out your app; this is really, really fast. I'm going to grab the IP, throw it in a browser, be indecisive about which profile I want to use in Chrome, go to port 3000, the tasks collection, and there it is. That's my app running. And in case it wasn't obvious, I got all of this set up in less than 20 lines of code beyond the standard boilerplate that rails new gives you. It's about five lines of code to change the Gemfile, five lines in the database YAML, three lines in the Dockerfile, and three lines in init.sh. That's it. I don't know any way to set up a Rails app to run on Ubuntu that's quite this easy. There are a couple of shell commands too, but it still all comes in under 20 lines of code. So why would you use something like Docker? Why would you use containers? Some pros. For me, the biggest one is consistency. I hate hearing "but it works on my machine."
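The two run commands, approximately as described; the image name and credentials are placeholders:

```
# start Postgres daemonized, with the username and password the image reads
docker run -d --name db \
  -e POSTGRES_USER=rails -e POSTGRES_PASSWORD=secret \
  postgres

# start the app, mapping port 3000 and linking to the db under the alias "pg",
# which injects env vars like PG_ENV_POSTGRES_USER into the web container
docker run -d --name web \
  -p 3000:3000 --link db:pg \
  thagamizer/todo
```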
I started my career in QA. I've heard that so many times. And I don't care if it works on your machine; I care if it works in production. If you're using containers, your staging environment and your production environment are the same. They are the same image. They are the same OS with the same libraries, all the same versions. You don't have to worry about messing up your Gemfile slightly and getting one minor version off in a way that messes up your app. The next talk, the one after mine, is going to talk about how you can also use Docker in development and possibly have exactly the same OS and libraries in development. So there's none of this "but it works on my machine" stuff: it either works or it doesn't, and everyone has the same experience. Speed. Containers start up really, really quickly. There was a video that came out from the Kubernetes team a week or two ago about them racing a Kubernetes cluster startup versus making a latte. It ended up being a neck-and-neck tie. But we're talking minutes. I've worked on apps where we had Chef builds, and Chef's very cool, but rebuilding all of our VMs from scratch took upwards of 15 minutes because of all the stuff we had to build from source. Being able to start up a container very quickly is nice, especially when you want to do things like auto-scaling. And the caching makes builds fast if you're only changing the last layers. Flexibility. If you've got a microservices setup, and we're in the distributed computing track, containers let you change how the different processes sit on different hardware. You can say that in development you want all the stuff to run on the same VM, split up one way. In your staging environment, you want one of every machine on a separate VM. And then in production, you want five web servers and two database backends. All of that can be set up pretty easily with a variety of container management tools. Portability. I talk to folks about why they do or do not use the cloud, or why they roll their own, or why they have their own hardware, and everyone says: I need flexibility, I need to be able to move back and forth. Containers are great because you can run them on Linux, on a Mac, on Windows. You can run them on your personal machine, on data center hardware that your company owns, or on a cloud provider; most cloud providers support containers right now. You're never locked into specific hardware. You can move it anywhere, and it will behave the same. And then repeatability. Last year about this time was particularly bad if DevOps was one of the hats you were wearing: you were patching Rails security bugs, you were patching OpenSSL security bugs. I was on a team doing DevOps at the time, and our client was a little bit nervous about us making OS-level changes, changes to the OS libraries. We pushed code a couple of times a week, that was all automated, and we had it down. But every time we had to do something lower-level than that, we didn't have a process for it. The nice thing about using containers is that, if you set it up this way, the process for updating your code is the exact same process for updating your OS, which is the exact same process for updating the C libraries you have. So it's only one process.
Everyone knows it, and it's much less likely that you're going to mess it up. So, there might be some cons. The biggest one is YAGNI: you aren't gonna need it. For some apps, and I'm not going to say this is most apps, if you're doing a small proof-of-concept thing or your personal blog, maybe you don't need containers. Maybe the overhead of setting that up is too much for you, and you'd rather just use a platform-as-a-service provider. That's fine. This is a very cool tool for people who can get benefit out of it, but I'm never going to say that any one tool is perfect for every application. The other thing is that while containers have been around for a long time (I work at Google, and Google has been running all of their internal stuff on containers for years), Docker is still only a few years old, and some people are reticent about adopting stuff that's on the first half of the adoption curve. I am not one of those people. Y'all are at RailsConf; I assume y'all aren't those people either. But if you work for some of those people who are a little bit reticent, you might have a harder sell on a technology that's still new. But I think it's really cool; otherwise I wouldn't be doing this talk. So I've shown you some pretty cool stuff about running these containers on my Mac. But I don't think any of you actually run a production site off of a MacBook Air. If you do, I have a dinosaur for you; we should chat afterwards. So how do you manage containers in a cloud environment? How do you manage containers in a data center environment when you want multiple versions of something, or you want five front-end web servers? I'm going to talk about Kubernetes. There are lots of ways of managing containers; this is the one that I happen to know. It's a Google project for managing clusters of containers. Basically, the idea is that you build up a description of your desired state in a format Kubernetes understands, and then you start up Kubernetes on a group of VMs or bare-metal machines and say: make it look like my desired state. And I want to point out specifically that Kubernetes is open source. There are more contributors outside Google than inside Google at this point, and all of this is on GitHub; what's going on is very, very visible. So Kubernetes has a giant pile of vocabulary. I'm going to hit it all up front, because I'm going to use it and I'm going to forget what I have and have not defined. Master: this is the machine that manages everything. Masters have a set of minions; usually there are several of them, running on different VMs. The underlying base unit is a pod, and this is the smallest deployable unit. You can think of it as the parts of an application that intrinsically work together, not independently. And I asked some folks who work on Kubernetes: if you have a web application, is the web application plus the DB a pod, or is the web app a pod and the database a pod? And the answer is: it depends. I'm going with the web app and the database as separate pods, because I want to scale the two pieces independently. For the demo, I'm only going to have one database, but I'm going to have two front-end web servers. But if you want to put them together, you can do that too. So a service is an abstraction for a logical set of pods, which is a lot of very fancy words to basically mean a named load balancer. If I have multiple pods, I have a service that takes care of splitting the work up between them.
Services are also how different pods talk to each other. Just as we had links so that the web container could talk to the database container on my local machine, we build services so that my web pods can talk to my database pods in the Kubernetes cluster. And then we have a replication controller, and this is responsible for ensuring that the correct number of pods are running at all times. You set up a replication controller, you say "this is my desired state, I'd like two replicas," and the replication controller is responsible for checking: are there two replicas? If one of those machines fails, it starts up another. If you somehow end up with extras, it'll shut one down. I also want to note: this is currently in beta, and the API may change. But people are using it. So: warning, but it's still awesome. And one of the things I want to be very explicit about is that Kubernetes gives you options. This is open source. You can run it on your own hardware. You can run it on a cloud provider's VMs. Google Cloud Platform supports it, but there are other companies working with Kubernetes, and this technology does not lock you into a specific provider. So that was a bunch of words, and I like giving technical talks with lots of code, so let's look at some code. This is a hand-wavy example of a Kubernetes configuration file. It's not a specific one, but the basic format is similar for a lot of them. You have an ID. You have a kind, which could be things like pod or service or replication controller. There's the API version, and the desired state. That desired state is where you set up what you want Kubernetes to maintain for you, and it contains maybe a set of containers, or something else. And then there are some labels, which are just used so that you can gather together things that are related and refer to groups more easily. So, for this example, let's do the database first. I'm going to have one database pod. There are a bunch of ellipses on here where I pulled out a lot of extra braces, because JSON on slides doesn't work so hot, but this is the pod configuration file for the database. I'm specifically showing that it has a set of containers, and that's an array with one element in it. The container is named db, it uses the Postgres image, I can pass in my environment variables just like I did with docker run, and then I can also map some ports. I'm saying that I want to map my container port 5432 to the host port 5432, the standard Postgres port. And if I want other pods to be able to talk to this pod, I need to set up a service. Here's the service definition: ID db, kind Service. Again, I want the service port 5432 mapped to container port 5432. And then the selector says which pods should be in this service, so any pod labeled as the database will be in this service. Really simple stuff.
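As a sketch, the database pod and service files she describes might look roughly like this in the pre-1.0 Kubernetes JSON format of the time; the exact apiVersion, credentials, and label values are guesses, not her slides:

```json
{
  "id": "db",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "labels": {"name": "db"},
  "desiredState": {
    "manifest": {
      "containers": [{
        "name": "db",
        "image": "postgres",
        "env": [
          {"name": "POSTGRES_USER", "value": "rails"},
          {"name": "POSTGRES_PASSWORD", "value": "secret"}
        ],
        "ports": [{"containerPort": 5432, "hostPort": 5432}]
      }]
    }
  }
}
```

And the matching service, selecting any pod labeled db:

```json
{
  "id": "db",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 5432,
  "containerPort": 5432,
  "selector": {"name": "db"}
}
```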
So now I've got my database set up. Let's do the web. I'm going to have a replication controller, because I want two replicas. Again, I pulled out a bunch of parens and curly braces and such. And this is actually a lot of stuff, so let's look at it in two pieces. The top part is the desired state for my replication controller: I want two replicas, and the replicas that count toward this are the ones that are named web. And then I give a template for the replication controller to create pods, and that's what this template looks like. I've got a single container named web, using the thagomizer 2do prod image. And I'm passing in some environment variables. And I'm doing some port mapping. If I want to be able to access this set of pods with a replication controller, I need to have a service. Here's the service definition. I've got a selector saying anything that's named web is in this service. And I'm doing some port mapping again, specifically putting it on port 3000. So this is all great, but I haven't showed you anything running. So you can run all this, and the Google Kubernetes environment is Google Container Engine. It's in public alpha. And I'm going to show you how to get it running. So this is just something you run at the command line. The first three words, gcloud alpha container, that's using the Google Cloud Platform command line interface. And I'm saying that I want the alpha feature set, because this is alpha, and I want the container engine subset. Then kubectl is the Kubernetes command line interface, and I'm going to say create from file, the db-pod.json. That's the db pod file I showed you. And I'm going to make that. And then I can say, hey, Kubernetes, give me the list of pods. And I get back a list of pods and their state. And you can see that this one's running. You can see what image it is and what node it's been assigned to currently. Great. That's up. So now I can do the service. Again, create dash f, create from file for the db service. And I can say, kubectl, give me the services. And it gives me a list of the services and what ports they've exposed. Great. And I'll just create the web controller. Again, create from a file. And I can get the replication controllers and see that I have this web controller, the image that's being used, and how many replicas there are. And then I can go get the pods, and I can see the list of pods that's been started up and what their state is. And now I need to create my web service. And you'll notice that everything is create dash f. I'm just creating all of these from the standard configuration files. There's really not a lot that's complicated here. The first half of the line looks a little scary, but it's the same every single time. So I create my web service. I start it up and you can see that it's at port 3000. And then I'm going to do something crazy. I'm going to have Kubernetes resize my web replication controller. And I'm going to say I want five replicas now. Wait a couple seconds to a minute, and I have five running pods. All I had to do was say how many I wanted, and it happened. Because Kubernetes' job is to say, what state do you want the cluster in? And I say, this is the state I want. And it's like, I will figure out how to make that happen. It pulls down images as it needs to. It creates new containers as it needs to. It divides them among the number of nodes. And you'll see that I've only got three hosts listed, because the cluster I created only had three VMs in it. I didn't need to provision new VMs to have additional workers running.
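Pieced together from that walkthrough, the whole command-line session probably looked something like the sketch below. File and controller names are hypothetical, and the resize verb reflects the beta-era CLI (later releases renamed it scale), so check your own kubectl help before leaning on exact flags.

```sh
# Create each object from its JSON config; the first half is the same every time
kubectl create -f db-pod.json
kubectl create -f db-service.json
kubectl create -f web-controller.json
kubectl create -f web-service.json

# Inspect what's running and what's exposed
kubectl get pods
kubectl get services
kubectl get replicationcontrollers

# Declare a new desired state: five web replicas
kubectl resize rc web-controller --replicas=5
```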
So at MountainWest RubyConf, Starr Horne gave a great talk. And in his talk, he had a slide that said the lies I've told during this talk. So I would like to think that I haven't lied to you, but there are some large sections of this that I have hand-waved vigorously. And I'm going to call them out right now so that you guys don't have to ask questions about them when I open up the floor to questions. The first one is disk. I didn't show you how to set up the database to hit a persistent disk. So if you restart that database container right now, all your data goes away. You can do that; there's tutorials on how to do it. I also didn't show you how to set up a shared disk between multiple web clients. There are tutorials on how to do that, too. This is a 101-level talk. That is not a 101-level concept. Security. I'm not going to say that what I showed you is good for production-level security. Your app has unique security concerns. Please do a security audit. Replication. I didn't show you how to set up a cluster of database machines that replicate across themselves. There are lots of tutorials for this. One of the standard Google Cloud tutorials uses Redis, and has a Redis master and Redis workers, and shows how to do something like that. I kind of talked about this, but Docker runs on Linux machines only. To run it on a Mac, you need a tool called Boot2Docker that uses VirtualBox to make a little Linux machine, a little Linux slice, where you can run your Docker containers. It does it well, so I forget the fact that I'm not actually using Docker natively. There's tools for Windows that are the same. You can also just run this on Ubuntu. When I'm at work, I've got an Ubuntu desktop underneath my desk, a big tower, and I do most of my work SSHed into that machine. And size. So, one of the things I found talking to folks who do containers and work with Docker a lot is that they're concerned about image size. And the images that I chose, the standard Rails image, are huge. If you care about size, you can make your own by taking out the pieces you don't need. For example, I believe the standard Rails image has both the Postgres database client libraries and the MySQL client libraries. I was only using Postgres, so if I built my own, I could choose not to include the MySQL ones and save some space. But make the trade-off of your time versus how big an image is, and figure out where your laziness actually lies on that. I was going for, you know, minimal effort here, which is why I chose those images. So, as I said, this is a 101 level talk. A lot of you in the audience have been nodding along, which means I probably didn't show you anything new. But if you want to learn more, the first place to start is Shipping Ruby Apps with Docker by Bryan Helmkamp at Red Dot RubyConf, 18 months ago, a year ago. This is an awesome hour-long talk on exactly this topic, includes live code demos, because he is more daring than I. I don't do live code demos at a conference, because you never know about the Wi-Fi. If you're going to get into this, I recommend watching this talk first, because it will help you internalize the vocabulary. One of the things I struggled with, learning this stuff and preparing this talk, was that I would pull in a bunch of information and think I understood it and then get stuck. And I would unstick myself three days later, once all the pieces sorted themselves out in my head. And I had the answer the whole time, but so much new vocabulary and so much dense documentation was making it hard for me to help myself. And once I watched this talk, I kind of got a big picture sense of how things were, and things made more sense. If you want to find out about Container Engine, that's cloud.google.com/container-engine/docs. There are two demos: a simple hello world type demo, and a guestbook demo, and I used the guestbook demo extensively when preparing this talk. If you want to learn about Kubernetes, that's the Kubernetes website. And if you want to talk to people who work on Container Engine and on Kubernetes, on Freenode, go to the Google Containers room.
The devs actively monitor this. No, really, they actively monitor this. You can talk to the people who are working on this and ask them very technical questions, and they will help you find answers. Google Cloud encourages folks to hit up Stack Overflow; it helps a lot if your question is tagged, so that the appropriate people who are monitoring those tags can answer your question for you. If you run into trouble, let me know, and I will make sure someone answers your question. And now time for the sales pitch part. I work on Google Cloud Platform. We have storage, we have VMs, we have App Engine, we have logging and monitoring tools, we have data analysis tools that are awesome, and I gave a talk about them at MountainWest RubyConf. Data centers all over the world, Internet endpoints all over the world. All the stuff you'd expect from a cloud provider, and at least some of you probably didn't know that you could get a VM from Google before I said these words. And I want to point this out, because this is the really cool part for me: I've been to a data center, and our cloud customers, their code runs on the same racks and the same physical hardware as YouTube, as Search. I can't tell in the data center what's running on any machine, so any work we do to improve the infrastructure for things like YouTube and Search benefits our cloud customers as well. We have a free trial, because everybody has a free trial. It does ask for a credit card, but you will not be billed unless you okay billing. The credit card is there for fraud protection and a bunch of other legal reasons. And if you have any trouble getting the free trial, or you've used your free trial and want to try something else, come chat with me. I have ways around that, but I encourage you to use the free trial, because that gives you $200 in credit for 60 days, and that's better than I can give you easily at this conference. So thank you. I want to thank the conference organizers for having me here, and for organizing this giant conference. I want to thank my coworkers who reviewed these slides for me, even though they're on the West Coast, and so I was sending them slides at inopportune, time-zone-wise, times. When I give conference talks, I have plastic dinosaurs. Someone told me this looks like just a random bag of stuff; there's actually a bunch of plastic dinosaurs in here that I give away to people who ask questions or come chat with me. I also have stickers, right below the podium up here. Lots and lots and lots of stickers. Come get stickers from me. We've got about 10 minutes left, so I'm going to open the floor to questions.
|
Like most programmers I am lazy. I don't want to do something by hand if I can automate it. I also think DevOps can be dreadfully dull. Luckily there are now tools that support lazy DevOps. I'll demonstrate how using Docker containers and Kubernetes allows you to be lazy and get back to building cool features (or watching cat videos). I'll go over some of the pros and cons of the "lazy" way, and I'll show how these tools can be used by both simple and complex apps.
|
10.5446/30659 (DOI)
|
Welcome. Thank you for coming. I hope everybody enjoyed DHH's keynote and is all ready to build monolithic web apps. We're going to talk today about API design, and specifically API design that is focused on the people who actually have to write code against that API. We are not going to be doing any coding at all today. Implementation is an implementation detail, so don't worry. We're just going to design it really well, and then implementing it is somebody else's problem. My name is Pete Holliday. I'm a lead engineer at MailChimp. MailChimp is an email service company, and we will tell you much more about it later. I'm Kate Harlan. I'm an engineering intern at MailChimp, and I graduate from Georgia Tech next week. And after you graduate from Georgia Tech, what are you going to do then? I'm going to come back to MailChimp and work there full time. Version 2.0 of our API did a lot of things well. About 250,000 users hit it every single day. Billions of requests come and go. And it's fairly popular. We did a survey not that long ago, and we found 75% of people thought that it was good or excellent, and those are really good numbers. But as we were working with it ourselves, we discovered that there were some pretty big issues with the design of it and with the things that people are trying to do with it. And so this talk sort of came out of that process of us designing our newest version of our API, which is currently in beta. It was supposed to be released a month or so ago, but software happens and so it's late. Also, this is going to be a fairly interactive session. So during several portions, you're going to need to probably chat with your neighbors. So if you would, just take a moment, introduce yourself to any of your neighbors you don't know; you're going to become good friends by the end of this. So just make yourselves friendly. I think you've got the next one. I want to know the gender ratio of the whole conference. I'm just curious. I think it's pretty good, but it's still kind of white male-y. Yeah. I mean, just looking around, from my perception there's average-ish one woman a row, or maybe one in eight. Okay, okay, okay. Not that friendly. Okay. All right. Let's bring it back. Bring it back. Bring it back. I've lost the room already. All right. Bring it back in. Awesome. Awesome. All right. So y'all probably at least kind of know what an API is already, but it stands for application program interface. APIs are pretty much everywhere. If you've done any programming at all, you've used an API, because that's just interfacing with that program. And APIs are just the way two pieces of software communicate across a boundary, and there are all kinds of different boundaries everywhere you look. Ruby blocks are an example of a boundary. Language features typically are APIs of that language. Class methods, more generally object-oriented programming: all of the classes that you generate expose APIs. And then of course the thing that we're here today to talk about, web services. They could even be microservices if you wanted them to be. Ah, yeah, yeah, I know. And the principles that you use to design a web service API are going to be very similar to the principles that you would use to design any other kind of API, but we'll be using web services as sort of the thing we talk about today. Is it possible to dim the lights? I don't know. Is it possible to dim the lights? Sure. I guess it's hard to read in the light.
Is that what I'm hearing? The screen is hard to read. He's working on it. It wouldn't be my first. All right, so while we're working on that, I'll just keep on trucking, I guess. Developer-focused APIs sort of mirror the movement we've seen over the past, who knows how many years, to make user experience in web apps a focus, I guess. And I've noticed that that hasn't really been mirrored in API design. APIs are still very much just sort of the wild west in terms of how they look. And so when we talk about developer-focused API design, we're talking about optimizing for the end user and not for your back end systems. Many APIs are designed to make them easy to implement and to maintain, but they're not necessarily easy for the end user to use. And when you focus on the end user, the person who's going to end up eventually using your API, that leads to great experiences in developing those. So what do developers want when they are using your API? Well, if you think about any time you've used an external API, developers want basically one thing. They want to get in, they want to get done, and they want to get out. And they never want to touch your API again. Ideally, that is the thing that they want the most: to just get their sprint done and be done. So never return is sort of an important part of that. Because once they write something using your API, you can't just change how it works, because then you'll break everybody's everything and everyone will be sad. So you have to version your APIs, which is really important, but also the versions that you put out have to be robust and functional. So who here has had a bad API experience? Raise your hand, show of hands. All right, so a couple of you. Good. So take a moment, let's say 90 seconds, and talk with your neighbors about what made those experiences bad. What was it about the API experiences you had that made them challenging? And when you're done, I'm going to ask you to actually contribute and raise your hands and tell us what it was that you thought. I never go anywhere without a water bottle, and I sort of have not been living in my apartment as much as usual. And so I keep losing water bottles. I don't know where they go. I'm working on the lights. I just want to give them a good, more visible... They're just kind of remote. You just got to get that. Really appreciate it. Thank you so much. Doing his job at one of these would be awful. My dad does lighting in theater and television, but this has just got to be so tedious. It's funny, because Mara was like, well, maybe you ought to do a light version of the slides in case it's hard to read. And I was like, yeah, I should totally do that. And then did not do that at all. That's okay. Half of the room can see. That's all you need. Yeah. All right, so this is where it gets interesting, I guess. Yeah, try avoiding naming specific companies, if you can. No, no. Okay, you don't have to. That's fine, too. All right, so why don't you take over the rest of this slide, I guess? Okay, if you want me to try and bring them back now. Let's give them until an hour 30. Okay. So until we get down to 30 seconds over here. And then we'll probably have them come back down by one hour. Okay. Okay, let's try to bring it back. Come back. Cool. All right, so raise your hands. We're going to try not to name specific companies if we can avoid it. Please.
But can you all raise your hands and tell me about some bad experiences you've had with APIs and why they were bad? Yeah, we've got one that we use where we have to make a GET call to create data. Okay. Basically the entire hotel industry runs on horrible XML. Okay. Sure, yeah. So we have, in the back, we had a GET call used to create data. We've got bad XML up front, and documentation, bad documentation. Recently we had to work with a vendor that returned a 200 status code with a 500 body. Quit stealing my future slides. Okay, so yeah, that sounds like there are a lot of things there. So creating resources that don't give you the resource back, and then you have to make another call to get that resource. So lots of API calls to do one thing. What else do we have? One where you can't get just the simple thing you want: it does all this stuff, and all this stuff, and all this stuff, and it goes on. Yeah, sure. Let's get somebody from over here. A lot of, to repeat, just like a lot of baggage, it does a lot of extra stuff. Yeah, about your point. Payment system with no sandbox mode? Why would that be a problem? So, payment system with no sandbox mode. Let's do Michael in front. Schemaless XML APIs where you're passing strings around instead of actual structured XML. Okay, so schemaless APIs. I'm sure there are a thousand more ways. We could probably fill up the rest of the hour just talking about things that are bad. But the short version, I think, of a lot of what you're talking about is: easy stuff is hard. There are too many endpoints. Simple stuff is complicated. The docs are wrong or incomplete. Maybe no wrapper libraries. There are some other things folks said. But yeah, so that's a bad experience. And raise your hand if you think you have more good API experiences than bad experiences. Raise your hand. Okay, all right. Well, that's more people than I thought would raise their hands, but it's still not very many. So today, our hands-on design work is going to be using real MailChimp endpoints. We're going to start by learning a little bit about MailChimp. You can't design an API without knowing the product that the API is going to be used for, probably really well. And you also can't design a developer-focused API without knowing what the developers are going to use the API for, also probably really well. And like with a degree of nuance, not just like, well, they could do this, but more specifically what is more likely to happen. So that brings us to the advertisement portion of the talk. MailChimp is an email service provider. Our mission is to help people send better email. And one of the reasons that we have an API is that one of our co-founders, many years ago, eight, nine years ago... he always tells the story where eight or nine years ago he was at a fork in the road and he had to choose: do I hire a developer to build an API for this new email service company, or do I hire a salesperson? Fortunately for me, he decided to hire a developer, and currently MailChimp has zero salespeople. So it seems like a good decision. So what are our main features? Lists, campaigns, and reporting. Who are you sending to? What are you sending? And what are people doing with it? I guess I... The other thing that we need to talk about is who uses the API. And for MailChimp, our particular use case, there are a couple of different options.
There are people who use the API directly: your company wants to get your own data into MailChimp or vice versa. Our mobile apps are sort of an example of that. But then there's the case where you might have data in a third-party web service that you want to communicate with MailChimp, say your CRM or your Shopify shop or whatever else. And so our API also enables that level of communication, where the user might not even know that an API call is being made, but on the back side there are a ton of them. Usage patterns. That is roughly... That's about a week's worth of API calls to version 2.0 of our API, and that's how they break down. The overwhelming majority is list management: getting people on the list, getting people off the list. Reporting is a close second, and then campaigns, which is probably the biggest thing that MailChimp does. If we didn't have campaigns, like, what would we do? Almost nobody uses that through the API. And so you have to kind of... Well, I have to ask myself, is that because that's just not something people want, or is it because our API is so bad that they can't? So if we're going to look at API 2.0's list management endpoints, what we saw is that is the most important, the most traveled part of our API. So we've got these endpoint lists that we're going to attempt to hand out. Right. So what we would... Ideally what you'll do is, in a minute we're going to go over list management in a little bit more detail, what list management means at MailChimp. And while we're going through that, keep a hypothetical company in mind. It could be your own company. It could be one that you've just made up. So for example, we'll give you some examples of an online store that sells, like, pens and pencils and such. And then once you're done thinking about that, you'll have some time to chat with your neighbors. And the reason we did a printout instead of a web page is because last year the Wi-Fi was not sufficient, but this year I hear it's great, but there's still no web page. Sorry. So, yeah, so we'll talk about list management and then we'll get to the meat of showing you our dirty laundry here. So we have different parts within a list. You obviously have to be able to edit a list. So we were talking about adding people to lists, removing them. We also have merge fields. Merge fields are arbitrary data that are set up at a list level. And then we also have interest groups, which are Booleans; favorite color was one example, it doesn't really matter. Yeah, and you can set those up. So merge fields might be like a first name, or the last time they bought something from your store. An interest group is a little bit more of, like, they are interested or not interested. Managing subscribers is another big one: getting people onto the lists and off of the lists, and keeping those fields that we just talked about up to date. So did you change your name, or did you just buy something in the store, or have you decided you're no longer interested in colored pencils? Segmentation: once you have them on the list and you have all this data about them, you then want to use that to send different emails. So if somebody's bought a lot of pencils, maybe you want to send them an email about a pencil sale that you're having, if that's a thing that happens. And then once all that's done, your boss wants to know about statistics and reporting, and who's opening and who's clicking, and what links, and all of that.
So that is MailChimp's list management in a nutshell. One thing I'll say: don't go from this talk and then go try to use our API, because we're leaving a bunch of features out in the name of simplicity. So I guess then we have to talk about what's our intent. So among yourselves, with your hypothetical companies in mind, let's talk about what sorts of things you might want to do with the API as your hypothetical MailChimp customer. So let's take two minutes and talk about what the big things are, and then we'll come back and chat about them. So go. Did they manage to get this side dimmed at all? I don't think they managed to do that yet. So for that last one, they were doing a lot of really good deep dives and really good examples. So hopefully that'll keep up. I think if you're here, you probably are frustrated. You've got some reasons. I wonder if anyone here, or how many people here, have actually worked with our API. That's a good question. I do a lot of hypothetical research studies in my mind where I just wonder about things like this. You should build an app. You would have it at the conference and it would send a push notification, like, blah, because you're in this room or whatever. I guess maybe another minute. There seems to be chatting. Except for those two who are chatting with their phones. Maybe they're bad at personal interactions. That's fine. I do that sometimes. Text people I'm in the room with. I'm tired. Long week, and it's Tuesday. All downhill from here for me. Our senior design presentation... The app actually looks really good. As long as you only do the features that work, they work really well. There are a couple of things that look like they work but don't. Three steps down the line, you're going to not be able to get any data, but that's okay. How much of that is free range? Will your professors be like, oh, that's fun? They're not going to know. We just stand in front of the whole class and we've got our preset demo that we're going to do. We know that this building works, all the floors. We know where the bathrooms are. We know that the dots should appear in the right place. We were doing edits this morning, and one of my group members was like, y'all, y'all, we are presenting very soon. I need you to finish. You're making me nervous. Alright. Okay. Hey everybody. Alright, let's finish it up now if we can. Sorry, I know I'm interrupting a lot of you mid-sentence. But if you could just... Hey, y'all. Okay. Alright. Let's try to finish it up. Thank you. Alright, let's talk about it. Let's share with the group. What sorts of things might you want to do with the MailChimp API? Who knows a thing? Nobody knows a thing. Okay, you might want to send an email. That's a really good idea. What else might you want to do? Newsletter? Newsletter. Okay. What about, like, list management stuff? Yeah. A person registers for one of our systems; we need an API call to register that person with MailChimp. Okay, great. That's a really common use case. What else? You want to repeat that? Oh, yeah. Sorry. Sorry. Yeah, sorry. I'm really bad at that. When somebody at his company's store buys something, they then get registered on the MailChimp mailing list. Hopefully only if they check a button that says, I want to be on the mailing list. What else? What else might you want to do with these merge fields or any of that? Okay, yeah. Combine lists. Okay, what do you mean by that?
Take two lists and make a combined list that has both of the first lists. Okay, sure. We typically recommend that you not have more than one list in the first place. There are some use cases where you would want to, but we tend to... That's what we have the interest groups for, and the segmentation. So what you would typically do is have both of those lists as one list, and then when you want to send an email to just part of it, you would segment it off for that one email. But definitely a use case that some people have when they have multiple lists, for sure. Yes? A combined call where you're subscribing someone to a list but also adding them to your interest groups at the same time. That's right. Okay, great. So being able to set somebody's interest groups as you subscribe them. Anybody else? Okay, yes, sir? So you're setting up custom columns, for instance, from the store side? Sure, yeah. So the merge fields: being able to create those merge fields or create the interest groups through the API. That's especially useful, like I think you were alluding to, if you create a new category for markers, maybe adding an interest group for that automatically as the category gets created in your store. Michael? So you can add a local identifier to a customer's record in MailChimp; that way, when you get information back through a callback API or something, you can identify what user is associated with that email. Sure, so like their internal, your internal... My user ID. Right, so adding your app's local internal ID to their MailChimp profile so that you can link those up when the data comes back, say through a callback or whatnot. Okay, let's see. Let's see some of the other things. I think we've covered almost all of these. And so now what we're going to do is we are going to pass out these endpoints. These are just list management, and just version 2.0. And what we'll do... I'll come and cycle these around. And just take a few minutes, look over it, and figure out what might be hard. What about these calls is either hard to learn or hard to use, or anything like that? So give us just a second and we'll get these passed out. Thank you. Yeah, so while this is all going on, just take a couple of minutes, look through those endpoints, and see if you can find any problems. Don't be shy. We know this is bad. Sure, like what's bad about it? Yeah, like what might be hard to work with or otherwise? Hard to use, hard to understand, like what's confusing. All right, so the handout took a little bit more time than I was hoping, so let's go ahead and let's talk together about what's wrong with this. And again, don't be shy. Feel free to be as brutally honest as you need. Everything is a POST. Everything is a POST. It's RESTful though, right? No. What else? What else is wrong with it? Right, so lots of endpoints, so many endpoints. What else? Oh, and he said the static segments and dynamic segments could really just be one set of endpoints. What else? The API key is in the body. The API key is in the body of the request. That's bad-ish. Right, so there are a bunch of other endpoints in the API that are prepended with campaign or whatever, so that is maybe more of an artifact of our... Right, so no context. So as documentation, the handout is really awful, and I'll give you a hint: the documentation on our website isn't much better. In the back?
Right, so possibly the subscribe and batch subscribe could be the same endpoint that just takes more than one thing. Yes, sir. And aside from it just not being RESTful, what's hard about verbs in the URL? Right, okay, great. So we'll talk a lot more about that in just a minute. What else? Right, so a lot of parameters for what should be a really simple call. Okay, let's do a couple more. What else do we have? Yes, sir? That one gets us to this day. Okay, interest group versus interest grouping. We're renaming that in 3.0 for exactly the reason you described. Yes, sir? Right, right. You want to repeat that one? Right, yeah, so there are just some random hard limits in the activity endpoint, where it's like, oh, it's 180 days. Well, why? Like, why not 30 days, why not 70 days? Yes, ma'am? So the batch subscribe endpoint has a lot of, like, you, human, figure out how to scale this yourself. Whereas we should probably do the scaling for you, or just set a limit at a place where we know we can handle the volume. So those are all really great, and we could probably talk about what's wrong with this for hours. I know I have. We do. So many methods. API key as a parameter. Inconsistent naming. How do you update merge fields? Just lots of... It's just hard. It's just two pages to describe how to subscribe somebody to a list. So let's talk about the principles of developer-driven API design. There are really only two, in my opinion. You need to design for intent, and you need to limit your API's mental footprint: how hard is it for somebody to load the API into their own brain? So what does that stuff mean? So if you look at this, if you're thinking about what Google has in Gmail, right, maybe you've got conversations and then you have the individual messages in a conversation, right? And you could set those up in a variety of different ways. You could make them completely separate calls. There are also things you want to know to make this particular screen happen. So if you know that your intent is to make Gmail, you know that you want to be able to get the subject of the conversation, but also a snippet of the most recent message. And making both of those calls every time for every email could be really a pain and a lot of extra things to do. Right, you end up with an N plus one querying situation where you have to get the list of conversations and then make a query for every message. When you design for intent, you put that snippet in the first list, so that it's only one call to get the whole inbox. That makes everybody's life a little bit easier. So look at this for a minute and tell me, I mean, try and see if you can see what's weird about it. So there's a light there for the air conditioning, and the light is on, but the thing underneath the light says AC off. So the light is on when the air conditioning is off, which doesn't really make any sense. You're going to look at that and say, oh, the air conditioning is on because the light is on, because that's the way everything else works, ever. But no, that's not how this one works. Right, so if you live in Atlanta, a light that tells you when your air conditioning is off in the summer is probably how you might design it from the beginning. But people who have gotten into your car have gotten into a million other cars. Well, that's maybe an exaggeration. They've gotten into a lot of other cars, and in every other car, when the AC is off, the light is off.
And so in this way, you don't want to be blazing new trails with your API most of the time. So follow, don't lead. Use HTTP Basic Auth if you can. If you can't, use OAuth, and use whatever version of it works easiest for you. Accept and serve JSON. Who can think of a reason why you might not want to use JSON? There are a couple. Yes, there in the back. Right, so use cases where you might need to serve XML. You might also not want to use JSON if you've got a ton of binary data; turning that into base64-encoded text is painful. So there are reasons, but start from the default of accepting and serving JSON. Make your API as RESTful as possible, and we'll talk about that in a little more detail shortly. Going along with the mental footprint of your API, abide by the principle of least astonishment. Use your HTTP methods properly. Use your HTTP response codes properly. And as a callback to what a gentleman over there mentioned earlier, please don't ever do that. Please. You've got a 200 OK response code and then a success equals false in the body. Please just don't. That should be a 400 or maybe a 500. Who knows? Who knows what's wrong with it? It's just not successful. And then if you take nothing else away from this talk about developer-driven API design, please remember this: don't be clever. One of the biggest insults that somebody can give your API is that it's clever. It needs to be accessible. It needs to be easy. It needs to be simple. Clever? Save that for, I don't know when, but please not for your API. Clever becomes confusing, right? And you want people to be able to not read every piece of documentation that has ever existed. Right, so let's talk about... Right, so think about if you have a... I'm trying to think about the cleverest thing I've seen in an API. A lot of it comes back to using the standards properly. You could be clever by choosing a really... Oh, I'm going to use 422 as my response code because the description of 422 sounds sort of like what's going on. I don't know what 422 is. What's that? Unprocessable Entity. Right, so that's maybe a bad example. That might be a good one. So there are some HTTP response codes, for example, that mean something very specific. And if a client is following the HTTP spec, the client is then supposed to do something else, right? A good example of the opposite of that is, if you tell somebody the method isn't available, you should return what methods are available in the headers. So you can imagine situations where you just don't want to do something that somebody is then going to have to go read documentation to figure out. And especially you don't want to do something clever that somebody might think does something else entirely because of the context that they're in, but really it does this other thing. I'm sure other people have examples of cleverness in APIs that is painful, probably even in MailChimp's own API. But we can talk about that a little more offline after the fact. And I can think of some better examples, maybe.
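To make that wire-level point concrete, here's a hypothetical before-and-after; the body shape is invented for illustration. The anti-pattern forces every client to parse the body before it can tell whether anything worked:

```
HTTP/1.1 200 OK
Content-Type: application/json

{"success": false, "error": "invalid api key"}
```

The honest version puts the outcome in the status code and keeps the detail in the body:

```
HTTP/1.1 401 Unauthorized
Content-Type: application/json

{"error": "invalid api key"}
```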
So let's talk about REST. REST is an architectural style. It's not a standard. It was coined in a PhD thesis. And the World Wide Web is living proof of the power of it. That's why it works. That's why we can do all the things that we do with it. So if the World Wide Web worked like APIs do, trying to figure out where to go for lunch after this talk... I mean, of course, what you would do is you would pull out your phone book, and you would turn the page to Yelp Incorporated, and you would call Yelp Incorporated and you would say, hi, Yelp. I really need to make a search on your website. So could you give me a list of the data you have available and how to access it? And some nice person on the other end of the phone would say, sure, here are the endpoints and here's how you get them. And then you would say, thank you. You'd hang up the phone, and then you'd open up Sublime Text or Atom or whatever you're using these days. And then you would write a client, a web client, for Yelp's website. That is how we write every API client ever. But REST helps the web make it so you open up Chrome and you go to any website anywhere and it works. And so when people ask, oh, why REST? That's why REST: because the network benefits of everybody doing the same thing are huge. And if the web did work like APIs do, the desktop on your laptop would look like the home screen on your phone, if you were to click yes every time some website asks you to install their mobile app instead of going to the mobile site. So, XKCD to the rescue there. So, HTTP at a glance. With HTTP, there are just a few methods, right? That's kind of the point. You can create things using POST or PUT. You read using GET. You can update using PATCH or PUT, depending on the implementation. And then delete is pretty obvious: use DELETE. The response codes are also preset for you. You've got your success, your redirect, and your error response codes. And this is sort of what we were talking about earlier. You should return the error code that goes with the error that's happening, or the success code if it was successful, and try really hard not to mix those up, because it's not helpful. Right. An example here of cleverness that just came to mind: there's actually a 4xx error, 429, that says you are making too many calls too quickly. But with 400s and 500s, the general idea is that if you get a 400-class error, you shouldn't repeat that call, and if you get a 500-class error, try it again later. So if somebody is overusing your API and gets blocked, you might think, oh, it's clever, I'll send a 500 response code back, because that'll tell them, you know, try again later when you're unblocked. But a 400-class code would be far more descriptive and would help them to understand, oh, I'm actually doing something wrong here; it's not the service that's broken. So that's just one example that came to mind. So REST is an architectural style with six constraints. Many of these you get for free just by using HTTP properly: the client-server model, the stateless server, cacheability, and the layered system. Those are all basically, if you're doing HTTP properly, you get all those. Just wipe your hands, you're done. Code on demand is what enables JavaScript and Java applets and whatever it was that Microsoft tried to do, ActiveX components or whatever. That's the code on demand part of the spec. But the thing that really, like, the reason we're here, is the uniform interface. That is the part that makes or breaks your API, really. So the uniform interface constraint, you can think about it in four different ways. Resource identifiers: everything should live at its own address. And that works for developers, because we are all well accustomed to file systems. We're well accustomed to our files living in certain places. And so it helps people understand your API better, because it's something they don't have to learn all over again.
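As an illustration of resource identifiers and the standard verbs working together, here's a hypothetical sketch for the list-members case; these paths are made up for the example, not MailChimp's actual routes.

```
POST   /lists/{list_id}/members          create (subscribe somebody)
GET    /lists/{list_id}/members          read the whole collection
GET    /lists/{list_id}/members/{id}     read one member
PATCH  /lists/{list_id}/members/{id}     update just the fields that changed
DELETE /lists/{list_id}/members/{id}     remove them from the list
```

One address per resource, five well-known verbs, and nothing new for the developer to memorize.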
If you look at MailChimp's list APIs, you have to learn every single endpoint just to figure out what anything does. Resource representations is the idea that a resource is not the same thing as its representation. So one example of different representations is you might serve XML, you might serve JSON, and it might be the user's choice as to which one, which representation, they request. You could have different language translations as different representations. You could even have different representations that have different fields in them, if you really wanted to. So one of the biggest parts, one of the places where your API will fail to be RESTful, if it fails to be RESTful, is in the self-describing message portion. Typically, and what Roy Fielding recommends, is that if you were going to have a self-describing message, you should register your own media type with the IANA. And that media type should describe how to consume your API messages. We haven't done that for version 3.0 of MailChimp's API yet, in part because it's still in flux, but also because if you've got to go read that media type, I mean, you might as well read anything else. And so we use JSON Schema, and in version 3.0 we pass back a link to the JSON Schema that describes the document that you received. API 2.0 fails because, in order to understand the response that you get back, you have to go read a giant page of documentation that tells you whatever field it is and how to process it. And then hypertext as the engine of application state: this is what enables the World Wide Web, for you to just go to yelp.com and then eventually find a restaurant at which to go to lunch. And in APIs, this is sort of a controversial thing. The idea of HATEOAS in an API is that you can find anything in the API by hitting the root of that API; it will give you back a list of links that you can navigate from place to place, just like a website. PayPal's API does this, actually. The problem that you have with HATEOAS, if you abide by it strictly, is that it's a lot of calls to trace the API all the way back to, you know, whatever sub-resource you're operating on, every time you need to make that call. So in version 3.0 of our API, we have HATEOAS links. We do not expect you to trace them every time you have a deep call to make. We expect that you will bookmark a page, to use the metaphor, inside of our API. But the links are there for discoverability. It means that you don't have to read a single page of documentation. You get your API key, you call the version 3.0 root, and you can then navigate and see everything there is to see. If you add in reading a couple of JSON Schema documents, you can actually figure out the whole API without hitting our docs page. And that is what a RESTful API looks like. The Richardson maturity model describes this in a slightly different way. So if we start from the bottom, you've got plain old XML, and then the resources and the verbs, and it's not until you get to the verbs and the hypermedia controls that it actually starts to be RESTful. If you just have XML, or you just have XML with resources, you're not RESTful. You have to have the verbs and the controls, at which point you get to this magical REST thing. This is not to say that REST is the only way or the best way. It is probably the most commonly understood among web services today. And so having a RESTful API confers a lot of benefits.
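To make the hypermedia-plus-schema idea concrete, a response with discoverable links might look roughly like the sketch below. The field names and URLs here are hypothetical, loosely inspired by the description above, not the actual 3.0 wire format.

```json
{
  "id": "a1b2c3",
  "name": "Pencil Enthusiasts",
  "_links": [
    { "rel": "self",
      "href": "/3.0/lists/a1b2c3",
      "targetSchema": "/schema/3.0/lists/instance.json" },
    { "rel": "members",
      "href": "/3.0/lists/a1b2c3/members" },
    { "rel": "parent",
      "href": "/3.0/lists" }
  ]
}
```

A client that understands the link and schema conventions can start at the root, follow the rels, and learn the shape of every document it receives, without ever opening a docs page.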
But maybe you're sitting there thinking, this is just not what I signed up for. I did not want to spend an hour listening to people blather on about REST. But it's important to understand, if only so that when you finally release your API and some smug developer on Twitter goes, well, that's not really RESTful, you know how to respond. You at least know what they mean when they say that, and you can maybe say intelligently, well, we decided to deviate from REST in this way because of X, Y, and Z use case. If you don't just ignore them, which is what most people do. Sorry. So let's get down to business. Let's fix the mess that we handed out to you earlier. And just to reiterate the disclaimer: you are not seeing the entirety of the MailChimp system. So if you're watching this on Confreaks later, don't immediately go try to make API calls based on these docs or anything, because it's just a subset. So let's go back. In order to design this API, we have to think about what the intent was. So we've got to remember: our intent was to get customers onto our list, keep their data up to date, organize our list, send things, and also have stats to make our boss happy, so we know what's going on. So, is there a good reason not to use REST for this list management endpoint? Can anybody think of one? No. I think that's the right answer. In my opinion, that's the right answer. There's no good reason not to use REST for this set of endpoints. So where would we start? No need to raise your hand. Now, where would we start making this mess RESTful? Okay, what might the resources be? Okay, a subscription could be a resource. A list. A list subscription. What are the resources? Anybody, shout them out. List member would be a good one. You could subscribe, then, by creating a list member. So that's another option to having subscriptions as its own resource. Interest group and interest grouping, I guess, if you're going to go that route. What else? What other resources might one have? A user. Yeah, in the larger API a user might be a resource. Okay, well, we've got some other things. We've got static segments, right? That should be a resource. A report, right? The stats? Yeah, the stats of the different reports could each be a resource. Right, yeah, so there would be an entire way to manage campaigns or emails as their own resource. We also talked about merge fields. Those would be resources on the list, right? But there would also be resources on each individual member: the data values of those things. And so depending on what your use cases are, you might actually have a sub-resource on each member that shows their merge fields, or shows their interest groupings. So, all right, so take a minute and talk amongst yourselves about, once you have those resources, what I want to know is: if you were trying to subscribe 300 people or 3,000 people or 300,000 people to your list, what would you expect that to look like? What calls would you expect to make on our hypothetical brand new API that we've just designed in our minds? So take... Or if you know, does anybody know off the top of their head how they would want that to look right now? Okay, good.
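While the room compares notes, here's one hypothetical way that resource brainstorm could shake out as a tree of identifiers; the paths and nesting are illustrative, not the shipped 3.0 design.

```
/lists
/lists/{list_id}
/lists/{list_id}/members
/lists/{list_id}/members/{member_id}
/lists/{list_id}/merge-fields
/lists/{list_id}/interest-groupings
/lists/{list_id}/segments
/lists/{list_id}/segments/{segment_id}/members
/reports/{campaign_id}
```

Each line is a noun; the standard verbs do the rest, so subscribing somebody is just a POST to the members collection.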
So what might be the reason... So I know when I was talking earlier, or when Kate was talking earlier, about having all of the data available in one call, I got some dirty looks from somebody over here on the side of the room. What might be a reason why you wouldn't want to do that? Okay, sure. So you might have some scalability issues with pulling all that data and sending it all. Are there other reasons you might not want to do that? Just sending everything back? Sure. Sure. That's definitely true. Can you repeat that one? Right, yeah, that's great. So if you're doing batch processing, you have to have a sane way to handle errors. If you send 100,000 subscribers and three of them have errors, well, how do you deal with that? Okay, did somebody up here have one? I thought I saw a hand. Yes, right. And so why is duplicating functionality in your API potentially problematic? Right, sure. So it might just be confusing. If you go totally overboard on designing for intent, you run the risk of creating too many endpoints, too many different resources that people have to learn. And that's where the tradeoff of this design happens. If you don't go overboard with designing for intent, then you end up seeing the opposite problem, which is: to do one thing in your mobile app, you have to make 14 queries. And your mobile developers come to you and yell at you, because your API is just too slow. Yes, ma'am, what did you say? Read. Sure, read-read conflicts, write-write conflicts, if you duplicate the functionality. And then you've also got just the implementation detail of, like, if I've got three different pieces of code doing this, you've got to make sure your architecture is such that you're not copying and pasting code all over your API. So that two-pager that you've got in front of you: if you follow RESTful principles, and we talk about the resources we just talked about, with lists and segmentation and that sort of thing, it can be summed up like this. You can end up with, what is that, six, a half dozen resources that you can operate on using very basic HTTP verbs. And you end up with a far easier thing to understand. Now, there's still the issue of what parameters are passed, what data comes back, and all of that. So this isn't the full documentation, obviously. But I hope that you'll all agree that this is far less daunting and far easier to understand and navigate than the two-page, like, ten-point-font mess that we handed you at the beginning. So that is the gist of it. We are getting close on time, and I wanted to leave a lot of time at the end of this for questions, because I figured you would all have very specific things to ask. And so we're going to talk very briefly about how we worked with API version 3, both evaluating it and then some of our implementation details. And then I'm going to leave a lot of room for questions, for you to ask about your own things, because we understand that this worked for us, but you might be wondering how it maps onto your own implementation. So this is how we discovered that 2.0 was a mess. And so it doesn't look sketchy, I'm just going to announce that you're leaving. Kate has to go. She is being given an award by her university, and she has to actually go receive it. So thank you, Kate. So: we got direct feedback from our users. We looked through our support requests. We had a ton of them. We looked through social media complaints. This sucks.
I'm having a hard time. I don't understand MailChimp. And sometimes those were user error, and sometimes those were MailChimp error. We actually sent out surveys. We found people who were using our API, especially the other web services that were integrating with MailChimp, and we asked them what's hard about it. And because, as we sort of saw earlier, most of you have more bad experiences than good experiences with APIs... Excuse me. A poor user experience on an API is the default. So in my opinion, if people don't love your API, it's probably not very good. MailChimp's API is not very good, and it was relatively well received: 75% of people thought it was great or excellent. So when you're getting this feedback, keep it in that context. If people aren't over the moon in love with it, there's probably a long way for you to go to get ahead of the pack. And the pack is not in a very good place right now, so you definitely want to be ahead of it. Hints from usage patterns. Really, really important. Collect data. Log everything about your API. It sounds hard, especially when you start doing volume, but it's really important that you keep track of every call and as much data about that call as you can. Excessive calls to certain endpoints relative to the customer size: for example, if we have a person with a 600-member list and they're making 600 calls a second, there's probably something wrong, and that thing might be that they've made a mistake, they don't understand something about the API. It could be that we're not enabling a really obvious use case that they need. Underutilized endpoints for key features, like our campaigns. High error rates on certain endpoints or for certain users can sometimes indicate that there's something confusing about them. And then really common but oddly specific queries. So if, for example, you see the vast majority of your queries to a certain endpoint include the same sort parameter, maybe that ought to be the default. So think about that. And above all, when you're looking at usage patterns, collect as much data as you can. Just be curious. Browse through it every once in a while, search it in different ways, try to have some kind of easily usable dashboard. We pump all of our data into Elasticsearch and then have a nice dashboardy kind of thing on top of it. And so that really helps us to see error rates and to pare things down differently. And then above all, when you're looking at this data, start from the assumption that your users are not idiots. Some of them will be. But assume that they're not to start with, and that will get you a lot of information. Even when people are doing things in what you think is a stupid way, it might be because that looks like the best way from your documentation or from your endpoints. And then a couple of rules of thumb we have for bad API design. The best way for you to find these out for yourself is to code against your own API. Eat your own dog food as much as you can. And if you don't have any internal need for using your own API somehow, make some sample apps. Like, do the work. Do you have to make frequent references to your own API documentation? I do. What I do every single day is work with the MailChimp API. And when I write code against version 2.0 of the API, I have to go back to the docs a lot. And that alone is enough for me to say that we've made a lot of mistakes.
If you find yourself copying and pasting from your examples a lot, maybe the API, maybe the wrapper library you have, is too verbose. Long argument lists are often a hallmark, especially in wrapper libraries. And then if you ever take something out of your API and immediately, every time you get it, turn it into something else, then you probably have a bad representation of that data somewhere. So for example, in one previous version of our API, we have a merge field called address that you can use. It's just a data type. It has, you know, address 1, address 2, city, state, postal code, you know the thing. And at one point, we were actually serving it as a two-space-separated string: all of those fields jammed together in that order. And nobody in the world needs the data in that format. And so you would get it out of the system, and then you would immediately explode it on those two spaces, and then, hooray, I've got the thing I actually wanted. So that's one of those where, if you just start using it, you'll start to pick up on those patterns pretty quickly. And then some things that you can look at for your own APIs. We use JSON Hyper-Schema to define the API, define all the endpoints, define how links are served for HATEOAS. It's currently making its way through the standards bodies. It's not a spec yet, but hopefully one day. We are looking into both Swagger and RAML. We haven't really made a decision yet, but hopefully when we launch, we'll launch with support for one of those two. If you are using something like JSON Hyper-Schema, you should be able to use that to generate wrapper libraries, to generate documentation. And one really easy way to start finding problems with your documentation and your API is: how hard is it to write the auto-generator for the documentation? If that becomes really hard, then maybe think about why that is. If it's hard for you to write a program to understand your API, how is anybody else going to do it? And then make sure you have wrappers and sample code. In version 2.0, our code samples are actually apps. We have a Rails app, I think it's still on version 2.3 or something crazy. We have a Node app. We've got apps for all these different languages, which, first of all, is not what anybody wants to see. If you're trying to use the MailChimp API, you don't want to see a Rails app. You want to see the code that actually interfaces with the API. And also, that becomes a nightmare to maintain. And so try to keep your sample code easily copy-and-pasteable. Get rid of as much fluff as you can. You don't need to give people examples, as we did, on how to build apps. Just show them how to use your API. And then make sure that you're thinking about catering to different kinds of learners. Some people want to read the docs, so definitely have docs. Some people want to just play with your API. So if there's time at the end, I can show you the sandbox that we built, which auto-updates based on JSON Schema and lets you just browse version 3 of our API. And then just sort of think about how different people are going to learn your API. And then architecture and scaling concerns, those are things that you just sort of have to learn the hard way. Eventually, somebody is going to use your API in a way that you hadn't anticipated. And they're going to be calling a very expensive call frequently, at a high rate, and it's going to take something down, and you just have to figure out a way to handle that.
So we've got, if I'm reading this correctly, maybe 20 minutes left. So let's go ahead and take some questions. And we have enough time that we can get pretty deep into some of these. So, yeah. So the question is: let's talk about authentication. Signed requests: my experience and knowledge of signed requests is much more limited than of basic auth. But where I've seen signed requests be most useful is in single-page apps: if you're doing the thing DHH doesn't want you to do, which is use Rails as an API and then put a JavaScript MVC on the front. My understanding is that signed requests really help with that aspect. But think about the reason why I love basic auth for APIs, if it's possible: the amount of time that it takes for your new user to go from "I want to figure out who's on my MailChimp list" to "I now have a call that demonstrates that" is tiny. You can do it in three lines of code. You pull open HTTParty, or if you're using Python, you pull open Requests, you hit one endpoint, boom, it's done. All of those libraries understand HTTP basic auth at a fundamental level. You can even do it in your browser, putting the username and API key with an at sign before the host in your URL. You can do that in the browser. So HTTP basic auth, if it works for you, is amazing because it's supported so widely in so many different tools. There are times and places for other things, definitely evaluate them all, but if you can support multiple kinds of auth, that's also good. Absolutely. That's a great question. So in version 3.0 of our API, we deviated from REST in one specific place, I think. I'm not entirely sure if this is actually a deviation, or if it's just a deviation from what people think of as REST. I haven't gone through the thesis to figure it out. But one of the things that people tell you about RESTful APIs is that you want to get verbs out of your URLs. Well, that's great, but there are some things that are really, really weird when you try to turn them into resources. So for example, we've got email campaigns. And maybe I'm just not as smart as some other people, but I couldn't figure out how to get sending a campaign into a resourceful architecture that made any sort of sense and wasn't overly complicated just for the sake of being RESTful. So we follow Heroku's recommendation, which is: if you have non-RESTful actions, basically, if you have actions that aren't like HTTP verbs, if you're doing something other than CRUD, essentially, have an actions sub-resource and then the action after that. So in version 3.0 of our API, if you want to send a campaign, you go to campaigns, slash the ID of that campaign, slash actions, slash send, and you POST to that URL. And that's one of the biggest places. We do that in several places where it just doesn't make sense. So you could have a "sends" resource that you then create one of. And we thought about that, but that to me seems a little contrived. And maybe some of you disagree with that, but we thought it was contrived too. We're just like, ah, send. It's RPC, but we need it for this. So that's one example.
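In code, hitting that kind of action endpoint over basic auth might look like this; the URL and credentials here are placeholders following the pattern described, not MailChimp's real endpoint:

    require 'httparty'

    # Hypothetical actions sub-resource, plus the 'three lines of code'
    # basic-auth experience described above.
    response = HTTParty.post(
      'https://api.example.com/3.0/campaigns/42/actions/send',
      basic_auth: { username: 'anystring', password: 'YOUR_API_KEY' }
    )
    puts response.code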
Sure. Yeah. And if we don't have... Right now, we don't have a tremendous level of introspection into our delivery side from the web app. So we don't know how many have sent in the web app. Delivery knows that, but we haven't pulled those APIs together yet. But yeah, that's certainly... I mean, I'm not saying there aren't ways to do it, but since you can only send a campaign... And this is maybe a feature of our product: you can only send that campaign one time, ever. If you want to send it again, you clone it, create a new one, and send it again. So in that way, there would only ever be one send request, hopefully. And when you're at that point, it's like, well... Okay. For us, we just thought it was better to do the action send. I haven't, actually. The Facebook API is a scary thing to me, and I don't have to touch it every day. So, no. Is there a specific thing about it that you're curious about that I could maybe... It sounds awesome. It sounds really useful. It also sounds... I'm sorry. Yeah. So he was talking about how, apparently, in the new Facebook graph query language, you can compose different queries and get different resources returned in one request. I think that sort of thing might also be made very easy by something like HTTP/2. Not the exact same thing, but it might enable it a little bit more easily: send off three or four requests at the same time, get them back, that sort of thing. Yeah. So when you're versioning your APIs, there are a couple of different schools of thought, and people feel very strongly about them. We version in our URLs. To me, that's the most straightforward way. Another way is to version in your media type. So in your requests, your users would send an Accept header, and the Accept header would include the version in it, saying: hey, we're ready to accept version 3.0 of your API. To me, that's a little bit more dense. It's a little bit less clear for people. It might be a little bit more semantically correct. But from my perspective, sometimes the semantically correct part makes things hard on users. To give you an example: what will inevitably happen if you only version in a header is that somebody will write code that doesn't do versioning at all. They'll write code against version 2.0 of your API. You'll upgrade, and since they're not passing a version in the Accept header, you'll be like, oh, you must want the most recent version, and then you'll serve them 3.0, break their stuff, they'll complain on Twitter, and your boss will be like, why are people complaining on Twitter? There are obviously ways to avoid that. If they don't send a version, you send an error back. But then we sort of get down that rabbit hole of, well, why not just have a different endpoint entirely, which is what we decided to do (both styles are sketched below).
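To picture the two versioning approaches on the wire; the media-type header convention varies between APIs, so the second form is just one common shape, not a standard:

Version in the URL (the approach MailChimp chose):

    GET /3.0/lists HTTP/1.1
    Host: api.example.com

Version in the media type (the Accept-header school):

    GET /lists HTTP/1.1
    Host: api.example.com
    Accept: application/vnd.example+json; version=3.0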
So Heroku's design guide is basically: use REST. And the QR code on the top of your handouts there has a link to a whole bunch of different links that I used in designing this presentation. So Heroku's guide is on there, along with some other API design talks. As far as when not to use REST, and what to do when you can't avoid it, I think that Heroku's is pretty good, and in a certain sense that's not really my area of expertise. I think that, at least right now, the benefits you get from REST, in terms of HTTP caching, and different CDNs like Akamai being really easy to work with if you're using a RESTful API, and not being quite so easy to work with if you're using XML-RPC or SOAP, are real. And I think that, with as many people as are using REST right now, or at least trying to use REST right now, it's hard to blaze a new trail. If somebody did and it was amazing, then maybe five years from now we'll be sitting here talking about why would you ever use REST anymore, nobody uses that, and that'll probably happen at some point. But right now, I haven't seen a lot of good alternatives, and one of the reasons we went RESTful is because I couldn't find a lot of good reasons to use XML-RPC or SOAP or any of those. So, sort of. In almost every case, we use sub-resources. So you would go lists, ID, members, email hash to find a member. We don't necessarily have documentation of, oh, well, this member has been sent these campaigns. But we tend to try to make it as hierarchical as possible, so that when you are down in a member, you can get things about that member. And so we try to keep it related that way instead of trying to define a bunch of, oh, well, this campaign has a sent-to... So that's maybe a good example. If you get the collection of members that have been sent to on a campaign, we tend to provide those member resources inside of... not inside of the request, but if you go to campaign slash ID slash... or maybe it's reports slash campaign ID slash sent-to, I'd have to look at the docs to be 100% sure, but that will provide you a collection of member resources that are the member resources from the slash lists slash members endpoint. And when those have their HATEOAS links in them, they point back to the canonical representation of themselves back over on lists slash ID slash whatever. So the answer to your question is not really, but sort of. That's a good idea: documentation that shows sort of a top-level system architecture diagram of how your resources relate to one another. Well, let me tell you... So I first have to tell you our dirty little secret, and that's that MailChimp is a PHP app. Sorry. So let me tell you how we do it in versions prior to 2.0, and that is: when you create a new version, you open the directory of the previous version, you select all. I bet you know where this is going, don't you? You copy, and then you edit the code as appropriate. I don't like that way of doing it, but that is one way. We found that to not be scalable, for all the reasons you might think it's not scalable, in that, oh, we found a bug in version 2.0; well, I guess we better make the change in 2.0 and 1.3 and 1.2. In 3.0, we've completely changed the architecture to support REST at basically the most basic level of the API. And so what we'll do when we inevitably have a version 3.1 is we will have our router at the top level fall back. So if you request version 3.1 slash lists, and we haven't created a version 3.1 of lists, the router will know to fall back to previous versions of the API. How we will implement that is yet to be determined, because we don't have a version 3.1, but eventually that is our goal: until a resource actually changes, there's no need to version it.
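A toy sketch of that fall-back lookup; this structure is hypothetical, since, as they say, the real implementation is yet to be determined:

    # Hypothetical version-fallback routing: if 3.1 has no lists-specific
    # implementation, serve the 3.0 one unchanged.
    VERSIONS = %w[3.1 3.0]                        # newest first
    HANDLERS = {
      '3.1' => {},                                # nothing overridden yet
      '3.0' => { 'lists' => :lists_v30_handler }  # stand-in for a controller
    }

    def resolve(requested_version, resource)
      VERSIONS.drop_while { |v| v != requested_version }.each do |version|
        handler = HANDLERS.fetch(version, {})[resource]
        return handler if handler                 # fall back through older versions
      end
      raise "no handler for #{resource}"
    end

    resolve('3.1', 'lists')  # => :lists_v30_handler, served by the 3.0 code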
And then, once we do need to version, say lists does change in 3.1, theoretically much of it will stay the same, and so we'll be able to inherit from version 3.0 and just change what's different. So those are some thoughts that we have on moving forward, and on what was bad about the way we did 2.0. Okay, so the question was: how do we do our documentation? How do we make sure the documentation stays up to date with the actual app? I'll do an old and a new. In version 2.0, we actually generated the documentation from comments in the code. We also did this crazy thing where the API, as it was parsing the request, would check against those docblocks using reflection to make sure that your request was appropriate, so it was the right types and whatever, which led to a very interesting conversation I had with a co-worker of mine, in which one of us said: oh, man, we broke the API by making a bad comment. We're really going to have to put some unit tests around those comments. If you ever find yourself saying you need unit tests around comments, you have made a wrong turn somewhere. On the documentation side, it's less crazy, but you still do end up with the problem of having to... sorry, lost my spot... of having to keep those comments up to date. We use JSON Schema at the renderer level of our API, so the first thing you do in our version 3.0, when you're creating a new resource, is create the schema, and the schema will have title and description, data type, and all of this information in it, and the renderer will use that to generate the response, essentially sort of at a deeper level. The controller, the resource, returns a data object. The renderer looks through the schema and says: okay, this request needs to respond with a title and a name and an email address and whatever else, and it goes and pulls that data out of the data object that was returned and constructs the serialization. It starts with nothing and adds things from the data object that comes back. And what that allows us to do is it means that, unless something is really, really badly wrong, our documentation that is built off of JSON Schema will always match the stuff that comes back, because the renderer is building that exact thing from the schema in the first place. And so we sort of fix that problem by forcing you to design the schema and keep the schema up to date, because the schema is what gets returned. We had that exact problem where it's like, well, we have to update these comments and all this, and so we just said no. Design the schema first, design the code to implement that schema later, and so far it's been working out pretty well. And then we use that JSON Schema to generate documentation. Right, and I think Swagger actually sometimes uses JSON Schema to define the resources.
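A rough sketch of that schema-driven rendering; the shape here is inferred from the description, not MailChimp's actual renderer:

    # Hypothetical schema-driven serializer: start from an empty hash and copy
    # in only the properties the schema declares, so the response can't drift
    # from the documented shape.
    SCHEMA = {
      'title'      => 'Member',
      'properties' => {
        'name'          => { 'type' => 'string', 'description' => 'Full name' },
        'email_address' => { 'type' => 'string', 'description' => 'Email' }
      }
    }

    def render(schema, data)
      schema['properties'].keys.each_with_object({}) do |key, out|
        out[key] = data[key] if data.key?(key)  # only schema-declared fields escape
      end
    end

    render(SCHEMA, 'name' => 'Ada', 'email_address' => 'ada@example.com', 'id' => 42)
    # => {"name"=>"Ada", "email_address"=>"ada@example.com"}   (id never leaks)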
So the question was: how does MailChimp determine what people are using the resources for? For this, we don't have a very technical approach. Because most of the API traffic we get is from those integrations, those third parties that are making requests on behalf of one of our users, we ask them, and we try to open those lines of communication so that they know that if something is hard or confusing that shouldn't be, they should complain to us, and from the volume of complaints we can sort of infer that. If you keep a lot of stats about your API, you can sort of start to intuit things, but I don't have a good technical way. I think that a lot of it is just keeping lines of communication open with the people that are using your API, and listening to them when they say they want something. I don't know if anybody else out there is like this, but I know my first reaction to "ah, we need this new feature" is "you don't need that feature; this is perfect the way I wrote it the first time." And I have to fight that internally, but it's really important that you do fight that, because sometimes when they say, I need this, and you're like, well, that breaks the purity of my API, it's like, well, yeah, but what is your API if it's not useful to people? So, tools to maintain the documentation... Okay, we're five minutes over, then. I don't want to cut into your break. Thank you all so much for coming. If you have further questions, please come see me.
|
Do Users Love Your API? Developer-focused API Design
|
10.5446/30660 (DOI)
|
OK, so from the outside, in terms of how it's used, Docker looks quite a lot like a normal virtual machine. A virtual machine allows us to take one host and partition it into multiple smaller hosts. Each of those virtual machines can run a different OS, have different packages on them, different dependencies, and we can then run processes in each of those VMs. Those processes don't know that they're running in a VM. They don't know anything about the processes running on other VMs on the same host, and they don't know anything about processes running on the host itself. In this respect, you can say Docker's actually quite similar. We can create a Docker image, which defines a base Linux OS that we want to run. We can define packages. We can define application code. We can define configuration changes. We can then create a container based on this image and run a process in it. And this process doesn't know that it's running in a container. It has no knowledge of processes running in other containers on the same host, and it has no knowledge of the other processes running on the host itself. There are, however, some big differences between Docker and a traditional VM. When we run a traditional VM, we're virtualizing the physical hardware, and we're running a complete instance of that OS, including its own kernel. That means that if we imagine we're running an Ubuntu VM, and Ubuntu takes, say, 500 meg of RAM for its kernel and its base components, that VM will use 500 meg of RAM for the OS, plus whatever RAM we need for the process we're running. Likewise, when we start it up, if Ubuntu normally takes 20 or 30 seconds to start, it will take 20 or 30 seconds to start that VM, plus however long it takes to start our own processes in it. Docker works really differently. When you run a process in a Docker container, you're actually running that process on the host itself. It's sharing the host's kernel. This means that there is almost no overhead in terms of resources to running a process within a Docker container. And because we're not starting a new kernel, we're not starting a complete OS, there's almost no overhead in terms of start time. If we're starting, say, a Unicorn web server in a Docker container, and it normally takes, say, 10 seconds to start locally, it will take about 10 seconds to start in a Docker container. We can kind of think of Docker as getting a lot of the benefits of a VM without the resource overhead, which is a big simplification, but for the purposes of this talk, that's sort of what it looks like from the outside. Because of that, Docker's obviously got a lot of attention for deployment. We can run containers in development, and we heard a bit about this in the last talk. We can run containers in development, and we can then run identical containers in production, and be pretty confident that we're going to see identical development and production behavior. But to me, that's not the most exciting bit about Docker. The most exciting bit to me is that, because of the type of interface Docker provides to containers, we can now build features around containerization very, very easily. So rather than containers just being something we use to deploy existing features, they can actually become a part of features, and they can allow us to build new things, in this case in Ruby, that would have been much harder pre-Docker. So there we go. I'm Ben, by the way.
I'm from a company called Make It With Code, and we teach people Ruby. And we discovered really early on that one of the key reasons beginners who are using Ruby as a first language quit isn't the language. It's because they get stuck setting up a development environment. They'll try and install RVM or rbenv, and they'll run into issues with system Rubies and PATH variables, and they'll give up without ever writing a line of code, which is a real shame. They'll never get to see how beginner-friendly Ruby as a language actually is. And we wanted to bypass this completely. We wanted to provide a complete browser-based development environment for our students. We wanted a live terminal. We wanted a file browser. We wanted a text editor. And we were really lucky, because at the time we were planning on doing this, the open source Codebox project was announced. Codebox is a Node.js-based application for doing exactly this, for providing a browser-based development environment, a lot like something like Cloud9, which you may have come across. And so we started off: each week, groups of about 10 students would join, and we would then use Chef Solo to spin up a new VM for that group of 10. Each student would get a Unix user. We would run an instance of this Node.js app under each user account, and then we had all sorts of logic so that our front-end proxy would send traffic for these unique development environments back to each of these Node.js instances. And this worked really well for our students. We saw people getting a lot further with the course, getting a lot further learning Ruby, because they weren't having to worry about how do I install Ruby to begin with. But it had some fairly big problems on the business side. It was still quite manual: we had to provision a new VM for a group, and so people couldn't get started instantly. They had to wait for a group of people to start. And it was a really inefficient use of resources. We could get about 10 students per 2 GB VM, and most of these students would actually be using it for about five to 10 hours a month. But the rest of the time, this Node.js app was still running, it was still using resources, and it was getting really quite expensive quite quickly. It made it impossible for us to offer any sort of free lesson, sort of try before you buy, because we just couldn't afford to be provisioning these environments for people who weren't definitely paying for the course. And so we started looking at Docker. I'd played with Docker a bit in the past, and I was really impressed with it, and I think, like most people, my introduction to Docker was the command line. If you go through the Docker tutorial, that's how you first learn how to start and run containers. And so our first version, from our Rails app, used the Docker command line. Don't worry too much about the exact details of that command, but basically what happened was: a user would sign up to our Rails app, and we would kick off a Sidekiq job, which would then construct this docker run command, which says what base image, sort of what OS, we're working from. It gave it some details, such as ports that we need access to, and folders from our shared file system that should be mounted into that container. And the Sidekiq job then had Ruby shell out and execute this via SSH on a node in our Docker cluster. And I imagine anyone who's a little bit familiar with Docker may be laughing at us slightly here for this approach, because it is admittedly ridiculous.
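The shape of that run command, roughly; the image, paths, and ports here are all hypothetical, not their actual invocation:

    docker run -d \
      -v /mnt/gluster/student42:/workspace \
      -p 8042:8080 \
      example/codebox node server.js

Constructing a string like that in Ruby, executing it over SSH, and regex-parsing whatever came back is exactly the flaky process the talk moves away from next.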
Docker, of course, has a complete HTTP API, which is amazing. Anything you can do via the traditional Docker CLI that everyone gets introduced to, you can do via the API. So as an example here, I hope that's vaguely readable, you can see that to create a container, we can do a simple POST request to an endpoint exposed by the Docker daemon. And in that POST request, we have exactly the same information that was in that really long run command we just saw. We specify an image that we want to build it from, in our case a custom Codebox image. We specify which volumes we might later want to mount external files and folders into. We specify the ports from the container that we may later want to map to ports on the host. And finally, we specify a command to be run when you start this container. And that's good: we're no longer having to work in terms of shelling out and using regexes to parse terminal responses, which was a fairly flaky process. We're getting nice JSON back, which we can easily manipulate in Ruby, and we can see when commands worked and when they didn't. But naturally, this is Ruby, and so it gets better: there's a gem for it. I strongly recommend this gem if you want to work with automating Docker. Here you can see exactly the same process as in the last slide, but I'm passing a normal Ruby hash to Docker::Container.create. And assuming that succeeds, I will get a container object back, and I can then perform other actions, such as starting it, stopping it, and checking its status, directly on that Ruby object. And this is already much, much better than our original command-line-based approach. There's much less of a switching cost. We're working in a standard Ruby API. We're not worrying about direct HTTP calls. And we're getting nice Ruby objects back to manipulate. So it's already much more friendly to work with. But it's still not perfect, because Docker's architecture means there are actually three API calls to go from absolutely nothing to a running container. First we have to create an image, if that image doesn't already exist on the Docker host. And you can kind of think of an image like a class definition. It's defining the OS, the files, the packages. And this is something you might well pull from Docker's official registry, or from a private registry if you're running one. The next API call then creates a container from that image. So it's a little bit like creating an instance of a class. And at this stage, we specify the directories that we might later want to mount externally into that container, and the ports we might want to expose. And then finally, we make a third API call to actually start that container we've just created. And at this point, we specify, for example, for us, that we need to mount a particular directory from our GlusterFS file system into that container at a particular point, and that we want to map the port that the Node.js app is running on to a port on the host, so that we can then proxy back to it. And so we're still having to think in terms of Docker's container workflow. We're not really thinking in terms of the business logic of our problem. It still means that there are quite high switching costs to moving between working on the rest of our Rails app and working on the containerized component. And so what's brilliant about this API and this gem is that it's really, really easy to use it to build abstractions that allow us to reason differently about containers.
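For reference, the create-then-start workflow with the docker-api gem looks roughly like this; the daemon address, image name, command, and port numbers are placeholders:

    require 'docker'   # the docker-api gem

    Docker.url = 'tcp://docker-node.example.com:2375'   # hypothetical daemon address

    # Create a container from an image...
    container = Docker::Container.create(
      'Image'        => 'example/codebox',
      'Cmd'          => ['node', 'server.js'],
      'ExposedPorts' => { '8080/tcp' => {} }
    )

    # ...then start it, binding the container port to a host port.
    container.start('PortBindings' => { '8080/tcp' => [{ 'HostPort' => '8080' }] })
    puts container.json['State']['Running']   # => true if all went well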
So we didn't really want to reason in terms of creating images, and turning images into containers, and then running containers. We wanted to reason in terms of: a container should have these properties for this user, and we want to know that container is running, irrespective of what may have happened before. And you can think of this quite a lot like first_or_create in ActiveRecord. We're not concerned about the underlying database driver, or the specifics of how do you query for a record, how do you create it if it doesn't exist. We just want to say: a record has these properties, make sure it exists, and return it to me. And because we have this API from Docker and this gem, it was really easy to build that abstraction. We very, very imaginatively called the abstraction DACA. And here you can see we're using a standard Ruby hash to define the specific properties that a container should have. And this is very similar to what we were defining in the previous API calls you've seen: things like the base image, the ports that should be mapped, the volumes that should be mounted. And here we're also defining a few environment variables. It's a kind of standard practice in Docker that pretty much any configuration is pulled in from environment variables that you set when you start the container. We've got this Ruby hash, which we generate automatically from our user object. So a user object knows how to generate the hash representing the container for its IDE. We can then just pass that hash to a DACA container deployer and say: deploy it. And what this will do is work out: has that container been created? If so, we should start it. If it hasn't been created, then create it and start it. Or if it's already running and it's already there, just do nothing and return it (there's a sketch of this below). And this means that when you're working with containers within the app, you really don't have to think about the architecture of the traditional Docker workflow. And to me, why this is so exciting and so useful is that all of our containerized infrastructure is now really just another HTTP API. We can treat it and work with it in exactly the same way we do the GitHub API or the Twitter API. And so, in the same way that, normally, when you work with a third-party API, you'll wrap it in some sort of abstraction that maps the way that API works to our actual business logic, to what we're actually trying to do, we can do this with infrastructure, which previously was quite difficult to do. And the end result of this for us is that our application is now very, very easy to reason about. Someone who's new and coming to this application doesn't need to have an in-depth understanding of the terminology of Docker, of the workflow of going from creating an image in a registry, to mapping folders to that container, to starting that container. They just need to have a reasonable understanding of the abstraction that we built around it, and they can start working on the application. The outcome for us was incredibly positive. So the process we now have is: a user signs up to this Rails application. The Rails application triggers a Sidekiq job. And that Sidekiq job is responsible for using DACA, which in turn uses the Docker API, to make sure that these containers are created and started, and then, once that job returns, making sure that our front-end proxies are updated to route the user back to that Node.js app when they try to visit their development environment.
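A sketch of what that first_or_create-style deploy might look like under the hood; the method shape is assumed for illustration, not the gem's actual code:

    # Hypothetical 'make sure this container exists and is running' helper.
    def deploy(spec)
      container = begin
        Docker::Container.get(spec['name'])   # look it up by name...
      rescue Docker::Error::NotFoundError
        Docker::Container.create(spec)        # ...or create it if it's missing
      end
      container.start unless container.json['State']['Running']
      container                               # idempotent: safe to call repeatedly
    end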
The biggest business benefit for us is that, because containers are designed to be stopped and started very easily, we now have a cron job which goes through and checks: when was the last time this container was used? When was the last time the user accessed it? If it hasn't been used for, I think it's half an hour or so, we can then stop that container. When a user accesses it again, we can once again use DACA and the Docker API in the background to detect that the container is no longer running and needs to be started, and then, once it's started again, we route the user and update the proxy so that their traffic is being routed back to this container. And that's allowed us to go from a density of about 10 users per 2 GB node, which was getting very expensive, to at least 500 users per 2 GB node. I say at least because it's probably much higher, but we haven't tested it higher than that. It also allowed us to do things like offering free trials, because in the same way Heroku can afford to offer free apps because most of them never get used and they get spun down, we can do exactly the same thing with these IDEs. So someone can sign up, they can try it, and if they don't continue using it, their container is stopped, and that has effectively no ongoing cost for us at all. I said at the start I wasn't going to talk about traditional deployment because I didn't think that was the most exciting thing. And you could sort of argue that I've sidestepped that, because I really talked about deploying Node.js apps at runtime. We've used it in quite a few different scenarios, and these are a couple of the other ones which really have nothing to do with deployment. We had a scenario where we had a proprietary dataset which we couldn't share with third parties, but we needed to allow third parties to write analysis tools in C, which we could then build, run this data through, generate summaries of their results, and then provide back to them. And again, we were able to create a very simple abstraction around a Docker container, so that we could receive a tarball of their C code, inject that into a container, build that C, pipe our dataset in on standard in, and then wait for an event to be raised when it's finished, collect the data on standard out, and then post the analysis back. And so there, we didn't want to reason in terms of container state. We weren't interested in the ongoing is-it-running-or-not. All we wanted to know is: has execution completed? Another scenario where Docker is getting a huge amount of use is language playgrounds. Pretty much whenever any new language comes out, very soon afterwards somebody will create a language playground somewhere, where you can run small snippets of that code in the browser and see the results. That then runs server side, and you can then see the results. Great for teaching, and for letting people have a go with the language before getting completely set up. And again, this is a great use case for Docker, and something that we've used it for a lot, where you can create simple Ruby objects that will deal with receiving code, working out what language it's in, building a suitable container for that, collecting the output, which is generally just written to standard out, and then providing that back to a Rails app to be returned in an API response. And again, it's all about how you can create these very simple wrappers around Docker and around containers, so that you don't have to constantly think in terms of how the containerization process works.
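For those one-shot jobs, the C-analysis runs and the playground snippets, the shape of the interaction is: start a container, wait for it to exit, collect standard out. A sketch with the docker-api gem, where the image name and command are placeholders:

    # Hypothetical one-shot run; these containers are disposable.
    container = Docker::Container.create(
      'Image' => 'example/ruby-sandbox',
      'Cmd'   => ['ruby', '/sandbox/snippet.rb']
    )
    container.start
    container.wait(30)                      # block until the process exits (30s cap)
    output = container.logs(stdout: true)   # everything the snippet printed
    container.delete(force: true)           # clean up afterwards
    puts output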
So hopefully what I've got across at a very high level in this talk is that, because Docker has this HTTP API that is very full-featured, it's really, really easy to create abstractions over containerized infrastructure. And that means that we can treat and reason about infrastructure in exactly the same way we reason about the APIs that form the rest of our application. If you want to have a go with this, there's a page, the top one is just a page on my blog, that has these slides and a load of resource links. If you're completely new to Docker, I strongly recommend the interactive Docker tutorial. It's a web-based tutorial that will take you through the command line and get you used to the terminology. The docker-api gem is excellent. If you've been through the Docker tutorial, you should be in a pretty good place to start using that gem directly. And the DACA gem, which is our abstraction, is open source on GitHub. Feel free to use that, even just as an example of one possible way of creating these abstractions. A fun side effect is that, because DACA works entirely in Ruby hashes, we can also do YAML file-based deployments, much like Fig or Compose can be used to do. So you can define in a YAML file that you have a Rails container, a Postgres container, and a Redis container, and then have DACA orchestrate that in development, or across multiple hosts in production. Thank you very much for listening. Are there any questions?
|
Docker has taken the world by storm as a tool for deploying applications, but it can be used for much more than that. We wanted to provide our students with fully functioning cloud development environments, including shells, to make teaching Ruby easier. Learn how we orchestrate the containers running these development environments and manage the underlying cluster all using a pure ruby toolchain and how deep integration between Rails and Docker has allowed us to provide a unique experience for people learning Ruby online.
|
10.5446/30663 (DOI)
|
Alright, folks, I'm going to get started here. Thanks for coming out. The USB keys have two files you'll need: the Vagrantfile and the HTTP exploration box file. If you don't have Vagrant and VirtualBox, there are executables on there for Windows, a couple of variants of Linux, and Mac. You'll need to have those installed. I'm going to do a half-hour lecture, hopefully a little shorter than that, and then we'll have about an hour to run through the exercises. The exercises will walk you through starting up Vagrant and everything you need to do. I have Charlie Sanders helping me out. Raise your hand if you need a USB key to get the files from. So basically you've got half an hour to get all that working. If you don't, don't worry too much; you can pair up with a neighbor. I would appreciate feedback, negative or positive. You can tweet me at Craig Buchek. My email address is up there. My presentations are on GitHub. I haven't put the latest version of this presentation up yet, but it should be there tomorrow. If you guys want to follow along: tiny.cc, HTTP exploration, with an underscore. That'll take you to this presentation. Actually, I think I need to update that; I'll do it when we get to the exercises. The exercises begin there too, so I'll update it when we start the exercises. The reason I started doing this is I actually have a previous life as a sysadmin and a network admin. I've done a lot of troubleshooting of HTTP, going through networks, going through firewalls, and now I'm a Rails developer. So I've seen both sides. So we're going to talk about HTTP basics, requests, responses, talk about proxies, some troubleshooting, and HTTP/2. And then we'll get into the exercises, which touch on all of those. So HTTP has been around since 1991. The first version didn't actually have a version number, but retroactively we call it 0.9. It was standardized in '96. The version that we typically use now was standardized in 2007. That RFC is very handy if you have any questions about how HTTP works. It was just updated a few months ago: it was broken up into multiple pieces, basically in preparation for HTTP/2. HTTP/2 has been ratified, standardized, but it hasn't been published yet. But that's a good URL to check out if you're interested. So HTTP is stateless, which means that each time you connect to the server, it doesn't really remember what you did the last time you connected. And the only way around that is through cookies, pretty much. It is text-based. That's kind of the whole point of the presentation: we're going to look at the text that's going across the wire. And it's a request-response cycle. Your client, your browser, will make a request to the server. The server will respond. So I want to talk real quick about URLs. Pretty much every piece of that URL on the screen will go from the client to the server except for the fragment. The fragment just tells the browser to go to a specific spot on the page. The scheme is going to be HTTP. The host is obvious. You can specify a username and password in the URL; I recommend against that, but it's possible. And the path is actually the piece that we'll be working with for the most part.
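For example, here's a made-up URL with every piece present, broken down into those parts:

    http://user:secret@www.example.com:8080/books/search?q=http#results

    scheme   = http
    userinfo = user:secret
    host     = www.example.com
    port     = 8080
    path     = /books/search
    query    = q=http
    fragment = results   (never sent to the server)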
All right. So the HTTP request looks like this. That thing in green is called the method; we'll talk about that in a bit. The next thing is the path from the URL. Usually it's not the full URL; usually it starts with a slash, and it's relative to the top of the site. And then you have the version of HTTP specified there. In blue are the headers. The headers are sort of metadata about what you want to get or what you want to do. In this case, there's a Host header and a Content-Length header. And then the body: not all requests have a body. If you're just getting something, it won't have a body. But if you're posting something or putting something, it will have a body. In this case, I tried to PUT something to Google's front page. Probably not going to work. I actually tried that, and it gives a 405 error, which is hard to find. So that's the body text, basically the content that you want to upload. So, the methods I talked about. Normally, when you go to a website, you're going to do a GET request: get this page. You go to Google's front page, you do a GET. A POST is when you want to update something, or when you submit a form, although some forms can actually do GET, depending on whether you're changing something or just doing a query. Like when you go to Google's front page, that form, when you type in the query, that's a GET, because you're just asking for information; you're not asking to change anything. A PUT is actually an update, or actually it's more of an upsert. You're saying: I'm giving you the whole thing. Replace whatever you've got if you have anything; otherwise, create it. Rails is a little weird on that. It doesn't actually do the create part, so it uses PUT a little bit oddly. DELETE does what it says. If you have permission, you actually can delete pages or resources through the web. HEAD is basically a GET request but without the body. You're saying: show me the headers that I would get if I did the GET request for this URL. There's a concept of safe methods. Safe methods don't have any effect on information. When I talked about POST, it changes something. When you submit a form, it changes something on the server. A GET is not that case. So the takeaway here is: don't have your app change things when you do a GET, because when Google comes scraping your site, it's going to do GETs on your site. If your site changes something on the back end, you're going to have a bad day. So just keep safety in mind. There's also something called idempotency. That means that you can call the same thing multiple times. You can do a GET to the Google front page multiple times; you'll get the same thing. Same thing with HEAD. Same thing with DELETE: if you delete a resource, the resource is going to be gone, whether it was there or not. With a PUT, you're actually saying replace something. So if it's there, replace it. If it's not there, create it. So multiple times doesn't matter. The one that's missing from this list, again, is POST. If you POST something multiple times, you'll end up with multiple copies of that thing. So, request headers. We saw that we can provide those headers. These are some common headers. The Host header is basically required. It gives the name of the website we're accessing. And the reason that was added is so you could host multiple websites on a single server. The Accept header lists the types that the browser would like to receive. So the browser may say that it likes to have HTML, it likes to have JSON, but not list XML. So if the server knows how to do HTML and JSON, it will pick one of those, but it wouldn't pick XML unless it didn't know anything else. The weird thing about the Accept header is that you can actually specify relative quality.
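For example, an Accept header with relative-quality values might look like this (the values here are invented):

    Accept: application/json;q=0.9, application/xml;q=0.8, text/html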
Sometimes you'll see a q number if you look at the Accept header. That says: I prefer these types of things over the other things. The default quality setting is one, which means top priority. If you prefer JSON over XML, you could set JSON's quality to 0.9 and XML's to 0.8. The Content-Length on a request is the length of the body of the request that you're sending. Once you've uploaded Content-Length bytes of body, the server knows the request is complete. The Content-Type tells what you're uploading: am I uploading some HTML? Am I uploading some XML? Am I uploading some JSON? The Referer is, if you are a browser, the page you were on when you clicked the link. That header is spelled incorrectly, but it's in the standard document, so that's what we use. The User-Agent is another name for a browser or a web client. It could be a web crawler, it could be various things. It's a string containing information about the browser. If you take a look at those, they get to be pretty crazy long, especially in Chrome and Firefox. Back in the day, there was something called browser sniffing. The server would look at that string and try to determine: oh, you're Internet Explorer, I'm going to give you this copy of the page; you're Firefox, I'm going to give you this copy of the page, which could be completely different. Google didn't like that, because Google wants there to be one canonical version of every page that it indexes. It doesn't want you to fool it, so that it sees one thing and then the user sees something else. So browser sniffing is not used much anymore at all. But you end up with these really long strings that tell you all the backwards compatibility that the browser tries to do. We'll take a look at those headers in the exercises. Yeah? The question was about Content-Type versus Accept. Accept is where you say what you want to get back. And Content-Type is only for a POST or a PUT, and it's the type of the body that you're sending up to the server. Yeah. So, Authorization: if you ever go to a website and you get the pop-up box, Authorization is basically the information you type into the username and password box. It's not encrypted; it's just encoded and sent. And we're actually going to find that going across the wire in one of the exercises. Accept-Encoding: the server can gzip the content coming back to you, and this header basically says that's okay. You can say Accept-Encoding: gzip, deflate. That tells the server: hey, you can gzip this and save some bandwidth across the wire. Connection is usually for keep-alive. It's the client saying: hey, I've made this request, but when I'm done, I'm going to make another request. I don't want to tear down the TCP connection; I want to reuse it for the next request. So if I'm getting a web page and I know it's probably going to have some JavaScript, it's probably going to have some CSS, it lets me get those all without dropping the connection. Cookie: I talked about cookies. The server has sent us this cookie, and we are supposed to return that cookie back to it. And that maintains sessions between the server and the client. And that's the only thing that maintains a session between the client and the server.
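Putting several of these request headers together, a GET might cross the wire looking something like this (the host and cookie value are invented):

    GET /cart HTTP/1.1
    Host: www.example.com
    Accept: text/html
    Accept-Encoding: gzip, deflate
    Connection: keep-alive
    Cookie: session_id=a1b2c3d4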
And that Cookie header is limited to about 4K of information. So if we do our cookie-based sessions in Rails, we do have to worry about that limit. You don't want to store a whole lot in that cookie. All right. So we've made the request, and the server responds with the HTTP response. That first line is called the status line. It's got the HTTP version. Interesting thing: you can actually make a request for one version and get a different version back. I find that a little odd. The 201 there is our status code, and then there's the description of the status code, which is Created in this case. And then, like the request, the response also has headers. And then it has a body. Not every response has a body; there are a few that don't have to have a body. Created is actually one of those. So if you are doing a PUT or a POST, it could actually just say Created and then not give you anything back. But you probably want to do a redirect, so you probably want some extra information in there. So, status codes. The status codes have different meanings, and they are actually standardized, although occasionally you'll see some non-standard ones. The 100s are informational. You'll rarely see those. The 200s are what you'll see most of the time. Usually for a GET, you're going to get a 200. For a POST or a PUT, you're probably going to see a 201. Redirection: if you go to google.com, it'll actually redirect you to www.google.com. That's a redirect. I believe that would be a 301. If you send headers that are involved with caching, which we'll do in an exercise, sometimes you'll get a Not Modified. That says: hey, you've told me you already have a copy of this in your cache, so I'm just going to give you the metadata and the headers. I'm not going to give you the body; you should use the body that I gave you last time. Then we've got the error response codes. The 400s are client errors, where the client made some sort of mistake. A 401 is when you have not sent an Authorization header and the web server requires you to authenticate; the client is supposed to retry with the Authorization header. And so before it does that, it pops the box up, has you type in your username and password, and resends with that header. Forbidden means either you can't authenticate, or you've authenticated but you still don't have permission, because maybe you're not an administrator on the box. 404, we've all seen that one before: page not found. 407 is Proxy Authentication Required. It's like a 401, except it's a non-transparent proxy that is asking for a username and password. That's pretty rare to see. We will do a little bit with proxies, but we're not going to do any proxy authentication. 422 is Unprocessable Entity, which is a weird way of saying: I don't understand what you're trying to ask me. That's what I would recommend, either 422 or 409, if you are handling an API request and it doesn't have the right information that it needs in the JSON or XML or whatever that was sent. Server errors: if your server crashes really badly, you will see a 500 error. I believe you see that in Rails a lot of times when you're debugging. 502 and 504 are gateway errors. If you've got a reverse proxy sitting in front of your servers, and your servers have gone down or take too long to respond, you'll see a 502 or a 504. There are plenty of others. I ran into a 405 earlier. There's one from the April 1st RFCs called the I'm a Teapot response, code 418. Not sure when you'll need that, but I'm sure someone has implemented it.
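Pulling the response pieces together, a successful response to a form POST might come back looking like this (all values invented for illustration):

    HTTP/1.1 201 Created
    Content-Type: text/html
    Content-Length: 43
    Set-Cookie: session_id=a1b2c3d4; Path=/
    Location: /widgets/17

    <html><body>Created widget 17</body></html>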
Response headers: like a request, the response has headers. The Content-Length will almost always be there, because you'll have a response that has content in the body. The Content-Type is the MIME type. So it's going to be something like text/html, text/plain, application/json, image/jpeg, or image/png. That tells the client what type of file this is, and then it can handle it however it expects to. Content-Encoding: I talked about Accept-Encoding, where the client can allow gzip. Content-Encoding says that the body has been gzipped. Notice that the body is gzipped, but the headers are actually still in plain text. Content-Disposition is a little trick that you use when you want the person to download the file when they click the link, instead of having the file display in the browser. In that case, you would use the Content-Disposition header, and you can also provide a default file name that the browser will try to save it as. Location is used for redirecting, and we'll have an exercise on that. Usually you provide a response code that says redirect, and then you provide the URL to redirect to in the Location header. Set-Cookie: we talked about cookies for maintaining state, maintaining sessions. Set-Cookie tells the browser to remember this token, which is just roughly a random string of characters, and when it gets sent back by the browser, the server knows to associate it with a session; it looks it up in the database. WWW-Authenticate is basically telling the browser to pop up the box, or, if the user has already typed their username and password, to provide that information to the server. So, we'll run into proxies. We'll have some exercises on proxies. A proxy is something that acts in place of another. In the case of an HTTP proxy, a web proxy, it intercepts our HTTP requests, so it can modify that request. What it does is: it intercepts a request, modifies it perhaps, does some caching perhaps, sends it on to the server, gets the response back from the server, can modify that again, and then sends it back to the client. So it sits in between the client and the server, and it can modify pretty much anything that it sees in there. Proxies are good for caching. You can add security: in our exercises, we will actually add some SSL to our Rails app. You do this to simplify things, so you don't have to have the Rails app understand SSL. It can also save some CPU time. It can also be used for load balancing, and it can be used for authentication. I've had it where I had Apache in front of an application server, and we added the pop-up authentication with Apache acting as a reverse proxy, which I'll talk about in a second. So there are transparent proxies, where basically you don't have to set anything up; the proxy has inserted itself into the stream of the network. And there are non-transparent proxies, where a lot of times, if you're at work at a big company, you'll have to configure your browser to point at the proxy. That's a non-transparent proxy. And we've got an exercise on that. Reverse and forward proxies: the proxy you have at work, that sits sort of right next to the firewalls, would be a forward proxy. A reverse proxy sits next to the server. Any proxy that's added on the server end is a reverse proxy. And we will actually have exercises on both of those. A CDN is a content delivery network.
It's basically a paid service that does proxying for caching purposes. So you can cache all your static content. I won't get too much into the technical details here. One of the nice things a CDN can provide is protection from distributed denial of service attacks. If you've got a big site, you should probably be looking into these. Troubleshooting. For any network problem you have, you have to think about the OSI model. And I wish I had a picture of that; I forgot to put one on there. You've basically got the physical layer. You've got the network layer. You've got the transport layer, which is TCP. And then you've got your applications sitting on top of that. So there's a lot of things that could go wrong. You could have a network cable pulled out. Your router could be down. Or the server's not running. So you have to think about all those different layers when you have a problem. Troubleshooting is trying to figure out which layer it is, and then narrowing down on that. One of the first tools is ping: can I connect to the IP address that the server is on? The problem with that is that sometimes firewalls will prevent it, either the firewall in your company or the firewall on the other end. Traceroute is similar, but it shows you all the hops in between. So maybe the network is down between your internet provider and Google; traceroute might be able to help you find that out. Telnet is a tool we're going to use in our exercises. It's a good way to tell if the port is listening. If I've got connectivity to the IP address, telnet will tell me if the service is listening. And that's actually kind of where I start. I start in the middle, and then if telnet doesn't work, I'll try the lower layers. And if it does work, I'll try the upper layers. If you're on your server and you want to see if your service is listening, you can do a netstat. On Linux, that's dash p-l-a-n-t: "plant", an easy mnemonic for me to remember. On Mac, it doesn't have all those options; you'd use dash n-a, and then grep for LISTEN. That's going to list, over on the left side, all the IP addresses, which is usually going to be your IP address, and then a colon and the port number. So if you're looking to see if Rails is running, look on the left side for something ending in colon 3000, assuming you use the default port for Rails. Telnet doesn't work with HTTPS, because it's encrypted. So you have to use a tool that OpenSSL provides called s_client, and we've got an exercise on that. tcpdump: if you really want to see all the details of what's going across the wire, it will tell you everything. We've got a short exercise on that. There's so much information, we could probably spend a couple hours on it. Wireshark is a good way to visually see what's going on. You can actually take the output of tcpdump, save it to a file, and pull it into Wireshark. tcpdump shows you each individual packet, and they're disjoint: when you're having communication between a client and a server, your packets can only be so big, so the communication will be broken up into pieces. Wireshark has a nice feature to put all those back together, which is kind of cool.
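For reference, here are those troubleshooting commands as you'd type them; the hostnames and ports are placeholders:

    ping example.com                            # layer 3: can I reach the IP?
    traceroute example.com                      # where along the path does it die?
    telnet example.com 80                       # layer 4: is the port listening?
    netstat -plant | grep 3000                  # Linux: is my server listening locally?
    netstat -na | grep LISTEN                   # the Mac near-equivalent
    openssl s_client -connect example.com:443   # like telnet, but for HTTPS
    sudo tcpdump -A port 80                     # watch the raw packets go by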
All right. So, as I said, HTTP/2 was recently approved and ratified. It came out of a project at Google called SPDY; apparently they wanted a speedy way to write the word "speedy". As I said, we can compress, we can gzip, the body, but in HTTP/1.1 we can't compress the headers. HPACK is the part of HTTP/2 that allows header compression. Another thing: the standard doesn't require it, but every HTTP/2 implementation out there requires TLS. And that's probably because TLS has protocol negotiation built into it. And that protocol negotiation can say: hey, do you have HTTP/2? I'd like to use that. Do you have SPDY/3? I'd like to use that; if not, fall back to HTTP/1. So I said that HTTP is all text. For HTTP/2, that turns out not to be the case, because we're compressing the headers. It has the same semantics, but you won't be able to use telnet, and you won't be able to read it with tcpdump, though you will be able to use some of the other tools that we'll be looking at today. I did get an example to work with HTTP/2 that we can do as an exercise. When you're making a lot of small requests, like, let's say you've got a lot of icons on your page, ideally you would just get each icon individually, but those icons are only 25K or something. At that point, the headers, at a couple K, are starting to become large overhead. If we could compress those down to a couple dozen bytes, it would save a lot of bandwidth. There are some other features in HTTP/2 we'll get to in a sec, basically just to save bandwidth. The other thing about HTTP/2 is it multiplexes the connections. You can actually have multiple files coming across at the same time over a single TCP connection. Right now your browser has to make multiple TCP connections. I don't know what the browsers are up to; they started at four at a time, and they got up to eight at a time. A TCP connection takes up resources, and it takes time to set up. If you could do just one TCP connection, that would save some time. That's one thing they've done. You can grab your HTML file and actually start processing it and see: hey, I've got some JavaScript, I've got some images, I've got some CSS, I need to grab all those too. I can grab all those simultaneously. That's going to save a lot of time. Server push: the server can actually know, hey, he's getting this HTML file, he needs a CSS file too; I'm going to start giving it to him even if he didn't ask for it. And the client can say, well, I'll take that, or: hey, I finished with the HTML file, I didn't need that CSS file, you can stop giving it to me. Which is kind of weird with caching; I don't know how it knows what's been cached and what hasn't. But the possibility is there to save some time. When we do web design and web development, we've got the asset pipeline. That combines all our JavaScript files into one big file. It combines the CSS; we probably combine that into one file too. The images, we probably do something called spriting, where we put a bunch of images in one file, and then the CSS has to go grab each piece to put each icon on the page. If you look at the homepage for Yahoo, it's got all the icons on the left; those are all in one file. The CSS gymnastics we have to do is a pain in the butt. HTTP/2 will hopefully allow us to stop doing that. We can just write it the way that it should have been written in the first place, and not worry about performance; HTTP/2 should do that for us. HTTP/2 is kind of weird in that it starts with a 1.1 connection and then it upgrades. The semantics of that are pretty crazy. You probably don't want to get involved in that, but we'll see a little bit of it in the verbosity of the tools we use. Here's where HTTP/2 is working today: Chrome 41 has it. Firefox 36 has it. IE 11 has it, but only on Windows 10, and nobody has that yet except in beta.
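You can actually watch that protocol negotiation happen with a recent curl build (the URL is a placeholder; as mentioned below, the HTTP/2 feature may need to be compiled in):

    curl -v --http2 https://example.com/
    # The verbose output shows the client offering h2 during the TLS handshake,
    # and falling back to HTTP/1.1 if the server doesn't speak it.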
Nobody has that yet except in beta. nginx says they're going to support it by the end of 2015, and they already support SPDY/3.1, which is pretty darn close. I could not get curl to work with nginx using SPDY or HTTP/2. With curl, right now you have to specify the HTTP/2 flag, and I had to manually compile that feature in. Wireshark 2 will have it; they're currently in the 1.99 beta series. Apache doesn't seem to have plans, which seems a little weird, but it does have mod_spdy available. All right. So it's time for the exercises. I need to update that URL real quick, but Charlie will be walking around helping people get started. The exercises start on page 27 of this. The first step is basically to add the vagrant box with the command there, do a vagrant up and then a vagrant ssh, and you'll be in the box. Then you can move on to slide 28. Will Rails support HTTP/2? Probably not for a long time. Rails really doesn't even speak HTTP itself; your Unicorn or your Puma is what speaks that. So when those support HTTP/2, we will have it under Rails; Rails itself sort of sits right behind them. Right now, if you want it, you would put a proxy in front. nginx is my recommendation; in fact, nginx said they have 95% of the (well, it was SPDY at the time) SPDY servers on the internet. So you would use a reverse proxy. Rails also doesn't terminate HTTPS, and you probably want HTTPS on your site, so you probably need that proxy anyway. Another question? If you did the vagrant init, you'll need to get the Vagrantfile back, and I think you'll need to do vagrant box remove. Raise your hand if you did the init and we'll help you out. So once you get into vagrant with vagrant ssh, just work through the slides. Raise your hands if you have any questions and we'll come around to help. Hey folks, for the HTTP/1.0 and 1.1 exercises, make sure you enter the blank line when you're using telnet. If you don't send a blank line, the server is just sitting there waiting for you.
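If you want to poke at HTTP/2 or SPDY support from the command line during the exercises, something like this is a reasonable starting point. The host is a placeholder, and the --http2 flag assumes your curl was built with HTTP/2 support, which, as mentioned, you may have to compile in yourself:

    # Ask the server to negotiate HTTP/2 (falls back to 1.1 if unsupported)
    curl -v --http2 https://example.com/
    # See which protocols the server advertises during the TLS handshake
    openssl s_client -connect example.com:443 -alpn h2,spdy/3.1,http/1.1 < /dev/null

In the openssl output, look for the "ALPN protocol" line to see what the two sides agreed on; note that the -alpn option needs a reasonably recent OpenSSL (1.0.2 or later).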
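And since the answer to several questions was "put nginx in front of Rails as a reverse proxy," here is a minimal sketch of what that might look like. The certificate paths, server name, and upstream port 3000 are all placeholder assumptions, and the spdy parameter assumes an nginx built with its SPDY module, which is how nginx exposed this before its HTTP/2 support shipped:

    server {
        # Terminate TLS (and SPDY) here, in front of the Rails app
        listen 443 ssl spdy;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # Hand the plain-HTTP request to Unicorn/Puma on port 3000
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

The Rails server behind it keeps speaking plain HTTP 1.1; the proxy is what gives you HTTPS and SPDY or HTTP/2 toward the browser.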
|
We're web developers. But how well do we know the web's core protocol, HTTP? In this lab, we'll explore the protocol to see exactly what's going on between the browser and the web server. We'll cover: HTTP basics; HTTP methods (GET, POST, PUT, etc.); HTTPS; troubleshooting tools; proxies; caching; and HTTP/2. We'll investigate how we can take advantage of HTTP features to troubleshoot problems, and to improve the performance of our Rails apps.
|
10.5446/30668 (DOI)
|
So this talk is about how Bundler works. How does Bundler work? This is an interesting question. We'll talk about it for a while. So this talk is a hopefully brief history of dependency management in Ruby, and a discussion of how libraries and shared code work: both how it's worked in the past and how it works now, because how it works now is directly a result of how it used to work in the past and of trying to fix problems that happened back then. So before we get started, let me introduce myself. My name is André Arko. I am indirect on pretty much all the internet things. That's my avatar; maybe you have seen me on a web page somewhere. As my day job, I work at Cloud City Development doing Ruby and Rails and web and Ember consulting. We do web and mobile development, and I mostly do architectural consulting and senior developer pairing and training. If that's something your company is into, talk to me later. The other company that I founded is called Ruby Together, and it's a nonprofit. It's kind of like npm, Inc., but without the venture capital. Ruby Together is a trade association that takes money from companies and people who use Ruby, who use Bundler and RubyGems and all of the public infrastructure that everyone who uses Ruby uses, and it pays developers to work on that stuff, so that rubygems.org stays up and so that people keep being able to have gems, which is pretty cool. As part of my work for Ruby Together, I work as lead of the Bundler team. I've been working on Bundler since before 1.0 came out, and I've been team lead for the last four years. So: using Ruby code written by other developers. Nowadays this is actually really easy. You add a line to your Gemfile, gem "foo", you go to your terminal and run bundle install, and you start using it. That's actually it. Pretty cool. That's really easy. The thing I've noticed talking to people who use Bundler and think it's awesome is that it's not actually clear what just happened. Based on the text printed out by bundle install, it seems like something probably got downloaded and something probably got installed, but it's not totally clear what got downloaded, what got installed, or where any of that happened. No one's really sure how just putting a line in your Gemfile means that you can start using somebody else's code. So to explain that, we'll need a little bit of history. We're going to go back in time and I'm going to give you a little tour from the beginning of sharing code in Ruby up until now, and hopefully by the end of it you'll understand why things work the way they do. I'm going to start by talking about require, which came with the very first version of Ruby in 1994, then setup.rb, which came out in 2000, then RubyGems, which came out in 2003, and then Bundler, which came out in 2009 and is what we're still using today. So, require. The require method has been around since 1994, with the very first version of Ruby that came out. Well, I can be sure that require has been around since at least 1997, because that's the oldest version of Ruby that we still have in version control; it was probably there before that, though. But require can be broken down into even smaller concepts. Using code from a file is basically the same as just inserting that code and having Ruby run it, like you had written it in the file yourself.
So it's actually possible to implement require yourself with just a one-line function. You say, hey, I have a file name and I want to require it; you read the file into memory as a string, you pass the string to eval, Ruby runs it, and it's just like you typed that code yourself. So there are some problems with this. This is not how require works in real life. I'm sure it's totally fine that this will run the same piece of code over and over and over if you require it over and over and over. You like having lots and lots of constants that keep getting redefined. I'm sure it's totally fine. So working around that is actually also pretty straightforward. You can just keep track of what you've required in an array and not require something again if it's already been required. You set up an array, you check to see if the array already contains the file name that just got passed in, and if it hasn't already been required, you do the same thing we were doing before: read the file in, pass it to eval, and then add it to the array so that you won't require it again later. In fact, this is exactly what Ruby does, albeit written in C and not in Ruby. There is a global variable, $LOADED_FEATURES, and it's an array, and it contains a list of all of the things that you've required in the past. So if you ever want to know whether you've required something yet, you can actually just check the $LOADED_FEATURES array. So there's one more problem with this, which is that right now it only works if you pass it absolute paths. I'm sure you don't mind typing the full path from wherever you are to exactly where the file that you want to require is. I'm sure that's fine too. So the easiest way to allow requires that aren't absolute is to just treat all requires as if they're relative to the path where you started the Ruby program. That's easy, but it doesn't help a lot if you want to require Ruby files from different places. Say you have a folder with this library you wrote and a folder with this application you wrote, and you want to use the library from the application; writing relative paths from wherever you started the Ruby program would be terrible. So instead we can create an array that holds the list of paths that we want to load Ruby files from. In a burst of creativity, I'm just going to call that variable the load path, and here's an implementation of a load path. If you put a directory in the load path array, you can then pass a relative path, and we'll look for the file. So if you require foo, we'll say, hey, is there a file named foo inside any of the load path directories, and the first one that we find, searching the load path in order from first to last, is the one we'll require. Coincidentally, this is exactly what Ruby does. There's a global variable, $LOAD_PATH, and if you put a string that contains a path to a directory in it, whenever you require something Ruby will look in that directory to see if there's a file with that name. So you can totally use the load path to require files from somewhere else while you're working with them. And of course loaded features and the load path can both be combined, but that code didn't fit on a single slide, so I'll leave it as an exercise for the listener. It's pretty straightforward, to be honest. So load paths are pretty cool. They allow us to load Ruby files even if they're spread across multiple places.
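The slides' code isn't in this transcript, so here is a rough Ruby reconstruction of the toy require being described. It's a simplified sketch, not how MRI actually implements this in C, and the MY_-prefixed names are just to avoid clobbering Ruby's real globals:

    # Toy require: track what's loaded, search a load path, eval the file
    MY_LOADED_FEATURES = []
    MY_LOAD_PATH = []

    def my_require(name)
      # Don't load the same file twice
      return false if MY_LOADED_FEATURES.include?(name)

      # Search each load path directory, first to last, for the named file
      MY_LOAD_PATH.each do |dir|
        path = File.join(dir, "#{name}.rb")
        next unless File.exist?(path)

        eval(File.read(path)) # run the code as if we'd typed it here
        MY_LOADED_FEATURES << name
        return true
      end
      raise LoadError, "cannot load such file -- #{name}"
    end

    # Usage: mimic Ruby's real $LOAD_PATH and $LOADED_FEATURES
    MY_LOAD_PATH << "/path/to/my/library/lib"
    my_require "foo"   # runs /path/to/my/library/lib/foo.rb, once

Ruby's built-in version does the same bookkeeping with $LOAD_PATH and $LOADED_FEATURES, which is why you can inspect both of those globals yourself.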
At this point we could even, automatically at the start of every script, add the directory that holds the standard library to the load path, and then all of the files that are part of the Ruby standard library, like net/http, like set, all of those cool things that come with Ruby, would just be available for require automatically, and you wouldn't have to worry about putting them in the load path yourself. That is exactly what Ruby does. The standard library starts out on the load path whenever you start Ruby. It's pretty great. So this was cool, and for several years this was enough. People just added things to the load path a lot, or wrote scripts that added things to the load path before requiring things, before their actual script ran. The thing that got super tedious about just having load paths is that if you want to get code from someone else, you have to find that code, download that code, put it somewhere, remember where that somewhere is, put that somewhere in the load path, and then require it. It was pretty tedious. Sorry. So the next thing that happened was setup.rb. We're now totally caught up to the state of the art in Ruby libraries around the year 2000. Everyone's still installing shared Ruby code by hand: cp, cp, cp. And that wasn't so much fun. So a Japanese Ruby developer named Minero Aoki wrote setup.rb, and amazingly, even though this was created in the year 2000, setup.rb is still around on the internet. The website for this developer is i.loveruby.net, which is pretty cool. And you can even download setup.rb, although to be perfectly honest it hasn't been updated since 2005, so I'm not sure it's super helpful to you. So how did setup.rb work? Well, at its core, setup.rb kind of mimicked the classic Unix installation pattern of downloading a piece of software, decompressing it, and then running configure, make, make install. setup.rb copied that for Ruby, and you would run ruby setup.rb config, ruby setup.rb setup, ruby setup.rb install. What would happen is setup.rb would copy all of the Ruby files. There was a specific directory structure, kind of like a gem today, where you would have library files and bin files that you could run as programs, and support files, and setup.rb would copy all of those files into a directory that was already in the load path, called site_ruby. That was for the Ruby files you had installed that were specific to your machine. And so after setup.rb, using Ruby libraries was actually much easier than it had been. You could find a cool library on the internet, you could download that cool library, you had to untar that cool library by hand, and then you had to run ruby setup.rb all by hand, but then, hey, it was all installed. No more manual copying, no more having to manage all these files, and everything was in the load path. You could just require it as soon as you ran setup.rb. It was pretty cool. So after a little while, some of the shortcomings of this scheme became apparent too. There are no versions for any of these libraries, and after you run setup.rb there's not even a way to tell what version you have, unless you write it down, or unless the library author was really nice and put the version into the code somehow. And there's no way to uninstall. Everything just gets thrown into the same directory, so you run setup.rb for five different Ruby libraries and now all of their files are just in one directory.
Good luck figuring out which ones belong to which, because if you delete the wrong one, too bad. And then upgrading. Upgrading was super fun. If there was a new version of the library (which, good luck finding that out) you had to remember the website where you got it from in the first place. I hope you've written down every website you've ever downloaded Ruby code from. You have to go back to that website, and you have to remember which version you have, which, as I said before, there's nothing there unless you wrote it down. And then you have to download the tarball with the new version and decompress it and cd into it and run ruby setup.rb all, and hope that the new version didn't delete any files, because the old files are still there and you just went over the top of them. So overall, this probably sounds a little tedious. It was really tedious. People frequently had kind of no idea what was actually happening with their libraries, and it was actually not uncommon for people to be like, oh, this doesn't work, I'm just going to fix it in my site_ruby directory. Okay, everything's great. Right. Yeah, super awesome. So at some point, some people were like, hey, this isn't actually that great. What if you could just gem install? That would be cool. And so in 2003, RubyGems came to the rescue and fixed all of the known problems with setup.rb. You could check to see if a library existed by just running gem list. You could install a gem just by running gem install. You could uninstall a gem, super great, by running gem uninstall. And RubyGems kept each of these libraries in a different directory, so that you knew which libraries you had, and you knew how to uninstall those libraries, and you knew how to install new versions of those libraries. And it was all with a single command. There was none of this find-it-on-the-internet-somewhere, download it, unpack it, setup.rb it. And RubyGems had another super cool trick up its sleeve, which was versions. RubyGems actually kept each version of each gem in a different place. You could install multiple versions of the same library and they could all be in your Ruby, because they did not go into one giant folder; they all went into their own separate folders. So there was a folder for Rails 4.0, Rails 4.1, Rails 4.2, Rails 5. This was pretty cool. So to make this actually work, because require doesn't support versioning inherently, RubyGems added a gem method that lets you say, hey, whether or not I know it's installed, I need version 1.0 of rack, and RubyGems will check to make sure it's installed and then put that directory, just the one with rack 1.0, into your load path. So then when you run require "rack", you get rack 1.0. It was pretty cool. And so calling the gem method told RubyGems that you wanted to manipulate the load path to load exactly the version that you knew your code wanted to talk to. It was pretty useful. RubyGems also has a way to support versioning even in commands that come with gems. The rack gem comes with the rackup command, and if you have multiple versions of rack installed, the rackup command could run any of those versions. So RubyGems defaults to the newest version that you have installed, hoping that the newest version is the right one.
But if that's not the right one, RubyGems actually checks the first argument to the command to see if there's something with underscores on either side of it, and it treats that as the version number that you want. So in this example, we're running rackup from rack version 1.2.2 and only version 1.2.2. If you don't have version 1.2.2 installed, RubyGems will be like, hey, sorry, I couldn't find that version, you need to install it first. RubyGems was really, really successful. Ruby grew in popularity a lot, and RubyGems made Ruby libraries and sharing code grow in popularity a lot. So today we have about 100,000 different gems and about a million different versions of those 100,000 gems. That is a lot of shared Ruby code, and that is super cool. So, and you probably knew this was coming, as cool as RubyGems is, it still has some problems. If you have multiple applications that all use RubyGems to load their dependencies, this can be problematic. It's really hard to coordinate across multiple applications, because the way RubyGems works, every machine, or technically every individual installation of Ruby, just has one set of gems. And so if one developer runs gem install foo and starts using foo in their application and then commits that code and checks it in, and the next person checks it out and tries to run the application, it's going to explode with: I don't know what foo is, where is it, you need to fix that for me. And so it led to an era of basically pure manual dependency management. And it was like: I'm going to start a new job, hooray! No joke, this literally happened to me in 2008. New job, welcome to the team, here's your cool new laptop, we expect you to have the app running by next week. It actually took me, and I was totally working overtime on this, I think it only took me three and a half days. It was amazing. To figure out which gems to gem install, I looked in the readme, and there was a list, and I installed all of them, and then clearly there were some that people had just kind of forgotten to put in the readme, and then it kind of worked, but then I wasn't able to get images working, and then some other developer was like, oh yeah, you have to install ImageMagick. This was before Homebrew. It was really, really terrifying. Yeah.
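To make the two version-selection mechanisms just described concrete, here's a small sketch. The version numbers are illustrative, not from the talk:

    # In code: pin a version before requiring (what the gem method does)
    gem "rack", "1.0.1"   # puts only rack 1.0.1's lib directory on the load path
    require "rack"        # so this loads 1.0.1, not the newest installed version

    # At the command line: RubyGems' underscore trick picks a binary's version
    #   rackup _1.2.2_ config.ru

Both paths go through the same machinery: RubyGems finds the requested version's install directory and adds it to the load path before anything is required.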
So to try and fix this problem of, do we just put the gems in the readme, how do we even know if we've written everything in the readme (I don't know, try it, and of course you had to get a new machine to try it on, because after three years of using Ruby you've just gem installed everything and you have no idea what is important and what isn't), people started working on tools to help. Rails actually added this thing called config.gem, and this is the Rails 2.2, 2.3 era, where you would say, hey Rails, I need this gem, and you would put it inside your application.rb file. And that was super helpful if you needed to know for sure the master list of all the gems that your application needed, but you could only access that list if Rails was already loaded. So if you upgraded Rails itself, it was pretty bad. And because RubyGems automatically uses the newest version of each gem, just having an older version installed didn't mean that it would be used, and if you installed some gem a month after the other person did, maybe there's a new version and you just get the new version automatically. This is also totally a real-life experience that happened to me, in 2009. Debug a production server that just throws exceptions sometimes, for three days. The other production servers are fine. Can't reproduce the problem on a single developer laptop. Like, what is even going on? This is so weird. After three days, I finally thought to look at the output of gem list for the entire production machine, and I was like: oh, this production server has version 1.1.3 of this gem, and every other production server, and this laptop, and every developer laptop, has version 1.1.4. And that was the problem. There was a bug, and only that server had it. Then, like I was saying about Rails versions: you could gem install Rails, be happy, make a new app, run your server, everything is great. And then you switch to another application that already existed, that wasn't written to use that version of Rails, that was written to use some older version of Rails, and you're like, okay, here we go. Because you just didn't have the right version of Rails. And if you put the Rails version in your config.gem line inside your application.rb, then Rails would complain that you had the wrong version of Rails, but Rails had to have successfully started up to tell you that you had the wrong version of Rails, so it didn't actually help. And ultimately, it was actually a significant part of my job as a Ruby developer to figure this stuff out by hand, and it sucked. Depending on what exactly you did on the team, some people on my team at the time spent a quarter or a third of their time doing nothing but figuring out and fixing dependency management issues, and I felt really bad for them. Sometimes it was me, and I felt really bad for me. And then there's one more. Even after you have done all of this by-hand management, there's one more problem that RubyGems has, and it's another reason why Bundler was created, and that is activation errors. An activation error is what happens in RubyGems when you load an application and you start by saying, hey, I need this gem, hey, I need this gem, hey, I need this other gem. RubyGems will load the newest version of each gem that it can, and so sometimes you'll say, hey, I need this gem, and then this gem will need that gem, and that gem will need this gem, and you'll get the newest version of that child gem. And then later you'll say, oh, and I also need this gem, and that gem won't work with the one that's already loaded. So how common can this be, really? Well, unfortunately, it was super common. Not happens-to-you-every-day common, but happens-to-you-maybe-two-or-three-times-a-year common, and when it happens you basically tear all your hair out, delete your entire Ruby install, reinstall Ruby, and start installing gems again, because figuring out exactly which combination of installed gems was causing the problem was just a total nightmare. So this is a real-life activation error. I salvaged it from a presentation that I gave in 2010 about why Bundler exists. So this is a Rails app, and it's loading, and Rails, of course, depends on Action Pack. This is the Rails 2.3 era. Action Pack depends on Rack.
Rack is a gem that helps Rails talk to web servers, and Thin, which is a web server, also depends on Rack. So Rack is how Rails talks to Thin and how Thin talks to Rails, but there's a problem. Thin is perfectly happy to use Rack 1.1, which makes some changes to how Rack works. Action Pack, on the other hand, is not happy to use Rack 1.1 and can only use Rack 1.0. And so when you run your server, your server of course loads Thin first, because Thin is the server, and then Thin gets to work trying to load up your Rails application, and your Rails application says: I can't actually use that Rack. Sorry. The end. So the conclusion here, the reason these activation errors would happen, is that RubyGems does what we call runtime resolution: RubyGems figures out which versions of which gems it should load after RubyGems is already running. You say, hey, I need a thing, and it's like, okay, I think this version works. And at some point, if later on you say, hey, I need a thing that doesn't work with the things you've already loaded, RubyGems just has to be like: well, can't fix that. And so the fix for this problem is to figure out all of the versions before you run your application. You have to know that the versions you're going to use are all versions that can work together with one another. Resolving things at install time, which is when you install all of the gems, means you know you're installing versions that work together. So hang on a second, you're probably saying: how do we make sure that all of the versions we're installing work together? Well, that's actually where Bundler comes in. Before Bundler, the process of figuring out which gems would work together was done entirely by hand, and it consisted of: gem uninstall, gem install slightly older version, does Rails start up yet? Gem uninstall, gem install slightly older version, does Rails start up yet? And when the exceptions stopped, you knew you'd won. Unsurprisingly, computers are a little bit faster at this process than people. Computers are also really good and accurate at trying many, many, many options until one of them actually works. So this is what Bundler does. Bundler figures out the entire list of every gem, and every version of every gem, that you need, but that also all work together with one another. This is called dependency graph resolution, and there's an entire academic literature about dependency graph resolution; it's a well-known hard problem. It's part of the set of problems called NP-complete, and the totally fantastic thing, and I say this as a person who has to fix Bundler when it doesn't work, is that in theory you can construct a set of gems and a Gemfile such that it is not possible to find a set of gems that work together until after the heat death of the universe. Most of the time, we don't have that long to wait, and so we use a lot of tricks and shortcuts and heuristics to try to figure out which gems to try first, and hopefully actually finish before you've drunk that cup of coffee or whatever. So we have a pretty large built-up set of tricks over the years, and most Gemfiles actually resolve in less than 10 seconds, which is pretty cool considering that the upper bound on that is practically infinity.
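Here's a sketch of the kind of input the resolver gets: a Gemfile with loose constraints that could be satisfied by many different version combinations. The gems and constraints are illustrative, not from the talk's slides:

    # Gemfile: declares what you need, not the exact versions
    source "https://rubygems.org"

    gem "rails", "~> 4.2"    # any 4.2.x release
    gem "thin"               # any version at all
    gem "rack", ">= 1.5"     # anything from 1.5 up

From constraints like these, plus every gem's own declared dependencies, Bundler has to pick one concrete version of each gem such that everything is mutually compatible. That search is the NP-complete graph resolution just described.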
So after finding versions that work together, because this problem is really hard and we don't want to have to keep doing it over and over, Bundler writes down the exact versions of every gem that all worked together, so they can be reused by other people who are also interested in running your application. That file is called the Gemfile.lock. This is a little snippet of a Gemfile.lock, showing you which gems need to be installed and which versions of those gems need to be installed. And as a bonus, the lock file is what makes it possible to install the exact same version of every gem on every machine that runs this application. That means that when you develop on your laptop, you get whatever version of each gem was newest while you were developing, because you ran bundle install and got the newest version by default or whatever. But because of the lock file, when you go to put that on your production server, you're completely guaranteed that you will also have version 1.1.4 of that gem, and you won't have to spend three days figuring out why one production server doesn't quite work all the time. It's pretty great. So fundamentally, the core of Bundler consists of two steps: bundle install and bundle exec. The steps for bundle install are actually pretty simple. They're totally understandable in plain English that fits on a single slide, which is great; I edited this slide for maybe 10 minutes, deleting words. So the steps for bundle install are: read the Gemfile. Ask rubygems.org for a list of all of the gems that we're going to need. Find versions of those gems that are both allowed by the Gemfile, because you can sometimes say I only want this version, or I only want versions greater than that version, that kind of thing, and that work together, because you checked. Write all of those versions down in the lock, and then install every version until every gem that's in the lock is installed. And that's how bundle install works. bundle install actually uses RubyGems under the covers to do the installation, and so every bundle is its own little isolated RubyGems install. Every application has its own RubyGems, thanks to Bundler. And then the next step is bundle exec, which is how we use that application's dedicated little RubyGems instead of the one that just has whatever in it because you ran gem install last year. The way bundle exec works is: it reads the Gemfile, and it reads the lock if the lock is there. It uses the locked gems if the lock file is there, and if the lock file isn't there, it finds versions that all work together just like install would, except bundle exec doesn't do any installing; it just says, oh, do I already have versions that all work together? They do? Cool. And then bundle exec removes any gems that are already in the load path, because sometimes that happens before Bundler loads, and then it adds the exact gem at the exact version that you need to the load path so you can use it, which is pretty great. That's it. That's all bundle exec does. Once all the gems that actually work together, at their exact versions, are in the load path, your application just goes on its way, and it's happy. There are no activation errors. All your requires actually succeed. Everything's pretty great. So, as I think I promised in the abstract for this talk, here's a bundle-exec-removing pro tip. I don't really like typing bundle exec.
I find it really annoying. But Bundler provides a way to not have to type bundle exec all the time, and it is to create programs that map to the little RubyGems installation that belongs to just that application. You can use the binstubs command, bundle binstubs some-gem, and it will create, in the bin directory, a program for that gem that only runs the exact version that belongs to that application. So if you have RSpec in your Rails app, you can have bin/rspec, which will only run the RSpec for your app. And in this way, you can have bin/rspec refer to RSpec 3 in this application and have bin/rspec refer to RSpec 2 in that application. No exec required. It's pretty great. Rails has actually started to do this very thing, and Rails 4 ships with bin/rails and bin/rake, which are scoped: when you run bin/rails, you get the exact Rails version for that application and not this application, and when you run bin/rake, you get the exact version of rake for that application and not this application. Pretty cool. No more bundle exec. And you can check in these binstubs, right? You can take bin/rspec and you can put it in git, and it will be mapped to just that application forever. So no one would ever have to bundle exec ever again, if everyone did this. Pretty cool. So now we bundle install, all our gems show up, and we have versions that are dedicated to each individual application. But, as you probably sense the pattern going through this history, that wasn't actually the end. There are still problems that show up. After Bundler came out, the biggest problem that was left was that running bundle install just took a really long time. And if you lived really far away from the US, it took a really, really long time. I talked to some developers in South Africa when I went there to give a talk, and they told me about how running bundle install means that they literally get up to make themselves a cup of coffee that they can finish before bundle install finishes. So to try to speed things up, Bundler 1.1 created a completely new way to get information from RubyGems about gems, and that sped things up by around 50%, which was a pretty big win. We keep working on this. Bundler 1.9 just came out this month. There are a bunch more improvements that we're still working on; Bundler will keep getting better. If you're interested in following along with that, the Bundler website has news announcements at bundler.io, and on Twitter we're also bundlerio. So having said all of this, if you use Bundler, I would totally love to have your help working on Bundler. It's an open source project. We have dedicated a lot of time to making it easy for people who don't know how to do open source to help with Bundler, to start working on Bundler, and to kind of get into open source that way. It's a project on GitHub at bundler/bundler. It's on Twitter. If you are interested but don't really know where to start, you can totally email the Bundler team at team@bundler.io and we'll get you set up. On the other hand, if you have a job that means you have money but not time, join Ruby Together and give us money, and we'll work on Bundler and it'll be better. As Ruby Together grows, we're also going to be tackling bigger community issues. We want to add easy-to-use gem mirrors, so you don't have to go all the way to rubygems.org from your office or your data center. We want to add better public benchmarks.
There's a project called Ruby Bench that's starting to do that and we'd really like to expand it. There's a bunch of other things that Ruby Together is working on that'll be totally cool. If you want Bundler or Ruby Together stickers, I have a giant pile, so find me later. That's it. Thank you.
|
We all run bundle install so we can use some gem or other, sometimes several times a day. But what does it do, exactly? How does Bundler allow your code to use those gems? Why do we have to use bundle exec? What's the point of checking in the Gemfile.lock? Why can't we just gem install the gems we need? Join us for a walk through the reasons that Bundler exists, and a guide to what actually happens when you use it. Finally, we'll cover some Bundler "pro tips" that can improve your workflow when developing on multiple applications at once.
|
10.5446/30669 (DOI)
|
Hi, I'm Shandra. I'm going to put my slides back up. Alright. Today we're going to be talking about accessibility. I'm here to discuss a little bit about everything surrounding it, or at least most things surrounding accessibility as we can optimize it for the web. You may have also heard it called a11y; the 11 stands for the 11 letters between the "a" and the "y" in "accessibility". So you may wonder, what is accessibility? We may not be fully sure what we're talking about here. Accessibility is being inclusive. It's creating accessible websites, and that means having them understood by all audiences, regardless of adaptive technologies or browsers. It's related to usability, but usability is not to be confused with accessibility. Usability is about average users with average technologies, standard equipment, and standard browsers, and their ability to navigate the site well. Accessibility adds an additional layer. So you may wonder, why? Why do we care about accessibility? Why should we care? As software developers, we build software for people. We empathize with people. We get frustrated when we can't use websites well. So what happens when we add an additional layer of tools on top of this? We build complexity into it. So who are we talking about here? What types of people should we think about? Everyone thinks of screen readers when they think of accessibility. We think of the visually impaired; we think of people who are blind. But we really need to consider a lot of other things here: cognitive and neurological disabilities, and many other impairments. Who in here knows someone with one of these impairments, or has a friend with one? So, maybe half of you. Think about how that friend uses the web, and whether they run into any issues. Another question to ask is: how do people with disabilities or impairments use the web? What special tools do they use? We call these assistive technologies. They can include a combination of these things, or just one of them. Like I said, most people think about screen readers, but there are many, many more. This is a great quote by Tim Berners-Lee, the creator of the World Wide Web. He wanted us all to think about accessibility. So what do you think: are we doing a good job of upholding the values that he tried to instill in us early on? Let's think about that as we navigate through. So let me pause for a moment. I get asked so often: why do I care about this subject? What is my investment in this? This is a picture of me and my mom. We're both developers. She started in the era of Fortran, COBOL, and FOCUS. And something we don't have in common is a visual impairment. She's blind; she's been blind her whole life. So she uses many adaptive technologies, and she's also the reason why I'm aware that there are so many more disabilities out there than just visual impairment. But let's kick it off with visual impairment. There are four categories that we generally think of. The adaptive technologies include Braille displays. A Braille display is an extension of your keyboard: it provides small controls that you can navigate with your hands, it lets you use the keyboard at the same time, and it displays the text on the website in Braille for a blind user. Another technology is a screen reader. These come in a variety of shapes and sizes. Apple devices come accessible out of the box.
We may have all accidentally turned on our screen reader and gotten very frustrated or very confused. Windows comes with... actually doesn't come with any of these. You can install Window-Eyes; it's free for people with Microsoft Office 2010 and above. And Chrome OS, which powers Chromebooks, has a screen reader built in as well, so you can also test in all of these environments if you have the ability to. Screen readers traverse the DOM, the document object model; we may all be familiar with this. They look for focusable elements. So this is our normal HTML layout, and it's important to keep it structured. We want to keep the standard HTML flow so that screen readers can use it properly. We want the screen reader to find things like headers and footers and tell the user that there's a navigation bar coming up. Screen readers are not able to make sense of bare divs and spans by default, so we want to avoid using those for very important elements of our HTML. ARIA, Accessible Rich Internet Applications, is a set of accessibility attributes which can be added to our HTML markup; to any markup, really, but ideally HTML. They aid the screen reader. There are role attributes, which define the general type of object (article, alert, or slider) that might be coming up or that might be active. ARIA can also delineate states, properties, and values. We can add a description to a block of text or a chunk of a form, to tell the screen reader what we're filling out or what section we're approaching. We can have it live-update a progress bar, for instance. So let's take a look at the New York Times. Does the New York Times work well with a screen reader? I want to think in terms of user stories, though I don't love user stories. I want to log in as a user. I want to search. I want to browse categories. I want to read more content. So up here in the corner, this is how our eye may travel. I want to search: great, there's a search button. I want to log in: there's a login button. I want to know where I am on the New York Times: cool, there's a nav bar. I want to navigate the site, and I might want to skip to the main content. That's wonderful. These are all great things for visual users. How does a screen reader handle this? Let's see how the audio works; hopefully you can all hear it. A screen reader will show us what is selected, in bold here, and at the bottom it will read out what is being displayed. So I want to invite you all to close your eyes, if you're interested and you can hear this well enough. As someone who is visually impaired, I want to log in again. I want to search. I want to browse categories, and I want to possibly skip to the main content to read. Great, our search is done. We have search; that's one of our stories done. That's great. We can skip around if we need to. We can log in; that's another one of our stories done. This next one is a little bit longer, so feel free to keep your eyes closed. It will sort of walk you through what a visually impaired user goes through every single time they visit a new page. So that was a little bit painful. It's a little bit long. But if it is your first time on a site, you may want to hear all of the categories. There's our nav bar. That's wonderful. What's next? So you can see up here in the corner it says top news.
That's hidden, but it's a great cue for a screen reader: it will say, here, we're about to read our top news. So, a little depressing, but you can see that the New York Times does it fairly well. Let's recap. We can skip to the navigation and skip to the content easily with those two hidden links up top. The top stories have a heading. It announces things in a clear manner. Let's check out Home Depot. Does Home Depot do this well? I want to log in again. I want to search. I want to browse categories. Maybe I want to find a store. And I want to read the main content. As a visual user, these things are pretty good: we can search, we can log in. Let's try it with a screen reader again. This one may be a little bit more painful for you; I don't want to ruin anything. So this top part is pretty good. But something that happens when we get a little bit further into the site is that we're going to go through every single one of these items. It's a little bit slow. You can't skip around. And in this demo, where are we? We can't see what's going on. Does anyone know where we are? We're stuck inside of this shop-all box, and it's displaying everything again that we just went through. We're going to listen to these 17 items once again. If we want to find something when we don't know what category it's in, like wheels (who knows what category wheels are in), we really want to get to the search box over here, but it's been a little bit hard so far. We finally get to the search, but it just says "button". What does "button" mean to a screen reader user? I'm not really sure. Where do I go when I click it? Who knows what happens? So this is something we really want to consider; this is detrimental. And finally, our last screen reader demo. Sorry about the audio troubles. This is really what a visually impaired person goes through every time they visit the site. We're trying to log in. Where are we? We're not in the sign-in form. So these are the kinds of issues that we run into as a screen reader user. Was this one successful? We can't skip to the main content. We have a lot of repetition. We can't log in. The navigation is pushed down. The search button gets lost. Let's move on to something like low contrast. This can be a very big issue for low-vision users. I really want to see links clearly as a low-vision user. I want less eye strain. If we're looking at a white window, that's shining a lot of light into our eyes, and that can cause a lot of eye strain. We all darken our text editors, and this is one of the main reasons: eye strain. Let's visit Hacker News. Hacker News lightens all of their visited links. This is not great when we turn up our contrast: where did they go? If we switch into high-contrast mode, they're also not great. Things we all need to do are really play around with our tools, check out what contrast settings we have, and try them out as such a user would. Apple has a great way to deal with this: they use high contrast. Let's continue from low contrast to something like color blindness. We don't really think a lot about color blindness as a visual impairment, but statistically, we have to have at least a few color-blind people in this room. Let's try out this little demo here. This is the Barclays Premier League. Red and green colors denote losses and wins, and that is the only delineation we have. Let's try to see it as a color-blind user would.
They're essentially the same. If you are color blind, you won't see a difference between these two. Let's move on to mobility impairments. What do people with mobility impairments use in order to navigate the web? They may use foot pedals. They may use something like a single-handed keyboard. Can you imagine using one of these, and how your experience would change on the web? What if we break our hands as developers? What would happen? If you lost your arm, you could probably use one of these after months of painful adjustment. A voice-control or sight-control system is also often used by people with mobility impairments. What does voice control do? Voice control utilizes the DOM in a similar way to a screen reader, but instead of reading out what you see, it lets you speak. If I use it on the New York Times again, I want to say "click World" in order to visit the World page. The underlying HTML elements need to match: if the link that displays "World" said "home" or something else underneath in the HTML markup, we would not be able to click World. Let's take a look at cognitive impairment. Cognitive impairments affect people in such a way that they have greater difficulty with one or more types of mental tasks than the average person. It can be confusing, as someone who is cognitively impaired, to visit a cluttered website. We have to take a look at our content on the screen. We want to aim for a sixth-grade reading level, which really improves the content of our page for average users as well. This also eases the load on memory. We want to take a look at forms and their design, and not use placeholders, which we'll get into in a little bit. If we think about neurological impairment, it can sometimes be related to cognitive impairment, or there can be crossover. We want to make sure we're not creating a bad experience for someone with vertigo, for instance. Everyone is very familiar with the new trend of video on our websites' main pages. Can you imagine what this experience is like for someone with vertigo? They may experience extreme discomfort and probably be confused and dizzy afterwards. That is not a great user experience, although it is very calming when you are expecting it and you are an average user, likely. Let's take a look at hearing impairment. Often we will dig into closed captioning, and videos on YouTube are very well closed-captioned now. What is the experience of a website like this for someone who is hearing impaired? Are they expecting there to be sound, or do they think they're missing something in this video? It turns out they're not missing an experience, but you can imagine that they are keyed in to think that they're missing something in a video. So how can we improve this as developers? We want to include good things for screen readers, like section descriptions. We want to denote our links and buttons correctly, so that we aren't just saying "image" everywhere; imagine if there were unlabeled images all over this New York Times site. We want to fill in our alt tags. Most people don't. That's not necessarily a horrible thing, because I think people with screen readers expect it, but adding a short description can be hugely beneficial. For low-vision users, we want to make sure things scale well on our website, and that things don't rearrange when we're increasing the size of the screen. You can see the New York Times does this well: you can zoom in very, very easily and still be able to see everything and scroll across the page.
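Pulling those screen reader tips together, here's a small HTML sketch of the patterns mentioned: skip links, landmarks, labeled buttons, and alt text. The class name and the copy are made up for illustration:

    <body>
      <!-- Hidden skip link: one of the first things a screen reader announces -->
      <a class="visually-hidden" href="#main">Skip to main content</a>

      <header role="banner">
        <nav role="navigation" aria-label="Site sections">...</nav>
        <!-- A labeled button, instead of an anonymous "button" -->
        <button aria-label="Search the site">Search</button>
      </header>

      <main id="main" role="main">
        <h2>Top News</h2>
        <!-- Alt text gives non-visual users the image's content -->
        <img src="flood.jpg" alt="Rescue workers wading through floodwater">
      </main>
    </body>

The role attributes are technically redundant on HTML5 elements like header, nav, and main, but doubling them up was common practice in this era for older screen reader support.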
We want to make sure that there is good underlying code for our background images, like I talked about for voice-control users. If this New York Times background image had a different link underneath that said "home", for instance, we would not be able to navigate back to the New York Times by saying "click New York Times"; the voice control would not know what we're talking about. Let's take a look at Kickstarter. They did a wonderful, wonderful thing recently for closed captioning. Everyone's familiar with Kickstarter: you create a video and you try to raise money as a creator. They just recently added a big feature that allows you to go in and add your own closed captions and your own descriptions, which is a great, wonderful thing. They did not necessarily need to do that; they felt it was a necessary addition. Here's how you add them: you just click the little "add track" button here. Let's take a look at Netflix. They just recently added audio description. The visually impaired community had been asking about this for years, and they unfortunately had not done it until now, but I think this was a great launching point. Daredevil was just released, which is about a blind superhero, and they did not release it with audio description, which frustrated many people, but they recently added it. You can just go into your options here, click audio description, and sample it. It will describe everything that's happening in the picture, and it's wonderful. I grew up watching all my movies this way. Beauty and the Beast is all audio-described, and it's a lot of fun to listen to, so try it out. Another thing we can look at is typography. Are we using clear, readable type? Are we training ourselves to choose fonts that are well received and readable on a screen specifically? We want to look at the browser default size, which is about 16 pixels. That seems a little big for people who reach for a 12-point font in print, but often your computer screen is farther away than a book would be. We want to look at this size comparison and make sure that we're using nothing smaller than 16 pixels, which is about 1em if you're using relative units. We want to make sure readability is great and that things aren't jumbled. If we think back to the New York Times, it probably isn't an ideal site for readability, because it's so text-oriented and a little cluttered, but it does mimic a newspaper very well, and I think that's what we expect when we go to the New York Times. For your average developer's site, though, it may not be a great model. Also consider layout: do things make sense? Do they flow in a natural way? We'll talk about forms here. Here's a form I found that doesn't have great flow, let's say. We go here to create an account, we type in our email, and we're scrolling over, expecting maybe another part of the form, but there isn't anything; so we're creating a behavior where you just continue scrolling down. Here, I fill out my first name. This next field is "company", but I start filling it out, and I might get a call, or I might have a memory issue, and I continue filling in my last name, thinking that perhaps that's what I was filling out. We continue with this form. If you remember, there was a different part of the form, but we've just taught a low-vision user, or someone with memory loss, that there is only one column here.
We'll continue filling this out, and we don't see a state field, so maybe the state goes in the city field; we're not really sure. Let's try it. We're also expecting the next button to be right below us, since we created that behavior, but it doesn't appear to be there, so where is it? Once we find it, we hit next, and nothing happens. There's no alert. We don't get a screen reader alert; we don't get a visual alert saying you haven't filled out certain sections. It turns out that, like we saw at the beginning, there were all these other sections. That's where state was. That's where last name was. So we filled out last name incorrectly. Here's the cleaned-up version. Think about mobile users in this case too: the original creates a bad experience for them, and, like I said, for someone with memory loss, or someone who has to increase the size of their screen. So let's think about all these things as developers. Looking at click targets: we want to keep things larger and spread apart. Fitts's law describes the speed-accuracy trade-off characteristics of human movement. So we want to look at our click targets and notice when they are small and close together. I get a little bit lost; I over-click sometimes; I need to readjust. If we look at this version, the targets are bigger and farther apart, and you can see that my accuracy is much, much higher: I go straight to the element and I'm able to click. So think about these things when you're creating your buttons. Like I said, it makes a great experience for a mobile user as well, or someone with a small screen. We want to make sure sizes scale well. We want to look at relative versus fixed units. There are a lot of arguments either way, as you all know, but do it in such a way that things are still readable, maintain the same layout, and are well presented. So how can we integrate all these things as developers, in our very busy lives? Because often the business people approach you and want a feature shipped out the door, right? We can look at things like linters: if you're already using some, why not integrate a few more into your process? Try one at first, for screen reader usability. We can look at checklists; there are many checklists out there that cover usability and accessibility. We can look at our workflow and really try to work accessibility in. We can be advocates: when the business team comes to us and says, we want this feature out the door, we can say we would really like an extra day or an extra hour for usability and accessibility, so that we have that time. We really want to advocate for this. We can try it out ourselves: put ourselves in someone else's shoes, turn on the screen reader, try out VoiceOver. And another great thing is user trials. Try to find a friend with a cognitive impairment, or a friend who is visually impaired, and see how they use your site. As we all know, we find a lot of unexpected behaviors with user testing. Thank you.
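As a closing sketch, here's roughly what those sizing and click-target guidelines look like in CSS. The exact pixel values are illustrative, not a standard:

    /* Base type at the browser default; relative units scale with user settings */
    html { font-size: 100%; }        /* ~16px in most browsers */
    body { font-size: 1em; line-height: 1.5; }

    /* Per Fitts's law: bigger, well-separated targets are faster to hit */
    .nav-button {
      display: inline-block;
      min-width: 44px;               /* comfortable touch/click target */
      min-height: 44px;
      margin: 0 8px;                 /* space between adjacent targets */
    }

    /* Keep contrast high for low-vision users; don't wash out visited links */
    a:visited { color: #551a8b; }

The 44px figure is a common touch-target recommendation (Apple's guidelines use 44 points), but the point is the principle, not the exact number.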
|
What does accessibility entail as it relates to the web? Most developers don't stop to consider what the web experience is like for someone with a visual impairment. The current trend is to push accessibility-driven design to the end of feature building, but we will explore why it needs to be a top priority.
|
10.5446/30670 (DOI)
|
Hi everyone. Oh, wow, that is loud. I'm not used to talking on a microphone, so this is going to be interesting. Okay, so I am Sharon, and I'm going to be presenting this workshop today, and I'm just going to go ahead and get started. So a great thing about the tech industry is that there is a ton of disruption and innovation. Unfortunately, there isn't a lot of that happening on the communications end, and I am as guilty as the next marketer and communicator of falling into the tried and true methods to get my points across. As you can tell, I have a speech impediment. I stutter, and it has made a lot of choices for me in my own life. I chose to be a writer because I was a lot more comfortable with the written word as opposed to, obviously, talking to people. I chose to be a freelancer because any time I would go to a job interview, I would become just engulfed with so much anxiety that I wouldn't be able to get a word out, even my name, as you guys could tell a couple of seconds ago. And I chose, a lot of times, to be silent, even though I knew I could add value to the conversation. And value is a word that I am going to be talking about a lot today. I am also going to be talking about marketing as well. These are key points, and they are going to make a lot more sense soon. I have spent years working as a writer, with about eight of them in a marketing capacity, and I have learned a couple of things about what companies think they need to do to get eyes on their brand. They talk a lot about mobile strategy and SEO, hashtags and social impact, and of course Facebook and Twitter. And also branding. The problem with that word is that a lot of companies have trouble seeing themselves and exposing their flaws, as I am doing right now. Think about it. When was the last time you had an idea for a marketing plan and you scratched it off the list because it exposed a weakness? It probably happens a lot more often than you think, especially when you are in an early stage company and you are working with limited means. You stick with what you know. Today I want to challenge you to abandon your strengths and explore the things that you consider to be a weakness. So often we consider weakness and doubt as negative. They walk hand in hand, but those feelings aren't necessarily a bad thing. An Italian intellectual, and I am probably going to butcher his name, argued that weakness is creative because it allows for alternative ways to see the world. What if you applied that to your company's brand, or even your personal brand? I love to read, and one of my favorite authors is Haruki Murakami. He actually uses a form of weakness in his writing. The Atlantic actually wrote about him and talked about how no other writer writes as many bad sentences as he does. If you have read any of his books, you will understand that. His writing is compelling because of the storytelling, and that is another concept that I will be talking a lot more about. The plots take you from the supernatural to reality almost in the same paragraph. The prose is often improper or long-winded. But it works. He has marketed himself by using poor sentences to his advantage. Now he is one of the great writers of our time. That is another tip: you can stand out by being unequivocally and imperfectly who you are.
Always remember that sometimes your greatest vulnerability can be your most valued asset. And Kevin Plank, the founder of Under Armour, understands this incredibly well. The first rule of his formula for innovative design is to do one specific thing really well. And that is what he did: for the first five years of Under Armour, they only had one product, their compression t-shirt. For me, in my business, it was writing in all forms. I covered editorial, then I went over to business, and then I ended up in marketing. That builds credibility. And once you have that under your belt, you can begin to grow from there. For you, it could be devoting time to a couple of platforms that you know will work, and then disrupting and innovating from there. My personal disruption was doing this. It was speaking. I began speaking about a year ago as a way to break out of my shell. I was terrified to talk to people one on one, and obviously I was even more terrified to do anything that compared to what I'm doing now. So once I put myself out there, my biggest fear, my strongest vulnerability, and my greatest doubt, I realized that I was connecting with people in a real way. It's a lot easier to connect with your audience, whether it be a larger group like this one or a one-on-one conversation, if they know that you are flawed as well. So the question that I'm going to pose to you today is: what's your disruption? What is the one thing that terrifies you, that worries you, that makes you vulnerable? What challenges you to the point of being afraid? That's where your sweet spot is. That is what is going to capture your target audience and make you stand out. People are probably going to tell you that you need to be the strongest communicator to succeed. But sometimes the truth is that you have to be the most transparent one. Alright, so now we're going to get into our first exercise. I want you guys to pair up. I think there are probably enough people to do this in pairs. I want you to talk about an area in your professional life that you consider a weakness. I can give you some examples: it could be talking to a boss about something, it could be trying to connect with a co-worker, or it could be trying to explain a technical concept to a non-technical person. After you've talked about that, also talk about a time when that weakness kind of overcame you and got the best of you. So you're gonna do this in pairs, preferably with a person that you don't already know. So you may have to kind of, I don't know, move around the room a little bit in order to make this work. I'll give you guys like 10 minutes. Alright, so we're gonna wrap this exercise up. Okay, so how did it go? Talking about a thing that you don't consider to be positive: how did it feel? Go ahead. You know, I found that conversation to be a lot less shallow than most of my conversations. Really? A little more deep, because like you said, it kind of helps you open up a little bit. It helps to relate to someone else who is also weak in areas. And Matt here pointed out, you know, in the tech industry we all tend to feel somewhat inadequate in some, if not many, areas. So I thought that was great. Who needs church? Anyone else? Yeah, we were kind of joking that it's quite nice to start right here, like, you know, tell me about a terrible thing, right? Not like, oh, where are you from or whatever. But it definitely was like, oh, this is actually real.
Yeah, like a real, like, have a real conversation with someone. It's crazy. Well, that's awesome. I mean, that's a really good job. I don't know if I would be able to do this exercise after being in a workshop for 10 minutes, and I'm presenting the workshop, so lucky me. But yeah, that's, oh, go ahead. Yeah, well, it's actually a good segue into the next part. And hopefully you'll be able to talk to your team more easily about anything after this. That's the goal: how to talk to humans. So yeah. Before I go on, does anybody else have anything that they wanted to add? We're all friends here now. Good. Good. Okay. So the first thing that you guys have to know about communicating with other people is that everybody is exactly like you. Right? Like, everybody wants to be liked. Even right now, I want you guys to think I am awesome, that I'm interesting and that I'm funny and that I'm the coolest person here. Like, yeah, that's just who we are as people. Our brains are wired to connect. It's how we've gotten this far in evolution: through group acceptance. So if you go back to the exercise, as you were talking about the thing that made you vulnerable, even at that moment you probably attempted to present it in a way that still kind of puts you in a positive light. But what you're forgetting, in your own insecurity, is that the other person has the exact same need as you. As you were opening yourself up to them, they too needed you to like them as well. And I understand this very well, because in situations like this one, it tends to feel like I am in a position of weakness here. All of you guys actually hold all of the power. You know, all of you guys could just decide, like, okay, I'm over this, and all of you guys would just walk out and leave. And I feel like, okay, well, I'm up here, and you guys can choose, like, bye Sharon, and that would be kind of the end of that. And in situations like this one, where there is a communications power imbalance, our insecurities about how we communicate with each other always kind of come to the surface. An example of this would be when you are talking to an angry investor. They have given you hundreds of thousands of dollars, or if you're lucky, they've given you millions of dollars, and they have to have these answers immediately about how you are spending their money. So what types of physical reactions do we have? Anytime we feel like we are in a situation like that: I can give you an example. I was running late this morning and I immediately started sweating, which I know is a very attractive thing to think about right now. But yeah, we get hot, and a lot of us tend to speed up our speech, or we have problems articulating ourselves. We have these reactions because we perceive the other person as holding all of the power. But again, you have to always come back to the fact that the other person, and it doesn't matter the role that they play, still has that same need for you to like them. Because that's who we are at our roots. We are people who have to be accepted by everyone else. So when you do have moments like that, just remember that everyone in every social situation is a little bit vulnerable, because our human instinct is always going to kick in. So now that we know that the other person is exactly like us, we need to know how to use this to our advantage. So there are two types of communicators.
There are the intelligent informers and the social relators. Again, they both want the same thing, but how they go about it is completely different, and one of the ways is always going to trump the other. So I'm going to begin with the intelligent informer. This is a person who is going to try to show you how awesome they are by trying to impress you. An example of this, and I don't want to offend anyone, I have done it before and it's awesome and it's great, are the people who do CrossFit and who absolutely love it. They talk about it incessantly. I have abs, I can run a mile in two minutes, I can lift blah blah blah, and it's like, that's really great that it's helping you; however, how is this going to help me? How is this going to add any value to my life? How is your love of this one thing going to help me? It's not, right? That's the problem with this approach. It's a very me-focused approach, and it can backfire for obvious reasons, but mainly because you aren't valuing the person who is on the receiving end. And that brings us to the social relator. Their goal is to make the other person feel valued, right? They try to learn about other people, they try to support other people, and the bottom line is, they listen. I actually read a really great quote about Richard Branson. It was by a person who has spent a lot of time around him, and he says that Branson always listens a lot more than he talks. When you think about a person who a lot of people consider to be charming and charismatic, I'm sure that the first thing that comes to everyone's mind is how they talk to people. That's actually not how they're presenting themselves. They are experts at valuing you, the speaker: they look into your eyes when you speak, they respond to the silent cues that you give them, they hear what you have to say, and they only interject when they know that they can add value. We call this empathy. If you want to know the basis of how to talk to another person, empathy is it. I mean, all of you guys have a lot of empathy for me right now, because you are hearing everything that I have to say, even though I am at times having a really difficult time actually saying it. Okay, so now we're going to get into the next exercise. I want you to find another person in the room who you haven't talked to yet, and then I just want you to ask them a couple of questions. And it can be about anything. But if you want to make it, you know, business-like, then you could ask them about their job or their company. And you could also ask them about their favorite part of their job, okay? And the goal here is truly to just talk to a new person and kind of keep in mind how you are talking to them and listening to them, all right? So I'll give you another eight to 10 minutes, and then we'll come back and we'll talk about it. All right, so how did it go? Did you enjoy this? Was this weird? You did, you enjoyed it, yes. Does anybody have any comments about what they, go ahead. I was trying to listen to this person, and I find that while listening to what he has to say, in the middle of it, I'm trying to come up with my next thing to say instead of listening. I'm trying to stop myself, but it's kind of hard to concentrate. Maybe my OCD.
Yeah, well, it's a lot easier when you remember that we're always telling people about ourselves, and we're always giving other people clues about us. And so if you just kind of take the information that he gives you, you can turn that into questions; there are always more things to ask. Yeah, well, while I'm digesting the information, I'm not listening to his next thing, and then I'm like, oh, what? Oh, wow. Oh. It takes practice, you know, lots of practice, but I'm sure that you'll get it. That happens to me a lot too, especially when someone's talking to you and you're already rehearsing what you're going to say next, and then you forget where the conversation is, and you've lost your spot, and then your fear has come into actuality. It's just like, damn. I'm telling you, for me, it's easy talking about technology or something, because I always have something to add and I know how to continue. But small talk, that's very difficult for me to do. Small talk's the worst. Seriously. My next question: how do I continue? Yeah, well, a thing that I like to do is just keep asking people questions until I hit on a thing that they are, like, in love with, and then it's a lot easier to have a conversation, because when the other person feels 100% comfortable, you feel the same way. It's another thing that you guys can spend the next year or two just practicing. Active listening is harder than talking, but you can do it, I promise. I have a slightly different view on small talk. Okay. I'm not from here; I'm not a native English speaker, and when I came here as a student five or six years ago, I was much more shy because I didn't have the language down. And actually, if I go back to France, I'm not nearly as outspoken in French as I am in English, because of small talk. French people don't like small talk that much, so I just didn't know what to do. And you know, my small talk practice was in supermarkets in Florida, where I would have conversations with random people from so many varied backgrounds, and that was practice in listening, because I didn't have to worry about what I was about to say; I had no agenda for the conversation. And so it's kind of freeing to just ask someone to repeat something when I wasn't sure about it. I screwed up so much in small talk with strangers that I didn't have to screw it up with people I actually cared about later. You've single-handedly changed my opinion on small talk. But yeah, that's awesome. And again, it takes a lot of practice to be able to do this on the go, but you guys are all very, very intelligent, so I know you can make this work. Go ahead. Yeah, well, in the beginning it is going to be uncomfortable, because every conversation kind of has its own wavelength. And so in the beginning it's going to be weird, because you haven't figured out what the balance is going to be and how the flow of the conversation goes. So it is going to be awkward, but the goal is to have the other person feel valued, and as long as that happens, then everything is okay. And we're going to talk a little bit more about getting your points across in a couple of minutes, if we have time, and we do. Yeah, go ahead. Alright, so, it doesn't always have to be equal. Some people like to talk more, and some people don't like to talk about themselves.
And so, considering how much I'm talking up here, you might not believe me, but in a conversation that's typically how it goes, and I like it when the other person carries more of the conversation. So that can be okay. Yeah, again, that's a really great point. Go ahead. It was nice that it was under your direction when we spoke to these other people, because that's the most awkward part of small talk; I would never have turned to the audience before without being instructed to. And I think that could follow us around. And that happens. In your personal life, okay, great, I never have to talk to this person again. Professionally, you have to make it work. So even if it's awkward, and even if it's like, I really don't want to talk to this person, you have to, even then, even more, value them, because you need them, right, in order to get things done. I was going to give you questions, but I thought, no, it'll be way more fun for them to come up with their own, because it'll be more like a real-life thing. So, yeah, what kind of questions did you guys ask each other? If you don't mind sharing. What is your passion? Oh, wow, just out of the gate. What is your passion? Let's get real. Yeah, that's a great question. It is, because it immediately has the other person talking about a thing that they are just like, I can talk about this for hours. And then they want to talk to you more, and they think you're charming because of it, and that's exactly how you have to play it. You have to make it easy for the other person, especially in a professional setting. Like, you have to make things easy. Anybody else want to share a question? I asked how long it would take to do something that I'm curious about. Okay, yeah, that's a good question. And that can turn into an open-ended one where you guys could talk about a lot of things as a result. So, yeah, that's a good question too, especially at a place like this. Anyone else? Go ahead. What did you do last night? They did. We found out we were playing in the ball pit. Not last night. You played last night. I played in the ball pit. There are some good questions. You threw me into an answer. Yeah, I had to. Well, okay, yeah, good. Another great question. I've actually never asked that question. I'm going to try that one next time. I'm curious. Really? What did you do last night? Hey, there is nothing wrong with eating Thai food and watching Netflix. That's what I did last night. Go ahead. Questions that are not what? Sorry, I'm so far from the mic. Yes-or-no replies? So I shouldn't ask a question that invites just a... oh, yes. Yes, that's perfect. Yeah. Because those conversations can end really quickly. Are you having a good time? No. Okay. Bye. Like, that's it. I mean, I'll add to that too. I talk to my son a lot about what he does at school. So I say, hey, how was school today? Oh, school was good. What was good about it? You sound like my mom. No, but it works. But it works. That's another good one. Yeah, always ask more questions. Anybody else? I asked what techniques do you use to keep up with all the latest and greatest things about Rails and development. Good one. And I think she asked me what I like most about Rails. Yeah. Those sound like great questions. I don't know anything about development, so I'll take your word for it, but that led to great places. All right. Well, again, awesome questions.
The whole goal, the overall goal, is to just have it be a lot easier to build a relationship very quickly. All right. And so if anybody else has anything to say... no one? Cool. We're going to go into the last part, and this will be quick, because we only have 20 minutes. So, yeah, now I want to talk to you guys about building a relationship and how to shape your own message. I know that I've talked about vulnerability, and I've also talked a lot about marketing, and I know that you guys were probably thinking that this doesn't have anything to do with the workshop. Actually, it does, because marketing is talking to people. It's about connecting with your ideal audience and talking to them in a way that will compel them to take action. All right. So how do you compel a person to take action? How do you shape your message? A lot of founders are pretty good at this on their own, and they don't even know they're doing it. Right? Because if you talk to a person who has a start-up and you ask them how it came to be, they always have a story to tell. Not: I had an idea, and so I decided to create Twitter. No. It's: I was at my house, and I had this problem, and I was trying to find a solution, and there wasn't a solution, and so I went and created it. Right? And that way is compelling, because it encourages whoever is listening to hang around and hear how the narrative ends. All right? And that's how I want you to view building a relationship. It isn't about trying to sell a person on yourself; it's about presenting yourself in a compelling way so that people on the receiving end will hang around. Right? It's about being... It's about asking questions. And then, at the very end, it's about telling your compelling story. And it could be a lot like mine, that vulnerability has made me stand out, for better or for worse. It could be that you have an idea that will change how other people do things. Right? Just remember, it's all about how you present it to another person. Okay. So we don't really have a whole lot of time left. So this, after this workshop... whoops, after this exercise, you guys can just head out. Or if you want to head out early, I understand, because we only have about 10 minutes left. But the next thing that I want you to do, either today or tomorrow, is any time you talk about your company or your hobby or your passion or anything else, I want you to consider how you are framing it. Right? So here's a hypothetical. You have a technical issue that you are trying to explain to a non-technical person. A way that you could do that is to explain how solving that technical issue helped another person, and, in turn, how helping them could help the person that you are talking to now. Does that make sense? I don't know if I explained that very well. Not really. Okay. So who can I use as an example? Who wants to come up here and explain to me what they do? Or just a hobby. All right, come on. You've already been up here once. Lies. Nobody else wanted to come up here. Okay. Hi. Okay, so we've already talked before, and you're awesome. So thank you for coming up here. Okay. So what do you do for a living? Oh, and you get a microphone. I am a developer at onsite.com, and we do online leasing for apartments. So apartment communities that want to screen their applicants and do background checks, sign the leases, make everything electronic, get rid of all the paperwork. I'm a developer there and help make things go. You help make things go. Okay.
Do you have any hobbies that you like? Yes. I should have asked: what hobbies do you like? Not taking my own advice here. An interesting, unique hobby, I guess, nowadays anyway: I've been playing Dance Dance Revolution a lot. How'd you get started doing that? A long time ago, I had a friend who had it at his house, and we played, and we were terrible. And then one day I went to the arcade and saw how people really danced, and then I was able to copy that and get a little bit better. But since then I've been in competitions. A lot. Some of them, anyway. But anyway, I've got a dance arcade setup at my house now. That's my regular exercise. So it's fun. Okay. He's not a fair example, for a lot of reasons. One, because he works for a company that is obviously very needed. But two, Dance Dance Revolution, like, I'm never going to forget that. There are competitions, you guys. I did not know that. Not many anymore. Yeah. So if you were going to tell a person about Dance Dance Revolution, how would you explain it to them? Okay. So yeah, maybe everyone doesn't know that game. It's a video game with music and arrows coming up the screen that you're supposed to dance to the beat to, on a panel on the floor, four buttons: up, down, left, and right. Except I play with all eight, two players' worth. So it's more like this. Does that, is that good? Yeah. Well, so, okay. So how you talked about it, it was obviously something that you are extremely interested in and extremely involved in. And now, as a result, like, I didn't know anything about Dance Dance Revolution. I had heard of it, but I didn't really care about it. But now I'm like, okay, this is interesting. And me being the type of person I am, I'm going to go online and find out about these competitions and see the winners. Are you on the internet? Like, is your picture up and stuff? Not for dance, no. I'm going to YouTube you. I'm Nielbus online, and that'll be us. But I was just going to say, if you want to try Dance Dance, you might want to try it with either... wait, what am I saying? You know, it might be slightly embarrassing the first time you try it, because it's not easy, but just have fun and don't be shy. They have them at some arcades. That is very cool. You actually did tell us a story about you, whether you realize it or not. We talked about how you found it. We talked about how you were bad at it. We talked about how you became good, and that you do competitions, and that you play with all eight. Four players, eight players? Two players, eight buttons on the floor. Two players, eight buttons on the floor. Yes. And you've also invited us to play. This is actually a really good example. You told us everything about this one part of your life in a compelling way. Does that make sense? I just realized, you're right, this is being recorded and I'm going to be on YouTube. Oh, no. That's all right. That's okay. I'm on YouTube too. It's for a good reason. It's for a good cause. All in good fun. Yes. So is this a thing that you guys think is doable sometime? I mean, even right now, if you want to try to do it for something that you are really interested in and passionate about, or just by the end of the conference. I want you to be able to just have an entire conversation, and then also, at the very end, do what he just did, which was very brave. And thank you for coming up here. All right. So you can either hang out now and do the workshop, we don't have too much time left, or you guys can go. And I just want to say that this was awesome. This was the only time I've ever done a workshop like this.
So this was a very nerve-wracking experience. And you guys were really patient and awesome and empathetic. And I hope that you all learned something and that you feel a lot more confident and empowered to go up to a new person and know that you can build a relationship with them pretty quickly. So thank you. Thank you so much. Thank you.
|
Developers are trained to communicate with things with a goal in mind. When you're talking to something like, say, a computer, you type in your code and it responds by giving you back what you want. Simple and straightforward. Talking to humans? That's more of a challenge. Why? Because communicating with people requires a special set of skills - namely, empathy and a little bit of storytelling. In an industry filled with brilliant minds, great ideas and mass disruption, so few of the best and brightest know how to tell their compelling story. This workshop will teach you how.
|
10.5446/30672 (DOI)
|
Good afternoon RailsConf. I hope you're all feeling great after lunch. Thanks so much for coming. I'm very excited to be here. My name is Derek Pryor. I'm a developer with Thoughtbot, and I'm here today to talk to you about code reviews: doing them well, and what it means for the culture of your team when you're the type of place that does them well. So let's start with a show of hands, just so I can get an idea of where everybody's at. Great. How many of you are doing code reviews as a regular part of your day already? Okay. And how many of you really enjoy doing them? Okay, a few less. And how many of you do them because you feel like you have to? It's about equal; all the people who said they really enjoy them also said they do them because they have to. Okay. So, why is it that we do reviews in the first place? This is pretty easy, right? It's to catch bugs. We're going to have somebody look at every individual line of code, and we're going to find what's wrong with it. Not really. That's not why it's interesting. I've been doing code reviews for over 10 years now. I used to hate them, and they were the worst part of my day; we did them just for compliance documentation at one of my former jobs. But now I think that code reviews are one of the primary ways that I get better at my job every single day. So, yes, we're going to have fewer bugs in code that's been peer reviewed than in code that has not been peer reviewed. But studies on this show that the level of QA we get out of code review doesn't meet the level of expectation we have as developers and managers. That is, we think by doing code review we're getting this much QA, when in reality we're getting somewhere down here. So why is that? Well, the reason is that when we do code reviews, we're looking at a slice of a change. We're looking at the git diff, essentially. We can catch syntactic issues, or problems where you might be calling a method on nil, but we can't catch the really heinous stuff, which happens when your whole system interacts and corrupts your data. So code review is good for some level of bug catching, but it's not the be-all and end-all. So what are reviews good for? I already told you that I think code reviews make me better every day, and I want you all to feel the same way. Well, in 2013, Microsoft and the University of Lugano in Switzerland came out with a study, "Expectations, Outcomes, and Challenges of Modern Code Review." In it, what they did was look at various teams across Microsoft, which is a huge organization: several teams with different makeups, senior developers, junior developers, managers, everybody, all working on different products. They surveyed all these people to ask: what is it you get out of code review? What do you like about it? What don't you like about it? When they were done surveying them, they watched them do code reviews and asked them questions afterwards. And finally, they looked at all of the feedback that was given, every comment logged in their code review system, and they manually classified it: this one has to do with a bug they found, this one has to do with a style comment, this one has to do with a possible alternative solution. After doing all this work, what they found was that people consistently ranked finding bugs very high as a reason for doing code review, but in the end, it was actually a lot less about finding bugs than anyone thought.
The chief benefits they saw from code review were knowledge transfer, increased team awareness, and finding alternative solutions to problems. That sounds a lot more interesting to me than hunting through code looking for bugs. Through this process, we can improve our team. One person involved in the study said this: code review is the discipline of explaining your code to your peers, and that drives a higher standard of coding. I think the process is even more important than the result. So I really do like that last part, even though it's not on the slide: the process is more important than the result. We're going to talk about that today. Just by going through the process of code review in the right way, we're going to be improving our team, regardless of the actual results we see on any individual change. But I also like the definition here: it's the discipline of explaining your code to your peers. I'd tweak it just a little bit to say that code review is the discipline of discussing your code with your peers, rather than trying to explain it to them. Code review is one of the few chances we get to have a black-and-white conversation about a particular change. We often talk in abstractions. When we come to conferences like this, we talk in large abstractions about things, and those are really comfortable conversations. In code review, we have to get down to the implementation details and talk about how we're going to apply these things. So much of what we do comes down to communication. If we get better at code reviews, then what we're really doing is improving the technical communication on our team. The study also found that those benefits I cited earlier were particularly true for teams that had a strong code review culture. So what does it mean to have a strong code review culture? To me, it means that your entire team has to embrace the process. Everybody has to be involved. It can't be something that the senior developers do for the junior developers. It's a discussion. That's what we're after here. So, as I mentioned earlier, I've been doing code reviews for over 10 years now, but only in the last two to three have I started to see a real improvement in what I'm getting out of them, and in myself because of them. Why is that? I think it's because I'm part of a strong code review culture at Thoughtbot. At Thoughtbot, we often go into clients of various sizes to help them with issues that they're having. Nobody ever comes to us and says, I really need help improving code review on my team. That never happens. But when we get on the ground with those teams, we often find that there isn't a strong code review culture. So one of the challenges we have is: how do we get people to have this culture around code reviews? There are a lot of little rules that we suggest following in our guides, but if you look at them, you can really boil them down to two key rules of engagement, two things that you can do today to start getting better at code reviews with your team. The first of them is something that you can do as an author. The second is something that you can do as a reviewer, when you're providing feedback. So first, as an author, what are we going to do to make sure that our code reviews get off on the right foot? This quote here is from Gary Vaynerchuk, and he was talking about social media marketing or something. Not interesting. But it's also applicable to code reviews, believe it or not. So in a code review, your content is the code.
It's the diff. It's what you've changed. The context is why it's changed. Your code doesn't exist for its own benefit. It solves a problem, so the context is: what problem is it solving? That study I cited earlier found that insufficient context is one of the chief impediments to a quality review. So as an author, we need to know this and get out in front of it. We're going to provide excellent context around our changes. So let's have a look at a real-world example. This is a pull request title that you might see come across your email: "Type column first in multi-column indexes." The specifics here aren't particularly important if you don't understand what this means. The problem here is that there's no why. I can look at this change and say, well, yeah, this changes the order of columns on multi-column indexes. But I can't tell you if that was the best solution to the problem you're solving. I can't really learn anything from it. So this is a loss. It's not interesting for me to review, and it's just going to get a thumbs-up and move on. Actually, what I would do with a change like this is comment and say, can you provide some more context here? And a lot of times, what we find happens is somebody updates the pull request or adds a comment to say something like this. So I guess you guys think that's not better. It's true. That's not better. So this is probably a link to GitHub, or maybe a link to Jira, or whatever your issue tracker is. And a lot of people do this: they'll provide a short explanation and say, for more detail, see this ticket. But what you're doing there is making the reviewer hunt for that context. If they click this issue, are they going to see a good description of what you're doing? Probably not, right? They're going to see a report of a problem, maybe some discussion back and forth about how to solve it, and then maybe a link back to this pull request. They've got to go through all of that and piece the context together until they're in the right frame of mind to review the change and tell you whether or not it's the right solution. So here's an improvement. Again, it's not important to read all of this. What we're doing is identifying the problem with index ordering first. Why is it a problem? We back it up with some Postgres docs, which link off to more information if you need it. And because this particular change was a change to Rails, and we need to be concerned with multiple adapters, we also back it up with some MySQL documentation. And finally, we talk about why this is the best solution. So that's a lot of context, right? But it's important to note that now, as somebody coming along to review this, I know why this change is being made, and maybe I've even learned something about how multi-column indexes work by reading through some of that documentation. So there's value for me in reviewing this. So as an author, we're going to provide sufficient context. You've been working on this change for four hours, two days, however long it took you to fix this, right? What you need to do is bring the reviewer up to speed: let them know what you learned in this process and put them on equal footing with you. So I'd challenge you to provide two paragraphs of context with every change you make, right?
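As a rough illustration of that two-paragraph challenge, here is a hypothetical description template. It is my sketch, not a template from the talk, and the column names and rationale are invented:

```text
What and why
------------
Multi-column indexes are read left to right in both Postgres and MySQL
(see their docs on multi-column indexes), so lookups that filter only
on the leading column can use the index, and lookups on trailing
columns cannot. Our indexes currently lead with the wrong column for
our most common query, forcing sequential scans.

Why this solution
-----------------
Adding a second single-column index would also fix the reads, but it
doubles the write cost on these tables. Reordering the columns in the
existing index solves the read problem without that overhead.
```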
At first it's going to be really painful, and there are some changes that are torturous to describe with two paragraphs. And yes, it's going to take you more time. But how much more time? I don't know, like, five minutes, maybe, right? And it avoids that whole round of questioning I described before: why are you making this change? So we've headed that off, and the extra bonus is that all of that context we saw earlier gets to live on in the commit. We're going to squash that, and we're going to save it. So rather than the one we saw before that just had a link to Jira or a link to GitHub, which is going to go away as soon as we stop paying that bill, our git history is going to stay with us, and we're going to see that context there. Okay, so that's what we can do as an author. What about as a reviewer? What can we do to make it so our feedback is well received? I like to call this "ask, don't tell." So to start off with, it's important to note that research has shown there's a negativity bias in written communication. That is to say, if I have a conversation with you face to face and give you some feedback, or maybe even over the phone, any place where you can hear the tone of my voice and the inflection, you're going to perceive that one way. If I give you that same feedback written, like in the form of a pull request review, it's going to be perceived much more negatively. So it's important that we're cognizant of this negativity bias in our written communication. We have to overcome it if we want our feedback to be taken the right way. One way to do that is to offer compliments, and I would suggest that you do. If you find something in a pull request that taught you something, or something you think is done particularly well, those are great to call out. It lets everybody know: yes, you've taught me something, thank you very much, rather than always nitpicking at the change. But there's going to come a time when you need to provide critical feedback. And the best thing to do here is to ask questions rather than make demands. I've noticed that when I remember to do this, even on the same day, if I'm looking at two different pull requests, and in one change I remember that I should be asking questions, and in the next change I just make commands, I can guarantee which one's going to have better technical discussion in it and which one's going to be more satisfying for the entire team. It's the one where I try to engage in conversation rather than dictating what somebody should do. So let's take a look at another example: "Extract a service to reduce some of this duplication." So this is a command. There's no discussion here. I haven't opened anything up. So if I'm the original author of this change, my option here is to either do it, or enter into what seems like an argument, or just ignore you, right? And the ignoring is probably the worst thing you can do. I'd rather see you argue. Another important thing to note is that this comment from the reviewer gives the author no credit for maybe having already thought of extracting a service. Maybe they ruled it out for some reason. Maybe they're waiting for more information so they can make the proper abstraction here. So how can we improve this? "What do you think about extracting a service to reduce some of this duplication?" All we've done is formulate it as a question.
But there's all sorts of different ways this can go now, right? We've opened up a conversation. So one of the more likely ways is: yeah, you're right, I can eliminate some duplication by extracting the service, thanks a lot. And I type back, great, thanks, that's great feedback; fixed in this commit that I added. Now you feel good as the reviewer, because you provided something of value that was well received. If you disagree with the change, well, you were just asked your opinion, so feel free, right? This is significantly less negative than the command. What we're really doing is fostering technical discussion. Code reviews are just a way for us to have excellent technical discussions with each other. So "what do you think about..." is a great conversation opener in pull requests. "Did you consider..." is another, if you want to throw out an alternative. Or if you're getting lost somewhere, "can you clarify..." is a lot better than "what the hell is going on here," right? So these are all ways to soften suggestions and avoid negativity, which is going to lead to better discussion. They're also excellent ways to provide feedback upward. If you are new to a team, or you're a junior member of a team, and somebody more senior than you submits a change, I think it's a little natural to feel tentative about providing feedback. But doing it in the form of a question is a great way: you're just looking to learn, and you're also kind of nudging the discussion in the direction you want it to go. And once you open with these conversation openers, yes, then you can break into some practical suggestions, right? But now you've got everybody on the same page. Similarly, it works very well giving feedback from somebody senior to somebody more junior, maybe an apprentice or something like that. If I give that command to an apprentice, they're going to do it. And maybe they won't feel great about it, and maybe they won't even be entirely sure why. Whereas if I ask a question, what I'm really hoping is that they'll engage, not just comply. So this is pretty simple, right? All we're going to do now is ask a bunch of questions. We're just going to tack question marks onto everything. No: we need to be really careful about this. It's pretty easy for a question to be a not-so-silent judgment. So here's one I see a lot: "Why didn't you just...?" There are a couple of things wrong with this, actually, but one of my pet peeves is the word "just." Amen. So every time I see this, I think to myself, or sometimes out loud, why didn't you just, right? Words like "just," "simply," "easily" pass judgment about the difficulty of the solution proposed. And when I read those, they make me feel like I wasted somebody's time or missed something obvious. That's not a good way to feel, right? So let's look at how you can improve this. We can get rid of the word "just." Is this better? I mean, it's less judgy, right? That word "just" is gone; that's great. But we're still putting people on the defensive. It's still framed kind of negatively: why didn't you do something? This is kind of a perfect example of the negativity bias in written communication. If I had this conversation with you and I said, oh, why didn't you call map here? That's not so bad, right? But if I write this down, where you lose any sense of my tone or inflection, it's going to come off more negatively. So what we're going to do is be positive, right?
We're going to use those tools we talked about earlier. We're going to ask: what do you think about...? Can you clarify...? Did you consider...? Those types of things. What we're really talking about here is asking the right questions the right way. And what we're after is better technical discussion. What we're talking about here is the Socratic method. The Socratic method, according to Wikipedia, is based on asking and answering questions to stimulate critical thinking and to illuminate ideas. That sounds like exactly what we're trying to do, right? We're trying to have critical thinking around the code changes we're making and to illuminate some potentially alternative solutions. We're stimulating valuable discussion in our pull requests now, versus just throwing them a thumbs-up. The Socratic method worked pretty well for Socrates and Plato. It'll probably be okay for us in our pull request discussions. So those are the two things that we're going to do today, right? These are our tools for better technical discussion. We're going to be well on our way if we start doing these two things. There are going to be a couple of issues that come up in practice. The first is: how are we going to handle disagreements? And the second: what is it I should be reviewing anyway? What's the high-value thing for me to look at? Let's handle conflict first. Conflict is good. Your team needs conflict in order to drive a higher standard of coding. A good, healthy debate around a change drives quality, and it leads to learning. But there are two types of conflict, one of them healthy, one of them not so much. So the easy type, the healthy type of conflict, is that we don't agree on an issue. Perfectly fine. We're not always going to see eye to eye. What's critical to note is that everybody's going to have a minimum bar for quality that you need to pass. Once you reach that minimum bar for quality, we're talking about tradeoffs. We need to be really sensitive to the fact that we're just talking about tradeoffs. So if you find yourself disagreeing with something, and you're having this conversation back and forth, ask yourself: is it because I don't think the quality is up to snuff here, or is it because it's not the way I would have done it? If it's not the way you would have done it, that's fine. There are multiple solutions. We're talking about tradeoffs here. We can go back and forth all day if we want. Socrates and Plato and all of them could go back and forth all day, but they weren't shipping software. We need to ship this software to the user. We need to agree to disagree at some point. Reasonable people disagree all the time. Just make sure you're not arguing in circles because it's not quite the way you would have done it. So what about the second type: we don't agree on the process. This can happen if you have somebody committing code directly to master, or you have somebody opening pull requests and ignoring feedback. They're opening the pull requests because they were told to, but they don't value the feedback. They don't value the process. They don't value your time, ultimately. My advice on this is to get in front of it. As a team, sit down and decide what it is you want out of code review. What do you expect? Maybe it's that all changes, regardless of size, are going to be put through a pull request review, and then all feedback will be addressed in some manner or another.
It doesn't mean you have to accept the feedback, but you have to say, like, oh, I see your point there, but I really think this is better, so I'm going to go with this, and I know we're going to revisit it when we get to this other feature, when maybe we'll have more information to make that change, or something like that. So once you've done that, you're still going to have those problems. Writing it down doesn't solve all of them. You're still going to have people committing code directly to master. You're still occasionally going to have people who are ignoring feedback. So what do you do in those situations? If you are in a situation where somebody is committing code directly to master, my advice to you is to review it anyway. Go in there, add comments on the commit, and when you're done, follow up with them afterwards and say, hey, I noticed you committed this to master. I had a couple of questions. Can you take a look at it when you get a second, and let me know what you're thinking and how we're going to address these? Oh, and by the way, can you submit a pull request for this in the future? If this continues to happen, then it's time to break out the revert hammer. Go ahead and pull that change out, open a pull request for it, and start adding feedback there. It's also crucially important that you enlist the help of your team in this. You can't be the one always swinging that revert hammer or always being the one cracking down. If you are, it means you're the only one valuing the process, or the only one willing to actually speak up for it. Okay, so that's how we're going to handle conflict, hopefully. What about what to review? People ask us a lot: what is it I should be focusing on in review? If I'm not catching bugs, then what am I doing? The key here is that everybody kind of brings their own list of things to look at, and that's how we get better, from each other's expertise and areas of focus. I can tell you what works for me, to give you an idea of the type of stuff I look for. First, a note on timing. When I'm doing reviews, I'm trying to stress small changes. Ten minutes is a long time for me to spend on a review. What I'm really trying to push for is small changes that are easier to provide context on, easier to review, and quicker to go through. Once we have these small changes, one of the first things I personally look at is the single responsibility principle. This is the S from the SOLID design principles: does every object in the system have just one job? If you're not familiar with SOLID, that's not too particularly important. You can focus on the single responsibility principle, and the rest, the O, the L, the I, and the D, if you squint, kind of follow from the single responsibility principle. That's why I really like to focus on that one. Naming is a big one that I focus on a lot. There are two hard problems in computer science: naming things, and cache invalidation, and time zones. Good names make things easier to discuss, and that's what we're after, good technical discussion. It means I can have a better conversation face to face, and I can write about these things more naturally. I will definitely focus on names, to the point where some people are like, you're missing the big picture here. Like I said, I'm trying to make things easier to discuss. Complexity is another thing I focus on: I'll look at a change and see the shape of it.
Are there areas of the code where the shape looks complex? I'll dive into those a little bit. That's where I'll break out the "can you clarify." Sometimes it turns out that the complexity exists for some future feature we thought we might need, and we can ship that off until later. Test coverage: as I'm going through a change, I'm kind of assembling in my head what I expect to see for test coverage. If you're using RSpec, the specs come along with the diff; they're almost always at the bottom. Great. I'm looking to see: okay, there's probably a feature spec that covers this. Here is a controller spec that covers this edge case. There are probably a bunch of unit tests around these methods that got added to the model, or whatever. Like I said, I'm not specifically looking for bugs. The test coverage is vital. If I see a bug, I'm going to comment on it, but I'm not doing your QA. I've seen a lot of places where people are happy to say, well, so-and-so approved it, so this must be good, even though they had kind of a shaky feeling about it from the beginning. We're not doing QA. I keep saying that. And like I said, everyone has their own areas of expertise. Everybody has their own checklist. Personally, I'm interested in web security, so maybe I'll look at some things from that perspective and make sure things are on the up and up there. Maybe you have somebody on your team who's really great at giving practical performance advice that's not premature optimization; it's great for them to jump in. There are all sorts of things I didn't list here, like duplication. Whatever it is that you're comfortable giving feedback on, bring that to the table. We're going to get the best from all of our teammates that way, and we're going to learn from the best parts of them. So one thing I didn't mention was style. Is style important? Yes. Style is important. If you look at a code base and it looks clean and consistent, it gives the impression of a team working together toward something. Everybody's on the same page. A neat and tidy code base is like a clean kitchen: everything has a place, and everything is in its place. The problem is that the study I cited earlier found, consistently, that people who received a lot of style comments on their code viewed those reviewers kind of negatively. They thought the reviewers were harping on things that weren't valuable and missing big-picture things. Whether this is perception or reality doesn't matter. We're talking about improving technical discussion, so if somebody feels the discussion is not valuable, is there something we can do about it? Yes, there is. So my advice, first, is to adopt a style guide. There are plenty of community style guides you can just adopt outright, or you can look at what you've been doing and write it down somewhere real quick. Make sure that everybody knows that any arguments about style, whether you're using double quotes or single quotes or whatever it is, are going to happen in that style guide, not in an individual pull request. After you've written it down, you're going to outsource it. We're going to use RuboCop, JSHint, SassLint, things like that, to handle the style checking for us. At Thoughtbot we have Hound CI, which is a service that runs these linters on your code and adds comments as if it were a person, the difference being that it's a bot providing the feedback. So getting into a bikeshedding discussion with a bot is not very satisfying. Right.
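As a sketch of what "write it down, then outsource it" can look like: the cop names below exist in standard RuboCop, but treat the particular selection and limits as illustrative, not a recommendation from the talk.

```yaml
# .rubocop.yml -- a minimal, illustrative starting point.
# Settle style arguments here once; let the linter do the nagging
# in pull requests instead of your teammates.
AllCops:
  Exclude:
    - "db/schema.rb"   # generated code isn't worth linting

Style/StringLiterals:
  EnforcedStyle: double_quotes  # the choice matters less than ending the debate

Metrics/MethodLength:
  Max: 10
```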
So yes, full disclosure, that is a Thoughtbot service. It is, however, free for open source. It's open source itself. I think it's a great product. I don't stand to gain anything personally from it. I think you guys should check it out for that reason. So these are the tools that we're going to use for more meaningful technical discussion. We start doing these things. We start providing context. We start making sure the author is going to receive our feedback in the right way. And we know what we're looking for in a review and we know how to handle conflict that comes up. What we're well on our way to here is having a strong code review culture. So a strong code review culture goes beyond the quality of a single change. It gets at the roots of the type of team that you have. So if your team has a strong code review culture, what is it that you're going to see? Well, first, you're going to see better code. Yeah, I already told you, this isn't about catching bugs. But the code is going to be better because the discussion improves the solutions. I can't tell you how many times I've submitted a pull request and been like, that's some of my best work. That's going right through, right? And then five minutes later, there are three emails from GitHub with feedback. And at first, I'm like, ah. And then I read the feedback and it's all totally reasonable and makes the solution even better, because it's somebody else's viewpoint about what they're really good at. And it took what I thought was already a good solution and made it better. Or maybe it turned the solution totally on its head and I was wrong, right? But the important part is that through group effort, we're getting better. We're going to have better developers. You'll be reading code and writing about code every single day, right, in black-and-white concrete examples. And like I said, we're going to be taking the best from each other in these conversations. So maybe you're not that into web security, but because I keep commenting that you can't pass params that way, you start to learn that, right? We have team ownership of code. What this means is my code, your code, their code, that's all dead. That's gone, right? At one point, we all decided that this was a reasonable change. Maybe it wasn't the best solution, but we thought, given what we know at this time, this is good. So the whole team is going to own the code now. And what we're going to get out of that is versatility, right? There's no more, oh, this person handles the issues with the sales dashboard and this person handles the orders page, and when they're out, we'd better hope there are no bugs, right? Everybody knows what's going on in the system. And finally, we're going to have healthy debate on our team. There are a lot of teams that we go into that do not have healthy debates. They have silent seething, right? Somebody's upset about somebody else always committing what they think is crappy code. But now we have the tools for specific technical discussion, and we can be better at all of these discussions, right? So if you look at all these benefits: better code, better developers, team ownership, and healthy debate. This sounds like a fantastic place to work. I want to work there, right? It's going to make you better every day, and those are the benefits that I'm seeing in the strong code review cultures I'm a part of. Do we have some questions? Yeah. So the question is, how do you handle somebody who's just not engaged in the process, right?
I mean, I think it's about helping them see the value of it, and helping them feel engaged when they do give any review comments, right? So any comments that they do give, be excited about them, right? And engage them in them. And even if you don't necessarily agree, maybe make a couple of concessions, right, early on, to kind of get them back on board. And like I said, you write this stuff down, what you expect from people. You expect that you're going to be doing these reviews. And a lot of people say, like, I don't have time for those reviews, but I think that's bull. We have plenty of time for reviews. Like I said, ten minutes is a long time for me to spend. So I finish up a feature, I look through some reviews, that type of thing. Okay, so the question is, how can you encourage more junior developers to review the code of perhaps more senior developers? Is that right? I think a lot of the same rules apply. What you want to do is say, we have an apprentice, and I'll say, review this code for me, right? I want to see what your feedback on the code is. And when they give that feedback, don't just dismiss it out of hand, right? Have a conversation about it. Try and give them some easy wins, things like that. Yeah, so the point there was that reviewing is just another form of pairing, kind of, right? You're not doing it live, you're doing kind of asynchronous pairing, I guess. And a lot of what I talked about today, if you notice, doesn't really have a lot to do with pull requests. It has to do with providing feedback to one another, right? So it works equally well in settings of pairing as well. Yeah? Okay, so the question is, how do you deal with a company where there's a gatekeeper, right? There's a sole gatekeeper who does the reviews, and you have to get through that person to get your code merged. Right, yeah, so here's what I would say to that: with your coworkers, change that culture, or find a new job. And I'm serious. The benefits I listed there, those are real. I see those benefits every day, and it makes where I work a great place to work. Yeah? Right, so how do you provide context on refactors, basically, right? There's no new feature to be reviewed. What I would say is you explain why you're doing the refactor, and, like I said, we're not doing QA in our reviews. So maybe I know that there's some weird place where, you know, you changed this method name and you searched for everywhere it was called, but we're calling it through send here, and I know this off the top of my head and I can give you that feedback. But in reality, we're relying on our tests, right? That's what refactoring is all about: having good tests. So if your tests pass, then what I'm interested in as a reviewer is what you think we're getting out of the refactoring, technically. Tell me why you felt the pain and what this solution solved. Yeah? So the question is, how do you work with a QA team on this, basically? Like, when do you do the review versus doing the QA sign-off on something? My advice there is to do the code review before the QA, in case anything significant changes, and then hopefully you make QA's job really boring. And if they find something, because they do and they will, then those changes get reviewed as well. Yeah? The question is, how important is it that it be asynchronous? I don't know.
I would say that on larger changes, I'll often pull somebody over and just try and walk them through it. But I do think the asynchronousness kind of makes the change have to stand on its own, which is also really interesting. So I don't know. The answer is I haven't played with synchronous versus asynchronous enough to know. All the way in the back. What's my opinion on authors merging their own pull requests? I would say what I typically do, once I have a good code review workflow going with the team, is: if there are just a couple of small comments that are easy to address, and I'm already a trusted member of this team and we have a good relationship already, then what I'll do is I'll just address that feedback. If it's a straightforward thing to address that doesn't necessarily need additional review, like I renamed something to something they suggested, then I merge that right in. Where we work, authors do merge their own pull requests. Sometimes with some teams you're waiting on, like, specifically having a thumbs up, or two thumbs up, or something like that. That's for your team to work out. So the question is, with refactoring, it's occasionally hard to... sorry, yeah, I get it. Okay, so it's hard to commit small changes with refactoring. So how do you balance the need of maybe needing to run this by somebody first versus presenting a gigantic pull request? I find that most of those large refactorings come out of conversations that you're already having anyway. So, like, I might have a conversation with Caleb about, oh, this area of the system is really bugging me. And then finally we get around to this refactoring. For a larger change, when you actually do the review, leaving it as several commits before you squash is probably a good way. In those cases, I'll say, this review is going to be a lot easier for you if you step through the commits, right? You'll see the process I went through and you can kind of follow it. Yeah. So the question is, how do you handle different time zones, basically, right? And possibly language barriers or cultural barriers. I haven't had to deal with a language barrier too much yet; most of the people speak good enough English to conduct a code review with. But I have dealt with, and am dealing with, time zone differences, which are really difficult when you're trying to say that there has to be consensus around a change. Right? If somebody's 12 hours ahead of you, it's really difficult for them to have to wait a whole other day for feedback. Unfortunately, that's kind of the price of having a widely dispersed team like that. Hopefully there are people in that time zone as well who can provide a quality review, and maybe you can kind of demarcate the work. It's tough when you have a distributed team that's spread so wide. Three hours is reasonable to handle; there'll be some overlap. Twelve hours is tough. I don't know. If you have some good suggestions, let's talk afterwards. Yeah, over here. Okay, so the question is... No, there was no question. It was: get rid of the word "just". I'm out of time, so come up and talk to me out in the hallway. My Bike Shed cohost, Sean Griffin, is here. We're going to be doing some podcasts out somewhere on this floor down here, maybe, eventually, so you can follow us on Twitter. You can email me questions, tweet me questions, find me later. All right? Thank you.
Thank you. Bye.
|
Code reviews are not about catching bugs. Modern code reviews are about socialization, learning, and teaching. How can you get the most out of a peer's code review and how can you review code without being seen as overly critical? Reviewing code and writing easily-reviewed features are skills that will make you a better developer and a better teammate. You will leave this talk with the tools to implement a successful code-review culture. You'll learn how to get more from the reviews you're already getting and how to have more impact with the reviews you leave.
|
10.5446/30674 (DOI)
|
Thanks for coming out, guys, and checking out my presentation on the Internet of Things. It's a topic that I'm very, very fascinated by. Not something I do on a day-to-day basis for work, but I've been kind of tinkering with the Internet of Things and reading a lot about it, so I wanted to share that with you guys a little bit today. So today I'm going to walk through a simple Rails-based Internet of Things app that I've built, then talk through some of the background and context around the Internet of Things, and then also talk through the future of the Internet of Things and kind of where I see it going. So just a quick background and introduction around me. I've had kind of a roller coaster of a career, which is to say I've tried and done a lot of various different things. Started off as a political science major, then went into business, where I tried management consulting and venture capital, before I started teaching myself to code. Really loved it, so I went to the Flatiron School to get a formal education there. And now I'm working at a small legal tech startup called Case Flex. So I certainly don't have the number of years of experience that I know a lot of you guys have, but I love it. It's been an awesome ride so far. I'm very excited to be here today as well. And on a day-to-day basis, I use Rails and Angular. And as I mentioned, as kind of an Internet of Things nerd, I've really gotten into tinkering a lot with Raspberry Pis and Arduinos as well. So before I get into the presentation, let me walk you through my daily routine. Just a quick disclaimer: I brought all the stuff here, but I figured the demo might be a little bit better if you guys can see it there, and I also videotaped it, so I'll just show you guys the video of the app. Cool. Basically, I'll go through what it's actually doing a little bit more in detail, but I have the actual web app on the left and my Internet of Things setup at home on the right, and I'll go through what my daily routine is. So in the morning, I wake up, I turn my lights on, but I'm really cranky in the morning, so I need to dim my lights a little bit, so I do that. As the day progresses, it's supposed to be good to get a redder type of light for your circadian rhythm, so using a form there, I start changing the color of my light as well. Then after that, the next step: I really need some caffeine. What I do is I go to the Appliances tab and I turn on my coffee maker. And as you can see, I start getting a little bit of coffee that starts coming down and being made. And that's all connected to an Arduino. If you guys want to see the setup afterwards, I can show you that. Then... Sorry about the sound effects there. So then I can turn my coffee maker off once I've got enough. And then I turn on my favorite music playlist. So I'm playing Happy. I'm usually never happy in the morning, so it's trying to pump me up a little bit, but it usually never works. And as I'm leaving my apartment, what I do is I turn on my motion detector. On the right side, you see a little graph, and that lights up every time there's motion. And you should see the graph over time changing with a count of motions as well. And that's all on the front end. It's cut off a little bit there, so I just threw it in on the right side as well. So as you can see, the little...
It's like this thing here, but you can see it's like a little motion detector. And so I set that up before I leave. If I want, I can also set, rather than a silent alarm, kind of a louder alarm. So again, setting up that motion detector, but now connecting it with a Sonos and having the two interact with one another. At the end of the day, when I'm coding, I like to listen to beats. And then at the end of the day, maybe on a Friday or a Saturday or so, I just want to unwind a little bit, so I have a party. So that's a quick overview of the app that I built. Now I'll go into more detail on how that's working. All right. So the high-level stack around this Internet of Things application is: I've got an Angular application, and that's running in the cloud, and I'm serving that up using Divshot. I don't know if you guys have used Divshot before. It's really cool. It's kind of like Heroku, but for static applications; it's really easy to just push applications up. Then the Angular application consumes the API from my Rails app. And the Rails app is actually running locally on a Raspberry Pi, and that's serving the Rails application and the API. And I'm running that on the Raspberry Pi, which is on my home network, because I can't actually push that up to Heroku or AWS or something, because it needs to be running on the home network. A lot of the objects, the smarter objects, like a Sonos or a LIFX light bulb, are already connected to your home network, so in order to use their APIs, you need to be on that home network as well. Right now, I'm serving that up to the web with ngrok. It's a really easy tunneling tool to put your development environment up on the web. And this is kind of me testing out this Internet of Things application. And then the Rails app connects to both smart objects and dumb objects. Smart objects are the ones that are already connected to the internet. These are the ones that you're probably going to pay a huge premium for right now. A regular light bulb is, I don't know, a dollar or so, right? And the connected light bulb is $100. So, you know, a hundred times the premium for it to be connected to the internet. So right now, the Rails app actually connects to the LIFX light bulb and the Sonos, which are both smart. And the Sonos actually works with Dropbox. I'm sending it files from Dropbox in the cloud, just for portability, so it doesn't have to depend on the files being on my local machine. Then I've also got a number of dumb objects as well. (Is the sound cutting in and out?) So, my coffee maker: this is a $10 coffee maker. I've got that connected to the internet, and I'm able to turn that on and off with my application as well, as well as the motion detector. That's actually like a $5 motion detector, so it's really, really cheap. And I do that all with an Arduino. I'm splicing their circuits to be able to send signals for when I want to turn the power on and turn the power off. The cool thing is, because the front end lives in the cloud, I can actually control this stuff from anywhere. So I can be at the office and be able to turn on and turn off my lights, just, you know, to make sure things are off.
The same with my appliances. I'm starting to connect a slow cooker to this as well, so that now I can actually start turning on my slow cooker so my food will be ready by the time I come home. So what is the Internet of Things? It's literally object-oriented programming. You're connecting to physical objects through code. And you're seeing the Internet of Things really pop up everywhere right now: health and fitness monitors, home security devices, connected cars, household appliances, and a ton of other applications out there. But there are also a lot of non-consumer applications as well. For example, local governments are starting to use a lot of sensors to look at their infrastructure and see if there are any water or gas leaks or whatever before the actual damage occurs. So really a lot of cool stuff happening in this space. But why is it important? I think there are really three reasons. The most important reason is that it allows you to more easily interact with the world around us. Take a Nest thermostat, for example, right? Before, we had to go to the wall unit and turn it up and down, or turn it off, or remember to turn it off when we walk out the door; now we can use an app to control the temperature. And over time, it starts learning our routines and being able to better meet our needs, so we can more easily interact with it. The second reason why I think it's so important is that it's already really, really big. We've seen a huge explosion over the past five years or so in the Internet of Things space, and we're seeing a lot of these connected devices pop up all over the place, a lot of companies starting to get into this space: big ones, startups, everything. But I think it's going to get even bigger. There have been a lot of research reports saying that by 2020 (and all the research reports seem to cite 2020 as kind of the end goal, where we're going to see a lot of change) we're going to see 40 to 80 billion connected objects. That's 10 connected objects for every human on the planet at that time. I just read a report that came out today saying that in 2020, the healthcare Internet of Things market is going to be a $117 billion market. So a lot of big numbers in this space. I think the most important reason why the Internet of Things is really so important is because it really increases the amount of data that we have out there. Typically, with big data, we think of all the data that's on the web, on all our users; we're getting this from applications. But more and more, big data is going to be comprised of the things that actually occur in the physical world, the actual physical interactions that we have in the real world. So everything from a light bulb to a thermostat to wearable devices, they're really going to become sources of data that we can use and analyze and hopefully make our lives even better with. And this is the notion of the quantified self, where these technologies really help us to better understand and analytically look at our daily routines and our lives in general. But it's not just quantifying ourselves. So, a fun fact: I read just recently about this Dutch startup.
They've built these sensors for cows, and you stick these sensors on a cow, and the farmer is able to get all this data on the cow: when the cow is eating, sleeping, when the cow is pregnant, when it's sick. And a cow generates 200 megabytes of data per year. So if an animal that sleeps 12 and a half hours a day and eats for another 10 hours a day generates that much data, imagine how much data a human can generate, or even something like a car. I think there's just going to be so much data out there. So as I was thinking about what I was going to talk about today in terms of the Internet of Things and this application I was building, I wasn't exactly sure what I was going to talk about initially. So I just started building, and then as some issues arose, that's what helped to drive the topics that I want to talk about. So initially with this application, I started off with a regular kind of HTTP API and started running into some issues there. Let's take a light, for example, right? In our Rails application, when we want to turn on the light, we have to connect to the light and then we have to turn it on. And when we want to change something about the light, if we want to change the brightness or the color of the light, each time we have to connect to the light and then turn down the brightness, or change the color and send new RGB values to the light. And then again, when we want to turn it off, again we're connecting to the light and then we're turning the light off. So we're having to connect to this a lot of times. It's obviously not DRY, but it's not just a stylistic issue; it also becomes a performance issue when dealing with some of these Internet of Things objects. So if you think about it, I showed you in the video a slider where you could change the brightness of the light. Just like a regular form, it's got different values along the way. That slider has 100 values; it's 0 to 100. So if a user takes that slider and just moves it up and down, in one second they've generated 200 requests to the server and to the light. And imagine that's just one user. If you start building some kind of app with a lot of different users connected to something... I started to see a lot of slowdown. The light would take five minutes to go through that brightness cycle, or sometimes it would just crash, or just wouldn't do it. So that's why I started looking for a potential alternative. And the solution I found was WebSockets. That's actually a lot more relevant now, especially after DHH's talk on Tuesday around Action Cable. So I'm pretty excited to see what comes out of that. But the cool thing about WebSockets is that they allow you to hold a connection open between the client and the server. So you can maintain this connection between requests, and you can maintain state. And then a side benefit that I got, which I actually ended up using and needing quite a lot, was server-sent events: you're not tied to the request-response cycle, so you can actually push events to the client to populate something on the front end as well. So, the anatomy of the Rails app that I built. I'll quickly go over this and then get into a little bit more detail. Basically, you get requests to interface with an object, again, pretty much from the front end.
That kind of goes to the event map, which is just like a Rails route, but you're actually just mapping events. That then goes to the WebSocket controllers. And then the controllers do two things. They work with your models to save that interaction to the database, because I want to be able to analyze everything from motions to when lights were on and off, and potentially get into some more advanced stuff later. But they also interface with the physical object itself. I'm not going to drain this slide, so I'm going to go through each of the steps that I just talked about. But since this is in Angular and we're at RailsConf right now: basically what I'm doing is creating a dispatcher on the front end. I'm doing this using the websocket-rails gem on the back end, and they have a client-side library as well. The client-side library is bundled in with the gem, and it's in CoffeeScript, and I prefer JavaScript to CoffeeScript, and I wanted to pull the client part of that into the Angular app. So what I did was I just unbundled it. I've actually got it in the link on the slide if you guys are interested; there was some interest on Stack Overflow around this. But basically I'm using this dispatcher that I've created, and the dispatcher triggers events that then map to your event map on the Rails side. So this is where the request first comes into the Rails application. And as you can see, it maps to your controllers and actions just like a regular HTTP route in your router. The syntax is very familiar; you can use the string or the hash syntax here. And in general here, this is an example with the light and the coffee maker, but I tried to be as RESTful as possible. So a create is actually turning the object on. Destroy is turning it off. Update is changing some attribute of that light. And show is returning some status that we have on the object. In the case of the light, it's the color, whether it's on or off, and the brightness values. And just as a note: some of the code that I'm going to go through is paraphrased a little bit for simplicity. The full source code is on GitHub, so I'll have the link available for that. So then the next step from there is the controllers. And there are really two important things to note in the controllers. The WebSocket controllers allow you to create a connection that stays open, again, rather than working on a request-response cycle like HTTP. And the cool thing about these WebSocket controllers is that you have this controller store available, and that lets us carry over the connection to the physical interface across your controller methods. So rather than, again, having to connect to the light every single time with LifxInterface.new, we can just call that the first time the controller is hit, and then hold that connection and use it over and over again in all of our methods. If you look, for example, in the create method, we're referring to that light interface in the controller store. And the cool thing about the controller store as well is that it's populated when the controller is first hit and initialized, so that anytime, whether it's one user or another user that connects, you're still using that same connection. You're not having to reconnect over and over and over to the light bulb.
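A rough sketch of the event map and controller store just described, following the websocket-rails gem's conventions (config/events.rb and WebsocketRails::BaseController). The LifxInterface class and its methods here are hypothetical stand-ins for the talk's actual interface code, and the model attributes are made up for illustration.

```ruby
# config/events.rb: like routes.rb, but mapping events to controller actions.
WebsocketRails::EventMap.describe do
  namespace :light do
    subscribe :create,  to: LightsController, with_method: :create   # turn on
    subscribe :update,  to: LightsController, with_method: :update   # brightness/color
    subscribe :destroy, to: LightsController, with_method: :destroy  # turn off
    subscribe :show,    to: LightsController, with_method: :show     # status
  end
end

# app/controllers/lights_controller.rb
class LightsController < WebsocketRails::BaseController
  # Runs once, when the controller is first hit; the connection then
  # lives in controller_store and is shared across events and users.
  def initialize_session
    controller_store[:light] = LifxInterface.new # hypothetical interface class
  end

  def create
    controller_store[:light].turn_on
    Light.create!(state: "on") # persist the interaction for later analysis
  end

  def update
    # `message` is the event payload sent by the front-end dispatcher.
    controller_store[:light].set_brightness(message[:brightness]) if message[:brightness]
    controller_store[:light].set_color(message[:rgb]) if message[:rgb]
  end
end
```

The point of the sketch is the shape: connect once in initialize_session, then every action reuses the held connection instead of re-opening one per request.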
And so every controller action really does two things. It, again, like I mentioned, saves that interaction to the database, and it actually interfaces with the object. This is a little more detail; I'm not going to talk through all of it, but this is just the update action that I have. I'm taking the params, which are called message here, from the front end, and in this case it's the RGB values or the brightness value that I want, and then, again, doing the same two things: I'm saving it to the database and then interfacing with the object. The domain model is really, really simple in this application. Basically I've got a Sonos player. I've got a party. I've got a motion detector, which has many motion detections. I've got a coffee maker. I've got a light. And a light has many colors, and a light also has many brightnesses. The only thing here, again really, really simple, is that I've just overridden the destroy method, because in our case with the Internet of Things, the destroy, at least the way I've got it set up, is actually turning the object off. So I actually want to save a reference to when that object was turned off. So then the interface. This was the most interesting and the most fun part to build in this application. Interface classes are kind of the glue between the Rails app and the actual physical object. The interface classes are really easy to build in the case of smart objects like your Sonos player or your light bulb, because there are a lot of APIs and gems, official and unofficial, available out there. Where it gets a little bit trickier, but also a little bit more fun, is where you have dumb objects which are not connected to the internet, where you have to use the Arduino library and then do a little bit of circuitry. But it's actually really, really easy. I'm not a very big circuits guy at all, so if I can figure it out, anybody can. So this is my motion detector up on the slide right now. The TSA... I came down from New York, and the TSA really wasn't happy about this, and somehow, I agree with them, it looks pretty sketchy. Somehow it survived all of their swabbing and testing and it's here right now. And I think the motion detector was one of the more interesting interfaces to build, because unlike all of the other objects, where it's an on-off, right, where you're turning the light on, turning it off, the coffee maker, turning it on, turning it off, the motion detector, once you turn it on, it's not just a single point in time; it's actually on for a while. So you're actually starting to get motion detections coming from this motion detector. Also, it requires two-way communication between the object and the Rails app. So it's not just the Rails app sending a signal and saying, turn on. When there's motion, the detector needs to actually send that back to the Rails app, and we need to be able to deal with that as well. This is enabled through multi-threading and server-sent events. And I know the code is tiny; I'm going to jump into more detail on all of this. But basically the main method that I have here, the start motion detector method, does four things. It starts a new thread, it connects to the Arduino, it detects and reacts to motion, and then it detects and reacts to no motion. The first part is where we set up a new thread. So why do we need to do this? Again, the motion detector is continuously running. It's almost like a loop: if you do a loop do and never put in a break or something, it'll just keep looping and freeze up your app. The same thing happens here. It's continuously looking at the Arduino's input for motion, and if we don't get off the main thread, it ends up freezing the application, since that's where the application is running. So the first thing we do is get off the main thread by creating a new thread. And then we start listening to the Arduino on the input pin that we specify. The next part of the method is pretty cool, and this is where we react to motion and to a lack of motion. Motion is sensed on the Arduino pin that we specify as an input: basically, when the motion detector detects a motion, it sends a little impulse to the input pin, and we send that back to the Rails app. So what we do is save that motion to the database, again keeping track of all the motions that have occurred, and then we send a server-sent event to alert the client of the fact that there's been a motion. And that's how you start seeing, on the graph, the graph turn red when there's motion and then turn blue again when there's no motion, because it's reacting to these server-sent events. And then similarly, when there's no motion, we send a signal to the front end to say there's no more motion, so turn blue again. And two, we also send the latest count of motions, to update the graph dynamically.
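To pin down those four steps, here is a minimal sketch of the threaded listener just described. The arduino object and its read API are hypothetical placeholders for whatever serial library is in use, while WebsocketRails[:channel].trigger is the websocket-rails gem's channel-broadcast call.

```ruby
def start_motion_detector(arduino, pin)
  # 1. Get off the main thread so the endless polling loop
  #    doesn't freeze the Rails app.
  Thread.new do
    # 2. Listen on the Arduino input pin the sensor is wired to.
    loop do
      if arduino.read(pin) == :high          # hypothetical input API
        # 3. React to motion: persist it, then push a server-sent
        #    event so the front-end graph turns red and updates.
        MotionDetection.create!
        WebsocketRails[:motion].trigger(:detected, count: MotionDetection.count)
      else
        # 4. React to no motion: tell the client to turn blue again.
        WebsocketRails[:motion].trigger(:idle, {})
      end
      sleep 0.5 # poll interval; tune to the sensor
    end
  end
end
```

The numbered comments line up with the four steps the talk lists: new thread, connect to the Arduino, react to motion, react to no motion.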
So, the future of the Internet of Things. In 1997, Alan Kay, the pioneer behind object-oriented programming and the graphical user interface, said: today, after 50 years of development, the computer is still masquerading as better paper. But the next decade will be the transition into what computing and networks are really about. And we've really seen that come to life and come about. I firmly believe that's what we're seeing in the Internet of Things space as well, to play off of Alan Kay's words. Right now, the Internet of Things is really masquerading as a better light bulb or a better thermostat. And you get the point, right? We're in this cool-factor phase where it's really cool to be able to turn an object on or off with your app, and the world is really captivated by this so-far unsophisticated implementation of this technology. If you think about it, we've gone from a light switch on a wall, which you flip on and off when you walk into a room, to an app. But then, when you actually walk into a dark room, you realize that you have to pull out your phone, enter your password, open the app, and turn it on. And God forbid you have multiple different brands of light bulbs; you have to open the app for each one. That's annoying. It's not actually very functional. So we've gone back to having these light switches on the wall, but these light switches are now connected to the Wi-Fi, and they're Internet of Things light switches. So, cool stuff, but we haven't really seen that much incremental value being delivered in the space yet. And I think we're going to see a lot more of it in the next five years. So where do I think the Internet of Things is going? I think right now we're in the stage in between, where we have these disconnected, dumb objects all over the place.
So now we're starting to connect a lot of these objects; we're starting to see connected everything. But I think they're still working in silos. Again, you have different apps for different brands, and even within the same company, there might be different apps to work with these things. So I think the next step from there is centralizing a lot of the objects: having one platform where you can control all your different objects in one place. And we're starting to see a little bit of that as well. I know Apple's HomeKit is supposed to do a little bit of that, and Amazon Echo just announced that they're going to have integration with a few kinds of Internet of Things objects. So that's where I think the next step is. But I think the ultimate step, and where we really want to end up with the Internet of Things, is where we're actually integrating all of these: where all of these smart objects are working with one another. It doesn't matter what brand they are or what they actually do; they're actually working together in these kinds of routines. So you can imagine a situation where you're coming home late from work and your home almost knows that. It starts heating up the food right when you're close enough, the lights turn on when you walk in, the doors unlock. Everything works together seamlessly. So my goal for today was really just to show you that building an Internet of Things app is really easy, and this app didn't take me very long at all to build. It's obviously very simple. But especially with the skills that we have as Rails developers... And I think it's going to get even easier now with the announcement of Action Cable, as it's going to be more integrated into Rails. So I think that's going to make it very, very easy for all of us to start building and prototyping these Internet of Things applications. I think it'd be awesome to see more Rails developers be part of this Internet of Things revolution and push us towards the Jetsons as opposed to the Flintstones. And so I hope that a lot of you are already building, or will build, an Internet of Things application soon. Thank you so much for listening to my talk.
|
According to Gartner, there will be nearly 26 billion devices on the Internet of Things (IoT) by 2020. ABI Research estimates that more than 30 billion devices will be wirelessly connected to the IoT by 2020. This discussion provides examples, ideas, tools, and best practices for Rails developers to start building IoT applications that connect their web applications to the real world.
|
10.5446/30675 (DOI)
|
Hey everybody, you're here for the panel on interviewing: interviewing like a unicorn, and how to interview and hire a high-grade team. My name is Allan Grant, I'm the co-founder and CTO of Hired.com. Here we have Aline Lerner. Aline, do you want to introduce yourself and tell us about how you got into the whole interviewing.io thing? So I had a windy path. I started as an engineer, then I dabbled in recruiting for a while, trying to fix some of the things that are broken, which many of you have experienced firsthand. And most recently, I'm the founder of a company called interviewing.io that seeks to make technical hiring a lot more meritocratic. I'm Chetri Vidran, I work at Hired, and I'm an engineer there. Hi, I'm Alaina Percival, I'm the CEO of Women Who Code. And at Women Who Code, we provide diversity consulting, so best hiring practices and best practices around a diverse candidate pipeline, currently for our sponsors; we're hoping to expand that further in the coming year. And my name's Obie Fernandez, I'm the CTO of Andela. We're training 100,000 young Africans to code, to be awesome people, and to improve the continent. And in my function, I'm in charge of recruiting those 100,000 people, so I'm doing a lot of interviewing and things like that. And I'll be moderating the panel. Thanks for coming out, and thanks, Allan and Hired, for inviting me. Thanks, Obie. I think we're very interested in finding out from the audience what you're looking to get out of this session, because there are a couple of ways you could be interested in being here now versus the other sessions. So: raise your hands if you are in this session specifically because you want to learn how to interview as an interviewee, meaning you're looking for work. Okay, scanning your hands... then raise your hands if, alternatively, you're here as an interviewer, you know, an employer looking to get some insight on that. Okay, so that's useful to our panelists, to understand how to structure the information that they share with the audience today. I think one of the major themes we wanted to touch on during this panel is one that's near to my heart, because it's on the softer side of what we do: I think it's very important to bring the notion of empathy to this whole process of interviewing. So I wanted to give the panelists a chance to discuss their thoughts around how empathy applies to the recruitment process. Who wants to go first? One of the challenges that we see with candidates is that they don't know how to put their best self forward. So essentially, it's a process where you want to see the best of the candidates that are coming in, and employers should know enough to actually help them manage expectations, or give them guidance on what you are looking for specifically, because every company rates different values differently. And at Hired, one of the efforts that we had was to give them an interview guide, clearly explaining for each round what we are looking for. That really helps the candidates showcase their best skills during each of those rounds. From the beginning. And just for clarity, since it may be confusing: there's a distinction between applying to get a job at Hired and applying to use Hired. The way that Hired works is that you create a profile and lots of different companies will reach out to interview you.
So, for our process specifically: when we interview candidates at Hired, we've gone through several iterations of our interview process, and initially we had this process that wasn't very empathetic. It was a little bit brutal, where we would, as soon as we started talking to you, throw coding challenge after coding challenge, and we would just really, really get down to trying to figure out if somebody was going to be a fit. And then when we got to the end of the process, what happened is that nobody wanted to come work for us. They'd get to the end and say, well, great, thanks, your interview process is kind of rough, but I'm not sure if I want to come here; it may not be the right culture fit. And so then we said, well, what can we do to fix this? How can we improve it? So we actually started basically doing a phone call with everybody before the interview process starts, just to get them familiar with the company and talk about what the process looks like: a call with one of the founders. So that made things go a lot smoother. And then Chakra put together this interview guide, and the interview guide basically says: here are the kinds of questions we're going to ask, here's what our culture is all about, we do TDD, here's how we approach things. And there's a way for people to practice that. I really like this idea of having a process that sets expectations; it creates transparency through the process. But it's also important to remember that while you're going through this interview process, especially if you're in a growth phase as a company, you're probably doing this a lot, whereas the people coming into this situation are only interviewing maybe once every couple of years. So they are coming to this without the experience, without having necessarily done this, you know, every week or several times a week for the past couple of years. So create that transparency in the process. And having the process will also help you evaluate people across the board more equally. A lot of ground has been covered already. I guess one thing on that is that in this market, and I'm sure you guys have felt this, labor certainly has a lot of power. Great engineers are in very high demand, and there are companies that still haven't wrapped their heads around this. As a result, they're making people jump through hoops very early in the process. So, you know, there are companies that will send out those coding challenges, and then very good people know that they don't have to jump through those hoops, and that company will no longer be in contention. So maybe that's a very different way to think about empathy. But think about how many options the people that you're talking to have, and be mindful of the fact that you have to be selling very early on. So maybe having people talk to a founder is a very good idea. But don't treat people like they are going through a revolving door on an assembly line, because you're just shooting yourself in the foot. I want to pick up on the themes that came up here and try to make them a little more actionable for our employers and others. So, undoubtedly, they already have a process. You mentioned, Allan, that your interview process was brutal. How did you gain that understanding? Like, how do you go from whatever process X is that they have now to one that is more empathetic? What are the steps to determine that change?
I really like the way that Nate, one of our engineers, put it yesterday: like anything in development, it's an iterative process. So you start out with something that works for you and you just keep iterating on it. I don't think it's necessarily the best approach to try to nail the perfect interview process right at the beginning. When you have an interview process, what are the things that you're trying to address? One, you want to make sure that your process accurately figures out who would be a fit for your team and who wouldn't. If everybody gets through your interview process, then maybe you're being very empathetic, but it doesn't actually do you any good. But then you also want to provide a positive experience, so that you can get to know somebody in the right context and bring them on board. The way that we started out was with just a few problems, a few questions, and we would ask them on site when we brought somebody in. And we saw that that was working pretty well. That was our first process: there were two questions and we asked them on site. Then over time we said, you know, actually we can ask one of these questions in advance, on the phone. What were those questions, just if you remember? Yeah, so with us, the first question is an algorithm question. It's not, you know, a hard algorithm that you have to come up with; it's more like, here's an algorithm, now code this up. And the second question is an OO design question. In that case, we'll give them a game of some sort that somebody would be familiar with, like Minesweeper, and we'll say: write this game for me in an OO fashion. What do you do if they don't know the game? Well, the easy thing about Minesweeper is you can always pull it out and say, so this is Minesweeper, here's how you play, and walk them through it. One thing we've actually done with the algorithm as well: sometimes, if somebody really can't wrap their head around the actual algorithm, then we'll walk through it and say, well, here's the algorithm that you implement. So it's really not about knowing the rules; it's about being able to put it together. So I want to walk back a second. So there has to be a process; you should treat it like development; it's an iterative process. So what is your test? How do you get the feedback to know whether the process is succeeding or failing? Actually, those two questions, I thought, were related to the interview process itself. Has anyone tried surveying the candidate to see how they felt the interview went? Is that something that maybe would work? Well, one of the main things that happens on the platform that I'm working on, interviewing.io, is technical interviews. And we always survey people after each interview and ask them how they think they did, and we also ask their interviewer how they actually did. And I've tried correlating these, perceived versus actual performance. There's no correspondence at all. People have no idea: when people bomb, they tend to think they did okay, and when people do fairly well, they tend to think that they bombed. So, there you go. So is it a hopeless situation? I think that surveying people about their experience is probably much more useful than about how they did. But I think probably the biggest test is whether they actually accept the offer or not. So, we also do the survey.
But we also call people, even if we don't end up hiring them onto our team, and ask them what we can do to make their day better, and you get a lot of good information when candidates talk to you. The survey is good, but when you talk, you learn more. What are some of the more interesting stories that have come out of that process? So, one of the interesting stories that came up: we have a half-an-hour coffee session in the morning when people come on site, before the process starts. They get to talk with the engineers of the company and learn more about the culture and so on. So that really helps them get a sense of how the environment is, and they're able to project more of their skills during the entire day. So, is it an informal kind of thing at the beginning, the coffee sessions? Yes, and that's exactly what came from the survey. What are some of the things that you've heard that they didn't like? I've often had feedback that when you have several days before you hear back about something, you start to psych yourself out about that company, or come up with reasons why that company is not a good fit for you, because you're wondering: why haven't they given you an offer yet? And so, one, if you're going to decide on Thursday as a team what to do about it, let the person know: Thursday is when we're making this decision, and we'll let you know by Friday morning, something like that. That creates transparency in the process, so they're not like, oh, well, why haven't they responded to me yet? Or just respond to them right away. So, sometimes, like, we have a separate set of recruiters coming in, and so we'll do the technical interviewing, and I can't make a promise like that to the candidate, because I don't handle it. That's something a recruiter will handle a lot of, and I'll tell them we'll do that as soon as possible, and you'll hear from them as soon as possible. So, we have a comment from the audience basically expressing that sometimes the recruiters in a company, I guess in a larger company, are disconnected from the technical staff that actually does the interviewing, so it may be a little difficult to set expectations correctly. How do you bring these kinds of processes in? How do you make your recruiting group more empathetic? Absolutely, that's actually a pretty common situation, and one that we started to run into as well. One solution to that issue specifically, which is recruiters not being as active in following up, is to be very clear about who somebody's champion is going to be on the engineering side. So if they're early on in the process, then maybe whoever does the first interview, the first person that gives the first thumbs up, that person potentially says, okay, well, along with all the other things that I'm doing, I'm going to stay on top of this. And, you know, that can mean very different things in any organization, but usually that just means literally staying on top of the recruiter and saying: have we followed up? What's happening? When we first started at Hired, and this is actually kind of an interesting learning experience, we started with a different name; we started originally with the name DeveloperAuction.
And the hypothesis, the thesis originally, was that we'd create this time-limited marketplace where for one week companies can put in offers and see each other's offers, as a great competitive dynamic to increase candidate salaries and help the market clear. What happened was we did see competition, but it wasn't on salaries. Companies pretty much knew what they were paying and they weren't comfortable going outside of those ranges, but companies started to compete on speed. So early on, what would happen is, because the batch lasts for a week, sort of Monday through the end of the week, people would log in on Friday and put in their offers before the week ends. But then what they would see is that for all the best people they would reach out to and put in their offer for, they could see all the other companies that were reaching out, and they'd see: why are there like 15 other companies at this level? Let me try to get in there earlier. And the candidate chooses which interview to request and which one to say no to, which, you know, pretty much mirrors what happens outside of Hired too, in your regular day-to-day process; it's just not as time-compressed. And so what happens when you get to the end of that is people are just less likely to go talk to an interesting company. And so it essentially trained companies to start, you know, logging in Monday morning and putting in their offers. Well, after that, the competition continues, where it's about getting the person to the next onsite, and the next onsite. And so I guess one of the best things that helps fix this is showing that it makes a difference, right? That when we move faster through a process, we're much more likely to make the hire. And to address your question earlier, like, how do you know if it's working? I think one way of knowing if it's working is to think about it sort of like a funnel: you have different steps where you want to see if somebody did well. Whether they passed the screen, whether they were interested in continuing, whether you gave them the offer, whether they accepted the offer, whether they did well on the job. And ultimately, you know you're doing something right if you're actually able to hire people that you want, people that are doing well on the job. And if you're not, then you can go into that and see what the problem was. Is it that people aren't passing the interview? Or is it that when people pass the interview, they don't want to accept the offer? And you get into that and figure out why. Like, in our case, we saw that everybody was coming in, and the first thing that's happening is they're already starting to work on the OO challenge. And so in the surveys we heard, it's like, hey, you know, let's get to know the team a little more. We added the coffee break at the beginning, and it really kind of smoothed things out, and it's worked great for us. How many of you are using an applicant tracking system like Greenhouse or something along those lines? Anyone? Wow, very few. Yeah. What do you guys think? Is that a wise move, to invest in that kind of system? It would seem, if you want to have data about this process, that makes sense? Yeah, especially if there's a disconnect, like you mentioned, between the recruiting team and the engineering team, you can solve some of that with technology. And there are two great products in the market.
One is Greenhouse and the other one is called Lever. And there you can just see a snapshot of everything that's happened to a candidate at any given point in time, how long it's taken to get from point A to point B to point C. And you can also set reminders, so that if this person falls off and nobody follows up with them, you can have the system ping you and say, you've got to get on this. And that will solve a lot of your problems. Subsequently, I think you still have to set the expectation that it's the recruiting team's job to move at a certain speed; it has to be reinforced and it has to come from the top, but you can take away a lot of the friction around actually making that happen if you get a good ATS. How do you help your team be good interviewers? Because I think this addresses both sides of the equation here: if someone's interviewing and they understand how the interviewers are being prepped, maybe that's helpful. I'd love to hear your thoughts on that. Absolutely. Happy to comment on that, and also to potentially give a little bit of advice: if you're in an interview process like ours, what are the kinds of things that can be a positive sign. As I mentioned, in our interview process, there are sort of two main questions that are the main technical aspect. The first one really assesses whether somebody can program. So you have this algorithm; it's not a complicated algorithm, but it's a little tricky to implement. The second one is the OO problem. For the first one, what we do to make sure everybody's on the same page is we basically all calibrate. The way that we calibrate is that when one person does an interview, if you're new, if you haven't interviewed before, then you sort of shadow that person, and you can look at their scorecards. A scorecard is where, after an interview, you say: here's what went well, here's what went not as well. And the scorecard in our case is very, very specific, and I'll tell you exactly how it starts out. So we'll give the problem to somebody, and it's fairly standardized: we'll paste the problem into CoderPad, the kind of screen-sharing coding program we use together, and we'll actually paste that in, because we want to make sure that each time the question is given exactly the same way. And then the first thing that's on the scorecard is: did the candidate clarify the requirements? All of these things, they're not required, but they are a way of making sure we're all looking for the same things. You just say yes or no. And if somebody jumps into it and does the problem right without clarifying requirements, obviously we're not going to penalize them for it. But if somebody clarified the requirements and did some of these next things, then that might be kind of a thumbs up. The second item, after did they clarify the requirements, is: did they ask whether a brute-force solution is okay? A lot of times you can find a trickier solution that solves it in a more efficient way. We're actually not looking for that. And if somebody jumps right into doing that and they do it perfectly, again, we're not going to penalize them for that, because they came up with a better solution. But also, if somebody says, hey, is a brute-force solution okay? We'll say, yeah, it's fine; you're usually going to have very small inputs, so that's fine. And that makes the problem much easier.
And so what we're looking for there is, we're basically trying to figure out not just whether somebody can code, but can they do it in kind of a collaborative environment. Now, as an interviewee going in to do this for the first time, especially if you haven't interviewed in a while, it can be very counterintuitive. You're in a new environment. You have no idea, you know, what's going on. You don't know what's expected of you. And it may not seem like you should be asking questions, but generally asking questions is a really good thing. We look for people that talk out loud and get engaged. And it can be a good idea to actually take a moment and just really get aligned with whoever the interviewer is and kind of get on the same page. I'd love to hear from somebody on the panel: should you, going into an interview, actually say to the interviewer, I'd love to discuss what you're, you know, what you're evaluating me on? You know, just be very, very blunt. Yeah, definitely being more clear on the expectations and how people are going to be evaluated is going to produce a better result. Some of the, I guess, innovative ideas that I've seen or would like to see tested out: some companies are taking people off whiteboards and actually putting them on a computer. Some companies, instead of going more towards algorithms, are doing more towards the actual company code base, real company issues and problems. I would love to, I don't know if any company is doing this, but I would love to see a company actually doing the code challenge over IRC. So without looking at the resume to see that they graduated with a CS degree from Stanford, or knowing what the interviewee's gender is, I would love to see a company do that. Yeah, a blind interview. So I think it was Harvard Business... That's what I'm building. Pardon? That's what I'm building. That's exactly it. So you just have to use her platform. I think managing expectations is very important even for the interviewer team, because a lot of times what happens is different engineers are judging on different skills in the same round. And that makes it much harder for any candidate to clear the bar, because the expectations are not clear. Yeah, so to drill down on this point just to make sure we're there. So one thing that might be considered best practice, or at least advice that I've seen, is to make sure the interviewers have a specific thing they're looking for. As an interviewee, you know, if you come in and you're across the table, knowing that setting expectations is kind of universally a good skill to have, would it be good to just be blunt up front and say, you know, look, I'd love to understand exactly what it is that you're judging me on? Because I know that that's not necessarily going to come up, right? The interviewer is not going to say, hey, by the way, I'm looking to know if you're a good culture fit today. I think it's much harder for the candidate to drive that conversation than for the interviewer. Why do you think it's harder for them? Because at that point, I mean, it might be a good thing, but it could potentially not be a good thing. The person in the interviewee position is much more in a position where they need to be kind of putting their best foot forward. I don't think they shouldn't do it, but it's, I think, slightly more delicate. My recommendation to candidates is to do that before you go to the onsite.
You know, have a conversation with the hiring manager, have a conversation with the recruiter, you know, about understanding the interview process, what the day looks like, and what the focus areas are for each round. I think if you have a conversation then, it's much easier to kind of suss out the process and give them feedback as well. I think that's one of the things that can tell you a lot about a company: how much clarity you can get from them about their process. I'm kind of curious about something, actually, now that you mention that. So, if a candidate were to ask that, in order for you to answer, you'd have to actually have a process, where you know, like, for instance, who's going to do the interview and stuff like that. How many of you work in an environment where there is a process and you would be able to answer that question? Okay, so maybe one of the takeaways from this panel is the very simple one: have a process, you know. What is the best way to go down the road of having a process, and then maybe buying a tool, or, you know, some people like to invent their own things; what's your experience with that? So, I think the first part of the process is basically having something that's a standard, repeatable process. So, doing some things in the same way. With us, what started out as the initial process was just saying, okay, well, we're going to always ask this question and we're going to always ask this other question. And that was kind of the early part of the process. Then we added, okay, after that we're going to have a pair programming session so we can see what it's like to work with somebody. And originally that was our process. Then, after you have something where, you know, a lot of candidates are starting to go through it, the next thing that you might want is a tool. As Amy mentioned, the two that are, you know, getting pretty popular right now: Greenhouse is more of a kind of enterprise solution, so it's a little more expensive. And Lever is at a very, very low price point, so really any team, no matter how small, if you don't have an applicant tracking system, you can get started with them. Right, and then at that point, you know, in the tool you would say, well, here are my stages, here's how I'm moving somebody through them. And the second part for us, in the process that we came down to, was that once it was more than one day, eventually we had to sort of simplify it down. So, we had the first question, we had the second question, so we started to ask the first question on the phone screen. Well, then before that we started to do the founder phone call and pre-sell. Sometimes we would do the second question over the phone as well, if somebody's remote. Now you've got multiple touch points, and that becomes a bad experience, because you've got somebody that's like, well, I've been on the phone over and over, and each time you have to get something scheduled. And so then you sort of simplify things down. But for us, it's basically just doing the same thing repeatably, for long enough that you can actually see what works and what doesn't work. And we do retrospectives on this, like in the general engineering style; we'll actually do a retrospective of what's working. A retrospective. Exactly, a retrospective where, you know, all the interviewers will get together and do the same thing. Who moderates that? In your case. I was moderating previously.
Chakri is moderating that now; he's on the engineering side. I'm curious about something that's kind of a different angle on the process aspect of it. In your process, are you optimizing to prevent false positives or false negatives? You know what I mean by that? Or maybe you can imagine what I mean by that, right? I'd love to hear some of the other perspectives on that. We try to minimize false positives in some sense. I mean, you have to. So it's more about eliminating the risk of hiring someone who is not actually qualified. It's not that we only optimize for that, but I think that's more important because, you know, a bad hire is costly, costly for the company and costly for the candidate. That's very important. And going back to your earlier question about how you make a hiring process successful, I think it has to start with the commitment from the team. The team has to be committed to having a good process and taking feedback from the candidates that are going through it. After all, great hires lead to what you can do as a company, and it has to be a top priority for the team, especially if you're talking tech recruiting. And there has to be a clear champion who is accountable for it. Without that, nothing gets done. Everyone is busy, everyone has their own priorities and so on, so it really falls to the group to have strong commitment and someone accountable for it. So I do think one of the risks with optimizing against false positives is that people work towards hiring within a certain box. So you're like, oh, this person graduated from here. This person worked at this company. This person has this many years of experience. And all of these boxes are ticked. And it kind of goes back to that, like, nobody gets fired for buying IBM. And you end up missing a lot of people who maybe have alternative experience or who didn't graduate from a top 10 CS school. And so I think the risk in that is that people will push out great candidates who don't fit that box. What's really unfortunate is that a lot of smaller companies take their hiring cues from companies like Google and Facebook and Microsoft. And those guys have such strong brands that they can just have this revolving door of people. So they can be very, very picky, both at the top of the funnel and later on. Now, later on makes sense, because you're actually judging someone on what they can do. But at the top, they just throw away a lot of people that might be very qualified but don't look the very specific way. So look at the strength of your own brand and be honest; I would advise everybody to just think critically about whether you're blindly taking your cues from these guys or whether your process makes sense for you. And if your brand isn't as strong as Google's, then maybe it doesn't behoove you to throw away everybody that hasn't gone to MIT and Stanford and worked at Facebook. And the challenge, one of the challenges, is that you actually don't know how well you're doing. You only know, sort of, for the people that you've hired, whether the person worked out or not; the people that you didn't bring onsite, that you passed on, you never know if that person was a big fit or not. And anecdotally, we have one engineer on our team that we did not take through the regular process, who had joined prior to the point that process was established. And later on he said, you know what, I actually want to take this coding challenge. And this is one of our top engineers, one of the most productive, really, really top guy.
He did the test and he was very nervous and worked on it for a long time. And essentially, after seeing his results, we would not have hired him. And that would have been such a huge mistake. He's one of our top engineers. So the point there is that you really kind of don't know. It's one of the things that we're actually trying to figure out, because we've got a lot of companies interviewing candidates on our platform. And so we can tell when one company said no to a candidate that was then hired by another company and has done really well. And so that's something that we actually want to try to bring to life. I actually have a friend that went through your recruiting process. This friend was really excited about joining Hired early on. And this friend had an interview with a coding challenge. This friend didn't feel that they did very well on the coding challenge. And then this friend didn't get an offer from Hired. I've always wondered, I mean, my friend has always wondered... I've always wondered what you would say. But I think the idea there is that the coding challenge, it's worth talking about the beginning of the pipeline there, and how the actual screen might... basically, is there a chance that you're inadvertently failing people at that level? Just to add to what Elena was mentioning, I think there's a lot of anxiety for candidates when they do their first interview. And I don't know, off the top of my head, you know, once you aggregate across all the interviews, what is the success rate of candidates passing their first, you know, onsite or phone screen, compared to their second or third? I think people get better at the second and third. Yeah. I think that, like, the whole business of interviewing and trying to get a new job is one of the most stressful things in life. And you're going into these situations where you're being judged. So it's a really unrealistic life experience. Are there gender dynamics to it as well? Definitely. When you're bringing people on board: women will apply for jobs where they have seven out of ten of the requirements, and men will apply for jobs where they have four out of ten of the requirements. So I ask companies, if you can't see yourself rejecting anyone for not having that requirement, just leave it off the job description. And, you know, you're not suddenly going to get 200 resumes. You might get four, but hey, maybe one of those four is going to be the right one. I also ask people to lower the barrier for application as much as possible. So, you know, if you're going to ask an engineer for their resume or something like that: if you're looking for someone who's not really looking, they have a job, they probably don't have their resume pulled together; they might want to just send you an email saying, like, hey, I'm willing to talk to you. Or you might have someone who completely has it together, who's preparing for this, and they want to send you their 30-page CV. So, like, when we have people post jobs to the Women Who Code community, we ask that it's, like, 140 characters and includes an email for follow-up, so that, you know, you're not going to have room to put in that you require four years of Rails experience when actually you'd hire someone with three years of experience, because, you know, in that case, a woman might not apply.
So, I want to stress what she just said, because I think it's a really important takeaway: the fewer bullet points you put on the job description, the better. Also, I want to make a suggestion that hasn't come up, kind of surprisingly. At Andela, we've had over 13,000 applications to hire 60 people. And you might say, okay, well, how do you screen that, you know, because it's just, like, an unworkable load to go through. We actually partnered with a company called Chlon.io, and they provide an assessment test that takes about half an hour for the candidate to go through. But it tests their problem-solving aptitude, basically IQ, task management, innovation, and 12 different personality profile facets. So, we have a particular personality type that we're looking for, and we also have problem-solving aptitude minimums that we're looking for. So, that helps us kind of cut it down. And the thing is, that process is fairly blind, because when we get those results, basically, you know, there's no picture or gender necessarily associated with them. So, it's especially blind for us looking at it after the fact, so we don't even know, you know, who they are. And it helps in the sense of having a baseline expectation of what that person is capable of, even if they don't interview well afterwards, or even if something goes wrong afterwards, you have a case that they might be good. So, I'm curious about your experience with that sort of thing. Do you think that that will work as well with engineers, where the market is in their favor? You have a very selective program where people would probably kill to get into it, whereas here, you have a much less engaged audience. And I can see people there being willing to do that, because this is their way to prove themselves. So, thinking back, I worked at ThoughtWorks for four years, and during the time that I worked at ThoughtWorks, it was like one of the top employers, you know, in the world, but it was still a hot market, you know. And like, when you showed up for the interview, they'd be like, here's the Wonderlic. You know, there might be issues with the Wonderlic, you know, but it was basically an IQ test in certain ways. I don't know. I also think that once you get someone to apply, you can add extra layers, but for kind of getting those people to come in the door for the first time, you want to lower that barrier as much as possible. Right, so maybe to answer your question, it's something you do after there's already interest. One thing I've noticed, and this sort of comes up sometimes: people will ask, well, why do I have to do this coding challenge, or, you know, I have X years of experience, why do I have to do this trivial problem? And for us, it's actually kind of also a way of testing out a little bit of the cultural attitude. So, you know, we bring in some people that are really fantastic and just crush the test, and, you know, their response is like, okay, cool, not too bad, let's work through this, and they have a very positive attitude about it, as opposed to others that, you know, say, you know what, I don't really want to do this, and whatnot, and that sort of comes across as...
I don't know if I want to work with this person. But basically the same thing can happen with an at-home assignment, you know, or a coding challenge, as long as it's reasonable and, you know, the expectation has been reasonably set. We don't ask somebody to do any sort of testing or assessment until they've had a half-hour phone call with one of the founders, right? And so with us, we know that 80% of the people past that phone call won't make it through the interview screen, won't get to the offer stage, but we're still ready to have that phone call in advance, at the beginning, because essentially it's just our way of saying, we're going to give something to you, but we expect you to do something for us as well, which is basically go through this process. One way that I've found to kind of combine both of these is to have, you know, interview questions that themselves are selling points. So smart people don't really like solving toy, arbitrary problems whose purpose feels like it's an idiot test, you know? FizzBuzz is kind of the canonical example of that, but it comes in all sorts of variants. So if instead, and again on our platform, the best performing interview questions, ones where a candidate said, like, I really want to work with these people, and these are people they've never met before, they don't see them face to face, they just have a conversation, are questions where you start with something that you've actually seen, and then adapt that situation to something that would lend itself to an interview question. So you can describe a scenario you've seen at work, and then eventually you start writing code, but you also have the person start thinking about the problems that you're solving, and you know that if they're interesting problems, those problems are going to stick in their heads. So, you know, people don't care about ping pong tables or, like, beer or whatever; most of the time, I think, they want to work with smart people and solve interesting problems. So the sooner you can get somebody in front of a smart person and have them solve an interesting, non-perfunctory problem together, the better off you're going to be. And as we're getting closer and closer to time, I want to make sure that we have a chance to take any questions; I saw a hand on that side of the room before. Yeah, what was your comment again? Well, I think we're starting to get questions from the room. Okay, so I'll take a question now. I'm Lisa, I'm with a custom software firm in New York, and my question is just, if you could address more the very early stages of the hiring process, looking more in terms of early recruiting tips, especially for startups. From the perspective of an employer? Yeah, from the perspective of an employer, recruiting at the earliest stage. So we've talked a lot about the interview process, but before you get to that, in terms of attracting more high-quality candidates, and especially more diverse candidates. So the question was, how do you attract candidates, and especially diverse candidates? So, shameless plug: Women Who Code has over 25,000 technical women, and we have job postings and all of that. So I think your question is sort of around the part of the hiring process that's down to sourcing, right? How do we get people in the door?
And I must say that, for me, this was something that I knew nothing about, and in a way this company came out of that; I was failing at sourcing for many months. And at this point, what we do at Hired is we bring people in across many different channels, right? From sponsoring events, just like RailsConf, to LinkedIn, where sometimes you can use the network to identify the type of talent you want at scale, as opposed to having to message each person. Messaging people individually can be effective as well, if you find somebody that's the right fit. And generally, if you're going to do that, then you want to find somebody that's a fit for a specific reason. Yeah, I also want to advise you, just from experience: have a distinguishing something, whatever that may be. So for us, that's our purpose, right? We have an amazing social mission, and that's helping us get amazing candidates in the door. People are cold emailing me, saying that's a big part of this. At Hatch Rocket, I had a beachfront office, and I was like, hey, come be a surfer and write amazing code with us, and do great things, and that's what we were about. It's basically about human psychology and having some sort of hook. If you're just a consulting shop in Ithaca, and that's the only thing you have, well, unless you went to school there or have family there, why would that be your number one choice for your next job? But if you have something else that's really, really amazing about what you do, and your purpose, that helps you a lot. So then also on the diversity side, a couple of things, in addition to not having bullets that don't need to be there. Using superlatives: women don't identify as rock stars or ninjas, or the absolute best ever. Use words like "learns quickly," things that are aspirational but still inherently positive. Work hard and play hard is another one, because you're saying, hey, we're not only expecting a lot of hours out of you, we're expecting you to spend your social hours with people who are currently strangers, and those are just things that don't resonate with women. Real quick, are there any perks that you've found are really good in this environment, specific to where we are right now in this market? I guess that, from what we've seen, it's been less about perks; you see a lot of things coming out, sort of, you know, around the lunches and whatnot, but from what we've seen... On the salary? I'm sorry, not really. That would be a good one. That would be a perk. Unlimited vacation. So you're saying what you're seeing is that the perks are not as important. So what is important? So the main thing we see is that it comes down to what engineers like: working on an interesting product, working with the technologies that they like, working with the people that they like. If it came down to perks as the one most common factor, I would say they're probably not that important. Any more pressing questions, like you're going to die if you don't get it out? How do you protect against unscrupulous things? How do you...? What? They're not going to be addressed unless you're dealing with them. Don't use computers. That seems to be the answer. Any other... What do you ask in that first session? Is it technical or not? Do you mean the coffee social? Yeah, so the first part...
You mean the screening or the process? Yeah, so the first part is less of a screening, it's more of a sell call. In that call, there's never one where we don't settle on the next step, unless it's clearly not a fit; otherwise you'd be wasting a lot of time, right? So the idea is that this is entirely a pitch call, for the candidate's benefit. So we'll talk about what we're doing, the mission, and then... So as we're wrapping up, I want to just make a quick announcement. We've got a few things. We're having a party tonight; it's going to be at a rooftop venue, a really cool rooftop bar with a view of the city. It's really close to here. If you go to Rails.com slash party, you'll see the details. Obie is going to be DJing there; in addition to being the Addison-Wesley editor of the Rails series and author of The Rails Way, he has also been a DJ for 25 years. I don't know if we have enough room there for everyone. So come to our party, 6:30 to 9:30; the details are on the website. Grab a ticket there so you'll be able to get in free, right? Also, if you like ping pong, we have a ping pong tournament that we're hosting. So anytime today, come by the Hired booth in the expo hall. If you win one game today, then you're enrolled in the final tournament, which is tomorrow, and we're giving away a $2,000 signing bonus. So when a candidate gets a job on Hired, they get a $2,000 bonus from the companies, and so we thought it would be cool to give that away to the winner. A $2,000 prize for ping pong. Awesome. Alright, I want to thank all of the participants of our panel. Let's give them a round of applause.
|
Every interview you conduct is bi-directional and creating a bad candidate experience means you'll fail to hire the team that you want to work on – you can't hire great people with a mediocre hiring process. The experts in this session have analyzed thousands of interviews and will share how to create a great candidate experience in your company and how to scale your teams effectively. Candidates can use these same lessons to prepare for interviews and evaluate the companies interviewing them.
|
10.5446/30678 (DOI)
|
My name is Courtney Irvin. I am a platform engineer with CodeMontage, and that's an open source project, so I'm also an open source maintainer. Two years ago, though, I was not a platform engineer. I was an administrator at a nonprofit, and when I say administrator I don't mean like systems administration, like IT stuff. I mean I managed volunteers, I planned events, I occasionally used Excel spreadsheets. It was a terrible time. So I've moved on, and I'm writing code now, and I can honestly say that there's no way I would have my current position and my current knowledge and skills without having written open source code and contributions. I'm a self-taught developer, by which I mean that I didn't pay anyone to teach me. People taught me; they didn't get paid for it. I'm so sorry. And some of that free learning that I got was through writing open source code. So I think that it's incredibly beneficial to your career, to your learning, to your growth, and that's kind of why I'm really interested in getting people to write more open source code. Also, there are a ton of maintainers out there that just don't have the manpower to maintain projects at the level that they'd like, and you can be the change that they deserve. So the first thing you want to think about when you are getting ready to make some open source contributions and pick a project is what your goals are. And there are lots of different reasons to get into open source. I would recommend you have two different kinds of goals. A personal goal, something for yourself that has nothing to do, or very little to do, with your career, right? And then a professional goal that is maybe related to your job: your current job, a future job, something like that. And those goals are going to guide some of the decisions that you make and the projects that you choose and what you're willing to put up with and what you're not willing to put up with. So some of the goals are listed here. New technologies, like getting to know a new technology, getting to know how to link up Rails with an Angular front end or what have you. Version control: getting really good at Git and GitHub, getting really good at different processes. Working with a distributed team is a skill. That's a skill: being able to communicate from a distance and get your point across. Code reviews: maybe you just want someone really experienced to look at your code. A really good way to do that is to give them code that helps them, right? Not just, like, give them a link to something that you're working on. Visibility: making sure that other people know that you write code, getting to know people in the community, and also helping out the community, right? Giving back to a project that you've already worked with, that you depend on, giving back to some nonprofit projects. We'll talk about that in a second. And in general, your goals should be about you and your needs, right? So your goals aren't my goals. One of my goals was getting to know Rails, because I didn't know it when I started contributing to Rails projects. So I learned a lot. And what I want us to do now is just say to yourself: yes, you can write open source code. If you've been holding back because you're afraid, if you've been holding back because you don't think you know enough: you do, okay? I'm going to talk about this in a second. But what I'd like you to do right now is just look around and find someone else's eyes, right, in the crowd. Just look around.
It's going to be a little bit like Catholic mass, peace be with you. Make the awkward eye contact and give that person a thumbs up and be like, you got this. Just mouth it. You don't have to say it out loud. You got this, right? Okay, so you have permission to write open source code now. All right? Do it. If you haven't done it before, someone has just given you permission. Take that permission, all right? All right, and I'm going to go through... I spent a couple minutes, just a couple, thinking about all the ways that you can contribute to open source. A lot of these don't involve code, right? Bug reports: a lot of times people will put up an issue and they're like, okay, this thing doesn't work. Well, that's not really helpful to the maintainers, to the people writing code, right? So find an issue like that. If you come across a bug on your own, do a little research, share some links, right? Do something to push it forward. Take a video of the bug happening. That's super helpful. Take a pic. Documentation details: a lot of times people assume, you know, I have it set up on my computer and it works on my computer, right? Well, that's not really helpful to everyone. So if you come across an issue with documentation, or with making sure that there is documentation... sometimes people assume that coders know more than they do, right? Maybe they don't mention that, hey, this is in a different version of Ruby than most code is right now, or it's in an older version and you might want to use a Ruby version manager. Translations: if you know multiple languages, get someone else's website to work in multiple languages. Making seed data, so that there's a lot of data for new developers to work with, so that they can kind of see examples of the code base as it exists in real time. As opposed to, for example, if you list a bunch of projects, having only three projects is going to keep you from being able to solve certain issues, so having 25 projects really clearly makes it easier for everyone else. Add some tests. No one writes enough. Do it. Design ideas and reviews: as you'll see in my slides, I am not a designer. I think that they are witches and I respect them for it. And I think that if you know design, share that with someone who does not. Code reviews, right? So go to some pull requests that already exist. If you know the code, if you're familiar with it, then comment on those code reviews, on those pull requests that already exist, and keep someone else from having to do that first pass, right? Help someone else improve their pull request before it even gets to the people who are on that team. Share ideas for features. You can promote a project that you maybe don't have time to work on: tweet about it, blog about it, star it on GitHub. That takes a couple seconds and you're still helping that project. Track it on GitHub, so watch it and look for notifications about things that you can help with. Update a library. We're about to be on Rails 5. Somebody's going to need help with that, right? I would suggest you don't do that without talking to the maintainer first; they may not want that. But update a library or a gem that they're using from an older version. Fix any syntax errors that there are in the code so that the code is more maintainable and easier to read. Fix typos. That requires almost no code knowledge whatsoever. So fix typos that you see on a website.
And then finally, and we'll talk mostly about this for the rest of the presentation: code. Maybe write some code. That would be nice. Okay. So the first thing you want to do is find a project and choose which one you want to work on. And you can do that by just searching GitHub, but there are also a lot of tools that you can use. So CodeMontage, the company that I work for, has social good projects that are open source. So, open source social good projects: you can search by cause, so if you're really into education or you're really into open data, search by cause and find a project that you can contribute to and help that organization. OpenHatch has a lot of little bite-sized projects, so if you just want to do something really quickly; and they have a lot of different languages that they support. GitRec: you put in your GitHub username and it will suggest repos to you based on the work you've already done and your existing profile. That's super cool. 24 Pull Requests: it's like an Advent calendar, but not chocolate. So you write code and you submit a pull request every day for 24 days in December leading up to Christmas; make some presents for open source maintainers. Contribulator is kind of just the projects part of 24 Pull Requests, and CodeTriage looks for really, really badly off repos on GitHub and then suggests them to you. So if you really want to take on a struggling project, do that. But also, if one of your professional goals is, hey, I want to work for X company, look for their open source projects, because their developers are running their open source projects, and then they'll be looking at your code. Who do you think they're going to go to when they're trying to hire but somebody that they've already kind of worked with from a distance? So if there's a company that you really like, look to see if they do open source and try to contribute to some of their open source code. It also gives you a sense of their workflow, it gives you a sense of their team and their communications and their dynamics, so that's really helpful. And the last thing is other developers. If you know other developers, they might have some suggestions, they might have projects they're working on, so reach out. It's a good way to connect with people, so if one of your career goals or personal goals is networking, hey, find somebody and help them out with their project. Okay. And then the next thing you want to think about is the task, the issue that you want to work on; in GitHub, that's issues. And what you want to do here, depending on how you're feeling, right: you should probably make sure it hasn't been solved already. Sometimes issues don't get closed until something gets incorporated, but I would check the pull requests as well and just make sure. And then, if it hasn't been completed already, just comment on it to say that you're working on it, so that it doesn't get solved before you finish, right? If having your code pulled in is really important to you, make sure that you comment and make it really clear that that's something that you're working on. And then know that that comment is a commitment, right, to push that task forward in whatever small way you can. So if you work on it for, you know, a couple hours and you don't get through it, and then you don't ever comment again saying, oh, okay, I didn't get to this, someone else can take it on... well, now it's just not going to get done. So consider your comment some sort of a commitment.
If you don't solve the issue, you can always take the research that you've done and the things that you've learned and comment on the issue again with that information and say, I wasn't able to get through this, but I'm happy to pass it on to someone else. That's helpful. That's a step forward, right? So stop thinking that you have to solve everything or build a full feature, and think about the small steps you can take to impact a project and to help a maintainer. And know that, as somebody who maintains an open source project, any contribution you make is helpful to me, as long as you're in a good spirit about it, you're in a learning space, right? Anything that you do is helpful: adding a comment, doing some research. That's work that I don't have to do or that someone on my team doesn't have to do. We're hungry for it. So again, don't be afraid to reach out and kind of put something out there. Just also don't be really aggressive about it, right? So if you have a feature idea and then you're like, hey, it's been a week, where's the feature? That's not going to be very helpful and it's not going to win you good graces, okay? All right. Another thing to think about before you start working is the community. You're probably going to need help; there are probably going to be people on the team that know more about the code base than you, a person just forking it for the first time. And so what you're going to want to do is make sure that you know where that community is. Maybe it's a Slack channel, maybe it's a HipChat community, maybe it's a forum somewhere, maybe it's just an email for one guy, right? Find out what that community is and be aware of it. And also try to get a sense of what the community is like before you get started. That might also be part of your decision factor. So for me, it was really important to have a community that would be responsive, that would be very kind, because I was kind of in a place of, I don't know what I'm doing, but I'm scared. And so it's good to know what that community is like before you hop in. I would also say that you might want to talk to other coders you know to look for a good community, okay? Another part of the community is that level of responsiveness. So maybe look at closed pull requests and see how long that process took, and that'll help you set your expectations. Okay. So you will want to contact the community when it comes to any big changes that you want to make: changing a big library, making a new version of it, adding a brand new feature that hasn't been discussed and isn't listed as an issue. Those are things you might want to reach out about first, again, if it's important to you to have your code pulled into the code base. But depending on your goals and depending on what you need, don't wait for permission either. People are busy; open source maintainers often are not working full time on that project. So don't wait for permission, but do check in. So I'm going to get into the technical bits of this now. And since most people here seem pretty good with Git and GitHub, and there's no way that I could possibly teach Git in the time that we have right now, we're going to kind of speed through. So the first thing you want to do on GitHub: fork the project, right? And this is just one workflow; I know that not everyone uses the same GitHub workflow. This is just a solid one that you can use.
And I would recommend... a lot of times in the project itself, in the documentation, there's information about how to best go about contributing to the project. So this is a workflow that works. This is the workflow that I use, but it's not necessarily the workflow that's best for the project that you are working on. Again, so starting off: fork the project and create a version in your own GitHub account. That's on GitHub. And then you're going to want to clone your version of the project, your fork, down to your local computer. All right? And know that installing the project means going through the steps that are listed in the documentation. With Rails, a lot of us are pretty familiar with how to set up a Rails project, so maybe there is no documentation. Maybe you've taken that risk, which, power to you, and you've decided, I know how a Rails project works, I'm going to just set this up and make some contributions, and there's no documentation... which should be your first pull request. But just know that you can look at the README.md file or the CONTRIBUTING.md file; those are pretty standard. Find the documentation, the wiki, the community, whatever it is, and go through the steps that are listed there. Know that this is the worst part of open source contributing. The installation process is the worst part. So if you are feeling like, I'm the only person who's ever come across this error and now I will never be able to achieve my needs with regards to open source: don't. It's normal to struggle, it's normal to have errors. The likelihood is that the person who created the project, or the team that's working on it, has it set up and has had no problems. So if you hit an error or a bug, that's really important information for that person or that team to know, because they might not know that other people are hitting this snag, and a lot of people are hitting it and just walking away. And that's people who could be contributing, who could be coding, who aren't. So what you want to do is know that you're going to struggle, push through the struggle, write down the errors that you get, make issues for anything that comes up that's broken, and if you can fix it, write down the solution: make a pull request to the documentation that adds information that was missing before. And then the last thing is, if you fail, if you're not able to get it installed, those issues are still a way to move that project forward; just move on to your next best choice. It'll be okay, and maybe they'll fix it and maybe it'll come back. So do your installation, get through it, don't get discouraged. And then what you want to do is celebrate. Make it rain. You did it. That's a really big deal. If you get a project installed, that's a big deal. Celebrate it, take a minute, tweet about it. Awesome. Okay, so these are the six GitHub commands that you'll want to know for the workflow of "this is not my code base," right? Of moving past the Twitter clone that only exists on your computer to someone else's code base up in the cloud. So you're going to want to set a remote, which means that you want to make sure that the headquarters version, the official version, is linked to your code as well. So set a remote called upstream, root, headquarters, banana, whatever you like, that is linked to that headquarters version of the code. And then as you're coding, make a branch, and name the branch something relevant to the code you're writing; a command-line sketch of the whole loop follows below.
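As a rough sketch of that loop on the command line, assuming a GitHub-style fork workflow; the repository, remote, and branch names here are hypothetical placeholders, and a project's contributing docs may prescribe something different:

    # one-time setup: after forking on GitHub, clone YOUR fork (the "origin")
    git clone git@github.com:you/some-project.git
    cd some-project
    # link the official "headquarters" repo under a name like upstream (or banana)
    git remote add upstream git@github.com:headquarters/some-project.git

    # for each contribution: branch off master with a relevant name
    git checkout -b fix-signup-typo
    # ...write code, then commit and push the branch to your fork...
    git commit -am "Fix typo on the signup page"
    git push origin fix-signup-typo
    # then open the pull request from that branch on github.com

    # over time: keep master in sync with headquarters, and rebase long-lived branches
    git checkout master
    git pull upstream master
    git checkout fix-signup-typo
    git rebase master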
So don't work on master is the takeaway. Name the branch something, write some code, commit, and then push back up to your fork, which should be called origin naturally and originally, unless maybe you changed it. And then make the pull request through github.com. And this is a GitHub workflow, obviously. Okay. And then the last part, this pull and rebase, is something you want to do as you continue to work on a project. So we've already discussed how the install process sucks. So if you have a good project and a good community and you've got it installed, write more code, get connected. That helps you build that relationship, right? So pull down from the upstream to your master, and make sure that you're always branching off of your master to make new code changes and that your master is always up to date with headquarters. And then rebase is what you'll use if, let's say, you worked on something for a long time, it's been four weeks, and maybe that original code base has changed. So you'll want to rebase against that original code base. And I'm saying all this just to say that you should know these six commands and how to use them. And if you don't, well, this is the process that you'll take as you work on a code base that's not your own, or even something that you're more closely tied to. All right. So the next step is to code. Go do that. And just understand, as you're working... maybe you've picked a task that's a little bit beyond you. Maybe you picked a task that, I'm going to pause, that's a lot. Maybe you picked a task that is a little bit of something new, right, that you have to research, that is going to take you a long time. So just make sure that you're maintaining your motivation and that you don't give up, because that's a waste of your time, right? So push through, and see it as a learning experience, not as, like, an obligation of, I made this commitment and I need to write this code. See it as, okay, I'm going to learn something from this one way or another, and that really changes the way that you think about it. Find someone to hold you accountable, another developer. If you get stuck, find someone to pair with. Do your research before you reach out to the community for help. So go into the community with a sense of, okay, here are one, two, three things that I tried that didn't work; can anyone help me? Right? And that's just basic developer courtesy: do some work before you ask for help, and have some responses ready when people start to ask you questions. And then just know that, like I said, everything that you do is some amount of progress. Okay. So, your pull request: you've written the code, you've pushed it up, you clicked the little button to make the pull request on GitHub. Awesome. Now, in your pull request, please don't just submit an empty pull request. This is not helpful to anyone. Please list the changes that you've made in detail. If you made a front end change of any sort, even if there's no visible difference, make a before and after screenshot. Even if you've made a front end change, CSS diffs don't mean anything to the eyes of another developer, and there's no reading your mind in terms of what you changed and what you made different. So screenshots are really helpful. If you made something and it's a little weak, or it created a different bug, write that in the pull request, right? That's okay.
If you have ideas for future improvements, or, you know, it's at a good stopping place but there's still some work to be done, make a to-do list: here are some new tasks, here are some things that you might want to add to this later, here's a feature that might be great to build on top of this. Put that in there, and then maybe make issues for them. And just don't do too much at once. So sometimes you'll start working on the code and then you'll be like, actually, we should refactor all of this CSS to better fit the needs of the other front end developers. Well, that's a different pull request. That's not related to the task that you've chosen, right? Or you start to change syntax, or you start to move things around. Do that separately. Just keep things very contained, right? Each pull request should change a very specific thing, because that allows the maintainer to say, okay, you did this one thing really well, as opposed to saying, well, you did this one thing really well, but then you did this other thing and I don't need that, and now I can't really separate out which one I need and which one I don't. So be really clear about what you've done and make sure that it's a very boxed-off thing, right? I've fixed this issue: one pull request, one change. The other thing that's important with pull requests, and it was important for me because I was in a learning place and I wanted people to review my code, is saying specifically: I'm open to feedback. That's really important, and that says a lot to other developers, right? So I would say that in the pull request, because sometimes it can be a little touchy. Maintainers want people to contribute to their code base, so they don't necessarily want to call you out. I mean, I don't want to call people out in rude ways and I don't want to hurt people's feelings. So if you say really specifically, hey, I'm really open to feedback about this, I can see that there are some places that are weak, please let me know whether something is working or not... the other option is, you know, just ignoring the pull request, right? So no: let me know and I'll fix it. I'm here to fix it. This is not just a one-time drive-by, right? So make sure that it's really clear that you are open to feedback, and to constructive feedback. Make changes if they're requested of you; this is more about, again, building that relationship. And then, like I said, do your own research. Fix things when you can. Make sure you clarify things if you have to, and take it as a learning experience. Great. Okay. So the last thing is taking all of this work that you've done and figuring out how to make it work for you, whether it's been pulled into the code base or not, right? So we're going to be patient. We're going to wait for people to work through things, to get to the code and to respond to you. But ultimately, the code that you've written is code that you've written. It's yours. And it exists out in the public realm, unlike a lot of code that you may write for your employer. So what I would say is that each pull request is a problem that you've solved. It might be a little problem and it might be a big problem, but it's a problem that you solved.
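As a sketch of what a pull request description following that advice might look like; the issue numbers, selector, and feature here are all made up for illustration:

    Fixes #42: project cards overflow their container on mobile

    Changes:
    - Constrain .project-card width to its parent container
    - Add a regression test for the projects index page

    Before/after screenshots attached below.

    Known gaps / future work:
    - The tablet breakpoint could still use attention (opened #57 to track it)

    I'm open to feedback, especially on the CSS approach!

A focused description like this hands the maintainer one specific, reviewable change, which is exactly what makes it easy to merge.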
So especially if you get to a place where you are writing bigger things or fixing bigger problems, it becomes like its own little bubble that you can take to interviews and use for people to see: this is the code that I write. This is how I write code. This is how I solve problems. These are the reasons why I made some of these changes. This is where I got hung up. It's a story in and of itself that you can use in an interview with a potential employer, at a meetup, what have you. Actually, when I was leaving the nonprofit world, I went on to become a teaching assistant at Dev Bootcamp, and I got hired because I walked through one of my open source pull requests and kind of talked about all the changes I'd made, and the things that I needed to learn in order to be able to make them, and why I'd done certain things. So a pull request is its own little bullet point. Make a blog post about a pull request you've done, tweet about it, add it to your resume if it's substantial enough. You can point to this code; it's work. It's work that you've done. Similarly, the people that review that code, that work that you've done, can become references if you play your cards right. So all those people that are reviewing your code, commenting on your pull request, the maintainers like myself: you know, if you are building a relationship, if you are using enough emojis in your pull request... use emojis in your pull request, it's awesome. Confetti ball is a great one. Tada is awesome. Clap is good. Do it, do it, do it. But, you know, get your personality across through the communication means that you have. Follow them on Twitter, chitchat with them, be active in the community, be hyped about their projects, you know, and that's a good way to build connections. And I don't mean in, like, a smarmy way, right? If you like them, do it; if you don't, then, you know, whatever: here's some code, bye. But you can build those relationships over time and get to a place where now you have references in the community that maybe you wouldn't have otherwise. And those references know your code. It's not like you met at a meetup and exchanged business cards, it's not like they're really into your Twitter stream. They know your code. They know the work that you do. So that's pretty awesome. If you add value to a project over time, you can also become like a co-maintainer; you can get a higher level of responsibility. This might not translate into anything visible on the Internet, but it will translate into something like being able to commit directly to the code base, potentially, or being able to review other people's code, or being given responsibilities, and, again, furthering that relationship that you have with the people that you're writing code for. And then I would say, make sure that you're tracking all the work that you do. So if you're writing issues about bugs and you're writing these expansive kinds of examinations of a bug, well, that's QA. It is. If you are doing code reviews on other people's pull requests, that's some lead developer level stuff, you know? And if you're writing code, you're a developer, even if you're changing careers or you're from a different background. I just think, with all the work that you're doing, make sure you know that that's work that you've done. It exists. It's there in the world, whether somebody acknowledges it, whether you get that pull request merged in or not.
So build relationships; working with the same project over time can be really awesome. And the last thing is, as with all things, your relationships with other people matter. Be kind in the community that you're in. If the community itself isn't kind, find another community. It's not worth it, you know? Work over time to make sure that people get to know you. Take advantage of opportunities if you want to and if you can. One of the best things for me about open source projects was that I didn't have to come up with the idea. Somebody else had come up with the idea. I'm terrible at the idea part and at taking something from scratch, but being able to just pick up a problem from a larger pool, and one with more meaning, was really nice. So it's a great opportunity to work together and to collaborate, and you should build those relationships and take advantage of it. So be kind to yourself. Write open source code. And worst comes to worst, like I said, CodeMontage is open source. So if you want to submit a pull request to CodeMontage, that would be awesome. And so that's it. Thank you guys very much. Thank you.
|
Whether it’s through bootcamps or sheer willpower, hundreds of freshly-minted developers have used Rails to begin their careers. But all of the well-formed Twitter clones in the world are not a replacement for experience working on an active project. Enter open source. Open source contributions hone crucial web development skills, like version control; comprehension of a full code base; and even an understanding of agile processes--not to mention being a clear indicator of your hirability. Join us to learn how to push through any intimidation and strengthen your portfolio with open source!
|
10.5446/30682 (DOI)
|
All right. Hello, everybody. First of all, I'm going to be talking about security. We had a bit of a scheduling snafu, so there was a flip in the schedule. So I hope you stay even if you weren't here for security. It's a pretty cool topic. But I wanted to thank everyone at RailsConf and the speakers who helped us swap things around so that we wouldn't have to be right up against Justin Collins and his own Brakeman security talk. So thanks to RailsConf for facilitating that. Today, I'd like to talk a bit about security, how to defend against security attacks, and propose a series of ideas built around this concept: let's try to defend against attacks at a higher level of abstraction, similar to how metaprogramming is programming at a higher level of abstraction. I want to investigate the possibility of defending against attacks at a higher level than we currently do today. So let's start by talking about the anatomy of a security attack. There are two sides to a security attack. One is that you have a vulnerability: someone broadcasting that they're going to be out of town for two weeks. And the other side is you've got some kind of attacker who's going to look at that and figure out, I can do something with this. And the reason I pull it apart like this is that the way we generally handle security today is that we defend against vulnerabilities or we defend against attackers. So how do we defend against vulnerabilities? Well, the general way we do it today is we have a lot of processes and policies in place. We audit all of our code, we look at all of the frameworks, libraries, and gems that we use. We keep track of them. We watch lists of security vulnerabilities, CVEs, and we make sure that we're up to date on the latest software. But obviously, as anyone who has done a Rails 3 to Rails 4 migration knows, it's not an easy process to stay up to date. So you end up in this sort of cycle where you find out new information, that there's a vulnerability, you patch, you test, you deploy, you test it in production. And this tends to take a decent amount of time. On top of that, and it's fun to rag on PHP at RailsConf: PHP, as an example, has had 24 vulnerabilities in the core platform on average every year of its existence. Unfortunately, Rails, not even talking about Ruby, but just Rails, has a similar story. Over the existence of Rails, there have been on average seven vulnerabilities, CVEs, attributed to it every year. And it's actually had an uptick in recent years into the double digits. And the reason this is problematic is not just because there are vulnerabilities, but because every time there's a vulnerability, it takes a lot of time to go through your cycle, to patch it, to update all the source code. In the PHP case, if you think of the site WordPress.com with tens of thousands of blogs, you've got a real challenge on your hands if once every couple of weeks you have to deploy a new security patch and update all of these blogs. So it becomes a question of how fast can you spin this wheel? How fast can you turn over your patches? But even if you can turn over your patches quickly, that still only addresses the vulnerabilities that we know about today. There's a study that showed that vulnerabilities bought on the private market, maybe the black market, you might say, sold to attackers or maybe government security organizations, on average remain private for almost half a year before anyone has detected them and reported them publicly.
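As an aside on the patching cycle described above: the talk doesn't name a specific tool, but one common way Rails teams automate the watch-the-CVE-lists step is the bundler-audit gem, which checks a Gemfile.lock against a public database of known advisories:

    gem install bundler-audit
    bundle-audit update   # fetch the latest copy of the ruby-advisory-db
    bundle-audit check    # flag any locked gem versions with known CVEs

Running a check like this in CI shortens the find-out step of the cycle, though, as just noted, it only ever covers publicly known vulnerabilities.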
And on top of that, the frameworks, the libraries, the gems, wherever the vulnerability is, have to be updated and released, then that has to filter out to all the people using them, who have to notice it, update, patch, test, deploy, test, and so on. So it's a serious problem for the vulnerabilities that we don't know about but attackers know about. But then on top of that, there are all kinds of vulnerabilities sitting in source code that no one knows about yet, just lurking there for someone to find. A lot of vulnerabilities, you look at them and trace back when they were introduced, and it was two, three years in the past, just sitting there. So the story about how we defend against vulnerabilities is kind of fraught with perils. And because of that, people turn towards defending against attackers as well. So how do we defend against attackers? Well, one of the main ways we defend against attackers is using something called a web application firewall. And unfortunately, you don't get a mini Harrison Ford in a box to fend off the attackers. What you do get is usually a black box piece of hardware. You plug it into your data center. You spin it up and it looks for patterns of things. But to really make it work optimally, you have to configure it for your application. So you end up in screens like this, I think this is a LAMP stack, and you're having to go through all the different routes that your application has to lock down, like who should be able to access those routes, when they should be able to access them. It takes a lot of knowledge to be able to operate these types of firewalls in an optimal manner. So on top of having bought expensive equipment, you now have to have consultants or hire on security gurus yourself to make sure that you're using web application firewalls correctly. And on top of that, they only work at the networking layer. They're not actually inside of your application, so they look for patterns of things, patterns that look like SQL injection, that look like cross-site scripting. But they don't actually know what's happening in the application, so at best they're making very intelligent guesses. Because of all of this, how difficult it is to configure and how they have to make a lot of intelligent guesses, you tend to hit a problem with web application firewalls. And to demonstrate what that is, we're going to go on a field trip. We're going to go to Château Gaillard. This is a castle in Normandy, France, and in the year 1203, it was actually under the control of the English King. The French King wasn't very happy about that. So he wanted to lay siege to it. It was a very complicated, multi-step process, but part of it involved how do we get past these rock walls that you see here? And the way they did that was they found something called a garderobe. And a garderobe is a fancy term for a medieval toilet. You sit on the top of the wall, you do your business, the waste falls down outside of the wall. And so what the French army did, in their finest hour, was scale the garderobe and enter the fortress, enter the castle, through the toilet seat that was unguarded. Because why would you bother guarding it? And once they were inside, they surprised everyone else. They took over the castle, opened up the drawbridge, let all the attackers in, and the entire defenses of this fortress were toppled over through a toilet seat.
So the reason I mention this is because when we're talking about defending against attackers and looking at patterns and configuring web application firewalls, it's very hard to configure every point where an attacker can get in. Attackers use extremely sophisticated scripts and tools that spider through websites. Just like Googlebot does, they look through every link, they try to find every page that's possible to access, and then they look on every page, and instead of trying to index the content like Googlebot, they look for buttons and forms, places where they can interact with a website. And they try to inject anything they can think of, SQL injection, cross-site scripting, anything, just to see what happens. So there are very sophisticated tools on the attacker side. If you have a tiny hole, that may be all it takes for them and their tools to find it in a short period of time, insert themselves into your program, and tear down your defenses. If someone inserts themselves into your program, gets past the defenses that you have, that's called a false negative in the security field. There's an opposite side to these types of problems called a false positive. A false positive is where you have some kind of defense that is alerting you so frequently for things that aren't actually threats or attacks that you become numb to them. Or in the worst case, you're actually blocking legitimate traffic, legitimate users trying to use your site. There's something called alert fatigue, and that's where if you have too many alerts popping up at you, whether from a security product or a DevOps machine or from watching performance, it can be hard to take actionable steps from them. When there's a security attack, even if it's defended, you generally want to do something about it. It could be that a user has their credentials compromised and their account is being accessed from all over the world. You probably want to reset their password. Or if there's some kind of breach in your template rendering, you may want to look at how they're getting past it and injecting code into your site using cross-site scripting, so you can patch that up. But if you have too many false positives, it's too hard to figure out where the problem is and to address it. So our tools for defending against attackers also have problems, just like our tools for defending against vulnerabilities. So what can we do? Well, I spoke earlier about the anatomy of a security attack and how it's comprised of a vulnerability and an attacker. But in reality, in this case, a thief is not going to enter your house and sit on your couch and watch TV for a few hours and then leave. He's going to do something. Probably going to steal all your jewelry, all your expensive property. It's this exploitation that is damaging. It is kind of creepy if attackers get into your house, but the taking of property or the messing up of your house is what's really damaging to you in the long term. And so when it comes to a security attack, can we think of ways to defend against the exploitation rather than just the attacker or the vulnerability? That lens gives rise to this idea that I have of meta security: securing against the exploitation that an attacker is trying to carry out. So an example, if you think of the famous Indiana Jones scene, he's in a temple, he sees a gold statue.
Now it could be that this statue is sitting there for the purpose that anyone can come along and maybe pray to the statue, admire its artistic qualities, whatever it is. So it's hard to filter out attackers. I mean, Indiana Jones looks like an archeologist, not someone who would steal the statue. And the statue is inherently vulnerable, sitting out there in midair on a pedestal. So instead of trying to defend against the attackers or the vulnerability of the statue being there for the taking, there's a booby trap calibrated to the weight of the statue. So when Indiana Jones goes to swap it for a weighted bag, it doesn't work, and the booby trap is fired. So this is an example of trying to defend against the actual exploitation rather than the attackers or the vulnerabilities. So how can we take this concept and apply it to web security? I'd like to talk about two classes of exploitation. The first one is SQL injection, and the second one is cross-site scripting. So let's look at how SQL injection works today. Now this is one way you could write a query in your application. It's a pretty terrible way to write it, where you're interpolating user input into a query string. But it does the job for this illustration. So let's say you've got this statement, and someone comes along and the user ID from their browser comes back as five. This is how it's meant to work. We're finding the user whose ID is five. But let's say you've got a hacker who's trying to do interesting things. He might say, well, my ID is the string "5 OR 1 = 1". And when you put that into your query, that returns true for every record. And now you're returning every record of every user in your database. Maybe you're rendering that out to the browser. Worse is if it's the string "5; DROP TABLE users", in which case now you've deleted all the user records for your site. Not cool. This is kind of a trivial example. You're doing some string interpolation. But there are other examples in the real world that you might think should be fine, like, ActiveRecord should be escaping things for me. Here we're calling the delete_all method. We're passing in a string that will be turned into a WHERE clause. It's still a little trivial because you're still creating a string from user parameters. So you know I probably shouldn't do that. Here is an example of, again, being able to delete every user in the table. But then we talk about things like the calculate method, where you can sum all of the values in a table. And by using a carefully crafted query, instead of asking for the sum of all prices in all orders, you may be asking for the sum of all ages of all users named Bob. Here's an example where you can combine how Rack turns a query parameter into an array with how that array is passed into exists, in such a fashion that this will actually cause the exists method to always return true, no matter what. And in this case, we are passing in a parameter that, depending on how the structure of the table is set up, lets us turn all of our users into administrators. And the common thread, at least among these last three, is that we're passing a value, a variable, directly into ActiveRecord. It's very easy to think, well, ActiveRecord should be handling things for me. It should be escaping things. But it's just not the case. So that's how SQL injection can happen. I'd like to talk about cross-site scripting next, how that works.
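Before getting to cross-site scripting, here is roughly what the interpolation pattern just described looks like in Ruby; the model and parameter names are illustrative stand-ins for the slides, not code from the talk:

```ruby
# UNSAFE: user input interpolated straight into the SQL string.
# params[:id] = "5 OR 1 = 1"          -> matches every user
# params[:id] = "5; DROP TABLE users" -> destroys the table
User.where("id = #{params[:id]}")

# SAFER: let ActiveRecord quote the value for you.
User.where("id = ?", params[:id])   # bind parameter
User.where(id: params[:id])         # hash condition
```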
So cross-site scripting is when you as an attacker manage to get your code to run in someone else's browser on someone else's website, a crazy idea. It's kind of convoluted how it works, so I'm going to give an example. Let's say that you are signing up for a social networking site. So you go through, you populate this sign-up form. And once you log in, somewhere on the page, maybe in the upper right side, it'll display back to you your username, your first and your last name. Now let's say that there's a vulnerability in the last name field. So I'm going to try to put a script tag in there that's going to call the alert method, pop up a dialog box. So what's this going to do? Well, when we log in after we've signed up, it's going to try and render our first name and our last name. And if we've got a cross-site scripting vulnerability in here, the last name might be injected directly into the page as raw HTML code. If that happens, when we go to render the first and last name, we get our first name, it's just some text, but then we get a dialog box. So you might say to yourself, okay, big deal, I've just harmed myself. Every time I log in, I'm going to see a dialog box. That's not cool, but it's just on me. But then let's say that this is a social networking site. Someone goes and posts a message. I go and I start to add comments to this message. Well now, associated with those comments is my first and last name. They get rendered alongside it. So if someone else pulls up this post in their own browser, when it renders, it's going to render my first and last name. It's going to do it twice, because I left two comments. So now I'm pestering them with two dialog boxes. Okay, that kind of sucks, I'm creating dialog boxes all over the internet, but it's still not that big of a deal. The problem is, if I can run a dialog box, I can also try and get a session token from your site. Now, a lot of times these days on major sites, session tokens are locked down, but there are still a lot of sites out there that do not do a proper job of locking down session tokens. If I get a session token in my code, I can then send it using Ajax to my own server. Now I've got a session token for a logged-in user on a social networking site, and I can start to create content as though I am that user without having to know their password and go through the login details. That session token is all I need, and that sucks for a social networking site, but if it's a bank site, I could start transferring funds. I could do some really fun stuff. So how does this work? How do we get to the point where this has happened? Well, to explain that, we have to talk about something called String#html_safe. It's a poorly named method that makes it sound like we're going to take a string that could contain some unsafe HTML code, pass it to html_safe, and it's going to make it safe for us, and that's actually not what it does. What it does is it takes some text, you call html_safe on it, and it returns that same text, but wrapped in something called a safe buffer. A safe buffer says, I vouch that anything inside of the safe buffer is safe. So if we take that safe buffer and we append some other text to it, it's going to make sure that it escapes that other text first. So if we append some script tag that is in a regular old string, it's going to escape that first and create a new safe buffer.
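A minimal sketch of that append behavior, runnable with just ActiveSupport installed; the strings are arbitrary examples:

```ruby
require "active_support/core_ext/string/output_safety"

safe  = "<b>Hello</b>".html_safe        # an ActiveSupport::SafeBuffer
plain = "<script>alert(1)</script>"     # an ordinary String

safe + plain
# => "<b>Hello</b>&lt;script&gt;alert(1)&lt;/script&gt;"
#    the plain string is escaped as it is appended

safe + plain.html_safe
# => "<b>Hello</b><script>alert(1)</script>"
#    both are vouched for as safe, so nothing is escaped
```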
On the contrary, if I take one safe buffer, and I take a second safe buffer and append it, it's going to append in cleanly without any escaping, because we've got two things that are already vouched to be safe by someone. All right, so how is this actually used for any purpose? Well, when Rails does its rendering, it uses safe buffers to append things together so that we don't have cross-site scripting attacks. So it starts with an empty safe buffer. We go through. We take parts of the literal HTML code that's in the template. We call html_safe on it to wrap it in a safe buffer. We're vouching that it's safe, and we append it to our rendering buffer. Then we come to an expression. This title method is something in my app, and it returns a string. It happens to have some input that the user put in. So I'm an attacker. I try to put in some cross-site scripting code down at the bottom to raise a dialog box, but I'm thwarted, because the title method returns a string. When that string is appended to the rendering buffer, it gets escaped. So we're all good. We append the end title tag just as before. Now we come down to here, a different helper method that's called inside of an expression. It's the javascript_include_tag helper. In this case, we're going to actually have an HTML tag, a script tag, and we want it to get through. So the javascript_include_tag itself is special in that it returns a safe buffer. It is vouching for what it returns as being safe. So when it gets appended to the render buffer, it goes through unescaped. Finally we add the end of our head tag, and this is how a template gets rendered together. This is how the safety mechanisms work to prevent cross-site scripting. So you might ask yourself, how do you get cross-site scripting in the real world if we've got all this awesome, complex Rails security doing things for us? So this is a lot of code on a slide. You're not expected to actually read it, but what it represents is code that's in a gem out in the real world. It's a gem that helps you put a Bootstrap UI into your application, and it generates a flash message, a little banner message that you might put at the top of one of your web pages. So you've got an application, an error condition occurs, you want to put up a flash message. So you say, I'm going to create this message: user ID, then this user-provided input, does not exist. So if they provide the ID of five, that's okay, we create this banner message, it looks right. Let's say that I'm a hacker and I try to put in some cross-site scripting again, another alert box. Well, what happens is, similar to the Rails rendering I showed before, the message is a string. It gets passed through, appended into safe buffers, and it gets escaped. Everything's great here. What's the problem? Well, someone came along, they had a different app, and they wanted to provide a link inside of the banner. So they added a link_to helper. And when they ran it, they were like, oh, wait, what's going on? It's not actually rendering the link as HTML, it's escaped it, and now I can't have a link. So they said, oh, I see what the problem is. I've got a gem, this Bootstrap gem. It's got these lines of code, lines 17 to 19, where it's creating a content tag, a div, to hold that message. And it's passing this message into this content_tag helper.
The message is going straight in as a string, and it's being escaped as it's being appended to the safe buffer for the div. So they said, oh, I think something's wrong here. I'm going to change it so that it now passes in a safe-buffered version of the message. So they go through, they're so proud of themselves, they get the link to pop up inside of their banner message. Everything's great. And this got shipped in the Bootstrap gem. But here's the problem. The first guy comes along, he updates his app to use this latest gem, and now he finds that an attacker comes along, tries to put some malicious script in as the ID parameter, and it's no longer being escaped, because the gem is wrapping everything in a safe buffer. So all of a sudden, they didn't change any code, they just updated a gem, and a cross-site scripting vulnerability was added. What should have happened is the person who had the link should have themselves vouched for the contents of the message and said, you know what, I'm creating a link in part of this text. It's safe. I need to vouch for it with html_safe here, not at the gem layer. But this is so hard to get right. It's very hard for people to have the knowledge and all the understanding in every case. You may have interns who are new to the code base, and it's their first real programming job. How can we expect them to understand the nuances of where html_safe should go so that they're not introducing cross-site scripting vulnerabilities to the Internet at large? So how can we fix this? Well, with the cross-site scripting, we go through our process where we ask ourselves, are we actually vulnerable? Yeah, it looks like it. Is it pretty important? Well, it's our home page. Is it accessible by the Internet? Oh man, yeah, it looks like we've got to do something. So now we've got to go through our cycle of patching our code, testing it, deploying it, testing it in production. Not very much fun. Taking our time away from building the awesome new features that our customers want. How can we prevent SQL injection? Well, it's very simple. You just need to memorize a long list of rules. If you're calling calculate methods, you have to make sure that the arguments you're passing in are valid table and column names. Always use hashes or arrays when calling delete_all, destroy_all, where. Always use hashes when using find_by. Never use hashes or arrays when using exists, though; there you need to turn the input into a string first. Never pass user input into group, joins, order, reorder, pluck, select, having. And lastly, don't ever try to use find yourself unless you're a security guru, because it's got like ten different ways it can be called with all these different options, each of which has its own rules. So okay, you learn all of that. You audit all your source code, and your boss comes around and says, oh, that's great. Okay, once you're done auditing our stuff, can you audit all our dependencies too? All our gems, Rails itself? Rails has had SQL injection vulnerabilities. I think they had three last year. Okay, you've done that. Now can you teach everyone else around here about security, because we do have a new intern. He's starting next week and we don't want him to add anything that could create a vulnerability. And on top of that, we decided we're going to add a security team. Don't worry, they're going to review every code change, but we've got two engineers now. So if one's on vacation there won't be a bottleneck.
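To make a few of those rules concrete before going on, here is a hedged sketch; the model and parameter names are invented for illustration:

```ruby
# calculate: only pass column names you control, never raw input.
Order.calculate(:sum, :price)              # fine: fixed symbol
Order.calculate(:sum, params[:column])     # DON'T: injectable

# delete_all / where: prefer hashes to hand-built strings.
User.delete_all("id = #{params[:id]}")     # DON'T
User.where(id: params[:id]).delete_all     # fine

# exists?: Rack can turn ?id[]=... into an array that is treated
# as raw conditions, so per the rule above, stringify first.
User.exists?(params[:id])                  # DON'T
User.exists?(params[:id].to_s)             # fine: treated as an id

# order and friends: whitelist instead of passing input through.
SORTABLE = %w[name created_at].freeze
User.order(SORTABLE.include?(params[:sort]) ? params[:sort] : "name")
```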
We've got 40 people committing code, but I'm sure that these two guys can keep up with reviewing it all. That's not a very good solution. So let's go back. Let's think about how we can apply these ideas of meta security. How can we defend against the exploitation rather than the vulnerabilities themselves? So let's think about cross-site scripting. Where can we actually have cross-site scripting? Let's say we've got a template here. Well, the places where we can have it are where there are expressions. That's where a user, a hacker, could try to provide input to your site that includes some script that could somehow get back to another user to be run. So the question we ask ourselves is, should there be any script tags here in any of these expression tags? It's a good question. How do you know? I mean, we as humans can look at it and say, well, the javascript_include_tag helper will probably have a script tag, but I don't even know, in the bottom one, when we're yielding out, what is that yielding to? I don't know. Maybe it should have a script. Well, let's say that we start to wrap the html_safe method. It seems like everything that is going wrong is somehow going through html_safe. It's being misused. So we've wrapped it. What can we do with this knowledge? Well, now every time html_safe is called, we can look at it and we can ask, where is this being called from? If it's being called from a known good location, like a Rails helper such as javascript_include_tag, we can probably be pretty sure that the right thing is happening. You are not actually injecting a script tag directly in. You're asking Rails to provide you a script tag with some content. If you're going that far, you're probably the developer writing the app. If it's not being called from a known good location like Rails, then it's very likely there should never be a script tag there. Again, if you have to hard-code in a script tag, not using the Rails asset pipeline, that's cool, you can do that, but you really ought to be using a script tag helper like javascript_tag or the content_tag helper. So we can look for script tags and make sure that we escape them first, and that should help cut down on the possibility that a gem in the wild has been updated with a cross-site scripting vulnerability because someone threw an html_safe usage in there at the wrong spot. Let's talk about SQL injection. So these are the same examples that we had before, where you've got a query that you're interpolating user input into. The worst way to do things. But even in this case, we know that when we execute this query, there's a specific structure to the query that we expect. So when we execute it with a number, it should have a structure that looks something like this, where there's a letter that signifies every token in the query, and it ends with a number, meaning we're looking for id equals some number. Well, if later on we see a structure that is different, then there's a high chance, a high probability, that something funny is going on. Again, in the drop tables case, you get a different structure at the end. You even get a semicolon, so clearly there are two queries being executed here. But okay, we can find out what the structure of a query is. But how do we know what is expected? We have to be able to know what the app should be doing before we can filter out what shouldn't be happening. We can think of it this way.
Every query that gets executed in your code is executed from some stack trace, at the top of which is code deep inside ActiveRecord, actually calling out to MySQL or Postgres or SQLite or whatever it's doing. But the stack trace includes your application code as well. So here I've got a line in my own test app where I'm querying the car records, looking for cars of a certain make and model. So whenever that line of code is executed, once the query finally ends up being executed all the way down at the database, the stack trace will always be the same. It will always run through the same lines of code. So we can start to learn that. You can say, okay, I see a query coming in from a specific stack trace. The first time, I can learn that the query should look like this. It should have this expected structure. So now when future queries come in, we know what the expected structure is. If a new query has the same structure, it's okay. We let it through. Everything's good. But if it comes through with a different structure, we block it. We say this is obviously bad, and we respond with a 403. So this is how we can block against the exploits of a SQL injection, even if you do the worst thing possible of interpolating query strings and executing them directly. So going back to summarize: it's good to defend against attackers, and to defend against vulnerabilities we should always be staying up to date as much as possible. But realistically, vulnerabilities are always going to exist. Attackers are always going to be out there looking for those vulnerabilities to do something bad. So we need that third level, where we're defending against the exploitation itself. That's what's really going to allow us to process requests and handle things even before we get a chance to patch something, for a zero-day vulnerability that we see in the wild. Now, the reason I'm up here talking to you today is because I'm with an awesome team at Immunio. What we're trying to do is apply these meta security concepts to a whole series of exploitation classes. SQL injection and cross-site scripting are just two of them. We're actually running inside of your application. We're watching your queries, we're watching your templates being rendered. We're looking at the headers of the requests coming in. We're watching for people brute-forcing your login requests. And we throw up mitigation strategies where we block SQL injection attempts, where we slow down logins from specific IP addresses where we see brute forcing coming from, or we throw up captchas. We're taking this model and applying it everywhere we can get our hooks into a Rails application. So I want to thank RailsConf for giving me this opportunity to talk to you. I want to thank you for coming, especially given the time change and everything. And I'd be happy to answer questions here. We actually just announced this week at RailsConf. This is our big unveiling. We're taking beta signups. We'd love to hear from you, get a sense of whether this product is doing a good job of alerting you to the threats coming in against your servers. So we encourage you to come find me. Even after the talk, I'll hang out in the hallway. Come find us down at the booth in the exhibit hall. We're still here until the end of the day. And chat with us. We'd love to have you try out our stuff and defend your app. So thank you. Yeah. Right.
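The talk describes both mechanisms only on slides, so here are two toy Ruby sketches of the ideas; every name is invented, and a real product of the kind described does far more than this. First, wrapping html_safe and asking who is calling it:

```ruby
require "erb"

# Toy "meta security" wrapper around html_safe.
# Assumes ActiveSupport is loaded so String#html_safe exists.
module HtmlSafeGuard
  TRUSTED_CALLERS = %r{action_view/helpers}  # e.g. javascript_include_tag

  def html_safe
    trusted = caller_locations.any? { |loc| loc.path =~ TRUSTED_CALLERS }
    if !trusted && self =~ /<script/i
      # Not vouched for by a known-good helper: escape instead of trusting.
      return ERB::Util.html_escape(self).html_safe
    end
    super
  end
end

String.prepend(HtmlSafeGuard)
```

And second, learning a query's structure per call site and blocking deviations:

```ruby
EXPECTED_SHAPES = {}

# Collapse literals so only the query's structure remains.
def shape(sql)
  sql.gsub(/'(?:[^'\\]|\\.)*'/, "S").gsub(/\b\d+(\.\d+)?\b/, "N")
end

def check!(sql)
  site    = caller_locations(1, 1).first.to_s       # app line issuing the query
  learned = (EXPECTED_SHAPES[site] ||= shape(sql))  # first run: learn the shape
  raise "403: unexpected query structure" unless shape(sql) == learned
end

def find_user(id)
  check!("SELECT * FROM users WHERE id = #{id}")
end

find_user(5)             # first run, learned: "... WHERE id = N"
find_user(7)             # same shape, allowed
find_user("5 OR 1 = 1")  # shape becomes "... id = N OR N = N", raises
```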
So he asked, what is, let's say, the persistence and the learning behind the SQL injection defense, where you have to tie a stack trace to a SQL statement structure. The way that we address this is, the first time a query is executed from a given line of code, we learn the structure. After that, it gets locked down, and we make sure that all future structures must match. So to a certain extent, we don't have a learning period. It's just the first time that that line of code is executed. But beyond that, once we've learned it, we communicate it to our back-end service, which helps disseminate it to all the other application processes you have. Because most people run, whether it's Unicorn or Puma, multiple servers and processes, and all of those need to get the information about what the expected structures for SQL queries are. So we have a back-end service where threat information is sent. But part of the data we send is just what the expected SQL structures for given lines of code are, so that they can be disseminated to all of your application processes. Yeah. That's a really good question. So the question was, what if there's a possibility you build up a query and it could have different structures, and then there's a different line of code that actually executes that query? And we do see that from time to time, whether it's because people are building it up manually using string interpolation, or because they may be generating a query using the Arel layer, A-R-E-L, which is the library that ActiveRecord uses under the covers to generate a query. So you can do that. What we're doing to address that is, A, it's a minority of cases, so many people won't hit it. B, when you log into our UI, you can see these types of false positives, where we say, okay, the second time it came through, it had a different structure. And when you're looking through it, we have a button that says, you know, this was a false positive. When you click that, we learn that structure as well. So if there are, say, five different forms a query could take for a given line of code, you teach it that a few times and we take care of it. I'm going to wrap up, but I'll be in the hallway if you have any further questions. So thank you. Thank you.
|
Rails comes with many powerful security protections out of the box, but no code is perfect. This talk will highlight a new approach to web app security, one focusing on a higher level of abstraction than current techniques. We will take a look at current security processes and tools and some common vulnerabilities still found in many Rails apps. Then we will investigate novel ways to protect against these vulnerabilities.
|
10.5446/30683 (DOI)
|
So, hello. My name is Sebastian. I live in Bogota. That's the capital of Colombia, in South America. You can find me as sebasoga on Twitter. I work for a startup called Ride. We are reinventing the way people commute to work, so you should definitely check us out. Today, we're going to talk about microservices. Now, I don't want to give an introductory lecture about microservices; there are a bunch of those out there. They're really good. You should check them out if you haven't. Today I want to talk about why I think sometimes microservices are a bad idea, and not just sometimes, like a lot of the time. So let's start by defining what microservices are. Microservices are a particular way of designing software, basically. By designing software this way, we are creating a lot of small applications that can be independently deployed. So, a lot of people have discussed whether microservices are just service-oriented architecture. And they kind of are. Microservices are just a subset of service-oriented architecture. It's just a piece of it with a set of really well-defined rules. You could say that microservices are to service-oriented architecture what Scrum is to Agile, basically. And microservices are not just a new thing that hipster developers like to talk about. There are probably people that have been doing microservices for 10 years. It's just a name that was given to that subset of rules within the broader service-oriented architecture idea. So there has been a lot of controversy about how big a microservice should be. The problem with that is that microservices have a flaw by design, and that flaw is in the name: it includes the size of the service in its name. And it's kind of confusing, because micro says that they should be really small, and the size here is not the most important thing. It's just part of what microservices are, but they could probably have been given a better name. So a lot of people like to say that a service can be considered a microservice depending on how many lines of code it has. That, I don't know, can be confusing. It changes a lot depending on the language you're using, probably. Some people say that you can consider a service a microservice based on the size of the team that's building it. For example, at Amazon, they have the two-pizza rule. And that makes sense, but I don't really like those approaches. There's another type of approach, which is really arbitrarily defined rules. For example, Chad Fowler likes to say that microservices have to be this big, and this big being basically how big your hand looks on the screen. So if the code you wrote is bigger than that, it's not a microservice. That's kind of arbitrary, and I don't really like that definition. So to present the definition I think we should all have in mind when talking about microservices, let's also talk a little bit about cognitive psychology. There's a concept in cognitive psychology called cognitive load, which refers to the amount of information a person is trying to process at any given time. And there's another concept I want to introduce that's called cognitive limit, which refers to the maximum number of chunks of information a person can process in working memory at any given time. So these are really related.
So basically, I think you can say that a service is a microservice if the cognitive load that service represents is lower than the cognitive limit. So let's talk a little bit more about the cognitive limit. Basically, the cognitive limit refers to how much information you can hold in your head at a given point in time. So if you're able to completely wrap your mind around a service, totally understand what it's doing without the need to look into code or documentation, you could call that service a microservice. But you have to keep something in mind, and it's that when talking about a group of people, the cognitive limit tends to decrease as the team grows. So basically, the bigger the team, the lower the cognitive limit is. This is a totally made-up graph, so don't look for numbers or axis limits, but I think it really explains what I'm trying to say. So before we go into more details about microservice-related stuff, I want to tell you a personal story. That personal story will give us a little bit of context. So the story is about the first job I had. And that job was at a baseball stadium. My job consisted of selling hot dogs on game days. And it turned out I was good at it. So my manager asked me to also sell foam fingers. And that went OK. But when my manager asked me to make some public announcements between innings, it sounded a little crazy. But OK, I now had to also do that somehow. Weirdly enough, I was able to do all of that. So my manager asked me to mow the lawn before each game. So I was basically doing that too before each game. Eventually, my manager lost an employee, so he asked me to help him accommodate people in their seats, after mowing the lawn and while people were getting into the stadium for the game. By the way, I'm sorry I couldn't find a better image of a ticket on the internet and had to use one that's for One Direction. So, because things weren't bad enough already, One Direction showed up for the game, and I had to accommodate them. That was definitely my last day on the job. I had to quit. OK, that's the story. Probably by this point you've already realized that this is not a true story. I'm just trying to make a point here. So in that order of thoughts, I want to ask a question to everyone. Please raise your hand if you have a system that's probably going over your team's cognitive limit and you want to break it down into other systems. OK, cool. Thanks. So this is not uncommon, and it happens a lot. We get into this situation when we have a monolithic application. So DHH talked about this earlier today, and I kind of disagree with his concept of what a monolithic application is. So let's look at it a little bit. I think that having a system that's basically a single Rails application doesn't make that system or that application a monolithic application. What makes a single Rails application a monolithic application is basically poor object-oriented design. So let's separate those concepts. The fact that your system is composed of only one Rails app doesn't make that Rails app a monolith. A single Rails app can be well-designed and well-built. And there's actually a talk that was given recently, I think at Ruby on Ales, by Akira Matsuda, which was really interesting.
He talked about how Cookpad, which I think is a Japanese company, built what's probably the world's largest Rails application. In the talk, he gives really good insight into how they achieved it and the amount of traffic they can handle with one Rails application that's running on, I might be lying, but I think it's like 300 servers. So although I think they probably do too much to keep their system a single Rails application, and I don't know if I would go to that extent, you could do it. And if that's the way you like to roll, that's fine. And that doesn't mean that your application is a monolith. So you can call them integrated systems if you want. That's fine. You can give them whatever name you want. But let's be honest. A monolith doesn't look like this. What's that? A monolith really looks like this. That makes more sense. That's how a monolith looks. That's how you feel when you're working with a monolithic application. When you're working with a monolithic application, you normally can't reuse a part of your system without basically reusing the whole system, or you can't easily change the flow without having to do shotgun surgery, which is really bad. So now that we know the difference between what's really a monolithic application and what's not, when should you go for microservices? I don't know. Let's think a little bit about it. It's an interesting topic. So should you go for microservices when you realize that you have a monolith, that your single Rails app is no longer a well-designed app but a monolith? Well, I would say no. This is a fallacy that a lot of people believe in. I mean, everyone knows that when you have a banana and you smash it, you just get a bunch of little bananas. That's all you get. So the fact that you have a monolithic application doesn't mean that you have to go for microservices. I found this on Twitter, which is something I really like, and it basically says that if you can't correctly design a single application, you won't be able to design a system based on a microservice architecture. And I totally agree with that. You first need to be able to nail a single Rails application before even thinking about going for microservices. So when should you really use microservices? I mean, it's clear why not to, but when should you do it? One good reason to go in that direction is when you have a part of your system that needs to be scaled differently from the rest of the system. This has happened to us at Ride, where we started with kind of a brain application that was coordinating a lot of things, and we realized that some parts of that system needed better performance. So we had to deal with the decision: do we want to scale the whole brain application, which is wasteful, or can we just extract this part and scale it differently, because that's a specific need for it? So this is a really good reason to go towards microservices. Another one is when a part of your system needs to be easily replaceable.
So this happens, and it has happened at a lot of really big companies that have been building and producing software for a lot of years: some systems are like 10 or 20 years old, built in languages that probably no one on the team at a given time knows how to write code in, and running on a server that hasn't been touched for years, and no one wants to change it because they're really scared about breaking stuff. So when you have your business logic spread across different small microservices, it's really difficult for this to happen, because if you have a microservice that's failing, that's not scaling as it should, that has a bug or whatever, and it's built on technology or running on a server that no one on the team understands, you can easily replace it. Just build it in whatever language you want, whatever you think is best, and replace it. And you don't have to replace it right away. You can do it gradually, testing at every step that the new service really will be able to replace the old one. And this is really cool. This is great. You don't need to do a big rewrite. You don't have to throw the 20-year-old service in the garbage and start from scratch, probably recreating all the bugs that were already fixed in a huge application. Okay. So let's talk about another reason to think about going for microservices, and that's when you want to be able to deploy some parts of your system more often than others. And this is really common. When you have a big Rails app, there are some tools that actually allow you to check this, and you notice that there are some parts that have greater churn, that you're changing all the time, probably because business logic is changing a lot, probably because the code is hard to understand, so there are a lot of bugs in that specific part. There are a lot of reasons. But sometimes you have to change parts of your system more often than others. And when you have a big single app, this means that you have to deploy your app every time you change a small part of your system. And this costs a lot. Also, the bigger the app, the more you're afraid to deploy it to production, so you tend to accumulate more changes for each deploy. And when you accumulate a lot of changes for each deploy, there are a lot more possibilities for a deployment to go bad than when you constantly deploy small changes. So when this happens to you, it's also a good idea to think about microservices. Okay. So another reason why you should look at microservices is when you don't want to use the same technology stack for every part of your system. You have to be aware that this can be taken to an extreme. There's a blog post out there by SoundCloud, I think, where they explain that when they moved to a microservice architecture, they allowed anyone to write a new microservice in any language they wanted to. And this ended up being a huge problem for them, because there were microservices in basically every living language out there, every language that's still under development.
Anyone who wanted to learn a new language would basically create a new microservice in that language, which can be really cool, but can also be really hard to handle. So for example, at Ride, what we do is, when we need to create a new service, whoever's going to work on it, or whoever's going to be responsible for it, basically writes a proposal and says, okay, I'm going to create a new microservice; here is basically its contract, its responsibilities, and I want to use this tech stack to build it. And then we open it for proposals, sorry, for comments. So we do an RFC, a request for comments, and basically everyone on the engineering team has a period of time when they can give feedback on that proposal. And that feedback is on design, too. It has to do with the business value that each microservice adds to the whole problem we're solving, but it's also about the technology stack we're going to use, because the whole team is going to be responsible for maintaining that new microservice. So we probably want it to bring something that makes sense for the problem we're fixing, but also something that we want to maintain, something that we think makes sense, and not some crazy new language that someone just wants to learn and experiment with. So that's called technology heterogeneity. I know. That word's confusing to me. So last but not least, the most important thing is that the team that's working on building your system has to be ready to support this type of architecture. And it's key. This is really important, and it's what a lot of people moving towards microservices miss. The skill set of your team has to be ready to support what moving to microservices means, which we'll talk about a little further during this talk. Also, the team has to be big enough to be able to work on different services, basically on a distributed system. Sometimes people overload a really small team with this type of architecture, and that's when they start to see a lot of problems with it. So you have to be really aware of this. So how do you know if your team is ready to go towards a microservice architecture? There are basically three things that are key. The first is that you have to be able to easily provision new servers; basically, you should be able to provision new servers whenever you need to. The second is that you need to be able to set up basic monitoring for every new microservice you create and for the existing ones, and you also need to be able to respond to the failures that the monitoring you set up detects. And the last is that you need to be able to rapidly deploy new applications. So if you have a new microservice that you need to get into production, the team should be able to get it into production in a few minutes, not a few hours, not a few days, not a few weeks. That's really important. If your team is not there yet, you're probably better off not going towards a microservice architecture. So now, let's say that your team is ready to handle all of this, which is great. You could say, okay, I'm ready. Let's go for microservices. I really love the idea of it. We've given it a really long time and discussed it, so let's do it. But you have to consider the downsides of it. And they're not trivial.
So let's see what the most common downsides of microservices are. The first is that it's DevOps intensive. As I mentioned before, teams need to be good at DevOps, need to be ready to support a lot of services running all the time. And this can kind of be mitigated by using a platform as a service, which will probably do a lot of the work for you. And that's good. If that's the way you decide to go, I think that's fine. Another downside, or more than a downside, something you need to be aware of, is that you will have log files for every service that's running. And this can make debugging really complicated and hard. So you need to have centralized logs for your whole system so that you can watch what's going on there. Also, something that's really important is that you probably want to have unique identifiers for every request that's going through your system, so that if something goes wrong, you can easily trace it and figure out where things are going wrong and how to fix them. This is really important, believe me. So another downside of microservices is that you have to be prepared for failure. I know this is not something you should only worry about when you have microservices. You should also be prepared for failure when you have a single app, Rails or not, it doesn't matter. But the thing is that when you have microservices, this is harder to manage. You need to take a lot more things into consideration to be prepared for failure. So let's look at what the most common failures are when you're dealing with a microservice architecture. The first one is network partitions. This is something that you will inevitably have to deal with when working with distributed systems, even if they're not microservices. And this is really important, because there's no such thing as a network that doesn't fail or a service that never goes down. We all know that sometimes these things fail. So we need to be prepared for it. So for example, let's say we have a system that looks like this. And let's say that communication between node or service A and B goes down. And then we get a request on node A. So node A has to decide between basically two options. Node A can respond to the client telling them, hey, there's something wrong going on, I can't really process your request right now. Or node A can also say, okay, I'm going to deal with this. I'm going to respond with something, and then I will internally deal with the consequences that having no communication to node C will bring to the system. So this is something that's not trivial and something that you need to plan ahead for. Okay, so the thing is that this is not the only problem. You can also have services that are sometimes unavailable. So even if you have the perfect network that never goes down, services will eventually be unavailable. So there's a formula for this: basically, the probability of a failure occurring somewhere in your system is equal to one minus the probability of a single node or service not failing, raised to the power of how many of those you have. This is like, wow, I don't know. It's complicated. When I first saw this, it was really hard to get my head around. So let's just see an example. Let's say you have a bunch of microservices. All of them have an uptime of 99.9%, which is great.
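Restating the formula the speaker is describing in symbols, with $n$ services each staying up independently with probability $p$:

$$P(\text{system failure}) = 1 - p^{\,n}$$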
But you have 40 microservices. This means that your system, applying the formula we just saw, will have an availability of 96.1%, which in other words means that you have roughly a 4% probability of something going wrong in your system at any given moment. And this is only considering that if a node or service fails, only that service fails. Depending on how you design your system, if a node fails, a lot of other related services can fail too. So this can get really bad. You need to be prepared to handle these types of failures. So let's look at another one: data inconsistencies. When the network is partitioned or a service is unavailable, this will most probably cause data to become inconsistent. And this requires planning. You need to know how you're going to deal with it so that data is eventually consistent, or, if you don't care about consistency, how to deal with data not being consistent in your system. And this is not trivial, because it can really make your system not work as it should. These problems we just talked about are basically part of something called the CAP theorem, which deals with these types of failures in distributed systems and really explains not only how they can affect your system, but also how you can deal with them depending on your business requirements. That's the most important thing. So basically, CAP stands for consistency, availability, and partition tolerance. And when you have a distributed system such as a microservice architecture, you have to deal with partition tolerance, because your system is partitioned. It's not something you can opt out of. But you have to balance how to be available and also have consistency of data at the same time. So this is really important if you're thinking about microservices. If you haven't heard about the CAP theorem before, you should really read about it. I really recommend it. It's really interesting. But that's a topic big enough for another talk. So let's talk a little bit about what to do when you want to build microservices. Let's say your team is ready. You can deal with all the downsides of it. You're ready for it. So how do you start? First of all, when you're trying to break a big application into small services, use the bounded context pattern, which basically deals with large models by giving them the ability to have specific behavior based on the context they're in. Like in this image, you see that we have the customer model and the product model in two different contexts: the sales context and the support context. Depending on the context, each of these models will probably have different behavior, and you'll probably want access to different data related to it. So it's really useful. Also, something you really need to do, that I've found really useful, is having a specification for each microservice and being able to run that specification as executable tests. Testing can basically be a pain with microservices. So one way you can handle this and make it easier to work with is by having specs that you can run as executable contracts. So now when you test each microservice, you can test it against that executable contract and make sure that when everything's running together, when everything's deployed, it will work fine together.
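The talk doesn't name a specific tool, but a minimal hand-rolled version of such an executable contract might look like this; the endpoint and its fields are invented for illustration:

```ruby
require "json"

# The contract: what GET /users/:id must return, as field => allowed types.
CONTRACT = {
  "id"    => [Integer],
  "email" => [String],
  "admin" => [TrueClass, FalseClass]
}.freeze

def satisfies_contract?(body)
  data = JSON.parse(body)
  CONTRACT.all? { |field, types| types.any? { |t| data[field].is_a?(t) } }
end

satisfies_contract?('{"id":1,"email":"a@b.co","admin":false}')  # => true
satisfies_contract?('{"id":"1","email":"a@b.co"}')              # => false
```

The point is that both the provider's test suite and each consumer can run the same check against a live or stubbed endpoint: the contract, not the service's internals, is what the rest of the system depends on.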
And you won't only realize you have bugs when you go to production, which is really bad. And it doesn't matter if you're using an HTTP RESTful API or, for communication, a message broker or RPC. This is something you really need to do that will save you a lot of headaches. You should also be aware that you should go for asynchronous communication over synchronous communication whenever you can. This will make it easier to deal with the types of failures we just saw, the ones the CAP theorem talks about. And another thing that I've found really useful and that we've learned at Ride is that you should use schema-based formats over schema-less ones. To give you a little bit of context: a schema-less format is, for example, JSON, and a schema-based format can be Protocol Buffers or Cap'n Proto, or, I don't know, there are probably a bunch of them out there. The good thing about this is that schema-based formats allow you to extend the data that you pass between systems as you need to, as your system grows and changes. They also help with validations and allow you to stay backward compatible as you grow. And this is really important, and we'll talk about something called the golden rule of microservices in a few minutes. So what should you do? Just start. Create a new microservice when you need it. Be aware that the perfect is the enemy of the good. You don't need to have the perfect design before going to microservices. Just start, and don't be afraid to iterate on your solution. So what shouldn't you do, now that we know how to do it? It's really important that you don't share databases between microservices. Sharing databases is like calling another object's private methods. That's really bad. It's something you don't want to do; it's basically breaking encapsulation. So don't do this. There are a lot of ways around it, and if you're doing it, it's probably because you don't want to think hard enough about how to fix the problem. And if you find yourself in a situation where you temporarily, hopefully, have to deal with this, just make one microservice in charge of writing to the database and have the rest of them only read from it. But still, you'll have a lot of coupling, because if the schema changes, all of the microservices that are reading from it will have to change too. When you share state or when you share a database, you'll feel like this. You'll feel like you're moving on dangerous ground. Something else we saw earlier today is to avoid being biased by Conway's law. This law was coined from something Melvin Conway said in 1968. He said that organizations produce systems whose design is a copy of the structure of the organization. So you should be really aware of this and not just replicate the way your organization works in how your system works, because that's probably going to have you creating microservices you shouldn't, or designing them the way you shouldn't. And most importantly, don't break, as I mentioned before, the golden rule of microservices. This basically says that you need to be able to deploy any service at any time without changing any other service. And this is not trivial to achieve. If you fail to achieve it, you're losing most of the benefits of microservices.
So what should you do? Just start. Create a new microservice when you need it. Be aware that the perfect is the enemy of the good: you don't need to have the perfect design before going to microservices. Just start, and don't be afraid to iterate on your solution. So what shouldn't you do, now that we know how to do it? It's really important that you don't share databases between microservices. Sharing databases is like calling another object's private methods: it totally breaks encapsulation, and that's something you really don't want to do. So don't do this. There are a lot of ways around it, and if you're doing it, it's probably because you don't want to think hard enough about how to fix the problem. And if you see yourself in a situation where you temporarily (hopefully) have to deal with this, just make one microservice in charge of writing to the database and have the rest of them just read from it. But you'll still have a lot of coupling, because if the schema changes, all of the microservices that are reading from it will have to change too. When you share state, when you share a database, you'll feel like you're moving on dangerous ground. Something we saw earlier today is to avoid being biased by Conway's law, which comes from something that Melvin Conway said in 1968: organizations produce systems whose design is a copy of the structure of the organization. You should be really aware of this and not just replicate the way your organization works into how your system works, because that's probably going to have you creating microservices you shouldn't, or designing them in ways you shouldn't. And most importantly, I mentioned it before: don't break the golden rule of microservices. It says that you need to be able to deploy any service at any time without changing any other service. And this is not trivial to achieve; if you fail to achieve it, you're losing most of the benefits of microservices. If you find yourself in a position where you're doing lockstep deployments, where you have to deploy more than one service at a time so the system doesn't break, it doesn't really make sense to have microservices. It doesn't make sense when the things that you should be able to deploy independently, and change as much as you want without the rest of the system knowing, can't be. So be aware of this. If you find yourself in this situation, you have to fix it real quick, or maybe think about going back to a single big application. So, to wrap up everything I've been saying, I would like to bring up two quotes by Michael Feathers. The first one says that he strongly believes there's a lot of conservation of complexity in software. Which means that, for example, with microservices, we need to be aware that when we go in that direction, we're pushing complexity into the interaction between our services, and we need to be prepared to deal with that. So my recommendation is: go for microservices if they allow you to better manage the complexity of the system that you are building. That's all I've got for today. Thanks a lot. Thank you. Thank you.
|
Like an espresso, code is better served in small portions. Unfortunately, most of the time we build systems consisting of a monolithic application that gets bigger and scarier by the day. Fortunately, there are a few ways to solve this problem. Everyone talks about how good microservices are. At first glance an architecture of small, independently deployable services seems ideal, but it's no free lunch: it comes with some drawbacks. In this talk we'll see how microservices help us think differently about writing code and solving problems, and why they are not always the right answer.
|
10.5446/30684 (DOI)
|
Thank you for coming. I'm Sandy Metz. I was a programmer for 35 years. I'm a woman of a certain age. I really am. I wrote a book a couple of years ago, and when it came out, it was like a bomb went off in my life, and I had to quit my day job, I was so busy doing other things. And because I quit my day job, I needed a way to make a living, and people kept asking me to teach, so I finally broke down and agreed to teach short object-oriented design classes. And so then I had to make up curriculum. Boy, that was hard. Curriculum is tough. And so I sat down and I figured out what I thought were the most important set of lessons I could teach in three really intense days. And when I made up the curriculum, I thought the lessons were all unrelated. Completely unrelated lessons. Good things, yes, but not the same thing. I've been teaching that course for the last year and a half, and now, actually, my job is to think about code. What a wonderful thing. At this sort of advanced stage in my career, I'm not driven by deadline pressures every day. I really get to reflect about what makes code good, and how we can make good code, and how to explain to people how to do that. And so in the process of making this course, teaching it over and over again, I've had the leisure to reflect deeply about code. And I finally realized, 18 months later, that I wasn't really teaching a bunch of different ideas; I was really teaching one idea, one simple idea. And I finally understand it. And so today's talk is everything I've learned in the last year and a half, in 30 minutes. And that is going to involve a lot of slides. By my math, I'm going to change slides about every four and a half seconds once we get going. If you want to change seats now, it's not too late to move over to the sides. But if you're here, okay, you've been warned. That's all I can tell you. So this talk is in four parts that build up from the bottom. For this first part: after POODR was published, it became really clear to me, and surprising to me, to find that I had a different idea about objects than many of the people in our community. And I was curious about why that was. And when I thought about it, I decided it was because I'm infected by Smalltalk. I've been writing Ruby since 2005, and I have not yet written Ruby for as many years as I wrote Smalltalk. Now, I say infected because it's funny on the slide, but really I think of myself as inoculated by Smalltalk. I'm going to show you just one small thing about Smalltalk today that will make it more clear how I think about objects. Here's the thing. This is all going to be Ruby code. It's that. Now, you might not be familiar with this, right? There's a send method in Ruby that takes a symbol and invokes the method named by the symbol. And that does the same thing. We also have this. You may be surprised to see that it is this, and it does the same thing. Now, what is one? Well, one is a Fixnum. And what do Fixnums know? They know this, among other things, this list. And you can see on this list that we have that. Now, what this means about Ruby is that this is what's real. This is the truth at the bottom of all things: we're just sending a message to an object. When you say one space plus space one, that is special syntactic sugar put on top of message sending. That's unique. This is normal. That's special. This is real.
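For reference, the three forms being described really are the same thing, as you can check in IRB:

    1 + 1          # => 2, the sugared form
    1.send(:+, 1)  # => 2, explicitly sending the :+ message with 1 as the argument
    1.+(1)         # => 2, calling the + method directly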
And this is just here so it will look like math. It doesn't really matter; they want it to look like math to make it easy. You notice on this list we also see that. And I hope I don't have to convince you that I'm sending == to one and passing one as an argument. When I do that, I get back this. Now, what is that? Well, it's the singleton instance of this class. And what is that class? It knows these things. And so now I'm going to ask you to believe the useful lie. True and false in Ruby are a little bit more nuanced than this, but let's pretend, just for the sake of this talk, that true is just an object, that I'm going to deal with it by sending messages, and that Ruby behaves that way. I was unsurprised by this when I came to Ruby. True is an object. It's an instance of TrueClass. You send a message; that's how you talk to it. That's how Booleans work. I was unsurprised by that idea when I came to Ruby from Smalltalk. But I was extremely surprised by another thing about Ruby, and it was that Ruby had a special syntax for dealing with this object. It was very confusing to me that, in an OO language, there was a special syntax for this. This is the list of the keywords in Smalltalk, and no matter how many times you count them, there will still be six. Here's Ruby. And if you look at that list, you'll notice, among other things, this thing. This is special syntax for dealing with Booleans. Now, this is so unquestioned in most of our minds that the explanation I'm about to give you is going to sound really weird. Here's how you use that special syntax. There's an expression that gets evaluated, and based on the result of that expression, one of the following blocks is going to get evaluated. If it evaluates to true, I'm going to evaluate the block that is before the word else, and if it evaluates to false, I'm going to evaluate the block that is after the word else. So this is really saying this, right? But really, it's actually a little bit more complicated in Ruby, because we have truthy: if it's truthy, I'm going to evaluate the block of code before the else; otherwise I'm going to evaluate the block of code after the else. This is a type check. And we don't do type checks in OO, right? We hate this idea. I don't want to have the syntax. I just want to send a message to an object. I don't want to have to look at the kind of thing it is, make a decision, and then choose between two different kinds of behavior. If you came to Ruby and OO from a procedural language, like most of us did, it probably seemed normal and reasonable to write long conditionals that start with if or case or whatever. But I can promise you that the very presence of this keyword in our language makes it easy to retain that procedural mindset, and it keeps you from learning OO and taking advantage of the power of objects. And to show you how unnecessary this keyword is, let's just change Ruby to have Smalltalk-like syntax for dealing with conditions: message-sending syntax. Here we go. First we're going to have to break open TrueClass. I'm sure it will be fine. And so I'm going to implement an API. The API is these two messages: if_true and if_false.
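Reconstructed from the description, the monkey patch adds up to something like this (a sketch; the slide code may differ in detail):

    class TrueClass
      def if_true      # a true evaluates the if_true block...
        yield
        self           # ...and returns self, so the calls can be chained
      end

      def if_false     # ...and ignores the if_false block entirely
        self
      end
    end

    class FalseClass   # exactly the opposite of TrueClass
      def if_true
        self
      end

      def if_false
        yield
        self
      end
    end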
So in TrueClass, the if_true method, and I'm taking advantage of the fact that all these methods take an implicit block, the if_true method is going to yield to the block. Now, I'm going to return self; just ignore that for now, it will be clear later why we did it. The if_false method: TrueClass's implementation of if_false does nothing. It does not yield to the block. And so then, once you see that, you can pretty much guess what has to go in FalseClass. It's just the opposite: true is going to yield in the if_true method, false is going to yield in the if_false method. And so if you break the classes open and make this monkey patch, now you can write this code. Remember that if_true does the yield, so if I send it to a true, it will evaluate the block. If I send that same message to a false, nothing happens. And the opposite is true when I'm dealing with if_false: if I send an if_true to an instance of false, it's going to ignore the block, and if I send an if_false to an instance of false, it's going to evaluate the block. It's totally easy. That's how it works. Now, it's not quite right, because we really need truthiness. And that's so easy to do: let's just promote this up to Object, and I'll duplicate this code in NilClass. If I do that, if you write this code, which by the way was shamelessly swiped from wycats (thank you, Yehuda, if you're here), now I can do anything. Everything is truthy, and nil and false are falsy. That's all it takes. You don't need this special syntax. I can now replace this with that. And you can see here, this is why they return self: so they can be chained together. And I can replace this with that. We do not need special syntax in an object-oriented language to deal with Booleans. We can just send messages to objects. We can do the normal thing. Now, having shown you this, I am not suggesting that we do it. I'm not. Don't tweet that. I'm not. But what I want you to do is think about what it would mean to you. How would you think about objects if there were no if statement? What would it mean to your conception of how to write object-oriented code? The fact that I was trained in OO by a language that did not have an if statement made me permanently, irrevocably, and unrepentantly condition-averse. I hate them. I just hate them. I grew up without them. Here's the condition I really hate. We have an Animal class. It has a factory method, find, that takes an ID and returns an object. If you pass it an ID that it doesn't know, it gives you back nil. So perhaps you get an array of IDs, and you don't even know what IDs are in this array. You call Animal.find on all of them, you get back this list, and you start talking to them. Boom. And so before I go on, I want to concede that sometimes nil is nothing. And if nil is nothing, if you do not care about that nil, right here you can do this: you can throw the nil away, you can compact the array, and then when you talk to the objects, it all works. However, if you're sending it a message, nil is something. And what we have to do is fix it here, so that it doesn't blow up when the nil comes. Now, often what happens at this point in our code is that we put a condition right there. We add this condition. This is the most verbose form of it. This is a case where I want to say 'no animal' when there's a nil. This is like saying 'guest' when there's no user logged in; it's that exact situation. So in this case, now they all respond. I get the right list back if I try to talk to them. But of course, we're Ruby programmers, and that's way too ugly.
So we're going to start doing this. I'm going to use truthiness to make that a prettier line of code. And of course, it means I've lost my ability to substitute that string in there, but it does work. It does not blow up. But of course, if you're a Rails programmer, we've got try. Now, I'm not saying not to use try; I use try myself. But let's be honest about what's going on here. This is really that, which is really this, which is really that. And that, if you write it all out, looks like this. And that, okay, it gets worse, because I want that. And that is this. And that's what I was complaining about in part one. All right, but it's even worse. So this is the general case, right: code to do some stuff, code to do some other stuff. Here it's actually worse, because what we're saying is: if I know what type you are, I will supply the behavior; otherwise, I'll send a message. This is absolutely terrible. And the core problem here is that conditions always get worse. Conditions reproduce. If you put that string, 'no animal', in your code, what you're going to have is this. It's going to be all over. And the day you decide to change that value, you're going to end up doing a thing they call shotgun surgery. It's everywhere. So I hate these conditions. I'm extremely condition-averse. What I am instead is message-centric. I don't want to know these things. I just want to send a message to an object. I want to send this message. And the root of the problem here is that sometimes I get this object back, and it knows name, but then sometimes I get that other object back, and it does not. The objects on the list that get returned to me conform to different APIs. What I need down here is someone, some thing, to which I can send the name message and get back the value 'no animal'. And so let's write the code we wish we had. If only I had something like that. If I had that object: okay, here's the first really high-level idea. I would prefer to know an object than to duplicate that behavior everywhere. And so if I can create that object, here's how I would use it. You can double-pipe (||) it in right there. And if you do that, it will change this list so that everything on it understands how to respond to me. All right, did this improve the code? Well, I just added a dependency. Awesome. I still have the conditional; just by the fact that I'm hating on it, you know it's still there. But something is better, and it's this: I no longer own the behavior. And that means all of this code down here can disappear, and I can do that. And now, thankfully, I can also do this. I can just talk to the object I got back, and the results are correct. Everything now works. Now, this thing, this idea, this concept has a name. It's called the null object pattern. It's a famous pattern, right? It's been described; some guy named Bruce Anderson made up a beautiful term for it. He calls it the active nothing. The active nothing. Isn't that beautiful? I love that. And so if we do this, now, I conceded that we added a dependency and we have that condition. But once you get here, well, let's do it here. Let's do this. So I told you I would prefer to know an object rather than duplicate behavior, but the corollary to that is: I don't want to know very many objects. But once we get here and isolate that behavior in an object by itself, well, sorry, here. Okay. Yes.
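A sketch of that active nothing; the class and variable names here are reconstructions, not necessarily what was on the slides:

    # The null object: it responds to the same messages as an Animal,
    # standing in for the nils that Animal.find returns.
    class MissingAnimal
      def name
        "no animal"
      end
    end

    ids.map { |id| Animal.find(id) || MissingAnimal.new }
       .map(&:name)   # every element now responds to #name; no nil checks left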
I conceded that we're doing that, but it's really easy to fix, because you can take that untrustworthy external API and wrap it in your own object. You can catch that message and forward it on, and you can put the condition right there, in that one place. And then all the places in your code where you had to do this can now just call your own trustworthy API. And when you do that, you get this list back, and you can talk to every one of them like they're the same thing, and the list just works. This is awesome. It's the null object pattern. It makes a dramatic improvement in the code. And if that's the only thing you can take away from the talk, like if I lose you in the next section, not that I'm saying I will: if you can take this home and use it, it will improve your code. But the thing I have learned in the last 18 months is that the null object pattern is a small, concrete instance of a much larger abstraction. It is an example of a really simple idea that's very, very large. And so this section, part four of this talk, which is approximately the length of everything you've seen so far, is going to explain that next abstraction. In order to do it, I'm going to have to switch examples. So: The House that Jack Built. This is a cumulative tale that kids learn. There are bits in it, and you get a new bit every time, stuck in front of the bits that are already there. It has 12 different bits, so every line gets longer and longer. Eventually you get to this. If I asked you to write the Ruby code to produce this tale in such a way that you did not duplicate any of the strings, you would probably do something like this. You would take all the bits and put them in an array, maybe put that in a method, and probably have some kind of phrase method. Now, you're probably familiar with last on Array: if you send last to an array, you get the last thing back, and it takes an argument so you can get the last n things out of an array. So in this case, if I pass three to last on data, I would get 'the rat that ate', 'the malt that lay in', 'the house that Jack built' back. That phrase method also turns that array back into a string by joining on spaces. I need a way to put the 'This is' at the front and the period at the end, so I probably have some kind of line thing that takes a number and calls phrase to get that middle bit. If I want to recite the whole tale and print all the lines, I'm going to have to loop as many times as I have bits, call line for each one, and put a newline at the end. I'll probably put the whole thing in a class. And if I write that code, I can do this: line at one is going to be that, line at two is this, line at three is that, line at 12 is that. I can do the whole thing with recite.
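Reconstructed from that description, the first version might look like this; only the last three of the twelve bits are shown:

    class House
      def recite
        (1..data.length).map { |number| line(number) }.join("\n")
      end

      def line(number)
        "This is #{phrase(number)}."
      end

      def phrase(number)
        data.last(number).join(" ")   # the last n bits, joined on spaces
      end

      def data
        ["the rat that ate",
         "the malt that lay in",
         "the house that Jack built"]
      end
    end

    House.new.line(3)
    # => "This is the rat that ate the malt that lay in the house that Jack built."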
So let's imagine that you've written this, right? You have an application, and for whatever reason they've asked for this. They love you for having done it. You did it quickly. It totally works. And of course now they want something new. They want a new kind of house; we're not getting rid of house, we're adding a new feature. And this is called random house. Here's the spec: they want you to take this array of bits, and one time, before you start producing any of the lines, they want you to shake it up. They want you to randomize it. Now, you notice in this case, with this one random version, they're different every time, but in this case it ends with 'the rat that ate', 'the maiden all forlorn', 'the cat that killed'. And so the random version of this tale would be: This is the rat that ate. This is the maiden all forlorn that milked the rat that ate. This is the cat that killed the maiden all forlorn that milked the rat that ate. And then the whole thing would be like this. And I suspect from your laughter that you've already noticed that many variants of this seemingly innocent tale are not safe for work. So all I can say is that it's an equal-opportunity offender. So: South Park from here on. So here's your job, right: you cannot break house, and I want you to implement random house without using any conditionals. No conditionals. Can't use a conditional. So you're probably thinking inheritance. And inheritance is really attractive, because it totally works. Watch. Subclass House, override data. Shuffle is a method on Array that randomizes it. I'm going to have to cache the result, because I only want to shuffle it once and then produce the whole tale. If you write this code, random house totally works: rat that ate, maiden all forlorn, milked, cat that killed. If you look in there somewhere, that priest is marrying a man who's kissing the horse. So that took about two minutes to write, and they think you're a total genius. And so what is the next thing that happens? They want something else. Of course, right? You're incredibly successful. Now they want something called echo house. Here's how echo house works: every bit gets duplicated as it goes in, so it's got this echo effect. This is the house that Jack built the house that Jack built. This is the malt that lay in the malt that lay in the house that Jack built the house that Jack built. This is the rat that ate the rat that ate the malt that lay in the malt that lay in, and so on. So that's what we want: echo house. Now, I'm going to have to do a slight refactoring before we get going here. Really, that's the bit I need to change. I have this method that's got more than one responsibility, which makes it hard to reach in here and change just one thing. So before I implement echo house, I'm going to do a tiny bit of refactoring. I'm going to isolate that bit of code. I'm going to pull it down into... oh my God, the naming is so hard... I'm going to call that parts. Then I'm going to send the message parts there, and then I don't have to care about phrase anymore. So right now, if the number is three, this method returns this array. Echo house would work if I could somehow change it so that I got that back instead. And as I said before, your task is to do echo house, but you may not use if statements. What are you going to do? (I'm waiting on the train that's been going by during the talk.) How are you going to do it? We all know how we'd do it; we're already going down the inheritance path. And it turns out it is incredibly easy to solve this problem with more inheritance. I'm going to subclass House and override parts; I'll talk about that in a second. So: super. I called super twice. Zip is not what you think; zip is not compress, it's zipper. It does this pairwise connection of those two arrays. And then, although it is not necessary to flatten it, I cannot stop myself from doing it, because I know it's supposed to be a one-dimensional array.
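Sketches of the two subclasses as described, again reconstructed rather than copied from the slides:

    class RandomHouse < House
      def data
        @data ||= super.shuffle   # shuffle once, then memoize so every line agrees
      end
    end

    # First, the tiny refactoring: isolate the piece echo house needs to change.
    class House
      def phrase(number)
        parts(number).join(" ")
      end

      def parts(number)
        data.last(number)
      end
    end

    class EchoHouse < House
      def parts(number)
        super.zip(super).flatten   # pair each bit with itself: a, a, b, b, ...
      end
    end

    EchoHouse.new.line(1)
    # => "This is the house that Jack built the house that Jack built."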
You can be forgiven if you make the other argument; I would certainly forgive you if you did. And so if I were to write that code, echo house totally works. And it took me about three minutes, and my customers think I'm a genius. It is awesome. This is why inheritance is so seductive. So here's what we have: House; RandomHouse, which overrides data; and EchoHouse, which overrides parts. Can you make a guess? Oh, here it comes. Yeah: random echo house. And we don't even need House anymore, because that's really not the problem. So what are you going to do? You're screwed. And don't tell me, do not insist to me, that modules are the solution to this problem. They're not, and I don't have 100 more slides to prove it. Go write that code: you can dry out the code, but it is not the real solution to this problem. It's just another form of inheritance. Once you get on the inheritance path, wanting this sort of cross-pollination of two subclasses you already have: just stop that. We are not using multiple inheritance here. It's not the right solution for this problem. I can subclass RandomHouse and inherit data and duplicate parts, or I can flip-flop it and inherit parts and duplicate data. Now, both of those choices are so obviously wrong, so obviously misleading, that when I go places and I see people who have encountered this problem, very often they choose neither one. They do not choose to duplicate some of the code. What they do is choose to duplicate all of the code. And I am sympathetic to the contention that that is more correct: they just subclass House and copy it all down there, and there is a way in which that is more honest than putting it on one side or the other. Okay. So this all seemed so good, and it has gone terribly wrong. Now, I want to draw you a picture; I think that will help illustrate exactly what's going on here. So this is the surface area of House. This is not UML, it's just a little pointy thing, right? It feels like RandomHouse is that big and EchoHouse is that big. But really, the truth here is that RandomHouse is that big, and it contains everything else that it did not specialize out of House, and EchoHouse is that big. And it doesn't really matter; we can just throw House away. We have objects of this size, of this surface area, and if you come over here and you try to inherit from them, you get these things. You cannot get that. (Don't we love effects?) You cannot get it. And if you flip-flop it, wait for it, right: it doesn't work. Really, if you think, okay, what I want is multiple inheritance: it is not the solution to this problem. We do not want multiple inheritance. This is not what's going on here. And if you go down the inheritance path because it seems easy, don't compound your sins by going further down that path. There's a better idea here. All right, no, not that. Okay. And the words for this are: inheritance is for specialization. It is not for sharing code. It is for specialization. Now, we have this jargon that a subclass 'is a' kind of its superclass, right? That's the relationship. And if I were to ask you, is RandomHouse a House, you would say, well, yeah, it's a house, a random house. Of course it's a house, right? So we fooled ourselves because of the names we chose. Names are incredibly important, and they can be incredibly misleading. Instead of being trapped by our bad name, let's do this. What changed?
It's really hard to glance at that code and answer that question in an instant. All right, and now we're going to do the next big pro tip. This is a paradox: you can reveal how things are different by making them more alike. Let's do that with this code. I'm going to make that method as much like this method as I can; I'm going to make these data methods as identical as I can. So first I have to get that out of there. I'll put the actual array in a constant, and I'm going to implement data just to return it. So if I'm looking at the thing down there, now I can keep on doing transformations, like: I could put that there. It's not necessary, but it would work, right? And I could take the super out on the bottom and replace it with data. Now it's still a little bit hard to put your hands on the concept, but it's much easier to see what changed. We have to figure out what that thing is and give it a name. I was lucky enough to teach with Avdi Grimm in the fall, and he suggested to the students who were dealing with this problem that we pretend it's rows and columns in a spreadsheet: write them down like this and then label every column. What is the header of the first column? Well, that's the class. I suck at names, so let's call the next one data. The really interesting question here is: what is this? We called the subclass RandomHouse, but this thing is not random; random is an instance of whatever this thing is, right? What is it? It's order. (Okay, see, you guys, you're like too nerdly. That's your problem. Too much nerd cred.) The thing that we're changing is order, right? And so if order is that thing, then this is not nothing. This is an algorithm, and it's just as valid as the other one. And so now that you know its name, if you ask: is order a house? That is so clearly wrong. That is absolutely wrong. Order is a role. And so let's write the order roles. Here's one. I'm going to make the random one. I'm going to call the API order; I'll have it take an argument, and it's going to shuffle it. What's the implementation here? Yeah, it's an algorithm. It's real. And so if I want to use this: let's throw that subclass away. That didn't help; that went badly wrong, right? I'm just going to move this code around. I'll put an attr_reader for data. And you know what's in that, so we can get rid of that; I need the space. So we're going to remove the responsibility for ordering that array from House, and I'm going to do it by using this order. I'm going to inject it. It's a named parameter, right? I'm going to inject an order and give that order an opportunity to order the data, and whatever it does is what House is going to have. And it works. So what have I just done? Well, I've got more code to do exactly the same thing. And this is why people complain about object-oriented design, right: because they can't think further ahead than this. But watch this. I've got these two, right? I've got another kind of order. Why don't I just inject that instead? And that just works. All right? This is composition: we're injecting an object to play the role of the thing that varies. How are we going to do echo house? You totally know. You see the answer to this problem already, right? I need something to do this, and then I need another thing to play another variant of this role. And if I have these things, I'm going to inject them, right? We love dependency injection. I'm going to put this...
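Assembled, the composed version comes out something like this (a sketch; the player names DefaultOrder, RandomOrder, DefaultFormatter, and EchoFormatter are reconstructions from the description):

    # Two roles, two players each, injected at construction time.
    class DefaultOrder
      def order(data)
        data              # leaving things alone is an algorithm too
      end
    end

    class RandomOrder
      def order(data)
        data.shuffle
      end
    end

    class DefaultFormatter
      def format(parts)
        parts
      end
    end

    class EchoFormatter
      def format(parts)
        parts.zip(parts).flatten
      end
    end

    class House
      DATA = ["the rat that ate",
              "the malt that lay in",
              "the house that Jack built"]   # only the last three of twelve bits

      attr_reader :data

      def initialize(order: DefaultOrder.new, formatter: DefaultFormatter.new)
        @data      = order.order(DATA)      # ordering happens once, on the way in
        @formatter = formatter
      end

      def recite
        (1..data.length).map { |number| line(number) }.join("\n")
      end

      def line(number)
        "This is #{phrase(number)}."
      end

      def phrase(number)
        parts(number).join(" ")
      end

      def parts(number)
        @formatter.format(data.last(number))
      end
    end

    House.new.recite                                                       # plain
    House.new(order: RandomOrder.new).recite                               # random
    House.new(formatter: EchoFormatter.new).recite                         # echo
    House.new(order: RandomOrder.new, formatter: EchoFormatter.new).recite # both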
Okay, wait. Here, I have to do that again. Sorry. It's got to line up; it kills me. So I'm going to stick it way down here, because I want to intervene at the most narrow part possible. Your mileage may vary, but that's where I put it when I wrote this code. And so now, again, I have exactly the same behavior I had before I created these two new objects and injected them. But I can also inject the other one, and now I've got echo house. So I've defined two roles, and they each have two players. And so instead of getting line 12, let's just recite the whole thing. So here's the set of things I can do: I can get house, I can get random house, I can get echo house, or I can get random echo house. And we no longer have more code; we actually have less code, and there's not one bit of duplication in here. We've made these units of pluggable behavior. Before, when it was inheritance, it looked like this. But really, if there's a specialization, then by definition there's never just one specialization. The new thing is one thing, but the old thing that you're specializing is also something, even if it looks like nothing. So you have to isolate the thing that varies. You have to figure out what it is, and that leaves a hole in House where you have to plug that stuff back in. And you've got to figure out what its name is; here we call them order and format. You have to define that role: we made APIs for order and format. And then later, at runtime, somebody is going to inject the player. Here, wait, let's do that again too. Sorry. I love... it is really one of the few consolations in making talks: Keynote effects. So this is composition plus dependency injection. And this is what it means to do object-oriented design. Understanding these techniques lets you find the underlying abstraction, and getting that abstraction right is going to dramatically improve your code. If you're talking to nil, it's something. Use the null object pattern. You're done checking for nil; stop it right now. Make objects to stand in for those nils and use those active nothings. Next, beware of inheritance. It's easy to begin with, and maybe you should use it to start with, but it's a knife that will turn in your hand, especially if the amount of code that you specialize is a small proportion of the class that you're subclassing. Be very careful if you use it, and be ready to switch to composition. It's not for sharing behavior. The bigger idea, as we move out in scale here, is that there's no such thing as one specialization. When you see that you have a variant, it means that you have two. And you have to isolate the thing that varies, name that concept, define the role, and then inject the players. When you first started writing OO, it was really easy to see that real things could be modeled as objects: the chair that you're sitting on, the person beside you. And it didn't take long, once you started writing code, to figure out that more abstract things could be modeled usefully: business rules, ideas, concepts, processes. And back at the beginning of this talk, I started with Fixnums and Booleans. And you might not have really thought about it; we're so used to those ideas that you don't really think of them as being abstractions, but they are not real. You cannot reach out and pick up a six or a true, right?
But inside my app, that six and that true are as real as the chair. Now, the numbers are an amazing abstraction, but there's a way that the abstraction in numbers doubles up, and that's with zero. We didn't have zero for a really long time. Zero represented nothing, and before we had the idea of zero, there were things we couldn't do. And then after we had the idea of zero, that concept became, in our terms, polymorphic: we could use numbers in new ways once we discovered that the nothing in the number set could be represented by the symbol zero. So your applications are full of things like zero. There are concepts, abstractions, that reveal their presence by the absence of code. I don't want to write a condition; I just want to send a message to an object. And if you want to do OO that way, you have to find those objects. And they're hidden in the spaces between the lines in your code. They seem like nothing, but they aren't. They're always something. It's true that they're something, because nothing is always something. Thank you. I'm Sandy Metz. I'm writing a new book; sign up to get on the list for the beta of the new book there. I rarely get to teach a public course, but I'm teaching the course I normally teach privately in New York City; sign up on my website if you want news about that. Of course I have stickers, but even better, I've got tats. So thank you again. Come and find me later. We don't have time for questions. Thank you.
|
Our code is full of hidden assumptions, things that seem like nothing, secrets that we did not name and thus cannot see. These secrets represent missing concepts and this talk shows you how to expose those concepts with code that is easy to understand, change and extend. Being explicit about hidden ideas makes your code simpler, your apps clearer and your life better. Even very small ideas matter. Everything, even nothing, is something.
|
10.5446/30694 (DOI)
|
And thanks for coming to my talk. I am David Ferber, and I live in Ithaca, New York. I work for a software consulting company called GORGES. Ithaca is gorges, as the slogan goes, so GORGES is Ithaca. And I'm going to be talking to you about rapid data modeling in ActiveRecord with the JSON data type, specifically the JSON data type in Postgres. Another title could be: how I sped up my Rails development by fitting a document store into my ActiveRecord models. And I'll fill in the blanks there as I go along, but I'm going to mainly tell a story of some projects out of which this emerged. Ithaca, maybe you've heard of it, maybe you've not, is a small city in upstate New York. It's a bus ride away from New York City, so we get a lot of business from there. And it's the home of Cornell University and Ithaca College, so we get a lot of stuff from there. It tends to be academic stuff, and it tends to be small-to-medium-sized stuff that Rails can cut right through, so we have a Rails toolkit that we use to do it. And everything for me is often about how to get things done as fast as possible, because I'm often working on three or four or six or even eight things at a time, and I need to get through. Now, even before you get up to rapid data modeling, there are a lot of things to do, which I thought I'd go through. I'm a big fan of SimpleForm, and of learning how SimpleForm works beyond just the top of the README, in terms of how you can use it to make your forms simple; more on that later. And Slim for templating, because it removes a lot of the extra stuff. And taking advantage of view template inheritance in Rails to minimize the number of views. And I like to take common components and put them in engines, like CKEditor. Every time I start a new project and push it up to Heroku: how long does it take to build? At some point builds started timing out, so we created an engine and put the CKEditor build we use on a CDN, and whenever we need it, we just plug it in. And I say: don't waste work. The things I'm going to tell you about rapid data modeling with Postgres, I'm not doing any of it with the intention of simply prototyping and throwing it away later, but of using the JSON in order to build something. And I suggest you always keep reading the instructions: even if you started learning Rails at 1.2 or 2 or 3, they're always adding new things, and this is a story of how sitting down and reading what came out in Rails 4 helped me solve a problem. And of course InheritedResources; I'm a fan of that. The projects I'm talking about use it. I've noticed that it's come out of fashion lately; if you go to the project on GitHub, it says, please don't use this anymore. And so I've tried to cut back. Also, the tenth time trying to explain to new developers what an association chain is caused me to back away as well, but the model works. And most often, when you have a lot of little projects, I've found you should try to be consistent in the way that you do things across all of the projects, so that when you parachute in, things are where they need to be. Now, last spring I got involved with Project Aragats. Like many of our projects, this was: somebody at Cornell got a grant, and they had a project that had been evolving over the years, and they needed somebody to come in, clean it up, and make it work. And this particular one was the archaeology department. Some archaeologists there, two of them, go every year, and have been since 1995.
They've skipped some years, but they've been going to Aragats, which is one of the areas of the oldest human settlements, going back to around 8,000 BC. And they do field work, and they collect artifacts, and then they've been describing them, first in a Microsoft Access database and then in Microsoft Excel. They valiantly did it all themselves, but, with all respect, it needed a database person to look at it and clean it up. And they wanted to expand it and be able to search on it more efficiently than they could; they didn't want to have to learn SQL in order to keep searching their data. And they wanted it to be publicly available to other researchers, so that they could cite it in their footnotes and such. So let me bring you into the world of archaeology real quick. Last summer, when we were working on this, they took a drone with them to Armenia, and thanks to the time that we saved them with the system, they were able to fly the drone around. So here, data modeling for an archaeologist. What you're seeing here looks like a tundra, but they go and look around and see: oh, here's a place where there's been some human settlement. This site, they call a landscape. And here's another one. And they survey a bunch of these sites, and then they decide, when they go in a particular year, which ones look interesting. So there's a database of surveys, of sites that they've looked at, prospective places to dig. And then they decide: okay, this year we're going to focus on that one and this one. That would be called an operation. And then in this operation, they're going to go and set up their tents, put stakes in the ground, measure, and start to do some preliminary digging, and say: oh, this place looks interesting, and this little place looks interesting, and this little place looks interesting, within that confined landscape. And then they're going to start digging with their picks and toothbrushes and whatnot. You see there's a little animal kind of running along; they totally weren't meaning to film a Discovery Channel type of thing. That's fun. So this application is going to be used like this: they take all the pot shards and whatever, put them in bags and boxes, and bring them back to the tents. And then in the evening, they get out some beer, turn on the television, and start measuring and guessing what things are and cataloging. So they have an application where they pull up the place that they're at and all the different things that they can find there, and they can add them. Lots of different things, meant to be used in action, but also meant to be used to answer questions like: okay, last year's Bronze Age pots with rolled rims; I need to be able to find that without writing SQL. So there needed to be this very fancy, and at the beginning very intimidating, search interface to write. The things that they find involve some English and Russian; it's not just in English or in Russian, but the things that are in Russian are presented also in English. And you choose what kind of thing you're looking for, and then you can apply many filters. And when you click Add Attributes, you get a lot of different choices. And then you can pick your filter, check your lists of things, click Submit, and you will get results. So: things, lots of different things. In archaeology, here are some of the kinds of things that they can find at a site.
From a bone, which they call fauna, or a bone object, which is an object made from bone, to pottery, which seldom comes out as a whole pot, to shells, wood objects, all different kinds of things that they find. So all of these things have some metadata in common. Everything has a period in which they guess it might have started being from, and a period in which it ended; if they're really sure that it's from the early Bronze Age, those would be the same, for example. But everything gets periodized, along with the place they found it at, who found it, and anything else that all of these things have. But then each of these things, everything else, is different, and these things can have a lot, a lot of attributes about them. Here, it's just a pot, right? But when an archaeologist looks at it, they see pinch handles and grip handles, and I'm not even sure what those are, and outpour positions, and is it a lid? Is it a neck? And core bands and colors; there are dozens of attributes on here. And a bone is just as complex as that: whether it's been butchered, and how it was butchered, and what it was butchered with. It blew my mind how much they do. And I had originally started with having each of these things in its own table. Actually, somebody else had started the schema, and that's how it was when I got it. And then for the searching, for the things in common, they had this thing called an object registry. And I was racking my brain with the forms, and hooking it all up, and making the search interface, when I read a blog post saying that you can do this in Postgres. And I was like: whoa, I wonder if I can use this. So here's an SQL query that's saying: if we have a single table with everything in it, all the attributes of everything, and we have a data column that's a JSON column, then we can actually search on the keys. Anybody know you can do this? Anybody done this? OK. So I thought: well, I wonder if I can make Rails, or ActiveRecord, do that easily. And so I sat down with a beer. I wanted to see if I could use single table inheritance to have the stuff that's in common in actual database columns, and then put everything that is a variant into this data column. So I would have a common base class. We actually call it entity, but we were having trouble explaining what entities were, so I'll call them artifacts here. The archaeologists call them objects, but in Ruby that's taken, so we had to say: well, guys, let's pick something else. And we settled on entity, but artifact, I think, is clearer for a non-archaeologist. Pottery extends artifact, and all of the specific attributes will be in the data column.
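Concretely, that kind of key search looks something like this through ActiveRecord, assuming the single entities table with its data column; ->> is the Postgres operator that pulls a key out of the JSON as text, and the cast lets you compare it as a number:

    # Search on keys inside the JSON document.
    Entity.where("data->>'material' = ?", "ceramic")
    Entity.where("(data->>'width')::int >= ?", 30)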
And you can select out the JSON and pretend that there columns, all kinds of interesting things like that. But it was all still stored as text and parsed on demand, like James with the presentation about the scrolls, and that you make a query that's got to open up all the scrolls with the JSON. However, you could index the individual keys if you want to. OK? JSON B came along in PostgreSQL 9.4. And if you have access to 9.4 and you want to work with JSON, use the JSON B because it gets stored as JSON. And that means faster searching. And it also allows you to create what's called a GIN index. You can say index all of the keys in this JSON and look up when you say find me a document with this key value pair in the JSON. It's actually faster, I found, than doing it in regular columns. OK? But it's much slower if you're doing greater than and less than. I'll get to more of that later. But if you've got this JSON column, which here I call, I always call it data, then you can do something like this. Store accessor data, width. And when I saw that, I was like, I think I might be able to use this. What does this do? Well, let's say that we have our artifacts. And we say pass in the width is 20. Whoa, artifact.width is 20. Cool. And there's no calling for it. Data width, because it put it in the hash, which is your data field and is serializing it to the JSON. So by the same token, your HTML form with the width where the user type 20 gets posted, it comes in as a string, because ActiveRecord doesn't know what to do with it, because it's not an actual column. And it assigns it as 20. So you have a string of 20. And you ask for the width, it is 20. It's not an integer. Then you get wild, and you ask if the width has changed, and it goes kaboom, because it's not an actual attribute. It's just an accessory that gets stored in your data. However, data changed would become true. Otherwise, ActiveRecord won't know when to save the record. Now, I had to have this massive search interface. I was going to use Ransack somehow. Has anybody used Ransack? OK. So Ransack is a gem written by Ernie Miller that basically glues a form so you have an input that would be something like width greater than or equal to like that. So you have the column and then what they call a predicate. And you can have a form input. And then when you say entity.Ransack, then that drops into a rel and converts that into a where a width is greater than or equal to 30. And then call.result, that pulls it back into ActiveRecord LAN so that you can keep on adding scopes and stuff. It's a way to build advanced search filter type things. So I needed it to look like that so that I could use it with Ransack, and then my search problems would be solved if I was going to use this JSON column for storing attributes. And I found that you could with a custom Ransacker. It's basically saying, OK, if you see width on my entity model, then please, when you make a query out of it, please change the word width with data arrow width colon colon int. Now what this is telling Postgres, data look for the width key in the data. It's going to come back as text and then convert that to an integer so that you can compare it to the integer that I gave you. OK? So put them together. And what do you get? The basis for a document store. Question is how to put them together. And what is a document store? 
The question is how to put them together. And what is a document store? A document store is when you don't necessarily have a schema in the database: the keys are defined in the model, or at least there aren't columns for them in the database. I want to emphasize here that this is a hybrid, schema-mixed kind of document store. If you're going to use the JSON column, it doesn't mean you can't use the other columns, or use columns like you normally do in ActiveRecord. So if I was going to use this, and it was actually going to be rapid data modeling, then I would need a quick way to declare what attributes go in my document, something kind of like ActiveModel::Serializers or MongoMapper or Mongoid. And I thought: well, OK, I need to typecast my data coming in and out, because it's probably coming in as a string and I want it to be an integer, and I need to convert it to something for Ransack to be able to search. store_accessor turns out to be one of those methods in Rails: you go into the Rails internals, and sometimes it's like, ooh, what is that? But it turned out to simply wrap up two methods called read_store_attribute and write_store_attribute. And I found that, oh, I could just do the same thing: catch it before it goes in and catch it when it comes out, and make it what I want. And then I can declare a custom ransacker. And that would allow me to do something like this, with integer_attributes for length, width, and height. And then I also have string_attributes and float_attributes and boolean_attributes and so forth. So I thought: well, this might work as a template method, where I have to define what goes into the blanks, and then integer_attributes and string_attributes would just fill in what those blanks are. In reality, it turned out to be a little more complicated, because there is no .to_bool kind of thing, and a date conversion is a little more complicated than just calling .to_date on whatever. But the end result is a model that looks something like that. And that's actually how the models look in this project, and it got the work done. But I wasn't really happy with it. It kind of bugged me that the implementation seemed to be driven by those blanks that I needed to fill in. And with string_attributes, boolean_attributes, float_attributes, it seemed to me that boolean, string, and float should be an argument, the first argument, of a common method that could be called something like attribute. And doing it this way, rather than grouping the attributes semantically, by where they sit in the meaning of that document, I'd instead grouped the strings together, grouped them together by type. And I asked myself: what would Sandy Metz do? Apparently I'm not the only one that ever asked themselves that; there's even a sticker, which I just got. But what would Sandy Metz do? I had just watched her latest Rails talk on Confreaks, and the refactoring she did, one of the stages, was pretty much what this was. It's like: OK, we're almost there, but not quite. What if it looks something more like that, where it declares each attribute one by one, along with what it is? That would give a place to add more interesting things, like default arguments, and make it easier to come up with different types of attributes.
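A sketch of what that wrapping might look like before the final refactoring; the method signatures match Rails 4's ActiveRecord::Store, but the project's real code certainly differs in detail:

    class Entity < ActiveRecord::Base
      INTEGER_KEYS = %w[length width height]

      store_accessor :data, :length, :width, :height

      private

      # Catch values on the way out of the JSON hash...
      def read_store_attribute(store, key)
        value = super
        INTEGER_KEYS.include?(key.to_s) && value ? value.to_i : value
      end

      # ...and on the way in, so form input is stored as real integers.
      def write_store_attribute(store, key, value)
        value = value.to_i if INTEGER_KEYS.include?(key.to_s) && value.present?
        super(store, key, value)
      end
    end

The refactored, one-declaration-per-attribute shape the talk lands on would then read something like attribute :width, :integer, which is where the gem discussed at the end of the talk ends up.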
I found that once I had the ability to create, essentially, my own column types within my document, and store them how I like, I could be more creative in making the models fit. And that refactoring you saw there did not happen on this project; the code of a ceramic object doesn't look like that. It's the next project where I got to keep taking it further. And this was for the Inclusive Recreation Resource Center, which is based in Cortland, New York, and runs out of SUNY Cortland, the State University of New York College at Cortland. And their mission is to make possible what the picture there shows: to have a database, a map, of places that you can go if you have special needs for access, and to assess them to find out: is the bowling alley really accessible, or are they just saying it's accessible? Anyone can say it's accessible; somebody really ought to come and measure it. So then, when we have a need, we want to go to a state park, we can type in Lake Placid and see what comes up, and find out: is there really an accessible bathroom there that I can go to? Should I load up my minivan and drive out there? And part of this website is an online course for training the assessors. Another part is for actually making the assessments. And these assessments have a lot of forms, like this many forms. This is just the beginning of one of the forms; we took a picture of the binders with some of those forms. And many of these have multiple pages. Now, what are they? As you see by some of the names: a gazebo, a picnic shelter, a gift shop, pro shop, locker, shower, changing room, picnic area, overlook, shopping facility, sauna, amusement ride. There are a lot of different things. But many of them are, well, they're all physical places. Many of them are physical structures, buildings. And buildings tend to have some common elements. And we noticed that when we were looking at the forms, as we were overcoming our initial shock at seeing how many forms there were (and when do we have to get this done by? and how many migrations do we have to write to fit all this data?), and then as we were actually reading the forms, seeing: oh, these questions are kind of similar to those questions. Oh, they're always asking about the door: can you open the door and get in the door? How wide is the door? Is there a clear route to the door? Is there a ramp? Well, describe the ramp. Is there a parking area? There were always variations of the same questions, maybe with slightly different wording on each form, but there was a recurring theme. And in this app, unlike the archaeologists', we never need to search on the specific attributes within the JSON. The search is like: give me a state park, show me what's around Lake Placid; but not: give me all of the gazebos with 40-inch doors. So, from the domain-driven design angle, that got me thinking that what I have is: an assessment, a structure, is an entity, and these doors and ramps and things are value objects that I would like to be able to compose each thing out of, so that they would be reusable parts that I can plug in wherever they show up. And I wondered if I could embed these reusable parts into the JSON; as it turns out, I could. I wanted something that would look like this. This is an embeddable model, and everything tends to have a name: door, OK, front door, back door. Some things have many doors.
And very often the comments, and whether or not the thing has the thing in question. And then the door itself. Here we see an example of what I mean: when you control what an attribute is, because it's just going into the JSON, you have more flexibility than you do with ActiveRecord to customize what is in that attribute. So here, the first thing you see there: attribute door_type is an enumeration, which is typically a drop-down list or a list of checkboxes. Maybe you can add another; multiple: true means you can select many. And then I had another option, strict: true, meaning you can only select from the list, or else you can have extra things. And dictionary, because the different open-handle and closed-handle types both come from a common dictionary of door handle types. And then here we have: a comfort station has a door. Well, a lot of things have a door. So I just used single table inheritance for a specialty area, which is what a comfort station is called. And every specialty area gets the things that most everything seemed to have, so a comfort station doesn't actually explicitly have a door; it just got one from specialty area. But as it turns out, restrooms and showers also have doors. And wherever a door needed to be, it was as simple as writing embeds_one or embeds_many :doors; likewise restroom or parking area, ramp or elevator or stairway, and so forth. And then, when you have a door form, let's make it behave the way accepts_nested_attributes_for wants it to behave, so that you can assign the attributes. The form doesn't need to know that these aren't actual columns, and that this isn't a real live ActiveRecord model, but just a Ruby object with some attributes on it. And you can create a custom input, for example, in SimpleForm; that's what this checkboxes-with-other input is. It hooks up to an enumeration with multiple: true and gives you something like this, with the Add Other button. And suddenly, where we had been looking at that and thinking, oh my god, half of these checkboxes have an Other, and how are we going to do that? Suddenly it's: enumeration, multiple: true, use the checkboxes-with-other SimpleForm input, and it was no longer a thing. And this is in a partial, so whenever we need to plug in a door, we just call the partial, or actually call a helper that calls the partial with some special arguments. And then each door might have some slightly different hints or labeling on the form; well, SimpleForm gives you some wonderful, if tedious, but very handy specific ways to address that in the language file. If you want your non-ActiveRecord model thing to behave the way accepts_nested_attributes_for does, all you've got to do is define a method named your_attribute_name_attributes= that receives what came in from the form, builds out your array, and sticks it in your JSON. I'm showing you this just to show you how you can do a lot of tricking of ActiveRecord to make it behave the way it's supposed to behave with your non-ActiveRecord thing. Because I don't want the form to have to know about it; I don't want the controller to have to know that the thing is a document store and not regular columns.
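A plain-Ruby sketch of the embedding idea; the real gem's API differs, and the names here are illustrative:

    # A Door value object that serializes into the parent's JSON column.
    class Door
      attr_accessor :name, :door_type, :comments

      def initialize(attrs = {})
        attrs.each { |key, value| public_send("#{key}=", value) }
      end

      def as_json(*)
        { "name" => name, "door_type" => door_type, "comments" => comments }
      end
    end

    class SpecialtyArea < ActiveRecord::Base
      # The writer that accepts_nested_attributes_for-style forms expect:
      # take the hash the form posts, build the value objects, and stick
      # them into the JSON. Assigning data (rather than mutating it) keeps
      # ActiveRecord's dirty tracking working.
      def doors_attributes=(attributes)
        doors_json = attributes.values.map { |attrs| Door.new(attrs).as_json }
        self.data = (data || {}).merge("doors" => doors_json)
      end

      def doors
        (data || {}).fetch("doors", []).map { |attrs| Door.new(attrs) }
      end
    end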
And also because, if you want to see how I got from integer attributes to attribute, well, there it is. But I wanted to reflect more on some of the yeses and noes of my experience working with this JSON column, especially as a document store. It emerged for me from finding a way to use single table inheritance and put all of the variant attributes into the JSON column, so that I don't have a thousand-column table, which is probably a good indication that I shouldn't use single table inheritance at all. But I didn't want to work with all the dependent models and such. This was just so easy, and it got the job done. And from the Inclusive Recreation Resource Center (IRRC), you've seen that it's handy for storing value objects and arrays, which means it's also good for what you probably think of when you think: oh, I have some JSON, I need to do something with it, what should I do? Let's put it in a JSON column. I was building a dashboard for interacting with DigitalOcean and DNSimple and Bitbucket, and you get back an API response when you create a droplet. Let's stuff it in the JSON column, and there it is, along with the other attributes that I've already created. Or I'm in the flow of building out some forms, and, oh crap, I forgot a field. Do I generate a migration? No, I just go to the model and pop it in there. I think: will I regret this? If the answer is no, it just goes into the JSON and I move on. However, that last joke aside, I want to emphasize that it's not a permanent commitment. There are ways to get into JSON and ways to get out of JSON; look at the JSON documentation in Postgres. Nothing prevents you from running actual SQL queries, even though you're using ActiveRecord, and there are all kinds of cool things you can do that I haven't begun to talk about. I was going to, but I really did not need to know about any of it in order to do the work that I showed you so far, so why cover it here? I would say it has the advantages of a document store, all the usual "you should use MongoDB because..." arguments, and you can tell that Postgres has been answering the MongoDB question with this feature. And a document store isn't the only thing they had in mind with it. A lot of the demos go: OK, you have orders, and orders have line items; the line items don't really need to exist outside of the order, so why not put the line items within the JSON of the order model? That's a pretty common example. And then they say: but suppose you have a context in which you want to access line items independently. Here's how you can write a query, or create a materialized view, that pulls all of the line items out of the models they're stored in and presents them to you as if they're in their own table. That's one of many things you can do with it. So it has the advantages, the speed, and it also has the disadvantages of a document store. The projects in which I've used it had a big legacy database, and a large part of the project was to normalize it and sort out things like a text column that's parading as a Boolean, where people type in "yes" or "no" or "Y" or "N" or "y/n", and it's like, no, people, and you have to figure out true or false for each. And then you have this big migration, and then you push the boat off the dock, and it goes out into the pond.
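To give a flavor of what "running actual queries" against the JSON looks like, here is a minimal example using Postgres's JSON operators from ActiveRecord. The model and key names are made up; the operators themselves are standard Postgres.

```ruby
# Hypothetical model/keys; ->> extracts a JSON value as text.
# Simple top-level equality against a jsonb `document` column:
Assessment.where("document ->> 'facility_type' = ?", "state park")

# Range-style comparison needs a cast, and is where JSON starts to lag
# behind a real column:
Assessment.where("(document ->> 'year')::int >= ?", 2010)

# Containment query, which a GIN index on the jsonb column can accelerate:
Assessment.where("document @> ?", { facility_type: "state park" }.to_json)
```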
And up until the time you push that boat, that migration, out, you can add columns and add fields to the JSON to our hearts' content. But once it's out there and you want to, say, change and move stuff around, then you have to go through all the documents and change them, because there's no partial update. You can't say "update this key in my JSON and set the value"; if the age is 40, set it to 42, because they're older now. You've got to go through and update, and save, the entire record. In terms of performance at the number of records that I have: I don't notice a difference. And I'm still working on the question of performance when I have a lot, a lot of records. My performance questions are more: how much slower or faster is this than if I wrote columns for all of the things that I'm putting into my document store? What I found is that if I'm looking up simple stuff, like find where this equals that, then it's pretty comparable, and if you're using JSONB with a GIN index, it's actually faster. But if you're doing greater-than, less-than, string matching, any operation more complex than equals, then in the JSON it's going to be somewhat slower than if it were in a column. And how much slower seems to grow as the number of records increases. So the more records you have, the slower it gets when you're doing greater-than, less-than type stuff, which was important for ArAGATS, the archaeologists, and not relevant at all to the IRRC. Also, when I was doing this testing, I totally did not expect to find this. But say you have the restrooms with the doors: many restrooms, many doors. And then you have age up here, just as a key at the top level. The fact that that JSON is nested, for some reason, makes the entire search, even for the top-level age, become much slower. In other words, if searching on the keys in your JSON is going to be important to you, then don't nest JSON within the same column. Because it's like throwing in a wrench: anywhere from five to sixty times slower, depending on the query and on how many records. And I didn't want that to be true, and I keep looking for what I might be doing wrong, but every time I run a benchmark it still comes out slower. So I would like to go back, when we do more excavations for ArAGATS, and turn grip handle into an object, because it should be a value object. But if I do that, then the search performance is going to suck, so I'm not going to do that. So, where to go from here? I encourage you to check out my ar_doc_store gem. This was not a talk about that gem so much as about how it emerged from the adventures that I had in rapid data modeling. And I've put up some blog posts where you can learn more about the JSON column and other things you can do with it, and I encourage you to go and read them. Thank you for coming and hearing about the JSON column. Once again, I'm David Ferber, and there I am at email and on Twitter. Please give me a holler. Thank you.
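For reference, the GIN index that the benchmark comparisons above assume could be set up with a migration like this. Table and column names are illustrative.

```ruby
# Hypothetical migration: a GIN index speeds up jsonb containment (@>)
# and key-existence (?) queries; it does not help range comparisons much.
class AddGinIndexToAssessments < ActiveRecord::Migration
  def change
    add_index :assessments, :document, using: :gin
  end
end
```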
|
So you are building an app that has a ton of forms, each with tons of fields. Your heart sinks as you think of writing and managing all those models, migrations and associations. PostgreSQL JSON column and ActiveRecord::Store to the rescue! This talk covers a way to wrap these Rails 4 features to simplify the building of extensive hierarchical data models. You will learn to expressively declare schema-less attributes on your model that act much like "real" columns, meaning they are typecast, validated, queryable, embeddable, and behave with Rails form builders.
|
10.5446/30696 (DOI)
|
So, today I will be talking about Resilient by Design. The previous talk about microservices mentioned the CAP theorem and related ideas; think of this as the sequel, now that you want to build systems that don't go down. My name is Smit. I'm a Bundler core team member; until very recently, I used to maintain the dependency resolver. I also occasionally contribute to JRuby, and this is who I am on the internet. I work at Flipkart. Many of you may not know it; it's an e-commerce website in India. We run into a lot of scaling problems, and I'm very thankful to them for sponsoring my trip here. So let's just start. Why do we actually care about resilience? Companies have increasingly, over the years, come to depend on software, and at this stage any downtime results in loss of business. For customers it's also bad, because they rely on the software being up. To give an example: Flipkart makes more than $1 billion in sales, so even a single minute of downtime results in a loss of around $2,000. And the interesting fact is that traffic is never evenly distributed. Roughly 20% of the time accounts for 80% of the total revenue, and it's during those peak times that your systems are most vulnerable. In those windows, going down for even a single minute could mean losing close to $8,000. So companies cannot afford any downtime on their systems. What are they going to do? They're going to rely on developers, support engineers, and so on. The famous on-call exists for exactly this reason: it's up to the developer to respond whenever the page comes, late at night or otherwise, and to make sure the systems come back up. The second reason is that even the simplest system today depends on other services. At the very least, it depends on a database on another server, and, as the previous talk said, the network is not really reliable. Given that, it's very important that the thought about resilience be put at the forefront. Otherwise, I don't think any of us here would enjoy handling on-call over the weekend. That's pretty irritating, to be honest. So the question becomes: how do we actually build a resilient system? In the 90s, testing was an implicit requirement: your code should run and work, tests or no tests. Maintainable code was implicit too. You can see those things are no longer treated that way in the Ruby community: testing and maintainability get a lot of explicit focus rather than being afterthoughts. The problem with resilience is that today it is still an implicit requirement. Management expects the system to be up all the time, and the developers also think: I wrote the system, I used this data store, it's going to stay up. If there is no thought put into resilience when designing the system, you'll be very lucky to find any of these bugs before production.
Because most of the bugs that deal with the resilience of a system happen in the production environment, when the systems are at peak load and utilization is high. That's when you see those bugs, and because you haven't thought about them, they are going to come and bite you. There's just no way around it. The second thing is human bias. Humans inherently think only about the happy path, where everything is working: your caching servers are up, your database is there, the services you're talking to respond every time you make a request. And that's why we fail to see the paths where those things are not working. The only way to think in a different way is to think about resilience from the start. Whenever you design your system, ask: have I planned for capacity if my caching servers go down? Are they highly available? All of that has to be put in from the start. There are things that can actually help you, and that's what my talk is about: resilient design patterns. However, I would like to put up a disclaimer: these are not a silver bullet. It's not the case that if you use all of these patterns, your system is guaranteed to never go down. Things are never as simple as that. A lot depends on the domain and the system that you're designing. For example, on the Flipkart website, our core job is to serve the product page, show whether the item is available, and, once the customer clicks buy, deliver whatever they have ordered. That is our main thing. So if the recommendation system is facing an issue, we can load the page without it. If the comments or reviews are not showing up, we can decide not to show them when those systems are down. Obviously, those kinds of trade-offs depend on each service. Netflix, for example: if their bookmarking service is down, they simply don't give you the option of resuming playback; they start from the beginning. The reason they can do that is that they know the main thing is being able to watch the videos at all. So what I want to say is that it depends on your domain and on how you have designed your systems. And I think that's a really good thing, because there's really no free lunch: designing a system like this means you need to put thought into it and think through the cases. So, with that in mind, let's start with the patterns. This first one is the most important pattern in this talk; that's why I'm putting it first. If you don't take anything else out of this talk, take this pattern. The biggest waste of resources is burning cycles and clock time only to get results that you have to throw away. Failing fast is the best thing you can do when the services you're talking to are not responding, or you know they are going to fail. In fact, the reasoning behind failing fast comes from a mathematical idea called queuing theory. This is John Little's law, for those of you who know it. Say your system's job is simply to handle incoming messages, and that's it.
Then the length of your queue depends on the arrival rate of your messages and on the amount of time each one spends in the system, the time it takes to process it (Little's law: L = λW). If your response times go up, the time in the system goes up, and the size of the queue increases. So now let's say you're talking to a service that is not responding, and you didn't even bother to change the default timeout of Net::HTTP, which is 60 seconds. It's going to take 60 seconds for each call to fail, so your response times will be very, very high, and that will indirectly blow up your queue size. The other thing that is highly dependent on your response times is the utilization of your system. If you look at this graph, utilization goes up as response time goes up. So if each request takes 60 seconds, the utilization of your entire service will be very, very high, and the only thing you can do about it at that point is add more servers and hope for the best. The cool thing about this is that you can also look at it the other way. Say you optimized your code, did your best, and got response times down to a certain extent. If your utilization still goes above 80%, you can easily see that it's going to have a very negative impact on the performance of your system, and at that point you can do capacity planning based on that. And I think the other cool thing is this: say you're on an agile team whose utilization is around 90%, and your manager comes along with some ad-hoc task. Just using queuing theory, you can figure out that the turnaround time for that task is going to be very, very high. So I think math is pretty cool. You cannot run away from the math; the only thing you can do is keep your response times as low as possible. Here is an example system I created just to illustrate. Say you have an ebook download service, and if you buy the ebook they guarantee an SLA of five minutes: buy the ebook, and within five minutes you'll get an email with the download link. This is pretty basic stuff. You have a checkout service which sends messages to the payment service through a message queue, because we don't want to lose those messages, and the payment service talks to an external service to verify that the payment is authentic before processing further. Now let's assume that the external service we are talking to starts failing; it starts timing out, so intermediate calls to it are failing. What happens is that each payment call to the external service fails, and each one takes 60 seconds. You can't really control the rate of incoming messages in this case, so messages start piling up in that message queue. At this stage, even when the external service comes back up, you have a pile of backlogged messages, and you also still have incoming messages from the website. People are still placing orders.
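To put rough numbers on the queueing argument above, here is a back-of-the-envelope sketch using Little's law (L = λW). The arrival rate is made up for illustration.

```ruby
# Little's law: average number in the system L = arrival rate λ × time W.
arrival_rate = 50.0 # requests per second (assumed)

healthy_time = 0.2  # seconds per request when the dependency responds
timed_out    = 60.0 # seconds per request with Net::HTTP's default timeout

puts arrival_rate * healthy_time # => 10.0   requests in flight
puts arrival_rate * timed_out    # => 3000.0 requests queued up
```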
So you would fail to meet the SLA for orders placed while the external service was down, and because of the backlog, you would also fail to meet the expectation for newly placed orders. If instead we embrace that things are going to be bad and use a circuit breaker in between, we realize that the calls to the external service are going to fail, and in the fallback we store those messages and retry them later. Because of that, our response times stay the same: the system isn't even bothering to call the external service, so the message queue stays empty. In this case, when the external service actually comes back up, newly placed orders can meet the SLA and still get their download links. And for the messages stored in a different system or queue to retry later, those customers will not get their download link on time, but you can send them a special mail or give them a discount. The main thing is that in this scenario you are in control: you know where the failed messages are, and you designed your system for it rather than being at the mercy of the dependency. So yes, this is the most important thing in this talk. Now, how do we actually make use of it? The first thing, to achieve that, is bounding. If anywhere in your system you have unbounded access to resources, that is something really, really terrible; you don't want anything like that. Bounding is a huge topic of its own, but I specifically want to cover three things. The first is timeouts. The default timeouts in your libraries are horrible. Net::HTTP, as I mentioned earlier, has a timeout of 60 seconds, so it takes 60 seconds for the read timeout to kick in and tell you that you can't reach the server. And I think the scary part is that some things don't have a timeout at all; they never time out. We had a system at Flipkart whose job was to collect messages from the local service and relay them to the main messaging queue; only through this infra piece could any service talk with the outside world. This service, written in Ruby, would hang every two or three weeks, and we couldn't figure out what was wrong. When we dug into it, we found that it sent metrics through StatsD over a UDP socket, and there was one metric that nothing was reading at all. The UDP buffer, which is 128 KB on Linux, was getting full, and once it was full, the process would just get stuck in that state. The way we solved it was with the socket's non-blocking mode, which you can do using write_nonblock in Ruby; there is a sketch of the idea below. So yeah, some systems don't even have a timeout. You need to look down into your application and check that every call has a proper timeout. And the greatest thing that a timeout provides is fault isolation.
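Here is a small sketch of that non-blocking fix: writing UDP metrics without hanging when the kernel buffer is full. This illustrates the general approach; the host, port, and metric format are assumptions, not Flipkart's actual code.

```ruby
require "socket"

# Assumed address for a StatsD-style collector.
sock = UDPSocket.new
sock.connect("127.0.0.1", 8125)

def send_metric(sock, payload)
  # write_nonblock raises instead of blocking when the send buffer is
  # full; for fire-and-forget metrics we'd rather drop the datagram.
  sock.write_nonblock(payload)
rescue IO::WaitWritable, Errno::EAGAIN
  # Buffer full: drop the metric and move on.
end

send_metric(sock, "orders.placed:1|c")
```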
So if it's another service or another thing that is not responding, a timeout shields your system from it. You can use a timeout in conjunction with a circuit breaker, which is the next part I'll talk about, or, if nothing else, you can use it with retry logic. The second thing is to limit memory use. Whenever people set up caching with something like Redis, this is something they completely forget: limiting its memory use. The same goes for application servers. With Unicorn, you can put a watch on each of your workers and say that up to 85% is OK, and as soon as it crosses 85%, notify the developers or something like that. When you don't have any of this, bad things happen. There was another case at Flipkart where, every two or three weeks, a system's memory usage would climb so much that it would start using swap, and the performance of that particular host would be really, really terrible. When we looked into it, we found that in one place it was doing a JSON parse with symbolized keys, and unfortunately one of the keys was unique every single time. For those who are new to Ruby: symbols were not garbage collected until very recently, in Ruby 2.2, so any symbol created in your system stays around until you kill and restart the process. However, none of us had to get up late at night or early in the morning to fix that system, because we had a worker monitoring system. If a worker went above 90%, it would restart the worker, and things would keep working. That's not an ideal solution, but what it gives you is time to actually debug the issue. Otherwise, any time the host starts hitting swap, it's going to impact the business, and that is something you cannot accept. The other point is to limit CPU. A lot of times there are side processes running on your host that do certain things, maybe provide health checks or the like, and those processes are not the primary thing running there; your service is what matters on that host. But sometimes the code in such a daemon, or a library it uses, goes into some kind of infinite loop, or starts using more and more of the resources of the system. You can easily limit that daemon using cgroups, and what that provides is isolation: even if the daemon tries to use all your resources, it's only using one core of your system, and because of that the host will not go down. And finally, every time you use a Mutex#lock or a buffer in your system, those are implicit queues that you have no control over. It is much better to have an explicit bounded queue, like a messaging queue that feeds messages to your service. It can be bounded, and it can apply back pressure when it's full. That gives you much more control than an implicit queue; there's a tiny example below. So, the next pattern, which I think is one of the coolest patterns in existence: the circuit breaker pattern.
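Before moving on to circuit breakers, a minimal illustration of the explicit bounded queue just mentioned, using Ruby's standard library. The capacity is arbitrary.

```ruby
require "thread"

# SizedQueue blocks producers once the bound is reached, which is the
# back-pressure behavior an implicit buffer never makes explicit.
queue = SizedQueue.new(5)

producer = Thread.new do
  20.times { |i| queue << "message-#{i}" } # blocks once 5 items wait
end

consumer = Thread.new do
  20.times { queue.pop } # process each message
end

[producer, consumer].each(&:join)
```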
So circuit breakers: the way they work is that they sit between the client and the server, or the supplier. When everything is fine, they don't actually even come into play. It's when you make a request and it starts timing out, when there is some connection problem between the client and the server, that they matter. After a certain threshold of errors, the breaker realizes that the other service is facing difficulty, so it trips the circuit. From that point onwards, future calls are not even made to the server; they fail right then and there. Later on, after a certain amount of time, it will actually make a trial call to the other service to see whether it's up. If it's up, it closes the circuit and everything goes back to normal; if it's still timing out, the circuit stays in the open state, and you don't pay for the failed call. There are really good implementations of circuit breakers out there; a minimal sketch of the mechanics follows below. Semian, by Shopify, is a pretty good implementation in Ruby, and if you use JRuby, you can just make use of Hystrix, which is written by Netflix. It's a very well-written and battle-tested library. And going forward: bulkheads. Bulkheads are a concept that comes from ships; they are the watertight compartments in a hull, so that even if the hull is partially damaged, it won't sink the ship. The idea is that a single failure doesn't bring down the entire ship, and that is something you can use in your services. Say your website and your logistics system both need the product service. The website needs product information to show it to the user; logistics needs product information to determine whether the item is dangerous and whether it can be transported by air or by road, depending on what category the item is. Now say the website is facing tremendous load: there's a product launch, and a lot of people are hitting it. The load on the website flows through to the product service, and eventually the website will bring down the product service because of the high load it's experiencing. At that point, logistics can't do anything either; logistics is impacted too, and once the logistics system is down, any systems dependent on it will also go down. This could trigger a cascading failure throughout the system, each dependent piece going down in turn. Using the bulkhead pattern, however, we can have dedicated product service instances for the website and for logistics, so even if one of them is experiencing a lot of problems, the other is shielded from it and won't be impacted. Bulkheads are not the same as simply adding more capacity; adding capacity could still result in the shared-fate problems I mentioned earlier. Here, we are separating the servers so the two consumers don't impact each other. And there are multiple other things you can use bulkheads for.
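To make the mechanics concrete, here is a deliberately minimal circuit breaker sketch. Real libraries like Semian or Hystrix handle concurrency, half-open trials, and metrics far more carefully; the threshold and cooldown values here are arbitrary.

```ruby
# Toy circuit breaker: trip after N consecutive failures, stay open for
# a cooldown period, then allow a trial call through.
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 5, cooldown: 30)
    @threshold = threshold
    @cooldown  = cooldown
    @failures  = 0
    @opened_at = nil
  end

  def call
    raise OpenError if open? && !cooldown_elapsed?

    result = yield                 # the protected call, e.g. an HTTP request
    @failures  = 0                 # success closes the circuit again
    @opened_at = nil
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  private

  def open?
    !@opened_at.nil?
  end

  def cooldown_elapsed?
    Time.now - @opened_at > @cooldown
  end
end

# Usage: breaker.call { verify_payment(order) }, rescuing OpenError to
# take the fallback path (e.g. store the message and retry later).
```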
So bulkheads as a concept are very powerful. Say you're using circuit breakers and you have a separate thread pool for each of the servers you call. If you notice that one of those thread pools is completely saturated, with no free threads, you can fail the call to that service right there and use the fallback instead (sketched below). In that sense, one system will not forcibly bring down everything else. And finally, the last thing I want to talk about is steady state. Say you use all of these patterns: are your systems now guaranteed to stay up? Actually, that's not true. If you have to fiddle with your systems manually, if human intervention is needed to keep them running for weeks, restarting them and so on, that itself introduces a chance of introducing errors into the system. What you want is as little human effort as possible. There are a lot of things to say about this, like setting up automated deployment, but there are two specific points I want to make. First, have log rotation in place. The worst thing is having logs that are weeks old, and one day realizing that your service is out of disk space. At that point, just because there's no way to log further, your entire service on that host could go down. Set up log rotation; it takes five minutes, and a lot of folks don't do it. The second is something that never makes it into the first draft of a system: an archiving strategy. The way archiving usually works is that there's a script, and the DBA or someone like that archives the data for you, and that is really terrible. Your archiving strategy is highly dependent on your domain. In the case of Flipkart, if an order is delivered or customer-cancelled, those are terminal states; nothing else will ever happen to that order. At that point, we can archive any data associated with that particular order, any unit, anything. That's something you can only really think about when you're designing your system, because once you have your schema set, once you have everything set, you can't easily introduce a different archiving strategy later on. So lastly, I want to end this talk on this note: a quote by Michael Nygard.
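Before the closing quote, a sketch of that per-dependency thread pool idea: failing fast when the pool is saturated instead of letting one slow dependency absorb every thread. The pool size and names are invented for the example.

```ruby
require "thread"

# One small, dedicated pool of "permits" per downstream service. If the
# payments pool is exhausted, fail immediately rather than queueing up.
class Bulkhead
  class Saturated < StandardError; end

  def initialize(size)
    @permits = SizedQueue.new(size)
    size.times { @permits << :permit }
  end

  def execute
    begin
      @permits.pop(true) # non-blocking; raises ThreadError when empty
    rescue ThreadError
      raise Saturated, "no capacity left for this dependency"
    end
    begin
      yield
    ensure
      @permits << :permit
    end
  end
end

PAYMENTS_BULKHEAD = Bulkhead.new(10)
# PAYMENTS_BULKHEAD.execute { call_payment_service }
```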
If you're making a call to a different service or something, use a circuit breaker. So if that system goes down, you can clearly use a fallback instead. And that fallback could be a cashed value or stale, or it could be just failing fast. And finally, you want to, you know, isolate your failures. You want to use bulkheads and make sure that if one services are behaving badly, the failure could be contained to just that. And it couldn't, it wouldn't affect other systems. So, so yeah, that's it. That's it. Thank you.
|
Modern distributed systems have aggressive requirements around uptime and performance; they need to face harsh realities such as a sudden rush of visitors, network issues, tangled databases and other unforeseen bugs. With so many moving parts involved even in the simplest of services, it becomes mandatory to adopt defensive patterns which guard against some of these problems and identify anti-patterns before they trigger cascading failures across systems. This talk is for all those developers who hate getting an oncall page at 4 AM.
|
10.5446/30697 (DOI)
|
Yeah, so thanks everyone for coming. My talk is Riding Rails for 10 Years. When I started thinking about this, ten years didn't seem like that long a period of time to talk about Rails and how it's changed. It's actually a long time; there's a lot of stuff that has happened, so I've tried to jam in as much as I could, and I'm going to go through it pretty quick. There are lots of interesting things that happened over those ten years. Part of this talk is looking at Shopify and the Shopify code base and how it's changed over those ten years of Rails. Why look at Shopify? It's pretty interesting because Shopify started at nearly the same time as Rails 1 came out, actually just a bit before. It's never been rewritten, and we've used versions all the way from before 1.0 to 4.1, which we're running now. The Git repository holds most of the content of all of these changes over time, so it's really interesting to go digging through it and see how Rails has changed and how Shopify has had to change along with it. For this talk, I put together a little timeline of the Rails releases over time. I did it based on the public releases, so they may be off a little bit, but there have been a lot of releases, a lot of changes to Rails over time. And you can contrast this with Shopify and how we've been evolving the Shopify code base over time as well. We've been following along with Rails, and in some cases you can actually see our version numbers land a little before the Rails releases, because we were running on edge Rails for a long time in the early days. So this is how Shopify's upgrades have mapped to the Rails releases. As I went through the history, I pinpointed around 45 different versions of Rails that we had running in production over the ten years, counting all the major, minor, and tiny version updates. And that doesn't include anything I might be missing, because there are a few years of history that we don't actually have in the Git repository. That's a lot of change over time, both for Rails and for the Shopify code base that had to follow along with those changes. I want to start by going back in time to the first commit of Shopify that we have, checking it out, and taking a look at what Rails looked like at that time. The first commit we have is from August 2005. It's likely around Rails 0.13, and it's pretty much what Rails 1.0 was released as in December of 2005. So I checked out that commit, because I could and it was really easy, cracked it open in my editor, and looked at the directory structure. It's pretty interesting because it's all very familiar. Nothing crazy stands out; it looks like a recognizable Rails app to most developers who've been using Rails. A lot of the main pieces of what we know as a Rails app today existed back then in 2005: the MVC patterns were established, testing was already baked in, and there's even a deploy.rb file. At that time, I think it was using something called SwitchTower; Capistrano, which Shopify now uses, was released in early 2006. So I went through some of the files as well, to take a look at how the pieces have changed over time.
And so this was, I think I maybe cut out a line or two, but this was basically the entire routes file for Shopify at that point. Now our routes file is actually split into five or six different files, because there are so many routes; it's very large. But all of this is very familiar. It doesn't look quite the same, but you can see how it got to where it is, and it doesn't feel completely different from what you might be used to. You still have the fragments in the URLs, you still have controllers, you still have actions that you're mapping to. If we look at a controller, again, it's still pretty similar. The naming of the actions is different, because there wasn't as much standardization around action names at the time, but all in all, it looks pretty close to what you would know from a Rails app. And the models are the same thing: we have all the associations, there are validations. A lot of the same pieces that are there now existed back in 2005; there's just a lot more to them. You can do a lot more with validations and associations than you could at the time. There weren't many options you could pass to associations back then, whereas now they're generally much more powerful. It's also cool that JavaScript was baked into Rails even at that time: we had the RJS responses, and Scriptaculous and Prototype shipped with Rails. Ruby and Rails going along with some bits of JavaScript has been the case since the earliest days. Then I looked into it a little bit more, and there are a few other things that existed then and have changed over time. At one point there was this idea of sweepers, which you used to expire your caches. We had observers to encapsulate some of the callback actions. Views were .rhtml instead of .html.erb. Dependency management was handled by including the gems in your vendor directory, or having submodules, or SVN externals. And the interface to the web server was FastCGI at the time. I also did a line-of-code count on that first commit, which is kind of cool. The total lines of code of Shopify at that time was 11,000, which is pretty tiny in comparison to what it is now. Just to compare, we have close to half a million lines of code now, the majority of them being Ruby. So a lot has changed over time, but it's interesting looking at those early beginnings and where it came from. A lot has changed, but there's still a lot that's the same: a lot of those early patterns and practices were set up and established in the earliest days. So now let's follow through with the evolution and start looking at some of the other versions of Rails, starting with Rails 1.2. This was released in January 2007; in Shopify, it actually showed up in November of 2006, because we were running on the edge, and later on, in 2007, we updated to 1.2.3. So in those early days, we kept pretty close to the edge versions. And it was really easy to do that, because Rails was actually checked directly into the code base, in vendor or as externals, so it would change even with minor changes to Shopify, and since the code base was pretty small, the impact was low. Rails 1.2 was a pretty interesting release: it added REST and the idea of resources. DHH was talking about this earlier.
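For those who never saw a pre-REST routes file, it could look roughly like this. This is a reconstruction in the old API of that era, not Shopify's actual file.

```ruby
# config/routes.rb, old-style (pre-1.2) routing:
ActionController::Routing::Routes.draw do |map|
  # Explicit mappings from URL fragments to controller and action:
  map.connect "orders/list",     :controller => "orders", :action => "list"
  map.connect "orders/show/:id", :controller => "orders", :action => "show"

  # The classic catch-all default route:
  map.connect ":controller/:action/:id"
end
```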
REST was one of the fundamental things that came about in Rails and still exists now, but it wasn't perfect when it was initially released, kind of like Turbolinks wasn't perfect when it was initially released. I'll get to that a bit later. Rails 1.2 also added multibyte support, some routing was rewritten, and you could do some nice things with formats and respond_to. So I'll show you some code from the Shopify code base; most of this code was lifted directly from the history. I cut out a few things to make it fit on the slides, but this is real code that we had running in Shopify at that point. Routing with formats and respond_to: that's the format fragment in the URL. It allowed you to do the HTML and CSV formats, and Rails would just inspect the content type, the MIME type, and handle figuring out which format to render for you. This was really nice and cut out a bunch of format handling that you would otherwise have done on your own. REST was really cool; it created a lot more consistency in your applications. You could map your resources and then have much more consistent action names, without having to map all the specific URLs. A lot of this exists today and hasn't changed a whole lot over time, except for the underlying routing. Then back to that initial version of REST: it actually shipped with the member-action URLs separated by semicolons. After orders/:id there would be a semicolon and then the action name, and this was not handled very well by some browsers, libraries, and web servers. Mongrel, which was kind of the de facto web server at the time, didn't handle it very well, so they actually changed it in Rails 1.2.4 to be a slash. This contrasts with the Turbolinks situation, where the ideas were solid from the start, but they didn't quite get it right at first, and it kind of evolved into the patterns that we see today, really great patterns that we all use and love. Rails 2 was released in December of 2007, and Shopify updated to it pretty much right at the same time, just a few days after the release date, because we were running so close to the edge. The Rails 2 series was probably one of the best series of Rails ever, by a lot of accounts, and there was a lot of really cool stuff added in some of the point releases that a lot of people really love. It was a good combination of being really fast and having a lot of features that got the job done, with Rails taking care of a lot for you too. Some of the notable things added in Rails 2 were rescue_from, fixture dependencies, and namespaces in routes. Multiview responses were pretty cool: you could register your own MIME type aliases, which let you register, say, an iphone alias if you wanted. This was pretty early on in the mobile web. Another interesting one is that ActionWebService was removed and ActiveResource came in. A lot of people probably don't know or remember ActionWebService, but it was basically SOAP RPC protocol support, WSDL generation for APIs, and all kinds of nice things like that. Luckily REST won out, and Rails chose REST over all of that stuff, which was another opinionated view that drove things forward. But they were exploring different ideas at the time.
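A reconstruction of what that era's resource routing and format negotiation looked like. The controller and formats are illustrative, and the `render :xml` shorthand arrived around Rails 2, so treat this as period-style rather than exact 1.2 code.

```ruby
# Rails 1.2-era resource routing:
ActionController::Routing::Routes.draw do |map|
  map.resources :orders
end

# Format negotiation in a controller action:
class OrdersController < ApplicationController
  def index
    @orders = Order.find(:all) # period-correct finder style
    respond_to do |format|
      format.html # renders app/views/orders/index.rhtml
      format.xml  { render :xml => @orders.to_xml }
    end
  end
end
```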
So, looking at Shopify code and what rescue_from cleaned up: we used to have an override of rescue_action that took the error in, and inside it there was this massive case statement, which I actually trimmed down; there were around fifteen different conditions in there for all the different errors. rescue_from, which a lot of us still know because it exists now, let us clean that up a lot: you could encapsulate each of those error responses on its own instead of jamming them all into the same case statement. Fixture dependencies were a big win, because until then you had static integer IDs that you had to reference directly, which was really painful. This let you reference the shop by its fixture name instead of having to reference the ID, and if it was ambiguous, you could use Fixtures.identify. Namespacing also allowed us to clean up a lot of code: where we'd have a mapping of resources with massive option hashes, we could now nest them nicely and encapsulate the different namespaces, keeping things from getting really cluttered and making the routes easier to understand and read as they grew over time. Rails 2.1 didn't have a huge number of changes, but it's notable mainly because it added support for config.gem in the environment.rb file. This was really cool because, as I said, up until this point you were kind of left on your own to manage your gem dependencies, and this was the first pretty decent system you were given to manage gem dependencies in your Rails app. What it looked like was: in environment.rb you could just configure which gems you were using, and you could even specify the source. A few months after this was released, GitHub actually became a gem source, with all the username-dash-gemname gems, and there was a pretty big mess with that, which changed once Bundler and RubyGems and everything evolved over time. But it was interesting because this really helped to manage dependencies at a time when we didn't really have any help and were left to figure it out on our own. Rails 2.3 is another big release. It was released in March 2009, and Shopify hit it in March as well. I would say that of all the 2.x releases, 2.3 is probably one of the biggest and best in terms of the number of features and what it gave you; it's the pinnacle of the 2.x release cycle, and again a good mix of strong performance and lots of features to take care of what you needed to do. It introduced Rack, which wasn't in Rails in the previous versions, plus accepts_nested_attributes_for and a few other niceties. Application templates were added, which relates to what DHH showed with the --api flag now; a lot of the stuff building up to some of the things we see in our Rails apps today. Rack: a lot of us know how it works and what it is these days, but this is how Shopify started using Rack in those early days. The first middleware we added to the application was a simple blacklist for a client that was spamming our site. We would just 404 that particular request URI, because they were just scraping our site. It's a great use of Rack, and these changes to Rails really made it easy to do these types of things.
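A sketch of what such a blacklist middleware could look like. The path is made up, and Shopify's real middleware surely differed.

```ruby
# A minimal Rack middleware that 404s a scraper's request path.
class Blacklist
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["PATH_INFO"] == "/some/scraped/path" # hypothetical URI
      [404, { "Content-Type" => "text/html" }, ["Not Found"]]
    else
      @app.call(env)
    end
  end
end

# config/environment.rb (Rails 2.3):
#   config.middleware.use ::Blacklist
```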
accepts_nested_attributes_for is another one where we kind of take it for granted. We actually had a whole bunch of code in Shopify where we were trying to do the accepts_nested_attributes_for kind of thing, and it was just really terrible. I actually wasn't involved in any of it, but going through and looking back at it is really interesting, to see the kind of hijinks you had to pull to make it work. When you switch over to using accepts_nested_attributes_for, it's really clean and really obvious what's going on, so it simplified the code base a lot. Rails 3 is another pretty significant release. It came out in August 2010 and Shopify updated around October, so we were pretty close to that one as well. This was a pretty large one, and it was also the first major release after the Rails team merged with the Merb team, so there were a lot of new ideas coming into Rails 3 from the Merb side: breaking up the different parts of Rails, making it agnostic to a whole bunch of things, and adding a whole bunch of niceties so you could plug into Rails a lot more easily and make the framework your own. This came with a lot of performance degradations, and this was a lot of the flak that Rails got at the time, and a big reason why a lot of applications actually stuck with Rails 2.3 for a long period of time. We moved on to 3 pretty quickly, because the developer productivity that we got was a lot higher than any performance impact that we saw, and it was worth the hit to be able to onboard developers and just have much nicer, easier-to-manage code. Looking at a few of the features: Arel. It's really popular now; we see Arel everywhere, and it was introduced in Rails 3. Previously you had to use some kind of awkward syntax to do the same things, and it really tightened things up and made it really easy to do the things you'd need to do with your associations and your queries; I'll show a rough before-and-after below. Bundler was also introduced at this time. It took a lot of ideas similar to what config.gem in environment.rb was doing, but pulled them out into a separate system, and Bundler, again, is one of those things a lot of people hated on early on because it was different and changed the way you worked, but these days it is probably one of the best things about Rails and one of the things that actually makes it possible to work on a large Rails app with a lot of dependencies. It's hard to remember what it was like before Bundler and before this dependency management, but it was a pretty terrible time, and Bundler makes a lot of this easier. We actually converted Shopify to use Bundler before 3.0, so that we could take advantage of it while we were still running Rails 2.3.5. The next set of releases is 3.1 and 3.2. These were pretty big ones again, and we actually never shipped 3.1. We attempted it, but found a whole bunch of performance issues with it, and with no dedicated team on it, it just sat to the side until we drove forward with 3.2. Within 3.1 and 3.2, the notable changes were the asset pipeline, jQuery becoming the default JS library, and just lots of internal API changes.
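As a rough before-and-after for the query syntax Arel enabled, using a hypothetical model:

```ruby
# Rails 2.x style:
Order.find(:all,
           :conditions => { :status => "paid" },
           :order      => "created_at DESC",
           :limit      => 10)

# Rails 3 style, backed by Arel: chainable and lazily evaluated.
Order.where(:status => "paid").order("created_at DESC").limit(10)
```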
When I looked at what we eventually merged as the Rails 3.2 update, there were around 250 changed files. The additions and deletions were pretty even, but there were just a lot of them, because a lot of internal APIs changed, particularly around associations and the internal association proxy objects. So this was a pretty big change for us. The asset pipeline was something we wanted to use, because we were working on a JavaScript front-end MVC app, so the asset pipeline was really interesting; we ended up backporting it to 3.0 so we could use it. This was a generally pretty messy set of updates, but we eventually got onto 3.2 and pushed it out. Then Rails 4 came along. It was released in June 2013, and we got on it in February 2014. At this point, Shopify was getting pretty large and complex: a lot of moving pieces, a lot of code, and a lot of people working on it. It was getting harder to keep up to date when we weren't paying as much attention, so the updates were drawn out a little longer. But Rails 4 had a lot of really cool stuff we wanted to take advantage of: it supported Ruby 2, which we wanted to move towards, plus Turbolinks, Russian-doll caching, strong parameters, the killing-off of observers, and a general tightening of APIs, like removing the hash conditions and dynamic finders. A few bits of code from Shopify here: it's really about getting more consistent APIs, tightening up the ideas that were put forward previously with Arel, and making the syntax consistent across the different APIs you might use. The latest version we've updated to is Rails 4.1. It was released in April 2014, we got on it in January 2015, and we're on 4.1.8. Again, we didn't really dedicate a whole team to it; it was kind of done on the side. But we have learned a lot about keeping up to date with these things, which I'll get to in a couple more slides. The notable things in Rails 4.1 for us were the Spring application preloader, which was cool because we were actually already using it, and it was just really great to see Rails bake in these same defaults. That's the trend over time: you see Rails take the really good parts of how people are building and using Rails and bake them in, so you don't need to do it yourself, and it just comes packaged with all these great things for building your applications. The other notable feature for us was variant templates, because we were starting to use those for our mobile applications as well. A little code sample from Shopify of the variant templates: it's pretty straightforward, basically just setting the variant based on the user type, and then Rails automatically knows how to pick that up and render that view for you; the mechanism is sketched below. And so that brings us to today. Looking back at how we've evolved over time, it's been really interesting to see how we've managed these updates, from the early days when we were on edge and the code base was a lot smaller and more manageable, to a much larger code base with a lot of people, and the difficulties that has brought. It started a lot easier and has been getting harder, but we've been learning how to deal with the pain of the large code base. So, what have been the hardest things over time?
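The variant mechanism described there looks roughly like this; the user-type check is invented for the example.

```ruby
# In a controller (Rails 4.1+): pick a template variant per request.
class ApplicationController < ActionController::Base
  before_action :set_variant

  private

  def set_variant
    request.variant = :mobile if mobile_user? # hypothetical check
  end
end

# Rails then prefers app/views/orders/show.html+mobile.erb over
# app/views/orders/show.html.erb whenever the variant is set.
```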
Marshalling changes have been a big deal, and this isn't strictly related to Rails; it comes up more with Ruby updates, but it has come up with Rails updates too. Whenever Ruby or Rails changes how objects are marshalled, or changes the internal shape of the objects themselves, that's been really difficult for us. We have a caching layer that actually marshals full ActiveRecord objects, so that got really painful as Rails changed the internals of ActiveRecord and some of its instance variables. The same goes for changes to things like flash formats and sessions, and anywhere you're serializing an object expecting one format and might get it back in a different one on another request. The ways we've learned to solve those issues are either writing them in parallel to two different stores, in the two different versions, so that depending on the request you can pull out the right one, or having some translating code. In the case of the flash format, the translation between the versions was pretty easy, so we just had a piece of backported code that would figure out which version to deserialize and do that for you. Another big thing is maintaining momentum with a large team. We have over 150 people contributing to this code base, and we don't want to slow them down, but we also don't want them undoing the changes we've been making or reintroducing deprecations. This has been really challenging, and the biggest thing we've found to minimize it is to release the changes as early as possible. It's even as simple as adding an environment variable so you can toggle between the different Rails versions really easily, then working on the upgrade in parallel and shipping as much of the code into production as early as possible. In the end, the Rails 4.1 and even the Rails 4 updates ended up being dozens of changed files, if that many: a really small set of changes compared to the 3.2 update, where we changed thousands of lines of code. Then there's just the size of the code base and its many edge cases. It makes it really hard to know all the bits and pieces, and everywhere that might be doing some specific Rails thing or monkey patching Rails in a particular way. And we run into performance regressions that other people might never see, just because of the scale of the code base, the number of servers we're running on, and the load we're getting. So with all that in mind, why do we keep upgrading and moving it forward? It's really important to us to move it forward. The big reasons are the new features: things like Turbolinks and variants reduce the complexity of our code base and give us a larger base to leverage. The more we can push into Rails itself and take advantage of, rather than maintaining it ourselves in that half-million-line code base, the bigger the benefit to us. Getting better security, updates, and practices is huge. Hiring is a big deal: as we've been growing and hiring a lot, bringing people on is much easier when we're on newer versions of Rails, because they'll either know it already or it's easier to learn from the documentation and the existing resources out there. And there's code base longevity. The code base has been around for 10 years.
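A sketch of the "translating code" idea for format changes: a version-tolerant read from a cache that may hold either serialization. The fallback translator shown here is invented; the real code depended on the specific format change being bridged.

```ruby
# Hypothetical dual-format cache read: try the new format first, and fall
# back to translating the old one, so old and new app servers can coexist.
def read_cached_order(key)
  raw = Rails.cache.read(key)
  return nil unless raw

  begin
    Marshal.load(raw)           # data written in the new format loads cleanly
  rescue ArgumentError, TypeError
    upgrade_legacy_payload(raw) # hypothetical translator for old-format data
  end
end
```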
The codebase has been around for 10 years, and it's likely going to be around for another 10 or 20 years. So we really do need to do this. There's no sticking to a particular version; otherwise it'll just become unmaintainable. If we're thinking about keeping this around for another 10 or 20 years, we need to continue moving forward with Rails. So, some recommendations for anyone working on keeping their Rails app up to date, and I've hinted at some of these already. A few big ones: avoid monkey patching Rails itself. We've done this in a few cases, it's really hard to untangle, and it ends up biting you really badly. Keep dependencies low. Every gem, every library that you depend on is going to be another thing that you need to either investigate, update yourself, or throw away and rewrite when you update Rails, if the internal APIs change. So keeping those dependencies to a minimum is really important. As I said, ship changes early and often, and have a parallel CI running so you can do that. With Rails 3.2 we didn't really do that; we had a big-bang release of Rails 3.2. With Rails 4 and Rails 4.1 we did much more progressive releases of the changes, we had CI running so we could see what was breaking, and the final flip was about 30 lines of code for the 4.1 upgrade. It was mainly one person, part time, pushing it through, which was pretty significant. Also, have a dedicated team: depending on the scope of the change, making it someone's part-time job doesn't really work in a lot of cases, so dedicating people to it for a short period of time makes a big difference. And being able to ship to isolated production servers has been a big deal for us. We've been able to ship some of these changes, particularly the marshalling changes, to a subset of servers, so we can see how they're going to behave in production and then roll them back really easily without impacting users very much. So that's it. I don't know if we have any time for questions, but I'll be around, and I'm happy to talk about how we've kept Shopify going on Rails and what we've been doing. Thank you.
|
Over 10 years ago the first line of code was written for what would become Shopify. Within this codebase we can see the evolution of Rails from the early 0.13.1 days to where we are today, on Rails 4.1. Through the history of this git repo we can revisit some of the significant changes to Rails over the years, and simultaneously observe what has withstood the test of time. By looking at the challenges that we have overcome while building and using Rails, we can inform future decisions so that we can continue to build a framework, and applications, that last years to come.
|
10.5446/30700 (DOI)
|
All right, let's get this started. My name is Ryan Davis. I am known pretty much everywhere as Zenspider, except on Twitter, as you can see. I am an independent consultant in Seattle, and I'm a founding member of Seattle.rb, which is the first and oldest Ruby group in the world. So, setting some expectations: this is an introductory talk, and there's very little code in it. I'm not going to teach testing or TDD in this talk; I'm going to be talking about the what and the why, not so much the how. I have 218 slides, which puts me at just under five and a half slides a minute, so I have to go kind of fast. So let's get started. The simplest thing we can ask is: what is Minitest? Minitest was originally an experiment to see if I could get Test::Unit replaced for about 50 of my projects at the time, with as little code as possible. I'm up to about 100 projects now. And I got that to happen in about 90 lines of code. It's currently available as a gem; we didn't have RubyGems back when it was originally written. It now ships with Ruby as of 1.9.1 and up. It's meant to be small, clean, and very fast. It's now about 1600 lines of code, which sounds like a really big increase over 90, but that's still very small. It supports unit style, spec style, benchmark style, very basic mocking and stubbing, has a very flexible plugin system, et cetera, as we'll see. There are six main parts of Minitest: the runner, which is really kind of nebulous now; Minitest::Test, which is the TDD API; Minitest::Spec, which is the BDD API; Minitest::Mock; Pride; and Benchmark. I'm only going to be talking about two parts, Minitest::Test and Minitest::Spec. So let's jump into Minitest::Test, the unit testing side of things. Test cases are simple classes that subclass Minitest::Test, or another test case, and tests are methods that start with test and make assertions about your code. It's just classes and methods and method calls all the way down. Everything is as straightforward as you can get, and it is magic free. That slide is two years old. So, Minitest::Test includes the usual assertions you would expect from xUnit, plus several beyond what xUnit and Test::Unit usually provide. Methods marked with a plus are new to Minitest, and methods marked with a star do not have negative reciprocals, as we'll see in a sec. Unlike Test::Unit, Minitest provides a lot more negative assertions. It doesn't provide some that you might expect, which I'll go into later. But one of the questions is: why are there so many pluses? I really want my code, tests included, to communicate to me and other readers as clearly as possible. Not only does the code communicate better, but the error messages are much more customized, so when something is wrong, I get better information about it. Finally, assert_equal is enhanced to do intelligent diffing. It lets you see what's actually changed, instead of a huge blob on the left side and a huge blob on the right; you get to actually see what's different between the two. But I said that I would describe why some negative assertions are missing. This is something I hear a lot more than I'd actually like to: the question of where refute_raises, or assert_not_raised, is. It's in the same place as refute_silent. So let's look at that. Let's highlight the key components of refute_silent. Refute_silent would say that this block of code must print something. What it is, I don't care. That is a valueless assertion.
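(An editorial aside, not from the talk: Minitest's real output assertions make the contrast concrete. assert_output checks for specific output, while assert_silent asserts that nothing is printed at all.)

    require "minitest/autorun"

    class TestGreeter < Minitest::Test
      def test_prints_specific_greeting
        # Assert the exact output, not merely "it printed something".
        assert_output("hello\n") { puts "hello" }
      end

      def test_makes_no_noise
        assert_silent { 1 + 1 }
      end
    end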
What you should be asserting is the specific output that you want. In the same vein, refute_raises would say that this block of code must do something. What it is, I don't care. It's a valueless assertion. Again, instead you should be asserting the specific result or side effect that you actually intend. I've heard the argument, "but it's useful." No, it isn't. It implies side effects and/or return values have already been checked or aren't important, which is always false, because you wouldn't be writing the code otherwise. It falsely bumps your code coverage metrics, and it gives you a false sense of security. I'm an ex-lifeguard; I lifeguarded in my high school days at various lakes in my county, and one of the things we really feared was parents of kids with water wings. Because the parents think their kids have flotation devices, so they'll just talk to their friends, ignore their kids, and watch them drown, like this. In other words, this only makes it look like something has been tested, when in fact it hasn't had any testing applied to it at all. I've heard "it's more expressive." No, it's not. Writing the test itself was the act of expression. It's an explicit contract in every single test framework out there that any unhandled exception is an error by definition. The test's mere existence states that there are no unhandled exceptions via these pathways. I've been having these arguments for years; in fact, I had this argument last month. And I know that some people will never be convinced, and honestly, that's okay. You can't win them all. But it doesn't mean I can't try. So hold on to your hats, stand back: I'm going to try one more time to convince all of you. It's like, if you call it, it must be okay, right? So I wrote all these extra assertions to verify that everything is okay. And if you'd like to license this code, please come see me after this talk. If only that were possible; our jobs would be so much easier. Next up is Minitest::Spec, the example testing side. In short, where Minitest::Test is a testing API, Minitest::Spec is a testing DSL. Instead of defining classes and methods, you use a DSL to declare your examples. Test cases are describe blocks that contain a bunch of tests, and tests are it blocks that call a bunch of expectation methods. Here's an example that is equivalent to the previous one: we have describe instead of class, and we have it instead of def test_something. But in reality, describe blocks really are classes and it blocks really are methods. This same example transforms one-to-one into that code: describe makes a class, it makes a method. There is no magic. All of your normal OO design tools exist and work as normal, which is really, really important. It means that include works, def works, everything is as you expect; you're just using a slightly different language. Similar to Minitest::Test, Minitest::Spec has many expectations defined, a similar set of negative expectations, and a similar set of missing culprits. And all of this is gained for free, because each expectation maps directly to an assertion. Underneath Minitest::Test and Minitest::Spec is the infrastructure to run your tests, and it does so in a way that helps promote more advanced and robust testing. Minitest has randomization baked in, and it has always been on by default. It helps prevent test-order dependencies and keeps your tests robust and working standalone.
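(An editorial sketch of that describe/it-to-class/method mapping, in era-appropriate minitest 5 syntax; newer minitest prefers the _(value) wrapper form.)

    require "minitest/autorun"

    describe "Array#first" do
      before do
        @array = [1, 2, 3]
      end

      it "returns the first element" do
        @array.first.must_equal 1
      end
    end

    # Under the hood: describe generates a Class, it defines a method
    # on that class, and must_equal calls assert_equal on the test context.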
By rule, every single one of your tests, down to the lowest level, should be able to run by itself and pass. If it requires another test to run before it, it's not standalone, it's not a unit, and it's wrong; it's buggy. And as far as I know, Minitest was the first test framework to have randomized run order. There's also an opt-in system that lets you promote a test case to be parallelized during the run, and that takes randomization to a whole other level. It ensures thread safety in your libraries, and absolute robustness, because if it can handle the parallelization, it can handle anything. The original Minitest was a tiny 90 lines of code. Over time, features have been added that you can choose to use to enhance your tests. It's still really small in comparison, but it's incredibly powerful. So what was my reasoning for Minitest's design? Minitest is not special in any way, shape, or form. All of my usual tropes apply; if you've heard me up here ranting and raving before, Minitest is no different. First and foremost, it's just Ruby. It's classes and methods and method calls. Everything is as straightforward as you can get. I believe that less is more: if I can do something in less code, I will, absolutely. Method dispatch is always going to be the slowest thing in Ruby. More importantly, less code is almost always more understandable than more code. So let's take a look. Here is assert_in_delta, which is the equivalent of assert_equal but for floats. Never ever use assert_equal on floats, and never use floats for money. There, I've done my usual caveats. It's as simple as possible, with minor optimizations: in this case, we use a block to delay the error message rendering, so in the case where we don't have an error, we don't pay that cost. And really, this just boils up to assert, so you only need to know about 15 other lines of code to understand how this works as a whole. Indirection is the enemy. I want errors to happen as close to the real code as possible. I don't want things delayed; I don't want layers of indirection in between. I kind of feel like Noel is making my point for me, because he just kept talking about the layers of indirection that RSpec adds, over and over. I want the responsibility to lie in the right place. No managers necessary, no coordination going on; I want objects to be responsible for their own duties. This may not be the best example; it's rather hard to show what you don't do. must_equal is an expectation that directly calls assert_equal on the current test context, and assert_equal is three lines long, if I remember right. And that's it. No magic allowed. Even test discovery avoids ObjectSpace. It has minimal metaprogramming in it, and it uses plain classes and methods to do all of its work. I originally wrote Minitest in part to see if I could, because I was the maintainer of Test::Unit at the time and Test::Unit terrified me. But I also wrote it because I was working on Rubinius, and I wanted Rubinius and JRuby and other implementations of Ruby that hadn't finished being a full Ruby to have the simplest possible implementation of a test framework, so they could get feedback quickly. And finally, it has a thriving plugin ecosystem. I designed Minitest to be extensible so that Minitest itself could remain minimal. Here are just a small number of the popular plugins for Minitest.
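(An editorial sketch of the float-comparison caveat behind assert_in_delta:)

    require "minitest/autorun"

    class TestFloatMath < Minitest::Test
      def test_adding_floats
        # 0.1 + 0.2 == 0.30000000000000004, so assert_equal would fail here.
        assert_in_delta 0.3, 0.1 + 0.2, 0.000001
      end
    end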
Okay, so what does this have to do with Rails? Well, the official Rails stack tests with Minitest, and each release peels back the testing onion, encouraging better testing practices. Except that peeling onions makes you cry, right? Hopefully not in the case of Minitest. Rails 4.0 was the first version to cut over from Test::Unit to Minitest, and they did so on the Minitest 4.x line. At the time, I had already declared that I wasn't going to keep updating the standard-library Minitest to Minitest 5; I was only going to maintain it at version 4. That's because Test::Unit is built on top of Minitest and has a lot of hooks into the internals, and that just made it really hard for me to ever update without breaking their stuff. So I put a freeze on that. Rails updated to Minitest 4.0 to remove a layer of complexity and indirection, but also to give themselves a migration path to Minitest 5. Because Test::Unit was already wrapping Minitest, there was basically no impact on anyone. Arguably there may even have been an almost imperceptible performance improvement, but I'm not going to claim that there was one. Rails 4.1 switched to the Minitest 5.0 line. This was painful for Rails itself because of crufty tests and the number of monkey patches they had on Minitest. But it got Rails onto the newer codebase, the actively developed line of Minitest, and it made things easier, like exec-based isolation tests: there are a number of tests in Rails where each test actually forks a process to run a separate Rails app by itself, so that tests running in parallel don't infect each other. As painful as it might have been to get Rails switched over, you hopefully never noticed. However, Rails 4.2 turned off the alphabetical ordering of tests, removing a monkey patch, and started to run tests in random order. This is solely to improve the quality of Rails and of your tests, but it may have had some impact on some people. We did get reports that, after updating, people started having tests fail that previously passed, and I had to isolate a number of test bugs in Rails itself because of this. I'm going to say, honestly: despite the pain it causes, this is a good thing. Test-order dependency bugs are problematic and incredibly hard to track down. I'll talk later about a tool that can help identify these bugs. Hopefully, future versions of Rails will keep tracking Minitest; Aaron Patterson and I keep those things in sync, and he lets me know when things are coming down the pipe. As a Rails dev, what does all of this mean? Hopefully, if I've done my job right, it means nothing. You shouldn't even have to see Minitest most of the time, unless you want to enhance it with some plugins. That's because you're subclassing Rails test cases like ActiveSupport::TestCase or ActionController::TestCase. There are about six of them, if I remember right; there might be more now. The basic architecture looks something like this: you write your own test class that subclasses ActiveSupport::TestCase, and ActiveSupport::TestCase subclasses Minitest::Test. It provides things like per-test database transactions, so you don't have to clean up: you just add a bunch of records, and they're gone by the next test. Aaron and I wound up adding before and after hooks around setup and teardown, to make it easy for Rails and other libraries or frameworks to extend Minitest with the extra wrapping they needed to do. It provides things like fixtures to load test data, and a declarative test syntax if you like that instead. And if you don't like that, you can just use def.
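(A minimal editorial sketch of a model test in that architecture; the Article model and its validation are hypothetical, not from the talk.)

    require "test_helper"

    class ArticleTest < ActiveSupport::TestCase
      fixtures :articles

      # Declarative form; this compiles down to a plain test_ method.
      test "requires a title" do
        article = Article.new
        refute article.valid?
        assert_includes article.errors[:title], "can't be blank"
      end
    end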
It also provides extra assertions, like assert_difference, assert_valid_keys, assert_deprecated, and assert_nothing_raised. Wait, what? Don't worry: this is the actual implementation of assert_nothing_raised. It's only there for compatibility's sake. Personally, I think it should be deprecated and removed; maybe that should be for Rails 5. All of this means you can write simple tests that describe your current task. Things like database transactions don't have to clutter your test code, and you can focus on what the test is really trying to do. This is a very simple example, and I think all of these test examples come from the testing section of the Rails guides online. ActionController::TestCase is another Rails extension, except it subclasses ActiveSupport::TestCase, so you get all of those goodies layered on. It extends it with more, like all of your usual HTTP verbs, simulated web server state, and assertions specific to handling requests, which lets you easily write clean functional tests like the following. I love air conditioning. I'm so dry right now. I'm just going to leave the lid off. ActionDispatch::IntegrationTest provides full controller-to-controller integration tests. As usual, it subclasses ActiveSupport::TestCase, so all of the usual stuff is there. It provides a ton of assertions that I can't fit on one slide, and it allows you to write comprehensive integration tests that span multiple controllers. If you want more details about all the stuff Rails adds on top of Minitest, you can get that here; it's described in pretty good detail, and it's actually a really good read. So Rails' approach of subclassing Minitest::Test leads to a very simple setup that remains very powerful, providing everything you need, from unit tests all the way up to integration tests. And it leverages Minitest's strengths, like randomization and optional parallelization, to make the tests you write better and more robust over time. But what happens when something goes wrong? Perhaps you want to use spec style. It turns out that DHH disapproves of RSpec so much that he wouldn't allow the test framework in Rails to be switched to Minitest::Spec; he reverted that commit. As I understand it, he simply doesn't want people submitting patches using spec style, so he didn't want it easily available. But that doesn't mean you're stuck. If you prefer spec style, that's totally fine: you can use Mike Moore's minitest-rails or Ken Collins's minitest-spec-rails, for varying degrees of that style of code. The two libraries are not the same; they suggest slightly different styles, and one tracks RSpec's style a bit more closely than the other. Or perhaps you upgraded to Rails 4.2 and now you have failures, also known as "Ryan, you broke all my shit." I'm sorry. Unfortunately, that's not as simple to deal with; it is a bit harder. But again, it's a good thing to catch and fix this sooner rather than later. Why are my tests suddenly failing? It's a test-order dependency bug. Quite simply, that means tests pass when they run in a particular order, say A before B, but not when B runs before A. If it was only three tests, that wouldn't be a problem; it would be pretty easy to find and fix. But you probably have hundreds and hundreds of tests, and that's not so easy. Or it wasn't, until a few months ago: I wrote minitest-bisect to isolate the problems we were having getting Rails onto Minitest's randomization. It helps you isolate and debug random test failures.
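(An editorial sketch of the kind of bug minitest-bisect hunts; test_b here passes only when test_a happens to run first. The command-line flags below are approximate.)

    require "minitest/autorun"

    class TestOrderDependent < Minitest::Test
      def test_a
        $shared_state = 42
      end

      def test_b
        # Fails whenever the randomized seed runs this before test_a.
        assert_equal 42, $shared_state
      end
    end

    # Reproduce with the failing seed, then minimize, roughly:
    #   ruby test/order_test.rb --seed 1234
    #   minitest_bisect test/order_test.rb --seed 1234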
In short, it intelligently runs and reruns your tests, whittling things down until the culprits are minimized. Here we're going to see a simple example of it running. Ignore the fact that I'm pre-specifying the seed; pretend that I'm running this anew, and we get this failure. We get the failure, we grab the random seed, and we rerun the tests using minitest-bisect instead. That's going to rerun the entire test suite to ensure the failure is reproducible, and once it is, it starts to bisect the culprit space down to the minimal subset. You can see that it speeds up dramatically as it goes, getting it down to the two tests that, run in a particular order, cause the repro every time. I see I'm not alone with this problem. Or maybe you're just used to kitchen-sink development. For starters, just try it; it might work. Things like Mocha and a lot of testing libraries already work with Minitest just fine. Otherwise, you're not alone, and someone probably beat you to it, so look for the existing plugins listed in Minitest's README, or on Stack Overflow, or wherever. My suggestion, and I know this is going against the flow here, is to try less complicated testing. Only bring in plugins once you've decided that you really, really need them. Otherwise, start fresh and clean. Try it, you might like it. The problem is that change takes time. Just remember that you might want to measure the before and after, in order to be more objective about the change. I've only got anecdotes of projects speeding up when they switch to Minitest. I would love more data, and if you would submit that to me, that'd be great. I have heard of people halving their test times by switching from RSpec to Minitest, but again, I don't have anything objective. So, all of this Minitest stuff sounds interesting, but why should I bother? The first argument is: "I'm not going to bother with that." To me, you're a lost cause. There's plenty of data showing the benefits of testing, and if you can't get past that, I'd rather help other people. I'm going to pick my battles, thanks. Like this one; this is a battle I want to pick every time. Obviously not everyone uses it, but the official Rails stack uses it, which means DHH uses it. Tenderlove uses it. Jeff Casimir and his cohort teach Minitest at the Turing School. Nokogiri, Haml, God, New Relic, SQLite, and a bunch of other very popular gems use it. In fact, there are more than 4,000 gems that declare dependencies on Minitest, and unfortunately, because Minitest ships in the standard library, there are plenty of gems that don't declare their dependency on it, so I'm sure there are plenty more. So what are the real functional differences between Minitest and RSpec? Being test frameworks, there's plenty of overlap, so I'm not going to go over that. Where they are unique, though, is where it gets interesting. To be fair, RSpec provides a lot more than Minitest: things like test metadata and metadata filtering, more hooks like before and around, implicit subject and described_class, fancier mocking, et cetera. Basically, it's fancier. Minitest, by definition, doesn't offer as much. Some of the stuff that made it unique has been adopted, like randomization, but benchmarking, parallelization, and speed are the main distinguishing features. Basically, it's simpler, more pragmatic, and snarky. And it's the cognitive differences with RSpec where things really start to diverge.
And this is where I think that Noel's talk actually proves my point quite a bit. Myron Marston, a few years back, wrote a really great response on Stack Overflow comparing RSpec and Minitest. It was a bit biased, but honestly, I think it was rather fair. The problem is that it's really long, and the meat of it is in this first paragraph here, but even that's pretty long. And apparently I've been working with Tenderlove for too long, whose attention span is that of a ferret on methamphetamine. There's so much there that I have a hard time dealing with it all at once, so let's focus on that one paragraph. I'm going to color code it: red for the RSpec points and blue for Minitest. I'm not going to read that crap too many times. So, Myron thinks this is why RSpec is great, and I think this is everything that's wrong with RSpec. And we're both right. Philosophically, we're both right. We have different goals, and we have different perspectives on what "good" is. So, back to that paragraph. Let's try to boil this down even more for the ADD crowd. Even that's pretty long, so let's break it down into a simple table. Example groups: Minitest compiles describe blocks down to simple classes, where RSpec, and I'm paraphrasing here because he didn't say what it does, reifies testing concepts into first-class objects. And that word, reify, is the big red flag for me with regard to RSpec. If you're using that word, you should be coding in Haskell. Examples: Minitest compiles it blocks down into simple methods, whereas they use first-class objects. Minitest uses inheritance or mixins for reuse, where they use shared behaviors as first-class constructs; and Minitest uses simple method calls for assertions, versus first-class matcher objects for expectations. What do you mean, that's still pretty long? Let's boil it down further. Minitest uses a class; they use a first-class object. Minitest uses a method; they use a first-class object. Minitest uses subclassing or include; they use a first-class object. Minitest uses method calls; they use a first-class object. "First class" simply means that you can assign something to a variable and use it the way you can use any other value. And it just so happens that everything Minitest uses is Ruby, and nearly everything in Ruby is first class, so that's not a good distinction. Everywhere that I could use Ruby's mechanisms, I did. And everywhere that RSpec could reinvent, they did. So let's take a look at where this starts to get more cognitively complex. I think this is best illustrated by examining how RSpec works. Here we have two nested describes. Each one has a before block, and each one has a single example. Yet the before blocks seem to be inherited, and the examples are not: there will be exactly two runs here, where the first one uses one before and the second one uses two befores. So is nesting the A and B classes like subclassing? What's the analogy we can use to understand this? Well, if they're classes and nesting is like subclassing, then we need this undef_method business to ensure that we don't inherit any tests from our superclasses. This is the approach that minitest/spec uses, and it sucks. But that's the runtime behavior RSpec users expected out of minitest/spec, and it was something I wound up having to put in. What about this analogy: are before and after like included modules?
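(An editorial reconstruction of the kind of nested-describe setup being analyzed here; it is not the talk's exact slide.)

    require "minitest/autorun"

    describe "A" do
      before { @a = 1 }

      it "sees the outer before" do
        @a.must_equal 1           # run 1: one before block fires
      end

      describe "B" do
        before { @b = 2 }

        it "sees both befores" do
          (@a + @b).must_equal 3  # run 2: outer and inner befores both fire
        end
      end
    end

    # Two examples run in total: before blocks are "inherited" inward,
    # but the outer example is not re-run inside the nested group.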
If that's the case, then we don't have to undef any methods, but instead we need to generate a bunch of inner modules, have a bunch of includes, and set something up to intelligently call super, and because of the complexity, this sucks too. This is basically another object model. To effectively use RSpec, you need to learn a whole separate object model that sits on top of Ruby's object model. That's doubly confusing if you haven't already learned the Ruby object model, or if you're trying to learn both at the same time; you're going to get overwhelmed. What ends up happening is that it encourages users to hand-wave. Noobs are just learning Ruby. They don't have the time or the ability to dig in and learn both object models, so they hand-wave the oddities away. What else are you going to do? It encourages them not to know what describe and it actually are or do (and if you didn't see the previous talk, there's a lot of stuff that goes on inside those describes), and it basically says: here's the magic incantation to do X. When you're a beginner, any sufficiently advanced technology is indistinguishable from magic, thanks to Arthur C. Clarke. Now I need to rename my talk. Sorry, Noel, but RSpec is magic. From that post of Myron's: he said that many people find it to be overkill, and that there is an added cognitive cost to these extra abstractions. Indeed. Here are the raw numbers of that added cognitive cost. Don't bother grokking these numbers; we're going to visualize them on the next slide, for both the flog line and the comments-plus-code line. Flog is a complexity metric, proportional to how hard something is to test, to debug, or even to understand. Here, RSpec is 6.6 times bigger. Then there's the combined meat of each project, code plus comments, which is basically how much you'd have to read to understand each library. At 8.5 times bigger, it's akin to reading Dr. Seuss versus James Joyce. Do you like my Dr. Seuss font? I think I faked that really well. So, back to that added cognitive cost. As we're going to see in a second, it's not just cognitive cost; there are performance differences as well. All those abstractions, all that reinventing of the wheel: it has a real cost. Here we have some fairly complex plots. They show the run time in solid lines and the amount of memory allocated in dashed lines. The green lines are 100% passing, the red lines are 100% failing, and the others are in between. As you may notice, there's a slight problem with these charts; can anyone guess? They're not using the same scale. Once they are, you can see there's a severe and painful difference between RSpec and Minitest: failures show exponential growth in RSpec, while run time stays near zero in Minitest, and memory is linear and always lower. "But Ryan, who cares? Passing RSpec is fast enough." Indeed, I actually agree with that. Everything is sunshine and roses as long as everything plays nice. But what happens when it isn't? You have a bug, or you refactor, or anything else goes wrong. Oftentimes, when I'm refactoring stuff, I'll make a change and have 100 tests fail, and all of a sudden you pay. For completeness: the speed of the actual assertions in both systems is purely linear, and the speed of running those tests at the method level is also linear. And because they're linear, please do not try to regain any speed by reducing examples or expectations, or otherwise reducing the quality of your tests.
If you want to speed up, Minitest will always be faster than RSpec, pretty much by definition. So you can switch to Minitest, or you can never refactor or have any bugs. So, in summary: at the end of the day, as long as you test, I don't actually care what you use. Use the best tool for your job. Hopefully I've shown the technical merits of Minitest. Choose what works for you, not what merely seems popular. Oftentimes I hear that people chose RSpec because there's more documentation available for it. But maybe there are fewer articles about Minitest because there's less need for them. Minitest is much easier to understand; you can read it in a couple of hours and understand it head to toe. So maybe Minitest users aren't missing. Maybe they're just busy getting stuff done. Choose what works for you. Who knows? Try it, you might like it. After all, it's just Ruby. Thank you, and hire me.
|
The rails "official stack" tests with minitest. Each revision of rails peels back the testing onion and encourages better testing practices. Rails 4.0 switched to minitest 4, Rails 4.1 switched to minitest 5, and Rails 4.2 switched to randomizing the test run order. I'll explain what has happened, explain the motivation behind the changes, and how to diagnose and solve problems you may have as you upgrade. Whether you use minitest already, are considering switching, or just use rspec and are curious what's different, this talk will have something for you.
|
10.5446/30702 (DOI)
|
So my name is Kaye Luke, and I have been working as a software engineer at New Relic for about the past year and a half now. I am here today to talk about a time zone feature that I worked on and shipped, and it's a tale of failure, of code going wrong in production. I'm going to tell you about the mistakes that I made, so that you can make different mistakes when it comes to time zone code. So here is my plan for today: I'll talk a little bit about why time is hard for developers to deal with, I'll go over the problem I was trying to solve and how I thought I would solve it, and then I'll walk through my multiple attempts at a solution. So, show of hands: who has already had to deal with time zones in the code you work on? Oh, okay, cool, a lot of you. And what about people who have had weird and gross bugs where it just sort of looks like time zones are involved? Okay. So, just so that we're on the same page, I wanted to pull out a couple of fun examples of why dealing with time zones can be pretty exciting. Here's one: it turns out Argentina didn't used to do daylight saving at all. But then, for a couple of years, the government decided to experiment with it, to try to save energy, maybe. And for each year that they did daylight saving, the government would set the start and end dates for it, which means there wasn't a fixed annual schedule that you could program some logic around. It was completely arbitrary. Here's another one, which I found in the documentation for ActiveSupport's date calculations: in October 1582, days 5 to 14 just don't exist at all. This oddity is due to the switch from the Julian to the Gregorian calendar. And the funny story about this is that when it happened, people actually thought the church was literally stealing days off of their lives; they thought they were going to end up dying days sooner. Noah Sussman wrote a couple of really great blog posts on the many, many false assumptions that programmers make about time. Some of my favorites: even though there are usually 24 hours in a day, there aren't just 24 time zones, because time zones aren't just an hour apart. They can be 30 minutes apart, 45 minutes apart, all sorts of odd intervals. Another great one is that time doesn't always go forwards. If you want to hear more about that, those blog posts are linked from the slides, which I'll put up afterwards. So really, we can conclude that time is just this big ball of wibbly wobbly, timey wimey stuff. And that brings me to our first lesson, which is to trust nothing, including our gems. I don't know if there's another community where I could recommend that without someone trying to stab me. Anyhow, if you want more proof, here's another example from a Rails console. We do a simple comparison, one month to 30 days, and we find that, yes, those two are equal. Then we grab Date.today; at the time I was putting this slide together, that was Thursday, April 9th. Fair enough. Let's do some math: we ask for one month ago from today, and we get March 9th. Fair enough. And then if we go ahead and take that date minus 30 days, which, remember, Rails just told us is equivalent, the answer is in fact not the same. So just remember: trust nothing, including Argentina, and including math. So now I'm going to talk a little bit about the problem I was trying to solve. At New Relic, we have a feature that sends weekly email reports to our customers, because, as we like to say, Mondays need a little bit of help. This is set up with a cron job that enqueues the weekly email jobs for each account, and those jobs then run to build and send the email reports. So in this way, we have a separation between scheduling something versus executing it, or what you're doing versus when you're doing it. Now, we would run this cron job that kicks everything off on Monday mornings at 10am Pacific time. The reasons for this are historical: New Relic is a company that started in the Pacific time zone, and this is the kind of thing that makes sense when you're small and you don't have that wide a spread of customers. Also, at New Relic we have a culture of people not working on the weekends, so it's great to have something like this run on Monday morning in case there are any problems, so people can catch them without having to be on call over the weekend. However, even starting at 10am Pacific, we're already well into the current business day for much earlier time zones, such as Tokyo, where the entirety of Monday has already passed for this report, which is supposed to arrive on Monday. In addition, New Relic has gotten many more accounts than when it first started, and with more and more emails to send out, it was taking longer and longer to finish all of those jobs, which led to even more customers receiving their Monday weekly report on Tuesday in their own time zone, with weekly summaries offset from what they would expect. So here's our proposed solution: it's a Monday morning report, so people should get it by their Monday morning, not ours. Here's a sample of some potential accounts and time zones, and what it would mean, for us in Pacific time, for when those reports should go out. We set the reports to run shortly after 1am local time, to avoid anything weird happening right at the top of the hour. So this is the setup we're aiming for: we schedule each job to run at that just-after-1am local time, some of the earlier jobs would have to run on our Sunday, and we kick off the whole scheduling process a lot earlier, on Saturday, to get ahead of all the time zones. There's another nice bonus here, which is that the reports get sent out spread over a day rather than all queued up at once, so we smooth out the spike we used to see on our background workers every Monday morning, and we get fewer alerts. So here's my first attempt at solving this. The first question I had is: what date is the next Monday? Playing around in the console: Ruby has the Date library, and there's also this class DateTime, which has this parse method. You can give it a day of the week and a time, and it creates a date and time from that. So here we've got "Monday, 1:10 AM". Now that I have that, all I need to do is convert it into the local time zone for that account. So first, I'm going to do a quick overview of some relevant time and date classes that you might end up using in Rails.
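(Reconstructing that console demonstration; the dates are the ones given in the talk.)

    # Rails console, Thursday, April 9, 2015
    1.month == 30.days     # => true
    Date.today             # => Thu, 09 Apr 2015
    1.month.ago.to_date    # => Mon, 09 Mar 2015
    Date.today - 30.days   # => Tue, 10 Mar 2015  -- "equal" durations, different answers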
And then if we go ahead and do that date minus 30 days, which remember, that is two hour of privilege, the answer is in fact, not the same. So just to remember, trust nothing, including Argentina, and including math. Trust or not, there. So I'm going to talk a little bit about the problem that I was trying to solve here. I work, we have a future to set weekly email reports to our customers, because as we like to say, Monday is a little bit of help. This is set up with a client job that can use the weekly email jobs for each account that then run in order to build and send those email reports. And so in this way, we have a separation between scheduling something versus executing it, or what you're doing versus when you're doing it. Now, we would run this client job that takes everything off on Monday mornings at 10 am, the second time. And the reasons for this are historical. New Relic is a company that started in the Pacific time zone. And so this is the kind of thing that it makes sense when you're small, and you don't have that wide spread of customers. Also at New Relic, we have a culture of people not working on the weekends. And so it's great to have something like this on Monday morning in case there are any problems. So people can't get that and not have to be checked with right away. However, even when starting at that 10 am for us in the Pacific time, we're already low into the current business peak for much earlier time zones, such as Tokyo, where the entirety of Monday has already passed for this report, which we're supposed to have on Monday. In addition, New Relic has got to have many more accounts than when it first started. And with more and more emails to send out, it was taking longer and longer to finish all of those jobs, which led to even more customers receiving their Monday and weekly report on Tuesday in their own time instead, with weekly summaries that were offset from what they would expect. So here's our proposed solution. It's a Monday morning report. People should get it by their Monday morning, not ours. So here's a sort of sample of some potential accounts and time zones and what it would mean for us in the Pacific time of when those reports should go out. We set it up at 10 am to avoid any year-tests eventually leaving at the top of the hour. So this is again the set up for the holding that we have here. We set them up to run that 10 am local time. And some of those earlier jobs would have to be running on our Sunday. We'll kick off the whole process a lot earlier on Saturday in order to get ahead of our time. There's another nice bonus here, which is that the reports will get sent out spread over a day rather than trying to queue them up all at once. And so we can smooth out that fixed by we used to see on our background here every morning. And we have fewer community alerts there. So here's my first attempt at this all this. The first question I had is what date is the next Monday? And in this case, playing around with the console, we have a date library that connects to each date. And we have this class date time as well, which is this method first. It gets a day of the week and a time and it creates a date on that. So here we've got a Monday, 1 o'clock and a time. So now that I have that, all I need to do is convert it into the local time zone for that account. So I'm going to do a quick overview of some relevant time and date classes that you might end up using in Rails. 
First, Rails is used in a different time zone with daylight saving and taking it into account. The built-in data source for that is the IAEA time zone database. And it's everything and every so often to reflect changes into time zone boundaries and changes to UTC offset or daylight saving pools. And so you might think that this is pretty static, but it updates quite quickly really actually. This is 22 updates in the last three years. And so you need newer versions of TZ and SO to develop these changes. But it's not like active record to get this back to the old version, so you can't actually update it without updating Rails. Unfortunately. You know, if you're in a situation where we're working on it, we'll see how it goes. Anyway, there's also this other helpful useful fact of actor support time zone, which is a wrapper around what you think is kind of useful. And among other things, it lets us limit the set of time zones from TZ and SO to a quote, meaningful subset of 146 out of the many, many more that are available. And so if you compare this map of the 24 bands of time you'd expect fully with this listed very tiny hunt of all the next domains you might only get access to. And so speaking of which, another thing that you can get out of this is friendlier friends zone names. So that instead of having something like America Slash in yours, you can call these your kind of kids' sets. And so if you want to play around with this in your Rails console to try to get some of those friends of the names, you can run something like this where I just map me and collect those values and filter them for a specific offset. So here is the list of time zones with the plus 12 offsets, and these are very early time zones, often along in New Zealand. And then some very neat kinds of zones minus 11 like international, native, and west, or other things and whatnot. You guys have noticed when you play around that there actually isn't a time zone listed in here with UTC minus 12 as the offset. And this is because that time zone covers the Baker and Howlett islands which are uninhabited. So they are not considered for the meaningful subset of 146. Anyway, time to our question of how do I get that eight times into the local time zone? Well, that accuracy work, a kind of cost, provides us with this method, local to UTC, which given a time zone and a date and time, it will cover anything that is in there for us. So, very useful. So this is what the method ended up looking like for this first attempt here. We have this method that takes in the account time zone, which is the pass-on end. We use that date and time dot course to get us the next Monday, not 1am. If for whatever reason that account will have a time zone set, we're going to use a default of the second time, which is the behavior that we've had before this change in my case. And then we'll just go ahead and use that local to UTC method to get us the UTC method and want this job to end. Pretty straightforward. So, yes, of course, we have to have a test. And so when you do a testing related to time stuff, it's useful to be able to freeze time or travel forward if you want to have a consistent timeframe for these tests. Basically, it lets you smooth the system time. And time cop is the gender that you need this way. Rails 4.1 actually has some time travel stuff already built in. But again, we're not quite on to the first detail stuff. 
So your test will generally follow this kind of setup, where you freeze time at a specific time, and then you'll have your tests. We also have to make sure to have this tear down in there after your tests are done, because otherwise, when you run them, you'll get stuff like this. Negative time here is important to be able to park your time. And then that will give you a feeling of that. Anyway, the tests then are set up that I wrote, something like this, where it's nice to have those daytime dot parts in there so that it makes the day and time work pretty easily readable for the next developer that's in there. And I just have a test here for the times of Western UTC and mountain time, saying, you know, this is the exact day and time I expect to get out as a result of this. So, fair enough. So that's one time zone on the west, and I also picked a time zone on the east end UTC, a cop time, just to have it in there. And everything passes. Awesome. We also talked a little bit about some scheduling changes that needed to be made. So just as a reminder, this is the setup that we had before where everything is just achieved up immediately, and the job starts off at Monday afternoon. And this is what we want it to get to, where each one is in queue with this local 1-an time and we'll hold it this start off on Sunday. So pretty straightforward to change this to start on Saturday morning instead. It's a daily contract that we're running until it's Saturday. Go ahead and run this script to start to get to the end of the day. And when that happens, the jobs are now using this dot queue at versus dot queue. Link queue at lets you pass it at that UTC time stamp of once this time is in the past. And overall, it just really didn't seem like I could change. I got it reviewed by people and everything looked good. It was a total failure. Instead of curing up a bunch of emails to be sent throughout Sunday and Monday, we started seeing tons of emails getting sent immediately on that Saturday morning set. And these emails didn't really have useful data. I was at a conference at the time actually and I had to run off and find a phone room and get help from a co-worker who really shouldn't have this family instead. And then we had to go into the service manually to run this. And it was just the worst. So I'll go over why exactly this could fail as well as how I ended up fixing it, but here are a couple things to think about first. Lesson in particular is that I really should have set it up so that there would still be a way to trigger the reports to run immediately without that time delay schedule and without having to run that command. It's really hard to watch out for pitfalls that you don't know are there. And as we already discussed, there are a lot of ways for time-related things. And so to encounter the worst-case scenario, you should assume that you're going to mess up something and have a plan for what you'll fall back to. One of the others that was semi-inspirational because I really liked it is let's try to make better mistakes than that. It's not going to be mistake-free, but we can at least not make the same mistakes that we did before. And also, of course, after all of this happened, I then finally thought of a way that I could have rolled out this change in stages rather than all at once. I could have set it up with some of our internal accounts to do a bit of a test run and have it run through the whole system then to see whether it would work. So what happened here? 
That week's emails ended up being queued up for one-way vocal time for the last Monday, so it was already in the past and that's why they started running immediately. And the problem that we have here is that the group couldn't tell whether it's the closest date or the last date. So we take a look at some of the results in the console here and date that today, it's that Thursday, and we're just going to slide together. So let's say that that's that Thursday on the calendar there, and when we put in that date time dot course for Monday, 1 o'clock a.m., what I wanted was this Monday in the future, but what I thought was this Monday that had already passed. And so this brings us to lesson number four, which is that today's console results are not, in fact, tomorrow's console results, especially if tomorrow is the start of a new week. Turns out time methods are time-sensitive. I was taking for granted my own current date when doing testing during the console, and I had just been taking it for granted as part of the environment. I really shouldn't have done that. But you have tests, you say. Yes, I did. The problem there was it turns out that date time dot course isn't really friends with time-hots, free things, and travel time. So we changed the test setup so that we pick a very different date. In this case, it's just May 1st, previously the test were about April. And then we freeze time then. May 1st happens to be a Friday, and the date that we want is May 4th, which is the date in the future there. I assume that date time dot course would be context-aware, and somehow no to be given in the next Monday. So if I pride into the setup for the test then, it's correctly May 1st, which is probably headset for dot-coc. And if we go ahead and run that date time dot course from Monday, when I went am, we in fact get April 6th instead of that May 4th that we were looking for. So that's pretty strange. And so when it's right, we look back at the test setup again, essentially the time dot freeze didn't have any effect on the setup for the test at all. And the test only began to pass because they happened to run in the right week, because I had 5 days, or right around the time, and I had 5 days that confirmed what I already knew from running the console and that this happened to work at this moment in time. So even after all the mess from that week, I think it's a work on Monday, realizing that I could put a failing test on master, affecting everyone that works on that codebase. And it was pretty sad. So I bring this now to lesson number five, which is to guard against right and false passing of tests. Even though my tests were passing, it was really a fluke that they were passing at all, and I should have been more careful in the test than I wrote to catch more situations, rather than confirming what I already knew to be true. So now I'm going to go through a little bit of the actual solution that I ended up with. At this point I was like, oh yeah, that's like a thing. We can do that. So that first step there, where we have these tests before, very first thing is it's going to use days that are right around now, and have this be in actual. So I'm just taking some random dates, and I have to use them on the dates of February, and then we'll be able to make them all. And I also decided, as you might know, to make it easier for the next person to read it, I'll add in just a quick sanity check that this new data is in fact on Monday. Given what we're trying to do here. 
And so just so people don't have to go through and look up what day of the week was, February 10th, 2015. And this doesn't, in fact, now start failing and demonstrates that it should be harder to date your job person than to talk, not be different to each other. So now I need another way to get that next Monday, besides this really simplistic, recent date-backed job course. But also thinking back to that lesson number two, having a backup plan. I realized, you know, we might need to run this script after a Saturday, if that Saturday job fails. And so this is the ideal where, if everything goes well, we run this script on Saturday morning. All the reports are set up to run for the future still for their local time on days, and the report should cover that most recently. But if something fails, though, which we know we need to plan for, we might need to run this script later, like on a Sunday, but it should still pick that same Monday and run the report for that past week. And additionally, if you think about it, at some point, though, we're going to have to transition so that if you're running the script later in the week, it knows to set up the reports for early the following Monday. And that's how we do it there. So we have this question of how do we distinguish between when we want to have this week versus next week? And it is going to be confusing to think about because there's a bunch of time frames that we're looking at here. The one that you yourself have to look at it in for me is that specific time. There's the service time zone, which we've been in this, and by the way, it's server time, so apparently you can't count on it to be accurate. In fact, we have the customer's time zone that we're talking about here, which could be ahead of us or behind of us at the time we're running this script. So we need to have some kind of logic that comes up globally. What I ended up coming up with was if it's still Monday somewhere, any here, we'll think of it as this week still, for which we want to run this report. And then once it's at least Tuesday everywhere, if you run the script then it should schedule reports for that next Monday, which is the final lesson six here of figuring out a global consistent logic that comes with a lot of the edge case coding more straightforward. So this is what the method I'm going to change you to. We're going to go ahead and grab them today for one of the last two times, just globally, that international deadline last year. Firstly, we're going to do some simple math to figure out the days until the next Monday. And then we're going to, as before, get the user to the account time zone or a default of this. And then we're going to use this formatted offset method. This returns the offset of the time zone as a formatted strand, so like this plus zero seven. And now that we have all those pieces, we can feed that entity to that person again with a specific date that we know is the next Monday. The time, 1 o'clock and a.m., which is 10, that is going to be the state for all of the accounts regardless of the date of the time. And then that number of hours offset for the current time zone. And now finally, those original two tests passed. When are we learning anything? It's not enough coverage there and there are some other scenarios we need to test out. So one situation that is very important for us still is whether we're inside or outside U.S. state-like saving plan. There's plenty of other educations, but for us this would be a little bit more coverage. 
And when we add those tests, we do in fact find one of them failing. If you look closely, the result is off by one hour. The reason is that the formatted_offset method alone doesn't care about daylight saving at all; it has no context of the current date. You need to go through .now, the zone's current time, to get it to take the date into account. Once we've done that, the tests pass. So at this point we've covered the normal case, where we run the script on Saturday and it picks up the Monday right after. But we had a couple of scenarios, like running the script late for the most recent past week, where things get a little unintuitive. To set up these examples, I'm going to use a really late time zone, American Samoa, and a really early one, Wellington. In the first case, what's going on is that it's already Monday for the customer's account before the scheduling even runs; it's Sunday where we are. That report should really already have run, so it gets scheduled with a date and time that's in the past for the account: we're running the script on the 1st of June our time, and it should parse to the 2nd of June. And fortunately, that works. As another test along similar lines, I'll say it's very late on Monday, but it's still Monday somewhere, so we should not skip ahead to the next Monday; it should stay on that same 2nd of June. And this test also passes with June 2nd. Awesome. Then finally, we need to test the scenario where we happened to miss the boat for that last week entirely, and for whatever reason we're running the script ahead of schedule, on a Thursday instead of a Saturday. For this one, we pick a time where we know for sure it's Tuesday everywhere; it's already Tuesday even in American Samoa, the last of the zones. And then we check what happens for the account: rather than the 2nd of June, it should now be the 9th of June, and it should always be in the future. And the method handles that too. So if we look at the summary of the tests: when I initially implemented this, there were only those two pretty simplistic tests, and now we have much more detailed coverage of all the different scenarios we expect, which is also really useful as documentation of the intended behavior. For me, this turned into a bit of a checklist of tests to write when making time zone-related changes. I already had coverage of time zones west and east of UTC, and I had pinned down exact start and end times to make the expectations easy to see. The third bullet point, though, is the first thing I should have done: pick dates in the future or the past, not the current week you happen to be coding in. In addition to that, pick dates both inside and outside of daylight saving time. And for greater clarity, add a sanity check making sure that the dates being used actually fall on the expected day of the week.
So that's pretty useful for future maintainers of the code. As well as the scenario that we decided was pretty important for our backup plan: tests that trigger the script on different days of the week. And finally — I didn't get around to doing this, especially in terms of time to prep this talk — I really should also have a test for what happens on the actual dates of the daylight saving transitions. And it's possible that if I write this test, I'll discover some bugs in my code, like reports getting duplicated or skipped, potentially. And so finally, we have a couple of changes to the scheduling process as well, around that lesson that we must have a backup plan. I set it up so that there could still be a way to have the jobs get queued up much in the same way they were before, using that .enqueue versus .enqueue_at with a time, in case there was some undiscovered bug related to trying to calculate that time. So just with this queue-later flag, I'd be able to trigger one pathway or the other. And there's also that lesson of doing an internal test run. The way that I set that up was, you know, I have a couple of test accounts in there, and I set those test accounts to be at opposite ends of the time zone spectrum. And I generated the reports to get sent to myself via that new pathway, toggled by the queue-later flag. And when I emailed them to myself, I used a plus-addressed email address, so when I received it I knew where it was coming from. And I put that in there and let it bake for a couple of weekends before actually rolling this change out to production. And when I did that, I did in fact receive these emails at around one in the morning local time for both of those accounts — at 1:07 a.m. and 1:07 p.m. Pacific time, due to the offset between those accounts' time zones. So that was the payoff. All right. So finally, as a summary of all the lessons we have here: lesson number one, remember, trust nothing, including your server's clock. Have a backup plan. Do an internal test run if at all possible, so that if it fails, it's not quite so publicly visible. Remember, when you're testing, that today's console results are not in fact tomorrow's console results. We can guard against writing false-positive tests, and in particular the checklist helps us catch some of those situations. And finally, figuring out a globally consistent logic will go a long way and make it easier to verify what the code actually does. Thank you.
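As a footnote to that checklist: Rails' built-in time helpers make most of those scenarios straightforward to pin down. A minimal sketch (the assertions are hypothetical placeholders for the code under test):

    require 'active_support/testing/time_helpers'
    include ActiveSupport::Testing::TimeHelpers

    # Pin the clock, so today's console results stay tomorrow's results too.
    travel_to Time.utc(2030, 6, 4, 12, 0, 0) do    # a Tuesday, far in the future
      # ...assert the report gets scheduled for Monday, 2030-06-10...
    end

    travel_to Time.utc(2030, 11, 3, 9, 0, 0) do    # the US DST fall-back date
      # ...assert no report gets duplicated or skipped...
    end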
|
For developers, there are two things that are certain for time zones: you can’t avoid having to deal with them, and you will screw them up at some point. There are, however, some ways to mitigate the pain. This talk will discuss tactics for avoiding time zone mayhem, using a feature to send out weekly email reports in a customer’s local time zone as a case study. It will cover idiosyncrasies of how time zones are handled in Ruby and Rails, how to write tests to avoid false positives, and advice on how to release time zone-related code changes more safely.
|
10.5446/30706 (DOI)
|
Okay, so, to disappoint everyone: the title is Speed Science, and we will not actually be talking about methamphetamine production, but we will be talking about science. We will be talking about goals, and our main goal is getting faster and achieving more. One of the things I like to think about whenever I am dealing with any kind of performance problem is that it's just a bug. This is really comforting for me, because mostly in my applications I do bugs — every day, every hour, every minute, you know, a lot. And the great thing about bugs that I really love is: if you have a bug, you can reproduce it, and if you can reproduce it, then you can squash it. So, a little bit of audience participation here. Does anybody know what this shape is? Okay, so I heard triangle. Any others? So I hate to tell you, but this is actually a square. This is known as a speed square. It's used in woodworking. It's used for making fast angles — here you can see it making fast angles — and square edges. So for now, well, for the rest of this talk, this will be a square. You've been informed. So what we're going to do is make our own speed square. It's slightly different, and it looks something like this. We've got I/O, CPU, and RAM, and these are all resources that we can trade for speed. Similarly, we can trade one resource for another when we're bottlenecked on one. So, okay, how do we make things fast in general? Well, first we're going to find a bottleneck. On the subject of bottlenecks, I highly recommend, if you don't have an industrial engineering background — or even if you do — checking out this book called The Goal, by an author whose name I can't pronounce. So just google "The Goal" the book. And that's all I'm going to say about that. Once we've found a bottleneck — or even to find a bottleneck — we can use the scientific method. It's going to look a little bit something like this; we'll touch on it later. So I kind of started off relatively quickly. For those of you who don't know me, my name is Richard Schneeman, or Schneems. I really, really, really love Ruby. And as Pee-wee Herman once famously said, if you love something, why don't you marry it? And so I'd like to introduce you to my wife, Ruby. She is actually a Python programmer — it's like a house divided. And we're really excited to announce that we're having a child. The baby's coming in June. We've been talking a lot about names — is it going to be a boy, is it going to be a girl? And I was like, all right, if it's a girl, we have to name her Pearl. And if it's a boy? I don't know, Haskell? And then we were like, well, no matter what, the middle name should be something we could both agree on. And we decided on — we decided on Sudo. We'd be like: sudo, pick up your room. Sudo, pick up your room. So I work for Heroku on the Ruby buildpack, among some other things. And I'm not going to have time for questions at the end of this, but we will have a booth that's going to be open tomorrow. Please come by the booth; I will be there a lot. Also, Koichi Sasada has a talk tomorrow. Please go to that; it's going to be really good. Also, Terence Lee has a talk, so please check that out. Also, I'll be doing a book signing — I wrote a book, and we'll be giving out free copies. If you come by the booth, you can get some more information. I also have the Rails commit bit, and I run a really small conference down in Austin called Keep Ruby Weird.
All by myself — and not with Caleb Thompson, sitting in the front row. Very tricky Keep Ruby Weird chair right there. Okay. So I'd like to start out with a story. Once upon a time, there was an application, and I was working on this application. It had some requirements. It did some things. It downloaded a really large, large zip file. It then decompressed the file, and we did some work on that file. And that was pretty much it. That's the gist. That's really all you need to know. Was this thing fast? Does it sound fast? It does not sound fast. Taking a look at our speed square, we can kind of look at those individual things and say, okay, what resources are we using? It's like, okay, we're using I/O. We're using I/O and CPU. We're also using RAM and CPU for the processing. So first of all, I have a hypothesis, and I was like, okay, this thing isn't fast — let's use some more CPU. Heroku has this really nice thing, this really neat feature that we launched relatively recently, called the PX dyno. It's got 6 gigabytes of RAM, and one PX dyno is actually an entire AWS instance, which is pretty cool. And the benefit of this is there's no sharing on it. We're not going to subdivide that AWS instance into other smaller containers, so you get less noisy neighbors. Once we have this, I've now got a bunch more resources at my disposal, but now I need to take advantage of them. To do that, we can use more threads, and more threads is going to mean more CPU utilization. Has anybody ever used Sidekiq? Okay, cool. There are going to be a bunch of rhetorical questions, a lot of hand raising — get that blood flowing. Okay, cool, lots of hands, so you're familiar with Sidekiq background workers. The first thing I did with this, I was like, oh, I have this huge box, I'm going to set Sidekiq concurrency to something huge, like 30, and at the same time we're going to be downloading, unzipping, and processing with 30 threads, kind of all at the same time. And when we did this — well, it was definitely faster than when we were doing it sequentially, but it was still pretty slow. I was like, man, why is that? My next theory was that we still had unused CPU. I was looking at some metrics, and like, CPU — we still had plenty of it. I was like, okay, great. What changes? Crank it from 30 up to 60. And does anybody know what happened? Not really. Okay, well, yeah, it got slower. When this happened, I was like, come on, you've got to be kidding me. So it turns out that we weren't CPU bound, so we can strike that off. We go back to our little diagram and say, oh, what else do we have a lot of? What else are we doing a lot of? It turns out that we have a lot of I/O in this process. So there are different types of I/O. First, we have network I/O, so we're actually downloading the file — we're using the network adapter. On the second one, we're only using the disk. So my first theory was like, hey, networks are inherently slow, so that's got to be the problem. To test this out, I tried a bunch of different solutions. I tried a bunch of different libraries — curl, shelling out, various HTTP clients. And then I came to try the thing that all Rubyists advocate when they're talking about making things fast. You might have heard caching, but that's not quite it — what everybody really advocates is using Go. That is the way to make Ruby faster, according to 90% of people on the internet. By the way, 70% of statistics in this talk are going to be made up on the spot, including this one.
One of my co-workers has a really great utility called htcat, and this is written in Go, and it will parallel download and stream large GET requests. It's really cool. There are other libraries like it. One more thing I should mention about this one is you can actually start piping it — it will start streaming to the pipe, similar to how if you cat a file, you can pipe that to something else — and it will start doing that immediately, as soon as it can. Unfortunately for us, and maybe for a lot of people on Twitter, Go is not the answer, and network I/O is not the problem. So that kind of leaves us with really one thing, and that's disk I/O. In this case, it was pretty simple to test that out. So I had Sidekiq concurrency of 30, and I changed that to 4. And then everything was like screaming fast, like really fast. Previously, we were not able to keep up with the queue, and then all of a sudden it's like we've got spare cycles. And what actually happened was our hard drive, the disk, was at max read-write capacity. We were downloading that file and putting it onto the disk, and we were unzipping it and basically copying from one area of the disk to an even larger area of the disk. And we had all these threads just kind of sitting there idle. It's as if you had a dinner party of 30 and you only have four forks, and everybody's just grabbing a fork and trying to eat as fast as they can, but nobody can get anything done because there's a limiting resource. So in the end, we actually ended up using 2X dynos. And the moral of this story is that you can blindly try things over and over, but it's not very scientific. It will work eventually, but maybe there's a little bit of a better way. So going back, I had a slide of the scientific method, and we can take a look and see here are the things that we kind of did. You heard me use the words theory and hypothesis, then I tried something, and then we observed from that, and then kind of repeated the whole thing over and over and over again until we got the results that we wanted. Notice that we are sort of missing out on this one, on this research one, which is pretty critical. So one thing that is very, very important in all of science is that it's repeatable. Does anybody know who these two people are? Fleischmann and Pons. Fleischmann and Pons — wow, that's great. I was not expecting anybody to get that. Okay, so this is Fleischmann and Pons, and this is the cold fusion story. So fusion is the thing that powers the sun, and the idea of cold fusion is that you could have that at temperatures less than the sun, which would be great, and it would provide a lot of energy. And these people claimed, and actually published, and said, hey, we've done it, we've done it. And unfortunately it wasn't repeatable, and lots and lots of people tried it. And it is good in the sense that it kind of showed that this whole peer review thing does work. But they are, unfortunately, the poster children of making these kinds of hyperbolic scientific claims without actually using science. So they did not use good science. We want to do better science — the best science. We are going to measure and benchmark. So, Heroku did add some metrics. This is an older view of what our dashboard looks like. We explicitly track speed, throughput, and then CPU and RAM.
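For reference, the knob being turned in this story is Sidekiq's thread count, which can be pinned in the standard Sidekiq config file — a minimal sketch, using the values from the experiment:

    # config/sidekiq.yml
    # 30 threads saturated the disk: downloading, unzipping, and processing all
    # competed for the same limited read/write bandwidth, leaving threads idle.
    # :concurrency: 30
    #
    # 4 threads kept the disk under its max throughput, and the queue drained faster:
    :concurrency: 4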
And the interesting thing for me is you can actually correlate and be like, oh, hey, you know, CPU usage shot up here, and then throughput went down — or not. This won't tell you why you're slow or what exactly is going on, but it will give you a high-level metric that you can use to then ask a question and get a low-level metric. So that's kind of our basic research. Again, we want to reproduce the slowness. So RAM is the one I want to touch on a little. Ruby is typically going to be RAM-bound, and this is because Ruby is a managed language. So some of you might already know this — just bear with me for a little bit — but this is crucial to understanding the rest of my talk. So Ruby uses a garbage collector. And if you have questions about the garbage collector, Koichi is currently here — go ask him. It's great: we have hired Koichi, a GC guru, who gets to work on MRI full-time, which is pretty cool. So the garbage collector is going to allow us to do things like just say, here's a string literal, I want a string, without having to say, all right, we're going to allocate, you know, five bytes for this string. A lot of people are really familiar, intimately familiar, with the different ways that Ruby can consume more memory. So one of the most obvious — what people think of when they think of memory use in Ruby — is retaining objects. So here we've got a loop, and we're just looping around. We've got this constant — constants are never garbage collected, they're global, and you can't collect something that's global. And we're just adding a bunch of strings to it. And when we do this and we take a look at the process memory, it's 7.32 gigabytes. It's kind of large. I mean, I guess it is 100 million — is that a million? Yeah, that's 100 million strings. So why is it so large? So Ruby allocates chunks of memory, and then in those chunks of memory it has slots where it puts objects. So as we are going through and we're looping, it is going to put 1, 2, 3, 4, 5, 6, 7 into different slots. It needs one slot for each of those. Whenever we're out of those slots, we need to allocate more slots. So now we have twice as many. You can think of this — I've chosen the bus analogy — as kind of a shipping container, really anything you can put things in. And so now we can actually go all the way up to 14, but we have to go up to 100 million. So we just keep on allocating more and more and more until we get enough. So this, to me anyway, makes a lot of sense, where we say, okay, Ruby's using a large amount of memory, and here's where it's coming from. We need this; these are retained objects. So the second way, which is a lot less intuitive, at least to me, is allocated objects. We're doing the exact same thing, except instead of storing it in an array, we're just going to output it — output the string. We use it and throw it away. This is only going to take 21 megabytes, which makes sense. We're not actually keeping anything. We fill up our bus, we run our collection, and our garbage collector is going to say, hey, it doesn't look like you actually need any of these objects — 2, 3, 4, 5, 6, 7, guess what, you're not going to use them again. So we can get rid of them, and we can put other objects there. And this is a very oversimplified version of garbage collection, but it will help us later. So it looks like garbage collection is going to save us from everything forever, and it's like the perfect answer to all things.
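A minimal reconstruction of the two examples being described — RETAINED plays the role of the never-collected constant, and the numbers quoted are the ones from the talk, so they will vary by machine:

    # Retained objects: every string stays referenced by a constant,
    # so no object slot can ever be reused.
    RETAINED = []
    100_000_000.times { |i| RETAINED << i.to_s }
    # => process memory climbs to roughly 7.32 GB

    # Allocated objects: each string is used once and immediately becomes
    # garbage, so the same slots get recycled over and over.
    100_000_000.times { |i| puts i.to_s }
    # => process memory stays around 21 MB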
If that were true, I wouldn't be up here. So here's another example that is kind of an in-between. What we're going to be doing is looping through 100 million elements again, and we're going to be adding them to an array. The interesting thing is we're not going to return that array. We're not going to do anything with it. So it's not a constant; nothing has a reference to it. And then we can call garbage collection — we can force a garbage collection. So what happens when we run this? Well, your application is still going to use 7.32 gigabytes of RAM. That's just kind of surprising, right? It's like, well, the garbage collector could just get rid of all that stuff — it just throws it away. Well, to understand why — or, I guess, to verify this, to prove this — you can take a look at GC.stat. We can look at total_freed_objects, and we can verify and say, okay, the garbage collector is doing its job: it did free 100 million objects here. So why is it still so large? Why is our memory so large? So we can't clear a slot while a reference to that item still exists. What's happening is we build up our buses, and after we've done this, we've gotten to 14 and we need to keep going. We're going to run garbage collection, and after the loop has finished executing, yes, we can clear those slots. But Ruby never actually releases memory — Ruby's memory usage only ever goes up. We basically assume that once we've allocated memory, we're going to need it in the future. Also, freeing memory back to the operating system is an expensive operation. So even though we don't need those things anymore, we still hold on to that memory. So what matters is not necessarily that something is retained forever, but just that it's retained for a while. That array actually referenced all those things, and while the array was in memory, we couldn't clear them. So I do highly, highly recommend that you test everything. Even these slides — as I was writing these, I was double-checking everything and realized there was a giant mistake in them. So please, if you're interested in getting started with performance stuff, double check me, verify — that's good science: peer review. So the experiment is super, super critical in benchmarking performance. Never go in and be like, I think I know why something is slow. Please always double check. So, okay, back to speed. How does this actually affect your Rails app? You can kind of think of your Rails app as a collection of retained memory. These are going to be things that live forever — or forever-ish, in the Ruby world — such as a database connection, or your high-level Rails app, your controllers, any constants. Any code that you've loaded. So that's going to be a base set of memory. And on top of that — maybe larger, maybe smaller — we're going to kind of spike up and down, like a level on a soundboard: this is allocated memory. And our total memory usage is going to be the combination of both of those. Why this really affects you — it helps if we take a little example and say, let's say we've got a bunch of memory, and we're in a Rails app, and a request starts. So we're processing, we're hitting the database, we're processing, we're loading up templates, and we're processing and doing some other stuff — turbolinks, sprockets — we're processing. And so we're creating these objects, and while this request is still going, we still need them, so we can't clear them.
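Sketching that in-between case out, with GC.stat as the way to verify the collector did its job:

    before = GC.stat[:total_freed_objects]

    array = []
    100_000_000.times { |i| array << i.to_s }
    array = nil   # nothing references the strings anymore
    GC.start      # force a collection

    puts GC.stat[:total_freed_objects] - before   # ~100 million objects freed
    # ...yet the process RSS stays at its high-water mark: MRI keeps the emptied
    # slots around for reuse rather than returning that memory to the OS.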
So now we're out of memory, all of our slots are full — what do we do? Okay, well, Ruby can run the garbage collector, but it looks like we still need to reference all of these objects. So we allocate memory. We add an extra bus onto our memory, and then finally we're done. We're done with the request, we can deliver the response, and we no longer need a lot of those objects. Granted, this is definitely an oversimplification, but it helps. Eventually, we can go back and we can say, all right, we're going to run garbage collection, and we don't need those objects — we can just mark them, we can just get rid of them. And we do, we get rid of them, but you notice that our bus count is still high. So Ruby is going to increase memory use without retention — even if you're not constantly retaining objects, even if that retention number stays the same. If you have one request that allocates, you know, millions and millions and millions of objects, and only one person ever hits that request once in a day, guess what: your app is going to be pegged at that memory usage, and you're going to be charged for that memory usage by the operating system until that process dies. So object creation takes RAM and it takes time. I do also recommend checking out generational GC; I don't have time to cover it right here. So, okay, that was all relatively critical for a customer story that I have. One day, a ticket came into Heroku — and I mentioned I work on the Ruby buildpack; it's the thing that runs whenever you do a git push heroku master, and all that stuff flies by on the screen. That's actually me sitting at a computer, furiously typing. So if anybody tries it right now, it won't work. Sorry. That's not entirely true — Terence, Terence, help us out. So, one of the things that we do is make recommendations, and we say use this web server, or do this other thing. In order to do that, we have to do investigation. And some of the things that we care about are some of the things that our customers care about, which makes sense. So one day, a ticket came in to regular support, and the customer said, you know, my app is slow. And support will look at it and be like, I don't know, maybe it looks like a Ruby thing — we'll escalate it up there. And so I get to look at escalated tickets. And they said: the app is slow. I checked metrics; it looked something kind of like this. And you can see that the RAM they were using was just way too high, so they're using swap — that's this red bar. And that's actually indicating that we've used up all of the available RAM and we're starting to use the disk, which is really, really slow. So this is what I was thinking was actually making their application slow. The next question, of course, is going to be why. And obviously I asked the customer, and if you've ever worked in support, you've gotten answers like this: It didn't used to happen. And: we didn't change anything, it's totally you. And then I'm like, we haven't deployed anything in like a month. So whenever these types of things happen in Ruby, a lot of times what people don't realize is that their Gemfile or their Gemfile.lock changed — one of their devs ran bundle update or something, and a minor version or major version had some bugs in it, and they didn't realize that that was an issue, or even think to look into it. So my number one hypothesis was that RAM was increasing due to misbehaving code in a gem somewhere.
We don't know which one. In order to test this out, I wanted to be able to boot the app, hit it with requests, and profile the memory. In order to do this, I wrote something called derailed_benchmarks. It allows you to actually do those things without having to run your server somewhere — you can do them in a single process. And it uses Rack::Mock. And then we can do really neat things like wrap it in other tools like stackprof and memory_profiler. So derailed_benchmarks is going to boot the app, it's going to process the request, and then, conveniently enough, we're going to use memory_profiler by Sam Saffron, who is at this conference. Please go give him a high five and be like, your memory work is awesome. Also, Koichi Sasada has another gem called allocation_tracer, which helps as well. So with derailed_benchmarks, you can run the different perf tasks. And I have an application called codetriage.com. It helps people get started with open source — it sends you one GitHub issue per day. And this is just something I run so that I can have something in production on the platform that I care about, and I get a real customer experience. So for those of you who weren't paying attention, that was codetriage.com. codetriage.com. Sign up today. Okay. And so when I ran this, I got something that looked like this: on every single request, here are all of the object counts of what we're using. And Rack — okay, yeah, you're using a bunch of Rack, you're using Action Pack, that makes sense. But then it's like, whoa, Hashie — where did Hashie come from? If you know me, you know I have certain feelings about Hashie. Certain blog posts about Hashie. And it was really remarkable to me what Hashie was using — everybody gives Active Support a hard time, and this is like more objects than Active Support. So Hashie is incredibly expensive, and in this case it was creating lots of unneeded objects. And I said, okay, I didn't put Hashie in my project, I'm not using it. I look at my Gemfile.lock, and apparently it turns out that OmniAuth uses Hashie — which actually makes sense: same author. Hashie has the same author as OmniAuth; it's like, oh, they use the same tooling. Originally, I wanted to rip Hashie out of OmniAuth, and the author of OmniAuth was like, that might be a little heavy-handed. So I basically found some hotspots where we were allocating these objects even when we were not authenticating — and this is on every single request to any action in your entire application. Yeah, so I fixed it by just memoizing a couple of items, so instead of having to re-create the object, we just used the same one. So basically, we are using up a bit more memory by retaining that, but ultimately creating fewer objects, which is less memory overall. So I did this, sent it over to the customer to try out — and the application is still over the RAM limit. So, okay, hypothesis number two: we're still suspicious of the Gemfile. Maybe it's something else — maybe it's a bad gem using too much RAM at require time. Okay, how can we test that out? I had the customer send me the Gemfile and the Gemfile.lock. And I was like, well, hey, how can we figure out which of these gems it is? If a gem loads and retains memory just from requiring its own code, how can we know about that? Maybe it's not something that happens on every request. At the time, there was no tooling for benchmarking memory use at require time. And so — let's write one. All right, the general concept is pretty simple: we're going to measure RAM before and after require is called. And we're going to be using a library called get_process_mem.
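If you want to reproduce this kind of investigation yourself, the setup is roughly this (a sketch; the task names below are the ones the gem documents today):

    # Gemfile
    group :development do
      gem 'derailed_benchmarks'
      gem 'stackprof'        # optional, for the CPU profiling wrappers
    end

    # then, from the shell:
    #   bundle exec derailed exec perf:objects   # per-request object counts
    #   bundle exec derailed bundle:mem          # require-time memory per gem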
And now, one thing to note if you're like a super memory geek is that it uses RSS. RSS stands for resident set size, and it does not take into account shared memory use. It is a close approximation of memory, but in our case it's probably good enough. The next thing we're going to do is monkey patch require — because we can. Yeah, monkey patch, because you can. Okay, and then we're going to run it, and we're going to get output like this. It's going to spit out every single file and the associated cost. So here it looks like, for example, requiring Active Record costs 1.49 megabytes, which is a little large, but I don't think that's where our main problem is. Now that we know where all of our memory is, we can sort this, and it looks like mail is using about 40 megabytes of memory, which is slightly larger than Active Record's 1.49. When we count it all up, it's one gem, and it's 65% of all of the RAM at boot. And we haven't even done any work — we haven't even hit it with a request or anything. Remember in that slide where I had the retained memory on the bottom? This is the starting memory that you have to deal with, and anything you do after that just goes up from there. So in order to debug further, we're going to have to dig deeper. Whenever you require one gem, it requires other files, and we need to see what exactly inside of mail is causing that problem. So we're going to use a tree in order to do that. Each node can have many children; each child has a cost. It's going to end up looking like this, where mail will load one layer, and that will load the next layer, and that will load the next layer. We have a tree class that we're going to store these things in, and then, when we're done, we can sort the children and print them out recursively. It's something that looks kind of like this. The final process looks like this: we're going to instantiate a new tree node, and this might be mail. So we've already required the application, and now we're going to be requiring the mail gem. We are using a stack in order to keep track of where we are. So the last thing in the stack would be the application, and that is going to be the parent, and we're going to push the child node, which is mail, onto the parent. So we're saying that mail belongs to the application. Finally, we're going to take the mail gem and say, okay, now we're going to require everything inside of the mail gem, so we need to push that onto the stack. We take a measurement before we call require, and then this same method will be called again and again and again, recursively, until all of mail has been fully required. When it's done, we pop the last thing off the stack — which at this point in time will be mail, because everything else has already been popped off — and we take a memory measurement again. Finally, we record the cost and store that. So, one thing if you're interested in doing this yourself: it is really important to note the ensure block. One thing I realized — well, I knew, but it hadn't fully sunk in — was that a lot of people actually require files that don't exist. They expect that to raise a LoadError, and if we're not ensuring that we pop something off the stack and record the memory, then in the event that we try to require something bad, this code just won't run. That'd be bad.
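Pulling those pieces together, here is a compressed sketch of the technique — not the actual derailed_benchmarks source, just the shape of it; get_process_mem is the real gem:

    require 'get_process_mem'

    module RequireMemoryTree
      ROOT  = { name: '(top)', cost: 0.0, children: [] }
      STACK = [ROOT]

      def require(file)
        node = { name: file.to_s, cost: 0.0, children: [] }
        STACK.last[:children] << node   # attach to whatever is requiring us
        STACK.push(node)
        before = GetProcessMem.new.mb
        super
      ensure
        # ensure runs even when the require raises a LoadError, so the
        # stack never gets out of sync with reality
        node[:cost] = GetProcessMem.new.mb - before
        STACK.pop
      end
    end

    Kernel.prepend(RequireMemoryTree)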
Okay, the final result of all of this looks something like this, where we can see sort of the tree structure: we see the application requires mail, mail requires its parsers, and we see, oh, it looks like mail's parsers take up nearly 20 megabytes — 19.24 megabytes. I opened up an issue. I raised awareness. One thing I'd like to point out is you don't necessarily always have to fix the bottleneck or the performance bug yourself. Whenever you find one, even just pointing out, hey, there is one — raising visibility — can sometimes be just as important. And I didn't fix this, but somebody else did, in mikel/mail #817. Basically — so, if you're not familiar, inside of mail there's a parser that actually parses mail. So when you reply back to GitHub by email and you say, hey, this issue looks great, that email gets parsed — probably by Ruby code — and they pull out what you said, drop the rest of the text, the signature and all that other stuff, and actually generate new comments. So some applications do actually need a parser in there. Mail switched over from using the Treetop parser to a Ragel parser, which is much more efficient, much faster — with the trade-off of more memory at require time. But most applications don't use this, which is why our customer wasn't using it: most applications are not parsing mail, so they don't even need it. So the fix was actually to just lazy load that, or, if somebody really wants to load it eagerly and take advantage of copy-on-write optimizations, they can require it before their application forks. So, okay, what happened after this? Well, the customer's RAM usage dropped dramatically. They were no longer swapping, and instead of just kind of crawling along, they were now screaming fast, giving me high fives over the internet, like, great job. I was feeling really good about myself — and, you know, I work at home alone, so that was really just me giving my desk a high five. So, okay, some of the takeaways: we want to remember to reproduce the slowness. These performance problems are bugs — in this case it was fixed with code, but it could also be fixed by requiring different versions of gems. We want to definitely make sure to get visibility and benchmarks, and to use this process of repeatedly asking questions, digging deeper, and gathering more and more metrics. So, kind of real quick: has anybody here ever deployed bad code on a Friday? Okay, so you know how this happens — and then you're like, oh, I'm going to deploy a hotfix, I totally fixed it. And it's like, whoa. And then your weekend is just gone, and your coworker's weekend is just gone, and you have support, and they're just like, ah, why did you deploy that? So this happens to everyone — whether you raise your hand or not — it's kind of an eventuality that it will happen. So I have a modest proposal: don't deploy on Friday. Now instead you can go shopping, or ride roller coasters. Okay, so that's not going to fly. Instead of just not doing any work, I'm proposing something I'm calling Monday Friday. If your application is slow, maybe instead of complaining about it — just like, oh, why is this thing so slow? — you could look into it on Monday Friday. You can run some benchmarks. You can maybe implement some of this. Or if you're constantly complaining that you can't find files, you can't find methods,
and the code base is a mess — Monday Friday. Or if you're constantly having to stop — it's like, your boss says this feature has to be delivered tomorrow, and they come over to your desk and you have like 1,000 files open and like 10 branches checked out, and they're just like, what are you doing? And you're like, I couldn't stand it anymore, I'm just doing a refactor right now. Instead of doing that, do it on Monday Friday. A lot of times I hear people wish more companies sponsored open source. And guess what? Your company just sponsored you to work on open source on Monday Friday. If you're not sold, it's okay. This is actually not just something I'm randomly recommending. It will give you more accurate deadlines — it's a well-proven technique called time boxing. It will hopefully lead to less burnout. And if your boss is ever like, man, I really want to hire more senior developers — like, I don't know a boss who doesn't say that these days — you'd be like, oh, you mean developers that make refactors and speed improvements and contribute to open source and know the libraries really, really well? It's like, maybe you could institute Monday Friday and we can learn all of those things instead. So it's not just as simple as doing this — unfortunately, it never is. I hate to tell you, but if you're going to participate in Monday Friday, you have to report your progress to your team. You have to let other people know you're doing this. Report it to your team. Report it to your boss. You're going to be amazed. So I actually kind of started doing this without really telling anybody, and then after the fact I would be like, hey, I made this patch to Rack, or to Rails — like, look at this thing. And now, whenever somebody at the company is like, hey, I've got a question about Rails, they come to me. My boss is like, hey, there's this critical thing that's failing for our customers — why don't you, you know, take a look at that. And I'm like, wow, this is amazing. It's no longer even just Monday Friday — it's like Monday, Tuesday — and I get to work on open source. I am like the luckiest guy in the world. So in addition to reporting to your boss, please report to Twitter: hashtag Monday Friday. There are a lot of other people also just randomly tweeting 'Monday Friday', so it's okay — if you mention me, then I'll retweet it. And hopefully, if you see somebody doing this, give them love — even if they're like, hey, I spent eight hours looking at benchmarks and got nowhere. It's like, that's progress. Or maybe they're like, I signed up for codetriage.com — codetriage.com, codetriage.com — and it is awesome. So please, give love, get love. I know, we're a little excited; we've had a lot of fun today. Thank you very much for coming. I want to leave off with two relatively serious questions, and this is silent meditation — don't scream out any answers or raise your hand or anything. So, thank you very much. Thank you.
|
Run your app faster, with less RAM and a quicker boot time today. How? With science! In this talk we'll focus on the process of turning performance problems into reproducible bugs we can understand and squash. We'll look at real world use cases where small changes resulted in huge real world performance gains. You'll walk away with concrete and actionable advice to improve the speed of your app, and with the tools to equip your app for a lifetime of speed. Live life in the fast lane, with science!
|
10.5446/30709 (DOI)
|
I think we can go ahead and get started. So you guys are in Teaching GitHub for Poets. I am Aaron Suggs; I go by ktheory on social media. I'm the lead operations engineer at Kickstarter, and I'm a terrible dancer. This is me dancing horribly in a grizzly bear suit — it's called the Griz Coat; it was a Kickstarter reward. And yeah, I'm always a little nervous starting out these talks, but I feel like once you guys have seen me dancing poorly in a grizzly bear suit, there's nowhere to go but up. So I work at Kickstarter. Kickstarter's mission is to help creative projects come to life. We are a Ruby on Rails application. You guys probably know how it works: if you have an idea for a comic book or a film or a gadget or a software project, you post your idea with a pitch video. Your friends and family and strangers on the internet pledge money to it — that's sort of the ultimate way of saying, I want this thing to exist in the world. And if you get enough money to complete the project, then we collect that money, pay it out to you, and you can go on and build the thing you want to do. In my role as the operations engineer, it's my job and passion to make tools and processes that help the engineering team and the whole Kickstarter organization do awesome work. And to that end, one of the programs that we do is called GitHub for Poets. So, a super quick outline of what I'm going to cover in this talk: I'm going to talk about what GitHub for Poets is, why it's awesome and useful and beneficial for our organization — and for your organizations too — and how you can implement it yourself. Yeah, so jumping right into what GitHub for Poets is. It's a really quick class — it takes about an hour — that goes over how to make a copy change to our website, using the GitHub flow in the browser. So we're going to be finding a file that we want to change, changing it, committing it, and making a pull request. It's open to all staff — everybody in the organization, not just developers or designers or people that you'd expect to be committing code and working with the Rails app. And it's also an introduction for everybody in the company to the tools and processes that the engineering team uses — really trying to be accessible and clear and transparent throughout the organization about what we're working on. The name GitHub for Poets, by the way, is a reference to the "Physics for Poets" classes that a lot of liberal arts schools — my college, at least — had. The idea being it's sort of an elective class geared towards people who weren't going to become physics majors, but who, as part of enriching their lives, should know something about electromagnetism and relativity and things like that. And so this is the idea that there are lots of people in our organizations who aren't expected to be doing Rails development or working with GitHub on a day-to-day basis, but knowing something about it can really improve their workflow. So that's where the name comes from. I wanted to dig in a little more into what each of these three bullet points entails. So, doing a live demo of a copy change: the GitHub flow in the browser, for those who aren't familiar with it, is a way of just finding files to edit, editing them, and committing it all inside Chrome or Safari or something like that. The great thing about this is there's nothing to download or install.
Like, you don't have to tell people to open up a terminal; you don't have to git clone anything. You're just searching in your GitHub repository, a lot like you'd search Google or something else. So the learning curve on getting started, both in terms of technical know-how and the commands you actually have to run, is very, very low — like, you don't have to run any commands, you just go to a website. And this flow of finding a file to change, changing it, committing it, and then making a pull request — I see that as sort of the fundamental development loop. That is the cycle that all engineers and designers and anybody building a product is doing over and over again. And to show that and make it accessible for people who aren't familiar with it is a really powerful tool. This has really helped, like, our community support team or our integrity team, who would be trying to find confusing parts on the site — maybe we had a poorly written validation or something like that. They would use this to make copy changes. Also, because we're just changing copy — the text in views or emails or something like that — you don't really need to know Ruby programming at all. We don't need to get into how Ruby works or how Rails works at all. This is the kind of thing that you can cover in an hour and come away feeling like you have a better understanding. So, like I said, it's open to all staff. What that means at Kickstarter is: we have about 110 employees, and one third of them work on the product, either as engineers or designers or product managers. So that means most of the employees at Kickstarter are doing things like HR, maybe doing recruiting or managing our job postings, or doing some legal stuff — like if we ever update our privacy policy or terms of use or something like that. Those are code changes that are really just text changes, which people who have gone through our GitHub for Poets class can do. A lot of the community support team is also making these kinds of changes, in a much more straightforward way than before we had this class. GitHub for Poets has sort of grown to become part of the onboarding process for new employees. I think of it as analogous to how a lot of companies have customer support rotations, where every new employee spends a day answering support emails, because that is a great way to get to know what your customers are seeing when they're interacting with the product. Similarly, if you want to know how engineering and a product development team work, showing people this cycle of how the sausage gets made is really eye-opening, and often startlingly easy for a lot of people who aren't familiar with it. They sort of assume that you're doing really complicated genius hacker-type stuff, because that's how it's portrayed in Hollywood. If you just show them, it's like: you're doing this really simple thing — I'm just changing some text in some files. It can really improve their understanding of how engineering works. And it's an intro to the tools and processes we have, like how you run tests before deploying to production. We'll talk about how we have a continuous integration environment, and how we test things like: if you sign up with a username and password and you submit that username and password, you'll be logged into the website. Similarly, we'll have tests for things that shouldn't happen.
You can't have two users sign up with the same email address, or something like that. Now, we explain just as much Git as you really need in order to do that development loop, which is really just two things: commits and branches. If you understand a commit as a single change to a file, with a message describing why you made that change — that's a commit. And a branch is just a list of all these changes being made throughout history. A branch can represent all the work you've been doing, and branches are safe places for you to do your work without affecting anybody else's work. Rails, we explain, is a collection of tools for building dynamic websites. We don't really have to go into the nitty-gritty of how Active Record works or something like that. We just need to show them the directory layout: most of the copy lives in the app views or the mailer views, or maybe in config in some YAML files. This is a great way we've made the engineering process transparent and inclusive for everybody in the Kickstarter organization. Because poets are going through this process and doing this loop, we end up in this place where everyone in our organization can commit code. A lot of people, when they hear this, sort of have this reaction — I'm the guy in the middle who thinks it's pretty great. What happens when you let everybody in your organization commit code? It actually works out pretty well. I'm going to go now into the middle section: why it's pretty awesome to do GitHub for Poets. The biggest reason is it makes our process more lightweight for making easy changes. It used to be the case that our community support team would notice that a validation message was confusing and generating a lot of support tickets, or they'd notice that there's a typo on the website or something like that — people were actually writing in to correct typos, because most of the copy was written by engineers at the time. They would tell this to their VP of community support. The VP of community support would have a weekly meeting with the VP of engineering, who would then say, okay, yeah, these are all things that we need an engineer to fix, and then file these really simple changes into our ticketing system so that an engineer could go and fix the typo or clarify the error message or something like that. So the process was really: somebody notices something that they aren't empowered to fix, tells their VP, the VP tells another VP, and then that VP trickles it down to the people who actually fix the problem. That is just crazy. That is way too much process to fix something that should be really easy to fix. GitHub for Poets now just means that the person who notices the typo, who notices that a validation is written in a confusing way, can edit that file themselves and put up the pull request, and an engineer will get pinged and will know to merge and deploy it pretty shortly thereafter. Everybody else is informed, but no longer part of that critical path. They can see what happened, but you don't have to wait on your boss to tell you that it's okay to do something. Another really great reason to do GitHub for Poets is that it lets you avoid building a CMS.
As you are making these minimum viable product features and things like that — where you are just throwing stuff out there, trying to see if it works, and you don't know if you are going to be committed to this long term — there is often this sort of awkward phase of a feature where it might need to become a CMS at some point, where you are going to be changing it so much that you want other people to just be able to fix it themselves, but you don't know if you are actually going to change it that much. In our case, we have a lot of editorial features where staff could highlight different films or comic books or something like that, and we have these flexible pages, but we are not sure exactly how they are going to want to change them or what the rate of change is going to be. And having GitHub for Poets lets us say: well, you just go in and edit this view. We use Haml, and Haml is really easy to learn, so you are just copying these chunks around, and if the tests pass, that means there is not a syntax error, and we will check it on staging on its way out to production. It is just a low-risk change that saves a lot of engineering effort in storing all this stuff in the database or something like that, when we don't really know what the full life cycle of that feature is going to be. Thinking of this another way: version control is a content management system, and you are just letting everybody use that thing instead of building your own bespoke content management system. So those are very practical reasons why you would do this. This is you saving yourself work, you increasing the productivity of the engineering team. The really awesome benefits, I think, are the more cultural ones — the cultural reasons for doing this. And I will say we kind of stumbled into some of these. Initially, GitHub for Poets was going to be Ruby on Rails for Poets, because some people really wanted to learn Ruby on Rails, and we said, we have a Ruby on Rails app, we will just show you how it works and it will be helpful.
There is also a lot of consensus that is sort of baked into the pull request cycle that we use which was pretty surprising for how a lot of people, for how a lot of other departments make decisions. So what we are kind of realizing here is that version control is a communication tool really in the same league of like your chat or email client that you guys give access, everybody in your organization should have access to your chat room, everybody in your organization has an email address, people can email each other that your version control for your product is this rich history of why you built everything and how it was built and thinking of it that way, it is like of course everybody should be able to see that context. It is so useful to know that the product was built this way because of these very good reasons that were hashed out in this engineering, like in this discussion amongst engineers on a pull request and why that is not a simple change, like what you think might be a simple change is not actually a simple change. It is really kind of amazing to think that as we work, we are building up this story of how we built things through our Git commit history, almost as like this byproduct and that idea that I will be able to go look back in time and have something trigger my memory of why I did it that way, that is really powerful and it was almost like a fish in water kind of moment for me realizing like I am thinking of the David Foster Wallace anecdote about two fish swimming past each other and one fish says to the other like how is the water today and the other fish says what is water and it is getting at the thing that is like so ubiquitous to the way you work you don't always realize it is there. And so engineers using version control, we are used to this idea of having this rich context around how we built things that when other people see that for say community support or a marketing team, it is sort of a mind blowing experience for them to know that they have those sorts of tools for their own documentation or something like that. So yeah, version control is really increasing the transparency that not just the engineering team had or like other teams had with the way the engineering team works but it turned out that other people wanted this for themselves. So they started making their own repositories for documentation or for like policy documents of how they would handle troublesome cases or something like that. So these decisions now have this like commit history and a pull request and this rich context of the discussion around how, why a particular thing is the way it is. So I should say too this is called GitHub for poets but it is really pretty applicable to any version control system. The one thing I particularly like about the way GitHub does pull requests is they are sort of getting this idea of consensus as a sort of a political tool for decision making or like a tool for governing what code is good enough to get in master and to be deployed to production and things like that. You do it by asking in a pull request and so even beyond a commit history pull requests are adding this even richer context about what the discussion is around this, not just what the developer says in their pull request but it is like really this dialogue between several different parties. Maybe it is the product manager who is urging like the getting something out quickly because there is an important deadline to meet or something like that. 
Another engineer talking about a security trade-off, somebody asking about performance. All this conversation is happening in the pull request. Now, also, because everybody in the organization is on GitHub, we also have a marketing person in there who can be tweaking marketing copy up to the last minute as they figure out how they want to position this new feature. The other really great thing about pull requests is that there is this shared responsibility that is implicit in them: if things break as a result of you merging your pull request in and deploying it, it is not entirely your fault. In fact, we have a pretty blameless culture, so it is not your fault at all — but in any organization that uses pull requests, you put it out there and you ask people to help you make it better. It really sets this expectation that nobody is doing perfect work, but we at least expect you to ask how you can make it better. I just see that as a really mature attitude and a mature way for teams to work. Once other teams saw the way engineering did that, they wanted to do it themselves. So those are some of the cultural reasons why you should do GitHub for Poets. The last reason is kind of a personal reason, for you and your career development: this is a great way to increase your impact in your organization. As you level up and progress in your career, you will find that one of the things your boss expects of you is to have ever greater impact. You should be able to manage yourself really well, then you should be able to improve the way your team works, and then you should start improving things throughout the organization — ever greater concentric circles of impact. GitHub for Poets is really like a few hours of your time — teaching the class, getting everybody into your GitHub organization, answering their questions about how it works, merging and deploying their pull requests when they are ready to go — and it can make a radical change to other departments' workflows, because it is so much easier for them to make changes on the site. That is now a tool in their toolkit of things they can do. It creates this better attitude of working by asking for consensus, and expecting transparency and rich context in how decisions were made. The final reason to do GitHub for Poets is because your boss will like you for doing it, and you will make your organization better. So, how to do GitHub for Poets? This is sort of a whirlwind tour, maybe even a mini session, of what GitHub for Poets is like at Kickstarter. The things I explain upfront are how branches and commits work, because it is really confusing to use GitHub.com at all without understanding what a branch or a commit is. We explain the sort of bog-standard Rails file layout; the important places are the app views and the app mailer views, and your config files if you have a lot of copy in YAML files. A nice analogy here — which I am cribbing from my friend Emily Reese, who co-taught the class with me — is to think of college, when, for each semester, you would have a folder of all the classes you took, and in each folder you would have term paper one, term paper two, and so on and so forth. It is just this big hierarchy of folders with a bunch of files in them, and really that is all our code bases are at the end of the day.
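Sketching those "important places" out as the folder hierarchy from the analogy (a generic Rails layout; the file names are just examples):

    app/
      views/                      # copy for web pages, e.g.
        users/index.html.erb
        user_mailer/              # copy for the emails we send
          weekly_update.html.erb
    config/
      locales/
        en.yml                    # copy that lives in YAML files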
For somebody who is not used to programming, that can be a radically simplifying view of how our code works. Another really important idea is always be learning. A lot of people are reluctant to start contributing because they feel like they don't know enough; they want to be an expert at all the things before they make their first copy change. You have to disabuse them of the notion that they are ever going to feel like an expert at all the things, because engineers never feel like that either. This always-be-learning concept is important enough that it gets its own slide. I have found that one of the most useful things to do in these GitHub for Poets classes is to demystify the development process. Our engineering team has sometimes been glorified as these really elite, smart hackers who have a communal mind meld with computers and can just make them do amazing things. But actually we are trying to do the most simple things: we are generating Rails scaffolding, and it is just not rocket science. We are googling for a gem that does most of the work we want to do; we are not doing something really clever. So really lower the barrier to entry, lower the amount of knowledge you need to have to get started. I like to talk too about how this dovetails with the programming philosophy of don't repeat yourself. At a macro level this means that if you were writing code to do something that you have already written before, that is not DRY; you would just reuse the code you already wrote. As a corollary, it means you are always writing code to do something that you have never done before, so you should really get comfortable with the idea that you are always going to be working at the edge of your comfort level. And also that it is really okay to ask for help. It is okay to ask an engineer how to do this thing, or what this file is for, or to show you how Git history works. Engineers ask for help all the time, from Google and Stack Overflow. Yeah, so that is what I go into when doing a GitHub for Poets session. I also wanted to do a really quick live demo of what it involves, so if anybody else wants to follow along too, I am going to go to github.com. We are going to see how well my networking works. Let me hop over to the display. There we go. I know the contrast might not be great. So we are looking at a very bog-standard Rails application; in this case, I just did rails new and then generated a user model scaffold. As the whimsical copy edit we are going to do, I am going to go to the user listing and we are going to increase the SEO of that page. I know that there is a user listing somewhere, but I don't remember where it is, so I am just going to search for it. Sure enough, there it is: app/views/users/index.html.erb. I am going to click into one of these line numbers that I want to edit. Sure enough, this looks like HTML. There is a little bit of magic down here that I am not going to worry about, because I am not trying to change that part. In order to edit this, I need to go from this particular tree back to a branch, so I am going to hop on the master branch and then I am going to edit this file.
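For reference, a freshly scaffolded app/views/users/index.html.erb looks roughly like this; the loop near the bottom is the "little bit of magic" a poet can safely ignore (this is a generic scaffold sketch with assumed column names, not the exact file from the demo):

```erb
<h1>Listing users</h1>

<table>
  <thead>
    <tr>
      <th>Name</th>
      <th>Email</th>
    </tr>
  </thead>

  <tbody>
    <%# the "magic": embedded Ruby loops over the records %>
    <% @users.each do |user| %>
      <tr>
        <td><%= user.name %></td>
        <td><%= user.email %></td>
      </tr>
    <% end %>
  </tbody>
</table>
```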
And so, as we are listing users, I want to add a subheader here that, I don't know, expresses what I like about monoliths. So let's say monoliths are... okay, now audience participation time, monoliths are what? Help me out. Majestic integrated systems. All right, and you are all encouraged to make your own pull request right now too. So now I am going to commit this, and I am going to commit it to a new branch. I am going to describe what I am changing; oh, see, I already typed this out. So I am changing the user listing, and the reason I am doing this is that I am adding some SEO around monoliths. And then it is always nice to throw in the emoji. Then we come up with a branch name; we try to encourage reasonable branch names too, but honestly it doesn't really matter. There are so many branches, you should just let it go; they are never going to be neat and tidy. Just embrace the chaos. So we could leave this, or we could call it something like railsconf-monolith-seo. And now I am going to propose this file change. There we go. We can preview this, and we should say what feedback we would like in this pull request: please check if monoliths are actually majestic and/or integrated. And we can @mention DHH for that, I guess, I don't know. Then we create the pull request, and we are pretty much done. In our process, where it goes from here is that we have a support engineer, on a weekly rotating basis; that is the person who is trying to stay on top of whatever bugs come up and basically absorb interruptions from the other engineers. They get @mentioned in one of these pull requests and give it a quick spot check. We have continuous integration set up, so we get the nice green check mark, and if all looks good they just deploy it, and it is out on production within a couple of minutes. All right. Thanks. It is good to see some participation on this pull request. Yeah. All right, I will hop back to the slides. Okay, so that is more or less what we cover. There are a couple of common concerns that people have when they think about giving everybody access to their GitHub repo. The most common one I hear is: they will break the site. Honestly, the poets themselves are incredibly concerned about this; in fact, I think they are even more concerned than engineers and people on the product team about breaking the site. So as a result, the people who have gone through the GitHub for Poets process tend to be very cautious, and they will ask for help a lot even though they are making what are, from an engineering perspective, very trivial changes. Just fixing typos or moving some words around is not going to have a significant impact on your site. The worst thing that has happened to us is that, because we use Haml, indentation has gotten screwed up, and then we discovered that because a test failed. You should be testing your views, pro tip. So yeah, these are really low-risk changes, and they are changes that can have a really awesome impact, because believe it or not, the copy that is on your site is actually a huge part of your product. Some small copy changes can have a really impressive effect on how usable your product is and how well it converts. So no, we have not found that giving more people access to our code repository has decreased code quality or been a liability for the website. In fact, it has been the opposite.
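On that "test your views" tip: a minimal sketch of the kind of smoke test that catches a broken template, assuming RSpec with rspec-rails (render_views makes the controller spec actually render the ERB or Haml, so an indentation error fails the test):

```ruby
# spec/controllers/users_controller_spec.rb
require "rails_helper"

RSpec.describe UsersController, type: :controller do
  render_views # actually render templates instead of stubbing them out

  it "renders the user listing" do
    get :index
    expect(response).to have_http_status(:ok)
  end
end
```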
So other people are worried about security, that our code is so special and it is so dangerous for everybody else to be in there. I also think this is kind of crap. First of all, remember that this is a communication medium; version control is a communication tool, just like chat or email. And with every communication tool there is always a possibility that if that information got out, it could be damaging or just embarrassing. That is true. But it is a trade-off that you always accept, because chat and email are so helpful and increase your productivity so much that it is worth the risk that if somebody had a bad password, all that information would get leaked. And there is some really easy stuff you can do to improve your security too: we have everybody use 1Password so that they have strong, unique passwords for every website they use, and you can require two-factor authentication for lots of important stuff. Another reason I think this is kind of a bad excuse not to do this is that your code is not a trade secret. Your code is not that special. Most Rails apps that I have seen are mostly Rails scaffolding with some small customizations and a bunch of gems that you are requiring; that is the majority of the cleverness that goes into most Rails applications. We are not trying to make clever genius apps; we are trying to build simple things quickly. If our code was ever leaked to the public or something like that, we like to think that another Rails engineer would look at it and think: yeah, that is how I would do it too; it is the obvious thing to do. We are not trying to be clever, we are just trying to make it easy and obvious, and as a result it is really not that much of a security liability. Also, a few pro tips here. These are some things you should have in place just to make sure all your bases are covered in terms of security and site reliability; I am putting on my ops hat here, because we have to say these things. Poets should know that when they are working in a Git branch, it is really their own space to do whatever they want, to experiment as much as they want without affecting anybody else's work on any other feature. Anything they do in their Git branch is not going to break things for production or break things for other developers. You have lots of tests and continuous integration to catch the obvious failures, so if something managed to slip through all of this, through your process of checking it on a staging environment and passing all your tests, it is really a breakdown in the engineering process, and I think it is kind of my problem to have a better process that makes sure we catch these things before they get out. If something does get out and there is still a bug, it is probably going to be a really minor bug. And yeah, another safety check is that whoever is doing the deploy scans exactly which commits they are deploying, and sanity-checks that these are what they expect to be deploying. So if a poet forgot to create a branch and committed directly on master, that goes into our chat room where everybody would see it immediately, and we would be like, hey, did you mean to do this?
Is it a really important change that needs to get out right away? If not, we will just revert it and say, okay, let's go and do this through the pull request flow. Another thing that we have gotten a lot of questions about is what it means when you get a thumbs-up in a pull request. It means something positive, obviously, but does it mean you can merge this and it is ready to deploy, or does it mean I like the idea that you are doing this? Does it mean I like this particular feature but I am not sure about the implementation? Does it mean you like the implementation, or just the direction the feature is going? This is actually a surprisingly rich emoji, and it can really depend on the context whether it means actually good to deploy or just congrats on making a pull request. All right, so I wanted to show you who some of the poets are at Kickstarter. These are a bunch of our employees who have gone through GitHub for Poets and have then gone on to make several commits and pull requests. I did a little grepping through our Git history and realized that of the 70 or so people who have gone through the GitHub for Poets class, 29 of them have gone on to actually make a pull request that we then merged into master, for a total of 1100 commits, which I think is super amazing. That is 1100 times a developer didn't get interrupted to do a trivial copy change. That's a lot of time saved, and that is a lot of people who have an improved workflow. That's a lot of people who were able to fix a problem by making a change to the website instead of, say, making a support macro to reply to a customer support ticket. There's actually a whole anecdote behind that. There was a confusing part of our video uploader, and at a support meeting they were saying: all these creators are writing in, confused that they're not seeing a video preview when they're expecting to, because it takes a while to transcode; we're getting enough of these tickets, maybe I should just write a macro so we can reply to them more quickly. And then they realized: wait, we can just edit the HTML around the video uploader and put in a message saying we're going to show you a preview on the next page, don't expect to see your preview here, and then you don't even get the support ticket in the first place. It's just a big win for contextual help. Longer term, engineering-wise, we wanted to actually show the preview, but that was complicated, asynchronous front-end stuff; we wanted to build it better and we weren't going to get to it soon. Just adding one div tag of inline help was a big win that drastically cut support tickets. I wanted to call out, third from the right, Carol: she alone has made 312 of these commits. She's on the copywriting team now, because she's so awesome at editing and making our copy really pop. There's also an interesting scenario where a copywriter started recently, and her first project was to get familiar with all the tone and the copy on the site. She looked for any place where we weren't using a serial comma, which is the grammatical pattern where, if you have a list of three or more things, you put a comma before the final item. She felt very strongly about adding those because it clarifies what you're talking about.
But so she goes through and makes all these pull requests, each one fixing a serial comma, and this was her first week on the job, and she had never used Rails and never used GitHub before. It was pretty great. Yeah. Okay. So here's a slide; I don't know how well this shows up because it's on a light background, but this is an example of our colleague Emma making a pull request, and I really thought she was doing a great job with the emoji here, with lots of people chiming in right after she makes the pull request. It's a good feeling when other engineers are congratulating you on a job well done for making your first pull request. Because I'm up here, I get to talk about a weird, abstract thing that I would like to happen, and that is that coding becomes more like writing. What I mean by that is that writing is a tool: we spend thousands of hours learning how to read and produce written language. It's not something that comes naturally to humans at all; it's something we work very hard at, and it's worth doing all that because written language is so useful, such a powerful way to communicate ideas. And starting a few generations ago, using computers and software became just as important. People spend thousands of hours learning how to use software and computers, and it's worth investing that time because you use them all the time and they're so helpful. So I see GitHub for Poets, in a small way, helping us along that path of making the software development process a bit more accessible, and giving people another tool in their toolbox as they're trying to solve problems: solve it by editing our code base. So the final couple of slides here are examples from the Kickstarter homepage of things that have been made by poets. This is an actual screenshot of the Kickstarter.com homepage at one point. We have these big hero graphics where we're calling out things that we want to highlight on the site; this one was about films. In our GitHub for Poets class, we play around with editing these hero graphics in a pretty fun and whimsical way. So behind the camera here is our co-worker Liz, and here's what the poets came up with. Another actual hero graphic that we had on the homepage was highlighting video games; at the bottom it says 87,000 Kickstarter backers helped produce the video game Broken Age. When Catherine got ahold of this, it turned into a dogecoin scheme, where it says 87,000 dogecoins sent to Catherine, and the call-to-action button says send more dogecoins. And our final one was highlighting library projects, but the way GitHub for Poets often actually ends up is people everywhere creating chaos. So that's the end of my talk. I'm K Theory on social media if you want to hit me up there. I also have time for some questions. Actually, maybe I don't. But come up and ask me questions. And just in case there are some people here who are really wanting some poetry, I'll leave you with some bad Git poetry. Thank you.
|
Discover the benefits of training your entire organization to contribute code. Kickstarter teaches GitHub for Poets, a one-hour class that empowers all staff to make improvements to our site, and fosters a culture of transparency and inclusivity. Learn about how we’ve made developing with GitHub fun and safe for everyone, and the surprising benefits of having more contributors to our code.
|
10.5446/30711 (DOI)
|
My name is Michael May, and in this talk we're going to cover a lot of things; hopefully it won't go too fast, and hopefully you'll learn something. The first thing we'll talk about is just cool things about the Internet: routing, some background on TCP, DNS, the things that make the Internet work but that us Rails developers might not touch every day; still, it's good to know something about them. We'll talk about HTTP, what HTTP provides to us for caching, and then how to handle things like cookies. And finally, the bulk of the talk will be around actually doing acceleration with apps: I'll talk a lot about Varnish Cache and some caching strategies for caching dynamic content. Before we begin, I just want to point out that I work at a company called Fastly. We are a content delivery network. We're all about real time, whether that's stats, log streaming, or instant purging in 150 milliseconds; we're all about being very fast. We are actually built on top of a fork of Varnish Cache, and we allow you to upload something called VCL, which stands for Varnish Config Language. Now, does anyone in the room know about Varnish Cache? Have you used it, heard of it? Cool. What about VCL, Varnish Config Language? Okay, cool. Also, we're hiring; if you're interested, come find me after and let's chat. I'd like to show a couple of these pictures: at Fastly we build our own infrastructure, so we stick tons and tons of SSDs and RAM into all our cache nodes. Now, I'm going to talk about internet routing in just a moment, so if you find that really interesting, there's a talk by a colleague of mine called Scaling Networks Through Software, from SREcon 2015 just a couple of weeks ago; check that out. So, things about the internet. The first thing we know is that the internet is made up of many different autonomous systems. An autonomous system just means a network controlled by a single entity. So, for example, Comcast has an autonomous system; Level 3 has one; and Fastly, since we operate our own network, we have an autonomous system. The real key thing here is that each autonomous system has an ASN, an autonomous system number, which is just a unique identifier that marks that network as something distinct from another network. These ASNs are actually really critical in internet routing. There's a protocol called the Border Gateway Protocol, BGP for short, and this is used to find the best path through the internet from point A to point B. The way this works is that when you connect a new router to the various ISPs you may connect to (you typically do this when you have more than one; all of us in this room are probably not doing this ourselves, our upstream provider is doing it for us), the router pulls all of the internet routes from those ISPs, then analyzes those routes and finds something called the shortest AS path. All that means is the path through the internet that traverses the least number of these autonomous systems. Once you find that path, you store the next hop of that path in your routing table. And this is important because there are these things called peering agreements and peering. Peering is when two autonomous systems get together and exchange this routing information.
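As an aside, here is a toy Ruby sketch of that shortest-AS-path selection. Real BGP best-path selection weighs many more attributes (local preference, MED, and so on), so this only illustrates the path-length step, with made-up ASNs and addresses:

```ruby
# candidate routes for one prefix, as learned from two different ISPs
candidates = [
  { next_hop: "10.0.0.1", as_path: [64500, 64510, 64520] },
  { next_hop: "10.0.0.2", as_path: [64501, 64520] },
]

# pick the route whose AS path traverses the fewest autonomous systems...
best = candidates.min_by { |route| route[:as_path].length }

# ...and store only its next hop in the routing table, as described above
routing_table = { "203.0.113.0/24" => best[:next_hop] }
p routing_table # => {"203.0.113.0/24"=>"10.0.0.2"}
```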
And with that come these agreements, called peering agreements, where you define how much traffic can be sent through each other's networks, how much it costs, things like that. These typically happen at physical points called internet exchanges. These are very big, doing hundreds of terabits a second; Amsterdam's AMS-IX is one of the largest. Now, just a fun tidbit of information: there's a group called NANOG, which is a gathering of all the network operators in North America, and they come together for these peering agreements at NANOG. So if you think that nobody actually controls the internet, NANOG kind of does, because they decide the routing. Right, so if you ever hear someone tell you that nobody controls the internet, people totally do. And if you're on a Mac, you can see the routing table; it's not terribly interesting, but you can do that with netstat -nr on a Mac, or on Linux it's just route. So that's basic internet routing in a nutshell. Now let's talk about TCP a little bit. We should all be familiar with the fact that to create a new TCP connection, there's this thing called the three-way handshake, and that takes three round trips between your client and the server to initiate a new connection. The reason we care is because HTTP runs on top of TCP, so if every time we have to send data to a client we initialize a new connection, it's going to be very slow, because of those round trips from server to client. There's also this thing called the TCP window size, which just controls how much data the server can send to the client at one time. And one of the optimizations you can do with TCP (not necessarily something you'll do as Rails developers, but something you might expect your CDN or your provider to do for you) is around slow start, which is where you iteratively increase the window size so that you can send more data at one time from a server to a client. Now, DNS: we use it every day; it's just the human-readable name mapping to an IP address. DNS records have a TTL, which stands for time to live, and that just specifies how long you can cache these DNS records. Now, has anyone used dig, or heard of the command dig? Dig is really handy. For example, if you dig fastly.com, you'll see that the question we ask is: hey, fastly.com, give me the A record for this. An A record is just the actual IP address. In this case, we came back with four different IPs. I'm not exactly sure why there are four here, but some of them might be unicast routes, some of them might be anycast routes, and a couple of them are probably for redundancy. If we do another one, dig api.fastly.com, we'll see that this one has a CNAME: it's actually a CNAME to global-ssl, and that itself is another CNAME to fallback.global, and then finally fallback.global resolves to two different A records. The reason there are two A records here is just for redundancy purposes: if one of those IP addresses fails for some reason, there's another one there that you can go to. Also notice there are some 10s and some 29s here; those numbers are the TTLs, and these are pretty short TTLs for DNS. The reason we do this at Fastly is that we update these records very often, so that we can always determine the closest POP to where the DNS resolver is. So that's just some background on the internet. Now, why do we care?
You probably don't necessarily care about that stuff, but I just think it's kind of cool. Now, the rest of this talk is about making things fast, and the reason we care about making things fast is because it takes time to move data around the world. For example, we're here in Atlanta; say we have some clients we need to talk to in Sydney. That's about 10,000 miles, a pretty long distance. Just ideally, at the speed of light, that might take 80 milliseconds for a round trip from here in Atlanta to Sydney and back to Atlanta. In reality, it's going to take a lot longer, and you can find that out by doing a ping. Here's one of our cache nodes in Sydney that I pinged, and you'll see that a round trip takes upwards of 250 to 350 milliseconds. When you start thinking about that, if you have to open a TCP connection to clients out in Sydney every time you're sending data, that's going to take a very long time: 3 times 300, that's 900 milliseconds, so your client is already waiting almost a second just so that you can begin sending information. A little side note: I also did this from airplane wifi, and that was taking 700 to 800 milliseconds; everything on a plane is terrible, but we already knew that. And I was curious where these packets were actually going and why it was taking so long; you can see here that it bounces around the plane's network for a while and then finally resolves down here in Denver. Maybe I was above Denver at the time, I don't really know, but traceroute is a fun little tool. Now, instead of accepting that 250-to-350-millisecond round-trip time, we can do some cool things and decrease it. Here, in this case, Atlanta to Sydney is only taking 20 to 30 milliseconds. So what's going on here? Did we just beat the speed of light? If we did, people like Stephen Hawking would probably want to know, because they've been working on really hard problems like this for a long time. So did we just beat the speed of light? No. Actually, we cheated. So to my fourth grade teacher who told me cheating never gets you anywhere: take that, because on the internet, cheating gets you lots of places, much, much faster. And the way that we cheated was with caching. So with that performance and round-trip time in mind, let's talk about HTTP.
Some of the things that HTTP provides us for caching: Cache-Control is an HTTP header we've probably all seen, used, and loved. I just want to point out a couple of directives for the Cache-Control header that you may be less familiar with. One of these is s-maxage. This behaves exactly like max-age, except it only applies to shared caches, a shared cache being your CDN. So you can do some cool stuff where you define different cache policies for the browser versus your back-end CDN caches. There's also some pretty new and interesting stuff relating to grace mode: the stale-while-revalidate and stale-if-error directives. These tell the cache that it's okay to serve stale content while it does something in the background. With stale-while-revalidate, the cache goes and fetches the fresh content in the background while it's still serving stale. Stale-if-error is the same type of deal: if the content is stale and a request to origin comes back with a 500, maybe origin is down or something, the cache will still serve content out to users and not serve those errors, which in my view is a much better user experience; I'd rather have my users see stale content than errors. Now, the Vary HTTP header is another header that can affect caching. A good use of the Vary header would be to vary on something like Accept-Encoding, and this checks out because Accept-Encoding has a pretty small set of permutations that can be made in a response: there's gzip, deflate, and unencoded versions. All of those get cached, but there aren't too many permutations. A bad use of the Vary header would be to vary on User-Agent. Please do not do this, because there are thousands of different user agents, so if you're trying to cache responses and you vary on the User-Agent header, you're really going to eliminate any benefit caching might provide, because there are so many permutations. Now, just to clarify things on cookies: the Set-Cookie header is sent by the server, the Cookie header is sent by the client, and in general, proxy caches like CDNs will not cache anything that has cookies on it. Intuitively this makes sense, and it should, because generally when we have cookies in responses, it means there's some sort of private data in there that we don't want to cache in a public cache. So now I want to talk a little bit about Varnish Cache. Varnish is an HTTP reverse proxy cache. It works just like a reverse proxy would: it sits between your user and your server, and all the requests flow through it. Generally, when you see these things out in the wild in production, you'll CNAME your top-level domain, like example.com, to the hostname of your cache server or load balancer, so that all requests to your top-level domain flow through that cache. Just to give you an idea of what this actually looks like, here's the output of a curl to the Fastly app. I'll just point out a couple of things here. First, you see this Via header. All this does is tell you that this request was actually proxied, and in this case it was proxied through Varnish; it could be proxied through something else, like Squid or any other proxy cache you might be using. Some other headers that are interesting are X-Served-By and X-Cache-Hits. Any HTTP header you see that's prefixed with X- is not an actual official HTTP header; it doesn't exist in the HTTP spec, and it basically means someone made it up. In this case, they're actually relevant to us, because X-Served-By tells us which cache this actually came out of: here I'm in Atlanta, and it came out of the Atlanta cache. This was also a miss, because this request happened to include cookies, and because it was a cache miss, there are no cache hits. Finally, note that we are using Connection: keep-alive here, so curl in this case did not actually close the TCP connection; it left it open so that we can reuse it and not do that expensive three-round-trip TCP handshake every time we make a request. So, Varnish comes with something called Varnish Config Language, or VCL. This is just a domain-specific language for interacting with the traffic that comes through your cache, and this VCL is translated to C and compiled down to native code.
So it actually has a pretty negligible effect on performance; as opposed to doing something similar with, say, middleware, where that could affect performance, doing it in VCL doesn't. Varnish and VCL run on something called the Varnish state machine. The way this works is that when Varnish processes a request, that request goes through a bunch of different states, and at each one of those states, VCL allows you to inject logic so that you can do some sort of processing. For example, here are a couple of different states: recv is just the first routine that gets called, deliver is the last routine that's called, fetch gets called after a cache miss, and hit after a cache hit. And error is an interesting one; you can do kind of interesting things with control flow there. So, a quick example, pretty straightforward: say we want to lock down requests to an API endpoint, and if those requests don't have an Authorization header, we're just going to 401 them. So this code right here is in vcl_recv; remember, that's the first subroutine that gets executed. So immediately when this request comes in, we're going to 401 it, and this request is never going to pass back to our application server. We don't even need logic in our application server to do that, since we can do it out here. And then, thinking back to those pings, doing this is the difference between 300 milliseconds and 30 milliseconds, because with Varnish serving these errors, those requests never spend those hundreds of milliseconds going all the way back to our application server. Varnish also has something really interesting called synthetic responses, and this is best shown by example. Say we have a page, and this page has a like button with a counter; it counts the number of likes that this video has, or whatever. Typically, how we might handle this using a traditional reverse proxy: maybe there's a piece of JavaScript in the page, and when you click that button, it fires off a POST request. That POST request goes back to your origin server, which does a DB update and then comes back with a 200, and when the JavaScript on the page sees that 200, it increases the counter. Again, if you're going back and forth from here to Sydney, that's going to take a really long time. What synthetic responses allow you to do is completely bypass making that round trip to origin. The way this could work is that the JavaScript on the page makes a POST, that POST gets to your cache, and you can have a synthetic response that, say, just returns 200 OK. That 200 gets back to the client very quickly, so the client, the browser in our case, gets that feedback immediately, and the user is not sitting there clicking the button going, hey, what's going on, I'm clicking you and you're not updating. And the way you eventually update the database is that you send a log line out from your cache and have some background job processing those logs in real time, watching for updates and applying them in the background. So here's how this actually looks in VCL, pretty straightforward: we have vcl_recv, the first subroutine that gets executed, and we're saying if the URL matches some like endpoint and it's a POST, we're going to throw an error, and here we're throwing error 666. Now, that's not a real error, but that's okay.
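A hedged sketch of the VCL being described, covering both the 401 guard and the synthetic like-counter response. The URLs and the exact shape are assumptions, since the actual slide code isn't in the transcript; the syntax is the Varnish 2/3-era dialect the talk is based on:

```vcl
sub vcl_recv {
  # lock down the API: no Authorization header means an immediate 401,
  # served straight from the cache node without ever touching origin
  if (req.url ~ "^/api/" && !req.http.Authorization) {
    error 401 "Unauthorized";
  }

  # "like" POSTs never travel to origin either; jump to vcl_error
  if (req.url ~ "^/likes" && req.request == "POST") {
    error 666 "synthetic";  # not a real HTTP status, just a marker
  }
}

sub vcl_error {
  if (obj.status == 666) {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "text/plain";
    synthetic {"OK"};
    return(deliver);
  }
}
```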
It doesn't matter, because what that's going to do is break out of the Varnish state machine and immediately go to vcl_error. Now, all of these VCL subroutines are global, so they operate on all requests. So what we have to do in this error subroutine is check: okay, is the status 666? If it's 666, we know this is going to be a synthetic response, so we're going to set the status to 200, set the Content-Type, make the actual response body just "OK", and then deliver it. So that's how you can do this, and potentially it saves hundreds of milliseconds on that round trip back to your origin. This works well for things where eventual consistency is okay; if you have things that actually need to be consistent, this won't work out for you, right? So let's move on, and with some of that Varnish and VCL knowledge, let's talk about strategies for caching dynamic content. First, let's talk about what type of dynamic content is actually cacheable. Pure, truly dynamic content isn't cacheable; that's stuff like real-time statistics, or credit card numbers, things we actually don't want to cache. But generally, when we think of dynamic content, such as, for example, a JSON API, it's actually pretty static and pretty cacheable. The only problem with caching it is that it changes unpredictably, so we need some way to invalidate it. Now, there are a lot of different strategies for caching these dynamic types of things: one of them is short TTLs, there's a thing called edge-side includes, you can do Ajax, and then there's API caching with event-driven purges. The way short TTLs work is pretty straightforward: in your Cache-Control, you set your max-age to something very small, for example one second. What this does is that your caches cache this thing for one second, and all requests that come in get that cached content; basically what you're doing here is flatlining your requests back to origin, because your origin only sees one request a second from the cache, which absorbs all the other load. So it's pretty cool, and this is a pretty easy strategy; you don't have to think a whole lot about doing this, and it works out pretty well if your cache doesn't have some sort of invalidation mechanism that's fast enough to keep up with dynamic content. Now, edge-side includes. You may have heard of edge-side includes; they've been around for a really long time. The way to think about these is kind of like Russian doll caching. The way they work is that templates are generated and cached on the actual edge cache, and then for the pieces that are actually dynamic, such as personalized user information, the cache fetches that content from origin, gets it back, and the template is actually filled in at the cache and sent out from there. Here's what this looks like in real life: say we have a nav bar with some links in it, and if the user's logged in, we want to be able to show their avatar and maybe their name there, to give them a personalized look and feel. The way this works, you'll see the esi:include tag; that's the keyword the cache looks for when it parses through the HTML.
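A rough sketch of what that markup might look like; the src path is hypothetical, since the slide's exact markup isn't in the transcript:

```html
<!-- the surrounding template is cached at the edge for everyone -->
<nav>
  <a href="/">Home</a>
  <a href="/discover">Discover</a>
  <!-- the personalized fragment is stitched in per request by the cache -->
  <esi:include src="/user/nav" />
</nav>
```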
And then, to actually insert the content in there, the cache makes a request back to your application server that might look something like this curl: it has the cookie on it, and it goes to the path defined in your src attribute. And here, you know, we're returning the name and an avatar. This doesn't just work for HTML; you can do it with any type of content, whether that's JSON, XML, whatever, and you can template these things at will. So those are edge-side includes. Now, another way to cache dynamic content and make your applications faster is to extract all of that dynamic content into APIs and then fetch it asynchronously from the client. This is good because we can get our HTML to the client faster; maybe we won't spend so much time building it on the application side. And when we have these APIs that Ajax is calling into, we can actually cache them. We do this by setting a Cache-Control with some reasonably long TTL; maybe it's a couple of days, we don't really know, we don't really care, whatever works out for you. And we tag each response with a cache key, much like Russian doll caching in Rails caches with a cache key, except in this case we're going to use an HTTP header called Surrogate-Key. Surrogate-Key really just means cache key; use it like that. Then, once that content is cached and tagged with those surrogate keys, when it changes we can perform a purge on that surrogate key, so that the content is removed from the cache and gets updated on the next request. The way this might look in a Rails app: in our GET methods, like index, we can set the Surrogate-Key; here we're just using the table_key from the ActiveRecord model. And then say we have an update that maps to an HTTP PUT: when this content updates, we can issue the purge and then render the new content. There's actually a Rails plugin that exposes some of these helpers to you, so check that out if that's something you're interested in doing. Now, Rails also helps us out because it injects this CSRF meta tag token into all of our layouts. This is good, and we don't want to eliminate it, but we can't cache that base HTML when there's a unique CSRF token in each page. So there are a few ways to extract it and make that base content more cacheable: we can use an edge-side include to pull the CSRF token in at the cache; we could maybe put that CSRF token into a cookie and extract it from the cookie client-side somehow; or, my favorite of these, we can extract that CSRF token into a private API endpoint and then Ajax it into the page asynchronously. So we're approaching the end; this is the last strategy I'm going to talk about, and it's caching things when your responses have cookies. The way you do this is that when a request with a cookie comes in, you strip that cookie out of the request and save it to a temporary variable. Then you do a normal cache lookup like you would, and because you stripped that cookie out, it won't be part of the cache lookup. Then, once you fetch that content from the cache, you can set the cookie back on the response from the temporary variable. The way this might look in VCL is something like this: here we have vcl_recv; remember, it's the first subroutine that executes whenever a request comes into Varnish.
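Two hedged sketches of what the slides being described might contain. First, the Rails side of the Surrogate-Key pattern from a moment ago; the helper names follow the fastly-rails plugin the talk alludes to, but treat them as assumptions rather than the exact slide code:

```ruby
class PostsController < ApplicationController
  # GET /posts: tag the cached response with a surrogate key
  def index
    @posts = Post.all
    # assumed helper that sets the Surrogate-Key response header
    set_surrogate_key_header Post.table_key
    render json: @posts
  end

  # PUT /posts/:id: purge the tagged content, then render the new state
  def update
    @post = Post.find(params[:id])
    @post.update(post_params)
    @post.purge # assumed helper that purges this record's surrogate key
    render json: @post
  end
end
```

And second, a sketch of the cookie-stripping VCL that the walkthrough below narrates; the stash-header names are illustrative (this dialect has no local variables, so spare request headers serve as the "temporary variable"):

```vcl
sub vcl_recv {
  if (req.http.Cookie) {
    # stash the client's cookie and strip it, so the lookup ignores it
    set req.http.Tmp-Cookie = req.http.Cookie;
    unset req.http.Cookie;
  }
}

sub vcl_fetch {
  if (beresp.http.Set-Cookie) {
    # on a miss, stash origin's Set-Cookie and strip it so the object caches
    set req.http.Tmp-Set-Cookie = beresp.http.Set-Cookie;
    unset beresp.http.Set-Cookie;
  }
}

sub vcl_deliver {
  if (req.http.Tmp-Set-Cookie) {
    # restore the cookie that origin tried to set
    set resp.http.Set-Cookie = req.http.Tmp-Set-Cookie;
  } elsif (req.http.Tmp-Cookie) {
    # cache hit: hand back the cookie we stashed in vcl_recv
    set resp.http.Set-Cookie = req.http.Tmp-Cookie;
  }
}
```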
So in vcl_recv we can say: if there's a cookie named this-cookie-I'm-looking-for, take that cookie, store it in a temporary variable, and unset the Cookie header. Then similarly in vcl_deliver, the last subroutine that executes before you deliver your content out to your client, we can say: if we set this temporary cookie, set the Set-Cookie header to the value of the cookie we temporarily stored. And that's how it works when there's a cache hit. If there's a cache miss, it's slightly different, because you will have to fetch that content from origin, but you can handle that too: vcl_fetch is the subroutine that executes right after the response comes back from your back-end application server. There we say: if there's a Set-Cookie header in that response, store it in a temporary variable and rip it out of the response from your application server, so that the cache can actually cache it; then back in deliver, we just go ahead and set it again. And so that's how you can cache content that has cookies in it: you basically fake it. And that's about all; we're pretty much done here. So, final thoughts: tools like dig and traceroute are super helpful for debugging network problems. Tune your Cache-Control headers; you can get very, very fine-grained control over your caches by using them. And, something I didn't really touch on too much: try replacing some of your middleware with logic in VCL at the edge, and see what happens to performance. Finally, we talked about a couple of different caching strategies; none of them is wrong, they're all right, but pick the one that's most comfortable for you and will be easiest for you. And we talked about stripping cookies out to cache what would otherwise be considered private, uncacheable content. So that's about all I have; I think we have some time for questions if anyone has any. Yes? Is Varnish something that would replace your existing reverse proxy, like nginx? So the question is whether Varnish would replace your existing reverse proxy like nginx, and the answer is that it doesn't have to: you can totally put Varnish behind, or in front of, nginx, however you want to do it. And I mentioned those Via headers: if you have a chain of reverse proxies, each proxy the request goes through gets appended to that header, so you can tell from that. It could, absolutely. Sure. Any other questions? All right. Oh, in the back. For the synthesized responses, is there a way to handle errors? Sure. So the question is, with synthetic responses, is there a way to handle errors if you get one from your origin server? And the answer is yes. I didn't cover it because it's a little bit of an advanced topic, but absolutely: you can define some logic there that says, oh, if I get a 503 back, do this instead. Thank you. Anyone? Yeah: are there significant amounts of the internet that don't respect these caching primitives? So the question is whether any significant part of the internet doesn't respect these cache TTLs, and the answer is probably not, unless you're using a really terrible cache. In general, as long as the person who implemented your cache was reasonably sane, it should handle these TTLs. Yes? From the point of view of the end user, a lot of ISPs implement their own sort of edge caches too. How do they play with that? Do they do something special, or do they generally behave themselves?
So if there's an edge cache closest to the user, how would it know to expire sooner than its default time? Right. So the question is: various ISPs may have their own edge caches, so how can you guarantee invalidation of that content? Am I along the right line? Let's say you have an edge cache really close to you, so it's the first thing you're going to hit, and it has a ridiculously long TTL; what do you think, let's say a year, because that's ridiculously long for an edge cache. And then somebody goes from that cache to your cache. Sure. So it's likely to be warm in the local cache? Sure. Sorry, I'm just trying to restate the question. But anyway: are the two caches talking to each other, so that you end up with what's realistically recent content? Right. So, if you invalidate a piece of content on your cache, and there's another cache sitting in front of the cache that you actually own, then you really don't have any control over that. You can't even say whether they support purging or invalidation; you don't know. So really, the only thing you know about is the cache that you control. I hope that answers your question. Sorry, it's not what I wanted to hear, but thanks. Yes? For the synthetic requests, do you replay them in the background? The question is, for the synthetic requests, does it just replay that for a background job; is there, like, a queue around the same thing? Okay, so in my example where I was doing this, I totally left out the log statement, so I apologize for that. But there is a VCL subroutine called vcl_log, and in that you can check for those error codes; that's where you define how your log line looks and where it goes. But that happens in the flow of processing that request and response; it's not queued up or anything, it's immediately sent out in real time to wherever you want it to go. Last one, I think. For example, if I have a short TTL, does that slow down the client? Do they have to do a lot of DNS lookups? Ah, good question. So the question was: with short TTLs, does that end up slowing down the client, since they have to do more lookups to fetch the content? And the answer is not necessarily, because you can define a directive in that Cache-Control that specifies it's okay for the content to be stale. That way you don't have your clients necessarily re-requesting this content every second or every time; you can put some grace window on there, maybe 30 seconds or a minute, if your content allows for that. And I think that's about all we have time for. If you have more questions, feel free to come up and chat with me. Thank you.
|
Most of us have some form of cache anxiety. We’re good at caching static content like images and scripts, but taking the next step - caching dynamic and user-specific content - is confusing and often requires a leap of faith. In this talk you’ll learn new and old strategies for caching dynamic content, HTTP performance rules to live by, and a better understanding of how to accelerate any application. This is just what the doctor ordered: a prescription for cache money.
|
10.5446/30713 (DOI)
|
Hi guys, thank you very much. So first of all I have to clarify that the talk title in the program is wrong: the talk is called Trailblazer, a New Architecture for Rails, and not See You on the Trail, because I'm not going to talk about hiking. I have no idea how that title got into the RailsConf program, but anyway, you're here and that's great, thank you. I have a question before we start the actual talk. It's a serious question: I have a problem with Active Record. I have this Post class and this Comment model in a has-many relationship, so a post can have many comments. And my problem is: when I create a post, the post is persisted already, and then I add comments to that post. How do I prevent Active Record from automatically persisting the new state of this post? Because I don't want the comments to be persisted yet. So I have tried this, and it saves the comments. I have tried this, and it saves the comments. And I have tried this, and it saves the comments. So can anyone here explain how I can prevent Active Record from saving the comments until I call save? .build? But .build calls .new internally, doesn't it? The problem is, and that's a serious question, guys, that we have this problem in Reform: when we set up collections internally and then assign them to the collection attribute, like comments, it saves the entire object, and I don't want this. So if anyone has an idea how to prevent this, please hit me up after the talk, because I'm desperate and I don't want to read through all the Active Record documentation. I could just ask Aaron. Okay, let's talk about Trailblazer. So before I came to Atlanta, I was traveling a little bit to conferences. The last stop before Atlanta was in Vilnius, in a beautiful... actually that's wrong, I was in Minsk, so that's the wrong Google Maps. I was in Minsk in Belarus. It was a great conference, great speakers, we had a couple of drinks, and it was fantastic. Before that I was in... I just confused the slides. Before that I was in Vilnius, that's in Lithuania, at the RubyConf; it was a great conference, awesome speakers, we had a couple of drinks, it was awesome. Before that I was in Poland, at the conference in Wrocław. Wrocław is beautiful: there's churches and cathedrals, or cathedrals and churches, I have no idea what's the difference. It's a beautiful city, it was a great conference, great speakers, we had a couple of drinks, it was awesome. Before that I was in Brazil at the Tropical Ruby conference. Has anyone here been to Brazil yet? It looks like this: that's a beach, palms and stuff. Hey, come in, hello. Don't be shy, I don't bite. And Brazil, yeah, Tropical Ruby was an awesome conference, awesome speakers, we had a couple of drinks, it was great. Before that I was in Mexico, in Oaxaca, a great place, where I gave a talk at a user group about Trailblazer. This was amazing; the community is incredibly cool. We had some good speeches, had a couple of drinks, I learned how to dance cumbia, it was awesome. And before that I was in Australia, because I actually live in Australia; I'm from Germany, that's where the accent comes from. And I'm not going to show you photos of Australia, because it's beautiful and you should all come and visit me there. Okay, let's talk about Trailblazer. Now let's talk about Rails. We're all doing Rails, right? Yes. This is RailsConf. So Rails is a famous web framework, just in case you didn't know.
It's famous for its monolithic, sorry, integrated system architecture. This is a monolith, and this is a service-oriented architecture, whatever. So Rails comes with a... what are you doing here on my stage, by the way? I'm just chilling, dude. Welcome to Atlanta. Okay, let's get back to Rails. Rails MVC, we all know it: it gives us three abstraction layers to implement applications, the so-called model, the so-called controller, and the so-called view. And that's awesome, because it is quite a simple setup, a simple level of abstraction, so you can get an application up and running within minutes; you can build a blog application in, I don't know, 15 minutes? What was the official selling point of Rails? 10 minutes, thank you. So you can get an application up pretty fast, and you can implement stuff without thinking too much about encapsulation, without that nightmare we had in Java and all this stuff; it's actually great in Rails, you get it up and running in minutes. The problem is that you only have three abstraction layers, and all your code goes into views, and all your code goes into controllers, and all your code goes into models. And this usually ends up like this. The problem here is that the original selling point of Rails was, hey, let's have conventions and let's have standards, but once the application gets a little bit more complex, you have no idea where to put this code. So one of the major problems in Rails is programmers asking: where do I put this kind of code in Rails? And so we end up with huge models, with skinny controllers that still have seven levels of indentation and all that kind of stuff, and with nightmare views full of ifs and elses and conditionals to make them reusable. So for me, this monolithic architecture is not enough. This is the monolith that David showed in his awesome keynote. I couldn't find the original photo, so I drew a picture; that is a camel on the right-hand side. I mean, come on, I'm an artist, right? I shouldn't talk about coding, I should paint. Maybe not. No. Who is this guy? You already asked me that. By the way, if you fall asleep, we have stuff to throw at you, so please bear with me and stay awake. So, the monolith in Rails: I don't have a problem with monolithic architectures at all. I don't want to deploy seven Rails apps when I just want to solve one problem. So the thing is, and I think DHH got this wrong, the only alternative to a monolith is not a microservice architecture. Monolithic architecture doesn't mean putting all your code into one pile; that is to say, a monolith can be a service-oriented architecture itself. You can have services inside of a monolith; it's just about abstraction layers. So when I say monolith, I'm talking about one Rails app, but it can be a beautiful, object-oriented, well-structured architecture inside a monolithic application. So please, when we talk about monoliths and microservices, don't think this is one physical app and that is five physical apps, because you can have services in a monolith. But the problem, and I was just talking about having objects and services in a monolith, is that in Rails, or in the Rails community, when you start adding new abstraction layers to MVC, people automatically blame you for being too enterprise, or you're over-engineering, and this is too many objects; I don't know, this can't be good.
This has to be slow, because objects are really expensive; I mean, every string is an object in Ruby. So, yeah, my problem is that we say Rails is simple; every Rails developer says that. Actually, it should be easy, because simple is objective and easy is subjective, or something like that; I got taught in Lithuania that the first statement is wrong. So: Rails is easy. That's every Rails developer. However, "Rails is easy, it allows me to structure complex applications easily", said no one ever, because it is not true. It's not funny. So what happens in Rails? I'll just walk you through the classic symptoms I've seen in a lot of Rails applications. My job has been refactoring Rails applications for the last 10 years. It was a tough job: a lot of tears, blood, but also tears of joy. So let's check out a classic controller in Rails. We've got filters that add logic for handling different contexts, for handling authentication, all that kind of stuff. Then we have model code; every controller action I've seen has a lot of model code: instantiating, creating, I don't know, changing attributes. Then we have, again, contextual logic, deciders: is someone signed in to this, is someone not signed in to that? So there's code handling different contexts in the controller action. Then we have callback logic: do this if that happened, and after this, do that. So there's a load of business logic in the controller action. Then we also have rendering code sitting in the controller. This is a beautiful controller action. I've seen controller actions with, as I said earlier, seven levels of ifs and elses and do-this-and-thats. This is what happens when we chuck all the code into the controller, or, my favorite part, into the model. In models we have configuration for forms. I know we don't use attr_accessible anymore, but we define fields for forms, and we have accepts_nested_attributes_for to handle form submissions and deserialization, but we only have one form per model, because we don't have context. Then people add ifs and elses to configure that: oh, I need this form in that context, and I need that form in this context when I'm an admin. I've seen horrible deciders in Active Record models. We also have callbacks sitting in the model, callbacks that get triggered: sometimes they get triggered when you don't want it, and sometimes you want them but they don't get triggered, so you add ifs and elses, and it is a nightmare, because we collect all the code in one asset, in one class. And of course there's also business logic in the model; everything sits in the model. I'll just run through this next part, because it's my favorite part of Rails applications: views. We have views with access to models, and access to seven levels of attributes of models, and we have deciders that decide about the context: if it's an admin, render this, if it's not an admin, render that. So lots of code, and then we have helpers that are supposed to help, but they usually are nightmares because they pass around 200... wait. There you go. That was supposed to hit your head. He plays baseball. Okay, so helpers. Helpers never helped me, because what I end up with is passing a million variables and objects into nested helper calls, and we have, again, deciders to decide the context, and render-partial-this, pass-locals-to-that. It is a nightmare. So when you look at a classic web application, it doesn't matter if it's Rails or Sinatra or whatever, it boils down to the following requirements that we have per request.
So the first thing is we have to dispatch: we get a request and we have to dispatch, if it's HTML, do this; is it JSON, do that, in case you have a document API. Then we usually do authorization. Is it okay to run this request? Is it okay to run this logic? Then we validate the incoming data. Is it okay? Is it all in this format that I want? Is the user allowed to run this task, and so on? Then we actually run the business logic. So this is where you run your actual domain code. And then usually stuff gets persisted after you run the business logic, and in the end we render a result. So it's not that hard. It's one, two, three, four, five, six steps of things that we handle in a request. But in Rails, the controller handles, I don't know, like parts of that, and then the model handles parts of this stack, and they also overlap. So you have model code that should be in the controller, and you have controller code that should be in the model. And then the view does a little bit here and a little bit there, and sometimes it does a little bit more. So a lot of overlapping code there. It's not clear in Rails where to put my code. What's the place for this kind of logic? In Trailblazer, which is a cool new framework, open source by the way, so you can check it out. And it's got the best book cover ever. Do you recognize people in it? I just got taught that this one is DHH. I like this. So in Trailblazer, we have a... The point of Trailblazer is, take these... We got these six steps of logic in an application, and now take this and structure... Give me a structure, help me structuring my Rails application. And basically this is Trailblazer, so I could just go and leave. So this diagram took me about 10 hours to do. It was a lot of work. Did you have a few drinks? I had a couple of drinks. So in Trailblazer, we introduce a couple of new abstraction layers shown in this diagram, and I'm going to walk you quickly through the concept. After this, we are going to have a live demonstration by my friend. J. Austin, how do you pronounce your last name? Hughey. And you're going to learn a little bit about how to structure Rails applications using the Trailblazer approach. So the first thing I mentioned is that the old layers are still there. So we still have controllers, we still have models, and we still have old-fashioned Rails views if you want that. Trailblazer is non-intrusive, so you can use it partially. You don't have to use every layer, and you can use it where you feel you need more abstraction. So you don't have to use Trailblazer across your application. Of course, I would love to see that, but whatever. So in Trailblazer, we got the following layering. So now it is clearly visible which layer is responsible for which task. So what we do in Trailblazer is we introduce operations. Operations are your business logic, and an operation is not a monolithic beast. An operation internally uses forms for validation and deserialization, it uses representers, you run your manual business logic in an operation, and operations also have access to models. But controllers only access operations, and views only access operations and models. So there is a clear layering in Trailblazer of which layer is supposed to do what. I have this new technique in my talks. Whenever this slide comes, we do a breathing exercise. It is very simple. So you inhale as much air as possible, you keep it for two seconds, and exhale. It is great. All right. How does it work? I have no idea.
The controller in Trailblazer ends up as a really, really slim dispatching asset. There is no business logic, no persistence logic, no validation, no callbacks in controllers. The controller simply dispatches to the operation. So this ends up as really, really slim controllers. This is my favorite part. The model in Trailblazer is empty. All the model does in Trailblazer is it defines associations, and it defines scopes. So basically, we use models the way models were supposed to be used. This is Active Record, the Active Record pattern by Martin Fowler. Because per definition, and I don't care about definitions of patterns usually, but Active Record was supposed to be a persistence layer without business logic. And this is what we have in Trailblazer. You can still have business logic in your model if you want that, but per definition we say don't put it in the model. And we also have views. We can still use Rails views in Trailblazer. So you can still use Haml and ERB and all that kind of stuff, because it's awesome. I don't know, some people don't like Haml, some people don't like ERB, but that's another story. You can still use helpers. You can still use partials, all this stuff, but we offer you something new. It's called Cells. Actually, it's not new, it's 10 years old. Cells are view models. View models help you to encapsulate parts of the view, or the entire view, into objects. We call it a widget in software engineering, and this is what is missing in the vanilla Rails stack. So a cell, ironically, is called using a helper. But all that happens is a dispatch to the cell class. I'm not going to go into the details of cells, but you have a class that represents a part of your view. And this class can render a view. In Cells, we call partials views, because there is no difference between views and partials, because everything is a partial. The view is logic-less, as I call it. So you still use Haml, you still use ERB, you can still use helpers, but you only call methods in your view. And the cool thing is, if you call a method like body or avatar, this method is called on the cell instance. So we don't have this problem that we have in Rails with view context anymore, because the cell is the view context. So if you call methods in a cell, they're getting called on the instance. It's really, really helpful to replace helpers and to have an object-oriented approach in your view. And as you can see, you can still use image_tag and url_for and all those great helpers, simple_form, whatever, in cells, because that's what makes Rails awesome, the view helpers that actually help. But we don't have this distinction between this is the controller context and this is the view anymore, because everything is just an object. And the brand new thing in Trailblazer is the business logic layer called operation, the domain layer. An operation is a class. Again, man, so many classes. I don't know where to put all those classes. An operation consists of a contract to deserialize and validate input, and business code. So the contract is defined in the operation. The contract is just a Reform class. I'm not sure if you know Reform. Reform is a form object gem for Rails. And we use Reform in Trailblazer to deserialize and validate data without touching the model. So operations always have a form object. The actual business logic happens in one method called process, and that should be the only public method in that class.
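To make those two layers concrete, here are two minimal sketches. They assume the Cells 4 ViewModel API and the 2015-era Trailblazer operation API described in this talk; the concept name, the avatar_url attribute, and the view paths are all made up for illustration. First, a cell:

    # app/concepts/comment/cell.rb -- hypothetical names throughout
    class Comment::Cell < Cell::ViewModel
      property :body      # delegates #body to the wrapped comment model
      property :author

      def show
        render  # renders the cell's own view, e.g. app/concepts/comment/views/show.haml
      end

      private

      # called directly on the cell instance from the view; the cell is the view context
      def avatar
        image_tag(author.avatar_url)  # Rails view helpers still work inside cells
      end
    end

And second, a bare-bones operation with its contract and its single public process method:

    # app/concepts/comment/crud.rb -- a sketch, not anyone's production code
    class Comment::Create < Trailblazer::Operation
      contract do                       # an inline Reform form class
        property :body
        validates :body, presence: true
      end

      def process(params)
        validate(params[:comment]) do |form|
          form.save                     # pushes validated data to the model and persists it
        end
      end
    end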
And in that process method, I mean, we're probably going to walk through that in the live demo, which is still in preparation. So the business logic, the actual logic your domain consists of, happens in the process method. And so you still access models, you still save models, but a lot of work is already done by the operation. For example, validating and assigning the data to the actual model is done in the validate call. If you want that. You don't have to use it, but I like it. Again, what the f... Okay, one, two, three. I should start doing yoga and have a deep professional voice. Okay. The last thing in this introduction is, I quickly want to talk about high-level architecture, because whenever people ask me what is Trailblazer, I say it is a high-level architecture for Rails. What is a high-level architecture? Well, a high-level architecture is everything that sits between the request dispatch and the persistence. That is what I call a high-level architecture. This is where Rails leaves you completely clueless, because there is no high-level architecture. There is a controller, there is a model, and there is a view. And what we have to learn is we have to think in our domain, and not in controllers and not in models. So as an example, every application has functions. Functions a user can perform. Like, you view, I don't know why it's a shop, like, you view a shop, and then you can add a comment to this shop, and then you can follow the shop. So when you click through an application, it's got functions. And in domain-driven design, I have never read the book, but apparently this is what we call a use case. And in CQRS, I have no idea what it stands for, but it's great, this is called a command. So apparently there is existing science about these concepts. So in Rails, what we have is use cases and commands get implemented a little bit in controller actions, a little bit in a model, a little bit in a view, a little bit in a hand-made service object, and maybe a little bit in, I don't know, your presenter object. And it is a mess. And when people tell me Rails is so simple, you can hand over a project and the next programmer is going to understand everything: this is not true, this is wrong. Like, I've seen lots of Rails projects, and every project looks different. I mean, great, I know where the controllers sit. Great, I know where the models sit, and I know where the views are. But where's business logic? How do you structure that kind of stuff? And what Trailblazer does is it introduces the operation. So you clearly know where is my business logic and what's the structure, because every operation should have the same structure: validation, deserialization, business logic, callbacks. And also an operation is not just one CRUD thing that has one model. So an operation can have multiple models, and an operation is also not a monolithic beast. As I said earlier, an operation is a composition of objects that help you to handle your request. And it's an orchestrating instance in your architecture. And the coolest thing about Trailblazer is that it has a new file structure. So instead of cluttering files into app controller, blah, blah, blah, app view, blah, blah, blah, blah, blah, blah, you have one blah, blah, blah folder and it's called a concept. So in Trailblazer, we structure code into concepts, and a concept is a domain thing: comments, I don't know, invoice generation, those are all concepts. And all the code goes into that folder.
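As an illustration, a single concept directory might be laid out something like this (the file names are hypothetical):

    app
    └── concepts
        └── comment
            ├── cell.rb      # view models
            ├── crud.rb      # operations: Create, Update, ...
            └── views
                ├── show.haml
                └── grid.haml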
That has a couple of benefits, for example, if you rename something, you don't have to rename four directories, you only have to rename one directory, but that's only one benefit. I find it way more intuitive to go through that directory: you see, okay, we got view models, okay, we got operations, okay, here's my model, okay, here's my, I don't know, like, helpers. And also the views for the view models sit in this directory. So it's way more intuitive to grasp what a concept is doing in Trailblazer. Yeah. Austin, that's you here, right? Do you want to show us some Trailblazer in action? Yeah, I can. I can do that. But you only got ten minutes. Well, thanks to you, yeah. This is really unprofessional. I usually prepare every slide, so I don't have to do this kind of stuff, but he wanted to be part of the talk, so it's all his fault if things go wrong now. Yeah, sure, uh-huh, yeah, I see what you did there. All right, guys. So, as Nick said, Trailblazer actually sits on top of Rails, as we know. The one thing I like about it is that, as he said, you don't have to abandon your typical Rails architecture, the way that you like doing things. So what I did was I basically built a very simple blog, okay, because it's a blog. Everybody knows how it works. You know, it's a classic example. So if the thing will do what I want it to do... It's not moving the window. Stupid technology. All right. Live demos, oh my God. Hey, quiet, you. Anyway, because I went full screen and I'm an idiot for that. Okay, so, you know, I built the awesome blog of Awesomeness, because, yeah. Anyway, so the idea, obviously, it's very simple. You know, you just create a blog post, you know, type in information, a title, blah, blah, blah. And I just, for fun, I just said, you know, this is Markdown-enabled. He's extremely proud of this Markdown feature. I like it. I like Markdown. It renders on the front end. So, it's implemented in Ruby, or what? Let's see. Anyway, see, just pop my email in here. Anyway, so, create a post, blog, done, grab your Gravatar, and all that jazz. Anyway, so that's the basic idea that I implemented here. Now, we've all done this in Rails. This is very simple from, you know, the perspective of Rails. It's, you know, not a big deal. But in terms of how we did this in Trailblazer, as Nick said, the controllers are really skinny, very tiny, as is the font on this damn thing. I saw you yawning. Thank you. Thank you. All right. So, okay, big deal here. PostsController, obviously, we have, you know, your index and all this stuff. New, okay, so if I'm going to create a new, you know, post object, what we're doing here is saying we're calling this form method on Post::Create. What's this Post::Create nonsense, right? This comes from the concepts directory. And, of course, underneath concepts, I've just got post and a file called crud, which is create, read, update, delete. So, in here, I'm just requiring Trailblazer's operation CRUD module. Most of this is documented, by the way, on Nick's blog, or, excuse me, the wiki and the GitHub project for Trailblazer. And the book that I'm going to mention in a minute. Yeah, yeah. He wants you to buy his book. Buy his book. Anyway. So, we've got, basically, we're just saying, we're subclassing right here. So, we've got Post, which inherits from ActiveRecord, and we've got a Create class, hence form Post::Create.
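The skeleton being walked through here looks roughly like this. This is a sketch of the structure as described, using the 2015-era Trailblazer API; treat the require path and the exact property names as illustrative rather than the demo's literal code:

    # app/concepts/post/crud.rb -- sketch
    require "trailblazer/operation/crud"

    class Post < ActiveRecord::Base
      class Create < Trailblazer::Operation
        contract do
          property :title
          property :body
          property :teaser

          property :author do          # nested form for the has_one author
            property :name
            property :email
          end

          validates :title, :body, presence: true
        end

        def process(params)
          validate(params[:post]) do |form|
            form.save
          end
        end
      end
    end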
And one thing I have to mention is, just because we reuse the Post namespace, which is an ActiveRecord model class, does not mean that the operation Create knows anything about ActiveRecord. We're just reusing a Ruby namespace. This is a big confusion, there's a lot of confusion in the Trailblazer community. People think that Create now inherits from ActiveRecord or something. No, we just reuse a Ruby namespace. So, this is how you create namespaces. So, you can have classes in classes. It's a great trick to have readable namespaces. Right. And good class inheritance as well. So, that kind of brings up a question that the community might have. Given the fact that, what you just said, we can namespace it in this way, what is the potential in the future for separating Trailblazer from ActiveRecord? Say, I want to swap in another ORM, or no ORM at all. What do you think? I mean, is that something on the horizon? What's all of that? Well, Trailblazer is not limited to ActiveRecord, but in an ideal world, you had a Post namespace that's not a model. Then you had your operations in that namespace, and you had the model that's called, for example, Persistence. So, you had Post, colon colon, Persistence. That's my ideal vision. But in Rails, models are on the global namespace. So, that's my trick to reuse this namespace. But ideally, it's a real namespace, and you have the persistence sitting in that namespace. And then you can replace it with any kind of ORM you want. Right on. Okay. Now, with that explanation out of the way here, basically, like I said, you can look over the documentation he's got. But what we're doing here is, this is a contract, and this is kind of the meat of, you know, kind of what provides... Some of the stuff you've probably seen, like validates, for example, usually goes in the model. But in this case, we're telling it, look, you know, you got a property of a title, a body, a teaser, which is, you know, a little caption, whatever, and an author. And of course, in this case, we're doing nested relationships, because the post has one author, and the post... or excuse me, the author belongs to a post, in terms of ActiveRecord's macros here. So, as you can see here, has_one :author, belongs_to :post. Okay. So, we've seen these associations before. Empty models. Exactly. Empty models. There's no logic in there. Other than that, it's pretty vanilla. So, of course, we're doing properties here inside this block. So, what's going on here is, this is how we tell Trailblazer that, hey, by the way, you've got this thing called an author, and by the way, it's got these two different properties, name and email. This is what gives us the ability to put this inside the form later, and I'll show you that code in a minute. Of course, here we have validations. I'm just validating presence so you don't, you know, fail to fill it out. Now, this is kind of the big deal right here. This process method is going to, as you can see here, it's got this validate block. It's basically going to make sure that all your, you know, parameters that you pass into it are indeed valid. If so, it will call save on it. So, that is kind of how we manage the persistence in this case. Now, there are a couple of things here, and I'm going to ask Nick to explain this in a minute; I had to write a few ActiveRecord methods as well. We were looking at doing the, what do you call it, the setup for the... Yeah, so the operation doesn't know anything about...
I mean, it does know about the structure of the nesting, but the operation doesn't know how to create, for example, the nested author of the post. So you have to provide that manually. I mean, there are ways in Trailblazer that actually implement this for you, and you can just configure it. But in this example, we explicitly create the author to have this nested model set up. Exactly. All details explained in the book. Right. So in other words, that's basically... it's a little bit of a trick to get around some stuff with ActiveRecord. Anyway, so that is exactly how that piece works right here. Same thing inside Update. So we're still inside the Post class. Actually, we're inside the... Yeah, we're inside the Post class. I'm getting a little confused in my own code here. Nice throw, buddy. Sorry, I'm an asshole. I wasn't going to say anything, but... Hurry up, you got three more minutes. Okay, four... Yeah, you're an asshole. Anyway, all right, so anyhow, we're still inside our Post class here. We've got a class called Update that inherits from Create, so we can now also call an update action on this later if we want to. I didn't wire that into the actual architecture, forms and all that jazz, but you get the idea. So we also had to call setup_model here and override it, but it's an empty method in this case. So, moving on here, as you can see, the posts controller: new, so we're just saying form Post::Create, done. Create: run this operation, Post::Create, okay, and I'm just passing it a block. The block is only executed when the validation was successful. So the only information the operation exposes is: I was run and I was successful, or I was not successful. So the controller does not know anything else, unless you do that, unless you extend the operation. Right. It's only true and false. Okay, so anyway, we're basically running this run Post::Create. I'm terrible at naming variables, so I just call it x. Anyway, redirect_to x.model, so in other words, there's no hardcoding of, okay, post or, you know, whatever else it might be here. We just call model and that's it. So we've got RESTful routes set up in this case, of course, and of course the return here, I just don't want anything else to happen if, you know, this actually works. As opposed to rendering the new action here. Show, as you can see, is one line: present Post::Update. Okay, cool. The actual update method as well, same thing. Edit, all that stuff. So, and of course, like I said, I didn't implement all the different things here. So the point is that the controller really just delegates to the operation and handles HTTP-specific stuff like redirects, because that's not a concern of the operation. The operation doesn't know anything about HTTP, and the controller doesn't know anything about the business logic. That's the whole point of this structuring. Right. So what does that say? What happens if it fails? When the update fails, then the block is not hit, so you don't have this return redirect_to. The block is only executed when it's valid. And then the form is re-rendered. Just as you do in your controller actions in Rails. So the whole point is the block is only executed for successful operations. Exactly. So if this particular thing doesn't work, it's going to be like, oh, well, never mind. I'm not going to bother with this next piece of code. Oh, here we are. Render action, new. Okay, done.
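Pulled together, the controller actions being described look roughly like this; a sketch assuming Trailblazer's run, form, and present controller helpers as described, with the same throwaway variable name as in the demo:

    # app/controllers/posts_controller.rb -- sketch
    class PostsController < ApplicationController
      def new
        form Post::Create              # builds the operation's contract for rendering
      end

      def create
        run Post::Create do |x|        # block runs only when validation succeeded
          return redirect_to x.model   # no hardcoded paths; just the operation's model
        end
        render action: :new            # invalid input: re-render the form with errors
      end

      def show
        present Post::Update           # one line: set up the model, skip validation
      end
    end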
So we've got the object accessible inside the form, or inside the view, and we can then look at the errors on the object, just like in ActiveRecord. In fact, I'll show you if I can actually see the screen. Two more minutes. I need to wrap up the posts part. Let's see. Views. Is that it? Sorry, I don't even have it mirrored over here for whatever reason. All right. Because he's unprofessional. Why are you? All right. So, index for posts: all I'm doing is just grabbing the posts, and pretty simple stuff, right? Okay, we've all seen this before. Here is our form. Okay, so I'm just saying if the number of error objects, you know, is greater than zero, then, yeah, we got problems, blah, blah, blah. We've seen this before. Now, in this case, I'm using simple_form. Basically, I saw some of Nick's documentation, and I was like, okay, I'm just going to, you know, totally rip this code and, you know, work with it like that. So as you can see, it's very simple compared to existing ActiveRecord or existing, you know, Rails. Very much the same thing. You've got your inputs, you've got all this stuff. Now, the nested fields for author: so, of course, the form object is, you know, based on the post, and now we have the author object off the post, and so on. So there's the nesting. And, of course, the show is very simple as well. It's just, you know, view code. That's kind of it in a nutshell. There's a few other things, but I'm going to let Nick take this back. Now, there's a little bit of configuration you might have to do, much of which is documented. So, you can go to github.com and find the Trailblazer repo. You just Google for Trailblazer. Google rocks. Yeah, you could do that. Yeah. Thank you very much. Thank you. So that was awesome. Thank you very much. I'm just wrapping up in the next two minutes. So the thing is, as you saw, in Trailblazer you can easily have nested setups. So, like, the nested form we had, you know, we nested the author into the post and all that kind of stuff. So that's really, really helpful. And it's all based on the Reform gem. So you have nested validations as well, cleanly implemented, and... It's great. A lot of things in Trailblazer are based on inheritance. So we use object orientation the way it was supposed to be. Yeah? So, by the way, inheritance, I don't know if you know that, but if you can roll your tongue and your sibling can't, then it's not your sibling. But, so this is my sister. Apparently, I don't know, maybe you've got different parents. Whatever. Let's talk about inheritance. So in Trailblazer, operations inherit from other operations, and you inherit the contract, you inherit the representer, all that kind of stuff is cleanly inherited into the subclasses. And Trailblazer also makes it really easy to override specific aspects of the inherited stuff. Like, you can override properties in the contract. Or you can also override nested attributes in the contract. You can override stuff, configuration from the representer. You can override your business logic. It's just using plain Ruby inheritance. And the way callbacks in Trailblazer work is also very, very straightforward. So you remember this validate, and then we save the model. So usually this goes into, like, the model in Rails; in Trailblazer, this goes into the actual validate or process method. So you can call callbacks explicitly. Or you can use dispatch. So you define which callbacks to call.
And the cool thing is, let's talk again about inheritance, is that also... I didn't know that, like, if you have a gap in your teeth, your siblings are supposed to have a gap as well. I don't have a photo to prove it, but this is also wrong, because my sisters don't have a gap in their teeth. Let's talk about inheritance. So the cool thing is, the cool thing is you can also deactivate callbacks in inherited operations. For example, in Update, I might not want to have the check-spelling callback be called. So I just skip it. So it's really declarative and simple to override behavior. And so inheritance works with contracts, representers, policies. We also have policies and authentication in Trailblazer. I did not talk about this today, and I'm sorry. And it is also good to inherit configurations. Representers are awesome. They render and parse stuff. So this is helpful for operations when an operation does handle JSON. So the internal contract can build a representer. The representer helps you to work with JSON and all that kind of stuff. So an operation can parse and render JSON as well if you want that. It's awesome. And again, Trailblazer is not a monolithic beast. It is an orchestration of objects handling every aspect of your request. And people say, we do this on our own. I agree, but Trailblazer is an attempt to establish a standard. Okay? So we get awesome stuff in Trailblazer like polymorphic operations and policies and blah, blah, blah, blah. It's great. Check it out. There's an awesome book. It's on leanpub.com slash trailblazer. Leanpub.com slash trailblazer. Leanpub.com slash trailblazer. Or you can Google it. We also got stickers. The cool thing about the stickers: if you don't like Trailblazer, you just cut it off here and then you have a free Ruby sticker. Nice. So, hit me up. I'll give you a sticker. Maybe. And the cool thing is, Engine Yard sponsored me to come and give this talk. So they basically paid everything. They are awesome. They are really nice. I'm not affiliated with them, but I find it a great thing. Other companies should adopt this to support open source authors, because it is great if you can speak at a conference and you meet awesome people and a great company stands behind you and supports you. So thank you, Engine Yard. And just to wrap up: use Trailblazer. It gives you new abstraction layers. Use operations, because they help you structure business logic. Use all my gems. And be nice to each other. Thank you very much. Bye. Good job. Good job. Good job. Thank you.
|
Trailblazer introduces several new abstraction layers into Rails. It gives developers structure and architectural guidance and finally answers the question of "Where do I put this kind of code?" in Rails. This talk walks you through a realistic development of a Rails application with Trailblazer and discusses every bloody aspect of it.
|
10.5446/30720 (DOI)
|
So thank you, Olivier. My name is Caleb Thompson. I work at Thoughtbot with Mason here, this guy, and co-organize Keep Ruby Weird. We are here to find the answers to the question: what is this PGP thing, and how can I use it? You can find me on the internet here just about anywhere. So, a lot of you got USB keys. Those have the software that we're going to be using today. So if you can copy that onto your disk and then pass that key along to somebody else. It also has my public key on there. So if you just copy both of those things onto your disk, we'll get to them later. This is a link to the slides, because this font doesn't make it very clear: that's a zero. So I'm not going to be pausing too much in this presentation. I don't want to have to go back and forth to show some of the commands that we're going to run. All of the commands are in these slides. So I'll give you a minute to go ahead and grab those. If you're already, like, a PGP pro, or if you are more interested in the command line, or if after this you're interested in learning more about PGP in general, I wrote an extensive blog post about a year ago now that details just about everything I know about all of this stuff. It uses the command line tools rather than the GUI tool that we're using today, but all of the same concepts apply. And it does a really good job of explaining why you're doing any of these things, as well as how to do them. So we're going to be using a program called GPG Tools. It's actually a suite of programs. You can download those from the internet at GPGTools.org, or they're on the USB that I've given you. So this is how you would download them. That's the button. So because this is security software, we actually have to verify that the package is the correct package. Mason made the joke: let this be the last package that you don't verify. But we are going to verify it. So if you have a trusted PGP installation already, then what you can do is download the GPG signature from GPG Tools right here, download the key from here, import it into GPG Tools or into your favorite PGP implementation, and then you can verify the fingerprint of the public key here and run this command on the command line to actually make sure that you have the software that you expect to have. Probably, though, you didn't already have a GPG implementation on your computer. So we are instead going to verify the SHA of the package. So I've given you something. It could very well be malicious. I could have given you the wrong piece of software accidentally. I didn't install from that binary myself. And so what we are going to do is check it against the SHA that gpgtools.org says is the correct one. So a SHA sum is the simplest way to verify that the file that you are looking at is the file that somebody wanted you to have. And you can be sure that the person lying to you about the piece of software is the same person who is lying to you on the website that you downloaded it from. And maybe that person lying to you is actually a developer. So this is the command that you are going to run to verify the package. And it's going to output a number. You want to make sure that the number matches up to this one. So once you are satisfied that you have the right piece of software, we are going to install it just like you would normally. There's nothing special here to installing the package. Can somebody make sure that she gets the USB? I'm going to move on.
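The verification boils down to two one-liners; the disk image and signature filenames below are hypothetical, so substitute whatever you actually downloaded:

    # Compare the output against the SHA-256 sum published on gpgtools.org
    shasum -a 256 GPG_Suite.dmg

    # Or, if you already have a trusted GPG install, check the detached signature instead
    gpg --verify GPG_Suite.dmg.sig GPG_Suite.dmg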
I have helpers who are going to make sure that you are being dealt with. Okay. So the next step is going to be building key pairs and uploading public keys. What the heck is a key pair? To get anywhere with PGP, you are going to need one of these things. And it's composed of two parts. That's why it's called a pair. It has a public key, which you are going to publish, and it's used to encrypt messages to the owner of that key and to verify messages from you. So people will use your public key to verify messages that you send to them and to other people. And a private key, which you keep secret and safe and it never goes onto the Internet, and it allows you to decrypt messages that people encrypt to you using your public key, and to sign messages so that others can be sure that you made them. Systems that use separate public and private keys are more secure for several reasons, but the primary one is that the private key just never goes out onto the Internet. In the case of PGP, it's also password, or passphrase, encrypted. And even then, this is still not something that you'd ever put onto Dropbox or Facebook or whatever you kids are using these days to share files. If you must move it around, USB is the easiest and best way to do that. But then you want to make sure that that USB stick is destroyed and everything else. Kill it with fire. This is how we're actually going to do this. So we're going to generate a new key, click the top left corner to do that. It's going to fill in a lot of your information automatically. We're going to make sure that we've picked the longest length, 4096 should be that number, but if there's a bigger one on your machine, use it instead. And then we're going to check expires. I unchecked it, but we're actually going to check it. Change the date to about a year out from now. And the reason that we do this is so that if you ever completely lose access to a key, it'll invalidate itself automatically after some amount of time. But you can change the expiration date; you can just right click on the key, your secret key, and change the expiration date at any time. You just need to unlock the key. But this means that it's sort of a dead man's switch. It means that you're not accidentally leaving your key out so that people still think that it's a valid way of contacting you, basically. So what can we do with this key? I'm going to pause here and make sure that everybody's got a key. All right, so now we have keys, what can we do with them? The piece of software that I gave you, automatically, if you use Mail.app to send email, it's now got a couple of encryption tools in there, so one of them is to sign, one of them is to encrypt. You can now sign every outgoing email and it won't hurt anybody. And if you know that you're communicating with somebody who also has a key and you have that key handy, then you can encrypt to them, and that's a good practice as well, just so that we've got more noise when the NSA is trying to crack encryption things. But the simplest thing that we can do that's not already configured for you is to sign Git commits. So there's a little bit of configuration in here to do. The first is we need to tell Git that we want it to sign all of the commits that we make. So from the terminal, this is how you make that happen. Then we need to tell Git which key to sign with. But GPG is pretty smart. By default, it'll use the only private key that you've got, but it's still a good practice to have this in there.
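The configuration being pointed at amounts to these two commands (the signingkey value is a placeholder; use your own name and email, or your key ID):

    # Sign every commit from now on
    git config --global commit.gpgsign true

    # Tell Git which key to sign with
    git config --global user.signingkey "Your Name <you@example.com>"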
So I have my whole name and email address in here. GPG will find this just fine. But it makes a little bit more sense sometimes, if you're having issues, to just go ahead and use your key ID, which is the last eight characters in your fingerprint, which you can get by right clicking on your key to view details. And so what this does is every commit you make, you attach a signature object to, and those get pushed up automatically with git pushes, and others who care about this stuff can basically verify that you are the right author. How do we do that? So verifying with Git is not super difficult, especially if you use aliases. So git log --show-signature is the simplest way to do this. So it's going to print out, in addition to the normal commit message and the commit SHA and maybe the diff stat or the diff, it's going to also include the GPG output from verifying, when it verifies that the commit is correct. Git show will do the same thing, but just for a specific commit-ish. And then you can do git tag --verify with a tag name to verify that a specific tag was done. So you'll see tags a little bit more often in the wild, because it's easier to convince people that they should be signing tags than it is commits. Why would we do any of this stuff? Well, a signed commit says: I wrote this thing, here's proof. I wrote it and I thought it was good enough to include in this repository. And I've proven that somebody else didn't do this and say it was me. A signed tag says: I released this, here's proof. So it's almost the same thing, but I didn't necessarily write everything. All I did was probably run the tests and said this was ready to go, released a gem version or whatever. What this does for us is it gets your signature in as many places as possible. GPG can be configured to automatically download keys to verify signatures. And that means you've got more ways to establish trust. So for example, if Aaron Patterson was signing all of his commits into Rails, you can be pretty sure that this person who's doing a lot of cool work is who you think tenderlove is. So you can say, well, I trust that this person is right, so I'm going to go ahead and sign their key, and then you know that anybody that key has signed, which would include me, is probably who they say they are, by sort of a transitive trust. And frankly, it's super easy to do, so why wouldn't you do it? Signing and verifying gems. This is where we get into our RailsConf-specific bit. First of all, there's no good way at all to do this. There is a way to sign gems, and it comes in this module, which is included in every Ruby installation over the last couple of years. It's not great. It uses some other algorithm that is, in my opinion, not as powerful. And it defaults to not verifying when you sign or when you create gems, release gems, or install gems, so it's useless. What it does is it uses OpenSSL keys. So these are the same sort of keys that we use for SSL on the Internet. Unfortunately, they're the same sort of keys that we use for SSL on the Internet, which means that there's no way to distribute them; you just get them along with the gem that's installed or the website that you visit. And there's a centralized authority, a certificate authority. So this isn't taking advantage of the web of trust that PGP provides, which is like, well, you trust Aaron, he trusts me, we're all good to go. It requires you to trust a certificate authority manually.
In the case of SSL on the Internet, HTTPS stuff, you've actually implicitly trusted all of the certificate authorities that matter to you by having a Mac computer or having a Windows computer or installing Chrome. They just do that for you. It's not a decision that you make, which is different in PGP, where you do make that decision explicitly. Private keys are not encrypted. This is not a limitation of OpenSSL. This is a limitation of gem security. I don't know why, but that's a thing. Keys are self-signed. So most of the time when you get a gem that is signed, the signature file, or the public key, is actually just going to sign itself. And so the only way that you can trust it is if you explicitly trust that one key. And even the root of the certificate authority has to self-sign to be valid in this silly, silly thing. OpenSSL. Don't use it. You need a trust path. So verifying a signature or a SHA is nearly meaningless if you can't trust that it's come from a person that you believe in. Just checking that they've signed something doesn't mean anything. It means that they know how to use OpenSSL or GPG. This is actually what we did at the beginning. We blindly trusted that the website that we downloaded software from was controlled by the people that we thought it was, but they were providing the SHA, the GPG signature, the key, and the software that was signed. So none of it is actually very useful. But it's the closest that we can get, unless you actually know one of those maintainers or they're on the web of trust, which they are. So a package and a signature all come from the same source, and that can be faked by everyone. This is true of gems. This is not really something that you can get around unless there's another way to also get this key. So in the case of gems, there's really no other way to get the keys that you're verifying. In the case of GPG, we've got key servers. We've got people who upload their own keys. We've got people who give you a USB stick with their keys on them. There are no key servers for OpenSSL. And key servers are basically a distributed system, sort of like DNS, where you upload your key and it propagates out to a bunch of different servers, and then anybody can use their own server of choice to pull down a key. In the case of gems, the keys and the cert chains are included in the gem tarball. You can't specify a system-wide trust. In GPG, you can say, I will trust anybody who has any sort of signature, or you can say, I only trust people who I've signed myself, or you can say, if Derek says they're cool people, I believe it. Signatures are included in the gem. This is what gems look like when you untar them. So you've got each of these .gz files, and if they happen to be signed, which most gems aren't, because this is difficult to do and, as I sort of made clear, not super useful, then they include a signature file for each file as well. Required reading. If any of this is interesting to you, this gem stuff, this slide has a bunch of links on it. This is where I've distilled all of this information that I've just given you from. So, sort of my citations as well. So backing up a bit, how do other people take care of these problems? Well, the most common and simplest way to do this is manually, just like we did when we installed GPG Tools. It was an archive, so it's a single file that we could have gotten the SHA from. And this is what most of the packages are doing for us automatically. Aptitude, Homebrew, et cetera, automate this.
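As an aside, you can see the layout described a moment ago for yourself; listing a signed gem shows roughly this (the gem name is hypothetical, and the .sig entries only appear when the gem was actually signed):

    tar tf somegem-1.0.0.gem
    # checksums.yaml.gz
    # checksums.yaml.gz.sig
    # data.tar.gz
    # data.tar.gz.sig
    # metadata.gz
    # metadata.gz.sig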
So Aptitude uses GPG to verify. Homebrew has a SHA, so as long as you trust the Homebrew package itself, then you can trust that the person who authored the recipe for installing a piece of software has downloaded the right piece of software, the one that they expected to download. And RVM actually distributes Michal Papis' personal key and automatically verifies that during installation. So to fix these problems for ourselves, we need some sort of automatic verification before installation. We should be verifying signatures, no matter what. Every time we install something, we should verify that it was correctly signed. That means that the package hasn't been tampered with since the person who signed it signed it. It should be configurable to verify trust. So if I have a large web of trust myself, then I should be able to say: you should be verifying that the person who wrote this software is somewhere in my web of trust. And if either of those checks fails, if you've configured trust or if the signature fails, the gem shouldn't install at all, because it's not what we expect. It's not what we think it is, which means that it may be malicious. So we need stronger web of trust connections throughout the community. If we can verify one maintainer, we can probably trace paths to others. Conferences are a great way to build these cross-organization, interstate, or even transcontinental connections in the web of trust. Open-source companies could have centralized keys. So thoughtbot actually has one of these. This is not how we use it; this is how I think we should use it, and I have begun pushing for this. We should sign each maintainer's, employee's, and contributor's, so an external contributor who's doing a lot of work for us, keys. So at things like this, when we're in person, we should be using this company key to verify other people's individual keys. And then also at these events, we can have thoughtbot's key sign Test Double's key or AT&T's key. Which means that if you have met me, and thoughtbot has signed my key, and Test Double has signed thoughtbot's key, and Test Double has also signed Justin Searls's key, and Justin Searls releases a piece of software, you can verify with only four hops that the maintainer is who they say they are, and then you've got a web of trust connection. It's pretty cool. We need better tools support for all of this. So, tools that we use. The big one is RubyGems. RubyGems' solution is simple on paper, difficult to implement. We could use OpenPGP instead of OpenSSL. I plan to put my money where my mouth is and actually work on a gem plug-in to do this, to allow you to sign gems with PGP and verify them on installation. There is already a tool that does this, but in my opinion, it is too difficult to use, so it won't see adoption. GitHub. Git already supports showing signatures, so GitHub could also do that without much more work. This is what Git, by the way, will output when you actually verify somebody's signature with --show-signature on log or show. Having a standard way to claim social accounts. So this is a little bit difficult to see, but you can see that there's a couple of highlighted user IDs on here. They say, at Caleb Thompson, and then link to a Twitter status or a GitHub repository where I make an assertion that I am the person who controls this account and this key.
So by doing that, you can be sure that the somebody who controls this GitHub account and the somebody who controls the key with this fingerprint are the same person. You can't be sure that they are Caleb Thompson, because you haven't looked at my ID, you haven't met me, but you do know that the key and the social account are connected. So connecting these two ideas, we could have GitHub show a badge, like the Twitter verified icon, that says: yes, this commit was signed and is correct. Because Caleb Thompson has uploaded his key, and that key has signed this commit. It's great. People do things for badges, it's crazy, but this is just a small thing that GitHub could be doing that would mean that five more people maybe start signing commits, and that would be pretty great. Signing keys. So this is... we've talked about signing keys; this is actually how we do it. So Mason is there. We're going to need his help again. So signing a key is like signing a message. It uses the same mechanics internally, but you're writing out a signature on the key. It has a different semantic meaning. What you're doing is asserting that you verified the owner's identity. So you've looked at somebody's driver's license or passport, and the name on that document matches up to the name on the user IDs on the key. You're asserting that you've verified that you have the right key. So I gave you my key, I will later give you my key's fingerprint, and with those two things you can be sure that you have the thing that I wanted you to have. And you're asserting that that person has verified ownership of the key, which means that they can unlock the private key and use it. This is uncommon to actually see done, because it's sort of difficult, but it boils down to: once you sign a key, you export it to a file, encrypt it, and email it to the person by using the email on the user ID, and then they can decrypt it, import it into their own keychain, and upload it to a key server. Because that's a little bit complicated, it doesn't happen super often, but if we had tooling that made that easier, then it could, and it would mean that the web of trust is a little bit stronger. So signing a key establishes transitive trust. It announces to the world that if they trust you to verify these things, they can trust the keys of the people you have signed, because they are more likely to represent the people that they say they are. You haven't verified it yourself, but you trust your friend to say that they are who they say they are. You've been introduced. All of this is fundamental to the web of trust, a word that I've said a lot. I have a picture. This is a web of trust. At the center, because it's mine, you see me, but all of these lines are signatures that I've made with other people. And the more closely we're related in the web of trust, the closer people are. This one's a little bit older. My new one is taller. It's pretty cool. But it's just fun to see graphs. I like graphs of myself, and so I sign a lot of keys. But one thing that you can see here is, I apologize for the dot layout, but this is Michael's key. Michael has signed George's key. We're all coworkers. I haven't met George. I haven't verified George's key in person, because he lives in Stockholm... no, New York now. But because Mike has signed it, and I trust Mike to verify George's identity, I can be reasonably sure that when I communicate with George via his public key, he is who he says he is. It's great. So, signing a key with GPG Keychain. This is going to work now.
So the first thing that we need to do is get the key. I've already given you mine. It was included on the USB stick. But usually people will upload these to the Internet somewhere, probably on their own server, include them in a header on emails, or upload them to key servers. And if they've uploaded them to the key server, then you can find them by searching like this. So if you do Command-F, or Lookup Key right there, this box will pop up. You can basically just search for a name or an email address. It will pop up a list of addresses. If you've got this person next to you, then they can verify who they are. Otherwise you can probably guess correctly. Once you have that key, you can right click and view details. And that will pop out this side window. And what you need to do is confirm that that long string of text, which you actually have to expand the window to see, because software, matches this. This is my fingerprint. And as I mentioned earlier, the last eight characters are the key ID. So if you're searching for me, you could search just on this, or you could search for my name, and that would get you the right key. So we're doing this now. Okay, next, we're going to right click again, and we're going to sign the key. It's going to pop up a window like this. You are going to select your secret key. It's probably already preselected. And you're going to say that you've done careful checking, from the other drop down. And the reason that you're going to say that is that I'm going to walk around with my ID, and I'm going to show you that I am who I say I am. And there's not really much more careful checking that you could do than to check somebody's ID. You could do a background search, I guess, but it doesn't help. You're just verifying the person is who they say they are. You're saying that name is the name you have on your screen. So the other options are: I haven't verified, or I won't say. "I won't say" means that you've done some amount of verification, but you don't want to explicitly state how much you've done. The level above that is: I have done no verification. If you have done no verification, you should not be signing a key. The level above that is: I've done casual verification, or some verification. That means... you're not playing, are you? That means that maybe somebody handed you an ID, or maybe somebody has been your friend, and you are pretty sure you know who they are. You've never actually looked at their ID, but whatever, you're willing to sign. That's the level that means that. And then careful checking means that you've looked at a piece of state-issued identification. Anybody still need this? And you're ready to sign. Signing a key is sort of like going to court and saying, yeah, that person is Caleb. Don't check that. I have no idea. Also don't check "signature expires." That's more for certificate-authority-style things that temporarily state that you're probably the right person. So that's not useful. So all we're doing is filling out these two things and then unchecking those boxes. And we can generate the signature. So after we have verified, or we have signed, we can go ahead and upload the key to the key server. So you can right-click again and send the public key to the key server. So do this now, for both yourself and for me. Okay, we're done. This is the end of the talk.
|
The need to keep your personal information, sensitive or nonsensitive, secure from prying eyes isn't new, but recent events have brought it back into the public eye. In this workshop, we'll build and upload public keys, explore Git commit signing, and learn to sign others' PGP keys. If we have time, we'll exchange key fingerprints and show IDs, then discuss signing and verifying gems. You'll need a photo ID and your own computer for this workshop.
|
10.5446/30721 (DOI)
|
My name is Liz. That's my Twitter handle up there. And before I start this talk today, I need to give a couple of disclaimers, mostly because I picked this title before I wrote the talk. Number one, you got that big D word up there, diversity. When I originally thought of this talk, I was like, oh man, I could talk about all kinds of diversity. I could talk about diversity of religion and ethnicity and disability. Oh my God, this reading is piling up really fast. So for the sake of my sanity, I'm limiting myself to talking more about gender, specifically men and women in terms of gender, and also race, as race issues happen in America. So, getting a little specific there. Okay, disclaimer number two. See those last two words down there, liberal arts. There's a lot of them too. And talking about all of them is just not really feasible. So for reasons that are going to be clear in a moment, I'm mostly focusing on English language and literature. Disclaimer number three is that it doesn't escape me for a moment that I'm going to be talking a lot about race today as a white woman, and that I'm going to be talking a lot about gender, even as a woman who experiences gender in my own limited way. I'm not transgender. There's all sorts of different ways to be a woman that I'm not. So please, oh please, oh please, oh please, oh please, do not let what I am saying today be the only thing you hear about race and gender in this industry. I mean, really, go on Google. Find out what people of color have to say about these issues. A lot of what I'm saying up here is based on what they've been saying for years, as well as my own personal reading that's been accumulating over years. So please, educate yourselves. And my last disclaimer, which I'm a little hesitant to say, but you should know: I am one of them. Can't take the speaker invite back now. This is me graduating with a Bachelor of Arts in English and a minor in medieval studies. Yeah, and here I am today, go figure. In another year, I'd graduate again with a Master of Arts in English, and then I'd move to Louisville, Kentucky a little bit later to follow my dream of becoming a high school English teacher. And it is in Louisville, Kentucky that I would learn one of the most valuable lessons of my life, which is that you can love our amazing English language and its power and its beauty and its nuance and its sound, and you can love sharing that with other people, especially young people, and at the same time, you can hate the American education system. So I got out of teaching and sort of fell into my current job as a Rails developer. That's its own story. And as I changed industries, I noticed some things were different. And some of those things were great. Like, for instance, I am a huge fan of the fact that in this industry, you guys pay people with real-life actual money. Like, I remember going to my first interview and my boss saying, oh, we'll start you off at such and such dollars an hour, and in a year you can renegotiate, and I'm like, whoa, whoa, whoa, whoa. I don't have to do a nine-month unpaid internship, followed by a three-month unpaid internship, followed by maybe a minimum wage part-time job? This is amazing. Some of the things though weren't so great. I think we're all pretty familiar with the statistics that women and people of color make up a very small percentage of people in this industry. And this interested me. So I went to find out why, and read what a lot of people had to say about this.
And I noticed a bit of an intellectual bigfoot running around. It's kind of blurry, never explicitly shows its face, but there seems to be a widespread belief that it's there. And that bigfoot is this: just by their nature, the arts naturally attract women and people of color, and they always have. Now, this is a little complicated. Like, there's a grain of truth in this. Maybe you've got, like, a quarter of a bigfoot running around, which you should really get looked at. But let me talk about a few things to just problematize this. For one, I want to talk about the canon. Now, for those of you who don't remember hearing this word since high school: C-A-N-O-N. It's the writers and the works of literature that we consider to be really, really good. Now, at the start of the 20th century, like up until about the 1950s, this was the canon. You might notice something that all of these writers, pretty much all of them, have in common. I mean, we've got a few exceptions with Jane Austen and Alexandre Dumas up in the right-hand corner. He wrote The Count of Monte Cristo, among other works. But for the most part, the people who were considered the luminaries of English language and literature were a lot of white dudes. The second thing I want to talk about is, when we say liberal arts, that's actually short for liberal arts and sciences. There's a reason why sciences is appended to the end there. It's because, up until before Sputnik, the division in higher education wasn't so much arts versus science. It was vocation-based higher education, so things like law, medicine, dentistry, stuff that tends to be a little bit more STEM-y, versus everything else. And everything else included things like chemistry, physics. Anyway, so this is a picture from the 1922 yearbook from the University of Louisville. And all the ones that I've circled in red... this is the Liberal Arts and Sciences Department, and all the people I've circled in red are professors that taught what we traditionally associate with liberal arts subjects today. So you'll notice there isn't a bunch of women out there, and this is where your grain of truth is. There would have been more women in this department than there would have been pursuing medicine. And the same goes for people of color, although for African Americans in particular who were studying medicine, at this time your employment opportunities were really limited. So there also wouldn't have been a lot of them. There were good numbers, but, for obvious logistical reasons, not as many studying medicine, dentistry, law, those sorts of subjects. But one thing I want to point out is, you see those three women at the bottom who are not circled in red: one was teaching chemistry and one was teaching mathematics, and I think the other one was teaching home ec. But it wasn't uncommon to see women who were studying chemistry and physics and biology in college at this time. So something I want to emphasize here is that the liberal arts have had their own diversity demons to face down. And the diversity that you see today in most of the liberal arts is there because of a lot of deliberate strategies that were employed to combat these demons. And some of those strategies that were employed I'm seeing copied in STEM and tech, and some of them not so much. And I want to talk about those strategies today that aren't being copied, that could be copied. Starting with the first one, which I'm calling Fun with Dick and Jane and Power Rangers.
What the heck could this be about? I don't know, we'll find out. So for those of you who aren't familiar, the Dick and Jane series was a very popular children's primer series. It started being published in the 1920s and stopped being published around the 1970s. It was for teaching children how to read. And it featured these two characters, Dick, who's the brown-haired kid, and Jane, who's the blonde, who went on these little white-bread suburban adventures and chased Spot and all sorts of things. And what you see with Fun with Dick and Jane is pretty representative of what children's books looked like at the start of the 20th century. A lot, a lot, a lot, a lot of white people. Obviously with one big exception here: the Little Black Sambo books were pretty popular. I couldn't find a cover that wasn't horrendously offensive. If you want to look that up in your own time, you can. But, you know, not a lot of people of color, and mostly very traditional gender roles, and it tended to be boys who were taking the active role and girls who were learning to cook. Now, this is a problem, because kids have a super, super, super, super concentrated form of what every human in the world has, which is vanity. And to demonstrate my point, I want to talk a little bit about Power Rangers. Raise your hand if you were growing up when Power Rangers was a big thing on TV. Oh, good number of people. My gosh. I wasn't allowed to watch Power Rangers. That didn't stop me from watching Power Rangers and playing Power Rangers with my friends. So just as a quick experiment here, when you were playing Power Rangers with your friends, raise your hand if you used to pretend to be the Red Ranger. Okay. What about the Blue Ranger? What about the Pink Ranger? The girls are raising their hands. You see, people tend to gravitate towards people that look like them. And it's not limited to kids. I will be the first person to admit that every time I watch the Pirates of the Caribbean series, and Will What's-his-Face goes, Gah, there's pirates, and I love you, Elizabeth, I go, Oh, Orlando, I didn't know you felt that way. For those of you who forgot, my name is Elizabeth, which, it's okay, I don't know your names either. So what the heck does this have to do with tech? Well, I'm sorry, let me backtrack a little bit. When you have all of these books that are catering towards one very specific demographic, it tends to get people who aren't in that demographic to sort of phase out and not pay as much attention. And in the 1970s, 80s, 90s, and continuing, there's this big push to make sure that children's literature, the stuff that introduces people to language and literature, that pulls people into our subject, represents the kind of diversity that we want to see. So I've got a couple of examples up here. I've got, let's see, An Anklet for a Princess, a Cinderella story from India. All of these came from my local library. And I've also got, for a little bit older, the BeForever American Girl series. This is one about a Native American girl named Kaya. There's still a long way to go here, but for the most part, for starters, there are laws that make sure that the books that we're using in our classrooms represent diversity. And if you are a Native American man and you want to make sure that your daughter can see herself in the books she's reading, it's not that hard to find. So what does this have to do with tech?
Well, I started thinking, you know, are the introductory materials that bring people into this industry representing the kind of diversity that we want to see? And to some extent, I can't answer this question, because I came into this industry in a little bit of a unique way. I didn't use a boot camp. I'm not a computer science major. I can, however, talk about some of the things that I did use to learn about coding. One of them was this. Anybody here use Rails for Zombies? A lot of people. It's a fantastic program. It was wonderful for teaching me Rails. But for this presentation, I went through Rails for Zombies 1 and 2, and I took a screenshot of every single example of a person or a zombie that came up, and I wrote down every single sample name that was used. And this is the result. For names, we have Ash, Bob, Jim, Billy, Greg, Eric, Tim, Joe, Tony, Kaike, and Amy. As for pictures, we've got these three down at the bottom. Those are the three examples of explicitly gendered female zombies or people, and one of them is a logo. We have a couple of gender-neutral zombies as well, which I felt I should point out. But for the most part, the zombie apocalypse is very male-dominated. It gets a little worse than this when you think about it, because none of these zombies are wearing hijab. None of these zombies are wearing hearing aids. None of these zombies have a beard and a turban that would indicate they were Sikh when they were alive. I mean, there's really not a lot of diversity represented here. And to be clear, this is not, not, not, not an exclusive-to-Code-School problem. I've got another book up here, which some of you might be familiar with: HTML and CSS by Jon Duckett. As you can see from how well-worn it is, this was my Bible when I was learning HTML and CSS. Representations of men outnumber representations of women in this book by about three to one. Representations of women, excuse me, white people, outnumber representations of people of color by about four to one, which is actually worse than that, because half of those representations of people of color come from exactly two examples, whereas white people are more scattered throughout the book. So to be clear, I'm not trying to call anybody out or embarrass anyone, and if this eventually gets back to Code School, I'm not trying to accuse anyone of trying to be discriminatory. And also, Code School, for heaven's sake, if you get this, please don't just go back and put a pink bow on half of these zombies, because, for one, let me ask you something: when's the last time you actually saw a woman with a giant pink bow right here? Like, how the heck did that become the universal signifier of femininity? But I think, if anything, this just demonstrates that if you're not actively trying really, really hard to make sure that you're making something that's diverse, it's easy to fall into a pattern where all of a sudden everybody's a guy. Anyway, I'm going to move on. The next part I'm calling Canon and the Characters. So I want to take you back in time to this slide. Remember this? This was the canon at the start of the 20th century. You could fight me on a few of these names about, you know, when they were included. You'd probably lose, to be honest, but anyway. So this is various artistic representations of the main characters of some of their most popular work, or one of their most popular works.
Is anybody, I mean, we have a couple of exceptions here. There's La Belle Dame sans Merci in the middle. For people of color, there's Ozymandias, aka Ramesses II, who would have been obviously of Egyptian descent. But for the most part, white dudes tend to write about white dudes. Now, during the 60s, 70s and ongoing, there's this big movement to say, you know, are there people that we're overlooking who are deserving to be put into the canon, who were writing really well? We might have dismissed them before because they were a woman or a person of color, or in the case of Oscar Wilde, well, gay or bisexual, depending on whom you ask. And so there was this big push to go and find these names, and this was the result. There's a lot of modern people, a lot of people from back in time, we've got the Brontës down in the corner, and all of these people wrote works that are amazing and just as worthy of being called canon literature. And this is a slide of representations of the main characters of their books. You might notice this page is a lot more diverse than the previous one, because, again, people like to write about and can identify with people who are similar to them, usually in terms of race and gender. So the point I'm trying to illustrate here is that there was this push to really valorize and celebrate women and people of color who were writing books that we might have overlooked, but it came with a simultaneous push to make the content itself more diverse, to make the literature that we're reading represent the stories of more kinds of people, and the results have been amazing. So what does this have to do with tech? Well, one of the strategies that tech's been copying is celebrating the names of people who have contributed to this industry who might have been overlooked before, especially women, people of color, and other poorly represented groups. I mean, Grace Hopper is everywhere, and rightfully so, she's amazing. But it hasn't really come with a simultaneous push to make our content itself represent more diversity. And to be fair, this is kind of hard, because this is what we've got to work with, but it's not impossible. For instance, maybe we can start celebrating applications of technology that help underrepresented people just as much as we celebrate Grace Hopper. I mean, guys, we have an app that maps safe restrooms for transgender, intersex, and gender non-conforming individuals. Technology is making the lives of traditionally underrepresented people better. That's awesome. We can start talking about that, too. Another thing we can talk about is video games. It's one of the most direct ways that you can see actual diversity represented in content. So this is another thing that we can celebrate. Anyway, let's keep going. Next part: getting rid of genius. Now, one of the limitations that I was talking about earlier, if you remember, at the start of this talk, was that I'm mostly going to be focusing on English language and literature. The problem is, not all the liberal arts see the same kind of diversity that English language and literature have. Also, not all STEM subjects have a problem with lack of diversity. For instance, philosophy has only about 30% female representation among undergraduate majors. It gets even worse when you consider that the canon philosophers really aren't as diverse as they could be. Subjects like microbiology, on the other hand, have about 54% female representation.
Some researchers set out to figure out: what the heck does microbiology have in common with English, and what does philosophy have in common with physics? They figured out that the answer was genius. They found that academic fields that emphasize the need for raw brilliance were more likely to endorse the claim that women are less well suited than men to be top scholars in the field, and further, that such fields are less welcoming to women. Now, the study focused on gender representation, but you can extend these results; they're almost exactly the same for people of color as well. The reasons why this is the case aren't that hard to figure out. All of us are growing up and living in the same sort of toxic stew of misogyny and racism that permeates our society in ways that we don't realize. When you grow up associating the words genius and brilliance and intrinsic worth with people who are white and men, then it's going to permeate the way you think in ways you don't even realize. So English used to have a really big problem with this, in terms of saying, oh, there are luminaries and geniuses that are absolutely untouchable, such as this guy. You might have heard of him. I think his name is William Shackaspeary. Now, Shakespeare was at one point considered the high point of literature. He is the greatest writer that ever was, that ever will be, that ever could be. Probably came from Krypton when it was being destroyed. He's amazing and nothing can ever touch him. And you know, don't get me wrong, I am the number one Shakespeare fangirl. But Shakespeare didn't write about the experiences of Chinese merchants in the 12th century. Shakespeare, you know, was coming from a fairly limited understanding. And he did a pretty good job, actually, for his time, of representing the voices of women in particular, and also to some extent people of color, but it was still extremely limited. And so in this movement in the 60s and 70s and ongoing, there's this call to say, okay, Shakespeare was brilliant, but let's step back and say there's more room for brilliance in this field. So what does this have to do with tech? Well, one thing I noticed when I changed industries that wasn't so good is that when you look at job postings, it becomes clear that there's yet another mythological beast permeating the subconscious of this field. And that's this: the elusive wizard, ninja, guru, rock star developer. Even if you're not using these words exactly, it becomes pretty clear that everybody is looking for this mythical genius who's just intrinsically talented at code, who can do the work of ten other developers in the same amount of time. And to be fair, we all know these people exist, but phrasing the ask as just "I'm looking for a genius" isn't going to be as helpful, because guess what, internalized misogyny and racism exist. And they found that women are far less likely to associate themselves with the title of genius than men are. So maybe we can change our job postings to something like, instead of "I'm looking for a wizard, ninja, guru, Jedi," say, "I'm looking for somebody who's never satisfied with the quality of their code. I'm looking for somebody who's dedicated to test-driven development. I'm looking for somebody who shows commitment to making sure that their code is still readable in the future and to other developers."
Like, that's what we're really looking for, right? So maybe if we can change our job postings to be something about this and less about mythical creatures, we can free up space in our brains for other mythical creatures that are much cooler, like dragons. One other thing that I want to point out before we move on: take a look at those words, wizard, ninja, guru, rock star. With the possible exception of rock star, all of those are male-gendered terms. Just as an experiment, I went looking for a programming witch, a programming priestess, and a programming Amazon. And the only one that turned out to have any hits was Amazon, and that's just because Amazon's a company that hires programmers. So my point here is, be careful of the words you use. Your terminology might reveal who you're actually looking for. Anyway, the last part I'm calling the damned mob of scribbling women. I'm not going to lie, this is my favorite part, just because that is so much fun to say. So let me explain what this is about. This comes from a quote from this guy. His name is Nathaniel Hawthorne. Some of you might know him as the guy who inflicted The Scarlet Letter on you in high school. Hawthorne went through a bit of a rough patch in the 1850s where his books really just weren't selling like they could have been. And he wrote to a friend about the reasons he thought were why. And this was a part of the letter: that America is now wholly given over to a damned mob of scribbling women, and I should have no chance of success while the public taste is occupied with their trash. Okay, flagrant rampant misogyny aside, what's he talking about? Well, at this time there were two very popular movements in literature. One of them was sentimental literature, which really got started way earlier, but it was very popular at this time. And sentimental literature was primarily written by women, about women, and for women. And sentimental literature tended to be kind of formulaic and went something like this: there's a young woman in her adolescence who falls on terrible misfortune. She's about to give up her virtue, but lo, she is saved, marries a nice Christian husband, lives happily ever after, but somebody dies of tuberculosis. The other very popular movement at this time was Gothic literature. And Gothic literature was very, very distinct from sentimental literature. Let me explain. There's a woman in her adolescence who falls on terrible misfortune. She's about to give up her virtue, when lo, she is saved, marries a nice Christian husband, somebody dies of tuberculosis, but they live happily ever after in his castle. So these were very, very popular at the time. These were the reality TV shows of their day. There were actually preachers who would say, women, do not read books, because they will spoil your mind. It's hard to comprehend today, but that's really what people thought of these books. They were regarded that low. And the people writing these books, mostly women, although some men, didn't even consider themselves writers. They were just women who were writing books for general public consumption. Now, in the 60s and 70s, there was this big movement to find people who might have been overlooked. And this came up, and people said, okay, let's step back from our prejudices and see, are there books of worth in these two corpuses? And the answer was yes. Up at the top, that's a screenshot from Little Women by Louisa May Alcott, a very sentimental book.
To the right, that's from Jane Eyre; that's Gothic. At the bottom, that's actually a little bit more recent of a discovery. It's called Carmilla, and it's kind of amazing how much Dracula completely, flagrantly ripped off this book. Only in this book, there's lesbianism along with vampirism. So go read it. It's very cool. Anyway, so they found these amazing canon works in here. And to be fair, there were also a lot of not-so-canon works. But they said, you know what? These women were talking to other women of their time about the things that they were struggling with. So there is value here. This is something worthy of study. And all of these women were writers. So what does this have to do with tech? Well, when I was writing this presentation, I thought, you know, does tech have its own damned mob of scribblers that maybe we're ignoring as well? And at this time, a tweet happened to come up in my Twitter feed from this woman. Her name is Kate Ashwin. She writes a webcomic called Widdershins, which is fabulous, and everyone in the whole wide world should read it, but that's beside the point. She spent the whole day talking about, oh man, I need to clean out the CSS and rearrange my website. God, CSS, am I right? Just so you know, that's me about once a week. God, CSS, am I right? You know, she's a full-time artist, and she works on this webcomic as well as other art commissions. But she probably wouldn't think of herself as a developer or a programmer. Yet she's still coding. And I thought about my never-ever-used WordPress blog. Don't even look for it. It's just never ever used. And how it's got that tab up there in the corner, a little hard to see, next to Visual. It says HTML. So you can write your blog post in HTML if you want. And from what I can tell, most of the statistics on who is blogging say it tends to be 50-50 male-female, and there tends to be fairly good representation of people of color here. So these people are coding too, at least some of them. And I thought about my sophomore year of high school. I went to an all-girls high school when Facebook came out, and suddenly everybody had Facebook. And within a month of me getting a Facebook, I learned to hack just enough to get past the school firewall so I could go on Facebook during the day. I was coding too. And there's more experiences than this. There's people who are making websites for their restaurants. There's people who are engaging with code, and maybe they're not writing very DRY code, test-driven-development type of stuff, but they're coding. And so far I've had a pretty clear call to action in everything I've mentioned, but this one I actually have to turn over to you guys to think about. I think we do have our own damned mob, and it has the kind of diversity that we're seeking in this industry. And is there a way that we can say, number one, you guys are programmers; number two, there's tremendous talent here, at least some of you; and three, you have a place in our community? So this one, I don't really have an answer to. I'm turning it over to you. Anyway, that is the conclusion of my presentation. I hope you learned something. You may now question the vocationally confused developer. APPLAUSE
|
Fostering diversity is a commonly cited goal for the tech community, and - let's be honest - the liberal arts are kicking our butts at it. This was not always the case, though; until very recently, the faces of the liberal arts were exclusively white and male. This changed thanks to distinct strategies and deliberate cultural shifts brought about in the late 20th century, some of which the tech community has copied and others of which we could use more effectively. In this talk, we will explore some of those strategies by examining the education, history, and culture of the liberal arts.
|
10.5446/30722 (DOI)
|
So today. My name is Koichi Sasada, and I want to talk about what is happening in your Ruby application. I want to introduce some introspection features of Ruby itself. At first, I want to summarize my presentation. I talk about two topics. One is: Google it. There are many, many existing tools to inspect your Rails application, and there are many, many good resources, so please Google and check them. In this presentation, I will provide several keywords, so please check them. And also, I want to tell you that you can make your own tools using recent MRI. That is the other topic I want to cover. So, my introduction. I have been an MRI committer since 2007, and I am the developer of the original YARV. I want to ask you: how many people are using Ruby 1.8? Only a few. So maybe, if you are using MRI, then you are a user of my software. Thank you for using my software. The YARV virtual machine was introduced in Ruby 1.9, and the recent Ruby 2.2 also uses this virtual machine. Unfortunately, I'm not a Rails programmer. My wife is a Rails programmer, so my wife is my customer. I'm not a Rails programmer, I'm sorry. So I'm a newbie, and this is the first time I attend this conference, so I'm very excited to talk here. Thank you. So this is a very important slide. Heroku employs me, and Heroku has a booth at level 3, and this afternoon Heroku gives the sponsored session. So please check it. Heroku employs us as the Matz team. We have three Japanese full-time Ruby developers. Our mission is to design the Ruby language itself and improve the quality of MRI. Quality has several meanings here: no bugs, performance, and low resource consumption, such as low memory usage. We are working on improving the quality of MRI. Our Matz team has three members. Maybe you know them: Matz, designer and director of Ruby itself; Nobu, who is very, quite active as a committer; and me. My nickname is Koichi, and I'm an internals hacker. Yukihiro Matsumoto, I don't need to tell you who he is, but he is known as a title collector. He has many, many titles; I cannot list all of his positions. And Nobu is a very active committer. If we find bugs, then he fixes bugs, and fixing bugs introduces some other bugs, and he fixes them again and again. This is the commit count of Nobu. As you can see, so many commits are created by Nobu. We say he's a patch monster. And also, I am an internals hacker, and I'm an EDD developer. This is the day-by-day commit count of my changes, and there are several peaks. This is RubyConf. This is the release of Ruby 2.0. This one, this one, this one. Every peak is just before some event. EDD is event-driven development. It means that if you invite me to some conference, then Ruby performance will be improved. So please invite me. Our recent achievement: Matz and the other Ruby core team members achieved the release of Ruby 2.2. I have no time to explain everything, so I skip the new features of Ruby 2.2. But all of these slides will be uploaded, so please check them if you have interest in the new features of Ruby 2.2 or the internal improvements. I want to show one improvement. Maybe you are a Rails programmer and you love to write keyword parameters. Keyword parameters are easy and useful, but they were slow: compared with normal method dispatch, they were about 30 times slower. On Ruby 2.2, we improved the performance, so it is now about 15 times faster.
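To make that comparison concrete, here is a minimal micro-benchmark sketch of positional versus keyword dispatch. This is my own illustration, not the benchmark behind the slide numbers, and the exact ratios will vary by Ruby version:

```ruby
require 'benchmark'

# The same trivial body, dispatched once with a positional parameter
# and once with a keyword parameter.
def positional(a)
  a
end

def keyword(a: 1)
  a
end

N = 1_000_000
Benchmark.bm(12) do |x|
  x.report('positional:') { N.times { positional(1) } }
  x.report('keyword:')    { N.times { keyword(a: 1) } }
end
```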
It is still slower, but I think it is enough for us. And also, the next target of Matz and the Ruby core team is Ruby 2.3. We are now planning some features, and we will release Ruby 2.3 at the end of this year. If you have any questions, any ideas, any suggestions, then please catch me after this presentation. So, today's topic is what is happening in your Rails application: an introduction to the introspection features of Ruby. This title is a dajare. Dajare is some kind of Japanese pun. The Dajare Club was created by our superhero, Aaron Patterson. Of course you know him; he is a very funny person. And many Japanese Rubyists also join in to say dajare, something like this. But I skip this one. So I want to introduce introspection features of Ruby. Maybe you are working on Ruby on Rails, and your application sits on top of deep layers. Of course, there is hardware, there is an operating system, the Ruby interpreter, and the Ruby on Rails framework or other gems, and on such layers your application was built. For most people, such low layers are a black box. Maybe you don't know how to modify the Linux operating system or interpreters. It is a very specialized thing; computer science students need to know it, but maybe most people don't know about it. And also, after three days, your own application is also a black box, so we need to review. If you have trouble from such a black box, you need to understand what is happening in your application and your computing systems. So the question is how to inspect your Rails application. My answer is to use existing tools and to make your own suitable tools. First of all, I want to introduce some existing tools. However, we have had so many great presentations at this Rails conference: the two presentations yesterday, and Aaron's keynote was also great. So I could cancel this presentation, but I want to talk about another aspect of this topic. At first, the performance issue: how to overcome a performance issue. The easiest way to overcome a performance issue is to use the performance dyno on Heroku. With a performance dyno, you can use an isolated computing resource that has huge memory, six gigabytes of memory, and many CPU processors. It's expensive, but maybe it is the easiest way. Of course, you can use another high-performance computing resource if you have money. But sometimes we need to save money, so we need to check and tune your application. At first, we need to separate the problem. For example, if you have a slow request issue, then you need to understand what is wrong. Which part is slow? Database access, or external API access, or is your Ruby application slow because of garbage collection? Or did you write some bad code? You need to understand what is wrong. And also the memory consumption issue: you need to understand who consumes the memory, which part of the code consumes the memory. One good service is New Relic, of course. New Relic shows which part of your request consumes the time. Maybe you know more; I'm a newbie of Ruby on Rails. With New Relic, you can measure more details of the virtual machine. And one important thing is that you can use New Relic very easily on Heroku. I'm an employee of Heroku, so I need to say that.
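As a side note on separating out the "is it garbage collection?" question: MRI itself ships a rough built-in facility for this. A minimal sketch, using GC::Profiler from the Ruby core, which is not one of the tools discussed in this talk; the `handle_request` method is a hypothetical stand-in for your slow code path:

```ruby
# Stand-in for the slow code path you want to examine; replace this
# with a real request handler or job in your application.
def handle_request
  100_000.times { Object.new }
end

GC::Profiler.enable
handle_request
puts "time in GC: #{GC::Profiler.total_time} seconds"
GC::Profiler.report   # per-GC-run breakdown, printed to stdout
GC::Profiler.disable
```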
And also for performance, using a performance profiler is a good choice to understand which part is slow. I won't say it has no bugs, but even if it has bugs, it is a very useful tool. For example, stackprof, another performance profiler, is written by Aman Gupta from GitHub. Please check his presentations; it has very useful features. Now I want to talk about memory consumption, how to analyze memory consumption. At first, of course, we know that Ruby has a GC, a garbage collector. You don't need to manage object creation or destruction. All unused objects are recycled automatically, so you don't need to care. And Ruby 2.2 has incremental garbage collection. So, we can list the possible problems. One is incorrect garbage collector parameters. Ruby's GC can be tuned by environment variables, but it is difficult to choose the correct ones. Also, if the program grabs many objects unexpectedly, it will be object leaking, or memory leaking. And also, MRI can have bugs. Ruby 2.1 introduced generational garbage collection. Generational garbage collection, shortly, only collects younger objects. So if some object is promoted to an old object unexpectedly, it will be object leaking, because we don't recycle old objects for a long time. To understand what the situation is in your application, I want to introduce my two gems. There are many other tools written by other people, such as memory_profiler by Sam Saffron, but this time I want to introduce my own gems. One is gc_tracer, to measure garbage collection statistics, and the other is allocation_tracer, to find out object creation locations. gc_tracer is very easy to use. You only need to require the library and write GC::Tracer.start_logging with a log file name. All of the GC-related information is written to this log file. And for allocation_tracer, in your application you can use this source code. I think it is very easy to use; you only need to write about three lines. However, this is a Rails conference, not a Ruby conference, so maybe I should add that I wrote Rack middleware to use these tools. You only need to add a gem to your Gemfile and write a line in config.ru to enable the middleware. So it is easy to use for Rails applications, I think. But allocation_tracer is a bit slow. allocation_tracer needs to trace object creation and object freeing, so it will be slow. You shouldn't use this allocation_tracer gem on your production application. I prepared this demonstration application. This application is working on Heroku in a production environment, but I enabled both gc_tracer and allocation_tracer, so you can check this application. This one. This is my first Rails application using CSS. You can see it, right? You can post something. Okay, I write a big sample. Thank you. To promote Ruby. It is a bit slow because allocation_tracer is enabled. You can see the links to the statistics information from these links, and you can see the result of gc_tracer. There is a lot of information on this screen; I can't introduce everything. Each line means captured information; each line is captured at a GC event. For example, "end of sweep" means, sorry, garbage collection has several phases.
So: the start of the marking phase, the end of the marking phase, the start of the sweeping phase, and the end of the sweeping phase. The end of the sweeping means the end of the garbage collection. At each such event, we capture all of the information. And also, tick is nanosecond time from the epoch. It is a very big number, but you can manipulate these tick numbers to get how long garbage collection takes. And also, for example, total allocated objects, this column: total allocated objects and total freed objects are the counts, so you can easily understand how many objects were created and freed in this Ruby interpreter. With this log, you can choose the correct GC parameters, and you can find the reason why your application uses so much memory. But it is difficult. Maybe this blog post written by Sam Saffron will help you. Or please send your log to me, and I can advise you, if you are a Heroku VIP customer. That is a joke; I welcome everyone. I want to get many statistics to understand how to tune garbage collection. Sometimes someone asks me why their application consumes so much memory, without giving any information. I'm not a magician. I'm not a wizard, so I can't understand what happened in your application. But with this gc_tracer log, I can understand what happened in your application. So please ask me with this log. Next, the allocation_tracer log. We can see the allocation_tracer log from this link. This shows the locations: for example, this line created 4,000 objects. It is sorted by the path name, and you can sort by the count of created objects or other parameters. This log was captured after 1,000 requests. Okay. Compared with the other lines, these lines create many, many objects, we can understand. So let's check this source code, line 36. It is difficult to read, but this code searches the directory entries and checks the timestamps of all the files. These results can be cached, we can understand. After this optimization, you can check that the problem was solved. There are other parameters, such as count, old object count, and average age of objects. Age means how many GCs each object has survived. And there are the minimum and maximum ages, and memory size. Using such tools, you can specify what is wrong. The last section is how to debug behavior issues. If you get some error or unexpected behavior, we have many tools: debuggers, or modified error messages. Maybe you always use better_errors to see the error message. And also I want to introduce the did_you_mean gem and the pretty_backtrace gem. The did_you_mean gem is written by Yuki-san, and this gem is very clever. For example, if you make a typo and there is no variable with that name, then you will get a NameError, of course. However, with did_you_mean, the error message suggests a similar name. It is a very nice feature, I think. And the pretty_backtrace gem is written by myself. It shows the error message with local variables and their values. For each backtrace line, you can see the variable names and the values.
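Since the slide code for the two gems isn't reproduced in this transcript, here is a minimal sketch of the APIs described above, based on the gems' documented interfaces; the file path and the sample workload are placeholders, and the order of the per-site statistics follows the gem's README at the time of this talk:

```ruby
require 'gc_tracer'
require 'allocation_tracer'

# gc_tracer: one line starts logging. A row of GC statistics (event type,
# tick, total allocated objects, ...) is appended at every GC event.
GC::Tracer.start_logging('/tmp/gc_tracer.log')

# allocation_tracer: the "three lines". Record the creation site
# (path and line) of every object allocated inside the block.
ObjectSpace::AllocationTracer.setup(%i[path line])
result = ObjectSpace::AllocationTracer.trace do
  50.times { |i| "sample string #{i}" } # stand-in for application code
end

# Each key is [path, line]; each value holds the counts and ages discussed
# above: [count, old_count, total_age, min_age, max_age, total_memsize].
result.sort_by { |_site, (count, *)| -count }.first(5).each do |site, stats|
  puts "#{site.join(':')} => #{stats.inspect}"
end
```

The Rack middleware route mentioned in the talk wraps roughly the same calls for a Rails app; check each gem's README for the exact middleware names to use in config.ru.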
So there are many, many existing profiling tools and some clever tools. But if you have no suitable tool for your issue, then you can make it. Ruby MRI provides many features to make such tools. For example, TracePoint. TracePoint inserts hooks for some events. And also, you can modify error messages and backtraces. And there are reflection features, the debug_inspector gem, and more and more. So please check them. I want to show some examples. For example, I'm a newbie Rails application programmer, so maybe I want to know where the index method is called. When I write this TracePoint hook and place it at the beginning of the application, we can see the long backtrace, with only a few lines of code. So I can understand where the index method is called. As another example, maybe if you get an error message, you get frustrated. But if the error message shows heart marks, then maybe we can calm down. This slide shows the source code. It catches the exception being raised and modifies the backtrace message with three hearts. And the result is here. It might increase frustration, but it is just an example. And also, you can get local variable names. We have the Binding#local_variables method to return the list of local variable names, and we have the Binding#local_variable_get method, so we can show the local variable names and their corresponding values. By the way, if you write a keyword parameter named "if": usually you cannot use an "if" local variable, because "if" is a keyword of Ruby, but using the Binding#local_variable_get method, you can get the value of that local variable. And also, we provide the debug_inspector gem. The debug_inspector gem is something like binding_of_caller, but it is supported officially. You can get the bindings for each stack frame, and you can get the local variables for each method frame. And you can combine these techniques, these primitives, to make your own suitable tool. The pretty_backtrace gem is written with these techniques. If you don't know about these techniques, you might imagine that pretty_backtrace is magical. But it is not magic. You can make it. Ruby MRI provides such low-layer primitives, so please try these tricks. I have no time, so I want to show only a few more things. You can make a C extension library using the low-level C APIs. Using the C APIs, you can control Ruby MRI more and more. And also, you can hack Ruby. Ruby is also open source software, and there are very nice books. You can modify and build your own Ruby interpreter to check your programs. And you can combine low-level tools such as GDB, perf, Valgrind, and so on. And also, you can hack the low-level system, such as Linux. Linux is also open source software, so you can hack these systems if you need. So maybe you are a Rails programmer, and I agree that Rails programming is fun. It's very fun. And low-level programming is also fun. Sometimes. So today I want to say that you can introspect your Rails application with existing tools.
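The slide code for the TracePoint and Binding tricks described above isn't in the transcript either, so here is a minimal sketch in the same spirit; the method names `index` and `demo` are purely illustrative:

```ruby
# TracePoint: print a Ruby-level backtrace whenever a method named
# `index` is called. Enable this at the beginning of the application.
trace = TracePoint.new(:call) do |tp|
  if tp.method_id == :index
    puts "#{tp.defined_class}##{tp.method_id} called from:"
    puts caller
  end
end
trace.enable

# Binding introspection, including the keyword-parameter-named-`if` trick:
def demo(x, if: false)
  b = binding
  p b.local_variables          # e.g. [:x, :if, :b]
  p b.local_variable_get(:x)   # => 42
  p b.local_variable_get(:if)  # `if` can't be referenced directly, but this works
end
demo(42)
```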
And also, you can make your own introspection tools for your Rails application with your own hands. So thank you so much.
|
We will talk about introspection features of the Ruby interpreter (MRI) to solve troubles in your Ruby on Rails application. Rails application development with Ruby is basically fast and easy. However, when you have trouble, such as tough bugs in your app, performance issues, or memory consumption issues, it is difficult to solve without tooling. In this presentation, I will show you what kind of introspection features MRI has and how to use them with your Rails application.
|
10.5446/30723 (DOI)
|
My name is Carrie Miller. This talk is titled Why We're Bad at Hiring and How to Fix It. Clearly, I'm going to tell you all about why manhole covers are round and how many golf balls fit into planes. No, I'm actually not going to do that. I've interviewed a whole bunch of people, and I think that the way that we go about interviewing people is a little broken, and that's what I want to talk about. How are we hiring people? What's the process that we're passing them through, and how can we improve it? For a little bit of context, I work for Living Social, who is in fact hiring. We do most of the awesome things I'm going to talk about today. My title there is Lead Software Developer. I work primarily with heavy metal doing type casting. We're bringing it back old school in 2015. I figured, since we print coupons, basically, movable type. I'm based out of Seattle, which, according to William Shatner, is basically Waterworld. Dry land is a myth. My boat. I've been really fortunate, though, to get out of that rain to come down to Georgia here, and I discovered this wonderful thing at the grocery store. That is the largest tea bag in the world, I think. I don't know if anybody noticed, but it's family size. I'm trying to imagine the tea cup that the family gathers around to share, huddled in their little Georgia shack. I'm afraid of the tornadoes. I work for Living Social. I've been there about 10 months, and before that, I was a teacher at Ada Developers Academy, which is a program based in Seattle that works with women who are transitioning into technology. It's a seven-month educational program in the classroom followed by a five-month internship. They just started their third cohort, and they're doing really great. This isn't a posed shot. This is a picture of some students in our first cohort. I think that they figured out JavaScript in that slide, and that's why I love it, because it's very inspirational to me: they can do it, and I can do it too. Part of the way that we place them in the internships is that all of the sponsoring companies come in for this one day of what we call speed interview dating. I think it feels a little bit like this to the women. This is actually how I feel whenever I'm interviewing, no matter what side of the table I'm on. If I'm trying to get the job, or if I'm trying to hire somebody, I feel like a character out of this. Part of the problem is that interviewing is an extremely stressful process. We have to hire somebody, and I'm taking time away from my day job to talk to somebody who may not know FizzBuzz. If you're trying to get a job, you're trying to get a job. That's stressful. You've got to get the money. Interviewing itself really gives us a lot of false negatives and positives, in the form of not hiring truly great people, or hiring the wrong people, or people that we just end up ultimately disappointed with. In general, this whole thing just leaves a bad taste in everybody's mouth, on both sides of that table. A poor interview process subtly poisons the environment, because if we don't trust the interview process, then how much can we trust the people that we work with? How great are they really to start with? We end up with the FNG problem. Are people familiar with the FNG idea? Yeah, it kind of comes out of the military. It's the FN new guy. You don't talk to that one, because he's going to step on a landmine. Don't get to know him. He's the one who's going to get you killed, so you stay away from him.
In software, we tend to do that a little bit when that new person gets hired. We kind of stay away from them a little bit, and we spend more time looking at their code, because we don't really trust them. We don't trust the process that got them there. Because our process is leaving us high and dry. It's a cargo cult. Very few of us actually actively study how we hire people, what that process is like, until you get to the management level, and then you're looking at compliance. Like, oh, we can't ask if you're thinking about having children, or how old are you. We don't ask the tactical questions that you should be asking to get at the things, or what that process should be. We just sort of say, hey, Carrie, you're going to be interviewing today. And I say, oh, great. And I Google "how do I interview?" And I end up with this, which is a load of malarkey. But Joel Spolsky wrote a very influential essay, Smart and Gets Things Done. And he turned it into a book and has done very well for himself. And he's hired smart people who get things done, and that's worked great for him and his company. But not every company wants to write the next Stack Overflow. And really, Smart and Gets Things Done, or its inverse that Steve Yegge came up with, which is Done and Gets Things Smart, both kind of miss the point. How many people here are awesome developers? Well, there's a few hands up. How many people here are bad developers? Yep. Just based on the numbers here. Although I think it's a little flatter. But generally, all of us fall on this bell curve. But even the worst of us, if we're working, we can do the job. And we talk about that 10x programmer all the way over there on the right, who doesn't exist. And certainly, the tail of this doesn't exist either. So most of us are falling in this middle. And most of the people you're going to talk to in that hiring process are mediocre. So how do you figure out what kind of mediocre they are? And it's not like we're dealing with mega-billions of dollars. Actually, is anybody hiring for CEO roles? Because I am available. My demands aren't much. A small seven-digit salary. I'm sorry? Yes. Talk to me afterwards. Damn, and I said I love Living Social. They might watch this. We'll not talk later, sir. But in all seriousness, when we hire for those extremely sensitive positions like CEOs, billions of dollars on the line, how do you think that those folks do? Forbes did this research, or excuse me, Business Week. This is the stock price versus the CEO pay. There's absolutely no correlation between CEO pay and the return of the stock. I mean, there's a few outliers. But for the most part, we're at this median point here. And this is when there's billions and billions of dollars on the line, and all these people do besides play golf is interview each other. They should figure out how to do it, right? What makes a good CEO? They should be figuring it out. That's their only job. One job. How to hire this guy, and they can't figure it out. So how can we do it for our new startup that's like Uber for My Little Ponies? How do we do that? Well, it turns out that if you start Googling around, you're going to find a lot of advice, and a lot of it's bad. But the good advice boils down to basically three points. Know what you're looking for, find the people who have it, and then improve your process. So what are you looking for? Well, that depends on you and where you work.
I've been a manager of a number of teams, and the best metaphor that I've ever come up with for thinking about my team is a closed ecosystem, really like a small mountain valley somewhere. And we have wolves, and we have rabbits, and there are insects and worms and trees, but it's a balanced ecosystem. And as soon as I hire somebody else, or somebody leaves, I'm adding or removing something from that ecosystem, so something has to change. There's the famous story of Hawaii being overgrown with rats that came off of the early European ships that came there. They're like, I know what we'll do. We'll release some snakes, and the snakes will eat all the rats, right? That's a great idea, because there's no native snakes in Hawaii. It's an island. Well, now you've got snakes. And the snakes eat the birds, and that's kind of bad too. So now you have to release mongooses. And you're in this kind of constant cycle of always chasing your tail with your ecosystem. So you have to understand that each additional person is going to carry a certain weight. Someone was telling me that they work for a company of 30 people, so when they hire any one person, that person is now 3% of the company, right? That change is going to have a really noticeable impact. Maybe you're in a big company, though, and you think, oh, no one person is going to change it. But if you've got eight people on your team and you hire one more, that person is now one ninth of your culture. That's going to be a huge change. If you lose a person, you've lost an eighth of your culture. Well, you can get around that by hiring people exactly like you, right? Treat us all as the same and sort of plug us in as spare parts. But that just clones ourselves into a monoculture. And the problem with monocultures is diseases come along, and they die. Do you all know about the monoculture in bananas? No. Okay. Why am I telling you about bananas? Well, I'm talking about ecosystems and ecology. Bananas are a monoculture. They are all genetic clones of each other. And the interesting thing about bananas is, you may be familiar with the idea that bananas are slippery. Bananas aren't actually all that slippery. If you actually get a banana peel and step on it, it doesn't really slide. So why did that joke come around? Well, it's because it's a monoculture. The bananas we eat today are not the bananas that our parents ate, and the bananas that our parents ate were not the bananas that their parents ate. Because what happens is, every 20 or 30 years, a massive blight comes around and destroys the entire banana crop. Entire countries have been devastated in Central America. Their entire economy is wiped out, because all of a sudden, in a matter of like five years, all the banana fields die. Similarly on teams, if we're all alike, if we're all a monoculture, memes come around, these ideas that we have about the right way to do software development, the right tools to use. If we're not bringing in fresh ideas, fresh diversity of thought, we can fall prey to those ideas and diseases. So there's been a lot of talk the last couple of years about diversifying, especially in terms of culture. But in order to understand what different is, you have to understand who you are to begin with. So you have to start to define your culture, which takes a lot of work. But it's really worth it. You have to start before you even start hiring. You have to know what you're going to do.
You can't just do this on like a Tuesday, because you're going to have candidates in on Wednesday. It should be something, especially if you're a team lead, that you're working on constantly, questioning, and defining. Now, a lot of people know that I went to a hippie school. I have a professional hippie degree in being a hippie. That's why I don't have any shoes on right now, among other reasons. But one of the things that we do there is we work in small groups to study things, rather than in a classroom. It's group discussion, group dynamics, we're doing group projects. And we work on this idea of generating belief-therefore statements that first day of class. So we say something like, we believe in respect. What does respect mean? So we say, we believe in respect; therefore, we will show up on time for all meetings. The idea is that we all have this vague idea. So let's say our idea is, let's be professional with each other. We're all professionals, and we treat each other with respect. Well, maybe you came out of a very rigorous academic environment where aggressive questioning of somebody else's output is the norm, right? And you think that's treating me with respect, to constantly question what I do and my decisions. I just think you're an asshole. Right? So what are these abstract ideas that we have about who we are, that we're friendly, that we're open, that we're honest with each other? Generating therefore statements gives us actual tangible things that we can measure, things that we can look at and say, yes, this is what this vagueness looks like. And I know that some of you are thinking that it's kind of a waste of time, this idea that we need to sit around, like, hold hands and have a meeting and talk about our feelings. And it does take time and effort. But if you want to have an ecosystem of a team or a company that can really grow and thrive, and treat individuals well, and be a place that people are happy to come to work at, and are happy to recommend other people to, and are invested and enthusiastic about the product that you're building or the services you're providing, you have to create this kind of environment. Finding people who have it. Once you figure out who you actually want, not just the skills that you actually need, right, that list of like, well, must have five years of experience or an equivalent BA, or vice versa. You know, you have to know Java, C++, must be DHH. And also have to show up on time for meetings. You've got to kind of find them. And I assume, I think it's fair to assume, this is a hiring process that most of you have gone through at some point in your career. Even if you're just starting out, even if you're transferring in from another thing, you've probably gone through this process. Anyone not ever gone through this typical process? Okay, one person. There's always one person. And it's always the same person. This hiring process, though, is there for a reason, right? Nobody ever invents anything or does something without a really good reason that is perfectly, finely tuned for the environment they're in, right? Someone came up with this idea of, here's the hiring process, because we don't have time to interview every single person who wants to come and work for us. We simply don't. So we need to filter down to just the most likely candidates. So you can think of each step of your hiring process as a filter, and then consider what that filter allows in and what it rejects.
So for example, if I insist that you have a GitHub profile and actual open source contributor status, well, now I'm rejecting everybody who doesn't have spare time, or people who might have kids and just don't have time on the weekends to do open source, or maybe they have a security clearance that doesn't allow them to share any code whatsoever, even obfuscated code samples. Maybe your recruiters and HR department are requiring people to submit things in Word. That's my favorite, right? Could you send me a doc of your resume? My trick with that, personally, is I just export my LinkedIn profile. I don't actually have a resume. Phone screens are tricky. What if the person has what you might consider a funny accent, or doesn't have English as a first language? So the phone screen is difficult, right? You've biased yourself against that. So understanding each of these steps, what they reject and what they allow through, can help you make sure that you're getting the actual candidates that you want at interview time. Because that's what it's all about. I heard one giggle. Come on, I've gotta get more than that. I love this photo. I originally found this photo when I searched for like "interview day" in Google image search, and for whatever reason this came up. I'm like, what the heck? So I click through, and it was just an image, like on an imgur page. There's like, click here for more information. I'm like, no. I want no context for this image, ever. Interview day is really the critical high-touch point. So that's what I'm going to talk about most today, right? That actual face-to-face interaction you're going to have. Communicating the schedule to the person coming in is super important. I've gone to a lot of interviews in my life where they say, yeah, 10 a.m., and I don't know, is it going to be like two hours or eight hours, or is it a three-day process? Communicating that process up front and setting those expectations. Please bring a laptop, candidate, or don't bring a laptop. We'll be going to lunch, or not going to lunch. So that the candidate has some comfort that there's a structure and a plan here. That kind of stuff is really professional, and it sets a really nice tone. It can be friendly. It doesn't have to be staid, you know, and kind of buttoned down. But it shows confidence in your process, and that generates confidence on the part of the person who's being interviewed, and they're more likely to be happier and more relaxed. You should always have a diverse set of interviewers, not just visible minorities, but also invisible minorities if you can. And allow for breaks in your schedule for the day. Right around the fourth hour and the fifth cup of coffee, I really want to run to the bathroom. So make sure that you've got time in between for the candidate to do those sorts of things, for the candidate to also say, oh yeah, I've got to make a call. You know, so they can call and say, nope, sorry, I'm still sick, boss. Yeah, I can't come in today. And part of this also is having this game plan. I like to split up the interview. So say we're hiring for a Rails developer, right? And that involves a little bit of front end, some JavaScript, some back end, a little database knowledge. I try to split that up and assign interviewers to specific areas of focus, right? They might be pairing on something for an hour or two, but like, hey, Mike, could you focus on how well the candidate knows SQL?
Or, "Hey, Joe, could you cover how well they know JavaScript?" That makes sure you're getting really good coverage and good interactions without someone asking the same three to five questions over and over and over again. Because that's kind of ridiculous. And have those breaks between interviewers so you can hand off the interview to the next person. There's nothing worse than wrapping up with that first person in the interview loop, hearing "Well, I'm going to go get Joe and he'll be your next interviewer," and then, as a candidate, you sit there. It's a little uncomfortable. You sit there for twenty minutes wondering, have they forgotten about me? What's going on? It turns out Joe forgot and went out for coffee. Or there's a fire and the servers are melting down. There's always a fire at every interview I go to. "Oh, man, I'm sorry, I'm so disorganized." Really: respect the interview process. Again, it shows confidence in that process, and it transmits, not just to the candidate but to other people in the company, that we have a good process and we believe in it. It should be an enjoyable thing for everybody. That handoff also lets me go to the next person and say, "Joe, listen, I know I was supposed to talk about data structures with Alice, but I just didn't get enough signal on that, enough information about what we need. Could you ask this question?" Every interviewer should have a standard three to five questions on the topic they've chosen or been assigned, so you can compare apples to apples across candidates and have a less biased record to look at. I've almost always been the hiring manager or the person running the process, and I really like to be the first person in, to settle the candidate, have a casual conversation. "Hey, how did you get here today? Do you have everything?" Shoot the breeze. I might double-check any red flags that came up between the phone screen and getting the person in. "Hey, we checked, and that university doesn't exist. It says you have a computer science degree from Columbia, but we called them." "Oh, the country, Colombia." Okay. It also gives me a chance to learn how this person communicates. Maybe English isn't their first language and I might want to let people know: "Hey, be a little more forgiving with those sorts of things." Or maybe they have a hearing or speech impediment and I want to pass along to the other interviewers, "Maybe enunciate more," all those things that set everybody up for a smooth process. This is usually the point where behavioral interviewing happens, too, where we start asking those hypothetical, gameable questions. For example: how would you handle conflict with a coworker? "Oh, well, of course, I went to HR and we sat down, we had some mediation, we worked it out, we learned some boundaries." That's what everybody says, right? Have you ever been in an interview and had the person say, "Oh, this one guy, let me tell you. So then I superglued his things together, I spiked his chair, and then I punched him. It was a bad scene."
"That's why I'm looking for work." That's what we call an HR red alert. That actually happened to me once. A guy just came in and told this horror story of cold-cocking his boss, and that's why he was looking for work. And I thought, well, maybe the boss deserved it, I don't know. But by the same token, the question I'm asking is one that everyone knows the answer to. When you're asked, "How would you handle a salesperson coming and asking for a feature at the last minute, a rush job?" the answer should be, "Well, I go to my boss and we work out the schedule and see how important it is." No one's ever going to say, "Oh, I tell them I'll do it and then I don't do it." Because people want to get hired and they know what the game is. This is just like puzzle questions, in a way. I come from a theater background. You see my degree is in it, but I didn't actually finish the degree, so I've got degree work in performance production, and I've been on stage quite a bit. I like to think of the interview process as an audition. What I want to see is how this person actually performs. Not just what they think, but how they actually do the work. So I usually start with one person doing a collaboration audition, where instead of normal whiteboarding, we just plan an app. Let's look at the high level. What are your considerations? Not "do you know FizzBuzz" or "do you understand recursion," but: do you understand how sockets work? Do you understand how HTTP request and response cycles go? Do you understand what JSON is? That sort of stuff comes out really clearly when we just start planning out a system: how would we start a new project? Pairing auditions: in the Ruby community, we do a lot of pairing auditions as part of our interviewing process. Not everybody does, though, and not everybody pairs in their day-to-day, so it can be really uncomfortable and weird for a candidate. Let them know up front if that's something you're going to be doing. And by all means, tell them to bring a laptop and let them use their laptop, because there's nothing worse than being forced to use somebody else's laptop with vim on it when you're me and you don't know vim, or you only know how to quit. I should get that tattooed somewhere, actually. That would be a great tattoo. And if they're an Emacs user, don't say shit about it. Just let them use whatever; it doesn't really matter. What you can do is ask them why they chose a particular tool. Maybe it's because they don't know about the alternatives. It happens. Maybe they tried them but just don't like them, because their favorite tool is built into BBEdit or whatever. So let them do it. If you're doing a code sample or a little code project as part of the interview, use that. Use it as the basis for your pairing. Talk about refactoring it. Say, "Hey, we'll work on that." Give them feedback. See how they handle collaboration and feedback. Are they defensive? Are they accepting? Can you bounce ideas off of them? Do those things fit with what you and your company are looking for? I also do presentation auditions. I find that communication skills are super, super important, even for the shyest, most introverted among us. We have to communicate with people. Some of us do it verbally. Almost all of us do it constantly, all day long, in our writing.
We do this communication in email as well. So I like to have candidates actually teach me something. Google used to do this in the early days, but they would spring it on people. Sergey would say, "Okay, I'm going to leave the room for five minutes, and when I come back, teach me something I don't know." That's a lot of pressure. We did this at Amazon as well. It actually works really, really well, but I think it's a bit of an on-the-spot ambush, so I usually tell people ahead of time, when they come in, that it's going to be part of their process. I tell them: basically, do a lightning talk. Teach me something I don't know. It doesn't have to be technical. I've had people teach me how to build aquariums. One guy taught me how to paint his house, specifically his house. He had photos of it and everything. He might have been trying to recruit me for some weekend work, I don't know. But basically, look at how they communicate ideas to somebody who may not have the full context or background knowledge. Do they use a lot of technical terms without checking with you? Do they dive right in? Do they take stock of where you are in relation to, say, house painting? How do they do that? It doesn't have to be a lightning-talk presentation. It could be a written blog post, just a writing sample. You can negotiate this with the candidate themselves. Again, you're looking at communication and how well they do it. Make sure you tell them not to over-prepare ahead of time, because you don't want someone spending eight hours building a five-minute presentation that no one's ever going to see again. Just let them know you don't expect a huge dog-and-pony show. A note on the interviewing lunch, which is super popular: I like food. If you want to buy me a free lunch, that would be wonderful. But if you do take the candidate to lunch, again, tell them up front, and make sure that you pay, and tell them that you're going to pay. Maybe that person is unemployed and doesn't have the funds to go to the fancy grass-fed organic burger shack. Don't let a person's ability to pay for a fancy lunch get in the way. I've only once had a company expect me to pay for my own lunch at one of these things. That was a little awkward. And don't overwhelm the candidate. A lot of times we go out to the interview lunch with a group of four or five. I once went to an interview lunch with nine people from the company. I actually have a little bit of hearing loss, which is nice, because I can actually hear you all laugh a little bit; if you want to do it more, that would be awesome. So I have a really hard time in loud bars and parties, especially here at the conference. I'll come and say hi, but I have a really hard time with that; the audio is all off for me. At a long table, I can't talk to people more than one or two seats away from me, simply because I can't hear them clearly. I can just nod: "Yeah. Nice. Hi." That's about the most I can do. So why did we go to lunch with nine people? I talked to the two people around me, and everyone else just talked about their jobs. They didn't get a chance to meet me. They just got in on a free lunch. So keep it small and intimate, relaxed and casual. And remember that the candidate is still being interviewed, and they're aware of it.
They're still on their best, what my mom would call guest behavior. They're going to mind their Ps and Qs. It's not as relaxed and culture-fitty as we might think, but it is a different venue for communicating with them and seeing how they interact. Avoid bars. Avoid alcohol for this. Not everybody drinks, and if you don't want to communicate that you have to drink in order to get this job, which I think is pretty horrible, pick someplace that's nice but not too nice: not too seedy, not too run-down, but not super fancy. And again, let the candidate know, because they might have rented a nice tuxedo to come to the thing and now you're asking them to eat soup. That's just dangerous. Anyway, this is a good time to talk about hobbies and passions, the things you do that aren't work but inform what you do. And remember through all of this that the person wants to get the job, and they're really nervous, and they're basically on stage in front of you, performing. That is really, really hard. And you don't want to write somebody off as just bad at their job, because maybe they're having one bad day, or maybe your one interviewer is having a bad day. It takes a few examples of something not being good to really cement that this is not the right candidate for us. I can't help it. This is Raúl Ibañez, a professional baseball player who was once famously quoted as taking pride in his defense. He was rated the worst defender in all of baseball, and he played left field for the Mariners, which is a very defensive position. You want to see more? This is my favorite, the one where he runs into the wall. Oh, geez. Anyway, the point here is that we're not all bad like Raúl. We might blow one piece of the interview. For example, I always forget the difference between left outer and right inner joins. I guess they're probably the same thing, I don't know. But if I biff that, should I get completely flushed out of the interview process because I got nervous? Or, if I had a kid, maybe my kid is sick, and so I'm off my game that interview day. Take those things into account when you can. After the interview, when your portion is over and you go back to your desk, don't go back to work. Write down your impressions of the candidate right then. Write down yes or no: I would hire this person, or not hire them. But then: what would change your mind? If you saw Raúl Ibañez totally biff that play, what would change your mind about him being a good defender or a bad defender? Well, in the mind of Mariners management, the fact that he hit twenty or thirty home runs a year was good enough. But if you didn't know that, if you were just trying to evaluate this baseball player, you would say, "Oh, he's pretty inept. Why does he have a professional contract? And I don't, because that looks easy. I could do that." So what would change your mind about that candidate? If you had to hire somebody who knew Rack, for example, what would change your mind? Finding out they're on the Rack core team? Maybe that's all it is. Maybe they're just not communicating it well to you. So figure that out ahead of time. Then, when you get to that group feedback session where you discuss the candidate, maybe somebody says something like that.
"Yeah, I really liked her work on Rack. It's been amazing, really stunning." Wow. I guess maybe I was wrong. You're questioning your own beliefs and your biases as you come in as an individual, and it helps de-bias your team as well. This can sometimes lead to follow-up interviews, when you have interviewers who are conflicted about whether or not a candidate is good at something. Those can be a little problematic, because you're asking a person to take another sick day off work to come in. So just be careful, and try not to do them if you can avoid it. And really set up the candidate's expectations for what's going to happen next. It's so simple to create a calendar reminder. Tell the candidate, "Okay, we interviewed on Monday. I will let you know by Thursday at 3 p.m." Then go back to your desk and set that as a reminder. You don't have to have a decision by Thursday at 3 p.m., but you for damn sure better tell them, "Hey, I'm really sorry, we don't have an answer for you yet, but we will have an answer for you on Monday." Because so many candidates, and so many of your processes, go like this: you go in, you interview, they say, "Great, that was awesome. We're really excited. We'll let you know." And then you hear nothing, forever, and you have to email them or call them a couple of times. People have had this experience; I see a lot of nodding, actually. That is a really horrible experience, and no one wants to be put there. Be professional about it. Unfortunately, you can't hire everybody. Not every candidate is going to get hired. Don't make it personal. You don't need to call anything out in that rejection, but be respectful about it. Try not to say "we wish you good luck in your future endeavors." That's the FU of the interview process. And if you have a re-application process, explain it. I've known so many people who work for Google now who did not get in on their first try, and some of them didn't even get in on their second try. But Google let them know. It said, "Hey, you can reapply in N months. We would really love to see you improve your knowledge of data structures," or whatever it is you feel the candidate is lacking. If the candidate asks you for feedback, I think you should give it. If you've gone through this process of documenting what your expectations are and how you could change your mind, this is something you can give to them in a non-actionable way. If you rejected people and you can't give them a reason, there's something wrong in your process, which you should keep improving. And we improve what we measure. So ahead of time, write down whether or not you think you will accept this candidate. It doesn't have to be on any sort of scale; it's not a big production. It's about trying to unravel your own personal biases. "They worked at Microsoft for ten years; I don't think I want to hire them." But I get in there and I find out, no, they're really, really good. They're not "a Microsoft person." Maybe they worked at Microsoft because they needed the health insurance for their ailing mother. You don't know. But maybe I'm biased against Microsoft, and now I can discover that and unpack it a little. And we live in this hyper-connected age of LinkedIn and Twitter and everything.
It pays to check in on people that you turned down. How many false negatives are we getting? If we don't hire somebody because "they just really don't know SQL," but then they turn up on the Postgres project doing amazing stuff, you should find that out. Where did your process let you down? I'm getting close to the end, but I want to rip through seven interview anti-patterns that people still do to this day. Anything having to do with your college transcripts: unless the candidate has almost no experience, GPA is useless. It's really only indicative of performance for a couple of years out of college, and it's really geared for judging academia. This is also the conflict between people with programming experience versus a CS education; they're not as interchangeable as we think. And you should never ask for high school transcripts. I actually didn't get a job once because I didn't graduate from high school. I was 37 at the time. Really? I went to college, even dropped out of college, and I'm probably going to leave your company in two or three years anyway; I'm just going to follow that pattern. What does it matter? Negging: do not do this. Especially this last one. I read an article in Forbes where a CEO gave his one cool trick to hiring. He said, "I just look that person in the eye and I say, 'I don't see the spark. Convince me.'" You're an asshole. I don't want to work for that guy. "Oh, well, nine out of ten people kind of walk away with their tail between their legs, but it's that tenth person that I hire, and they've got it." No, they're a bootlicker. You don't want that person. This is also an open invitation to a sexual harassment lawsuit. You're in a position of power over this person and you're asking them for a vague "convince me." That's really smarmy and shitty. Don't do it. "Submit a pull request to apply." I don't like this one either. There are well-documented barriers to open source contribution, and we're asking people to do free work in exchange for the opportunity to interview. That's just horrible. There's a big Rails consulting shop that actually hires you on a contract for a week, and that's their interview. They pair with you for a week, but they pay you for it. They fly you out to wherever the office is and they pay you to work there. I think that's brilliant. It's actually absolutely brilliant. No free work. Don't do group and speed interviews. We did that with the first cohort of Ada, and I would never, ever, ever do it again. It's hard to wrangle so many candidates and companies. We went with the best approach we could think of at the time, but I'd love to find a better way to do it. I don't like speed-dating things. Puzzle questions: a lot of people still do these. I got asked one about five trains all converging on New York, and I know nothing about New York. I'm from the West Coast. What do I know about trains? Google doesn't do puzzle questions anymore. They have enough interviews, and they've done these for long enough, that they were able to run a longitudinal analysis and say: your ability to perform on puzzle questions has no bearing on your success at Google. Microsoft has stopped doing this. Amazon too. All the big companies that are famous for doing these have stopped doing them.
You should stop doing them too. All they do is make you feel cool that you know the secret trick. They're like those blacksmith puzzles. I hate whiteboard coding. I refuse to do it. It's super artificial. I can't look anything up, and it's a lean-back experience for everybody: the person doing the interviewing is leaning back and judging instead of collaborating and working with that person, and collaborating is the thing you actually want to find out about. Not how well they've memorized the standard lib and whether they can write it out. Also, it is biased against left-handed people. Writing at a whiteboard when you're left-handed is horrible, because you're rubbing your sleeve across the whiteboard, and most companies don't stock left-handed dry-erase markers. So it's kind of a nightmare. Don't do FizzBuzz. Can we stop FizzBuzz? I own fizzbuzz.io. FizzBuzz as a service. Why not? Seriously, all you're really filtering for with this is: do they know modulo? Do they know the secret trick? You don't want to be filtering people for knowing secret tricks. You're building a superhero team. That's what you're doing. And we know that we need Superman and we need Wonder Woman. And sometimes we need Aquaman. Sometimes. So, do we need a second Aquaman? I mean, we already do a lot with ponds and inlets and bays. Do we need a second Aquaman? Do we need to be nothing more than a whole bunch of people in red capes flying around? Do we need a Batman with gadgets to get around the kryptonite? You need to set up your interviewing process so that you find the right fit, and the right fit, in terms of skills and personality, comes when you actually understand what it is you're looking for. Then you design a process that matches that, that filters for those things. Just like gold-panning machines that tumble the rocks and then tumble the gravel and then the dirt and then the silt to get the gold out: each step of that is a highly tuned piece of machinery and physics to extract gold. Your process needs to be just like that, filtering down to catch those little particles of gold, those individuals who are going to surprise you and just be amazing superstars. I don't know if I'd want to hire this guy if he showed up at an interview just like that. But maybe I do. Maybe he's the best person for the job, and I'm never going to know unless I know what I'm looking for. That's me. That's my talk. I can be found pretty much everywhere. I had stickers, just like Aaron, but I don't have enough ponies, so if you have more ponies, I gladly accept them. They will not influence my hiring decision, I promise. Thank you.
|
An interview too often feels like a first date - awkward, strange, and not entirely predictive of what’s to follow. There are countless books and websites to help you when you’re a job seeker, but what about when you’re the one doing the hiring? Will you just ask the same puzzle questions or sort algorithm problems? What are your metrics for evaluating or contextualizing the answers? In this talk, I’ll describe successful practices and techniques to help you find someone who will innovate your business, bring new energy to your team, get the work done, AND be someone you’ll want to work with.
|
10.5446/30724 (DOI)
|
Thank you all for making the mistake of coming to my talk. Hi everyone, my name is Scott. I work for a company called WePay. We're an API-based payments company, similar to Stripe. We do payments for people like Constant Contact and Meetup.com, and basically anybody running credit cards. My title is "Why Your API Product Will Fail." As you may know, if you want to get your talk selected and you want people to show up to it, you should have a really clickbaity title. The title probably should have been something more along the lines of "How to Build Great API Tooling." Some people confuse me for Taylor Swift; sorry about the confusion coming in. I was also told that if I kept things light up here, people wouldn't be mean to me. So today's agenda is: what does good API tooling look like, and how do I build it in a scalable way, a way where every time I launch a new version of my API, I'm not freaking out? Okay, so let's start with: what is API tooling? But before we get there, how many people have ever used an API before? Most people; pretty much everyone. How many people have used an API that sucked? How many have used an API that did things the documentation said it didn't do? Yeah, a lot of people. What API tooling is meant to do, when it's built well, is prevent things like that. The consumers, that's us, get a chance to use the API in the way that it was designed to be used. Now, remember that every time you use an API, there was a developer, just like us, perhaps someone even in this room, who decided to write the doggone docs, or wrote the actual API. So consider that, not so that you're sympathetic, but so that when you're building it yourself, you don't make those same mistakes. API tooling is essentially two things. It's documentation, and it's a collection of API clients. Some people call them SDKs, some people call them wrappers; whatever you want to call them, they are the way that someone interacts with your API. Sure, somebody can use a curl client if they want, but most people offer some sort of client library. The documentation can be further broken down into tutorials, the actual reference docs, and the explanation and use cases of those API calls. So let's start with the tutorials. How many people, when they started using an API, tried to jump into the reference docs and then said: okay, how do I do the smallest thing I actually want to do with this API? Give me a how-to guide. So what does a how-to guide need to do? It needs to tell you what you're actually doing. If you've never used this API before, you probably don't know, so you're going to need some detailed steps. Steps that even perhaps the most junior person on your team would be able to follow. And also, I don't know about you, but I don't actually really like doing work. I really just like to copy and paste stuff. Ideally not off Stack Overflow; ideally from your docs themselves. Make it easy for me to just copy, paste, and get going, because I don't really want to do work. So here's an example of what a tutorial looks like. There should be some sort of explanation of what you're doing, and some sort of example use case for how this might be used.
And then, before you just throw a long tutorial with lots of steps and copy-and-paste at someone, you should probably tell the person, at a high level: this is what we're actually doing. Because the likelihood is that I landed on this page and don't really know what it covers. You might have a good title on it, you might not. Just tell me, so that before I get started I can go, "Wait, this isn't actually what I'm wanting to do." Most tutorials are very text-heavy. Don't make me read. With each step, don't just include the code; include some note on what you're actually doing in this code. While it should be obvious, and your code should be at least decent, it might not be that obvious to someone who's just getting started. Beyond tutorials: once you're actually using an API, the likelihood is you're going to do something a little bit different, something the tutorial writer didn't anticipate. And that's when we're going to talk about reference docs. These are the docs where you see all the calls and everything an API does. The simplest API call is: I'm going to send you an ID, you're going to send me an object back. It's a very simple GET. In that GET, you're going to want to see: okay, what parameters do I actually need to pass? Are they required? There might be just one, there might be a few. What is the type of each one? And then give me an example. This one here is really basic; for more complex stuff, examples matter even more. And then, once something is actually returned, you need to tell me what it is you just sent. I don't know your app. I don't know the object structure. Break down for me what each of those fields is and how I'm supposed to use it. In addition to that, it's really nice when you give me a JSON example to work with. If I don't have some sort of example data structure that I can copy and paste into my code and actually start playing with... well, you might say, "you can just make a call and then you can play with that," but not every call to your API can I immediately make. I might need some actual data in there first. Let me figure out how I'm supposed to play with it before I actually have to make a call. So, in your reference docs, make sure that every resource is well documented. It's okay to visually duplicate things in docs, because one of the worst things you can do is give me a bunch of links to other pages just to understand what an API call is doing. You're making me jump around. You're making me open three, four, five files to do one thing, and that's no fun. Also, show the request and response bodies, and make sure that I can copy and paste those. Don't hide them from me; don't make me make calls just to see what your API does. And for each parameter in those, make sure it's well documented. Otherwise, you're going to have people emailing you or your support team asking, "Okay, what does this parameter do? Why isn't it consistent? What's the problem?" So explain what each of those is and how to use them. Now, to actually build these sorts of docs, you first need to know what your API actually does. How many people here have some sort of spec that describes their entire API, every resource? A few people. A few people.
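Before moving on to specs, here is a concrete sketch of the kind of reference entry being described, with a made-up endpoint and fields for illustration (none of this is WePay's actual API):

    GET /widgets/:id

    Parameters
      id     integer, required. The widget to fetch.

    Response fields
      id     integer. Unique widget ID.
      name   string. Display name.
      state  string. "active" or "disabled".

    Example response
      { "id": 42, "name": "sprocket", "state": "active" }

Everything an integrator needs sits on one page: the parameters, what comes back, and a JSON blob to copy and paste before ever making a live call.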
Most people, when they build an API for the first time, don't think, "Oh, maybe I should have a spec for this." This leads to problems, because you don't have a single point of truth. There's not one document that tells you what your API does. Now, when you make code changes, your API changes, and that's okay. But your API code cannot be the single point of truth. You need to have some other document, or series of documents, that describes what your API does. I shouldn't be able to look at your SDK and see one set of documentation, then look at your docs and see something else. They should all be exactly the same. And if you don't have a single spec that describes what all of those things are supposed to be doing, it's going to be very hard to maintain. Otherwise, you're going to be updating three different things every time your API changes. So there are a few different options for how you might actually write a spec. There's JSON Schema, there's Swagger, there's RAML, there's API Blueprint, and there are different reasons why you might want to use each. RAML, which is becoming very popular, is backed by MuleSoft. A RAML spec is basically just YAML, with your resources written out as forward-slash paths. And the nice thing about RAML is that it's very flexible. If you use something like JSON Schema, it's going to be more prescriptive; RAML will let you do pretty much whatever you want. You can document any API regardless of how it was structured, which is really nice when you have an existing API that you need to document, one you didn't create a spec for and don't entirely know how it works. JSON Schema was used heavily by the Heroku team; they built a lot of tools around it. The only problem with JSON Schema is that it's very prescriptive. So if your API doesn't do things the way JSON Schema thinks it should, then you can't really document it. But if you're creating a new API from scratch, it might be a good option for you. Swagger: most people know Swagger from the UI tools. Swagger is a great option. They have some great tools built, and it's also one of the older spec languages out there. It was really one of the first that people started using, and they've done a nice job. The downside is it's very verbose; there are a lot of parameters you have to add to it. On the other end, you have things like API Blueprint. On the left-hand side is what you enter, and on the right-hand side is what that generates: the JSON representation of the Blueprint on the left. While this might be okay, it makes it hard to override things, and it creates this magical layer in between. So, you have these spec languages. Now you need to actually write one. There are two extremes for how you might write a spec. You can either auto-generate it on the one hand, or, on the other hand, you can write it entirely by hand. Now, it's unlikely that you want to just sit down and write your entire spec by hand. That's going to take a long time, and it's also going to be error-prone. You're going to look at your docs, start writing some stuff, and look at your code to see if your docs are right. The whole exercise is pretty challenging. Now, if you want it all auto-generated, well, that's okay too.
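For a feel of what a spec like that looks like, here is a minimal RAML 1.0 sketch with a hypothetical /widgets resource (the resource and fields are invented for illustration):

    #%RAML 1.0
    title: Widgets API
    version: v1
    baseUri: https://api.example.com/{version}

    /widgets:
      get:
        description: List widgets.
        queryParameters:
          limit:
            type: integer
            required: false
      /{id}:
        get:
          description: Fetch one widget by ID.
          responses:
            200:
              body:
                application/json:
                  example: |
                    { "id": 42, "name": "sprocket" }

Resources are just forward-slash paths, sub-resources nest inside them, and everything else is ordinary YAML, which is what makes it workable for documenting an existing API after the fact.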
If you've ever seen Javadocs, that's what those are. And they lack what I call a human element: they're whatever you decided to include at development time, not what someone actually needs when they come to use it. So the compromise is to generate a spec that is either partially generated from the code, or written with tools that help you write it. There are two mindsets to this. The first is a partially code-generated spec, where you build a tool that actually parses your code and gives you a spec from it. And when you want to add things like documentation lines, it just leaves those blank and handles the merge between the two. There are a couple of pre-built tools that do this; if you use Grape or Rails serializers, there are a couple that can just read those. This is really easy when you're already writing your API with a prescriptive tool, one that gives you a structured way of declaring your API. If you're not using some sort of prescriptive tool, it's going to be a lot harder to parse your code and generate that spec. On the other hand, you can hand-build specs, where you're actually writing out the spec yourself. But instead of just writing it all by hand every time, you can build tools to help. With Rails, similar to `rails new`, you can build a generator, something like `resource new`, that generates a sample spec for you that you then fill in. This makes it easier to write by hand, and you can run validation over it too. In the one we built, we essentially made a set of templates; you hand a resource name to the tool and, boom, here's a bunch of resource files that I can go in and edit. If I want to add a sub-resource, the tool handles that with a `--sub` flag. While there are tools out there like this, you can write them yourself without much work. If you were going to write docs for most endpoints by hand, you'd be doing a lot of cutting and pasting between endpoints anyway, so that copy-paste skeleton becomes your template. That's all you need to actually go and codify. Once you have these spec files and markdown files written, they can be converted into actual docs. We run one shell script that goes through all the files, validates them, and converts them into HTML in a public folder. Here's a breakdown of what's actually happening: the RAML files, which we write up, get combined with markdown files, which are the tutorials, and we turn that into our HTML. The RAML is also reused for the SDKs; there are other pieces that just take the spec files and convert them to HTML. We generate a left-hand nav that covers all the tutorial files and resources, taking the files we have and turning them into that nav. You see the JSON file on the left; that becomes what's on the right. This is unthemed. We have another tool that actually themes it all. And what ends up being output is something like this: you have the left-hand nav on the left, the markdown content on the right, and then each resource looks something like that.
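Backing up to that resource generator for a moment: a tool like that can be tiny. Here is a minimal Ruby sketch using ERB, with hypothetical file names and layout rather than the actual WePay tool:

    #!/usr/bin/env ruby
    # Usage: ruby generate_resource.rb widgets
    require "erb"
    require "fileutils"

    resource = ARGV.fetch(0) { abort "usage: generate_resource.rb NAME" }

    # A skeleton RAML fragment; the TODOs get filled in by hand afterwards.
    template = ERB.new(<<~RAML)
      /<%= resource %>:
        get:
          description: TODO describe listing <%= resource %>
          responses:
            200:
              body:
                application/json:
                  example: |
                    TODO paste an example response
    RAML

    FileUtils.mkdir_p("docs/resources")
    path = "docs/resources/#{resource}.raml"
    File.write(path, template.result(binding))
    puts "wrote #{path}"

Running it once per resource gives you consistent skeletons, and the human-written parts stay obvious because they're marked TODO.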
You take the bodies, all the example code, and the spec itself, and actually generate documentation from that, so you get pretty much a complete reference with anything that you could want. We first developed this by hand, so we would actually create each of these pages as HTML. And eventually we built the automated tools, because that was hard to update, obviously, and a lot of work. We found a template that worked for us, and then we built the tools to generate that template. Once you have a spec file, it's not that hard to create things like SDKs, depending on the language you choose. There are a lot of pre-built generators out there. Some of them are really good. Some of them are really bad. Typically, the better ones are the ones that were written as their own SDK first, with the spec plugged into the internals afterwards. It just makes them feel more like an SDK you'd be used to, not something mechanically generated. If you want something like a client wrapper driven by a spec, they're not that hard to create yourself, and those are going to be better in quality. The Heroku one's pretty good; it just consumes the JSON schema. And there's Swagger codegen: some of the SDKs it generates are pretty good, others are just broken. The major takeaway from this: treat your documentation as an actual product. If your API is what people are consuming, that's what your product is. You need to invest in the tools to actually build the documentation that lets people use it. If you never think of it like a product, if you only think, "Oh, we'll just write the documentation afterwards, maybe we'll put it in a Google Doc and just share it," it just becomes overhead for you. You won't want to update it, and what that leads to is people not wanting to use your API. You've got to build the tools to do something like this. I've found that Go CLIs are actually really good for this. They make these tools easy to build, and also easy to share. So if you're a developer, you might not be the one actually updating the documentation; maybe a PM is doing it. If you're using Go tools, you can just hand them a binary. They don't need to know how to use Ruby or anything like that. But yeah, hopefully you have a better understanding of how you should be building your documentation. Any questions? No questions? Do you think you owe it to your users to build clients in all sorts of languages? The question was: do you think that you owe it to your clients to build client libraries in their language for them? I think that it depends. If you're building something where it's going to be complicated for someone to use plain HTTP, then yeah, you should probably have some sort of client wrapper out there. They're not that hard to make. The problem is that if you don't have those tools out from day one, people might choose not to use your product because someone else has a client for theirs. On the other side, you could build a really bad one. So if you think you can do it well enough, I would say go for it. We built all of ours as very light wrappers. In about two weeks, we built four of them. And they're not that great, but they do enough, and they don't really have any magic in them.
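A "very light wrapper" really can be this small. A sketch in Ruby, with a hypothetical base URL and endpoint standing in for a real API:

    require "net/http"
    require "json"
    require "uri"

    # Thin client: builds the URL, sends an authenticated request, parses JSON.
    # No magic, just convenience over plain HTTP.
    class ExampleClient
      BASE = URI("https://api.example.com/v1/")

      def initialize(token)
        @token = token
      end

      def get(path, params = {})
        uri = URI.join(BASE, path)
        uri.query = URI.encode_www_form(params) unless params.empty?
        req = Net::HTTP::Get.new(uri)
        req["Authorization"] = "Bearer #{@token}"
        res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
        JSON.parse(res.body)
      end
    end

    # client = ExampleClient.new("secret-token")
    # client.get("widgets", limit: 5)

Keeping it this thin is a design choice: there's almost nothing to break when the API changes, and an integrator can read the whole thing in a minute.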
But if we hadn't done those, I think people would have been less likely to use our tool, especially when you're on a sales call or something like that. "Do you have a Python library?" Yes. Yes, we do. It was not that great, but you still get to check that box. Any other questions? "Do you have any thoughts on creating good API documentation that outlines valid values for certain things that get sent in, especially if those values might be maintained by someone less technical who knows the data better? Something that says you can send these values over here, and this is what these codes mean." So the question is about documenting parameters and codes the client didn't generate themselves, like description codes that come back in a response and what they mean. Not so much pass/fail, but more "A123 means it's a green thing," that sort of thing. So: how to describe what those responses look like and how they get translated. We normally write those out as separate HTML pages and include links, because most people aren't looking for them in the call itself. So we have other pages where we describe what those response codes are and what they translate to. However, there might be a deeper issue: if you're returning raw codes, a first-time integrator isn't going to know what they are, and that might be something you'd want to wrap in a client library, essentially an object that carries a description so you can understand what it is. But most of the time, we just write these out. We don't automatically generate them, because normally there aren't that many of them, for us at least. If there are a lot, you might need something else. Any other questions? "What are some of the biggest failures you've seen designing and building the WePay API, and what have you learned from that?" The WePay API specifically: in our current version, the one people have access to (we have another version coming out in the next few months), a lot of what's there is not cleanly resource-oriented. There are a lot of alias calls. And we've found we either have people who really love them, because you can so easily use them, or people who try to combine them with other things, coming from more of a PayPal mindset, and it's harder for them to understand. So in our next version, while we're keeping a lot of the alias calls, because people like them, there will be more resource-oriented options that call the same things. That's probably one of the bigger things we've learned about how people use our API.
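Circling back to that response-codes question, here is a minimal Ruby sketch of what wrapping opaque codes in a client library can look like. The codes and descriptions are invented for illustration, not WePay's actual values:

    # Pairs an opaque API code with a human description, so integrators
    # never have to look the code up by hand.
    class StatusCode
      DESCRIPTIONS = {
        "A123" => "Item is active (green)",
        "B456" => "Item is pending review",
      }.freeze

      attr_reader :code

      def initialize(code)
        @code = code
      end

      def description
        DESCRIPTIONS.fetch(code, "Unknown code: #{code}")
      end
    end

    puts StatusCode.new("A123").description  # => "Item is active (green)"

The less technical person who owns the data can maintain just the DESCRIPTIONS table, while the library shape stays fixed.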
I heard one of the best API stories yesterday: apparently, if you call the National Weather Service API, it does geolocation on the caller, regardless of where your server is. So if your server happens to be sitting in Virginia, they'll give you the weather for Virginia, even if that's not where your users are. That might be the most epic fail of not understanding how APIs are used. Any other questions? "Have you gotten any requests to document which errors can possibly come from which endpoints, and how have you solved that problem?" Let me read the question back: certain endpoints are going to return certain errors, different from other endpoints, sort of like how in a strongly typed language you might have a specific exception thrown. How do you explain to a user which errors might be thrown by different endpoints? So, we have one document that describes the error states we might throw, and we try to keep them consistent. That's the first step: making sure that when an error gets thrown, it's thrown in a consistent way, so if you know how to handle errors generally, you'll know what's going on. We run into a problem where there are a lot of errors we can throw where we don't actually know what's going on; it's actually a problem with, say, the user's credit card. When you make a call with WePay, we're actually talking to something like four different banks down the line, and any error in that flow could be MasterCard or whoever causing the problem. So we try to handle it generally. We have four or five error states that we collapse things down to, and if you get one of those, there's a way to handle it, which is mostly: you should retry; there's something wrong with the data; we don't know what's going on, so you should probably call us; or they should call their bank. I think that's the big thing: simplify it down. And if you can't do that, I would just be really explicit in the docs about how to handle things, and perhaps give someone pseudocode, like a case structure of what they might want to do. Not making me come up with that myself is going to make me happy. Any other questions? Awesome. Thank you all for coming. Thank you.
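As an aside on that last answer, the case structure being suggested might look like this in Ruby, with the collapsed error categories here being hypothetical stand-ins, not WePay's real ones:

    # Maps a collapsed error category to the action an integrator should take.
    def handle_payment_error(category)
      case category
      when "temporary"    then "retry the call with backoff"
      when "invalid_data" then "fix the request data and resubmit"
      when "issuer"       then "ask the payer to contact their bank"
      else                     "contact API support"
      end
    end

    puts handle_payment_error("temporary")  # => "retry the call with backoff"

Shipping something like this in the docs, or in the client itself, saves every integrator from inventing their own error taxonomy.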
|
Congrats! You've built the one API to control them all, and it's amazing. It's fast, well-architected, and Level 3 Hypermedia. However everyone is still using your competitor's sub-par product... Why? We developers are lazy and you're making it hard for us to use. We're diving into your SDK's tests to solve basic tasks, your SDK + API errors annoy and don't help us fix our mistakes, and the lack of logical defaults leaves us playing parameter roulette. Let's explore how to build API-enabled products that fellow developers will love using with great docs, tooling, support, and community.
|
10.5446/30725 (DOI)
|
Thank you so much for coming and taking time out of your lunch. We're going to escape this conference for a little bit and get back to fundamentals. I'm Julian, and today we're going to talk about HTTP. But I actually kind of hate purely technical talks, so there's a narrative around this, about learning. I just started a new job, and I know absolutely nothing about geospatial systems or anything like that. So tell me if this sounds familiar: I sit down at my computer and I'm reading up on geo stuff, maps, the company, the blog, and all of a sudden I have like a hundred tabs open, and I get this sinking feeling of, oh my god, there's more than I can possibly ever learn. And if you're a new software developer and you feel that, maybe you think you shouldn't be a software developer. That's definitely not true, because all the people who have been here a while know that that's normal; you just learn how to deal with it. The whole purpose of this talk is to share one way of dealing with it, which is ignoring as much complexity as possible and focusing on the fundamentals of things. We all use our web browsers every day. Web browsers have tens of millions of lines of code; there's way more code in a product like Chrome than in the Linux kernel, which is frankly really scary. But people who don't know how any of that code works at all can still be great web developers, which is really awesome. So I want to show you a tool called netcat. We're going to do some fun stuff with it, and hopefully you all will learn some cool foundational things. Netcat is a nice little command-line utility. Everything on the network works with what we call a client-server model: there will be one thing listening, and then there are things connecting to it, and then they'll all talk, and it's great. So here we can run netcat in server mode. We say we'll listen on a certain port, let's pick 3001, and the -k flag says keep it open across multiple requests; we'll be doing that for the rest of the talk. So we have this running. Notice it doesn't really do anything yet. If we switch over to the other side, we can run netcat again, this time in client mode. We type in localhost 3001, and it connects to the other one. Nothing happens yet, but whatever I type here shows up on the other side. And if I switch over to that other side and type something, it shows back up over here. Netcat is pretty much the simplest program you can run that does network communication. I'm just doing it from my own laptop to itself here, but if you want, you can find a friend, get their IP address, and talk with them over netcat. What you see here is exactly what was typed. So if you want some sort of security, maybe you shouldn't type in your secret messages. Or if you've heard of ROT13, maybe instead of typing a message in the clear, you run it through that first. What you see on the screen is exactly what's on the network. So let's do something a little more interesting with it. I've got a little web server running, hosting a page. You can imagine the HTML: a page with a little header, some images, some links. So, what happens? We go over to netcat.
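For reference, the two sides of that demo look roughly like this. Flags vary a little between netcat variants, so treat these as the BSD-style invocation:

    # terminal 1: listen on port 3001; -k keeps accepting connections
    nc -l -k 3001

    # terminal 2: connect as a client; anything typed on either side
    # appears verbatim on the other
    nc localhost 3001
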
Oh, it's running on port 3000, not 3001. Nothing up my sleeve; that's just the Rails default. So let's do netcat again, for localhost and port 3000. Nothing happens at first. Now we type GET / HTTP/1.1 and hit enter... oh, we need one more thing, thank you, good catch. We need the host. HTTP/1.1, unlike the older versions of HTTP, is strict about this sort of thing. So: HTTP/1.1, and now the host, which is just localhost. Now, there we go: we get a bunch of stuff back. Look at this, it's HTML. If I scroll up just a little, you'll see it's actually what you'd expect: you see a title, you see Rails, it loads its own stylesheet. And up here we see some more stuff, which is in its own way more interesting. These are the actual headers sent back by the web server. The server says 200 OK, everything is great. It's got the date; I guess if clients don't know the date, the server can tell them. And it tells you what you're going to get is some HTML, which is good, so you know how to interpret it. If you've ever looked at these headers in the Firefox developer tools: the whole point is, this stuff is not some visual interpretation of the headers, some fancy nice display. This is actually the text that is sent over the network. So it's really nice to know that whatever you see in the headers of your HTTP requests, it's just text. We'll play around with it more in a second. Now let's flip things around. Let's run netcat again, in server mode this time, listening on port 3003, and we'll keep it open: a netcat waiting for requests. I go back to the browser and point it at localhost 3003. And Firefox will diligently wait for a while, which is good, because I'm a slow human and I cannot respond to HTTP requests quickly. I'm even slower than Rails at this point. So this is what Firefox sent. It sends a bunch of stuff: a GET request for the root, just what you would expect. It sends a user agent string; if you're ever interested in a really crazy tale of deception between companies and mystery in the history of tech, go read up on how user agent strings came to be, because they're pretty crazy. Firefox says things like: it accepts HTML, it accepts XHTML, it accepts XML, but it doesn't want XML quite as much as the others; that q number is a little lower. And then it accepts anything else, but even less than that. A bunch of other stuff too. But now here's the fun part. It's still diligently waiting, and we can just start typing some stuff. So: HTTP/1.1 200 OK. Then we type a content type of text/html. And we hit enter twice to separate the headers from the body. Then we can just start typing some HTML, like an h1 saying hello. And if you're used to the Linux terminal, you know Ctrl-D will close out the output. So Ctrl-D, I click over to Firefox, and it says hello, rendered as HTML. Pretty cool: we just typed some HTML at Firefox by hand.
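Written out, the two halves of that exchange are just these few lines of text. The blank line separating headers from body is significant:

    # typed as the client:
    GET / HTTP/1.1
    Host: localhost

    # typed back as the server:
    HTTP/1.1 200 OK
    Content-Type: text/html

    <h1>hello</h1>
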
So, let's do something a little bit more complicated. Let's keep building. We'll run another server, on port 3004 this time, and keep it open. This is actually really important; you'll see why. We just change the port in the browser and do the exact same thing again, except this time we're going to respond with a slightly more complete page. So: HTTP/1.1 200 OK, blank line, and let's start with a head tag. We're going to add a link tag: rel equals stylesheet, and an href of style.css. It doesn't have to be called style.css; it doesn't matter. Close out the head tag, open a body tag, and let's do another h1. Notice I'm making typos and it doesn't matter; browsers are ridiculously forgiving of bad HTML. Close out that body tag. Okay, that's an error, whatever. Ctrl-D. And what came back is a whole other request. Now Firefox is asking for the stylesheet. Notice this time it says: hey, I really want some CSS. And anything else it doesn't want nearly as much; the q number is quite a bit lower. If we go back to Firefox, it's still kind of waiting down at the bottom. So let's write some CSS. Let's do some stuff with our h1. What color do you want to make it? Red. Red it is. color: red. Ctrl-D, go back, and we have hello, in red. Kind of boring. But: if you've ever gone to a web page where the HTML loads and then you see a little spinner, and then a div opens, and then an image loads a couple of seconds later, you're told not to build stuff like that, right? Because the way those pages work is multiple round trips to the server. And the human web server, Julian, is ridiculously slow. But even your users on a mobile network are almost as slow as I am sometimes, on my iPhone at least. So you saw that there were multiple round trips here, and I've never found a better way to drive that point home than actually watching the multiple requests come in. Let's do something even more fun. Let's talk about cookies. I live in Berlin right now, and if you've ever been to Europe, or you've been following EU politics, they're really scared of cookies. Every single web page I go to now asks me if I'd like to accept cookies. And it gets kind of fun. So let's see what the story is about this. We're going to do the same thing as last time: netcat on 3005, and we'll just keep it open. Point the browser at it, hit enter, and we get our request just like we've seen before. Now we're going to throw something into the response headers: we're going to set a cookie. It's pretty simple. Do the same thing as before, HTTP/1.1 200 OK, but then we type a header line: Set-Cookie. And what should the cookie be called? Any suggestions? Any text you want. Cookie monster? Cookie monster it is. Awesome. And then let's just do an h1. Cookies are yummy. Cool.
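Spelled out, the two hand-typed responses from that demo look like this; the second exchange only happens because the first told the browser to go fetch style.css:

    # response 1: the page itself
    HTTP/1.1 200 OK
    Content-Type: text/html

    <head><link rel="stylesheet" href="style.css"></head>
    <body><h1>hello</h1></body>

    # response 2: the browser comes back asking for /style.css
    HTTP/1.1 200 OK
    Content-Type: text/css

    h1 { color: red; }
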
So, if we go back, it says hello cookies. Nothing visibly changed. But we refresh the page, come back here, and in addition to all the Google tracking stuff that's being used to show me ads, you can see that our cookie-monster cookie is in the request's Cookie header. And that's basically all a cookie is: it's just a line in the headers that your browser remembers and sends back next time around. Obviously they can get fairly complicated, but it's all just cookies. Let's go on to JavaScript. First, we're going to run one more server. Actually, for this one we have a full example: a view in our little Rails app, the JavaScript example page, just because I don't really want to type this HTML live every time. We've got some JavaScript here, just using jQuery: it makes a GET request to this URL on port 2006, and whatever comes back just gets appended into this content div down here. So it looks like this: the page says "jQuery, put stuff below", and it's waiting. And now, actually, we have to start a server before we send our browser off to fetch from it. So we'll run netcat again on 2006, and load the JavaScript example from localhost. Now if we go back here, we've got another request. This one's again a little bit different: it's an XHR request. That's nothing special as far as our server is concerned; it doesn't have to do anything different. And we can respond back with some text, or some XML, or JSON. But there is one thing we have to do. We type HTTP/1.1 200 OK just like before, but we also have to set another header. The modern browsers are very nice, very friendly; they want to protect us. They don't want our pages to be able to pull in and run any script from anywhere they please. So they ask the server that's sending back a response to an XHR to state that it's okay to use it, because it could be a POST request that's going to be deleting something from a list, or doing something crazy. So we have to type the Access-Control-Allow-Origin header. Anyone remember the exact order of those words? This one always trips me up; it's a little awkward. We can just set it to star, which says anyone can grab this stuff, which is great for a demo. So there's our header, and the body: hello from JavaScript. And if we go back, we see "hello from JavaScript" appear on the page, thanks to the magic of jQuery. So that's all an XHR is: an XHR request is just a small variation on the exact same requests we've been doing all along. So, one last thing: let's talk about HTTP/2, whose spec was just finalized a couple of weeks ago. Let me show you an HTTP/2 request... well, it's a little bit different. I have some bad news and some good news. The bad news is that netcat will not work, because HTTP/2 is not plain text. The good news is the reason it won't work: HTTP/2 requests, at least as the browsers do them, all require encryption, which is awesome. Even for news sites or things you're not logged into, there are just too many potential dangers in plain-text HTTP: anyone along the way can tamper with your pages and start injecting things.
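The CORS-enabled response he types is roughly:

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: *

    hello from JavaScript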
If you saw the recent fun that GitHub had with the denial-of-service attack from China, they are definitely looking forward to HTTP/2. The other good news about HTTP/2 is that it's significantly smarter about what it sends and when it sends it. We saw it earlier with the CSS: we type our HTML document, we send it off to the browser, and the browser looks at it and says back to the server, oh hey, by the way, I got that nice HTML you sent me; I'm going to need this style sheet too. What percentage of the time do you think a server that gets a request for some HTML then gets the request for the style sheet right after? Maybe 100% of the time? So HTTP/2 is a little bit smarter: instead of waiting for the web browser to come back and say, hey, I need this style sheet, when the web server already knows that, the server can send the HTML and then, essentially unprompted, also say: oh, by the way, web browser, I happen to know you're going to need this CSS, so here you go. That's great; it solves a lot of the problems with round trips for multiple assets, and with all the JavaScript the web keeps piling on, so that's really cool. There are a bunch of other talks that go into HTTP/2. The good news is that in your developer tools, all the semantics are exactly the same. You've probably been making HTTP/2 requests already, or SPDY requests I guess, without even knowing it. If you were to look at those requests in the developer tools, you wouldn't notice anything different: all the headers, all the semantics are the same. You don't have to forget that 404 means Not Found or 500 means Server Error; it's all exactly the same. If you do want to poke at something like this on the wire, you can't use netcat, but there's an awesome tool called Wireshark that lets you inspect everything going over the network. It's really cool and not very hard to use. So, I want to show you one last thing about netcat. If you have a response that's a little bit more complicated, maybe it has some binary data that you can't really type out, you can put the response you want into a file, however you like, and feed that to netcat. So let's do this. We'll do netcat, listening on port 3008, and we're just going to send a file over; there's a file here called thanks. So now, if you go to localhost on port 3008... you get a picture of my cat. Thank you very much. Thank you.
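For reference, that last trick is a one-liner (flag syntax varies by netcat build; some want nc -l -p 3008):

    $ nc -l 3008 < thanks    # 'thanks' holds a prebuilt response: status line, headers, image bytes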
|
What actually happens when we visit a website? As developers, we're supposed to know all about this, but when our browsers have millions of lines of code, and our backends have fancy service-oriented-architectures with dozens of components, it's hard to keep it all in our heads. Fortunately, we have amazing tools to help us. We can bypass the complexity of browsers and servers, and simply examine the communication between them. Let's use these tools and look at the underlying patterns shared across the entire web, from the simplest static pages to the most sophisticated web apps.
|
10.5446/30726 (DOI)
|
Thanks for coming to my talk, you guys. I really appreciate it. My name is Jim Jones. I work as a Rails engineering consultant, currently at One Kings Lane, and I lived in San Francisco for the past five years. I just recently moved back to Nebraska because my wife and I had a baby. I love karaoke and beer, I love being a dad, and I really love the city of San Francisco. As a little side story: one of the first experiences I had when I moved to San Francisco was at a Taco Bell. This was in the Outer Mission/Excelsior, one of the few Taco Bells in the city. I struck up a conversation with the cashier, and she started complaining about the rendering differences between her Windows phone and her Android phone. I'm pretty sure she said the word "render", and I thought to myself: holy shit, I'm in another place here. I had a second experience when we went to Wells Fargo to open a joint account, since my wife and I had gotten married. The banker asked me, hey, what do you do for a living? I said, I'm a software engineer. His eyes light up and he says, what language? I said, Ruby. He said, oh my god, I wrote a few scripts to automate some of the balances I need to know at the end of the day. That was the second "where the fuck am I at" moment. I really miss those sorts of interactions. It was a whole new world, and it's where I got to meet a lot of the people I had really respected over the years. Great experience. When I got there, I went to a company called Zvents, and we eventually got acquired by StubHub. After that I ended up striking out on my own doing consulting. It was this whirlwind of experiences I'll get to tell my grandchildren about: going through an acquisition, being out on my own, learning on my own. So that's a little bit of background about me. Today we're here to talk about dynamic sites, dynamic sites with Rails. We have a few options here. We have raw JavaScript, which is actually coming back into favor now that implementations are starting to line up across modern browsers; it's probably not as far-fetched as a lot of people seem to believe. We have jQuery, which takes care of a lot of the browser idiosyncrasies and differences, and is the go-to on a lot of projects. Then we get to the higher-level layers of abstraction. We have Backbone, which gives us a little more structure on the front end. We have Ember.js, which goes even further with all of its nice conventions. And we have the directives in Angular, which are really, really powerful and give us even more structure. And then there's this little stepchild that no one ever really talks about; I'm not even sure most people are aware of its capabilities in the current Rails stack. The point is that everything has its place: the front-end frameworks certainly have their advantages, and then there's server-side JavaScript rendering, and we're going to show where its advantages are and where it shines. We're going to go through a few parallel implementations of the exact same app, and I'm going to attempt some live coding and commit presentation suicide. We'll go from there. First off, we have to qualify this: for anybody who's been with Rails for a long time, I'm not talking about RJS.
I think a lot of people still call it RJS, and that's probably a problem with the evangelism of this particular feature. Depending on how far back you go with Rails history, there was functionality called RJS, Ruby JavaScript. You probably remember very explicit method names like link_to_remote, with this URL hash where we're explicitly spelling out the controller and action. We had page.replace_html: we would get these JavaScript requests, and then we'd get this page object, and we were able to either replace a particular HTML element or update an HTML element. I think this was the Rails way of trying to solve for the sweet spots of what dynamic sites were doing back then, which was just basic updating. But at the end of the day it was very constraining, and it ended up getting ripped out of Rails core. In fact, I was trying to figure out when the transition happened from RJS responses to raw JavaScript responses. I was looking at my Agile books, and I think the second edition of the Agile book was still citing RJS; I assume it was around Rails 3 that we started to see plain JavaScript responses, but someone can correct me. We started to take the kid gloves off of our JavaScript requests, and around Rails 3 we started to allow free-form JavaScript responses. I think this is better demonstrated with some code, so let's see what we can do here. I have this application, just a standard rails new. Is everybody okay on that? Can everyone see? We just have a User model with a name on it. I've already run the db:migrate, but that's about it; that's all we have at this point. What I'd like to do: currently we have this functionality, a listing of users and a new-user form. We want to add a user and have the list update dynamically. Out of the box it does a full page load; this is just plain scaffolding. So how can we make it dynamic, where we just start adding users to our table, without pulling in any external JavaScript frameworks? We'll go back to the source and I'll walk you through how we're going to massage this code. And this is where you get to laugh at me, because live coding never goes right. Let's see if we can create some magic. Let's take the table, and I'm going to render out all the users; that will iterate over our user collection and call the user partial for each one. So we'll create our user partial. For those who have seen this in action, you'll probably be pretty bored, but I think it bears repeating: I've consulted at a lot of companies, and people who tend to be newer to the Rails community tend to reach for a front-end framework right away. They say: oh, Ajax, dynamic updates, we need to pull in a front-end framework for this. The whole point of going through this exercise is that you can see this built up, and see that there's a lot that comes in the box with Rails. A lot of your simple dynamic updates may already be taken care of, and you may not have to add any more dependencies. I want to drive that point home by going through this.
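For those who never saw it, the old RJS style looked roughly like this (from memory of the old API; details varied by Rails version):

    <%= link_to_remote "Add user", :url => { :controller => "users", :action => "create" } %>

    # create.rjs -- the server responded with a page object, not raw JavaScript
    page.replace_html "users", :partial => "user", :collection => @users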
So let's just take that. Now we've got our partial. Let's make sure it's still rendering. Nice. All right, so now we're going to allow a name to be entered right from the index. We'll come over here, get rid of that, and just say render, and we already have our form extracted out as a partial, because Rails is good to us like that. Let's make sure that renders... oh, it needs a user object. We'll come over to the users controller and just say User.new. All right, now we've got our form. Awesome. Great. So what if we enter a name here? Let's try one. Okay, we're still doing a full HTTP request. So how do we alter this? We go over to our form, and we just add remote: true. We'll get into the magic behind this flag a little later, and we'll actually look at the implementation behind it; just know for now that this is what enables the asynchronous submission of the form. All right, so we've got that. Let's refresh. "Tim". All right: nothing. So let's see what's going on in the back end. We can see we posted to /users, and it hit UsersController#create as JS. So we've switched from a standard HTTP POST to a JS request. That's a good sign; it means we're at least posting our data to the server. And if we actually hit refresh, we'd see that Tim was created. So how do we get that data rendered back? Remember how we put an id of "users" on the tbody? We come over here and do a little jQuery magic, because that's included out of the box. We'll say $("#users").append. And here's the nice thing: this particular template is going to be called create.js.erb, and that implies that ERB does a pass over this template before it serves up the raw JavaScript. So this is just another ActionView template, and that empowers us to render out our partials, the very views we've already built, so we get a lot of reusability out of this. You'll find that for a lot of simple dynamic updates, you'll be a lot more productive this way. So we do escape_javascript, and we render the user, because we already have our user partial: the one that constitutes a row in our table. And... we're missing a parenthesis. There, sweet. So: append to #users, render the user. We'll call this create.js.erb, implying it's for a JS request and it's processed by ERB. All right, let's add another name and try this. Oh no, nothing happened. Maybe there's something wrong. Let's take a look at where it fell down. Okay: UsersController#create, as JS, commit... and there it is: we forgot one thing in the controller. On create, we want to make sure it renders the default JS template, so we pop format.js into the respond_to block. Now let's try this again. Boom. So now if we take a look at the life cycle of this: we posted to /users, we processed UsersController#create as JS.
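Pieced together, the demo's moving parts look roughly like this (a sketch with the names from the demo, not the verbatim screen):

    <!-- app/views/users/index.html.erb (excerpt) -->
    <%= render "form" %>
    <tbody id="users">
      <%= render @users %>
    </tbody>

    <!-- the form partial: remote: true is the one flag that makes it asynchronous -->
    <%= form_for @user, remote: true do |f| %> ... <% end %>

    # app/controllers/users_controller.rb
    def create
      @user = User.new(user_params)
      respond_to do |format|
        if @user.save
          format.html { redirect_to users_path }
          format.js   # renders create.js.erb below
        end
      end
    end

    // app/views/users/create.js.erb -- run through ERB, then sent as JavaScript
    $("#users").append("<%= escape_javascript(render @user) %>");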
And then you can see right here that it rendered users/create.js.erb, sent it client-side, and the default behavior for jQuery is to evaluate a JavaScript response. We'll dig more into the internals of that in a little bit; it's really beneficial for you to see the full implementation of it, even though it's a little crude. So, just to reiterate: this is not RJS. This is way more free-form. This is just raw JavaScript with ERB processing, and that raw JavaScript gets sent client-side. The template gets processed by your template processor, which could be Haml, could be ERB. And since this is ActionView, we get full reuse of partials, we get all of our normal helpers, all of that included. It's just another type of template. All right. I mentioned that we're going to look at some parallel implementations. What I have set up: there's a site called TodoMVC that has a to-do list implemented in various front-end frameworks. You have the Ember.js and AngularJS implementations, and there's also a plain old Rails JavaScript-response implementation. I want to walk through some of the specifics of that code so you can start to get a feel for a real-world application. Let's take a look here at a network request, and at this response. Going back to the raw JavaScript response for the create action: you can see that the response is basically just raw JavaScript sent back. We've got our to-do list here, we're appending our list item, and you can see the escape_javascript method has properly escaped all of our quotes. We're re-initializing the entry field for the to-do back to blank, and we're setting a few properties. And that's it. If we take a look at the form code for that: we just have a form_for, initializing a new todo, and we set the remote: true flag on it, similar to the initial implementation we walked through. Here's the model; pretty trivial, just a couple of scopes for completed and active. And we have our view. One thing to note here: j is basically an alias for escape_javascript, so that leads to slightly shorter code within your JS templates. The most relevant portion is just this to-do list: we're appending, we're rendering out our to-do, and then we re-initialize the value. And you'd see this create.js right within the views for that particular resource. Here's the controller. Now I want to walk through what some of these helpers expand to, so you have a little more background, and when things start going wrong you aren't drawing a blank and cursing my name for recommending this as an alternative. If we look at that form_for helper call expanded, we see the action is set to /todos, and, the most important portion, remote: true ends up expanding to data-remote="true". And we're going to see here in a little bit that this is what jquery-ujs is actually looking for in order to do the asynchronous submission.
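The expansion he's describing, roughly (the jquery-ujs part is paraphrased, not the exact source):

    <!-- form_for @todo, remote: true generates approximately: -->
    <form action="/todos" method="post" data-remote="true"> ... </form>

    // jquery-ujs, paraphrased: intercept submits of any data-remote form
    $(document).delegate("form[data-remote=true]", "submit", function(e) {
      $.rails.handleRemote($(this));  // fires ajax:before, ajax:success, ajax:error, ...
      e.preventDefault();
    });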
Right here: if you start digging through jquery-ujs (jquery-ujs is the piece that's actually in charge of the asynchronous submission underneath the Rails JS helpers), you'll see it do a document-level delegate on the form submit selector, and digging into that handler, if that remote flag is true, it just calls handleRemote. Further down, and we'll look at these events later, you'll see in the handleRemote implementation a whole series of events being fired, and there's a set of events you can listen to that are quite beneficial for things like disabling controls, re-enabling controls, doing proper error display, and so on. Here's our controller action. You can see TodosController#create handled it as a JS request; that's the important part. And it's important to note that we rendered using a standard partial, escaped. This is also available if you're using Haml: you can use a regular string with interpolation right there, so you can still do the escape_javascript render and produce that same output under Haml. Here's the final portion: when the browser makes a JS request and the response comes back, there's a global eval on it. That actually becomes important when we get to debugging, because evals aren't pretty when the code is incorrect, so I'll give you a few tips on debugging after we go over these other implementations. Now, we're going to gloss over these front-end implementations fairly quickly, but it's important to note some of the differences here. If we start breaking down the views on the Angular implementation, some of the more important parts are obviously the todo-app piece, and then on our ng-submit we have the directive for addTodo. We come over to the controller, and this is taken directly from the TodoMVC implementation: we have our addTodo and a saving flag, a couple of promises here, and it persists that out depending on what store you have, whether it's the API or local storage. Here's the service that provides the insert function for the store; it just posts to the API's todos endpoint. It's important to note that you obviously have full control over that, and that actually becomes important when we discuss the advantages of the client-side frameworks. Someone suggests "sleep" as a to-do item? That's definitely on the to-do list sometime; not now, that would be boring. With the Ember implementation, we've got inline Handlebars, Handlebars templates in script tags, which get evaluated with JavaScript client-side; that's important. We have our controller, which just creates the record and saves it, and a very simple model to represent it. So, our client-side advantages: we have immediate rendering. Because these templates are implemented in JavaScript, we can render to the screen immediately, regardless of what the server result is; we can take that chance if we want to design it that way. That's certainly an advantage; it's hard for even a 50-millisecond server-side response time to compete with that kind of immediacy. And you can do asynchronous persistence.
Also on the speed side, you can delegate that persistence out and still display things in the meantime, and you get that immediate update to the user interface that delights users. And then there are graceful error retries: since you're controlling the persistence loop, you can do some nice graceful retrying if you lose internet connectivity or lose one of the servers, and this can be totally transparent to the end user. So you have that level of granularity and control on the client side. All right. Now, a few gotchas for the JavaScript server-side responses that, once you're aware of them, will make the deep dive a more pleasurable experience. Debugging is definitely a big-time gotcha, and I'm going to jump over here so you can see it firsthand. Like I said, when the JavaScript response gets sent back, the client actually evals it, and the big problem is that the eval fails silently. So if you happen to have a problem with your JavaScript, just this little typo in there, we go ahead and create a user... and, oh no, it just fails. And there's nothing: no guidance, no server-side error, because at that point we're just evaluating the template output on the client. The server did its job, the client evaled it, threw its hands up, and did nothing. That's definitely one of the frustrating components that new users have to figure out how to get around, and there are a couple of different ways you can attack it. This is kind of a primer for where I'm going with the debugging. We talked about all the different UJS callbacks and how they're triggered throughout the life cycle of the asynchronous request, and here's a large table of the different states within that asynchronous request that are available to you. These probably don't get utilized enough, or people aren't aware of them. But the one we're concerned about is ajax:error, and this will actually get called when there's an eval error. This is really helpful: you can drop a listener, say, into your application.js, and if there is an error with the eval, we can at least do a console.log and output what that error text is, so you're not totally in the dark. I remember the old RJS would actually wrap the code in a try/catch block, and I'll show something along those lines. I actually have a pull request out for Rails 5 that does just that: it tracks where the JavaScript was generated, whether it's within a partial or within a template, sends that metadata over with the JavaScript response, wraps the execution in a try/catch block, and then, depending on where the error occurred, it'll say: hey, you have a JavaScript error, and it actually occurred in the user.html.erb partial, or in the user.js.erb template. So it's mapping the error back to where the JavaScript was actually produced.
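Until something like that lands, the ajax:error listener he mentions is the stopgap. Roughly (a sketch; the argument list is jquery-ujs's):

    // app/assets/javascripts/application.js -- surface silent eval failures
    $(document).on("ajax:error", function(event, xhr, status, error) {
      console.log("JS response failed:", status, error);
    });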
That pull request gives a little more insight into the context in which the JavaScript was generated on the server side, while displaying it in the context people are used to debugging in on the client side. It's slated for 5.0; I don't know if it'll make it in, but it's been tagged. One big gotcha: if you're doing replacements of HTML elements in your JavaScript responses, you're going to have to re-bind events. If you had click handlers or something on certain div elements, and you replaced those elements, those handlers are lost, so you have to re-bind them. One way you can do that is to trigger callbacks: say, on our cart summary, we trigger an "updated" event, and within that handler we re-bind the very events we set up in the first place. Now, some advantages of JS responses. Obviously, since we're in ActionView, we get reuse of partials. You can see it in these two examples: in create.js.erb, a user gets appended and we render out our user partial; and then, for that same resource, on update, if we're doing an asynchronous dynamic update, we can select the user element by its ID, call replaceWith, and render out the user, which calls the exact same partial. So you get all the advantages of ActionView right there within your JS templates. Along with that, you have access to all your different view helpers, and that includes caching: you can cache the hell out of these JS responses and get really lightning-quick responses. And potentially less JS load, since we're only sending over the pieces that need to be executed at that particular time; it can amount to a real reduction in JavaScript, depending on what you're doing. There's also, depending on who you ask, an easier execution flow: because these are templates, they fall in line with the way the rest of a Rails app flows. You know: I've got a JS request, I'll just look for the corresponding template. It follows all the same conventions, so there's no deviation from the mental model you're already using. Finally: when do they make sense? I think they tend to make sense for interactions where the user expects some level of persistence, something that's stored: a comment, or you've added a cart item and you want to update the cart count, or update, say, taxes or totals in another column. Something where the user expects persistence is where they can really shine, and they can certainly help simplify a code base, just for the reusability you get. Here's a sketch of that create/update pair and the re-binding trick.
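Again, a sketch rather than the slides' verbatim code (the custom event name is hypothetical; pick your own):

    // app/views/users/create.js.erb
    $("#users").append("<%= j render @user %>");

    // app/views/users/update.js.erb -- replace the row in place with the same partial
    $("#user_<%= @user.id %>").replaceWith("<%= j render @user %>");
    $(document).trigger("cart:updated");  // let interested code re-bind its handlers

    // somewhere in application.js
    $(document).on("cart:updated", function() { /* re-bind click handlers, etc. */ });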
I'll leave you with this final note, from an article published not too long ago on Medium by Dan McKinley, who was a principal engineer at Etsy: consider how you would solve your immediate problem without adding anything new. So when you start to look at the dynamic updates you want to make, really give it a second thought as to whether you want to adopt a full-fledged framework, with that sort of overhead, or whether JS responses would be sufficient for updating the individual pieces on the page. Thanks, you guys. I really appreciate it.
|
For dynamic apps, Rails has taken a backseat to client-side frameworks such as AngularJS, Ember, and Backbone. Learn how to use server-side JavaScript effectively to greatly simplify your code base and reuse your view logic. We'll implement parallel apps with vanilla Rails JS responses, AngularJS, and Ember.js so that we can contrast the implementations and evaluate the tradeoffs.
|
10.5446/30634 (DOI)
|
My name is John. I'm the VP of Engineering at Payoff, and we're here to talk a little bit about what we've been up to for the last year and a half or so; we launched just this December. We came from internet-company roots, did that for a while, and realized we wanted a lot more impact. We didn't set out to be a non-profit to help people get out of debt; we really wanted to do something that could have a lot of impact and help a lot of people, and we did that with Ruby and open source. So, if you haven't heard of Payoff, that's not terribly surprising. We started in 2009. We were, again, an internet company trying to do personal finance management. Think of Mint.com, but we did it with badges and behavior change. We really wanted to help consumers get out of debt, number one, but we were trying to do it with positive reinforcement and game mechanics. So think of Mint with achievements: instead of killing 50 zombies and earning a badge, if you made six payments on time, you'd get a badge; or if you saved more money than you spent this month, you'd get a badge. That was kind of our Payoff 1.0. Payoff 2.0 is what we set out to build, the piece built on top of the Ruby stack. This is Ashley, a user, and she can illustrate what Payoff 2.0 is. Payoff 2.0 is credit card refinancing. Think about mortgage refinancing: when there's a better interest rate available, you refinance instead of keeping the rate you have. So we're asking: why shouldn't credit cards be the same way? And actually, if people refinance mortgages just to save 50 or 100 basis points, why shouldn't you try to refinance a credit card and save several thousand basis points? Ashley is probably similar to many of you. She's in her late twenties, well educated, technologically savvy, and she's unfortunately built up $10,000 of credit card debt. One thing we learned is that this is a really prevalent story, and it's something people generally don't talk about. Studies have shown that people are more willing to talk about having cancer than about being in credit card debt; that's how taboo this topic is. But again, it's very, very prevalent. Think about the way credit cards work today: you get your first credit card walking around campus in college. A slice of pizza, maybe a t-shirt, "fill out your credit card application", and off you go; you've got credit now, you can spend money you may not necessarily have. But credit card pricing is based on risk. When you're 18, with no job and no college degree yet, you're at about the worst risk level possible. So you get this first credit card paying 22, 24 percent interest. Then Ashley here has been making payments the whole time. She's got a 700 credit score, and she's not the same risk she was ten years ago. But she's probably still paying that 22, 24 percent APR on her original credit card. What we're saying is that doesn't make any sense. And the Payoff loan essentially starts from her minimum payment, $300.
And if she kept paying that; after the CARD Act, when you look at a credit card statement, there's a little box that says how long it will take to get out of debt; it would take her over 20 years to get out of debt. With 20-plus percent interest compounding month after month, a big slice of every payment is eaten by interest. So what we're saying is: a credit card is a loan. You've borrowed some money, you're paying it back over time, so you should be able to refinance it. And a Payoff loan is quite simple: it's just a fixed amount. She's got a fairly high credit score, so what we're saying is: we want to refinance her to 15 percent simple interest, not compounding, and have her pay it off in three years instead of nearly three decades. Think about the life change for Ashley between age 28 and age 31, versus between age 28 and age 58. So Payoff at its core is a financial empowerment company. We really want to get people off this debt treadmill where you're paying just enough each month not to accumulate more debt. That's what Payoff is really about. And this ties right into our core: we're not a non-profit. We believe that making a profit, being sustainable, helping the customers, and, since we're a venture-backed company, making a return for investors, are not mutually exclusive. We really believe we can do all these things, and there's no secret to it; it just comes down to building something great. So, a little about Payoff 1.0. Here's a screenshot, circa 2010 or so, very "payoff-y"; the balloons used to animate with CSS animations. And this is the part I want to start sharing with you. It was half a million lines of code, definitely in the monolith category, with zero percent test coverage. Every time we deployed, I used to keep a cold beer in the fridge for one of two reasons: either it would be a celebration that I survived, or I'd really need it after the deployment. It took two weeks to deploy a simple feature. If I just wanted to display one additional spending metric, say a widget that said how much you spent on coffee, that would take two weeks end to end. It was really, really, really painful. It would take our two QA analysts two full days to do a full regression of the site; it was mostly manual, and it was painful. And what we realized, after we hired a couple more engineers (we were hiring pretty senior engineers at this point), was that we couldn't keep doing this. So myself, Stan, and one other engineer made a decision: we're going to rewrite this thing. But we did it under the radar. We didn't tell anyone; it was spare time, nights and weekends, and we wrote a Rails proof of concept. Stan and I had used Rails before, but never in a production environment; it was always hobby projects we'd throw up somewhere, never anything at this scale. Over eight weeks we wrote a 40,000-line app with 70 percent test coverage that was 80 percent feature-complete against our initial product. We did all the typical things you'd expect in a PFM app, displaying balances and all that, but we got to do it on a very, very modern tech stack: Rails 4, Postgres, Angular, Redis, the way we really wanted to do it.
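To make the arithmetic he's gesturing at concrete, here is a toy sketch in Ruby. It is NOT Payoff's actual model: it assumes a $10,000 balance at 22% APR and a common "interest plus 1% of balance" minimum-payment formula (my assumption), versus a 3-year loan at 15% simple interest:

    balance = 10_000.0
    monthly_rate = 0.22 / 12
    months = 0
    while balance > 0
      interest = balance * monthly_rate
      payment  = [interest + 0.01 * balance, 25.0].max  # $25 floor
      balance += interest - payment
      months  += 1
    end
    puts "Minimum payments: about #{months / 12} years"  # roughly 24 years here

    # 3-year loan at 15% simple (non-compounding) interest on the principal:
    total = 10_000.0 * (1 + 0.15 * 3)
    puts "Refinanced: about $#{(total / 36).round}/month, done in 3 years"

Under these assumptions the minimum-payment path lands in the two-decades-plus range, which is in line with the "over 20 years" figure from the statement disclosure.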
And so this was kind of foreshadowing; we didn't know what was going to happen next. We knew we just needed something different, just to get our sanity back, really. So it was a real surprise when our CEO came to us about two months after we finished the prototype and said: hey guys, we're going to become a lender. And we said: wait, we work for an internet company. No one on the team had any experience lending money, or getting a dollar back, which is the hard part, by the way, because anyone can lend money and lose it; getting it back actually matters too. We were a bunch of tech guys and a couple of marketing guys, with no one from finance. So the goal was to become a lender, and we said: okay, we'll do it in a year. A little aggressive; you already know from the title of this talk that it actually took us 16 months. But we really didn't know what we were getting ourselves into. What compliance aspects did we need? What kind of security did we need? We didn't even know which decisions we made on day one would turn out to be the critical ones six months later. So we spent six months digging into it. This diagram up here, we call it the sausage diagram: no one wants to know what goes into the sausage. The really scary thing about it is that this is the 50,000-foot view, not the detailed view; this is what it looks like when you zoom out. On the left is where a lead comes in from the internet, and on the right is a completed loan, and each of these boxes breaks down into more scary diagrams that look just like this one. So we really didn't know what we were getting into. But the one thing we did know was that this was definitely a marathon, certainly not a sprint. We knew we had to do this in some sustainable way; otherwise, after we implemented it, we wouldn't have a team anymore. They'd all have quit and walked out, and we'd really be ghosts. One of the things we're really blessed with is that we had actually sat down, many times, and thought about culture. A lot of people will tell you culture is something you can't create, that it's organic, which is true, but you need a nurturing environment for it to occur. So we were really lucky, because we actually sat and thought about this before we decided to become a lender. We started thinking: what's really important to us? Unlike a lot of other startups, we're not a bunch of twenty-somethings sitting around in a garage pounding Red Bulls. We have families, we have kids, we have spouses. So we realized: family is really at the core. And we knew we were competing for talent. We were located up on the west side of LA, competing at the time with Myspace, with Google, with Santa Monica and the whole Silicon Beach scene; and down in Irvine, we were competing with yet another Google office, with Blizzard, and a bunch of other shops. So we were competing for talent, and we had to know what we could be competitive on. So what mattered to us? We realized telecommuting really matters. A lot of people value telecommuting, they value flex time. We live in LA, so we all hated traffic; the idea of being in the office at 8:30 or 9 and sitting on the 405, for those of you from LA, just didn't appeal to us.
So we knew: you know what, that doesn't matter. Just be here for a few hours of overlap, so we get face-to-face time and can collaborate. We knew we didn't want to create documentation for its own sake. We'd all done Fortune 500 stints and created mountains of Word documents that were obsolete as soon as we wrote them. What we needed was conversations; that's really important. You can see it in the photos: all the couches. We actually have a two-to-one ratio in the office, two desks to one couch. That's how important it is to us to be able to sit and talk to each other. We knew work and play were related. This is one of those things that is organic: Craft Beer Thursdays. Every Thursday around 4 or so, we crack open some craft beers in the office; there's even some beer trading going on. Not that kind of operation, I swear. It's our way of getting management, marketing, and product in a room with the people who typically don't interact with them, with something social to chat over. And a flat organization. Usually "flat organization" is just lip service, but let me explain. Stan and I have read all these books, particularly Rework. I mentioned earlier that we hired a lot of people from Fortune 500s, and Rework is one of the first books we hand them on their starting day: you need to unlearn everything you learned at a Fortune 500 and come to work with a different attitude. We have people who came from shops like Etsy and a bunch of others, and we base a lot of our thinking on the founding fathers of startup culture: 37signals, Etsy, and so on. I mentioned earlier about seeking to enable people. A common theme at a lot of other financial institutions is that they merely "happen to have an IT department", and IT becomes this organization in the company that just gets in the way. Want to release a new product, or a new marketing initiative? IT is in the way. If you've read The Phoenix Project (an excellent book), it gives exactly that perspective on how IT is perceived in all these other organizations. We brought a couple of other voices to the table as well. Science: we have a chief scientist on staff leading that work. Thought leaders in finance: people we brought in from the financial industry who understand the constraints and some of the complexity and compliance issues, but who are willing to leave the "the way we've always done it" opinions at the door. And basically, we learn. We really do have a lot of grounding in the idea that all voices matter, and diversity of thought is something we really, really value. So what that adds up to is: Payoff is trying to restore humanity to finance. One of the challenges in hiring, I'll give you an example. Hired.com categorizes companies by industry, and one of the things we always hear is: "I don't want to be part of the finance industry." My response is always: give me ten minutes. I swear I don't want to be part of the finance industry either, but here's how we're different. Greed is just at the core of a lot of these companies; that's something to be honest about.
And a lot of these companies are built on mainframe technologies. When you have those kinds of companies... I'll just say it: have you ever called into your bank and the agent says, "please wait while my system pulls up your record"? What the hell is that? Why am I waiting, in this day and age, for a record? And the agent tells me, "I'm sorry, my system seems to be having a case of the Mondays." I don't like that. I just want to reset my password for the website; why is it so difficult? So I really feel technology can restore some humanity to the industry, and that's the thing we're pursuing. Now, this next part is probably the exact opposite of the advice you just heard if you stayed for the Shopify scaling talk. For us, it really means: we need to survive in order to hit scale. The average transaction size for us is $15,000. This isn't an eyeballs business; Stan and I came from eyeball businesses, back in the dot-com days, where lots and lots of eyeballs went toward banner ad impressions. This is a completely different business for us. So we figured: Rails is good enough for Cookpad, it's going to be good enough for us, because in terms of pure eyeballs we're really not going to get to that point. And so, modularity for agility is something really, really important to us. The assumption here is that we want to throw away these pieces as we learn. We need just enough to get off the ground, to be compliant, to be secure, but we knew we were going to replace these pieces over time. So the emphasis for us was building lots and lots of really small things. Test coverage is an obvious must in order for us to swap those pieces out with confidence. Imagine doing risky things like changing how you decision a loan application, how you decide who you lend money to. Some of these things have thousands and thousands of test cases, and we put a lot of trust in that coverage. APIs and microservices are really, really important to us. We run our own gem server, and almost everything we have is a packaged gem. That's what lets us switch things out; I'll talk more about reducing mean time to recovery, and gems are a big part of being able to roll out a version of something. Vendors: we'll dive into an entire section on that, but the key thing is that we wrap every single vendor integration and pull the abstractions out of it. There's usually more than one vendor for each thing we do, so we pulled out the business abstractions (add an account, remove an account, things like that) and insulated ourselves from those vendor integrations. That's really important, I think, for almost anyone dealing with a mature industry. And then speed of iteration is one of the most important things. We took a lot of notes from Etsy. Etsy isn't part of the Rails community, but they have a lot of ideas that are really important, the main one being: automate almost everything.
So in terms of deployments, testing, and IT automation: we use Chef to build our servers; for deployments we use Capistrano and Mina; and for testing we're using Capybara, RSpec, Selenium, and friends. We also evaluated Robot Framework, which is a Python testing framework. Feature flags are something we've been learning, and we've had good success with them. A feature flag is basically just a switch in some code, and it lets you separate deployment from launch (there's a tiny sketch of the idea at the end of this passage). That's really key: when your code hits production, that's not the moment you have to pay attention. You flip the switch and turn on the feature when your engineers are in the house, or whenever whoever you need is around. And that obviously goes together with deploying in smaller bits and pieces: you can push code out to production, do dark deployments, and get a lot more confidence. All these things are basically about maximizing confidence. We're coming into a very highly regulated, very high-risk industry with a high cost for screw-ups. You screw up, you lend to the wrong person, and that can really bite you in the butt. Then monitoring is obviously something we need as well, both to see how things are reacting after you turn on a feature flag, and to catch it when you screw up anyway. And lastly, again, compliance. There are pieces you can move really fast on, and, just like Shopify, just like Etsy, there are pieces, like PCI compliance or SOC 2 compliance, that you really need to carve out and move at a slower pace, in order to stay compliant and keep the regulators happy. So with that, I'll hand it over to Stan to talk a little more about what we're doing. All right, good afternoon, everyone. I'm pretty excited to be here. As we mentioned, we only started doing Ruby two years ago; I never thought we'd be standing up here with you guys... sorry, too soon? We started out with a team of really experienced developers, all coming from corporate backgrounds with Java and C# experience. But we knew from the proof of concept that we could pick up new languages, and Rails in particular was pretty easy. So it was kind of nice that we found something we could leverage, something that gave us the ability to achieve a really complicated goal in a short time frame, like the year that we had. So I'm going to talk a little about what was really useful for us. The number one thing is the Ruby community. Like I said, we came from Java, where Oracle is kind of the dominant force, and it's not really that open and welcoming. Here, we were able from day one to find detailed blog posts or screencasts whenever we ran into a problem; just Google it. And the technologies we chose had people behind them who were very willing to share information. With things like Redis and Elasticsearch, if we ran into a problem, we were able to fire off an email to the people who actually wrote the software or the gems. And that was a little surprising, because normally when you're dealing with a vendor and you want something fixed, or you have a question (how does this work, how do we configure it?), they're not going to get back to you in less than an hour.
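The feature-flag sketch promised above: minimal, with hypothetical names, not Payoff's actual implementation.

    # A switch in code: deploy the code dark, flip the flag at launch time.
    class FeatureFlag
      FLAGS = { new_credit_policy: false }

      def self.enabled?(name)
        FLAGS.fetch(name, false)
      end
    end

    if FeatureFlag.enabled?(:new_credit_policy)
      # new code path: already in production, invisible until the flag flips
    end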
We would actually be able to write to fairly famous Rubyists, people who wrote key gems for us, and they would actually answer us. So I really encourage you guys: if you run into anything, just be willing to communicate with people, because people within this community are very willing to share knowledge. And of course, one of our favorites is RailsCasts. We actually had this running joke for a while: we'd implement something, run into problems, and say, "oh man, that was kind of complicated, took me a while to figure out." Then, "you know, I bet there was a RailsCast on that." And we'd search RailsCasts and go, "oh man, we shouldn't have wasted an hour trying to figure that out, we could have just learned it right there." Another great one for us was Ruby Rogues. That podcast really introduced us to a lot of the different tools in the community, and the people, and what we should be reading. So that was a really good one. Now, what was hard for us? I think a lot of you, or maybe some of you, are starting out and learning Ruby, so hopefully we can share the things that were hard for us and help you out. When you're learning a language, it's a lot more than just the syntax. For us, there was a whole ecosystem to learn: tooling, best practices. We hadn't really done Git and the pull-request workflow before, and it took us a while to get our heads wrapped around it. We also moved from xUnit-style testing to BDD with RSpec, and that took a little while as well. In the beginning, we had a lot of code that was syntactically correct. Coming from the Java world, you have static compilation and it tells you, okay, it compiles; but that doesn't really tell you that everything is working. With Ruby there's duck typing and everything is dynamic. It took us quite a long time to change our mindset in order to write code that looked less like Java; code that wasn't just syntactically correct Ruby, but idiomatic Ruby. The best way to do that is just to read a lot of code. Eloquent Ruby is usually on the required reading list: we throw a new engineer in, have them write some code, and then hand them Eloquent Ruby pretty close to when they start writing Ruby. That's when everyone realizes: oh, my code still looks like Java. We start people out with a whole two weeks of: hey, learn about the tools you'll be working with, read these things, and learn about the culture. So that's really big for us. Now, I'm sure you're aware that hiring is very difficult these days. With Ruby it's actually even more difficult for us, because there's a smaller pool, at least within Orange County; most everybody is doing C# or Java. And surprisingly, when you tell them, hey, there's this cool language Ruby, you can switch to it, a lot of people are not that interested in learning something totally new.
So we're happy when we find the candidates who embrace learning and have the agility to switch. There's also a great, interesting study that the Harvard Business Review published about women not being willing to apply for jobs when they didn't feel they already had all the listed skills. We actually learned from that: we started rewriting our job listings to say, hey, come here and learn on the job. You don't need to know every little bit as long as you're willing to learn. Another thing: growing pains. We started out with four engineers who all basically started the company's codebase together, and a lot of Rails literature, the kind of thing DHH talks about, assumes a monolithic application. As you grow bigger, which increases communication overhead, you have to start splitting things apart, and you have to take a critical view of which pieces of advice apply to a system and a business of your size. All right, so let's talk a little bit more about hiring. We like to hire nice people; that's one of the things we try to hire for. Isn't that kind of non-critical in a business environment? Shouldn't you be hiring for X years of experience instead, or someone really experienced with the finance industry? Well, we don't think so. We think culture is kind of the most important thing, and you have to have this environment where people enjoy working in order to be successful. If we didn't have to be paid to go out there and work, would we still enjoy being with these people, building the same things we're building? That's a question we ask ourselves a lot. And in my experience, and this is me being very blunt: it's definitely possible to learn Ruby, and you can definitely jump into a domain and pick up how it works. But I've never seen someone who was on a power trip, or really rude, or condescending, change their ways. It's not scientific, but it's a good reason to choose nice over those other things when you're hiring. And we have a good question we ask ourselves when we're interviewing somebody; the stuck-on-a-plane question, as we call it: if I was stuck on a plane with you for the next three hours, would I still want to work with you? Can I hang out with this person afterward and relax, or is it draining to be stuck talking to them? I think that's a really good litmus test. So why are we talking about these traits as if they were mutually exclusive? There are people who are nice, competent, and have domain knowledge. But in our experience, especially down in Orange County, it's really a trade-off; it's the CAP theorem of hiring, as I like to call it. You have these three sliders, and you have to choose which things to optimize for. We always go for competent and nice. So this is what we ended up with; this is who we are. As you can see, we're a very diverse group, and that actually is really important. It's kind of surprising: there was a study recently that showed that a diverse team will actually outperform a more capable and experienced team that has less diversity.
So we felt like diversity is another thing that we should look for. And what we built, I think, is a really diverse team of really capable people. And when we needed to, we let them learn the domain knowledge of the finance industry. And we really didn't feel like we had to make compromises to get this. So what do we look for when we hire? Provable competence. You've got to have somebody who is able to learn and, you know, reason about code. There's a certain kind of thinking that needs to be there — logical kind of reasoning. We mentioned learning agility. So people who came from a different language, they were able to kind of twist their minds around new things and apply themselves and learn Ruby. So that's fantastic. So polyglots are great. And I mentioned nice already. What you'll notice that we don't have up here are the traditional deal breakers, say X years of Rails experience, or, you know, you must be some sort of expert in the finance industry. We think those are kind of negotiable things and kind of optional in our hiring. So that brings us up to: the vendors can't keep up. So we're in the financial industry, but we kind of have really internet-company kind of roots. And you'll see this with Airbnb and Uber saying they're in these kind of entrenched, established industries, but they're not really from those industries. They have very different DNA, and we kind of feel the same way. We're in the finance industry, but we're not really of it. But the vendors, as it turns out, kind of are. They're very successful because they're kind of the winners of the last generation. And because of that, they're a little bit slower than a new company. They're not as hungry. They're basically doing what worked for them in the past, but things nowadays are a lot faster. And you can't really make mistakes either, because if you have a problem and a customer doesn't get a fix, they're going to start tweeting about it. And that's only if you're lucky. The worst thing is if you're a trending topic on Reddit — then you're really screwed. So initially, we were really, really scared about building the hard pieces — things like underwriting and loan management. We didn't have the domain experience, and we figured these vendors, they've been here for decades, they must know what they're doing. But as it turns out, it's kind of more perception than reality. So they have this image of stability, kind of like a giant rock-solid billboard. But behind the scenes, they have a lot of different silos within the company. So maybe the ops team doesn't talk to their dev team. And then at the end of the day, they're like, hey, we fixed it. Great — what was going on? We don't know. We'll have to talk to our ops team. We'll get back to you in like 48 hours. And we're like, wow, that's kind of really unacceptable. And we learned pretty quickly that we should take all those things in-house, so we don't have to rely on slow vendors. So here's an example of something that we took in-house. It's the credit policy, which is how we approve loans. It's a piece that has a lot of high risk, where if you made an error, it would have a huge cost associated with it. Basically every loan application in the system goes through this process. And we also change it a lot. As we learn new things, we make improvements to the credit policy. And just by taking it in-house, we reduced the turnaround time for a change from two weeks down to two days.
And that's not just about speed, either — testability was vastly improved when we took it in-house. We're getting better confidence that way. We also found that SLAs are not very useful. The best guarantee you can have, really, is being able to fix a problem yourself. Mean time to recovery is the thing we really focus on. With an SLA, you know, your hands are kind of tied. If there's a problem, you may be able to diagnose it, but there's nothing you can do to actually push a fix out there. And you're just basically burning the team's time and effort on a problem that you can't solve yourself. And that's a huge problem. Surprisingly, vendors are not that averse to eating the cost of this. And, you know, it's great for them because they don't have to, you know, worry about things. But for us, you know, the small startup — that can really kill you. And there's a really good litmus test as well: we found there are some vendors out there that actually don't have a test environment. And you should really beware if they don't have test environments. So I just want to include a chart visualizing kind of the timeline and gains that we got from in-housing things. We found this to be true, you know, within our industry, but we suspect it's also true of a lot of other established industries out there. So if you're a startup, you're probably trying to disrupt one of these industries. And we really encourage you to, you know, get out there and just have a lot of belief in yourself. So I'm going to hand this back to John now. How are we doing on time? We have six minutes. It's a rounding error. I'm sorry. So I mentioned earlier we're not just a bunch of programmers. We're, you know, people with families. Our engineering team is actually a quarter female. So a lot of working moms. And so for us, you know, not having a life, or not seeing our kids, or not being able to do the things that we do — you know, there are multiple people coaching kids' teams. And there are also people who have passions. Almost everyone, I think, has a passion outside of just their job. And so for us, that really meant not trading off things that we felt were really, really important. And again, that's one of those intangible things that's much harder to describe to a candidate that's interviewing. And so, you know, how did we do this? You know, it's one thing just to say, yeah, family time is important, but that sounds pretty cut and dried, and it seems somewhat obvious to say out loud. But I will say that Ruby really enabled us to do this. You know, being able to leverage a lot of open source, and the language itself, obviously. The example I always use is: how many lines of C# does it take to open up a file, read some lines, and then output another file? And then how many lines does that take in Ruby? And so, you take that and you multiply it over and over and over again, you add the testing culture kind of baked into it and how many times that's saved your bacon, and then you add in just all the smart defaults out in the gem and Rails communities — whereas with, you know, a lot of the other frameworks, you can do whatever you want, and you don't really have guidance for that. And it takes a lot more time to train as well.
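To make that comparison concrete, the Ruby side of the example really is only a few lines — a minimal sketch with made-up file names (the C# equivalent would typically involve stream readers, writers, and using blocks):

```ruby
# Read one file, transform each line, and write another file.
File.open("output.txt", "w") do |out|
  File.foreach("input.txt") do |line|
    out.puts(line.strip.upcase) # stand-in for any per-line transformation
  end
end
```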
So, there's generally a set, established way of doing something, at least for the 12 months or so before the new version of Ruby — sorry, 24 months, until the new version of Rails comes out. And then automation. Again, this is just something that's really, really baked into the organization, but, you know, this is how 20 engineers act like 80 engineers: doing a lot of this automation. And the other thing I didn't talk much about is high-fidelity prototyping tools. So we use these prototyping tools to hand off from our product team to our engineering team. Rather than having nonsense Word documents or wiki documents or whatever, we do a lot of prototyping that we can use to do user testing in front of actual customers, and then also to communicate ideas between product and engineering as well. So, fire drills. Getting a phone call and, you know, having to walk away from the dinner table, or having your Saturday interrupted, or having to carry your laptop in your car all the time — we think it's just kind of ridiculous. And so, even before, you know, the speed science talk this morning, Friday deployments have always been taboo at Payoff. Not because we can't deploy on a Friday — we can. We just enjoy our weekends, and we enjoy our Friday nights. And so, again, feature flags is a really, really important one for us. If DevOps finds an issue or someone is on call, turn off the flag and go back to sleep. Do not wake up the entire engineering team at 2 a.m. on a Saturday night. It just doesn't make any sense anymore. Maybe it made sense in 2004; it certainly doesn't make any sense today. And so, this is where we're at today. We launched in December. It took us 16 months to launch. We went from 20 total employees to 80-something employees today. So, you know, the rest of the company was pretty busy while engineering was building some of these pieces out. Stan talked a little bit about the growing pains. One of the other things we didn't touch on too much is, at 20 people you know every single person's name; at 80 people, that's just a lot more relationships to groom. If you think of people in your company as network nodes, the number of connections between 20 nodes versus 80 nodes — with n people there are n(n-1)/2 possible pairs, so it goes from 190 connections to over 3,000. And so, for us, this balance between life and work is totally achievable. And that's really what we're here to sell. We're not here to sell you a Payoff loan — if it fits for you, great. But what we're saying is that startup life has become something that is parodied on a TV show now. And so, for us, we really think that there is a sustainable way to work for Payoff — or, I mean, to work for a startup. And, you know, there are these things that you usually hear — hey, does a company need to be ruthless to survive as a startup? There are all these stories of these startup founders that are just ruthless and execute at all costs. And we say, you know, we don't think it needs to be that way. We think that you can have a family, and we think that the tools exist in this community and among others to have a startup that, you know, does good. That you can use technology to help people. And that's not mutually exclusive with profitability, or even making returns for your investors. And those things aren't evil either. They all need to be kind of working together. And that's really what we're trying to sell here: it doesn't need to be that way. This community has the tools to do that.
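To make "turn off the flag and go back to sleep" concrete, here's a minimal feature-flag sketch. It's purely illustrative — the flag name is hypothetical, and production apps commonly reach for a gem such as flipper or rollout instead:

```ruby
# Gate risky code paths behind a flag that on-call can flip without a deploy.
class FeatureFlag
  FLAGS = { new_underwriting_flow: false } # toggled via config or an admin UI

  def self.enabled?(name)
    FLAGS.fetch(name, false)
  end
end

if FeatureFlag.enabled?(:new_underwriting_flow)
  puts "serving the new, risky code path"
else
  puts "serving the stable path" # flip the flag off at 2 a.m., not the team
end
```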
So a quick recap. Again, culture is something that you have to think of from day one. Scale can come later — you can retrofit scale, with the cloud and all the things that you have available today. It's a much different environment from the last time I attended RailsConf, and so forth. So you can retrofit scale now. Really, you should be building these things so you can swap them out. And so agility matters way more. I don't mean agile like scrum — I mean agile in the true sense of, hey, your founder might come in one day and say, you need to become a lender, you know, something like that. Pivot to something different. For us, we didn't have a choice. We had to teach people Ruby. So that was something that was really, really important — that we were able to train people, make people successful, get them through that transition. We hired the right people, and it also changed how we hired people. Nice — nice matters a lot. And so instead of having ego-fueled arguments in the office about "I'm right, my algorithm is better," the technical discussions that we have — heated discussions, oftentimes — are about what is best for the customer. And those are the discussions that are most important. You have those heated discussions, but about how you help your customers, instead of "I'm right, you're wrong, my idea is better than your idea, this technology is better than that technology." Five — I think the corollary there is that if you have the right team and you're using the right stack, have more faith in yourself. That's something that we didn't have from day one. We were really, really scared and in the unknown. And so we had this over-reliance on vendors and we just burned a lot of time. And as Stan said, we spent a lot of time fixing their problems. And finally, you can have a life. So "work with nice people" is kind of the takeaway. We hope that the things we said feel like common sense. If I say them out loud, you all kind of nod — you should work with nice people; that probably makes sense. But for some reason, at every job that I've had, that just wasn't true. There was always someone asking me to trade my family time or to make these trade-offs, and we just don't get it. So at Payoff, we don't think that should be the way it is. I think we've got two minutes for questions — happy to answer any questions about the stack and technology and stuff too. Yeah, the detail on your point about vendors is pretty interesting. That goes against a lot of advice you hear from a lot of, especially, startup people. And I actually do payments myself, so I know a lot of what you're talking about — we have to deal with vendors that don't have testing environments, and nobody likes them. But sometimes not using vendors is pretty hard, because we need to deal with an existing chain. So can you give some specific examples, or flesh that out a little bit? I'm going to take the high road and not mention specific examples, but I'll give you some use cases. The main challenge is really getting your head around the domain knowledge you would have had if you bought it. And so for us, I think we were fortunate enough to be able to attract a lot of people that understand the black boxes — maybe not the internals of the black boxes, which we may not want to replicate anyways.
A lot of these systems, at least in finance, are mainframe-grade systems — you dig down into more of their giant, old FORTRAN systems. And so for us, understanding the interfaces on the outside of the black boxes was the most important piece. And so I don't know if there's a secret sauce to that. I think it's just doing a lot of research and finding a lot of industry people, finding people that are willing to talk about those systems. I don't think we're calling out too many technology vendors — I think they're entrenched companies; kind of SaaS is what they call themselves. But for them, cloud is a new thing. So, for example, we were working with a bank, and the bank gave us a 40-page security questionnaire. And inside the security questionnaire, there's a mention of the word cloud once. Out of 40 pages — no SaaS, no cloud, nothing. It was asking about things like, if I had to find a customer's record, point me to the exact server where, at that exact time, this record lives. We had to explain that we use platform as a service, so a record could be on this node, and if performance gets bottlenecked on this server, then the virtual machine can move to that server. We had to explain all these things to them. And so I don't know if there is a secret sauce to it, but for a lot of these vendors, we did a lot of due diligence, and we really just had to trust ourselves at some point that we could develop this in an agile fashion — that as we learned, we adapted, and that really was the secret sauce. And so it came back to agility again. We went out there knowing that we didn't know a lot of things. But the main things we dug into first were the pieces that scared the bejesus out of us, and those usually had something to do with the client. Yeah, I was interested then — did you develop your own credit model, or did you acquire it? We did. But the thing that I think is key is that we had people that had done it before. And the first approach we went out with was something that was comparable to everyone else, and then we started iterating from there. So a lot of the things that we implemented, we implemented in such a way that we could compare to other comparables and make sure that we were benchmarking sensibly and benchmarking financially, and ensure that our financial products were performing in a way that we would expect, and then we iterated from there. The prototyping tools? We used a handful of things — we used Macaw, we used InVision, and a few more. If you send me an email at john at payoff, I'll be happy to send you the list. Any questions? Thank you very much.
|
Payoff has a crazy goal; we want to solve America’s credit card debt problem. After a risky 8-week Ruby rewrite of our 500k line C# personal finance website, we decided that wasn’t audacious enough. So we set out to become a financial institution in order to help folks get out of credit card debt. In the past 16 months, we taught the rest of the engineers Ruby, figured out how to lend money, wrote all the systems and automation needed for a small bank, and hired 70 more super nice people to make it real. If you have a crazy goal to change the world, come listen to our story.
|
10.5446/30635 (DOI)
|
Hello everybody, I'm Paola Moreto, the co-founder of a company called Nouvola. You can find me on Twitter at paolamoreto3. So a little bit about me: I'm a developer turned entrepreneur, I've been in the high-tech industry for a long time, and I love solving hard technical problems. I come originally from Italy, but I've been in the US for 20 years. And if you don't find me writing code, I'm usually outdoors hiking. So this is about performance, and we heard it loud and clear here at RailsConf that faster is better. We all know what performance is, but it's good to really understand the impact of low performance. And when I talk about performance here, I really mean speed and responsiveness — the speed and responsiveness that your application delivers to your users. There is a famous quote from Larry Page that says speed is product feature number one. So you really need to focus not only on your functional requirements, but also on the non-functional requirements, and speed is paramount for any web application today. And there is a lot of research and data that backs this and shows the impact of low performance. It impacts visibility — it definitely affects your SEO ranking. It impacts your conversion rate. It impacts your brand and the perception that people have of your brand, your brand loyalty, your brand advocacy. It impacts your cost and resources, because the tendency with low performance is usually to over-provision, and that's not usually the right answer. So speed today for web applications is paramount. And then if you have a DevOps model — if you move to a fully combined engineering model where development and QA are combined, and development, QA and sysadmin or ops are combined, and you have a full DevOps model where you have adopted continuous delivery and agile methodology, which is like the standard today for web development — then it becomes even more critical. So performance today in the cloud, where you have a fully programmable and elastic infrastructure and you're adopting continuous delivery, becomes even more critical. You need to be able to bless every build and make sure that not only does it work, but it works at the right speed. So then what? What do you do? How do we tackle this problem? Well, the first thing is you need data. This is a quote that I actually stole with pride from a talk yesterday, and I love it: in God we trust — everybody else, bring data. So is this a good model — you deploy and then hope for the best, and your customers or your users are essentially your QA department? It's not. I know of a company — it's an e-commerce application — and they say, oh, we know when we have a slowdown because our users complain on Facebook. Well, that's not usually the best way to do it. So you need data, and you need a lot of data. So let's get started. There are different types of data. Basically, on the right-hand side you have your deployments, your production, where you deploy and have your live traffic, and that usually goes under the big umbrella of monitoring. There you have all sorts of monitoring data and techniques. And then on the left-hand side, that's your testing environment — usually people have a pre-production environment or a staging environment, and sometimes you can also test on production. There you have your synthetic traffic. So you're simulating: you're creating your users and you're doing performance testing. These are the two most typical sources of data today.
So let's start with monitoring. You have many types of monitoring. You monitor your stack, you monitor your infrastructure, you do some sort of log aggregation, you monitor what the users are doing with your application — what the user behavior is, what the most typical user behaviors are, what the corner cases are. And then you have what is called today streaming analytics, or high-frequency metrics, where there are solutions that pump data out of the platform at speed. And these are some examples of the solutions that exist today. We're not associated in any way with any of those; it's just to give you an idea of the wide spectrum of monitoring and data instrumentation solutions that you can find. All of these complement each other. There is not one that fits all, and it all depends on your application. And there is an interesting problem today: you get all of these nice dashboards, and how do you correlate all of these data and figure out exactly what's happening? But the first step is definitely monitoring. As they say, you first instrument and then ask questions. However, monitoring is not enough. And why? First of all, your live traffic is noisy. You have all sorts of users doing all sorts of things. It's very hard to troubleshoot if you have a scenario you're interested in that's perhaps problematic, because at the same time you have other users doing other things, such that the system responds in unexpected ways. The other problem with monitoring is that it's after the fact. Monitoring doesn't help you predict, and it doesn't help you prevent problems that might occur with your application. So, as a friend of mine says, monitoring is like calling AAA after the accident. It's useful, but usually you want to prevent the accident instead. That being said, monitoring is the first line of defense, the first thing you've got to do. So then what are you going to do? We're going to pair up performance testing with monitoring. The two complement each other really, really well. And here's why. We're going to look at the left-hand side of our data sources. Here we're going to look at synthetic traffic — so it's not your live traffic; you have the ability to create your traffic. And you're going to do some performance testing. It could be on a pre-production environment, on a staging environment. Usually you don't want to mix your synthetic traffic with your live traffic, and you don't want the synthetic traffic to have an impact on your real users. That's why you test on pre-production. But you could also test on production for specific applications or specific times of the day, et cetera. So with performance testing, basically the users are not real, but the traffic is absolutely real. You have total control over the amount of traffic and the user scenarios, the workflows, because that's how you have designed your tests. So troubleshooting is simplified here, because you have an easy way to reproduce specific scenarios that you thought were problematic. And number two, in terms of peeling the onion, which is a typical troubleshooting approach, you have already controlled two variables: the amount of traffic and what the users are doing. And then the other advantage of performance testing is that you get end-to-end user metrics. You're measuring exactly what your users are experiencing. This is not about server metrics or database metrics or application metrics or Ruby metrics. It's the true end-to-end.
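As a rough illustration of synthetic traffic with controlled concurrency and end-to-end timing, here is a toy sketch — the URL is hypothetical, and a real test would use a purpose-built load tool rather than bare threads:

```ruby
require "net/http"
require "benchmark"

TARGET = URI("https://staging.example.com/") # hypothetical staging endpoint

# Fire `count` concurrent requests and report the average end-to-end time.
def simulate_users(count)
  times = []
  lock  = Mutex.new
  count.times.map do
    Thread.new do
      elapsed = Benchmark.realtime { Net::HTTP.get_response(TARGET) }
      lock.synchronize { times << elapsed }
    end
  end.each(&:join)
  avg_ms = (times.sum / times.size) * 1000
  puts format("%4d users: avg %.0f ms", count, avg_ms)
end

[100, 250, 500, 1000].each { |n| simulate_users(n) } # a crude linear ramp
```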
We've seen some numbers where there was a factor of seven between the end-to-end user metrics and the server metrics. So the server appeared not to be suffering, but the users did not get good performance at all. In order to have a complete view, you really need the end-to-end user metrics. And the other advantage: you can test and create realistic scenarios, as close as possible to what your users are going to do. And then the goal here is to figure out problems in advance, before they happen. So again, one of the problems with monitoring is that it's after the fact; here we're coming before the fact. We are doing things before they happen, so that you have time to optimize. And you can't optimize unless and until you measure. So you want realistic scenarios. If you have mobile applications paired up with your web application, then it's absolutely critical you test your mobile traffic as well. If you are a global application, around the world, you need to test from different geos. And then you measure the KPIs for the end-to-end user experience. The type of metric here is around time. And time goes by a variety of names — response time, or some people call it latency — but essentially it's time to complete transactions, time to complete specific requests: averages, distributions. You can get throughput, the number of successful requests per test or per specific time interval. And then you can also get error rates: if you see some suffering on the server side, you can start seeing errors. And then again, the goal is to resolve issues before you deploy. And then, when to test. Software changes all the time, and as such, it's important to understand whether a specific change is going to impact how your users interact with your platform. And it's not just important that the software does what it's expected to do, but that it also does what it's expected to do at the right speed. The other point here is that even if you don't change anything, things change around you. Applications today are spidery. They have hundreds of possible optimization points. They pull in plugins. You're sitting on a cloud infrastructure. So this is a complex problem, and the only way around it is to test often. Test for every change. Test if you're going into a peak of traffic — you don't want to go into that blind. Test if you have any type of infrastructure change or changes to your deployment. There is a very good example: at some point, several years ago, Heroku changed something in their routing system, and that change was not publicized — or at least it was not openly publicized — and it only impacted a specific set of applications, but it impacted them greatly. And people realized because they started taking measurements and saw a big difference. So the applications did not change, but in that specific example the cloud provider made a big change, and the only way to identify this kind of thing is to measure. So guess what? This is still not enough. And why? Well, you can get results like this, where you say, wow, I have a lot of errors. Under the traffic — I apply a linear ramp, that's the green bars — I get a ton of errors. My response time increases dramatically. Then at some point it decreases, because the server doesn't even respond to requests.
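The KPIs above reduce to a small amount of arithmetic once you have the raw samples — a minimal sketch with illustrative field names:

```ruby
# Summarize raw end-to-end samples into the metrics discussed above.
def summarize(times_ms, error_count, total_requests)
  sorted = times_ms.sort
  p95    = sorted[(0.95 * (sorted.size - 1)).round]
  {
    avg_ms:     (times_ms.sum / times_ms.size.to_f).round(1),
    p95_ms:     p95,
    throughput: total_requests - error_count, # successful requests in the window
    error_rate: error_count.to_f / total_requests
  }
end

summarize([120, 130, 145, 150, 400], 2, 100)
# => {:avg_ms=>189.0, :p95_ms=>400, :throughput=>98, :error_rate=>0.02}
```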
Or you could get things like, well, my tests are telling me that if I have 10,000 concurrent users, my response time deteriorates from 400 milliseconds to 2.5 seconds. So okay, your tests are telling you that your system is slow, or will be slow, under specific traffic and scenarios, but it's still not actionable. You still don't know what to do. You just know that you're going to have a problem. It's almost like I'm going to tell you, well, when you have 50,000 users on your platform, you're going to have a fever. But there is no medicine. So what if we can extract some more information from this data and find a medicine? Stay with me. If you look at the typical performance troubleshooting process, ironically, where people spend the majority of the time is, number one, in reproducing the issue with the right data, and number two, in isolating the issue. And then once you have done that, the actual fixing of the problem is relatively straightforward. For the reproducing, I have a very good example here. There is a company that I know, and their client was a big bank in India, and they had performance problems with their application. And it took two weeks — with the time differences, and the engineers on two different continents, two weeks with a whole team in a room and constant conference calls — before they were able to even reproduce the problem and have the data. So reproducing is partially addressed by performance testing. But then you're left with the issue of isolating the problem. And isolating a problem usually takes a lot of time and a lot of effort, and developers are left doing a lot of correlations with data — it turns out to be a manual and highly time-consuming process. But once we're done with isolating, the fixing becomes relatively straightforward. So what we want is the ability to go from — if you go to the left, that's before testing, that's when you're oblivious. You don't even know that you're going to have a problem. Then once you test, you're like, yeah, we're going to have a problem. I found out that I will have a fever at 50,000 users. And then we want the ability to have some help in localizing the bottlenecks, because we know that localizing is going to take a long time. And then after that, we can fix, and that leads to happiness. So then we're going to add the third step here. We talked about monitoring and all the data instrumentation you can use to extract data from your application with your live traffic. We talked about performance testing and how you can use synthetic testing, creating the traffic you want, to see how the application responds. And now we're going to extract another layer of information from our data to help us localize the problem. So what we want is leading indicators of performance issues. Again, we don't want after the fact — you want to figure out these problems beforehand, so you have the time to fix and to optimize and deliver the performance you want. And we have found that if we localize — if we are able to pinpoint, in these spidery applications, where the problem resides — then we can accelerate the troubleshooting process, which is otherwise quite painful. And we want actionable data. So in order to do that, we're going to add something else here. So we have our monitoring.
What you have in the middle is our monitoring, where you have your live traffic, all the monitoring data, and your data instrumentation. And we already talked about how it pairs up really well with performance testing, so the two go together. And now we're adding another layer: some data mining and machine learning to extract another layer of information from this data and help us localize. So this is how we do it. This is an example of a prototype that we built. You apply a linear ramp of traffic, so you do the synthetic testing. At the same time, you use the data instrumentation that is usually used for your live traffic — but in this case, we use it over your synthetic traffic, so it could be on your test environment. And then we mix it all up together. If we have historical data for that application and that test, we use that too. And then there is data analysis that basically makes an attempt at clustering and identifying statistically meaningful variations in all of these timings, and whether these statistically meaningful variations are clustered around a specific component of the application. So this is essentially how it works. First, you run a performance test. If your response time is good and you don't have any slowdown, then there is no problem at all. But if you have a slowdown — so we go back to the example where you had all of those reds, slowdowns and errors — then you're left with the problem of figuring out how to fix it. So the first thing we do is remove what we call network and external effects. We want to see if there is any correlation with data such as network time, DNS time, SSL time, and other data that are external to our stack. And if we don't find any correlations with those, then those are excluded from the data analysis. And then — assuming there is no correlation there — we look into the data set, and the data analysis identifies statistically meaningful differences using clustering and longitudinal analysis, and identifies whether these variations cluster around a specific sector. And then the results are displayed. So I think we've already covered it. The whole point is also the thousands of available metrics: we look at variations in real time, and we attempt to cluster them across specific components of the application, which we call sectors. So this is all using specific data analysis techniques. What we use is a mix of techniques, not only one; they all go under the umbrella of machine learning, or unsupervised machine learning, or data mining. Again, it's not just one technique, but definitely we use a lot of clustering and longitudinal analysis. So, ready to see some real data and a real-life example? I'll give you a couple of examples. This is a typical web application — it's a real application, not a test application. First we run some performance tests with a linear ramp up to a thousand users. So that's a thousand concurrent users. That corresponds to — usually we say there is a factor of a thousand — the kind of peak you could expect with a million monthly visits. And then we run some performance tests and we see that, as we apply a linear ramp, the response time deteriorates. It's actually three times as much under traffic as it is without traffic.
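The talk doesn't show the algorithm itself, but the core idea — flagging the point where a timing series starts deviating from its baseline in a statistically meaningful way — can be sketched very simply. This is only a toy stand-in for the real clustering and longitudinal analysis described above:

```ruby
# Return the index where samples first exceed the baseline mean by
# `sigmas` standard deviations, or nil if they never do.
def degradation_point(samples, baseline_size: 20, sigmas: 3.0)
  base = samples.first(baseline_size)
  mean = base.sum / base.size.to_f
  sd   = Math.sqrt(base.sum { |x| (x - mean)**2 } / base.size)
  samples.each_with_index do |x, i|
    next if i < baseline_size
    return i if (x - mean) > sigmas * sd
  end
  nil
end
```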
So this is definitely a case worth investigating. Then we go to the data instrumentation. The beauty of this model is that you can apply this method to pretty much any data instrumentation that you have or that you want to use; it's not married to one specific method or approach. In this case we use a specific data source, but again, you could use anything. And the way we look at data is that they're categorized under sectors — the various components. For each sector you have categories, then you have classes, and you have methods. So you actually have a lot of data coming up for each one of these sectors. And while the test is running, there is an agent that pumps this data constantly into our algorithm, and the algorithm works in real time to do this clustering analysis. At the end, the cluster result — this is kind of an eyesore, but basically you identify the methods that show variations in timing at the same time as the response time starts increasing. So they correlate well with the performance testing results and with the end-to-end user metrics. And this is kind of the end result. As a reminder, what you see on the left are the sectors. The sectors are large groups of data. You can actually dig down into this data and see exactly which component of the group created the problem. So what we see from here is that, for example, although this test ran successfully without errors and we put a load of a thousand concurrent users at the end, the browser — everything that goes under the browser component — starts suffering right before 200 users. So it starts suffering at the very beginning, and then it enters the yellow zone, what we call the T zone, a transition zone. That's where it's deteriorating, but it's not too bad. And then it enters a red zone, which is way, way over where it's expected to be. And then the next one that starts is the app stack. The app stack is essentially what's happening with your Ruby. That starts deteriorating right around 300 concurrent users and then enters the red zone later. So you can see that even though at a thousand users you see a tripled response time, things start deteriorating a lot sooner. Another very critical data point here is: what is the first component? Because sometimes you have a chain-reaction effect — if one piece slows down, then the others slow down as well. So what is the first component that starts slowing down and slowing down the system? In this specific example, it's the browser. Now, the browser, again, is a set of data which is represented here, and underneath you have another hundred data points. So from here you could actually dig down and see exactly which components within the browser cause this slowdown. So again, the objective here is to identify proactively — all before you actually have the thousand live users on your platform — what is going to happen under a specific workflow or scenario, and which components of your application are actually the root cause of the problem. So here I'll give you another one. This is another application. The categories are the same just because we look at the same data.
I don't have the raw data here, but you could dig down into all the methods that actually cause this. And here you have an interesting perspective. You still have the browser, you have the app stack that closely follows, but then you have what we call server and software, which goes from green to red. It doesn't even enter the T zone. There is almost a step function where the metrics go from really good to really bad. So, in summary, what we covered today: speed is product feature number one, performance is paramount, faster is better. How do we tackle that? We tackle that as developers with data. We start with monitoring. Monitoring is a good start, the first line of defense, but it's not enough. Add performance testing; it complements monitoring techniques well. That's still not enough, because what you want is some help in localizing the problem. So here we have performance testing, plus data instrumentation, plus machine learning — another layer that we can extract from our data, which we have called predictive performance analytics. And we got to see it in action in a couple of examples. So thank you. I think I can take some questions now. You can find me on Twitter at paolamoreto3, and I'm happy to hear your questions and feedback.
|
Applications today are spidery and include thousands of possible optimization points. No matter how deep performance testing data are, developers are still at a loss when asked to derive meaningful and actionable data that pinpoint bottlenecks in the application. You know things are slow, but you are left with the challenge of figuring out where to optimize. This presentation describes a new kind of analytics, called performance analytics, that provides tangible ways to root-cause performance problems in today’s applications and clearly identify where and what to optimize.
|
10.5446/30639 (DOI)
|
So, yeah, this talk is Amelia Bedelia Learns to Code. I'm Kylie Stradley. I just want to welcome you guys to Atlanta. I'm sure you've been welcomed already. I just want to brag and say that I live here in this awesome city. So there's that. It's just a fact. I live and work here, like I said. This is my first time speaking at a tech conference, if you didn't notice, by my complete lack of respect for societal expectations and how to not be a weirdo. Yeah. So I want to tell you guys about a character who was really important and special to me when I was a child. I guess before I get started, though, can I get a quick show of hands? Was anyone here, at any point — so now or sometime in the past — ever a child? Any current or former children? You laugh, but not everyone raised their hand. So we have a couple in the audience. Cool. Yeah. So the special character in my childhood was Amelia Bedelia. And I liked Amelia Bedelia because I really identified with her. She is notoriously literal, and she's practical to a fault. In many of the stories, she works as a maid. And in the titular Amelia Bedelia story, Amelia Bedelia, she's working as a maid and she gets these to-do lists. And on one of the lists, it just says, prepare a chicken dinner. And Amelia's never heard the term, or the phrase, chicken dinner before. And so she racks her brain to figure out the most logical conclusion. What is a chicken dinner? Amelia surmises that a chicken dinner is a dinner for chickens. So yeah, she serves up cracked corn, because if you know anything about chickens, you know they love eating cracked corn. You know who doesn't love eating cracked corn for dinner? Amelia's employers, the Rogers family. In the books, she is consistently just causing them to be super exasperated and flabbergasted at her unique new ways of being extremely literal. She's determined to complete work, even when it doesn't make sense to her. Another time in the same story, she has a to-do list item that says, dust the furniture. And the way that it reads is strange to Amelia, because it seems to indicate they'd like dust applied to the furniture, and she's like, that's weird, but that's what was written. So that's what I'm going to do, even though I think it makes more sense to remove the dust from the furniture. Does this sound familiar to anyone in here at all? A request is worded in a way that seems to indicate that something really, really strange is needed, and you're thinking, surely this isn't what they want — but it's what they wrote. So you deliver it to them, only to be informed that this is not the desired result. This happens sometimes in software development, maybe, to a couple of people. That guy laughed; it happened to him, so that's all I mean. Thank you, whoever you are. I feel like I have a really strong connection to you now, new friend. So as an adult now, you can see how I'm starting to think of Amelia less as just kind of a silly character and more as maybe kind of an archetype of a developer. The literal interpretations, the silly mistakes, the not asking for help. Yeah, and then at the end of the stories, Amelia always saves the day. She usually does this by preparing one of her awesome desserts, usually a baked good. In this story, Merry Christmas, Amelia Bedelia, she saves the day with a date cake — but because it's Amelia Bedelia, when she can't find the fruit dates, she cuts up a calendar and uses those dates instead. But everyone, yeah.
In the last picture of the book, though, everyone's eating cake and smiling, so I guess that is really good for them. That one still escapes me. I don't claim to be the Amelia Bedelia scholar, but everything ends well. And so I think that as developers, we tend to talk about our own stories this way — especially people who are maybe self-taught developers, or you went to a boot camp; just those of us who don't have the traditional computer science background. We say, you know, I wanted to be a developer, so I did this stuff and I made a lot of mistakes, and at the end we have this redemption that we talk about, and now everything's perfect. And that works because it's a really good story. People want to hear these good stories. You know what? Honestly, okay, I'll level with you. This is my first conference talk. I am extremely nervous. It is the afternoon. You seem sleepy. I'm seeing glazed eyes. So instead of doing the talk, can I read you a really good story instead? Yeah? Okay. Let me get my book. The name of the book is Amelia Bedelia Learns to Code. Written by Kylie Stradley — very good, prolific author, look that person up. Illustrated by Sam Smith, and inspired by the works of Peggy and Herman Parish. Amelia Bedelia is speaking with her employers, the Rogers family. It's time for an interview. Mr. Rogers says, Amelia, you've always done good work once we managed to get on the same page. We definitely appreciate your unique approaches to problem solving. But you're so literal, it drives me insane. It's like talking to a robot, or a computer. Talking to a computer, Amelia thinks to herself. Now that sounds like fun. If you have binary memorized still — I know everyone obviously hasn't memorized it, but if someone has — she's saying hi in binary. Amelia reads that to talk to a computer, you have to speak its language. But it looks like there are quite a few languages you can use to talk to computers. Amelia's new friends, some Rails Girls, mention she might enjoy a language called Ruby. Amelia thinks that sounds nice enough, but why Ruby specifically? Why Ruby? Why not Ruby? I bet you guys would like that, shout two strange foxes who have mysteriously appeared. Ruby has elegant syntax. It's natural to read and easy to write. They follow up: if you like Ruby, you'll love Ruby on Rails. It's a web framework that's optimized for developer enjoyment. Rails lets you write beautiful code by favoring convention over configuration. Yeah, what he said. An easy way to talk to the computer that's designed with developer happiness in mind? If being a developer is what it takes to talk to computers, I certainly want to be happy while I do it. Ruby it is, Amelia says to no one in particular, as the strange foxes have already disappeared in the same mysterious way they arrived. If you're a beginner or someone who hasn't read Why's Poignant Guide to Ruby, you should check it out. You might hate it, but that's what that joke is from. Sorry. I didn't want to make an inside joke. I thought it was rude. This Ruby on Rails stuff is easy, Amelia thinks. It's got HTML and CSS — I know how to use those. There's a database — I've worked with databases before, too. Amelia is working her way through a beginner Rails project when she realizes that she needs to add a table to her database. Looks like this schema file is a central resource for the state of the database. There's a bit of audience involvement on the next slide.
So I'll do it first and you guys can join in, and on the next ones you'll get it, because there's a pattern. So what does Amelia do? Oh my gosh, I knew you'd get it. She edits the schema, of course. Now she has another table in her database without involving the strange, randomly numbered migrations. Oh, no, Amelia. I can't read this at all, exclaims Amelia's computer. In some languages or frameworks, you can edit the database schema directly to make changes to the database, but in Rails, the database schema is just a representation of the state. The migrations seem to be randomly numbered, but those numbers at the beginning of the file are actually the datetime that the migration was created. Amelia nods and her computer continues. Since we've only built a really basic database, it seems like the migrations represent individual tables, but they really represent the changes that are going to be made to the database. I use the migrations to make the database for you, and I put the datetime of the last run migration at the top of the schema to let you know that that migration and all of the previous migrations are included. What a silly mistake. I won't do something like that again. I'll remember to write migrations when I want to update the database, and I'll remember to run them using rake so that they'll update my schema file. Amelia is cheerfully working on a brand new application, totally independent of tutorials and guides. She's not exactly sure what all she needs, but she knows she can cover most things using the Rails generator to scaffold new models, views, and controllers. You guys ready? So what does Amelia do? She uses the scaffold for everything, of course. I'll just use the Rails scaffold to create my new model. That's just as easy as writing the files myself. In fact, it's even easier. Plus now I have the controllers and views pre-made in case I need them later. Can you guys see from the illustration? Oops. She has a blueprint for a dog house, but because she uses the scaffold, she's building this dog mansion instead. Amelia, do we really need all of this? Amelia, the Rails scaffold is handy, but you don't have to use it to create everything. You can just use the generator to generate a model, view, or controller. You can even create new files without using the generator, and Rails' convention-over-configuration design will make sure things match up how they should. But it's so much easier this way, says Amelia. Easier for you, maybe, replies Amelia's computer. I still have to load all of the assets created by the scaffold. Plus, do we need all these scripts? I thought you just needed a model. You're right. We don't need all of this stuff. I didn't realize it added a lot of extra work for you. I'll be sure when I use the scaffold to only use the parts I really need.
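An editorial aside, to make the computer's two corrections concrete — a hand-written migration and the narrower generator commands. This is a sketch; the table and columns are made up for the story:

```ruby
# db/migrate/20150421120000_create_chickens.rb
# The leading digits are the creation datetime, not a random number.
class CreateChickens < ActiveRecord::Migration
  def change
    create_table :chickens do |t|
      t.string :name
      t.string :favorite_food, default: "cracked corn"
      t.timestamps null: false
    end
  end
end
```

Running `rake db:migrate` applies pending migrations and rewrites schema.rb. And when a scaffold is too much house, `rails generate model Chicken name:string` creates just the model and its migration, while `rails generate scaffold Chicken name:string` adds the controller, views, and assets too — the dog mansion.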
Okay. So I know I don't need the Rails generator to make everything. I can just create the files myself, Amelia says. She remembers what the foxes told her — that Rails is designed with convention over configuration. Still not sure what that means, but I want to write the best code that I can. I'll follow the framework's conventions and write good code the Rails way. In each of her new files, she mimics the code and style generated by the scaffold. If this is how DHH would write Rails, this is how I'll do it too. He didn't come to this, did he? Is he here? Raise your hand if you're DHH. Perfect. Oh no, Amelia, exclaims Amelia's computer. Convention over configuration doesn't mean you can only use Rails within the framework's conventions. It means you don't have to configure the connections between conventionally named models, views, and controllers. When you copy in all that code, you can accidentally include features you didn't intend. For instance, leaving the scaffold's respond_to blocks in the controller — the ones with the format.json lines — could inadvertently expose a JSON endpoint for that controller. So that's what that means, says Amelia. I definitely don't want to expose a JSON API in this application. Just because I can do something with Rails doesn't mean that I should. I should only include those things when I need them, and not just hold on to them for later. You got it, Amelia. Amelia's having a lot of fun at her first hackathon. The team she's working with is requesting a ton of features, many of them directed at all different routes. Amelia's teammates are excited, but getting antsy for the new web views for their app. I need to make a lot of routes, Amelia says. She activates each of the routes by raking them before committing and pushing her code. With each new route, she writes a new HTML file, adds the route to routes.rb, and rakes the route. Rake is shorthand for Ruby make. So I rake each new route so it will be available on my web views. I'm not going to make the mistake I made with migrations and forget to rake my routes, expecting them to be there anyway. Did you guys read the Rails 5 proposed changes yet? They might fold the rake tasks into the rails command and change some of this stuff up, so it won't be as confusing. So this is a really good slide. You're welcome for this good slide and that fact. Oh, no, Amelia. You don't have to rake the routes to activate them for your application. Rake just shows you a preview of what the routes will look like as URLs. Why not? asks Amelia. I have to rake the migrations. It makes sense that I need to rake the routes too, right? Amelia's computer explains: it is a little confusing that rake runs your database migrations, but you don't need it to create routes. Rake is just a tool that runs tasks. The routes are created when you add them to routes.rb. There's no harm in raking the routes, but it's not needed either. Oh, I think I'm getting it a little bit now, says Amelia. Rake is a tool, and it can run a lot of different tasks. Unlike the schema, routes aren't generated — I wrote them myself, so they exist as soon as I write them. I can't believe I spent so much time raking each new route I added. My teammates are going to be excited to see all the new routes much sooner. You're really starting to get it, Amelia.
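Another aside, on what the computer means here — routes live in config/routes.rb and exist as soon as they're written; `rake routes` only prints them. A sketch (the route names are made up):

```ruby
# config/routes.rb -- adding a line here IS creating the route.
Rails.application.routes.draw do
  get "chickens", to: "chickens#index"
  get "coop",     to: "pages#coop"
end
```

Running `rake routes` afterward just lists the table of verbs, paths, and controller actions for reference — nothing is "activated" by it.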
Amelia is writing an application with her two out-of-town friends, Fred and Carrie. Fred and Carrie have been developers for a while and always jump on the latest trends in Ruby gems. Fred and Carrie say, gems are great. They're just libraries of code. Why write it yourself when you can put a gem on it? You can find a gem for everything. It's like you never have to write your own code. (What? Did you guys do that? That's so bad. I can't believe you guys would do that.) Amelia thinks, these gems are pretty handy. It seems like I can just drop them in wherever and have access to a ton of code written by someone else. So what does Amelia do? Amelia adds all of her favorite gems to each new project she starts. I'll save myself some time and some code. Fred says, you know what this app is missing? Gems. Let's spruce it up. Make it pretty with the Draper gem. Carrie says, what a sad little application. I know — I'll put the Rails Admin gem on it. That's a whole joke. Just kidding. I'm sorry. That's a great gem. Whoever did it, you did a good job. Did you see this app before? I didn't. Right? I thought so too. Thanks. Not so fast, says Amelia's computer. Gems can be very useful, but they're not the answer to every problem. Every gem you add to the Gemfile might have dependencies on other gems, and those might have even more dependencies on other gems too. All of those dependencies can cause complications and eventually make it harder to upgrade your project. Oh, no. I don't want that. I heard Rails 5 is coming out soon, and I'll want to try out Turbolinks 3. Too true, Amelia. I guess I should make sure the gems I'm including are the ones I'll really need. Amelia is working as an intern at a software development company. The client she works with, Business Corporation Inc., requests that new parameters be added for customer addresses. Parameters, Amelia thinks. I know exactly what to do with params in a Rails app. She does exactly what they ask for and adds them directly to the params hash. They'll be so excited. Have you guys — all right, two people have had potato hash before. Potato hash is hash browns and onions — and customer address parameters. It's an Atlanta specialty and you can get it at Waffle House. I'm glad my friends came to this. I get the feeling that some of you don't find this as amusing. Whoops, says Amelia's computer. A client can say parameters, which seems like a general term, but that word has more meaning in a Rails application. Turns out what the client really needs is just a table column to save records to. Well, at least my workplace has code review. So the mistake is caught and corrected before being deployed, says Amelia. Hey, don't be so hard on yourself, Amelia, says her computer. This is a great opportunity for you to push back on client requests in the future when they seem off to you. And you were able to teach a semi-technical client a little about Rails architecture in the process. Yeah, I guess you're right. Nothing bad happened and I learned something. I guess I am getting the hang of this. Amelia is coaching at her very first Rails workshop. She finally knows enough to feel comfortable coaching and helping others learn to code. She overhears a group talking about scheduling tasks for an add-on to the workshop's base project. They want to use a time field for this, but everything falls apart when they try to schedule dates very, very far in the future. Amelia's worked with future dates before and made this very same mistake herself. She heads right over and explains, hey, I've had that problem before too. It's an easy mistake to make, but if you use datetime instead, you won't have that problem. That's why I always use datetime whenever I need dates. That's a good one — you had to be here at the beginning for that one.
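One plausible reading of that workshop bug, sketched in migration terms — this is an assumption about what the group hit, not something the talk spells out: a :time column stores only the time of day (Rails pins it to a dummy date), so any far-future date is silently lost, while :datetime keeps the full timestamp.

```ruby
create_table :tasks do |t|
  t.time     :remind_at      # stores 14:30:00 only -- the date part is discarded
  t.datetime :scheduled_for  # stores 2525-07-04 14:30:00, far future intact
end
```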
And I can help them learn by pointing out the analogous parts of ActiveRecord in a Rails application. I think, though, that's what software development is about: making a lot of mistakes and learning from them. That was pretty cute, right? Yeah. But that's not really how it is, though. Before I told you the story, I told you the story about the story, the meta-story, if you will, which is that a lot of times as developers we frame our struggles like a cute little story and tie it up with a little bow at the end: it was hard, and then I figured it out and I redeemed myself and I feel good now. And Amelia always has a redemption at the end of her stories, too. But what you may or may not know is that there is a whole series of Amelia Bedelia stories. She's always having these ups, these highs, the date-time cakes, I assume, full of calendar clippings as well. And she's having lows, again, where she edits the database schema. And I mean, that's exactly how it is being a developer. You have the highs and the lows, you have the awesome days, and you have the Amelia Bedelia days. You don't really ever stop making mistakes. This is what the day-to-day is more like. So unlike a book, our stories don't end with a cathartic denouement. Ideally our stories don't end for a long time, because, I don't know, that might mean that you're dead or something, or you leave software. And that would be sad, and we would hate that. But beginners don't know that this is what our day-to-day looks like. And I wrote this talk because I want people to share their mistakes. So I guess I should probably put my money where my mouth is, and I'll go first, since I'm saying that I want you guys to share your mistakes. Have you guys heard Gary Bernhardt's talk? He gave it at CodeMash in 2012, maybe, maybe some other places, about "wat"s. Is that how you say it? That sounds familiar. All right, like two people have heard it. Okay, well, this word, which I won't try to say again, just means a weird quirk or strange unexpected bug in a language. And so Gary found a bunch of these in Ruby, and Python maybe, and definitely in JavaScript, where the language just behaves strangely. So recently, for me, one of my big mistakes: I was working with a more senior developer at my company, someone I really respect. And we were pretty deep, like elbows deep, in the mud of writing a spec for a strange, fragile case. And as part of this, we wanted to verify that something happened at a certain time. So we were trying to assert equality on Time.now. So Time.now == Time.now. And they were returning false. They were not returning as equivalent. And we thought we had found a wat in Ruby. And we were like, oh my gosh, these should be equal to each other, and they're not. Like, we're the Gary Bernhardts of our time. Even though he is alive and is the Gary Bernhardt of our time. Yeah. We were like, we're Ruby heroes. And so we go into the company chat room of the backend team and we tell our teammates, we're like, you guys can't explain this. Look, Time.now == Time.now returns false. Isn't that ridiculous? And everyone was like, that's pretty standard stuff. That makes sense. What is the problem here? And we were just so entrenched, so super laser-focused on this one thing that we were doing, we forgot how time worked. Like, that's a pretty silly mistake. And that happened recently. And not just that it happened recently to me, and like, I know about time and stuff.
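For the record, the wat dissolves as soon as you write it down; a minimal sketch:

    t1 = Time.now
    t2 = Time.now
    t1 == t2       # => false -- the clock advanced between the two calls

    # The fix is simply to capture the time once and compare that value:
    now = Time.now
    now == now     # => true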
But like, this happened with a senior developer that I work with, who I think is extremely smart. And they are extremely smart. But because we were so entrenched in what we were working on, we just forgot and we weren't thinking. And this kind of stuff happens all the time. These kinds of things happen to me. Obviously, like six of them happened up here, and I didn't tell you about all of them. They happen to the people that I really look up to and really respect. They probably happen to you. Probably DHH isn't in here, so they probably happen to you guys. Not DHH. That was a good joke. Whatever. Fine. And they happen to the developers that you work with that you might think are infallible. Beginner developers make a lot of mistakes. I'll come out and say it. They make a lot of mistakes. I make a lot of mistakes. But the biggest mistake I think beginner developers make, and the one I made too, was thinking that I was alone in all of this, that I was the only one making these dumb mistakes. It's easy to feel like you're the only dumb one. It's so easy to feel that way if you're not talking to people about it. And beginner developers are much more prone to these idealized notions because, unlike senior developers, they don't know about that sine wave of highs and lows that I showed you guys, the one we go through on a daily, or depending on how quickly you work, hourly, basis. And they don't realize this for two reasons. One, they don't ever ask us. And two, we don't tell them. They don't ask for a lot of reasons, but basically it boils down to the same thing: they're afraid of looking dumb. Afraid of asking such a silly question. Hey, did you ever make a mistake? How silly would you feel asking that to someone? Not at all? Okay, never mind. Please remove that from the video. Thank God for modern video editing. Either way, they're afraid of asking a question that makes them look silly or dumb. And they don't want to look like Amelia Bedelia. But it's okay. It's okay to ask these dumb questions. And it's okay to ask someone if they've ever made any mistakes, because advanced developers make a lot of mistakes. And we don't tell beginner developers this, right? If they wanted to know, they'd ask, right? Right? How many times have you ever said, hey, if you have any questions, just ask me. Right? So you put it out there. You've done your part. Yeah. They have no good reason to be afraid to ask, except they're new and they want to impress you. And from where they're sitting, it looks like you're holding all the cards. We don't tell them because we don't think about it. We forgot what it's like to be afraid to ask dumb questions. And so a lot of times we assume that other people aren't afraid either. I think this is one of the biggest mistakes advanced developers can make. And it's okay. It's okay that we make that mistake. But what I think we should do instead is try to remember the mistakes that we made when we were first learning. And I'm grouping myself with advanced developers now. I don't know how that happened. But you guys, you advanced developers, just, everyone should do it. Everyone does it now. We're all advanced developers. Congratulations. Welcome. We should remember the mistakes that we made when we were first learning. Sharing these slip-ups with the junior developers that you work with or the junior developers in your community, one, humanizes you to them.
When the person I was working with made that mistake, I was like, oh my gosh, they're a person, too, and not a highly sophisticated robot. Because there was some concern. This person seemed infallible up till this point. And I just felt better. I was like, okay. So even if I do get really good at this, I could still make mistakes. This is okay. And then it also just humanizes the process of learning to code. When we frame it as "you're going to make mistakes", not "if you make a mistake" but "you will make a mistake, and that's okay", then when you're learning and you finally recognize that the mistakes you're making are common, you feel like you're heading down the right path. It still feels not awesome to make a mistake, but you're like, well, at least a lot of other people were thinking that way, too. Making mistakes means that you're learning. It means that at some point you look back on what you did and realize that it's wrong. That's learning. I think that a lot of us say that we want to create safe spaces for people to learn in. And I think that a place that's safe to make mistakes in, that is the environment that's safe to learn in. We want people to learn the hard way, but we don't facilitate that. I was going to say, we don't make it easy for them to learn the hard way. If you really want people to learn the hard way, make it easy for them to talk about their mistakes. So talk about your own mistakes. Go to where you work. Tell your boss that you have to do this, because it's the law. Tell him you just want to talk about mistakes. Maybe just have a, hey, everybody does this; everybody, whenever you work on whatever app, everybody makes that mistake, don't worry about it. Just talk about your mistakes and create the safe place to learn that you say that you want. Illustrations for this talk were created by Sam Smith, who works with me. She is an excellent coworker and illustrator, and that is her website, if you want to go to it. I should have ended on a higher note, huh? My name is Kylie Stradley, and you can find me at KYFAST basically everywhere. Well, probably everywhere. Probably. And like I said, I wrote this talk because I've been really, really lucky that when I first started learning Ruby, I felt really comfortable making mistakes. I started going to Rails Girls Atlanta when I first started writing Ruby, and everyone was working on an extension of the Rails Girls workshop application, and they were going up, complete beginners, at each meetup and talking about the application that they built and the mistakes that they made. So when my database schema looked exactly like it should, according to the tutorial I was following, but I couldn't get the Rails server to start, I felt comfortable going up to the front of the room and saying, hey, I don't know what I did, but it's clearly wrong. So it turns out you can't, or you can, but you shouldn't, edit the database schema. Should you? It seems like no. This is what my sources are telling me. And then when I started working at Big Nerd Ranch, I was exposed to a ton of people who were extremely smart, who were extremely comfortable talking about their mistakes. And even though the environment was really great, when I first started, I was still really scared to talk about my mistakes, because I thought they would find me out. They would find out that I was Amelia Bedelia, and they would have read the books, and they would be like, we've got to get her out of here.
But even in that super welcoming environment, it took me some time to warm up to talking about my mistakes. Once I started talking about my mistakes, though, I started learning so much faster. And I think that if we really want to facilitate learning, what we need to facilitate is making mistakes. So. Yeah, just like that, that's it. Thank you.
|
"Did you really think you could make changes to the database by editing the schema file? Who are you, Amelia Bedelia?" The silly mistakes we all made when first learning Rails may be funny to us now, but we should remember how we felt at the time. New developers don't always realize senior developers were once beginners too and may assume they are the first and last developer to mix up when to use the rails and rake commands. This talk is a lighthearted examination of the kinds of common errors many Rails developers make and will dig into why so many of us make the same mistakes.
|
10.5446/30640 (DOI)
|
APIs. APIs are one of those things we all deal with constantly. We are always using them, we all have been using them, and they are what the internet uses to reach mobile. APIs are not a new technology; they were in use long before the internet came along, and the internet has pushed them even further by increasing the power of how machines can communicate with other machines. Now mobile and wearable devices are coming out, and this movement that is going on right now is pushing them even further. We as developers have been using them in Rails and in a lot of other tools and libraries, and there are a few great APIs out there, and a great API takes a lot of work. My name is Joao Moura; I'm a co-founder of a software company back home, and this is how you can find me around the internet. Today we are going to talk about AMS, APIs, Rails, and a bunch of related things. I tried to pack a lot of material into a short talk, but first I should probably set the scene a little. For those that don't know, I'm from Brazil. And Brazil is awesome. Actually, this is how far I am from home right now; it's a long flight. And what is funny is that every time I go to a conference anywhere in the world and I tell people that I'm from Brazil, everybody asks me about the same things. Every time I tell people that I'm from Brazil, it's really funny, because everybody asks me, what about all those trees and lakes and animals that you have around? It must be really fun. But actually, the place I come from looks something like this: there are no trees, no lakes, no animals at all. It's actually the largest city in the whole Americas, and also the largest city in the whole southern hemisphere. Yeah, it's pretty big. We have a lot of traffic, a lot of jams, so this is a picture of my usual Friday night; you can easily become one of those. What is funny about this picture is that if you look at this lane, even the motorcycles are having a jam. So, yeah, it's pretty tough. But one thing that we know how to do pretty well is to have fun. We Brazilians love to have fun and love to hang out, and one of the major pastimes we have in our country is soccer. Soccer is huge in Brazil; everybody likes it, everybody at least tries to play it, and it's really cool. So, last year we had a World Cup, and I was invited to join a startup called Alvitators as their CTO, and it was a lot of fun. I talked a little bit about it at another conference; if you want to know more about it, you can watch that talk online. But the code was a mess. We were a couple weeks away from the World Cup, and we were not ready for the wave of users that was about to hit the application. We had all the types of problems you can imagine: no background jobs, bad relationships, bad architecture, missing indexes, and no cache at all. So our APIs were really, really slow. Yeah, I have to be serious about it; we were right up against the World Cup, and I had to figure out what to do. So that was when I found love, and it wasn't with a person. I found love with AMS. How many of you have used ActiveModel::Serializers? All right, so a lot of you. But first, let's take it easy.
This is when I found love. I always loved open source, and I was already contributing with people and building new things, but this time I really found love with ActiveModel::Serializers and Rails API. For those that don't know, I'm one of the contributors to Rails API and ActiveModel::Serializers, and we are doing an amazing job; I'll talk a little bit more about that at the end. But first, to get started, let's talk a little bit about APIs. API stands for Application Programming Interface. And I would like to dive into the last word of that definition: interface. We are so worried nowadays about interfaces in general. Everybody knows how much user experience matters to an application. Everybody knows how much a user interface matters, and how much it can really set you apart from a bunch of other applications. But what we usually forget is that APIs are also a kind of interface, because there are users who are going to use them: your customers, other developers, your own team, all integrating with you. So, yeah, we should be putting the same effort and the same amount of thought into developing APIs as we do into the usual user interface and user experience. Because an API can really be one of the great assets of a company. Take Facebook, for example. Facebook is the biggest social network ever made, by orders of magnitude. And I don't know if you realize how much their API helped them achieve that. Nowadays we see this "sign in with Facebook" movement all across the web. We assume, every time we build an application, that we are going to integrate with Facebook. So, yeah, their API has been a major tool in their expansion. But an API can also be one of the greatest liabilities of a company. Because if you don't write a good API, it will definitely come back to bite you. You're going to have to invest money to fix it, you're going to have to invest time and people to fix it, and you'll probably give a bad experience to everyone who integrates with you. So, the question here is, how do we build a good API? What defines a good API? Well, there are eight concepts we can use. These eight concepts are what define whether you have a good API or a bad one: performance, scalability, reusability, availability, documentation; and it must be easy to learn, it must be easy to use, and mostly, it must be hard to misuse. These eight concepts are what define whether an API is good or bad, so we have to start worrying about them. Well, there is a great talk about this by Joshua Bloch, a Google Tech Talk from 2007, and he makes a point that is really precise and will really help you achieve a good API: your API should do one thing and do it well. So this is where we should start when thinking about APIs. But how do we actually do one thing and do it well? If we look at the technical side of it, we have a lot of tools to build APIs, a lot of tools that might help us build them. And I would like to tell you a little bit about why Rails is the best tool to build APIs. The thing is, I would like to tell you that, but it's not. Yeah. But it's a great one.
We know that there's no silver bullet; there are downsides. But Rails is a great tool, and I will show you how we can use Rails in a better way. The reason Rails is a great tool is that if you look at those eight points I listed for you, performance, scalability, reusability, ease of use, and so on, all of them are deeply related to conventions. If you follow conventions, you'll probably get good performance. It's probably easy to document. It's probably easy to use, hard to misuse, and easy to scale. They are all deeply connected with conventions, and conventions are deeply connected to Rails. Rails is all about conventions. So, yeah, this is why Rails makes for a good way to develop APIs. But I have to admit that Rails does ship a lot of things you don't need for an API. Yeah, Rails has a lot of parts that you might not want, and a lot of parts that you might want, when developing an API. So, this is why Rails API exists. How many of you have already used Rails API? All right, two. What Rails API does is remove the parts of Rails that you don't really need and don't really want when developing an API, and bring in new functionality that you might use and might want when you build APIs. It's pretty simple. And one of the projects that Rails API maintains is ActiveModel::Serializers, also known as AMS. The easiest way to define ActiveModel::Serializers is that it brings convention over configuration to JSON generation. It follows the Rails concepts, it follows the Rails logic, and it is also built on convention. It adds a layer to your Rails application that handles the conversion of an object you have into a JSON object. So, let me show you an example of what a serializer is in a Rails application. This is what a serializer looks like, and I'm going to go through it. First things first, you have the definition of a serializer, no worries there. We have a line that defines the actual attributes we want to return as JSON; in this case, title, body, and comments count. But the thing is, comments count isn't an attribute of Post. It's a virtual attribute that you create on the serializer. So there's a method that counts the comments of a post and puts that into the JSON. Once you have this serializer in your Rails application, this is the output you're going to get: a JSON object, every time a post is returned to your client, with an id, title, body, and comments count. Pretty easy, right? All right, so we have been using AMS for some years now. The first version that was kind of stable was the 0.8 version. How many of you have used the 0.8 version? All right. We also have the 0.9 version. How many of you have used it? All right, cool. And we have also been working on the 0.10 version. How many of you have heard about it? All right, cool. So, yeah, we have been working really hard for several months on the 0.10 version, and I would like to give you a sneak peek of it and the great features it has, and will have, and you'll like it a lot, I'm sure.
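Before the 0.10 features, here is the serializer from that walkthrough, reconstructed as a sketch (the method body is inferred from the description, not copied from the slide):

    class PostSerializer < ActiveModel::Serializer
      attributes :id, :title, :body, :comments_count

      # comments_count is not a column on Post; it's a virtual
      # attribute computed here and serialized into the JSON.
      def comments_count
        object.comments.count
      end
    end

    # Rendered output for a post:
    # { "id": 1, "title": "...", "body": "...", "comments_count": 2 }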
So, the first feature is that we implemented an adapter pattern. Right now, an adapter is the part of ActiveModel::Serializers that describes how the attributes will be serialized. It means that if you would like to use ActiveModel::Serializers to produce XML, for example, you could build an adapter yourself and plug it into ActiveModel::Serializers. So it makes it easy for people who want formats other than JSON to use ActiveModel::Serializers, and if they want to contribute, they can contribute to the project itself by writing new adapters that don't work only with JSON. So, yeah, it was a great change, and a small one, a quick win for us. The second feature I would like to talk about, and this one is really cool, and I really would like to focus a little bit on it, is JSON API. How many of you have heard about JSON API? All right. JSON API is a standard for building APIs in JSON. It started some months back; some of the brilliant, greatest minds and friends that I know have been working on it. You might know Steve Klabnik, who is giving another talk at this conference, in another room. They started building it, and there are a lot of other brilliant people working on it right now. The concept is to define a standard way to build JSON APIs, and it brings all the good things that conventions always bring. You don't have to negotiate the format with your fellow developers, and a frontend developer doesn't have to worry about how pagination is represented or which keys will be returned by the API. So, yeah, it's really cool; it makes it a lot easier and faster to integrate with new APIs. We really believe in it, and we are glad to announce that we have already implemented the JSON API spec, at release candidate 3, in this new version. The 1.0 version of the spec is supposed to be released on May 21, and we are already working on it, so after the release we will merge that into ActiveModel::Serializers as well. It's really cool, because if you choose to use JSON API, you can build APIs way faster, following conventions that anyone in the world can understand. So, yeah, we're really proud of it. The next two features I'm going to talk about are features that I'm pretty proud of, because I'm the one who made them, and I really think they can help a lot of developers out there; they helped me in improving my APIs back then, during the World Cup. The first one is caching. We have implemented built-in cache functionality inside ActiveModel::Serializers. It's a totally new implementation, written from scratch. It's optimized, and it follows the Rails conventions. I'm going to show you a sneak peek of how it works. Here is a usual serializer, a PostSerializer with title and body attributes. Let's say you would like to cache it. All you have to do is add a cache call, and it's done. Add a cache call to a serializer, any serializer, and it will be cached and reused. But you can go further than that: you can specify a key for your cache, or, even further, you can use all the Rails conventions. You can pass expires_in of three hours, and the cache expires every three hours. It's really helpful. So, let's imagine that this serializer is used by a posts controller.
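A sketch of the cache DSL just described; the key and expiry mirror the talk's example, and the class name carries over from the earlier sketch:

    class PostSerializer < ActiveModel::Serializer
      # One line turns on serializer-level caching; key and expires_in are
      # optional and follow the usual Rails cache conventions.
      cache key: 'post', expires_in: 3.hours

      attributes :id, :title, :body
    end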
So, this is a posts controller with an index action, usual stuff. Every time a user hits the index action, it generates the cache, as we saw in the PostSerializer. The good part is, when the user hits the show action, the same cache is reused, across every action that uses the PostSerializer. So, yeah, it's a really great optimization, and it might make a lot of difference in your application's API. The next feature I would like to talk about, and it's another really cool one, is fragment cache. Fragment cache can really save you a lot of time. What it does, and it also follows the Rails conventions, is let you cache specific attributes on a serializer. I'm going to show you a sneak peek of this one, too. So, take a look at this PostSerializer. It's a little more complex. We still have title, body, and comments count attributes, but this time we are overriding the title attribute and the comments count attribute, generating them inside the serializer. If you look at those two methods by themselves, you'll see that the title, despite being a modification of the usual title of the post, can definitely be cached. But the second attribute, the one that counts how many comments the post has, can't be cached, because it can change very quickly. So, how can we cache one part of this serializer, leave the other part out, and still get the benefits of the cache implementation? Right now, all you have to do is write cache with only title, again using the Rails conventions. You can also use the other option, except comments count, and it works. So, this is what you can do when you put everything together: you can define a cache, you can use a key, you can use all the Rails options like expires_in, and you can also make it a fragment cache. So, yeah, they are two really cool features that tie together really well. Now, there is this famous phrase: in God we trust, all others must bring data. So what about performance? How does this compare with the old versions of ActiveModel::Serializers? Because this was totally rewritten from scratch. Well, I made some benchmarks. If we look at the old version, the 0.9 version, this is what it looks like: a benchmark I made building a serializer 10,000 times. But if we look at the new version, we see a drop of roughly ten seconds, and if we use the cache, almost a fifteen-second drop. So, yeah, it's a really great increase in performance, and it might really help very much. There's a lot of work in progress, too. We still have a lot of things to do. We are working hard to implement fetch_multi, so we can get even faster responses when loading a lot of cached objects together. We are also working on a benchmark suite that will help us keep track of performance regressions. And there's one more thing I'd like to mention that's really cool, and I'm not sure I even believe it: we just released a new version of AMS, like, today, early this morning, just because of RailsConf. So, you are all able to use it as of now.
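Recapping the two caching features before moving on, a hedged sketch combining them (the title override here is invented for illustration):

    class PostSerializer < ActiveModel::Serializer
      # Fragment cache: only :title is cached; :comments_count is
      # recomputed on every request because it changes quickly.
      cache key: 'post', expires_in: 3.hours, only: [:title]

      attributes :id, :title, :comments_count

      def title
        object.title.titleize      # a derived attribute, safe to cache
      end

      def comments_count
        object.comments.count      # left out of the cache on purpose
      end
    end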
So, you are all able to update your gems, you are able to update your applications, and use the new ActiveModel::Serializers. I'd like to try something new here, so if you'd like to tweet about it, you can use the hashtag AMSten. Yeah, people will be like, whoa, what is AMSten? Let's check it out. So, yeah, use the hashtag alongside RailsConf, and people will be curious about what it is, and they should check it out. So, we released it this morning, and you are all able to update. We still have a lot of work to do, but yeah, it's really great. And actually, I have one more thing to talk about, and it's really cool. It's not for sure yet, but I think I should share it with you because I'm excited about it. We have been in touch with the Rails team for some weeks now, and we are really excited about the possibility of AMS joining Rails as a default. So you would be able to use it in any new Rails application. It's not for sure yet, but we have been going through it, and it seems that it's going to happen, and I'm really glad to share it with you, so we can push forward and make it happen. I also would like to give a special thanks to the whole AMS and Rails API team. They are amazing, specifically these four amazing people that I have been closely working with for the last months. There are a lot of other contributors, a lot of amazing developers and amazing people who have helped us out, but these four are amazing; they have been helping me do all this, and actually I have been learning a lot from them. I think I ended up going too fast; I was going to take another 15 minutes, but I think it's great, because you have some time for questions. I would like to thank you all for being here, and let you know that it was amazing to talk to you guys. Thank you.
|
A lot of people have been using Rails to develop their internal or external APIs, but building a high-quality API can be hard, and performance is a key point in achieving it. I'll share my stories with APIs, and tell you how Active Model Serializer, a component of Rails-API, helped me. AMS has been used across thousands of applications, bringing convention over configuration to JSON generation. This talk will give you a sneak peek of the new version of AMS that we have been working on, its new cache conventions, and how it's being considered to be shipped by default in new Rails 5.
|
10.5446/30641 (DOI)
|
So thank you guys so much for making it all the way over to the sponsored track. They put it all the way in the back. So I want to talk to you guys a little bit today about, well, it's kind of funny, because we really did shanghai this talk and turned it into a Rust talk. So, a little bit about Tilde: we started the company in Portland in 2012, and we're really bootstrapped and proud. And we work on a product called Skylight, and DHH's message yesterday, about being a small team, being scrappy, and needing to put tools into your backpack that are going to let you compete with the big guys, actually really resonated with us, because we only have five engineers on staff and we're building products that compete with companies that have IPO'd or are in the process of IPOing; yes, very much bigger competition. And fundamentally what that means is that we need higher-leverage tools. So when we built Skylight, the thing we really wanted to solve, the thing that really put a fire under us, was that all of the tools we were using to measure the performance of our applications were reporting averages. And as DHH said, this is a DHH-blessed rant, and it came across my Twitter feed again yesterday. DHH wrote this really great blog post in, like, 2007, ancient times, where he said: our average response time for Basecamp right now is 87 milliseconds, which sounds fantastic, and it easily leads you to believe that all is well and that we wouldn't need to spend any more time optimizing performance. But that's wrong. The average number is completely skewed by tons of super-fast responses to feed requests and other casual clients. You can have a thousand requests that return in five milliseconds, then have 200 requests taking two seconds, and still get a respectable-looking average. Useless. And I actually think it's worse than useless; I think it's actively misleading. So instead what we need are histograms. We were like, great, we have received this wisdom from DHH, so let's go build a product around it. And so that's what we did. We built a product where instead of the average, you can see a histogram. And this is super important, because looking at a histogram makes some things just so obvious. You can see cache hits and cache misses, because they show up as a bimodal distribution. You can also see the 95th percentile, which gives you a much better sense of the average worst case your customers are experiencing. And we did all of that using very high-leverage open source tools. Our backend in particular is built on a product called Apache Storm. I don't know if you can read below; it says distributed, resilient, real-time. I wish we had known when we adopted it that that was a pick-two list. Carl bore the brunt of that. Anyway, you may have seen our talk last year at RailsConf; I wanted to grab a slide from it, and I apparently grabbed all the animations that come with it. We gave a talk on our architecture, and hopefully you guys really enjoyed that. So this year we want to talk about a different high-leverage tool, which is Rust. The tagline for Skylight that really motivates us is: how do we give our users real answers, so we can dig through all the data so they don't have to? Answers, not data. And as it turns out, doing this answers-not-data approach requires a lot of data. We have to collect so much information.
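The arithmetic behind that point is easy to reproduce; here is a quick sketch, with invented numbers, of how a bimodal distribution hides behind a healthy-looking mean:

    # 1,000 fast requests and 100 slow ones -- a bimodal distribution.
    times = [5] * 1_000 + [2_000] * 100              # milliseconds

    average = times.inject(:+) / times.size.to_f      # => ~186 ms, looks healthy
    p95     = times.sort[(times.size * 0.95).floor]   # => 2000 ms, the real story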
And when we wrote the first version of our agent, which runs inside of your application, we wrote it in Ruby, which is really great; you know Ruby. But we quickly realized that if we were going to collect the amount of information we needed to build the product we wanted to build, Ruby just had some fundamental performance problems that weren't going to be acceptable. We really needed a tool that was going to give us low-level control, like C or C++. But we were afraid, because Yehuda and I are mediocre programmers. And we know that our software runs inside of your application, and the idea of us writing something with, like, a segfault that would crash your apps was just totally terrifying for us. So we needed something that would give us the high-level safety guarantees of Ruby, but with low-level control and low-level access. And Rust was the answer. When we adopted Rust, it was still pre-1.0, around version 0.10, so we decided to make a big bet. Well, it's not really such a big bet anymore; they've committed to semantic versioning, so here we are. So we decided to rewrite our agent from Ruby into Rust. We called it our featherweight agent. And this, I have to say, has been one of the best decisions we've made. It was very nerve-wracking to bet on this pre-1.0 language, which is low-level and making all these quite intense promises. But it's been really great, because the Skylight agent now, in addition to collecting so much more information than our competitors', just sips resources by comparison, because we're writing code that's essentially operating like C. In addition to better performance and being so lightweight in terms of resources, it also lets us build features that we would never be able to build if we didn't have that low-level access. Being able to write native code lets us go a level deeper, into MRI itself. So for example, this year we launched a new feature that actually shows allocations. You can actually see how much memory is being allocated in your Rails app, in production, which is huge when you're trying to track down memory issues, and we can do that at a really granular level. And this is also letting us ship a new feature that we're announcing today, called Trends. Trends is a new weekly email that you'll be able to subscribe to, launching this week. You'll get a weekly email showing you not just your 95th percentile but also your median. So it's a really great way to track changes over time. So the thing that I'm asking you to think about, and Yehuda is going to get into the meat of the matter, is this: with a language that offers you low-level control but high-level safety and expressiveness, if you're a small, scrappy team, what can you do? What new opportunities and new features in your apps does that open up? So without further ado, here is Yehuda with his Rust spiel. Okay. Hmm, this mic isn't working. Oh my God. All right, I can do a handheld one. I guess I'll just start going and hopefully people can hear me.
Boom. Is that better? Yeah. Oh yeah, that's better. Okay. All right, I'll just take the handheld one. Let's do this. Okay. So, time to talk about Rust. All right. Since we're at a Rails conference, I'm going to start by talking about Node.js. So the thing that's interesting about Node.js, from the perspective of Rust, is that when people look at programming languages and try to think about which one to use, a lot of people plot programming languages on this expressiveness-versus-speed graph. And I actually find this to be a somewhat pointless exercise. And I use JavaScript as an example of this: when I started writing JavaScript in '05, JavaScript was pretty bad on the expressiveness axis and even worse on the speed axis. But there was one thing that people underestimated about JavaScript, and that was the fact that JavaScript was everywhere, right? JavaScript was on the client side, where it was basically all you got to write, so you wrote a lot of JavaScript on the client side. On the server side, its share was very tiny; maybe the red sliver there is Ruby, and the yellow is PHP. So JavaScript was everywhere on the client side, and it wasn't very popular on the server side, so people kind of dismissed it. But they missed the ubiquity of JavaScript on the client side. And what happens when there's an advantage like ubiquity is that people go and say, okay, well, JavaScript is low on the speed and expressiveness scale. Well, let's write V8. They write V8, boom, now it's way faster. Oh, but people don't think it's expressive? No problem, we'll go and make it more expressive, right? And so this idea of programming languages as living in fixed places on the X-Y axes of speed and expressiveness is kind of not the way I think about programming languages. I think about programming languages and tools in terms of what they enable. Obviously, everyone in this room uses Rails, and for a lot of people, Rails enabled you, or at least, I'll speak for myself: I really didn't know what I was doing when I started, and it enabled me to build pretty ambitious stuff that I wasn't able to build before. And the thing that was really interesting about Node, I think, is that Node allowed a bunch of people who previously only wrote frontend code, and 99% of people write frontend code using JavaScript, to write backend stuff. And a lot of people like to say, oh, do you really want those jQuery jockeys writing your server side? And that's a pretty good joke that you can tell. But the industry is actually moved forward largely by a lot of people who you may not necessarily have wanted to build that thing, but who end up getting pressed into service. I think, for me, that's definitely my story. I got pressed into service and ended up doing a lot of things I wasn't ready to do. And so for me, looking for a technology that enables me to do my job is so important. So that's what Rust is really good at. Rust is not ubiquitous by any stretch. But Rust actually enables people like you and me and Tom to be systems programmers. And usually when I say that, people say, oh, yes, Rust lets you be a systems programmer, and I get a look like this.
Like, am I a systems programmer? What are you talking about? Somebody else can be the systems programmer. Although, as Tom pointed out, sometimes you don't have a choice. And then usually when I get asked about it, people say, oh my God, that sounds really scary. Systems programming sounds super hard. Obviously I played with C a few times and that was crazy. And people are told all the time that it's super dangerous. And then I say, no, Rust is great. It has some of the high-level affordances, and it's also not as dangerous as C; it's safe, actually. And then people say, oh, so systems programming seems cool, but what is that? What is systems programming? What's the definition? So there are actually a lot of definitions, but I'll just give you my personal take on it. There are a few different things that systems programming means to me. One of them is that you get to write code without a GC. And there are a lot of reasons why you might care: if you're a high-frequency trader, you care about GC pauses; and in our case, you don't want to embed a GC'd language inside another GC'd language, right? So there are a lot of reasons you might want to program without a GC. Also, programming directly against the metal. I don't mean it the way Node people say "on the metal"; I mean literally writing against the lowest levels of abstraction that you have access to, and doing that without additional abstraction costs. You shouldn't have to pay extra layers of cost just to program at the bottom of the stack, right? Also, in terms of runtime: most programming languages have a pretty heavyweight or involved runtime. Usually when people say systems programming, what they mean is that there's either no runtime at all, or the runtime is very lightweight and pay-as-you-go. As you need it, you use it. Also, and this is a thing that came out of C++ that I think is important, you should be able to write abstractions and have those abstractions cost nothing. You should be able to write functions, make structures, and organize your code in a good way, without those abstractions adding cost as you go. And finally, I think what systems programming means most of all is writing code where performance matters. Normally the answer to "why Rust?" is, well, it doesn't end up mattering. Who cares? I'll just write my Rails app and it will be fast enough; it doesn't matter. But occasionally you end up writing code where it does matter. This happens in basically every app: there's a performance-critical area, and it just stops being true that performance doesn't matter, and the amount of time it takes you to get to reasonable performance is actually higher writing in a language like Ruby than writing in a language that's optimized for good performance. So for me, systems programming is for the cases where the story we tell ourselves about performance not mattering doesn't end up being true. But it doesn't mean a bunch of stuff, though. It doesn't mean malloc and free. It doesn't mean you have to write assembly language. It doesn't mean you're writing code that only talks to Unix. It doesn't mean you're writing handcrafted makefiles.
It doesn't mean you have to care much about what the linker is doing. And it doesn't mean that your entire application is written in a systems programming language. It may mean that you use a little bit of a systems language for the areas where it matters, but most of your app is still written in something like Ruby. So a good example of this, like Tom said, is Skylight. Skylight basically used Rust because we needed to embed one programming language inside another programming language, and two GCs is not good. We needed high performance and we needed low memory. So for all those reasons we ended up going with Rust. Firefox, sorry, Mozilla, is actually using Rust for a kind of different reason, which is that they're building a new browser engine called Servo, and for them, they needed to build something that was fast, safe, and parallel. They really wanted to explore different parallelism options. And interestingly, the Servo team is like seven people. You want to talk about a small team? There are five people building Skylight; that's seven people building a browser. And they're using Rust because it allows them to explore ways of writing a browser engine that you can't normally explore in C++, ways that might have a different performance profile because of being able to do parallel stuff. So that ends up being pretty important. So those are all reasons why you might use Rust. I'll talk a little more at the end about Rust and Ruby. But before I talk about Rust and Ruby, I want to talk a little bit about how Rust works. So here is some Ruby code. You can see I have a very simple structure: a Point class that has an x and a y. This is a Ruby audience, so you should know what this code does. It's very simple. Then I have a length function. The length function basically makes a couple of new points, makes a line out of them, and calculates the line's length. So what happens is: I go into this length function, I make the first point; the point is created and goes onto the heap somewhere. I make another point, no problem, it goes onto the heap somewhere. I make a line; it goes onto the heap, and of course that line points at the two points. I calculate the length, and then those objects stay there, chilling out. At some point in the future, the garbage collector stops the world, traces through the heap, figures out what's going on, discovers that nobody needs those objects anymore, and they can be cleaned up. That's actually a pretty good story for safety, because basically by definition a garbage collector ensures that an object is cleaned up only after it is no longer being used. So the whole concept of a use-after-free bug is impossible by definition. If you have a garbage collector, use-after-free is impossible: free only happens after use is done. That's the point of a garbage collector. But that means you need a garbage collector. There's another way of dealing with memory, the methodology for dealing with memory in C or C++, and that methodology is called ownership. The idea of ownership is basically that whoever allocated the object is responsible for deallocating it. If I make the object, I have to deallocate it. The reason this is good, obviously, is that it means you don't need a garbage collector. The reason it's bad is that now you have to keep track of all that. You have to make sure you do the right thing. If I make the object and Tom frees it, basically game over: when I try to use it later, I get a segfault.
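Backing up to the Ruby slide for a moment, here is a rough reconstruction of the code being described (class and method names are guesses from the description, not the actual slide):

    # A Point with an x and a y.
    Point = Struct.new(:x, :y)

    # A Line holds two points and can compute its length.
    Line = Struct.new(:p1, :p2) do
      def length
        Math.sqrt((p2.x - p1.x)**2 + (p2.y - p1.y)**2)
      end
    end

    def length
      a = Point.new(0, 0)    # allocated on the heap
      b = Point.new(3, 4)    # allocated on the heap
      Line.new(a, b).length  # => 5.0
    end                      # the objects linger until the GC stops the world,
                             # sees nothing references them, and frees them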
So ownership is the methodology for doing systems programming, but historically, actually doing it correctly was very hard. So let me try to give a simple analogy. Let's say there's me, there's Carl in Thinking Man pose, and there's a bookshop. I go to the bookshop, and the bookshop says, okay, you can have a book. Now it's my book. I bought it, I bought the book from the bookshop. Now I'm allowed to destroy it, or burn it. Not the best analogy, but bear with me. So because I own the book, I am allowed to destroy the book. It's my responsibility. I don't have to ask anybody else for permission. Now, once I own the book, once it's my book, I can also give it to Carl. And now that I've given it to Carl, Carl is allowed to dispose of the book, but I can no longer dispose of it. I've given it to Carl; it's now Carl's book. So one way you might think about this is that ownership, both in the real world and in the programming world, is basically about the right to destroy something. So let me show you the equivalent code. Hopefully people can read this code. Probably not, sorry, 1024 by 768. You can see at the top that there's a Book. It's a struct; simple, if you're familiar with any typed language. A Book has a title, which is a string, and a bunch of chapters, a vector of strings. Then we have a function called main, and I'll just walk through what happens here. The first thing is we say, give me a book; read the book out of the filesystem. Now the book exists, and it's owned by this function. Ownership in Rust is usually rooted in a function, and here the function main owns the book. Now I go and print the book, and that's great: the book is printed to the screen, and I leave the function, main. So now the function main, the owner, doesn't exist anymore, and because the function main had the right to destroy, Rust will automatically go and clean the book up. You didn't have to do any work. You should note the lack of manual memory management here. What happened was that because the function main owned the book, and the function main doesn't exist anymore, the book gets destroyed. And I think that's a pretty good starting point if you want to do automatic memory management, but of course, if only the function that created something is ever allowed to use it, that doesn't make for very interesting programs. So let's look at a slightly more involved, a little more formal, example. This is an example where the main function makes the book and then calls the print_book function to print it. So what happens here is that we make the book as before; now the book is owned by the main function. Then we call the print_book function, and the thing to note here is that the print_book function just takes the book; there are no extra sigils or anything like that. And because it takes the book plainly, by default in Rust, that means ownership is transferred. What that means is that the main function no longer owns the book; instead, the print_book function owns the book. It does the println as before, and when it leaves, it is responsible for disposing of the book, and the book gets destroyed, as you might expect.
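Here is a sketch of the two slides just walked through, as runnable Rust; the Book struct and the read_book helper are reconstructions rather than the actual slide code:

    struct Book {
        title: String,
        chapters: Vec<String>,
    }

    fn read_book() -> Book {
        // Stand-in for reading the book from the filesystem.
        Book {
            title: String::from("Ownership"),
            chapters: vec![String::from("Chapter 1")],
        }
    }

    fn print_book(book: Book) {   // takes the Book by value: ownership moves in
        println!("{}", book.title);
    }                             // print_book owns it, so the book is dropped here

    fn main() {
        let book = read_book();   // main owns the book
        print_book(book);         // ownership transferred to print_book
    }                             // nothing left to clean up, and no GC anywhere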
Now you might be thinking, okay, so that's cool, but I've written a lot of Ruby, and there's nothing in Ruby really stopping me from using the book again after calling print_book. If you follow this methodology here, what's going to happen is that the main function is going to try to access a book that was already destroyed. And we said before that a garbage collector can deal with that kind of problem. So how does Rust deal with it? Let's look at this example, where we do the same thing as before, but after transferring ownership to the print_book function, we try to use the book again: we try to print the number of chapters. As before, we make the book, it's owned by the main function, we call print_book, we transfer it, and as before the book gets destroyed. Now we try to go back and use the book. Well, actually, no. What happens is we don't even get that far. The compiler discovers this ahead of time and says, you can't do that. What it says is: use of moved value. The value, the ownership, was transferred into print_book, and you're trying to use it. And you get a little note that says, the book was moved here. In the real output there would be a little arrow pointing at the print_book call. So you say, okay, I see that I transferred the ownership; I'm not allowed to use it anymore. And like I said before, this is fine for simple examples, but you can sort of intuit that this is not the whole story. You can't really write interesting programs if, every single time you want to do something with a value, you have to give ownership to the function that wants to do something with it. That's not very ergonomic. And in the real world of transferring ownership, we deal with that by saying you're allowed to lend something. I go to the library; the library doesn't have to give me ownership of a book. It lends the book to me, and I give a promise to the library that I will return it at a certain point, right? So the way we deal with the problems of ownership in the real world is by borrowing. That's also the way we deal with the problems of ownership in Rust. So let me give you an example. I have this book that I got from the library, and I say to Carl, hey, I will give you this book, but you need to return it to me by 5pm on Friday, right? And as long as Carl returns it by 5pm on Friday, everything's great. Now, the problem is that in the real world, there's nothing enforcing that rule. I can tell Carl to give me the book back by 5pm, and then he doesn't give it back by 5pm, and somebody else wanted it from me, and now I'm in trouble. But of course, in the programming world, we can do better than that. So let's look at another example, with borrowing. And you're going to notice that this is basically the same program we wrote before, except this time when we call print_book, we put an ampersand before the book, and we put an ampersand before the capital-B Book in the signature at the bottom. And the only thing we're saying here that's different from before is that instead of transferring ownership from the main function to the print_book function, we are lending the book to the print_book function.
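And the use-after-move slide, sketched with the same Book and helpers as above:

    fn main() {
        let book = read_book();
        print_book(book);                    // value moved into print_book here
        println!("{}", book.chapters.len());
        // ^ rejected at compile time:
        //   error: use of moved value: `book`
        //   note: value moved here (pointing at the print_book call)
    }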
And it's required to give it back to me at the end, and that happens automatically. So let's look at what happens. I start off as before: the book is owned by the main function. But now, because I call print_book with the ampersand, it gets lent to the print_book function. The print_book function prints the thing to the terminal, and when it returns, it gives the book back to the main function. And because the book is given back to the main function, when I go to print the line afterwards, the main function still owns it. It can happily do that, and everything goes along as expected. So that works pretty nicely. But there's one additional piece you need to understand for how the whole system works, which is the fact that you can lend something that you borrowed to somebody else. We call that sub-leasing. The idea behind sub-leasing is that the first person to borrow something isn't necessarily the last person to borrow it. So in the real world, you can imagine: I go to the library, the library lends me a book, and the library says, hey, I need you to return this book by 5pm on Friday. Great. So I remember that I need to return the book by 5pm on Friday. And then Carl says to me, hey, I want the book; I want to borrow the borrowed book. So I can give Carl the book, but I say to Carl, hey, I need you to give me this book back by 4pm on Friday, because I know I have to give it back to the library by 5pm. Then Carl returns the book to me, I return it to the library, and everything's great. Again, in the real world this gets complicated. When you start doing sub-leasing, you end up with complicated sub-leasing arrangements and restrictions, and that's mostly because in the real world, people are very bad at honoring the leases we ask them to honor. But in the programming world, the compiler can enforce it for us. So let's look at another example with sub-leasing that's a little more involved. The important thing here is that we're able to write arbitrarily involved abstractions, just like you would in any programming language, as long as you follow these basic ownership rules. So here's the function main again. We're going to get the book; the book is owned by the main function. Then we're going to call print_book, lending the book to the print_book function. So it gets the book, but it doesn't do the printing itself; it delegates that to the print_title function and the print_chapters function, right? So it calls print_title. That lends the book another level down. It does its println, and then it returns back up. Then the print_book function goes another level: it calls print_chapters, which takes the list of chapters, prints them, and returns. The print_book function returns. And then, as soon as the whole main function is done, it can destroy the book. So the really cool thing about all of this is that in all these cases, the thing we're doing is pretty normal, right? If you think about it, in most programming, when you call a function, you're basically lending.
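The borrowing versions, again as a reconstruction: the ampersand lends instead of moving, and a borrowed book can be lent further down the stack.

    fn print_title(book: &Book) {              // a sub-borrow, one level down
        println!("{}", book.title);
    }

    fn print_chapters(book: &Book) {
        for chapter in &book.chapters {
            println!("{}", chapter);
        }
    }

    fn print_book(book: &Book) {               // borrows the book from main
        print_title(book);                     // lends the borrow further down
        print_chapters(book);
    }                                          // borrow ends; main has it back

    fn main() {
        let book = read_book();
        print_book(&book);                     // & = lend, don't transfer
        println!("{}", book.chapters.len());   // fine: main still owns the book
    }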
But we have to pay the garbage collection overhead in all programs, all the time, in all cases, because somebody might want to do that. In Rust, the way it ends up working is that you start off with this assumption of ownership transfer, and then you can lend things as much as you want — that's a thing you're free to do. And once you've done that, you can completely eliminate the cost of garbage collection across the entire system. So there's one last piece, which is mutability. So far we've been talking about read-only things, and you can imagine that if you're only dealing with read-only things, you can lend as much as you want: anybody can look at it, nobody can mutate it, so it's totally fine. You can have 50 people looking at something at the same time and it's fine. But what if I want to allow mutation? Mutation adds a little bit of a wrinkle. Let's say I go to the library, and the library says: here, you can have this book, and you can feel free to change it — write in it, fold the corners, something like that. And then Carl says, hey, I want to borrow the book. I might want to say: Carl, you can borrow the book, but I don't want you changing it. So I say, hey, return the book on Friday, but don't change it; he returns the book, I return the book, everything is great. Again, in human terms this gets even more complicated — it's very difficult to enforce rules like these. In Rust, again, it's a pretty straightforward extension of what we've been doing all along. The first thing to notice is that, by default, everything in Rust is quote-unquote immutable. But when we say immutable, we don't mean that the object is frozen, or that there's some runtime check, or anything like that. It's just that, by default, if you have a reference to some kind of object, you're not allowed to mutate it — the compiler will prevent you from changing it. Now, let's say I want to add a new feature here: the ability to add a bookmark to the book to say where I'm up to. In order for me to be allowed to mutate that bookmark, I need to start off by saying: give me a mutable book. Don't give me a read-only book; give me a mutable book. And that's fine. And now, when I call the print book function — before, I just called it with the ampersand, which means I'm lending it to you — this time I call it with ampersand-mut, which means I'm lending it to you and you're allowed to modify it. So I call print book with the mut, and I send it over. The thing to note here is what happens when print book calls print title: print book is allowed to mutate the book, but print title is only printing the title — it shouldn't need to mutate anything — so print book doesn't let it. Print title works as before: it borrows the book read-only and does its thing; print chapters does the same as before. And then, at the end of the print book function, I mutate the bookmark. This is kind of a wonky example, because print book probably shouldn't be changing the bookmark. But the key point is that if you look at the main function, it's clear to me that the print book function might be mutating something, just by looking at that signature.
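A sketch of the mutable-borrow version (again, assumed names):

```rust
struct Book {
    title: String,
    bookmark: u32,
}

fn print_title(book: &Book) {
    // read-only borrow: the compiler won't let this function mutate
    println!("{}", book.title);
}

fn print_book(book: &mut Book) {
    print_title(book); // a &mut borrow can be re-lent read-only
    book.bookmark = 1; // allowed, because we hold &mut
}

fn main() {
    let mut book = Book { title: "Rust".to_string(), bookmark: 0 };
    print_book(&mut book); // the &mut at the call site flags possible
                           // mutation to anyone reading main
    println!("bookmark at page {}", book.bookmark);
}
```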
You can see where mutation might be happening, and all the other signatures on this page don't take a mutable borrow, so we know that they can't mutate anything. And that ends up being pretty important — that ends up helping a lot. So now that I've talked about all the different rules, I'll recap by saying there are basically two rules of borrowing. First, you can have as many outstanding read-only borrows as you want. In the real world, you can't give the same book to many people at the same time, but in programming you can have as many pointers to the same object as you want — so you can have as many outstanding read-only borrows as you want. In contrast, a mutable borrow is unique: if you have a mutable borrow outstanding, no other borrow, mutable or read-only, can exist at the same time. And again, it's very important to note that this is not enforced at runtime. It's not like every single time you try to borrow something, it checks to see whether anybody else has a borrow on it — there are no mutex locks at runtime. It's all enforced by the compiler, and that gives us some nice properties. So I want to show you some examples of what that might look like. We start off with this function called same. The same function borrows two books, and it just checks that the title of the first is the same as the title of the second, and that the chapters are the same as each other — with ==, which does a deep comparison. If you call same with book 1 and book 2, obviously that's fine: I'm allowed to lend two totally different books to the same function. Now, what if, instead of lending book 1 and book 2, I call same with book 1 twice? Because this is an immutable borrow — a read-only borrow — this is still fine, even though I'm taking two borrows of the same book. From the perspective of the same function, it doesn't matter whether those are two references to the same thing or to two different things; the fact that they're read-only means it's safe, and we're allowed to do it. But now what if I make another function? This function is called copy, and it takes a read-only book and a mutable book, and it copies the read-only book's title into the mutable book's title. So here we said: copy the title from book 1 into mutable book 2. Like I said before, that's still totally fine, because in this situation we have book 1 and book 2 — there's no aliasing going on, and we're following the rule of borrowing that mutable borrows must be unique. Now, what happens if I change it? Say we only make one book, and then I call copy from the book to the same book as a mutable borrow. From the perspective of the copy function, it doesn't know that these are the same thing, right? But what the compiler ends up saying is: you're trying to take a read-only borrow and a mutable borrow at the same time, and that violates the rules of borrowing. So: error, cannot borrow book as mutable because it is also borrowed as immutable. And it gives you a note that says the previous borrow of the book occurs here — which is useful — and then it says the previous borrow ends here.
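A sketch of both functions and the aliasing error (assumed names):

```rust
struct Book {
    title: String,
    chapters: u32,
}

fn same(a: &Book, b: &Book) -> bool {
    a.title == b.title && a.chapters == b.chapters
}

fn copy(from: &Book, to: &mut Book) {
    to.title = from.title.clone();
}

fn main() {
    let book1 = Book { title: "one".to_string(), chapters: 3 };
    let mut book2 = Book { title: "two".to_string(), chapters: 7 };

    println!("{}", same(&book1, &book2)); // two different books: fine
    println!("{}", same(&book1, &book1)); // two read-only borrows of
                                          // the same book: also fine

    copy(&book1, &mut book2); // no aliasing: fine
    // copy(&book2, &mut book2);
    // ^ uncommenting fails to compile:
    //   error[E0502]: cannot borrow `book2` as mutable because it is
    //   also borrowed as immutable
}
```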
And at the end of the day, borrowing is the secret sauce of Rust. Borrowing allows you to do very, very involved, complicated, recursive things that you might think would be very hard to do safely, but by following these very simple rules, you get done what you need to get done. I actually want to skip ahead a little bit here, because I want to get to the Ruby part, so I'm going to skip over the closures stuff. The TL;DR on closures is that closures follow the same rules as regular borrows: if a closure closes over a variable, the same ownership rules apply to it. So if you pass a closure to another function, and that function tries to call it in a way that would violate the ownership rules, you get an error. You can learn more about that in the Rust book. So what about concurrency? The interesting thing about concurrency is that the whole problem really boils down to one thing: shared mutable state is the root of all evil. And there are basically two strategies that people use to deal with this problem. One solution is channels. The idea behind channels is that you're not allowed to have two copies at the same time: if I want to give a value to some other thread, I have to pass it through a channel, and then I can't touch it anymore. The other strategy is the functional style: you can never mutate anything, so you don't even have shared mutable state. Rust does a combo. It says: you can have shared state, or you can have mutable state, but you can't have state that is both shared and mutable at the same time. And if you think about that, it's exactly the same idea as the rules of borrowing we had before: you can have as many outstanding read-only borrows as you want — that's shared state — or you can have a single mutable borrow — that's mutable state — but you can't have both at the same time. You can alias, or you can mutate, but not both at once. And as you use more Rust, you'll see that the Send trait and the Sync trait are basically the way Rust enforces this internally. There's nothing special about threads, or about any particular data structures in Rust; there are just two traits that represent thread safety, essentially. And libraries — there are a bunch of libraries that deal with things like atomically reference-counted values (Arc) — are able to implement these rules themselves. Just a couple of simple examples here. We have the spawn function, and the spawn function takes a closure. You can see that if we call spawn and the closure tries to borrow a book from the outside, it's going to give us an error. The reason for that — and I'm handwaving over the lifetime details — is that the closure can run at any time. We call spawn, and that thread's closure can run at any time in the future; but we know from before that the main function owns the book, and we can't let the closure borrow the book and run at some arbitrary time in the future, because as soon as the main function exits, it's going to destroy the book. So that's an error: the compiler says the book does not live long enough, and it's a compile-time error.
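A sketch of that, including the move fix that's described next:

```rust
use std::thread;

fn main() {
    let book = "Rust".to_string();

    // thread::spawn(|| println!("{}", book));
    // ^ fails to compile: the closure only borrows `book`, but the
    //   spawned thread could still be running after main's frame is
    //   gone — "closure may outlive the current function" /
    //   "`book` does not live long enough"

    // Adding `move` transfers ownership into the closure instead:
    let handle = thread::spawn(move || println!("{}", book));
    handle.join().unwrap();
}
```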
The fix is to add the keyword move. That basically says: this closure is going to take ownership — everything it captures from the outer scope is moved inside of it. And then it's just a regular transfer of ownership into the closure. I'm going to skip the next section. So, one thing I didn't talk about seriously is high-level productivity — maybe you got a sense of it from the one slide I showed you with closures. Rust has a lot of higher-level ideas that are pretty familiar to higher-level programmers: people coming from Ruby, or from high-level JavaScript. There's OO, right? You can define a type and implement methods on it. There are traits, which I don't have an example of, but the idea behind traits is sort of similar to Ruby mixins, or to Ruby refinements, which are themselves basically scoped mixins. So you can do duck-typing-style things, where you implement a method for one particular type, or you can implement traits that work like Go interfaces — there's this very flexible way of dealing with mixin-type situations. There are also iterators, which are a way of doing something that looks very Ruby-ish, in the sense that you might map, filter, and reduce over things, but which under the hood ends up being compiled to be as fast as the loop you would have written by hand. There are also enums — if you're familiar with other languages that have variant types, it's like that — and you can put methods on those enums, which is awesome. There are also overloaded operators, which is pretty great: you can overload the indexing operator, but also things like the plus operator. That's always pretty cool. I didn't talk about these because they're not the big story. They're all cool things about Rust, but the big story is that ownership lets you do really low-level things without the fear of segfaults that you'd otherwise have. And I wanted to show a really quick demo, in the time I have left, of what this might look like with Ruby. So let me share my screen — sorry, I don't have much time — and I'll make it bigger. Basically, I wrote a little Rack app. The thing on top here is the distinctive part: I'm using Fiddle, which is built into Ruby — Aaron has talked about it a lot — and I'm basically just saying, okay, I want to dynamically load this dynamic library that I built from Rust, and I'm going to define three functions against it: one that creates the analytics object, one that tracks a request, and one that returns a report. Then I made a little class called Analytics which just wraps that; you can see that it initializes and does its work by calling into Rust. So all of these things happening here are calling into Rust. And then I wrote a little Rack app here, which is just a handler around that analytics object: when you go to slash report, it calls report and gets a string back; otherwise, it just tracks the request URL that arrives. If I go into the Rust code, you'll see that it's actually pretty vanilla Rust code.
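A quick aside before the Rust code, to make the iterator point above concrete — a sketch, not from the demo:

```rust
fn main() {
    // Reads like Ruby's select/map/reduce, but compiles down to a
    // plain loop: no intermediate arrays, no block-call overhead.
    let total: i32 = (1..=10)
        .filter(|n| n % 2 == 0) // keep the evens: 2, 4, 6, 8, 10
        .map(|n| n * n)         // square them
        .sum();                 // fold into a single number
    println!("{}", total);      // prints 220
}
```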
There's a struct here called Analytics. It has three hash maps, tracking hosts, schemes, and endpoints. When you call the tracking function, it basically goes: okay, parse the URL — and if that fails, it's not a valid URI, so bail — then increment each one of the three hash maps with the information pulled out, and also increment the total. So we have some Rust code that does what we wanted. Now, the really interesting thing here is that you see this no_mangle attribute, and then pub extern "C". Other than that, it's pretty vanilla Rust. But by adding these annotations, what we're basically saying is: from the perspective of any other programming language, this Rust code can be treated exactly like C. You can see here, for example, that the tracking function takes a pointer to the analytics and a buffer; and if I go back into the config.ru, you'll see that that's exactly how it's declared on the Ruby side — it takes a C pointer to the analytics and a buffer. So the cool thing about Rust is that even though it has all these high-level safety guarantees, and the syntax isn't exactly like C, the underlying semantics — if you tell Rust that something should be usable from C — are exactly the same as C. So now let me just run it. Actually, I should show you something first: Rust has a package manager, so there's a Cargo.toml, which is the package description. You can see the package name for my demo, I have some authors, and then I describe my library: I say it's a dylib, which is important for this demo, and then I say it has a dependency on url and a dependency on Ruby Bridge, which is a little library I wrote for this demo — basically just a thing that gives you the buffer abstraction. And then if I run cargo build with --release, it actually does nothing — and the reason it does nothing is that it sees that I already built it. So if I rm -rf the target directory and run cargo build again, it goes: oh, I see you already downloaded those things, so no problem, I won't download them again, but I am going to compile them one at a time. You can see that it's compiling the Ruby Bridge crate and the url crate, which are the dependencies I listed — and that's automatic; you don't have to do anything other than list them. And I'll just point out, in the Rust code, in lib.rs: you can see at the top that all I have to do is say extern crate url, extern crate for the bridge library, and that's all the code you have to write to get Cargo to build all of it. So if you've ever written serious C or C++, this is like a billion times simpler for dealing with dependencies — it's like having npm in a systems programming language. So now I'm just going to do bundle exec rackup config.ru — I'm actually inside an Ubuntu VM, so I have to pass -o 0.0.0.0 — and then I'll open it in the browser. Okay, so you can see it says success. I only flashed the handler on the screen, but it's very simple: if you go back to the config.ru, you'll see that it basically says if the path is slash report, return 200 with the report that we get over FFI from Rust; otherwise, call the tracking function over FFI and return success.
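None of the demo's actual source survives in the transcript, but a rough sketch of the shape being described — hash maps behind #[no_mangle] pub extern "C" functions — might look something like this; every name, field, and signature here is an assumption:

```rust
use std::collections::HashMap;

pub struct Analytics {
    hosts: HashMap<String, u64>,     // per-host counts
    schemes: HashMap<String, u64>,   // per-scheme counts
    endpoints: HashMap<String, u64>, // per-endpoint counts
    total: u64,
}

// no_mangle keeps the symbol name as written; extern "C" gives the
// function a C calling convention, so Fiddle can call it like any
// C library function.
#[no_mangle]
pub extern "C" fn analytics_new() -> *mut Analytics {
    Box::into_raw(Box::new(Analytics {
        hosts: HashMap::new(),
        schemes: HashMap::new(),
        endpoints: HashMap::new(),
        total: 0,
    }))
}

#[no_mangle]
pub extern "C" fn track(analytics: *mut Analytics, url: *const u8, len: usize) {
    // unsafe because we trust the caller's raw pointer and length —
    // exactly the bargain a C function makes.
    let analytics = unsafe { &mut *analytics };
    let bytes = unsafe { std::slice::from_raw_parts(url, len) };
    if let Ok(s) = std::str::from_utf8(bytes) {
        *analytics.endpoints.entry(s.to_string()).or_insert(0) += 1;
        analytics.total += 1;
    }
}
```

On the Ruby side, Fiddle would bind these symbols as functions taking and returning C pointers, which matches the config.ru declaration described above.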
So you can see that the actual work done inside Ruby is just calling into Rust through that little Fiddle layer. So I can hit the app and it says success; I can hit a few different URLs; and then if I go to slash report, you can see it gives me a report — it's too big for the screen. This is basically the default debug formatting of that structure. Actually, if you go back and look at the struct, you'll see that I derive Debug — derive(Debug) says: emit the code that is necessary to print the debugging version of this. And if you look at the report function, you'll see that it literally just formats the structure with that same Debug formatter and sends it back over to the client — and then this is basically just Ruby printing that same string; I call to_s on it. You can also see that it's definitely working, because favicon.ico is incrementing as well. This is definitely real traffic. So I'm basically out of time here, but the key point is not really anything about this specific example; it's just to show that Rust produces a dynamic library which you can load into Ruby. Ruby has Fiddle built in, and Rust is pretty good at exposing a C-compatible interface. So without that much difficulty, you can take something that might be computationally intensive, convert it to Rust, and then call it from Ruby pretty easily using the normal tools we're already using. So that's basically it. I've put these examples up — the example and also the libraries that I used, which are very tiny — on my GitHub; it's the last couple of repos I pushed. And I'm happy, any time in the next few days, to take questions as they come up. Thanks. Thank you. Thank you.
|
Ruby is a productive language that developers love. When Ruby isn't fast enough, they often fall back to writing C extensions. But C extensions are scary for a number of reasons; it's easy to leak memory, or segfault the entire process. When we started to look at writing parts of the Skylight agent using native code, Rust was still pretty new, but its promise of low-level control with high-level safety intrigued us. We'll cover the reasons we went with Rust, how we structured our code, and how you can do it too. If you're looking to expand your horizons, Rust may be the language for you.
|
10.5446/30643 (DOI)
|
I'd like to thank you all for coming today — it's wonderful to see so many professors and distinguished guests. Welcome as well, headmistress, scholars, and other gentle folk. My name is Leblon Treasure Box, and it is an honor to be here with you today at RavenConf. While I'm not an official alumna, Ravenclaw has drawn me in with welcoming arms. It is a very special community — but you already knew that. As you've seen in your scrolls, my talk is called Beyond Hogwarts: A Half-Blood's Perspective on Inclusive Magical Education. For generations, the world's magic has been performed by witches and wizards carefully identified at an early age and cloistered away in Hogwarts, or Durmstrang, or the Salem Witches' Institute. There they are trained and groomed in their magical abilities, and then sent out into the world to carry on our traditions. It's been a good system, proven by time and trial to grow the magical talent we need to thrive. But several years ago, the magical world faced a threat like no other — twice. At great loss on both sides, Voldemort was defeated. It's easy to forget that when Voldemort and his forces threatened us all, two of the three wizards and witch who led his defeat were not raised in the magical world. Those events lifted a veil and revealed to the world the incredible magical potential of Muggle-borns — and, I believe, even of Squibs, but we can discuss that at dinner. As you also read in the scroll, I am a half-blood. My mother is a Muggle, but my father was a wizard, a part of the magical world; he even served the Ministry for a few years as an Auror. I, however, was raised in the Muggle world, complete with Muggle schooling. I was quite fortunate, and unlike Mr. Potter, was never actively denied my magical heritage. But I was never actively brought into it, either. For most of my life, I thought my magical side was just a curiosity, a fluke. I thought it couldn't possibly lead to anything more. So for years, I ignored the quiet owls rapping on my windowpane. You see, only a chosen few are summoned to Hogwarts with the tenacity of a Dumbledore. My first steps into the wizarding world were a long time in coming. Finally, I opened my eyes to the wonderful magical world before me, and I dove right in. I've even been able to find some of my magical father's community. But you'll notice my robes are hand-sewn. My spell books are only borrowed from Diagon Alley. And my precious wand was a gift from a magical friend and her equally magical network. This wand was given to me at the time of my greatest despair. Like Hermione Granger, I've been very lucky. But how many of our next generation's potential witches and wizards won't be as lucky? Where would we be now if half-bloods or Muggle-borns like Severus Snape or Lily Evans had missed their chance and never made their way through Hogwarts' halls? Severus Snape is famous for a lot of things, but being a half-blood isn't one of them. Opinionated and brilliant, if sharp-tongued, Professor Snape's knowledge and training have saved countless lives. Lily Evans started her life as an ordinary Muggle. While she showed some signs of magical ability, she had no real idea of her potential — until the day she received a visitor from Hogwarts who introduced her to her true calling. Lily Evans is better known as Lily Potter: member of the Order of the Phoenix and Harry Potter's mother. Without Lily Evans, we'd probably all be speaking Parseltongue now. There are great and terrible things just past the horizon.
We will need every bit of magical talent we can find to face them. We must all look beyond Hogwarts, and beyond our old ways of growing magical talent. And it's not just Muggle-borns who have needs beyond Hogwarts. Once our students are in school, a few of our brightest are not finding what they need there. The Weasley twins, Fred and George, quite notoriously abandoned their educations to follow their creative sides. I say notorious because if you fly out of school after setting a firework dragon on your teacher, leaving in a shower of fireworks, it's hard to be known for anything but dropping out. But the Weasley twins put their skills and talents to good use. The products of their entrepreneurial spirits have brought great joy to the magical world, even as they've taken a little of our productivity and a lot of our gold in return. And they are not alone. We sometimes forget that even Ollivander himself — our revered wandmaker, upon whose tools so much of our magic depends — is not known for an illustrious school career at Hogwarts. I've even heard rumors he used to race brooms. And there are so, so many more. We need to make room for all of these magical minds to build and create in their own ways as well — even if that means extending Hogwarts-level magical training and mentorship beyond Hogwarts' walls. We have a third and final challenge, one that is sometimes only whispered. The Ministry of Magic is finding that the spells taught at Hogwarts are in some ways insufficient. Even our brightest graduates are lacking in the latest techniques and charms. While their theoretical knowledge is superb, learning magical theory alone does not help much when you're faced with a real-life conflict. Passing the O.W.L.s is not enough. This gap between knowledge and experience — between theory and practice — was laid bare after the Battle of Hogwarts. To be blunt: the students who fought and survived Voldemort and his Death Eaters are simply better witches and wizards today. By having the opportunity to develop their magical skills in the real world, our most favored trio and their battling allies are better prepared for any magical venture, whether in times of peace or of war. So we are now faced with great threats, and a growing body of students who were either found later in life and are too old for Hogwarts, or who have needs beyond the Hogwarts curriculum. We need to look beyond Hogwarts. First, we need to take better advantage of the recent explosion in beginner magical training opportunities. Just a few years ago, a budding witch or wizard's only real opportunity to study magic was within Hogwarts' walls. That time is gone. Today, high-quality beginner-level magical training is just a swish and flick away. And yes, there is a great deal of bad magic out there as well. It is up to us to amplify and increase the reach of the good beginner-level training. There have also been tremendous strides in developing Hogwarts-level magical training in smaller venues. What these small schools lack in scale individually, together they can make up in total reach. But we need more of them, and they need to go deeper. One of these magical schools regularly receives many times more qualified applicants than they have capacity for. When asked why they don't expand to fill the need, one witch involved in selecting their last cohort replied that they would love to. She said they have the demand, and they have the support from magical industries. What they lack is enough experienced witches and wizards willing to teach them all.
And that school is only one of many that are starving for more witches and wizards who want to teach. They are looking for experienced, passionate teachers and tutors for their students. They're looking for you. Finally, we need a revival of an old practice: apprenticeships. Magical industries, both large and small, have already embarked on this path, and they're finding success. Apprenticeships are proving to be an excellent old-made-new way to grow magical talent. But there is still a terrible imbalance between the number of future witches and wizards seeking out these apprenticeships and the much smaller number of those enabling them. We are poised at the brink of a new age in the wizarding world. By increasing the reach of our owls, by supporting magical training beyond Hogwarts' walls, and by expanding the reach and the depth of apprenticeship programs, we will be ready to identify, reach, and train the army of witches and wizards we will need to meet the challenges ahead of us. I invite all of you to join me in this mission to reach out beyond Hogwarts for the future of our magical world. With the threats and challenges looming on the horizon, we will need every single shining star and diamond in the rough working together to defend and guide the magical world. Hang on — Voldemort is gone. Let's talk about those other threats. Let's talk about power. What are the ABCs of a conference? Always be charging. We don't have to just accept that. We want devices that use less power. We want to be able to let our needs dictate which apps we run and which apps we turn off. How many times today or yesterday did you put your phone into airplane mode just to save some power so you'd make it through the day? We shouldn't have to do that. A solution to this will require the skills of electrical engineers, and it will require the talents of software engineers who are thinking about energy consumption as they code. Also, our work as developers feeds on energy just as much as the work of a farmer feeds on water. Our future selves will depend on software that is respectful of energy use. Our future selves will depend on software that enables a stable and secure electrical grid. The grid — where we get all of our power — is run by software. Software that someone, or a lot of someones, has to write. Next on the list is privacy. There are three big buckets of people attacking our privacy today. First, we've got the nosy neighbor — and that's not the privacy slide; that's the privacy slide — we've got the nosy neighbor, or the creepy ex, or your aunt who's lurking in your Facebook feed. Then we've got the authorities. We've got the NSA recording everything — wave hello to the camera; the NSA will see. We've got police departments who want to make you use your fingerprint to unlock your phone. We've got HR departments who make you swear to spy on your colleagues on social media — it's not a myth. Finally, we have the free services that we actively give our information to. By now it's an old saw that if whatever you're using is free, you're actually the product. Companies from Google to Facebook to Twitter have built empires around selling information about us. But even pay-for-play companies are starting to follow suit: Uber and PayPal are doing their best to learn everything they can about you. Just look at some of their job listings. Our security threats come from multiple places, too.
First, there are the hackers and the scammers — those lovely, helpful folks who send you the emails covered with your bank's trademarks and try to phish your login information out of you. To fight them, we create complicated passwords and train ourselves to be wary and internet-savvy. But then we discover that the layers of security we create become so difficult to penetrate that a lot of people simply don't use them. We might set our passwords to auto-fill in our browsers, or use easy passwords just because we want to get to the article already. Or we become dependent on services like LastPass, or we link our accounts to each other so we only have to sign in once. It's easier than remembering long strings of gibberish, but it's not the safe way to go, either. By bringing in new programmers who started out in other fields, and by fostering entrepreneurship in our dropouts, we not only gain more soldiers in this fight, we gain from their experience and insights. And we can gain in smaller ways, too. Artists and teachers can make great UX programmers. You won't know what we don't know until someone new comes along with a different perspective. So where does that leave us? We've got threats both big and mundane, and we've got a tremendous potential army just itching to get in there and code. And we've got you — and each and every one of you has a skill, a resource, or a battle scar to share. If you don't know where to start, look up your local learn-to-code group. Maybe one is right down the block. Or maybe not — you could look up a Girl Develop It chapter, or a study group, or start an intro night at your user group. You can spend some time in the CodeNewbie chat on Twitter, or the million other learn-to-code chats that are going on. You're bound to find lots of folks who can benefit from just some bits of your experience. If you have a little — or a lot — more time, consider starting your own group. Ada was started about two years ago by a handful of developers a lot like you; looking out, I know, exactly like some of you. Madison Software Academy, Project Ascend, MotherCoders — all of these were homegrown, grassroots schools working to give people their start. Or you can reach out to the newbies at your own company. Bree Thomas, earlier today, and Pamela Vickers, after me, are both talking about specific ways you can help — ways that have worked in their companies and that you can model in yours. If you don't have time for a big, huge commitment, all you need to do is take one or two of your newbies under your wing and mentor them. You could offer to pair on something and let them drive, or you could host a brown bag to talk about something that you're passionate about, and then encourage your fellows to join in on the next one. You might be surprised at both the turnout and what you can learn when people take their turns. And if you're in a position of influence, either by title or otherwise, consider starting an official apprenticeship program at your company. Have only 10 minutes a day? Spend some time on exercism.io. We're missing a huge piece in the middle, between what these beginner coding schools can do and what companies are looking for. And even if you don't have lots of time to devote to helping raise up and grow the talent of our next generation of programmers, just a little bit of time on organizations and sites like this can help pass on the knowledge and experience you've gained from your programming. Have more money than time?
That's what crowdfunding is for. There are countless people out there looking to bootstrap their own educations. If you spend a little time on these sites, or just pay attention to your Twitter feed, you will find lots of people and organizations that you can impact. And I included a cat picture, so I think that means that the talk is done. Thank you.
|
When Voldemort and his forces threatened us all, two of the three wizards (and witch) who led his defeat were not raised in the magical world. Schools like Hogwarts can help us identify and train those with innate magical talents and interests whom we might otherwise never discover. But how to find and teach those beyond the reach of our owls? This talk explores our options and will serve as a call to action for the Magical world. As we will see, these challenges are almost magically mirrored by the Rails community as we seek to find and train developers from non-traditional paths.
|
10.5446/30645 (DOI)
|
So, hi. My name is Joe Mastey. This is Bringing UX to Your Code. These are the various places you can find me around. For my day job, I help companies build awesome internal education programs — basically, trying to keep your employees happy and learning. That has nothing to do with this talk, but I figured you should know what I do. I've been working in Rails for five years. About five years ago, I got sick of consulting and joined a company, and I felt a little overqualified for the back-end job, even though I actually come from a back-end background — so I applied for a UI job, with no real history in UI, and they gave it to me. It's cool. I just want to reinforce that I'm not coming from a designer perspective: I'm not a UX designer, I'm not a UX developer; my heart really lies way closer to the metal. Show of hands: who here is a UI, front-end person? Small number of people. You can raise both hands if you're super UI. Who's really a back-end person? That's the majority of you. That's awesome, because you are actually the target of this talk. So, when I joined this company as a UI developer, I got a bunch of materials — obviously learning Rails, a new framework — and they also gave me a couple of books to read. One of the books was by this guy, Don Norman. This guy started the user-centered design movement. He released a book in the mid-1980s called The Design of Everyday Things. Anybody read this book? Yeah, a surprisingly good number of hands — maybe a quarter. If you've read this book, you know that you learn a lot about doors in this book. I'm not the only one who thinks this: when I was looking things up, Wikipedia authoritatively says it's about doors. But it's actually a really cool book. What it teaches you is that this door — if you look at it, the handles are totally identical on both sides, right? And so they had to put a push label on there, because everybody kept getting it wrong. And I have this thing, and I'm assuming that you probably do too, where you've been doing this your entire life and you assume that you're really stupid. The book taught me that you're not really stupid. It's not your fault that that door sucks — it's actually the fault of the guy who designed the door. That was a really important lesson for me. And if you take nothing else away from this talk, you can take away that somebody in the mid-1980s told you that it is not your fault that doors suck. So there's that. But there's a lot of other good stuff in the book: a lot of great lessons about usability. It changed a lot about the way that I do software — but mostly in the context of doing websites. This is the first result that came up when I searched for really hard-to-use websites. Apparently only about 20% of this is clickable, you can't tell which 20%, and the rest of it moves around. It's pretty bad. But as soon as we switch over to code, there's really no mention of this usability stuff. This is not something that we pay attention to as back-end developers. And I think I know why that is. I've been doing this a while — I've been an engineer for a fair amount of time — and I believe in how wonderful and exceptional we are, and clearly we're saving the world. But I've noticed in that time that there are two things that we really tend to ignore as engineers.
Anything that was invented a long time ago — where "a long time ago" is a sliding scale; for those of you who are fairly young, that's probably five or seven years ago — and anything that was invented by non-engineers. We pretty much just remake everyone else's fields. And so designers — the usability stuff is designer stuff, not engineer stuff. I Googled "designer stereotype," and this was the nicest thing that I found. And this guy is not one of us, right? He's definitely not an engineer, and he seems like he was probably invented a long time ago. And when you read the book — it's long enough ago that he's talking about doing spreadsheets on a green-and-black screen, and about faucets and things — I think we tend to dismiss that experience and prefer the things that we just come up with on the spot, because we're engineers and, you know, we can invent stuff. And I think we're missing out. My contention is that the decades of work they've done in usability have given them — them, us — a vocabulary that can describe these concepts. This vocabulary is very rich, and it helps us think about these concepts. There's a big power in having these words and having these things defined. And I think that our code is an interface. We may feel like the code itself is some magical, majestical thing, but the words that you write are really almost tangential to what the computer executes: the words that you write are there for you, and the computer has to translate them to do anything. And so I think that these principles actually do apply, and I want to take a look at them. Done ranting — it's all happy from here on out. So let's try it. I want to talk about a couple of the principles in Don Norman's book, and about how they can apply to code. First, the Gulf of Execution. In The Design of Everyday Things, he talks about the Gulf of Execution and the Gulf of Evaluation. The Gulf of Execution, in a nutshell, is a way of describing what we do when we have an intention and want to act on it. In short, the part that's most interesting is: is there an action that corresponds to my intention? When you see a door, you look for a handle, right? So think about this from a code perspective. Let's say, hypothetically, that I am new to Rails, as some of us have been. And let's say that I have an Active Record model, and I wish to remove it from the database. I don't know the action for this, so I'm going to look. Because I have done a couple of Ruby tutorials, I am very clever: I am going to take the list of methods, sort them, and see if there's an action that corresponds to what I want to do. It's less than successful — I did put a red box on it, if you can see that far. Obviously I can't find what I want that way, so I get clever, because again, I have done the tutorials: I am going to grep, and I will find the method I want. But what do I grep for? The usual contention is that the model in my head is the one I'm going to use to find an action. And in my head is somebody who has done the web before: the HTTP verb is DELETE, the SQL statement is DELETE — delete, delete, delete. Obviously I grep for delete. And so I get the method. Yay, right? Awesome. All good? Yeah, fantastic. No problems. Except clearly that's not right.
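A sketch of the trap (the model name is assumed):

```ruby
book = Book.find(1)

book.delete   # what the newcomer greps for: removes the row, but runs
              # *no* callbacks or :dependent cleanup — associated
              # records and counter caches can be left inconsistent

book.destroy  # what Rails actually wants you to use: runs callbacks
              # and dependent-record cleanup on the way out
```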
For those of you who have done Rails a little bit longer, you know that delete — despite being the thing I would clearly find — causes data inconsistency in my model. Fantastic. destroy is the thing that I want, but destroy doesn't map naturally to any convention that I'm aware of. And so — I can't see whether there are Rails core committers in here that I'm offending — there is a good reason why there is a destroy and a delete, and it is bullshit. Because every new person coming to the framework gets confused in exactly the same way. And we didn't have to name the thing delete, right? We could have named it any number of things. We could have named it destroy_without_callbacks. By putting this thing in the way, of course, I get tricked. Problems. Gulf of Evaluation: once I have taken an action, I perceive feedback from the system, and I want to see if the action succeeded. Did it work? In our code, this is actually reasonably straightforward. Everything in Ruby has a return value, so you have to give feedback; and if you're giving even a minimal amount of thought to what kind of feedback you give, this should be pretty straightforward. And delete does do something reasonable — I don't think its return value is unreasonable. It didn't throw an error, it didn't return false or nil or anything like that. So it's probably reasonable. It also happens to be virtually identical to destroy — there is actually a difference between the two, but they're almost identical. So in one sense, we gave good feedback — it succeeded in deleting — and in another sense, it failed entirely to warn me that I have corrupted my data model. That's less than optimal, right? It gets even worse when your code is not in an execution context. On return values: if you think about other things, like destroy_all, that actually has a useful return value. I don't think that approach would work for destroy or delete, but it is useful to know this kind of thing. In a non-code context — have any of you done this before? First of all, do you see why this is a problem? All of my new people hit this when they're learning: we declare private, and then below it we start declaring class methods with self. This does not work — private doesn't apply to class methods defined that way — and there's no return value or anything on that private modifier, so there's really no indication that you have done anything wrong. The way people typically catch this is either in code review or, if they run RuboCop, it will tell you that you have a useless private statement in there. And so with the Gulf of Evaluation, the question is: how can I provide feedback such that the user can catch this kind of mistake? I'll tell you, I don't have a great answer for this one. Maybe you throw a warning — I'm not really sure precisely how you fix it. But we should be aware of these places where we create a system that leads somebody down a path where they have no way to evaluate whether they have succeeded. Another one: natural mappings. This one is cool — the explanation for it is so cool I'm barely even going to need an analogy: a stovetop. It's got four dials laid out in a row along the bottom. If you want to turn on one of the burners, you read the labels, decide which dial it is, and turn it on. Not a big deal, right? But if you lay the dials out to mirror the burners instead, the labels aren't even necessary anymore. And is it a huge difference? No.
I have turned on my stovetop like 90% correctly since the beginning of time — and that's with the confusing layout, not the good one. But these things add up. And if you think about that in a code context — how much time and effort it takes to get these mappings right — when they are a little bit wrong, and they knock a person out of their flow, when they have to go look something up, when it does unexpected things, it becomes really evident how important getting these mappings right is. So what does that look like in code? Because there are no ovens, no dials. I think Array in Ruby is a really cool example. If you didn't know how to sort an array in Ruby, you would probably guess .sort. If you wanted to get the first element, you might guess .first. If you want to delete something — back to delete — you'd guess .delete. And it works. It just works the right way. And I think that's a really cool example of an API that doesn't pull you out of your flow to think, because all these things are mapped to the ideas in your head. Counter-example: I screw this one up every single time. Twice in a row, it doesn't matter — I screw it up every time, because these two ideas have kind of jumped on top of each other, and the one I want when I want to update my software is, unfortunately, very clearly not the "update" one. So it's a mess. And again, I'm not entirely sure what you do about this one, but I think maybe we need to think about how we can define other words — how we can avoid overloading these meanings. This is another really good one. These are in the File class in Ruby's standard library — all the path-related methods. There are two interesting things here. One: this is a mess. There's no mental mapping for the majority of it — the difference between path, realpath, and realdirpath is a big mess. As a counter-example, really interestingly, if you've used Unix for a while, the ideas of relative path and absolute path very much are natural mappings. So absolute_path can be natural for some people, in some contexts, and isn't for others. All this stuff is very, very situational, which makes it difficult to design for everyone. So, I'm not sure what to do with that. Design for errors. I don't know about you, but for me, if I'm coding for, say, eight hours, for about five of them my software is broken. Software is screwed up all the time. All the time. And so we need to design in such a way that we can recover from that, and that we don't get pissed off by it. This is my daughter. Her name is Lane. She is 10. She is learning JavaScript. So maybe a month ago, I'm in the kitchen making dinner, and she's off on my computer, programming. I start to hear annoyed noises — you can tell when a kid starts to get frustrated, building up this head of anger. So I finally say: what's wrong? And she says: it doesn't work. We're working on bug reports with her — it's not a very clear report. I say: okay, what does it say? It says: undefined is not a function. Right? This sucks. It doesn't tell you where it is. What does that even mean? And in JavaScript it's even better: forget it, I'm just going to stop executing.
Not only am I not going to tell you anything, I'm going to stop doing everything else I was doing — take my toys and go home. And especially in JavaScript, we have this spooky-nulls-at-a-distance kind of thing. I don't know what the name of this term is, but it's when the thing that became null was three method calls ago, and nothing ever quite got around to throwing an error. So she has this issue where she's flailing around with the code, where it says it's on line 17 — no, it's not on line 17; it's in some other place entirely. We can do better than that. "Undefined is not a function" is lazy. It's easy to write that response, but it's lazy, and so people — a 10-year-old child — have an impossible time with it. One more from Don Norman: exploit the power of constraint. I love this one. It's basically: if it's the right thing to do, make it very easy; if it's the wrong thing to do, make it very hard. I saw somebody tweeting, literally 20 minutes ago, that people just blame the library — but I think there's still a lot of space in this to support people. So: make it hard to do the wrong thing. I am not a heathen — I don't use MySQL anymore — but when I did, this was the coolest flag in the world: mysql --i-am-a-dummy. It means that you cannot run a bare DELETE FROM users; you have to add a WHERE clause, like WHERE id = 2. It refuses to do these things for you. I set this up as my default because, like many people, I once fucked up the production database, and I thought it was very important to not do that again, on account of wanting to stay employed. And so this is a wonderful thing, because once it's in place, it's very hard to do the wrong thing. Ruby: is this console in production, development, or test? Yes. Right? And so if you leave a console open, as I often do, and you come back to it — and you have many windows, as most of you do — you don't really know what this is. And so when you start doing User.last, and you hop back into this thing and you're doing debug statements, this is how you delete production data. Fantastic. And you may be thinking to yourself: oh, no, I would never do that, that would be dumb. And I have to ask you: have you ever programmed while really tired? Have you ever programmed while really angry? Or, even better, have you ever programmed while really drunk? Yes. This is the thing that happens, and it's as easy as falling off a log. It's really easy to do, and it's terrible to do to your users. I saw this quote yesterday — it's a retweet of a retweet of a retweet: you can write code that drunk children would understand. Now, what I learned is that Sandi Metz condones children drinking, and I just want to go on record that I don't support that — that is illegal and immoral. But we should be writing code that works when you are compromised and distracted and everything's on fire. And you have to remember that your user — in this case, the user of your code — is probably always compromised in this way.
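The talk doesn't offer a fix for the ambiguous console, but one way to apply the constraint principle there — sketched here as an assumption, not the speaker's code — is to make the console announce or restrict itself:

```ruby
# config/application.rb — a sketch of constraining the Rails console.
module MyApp
  class Application < Rails::Application
    # This block runs whenever `rails console` boots.
    console do
      puts "*** #{Rails.env.upcase} console ***"
      # Or go further and make the dangerous thing genuinely hard:
      # abort("no production consoles, thanks") if Rails.env.production?
    end
  end
end
```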
So I was thinking about these things — this was a couple of years ago — and like everyone else, I incorporated some of them, but I certainly still didn't consider myself a usability expert. But then I came upon a blog post by this lady, Whitney Hess. The blog post is called So You Want to Be a User Experience Designer — and no, I don't, so it's a failed blog title for me. But there are a lot of cool principles in there. Whereas Don Norman is writing to me from the 1980s, this is a lot more recent, and there are some really cool, articulable rules. Again, I just want to walk through some of them and illustrate how they can help us out. So: grouping related objects. Objects that are close to each other have an implied relationship. When I read this, my very first thought was Avdi Grimm's Confident Code. Show of hands — who's read it? Actually, I want you to do it by applause. Let's do it by applause: who's read it? That's great. It is a great book. And at the very beginning — in the preface, it's not even part of the book proper — there's this mind-blowing bit. He looks at this code — this is actually from the talk of the same name that he did — and don't stress too much about reading it: the idea is that the code is doing a lot of different things, and because they're interspersed, you have to keep changing contexts and going back to understand what happened previously. It's very difficult to understand as a result. This is the same thing, reorganized. This was mind-blowing for me — I don't know if your mind is not blown by this, but it was a big deal for me. By rearranging it this way, it's very easy to understand, because the entire beginning is all about gathering input. All of the input-gathering sits together, so while you're reading it, you are thinking about gathering input, and there's none of it hiding at the bottom forcing you to backtrack. The relationship between these pieces is clear, and it's being exploited in a way that is really good for the user who has to read this code. Be consistent. These are not in order — I'm going to jump back and forth. If you have "gulp".upcase, what do you get? "GULP". I thought I was going to get two chuckles. And if you do "BIG GULP".upcase, what do you get? "BIG GULP". You do "gulp".upcase!, you get "GULP". So what do you get when you do "BIG GULP".upcase!? No. Somebody knows: nil, because the bang version returns nil when the string didn't need to change. Clearly it should be the same thing — and no, it's not. This is ridiculous. Again, there is probably a really good reason for this. I do not care; it is bullshit. People are using this code, and this is super surprising. And remember that when they use this code, they can't just inspect the string for case — it's going to come from the user — which means you get one of these bugs where one in 10,000 users ends up having an issue. You'd catch it with your tests? No problem, right? No. Because when you write a test for this sort of thing, you don't write a test that does nothing: none of us thinks to write a test asserting that an already-uppercase word stays uppercase. It feels like a waste of a test. And so of course you have a test, the test passes, no problem. That inconsistency is poisonous. Here's one from Rails: production, production, -e production — all supposedly the same thing, except one of them needs the -e. There is probably a reason why it's -e. I do not care, because I get it wrong every time, and the software should be working for me. Use emotion. Coolest emoji I could find. So this is an important one that was really obvious after I thought about it, but wasn't obvious up front: we have an emotional relationship with the software we use. And I'm not just talking about whether you use a Mac or whatever.
I'm talking about when you integrate pieces of software, when you write code — you have an emotional relationship to that software. Don't believe me? I'm going to say a word and put it up on the screen, and I want you to make a noise that corresponds to your feelings about that software — a yay or a boo. All right? Easy one: Rails. RailsConf. Woo! Really good. They're quiet over there — emotionless robots. It's all right, you can catch up. SOAP. That one's the best for me. Adequate Record. Yeah! Fantastic. CoffeeScript. Great, right? So we totally have an emotional relationship with this stuff. And it's great, because in Ruby this is something we acknowledge: we have a language that was designed explicitly for developer happiness. This is something you should keep in mind when you develop your code: when you work with code that is joyful — there's this library that just colorizes the dots on your RSpec tests; that's the entire library — it says, fabulous tests. And you know what? I feel really good every time I have fabulous tests, and that makes a significant difference in my day. Warmth and kindness make software a pleasure to use. This is important stuff. And when you do it, people will want to use your library, and they will want to kill you less. It's an important thing. Avoid jargon. Another quiz — very audience-participation; I was assuming you'd be asleep by now. By show of hands, no snickering: do you have CSRF protection enabled on your app? Cool. Who doesn't know? Okay. Same question: do you use protect_from_forgery in your app? It's the same thing. The people who built this feature were wise enough to realize that naming the method something like protect_csrf_on_action would not work: CSRF is not an acronym that is widely known, and it's not something you want to have to remember or Google, so people would just not use it. protect_from_forgery: fantastic. It's not jargon-y; I know what it's doing, and so of course I apply it to everything. It's an important distinction: the CSRF protection would not be as successful without that lack of jargon. Ajax: counterpoint. With Ajax, I think maybe it's okay to use jargon — it's probably prolific enough that that's acceptable. So it's very context-dependent: some things you can expect people to know, and some things you can't. The last one: signposts and cues. Straightforward enough — where did you come from, where do you go? When we look at this: what can I do next with this object? I don't know. If you think that the documentation is a sufficient way to tell me what to do with this object — fuck you, it's lazy thinking. When you think you can design software poorly but point me towards a web page somewhere — that's hopefully up to date — on how to use this thing, you're causing me those paper cuts. Remember the oven and the natural mapping: when I have to read, it slows me down. When you send me to the docs, it slows me down. This doesn't point me towards anything that I can do, and so it's not okay. This one is interesting: I can kind of tell what the method is doing. But if I want to look it up, I can't — I can't Google find_by_email_and_first_name. And if I want to put a different parameter in there, I have to start thinking: can I put it before? Can I put it afterward? Is there an "or"? How does that work?
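A sketch of the two styles side by side (the model and fields are assumed):

```ruby
# Old dynamic finders: the query is encoded in the method name.
User.find_by_email_and_first_name('ada@example.com', 'Ada')
# - argument order has to match the method name
# - un-googleable; every field combination is a different method

# Newer syntax: one ordinary, documentable method taking a hash.
User.find_by(email: 'ada@example.com', first_name: 'Ada')
```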
The newer syntax for this? Way better. If I want to look up find_by on ActiveRecord, it's actually relatively simple. And by being able to say email and last name, it's very easy for me to tell what this thing does. Last example: Aaron Patterson, a couple months ago, was talking about RSpec. And I thought this was a really neat example. If you have this code, again, don't fret too much about reading it. If you run a spec and it fails, it tells you how to rerun the spec. You can paste this command into your terminal again, and it will run just the failed test. How cool is that? So it points you towards what you need to do next, which is rerun your one test. It's very easy to see where you came from, very easy to see what you do next. So what do you get if you invest all the time to learn these things? You get less suffering. There's this wonderful quote in that blog post: the world is filled with anguish; let's not add to it. This speaks to me as somebody who's worked on production code bases. I'm not sure she wasn't writing about us, because this encapsulates my entire experience of developing. And so we should not be contributing to that anguish. And also, this one I'm not sure about, but the more I think about it, the more sense it seems to make. I think that if you believe in self-documenting code (and fairies), this is a mechanism for self-documenting code. And I'm not contending that if you do this right, you don't have to write any documentation whatsoever. What I'm contending is that self-documenting code is not about code. It is about the people who perceive it. And so when we use these techniques, people will understand our code. The cognitive burden is lower. And I think that this is what we mean when we're talking about self-documenting code: code that humans can understand. And so as a last request, I'm just going to say, please go read it. The blog post is literally like 2,000 words long. Take a look at it. Think about what you do with your code. If you maintain a library, think about how people perceive that library and what those stumbling blocks are. All right? The Design of Everyday Things, amazing book. I recommend it to everybody. It's all I got. Thank you.
|
User Centered Design as a process is almost thirty years old now. The philosophy has permeated our products and the way we build our interfaces. But this philosophy is rarely extended to the code we write. We'll take a look at some principles of UX and Interface Design and relate them back to our code. By comparing code that gets it right to code that gets it desperately wrong, we'll learn some principles that we can use to write better, more usable code.
|
10.5446/30648 (DOI)
|
Hi, my name is Bree Thomas and I'm going to go ahead and get started, obviously. I'm really happy to be here today. I haven't been to Atlanta in a very long time. I haven't been to Atlanta since I was a kid and my parents shipped me away for a summer to go and work on my grandfather's chicken farm. So standing here on this stage right now with all of you out there is a lot better than the chicken farm. Lot better. So a little bit about my professional background. I am a recovering marketer. I was in marketing and branding for 10 years. I've been clean for almost two years. Very proud of that. In deciding to make that career switch, I attended a six-month super-intensive developer training program, and I have been formally employed as a real live developer for almost a year. Well, just over a year. I work at Mode Set; we're a product consultancy and development firm based in Denver. I am the only one on the team with less than 13 years experience. I'm also the only girl on the team. And while that may sound daunting, some days it is, they are an extremely supportive team and I count myself extraordinarily lucky to be learning from masters of the craft. I also run the Denver chapter of Girl Develop It, helping to bring affordable, judgment-free professional software education to women. And I really enjoy that. I swear this is the last logo slide. Some habits do die hard. I've been through exactly two apprenticeships since I graduated from the training program. And so as a newb, the subject of developer training and growth is very near and dear to my heart. But I find apprenticeships in our industry to be a little on the scary side, quite frankly, and not just for the apprentice. And that's what I want to talk about today. Specifically, what is so scary about developer apprenticeships, what's missing from them, and at the end of the day, what's important? What should we value in our apprentices and our apprenticeship programs? My hope is that you'll leave here today with maybe some fresh perspective and a few ideas. But first, I'm going to take it back a little bit and talk about the inspiration for this talk. Because also near and dear to my heart are 80s movies. And the title, Burn Rubber Does Not Mean Warp Speed, was inspired by an 80s cult classic, The Lost Boys. And so I just wanted to take one minute of our time and share that relevant scene with you. So with that, here we go. We blew it, man! We lost it! Shut up! We unraveled in the face of the enemy! It's not our fault. They pulled a mind scramble on us. They opened their eyes and talked. We don't ride with vampires. Fine. Stay here. We do now. Yeah. Come on. Get down. Let's get at it. Burn rubber! Yeah! Christ! Burn rubber does not mean warp speed! Okay. I certainly don't want to come across as overly dramatic. I mean, developer apprenticeships are not as scary as blood-sucking vampires. Guess it depends on who you ask. I love this scene in the movie because here's this kid, Sam, trying to save his brother and his friends, his family, from a gang of vampires. He's got the impending deadline of nightfall upon him. The Frog Brothers are screaming at him to hurry up. And not to mention, he just saw a pack of vampires, like, for real. And as if that wasn't enough already, he damn near drives the whole lot of them off the edge of a cliff to the certain death that he's trying so hard to avoid. And it's in that edge-of-the-cliff, oh-shit moment that he screams, burn rubber does not mean warp speed.
Like, hey, dudes, I'm going to get us out of here. But warp speed is not necessarily my tempo, and clearly not necessarily in the best interest of saving all our asses right now. And so I like to apply this cheeky movie line as kind of a valuable mantra and metaphor, in that working hard, being passionate, and being committed, or being scared shitless for that matter, do not necessarily add up to an ability to move at a predetermined speed. And that speed is not necessarily the prime indicator of progress or success, or in the extreme case, survival. And yet, while I was learning to code in my developer training program, as well as through both of my apprenticeships, I found myself supremely focused on how fast I was moving. Was I coding fast enough? Was I learning fast enough? Oh my God, what happens if I don't know all the things by the deadline? Oh my God, what happens if I don't know all the things? Does that mean I'm not cut out to be a developer? And so it took about 10 months post-grad from the developer training program for me to chill out on some of that anxiety. Mind you, this is not because I am now a coding sensei. But because I found comfort in the value that I was able to bring outside of my hard coding skills. So I have a background in marketing, law, and client management. And I was able to contribute in a lot of areas both inside and outside of the code itself. Additionally, I found peace with myself in the rate at which I was learning, and the things that I needed to do as an individual to foster that learning. It was a really tough road getting to that spot, often fraught with doubt, and sometimes a crippling fear to even write more code, or even speak up about what I was thinking and what I was feeling. And so it was this journey, kind of the self-realization over the last year, that made me want to reflect and talk to my teammates and some other apprentices about how developer apprenticeships are perceived, how they're structured, how they're managed, and whether or not we as an industry might be missing some key opportunities to improve apprenticeships. So to get that conversation started, I did a little bit of research. So an apprenticeship is the system of training a new generation of practitioners of a trade or profession. It's an on-the-job training approach, usually mixed with some kind of formal study, meaning book study, classroom study. And traditionally apprenticeships would also lead to the procurement of, say, a license in a regulated profession. And it's a system by which a lot of apprentices will launch their careers, as most of the training is done while working for an employer who invests in the apprentice in exchange for continued labor, for an agreed-upon amount of time after the apprentice has achieved competencies that were defined in the program. So the system was first developed in the later Middle Ages, and it came to be supervised by craft guilds and town governments. During this time, a typical apprentice was a fairly inexpensive form of labor, right, whereby the master craftsman would employ the apprentice in exchange for food, lodging, and formal training in the craft. And within the entirety of the apprenticeship system, there were some very specific stages and roles. The first obvious one being that of an apprentice. In the Middle Ages, an apprentice spent about seven years in this phase. Apprentices were compensated such that they could live reasonably well, so they weren't worried about where their next meal was coming from.
But certainly part of their compensation was the tutelage in the profession. And so, you know, they weren't rolling in the dough, so to speak. Following the apprentice years, one would advance to journeyman years. A journeyman is an individual who has completed an apprenticeship, and they are fully educated in a trade or craft. But they're not yet a master. So in the Middle Ages, journeymen were often required to accomplish a three-year working trip. Traveling around, gaining experience in the profession, across a variety of locations, working with different masters. This phase could actually last anywhere from three years to life, because, well, the final phase of master was never a guarantee. To become a master, a journeyman had to submit a masterwork to a guild for evaluation, and then they had to be admitted to the guild as a master. A master craftsman or master tradesman was a member of a guild, and in the European guild system, only masters and journeymen were allowed to actually be members of the guild. An aspiring master would have to pass through the career chain from apprentice to journeyman before he could be elected to become a master craftsman. Then, the journeyman would have to produce a sum of money and a masterpiece before they could actually gain that membership as master within the guild. And if the masterpiece was not accepted by the masters, they could possibly remain a journeyman for the rest of their life. Also, I chose this picture because I think that you know when you have arrived at master when you can drink and smoke while executing your craft. I work with developers who can cocktail and code. I have not yet arrived. So, delving into the history of apprenticeships got me thinking about some familiar modern apprenticeships, and the length of their apprenticeship periods. So, the current Wiki definition cites modern apprenticeships as running somewhere between three to six years on average. So, let's consider some that we're probably all pretty familiar with. Who can tell me how long the apprenticeship is for doctors? Just shout it out, people. Seven. Three to eight years. I guess it depends on what kind of doctor you are. What about electricians? Oh, four years. How about traditional engineers? Anyone? Bueller. No, three to five years. Marty's so excited over there. Three to five years. Okay, last one. Tattoo artist. Don't fail me now. Seven years. Wow, that's a good guess. It's not. It's one to five years. Okay. So, key takeaways for me as I was reading about apprenticeships: they take years, like a lot of years, and they're methodical. Okay, there are stages, there are levels, hours requirements, and then an application for master status and licensing. And also, there's a commitment on both sides of the equation to that time and process, including commitments and owed time after the formal apprenticeship period. So if we circle back to developer apprenticeships and what's so scary about them, I have to start with speed. I think the apprenticeships in our industry are pretty scary. I think the speed is what's so scary, and it originates in many of the modern developer training programs, which are growing at a prolific rate. Everywhere you look today, there are more books, online programs, schools, boot camps, all purporting to turn you into a developer fast, fast, fast.
Like, to the tune of 24 hours, 10 weeks, three months, six months. And also frightening is that unlike doctors, engineers, electricians, there's no licensing body, no regulations or standards for the quality of curriculum or the quality of students that these programs are producing. And now let's consider formal developer apprenticeships post-grad. Three months, six months; if you're lucky, you'll find one that's a year. The vast majority are running at three to six months, very few at a year or longer, and even more terrifying is that the number of apprenticeships is actually pretty limited. Tons of employers are hiring graduates as fully qualified developers right out of the gate from some of these programs. And so here's what's concerning to me with the speed in this particular context. I think it promotes some dangerous misconceptions about our craft. Like, learning to code is fast, and fast is easy. And because it's fast and easy, well, then it is clearly a finite endeavor. Just something that you can check off the list and go start collecting a check. But as developers, we know that what we do is not trivial. The shit is hard. The learning is infinite and the mastery takes years, like to the tune of 10. And devoting your life to building great software requires a passionate commitment, patience, and conviction. Another scary thing about developer apprenticeships, I think, are assumptions. David Brin, scientist and science fiction author, said: the worst mistake of first contact, made throughout history by individuals on both sides of every new encounter, has been the unfortunate habit of making assumptions. It often proved fatal. Dun, dun, dun. Okay, so I might have a slight flair for the dramatic, because I don't really believe that any of the assumptions I'm going to position with you right now have actually proved fatal, but I do think that they're important, and I do think they get overlooked, and they can result in unintended and negative consequences. A couple of assumptions made by employers. Employers assume apprentices will be vocal and open. And that is not always true; in fact, rarely. Also another assumption by employers: by virtue of being a senior developer, you are well equipped to teach a junior developer. Assumptions by apprentices: upon graduation from a modern training program, you are immediately employable and handsomely compensated, and most importantly, you are nothing like an apprentice. Another assumption: advancing through levels of developer status, apprentice if you happen to start there, or from junior level to senior level, for instance, are well-defined transitions within our industry, common throughout, and easily portable as titles from one employer to the next. Left untreated, assumptions can be a very dangerous thing. Left untreated, they can position you in a pit that you find yourself having to dig out of. And that's kind of scary. So what are some potential consequences that we might find in this pit? Things like a young developer making the wrong choice in job because of a promise of a higher title and/or salary, and ending up in an environment where they're ultimately set up to fail. Or employer frustration because progress is not meeting expectations and they have zero insight as to why that is. Confusion and anxiety on both sides of the fence about where and how an apprentice actually fits into the larger team. Another scary thing is measurement. Although in and of itself, measurement is not a scary thing, but specifically related to developer apprenticeships.
Some things I think are scary are a one-size-fits-all approach. So if we create a standard plan and very specific milestones and very specific skill set bars on a very specific timeline, and then we expect each apprentice to succeed on the plan precisely as it was written, we're likely to find some disappointment. Because each apprentice is going to be different. And each one will have a different learning style, and they will bring a different set of strengths and weaknesses to the table. So be wary of the inflexibility of a one-size-fits-all approach in an apprenticeship program and mentorship style. Hard coding skills are of course important. But if it's the only thing we evaluate our apprentices by, then we're missing out on some other important factors. For instance, can they integrate with our team effectively? Do we like them? Gosh, do they even like us? Are they creative thinkers? Can we relate to them personally? Are they happy when they come into work every day? And how would you know? Additionally, let's consider other professional skills that apprentices can offer from their respective backgrounds. Take me for instance. So I bring marketing, client management, and even some legal expertise, which were a value add to the team outside of the code base and ended up being pretty critical in our business relationships. So when we consider the value of an apprentice, it's important to think beyond just the hard coding skills. Another scary thing where measurement in developer apprenticeships is concerned is follow-through. When we say we're going to measure, but then we don't. And this actually happens a lot. It's typically a case of the best laid plans of mice and men often go awry, meaning at the start of the apprenticeship, it's met with a lot of energy about all the things that make up the program, like weekly check-ins and scheduled mentorship and individual projects and time to read and learn and explore. And then work happens. Or the apprentice just begins producing, and quickly the team loses sight of the apprenticeship because now they have someone producing real work. So let's just keep them. Let them do that, because everyone is so busy anyway. Really, what's the point? An apprenticeship is just on-the-job training, right? But now both employer and apprentice have lost a platform from which to objectively and substantively evaluate progress, contributions, and growth. So what's so scary about developer apprenticeships? Shotgunning apprentices out the door like fast food. Failing to recognize, identify, and address assumptions on both sides of the fence. And neglecting to measure the right things at the right time on the right scale. So what's missing from the apprenticeships in our industry? The mindset of learning a profession versus learning a skill. Many a developer training program can give people some skills. But becoming a successful developer is about learning the profession. And not just with the mindset of trying to bulk up coding skills as quickly as possible, but rather taking the time to mentor apprentices around things like interpersonal skills, cross-functional team dynamics, and mentoring them around getting a deep understanding of the business of building software. We're also missing the time to just be an apprentice. Pressures of billability, they're real, whether you're a product company or a consultancy. But also real is the investment that you're making in an apprentice.
And because of these pressures, the time to just be an apprentice is in jeopardy of being cut short. The demand for developers is high. The junior salary rates and title expectations, many would absolutely say are inflated. Employers and apprentices both are looking for some degree of job security. And yet so few apprenticeships are structured for any kind of long term commitment. Much like we might structure a relocation package, we should look for mutually beneficial ways in which to structure longer term commitments between the parties of an apprenticeship. You know, a promise for something longer than six months. It'll put both sides in a position to focus on what's important in the program, as well as providing for some predictability on the return that the employer is making. Another important ingredient, investing in mentors as teachers. So just because we have a team of seniors, doesn't mean we have a team of teachers. And just because we can make a list of all the skills needed to be a fully functioning member of our team, doesn't mean we can effectively structure a curriculum appropriately tailored to the individual to achieve that set of requirements. Teaching as a formal skill set is missing from some apprenticeships. We should be looking for ways to invest in our mentors as teachers, to give them the ideas and the requisite skills to foster really great apprentices. So what's missing from the apprenticeships? The mindset that programming is more than just slinging code. The time to just be an apprentice and investing in mentors as formal teachers. What's important in developer apprenticeships? We all know that software development is better served with an agile approach instead of a waterfall approach. Our apprenticeship programs are better served as well. Approach the program like you would approach developing a new product. Start with your key stakeholders, mentors and apprentice. Discuss the goals. What are going to be the measures of success? What are the risks? Align on all the most important aspects of the program with a very clear understanding of each person's role in that construct. A mutual and well understood commitment to those goals is critical for success. Also critical for success is diligence in the process. So rigorously implement and adhere to iteration planning meetings. At each one, talk about what went well, what fell short of the goals, and then figure out why and how and fix it. And have a backlog and groom that backlog because over time, as the team gains insight into an apprentice's specific strengths and weaknesses, now you can adjust the tutelage and the expectations and you can make sure it's reflected in the backlog. Retrospectives. Quite simply, do them. Use the time to be honest and frank with the team about mentor and mentee pairings. Maybe some are not working well because of particular learning styles. Maybe an apprentice needs more rotation or less rotation among mentors on your team. Use retrospectives as a place for mentors and apprentices to address assumptions, address issues and expectations. And then make some critical decisions that protect the goals of the apprentice, the goals of the employer, and the goals of the program. Also important is a two-way street because self-directed learning, it can't be the only thing anymore. The industry moves fast. The demand for top-notch developers is growing exponentially. The expectation of the apprentice should absolutely include self-directed study. 
An apprentice should exhibit a desire to immerse themselves in the learning and work hard. But the burden of keeping the apprenticeship on track can't just fall on the apprentice. The apprentice needs to know that the apprenticeship is equally important to the employer. An apprentice generally lacks a deep grasp on their importance to the business. And this is really unfortunate, because when an apprentice feels valued in their role, it has a very positive influence on their performance in the program. An apprentice needs to see an employer's commitment to the construct, and they need to feel like the employer is an advocate in that apprenticeship. That the learning investment is just as important as the apprentice becoming billable and moving into that role of developer. Holistic value is important. So again, measuring the whole apprentice is important not just for the employer, but to help that apprentice gain confidence as they're working to bring up their coding skills. For me personally, being able to contribute to the team in other ways was a huge benefit in easing some of my code skill anxiety. And it was also critical in moving my focus from one of "oh my gosh, how fast am I going?" to "okay, am I doing what I need to do to continue progressing?" So remember to consider everything that makes up the value of an apprentice. Coding skills, creative problem solving, personality. And last, but definitely not least, important in developer apprenticeships is time. Because burn rubber does not mean warp speed. I spoke with a lot of developers and apprentices as I was working on this talk, and I asked them, just out of curiosity, if they thought it was possible to reach senior developer status in one year. And all of them gave me an emphatic no. And all of them said they wouldn't even consider someone a senior close to that, maybe not even for two or three years; many said a lot more than that. And why is that? Because there is no substitute for experience. Consider Malcolm Gladwell's Outliers, if you've read that, in that all the greatest athletes and musicians had one thing in common, and it wasn't an innate talent, but rather deliberate practice and enough time to get good, like to the tune of 10,000 hours or 10 years. And consider apprenticeships of the Middle Ages lasting seven years, plus three years in journeyman status before they're even eligible to apply as a master. And consider some of the modern apprenticeships that we talked about today, like doctors, electricians, even tattoo artists. They invest years in learning their trade and mastering those associated skills. Developer apprenticeships, even if we tack on the developer training program, are lucky to amount to one full year. But what is it really about the short versus long that's so important? Like, if I'm an apprentice, why would I bother with a long apprenticeship over a short one? I mean, getting to that bump in title means a bump in salary. Why would I waste years if I could accomplish it in a matter of months? Conversely, if I'm an employer, why would I invest in the prospect of years over months? What's in it for me? How can I justify an investment of that size? I think because learning to play the notes for an instrument is not the same as being a musician. Learning to play is but the first and arguably the smaller hurdle.
Applying that skill over and over and over again, gaining proficiency over time, is what marks the difference between one who can play an instrument and one who can make music. The intangible elements of communicating a vibe, a feeling, or riffing something new off of an existing song: that's the mark of a great musician. And you need that runway. It's a business relationship between employer and apprentice. And the common goal should be one of building a better developer: a well-paid developer who can exercise autonomy and also integrate seamlessly within any team. A developer who stretches beyond just the task itself and exhibits thought leadership. A creative developer capable of applying a systems view in solving the problem, and vision in what the finish line looks like and how to get there. And the apprentice who becomes this developer is a valuable asset. And right now, today, they can write their ticket almost anywhere. And the employer who fosters this kind of apprentice is well positioned to retain them for the long term, which is definitely worth the investment and certainly more economical. So what's important in developer apprenticeships? An agile approach and diligence in that application. The two-way street: put in what you want to get out, and that doesn't just mean the apprentice. Holistic value: measure more than just the hard coding skills. And lastly, favor the marathon. We know becoming a great developer is not a sprint. It's a marathon. So let's find ways in which to construct apprenticeships that are less about sprinting and more about a long-term commitment between teammates. So I've talked about what's scary, what's missing, and what's important in developer apprenticeships. I hope that some of it was fresh perspective. But I also promised you some ideas. So here are a few thought starters that I brainstormed with my team, and if some of them strike you as too radical, well, then I hope that they at least plant a seed of ideation as you think about creating or adding to some of your apprenticeship programs. Start at two, out of the gate. Structure your apprenticeships for two years. Pay the apprentice well, but you don't have to be obnoxious. If you want to give them a pay increase after six months or one year for whatever reason, great. Do it, but don't change their title. Keep them titled as an apprentice for two years with an option to reapply for continued apprenticeship after they complete the program. Or if you feel like they're ready, let them begin working off their owed labor of one year as a developer on your team. That's right. Start at two, but contract for three. Treat your apprenticeship program like a product. Integrate it into the fabric of your business and the team. Then your team knows, when they get hired with you, that mentoring is part of the job. And as such, you're going to give them the tools they need to be good at that job. And conversely, they're actually expected to help foster and improve that product over time. And I think this is really important. I think you need the team to take some ownership and realize accountability in the apprenticeship program for it to be successful. Hire a teacher to teach your senior mentors. Lots of teachers have their summers off. I'm sure they would love to make consultant rates to come in and do some workshops with your team.
They don't have to be developers necessarily, but find someone who can help your team think about and structure a long-term curriculum, and evaluations to support that curriculum, and to give you some pointers on how to engage with students of various learning styles. If you don't know any teachers, then reach out to a place like Turing. They know a thing or two about teaching students. The point being, don't underestimate the skill it takes to be good at teaching. And so seek advice and invest in that. Include the idea of a journeyman in your apprenticeship program, but focus it internally. So what I mean is, have your apprentice travel around to various teams, even departments, especially departments if applicable at your company. Encourage the development of skills outside of just writing code. If you happen to be a large-scale product company with lots of different development teams, give them time to bounce from team to team. Understand the varying dynamics and kind of figure out where maybe they would fit in best. And if you're a small consultancy, bring them to new business meetings. Include them in parts of the process that are not just about writing code, like design, research, testing. Because a well-rounded apprentice can become quite the valuable developer. Thank you.
|
We talk about bringing new developers "up to speed quickly." Imposter syndrome is bad enough, but often junior developers feel pressured to learn faster and produce more. Developers often focus on velocity as the critical measure of success. The need for speed actually amplifies insecurities, magnifies imposter syndrome, and hinders growth. Instead let's talk about how we can track and plan progress using meaningful goals and metrics.
|
10.5446/30649 (DOI)
|
So I'd like to start off this presentation today with a quote: any sufficiently advanced technology is indistinguishable from magic. That's a fact. And it's something that we take for granted, because as web developers, we work with technology every day, and it starts to become routine. And we forget. If you were to go out onto the street here in Atlanta, and you were to ask somebody, how does the internet work? They would tell you the truth, which is that it is magic. The internet is magic, and we are goddamned wizards. Just look at our beards. We build the internet. We build this thing, this core foundation upon which our very society is built. This thing that our government depends on, that our economy depends on, that our culture depends on. We build it from bits of HTML and CSS and JavaScript, and it is magic, and we are wizards. And as anyone knows, with great power comes great responsibility. All of you in the room with me today, you have a responsibility. Nay, you have a duty, a civic duty to yourselves and all the muggles out there to use your powers for good, rather than for evil. Now, some of you, I know, use your powers simply for your own amusement, or to earn a simple living. Well, today I hope to inspire you to use your powers to bring lightness and good into the world, to take up the quest of civic hacking, and make a contribution to our society, to our community. And I hope today not just to inspire you, but to warn you of some of the perils that you may face on your journey, and that I faced on mine, and to equip you with some of the weapons that you may need for your prepper backpack, so that, as DHH would call it, you can be ready. Now, I myself started my journey into civic hacking, and into Rails magic in general, not so long ago. I was a student at the Flatiron School. And it was a snowy night in New York City, and I was up late on campus in Manhattan, downtown in the financial district, right by the Wall Street Bull. And it was snowing outside, and the icy wind was cutting through the streets. And even in the halls of our ivory tower of technology and academia, it was cold. You may not know this, but New York City's old buildings are not just full of history. They're also full of broken boilers and drafty windows, old steam pipes with peeling paint running to rusty radiators, especially in lower income areas. This problem is epidemic. And people who live in buildings like this, they don't have control over their temperature, like many of us do. They don't have a thermostat on the wall. They depend on their landlord to provide sufficient heat, because all they have are these steam pipes. And in lower income areas particularly, although the problem isn't limited to that, a lot of people do not get the heat that they need. And this isn't just a problem for New York City. It's a problem all across the northern United States. It's a problem in Chicago, Philadelphia, Boston. And it's not just a problem in the United States. It's a problem all over the world. They have the same thing in the UK. Now, it seems every year in New York, where this problem is particularly bad, there is a new story coming out in the New York Times. In the year that Heat Seek got started, this was actually a really hard story. It was the Ortega family. They lived in an old building in Crown Heights in Brooklyn. And Christina Ortega has a daughter with cerebral palsy who was six years old at the time. She requires a lot of medical equipment.
And they were not getting sufficient heat. She called the landlord. She called the city. She complained. Nothing was happening. So she went out to the store and she bought a space heater. And when she plugged that space heater in, the draw from the heater and from the medical equipment was too much, and it blew a circuit. So she had to choose one or the other. The best that she could do was to prop a mattress up against the window to try and keep the draft out. And this is not an isolated incident. I wish that it were. But there are 200,000 heating complaints in New York City alone. God knows how many there are across the United States, or across the globe. One of my teammates made this map. We call it the cold map. You can see it's broken down by zip code. The more heating complaints there are in a zip code, the darker the blue. And unsurprisingly, this corresponds really tightly with the areas of the Bronx and Brooklyn that house mostly lower income demographics. So it's a big problem, right? Because it's dangerous. But it's not just dangerous. It's illegal. It's actually against the law. There are housing codes in place that prohibit this. In New York, it has to be at least 68 degrees during the day and 55 at night. There are similar laws in Philadelphia and Chicago, with slightly different rules. And the issue isn't that people aren't aware. It's that it's not practical for a city to know the temperature inside of every old building within its borders. New York City has so many old buildings. I mean, it's an impossible task. It's a limitation of technology. And I remember that night at the Flatiron School, looking out at the snow and trying to come up with a project to work on for school. How hard could it be? How hard could it be? Now, granted, it was actually really hard. I didn't know this at the time. I was, like, really naive. And this was the first app that I ever tried to build. So I didn't know what I was getting myself into. And I certainly didn't envision what Heat Seek would then go on to try to become. I didn't envision a network of temperature sensors all across these neighborhoods. I did not envision an app that would analyze and produce reports to give to advocacy groups that would then use that information to bring about justice. All I imagined was just a simple graph that connected to a simple wireless thermometer. I thought, how hard could that be? How hard could that be? And then I fell for the hackathon trap. So this was the first of five traps that I fell for in my quest in civic hacking. The first five that I know of. There are actually probably way more that I haven't discovered yet. But five that I know of. If you've ever been to a hackathon, you know what it's like. And everybody's sitting there thinking, I've got this great idea. And if I just focus on this one idea for the next 48 hours, to the exclusion of everything else, to the exclusion even of bathing, which is why hackathons smell bad, then maybe it will be cool before I get distracted. Because it's so easy to get distracted. There's so many other cool projects that we could be working on. And what happens is what happened to me, which is that you burn out. You can't possibly keep that up. You get through the 48 hours. And afterwards you have this kind of okay thing that you really just cannot stand looking at, because you just burned out so hard on it.
And the app would have died, I think, if it hadn't been for a couple of things that we'd done by mistake. I'd roped another student in with me, who was also starting to lose focus. So how do you stay focused? Well, the first thing that we'd done was we had agreed to give a presentation in front of a bunch of students, at least one of whom is in the audience with me today. And we were going to be horribly embarrassed if we didn't have something to produce, something that we could demo at this presentation. Another thing that we did was that we made a promise to a user, somebody who actually had an apartment that was too cold and who wanted this technology so he could generate some proof and take it to his landlord, or take it to the city, and get the problem fixed. And... squirrel! Keeping you on your toes, guys. Stay focused. So put your reputation on the line. That's the key to overcoming the hackathon trap. Put your reputation on the line. You may be thinking, but I don't have an idea. You had this bolt of inspiration, and you were sitting in the Hogwarts of web development, the Flatiron School. But I just don't have an idea. Well, I'm totally exaggerating. I didn't have a bolt of inspiration. I had a project that I needed to come up with, and so I went to open data sets. If you go and you look at some of the data sets that are just coming online, there is so much government data at the city level, at the state level, at the federal level, coming online that no one has had a chance to dig into yet. Nobody's dug in yet. There's a ton of stuff that you can come up with. They just now have traffic information that shows where collisions are happening in certain cities. How cool would it be to map that out and see what's the most dangerous intersection in America? Live-update that. Or better yet, don't use other people's technology. Don't use other people's data. Get your own data. 56% of US adults have smartphones. They have geolocation. They have microphones. They have accelerometers in them. You can collect a ton of data and use it to affect policy. People don't want to invest in infrastructure because it's expensive, and it's not always politically expedient. And some of the old infrastructure in the United States is in terrible disrepair and really needs help. We have the ability to crowdsource that information. We have the ability to make it politically viable to invest in our own infrastructure. And you know, if you really want to have an impact, automate advocacy. Don't just take open data and do something cool with it, but automate the process of advocating for people who need help. Because the advocates in our society, the social workers and the public defenders and the community activists, these people, they are always underpaid and always overworked. There's never enough of them. And if you can take just some of the hardest parts of their job and automate it for them, you can have a huge impact. And that's what Heat Seek does. We don't work directly with tenants. We don't have the time or the staff or the skills for it. We empower advocacy groups by automating one of the hardest parts of their job, which is to gather evidence. So you may be thinking, but I don't want to listen to all of your stupid civic hacking ideas. I have a way cooler civic hacking idea. And I bet not only do you have cooler ideas than me, but some of those ideas are actually already better implemented than mine. But I also bet that the majority of those ideas are languishing, unloved, at the bottom of your GitHub repo.
Why is that? And I think it's because of the Field of Dreams fallacy. So it's the second trap that we fell for. We thought, if you build it, they will come. And so we built an MVP. And we had it working. We had it connected to temperature sensors. We had it collecting readings. We had a user in the field who was actually showing violations. And then it started to get boring, and we started to lose interest, because nobody really cared except for this one guy. And the reason is because, well, nobody knew about it. We didn't tell anybody. If you build it, they won't come. If you tell them about it, they will come. You need to tell people about it. Hype it up, whatever you're working on. Hype it up. That's the cure for the Field of Dreams fallacy. But be careful with hype, because it's a double-edged sword. And that was the third trap that we fell for. We got drunk on press coverage. We went on a publicity bender. After we started hyping up our project, we entered a competition, and there was a popular vote. And we spammed every man, woman, and child to get them to vote for us. I went through my whole Gmail contacts. Everybody who I had ever sent an email to or received an email from, and spammed them. Even my own grandparents. It was embarrassing. It was really a low point. But the result was that we got a lot of attention. And at first, it was just blogs. But it got to be bigger and bigger names. And then it was Fast Company. And then it was Reuters. And then it was CNBC. And then our moms were, like, really excited. And it felt like that was the most important thing for us to focus on. And so we stopped coding. We started writing talking points. And we ended up concentrating all of our attention on the areas that were most interesting to the reporters, rather than the ones that were most important. Reporters are not going to ask you, what's your code coverage like? Or what are your uptime metrics? Or how does your retry logic work? They're going to ask you, what's your strategy for growth? And how many users do you have? And how many buildings or whatever are you going to be in? It's all a terrible distraction. We got really sucked into it. And then it got worse. It got worse. We started pissing off the very people we were trying to help. One of the things that happens, if you hopefully eventually have a civic hacking project that innovates in some area, if that area happens to be helping people in a lower income demographic, one of the things that's going to happen is people are going to tell you, oh, William, you're so noble. You're so magnanimous and generous, dedicating your talent and your time to help these poor, helpless schlubs who live in dumps. And it's bullshit. I mean, it's not true. But it's really easy to get sucked into. And then you start thinking, oh, yeah, well, I am rather magnanimous sometimes. Why, thank you, of course. And then the people you're trying to help, they hear that. And they're like, man, I really don't need your pretension right now. It's already cold. I've been dealing with this same problem for years before you came along. Don't need your help. Why don't you just go back to playing World of Warcraft or whatever it is that you do, and stop trying to save us with your code? And that's a valid point. And then it wasn't just the tenants. It was also the government agencies, like the housing department, that we were hoping would adopt this technology. That's another area civic hackers often innovate in.
And I want you to be aware: if you innovate in the public sector at all, people are going to tell you, oh, thank god that you young, smart innovators are going to come in to this old government bureaucracy of terribleness, where everything is just wanton spending and gross incompetence. You guys are going to come in with your code, and you're just going to disrupt. It's going to be great. And it's also, again, really easy to get sucked into. But the truth is that actually those government agencies have been working on these problems for decades. And they have a really deep understanding of the issue. They are usually doing good. And they can foresee a ton of problems that you're definitely going to run into. But if you're going to be a dick about it, then we'll just sit back and watch. You guys can figure that out on your own. We'll maybe laugh at you. And that's bad enough. But then it got worse. We got inundated with all of these requests for us to pivot. Petitions to pivot, I called them. They were usually ridiculous. Can you just use this technology to help with global warming? Or could you guys maybe just become competitors to Nest? Or when are you guys going to start monitoring air quality? At that level, it's pretty easy to brush off. But sometimes they start to sound like plausible feature requests. Can you expose an API for us so that we can consume your data? And yeah, that's actually a great idea. I think there's a story on the backlog for that; maybe we should just bump it to the top and bang out an API for you. They always say, this is a great opportunity. Anytime anybody tells you, this is a great opportunity, be careful. What they're actually saying is, I want you to focus on what I want rather than what you're actually doing. That's the great opportunity. And there's a reason that that API was at the bottom of the backlog. And the reason is because, again, we didn't have any data to share. We needed to focus on actually getting data so that an API would be useful. That was the challenge. We got sucked into all of these distractions. And you're probably thinking now, OK, well, William, given what a horrible screw-up you are and all of these problems that you've had, why am I listening to you? Well, we turned it around. We did help keep the heat on. I think we had dozens of sensors in the field collecting readings by the end of it. We had actual violations detected. This is a graph from one of the tenants who's in active litigation right now. We've anonymized it, so you can't see the names. But those orange dots at the bottom, those are readings that were recorded when the temperature was below the legal limit. So how did we turn it around? What were the secret weapons for the prepper backpack, as DHH would say, that brought us through all these trials and tribulations on our quest into civic hacking? There were four secret weapons. The first one is non-developers. Non-developers tend to get a bad rap, I think, especially in that hackathon culture. You've all been there. There's that one business guy on the team who does nothing for 95% of the competition. And at the end, he gets up, and he gives a bad pitch, and then he tries to take credit for the vision. And so it's like, OK, non-devs are actually dead weight at hackathons. But in real life, though, they're awesome. Because any project, whether it's a civic hacking project or any other project of meaningful length that has longevity, is going to need more skills than a developer has.
Developers, we are narcissists. We think that we can do everything, and that we're the best person to come up with the long-term strategy and the vision and all of the code and the social media and the marketing, and we'll just do the books too. But actually, if you turn over those responsibilities to people who actually specialize in those areas, well, all of a sudden, you're free to get back to actually coding. And that's really powerful. The only issue is that if you're a civic hacking project, you bring on all these non-dev volunteers, and then you just spend all of your time coordinating, which brings me to my second weapon: managers. This one I know is controversial, because managers are a particularly hated type of non-developer. They'll ask for estimates, and then they'll use them as deadlines. They'll make you go to a bunch of stupid meetings that don't matter at all. But that's actually not managers. That's bad managers. Good managers will protect you from meetings. Good managers are like meeting shields. Somebody tries to make you go to a meeting, and you just grab the manager, and you block it. That's what good managers do. They keep you focused on what you do best. They keep everybody else focused on what they do best. And once we got some managers, people actually started getting back to work. The third secret weapon is the day job. This one is really important, I think. I am very fortunate. I work at SunGard Consulting Services. It's a great dev shop. They are super supportive of me and my civic hacking side projects. They paid for me to come to RailsConf and give this talk. They always tweet about us when we're doing well. And most importantly, they give me that regular paycheck, so that I don't have to worry about money. And that means there's no temptation to bastardize the original idea, which was about giving something back to the community, and instead turn it into something profitable. We talked to investors. And investors are sharks. Their big idea was: hey, this is great, actually. You have a great technology. What if we could change the branding? Instead of focusing on poor people, we just sell this to rich people. Oh. Yeah. That's a good idea. No. It's, like, the whole reason that we started this. So be careful. If you have a day job, though, if you have a good job by day, then you can fight crime by night. Like Superman had an awesome day job. So the fourth weapon, this one's a little bit cheesy, but it is honestly the most important weapon in your arsenal. And that's perseverance. Because perseverance will make up for any number of blunders, any number of pitfalls, any number of traps that you fall into. If you get back up on the horse and you keep going, eventually you'll make it through. And I'm not talking about individual perseverance. I'm not talking about just one person. Because there will come a day when your confidence will fail you. There will come a day when you will be ready to give up. I'm talking about the perseverance of the team. Because if you have a group of people with you to support you, then in those times, you have someone to turn to and say, why are we doing this? And they'll tell you: this is why we're doing this. This is why we started. This is where we're going. This is why it's worth seeing this through. And as a project, we have an organizational habit. We meet every Sunday to tackle these problems together as a team.
And it's that organizational habit, coming back every week, time and time again, that has kept us going for, you know, it's been a year, over a year now. So we Rubyists, we're a community also. RailsConf, this is another organizational habit. You may be thinking, well, OK, so I'm prepared now. I'm equipped. I know the perils that I may face in my quest into civic hacking. I see the value of it. And I have some tools that you've equipped me with so that I might be better prepared. But I've forgotten. I've forgotten why I was excited at the beginning of this talk to actually do any civic hacking. Forgotten why this is important? Well, like I said, we're a team. I'm here to remind you that the internet is magic. And we are goddamned wizards. We have a responsibility. Nay, we have a duty, a civic duty to ourselves and to all the muggles out there to use that power for good rather than for evil. If even 1% of the commits that we put out went to civic hacking projects, we would live in paradise. So this is your call to action. After you leave this conference, join your local Code for America Brigade, fork a civic hacking project on GitHub, or start one of your own. Because the society that we live in has got issues. So fork it. Let's make a pull request. Thank you.
|
Do you want to use your coding skills for good rather than for evil? Did you ever want to build something to make your city or your community suck less? Here are some lessons from a civic hacker on how to kickstart your project. Hint: it's nothing like writing a gem.
|
10.5446/37889 (DOI)
|
Excellent. First of all, just welcome. This has been an amazing day. All of the talks have been terrific, and I just want to give a big shout out to the speakers, the organizers, and then especially Hannah and Desmond, who have worked so hard to make this amazing event in this venue. So just, please, thank you. I also wanted to give my bachelor rose to Andrew, who gave me his clicker when my battery died, so you can get this after. My name is Sarah and I introduce myself as a programmer, comma, person. And sometimes I switch it up and say person, comma, programmer. And for work right now I'm in the middle of a sabbatical adventure. And so this talk is going to address the programmer, comma, person aspect of all of us. So I was really honored to be asked to speak here, and one of the reasons is because this conference, right on its web page, says it's for curious people. And I love curious people. And because Elixir is a relatively new language, by default everyone here is an early adopter. Someone who gets excited, someone who plays with languages in their spare time. And because Elixir is near the beginning of its journey, that means that everybody here by default is at the beginning or early middle of our journey with Elixir. So also, we're in LA. Which is super exciting, and I bought my conference outfit at the airport gift shop. I have an LA fan. I don't even know why, I just have an LA fan, and see, it's so great. People come here, they dig gold, they try to become movie stars, they try to write programming applications with hot swapping capabilities and concurrency. But in this country, the idea of "go west" is very powerful. Whether it's physical spaces or imaginative spaces, going into the new is something that we put a high premium on here. And so we are all here today, both physically west and new-ideas west, sharing this day together. So just, cheers to that. And before I get into the meat of the talk, I want to go over a quick legend so we're all on the same page. This is a shark, and this is an Erlang shark. And I'll just take a moment to remind us that in our metaphorical understanding of sharks, and also probably in our actual experiences with them, sharks are scary. You could say they represent the deep, the unknown, sudden death... well, surfing. This is a galaxy shark. And there's, like, three or four documentaries about this recently. Like, you're not even safe in space anymore. So yeah. So this keynote is unequally divided into the following sections. And I want to begin with who we are. And by we, I mean software developers. And as software developers, we are asked to constantly learn new things. Languages, frameworks, domains. But at the same time, we come to understand that our role is managing change. We build new applications. We phase out old ones. We change the way business is done, effectively. And managing change means talking to people and working with people. And that is often the same, again and again. So in short, we're asked to get on the rocket ship and go to new exciting places. But we also need to be able to sit in a living room and have conversations with people. We need rocket ship skills and living room skills. And probably in the proportion that we see in this slide. Developer/architect and teacher/learner. These are two of the roles that we tend to think of ourselves as filling most often. And there are lots of titles surrounding these roles. Like, look at some of these.
I'm sure you've had some or all or variations of these, or even ones that aren't on the slide. And then take a minute to think about other professions, like law or medicine. They have hierarchies and titles too. But you don't really hear lawyers calling themselves, like, statute hackers or law gurus. And doctors aren't, you know, body craftspeople or lung architects. But to me, what the plethora of titles really indicates is that software development is truly fluid. What a title means, and what you do while you have that title, is not predetermined. Which speaks to the nature of the field. And while we spend a lot of time focused on developing and teaching, one of the things I've learned is that the skills in the bottom two quadrants, which I've bucketed under productivity expert and business partner, are skills that you develop, or become aware that you may already have, after many years. And they are as important, if not more so, than the first two. In any given job these roles all merge and their lines blur. And in fact, in any given product or launch, we flow in and out of these different roles on a frequent basis. We are not individual contributors. We are imperfect circle-squarers. So as imperfect circle-squarers, let's go forth and talk about what and why and how we learn. And since we're at ElixirConf, which is focused on teaching and learning and being productive in a relatively new language, let's start with this balance. Individual contributor, individual learner. So as a day-to-day dev, you're responsible for many things. So while the sun is up and you're at work, if that's how you do it, you're responsible for getting your stories done and being productive. But when the sun is down and you're on your own time, you can choose to learn without worrying about output. And our career cycles between the two. And in fact we cycle between productivity and learning on many time frames. You can cycle in an hour, a day, or you can take longer dives, spending months or years focused in one area or the other. And I see them like day and night, office time and home time, focused-in-the-world time and introspective time. And you are the craftsperson, the hacker, the guru, making your way along the path. So why do we keep learning new things? Because it's not always fun. And arguably, once you've mastered a skill, you can choose to ride that skill for a while. But you know, there's lots of reasons. And basically we're a curious field. And one of the things that drew me to software, and all of us, I think, is how full of challenges it is. There's always something new to learn, something exciting and different and relevant and delicious. And as curious people, we are drawn to new adventures. This is a sticky I made from a project I was prototyping last year. And I've been thinking about learning and the balance of productivity and learning because I'm in the middle of a kind of choose-your-own-adventure learning journey. And I'm going to share some stories from the last couple of years as a way to give context for my thoughts on software, as well as to present different paths that you can take to build your careers or the adventures that you want. So yes, I've spent the last 18 months entirely learning new technologies, since June of 2016. And usually this kind of learning journey phase is something people do early in their careers or when they're students. But for me, I've been paid as a dev since 2001 and I plan to be coding sort of well into my 80s.
So you could say I'm about a third of the way through my career. And that's all the age math you're going to get. This journey started unexpectedly, though the stage was set. About 18 months ago, two things coincided. And I'm not going to speak to the image on the left. But the implication of this event was that the person pictured was going to be the Republican nominee for president. As to the words on the right, however: I had been working with a great team at a great company for about four years and had helped steer the company through a lot of growth. But I wanted to stretch my brain in new directions. And to paraphrase the great thinker Sophie Kinsella, I felt like technologies were moving on without me. I had a suspicion I'd fallen onto a manager track that was going to lead me to skill atrophy. And they don't all, not all manager tracks do, but in my case that's what was happening. And this state of mind coincided with an amazing opportunity to join Hillary Clinton's technology team. And that aligned with my values and gave me a place to put my energy, as well as a place to channel my abject terror. So I jumped at the chance and I went to Brooklyn. And this, I will say, was not a stagnant environment. It was, you can see, many people at a desk, many desks at a people, it was crazy. And there's really been few times in my life where software deadlines have had, like, actual meaning. But working on a presidential campaign was one of those times. And on my first day, it was June 27th, I got there and I was told I had to have a product, it wasn't even fully conceived yet, but it had to be launched by July 17th, because that was the day of the counter-convention, which is a thing where after the Republicans do their convention, the Democrats get to speak back. And you don't get to say, like, oh, that's hard, it'll be ready the week after the counter-convention, or this other thing, how about November 10th? Like, you just don't. So instead you get to say, holy shit. I don't know any of these technologies, it's a completely new team and environment. But by God, I will learn. And this will be out by the counter-convention on July 17th, hello, 3 AM. And it was. And we had a great team and we worked long hours and we got it done. And the team that I was on was focused on engagement. And we launched seven projects in five months and prototyped about 10 others. And there were also seven other engineering teams focused on different aspects of the campaign. And everybody worked very effectively. But not everything always went smoothly. For example, two hours before the second debate, we had written some fact-checking software and our CI system just slowed down. And so I was literally sitting there watching the critical release as the secretary walked onto the stage. And it was terrifying. But it got there and she used it and it was okay. And I made mistakes. Temp files do not clean themselves up, people. But none of them had lasting production-level impact. And one of the things I've been thinking about with this whole campaign is, how is it possible that we got so much done? And one of the reasons the tech team worked as well as it did was that it had excellent leadership and it was highly diverse. And just a fact: there were over 10 senior female engineers on that team, including the CTO, which was like a first in my career ever. It was the most diverse tech team I'd ever worked on.
And just in terms of conversation and experience and collaboration, it was the most high-performing, and also unforgiving of subpar work, and nicest team that I had been on. So if you ever hear that line where people say, oh, we don't want to lower our standards, culture fit is wrong, blah, blah, blah, blah, that's some bullshit. When you bring accomplished people from all walks of life who know how to write software together with great leadership, amazing things can happen. Another reason things worked so well was there was a specific process for design thinking and idea sharing that was used at the campaign. It was brought over by someone from Google. And so in the midst of all of the long hours and all of the shifting focuses and everything, every time any team embarked on a new project, we went through a design review process, which was effectively a structured conversation in a living document in which all the devs could participate by asking questions, participating asynchronously. And then at the very end of this, get in a room together for an hour and solidify choices. And the project that I initially showed was designed and built and launched in under two weeks. And a lot of that is interesting because of the security and scalability concerns on the campaign. But in that time, in that two weeks, over 30 engineers participated in the design process, even though there were just three of us on our team building it. And a wide range of questions about security and scaling and product and operations and user experience were highlighted and brought to the fore. So this type of process is a way to structure conversations such that effective software gets built. And so these structural pieces, the diverse team, this great design process, were in place and supported me through multiple projects, multiple technologies, and multiple awesome dogs. This is Pocket. And with Pocket, I learned to use build and automation tools like Gradle and a customized CI tool that created immutable AMIs on AWS. And I'm going to come back to this a little bit later. But one of the things I've been noticing in the past few years is the increasing divide between dev and DevOps in tech orgs. And in fact, one of the harder pieces of campaign technology work for me was getting my head around the DevOps piece. But I'll come back to this. This is Pupperton. And have you ever seen a dog that is, like, more whitespace-honoring than Pupperton? I became proficient in Python and its frameworks with his help. Winnie was there. And neither Winnie nor I knew what Lua was, but I learned it enough to simulate high load on our services to identify any bottlenecks for election day. And then Social Tito here guided us through multiple use cases of Facebook's Open Graph API. It was a very fun, very costumed-dog campaign environment. One of them had a drawer of sweaters. But working on the campaign was an amazing experience. And I had thought when I joined, like, it just might be this chaotic thing where everybody's in a corner kind of freaking out. But it was a high-functioning, well-run tech org. And I'm proud of my team. I was proud of the candidate and I was proud of our work. But when it was over, without the result that we had hoped for and that we had worked so hard for, I realized I wasn't ready for a full-time job. Do I look ready for a full-time job? I can't even tell if that's my actual arm or just, like, a stick with a hand on it that's holding up some booze for me. And so time passed.
And to make myself feel better and more optimistic about the world, I rewatched the entire five seasons of Breaking Bad. Which was, like, genuinely calming. And then, just to do something trivial and lightweight, I decided to learn blockchain. I don't know. I was like, I want to learn blockchain now. So I took a contract at a company that was doing some blockchain R&D. And what I got to do was DevOps. And this was interesting because, as I was mentioning before, one of the big gaps I've seen and experienced in dev teams over the past few years is the space, the growing space, between DevOps and dev. And both of the ecosystems are getting more complex. There's more organizational silos. So that's just something to think about. It's almost an aside. As we move forward, I think we need roles, sort of cross-functional roles, to straddle those two areas. But I certainly wasn't there yet. I was grappling with Docker and Kubernetes and Helm and Geth and JavaScript. And since I decided to make 2017 the year of learning to program for the Ethereum blockchain, I also took a very difficult course. And this was also hard. But I became, at the time at least, the first female certified Ethereum Solidity developer from that course. So I say these things not just to list fun things I've done, but as reflections on a journey through a bunch of unknown landscapes. A journey that I started when I said to myself, I'm feeling stagnant. And reflections on the fact that I basically spent the last 18 months not knowing what I was doing, taking a deep breath, and figuring it out. And we are all figuring it out as we go. And taking a deep breath is actually a true statement, because it was about 12 months ago, actually during the DevOps phase, that I started feeling like I had ADD, anxiety-driven development. And so I started doing a daily meditation routine. So thanks to DevOps, again, daily breathing became a way to stay centered. So why all this? Why do all this? Like, it was rewarding, but it was clearly hard and stressful at times. And why should I tell you the story of me jumping off a bunch of cliffs into new environments? In one part, it's to show how many options you have in our field, all of us. You can go in, you can explore something completely new, and come out standing. I expanded my boundaries, and when you expand your boundaries, your world becomes bigger. I tried new things, and this gave me renewed empathy for beginners and also gave me, like, new insight into some current business problems. Or maybe it just boils down to: get comfortable with being uncomfortable. And we hear this a lot. Jillian Michaels said it here. And then this guy, Eric Thomas, changed a word or two, but he said it as well. And Lou Piniella, who was a New York Yankee in the 1980s, also says it. And then this guy, this guy says it, and he doesn't look even remotely comfortable. He looks miserable. Look at that, dude. And then this one, it's like this tuxedo hobo also says it. And is that what it's about? Is being comfortable being uncomfortable what it's all about? No! This is my cat. She is not remotely comfortable. I also realized that I am not remotely comfortable when I am uncomfortable. In fact, what I am is comfortable being uncomfortable being uncomfortable. Which I also admit might be a kind of personal masochism, seeking out discomfort for Lord knows what kind of familiar territory. But the fact of the matter is that when you're uncomfortable, you're uncomfortable. You just are.
And so the best thing to do is accept the fact that discomfort comes with the territory. It's not going to feel good, and it's not going to be with you forever. And I don't know, but maybe people relate more to this than the other one. I don't know. Maybe you all feel super comfortable when you're uncomfortable. But actually, if you think about it, it's recursive. And it can go on for quite some time. Does anybody know the time complexity of this? But I mean, part of our job, it's our job to learn. Languages, domains, patterns, problems, technical specifications, etc. And so we go from learning to stability, back to learning to stability, in a cyclical fashion. But every time we learn something new, we have our previous learnings about learning to keep us company. And we also become more stable even in times of instability. So for me, even though I was learning, I was able to be high-functioning in professional environments, because as time progresses, one is able to be more consistently productive simply because you're adding to your arsenal of things that you already know and things that you've already experienced. Some things get easier with time, like debugging and picking up new languages. And some things stay the same, like the learning process. Every time you're faced with something new, it can feel like you're looking into a void. And there are some times in the first few weeks, if I'm learning something really new and unfamiliar, where I just have a good old-fashioned cry. You know, you start with your little home and you need to move away from it in order to learn new things. It can feel like you're on a tightrope over an abyss. There are surprises in the waters. You can lose your bearings. And then gradually gain your footing again. The demons get smaller until finally they become friends. And it happens every time. And I'm sure everyone here can recognize that body shift, that physical shift from uncertainty and fear to a gradual sense of competence, eventually followed by confidence. And knowing that you can do that as a developer, and being familiar with that flow, is what gives you more freedom over time. And just to revisit what I said earlier about crying, I don't cry every time I learn a new thing. If I'm learning something at home or for fun, it's no problem. But just let's say my time was spent getting headless JavaScript tests to run in Nightmare.js against a Clojure application linked to an instance of the Geth Ethereum blockchain inside a Docker container, communicating with another Docker container running on a parallel thread in Jenkins. This is the most buzz-worthy thing I've ever done in my life. And there were tears involved. What the fuck? Seriously. What the fuck? We can't spend all our time learning or we wouldn't ship product, pay our bills, or keep our jobs. Which is one of the ways that Elixir comes in. Elixir is a language that was created specifically to introduce new tools, functional programming, concurrency, a message-passing actor model, to a trained fleet of productive developers with as little downtime for ramping up as possible. We heard about it in a talk earlier, about how three new developers came in and rewrote a system in Elixir because it's that easy. And as my friend and Elixir developer Joe Harrow puts it, with Elixir you can stage a coup from inside a Rails shop. He actually said literally, and then he's like, you can't use the word literally, but he said literally. Elixir has two parents. And we know the origin story.
José Valim was working on adding concurrency to Ruby on Rails when he discovered Erlang. And Erlang was perfect at doing natively the things that he was trying to add to Rails, but he found the syntax and ecosystem off-putting. And so he decided to dip the Erlang chocolate in the Ruby peanut butter, and he came up with Elixir. And I'm really sorry that I could not not make this joke. But this is a screenshot from a 1980s television commercial for Reese's peanut butter cups. And you must hunt it down on YouTube and watch it. And right in the middle of this encounter, this, like, benign patriarch of branded candy shows up in the background. So perhaps that's José and Elixir. Anyway, back to Elixir. Languages combining and influencing each other happens all the time. And the lineage is not as straightforward as this slide would suggest. I found this, a fascinating programming-language genealogical tree. You can't even see it. That's Erlang. It doesn't even have Elixir. But languages combine and recombine. The truth is, for web application developers and business owners, there are two roads that lead to Elixir. One is through a rather obscure language invented for telephony applications. And the other is down a familiar road that you are probably proficient in. And this is a straightforward choice. I had Ruby on Rails and I decided to try Erlang. But that was because, A, it was in my spare time, so I had the time to sort of struggle with it. And also, at the time, I personally felt a little sheltered in my languages and I wanted to grapple with something really different. And it was hard. But I grew to love the syntax and the language, though I found the ecosystem difficult. For example, at the time I was learning Erlang, this book was the main resource I found for deploying and managing Erlang apps in production. I mean, look at the title and the imagery on that. Like, it says it all. And I was great on localhost. I was so good on localhost. But deploying and managing apps was something I struggled with a lot, and clearly not alone. So the practicality of being able to use existing developer talent and deploy is huge if you are trying to run a business and make money. And there are real and important improvements in Elixir over Erlang. And we have heard so many of them here today. I just want to highlight a few, but just listen to everybody talk about how easy it is to pick up and what you can do. But string manipulation, just for one, has anybody here worked with strings in Erlang? Would you agree it's easier in Elixir? Yes. Yeah, yeah, yeah, you're like, oh, it's actually a string. I spent a bunch of time in Erlang trying to do a tweet-parsing API. And so I wrote an OAuth library and tried to parse JSON out of these chunks of tweets that weren't connected. And oh, my God. It was like the worst use case for Erlang possible. So this is much nicer in Elixir. And again, like we saw today, deployment is so much easier. People are using Docker, there's Distillery, there's CI, there's just so many more straightforward and familiar tools around deployment. The web framework is straightforward. And I think, like, while Phoenix offers similar functionality to what we might be used to from Rails or Django, it's also nicer than Rails because it simply supports Elixir, instead of that thing where in Ruby land people can say, well, I learned Rails, but I don't know Ruby. Phoenix supports the Elixir application ecosystem and doesn't try to be its own thing.
And it just opens up so many new domains. As an industry, we've solved CRUD, and we've solved services and APIs. And now, with Elixir's ease, with concurrency, and the ability to pick it up so quickly, it just opens up so many new business domains and performance improvements. So my question is, which was the Trojan horse bringing Erlang to more people? Was it the productivity or was it the syntax? You guys. I guess it really depends on what your path was. Maybe Ruby people came for the syntax and Erlang folks came for the productivity. Maybe vice versa, but you end up with the same value. But I wanted to show this slide. This is from a 2014 ElixirConf talk, where José is describing his first draft of Elixir. And I paraphrase, but he talks about how he initially tried to fit Erlang constructs into Ruby shapes he understood. And he used this image to represent: I know what shapes I want, but I don't know how to get there. And he describes going through depression when he realized his early work on Elixir wasn't good, which are his words, not mine. And then he re-approached the project with new understanding. And the reason I bring this up is I think it really exemplifies how grappling with new shapes through the natural filter of what we already know can lead to these round-peg-square-hole moments. And so if you've learned Erlang through Elixir, I would encourage you to take some time to go grapple with Erlang, just to get a deeper understanding of that path to it. I found, when I was reading Elixir books, that to me they seemed a little bit less in depth on some of the core Erlang concepts than the Erlang books that I had read were. And maybe I didn't read all the right books, but it made me wonder if I might have missed some of the nuance. So my advice would be, in your learning time, if you haven't already, pick up Erlang. And just spend enough time with it until you are like, oh, I kind of like this syntax. And see if it gives you any insights into Elixir or if it takes you anywhere new. Because spending time with languages that are different from ours, or with people who are different from us, basically making ourselves the ones who are getting into new, uncomfortable shapes, is a way to deepen learning and productivity. I'll take a quick moment here to talk about this wall we have in our industry and in our speech about software. And this wall stems from conflicting beliefs: on one side, software is magic and it's easy, and on the other, writing software is hard. And this conflict often blocks us both from learning and from communicating, and the word for it is just. And just is an odd word, because it capitalizes on a secret belief that software is easy, and if it's not, it's our fault. And also project managers use it a lot. So when we hear it from others, just represents all these extrajudicial process requests we get. Last-minute feature requests, just-one-more-small-thing asks from someone who's not on your team. And I think it's a word that's used to make you feel bad for not being able to fulfill an unreasonable request. But we use it with ourselves as well, and it reflects that same belief that software production should be quick and easy. It's the way we talk to ourselves when we feel bad about not being as fast as we think we should be, or to avoid speccing something out in full.
So when you hear that word just, from yourself or from others, I would suggest pausing to ask what's being avoided, or whether there are conversations that need to be had. Which brings me to a final piece of this, which is what I call our secret skills. What is our secret sauce? And so to get at that, I'm going to come back to this earlier slide and look at the bottom two quadrants, productivity expert and business partner. These are things that software developers also are, but maybe don't notice as much in your daily work. But the tools that we use to write effective software, and the learnings from working with stakeholders again and again and again, are tools that could augment any business venture. Like, look at all of these techniques and practices that we use to write effective code and launch product quickly. These are absolutely tools that could be used outside of the software development cycle as well. And I have found that many, I won't say most, but a lot of non-technology companies just don't have in place the processes that technology shops rely on. And so they run ad hoc processes to get things done. But without understanding the role that process and feedback play in developing product, building software can be seen as some kind of magic materialization of things. And then this idea that software is developed like a thing, by people who know how to develop things, can lead to an artificial boundary between the business and tech parts of an org, in which one side ideates and the other side builds, a little like elves. And I'm not saying, like, all devs should go to all business meetings all the time, but recognize, from all of the experience that you have, how many informed ideas you have just because of your knowledge of our tools and the patterns you know how to recognize about when software is going to work and when it's not going to work. And so if you see these boundaries rising up in your organization, see if you can talk to somebody about eradicating them so that there's more communication. Here's a little sidebar: in 2012 my partner and I started this consulting service, and we gave pro bono technical advice to non-technical people and entrepreneurs, because we found that they kept running into these really costly mistakes with regard to technology. And this often came from acting on this sort of unexamined belief that software is a thing that you get by, like, buying it from India or getting a student to build it for free. And then they'd lose a lot of money and get these complicated things. And so we sat down with about 400 people over the course of that year, and what we found was that most of the time we ended up giving business advice, just applying the productivity and business experiences we had, just because we were devs, not because we're MBAs, applying that to their business ventures and helping them think more methodically about what they were trying to build and what they needed and what they didn't. And I think through this I really learned that building software, while it involves code, is not about code. Building software really is a conversation about what a business is. And you talk together and you figure out what the business priorities are, what we are going to build first, what the business actually is. You try things together, change them, get feedback.
We have Conway's law, which tells us that the shape of our teams determines the shape of our software. And secondly, software is not a thing. It's an interaction. And this is why there are very few cases where you can have your team in India or your team in Kansas just build it for you, because it's difficult sending a spec to an offshore team; you lose out on that back-and-forth part of the process, and you instead encode this kind of we-think-it, you-build-it mentality. And businesses fail because conversations fail. Like, they really, really do. And from a tech standpoint, we know this slide and it's awesome, and we, or me as a dev, or I'm sure all of us at different points, tend to think, well, if we get our tech working right, it's all good, right? But that's not true. And I'm sure, regardless of your tech, you have all experienced dysfunctional business situations that have impacted development. A known poor performer is promoted, and that deflates morale and delivery. Or a fast technical talker kind of intimidates management, and so common sense is overridden. A company builds a giant product without talking to users, and then it's like, hey, nobody's using this. Like, why? Or a team of 200 devs moves, like, so much slower than a team of three devs. And I know you've seen these and more. I'm sure you can list them out. But all of these are instances where not having difficult conversations, or assuming that there should be some minimal interaction, has material impact on the morale of a team and the success of a venture. And it's not fun. You tune out, you go somewhere else, you quit your job, the software stalls, it gets caught in endless cycles, and at the end, all that's left are clenched fists. And I feel like the most relevant thing I've learned in 17 years of developing software, and weirdly the thing most likely to guarantee technical success, is that software is a conversation. And I literally mean this literally. And by conversation, I don't mean prepared speech like I'm doing up here, but, like, the result of listening and going beyond what you know and understanding what someone really wants, and also understanding what their fantasy is and how their fantasy of a magic software business is leading them astray. You know, conversation is a generative act where two or more people end up with something that just couldn't exist without them. And then also, the output of successful software is actually conversation. Like, think about it: we interact with other humans in order to build experiences that facilitate interactions between other sets of humans. And sometimes we build software to replace people's jobs or to kill them. But for the most part, software is about facilitating a conversation or an interaction that wasn't there before, a connection or an interaction. And so these things that we think of as soft skills are hard, and not only hard as in difficult, but hard as in technical. These so-called soft skills are often, not always, but often, the difference between successful and not successful ventures. So what can we, as the imperfect circle-squarers that we are, do about this? How can we get better at both the hard and the soft aspects of our jobs? Obviously experience is one teacher. And then here are some other suggestions. There's some wonderful non-technical reading: these two books, Crucial Conversations and Radical Candor, both address how to broach difficult topics in an authentic way. And they're worth reading and possibly introducing in your organization.
This one's kind of weird, but take an improv class. Because improv is, like, not about telling jokes and being funny. It's actually about learning to listen to someone else and then spinning an idea together, in, like, spinning an idea together, together. The sentence didn't work, but I think you get the idea. I did this once. I didn't do it to be funny, but I actually thought, oh, you take this to be funny, but it's really about listening. You can ask me about it. Learn a new language, which, really, frankly, is what everybody here is doing. So just keep doing the thing that you're doing. But for your next one, either, like, pick one that you're curious about, or maybe pick one that you think you might hate. Just something different from your most familiar language, and go into it. You'll get better at learning. You'll know another language, and then the one you pick up right after that will be that much easier. It doesn't really matter which, it's just whatever. Find a diverse team, or make your team a diverse team, or join a diverse team. Like, look at how well these four animals are doing together. And the crab is an introvert, but it's okay with the others. And it's not because his bubble wouldn't fit on the slide. This has nothing to do with that. Yeah. This process that I described in the context of the Hillary campaign is a wonderful process, and it's super concrete for vetting and explaining a technical idea. And so I encourage you to look at bringing this system into your own companies. This is what I've done over the last year and a half. And so I would say to anyone who's an expert in their field, and I'm sure there are people here who are very experienced and expert: go back to school, or be an apprentice on something, or put yourself in a work position where you really don't know anything, like join a modern DevOps team, for one. Jesus Christ. Just going back to beginner state will help you identify other leaders who can kind of lighten the load of trying to make decisions, and regain empathy with actual beginners. But on the other hand, if you are a beginner, or you kind of fall back into, like, I'm-always-learning, become an expert for a while. Just be like, fuck it. Step into a leadership role. See what it feels like to take on responsibilities, make decisions. Like, get comfortable wielding a little bit of power. This is nice. I've liked this. Just find a few moments every day to center yourself and give yourself some space. And finally, whatever the fuck it is that makes you look like this dog, do it. And that's it. Thank you so much. Have fun at the party.
|
Sarah has been a software developer since 2001, when she was spit out of grad school from ITP, NYU’s interactive technology program. Prior to that, she’d been directing experimental theater, which gradually led her deeper and deeper into technology and code. Since 2001, she’s worked on a cross-section of projects across different industries. Highlights include building medical software at Mt. Sinai Hospital; leading the technology integration for startup Trunk Club when they were acquired by Nordstrom; and working on Hillary Clinton’s tech team in Brooklyn. In 2017 she learned how to program for the Ethereum blockchain, because in The Future (TM) we will all be on the blockchain. Sarah’s career is predicated on a mix of exploring the new while maintaining solid software development practices in order to make projects come to life. Because of her dual background in coding and experimental performance, she is constantly playing with forms, and putting together new, short-lived (theater-life-span) projects. We’re excited for Sarah to bring her breadth of experience to keynote at Empex LA!
|
10.5446/30651 (DOI)
|
All right. So how is everyone doing? Yay. Cool. So thanks for coming to my talk. I'm really excited to be here at RailsConf. My name is Emily and I am a software engineer at Wayfair, but I come from an art history background, which you probably guessed from the topic. So today I'm going to explore how we think about the act of programming. And one of the ways we do it is through metaphor. Two powerful metaphors are code as art and code as craft. And although we often equate the two, as they are both about how we make things, they have completely different implications. Code as craft is widely popular and has really taken strong root already. Code as art is more of an emerging fringe metaphor. And since you don't hear about it as often, as it's overshadowed by and sometimes confused for the craft movement, I wanted to build a case for it and show you why it's different and why it deserves to be its own separate thing. And hopefully this can help you answer the question of what is code? Art or craft? But most of all, I just kind of wanted to spark discussion and get people to think about what coding and creating means to them, even if they don't think it's one or the other or something else entirely different. Cool. So let's start with metaphors. As we all know, metaphors are direct comparisons of one thing to another. Now, the language of computing abounds with metaphors. Garbage collection is the term for automatic memory management. We have folders, directories, and pages, none of which are actually any of those things. Meanwhile, in object-oriented programming, you have parents and children and ancestors and descendants. And when you connect to an outside program, you perform a handshake to agree on a connection protocol. And I'm sure all of you can think of many more. But the point I'm trying to make is that we can see how, on even these smaller levels, metaphors help us quickly conceptualize and understand something complex by comparing it with something really familiar. Though packaged into so few words, they can conjure up an entire story and express a multitude of meaning all at once. Metaphors matter because they shape the way we think about things for ourselves and how others might also think about them too. And that's why I wanted to talk about the larger metaphor to describe coding. Because that's what we do for a living. We spend 40 hours a week doing it. That's most of our waking hours, right? So we should be thinking about what it means to us and those around us, right? So the metaphor we use to describe coding provides guidance and a framework to understand it, and helps us comprehend the answer to the hows and the whys. So the first movement I wanted to talk about is code as craft. And I wanted to talk about what the metaphor entails, how it's been applied, and then I'd like to think about how it's influenced our community. So what is craft? In its pure folk definition, as sociologist Howard S. Becker puts it, craft consists of a body of knowledge and skill used to produce useful objects. But there is a huge history to craft which predates us by hundreds of years, and its strong history and traditions deeply inform and underlie what the term means to us today. So if we were to hop back into a time machine, into the Middle Ages, we'd find craft guilds, which were associations of makers. Now, every single craftsman belonged to a guild, and they were a vastly important part of civic life. And they supported the central mode of production for everyday needs.
You had silversmiths and stone masons and lace makers and cobblers and bakers and every single specialization of handmade good you could think of. And these guys trained their entire lives to become masters of their trade. And they did this through a really defined accomplishment system. Your typical craftsman started out as an unpaid apprentice, usually super-duper young. You see the guy in the back there? He's an apprentice and he's probably like 12 years old, so super young, and he would move in with his master to train. And although he was only learning the basic technical aspects, his life 100% revolved around his work. He lived, breathed, and slept his craft. The apprentice would claw his way up to the rank of journeyman when he was finally good enough to produce stuff and get paid for his labor. Now he could stay at this level or spend the next several years gunning for the rank of master. And as a master, he was a highly competent craftsman who could set up his own shop and take on apprentices to pass on the traditions. See the parallels here? Cool. And while guilds functioned as forums for nurturing competence, they also emphasized a sense of community. In addition to the strong tradition of mentorship, guilds involved close collaboration. Members worked close to one another, teaching each other stuff and giving advice. Craftsmen also really cared about their customers. They wanted to produce a good quality end product. So they inspected the workmanship quality of all items and regulated prices and supply to ensure fairness. And they also developed relationships in general with their customers. So the Code as Craft movement has gained quite a bit of momentum in recent years. And you can really see how it's been able to draw from the distinct history and traditions of craftsmanship. We can trace the movement back as early as 1999, when Andrew Hunt's book The Pragmatic Programmer: From Journeyman to Master made a pretty enthusiastic nod towards craft even in its title. In 2002, Pete McBreen coined the term software craftsmanship with his book of the same name and proclaimed that to produce quality software, we should think of what we do as a craft and adopt a guild-like model that emphasizes community and learning and mentoring. So in the late 90s and early 2000s, we saw the Code as Craft movement slowly but surely coalescing, as we started seeing books like these as well as gatherings and blog posts and online discussion boards. By December 2008, attendees of the Software Craftsmanship Summit in Chicago discussed what it meant to be a developer craftsman and drafted the Software Craftsmanship Manifesto. And here it is, don't worry about reading it because the font's pretty small. But they laid down the principles as they wrote: we have come to value not only working software but also well-crafted software, not only responding to change but also steadily adding value, not only individuals and interactions but also a community of professionals, not only customer collaboration but also productive partnerships. And this manifesto really made the rounds, with several thousand signatures in the first few months, that's a lot. It's also since then been translated into at least seven other languages. So this document effectively crystallized the software craftsmanship principles as we know them by putting them down on virtual paper and thus stamping them into existence. And today, the movement's effects are really far-reaching, and I think it's really percolated into programming culture.
For example, this conference has an entire track devoted to this topic, and you are in fact listening to a talk under that category right now. You'll find a bunch of groups on meetup.com rallied around the cause. Meanwhile, many companies are adopting the apprenticeship model. I myself went through one at Wayfair. Etsy's engineering department brands what they do as craft and likes to blog all about it. The software craftsmanship conference in Budapest dedicates three days to it. Many do see code as craft today, and our vocabulary is laced with references to it. We see coding like craft because we believe in continual learning, skill mastery, mentorship, customer relationships, collaboration, and building useful things. So this movement really allowed us to start thinking more about coders as makers, as opposed to the older understanding of programmers as executors, as these black boxes that take in a set of specs and then spit out an app, right? And because of this, it naturally paved the path for discussion on creativity. Especially in the Ruby community, I've started hearing words like pretty and creative. For example, in an essay in the 2007 anthology Beautiful Code, Matz describes what makes code aesthetic and beautiful. Meanwhile, in his 2014 book Geek Sublime, Chandra directly poses the question when he says of code, quote, we are now unmistakably in the realm of human perception, taste and pleasure and therefore of aesthetics; can code itself, as opposed to the programs that are constructed with code, be beautiful? So taste, pleasure, human perception, aesthetics, and beauty. Now, none of this really fits so much into the traditions of craft, especially the kind that our software craftsmanship movement is grounded in. Medieval makers weren't concerned about this. Again, they emphasized skill, collaboration, and wanted to produce things that people needed, things that were useful. So I think instead what is happening is that progressively we are likening code to art. So in this next part, I'll talk a little bit about what the metaphor of art encompasses, illustrate a snapshot in history where our modern definition of art originates, and then talk a bit about how this analogy applies to coding. Cool. So art is a pretty complicated thing. But the thing is, we all know art when we see it, right? We tend to think of things like the visual arts, like sculpture and painting and installation work. But in its essence, art is anything that moves us. It's beautiful. It's an outward expression of human creativity and emotion. It's about ideas, human imagination. It makes us think. So that's a widely accepted definition. And interestingly enough, one could say that our modern understanding of art really arose from what was craft. Right at the height of the medieval guilds in the 14th century, an intellectual movement drawing on Roman and Greek classics, called Renaissance humanism, swept across Europe, revolutionizing how people perceived their roles in society. So while the medieval era was characterized by a utilitarian approach to thinking and making and following doctrines to produce useful stuff, mostly for religious purposes, humanists instead stressed the importance of human dignity. They glorified the individual. They prized creativity and ingenuity. And soon the Renaissance was in full bloom. And before we knew it, certain craftsmen began questioning the nature of their professions.
Some, like Leon Battista Alberti, famously wrote treatises on the topic, framing what they do not as an applied skill, like craft, but as a liberal art. Giorgio Vasari's influential book, The Lives of the Artists, similarly illustrated his subjects as creative virtuosos. And Michelangelo himself said that a man paints with his brains and not with his hands. And soon, within a lifetime, the general populace's conception of what is art took shape. And things like drawing and painting and sculpture were no longer considered crafts, but celebrated and recognized as art, as these creative, cerebral endeavors that were made for intrinsic meaning. Sure, applied skill was still a part of it, but it grew to be much more than that. So, I think history has a pretty funny way of repeating itself. With all this talk of creativity and expressiveness in code, I think, in a way, we are in the middle of our own Renaissance, where just like the 14th-century humanists, we are slowly re-envisioning what we do as a profession, what it means to us, and how we brand ourselves to the outside world. Just like how makers in the Renaissance started seeing what they did as art, in a sense, so too have we. So, you might find literature on code as art in the books I mentioned, among some others, or in some posts on Quora or Medium, but for the most part, the arguments for this aren't yet as well known or adopted into popular discourse. So I'd like to take some time to demonstrate the ways in which code is a lot more like art than you might think, both in its creation process and in the end result. So, let's begin with the creation process. To begin making something, you first need to choose a medium. So, in art, a medium is the material used by an artist to create their stuff. An example might be pastel or watercolor or clay or canvas. So, if we were to apply this to programming, our medium might comprise the text editor or the language we choose to write in. My language of choice is Ruby, and I like Ruby because it's particularly expressive. I tend to think of it like acrylic paint. It's elegant and rich in so many ways, but even so, it's clear and crisp. Each stroke of acrylic leaves such defined borders, much in the same way that a Ruby method delineates boundaries with those definitive defs and ends. And like acrylic paint, Ruby is also pretty beginner-friendly, but once you get the hang of it, you'll find that there are many more advanced techniques for you to conquer and discover. And sometimes, just as artists prefer one medium over another, coders have their own preferences. These days, I write a lot in PHP, which I admittedly don't love. It reminds me of pen ink in its simplicity. To express something, it requires a million strokes, which end up squiggly and convoluted with all those curly brackets. And then you get these conspicuous semicolons that look like ill-placed inkblots. It's not as rich, it's not as vivid, and at times, it can be really messy and all over the place. But it gets the job done. But hey, some people like working with that medium. As the saying goes, quite literally, different strokes for different folks. And I think Paul Graham sums it up quite well in his famous essay, Hackers and Painters, when he says that, quote, hackers need to understand the theory of computation about as much as painters need to understand paint chemistry, end quote. And he makes a good point.
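A tiny sketch of the acrylic-versus-ink comparison above (hypothetical code, not from the talk's slides): in Ruby, a method's borders are drawn by the def and end keywords, the defined borders the speaker likens to acrylic strokes.

# Ruby: the method's boundary is spelled out in words, no braces or semicolons.
def greet(name)
  "Hello, #{name}!"   # the last expression is returned implicitly
end

puts greet("RailsConf")

The equivalent PHP would wrap the body in curly brackets and terminate each statement with a semicolon, the ill-placed inkblots described above.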
Just as painters need to understand in depth and appreciate the medium they work with in order to compose something effectively, so too do programmers. There's quite a bit of science behind both. We must experiment with what we're working with and go through a process of trial and error to fully understand the material. Cool. So once you have the medium selected, you then have to go through the creation process. So how many of you here have made art? Can I get a show of hands? Cool. Quite a few of you. So for those of you here who have made art, I think you'd agree with me that it's a whole lot like writing code. In fact, I think a lot of famous artists would say the same. For example, the famous artist Francis Bacon says, quote, the creative process is a cocktail of instinct, skill, culture, and a highly creative feverishness. It is a particular state where everything happens very quickly, a mixture of consciousness and unconsciousness, of fear and pleasure. So in fact, I feel like what Bacon describes is almost this trance-like state of flow that is so intrinsic to the creative process, and it can be very well applied to coding. Because when you code, it does happen fast at times. You register keystrokes without much reflection. But at the same time, there is a highly conscious element to it, in which you are working through your thoughts and trying to channel them concretely onto your text editor like an artist splashing paint onto his canvas. And you develop techniques and unconscious habits. For example, I constantly type git status after just about every single git operation I perform. Looks like you guys do too. And the way you solve programming problems does involve quite a bit of skill, intuition, and risk-taking. So code is also engrossing. You feel very much in the present. Sometimes I find myself sitting down at 9 a.m. only to look up and find that an entire day has flown by. You get entirely lost in your own thoughts, so transfixed with what you're doing that you sort of tune the world out. It's a very individual thing, super personal. It's almost like you enter this different state of mind. They say Michelangelo painted the Sistine Chapel in this kind of state, totally unaware of the world around him and many times forgetting to eat, sleep, and drink. At the same time, coders make aesthetic choices. How you choose to program an algorithm may be entirely different from how someone else does it. For example, in Ruby, there are many ways to print out the numbers one through 100. A minimalist may choose to do it this way. She might like how simple and elegant it is. Or a maximalist who loves trolling might write out one through 100. Or someone coming from a C background may find comfort and familiarity in using the traditional for loop. A student excited by object-oriented programming may want to wrap it in a class and use instance variables. Or maybe a moonlighting poet likes how it reads just like English when you do it this way. Or maybe someone who loves the graceful flourishes of curly braces may choose to do it like this. So what I'm saying is that in art, the artistic process is a translation of the creator's personality, preferences, individual style, and cultural influences, externalized into a concrete form. The same goes for coding. Everything that we do, from how we approach a programming problem down to the very ways in which we indent our code, is influenced by who we are, what we like, and the past experiences behind us.
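The slides with those one-through-100 variants aren't captured in this transcript, but sketches in the spirit of what the speaker describes might look like the following Ruby snippets (hypothetical reconstructions, not the actual slides):

# The minimalist: one simple, elegant line.
puts [*1..100]

# The maximalist troll would hand-write puts 1, puts 2, ... all the way
# to puts 100 (omitted here for everyone's sake).

# The C veteran: the traditional for loop.
for i in 1..100
  puts i
end

# The OO enthusiast: wrap it in a class with an instance variable.
class NumberPrinter
  def initialize(limit)
    @limit = limit   # instance variable holding the upper bound
  end

  def call
    1.upto(@limit) { |n| puts n }
  end
end

NumberPrinter.new(100).call

# The moonlighting poet: reads almost like English.
1.upto(100) do |number|
  puts number
end

# The curly-brace lover.
100.times { |i| puts i + 1 }

All of these print the numbers one through 100; the differences between them are purely the kind of aesthetic choices the talk describes.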
When we code, we are constantly infusing our personal taste and constantly making aesthetic choices. And that's why it's such an artistic, creative process. So once you finish, you'll find that what you made can be appreciated in much the same ways as a work of art. Here is a painting of a factory by the post-impressionist Van Gogh, next to a factory class from the factory_girl repo. Now take this in stride and bear with me. But I see a lot of similarities here. Van Gogh's paint strokes tuck away details so that things are just abstracted into lines and blobs of colors and shapes. Yet its parts collectively come together to depict this larger abstraction of a factory. The Ruby code does this too. It hides complexity by abstracting it onto a class, so that these individual parts, the functions and the instance variables, come together to illustrate a holistic idea of what this factory is all about. Abstraction in art and programming is very similar. At the same time, code is like art in that we can admire its formal qualities and visual structure. If we scroll down a bit, it starts looking like an Alexander Calder sculpture. The code's cleanliness, the orderly indentations and spacing, the occasional verticality of the or operators and the horizontalness of each line, and the way it gracefully cascades downwards and tapers off to the left, kind of remind me of this sculpture, and I don't know, I like it. And I notice a systematic, careful character to each of these that I like. I even find the ratio and dispersion of colors in both aesthetically pleasing. Even more so, we have the coding language Piet, whose code is pixels of colors and pays homage to Piet Mondrian's famous neoplasticist paintings. Do you guys know who Mondrian is? Yes? Seems to be. Okay. So to the left is an example of a prime number tester. To the right is a Mondrian painting, and they look very much the same. In fact, it's hard to tell the difference. So what you are seeing is computer code that, to the greatest extent, is quite literally a work of art. Again, my point is we can treat our code like an art object, appreciating the finished project in much the same ways as art. And if you aren't convinced, many members of the art community seem to be. Just one month ago, the founders of Ruse Laboratories arranged the Algorithm Auction at the Cooper Hewitt, Smithsonian Design Museum, which was the first ever auction of computer code. It was a pretty big deal, and it attracted the likes of big art dealers like Larry Gagosian. And in this auction, patrons purchased these algorithms for the same reasons they might purchase art, such as historic reasons, like with this printout of the source code in which President Barack Obama wrote a line of code, marking the first president in history to program. Some pieces of code were seen as embodiments of culture, such as this OkCupid compatibility calculation algorithm, which is a cultural snapshot or relic demonstrating the ways in which we as a modern era have embraced technology as a means to connect with one another. Like art, code at this auction was also praised for the nostalgic meaning it holds. A framed handwritten function printing out Hello World by Brian Kernighan carries sentimentality for many of its viewers, as we all know these are some of the first lines of code one ever writes. And like art, when we look at code, for example during the code review process, we also bring with us a sense of personal taste. We have a visceral, personal reaction to what we see.
For example, I personally find procedural spaghetti code as messy and hard to follow as this Jackson Pollock. It's all over the place and doesn't really make any sense to me. And I find metaprogramming to be cool, and I marvel at how self-referential the concept is. Some people hate it. And I also like recursion because it's fun and powerful. I marvel at how, in just four lines of code, you can express infinity, like with just this one picture. Although I understand that some just prefer plain old iteration. But above all, the art critic Arthur Danto famously defined art as something that compels the beholder to interpret. So he says, quote, artwork has semantic character. It involves the possibility and necessity of interpreting the work, a theory of what it is about, and what the subject is. And I would argue that code asks precisely this of us. We must identify the subject of just about any code we come across. For example, is this code about a database, or is it about a recommendation engine? You need to consider all the comments and all the little details to determine the author's intent. Just like art, the end goal of coding is to be personally interpreted and understood, whether by another human beholder during a code review, or in a very literal sense at runtime by the Ruby interpreter. At the end of the day, what you code and what you make in art is all about the ideas you're trying to get across. So I hope I was at least somewhat convincing to you in demonstrating all the ways in which coding is like art. Like artists, we choose a medium and go through a unique creative process to breathe life into an idea, which results in aesthetic beauty, historic, cultural, and personal meaning, and a basic need for interpretation. So I started by stressing the importance of metaphor and how it shapes what things mean to us. And I talked about the popular analogy of craft, how it meant thinking about continual learning and applied skill to produce a useful end product, and how it's influenced our community. Then I talked about how the concept of making stuff, as introduced by craft, opened up the discussion to this notion of creativity in coding, which results in this newfangled code as art metaphor, which is about expressiveness and ideas and creating something for its own sake. And along the way, I showed you how craft and art are a bit different in the meanings they take on. So now comes the question of which is it? Is code art or is it craft? So not to cop out or anything, but it's both. They each carry a different set of history and meanings, but they both apply. From the top, coding is an applied skill like craft, but it also requires creativity and out-of-the-box thinking, like art. Sometimes you need to choose the right tool for the job, but sometimes you can pick your own medium. It's collaborative, but it's also highly personal and individual. Code has utility and function, but it also has aesthetic value. There are traditions of doing, like in craft, and personal preferences in ways to do things, like in art. And so much more. Coding is such a complex thing that sometimes we need more than one metaphor to understand it fully. Even in the art world, when we consider whether something is art or craft, it's not mutually exclusive. This Rietveld chair has elements of craft because it has the purpose of being sat in. But it's also a work of art in its neoplasticist style, which itself is a cultural critique.
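Earlier in this passage the speaker marvels at expressing infinity in just four lines of recursion; the slide itself isn't in the transcript, but a plausible Ruby sketch of that idea, a finite definition that refers to itself and so describes an arbitrarily deep, self-similar process, might look like this (a hypothetical example, not the speaker's actual slide):

# Recursion: the method's definition invokes itself, like a picture within a picture.
def countdown(n)
  return if n.zero?   # the base case that keeps the infinite regress in check
  puts n
  countdown(n - 1)    # the self-reference: a few short lines describing unbounded repetition
end

countdown(5)  # prints 5, 4, 3, 2, 1, each on its own line

Remove the base case and the same few lines would happily recurse forever, until the stack runs out, which is one way a handful of lines can gesture at infinity.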
The thing is, we are allowed to see things in multiple ways, and we can mix metaphors to best suit how we want to think about and frame things. I think people tend to forget this. People tend to think in binaries, especially programmers. In preparing this talk, I read a lot of blog posts about how programming isn't this thing or isn't that. It doesn't entirely fit into this metaphor, so it can't be this; it's this one instead. But the thing is, you don't have to adopt a metaphor wholesale and abandon one in favor of the other. With the Renaissance art revival, it's not like the idea of craft was abandoned; rather, it existed in parallel with this redefined notion of art, and both led to some awesome things. However, I think we should give the code-as-art metaphor more weight than we currently do, as it's still rather fringe, but has the potential to provide some good paradigms. Sure, it's a bit out there, but maybe if programmers saw themselves as artists, they might feel further empowered to take risks, to break from the mold and innovate. Or who knows, maybe if we branded our field as art, we'd inspire a new demographic of people with creative interests to join the field and increase the diversity of thought. Or maybe we'd start more readily attaching the names of creators to the stuff they make, like how artists get rock-star recognition. And though we already have some modern-day Rembrandts like Matz, or Frida Kahlos like Sandi Metz, perhaps we'd move even further in this direction and start giving programmers the creative capital they deserve. Or maybe we'd start following art's higher ideal of art for art's sake, and focus less on the program's utilitarian purpose and more on the code itself, resulting in more clearly written, more maintainable code. I mean, who knows? Maybe. The thing is, there was a massive paradigm shift that resulted in adopting the popular code-as-craft metaphor in recent years. So consider the possibilities of adding a new one to that. My point is, metaphors have a lot of potential to influence, and we want a great diversity of ways to frame and think about the things that we do. Having multiple perspectives never hurts. So, I introduced myself earlier as a software engineer. A quick search on LinkedIn and Twitter reveals how by now thousands of people title themselves as craftsmen. But I think it'd be kind of cool if one day people started introducing themselves at events like these as software sculptors. So that's all I have. Hopefully this sparks some discussion. Here is a thank-you slide for everyone who's given me feedback so far. Thank you all for that. And a bibliography; I'll be posting that online so you can take a closer look. And now I'd like to thank you guys. That's my Twitter handle. Feel free to tweet at me: agree, disagree, whatever. And conveniently enough, it looks like we're out of time. So if you have any questions or comments... Thank you.
|
Developers often refer to their trade as both “art” and “craft,” using the two interchangeably. Yet, the terms traditionally refer to different ends of the creative spectrum. So what is code? Art or craft? Come explore these questions in this interdisciplinary talk: What is art versus craft? How does coding fit in once we make this distinction? Is the metaphor we use to describe coding even important––and why? You’ll walk away from this discussion with a better understanding of what creating and programming means to you, and what it could mean to others.
|
10.5446/30886 (DOI)
|
Thank you, Barbara. Thank you for the wonderful talk about the survey that took place. Welcome, all of you, to the River Valley campus. Today my topic is creating magical PDF documents with pdfTeX. First I will say a few words about PDF, then about pdfTeX. PDF is the de facto standard for printed material on the web. In publishing, PDF is still inevitable, and it is still the final deliverable to all the publishers. So pdfTeX has a very important role in the TeX typesetting industry, as long as PDF is the final deliverable to the publishers. And it can be highly customized through various macro packages, thanks to the authors of pdfTeX and to those who have contributed beautiful, useful macro packages. Research is still going on for adding features, for example tagged PDF. Now, some uses of pdfTeX; what I am mainly going to talk about are the uses as far as we are concerned. For what purposes do we use pdfTeX? One is to create a PDF as per the standards or specifications given by the publishers; another is as a tool for quality checking. pdfTeX can be used for creating PDFs conforming to the PDF/X and PDF/A standards; the packages are available on CTAN, thanks to CVR and Hàn Thế Thành. Then, micro-typography is implemented in pdfTeX, and grid typesetting is available in LaTeX as well. Some of the publishers we deal with, for example Nature, need this grid typesetting. It was at first a big challenge for us, but now we have got past that. Then, animated images can be put in PDFs using pdfTeX. For quality checking, we mainly create two kinds of PDFs: diff PDFs and tooltip-enabled PDFs. Diff PDFs, as I call them, are mainly of two types. One is a layered PDF, that is, placing one PDF above the other to create a composite PDF. This can be used for comparing nearly identical PDFs. For example, in the latest stages of production, at the CRC stage, after the author corrections have been implemented, the only work remaining is to add the folio numbers to the PDF; once it is repaginated, there are new folio numbers. Very careful checking is required to compare the author-approved PDF with the final PDF carrying the latest folio numbers. This used to be done by placing the two PDFs side by side and clicking through each one, going through every page. It was a manual and very laborious task. Now we have a solution for this type of checking: we have implemented layered PDFs. Placing the first, approved PDF, and placing the latest PDF above it, we create a composite PDF, and the quality checker or proofreader just needs to go through a single PDF by clicking; if there are any subtle changes, they can easily be found. This is mainly used for the latest stages of production. It cannot be used in the first or early stages, because there will be many changes, and that type of change cannot be easily identified through a layered PDF. So this is used for comparing two nearly identical PDFs. The second type of diff PDF is for comparing two TeX source files and creating a single PDF from them. This is mainly used for checking the editorial changes made by the copy editors; it shows these changes, together with the format changes, as tooltips. These are the diff PDFs.
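A comparison of this second type can also be reproduced with the standard latexdiff tool from CTAN; a minimal sketch of that workflow, with hypothetical file names (this is not necessarily the in-house tool used in the talk):

    % Run in a shell:
    %   latexdiff approved.tex revised.tex > diff.tex
    %   pdflatex diff.tex
    % The resulting diff.pdf marks insertions and deletions inline.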
I will give a live demo of how we create a layered PDF. But first, tooltips: tooltips can be embedded in the proof PDF, tooltips for the floats, tooltips for the author queries, for bibliographic references, for copy-editing changes; everything can be put in as a tooltip in the PDF. As these are placed as tooltips, authors do not have to jump around: if they want to check our queries, they do not have to go to the end of the PDF and come back again; they can see them right there on the same page where the queries or floats are placed. Not only that: some authors need the coloured figures only in the web-version PDF and the grayscale or black-and-white ones only in print. As tooltips, in a single PDF, we can give both figures to the authors. For example, the grayscale image which is going to be printed will appear in the usual place, with its caption and everything in the formatted text, and the colour version, which is going to appear in the web PDF, can be placed as a tooltip. In the same proof, authors can check both the colour version and the black-and-white version, and they can approve them. This solves a real problem: sometimes we create the PDF and get the author's approval on only the print version or only the web version, and then, in the final stages, there are complaints that a figure does not print well or does not look good in the web version. Using these tooltips, we can place the two different figures in the same proof, and authors can check and approve both. So now, let's see the demo. Sorry, something has happened; I can't access the terminal. Okay. Here are two different versions of the same TeX file. I will create a PDF of each TeX file, then make a composite PDF of the two. The first version; and again, the same file. Now, creating the composite PDF, that is, using the layering system: placing one PDF above the other to create a composite. Now you are watching a composite PDF. There are two layers here, PDF 1 and PDF 2, and you can switch either of them off. Now you are seeing only one PDF; now the composite. By going through this composite PDF, you are actually seeing the output of two different PDFs with the same pagination. Now I will make a small change, and we will see how it looks. I just remove one letter. In PDF 1 it was "holographers"; now it is "holographer". See the change here. As you can see, quality checkers can immediately spot that the change is here, and using this layering facility they can compare which version is correct and which is wrong. And the folio numbers: one is 38, the other is 78. In the final stages, the quality checkers should see only this change, that is, the folio change; any other change should not be there in this PDF. So this is a good system for comparing nearly identical PDFs. Now I will revert that change. What kind of source is behind this? The layering facility is obtained by using the pdf layer style by CVR. I hope this will be properly released soon; it should appear on CTAN very soon. So this is the change here, as you can see. It is similar to the manual operation of comparing two printed copies by putting them together and holding them up against the light; the same method is happening here with two PDFs. There is no need to compare two PDFs separately: a single PDF can be checked.
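The layered composite shown in this demo can be approximated with the optional content group (OCG) packages on CTAN; a minimal sketch, assuming the ocgx2 package and two hypothetical single-page files approved.pdf and latest.pdf (this is not the pdf layer style used in the talk):

    \documentclass{article}
    \usepackage{graphicx}
    \usepackage{ocgx2}% provides the "ocg" layer environment
    \begin{document}
    % Two switchable layers, overprinted at the same position:
    \noindent
    \begin{ocg}{PDF 1 (approved)}{ocg1}{1}%
      \rlap{\includegraphics[page=1]{approved.pdf}}%
    \end{ocg}%
    \begin{ocg}{PDF 2 (latest)}{ocg2}{1}%
      \includegraphics[page=1]{latest.pdf}%
    \end{ocg}
    \end{document}

In a viewer with a layers panel, toggling either layer off and on reproduces the checking workflow described above.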
Now I will show you some PDFs that we created with tooltips. These are the bibliographic citations that we put in as tooltips. Figure one. A table. Figure five is grayscale in print only and colour in the web version. So if the author wants to check the printed copy at the proof stage, he can print the grayscale version and check the colour version as a tooltip; he can check both at the same place where the text is, instead of going to the end of the document and coming back again, and then he can decide. These are the uses of pdfTeX, and of packages written with this kind of usability in mind, such as the good work on fancytooltips.
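Pop-up notes of this kind can also be imitated at a lower level with pdfTeX's \pdfannot primitive; a generic sketch (this is not the fancytooltips implementation, and the query text is made up):

    % A small text annotation that pops up in the PDF viewer:
    \pdfannot width 2em height 1em depth 0pt {%
      /Subtype /Text
      /T (AQ1)
      /Contents (Author query: please confirm the affiliation.)%
    }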
Questions? All those tooltips and pop-ups: can one adjust them, or are they fixed in the PDF? You said the layer style might be on CTAN soon, but what about the tooltips? The tooltips are already there: fancytooltips is available, and we are using it. And the layers? Not yet. Both of them are layers; the layer code is embedded in the macro package, in the style file. When is it going to be released? There is not really a package as such; there are a few lines of macro code that can do it. Actually, we are designing a PDF-specific page format. Ross Moore has also shown four layers, and I have shown two pages. You don't even have to write a package for it; it's just a simple macro. What about tooltips on footnotes? Currently not, but we are already putting the floats, the figures and tables, in as tooltips, so I think we can put footnotes in as well. I really had a question for Ross, about his paper: I noticed that the reader has a problem with the hyphenation on the pages. Yes, that's to do with the fact that you need to turn the hyphen into a soft hyphen, so that it gets ignored by the reader. I haven't done that yet, because that happens at a very low level in the program, and I need to find a way to capture it; but yes, I'm aware of it. Another question for Ross: does tagged PDF imply the standards you showed? Tagged PDF is one of these standards; there are several of them. When you open a tagged document, even in Acrobat Pro, and when you open PDF/A, that is what many people think of as PDF/A; Acrobat Pro has the tools to be able to handle that. When I showed that big diagram, there is a structure there which is designed for engineering uses of PDF, in particular things like 3D drawings, and that has to do with things that are still in the future. That is aimed at the public, the industry, what people can do with the documents after they've been published; and that's a difference of control from the publishers themselves. Okay, a question about the procedure: there is a problem with documents that are protected by encryption, right? Yes. So this annotation feature, is it possible by some means to enable it, so that people can handle such documents? Well, if the document is encrypted, there's not a lot you'd want to be doing with it. If the document simply declares that it doesn't want to be modified (like my visa application form to come here, to which I wanted to add annotations and put on a digital signature), you have to manipulate the PDF so that the original protections go away, then add whatever annotations you want and the digital signature, and then it's done. So yes, that's exactly the problem: the current Reader respects the document's protection mechanism. Acrobat Reader will let you fill in form fields that are already in the document, if the form is already there, but it won't let you add anything beyond a certain amount of annotation, and you can't touch the text itself. There are various levels of what different programs let you do with this, but the Reader protects documents in a way that other programs don't necessarily honour; the Reader could relax that, but that's up to Adobe. Now, the list of tags: the mathematical ones go beyond it, yes. So the list we have, where is it defined? You have to look in the PDF specification, which describes the basic tagging. And it has a feature, role mapping: you can define your own tags and map them to the basic ones, and that passes validation against PDF/A. So you can define your own tags, but they have to behave in the same kind of way, with the kind of content you would expect them to have. The list of default tags is in the PDF specification. And all the Adobe documentation has to be read very carefully: you can never do anything unless you follow their examples almost exactly, and then it starts to work. You'll find that many things that you think should work do not. You actually have to follow their examples very carefully, and you cannot deduce the rules explicitly from the way they describe the specification; you must use the examples. The Adobe documents have been like that for 10 or 15 years.
It's very hard to get things exactly right just by reading the descriptions there.
|
PDF has a rich specification. But Adobe Distiller does not exploit all these specifications. We’ll demonstrate how pdfTeX can create useful PDF files that are difficult or impossible to create using other technologies. Examples are: PDFs showing differences in two TeX source files; PDFs with useful pop-up tools; and a simple but useful composite PDF for comparing two nearly identical PDF files.
|
10.5446/30887 (DOI)
|
Okay, so my talk will be about data structures in TeX. I have to warn you first: this talk won't be about anything fancy; no fancy extensions, not even PDF. It's about something very basic which I believe has been missing; it could have been done maybe 15 years ago. So this is perhaps interesting only for people who write low-level code in macro packages and things like that. We know that the TeX language has two functions: one is to provide a markup for users, and the other is to provide a programming language for package writers. We know that control flow in TeX is feasible, so we can do pretty much everything concerning flow control when programming in TeX. But the weakness is the lack of complex data structures, because we have only simple stores: macro definitions and registers, that is, dimensions, counts, skips and boxes. And that's all. The situation gets a bit better in e-TeX. If you worry about e-TeX, don't worry about it, because we use it everywhere anyway; it's part of pdfTeX and everything. It's just not explained in The TeXbook, but it has a few useful primitives on which this work depends. It makes life a bit easier because you can write expressions more conveniently. And we need data structures anyway, because we can't write complex code without using data structures like arrays and dictionaries. So what we normally use is some trickery with \csname to represent a structure. For example, to represent an array, you can write something like \expandafter \csname, put whatever you like inside the \csname, and represent, let's say, an array like this. But that's not convenient, right? Some time ago I started to work on a project where I had to write a rather complex macro package, so I got some motivation to do this. I needed a well-arranged parametrization, because there were a lot of parameters provided by a graphic designer, and I didn't want to mess them up; I wanted to keep them somewhere in front of the actual code. That was one need. I was tired of inventing long unique names; it's always a problem for me when I'm programming, and it takes me most of the time to invent names. Then I needed to write some generic macros with many parameters, for similar graphical elements with a few differences. I was tired of passing too many parameters to macros; it would be much easier to pass some sort of record. And I was tired of writing down similar definitions over and over. Then, part of the project was a package for elastic tables with different column specifications, and I needed to represent the column descriptions somehow, and that is more complex data, like arrays and records. And I was tired of doing all that \expandafter and \csname legwork. I didn't mention it here, but the real reason is that I am lazy and irresponsible, because I would rather spend two months on writing some general code than two days on writing something by hand, which I don't like. And I do things when I am not supposed to do them.
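Going back to the \csname trickery mentioned above, a minimal sketch in plain TeX (the macro names are made up for illustration):

    % Simulate array[name][index] with constructed control sequences:
    \def\setarr#1#2#3{% {array name}{index}{value}
      \expandafter\def\csname arr:#1:#2\endcsname{#3}}
    \def\getarr#1#2{\csname arr:#1:#2\endcsname}

    \setarr{fruit}{1}{apple}
    \setarr{fruit}{2}{banana}
    \getarr{fruit}{2}% expands to "banana"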
If my boss had known at the time that I would spend two months on writing support for data structures in TeX, he would not have allowed me to do that; that's what occurred to me only afterwards. So I came up with a macro package, and I will show some examples later. I essentially invented records with identity and with an API. So I wrote a records package: it's possible to create a new record by declaring it with some identifier. Each record has an identifier, which can be any text or number. It can have members, and members can be simple values or macros with parameters. There are several possibilities for setting those members. It's possible to define a member as a macro which is encapsulated inside the record, with a plain or an expanded definition. Thanks to e-TeX, it's possible to evaluate a numeric, dimension or glue value directly into a member. Or it's possible to simply \let a member be a copy of another token's meaning. Okay, so that was setting the members. Then it's of course necessary to get the members of the record. Exporting is like the reverse of the \let: you can take the value of a member and \let it to some other control sequence. It's also necessary to provide an interface for showing a member and getting the meaning of a member, like the primitive \meaning in TeX, basically for debugging and analyzing things. The difficult but basic principle is that the record knows its own members: it needs to remember which members are inside, because only that way is it possible to work with the record as one unit. Internally it keeps a list of the members it has, and thanks to that it's possible, for example, to copy a whole record to another record. And it's possible to do things like mapping over all the members: we can apply a macro, or any code which takes a parameter, to all the members of the record. As an application of this, there is a record-show macro which prints the content of the record to the log, which is useful for debugging. The syntax was quite long, and for usefulness it's quite essential to provide some syntactic sugar, so it's easier to work with. First, a member can be denoted by a control sequence: for example, here is the definition of a member called \makefull, with a parameter. So members don't have to live in internal variables; maybe it's unusual for LaTeX users, but in low-level code it's pretty normal to work like this. Then, because it's boring to write this construction all the time, I provided handles for the records, shorthands for things like creating a new record. In the example, I create a record banana and attach the handle to it. Then I can say: banana, member \makefull, with such-and-such a parameter. And I can refer to any member through the handle. So this made the programming rather convenient for me. Now, it had some real motivation; I needed it for real work. One application was the parametrization of graphical elements. It gave me namespaces, so I didn't need too many different names. It allowed me to pass whole records as parameters to generic macros, so I didn't have to pass too many parameters, just one. And it allowed me to reuse similar data.
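A rough sketch of how such a record API can sit on top of the same \csname idea, including the member list that lets a record know its own members (all names hypothetical, not the actual package code):

    % Create a record with an empty member list:
    \def\newrecord#1{\expandafter\def\csname rec:#1:members\endcsname{}}
    % Define a member and register its name in the member list:
    \def\defmember#1#2#3{% {record id}{member name}{body}
      \expandafter\def\csname rec:#1:#2\endcsname{#3}%
      \expandafter\edef\csname rec:#1:members\endcsname{%
        \csname rec:#1:members\endcsname\space#2}}
    % Access a member:
    \def\member#1#2{\csname rec:#1:#2\endcsname}

    \newrecord{banana}
    \defmember{banana}{color}{yellow}
    \member{banana}{color}% -> yellow

Copying, mapping, and record-show can then all be written as loops over the stored member list.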
I didn't have to write all the values out again; I was able to inherit from another structure. So I'll show you an example of parametrization. As you see, I was able to declare structured data which was later used to parametrize the macros that created the graphical elements. Written this way it is, to me at least, much more readable. The next principle was to use copying: there is a base record here, and there can be several subclasses of it. So that was one application. Next, I had to represent those elastic tables. The requirements for the tables, from the graphic designers, were that they take the full width of the page, and that the columns do not adjust to the particular data in each table but stay uniform across the whole document. There were classes of tables, say tables with four columns, perhaps dedicated to a certain purpose, with several instances in the document, and they had to be uniform. Tomorrow I'll show you practical examples of documents I have done with this. The solution was to write a new table package able to take an elastic description of the columns, meaning full width, and then the usual dimensions plus some glue; the glue was divided between the columns up to the width of the table, with all the possibilities which the glue parameters provide (a toy sketch of this idea follows below). Because there were several parameters for each column, I needed to represent them in a complex structure: arrays of records, with the parameters for each column. An example of that: in the end we were able to define tables like this. We needed, for instance, tables with a uniform width for the first column on the left side, with the rest lining up in the table. This is the high-level syntax; there are several levels, and this is the highest level. So it was possible to define tables like this: how much of the full width each column should take, and whether it should stretch or not, with different shortcuts for different kinds of columns. There were many different sorts of tables, which were then used in generating documents. To evaluate this: my experience was that the code was better structured and more readable, for me and perhaps for other people. I was able to reuse the data, and to aggregate data that belongs together into one unit, and the representation became more convenient. During the work I found many weaknesses. For example, the syntactic sugar of member names as control sequences became awkward when I had to reference non-literal member names, because if the member name is held in a control sequence, we don't want to expand it in the usual way, and that sometimes conflicts. Then there was the frequent need for passing the record ID around: in complex macros, for example when I was working with the tables, I needed to pass the instance of the data everywhere, which would be much easier if I had virtual macros. Another disadvantage is that there are copyright restrictions, because I did it for my previous employer, and he didn't want to release it to the public.
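That toy sketch: the elastic column idea, a fixed part plus stretchable glue distributed so the row fills the full width, can be illustrated with ordinary plain TeX glue (an illustration only, not the actual table package):

    % A full-width "row" with one fixed column and two elastic gaps;
    % "plus 2fil" stretches twice as much as "plus 1fil":
    \hbox to \hsize{%
      \hbox to 6em{first\hfil}% fixed-width column
      \hskip 1em plus 1fil
      \hbox{second}%
      \hskip 1em plus 2fil
      \hbox{third}}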
So I have been thinking about a next generation of record objects, which would have virtual macros, some of them built in, with a dictionary-like syntax inspired by JavaScript, which I worked with recently. There can be as many objects as you want. The advantage is that it's a challenge to realize, and quite fun. This time I intend to provide a free version, so that it can be used by other people. I just want to give you some examples of a possible syntax for these objects I'm working on. I'm not sure yet whether any identifier can be used, but it can be a number, for example. There are many internal macro names behind each record, but now the members and methods are addressed more uniformly. The record, the object, works as a prefix before the actual method: for example, here there is this object, apple, and it can use an internal method for defining a member, say product. The members can be written either as a control sequence or in square brackets, so it's a kind of index expression. It actually works like a dictionary, so the index doesn't have to be numeric. There can be arrays, but an array is not anything special; an object is basically a dictionary of keys and values. So here you see the referencing of the members of an object. Internally it just creates a control sequence like this: it contains obj, then some identifier, and the member name; it doesn't matter what the identifier is, as long as it names a defined object. Another example is array-like objects. Then there is a simple example of inheritance and late binding. Let's say we define an object fruit, which has a virtual method, parameterized: the first parameter is the object itself, because it's difficult to provide a this-reference at run time (maybe that's the next challenge), and there is a second parameter here. The method takes the colour of the object, which is not defined at this level; it references that colour member and colourizes the second parameter with it. Then let's create the object apple, inheriting from fruit; its defined colour, which the method references, is gray, and the same for banana, which is yellow. Then this method from the base, the parent of the object, can be applied, and it takes the right colour from the subclass.
So for the array, there is a method values, which does something for each value in the array. That's all I have so far. This is all working, but it's slow; it's just a proof that it's possible to do it. So, do you have any questions? One question: what do I depend on? Okay: I depend on some e-TeX primitives; e-TeX basically adds some useful primitives to base TeX. I depend on \ifcsname, which is a newer primitive for testing whether some control sequence is defined or not, and I depend on \unexpanded; maybe it could be simulated by some other basic means, I'm not sure about that, but sometimes I depend on the \unexpanded primitive, which prevents expansion of a sequence of tokens. But you don't have to worry about e-TeX, because everybody uses e-TeX anyway, even without knowing it, since it's part of pdfTeX and LaTeX and everything; nobody uses a non-e-TeX-enabled TeX anymore. Another comment: I'm quite interested in the framework you presented, which I'm discovering today, and I see you built it on the macro-expansion system, as opposed to a real programming language. But when I hear you talking about virtual methods and all that stuff, I must say you are scaring some of the listeners. If we're going to go that far, wouldn't it be a better option to write a TeX engine in a real programming language, with bindings from the original TeX macros to the underlying language, and maybe provide a syntax of some sort to access the programming language directly? If you did that, you might be able to reuse all of the existing programming paradigms that the language already provides. This is an idea that I've been considering for a while now, and I'd like to know what your thoughts are. Well, I was actually working on a project like that before: there was the project NTS, a TeX rewritten in Java, with the intent to extend TeX and provide extensions. And now there is LuaTeX, which provides scripting, which is also great; tomorrow I'll show you some applications of Lua scripting in LuaTeX. So extensions are definitely a way to go. I'm actually thinking about providing a typesetting component framework: not necessarily a monolithic system, but a system of components you can take and use in other applications, for things like formatting paragraphs. So I totally agree that there should be extensions using different languages, using scripting languages like Lua or whatever bindings for other languages are possible. But this was fun: during my project I had to decide whether I wanted to do it in TeX or with scripting in LuaTeX, and in the end it was more fun to do it in TeX, to try whether it's possible or not. That's the challenge; we know that it's possible in a proper programming language. It's time for lunch, although I have no idea where it is. Okay, so thank you for your attention.
|
For the construction of macro packages, TeX is used as a programming language. Unlike general programming languages it lacks complex data structures. We present the experience of providing record and array data structures and the supporting operations using eTeX features. They were successfully applied in real projects for parametrization and as a base for a special table module involving complex dimension calculations. We will show how the abstraction level provided by more powerful data structures can simplify and unify TeX low-level code.
|
10.5446/30888 (DOI)
|
The bibliography is produced by means of a file generated by BibTeX, and that file is not just a flat list; it is quite structured. As I showed with this example, the bibliography styles of biblatex are assembled by options. What is good in BibTeX is that it includes the notion of name fields. The author field, for example, has a precise syntax: a sequence of names separated by the keyword and, where each name has one, two, three or four parts. This notion is also present in biblatex, and some other fields have a precise syntax as well. We can express the structure of such fields, but of course you can exploit that only with biblatex or with a package able to interpret this structure. Many additional fields are processed by biblatex, for example the fields for specifying translations of works. So now let us look at the problems a bibliography processor has with the data. The main problem is the encoding. You can use BibTeX with the Latin-1 encoding, or just put the accented letters in directly, as I do. If you want to deal with Polish or Hungarian text, the same goes with Latin-2. But you cannot mix several encodings: BibTeX lets you put bibliography data in one encoding, but you cannot mix encodings coming from one file in Latin-1, a second file in Latin-2, and a file in UTF-8. So if you get some .bib files from the web, you have to make sure that the same encoding is used everywhere. Then the bst language for bibliography styles: it is programming with a stack, as in PostScript, although in fact you don't program in PostScript directly. The bst language was made that way because it is quite easy to introduce small changes into an existing style; but putting together a new style is a serious task. And there is no real support for languages. In fact, a big problem for any BibTeX successor is the vast number of .bib files that exist: if you want to propose an alternative to BibTeX, you have to be able to process them. Another point: as you see in the example with the particle written with a capital letter, users have got used to putting commands inside field values.
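To illustrate the name-field syntax described above (a made-up entry, not one from the talk):

    @book{hypothetical-entry,
      author = {Anne-Marie M{\"u}ller and Jean de La Fontaine},
      title  = {An Example Title},
      year   = {2011}
    }

Two names are separated by the keyword and; in the second name, the lowercase "de" acts as a von part, and the accent command {\"u} is exactly the kind of command inside a field value just mentioned.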
Commands inside values complicate things if you want to use another engine to process the .bib files, and they complicate conversion to HTML if you want your bibliographies to be displayed on the web. Another problem is how to manage information about names, and how to deal with first names: there are French first names that do not exist in English, for example. If you want to abbreviate such a name, what is the correct way to do it? In German there is an accepted way; in French the conventions differ, and the question arises of how to abbreviate: some initials keep more than one letter, and sometimes you put a period after the initial, sometimes not. The question is not how to proceed, because we know that; the question is how to put the information which allows a program to do that into BibTeX or a successor. A last example: a usual variation is to retain the middle name along with the first name. One can use additional lines in a configuration file to deal with such conventions. Now some examples of what can be done. For example, you can specify a conditional expansion of a key depending on the value of a field. Another approach is Biber; but Biber is only usable together with biblatex. It allows biblatex to be interfaced with more options, but you cannot change the styles simply by picking an existing one: you have to customize biblatex to get a new one. There is some type checking, but, if you know programming, it is lazy type checking: a value is checked only when it is needed. The year field should be a number, but that is checked only if you sort by year; if you use a style where references appear in the order of their appearance in the text, it is not checked. So it is type checking, but in a lazy way. And, as in classical BibTeX, unknown fields are silently skipped. In MlBibTeX there is more syntax for such purposes, and support for multilingual text in person names.
Okay, thank you very much, our next talk.
|
First, we recall the successive steps of the task performed by a bibliography processor such as \BibTeX. Then we show how this modus operandi has been adapted by tools such as the packages \code{natbib}, \code{jurabib}, \code{biblatex}. We also explain the advantages and drawbacks of using other processors like Biber or Ml\BibTeX.
|
10.5446/30889 (DOI)
|
Good afternoon everyone. I am Manjusha Joshi. I was basically inspired by an earlier TUG conference in Trivandrum, in 2002; from there my journey started. First I came to know that free, open-source software exists, and later on I was exploring this type of free mathematical software for computing. On my journey with this type of software, I always tried to play with it so that the outputs could be brought out through TeX. Every time, I was looking at whether there is some compatibility between the software and TeX. Basically, I am from the mathematics research area, and we always feel that we have so many problems regarding computing, because coming directly from the basic sciences, if we want to produce a research paper or a thesis, we all the time have to look to LaTeX, of course. And not only that, we also depend on some scientific computing. The story starts with this: we also require some software to do our work, and most universities may not have such fancy software. But as soon as we enter the world of free, open-source software, there is plenty of software to play around with. And there we found that free software: Sage is one piece of software which we found very useful, because it can handle images and it can handle numerical calculations. And once you get something, you always ask for something more: can I get something more? For example, if one wants to take the factorial of 100, this is the output that gets generated by Sage. Now I have this output generated by Sage, and the next question is: how can one include it? In general, the possible ways are these: you can cut and paste; that is one way. Here you can see a matrix calculation, which is also possible with this type of software. This type of twisted torus, this type of figure, one would like to include as per the requirement. So, what types of output do we need to handle through LaTeX, and how much can be automated so that less work has to be done? That was the thing in mind. The main people who will benefit from such a thing are researchers: they need to add calculated output to their papers. In question papers, teachers can use randomly generated matrices or other auto-generated things. Then, books have exercises with solutions at the end, so one can insert auto-generated answers there. In learning systems (you must have heard about Moodle, and Moodle also handles LaTeX), suppose a hundred students are appearing for an exam at the same time and they face a bandwidth problem: if all of a sudden everyone wants to appear for the exam, some of them may miss it, and the same paper cannot, and should not, be repeated again. At that time, auto-generated content is one of the requirements. So these are the places where one needs calculated output. Not only that: the software required, for statistics, for data handling, for numerical computing, for symbolic computing, graph theory, computer algebra, number theory, commutative algebra, all of this can be handled easily through Sage.
Therefore Sage is called a maths server, because this list generally covers all types of mathematical computational requirements, and Sage can handle all this software. Another way of including figures: this is the output of a vector field generated by the software Scilab, which is equivalent to MATLAB. After we generate the output, we have to export it as a JPG file and then manually include that file in our TeX file. That kind of work is involved; this is the procedure which generally gets followed by researchers. Fine, so that availability is there, but there are some problems. They have to think about where to put the computed output and figures. Every time, they have to switch between the programming language or software and the TeX file. Not only that: as I showed you with the factorial of 100, the output is huge, and if one is trying to cut and paste, some of it may be missed, and that is dangerous; it is a kind of loss of important calculated data. Apart from that, there is always the fear of: where is my file, am I losing it, have I stored it or not? One has to keep track of all these things. And doing this circus, one may lose the track of thinking: what was I thinking about, what was my research paper about? Those types of problems are there. If you use SageTeX, that is what I found, you can keep to your track of thinking, and while thinking, at the appropriate places, you can insert calculations. It is not really inserting: it is just calling a Sage command at the appropriate places. Moreover, you need not pay special attention to inserting your figures and your calculations; everything is collectively available for you. So what is the workflow? SageTeX is a package which is available, and after adding \usepackage{sagetex} to the LaTeX file, one can interact with Sage while compiling the LaTeX file; we will see the workflow. We start with the TeX file, with the sagetex package, and then as usual we compile with the latex or pdflatex command, and this generates several files. One file is a .sage file; that is the file we have to look at, and we just have to run Sage on it. So, assuming that Sage is installed, you just say sage followed by the file name with the .sage extension; the extension is important here. After running that .sage file, the calculations get computed by the Sage software and are returned in a .sout file, and then we need to compile the same TeX file again. In that second compilation, LaTeX gets the outputs from Sage, which are stored in the .sout file, and typesets them, so we can see the outputs in the regular LaTeX format. Here are some commands, just to give an idea. If you write \sage{factorial(100)}, then factorial(100) is the actual command, which is available in the Sage environment; \sage allows you to enter the Sage environment in that way. That single command is enough to generate the factorial of 100, and not only that, it gets included in your TeX file. Another interesting command is \sageplot, which is useful for plotting in 2D as well as 3D.
The \sageplot command, with plot(exp(x), -5, 5) in curly brackets, is very simple: here we are plotting e to the x, and the range is -5 to 5; that is it. The inner bracket holds the Sage command, and this is a single-line command. Another available environment is \begin{sageblock} ... \end{sageblock}: it holds actual Sage code, which gets printed in your TeX file. Not only that, it talks with Sage: suppose I say that g(x) is the Taylor series of tan(x), with the expansion point and order as the other arguments; then g(x) is the function, and that information gets assigned to the expression. That means it is talking with Sage and also printing the Sage code in your TeX file. So here is the Sage code in the sageblock, and this is the computed output. I will show you how one can change the output. In this example, 'tan x' is written by hand, and the rest of the expression, the series terms which you can see, is computed directly by Sage. To obtain this output I wrote the sageblock, and then tan x as usual in math mode, followed by \sage{g(x)}; g(x) was assigned earlier, and here I am just using \sage{g(x)}. So our work of going to the software, getting some output, doing the calculations and keeping track of them can be shortened. This is the graph of sine generated directly by Sage, nothing great in that way, but the label 'sin' is also generated while compiling the TeX file. We will see this live in a moment. Before that, I would like to show you a few more outputs, and then we will change one of them to another graph. This one is for partial fractions, and it also gets generated automatically. This is the Petersen graph, and Sage has a database of graphs of this type: if you ask for a complete graph and give the number of nodes, like 8 or 7, it will directly produce the complete graph on that many nodes. That is from the Sage side; from the TeX side, nothing more is needed to get such outputs included in our TeX file. And here is the command. The next question that comes to mind: what if we want to give this file to some publisher for a journal? The publisher may not have Sage installed and may not know what to do with it. The idea is that along with your TeX file you can send the .sout file; then, when the publisher wants to put your paper into the journal, he just needs to run TeX once, that's it. So additionally you just have to send the .sout file. Okay, so before that I would like to show you the actual procedure. This is the same presentation file, actually. I will make it a little bigger and just show you the process. Here I have given the \typein command, so I will change the function to cos. There was one generated graph, so I will change it to cos, and then compile this file with Sage, giving the file name with the .sage extension. It is giving me some error; okay, let us see. So I will compile it again. Sin. It is asking me twice, because in the file I have given \typein. Let me check whether it has been done or not. Oh, I gave sin only. Okay, sorry. What generally happens is that the word gets changed; I think I have given sin only again. Anyway, this is the working procedure, and one can do it.
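Putting the commands from the demo together, a minimal SageTeX document might look like this (a sketch; the Taylor-series order is made up, and the generated file name doc.sagetex.sage is typical of recent SageTeX versions but may vary):

    % Compile cycle (run from a shell):
    %   pdflatex doc.tex        -> writes doc.sagetex.sage
    %   sage doc.sagetex.sage   -> computes; writes doc.sagetex.sout
    %   pdflatex doc.tex        -> picks up results from the .sout file

    \documentclass{article}
    \usepackage{sagetex}
    \begin{document}
    $100! = \sage{factorial(100)}$

    % Sage code that is both typeset and executed:
    \begin{sageblock}
    g(x) = taylor(tan(x), x, 0, 10)
    \end{sageblock}
    So $\tan x \approx \sage{g(x)}$.

    % A 2D plot, included automatically as a graphic:
    \sageplot{plot(exp(x), -5, 5)}
    \end{document}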
So, the \typein command: because I have given the \typein command, it was waiting for the word; but if I had not given it, and had instead used a \def command in another file, I could generate a list of functions and the plots of the corresponding functions. Okay, I have finished; at present that is it. Any questions? Any comments? Actually, two questions. First, I would like to see the .sout file, if you can just show what is inside. Yes, I will show it. This is about the symbolic computation which I have done; the integration is there, and the Taylor series expansion is also there, as you can see. And the second question: you have the sageblock, which is typeset, but for some reason I might want to suppress the Sage code; is that possible? Yes, there is such a silent variant, the sagesilent environment. Okay, thank you. Thank you very much. Time for a break.
|
Researchers search for some computational package for their results. At the time when they have good output, they begin worrying about how to insert it in their LaTeX document. They have to keep track of their output, formatting and then insert it at the appropriate places in the document. The SageTeX package is a blessing in these situations. It calls the powerful open source maths server Sage, to compute and embed the result into a TeX document.
|
10.5446/30896 (DOI)
|
All right. Now, the year is 1987, and I'm a student doing my engineering thesis at CERN in Geneva. And I madly fall in love with a pretty little thing called the Mac. And I'm the faithful type, as many of you are; you can see I'm still with the Mac at the moment. Now, one year later, I'm a PhD student at Stanford, and I fall in love all over again with a local beauty called TeX, which I'm using with one of the best environments that ever existed for TeX: Textures on the Macintosh. Now, that did not do everything I wanted to do; thank goodness for specials, and for the fact that Textures allowed PostScript. So I looked into PostScript, I bought the PostScript Language Reference Manual, and boy, that was love at first sight all over again. I'm a reverse-Polish-notation kind of guy, as Pavneet just reminded us. I do my page layouts based on coordinate systems, so PostScript and I were meant for one another. And with my own sets of simple, lean, and mean plain TeX macros, I felt I was invincible. So there was just a slight drawback: if I would typeset a graph, what I would see on the screen of my Mac would be this, just the TeX part. Can you see the lines here and here? Well, you need a good eye, because that was the PostScript part, and that was, of course, not so nice at the time. Sure enough, PDF came along just a little later, so I could take my PostScript file, distill it, and then open a PDF file, and there I could see the graph. Which was good, but not nearly fast enough, because visual design is a lot of trial and error, so it is much nicer if it goes faster. So I was hoping that Steve Jobs, may he rest in peace, would bring Display PostScript to the Mac the way he did to the NeXT machines, but nope: instead of that he pushed Mac OS X, which the guys at Blue Sky Research doing Textures did not support. Worse, he went to Intel processors, and those would not run the Classic environment that I needed to run my legacy copy of Textures. So one day, eventually, my RISC machine died of old age, and my life shattered. I had to be invincible in a different way, by being reborn from my ashes, and I turned to TeXShop and pdfTeX. With a lot of promise, right? It would allow me to see the PDF at once, almost instantly, good for visual design. Since I'm, as many of you know, a control freak, it also allowed me to fine-tune what was going to be the final product anyway: a PDF file. But yes, there was also a slight drawback with it. Now, you tell me: what's the fundamental difference between PostScript and PDF? Programming language and not. I used to do a lot of calculations in PostScript; I used to do programming in PostScript. I couldn't do that in PDF. Example: one of the graphs that I wanted to do was this. Now, that's dead easy to do in PostScript. In PDF, you have to describe every one of those little arcs with cubic splines, and so you would spend a lot of time drawing the circle. That's not what I wanted to do. I wanted to spend a lot of time on page design, for example, ensuring that this would align nicely with the paragraph above and the paragraph below. So it's been a rough ride, and therefore I wanted to share with you how I eventually got over those obstacles and integrated TeX and PDF. Hopefully you can learn useful lessons from that experience. I would say the challenges were in three categories, which I would symbolize with the names lines, colors, and shapes. Lines have got everything to do with dimensions.
So imagine you want to draw a line in TeX. You can draw it horizontally. Suppose you want it to be seven points thick and 84 points long. Well, in horizontal mode, you can just have this rule here. Of course, it will work. What I wanted to do is draw lines in PDF, in all possible directions. But if I drew one horizontally, it had to be exactly the same as this one. So my dream was a little PDF environment, simple, in which I could say, for those of you who don't speak PDF: seven, set the line width; move to 0 0; line to 84 0; and stroke it. It had to be exactly the same. More than that, I wanted to specify the line width somewhere else: in a new dimension, I set it to seven points, and I can use that in TeX and hopefully in PDF. Automatically, it would use whatever line width is currently defined. So if I change the line width at the top, say to 14 points, everywhere it changes automatically. Now, why do I want to do that? Well, Pavneet was just reminding you about my book, this one here, where I do a lot of graphs. You can have a look if you want. One thing that we did is design all the graphs with parameters, and at the end, when we went to the offset print presses, we did a test with several line thicknesses in gray ink. We decided, well, this one is the one we want, and we wanted to change one parameter in one place and have everything change again. More still: designing pages on a grid was my talk last year. We wanted to go beyond just the thickness and draw things on the square grid. So I redefined the pica to be whatever, in my case usually 14 points. And then I can not only define the line width harmonized with the size of the square here, but I can also use the square as a unit and say the length has got to be six squares long, in TeX and in PDF. So instead of having a PDFpt environment for points, I have a PDFpc environment for my redefined pica there. How do you do that? Well, two things. The first thing you have to learn is to extract values from dimensions in TeX. And you know, if you ask for the line-width dimension with \the, you get 7pt. Now, you don't want the p and the t in PDF code, you just want the seven, so you have to get rid of that. The only trick you need to know, if you've never done it, is that this is not catcode 11 for letters; it's catcode 12. So there's just a little trick you have to know: you change the p and the t to catcode 12, and then you define a simple macro that takes something with pt and returns the same thing without the pt. And so you can put that instruction in front of the line width, but yes, you have to expand it first so you get what you want. With that, you're ready to do the PDFpt environment. \pdfliteral is the one you get from pdfTeX. You put little q, big Q, to keep the changes local. You do exactly what we said a moment ago; there was a space in there, and you have to be careful with that in PDF. You set the line width and then you put the code, like 0 0 move-to, 84 0 line-to. That will work, but it will give inaccurate results. Anyone an idea why? The points are different: TeX uses 72.27 points to the inch, PDF uses 72 points to the inch. Now, that's fortunately also easy to solve. You just do a change of coordinates. That's the dreaded concatenate-matrix operator, cm, in PDF. Suffice it to say for now that you just need to put the scaling factor there, with the rest being zero, and that magical number is the ratio between 72 and 72.27.
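(A minimal sketch of those two tricks, reconstructed from the description rather than taken from the speaker's slides; it assumes plain pdfTeX, and the macro names are mine. Note that a literal's tokens are expanded at shipout, so anything volatile may need pre-expanding with \edef.)

  % Strip the catcode-12 "pt" that \the produces, then wrap drawing code
  % in a localized, rescaled graphics state.
  \newdimen\LineWidth \LineWidth=7pt
  \begingroup
    \catcode`\P=12 \catcode`\T=12
    \lowercase{\endgroup
      \def\stripdim#1PT{#1}}% after \lowercase the delimiter is pt, catcode 12
  \def\pdfPT#1{\pdfliteral{q
    .99626 0 0 .99626 0 0 cm % scale by 72/72.27: TeX points to PDF big points
    \expandafter\stripdim\the\LineWidth\space w
    #1 Q}}
  % a 7pt thick, 84pt long horizontal line, matching the rule version:
  \pdfPT{0 0 m 84 0 l S}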
So, of course, you could say: well, actually, for the code that I want to put here, I would rather use the PostScript points, what TeX calls big points. In that case, I can redefine that to be a PDFbp environment, like this, and I can get rid of the scaling, right? Well, except that then the line width is wrong. So what you have to do is scale the line width only, not the rest. I used to do that easily in PostScript with some code here. You can't do that in PDF, but what you can do is do it in TeX. So let's have a begin-group and end-group to keep it local, multiply the line width by the ratio of 72.27 to 72, and then the rest can stay the same. That's a second environment. Well, you want a third one, with the redefined pica. The only thing you need to take care of, again, is the change of coordinate systems. Now you put 14, because I define my pica, in fact, as 14 big points, so it goes straight in. The magical number there is the 72-to-72.27 ratio divided by 14. At this point you could say: oh, why don't you do it automatically? Why don't you extract the value 14 from there and put it here? That's easy to do. Now, this one is a little trickier, because this redefined pica need not be an integer, right? It could be 14.5 big points. So I decided to draw the line there and just do this part by hand, because I never change that size anyway; I have one grid, maximum two, for a document. So, to generalize it, here's an environment in which you give the scaling factor, which you put here, and what I call the unscaling factor, which you use there, including the 72.27 conversion; and then you just put the code there. So the three we had before, we redefine in this general frame, each with its scaling and unscaling: for the points, for the big points, and for my redefined pica of 14 big points in this case. And that gives us an excellent start. What's next? Well, you may want to have the lines in color, right? So what you could do is define some CMYK macro, with which you can set colors, and then you can define red as being 100% magenta plus 100% yellow. That's got nothing to do with pdfTeX: if you put that in front of what we had before, it will work in both cases. But perhaps what you would like to do is specify a color inside the literal PDF, for example because one piece will be red, one piece will be yellow, one piece will be orange. And again, you want to be able to redefine the red at any time and have all the reds change everywhere. The simple solution for that, fortunately, is to keep track of whether you are in PDF or in TeX. Define an \ifpdfliteral flag, and in the definition we had before, the only thing you add is that you set it to true inside the group. Then you can define your color-push intermediate macro by saying: if I'm in PDF, I just dump the operators; if I'm not in PDF, then I have to do those things that pdfTeX teaches you to do to set the color. And now CMYK: you push the color, and here I set the color both for the stroke and for the fill. If you want to get fancy and set a different color for the stroke and for the fill, you can, of course, extend this. That will work, so I can do everything that I did before. And I would like to extend that to line thickness: set the line thickness by name. So if I define a medium line thickness to be seven, I want to use it in front of that. That will work. Will it work inside \pdfliteral? That's a little trickier, because you need to do calculations in TeX.
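(A sketch of where this ends up, with my own names and factors, since the slides are not in the transcript; the direction of the 72/72.27 ratio and the literal mode keyword may need adjusting to your own conventions.)

  \def\pdfdrawscaled#1#2#3{% #1 = cm scale, #2 = line-width factor, #3 = code
    \begingroup
      \LineWidth=#2\LineWidth
      \pdfliteral{q #1 0 0 #1 0 0 cm
        \expandafter\stripdim\the\LineWidth\space w #3 Q}%
    \endgroup}
  \def\pdfPT{\pdfdrawscaled{.99626}{1}}  % TeX points
  \def\pdfBP{\pdfdrawscaled{1}{.99626}}  % big points: rescale the width only
  \def\pdfPC{\pdfdrawscaled{14}{.07116}} % a 14bp grid square as the unit
  \def\CMYK#1{\pdfliteral direct{#1 k #1 K}}% fill (k) and stroke (K) color
  \def\red{\CMYK{0 1 1 0}}               % 100% magenta + 100% yellow
  % e.g. a red line six grid squares long: \red \pdfPC{0 0 m 6 0 l S}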
If you dump the calculations in PDF, they will not take place in TeX. So the solution at this point is to say: well, we need to be able to get in and get out of PDF. So we need to split, in a sense, the environment we had before, and have a \pdfdraw and an \endpdfdraw. We memorize the scaling factor and the unscaling factor for later; the rest is as usual, the definitions here are as usual. And now we can get in and out. You start the environment there, you stop it there, you put the length of the line in PDF, you change the thickness, and then you stroke at that point. So you have to make sure you take care of that in the line-width-setting macro. You set it to the value, of course; if we're in PDF, then you need to do the scaling and spit it out into the code; if we're in TeX, you don't need to do anything else. So this in-and-out of PDF was also a solution to many problems there. The only thing we were not careful enough with, previous slide, is this: we have \pdfliteral here, but we only said here that the PDF flag becomes true or false. We also need the flag to be set by the literal utility itself, because we might have a color specification there, right? We might want to say 'red' here. So we want more than just plain \pdfliteral. You can have a little utility that is just \pdfliteral as before, except that it sets the flag to true, and that will solve it for now. So we can get to the fun stuff. What's the fun stuff? The fun stuff is shapes. Example: I want to do a graph, a scatterplot this time. I want circles. I want to be able to define circles somehow, say a medium dot being a circle of size 1.5, and in some place in the draw environment, still to be defined, I would spit out the data. Now, I told you, there is no primitive in PDF to do circles. You have to do that with at least four Bézier curves, and you need it parametrized, because you want to change the dimension and make sure all the sizes change: small dot, big dot, things like that. So here is the PDF code you would need for a circle of radius 1. It's four cubic splines, and then you close the path and you stroke it or fill it or something. Nice and easy. Because of the symmetry of the circle, the number that you see there is always the same. That's nice. But what if you want a circle that's not of radius 1? Well, scaling again; we learned to do that with the first two points, and we can apply it here. So here's the trick in this case. Define two dimensions, and let's have a shortcut to extract the value corresponding to a dimension. Then one dimension, you say, is the radius; the other dimension is that famous number we saw a moment ago multiplying the radius. And then you put those a's and those b's in the right sequence with the right sign in front of them. That will give you circles. So I'm defining that and expanding it inside a macro that I called marker, and my scatterplot macros know that wherever they go, they have to call marker. That will put the circle there, and they can move someplace else. Circles: easy. Well, what if it's only part of a circle? In particular, what about rotation of coordinates? That also used to be so simple in PostScript: you say rotate 30 degrees, period. You can do that in PDF too; you just have to put cosine, sine, minus sine and cosine of the angle, and zero, zero, into the cm matrix. Great. How the heck am I going to compute a cosine in TeX?
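(Two more sketches along those lines, again my reconstruction; in particular, whether plain \pdfliteral or \pdfliteral direct is needed for graphics state to carry across chunks depends on your setup. The constant 0.55228 is the standard Bézier-circle control-point factor, which I assume is the "famous number" meant here.)

  % getting in and out of PDF, so TeX can calculate between operators;
  % \PDF pre-expands its chunk now, since literals otherwise expand at shipout
  \newif\ifpdfliteralmode
  \def\pdfdraw{\pdfliteral{q .99626 0 0 .99626 0 0 cm}\pdfliteralmodetrue}
  \def\endpdfdraw{\pdfliteral{Q}\pdfliteralmodefalse}
  \def\PDF#1{\begingroup\edef\x{\noexpand\pdfliteral{#1}}\x\endgroup}
  \def\setlinewidth#1{\LineWidth=#1\relax
    \ifpdfliteralmode \PDF{\expandafter\stripdim\the\LineWidth\space w}\fi}
  % a circle of radius \Radius as four cubic Béziers:
  \newdimen\Radius \newdimen\CtrlOff
  \def\circlepath{%
    \CtrlOff=.55228\Radius         % control-point offset k = 0.5523 r
    \edef\rr{\expandafter\stripdim\the\Radius}%
    \edef\kk{\expandafter\stripdim\the\CtrlOff}%
    \PDF{\rr\space 0 m
      \rr\space \kk\space \kk\space \rr\space 0 \rr\space c
      -\kk\space \rr\space -\rr\space \kk\space -\rr\space 0 c
      -\rr\space -\kk\space -\kk\space -\rr\space 0 -\rr\space c
      \kk\space -\rr\space \rr\space -\kk\space \rr\space 0 c h S}}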
For a split second, I thought I would use a Taylor expansion in integer arithmetic, but I chickened out of that one and thought we needed something else. So basically what happened is: I don't need that much. Occasionally I set text at 45 degrees; very occasionally vertical; even less frequently 180 degrees, on documents I will fold or something. So I just defined macros for those, and I was happy; I postponed the rest forever. But then my kids started learning how to read clocks, an analog clock, in school, and they needed a little help with that, because they had to learn it in Dutch. And in Dutch, if it's 2:35, you have to say that it's five minutes after half an hour before three, which is complicated mental gymnastics. So I wanted a simple clock macro that would draw a clock like that, and then either I would draw hands on it and ask them what's the time, or I would give them the time and they would draw the hands on it. Now, how could I do that without being able to rotate freely? Well, I figured, this is just six degrees, right? So let's have a macro that rotates six degrees, and let's make sure we can draw thick lines left, right, top and bottom: long ones for hours, short ones for minutes. So basically, in the programming, I said: hour, rotate six degrees, minute, and so on. So I did hour, minute, minute, minute, minute, hour, minute, minute, minute, minute, and three times that draws this, plus a little circle in the middle. Good. That was an easy way out, until I started getting tired of drawing hands on clocks by hand. So I figured I needed to revise this macro and be able to say clock 2 34 55, and it draws this. And then with pdfTeX you can even use the random number generator: I could make a whole sheet of those for my kids, and have a separate sheet with the solutions for me, so I could even look it up. Now, the solution here was to say: hey, let's pretend it's a train-station clock. That means the hands go click and move six degrees at a time. So the best I could come up with in this case is just a lookup table. Here is a long list that takes a parameter from 0 to 59 and returns the corresponding number of degrees. That's useful for the seconds here: it's each time one of those discrete positions. I'm a quantum-space kind of guy. This works also for the minutes. It almost works for the hours: you could say, well, let's multiply the hour by five, and then you have it, but of course then the hand would point straight at the hour, and we want it to be somewhere in between. So in that case, instead of difficult complications, I just did two changes of coordinates: I first rotate to the hour; then I have a different lookup table that maps the 60 minutes into this interval here, and I move it just a little more for the minutes. And I think that's the solution for a generic macro that will do any rotation. Say you want to rotate 32.67 degrees. I think what you want to do is take the 3 and have a simple lookup table that moves in tens of degrees; then you take the 2, and you have another lookup table that moves in whole degrees, from 0 to 9; then one that moves in tenths of a degree; and so on. You don't even need dimensions in TeX for that. So I'm going to write 32.67 and have a macro that gobbles the digits one by one and does the necessary stuff. I didn't do that yet. Why? Because my concern is elsewhere: arrowheads. With the macros we defined so far? Easy.
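(A sketch of the train-station lookup, using the \PDF chunk macro from the sketch above; the values are just the first few entries, and a real macro would carry all sixty.)

  \def\rotsteps#1{% rotate by #1 six-degree clicks (0 to 59), no trigonometry
    \ifcase#1 \def\cs{1}\def\sn{0}%       0 degrees
    \or \def\cs{.99452}\def\sn{.10453}%   6 degrees
    \or \def\cs{.97815}\def\sn{.20791}%  12 degrees
    \or \def\cs{.95106}\def\sn{.30902}%  18 degrees
    % ... and so on, one entry per click, up to 59 ...
    \fi
    \PDF{\cs\space \sn\space -\sn\space \cs\space 0 0 cm}}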
You just draw a little triangle; you paint it; you stroke it. Works. But what if you have a sloped line? You could say: well, you just go there after drawing the line, you rotate, and you draw the same arrowhead. Except you don't even have the angle now; you have the delta-x and the delta-y. If you take the ratio between the two, you have the tangent of the angle. So in addition to calculating sines and cosines, you have to calculate an arctangent in TeX, which I didn't do either. I figured there must be a better way. But what's the problem with a lookup table in this case? Well, imagine that the line goes closer and closer to vertical: the tangent goes to infinity. Do you want a lookup table that goes all the way to infinity? Not very practical. Solution? Symmetry. You do it up to 45 degrees, and beyond that you take the cotangent and figure things out. That's just for one of the quadrants; for the other three quadrants you just play with plus signs and minus signs. So I thought for a second, because of this conference, I'd do this the Boris way and just tell you: please do that by tomorrow as homework. But I was energized after the banquet last night, and so this is a midnight hack to do exactly that. And as you can see, it actually works. And it's not that long; no, I'm not showing you the lookup table. And if Didier were still here, he would have lots of things to say about the way I display this, but it fits on one slide. You just see: if x is negative, you worry about the signs and make it positive; you do the same with y; you worry whether x is smaller than y or vice versa; and then you do a division, which I multiply by ten to index my lookup table in tenths. And then the only thing you have to do is make sure that, if it's inverted and you have a cotangent, you go get the sine from the lookup table and put that into the cosine definition, and vice versa. If that's not the case, it's straightforward. And then here's just your PDF code: draw the line, move to the end, do your little rotation, and then draw an arrowhead. Of course, this you could wrap up as a TeX macro, with dimensions for the head of your arrow and everything. All right. So we have learned: you can do a lot if you can extract numbers from units in TeX, if you can play with coordinates, if you can get in and out of PDF and of TeX at the right places, and if you can be creative and perhaps discretize what is there, the way we did with lookup tables, the way integrals were done in the good old days before computers: we looked things up and did interpolations and such. Conclusion and perspectives out of this: well, my major conclusion, for myself, is that I feel confident now that I will be able to do in PDF everything that I used to do in PostScript. And that is going to be put to the test very soon, because before the year is over my book is due for a third print round, and the first two print jobs were done with the PostScript version. Now I have to go to PDF. I have brought enough copies with me if you want to buy one and get the discount, as usual, but soon I will be out of books and will have to reprint; and that's great, the best test. Now, I say it's possible, but some things are going to remain difficult. Example: not in the book, but in the lecture notes I have for a course I gave on statistical thinking. Guess what: I do a lot of normal distributions. In PostScript, two lines of code.
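(And a sketch of the arrowhead step once the lookup has produced a sine and cosine, again using the \PDF chunk macro from earlier; the per-quadrant sign handling described above is omitted, so the values here are assumed positive.)

  \def\arrowheadat#1#2#3#4{% #1,#2 = line end point; #3,#4 = cos,sin
    \PDF{q 1 0 0 1 #1 #2 cm    % translate to the end of the line
      #3 #4 -#4 #3 0 0 cm      % rotate by the line's angle
      0 0 m -6 2 l -6 -2 l h f % a small filled triangle as the head
      Q}}
  % e.g. \arrowheadat{84}{30}{.94}{.34} for a line ending at (84,30)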
I would define the Gauss curve and just say to PostScript: please plot that from 0 to 16 in steps of 0.05, between the begin-data and end-data; no programming was necessary to do that. So simple. What do you have to do now, in PDF? Give it all the values. Now, it gives me finer control, but it clutters my TeX file; that's the drawback. How do you generate this? Any way you want. I bet some of you who are into Microsoft products would do this in Excel. Fine, it works. Myself, I like to do it with PHP: a few lines of PHP code, I put that in a browser, and if I'm careful with the format, what comes out in the browser I can copy and paste directly into my TeX file. Works well. So the very last thing we needed to worry about was our challenge from the beginning: how about this pie chart now? How do you do all the Bézier curves for all of these pieces? Well, it's not forbidden to have some PostScript help. The way I did this is: I used my old PostScript code, very short; I put that in an almost as short Encapsulated PostScript file; I opened that in Acrobat or in Preview on the Mac, and I get a PDF file automatically. Now, what do you do with a PDF file? Well, I bet most of you would keep it on the hard drive and insert it in the right place. You can do that. But that's not good enough for a control freak such as myself. So what I do is say: oh, it's in PDF now, let's go inside and see how they do it. And so I go steal the PDF code inside that distilled file. Then I put that PDF code into my TeX-and-PDF programming here, and I can generate this from within the file, and I don't have those extra little files lying around on my hard drive. So yes, it is possible, with some help from a number of tools around it. So the year is now 2011, and I really hope that for me and pdfTeX, and for you and pdfTeX, it's going to be happily ever after. Okay, there's time for lots of questions. And my first question; I'm going to ask the question someone else was going to ask. You showed that pie chart and said: oh, I just snip it out of the PDF from Preview. Just what does that look like inside? Do you have that ready to show us? Actually, I've been learning from Don Knuth himself to tell little white lies every so often. So what's the problem with the PDF? It's compression; it's Flate-compressed. So here's the trick, it's no secret. Since I did not find a simple way to uncompress it (if you have a way, share it with me: a simple way, not a complicated way, not an exotic way), what I actually do is re-export it as PostScript. Now, it's not the same PostScript that it used to be, because it's a converted PostScript. That's not PDF, but it's close enough, and then I have to do a few very small conversions by hand: a capital L needs to become a lowercase l, small things like that. And then I've got it. But yes, it's admittedly not quite as clean as I made it sound. Please talk to me during the break if you have a Flate decompressor that I can easily use on my Mac. I'll be listening. Okay, we can do that. Just a quick comment about trigonometry: actually, in LaTeX there is a package, which, at least as the author David Carlisle says, also works in plain TeX, which does exactly this calculation of sine, cosine and tangent, using Taylor expansions in TeX. So just look at trig.sty; the code is not very long, and it works without lookup tables, just using Taylor expansion. Even for real numbers? Yes, even for real numbers. It's very good code.
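(The resulting pattern is just a long list of operators pasted into the drawing code, inside the split environment sketched earlier; the sample values below are made up.)

  \pdfdraw
    \PDF{0 .004 m
         .5 .009 l
         1 .018 l
         % ... one "x y l" pair per sample, generated externally
         % (Excel, PHP, whatever), every 0.05 up to 16 ...
         16 .004 l
         S}
  \endpdfdraw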
I really love it. I'm using PSTricks most of the time. Am I missing something from PSTricks, and are you getting something more? Well, if you still want to do PostScript, it is possible; you can use PSTricks, or whatever. I have experimented in my own little world: if you stay in PostScript, the path from the source to the eventual thing you see on the screen is longer, and I decided it was too slow for me. Now remember, I was coming from Textures on the Macintosh, and it was even called Lightning Textures: I could write code in the source file and it would typeset automatically, in real time, in the other window. So I wanted something fast. That was one reason. The other reason is that there were a number of things in pdfTeX, separate from what I've shown you here, separate from graphics, that were of interest to me, including the possibility to stretch fonts a little bit on some lines. That was nice. I'm not sure it's still important now, but at the time I made the decision it was something. So that's the way I went. But if you want to stay with PostScript, as far as I know, it is possible. Other questions? So you would go with a lookup table also? Yes. Okay, that's good to know. I'm going to investigate the calculation way, but when I started with my lookup table I thought, is this even a good idea? It's nice to know that you went that way. Well, I think the thing I need to do at this point is get into integer arithmetic, understand the limitations on the size of variables, and make sure I get my precision, my accuracy, right in there, which I haven't had time to do yet. Let's be in touch. Thank you. Okay, anybody else? It looks like there are no more questions; Kevin may have an announcement or two. Thank you very much, Jean-luc. Firstly, just a comment: if you haven't seen the book, and he has copies to sell, it's the most beautiful book Jean-luc has produced, so I would recommend it; they are available. I haven't read it, but just looking at it, I think you're almost certified as mad for making this. Absolutely. Stay away from me later. Look at the book: every paragraph ends flush right. Have you rewritten the text to make that happen? Yes, absolutely. It's not stretching anything; it's written to the page layout. So every single paragraph is flush left and flush right? If it were just that... I mean, the line breaks are usually at commas and periods and everything, so that's total madness. But guess why I ended up publishing it myself? That explains it. So thank you very much. Just a couple of very quick announcements. If you want to go to the airport early today and you need a cab, please tell someone at the desk over in that area so we can arrange it for you. And secondly, better late than never, we have some badges for you: in the coffee area there are some badges, so if you want one, please pick it up.
|
In its ability to generate graphical elements, TeX is basically limited to horizontal and vertical black rules. Extended versions such as pdfTeX add color options and, especially, the possibility to draw more freely on the page by inserting raw code (PDF code in the case of pdfTeX). Still, these two coding environments\Dash TeX and PDF\Dash are too often regarded as disjoint. It would be nice to integrate them seamlessly, for example, to use in PDF code a color or a dimension assigned or calculated in TeX. This presentation points out the challenges of such a consistent and transparent TeX–PDF integration, proposes a set of solutions, and illustrates how these solutions help create graphs flexibly or design pages consistently on a grid.
|
10.5446/30898 (DOI)
|
[The opening of this talk is garbled in the transcript; the recoverable parts follow.] ... some very artistic, but not what the university wanted. So we wanted a single package for everyone, and to make sure that even with strange page layouts the package would still work, particularly because of the legal regulations; as we all know, lawyers and psychiatrists quite often have very strange page layouts. We want the user to specify the contents. The main objective was headings, but letters were a very easy extension to this. And the heading is always centred at the top of an A4 page. [The rest of this segment is likewise garbled; from the surviving fragments, the speaker describes the logo box, the to-address window for C5 and C5/6 envelopes, and the millimetre measurements that position these absolutely on the page regardless of the user's layout.]
[This segment also begins garbled.] ... We create a header by using the package, and then each time we say \heading we get a heading at the top of the page. It does a clear-double-page if you use it more than once in the document, and it resets the page counter. The letter commands are the standard letter commands; I won't discuss them, except \opening: we have redefined it to take an optional argument, which is the text that goes top right: Copy, Draft, Confidential, etc. We also have a new command, \closingtwo, because quite often in Switzerland we have to have two signatures: one is the boss, and the other is the person who did the work. So we have the signature area written not as one layer but as a tabular, which is what it is: here the boss is the CEO, and we then say \closingtwo. The image file for the signature will not be used. Merge letters work as well. The merge package, by Graham McIntry, is very useful for producing hundreds of the same letter to different people; you have a file with pairs of to-address and opening. When you read an external file from TeX, you read one line, unless you put it in curly brackets, in which case it can be several lines. But if you use this in a tabular, of course, it won't work, because you have the curly brackets, so we have to remove them. This is very easy; the solution is in The TeXbook. All you have to do is look at the to-address parameter which we read in from the external file: does it have curly brackets or not? We count the number of tokens. If there is one token, it must be in curly brackets, or else an address of a single character, but that works out the same. If there are more, then we call the set-to-address without adding curly brackets, because they are already there. This is the command which does the counting. So we have an example, and examples are useful: we have a class option (.clo) file, and here we define a default logo, GCCS, and we form various types of letters.
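(A minimal sketch of that TeXbook-style counting trick, with names of my own; the package's actual macro is not shown in the transcript, and spaces in the argument would need extra care.)

  \newcount\toknum
  \def\counttokens#1{\toknum=0 \tokloop#1\endtokloop}
  \def\tokloop{\futurelet\next\tokcheck}
  \def\tokcheck{%
    \ifx\next\endtokloop \let\gonext\tokdone
    \else \advance\toknum by 1 \let\gonext\tokeat \fi
    \gonext}
  \def\tokeat#1{\tokloop}% a whole {...} group is gobbled as a single token
  \def\tokdone\endtokloop{}
  % \counttokens{{Mr A. N. Other\\1 High St}} leaves \toknum=1: already braced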
This is useful for my secretary: she wrote the minutes of two different committees, each of which had its own logo and its own heading, telephone number, names, etc. The package can be used in English, French or German. In the first example here I use the English file, the option english, default; I don't use any of my extra options in the definition file. I have my usual letter, etc., and a lot of text. The default values in the package's .clo file contain address A, B, C and the extra; it has some details for the bottom of the letter and for the heading. The default logo is in the definition file, which really should be larger. This is what we have: the shaded area is really where the text is, but the black line here is a picture of zero width, so it doesn't matter if I change the dimensions; the letter heading will always be in the same place on the page. You notice at the bottom we have the footnotes, the foot information. It is worth just mentioning Mr Arouet, who is actually better known as Voltaire, because there is a little play on words with Arouet which he didn't like. To write a private letter, all I do is use the same thing, but now with the option private, which is defined by a .clo file. I didn't want a signature: the text underneath says 'yours faithfully', you sign, and your name; in a private letter I didn't want that, so I blank it out. Case one in the .clo file just contains the address command; this address will be typeset, with no logo, no foot, etc. So what we have is the private letter. The picture box is the same (in this case I didn't change the dimensions), and the address will be positioned properly. I now have a small example in German, and I use another option, bruni. Here I use two signatures: two persons, with their positions in the company underneath. So I use \closingtwo, and the two signatures will be centred under the closing. As a small aside, it is worthwhile asking non-British users how they would address these two gentlemen if they met them. Any offers on the names? Well, every Englishman knows these are Mr 'Fanshaw' and Mr 'Beecham', believe it or not (spelled Featherstonehaugh and Beauchamp). So, the definitions in the file: I also changed the fonts. Changing the fonts is very easy; I don't recommend using large fonts, they look ugly. Here I have case three, bruni, with a lot of commands: here, of course, I used centred A, B and C, and for D I just make a rule; it looks pretty in the heading. At the bottom I made a fancy foot: I put some asterisks and some text. I changed the fonts, and I have a 50-millimetre-high logo, and this is what we have: beautiful Brunhilde, and the logo fits in the box. If the logo were too large, it would give an error message, because the package doesn't allow the logo to come down into the address window. So, headings; it really all started out with making headings. We just load the package with german, and I use bruni again. Then with the bar length I can change the length of the bar underneath the header; changing it to the text width doesn't look very nice, and you can change it to zero points and have no bar. And you just say \heading, and the optional parameter is the text which will go top right. The next example was more or less the same as example three. Here we have a header in two-column mode; if you notice, in the file I changed the centre position to minus 10 millimetres, so the centre text, instead of being aligned with the centre, is moved 10 millimetres to the left.
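(A sketch of how such a two-signature closing might be written as a tabular; illustrative only, as the argument conventions are my assumption, not the package's.)

  \newcommand{\closingtwo}[5]{% #1 closing text; #2,#3 and #4,#5 name and role
    \par\nobreak\vspace{2\baselineskip}
    #1\par\vspace{4\baselineskip}% room for the two handwritten signatures
    \begin{tabular}{@{}p{.45\textwidth}@{}p{.45\textwidth}@{}}
      #2 & #4 \\
      #3 & #5 \\
    \end{tabular}\par}
  % usage: \closingtwo{Yours faithfully}{A. Boss}{CEO}{B. Worker}{Engineer}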
So, merge letters. The merge package works very well when you have hundreds of the same letter: you need the data file with the text inside, and here we have the letter itself: begin-merge with the data file containing the addresses and the openings; then you write the letter, and end-merge; and these are the letters. There is a slightly changed merge file: I wanted to include comments, ignoring percent lines, so that after the first mailing you can comment out the addresses of people you don't want to send the letter to, and those lines are just ignored. Here is an example. We have another illustrious gentleman, and the name here is Cahun, in curly brackets; so of course these have to be removed when we put them in the tabular. I use a tabular instead of going into raw TeX because I don't like inventing the wheel twice, and I can put in a comment saying who I think he is. And I have here an address without the curly brackets: just one line for the address, and a second line for the greeting. The second address I've commented out, I don't want to use it today, and I have another one: this address in curly brackets, with 'My dear Elizabeth' as the opening. Again, it's worthwhile thinking, when you go to England, how you would address Viscountess Elizabeth at the office. And Elizabeth has a name that is unfortunately too large for the box, so you get a warning message that it will not fit, and you have to convince Elizabeth to drop some of the name, or you have to put it on two lines. The same applies if you have too many lines and the address goes outside the box: you also get a warning message. And these things work. So that's all I have to say. The first version was called gletter, because I was starting the company GCCS at the time; the next one is hletter, and maybe iletter after that.
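(The data file then looks something like this sketch; the format is assumed from the description: one address per record, braced if it spans several lines, followed by the opening, with % lines skipped. The names are illustrative.)

  {Mr I.~Illustrious\\1 High Street\\London}
  Dear Mr Illustrious,
  %{Viscountess Elizabeth Featherstonehaugh\\The Manor\\Kent} % not today
  %My dear Elizabeth,
  Mrs J. Smith, 2 Low Street, Bern
  Dear Mrs Smith,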
The dimensions inside, for moving the logo, are not very easy; it would be better to have them as parameters. And I came across a rather tricky problem: whether to introduce a fourth language file, English UK, English USA, or British. With pressure from outside, and with my recommendation, we decided british was the correct one to use. But then there is a small bug in Babel: if you specify french,british, the language selected is French, not British (you have to guess it; that really is a mistake in Babel), whereas if you specify french,english, then English is the active language. So if you specify british, we have to do a select-language of British at the start of the document. I'm still waiting for a reply about this. There's another small problem: paragraph indentation. Do you indent in German and French or not? Some say yes, some say no. The French book I'm currently reading is indented, including the first paragraph; a German book I'm reading is also indented. But a secretary at EPFL in Lausanne told me one doesn't indent in letters; maybe this is because of Microsoft Word, I don't know. Ah, the typewriter. And the last one: I did think of supporting North American stationery, but I don't have any, so I can't measure with my ruler where the window in the envelope should be. So that is it: the package is there, and you can send questions to GCCS, which is not the gardening company in the example. Thank you. [Questions follow; parts are garbled.] I keep wanting to put it on CTAN, but each time I find some correction or addition; I will put it on CTAN. First of all, many thanks; I've been struggling with such letters myself. I have several questions, and I also put together a package for my own use in the business sector. Do you follow a standard? We use the specification of the University of Bern. So it's strictly for your specific application? Yes. I didn't know such a standard exists. It does; it was very common. Two other questions: one is page numbering; is there a macro to add in the package? Well, you can always add the command; the first page of the letter and the last page won't be numbered, and the in-between pages will. The header pages always use page style plain. Okay, right. And I guess part of the 2676 standard, something we started with, was referencing previous correspondence, so you have a field where it would say 'further to your letter or email'. I must note that down; previous correspondence may be something we add. I can show you, because I have my own example; it's a US letter, a box of boxes. Yes; this style is actually used by several companies in Switzerland, small companies, not big ones, unfortunately. I'd just like to say that I believe in the United States the location of the window in the envelope is up to the manufacturer of the envelope. No, actually, there are standards, according to the USPS; there are standards. That may be what is mainly followed, but I go into stores and, I swear, one window is high and another is low; there is a big volume that it must be within, but the actual outline is all over that volume.
|
There are many packages for making letters and page headings with logos, etc., in \LaTeX, but many do not take into account user changes to the page dimensions, and so the heading may not be in the correct, centred, place. Also, the user should be able to specify different types of letter, including private ones, without having to use another version of the package. We describe how to achieve a package to produce headings and letters which is robust against page layout changes and which permits the user to define all the fields, including the logo, himself. Obviously, to redesign the presented headings one must be a little versed in \AllTeX\ but the user specifies his own details very easily. Described will be: 1)~How to determine the absolute position of the heading on the page. 2)~Support for various language styles. 3)~Using class option (\code{.clo}) files for defining types of letters\Dash standard office, private, signed letters, etc. 4)~Using class option files to specify the user information (name, address, etc.)\ for the various letter types and languages. 5)~Producing letters, merge letters (possibly signed) and headings. 6)~Ensuring the letter to-address fits in a C5 and C5/6 window. 7)~Hints on how to change the heading to your own style. The talk should be suitable for general users.
|
10.5446/30899 (DOI)
|
[The recording of the opening of this talk is almost entirely garbled in the transcript. From the fragments that survive, and from the discussion that follows, the speaker is describing how he came to typeset a book written by his father, published by Maria Curie-Skłodowska University in Lublin, Poland, and the early decisions taken about the project.]
[The first part of this segment is garbled as well.] ... I can't remember if I actually got the lines to line up front and back on the paper, but I think I did. The layout, if I just come back to that, which is in bold there... oh, I went round and round and round with this, and used more and more parameters, and tweaked more and more vertical and horizontal offsets and sizes and margins. In the end, what really worked was just to be very simple. What I had been given were Microsoft Word files typed at the university in Lublin, so that was very easy: this is plain text, no math, nothing terribly complicated, so conversion was not hard. This is just the heading of my TeX master file. It's very simple: polyglossia, a little bit of things just to get my em-rules and things, opening and closing quotes, and setting the default language to Polish. There's my font, and there are my drop numbers. And here's the geometry; that was it. I had huge, complicated attempts at layout, and it all came down to one line, and it did the trick beautifully. There were some constraints about this, and that's where the binding offset comes from, because of what the printer needed; I'll talk about the printer in a minute. Here's where I set the non-stretchy space for lines. And the layout display was just so I could see what I was doing, before the final run. A few primitives. There's my line of Maltese crosses. And then, as with almost everything I write, there are some LaTeX hacks: I just open up latex.ltx, cut and paste bits out of it, and make it do what I particularly want. I don't think you can see anything there. Just after what's shown on the screen, I add the line of Maltese crosses to the chapter heading. But that's not very interesting; it's just fiddling around to get the headings the way I want them. So: the proofing was all done in Poland, all at the University of Lublin. They were marvellous; they proofed everything twice.
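(Something like this minimal sketch, then; the font and the measurements are illustrative assumptions, not the book's actual values.)

  \documentclass[11pt]{book}
  \usepackage{fontspec}
  \setmainfont{TeX Gyre Pagella}  % "there's my font"
  \usepackage{polyglossia}
  \setdefaultlanguage{polish}     % the default language is Polish
  \usepackage[a5paper,
              inner=20mm, outer=15mm, top=20mm, bottom=22mm,
              bindingoffset=6mm]{geometry} % "it all came down to one line"

XeLaTeX or LuaLaTeX is assumed, since polyglossia and fontspec need one of them.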
I acted as the typesetter, and I received from them on paper, as I had requested, piles of paper with their marginal annotations: absolutely traditional. For this book there was no question that I would accept any fiddling with the files; I was not going to try and do this in Word or in PostScript or with annotations. I wanted it on paper: I was the typesetter, and I wanted my editorial corrections in ink on paper. It worked great; it was excellent. I sent them a style sheet with all the little editorial marks to use, so we were all speaking the same language; it worked very well. My father received a set of proofs. He was just a bit too old to deal with them: here was a pile of paper, and 'this is your book'. At first he thought that pile of paper was the book; that was how vague he was about it. He's not senile or anything, but he doesn't know the process of book production; he didn't quite know what exactly I was doing, or where it would begin or end. For me it was very clear, but when I first plumped it down on the table in front of him, he thought this pile of paper might be it. He had it for a few weeks. He didn't really do much with it; he fiddled around with the first few pages and that was about it. I think he needed to have it so that he felt involved; he was also offered the chance to take responsibility, even if he didn't necessarily exercise it. At the same time it generated a sense of frustration in him, perhaps a sense of failure, because he wasn't able to do very much. Printing was done through Lulu.com. A lot of you, I know, will be familiar with that already; is there anybody in the room who knows Lulu.com already? Some of you, maybe a quarter of you. It's a printing company: you submit a PDF and they send you a printed book. Very simple. It's really nice; it's a good site. Partly, my interest in doing this book was to try Lulu out: I'd heard about it, I'd read their website, and I was interested in really going through the process of producing a real book. Some friends of mine had used it and seemed pleased with it. It really is rather like that: you sign in, you get an account, you upload your PDF, and then there's a book. The first thing they do is send you one copy of the book to check; if that's all right, then you press a button. They also act as a kind of Amazon: you can buy the book from them. I hope it'll come up... there it is. There's the book. I made a very nice PDF and all the measurements were right. The Lulu website tells you exactly what they need. It's not quite as simple as I've just made it sound; it's not just that you throw in your PDF. They have quite a lot of criteria. They offer a lot of products: you can have hardback, paperback, various different sizes and bindings. There are choices to be made, and they offer reasonably good information. There's also a little calculation you have to do: how many pages has the book got, and therefore how much binding space must you leave in the gutter and the margin? That's the binding-offset thing, so that there's enough space; if you've got a thick book, then you need more space in the gutter. There's also the opportunity to make a PDF available for download, so you can offer an electronic book for next to nothing; you just set it up like that. I was thinking very much about the market in Poland, and thinking that Polish people couldn't afford 36 or 37 euros for a book.
I put up the PDF for download for under 10 euros: 9 euros 50. Anybody can get this as a PDF for very little money and read it and have fun, print it on their printer, do what they like; the copyright stays with the author. If you want the printed product, then there it is as well. None of this cost anything, and that really interested me. It cost nothing. I paid Lulu nothing. I had to know about typesetting and I had to follow their menus. I did some work to make all this possible, but there is no charge. They make their money out of selling, taking a cut of each copy of the book that's sold. I did pay them one payment of 60 euros. That's the only amount of money I paid Lulu, and that was in order to buy a little package that they offer, which meant that the book got promoted through Amazon and other online database services. It gets into the main professional book distribution catalogs and databases and whatever they're called. The publishing world has its organs, databases, and its big print directories of what books are in print this year and all that. For 60 euros, you can get your book into that world. That's why, as I showed you just now, it crops up on Amazon. That was it. I also excerpted a tiny bit, the first chapter. There's a procedure within Amazon whereby you can upload a bit of your book if you're the author or the typesetter. Then you get this "click me to look inside" thing. It's very professional actually. The result is great. I was very pleased with this. It's on sale through Amazon. This is Amazon.com. It's also on Amazon.co.uk, Amazon.de, Amazon.it, or .fr. I think there's one somewhere; I think it is in Japan but not in Italy. There's something funny like that. Generally, it's in the Amazon family. It's out there. Anybody who wants it, search on his name. It's there. It's a real book in that sense. It's got this nice thing. This search-inside feature only works on the Amazon.com website, which is something to do with the way Amazon works. Just to come back to the main point about that: this was an astonishingly cheap process. I put in a lot of time, and the guys in Lublin editing the book put in a lot of time. There are a lot of man-hours in there, but they're hidden a little bit, for me at least, because this was also a family work. To make, and I think you've had it in your hands now, a very acceptable book, just out of a PDF. I was really amazed that this process through Lulu didn't actually cost anything. What next? The ISBN, a very important part of all of this process. I am not the publisher; the University of Lublin, Maria Curie-Skłodowska, are the publishers. I discussed all this with them and tried to make it very clear. Because they publish, they have an ISBN; they have a university press. Their university press was able to issue an ISBN for this book. I was able to print that in the book and, most importantly, register that ISBN with Lulu. Lulu will give you an ISBN, and then they are the publishers. But they also have this mechanism where you can provide your own ISBN; their criterion is that they want to have the name and address of the publisher. That's fairly easy to do. You stick it in their system, and then they are not the publisher. The publisher is the publisher, the University of Lublin, so I'm not the publisher. It's very clear. By handling the ISBN like this, first of all it gives the university what it wants, they are the publisher, but it also enables this thing with Amazon to happen as well.
You have to have an ISBN for Amazon to be able to do this 60 euro deal and promote the book through their system. The last point, really, is distribution and promotion. I viewed it as the publisher's job to sell this book. In a way, that's been the weakest part of the whole process. The university has put it on their website. They've got, where is it here, university... Here we are. There's the book on the university website. I hope you don't feel seasick. I'm just going to scroll up to the top. There it is. They've basically just got a list of the books they've printed. They've published, rather, on their website. You get down to... Was that it? No, sorry. I know I'm making you all feel ill. I don't think there's even a clickable link. There isn't. You just have to write it. They're not very clued up about promotion and distribution and sales. I wrote to them quite a lot saying, if you want copies, I can supply them all. Would you like to take over the account at Lulu so you can just get them to send you 20 copies? I sent them the basic free copies that I just paid for out of my pocket. So they had six copies or 10 copies or something. Beyond that, I tried to help them to understand that I could provide them or Lulu could provide them or it could all be done. But they just... I don't know. I think they feel if they haven't got a pile of them taking up space in someone's office, then they haven't got it or something. I don't know why. Their sales channels are very limp. So I think the sales of the book have not been very exciting. It's sold a little. So maybe it counts as a vanity production, I don't know. But it is a genuine university book, so that's nice for my father too. You've seen it going around. That's the cover. He has got a tremendous amount out of this. I took him a bottle of Dom Perignon on the day that I turned up with the book, with the finished product, and we drank a glass and it meant a huge amount to him. It was a very nice thing to be able to do for him, I must say, to be able to put this... What is it? 650 page book into his hands. He got a tremendous sense of achievement and that something real had happened in his life. That was when he was 96 last year. He still pulls it off the shelf and reads a bit. He says, damn this is well written. So thank you very much. That's just the story of the production of one book. Yr Siershwianch was the professor of Polish philology at the University of Lublin. He was the key man who invited the book to be published by the university and liaised very strongly with my father. Emilia and Alexander, I worked with them. They did the editing work. So there was quite a lot of backwards and forwards between them. I just wanted to draw attention when I was thinking about this paper. I realised that David Walden has published a number of papers over the years in the last decade or so about this sort of thing in Tugboat. He's done really a better job than me, giving more details about the technical problems that one may face and special macros for doing things. But also articles from the human side a little bit about producing books in the real world. So there we are, the good guys. Thank you very much for your time and thank you all for attending. Thank you Dominic. Questions? Wasn't geeky enough for Boris, I'm afraid. Dominic, I was just wondering, you mentioned that you picked up a batch of books to send to the university. How did they deal with wholesale versus retail channels? I really don't know. 
So you just paid full retail on Lulu and then sent it? Yes. Lulu gives me, because I'm the one with the account that produced the book, even though I'm not the author, they sort of treat me as the author, quite a hefty discount. So I think I can get these for 20 euros. I think that's what it was. If I remember rightly, that even includes postage from America. The books are... Yes, I did. There is, I think, a minimum price, and I think I set it close to the minimum. The pricing is interesting. The book on Lulu costs, what was it, 35, 36 euros. But on Amazon it costs 50, and they offer you free postage. Lulu charges 10 euros postage, so it ends up being, I think I've got this right, sort of 45 if you buy it from Lulu and 50 if you buy it from Amazon. It's not very different. Of course they're only produced on demand. This is on-demand publishing. So it's probably a week's delay, I think, before the book is actually shipped. So you order it, then there's a week, and then there's the shipping time. But I think even 40, 45, 50 euros, I'm sorry, not in India, but in Europe and the United States, that's not a bad price for a 650-page book. I mean, in the academic world, my colleagues are delighted just to get printed by a company like Brill, and Brill routinely charges 170 euros for a book of 200 pages. So when I've been talking to my colleagues about this process at the university, they're all just agog that you can get a book with 650 pages for 50 euros or 45 euros or something. So there's a whole question of the sea change of the financial model that we all work with. In terms of wholesale and retail, really the simplest way to get the book is just to ask for it, to buy it on Amazon. And people in Poland should theoretically be able to walk into any bookshop and say, give me that book, and the bookseller, even in some small place, should be able to get it, because it's in all the databases. It's in all the professional lists of what has been printed. So with the title and the author's name, any small bookshop should be able to just order it in the normal way, like any bookshop does for something they don't actually stock on their shelves. What really makes the difference is publicity; as so often, it's advertising, and the university hasn't really got a clue. But if they had some advertising channels and did promotional literature, a TV spot or leaflets or anything, that would make a very big difference. But still, in terms of getting the book, it's never going to be warehoused or something: you just want a copy, you ask for it, it's printed and sent to you. That's always going to be the basic model. Any more questions? I have a question. So do you have to know Polish to do this, or do you know it? No, I've got rudimentary Polish. I'm not conversational, I'm no good at Polish at all, but I've got a little. I do know my father, and I know the content of the book, because I've heard a lot of it as stories and tales and discussions all through my life. A lot of it is familiar material, but in terms of knowing Polish well enough to, say, know whether a hyphenation point is right or wrong, not really. One thing I actually got wrong in the book because of that is that I capitalised both words in the title, on the title page and on the cover of the book. And apparently, I was told afterwards, it's normal for Polish book titles to have sentence capitalisation.
So: the first letter uppercase and the rest lowercase, which never went through my mind, so I just got that wrong. So that comes, I think, from not being a Polish reader. Congratulations. I have written a book, but my wife said: one more book and I'm going to divorce you. So book writing is a lot of effort, and so congratulations. Well, to him. Thank you very much. Thank you.
|
In 2010, I typeset a 650-page book of memoirs, political essays, and biographical sketches written by my 97-year-old father. The book is in the Polish language, and was published by the University of Lublin. For the design and typesetting I made choices that stylistically echoed my father's life-long links with Malta and Poland. Due to financial restrictions at the University of Lublin, I worked out a cost-effective pathway for printing and distribution using an American web-based printing and distribution service. The final result is of a high standard, and has been gratifyingly well received by all parties. Some niggles remain, however, regarding publicity and distribution. In this paper, I shall describe my choices and discoveries in producing my father's book.
|
10.5446/30900 (DOI)
|
I also have a shorter name, which is Sukhi; you can go and hit that, because they find it difficult to process. So that's the one I use for my blogs and things like that. I've got two identities: one is this official TNQ one, the other is the blogging one. Basically, it's a pleasure to come back again to River Valley. I did this six, seven years back, when there were more people there. At the time we knew about River Valley, but they didn't have this beautiful office then. So it's even more of a pleasure to come back again. Now, TNQ as a company is probably diametrically oppositely placed to River Valley. River Valley uses just two applications: one is pdfTeX, probably, and the other one is an editor called FT. And they use mostly open source. We are on the diametrically opposite end of the spectrum: most of our users are on Windows, although we in software prefer Linux, but that's the way the world is. And we mostly live in the MS world, and anything which is TeX gets converted to Word. We live with that nightmare. But we use a lot of proprietary systems like 3B2 and InDesign. And InDesign: every new version needs more RAM. So 3 GB of RAM is not enough, then you need 4 GB of RAM, and it takes a while to process. I personally have used XeTeX a lot, you know; I haven't used plain TeX that much. And XeTeX is giving me better results than InDesign, especially for Unicode and scripts like that. And the thing is that I started using TeX for my PhD when I didn't know what TeX was. At that time somebody told me, it's a bit like this, and you do what you want. And then he told me about this backslash and curly bracket stuff. And I just wrote my PhD thesis without knowing how it was going to look in the output. He told me, don't bother, I'll take care of that. I just wrote in that backslash language, you know, and I just wrote my thesis. And for all of that I just used a simple Emacs editor. I just wrote it there and then gave it to him. And then he turned it into this beautiful document, you know. I couldn't believe my eyes. I've used lots of word-processing software and it's a struggle, you know. This was such a pleasant experience. And after that, I personally haven't used anything else than TeX, you know, all those years. But there have been lots of things happening in the world which have used other kinds of markup. So this talk will be more about that. Now, the other syntax has become popular: it's not the backslash and curly bracket language, you know, it's the angle bracket language. And that one is HTML; we now have HTML5. I'll tell you all the nightmares associated with that, with a bit of context. I'll tell you a lot about those nightmares which the W3C has gone through over the years. And I'll also tell you about S-expressions, a little bit on this. Now, the brevity of TeX makes it easy to use it for basically typing the language in. You know, it's a very comfortable language. But one problem with TeX has been that TeX has a lot of primitives. And you have superstructures like LaTeX and LaTeX3. Now, I'm not too sure how many primitives there are. And sometimes when you write, you're not too sure which level of the TeX syntax you are using, because you could be at the top level, the middle level, or the lower level. And you can combine them. That's the nice thing about it. But also, if you want to convert it to something else, then only a person like Radhakrishnan could do it. We can't do it this way. But the other thing is that you have things like DTDs and schemas, which you can run on the XML. So that thing is there.
It's an advantage for XML. But there is also a problem associated with XML, you know. Now, if you look at TeX: the first code sample is the TeX equivalent of the XML. You can say title, language equal to English, and then you put it in curly brackets. Now, if you turn it into an S-expression, which Scheme people will perhaps like, then you can turn it into this, you know. Now, the nice thing about this one is that it is symmetric, so that attributes and the data are treated at one level. You don't have the asymmetry of XML. But people don't like putting so many round brackets around, you know. So they like to type something simpler. So the TeX syntax has become popular. If somebody is keying it in, not using a WYSIWYG system, then this is better. And the other thing about the S-expression, if you go back: you could nest things within attributes, which you can do neither in the TeX nor in the XML. In the S-expression, you can put round brackets, things, around the attributes. So attributes can have further attributes, or whatever. So the XML spec itself started with this asymmetry between attributes and tags. They could as well have gotten rid of the attributes and just lived with tags. So it was an architectural decision. They say that they lived with it; they are happy with it. It causes a little bit of a problem, because you have to be careful in choosing attributes, because attributes can't have further things in them: you can't have further nesting in them. You can think of them as holding either the language, or IDs, you know, a few things. Going further than that is a bit risky, because if you want to have more nested things, then it makes it difficult. The corresponding XML code is more verbose; you get it like this. With XML and HTML, you have this problem that when people type things in, they type it like this. And you get what are called overlapping structures. So you don't get a tree. What you get is a, I don't know what you call it, a creepy forest, you know, with creepers and insects. So it's very difficult to debug that and turn it into a tree. But if you don't care about it, you just live in that PostScript world, or you live in that streaming world, which I think TeX primitives do, you know. You don't care about it. You still know, OK, this part is italic. But you can turn it into a tree through things like Tidy; HTML Tidy is there, things like that. So it's a bit like how you do a piece of code. You need not indent: either you could go the Python way and indent it fully and say, indentation is what defines my structure, or you do both, you know; you have both curly brackets and semicolons and still indent. But if you, let's say, don't indent, probably there are pretty-printing programs which can turn it into a beauty. But those programs are always going to fail somewhere, and you're not too sure what's going to happen. So the overlapping structures are a serious issue. But in TeX, you can't do it. I tried it in TeX and it doesn't work. TeX balks at it and you can't produce a page, so you'd better fix it. And you know, if you have the wrong curly bracket, TeX stops right there, and then you have to fix it and proceed. So that looks like a better solution than what they have in HTML. Now, in HTML the browser manages everything and then interprets it and then produces something. But it's an issue using that HTML to do something else.
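The slide being described is not reproduced in the transcript; a plausible reconstruction of the three equivalent encodings might look like this (the exact slide text is an assumption, and the S-expression follows the SXML convention of putting attributes in an @-list at the same level as the data):

  % TeX-style:
  \title[language=english]{Some title}

  <!-- XML-style: -->
  <title language="english">Some title</title>

  ; S-expression style; the attribute list is itself nestable:
  (title (@ (language "english")) "Some title")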
So this is what Tidy would turn it into, so that it becomes proper HTML and XML. So many people say HTML is XML; it depends on what you think HTML is. But this overlap is neither HTML nor XML. And most of the web is full of this. I'm told something like 99% of the web is like this. It's broken. Now, the W3C had an idea: they had this notion that all that we produce must be XML, so it must be well-formed. So they went on an XML bandwagon for some time. They wanted XHTML plus MathML plus SVG, all of that, with namespaces and all those verbose languages. And then they thought that people would adopt it. They waited a while; it didn't look like it was going to happen, and then they just gave it up. Because the whole world was happy with the overlap, and they were just living with the HTML. And if they had instead used TeX from the beginning, a TeX kind of syntax, then the whole world couldn't have done this, because TeX doesn't allow you to do this. So the whole world would have been happily parsed. Now, instead, what they did is: OK, let's build HTML5. HTML4 was the one that defined it, and they had three kinds of DTD: loose DTD, strict DTD, and all that. I suppose almost nothing on the web validates against any of those DTDs. So instead they said, we'll give up that ideal goal, and we'll have more tags. Now we'll go to HTML5. Now each one has its own interpretation, you know; each company has its own interpretation, and then we're going to have more different kinds of HTML5s coming up on the web. But one thing: when people say HTML5, each one thinks of their own part of it, you know. For many of the browser implementers and commercial applications, HTML5 means the implementation of a video tag, you know. But for us, let's say general publishing people, HTML5 means more tags. For example, you have a tag called article. It sounds familiar to somebody using a LaTeX class file, you know. But there are some tags missing, and there are some tags added. And it again allows for more confusion, although at least we have fewer tags now, so we can create less confusion, you know. Based on their own experience with OMML, Microsoft came up with lots of suggestions, interesting suggestions, to change MathML. And fortunately they were not successful, because they wanted things like paragraphs and indents put into MathML, you know. It would have made it unconvertible, you know. But HTML5 has MathML in it, though which version of MathML is not clear; MathML3, probably, now, because that's the one about to arrive. HTML5 is still on its way to being a Recommendation, so it takes a while to arrive. But MathML3 hasn't incorporated those changes, fortunately, so it's still a usable form, from which you can go from TeX to it in seamless ways. But you can see that there are some things which can cause real problems, you know. For example, let's say this is MathML, so I've not put in any namespaces. In HTML5, you don't need namespaces. You just add the tags as if they are ordinary tags. You just have a declaration on top in HTML5; it's a doctype, html, that's it, you know. You don't need much else. It's a very simple declaration. And then after that you can just throw in all the MathML and SVG tags; that's how you do it. But in this thing, you can see people can do this: they can put a bold tag around an mi, and then give you MathML and say this is MathML.
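A small illustration of the overlap being described, and of why the TeX-style equivalent fails early rather than late (a sketch, not from the slides):

  <!-- 'tag soup': the elements overlap, so there is no tree -->
  <b>bold <i>bold italic</b> just italic?</i>

  % The TeX-style input cannot express that overlap at all: groups must
  % nest, and a mismatched brace stops the run with an error instead of
  % the processor silently guessing a repair.
  \textbf{bold \textit{bold italic}} \textit{just italic}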
And, strictly speaking, these are not allowed, but I don't know how they're going to prevent that, because these are allowed tags and they don't have namespaces, so they'll have to have good validation to prevent it. And people can really mix tags, because now you have lots of tags: SVG tags, MathML tags. It's not clear whether you can mix them; can you put MathML inside SVG, can you put SVG inside MathML? So, assuming they're not allowed, people are still going to do it, you know, and each browser is going to do its own variation of the thing. The same tag in MathML should be this, instead of that beta. Oh, this is a mistake. So there's your attribute, and you describe it through that. Now, the other nightmare, which I think will not occur in TeX, because I tried it and it didn't allow me to run, you know. So they say you have a section tag; they have now introduced section tags. So you can have nestings. Now, they also have header elements called h1, h2. So let's say I do a section, which is basically a first-order section, and then put an h2 tag in it. What does it mean? Nobody knows. And you can put a section and then put h1 in it. So you can have quite a lot of variations, you know. So you can mean whatever you mean. You can say, I mean what I mean, you know, and you can mean anything by it. And you can put style on that, you have CSS selectors, so you can go deep and point at things and say, that's yellow, and you get a lot of fancy stuff. The basic reason, by the way, for providing that article tag is that you have lots of web pages where the whole page is not one article. You know, you have, let's say, newspapers. The whole thing is not one document; you have lots of small articles sitting within the newspaper, so you want to identify those pieces. That's how the article tag came about. But I think it's going to create problems. Now, instead of that, I'm proposing that you take TeX as the markup language, you know. But I told you, they are basically synonyms: like whether I speak in English or Tamil, it doesn't matter. It only matters because the context matters, you know, not the language. So basically, let's say I have article; then I can say begin article, end article. A lot of what I'm doing is probably what they're doing in LaTeX3, you know. Only that I will just be using LaTeX3 when it arrives. The only problem is I can't use h1 as a macro name. Maybe you can, I don't know, try to put h and then 1, but digits don't work in TeX macro names. So I'm going to use lowercase word-names also. The notation looks a bit like that. They do the same thing for litres, you know: they use capital L for the letter, because lowercase l looks like a 1. So you can have p and span. So basically, I'm looking at an HTML5 style file. I don't know whether I have to do a class file; you know, I have to learn how to do that. So maybe I'll wait for LaTeX3. Now, bold. The funny thing is bold. I just wrote this paper and wanted to say slash b and slash i. But \b and \i are already taken: \b is an accent and \i is the dotless i. I don't want to accent some words; I just want to highlight them. OK, so I have to do a new command. Now, there are other things like monospace, so I can put \m, okay. So these are things we can do so that the syntax is simpler to use. So we can travel both ways, you know: we can travel to PDF and to HTML5.
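A minimal sketch of what such an HTML5-flavoured set of LaTeX commands might look like; these macro names are my own illustration of the idea, not the speaker's actual style file:

  % \b and \i are already taken in LaTeX (\b is a bar-under accent,
  % \i the dotless i), so fresh names are needed:
  \newcommand{\hb}[1]{\textbf{#1}}   % bold, like <b>
  \newcommand{\hi}[1]{\textit{#1}}   % italic, like <i>
  \newcommand{\hm}[1]{\texttt{#1}}   % monospace, like <code>
  % an <article>-like environment:
  \newenvironment{harticle}{\par}{\par}
  % h1 is not a legal macro name (digits are not letters to TeX),
  % hence the suggestion of lowercase word-names:
  \newcommand{\hone}[1]{\section*{#1}}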
Now, for this, you don't need much. For math, I just leave TeX to do the job, you know. MathML is a redundant, verbose development. But maybe having MathML is easier for processors, so it can be generated afterwards from the TeX, you know. There are very good tools which convert TeX to MathML on the fly: MathJax, jsMath, there are lots of them. So you can do on-the-fly TeX conversion. Now, for SVG, probably this. As you can see, I'm just pushing everything into the attributes. So, for attributes, I'm just using square brackets. You can do this. And the only graphics package I saw which I could use easily was TikZ, but I suppose one could use MetaPost and things like that, which I suppose would be more difficult for me. But there are interesting new things in HTML5. A few TUG meetings ago, I talked in China about semantic TeX: basically, going from simple formatting to semantics, in TeX itself. It's easy to talk about such things, but very difficult to develop such systems. But HTML5 allows you to do a bit of markup with which you can tell the machine what a thing is, you know. So probably you can build that in also. So people who want to do it, they can do it. These are called microdata; they've got some microdata elements. So you can say, OK, I mean this word in that particular sense, you know. So maybe good for lawyers and people like that. So here are the references; the only reference of interest is the first one, HTML Tidy, converting things into proper format. And the third one is a very beautiful book. It's also done very nicely on the web, you know, and written very cleverly. It's "HTML5: Up and Running". It's available on Amazon and things like that. That one shows you the HTML5 world and how it was built and all that, you know, the background for it. Now, I also built another application, which is my own editor. Please enjoy the video. No, I'm sorry: I built a Firefox add-on. I do not use any other browser than Firefox. It's a matter of principle for me, because any browser which doesn't support Unicode and math is, for me, a no-no. So I don't use any other browser than Firefox. So you can click here; if you're using Firefox, you can click there. It takes a while to install. I'm going to restart Firefox; that is something they have not learned how to avoid. I checked it on my Android mobile; it works on that too. If you do that, in the Tools bar you get a menu. And then you click there, you get an editor. This is the editor I developed, based on HTML5. But the funny thing is, I actually rediscovered HTML5. I didn't know it had already been proposed. I had actually created such a thing five years back, and I started developing for that. And then somebody told me, this is the same as HTML5. And in fact it didn't work in the Firefox of the time, so I made it work. So you can go here, then insert... Here's a map. If you look at it, you can see the map. If you don't like it, you can kick it out. You can insert things; you can use it like a word processor. You can go here, insert a graphic, or you can insert a vector. So you can drag this thing here and put it there. Or you can do it like this, and you can control it. You can change the colour if you know how to change things in the code: you can say blue, you can say submit. So this is a path in SVG. You can drag and drop and do it here. It uses the five-fold zoom framework. This also has a TeX-like encoding called UTN 28. So you can type things in UTN 28 and bring them in as MathML.
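For readers unfamiliar with UTN #28 (the linear "UnicodeMath" format), a rough comparison for a simple fraction; the linear spelling follows the note's conventions as I recall them, so treat it as approximate:

  % LaTeX:
  \[ x = \frac{a+b}{c} \]
  % UTN #28 linear form, where the parentheses only group and vanish:
  %   x = (a+b)/c
  % corresponding presentation MathML:
  %   <math><mi>x</mi><mo>=</mo>
  %     <mfrac><mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow><mi>c</mi></mfrac>
  %   </math>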
So, UTN 28 is the serialized, linear version of math which the Unicode committee proposed. It's a bit like TeX, so you can think of it as a Unicode extension of TeX markup. But it's got certain things which are slightly different from TeX. Microsoft's math editing environment, the Word environment, supports that. So it's a Unicode thing, so I thought I'd support it here. But TeX people can use TeX here also. You can just input TeX or UTN 28; they're very similar. They can be input through this box and then brought in. So, this is the editor. This is under GPL version three. So you can have a look at it and even contribute further to it. Okay, I'm finished. Well, thank you very much. Surely there are questions about all of this. We're going to start over here this time. Boris has been getting the first question all morning. Sorry about this. I may have some answers, but that would be a whole talk. So, just to say that you've quite rightly picked out the problem of combining... well, just the math in SVG is not defined. And the reason for that is that, well, from the MathML side, and I think from the SVG side, we sort of ran out of energy. We were going to set up what we call a compound-document group. And it's still officially a W3C group, but it doesn't actually do anything. You know what I mean? It's one of those. And at almost the same time, HTML5, or rather Dave Raggett, came along and said, look, we want to put these things into HTML5. And we said, well, have you thought about the problems of what happens if you put some HTML inside some SVG and then some math inside that? And he said, oh, we don't care about that. We'll just allow people to put the code in. So that's a short summary of a very interesting area, which has not really been resolved, as you pointed out. I think many people sense this, but can we go to your slides, a couple of slides back, where you have this table between MathML and LaTeX math? I think it's two or three slides back. Back? Yeah, yeah, here. If you look at the entries one, two, three, four, you see that it's not actually a one-to-one correspondence, because when I just say something in LaTeX, it could be mi, mo, mn, mrow. So my problem is that I don't think it's possible to author MathML-compliant code in TeX, just because TeX is something different. So my question is, how do you convert from this non-conformant TeX syntax in the right column into the much stricter syntax of MathML in the left column? Well, the thing is that from the early days I've been close to the MathML committee; one of the members is a close friend of mine, Simone Pepin. So one of the things we always struggled with is: what do mi, mn and mo mean? So mo means an operator, mn means a number, and it's easy to know what a number is. You don't need tags for that, because all numbers are well known; there are few. Maybe some languages use other kinds of numerals, but Unicode has a unique way of identifying them. So you did the very smart thing for the whole thing? Yeah, that's already there: the TeX-to-MathML conversion from symbol markup is there. The only thing is, if there are content-level interpretations, then it's a problem. But I found that they generally use it for presentational forms. Take mtext: they could use mtext for something which is Roman, but they could as well use mo. You know, even if it's not strictly right, they don't care what tag you put it in; they're just looking at the output. And if that's right, it's OK. Are there any other questions?
OK, thank you again. Thank you.
|
The TeX syntax has been fairly successful at marking-up a variety of scientific and technical literature, making it an ideal authoring syntax. The brevity of the TeX syntax makes it difficult to create overlapping structures, which in the case of HTML has made life so difficult for XML purists. We discuss S-expressions, the TeX syntax and how it can help reduce the nightmare that the HTML5 markup is going to create. Apart from this we implement a new syntax for marking-up semantic information (microdata) in TeX.
|
10.5446/30902 (DOI)
|
My name is Radhakrishnan. I run this company, River Valley, with my brother, Rajagopal, and our business partner, Kaveh Bazargan. We do typesetting services; I mean, we offer typesetting services to publishers. As part of that, we use TeX4ht on a daily basis. It is an integral part of our software infrastructure. So I'll be sharing some of my experiences with TeX4ht. My treatment of the talk is from a text-processing point of view, not from an author point of view. Because we are always using TeX4ht in our daily production, our approach is how to create the document in a better way using TeX4ht. So it's always from a production point of view. TeX4ht was written by Eitan Gurari, who was an associate professor of computer science at Ohio State University, and who passed away last year, sadly. That's a big loss for the TeX community in general, and for those people who are using TeX4ht in particular, especially people like us, because TeX4ht is almost a lifeline for us. After the death of Eitan, the project homepage has moved to puszcza.gnu.org.ua, and Karl Berry and myself are maintaining it. And we welcome developers around the world to participate and collaborate in the further development of TeX4ht. The original objective of TeX4ht was to convert LaTeX documents into HTML. But along with that, Eitan added a lot of work to it. It can be used for converting LaTeX documents into HTML and XML with MathML. And it can be used to convert LaTeX to OpenOffice documents. It can also be used for creating jsMath documents from LaTeX. And it can be used for certain other purposes also: if you write a little bit of configuration file, you can easily create MediaWiki-format files from LaTeX, and you can create MathJax documents. And Eitan, a little before his death, was working on converting LaTeX to braille, along with another person called Susan Jolly. It's almost there. And he was also working on voice rendering of LaTeX documents. I don't know whether he completed that part, but he did a lot of work on it; his last presentation was about that. And we, as a business, use TeX4ht to down-convert some very complex LaTeX documents: if you have author macros, or custom macros, and you want to convert all those macros into standard LaTeX commands, you can use TeX4ht. Of course, you need a lot of work for that, but it is possible. So, coming to the problem of converting LaTeX documents into various other markup formats, you have different ways of approaching the problem. You can write a parser that parses the LaTeX document and converts the LaTeX document into XML or another markup. Or you can typeset the document with LaTeX, and in the output DVI you can inject or sprinkle the XML code, which you can post-process and extract from the DVI. Or you can create a variant markup of LaTeX, which you can later post-process with XSLT to create XML and LaTeX. So these are some of the different approaches which can be employed to translate a LaTeX document into different formats. Each one of these approaches has its own advantages as well as disadvantages. For instance, I can tell you some examples of that.
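As a tiny illustration of the second approach just listed, injecting the target markup into the DVI, this is roughly what happens under the hood; the exact \special prefix shown is an assumption modelled on TeX4ht's conventions, not a quote from its sources:

  % park target markup inside the DVI as \special records,
  % to be fished out later by the post-processor:
  \special{t4ht=<section id="sec1">}
  The section text is typeset as usual.
  \special{t4ht=</section>}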
The first approach, parse the document and convert it straightforwardly into XML, is used by LaTeX2HTML, LaTeXML, etc. The main problem with this approach is that you have to process the very many macros used by the authors. Or the authors might be using a lot of the more popular packages, like hyperref, and you have to process all the functions offered by those packages. That's really a tough job. But it is possible if you are bent on writing it, because LaTeX2HTML does a good job; and I have heard that Tralics does a good job; and, as was reported here earlier, LaTeXML is also doing a good job. The second choice is: typeset with LaTeX, and inject the elements into the DVI as specials; later, you extract those specials from the documents. I have heard that Hermes, which I haven't used, and TeX4ht have this approach. So TeX4ht creates a DVI file using LaTeX and post-processes the DVI to generate the target output markup. The third one is where you create a slightly different or idiomatic kind of markup, and from that, with XSLT, you create XML or LaTeX; TeXML and tbook are some examples of that. So here, TeX4ht adopts the second path. There is a package, tex4ht.sty, and also ancillary packages that can run with the various other packages loaded by authors, like hyperref, things like that. And also a set of fonts called hypertext fonts, .htf. And apart from that, Eitan wrote a binary post-processor called tex4ht, with the same name, written in C. It's available on all platforms. And thirdly, there is another ancillary program, t4ht, which further post-processes things to create the CSS and images. So, coming to the process, how TeX4ht acts: imagine we have a document, x.tex. Run LaTeX over that, and you get x.dvi. You first process that x.dvi with tex4ht, the binary, the post-processor, and you get the HTML output, x.html. It also creates two subsidiary files called x.lg, a script, and x.idv, which corresponds to the math formulas in the DVI. Finally, we run t4ht. t4ht will act on the x.lg, and that will become x.css, and the x.idv will be converted into PNG or GIF images. This is the simple processing; this is the pathway of TeX4ht. So: you have x.tex, you process it with LaTeX, you get x.dvi; then you apply the tex4ht binary and you get the HTML, and along with that you get two other subsidiary files, x.lg, the script, meant for preparing the CSS, and x.idv, the snippets of the DVI file with the math information, which will be converted into PNG files, which will finally form part of the HTML. So here is a small, typical source, a one-liner. This is what you do: you provide the option html and load the package tex4ht, and you process it with LaTeX. The process will be something like this: you run LaTeX three times, then you run tex4ht, then you run t4ht. So this is the process. And you will get an output of this type: you have one line, "this is a simple test", and you have another figure here as well. It's not as straightforward as you think, but from the user level you do not do anything at all. Also, since in this process you have all these different runs, LaTeX three times, tex4ht one time, and t4ht, TeX4ht provides different scripts for that: htlatex, mzlatex, those kinds of scripts are available. It's cross-platform; it runs on all platforms. So instead of running all these three different processes separately from the command line, you can run "htlatex file" and that will do the job, or "mzlatex file". mzlatex will create HTML with MathML; htlatex will create HTML with images. So how does that process go on, what are the internals of the process, and what is meant by this hypertext font? TeX4ht uses a different approach to post-processing DVI files. Inside the DVI files you have the font information: which font is used, and also the locations, what characters are used and the location of those characters in the particular font. So while extracting the text from the DVI, the tex4ht binary replaces these characters with other strings. For that replacement, it uses the fonts called hypertext fonts, .htf. An .htf file is nothing but an ASCII listing of the characters, one character each line, going from 0 up to 255, or whatever the character range of the font is. So here I am showing an example of one of the .htf files. This is cmmi, that's the Computer Modern math italic font. Each line corresponds to one character and its position: the 0 location is uppercase Gamma, the first location is Delta, it goes on like that, and the 13th location is lowercase gamma. And in the leftmost column, you can give whatever string you want to replace that particular character with. So if you want to replace gamma with something else, say xyz, you simply put xyz in this location. That means when you create the final output, wherever the gamma character appears in the DVI, xyz will take its place, so in the output you will get xyz. If you put ampersand, gamma, semicolon, that's the ISO entity, you will get &gamma; in the output. Any string can be used here as a replacement string for the character. So this is the cmmi font table, the actual font table, and the 13th character is gamma, here you can see that; and if you go to the .htf, here also the 13th character is gamma. So how does this font replacement take place? If you provide \gamma in your LaTeX source, it is always \gamma in the source, and in the DVI it will be gamma. And if you look into the DVI file, and if you convert that DVI file into a human-readable text format, dv2dt is one such binary, you will get a text file which you can search, and you can find that the gamma will be like this: fn17, or some number depending upon the font sequence, and 0d. 0d is the position where the gamma appears in that particular font; it comes at position 0d, that is, 13. So inside the DVI file, the gamma will be denoted as fn17, 0d. When the tex4ht binary acts on this DVI file and extracts the text, it replaces it with 0x03B3, which is what we have given in our .htf file; that's the Unicode code point of lowercase gamma. This is how TeX4ht makes use of hypertext fonts. In place of this Unicode code point, you can provide an ISO entity, or you can give something else, some extra information also, so that gamma will be replaced with mi, that is, the MathML markup for an ordinary math character, an identifier. So if you want to have mi before and after, you can use this location to provide that information also, and it will be translated into the output file. So now, the question of how to manipulate TeX4ht. By default, TeX4ht creates HTML files, XHTML, with math. If you want to create XML out of that, XML of your choice, like Elsevier XML, or the NLM XML, or some other XML, you have to write a configuration file with the extension .cfg, and then you pass it when you run the scripts like htlatex or mzlatex. So: mzlatex, the file name, then your configuration name. Your file will then be processed as per the directives provided in your configuration. For manipulating TeX4ht, Eitan has given four different kinds of commands. One is \Tg, another one is \HCode, another one is \Configure, and another one is \NewConfigure. So what does \Tg do? \Tg simply takes one argument, delimited by a less-than and a greater-than sign. Whatever you have there: \Tg<section> will give you an element <section>. This will be injected into the DVI as a special, which will become <section>. Then another command is \HCode. \HCode takes one argument, with the default delimiters, that is, an opening brace and a closing brace. Whatever you put between the opening and closing braces will be taken as the argument, and it will be injected into the DVI. So \HCode with a section id, that kind of thing, this code will be injected into the DVI file as-is. This way you can manipulate; you can write more and more things into your document, or your macros, or your package, or your classes, so that you can add or insert the requisite kind of code into the DVI. Then comes \Configure. \Configure is one of the most powerful things in TeX4ht, I would say. \Configure allows you to insert hooks before and after, or anywhere you like, in your output file. So whenever a macro appears, like \section, if you want to manipulate it, if you want to insert code before and after it, or before and after its argument, you can make use of \Configure. Eitan has defined configurations for most of the commands available in standard LaTeX, and for most of the popular packages. However, you can also redefine things, depending upon your requirements. So I am showing one example of how to configure the section macro. For configuring \section, TeX4ht provides four hooks: one before the start of the section, the second at the end of the section, the next one before the section title, and the last one after the section title. So \Configure{section} provides four hooks, which you can use this way. The example will be \Configure{section}, and you give whatever code you want to have there, and it will be taken care of properly. This is the code that appears before a section starts. This is the code that will appear at the end of the section. The third one: this will be before the section title, and this will be after the section title. So how will this look? I am just showing an example. This is a snippet from a LaTeX document: one section ends, another section argument is there, and then the section contents start. The output, based on the configuration, is just like this: the snippet starts there, the para ends there, the section stops, and another section starts, with an id, section two, with label two, and the section title; the section title ends, and the para starts. That is how you can configure \section. In this fashion you can configure any macro, including maths in a LaTeX document, and you can run TeX4ht with this configuration file. I don't say it is an easy job, but you can do it, and you can create any XML, completely automatically. Of course you do a lot of work before that, but then it is completely automatic; you do not even touch anything. Just run mzlatex, the file name, your configuration file, and you create a completely valid XML file with math, or whatever you want there, as shown in the sketch after the questions. With this I conclude; I have just listed the basic features of the TeX4ht package. If you have any questions, yes? Okay, the argument for the end of the section: is that the end of the previous section, or the end of this section once it finishes? TeX4ht internally uses the previous section. I mean, if there is a previous section, it's always embedded, because you don't have an end-of-section in LaTeX; you always have only \section. So the end of a section is always taken from the previous one. And if there is no following section at all, then the end of the section is marked by the end of the document, or whatever, and you have to configure that; there is an end-section hook there. Okay, good question. You have generated PNGs for math; do you generate them for MathML, or do you convert to MathML? You mean, is it possible to create MathML? Yes, that's completely possible. So in the example I just showed, you can generate images, but... Are you doing content MathML? What is it, MathML? Yes. Some want content, some don't. Yeah, so presentation or...? Presentational, yeah, presentation MathML. Thank you. I have spent a lot of time trying to understand the programming behind tex4ht and what its effects are, without really getting a grasp of it: why the command structure, the hooks that were made, are so hard, so intricate. Do you have any insight into what Eitan was after? Pardon? Do you have any insights into what Eitan was looking for? Maybe, I just don't... tex4ht isn't something that you would just call plainly. There are so many additional options, which are passed on to so many places, and that makes it hard to understand in some ways. Yeah. You have the script, and the first argument is the file. Then you can provide three sets of options. So: the script, the file name; the first set of options is passed on to the tex4ht package when you run LaTeX; then the second set of options is passed on to the tex4ht post-processor. Got it? You have post-processing of the DVI, the first post-processor. The last set of options is meant for t4ht, which creates the CSS and images. So, are we clear? Yeah. Please make them quick. So maybe that may be a design goal for the future. Yeah. But I think it's a great opportunity to streamline this process of options. I think we have all struggled to understand the syntax of all the different options, what part of the program is going to use them. It is, I think, a genuine area of difficulty, and one hopes that in future TeX4ht might look more standard in the way it takes its options. I know it's complicated, because it does complicated things, so I understand that; but nevertheless, the interface could be streamlined. Yeah. I'm only just a student of TeX4ht, and I have been using it for quite a long time, that is true. But I have collected around 200 options for TeX4ht which you can pass on to it, and I have nearly completed the documentation of that part. I have two questions. The first one: your slide on alternative solutions, I think there was a mention of Hermes. What did you mention about it? Is it dead, or...? I don't know. I am not aware of that. I have heard about it, but I'm not aware of its status; I haven't used it. And I should have mentioned TtH too, which is also hypertext from TeX; I had forgotten that. The second question was: how does TeX4ht handle packages like hyperref? Yeah, it has an ancillary package called hyperref.4ht. So whenever a newer version of hyperref is released, and that's the problem now, because Eitan is no more, somebody has to upgrade hyperref.4ht. hyperref.4ht takes care of hyperref. Each and every standard package, the most popular packages in LaTeX, has an equivalent in TeX4ht also. It's huge; maybe some 400, 500 packages are there. So for each package there is one here; the name will be the same, with the extension .4ht. It redefines some of the things in the original package. Suppose you have hyperref; then you have hyperref.4ht. If you have nameref, you have nameref.4ht. It adds more hooks, before and after or depending upon the requirements of the target HTML, and it redefines the whole thing. So you have to have this. The main problem we face, me and Karl are facing, is with the very notorious, no, not notorious, very useful, biblatex. biblatex gets released very often, and the latest version of biblatex doesn't work with TeX4ht. Many people are using TeX4ht and many people are using biblatex; both are useful. The support load is so much that we find it difficult to give the right kind of support, because you have to get into biblatex.4ht and you have to modify it; only then can people use it. Otherwise you have to go back to the old version of biblatex, and that's not advisable. So if some people come forward to support the packages of their choice, that would be very good for us. If you could make the questions very brief, that would be great for me. Is pdfTeX also usable, even? No. pdfTeX in its PDF mode cannot be used. But if you run pdfTeX or pdfLaTeX with the output mode equal to DVI, then it should work. So instead of that, what about XeTeX, how does it stand compared to TeX? I mean, TeX4ht has basic support for XeTeX, but if you use Unicode fonts, of course, no. By that time, Eitan had passed away; otherwise we would have got that too. So there is the pdfLaTeX feature where some versions bypass DVI and the PDF is directly available. Yeah, that's the problem. TeX4ht entirely relies on DVI; you need to have DVI. A lot of the internals of TeX4ht are actually connected to the LaTeX kernel, and I wonder whether there is any point in writing TeX4ht variants of the LaTeX kernel itself. I understand it would be a huge task. Yeah, there is a wish list, of course. But first we are struggling to come out with a package for HTML5; it's being asked for quite often, and we'll do it quite soon. And also, MathML 3 is out and we have to support that too. So these are the priorities at the moment, and then we will go towards LaTeX3. That's all. Thank you all.
|
There are several technologies to translate LaTeX sources into other markup formats like HTML, XML and MathML. TeX4ht assumes a premier position among them owing to the fact that it makes use of the TeX compiler for translation, which helps to assimilate any complex author macros used in the document. This talk provides an overview of how to configure TeX4ht to output custom markup needed by users.
|
10.5446/30906 (DOI)
|
Here I am in Kerala speaking about Devanagari and Sanskrit. Well, Sanskrit is very important in Kerala, but there is not a one-to-one relationship between Sanskrit and any one alphabet. Sanskrit, the classical language of India, the Latin and Greek of South Asia, was always written down in the local alphabet, and usually phonetically. So you will find medieval Sanskrit manuscripts from Orissa in the Oriya script, from Bengal in the Bengali script, and of course in Kerala you will find all the Sanskrit manuscripts in the university up here; they've got a fabulous library; almost all of them are written in the Malayalam script. So I'm talking about Devanagari, which is the script that's used in the northern area of India, in Delhi and Maharashtra. So I'm a little out of place, but still, it's perhaps the most widely used script for Sanskrit, so there's some justification. And many of the problems of typesetting Devanagari are the same for the other Indian languages. So I'm going to be talking about XeTeX, Jonathan Kew's adaptation of TeX. There are many interesting and powerful features in XeTeX, but the one that converted me, about two years ago, 18 months ago, was Unicode. I mean, in my field, and with the world and the web and everything, there's no way that I can not work in Unicode. For a long time, people in my field studying classical Indian languages have struggled with private or semi-private character sets. We've developed whole suites of little programs and utilities to convert from one to the other. It's just been an ongoing nightmare. And finally, there's Unicode. So one can, for example, find something Sanskrit on a website and cut and paste it into one's document. Well, okay, then you need an editor that's Unicode-aware. Then, if you want to, say, put it in your BibTeX data file and use it as part of a bibliography: let's say you want to cut and paste a bibliographical entry for a book from the Library of Congress and just use it, in JabRef, in your BibTeX bibliography, and just have it as part of your Sanskrit-and-English, or Sanskrit-and-German, document. So, for all the things we need to do today, it really has to be Unicode. So it has to be Unicode going in, and it also has to be Unicode coming out. I need to be able to produce PDF files that I can give to my colleagues, and they can cut and paste; they can select and cut text from my PDF, if necessary, that will make sense with their editing programs. So it's one thing to produce a PDF that looks right and prints right, but nowadays I also need one where you can cut and paste from it and still get Unicode out of it, at least for most of it. And if you just use CMR and 8-bit input, then you get very funny results when you try to cut and paste. So: Unicode going in and Unicode coming out. And so what I'm going to present now really is just a mash-up of my solutions and my understanding of how to proceed with all these wonderful tools that have all been created by other people. So there's one other wrinkle on this, and that is that in India, Sanskrit is pretty much synonymous with either Devanagari or the local script. A lot of people think of Sanskrit as being the same as the Devanagari script. Russian is Cyrillic; Cyrillic is Russian, although there's Ukrainian too, and many others. But still, somehow or other, there's a unity there.
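A minimal sketch of the XeLaTeX setup being described; the Devanagari font named here is an illustrative assumption, and any Unicode face with the right OpenType shaping would do (compile with xelatex):

  \documentclass{article}
  \usepackage{fontspec}
  % illustrative font choice:
  \newfontfamily\skt[Script=Devanagari]{Sanskrit 2003}
  \begin{document}
  Unicode going in: {\skt कृष्ण} can be pasted straight from a web page.
  Unicode coming out: the romanised kṛṣṇa, with its diacritics,
  survives cut-and-paste from the resulting PDF.
  \end{document}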
And the average Sanskrit knower, and there is such a thing, believe it or not, in India, somebody in Calcutta, will kind of get it. But people in Kerala will write Sanskrit in Malayalam script. But almost nobody gets it in India that in Europe and in America we write Sanskrit in the Roman script. Yet there's a 150-year-old tradition of doing so. So there's a typographical tradition with a real heritage now of doing Sanskrit in Roman script. And there are journals and books and things going way back, and it really exists. So that also is something that I use daily in my professional work, and that many of my colleagues do as well. So there's doing Sanskrit in Indian scripts, but there's also doing Sanskrit in Roman script. When you go out into town, you'll see many shops which will say something like Krishna Bazaar, let's say in Roman script. They may often say Krishna Bazaar in Malayalam script also. But Krishna Bazaar, and it'll be K-R-I-S-H-N-A. That's great and that works, and it's no problem and I have no argument. However, in scholarly work, at the World Congress of Orientalists in 1898, a scheme was worked out for representing Sanskrit in the Roman script, letter to letter, one to one, that has pretty much stayed standard ever since. So it's really great. I mean, this is something in Sanskrit and Indological studies that's very different from Arabic: there's a German way of transliterating Arabic and a French way, and they fight like anything and they won't agree. Tibetan is transliterated in about three different ways. Chinese, as you know, there are numerous different ways of representing, Wade-Giles and others. So for many languages there are sort of turf wars about how you represent them in Roman script, but not in Sanskrit. In Sanskrit, it was decided by academics at the academic level that we would write Sanskrit in Roman script in this way, and with almost no difference that has stayed to the present time. So that's great, but it isn't K-R-I-S-H-N-A. We use a series of diacritical marks that map one to one with the Sanskrit alphabet. And that scholarly transliteration now has an ISO number, 15919, I think. And that is, again, almost unknown to most people in India, who would write something phonetically using the phonetic rules of English. So you'd write Krushna: a Marathi speaker will pronounce Krishna with more of a u-sound at the beginning, so Krushna. So then you would typically see K-R-U-S-H-N-A if you're in Mumbai or Pune, whereas in other parts of India it may be written slightly differently according to local pronunciation. So all of those local uses of what in India really is the English script, I mean the Roman script, those don't apply to scholars, who use a series of diacritical marks: acute accent, tilde, dot underneath the letters, on a series of letters. So that's something that in my professional field we want to be able to do. And then there's hyphenation, and this is tricky. The hyphenation rules for Devanagari are fairly straightforward: more or less, hyphenate after any vowel, roughly speaking. I mean, there are some exceptions, but it's much simpler than English. This 150-year-old tradition of typesetting in Europe and America developed a quite different way of writing, where words were hyphenated etymologically. So scholars got very used to dividing words at the end of a line according to the meaning of a compound word or according to the grammar. So you wouldn't break halfway through a suffix; you'd break just before it, so it made grammatical sense.
Whereas in the tradition of Devanagari writing, you can more or less break anywhere and the reader will be comfortable with it. So in a way the rules for hyphenation are much more relaxed in Indian scripts than they are in the Roman script. This, the hyphenation of transliterated scholarly Sanskrit, is getting into rarefied air, but there are quite a lot of people who care about this and get angry, and reputations are made in learned journals in Germany, et cetera, et cetera. So for one group of people, it really matters. And then there are the tools. This collection of tools that I've mentioned, created within the TeX community, is now having spin-offs into XML and web presentation of text, which are really nice, and I want to end by talking about that very, very briefly. XeTeX can provide this Unicode input and output, and easy access to system fonts: you just have to put a font in the right directory and XeTeX can deal with it. You have to be careful about the correct name of the font, but once you know the name properly, then you can just call the font and use it, which is bliss; I mean, it's heavenly. XeTeX has this system of permitting string pre-processing, so you can write a file in this TECkit system, which lets you fiddle with things before they're fed to TeX, and I'll show you one particular use of that. And thanks to fontspec, especially, and polyglossia (wherever they are on my slide, somewhere, I can't find them anymore, doesn't matter), there are these two packages which are really superb, and everything I'm going to say really comes out of fontspec and polyglossia. They're very powerful, very sophisticated, very slick, very nice to use; a little learning curve there, but they're very, very good. Carol has just talked about how he was struggling a bit with Babel and the multilingual support within LuaTeX, and really for XeTeX we have got very smooth packages now that pretty much replace Babel and make it much easier to work with many languages at once. So I would like to give you an example here of the simplest case. Let's begin with the input here: \documentclass, simple, \begin{document}, \end{document}, and then we've got some sample Sanskrit that looks like this. I'm writing with a Unicode editor, and I'm assuming that we've solved all the problems about typing. I use Ubuntu; with Windows and others there are all sorts of different ways of having keyboard handlers and things, it's all a bit system-dependent, but it's really quite easy in Ubuntu to set up keyboards so you can just type in Devanagari if you want to. And also the second line there, the romanized transliteration: āsīd rājā nalo nāma. The Sanskrit says the same thing, the Devanagari says the same thing; this is the scholarly transliteration with macrons over the long vowels. So it's āsīd, not asid or anything else; rājā, not raja or anything else. It's just āsīd rājā nalo nāma, and actually the rhythm is a poetic meter which has a kind of bounce to it, so the length of the vowels really matters.
So let's say we type that. We're not using any package, I'm just saying \begin{document}, \end{document}, and I'm typing in Unicode and I'm going to give it to XeTeX, and XeTeX gives me this: "your Sanskrit input looks like this", so far so good, and then the Devanagari has disappeared; and "the romanized transliteration looks like this", and all my accented characters have disappeared. So XeTeX is typesetting my work, that's good, but it's not understanding Unicode. So the very first thing is to tell it to support Unicode, which is done through calling polyglossia, which itself calls fontspec, which calls Ross Moore's xunicode package. So there's a kind of Russian-dolls situation here, and polyglossia is calling all the other support systems. So now, same file, no change at all, and there's the output: "your romanized transliteration looks like this", and we've got our romanized accents, but we have no Devanagari. Why not? Well, the font here is Computer Modern, and Computer Modern just doesn't have Devanagari letters in it. So we're dealing here with a font that is not fully populated: it's Unicode and all, but it happens to have empty boxes at the places where the Devanagari glyphs would be. Quite a lot of Unicode fonts like this actually print a little empty square box, or a box with a cross in it, which is jolly helpful; it's much more helpful than just seeing white space, because then you really know you've got a font that has blanks at that position. Okay, so now we want to load a font that has Roman and has got Devanagari, and there's one font called Nakula, which is written by a friend of mine from Cambridge University, John Smith, who wrote a pair of fonts of slightly different design called Nakula and Sahadeva. Those of you who are versed in Sanskrit mythology will get the joke: they're a pair of twins, Nakula and Sahadeva. And so now we've got polyglossia supporting the Unicode, we've got Nakula, which is a fully populated font, and here's our output, which is very nice. It's almost nice, but can you spot the deliberate error, those of you who read Devanagari? I've put in an extra word there, this word kārtsnyam, which is a very good word for testing whether or not your font is doing its ligatures properly. In Roman script we're used to fi as a ligature: f plus i becomes a third character, a ligature character; fl, ffl and so forth. We've got a small set, and in older typesetting, ct; and in 18th-century printing an s at the beginning of a word would look like an f, but like an s at the end of a word. So all these context-sensitive changes that can happen in Roman fonts become a huge industry in Devanagari and most other Indian scripts. Every time two consonants come together without a vowel between them, the consonants merge together in a new ligature. So kārtsnyam is great because it's r, t, s, n, y, all consonants with no intervening vowel. The string from the r to the y is all going to be one ligature. And if you look here at the output of XeTeX, these little ticks at the bottom are places where the ligature hasn't happened. This little tick is called a virāma, a stop, a silence, which cancels the inherent vowel. The Sanskrit alphabet is a syllabary, actually, so it's ka, kha, ga, gha, ṅa; everything has an a with it, unless you cancel it, and this little sign cancels the vowel.
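A sketch of the setup reached at this point in the talk (not the speaker's actual file; it assumes Nakula is installed as a system font):

  \documentclass{article}
  \usepackage{polyglossia}  % calls fontspec, which calls xunicode
  \setmainfont{Nakula}      % fully populated: Roman plus Devanagari
  \begin{document}
  % Unicode Devanagari and romanized Sanskrit typed directly here
  \end{document}

With this much, all the glyphs appear, but, as just described, the conjunct ligatures do not; the fix comes next.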
So actually that is correct: if you read it out loud, that does say kārtsnyam, but it's really impossible; you couldn't print that in a book. It's technically correct, but it doesn't work for proper printing. And there's another one here: this d-r should become dra, should become a conjunct consonant. There's this d and r, which in the Sanskrit are written together without a space between them. I think in the time I've got I can't go into the question of spaces and non-spaces in these different alphabets, but typically you don't write spaces between most words in Devanagari. So what's happened here is that XeTeX is correctly reading the input, it's correctly using a Unicode font, Nakula, but it's not using the font's intelligence. The font has in it all the instructions for making the correct ligatures, but we haven't invoked them. So let's invoke them. Here, when we're setting the main font to Nakula, we are going to say this script is Devanagari, and then we get everything sewn together nicely, all the ligatures in place, everything very nice, and now we can experience ānanda, happiness. So finally, with a set of fairly simple commands (you use polyglossia, you use \setmainfont, you use Nakula or some other properly populated Unicode font, and you tell it that the script is Devanagari), you've really got almost everything you need. So let's now look at a slightly more advanced topic. Is that legible from the back? I'm sorry, I put a lot on the screen, but basically you start off in the same way: \documentclass{article}, \usepackage fontspec and polyglossia; I don't know why I put fontspec in, polyglossia calls it. So I'm now going to set two languages. I'm going to set English, and I should have put in brackets the instruction variant=british; that would give me proper hyphenation, so this will be American. And I've chosen Charis, Charis SIL is my font there, because I like it. And then I'm going to set another language, which I called Sanskrit, and I'm setting the new font family for Sanskrit to be Nakula, the Sanskrit font, and I've made the column narrow; don't worry about the \parindent and text-width settings, they are just so that we can get a very narrow column and force XeTeX to do a lot of hyphenation. So now the text in Sanskrit: \textsanskrit. This \newfontfamily command automatically generates a new command, which is \text plus whatever comes before "font"; so with \newfontfamily\germanfont you would automatically be given \textgerman. So now we've automatically got a \textsanskrit command, and it says manum ekāgram āsīnam abhigamya maharṣayaḥ: the great sages came to see Manu, who was sitting concentrated, his mind one-pointed, ekāgra. What does this give us?
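The two-language setup just described, sketched under the assumption that polyglossia accepts sanskrit as a language name (as current versions do) and that Charis SIL and Nakula are installed:

  \usepackage{polyglossia}
  \setdefaultlanguage[variant=british]{english}
  \setotherlanguage{sanskrit}
  \setmainfont{Charis SIL}
  \newfontfamily\sanskritfont{Nakula}  % picked up by \textsanskrit
  ...
  \textsanskrit{manum ekāgram āsīnam abhigamya maharṣayaḥ}

Defining \sanskritfont is what makes the generated \textsanskrit command switch both the font and the Sanskrit hyphenation patterns.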
So we need to just look at the source file again. First of all, everything is romanized, and we've got some text that is tagged as being Sanskrit and some that is not, so we've got Sanskrit hyphenation and English hyphenation. This was the argument of the \textsanskrit, and ekāgram āsīnam abhigamya: that's very nice, that's just the way we want it; manum, maharṣayaḥ, it works, it's okay. That means, to the average German, French, Swiss, English, American reader, that you can typeset a book with lots of Sanskrit words in it in Roman script, and the line breaks will look nice; they won't offend. If you don't have this proper kind of hyphenation, you get lots and lots of examples in Sanskrit of Don Knuth's famous example from The TeXbook, "therapists" hyphenated as "the-rapists", and so on. So if we don't tag this string as being Sanskrit, and allow it to hyphenate as if it were English, we get ekāgram, that's okay; āsīnam, that's not too bad; abhigamya is completely impossible. And this is the kind of horrific hyphenation that people in my professional field see quite a lot in journals and books, because most publishers can't hyphenate automatically for romanized Sanskrit. This is a hyphenation in the middle of a letter, because in Sanskrit bh is a single letter of the alphabet, an aspirated letter, so it would be like, I don't know, there's no exact equivalent in English, but like breaking in the middle of sh. Sorry? Yes, exactly, like in the middle of sh. And ma-harṣa again, it's a bit nasty: you'd get a little kind of hiccup, and then you could read it, but it would be slightly problematic. So, okay, the hyphenation is there for romanized Sanskrit. What about hyphenation for Sanskrit in the Devanagari script? Well, we can set up a new font family; we're using Nakula still, and here we've got that the script is Devanagari, and we're saying we're using a mapping. Now this is new: this is a TEC file, part of this TECkit process for manipulating strings, and it's a little file, not complicated, I can show it to you if I've got time, which basically maps romanized Sanskrit. It's more or less a table (there's a bit more to it, but a table) of letters and their equivalents, Roman on the left and Devanagari on the right, roughly speaking. So now what I've set up here is that my Sanskrit font family is going to be used by \textsanskrit as it was before, and I'm going to write in Roman script as before, so all of this at the bottom there is the same, but I'm going to get my output (oops, like that, sorry). Yes, I did a second screen here just to tell you that all I'd done was to change that one \newfontfamily line; that's the only change in the file. And there's the output now, Sanskrit hyphenation on the left: manum ekāgram āsīnam abhigamya maharṣayaḥ. It works rather well; it's well hyphenated. The hyphenation tables for Sanskrit that have been done by Yves Codet and others and Jonathan have actually got tables in them for Sanskrit in many different scripts, because it's Unicode: they can make different hyphenation rules for the different parts of the Unicode plane. So they've got correct hyphenation for Sanskrit in Bengali, Sanskrit in Gujarati, Sanskrit in Devanagari, and also Sanskrit in Roman. It's great: a single hyphenation file, and that's why you can have Devanagari hyphenated correctly and Roman hyphenated differently, but also correctly. For someone in my field, this is really heaven; you wouldn't expect a piece of software to do this, and it's really, really great. And on the right-hand side is the English hyphenation, which is pretty much wrong, as before.
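The one-line change being described is just the addition of Script and Mapping options to the font family; a sketch, assuming the TECkit mapping is installed under the name used in the talk:

  % Roman input, Devanagari output, Devanagari hyphenation:
  \newfontfamily\sanskritfont[Script=Devanagari,Mapping=RomDev]{Nakula}

XeTeX applies the mapping to the input stream before typesetting, so the body of the document stays in romanized Sanskrit.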
So, just very briefly to scoot forward: actually at this point I wanted to do a little live demonstration. I was talking with some colleagues yesterday, and we all decided that live demonstrations are a very bad idea; so this is why, because they don't work, and I can't get the windows. Okay, there's a window, and then can I get the output? The output is in another window. Oh well, we'll just have to do our best. Dammit, now I can't find anything. Maybe I'll just leave the live demonstration. I can't find my input window. It's there somewhere, but it's gone; it's behind something. Oh well, sorry about that. What I was going to demonstrate is basically that you can have your input in Roman, just as you've been seeing, and get your output in Roman; or you can just say Mapping=RomDev, and then you get this output. So you don't change anything that's between \begin{document} and \end{document}; it stays the same, and your input may be Roman, and you can choose to have your output in Roman or in Devanagari. It's amazing, really, so it's a very nice thing to demonstrate. Not only that: as I mentioned, in my professional community there have been quite a number of ad hoc kludges that have built up over the years, little private encoding systems and so on. So you can actually rescue your legacy documents. For example, Frans Velthuis created an input system for Sanskrit that went together with his beautiful Metafont for Devanagari, and many people doing Sanskrit and Hindi in TeX have been using the Velthuis system, called devnag, which is hosted here on the server here. A lot of us have done books and articles and so on; there's a lot of stuff out there that uses that. And his input was not Unicode, because when he invented it, that didn't exist yet. He made long vowels by typing aa, and a letter with a dot under it would be just a dot with that letter. So he invented his own Roman input system, which is very easy to type and very easy to understand; it's just private, his sui generis system. But we've got a translation: we've got a TECkit table now, a TEC table, that translates Velthuis encoding into Devanagari. So this exact same output (this is the beginning of the Yoga Sutras with the commentary of King Bhoja), this exact same output you can have in Velthuis encoding or in Unicode encoding, and just by saying Mapping=RomDev or Mapping=velthuis, you get the right output. Again, it's kind of a miracle for people who work in this field. I'm sorry I couldn't show it to you more clearly.
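A sketch of the legacy-rescue idea (the mapping name here is hypothetical, since installations differ):

  % Velthuis-style ASCII input:  aasiid raajaa nalo naama
  % Unicode IAST input:          āsīd rājā nalo nāma
  % Either can drive the same Devanagari output:
  \newfontfamily\sanskritfont[Script=Devanagari,Mapping=velthuis]{Nakula}

Only the Mapping option changes; the body of the legacy document is left untouched.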
And finally, I've been doing, quite separately, some corpus work on Sanskrit, building a system that I call SARIT. It's a system out in the public, sarit.indology.info, where we're trying to build a corpus of Sanskrit texts the right way. There are a few websites out there that are hosting large amounts of Sanskrit literature, mostly in transliteration, but they're all just chucking either ASCII or Unicode at everybody without proper documentation: you don't often know where it came from or what encoding it may use, although some of that has been done better in some places. But clearly the proper way to build a corpus of literature is to use XML and the Text Encoding Initiative guidelines. So that's what I've started with this project, and it's still very much a pilot project, but we've got actually a reasonable amount of text already. I mean, it's very small compared with our goals; we're aiming at a very large corpus. But this is just one text, and here's all of this romanized stuff. Now, I presented all of this to a group of colleagues in Delhi at the Rashtriya Sanskrit Sansthan, the national Sanskrit university, and they were quite excited about it. They wanted to collaborate, and there's a possibility of funding and rooms full of people sweating over keyboards and all of this; quite an interesting possible collaboration. But they can't see this. They can't work in romanized Sanskrit; it's got to be Devanagari for them. So my colleague Andrew Ollett at Columbia has just recently done this. This is a Prakrit text. Prakrit is a derivative: just as Dante's Italian is to Cicero's Latin, so Prakrit is to Sanskrit. It's a derivative language, a medieval language, halfway between Sanskrit and Hindi, say, or Sanskrit and Marathi. So this is an interesting Prakrit text, written by a Muslim author from Gujarat. And here we have it, and Andrew is using it with his students. He's made a very nice website. Can I do something so you can actually see it all? Sorry, it's all spilling off the right there. So for example, here's a verse in Devanagari. And if you toggle the commentary, there's a later commentary; you can toggle a second commentary, an avacūrikā; and you can toggle the translations. So this is all very nice. But what's even better is up at the top here: you can press a button and get the whole thing in Roman transliteration. Well, you could; I've never seen this not work before in my whole life. So let's try another one. Here's one that I prepared earlier. And here it all is in Roman script. Again, toggling, and it's all Roman. There are some extra things here that are really nice, sort of mouth-watering, a bit like the tooltips that we saw yesterday. Stuff in blue is where different manuscripts have different readings: if you hover, you see that manuscript P has a gap at that place, and manuscript J has a different reading. So all of the different manuscript readings you can see just by hovering over the blue text. But that's just eye candy. What I really wanted to show you here is this instant conversion between Devanagari and transliteration. But it's not working still; at least you saw it work once. All this work that Andrew is doing with these online transliteration systems comes from the use of a toolkit that IBM has put out for working with Unicode, the so-called ICU toolkit; I can't remember now what that stands for, somebody here I'm sure knows [International Components for Unicode]. There's a website and support for it and a lot of free downloadable stuff. And Andrew has used the TECkit materials that were developed for transliterating Sanskrit within the XeTeX world, and with very little work indeed fed those to this ICU library, so that it can do on-the-fly transliteration of selected strings inside XML documents. So the technology that started within the XeTeX community is now getting out into enabling these things to be represented on the web very smoothly, in either Devanagari or romanization, according to the different readerships. And this, I think, is going to be a game-changer in my negotiations with the group in Delhi, because if I can show them my SARIT text corpus and say, if you just press this button, you get it all in Devanagari, then suddenly it's going to be something they really want. So there we are. The good guys are listed there: these are just some of the people who've worked so hard to make all of these tools possible, and there are some people who are not mentioned there as well.
I just listed the most prominent ones. Thank you very much to all the people on this list. And thank you for staying with me on this last talk when you're all dying to get away. Thank you very much. APPLAUSE Dominik, that was, as always, a gripping talk, even though I didn't understand half of it. I felt the same about yours. Yeah, well, there was more content. Just a moment. Yes. Google has something coming for data entry, yes, so you can type very easily. You're talking about the local user typing Krishna: if they type that, Krishna, you do get the right one. It's good, isn't it? Yes, I'm aware of it and it's nice, but it's under a kind of license, okay? But if you use it as an API... Okay, so that's something that you can incorporate into a website. But this is different in the sense that it uses your normalized version. It's an API kind of thing. Yes, yes. This is different in the sense that it's not for the common day-to-day user. Yes, that's right; it's something that exists within the scholarly community. That's right. So, are you saying it doesn't? No, I'm asking. I don't know. I think not. No, I think it doesn't; it's a completely different suite of programs. You can write tables for anything-to-anything with ICU, normally anything to Unicode, so I think you could write tables to go from anything you wanted. But you wouldn't also want to call up the Google API, because the ICU stuff is doing some of the same work, just doing it somewhat differently. The Google material is for the keying in: it's actually when you type, it turns what you're typing into an Indian script from phonetics. Yes. So, this is not designed for saying, here's a four-megabyte file, please give it to me in Roman or give it to me in Devanagari or give it to me in Malayalam. That's what the IBM ICU libraries do: bulk transliteration and bulk conversion of large XML files. You want it on your server. But for some people it could really solve the problem of how to type, if you just wanted to type a little and you didn't really need to install something to make it a smooth process on your own machine. You want a tattoo, or your daughter wants a tattoo, so you want, you know, Om Namah Shivaya or something; then the Google API might be jolly helpful, for people who would like just a solution for quickly typing something and getting it in an Indian script. Right. Thank you very much. Thank you very much.
|
The XeTeX extended TeX engine provides a wealth of sophisticated features, and meets many of the long-felt needs of people working with multilingual or multi-script texts. I shall describe the use of XeLaTeX for typesetting Sanskrit, with both Roman- and Devanagari-script inputs, and Roman- and Devanagari-script outputs. I shall describe the complexities of getting differently hyphenated Sanskrit in different scripts. Finally, I shall offer an example of a free IBM XML tool that uses a XeLaTeX TEC file to auto-convert Sanskrit between Roman and Devanagari for screen display via HTML. If all this sounds a bit messy, it is. But the results are sometimes quite amazing, and open up exciting possibilities for the beautiful printing of Indian texts.
|
10.5446/30908 (DOI)
|
This is going to be an anti-climax, folks, I'm afraid. I found that it's very easy to write a title a few months in advance and forget about it, and then a few days beforehand realize that you have to talk about it. Now, I have an excuse: we'd like to catch up with time, so instead of 30 minutes I'm going to talk for about five, so everyone's happy. This is me talking and winging it and making it up as I go along, but I do remember that there was a reason I put this title in. Going back: I'm just old enough to remember troff. In the old days, basically, we used a typewriter, and there was nothing about the structure of a document; it was always the look of a document. So with a typewriter, you would type twice and you get bold, and you underline and you get underline, and that was it. Troff was the first thing I remember where you had a markup. So, vaguely I remember that a dot would give you things: .sp 2 is, I think, a space. I can't remember the rest, but a break, I guess, is it .br? Yeah. Was that \fB for font bold? Okay, and then \fR for font Roman? Yeah. Well, this was fine, this was state of the art: instead of a typewriter, you could just put little codes in, and it was great. And it looked right, and that was the end of your job; the job was done. Very soon after, TeX came out. The same kind of thing: it was markup, so you wrote your text, but you put in markup around it to say how you wanted it to look. Again, at that time, I think no one was thinking about structure, at least I wasn't; it was more like a kind of super-typewriter. The beauty was, for me, that it did maths, and that was the selling point. So TeX was great, and I used it a lot and loved it and continue to love it.
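A hedged sketch, in plain TeX terms, of the look-only markup being recalled here (the troff commands mentioned above did the same job with dot-commands like .sp 2 and font switches like \fB):

  % visual markup: records only how the text should look
  \vskip 2\baselineskip
  {\bf A bold line that happens to be a heading}
  \par
  Body text with {\it visual} emphasis, and it did maths: $y = x^2$.

Nothing in this records that the bold line is a heading; that gap is what the rest of this talk is about.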
After that, we got word processors, which were WYSIWYG, so you could see in real time what you were getting. And with the early word processors you could look at the code too; this is WordPerfect. So you could look at the code, or whatever they called it, the markup, and you could edit it: you could go to the bottom and change the paragraph style, remove a return, and you could see the output. So you still had the idea that there's a kind of markup: there's the text and something telling you what the text should look like. Again, it was still about how it looks. Very soon after, the Macintosh came out, and everyone was excited, including myself; it was just magic. You simply dragged over this thing, which highlighted, and you could change the font size, the look, what have you. And this is MacWrite on the 9-inch screen of the Apple Mac Plus. Again, it did the job, it was fine, it was a great replacement for the typewriter. The dreaded Word came out too; this is an early version of Word, and it took the elegant MacWrite and put lots of bells and whistles on it, and it's continued ever since. So it gives you more than you need. Again, you can do what you like; you can make it look the way you want. Yeah, well, compared to a typewriter, Microsoft Word is a good piece of software. Now, MacWrite didn't show you the codes, and Microsoft Word didn't show you the codes. So right now you can't see: it's a closed format, you can't see what's inside it. And this was the beginning of the WYSIWYG era, and slowly, of course, you got PageMaker and QuarkXPress, where you could do exactly what you wanted. And really, for the last, what, 20 years, we've been having a big party with WYSIWYG, because we can do what we like, and it wasn't a problem. HTML came out, and again, it showed us what we wanted, but of course the codes were visible. So all this time, really, no one has been screaming for what the codes are, what is behind what you see. But recently, people have been realizing that it's not good enough to have just the form, what it looks like. You need to have the syntax: you need to know whether something is bold, whether it's a heading, whether it's a vector, whether it's an emphasis. One of the nice things about HTML, not the only favor but the major favor HTML has done us, is that it showed the codes again. Every Tom, Dick and Harry knows what HTML looks like, and they know that it's needed. So we don't have to fight to say, well, look at TeX, you need this code, because it's another kind of markup. And of course XML, which in a sense came after HTML, although it sort of follows from SGML: XML looks like HTML, but it has the codes. Now people want a document to be seen the way you want, on your iPhone, on your iPad; you want it to be read aloud for the blind. And for that, it's no good saying "bold italic"; you need to encode it semantically. So if we accept that, the question is: how do we code semantically? Well, XML is there, and we can use XML. The problem is that XML is not readable, and it's not writable, right? If you look at, say, y equals x squared, it's about 10 lines in XML. For a good reason, but it's not designed to be readable. And I think if we now ask what is the most convenient way of writing something while holding the meaning semantically, something that can be read and edited, really the best thing is still TeX: the original TeX that was written by Knuth is the best way. It is more or less readable, it's editable, and it can be turned into XML automatically. So we are back to this 30-year-old technology, which is still not superseded. I think that's good news for us TeXies. I'm afraid my time is up, so forgive me for this anti-climax, but this is my patch and I can do what I like, I guess. Thank you. I also have no questions, because they'll take longer than the presentation.
|
TeX is around 30 years old, and was conceived and written before the advent of laser printers, personal computers, PostScript and of course the Internet. At that time the idea of WYSIWYG document editing was just a futuristic idea. When people jumped on the WYSIWYG bandwagon, it was predicted that old technologies such as TeX which used mark-up for text would disappear in time. The advent of the Internet brought mark-up to the attention of the public. Somehow it was acceptable again. The recent move to the semantic web and HTML5 has brought renewed attention to mark-up and the need for clear structure in text. I suggest that we have gone full circle and now realise that mark-up is everything. And TeX, which has the most readable and minimalist mark-up might just be the best tool today for structured documentation.
|
10.5446/30909 (DOI)
|
Well, hopefully I will try to have some more content. As you may have noticed, I borrowed the title from Kaveh, and I will try to answer it, actually. So it's a very wide topic, and previous speakers, Kaveh and Karl and many others, have already come at this. Why TeX math? That has also been talked about: Ross, Rishi and others convinced us that TeX math is the selling point of our tools, and the compact logical expression notation TeX offered is now used throughout the world and is liked by any mathematician you may think of. A formula is worth hundreds of words, and TeX math wins here. But why TeX math search? Well, nobody has mentioned it here, yet it's badly needed. Searching is a crucial part of accessibility: when you have lots and lots of PDFs and you want to access them, the most natural and usual way is to search them. And this is no different for mathematics; why should it be? So my honest opinion is that TeX math search is really a must when working with lots of data and working with a digital library of mathematics, which is actually the project I am now involved in. Within the frame of the project whose logo is below, we have thought a little more about it and tried to answer the question of the title. Well, one possible answer is because of the G, as in Google and globalization, and it's sexy, right? So we have about 100 million pages of peer-reviewed mathematics around, and the ambition is to search it. It fits on the disk of your notebook, and why on earth shouldn't we be able to search it, even with the formulae in it? Those are the tasks we were thinking about, and the gate to this knowledge is search: a digital library without math search is an oxymoron. For pure text and keyword-based search it's a solved problem; you use Google every day, and it's supported by review databases like Zentralblatt, so one can say it's a success. However, have you tried to search for a formula on Google or in another system? There are not many systems that allow it, so I would call that a failure so far. So what can we do about it? Support for that is needed. I come up with some examples. Adding the ability to search for formulae in a textual search may actually disambiguate and narrow the search, so that you actually find what you are looking for; sometimes the only distinguishing feature you can use is the formula. So for instance, it's possible that Kaveh might be looking for some data for his talk in the whole River Valley database of all the articles they have ever typeset, which might be, by my rough estimate, a five-digit number; I have asked, and they said they have around 40 or 50,000 articles, and one could search all of that within microseconds, even with formulae. I have other real examples: a researcher might be working in some area of differential equations, and the types of the equations might be differentiated by the formula, so putting a formula into the search is crucial for her or him. And the language of mathematics is actually the same, with some exceptions, across all the natural languages, so with a global search, let's say on Google, one could use formulae to search all over the place; the differences in notation are there, but they can be coped with. So, the topic of this conference is ebooks. Imagine you're on your iPad with an ebook, which you usually use for reference purposes only: imagine that you can search for the formula you remember and want to look up.
So the take-home message of this talk is: yes, you actually can. We have tried it, we have done it, and I will show you; and if time permits, I will then explain what we did and how it was done. So, giving the Czech Google a query like "Einstein", Google actually offers the answer Pizza Einstein, which is usually not what you are after. So, what does our system actually do? When you put the query, Einstein and his formula, into your search box and you press the button, which means you search for it, it does a lot of work: it converts it to MathML, then it converts it into canonicalized MathML, some canonical representation of it, and then here you are, you have the results, with the search terms highlighted. Here you can then click through and go to see the real papers, and you end up at Cornell, at the arXiv. So, how is it done? That is the topic for the rest of the talk, so I have 15 minutes to describe how it is done and how it works. When we tackled this problem, we started with a review of the available solutions. We spoke to Google Scholar people, because we wanted to search at least the metadata of the articles on Google Scholar, and since mathematics often appears even in the titles of papers, Google Scholar does a bad job with it. There were other attempts by some publishers; for instance, Springer offers LaTeXSearch, but it works on the literal text of the TeX math cut out of their sources, and as such it is doomed to fail, because one never knows what an author has typed when writing the formula: every formula has a zillion ways it can be entered and typed. So this is not the way to go. There are many challenges to face: there is the heterogeneity of math representation and notation, the handling of semantics, and there is no established and accepted user interface and query language for math so far. And there were several attempts to solve the problem. There was a seven-digit NSF grant to Design Science some five years ago; it was Lucene-based, essentially indexing n-grams of Presentation MathML, but the site is gone now, and even though it was a pioneering effort in converting to MathML, the results are not to be seen. There is another project, EgoMath, by our colleagues: a web search engine capable of handling math. They took another step, bringing the ideas of formula augmentation, alpha-equivalence algorithms and relevance calculation to the problem. I have already spoken about LaTeXSearch; ActiveMath and DLMF, the Digital Library of Mathematical Functions, are projects where the math can be searched, but their domain is limited: they have the authoring part under control, so they can mark up the formulae to suit their indexing engines, but this cannot be used in a general global search. People in Bremen are using a semantic approach; it is not a real full-text index, and the problem is that you can hardly get a reasonable amount of data disambiguated to this level of semantic markup. So we decided to build our own, taking lessons from the previous unsuccessful attempts, and we have built the MIaS engine, Math Indexer and Searcher, which is a math-aware, full-text-based search engine. It adds mathematical equation querying to the basic textual one: in addition to standard terms you can have math in the search query, in standard TeX, or AMSLaTeX to be precise, notation; ideally it should compile directly with AMSLaTeX, and then you get what you are in for.
We had to cope with the duality of TeX and MathML formats. Math is used by people, and here TeX wins: on the authoring side and on the querying side, TeX is the preferred way of writing and working. However, on the software side, the application, program and algorithm side, MathML clearly wins: all DTP and computer algebra systems allow you to copy and paste a formula and integrate it in another, put it into a browser, etc., and some DTP systems have MathML and XML at their heart. So inevitably a system for searching math has to cope with this duality. Another idea one should keep in mind in retrieval is that the indexing terms and the search terms do not necessarily have to be strictly the same. In text retrieval, word stems are usually indexed instead of word forms (in Czech an average word has 30 different word forms), so that may help to squeeze the index. And there are examples like the one in The TeXbook, where Don typesets an example of concept indexing: there is a song, but the author of that song is not named on the page, yet he appears in the index. So these two are not necessarily strictly the same. The same idea applies to math search, so we have designed the search such that, for a given formula, we generate a set of substitutes, more and more general and similar representations of the formula, which are put into the index and searched. That's the top-level architecture of the system. The document is taken by the handler; the text is processed as usual, and the math gets special care: a special representation of the math is put into the index so that the searcher may then use it. The input query gets similar processing: the contents are converted to canonical MathML and then searched. More precisely, during the indexing, so that a plus b and b plus a do not end up as two different index terms, all the indexed contents are ordered, so that only a plus b is in the index; the same happens on the query side. Then it's tokenized so that the representation of the formulae is more general: the variables are numbered, and terms are based on that, to match as many similar formulae as possible with as small an index as possible. A rather big machinery takes care of things before the terms appear in the index. Let me show it on an example: take x squared plus y plus y. Ordering doesn't change it, as x is less than y. Then the algorithm for unification takes place: several subformulae are generated and they are unified, and unified terms are created. So finally you end up with 16 terms that are put into the index, with different weights according to their similarity to the original term. During searching, similar things happen: for the query, four terms are generated, two of them match the index, and for these two the ranking scores are added to the documents. Here you can see more details of the weighting, that is, how similar the index terms are to the formula: this tree has 16 nodes, and every leaf has its weight, which is stored in the index too.
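A hedged sketch of the idea on a smaller input (illustrative only; the exact MIaS token syntax and weights differ):

  input formula:       $x^{2}+y$
  subformulae:         $x^{2}+y$,  $x^{2}$,  $x$,  $y$
  variables unified:   $id_1^{2}+id_2$,  $id_1^{2}$,  $id_1$
  constants unified:   $id_1^{const}+id_2$,  ...

Each generated term goes into the index with a decreasing weight, so an exact match scores higher than a match reached only through the generalized forms.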
It's built upon Lucene, the latest Lucene, as a plugin, in Java; it's state-of-the-art digital library stuff, used in DSpace and lots of other systems. So nothing new here, and it can be directly used in a real system. For evaluation we have used the arXiv converted into XML: it's about half a million documents, which were converted to XML plus MathML by our Bremen colleague, Professor Kohlhase, and others. At the end of the day we had 158 million input formulae, and with the multiplication we have three billion terms in the index. We have managed to squeeze it into 47 gigabytes, so it's still manageable on a modest computer. For details you can look at my recent papers; the homepage of the project is at this URL. It scales well: the response time is still under half a second. The indexing takes several hours; we went down from 23 hours to nine with threading, and as it scales linearly, one can use a parallel Lucene implementation when you see that your query response time is too high. So it can easily be map-reduced and used by Google without problems. So here is the URL for the demo interface: you can go there and play with it as I have shown, and search using this three-billion-term index. And the ideas I have already spoken about: we convert to Presentation MathML with Tralics, but LaTeXML or other converters might be used; we use the UMCL library for canonicalization of the MathML; we generate snippets containing the formulae; and we show the documents on the web. And it's already integrated in the EuDML system; that demo is also already up and running, at demo.eudml.eu, if you want to give it a try. I have brought some printed overviews of the project, and some of you have already taken one. I also have here some papers if you want to go into the details. So, to sum up: for the content you should have the data, and our people in Birmingham are doing a PDF-to-LaTeX converter, so from this we will have MathML for indexing even from PDFs where we do not have access to the sources. We have also developed a package where the TeX source of a formula is put as a second layer into the PDF; we suggest publishers use it, so that it would be easier to index the math in the future. Well, there are plenty of other things on our to-do list. It's worth mentioning that the approach can be used even for Content MathML: in case your data has Content MathML, those trees might be inserted in the index as well and used in the searches. So it depends on your preprocessing, and the technology framework allows for it. And, well, we may think of something like Sage to canonicalize the MathML before indexing, but that's another story. So we think we have done some steps towards a math search engine for the community of mathematicians, and perhaps also for users of other types of content. It's already employed in EuDML. I have papers with details, published in SpringerLink and the ACM DL in recent DocEng proceedings; those who want references, I can give some. I haven't done this work alone, of course: most of the hard work was done by my students, Martin Líška and Michal Růžička. It was done within the funding of the EuDML project. And I owe a lot to the authors of and contributors to the tools used, and also to Kaveh for permitting me to steal his title and part of his abstract. Thank you. Thank you very much. Questions. Unusual. Boris has a question. The problem with mathematics in TeX is that it's extensible: in AMSLaTeX I can say, at the beginning, \DeclareMathOperator, and create my own operator for my own operations. My question is, would your system understand this, just use a new operator in the conversion, and work?
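For illustration, the kind of author-side extensibility being asked about (standard AMSLaTeX; the operator name is invented):

  \DeclareMathOperator{\spn}{span}  % author-defined operator
  % later in the document:
  $\spn(v_1,\dots,v_n)$

Whether a converter maps \spn to a generic operator token or to something semantically richer is exactly the conversion question in the answer that follows.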
Well, the system works when you are able to convert it into MathML, and it's easily extended: currently our pipeline is configured for full AMSLaTeX. It's easy to add new markup, similarly as you would add support for it in TeX4ht or whatever; on this side, it's fine. It's harder to extend it in MathML: you may have to wait for MathML 4 or whatever will support your new operators and the like. So, once the visual appearance of your operators can be described by MathML 3, the present MathML, it will do, given that the system is configured for conversion from your TeX markup. Right? Anjusha. It's great to hear about it, but the examples which we have seen are about very small expressions. What about complicated expressions? That is one thing. Another thing: you said it all depends on MathML, whether MathML can represent the formula. But you could look at the TeX file, which already has the mathematics in textual format, and most of the journals also ask for the TeX file. So is it also possible to operate your thing on the TeX file and the mathematical formulae which are available there? Do you think so? Wow. So, the first question reminds me of one case. One arXiv article happened to be one big, really big formula: it started with a dollar and then all the text was inline math. That was rather huge, and it generated a lot of subformulae, but well, the system coped with it. On average, on the arXiv, we have about 20 to 30 index terms for one formula. So the ratio of 3 billion terms to 158 million formulae is about 20 or 30; the expansion is by 30, and it is linear, copable with, so I don't see any problem here. The second question was whether it is usable on the TeX files, right? You mean the MathML is inside the TeX files? No, no. Well, for that, you simply put your TeX in as a query and it gets converted to MathML. If the math chunk is within dollars, then you can search or index it. So we think that working at the TeX notation level is a loss of time. It has been proven by LaTeXSearch that it simply doesn't work: mathematicians are saying they are not capable of finding anything there, because of the differences, curly brackets in one place or two, and spaces here and there. So you cannot find what you are looking for. MathML encodes the structure of the formula, so that's much, much better. Hello. First, to sort of answer Boris's question: yeah, it's a tricky thing, because I'm talking about the abstract level. The ability to present your LaTeX version with your own carefully constructed macros is actually part of mathematical communication. So MathML sort of ought to be capturing that. But if you think about Presentation MathML, it's not at all clear what you should do, because you haven't got a macro language attached, basically; but it's something that the MathML group is certainly very much aware of. And it becomes really interesting when you're trying to do OpenMath and Content MathML, of course, because that's really where you do have that information from the author, and you either need to make it more explicit, as I just suggested, or have some nice heuristics for guessing what's going on. And more generally, just a comment really, to make what I think Petr is implying a bit more explicit.
This is actually the start of a well-coordinated and very big project that could be of very great importance to publishers and everybody, if it all works the way that some very respectable people want it to, shall we say. Well, thank you. Well, I agree, and the conversion from TeX to MathML should take care of the peculiarities of authors' markup. These are scientific papers, so they should appear on paper, and on paper you should see the formula or the commutative diagram, so this should be expressed in some way in MathML at the end of the day. Of course, authors are very creative, so they will come up with something and break something, but then they will not be found by this search engine, so it might be a motivation for them to stick to known solutions. This is on the same level as what you say: authors are very inventive, and especially in new fields they tend to invent new symbols or new terminology, and this is something that is not yet standardized, so the same concept could be represented by three or four different means. I guess the question is, how do we get authors to be more standard? Well, the answer is that we are working hard to reach critical mass, so that mathematicians will hopefully go to eudml.eu as biologists are used to going to PubMed Central when they are looking for scientific papers; in the biomedical domain, a paper is as if it wasn't published if it's not in PubMed Central. So the dream is that EuDML will become such an established site, and then authors will be motivated to cope with the standards, as their stuff will then be searchable there and more accessible, and that would increase their page ranks, etc. So that might be an incentive for authors to be more compliant. One question we've gotten from some authors or other people is: is there somewhere a list of which symbols are used in which areas of mathematics? Does anyone know of such a resource? The math society doesn't have any such thing and is not inclined to build one, but if one exists I would love to know about it, so if anybody does know of such a thing, I'd be very appreciative. Final quick comment, and we have to move on. Just a quick comment to sort of counter Barbara, as she's listening. Ross, thank you. I think my answer to that would be: do you think that maths notation really has that property? We're not talking about physics notation here, remember; we're talking about maths notation. That's an open question. So I think part of the reason why such a list doesn't exist is that maths people don't think that way, if you like, or they use notation, as you know, in very strange ways. Thank you very much, Petr. Thank you.
|
TeX is around 30 years old, and was conceived and written before the advent of MathML, not to mention the Internet. At that time the idea of indexing and searching mathematics was just a futuristic idea. When people jumped on the Google bandwagon, it was predicted that old technologies such as TeX mark-up for math would disappear in time (it is not used for tokenization and indexing properly). The advent of the Internet and \acro{W3C} brought mark-up and global search to the attention of the public. Somehow it was acceptable again. The recent move to the semantic search and MathML has brought renewed attention to the need of unambiguous canonical math representation in texts.
|
10.5446/30809 (DOI)
|
So, I'm going to be talking about TikZ today. I'm going to give a little bit of a tutorial, but also take a bit of a larger look at TikZ as well; I've also heard it referred to as "ticks". It was developed by Till Tantau, so thanks so much to Till for all of his great work; obviously, he's done a lot more with Beamer as well. It's a macro package for creating graphics. Some of the syntax, if you've worked with Metafont or PSTricks, is somewhat similar, and it's kind of inspired by them, though definitely not everything is taken from there; but it's good to see some of that. The really nice thing is that it's capable of producing wonderful-looking graphics: production-quality, publication-quality graphics. And what we want to show is that today TikZ is very much alive and well, and many other programs and LaTeX packages are starting to take advantage of the power that TikZ provides. So we want to take a look at some of that. This is a high-level overview of what we're going to be talking about today. We're going to have a little bit of a tutorial on what pure TikZ is: the basics, the syntax of the language itself. We'll take a look at the math engine and some of the features that are available there. Also, we'll look at some of the various ways we can produce output; we're going to be focusing primarily on PDF. And then we'll look at some of the other libraries that come with TikZ for producing specialized graphics, in particular for showing various automata, and also mind maps, of which this would be an example. And then we'll take a brief look at the folding library as well. And then there are some other packages that can produce very nice combinatorial graphs, and this nice 2d package which sits on top of it as well, along with some other programs that are now interfacing with TikZ. So, first off, the bare-bones document. Really nothing terribly surprising here: \usepackage{tikz}. You can include some optional libraries if you want, snakes and arrows and the like, which come with the installation of TikZ. And then inside your document, whenever you want to include a graphic, you just start a tikzpicture environment, and inside of there you put all of your TikZ macros. One of the first things you need to be able to do is just define a point, and you can refer to points either in Cartesian form or polar form. And this creates a named point that you can then refer to later: you just define some coordinate, give it a name, in this case we're calling it P, and say that it's at either a Cartesian location or a polar location. We have a few examples of this as well. You can essentially accept the default units, which start off at 1 centimeter, or you can change those, or you can also include your own dimensional units, so that you can precisely define the location that you're interested in. So coordinates can be given in lots of different ways: explicitly, just saying I want to be at position (3,1); as a named coordinate, something you've already given a name; calculated using some math; and there are also some other ways you can create them. Then, when you want to draw your graphics, what you do is chain these coordinates together with a variety of path operations.
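A minimal sketch of the bare-bones document being described (library names such as snakes are from the TikZ versions current at the time of the talk):

  \documentclass{article}
  \usepackage{tikz}
  \usetikzlibrary{snakes,arrows} % optional libraries

  \begin{document}
  \begin{tikzpicture}
    \coordinate (P) at (1,2);  % Cartesian: 1cm right, 2cm up by default
    \coordinate (Q) at (45:2); % polar: 45 degrees, radius 2
    \draw (P) -- (Q);          % the straight-line operation discussed next
  \end{tikzpicture}
  \end{document}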
In this case, we're going to look at the path operation that consists of just two hyphens, which represents a straight line. Here we're drawing a diamond, starting off at position (1,0) and going around to the other vertices; cycle is just a special coordinate that refers to wherever you began this particular path. So we start the path and go along, drawing straight lines in this case as we go. You can also specify various options for your path. If you want to include color, or, as in this case, a fill, the interior of the polygon gets filled; you specify the colors just as you normally would with xcolor. You can also specify the draw color, which is the pen itself that runs along the path. There is a variety of options that can be given to customize the look of your graphic; you specify them as you would normally. There are many other operations available for drawing grids, rectangles, circles, ellipses, arcs, Bézier curves, and so on. So you have a lot of graphics primitives available, all of which have a very similar format: we say that we want to draw, we describe a path of coordinates (in this case all explicit coordinates), and we separate them with the various path operations for what we want to have appear.

In this next picture there's a lot of code, but what we want to focus on is that we can describe coordinates, give them names, and then do a little bit of computation. First we define the origin and give it that name; we also have some other coordinates, in this case A through G, which are defined through calculations using polar coordinates to give us the various points on this polygon. Then we draw the edges as we've seen before, A going to B going to C and so on, to give us the outline, and then we add in the spokes. One thing to show here is that you can pick up the pen: we go from the origin to A, and then from A back to the origin, but since there's no path operation between them, you can imagine that we're just moving to that location without actually drawing anything. That gives us all of the spokes of this polygon.

Another way we could have drawn this same figure, just to expose the iteration mechanism that TikZ provides and get a little bit of looping, is with the \foreach macro. Another nice thing is that this macro can be used outside of tikzpicture environments; it can be used standalone, and you don't even have to include the entire TikZ package if you just want the iteration feature, so it provides another way of doing iteration in TeX. Here we have \foreach \i going from 0 to 6, and for each value we go out from the origin up to one of the vertices and then over, as in the sketch below. One thing to notice in this calculation: since I've got parentheses inside it, I have to include curly braces, because otherwise TikZ will think it is a coordinate; the braces indicate that we're doing some math here.
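Pulling that together, a minimal hedged sketch of the spoke-and-rim drawing just described (the seven-vertex polygon and the radius are invented for illustration):

  \begin{tikzpicture}
    \coordinate (origin) at (0,0);
    % one spoke and one rim segment per iteration; the braces around the
    % polar angles are required because the expressions contain parentheses
    \foreach \i in {0,...,6} {
      \draw (origin) -- ({360/7*\i}:1.5cm) -- ({360/7*(\i+1)}:1.5cm);
    }
  \end{tikzpicture}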
Nodes are just coordinates that happen to have both text and a shape; coordinates by themselves have neither, so nodes give us a way to create a location that has text and a shape at the same time. Here we define a node, give it a name, v0, and say where it's at. The draw option tells TikZ that we actually want it drawn, so that the shape appears; we also say what shape we want, in this case a circle; and then we give whatever text we want, here v subscript 0 typeset in math mode. We can get the rest of these locations in a very similar way. You can also use styles so that you don't have to type in these options every single time: we say that all of our nodes should be drawn and that their shape should be a circle, and then we don't have to specify those options for the individual nodes. We can now refer to these nodes just like coordinates, by name, and draw lines between them; and you can see that TikZ doesn't go inside the node when drawing the line, it understands that the line stops at the node's border.

The math engine is reasonably sophisticated. There's an r operator that can be used to give angles in radians, if you don't like working in degrees. The arithmetic, which we've already seen, can be used to define points, and you can do some point computation, so points can be computed in terms of other coordinates as well. There's also a rich collection of functions available, and more functions are being added in later versions. Another nice thing is that, just like the iteration mechanism, the math engine can be used standalone, outside of TikZ, if you're interested in doing some computations in TeX, and it's reasonably nice to work with. You have your basic arithmetic operations, some relational operators, and many other functions: maxima, minima, square roots, and so on, which can be useful for computed graphics, where you want, say, a family of curves or other pictures that are calculated each time.

A simple little example: we've already seen a few where we use some arithmetic, and we can also use the math functions for coordinates, so here we define a length using the square root of 3. Again, since there are parentheses inside, we have to put curly braces around it so that TikZ understands we're doing mathematics and not trying to refer to another coordinate that happens to be named 3. Basic calculations involving coordinates can also be performed; however, to do this you need to include the TikZ library called calc, and any time you're doing a coordinate calculation you enclose it within dollar signs, kind of like putting it into math mode, if you will, except that here it marks a coordinate computation. You can do addition, subtraction, and scaling of coordinates, among other things. I'll come to a simple example that creates a coordinate called A in a moment; first, the nodes and styles we just covered are sketched below.
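A hedged sketch of the node-and-style idiom (node names and positions are invented):

  \begin{tikzpicture}[every node/.style={draw,circle}]
    \node (v0) at (0,0)   {$v_0$};
    \node (v1) at (2,0)   {$v_1$};
    \node (v2) at (1,1.5) {$v_2$};
    % the connecting lines automatically stop at the node borders
    \draw (v0) -- (v1) -- (v2) -- (v0);
  \end{tikzpicture}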
So, that simple example with the coordinate A: it happens to be at a particular location, and we put a dot there, fill a little circle, and then create some other circles. Inside this coordinate we put the dollar signs that indicate a coordinate calculation, and then we just write 2 times A, which performs a scalar multiplication. You can also add and subtract other coordinates; these could have been named coordinates or, as in this case, given explicitly, so we can do a little translation here as well.

Another type of coordinate calculation you can do is to find a point along the line that connects two other coordinates. Here we define a new coordinate called A', and it happens to be 50% of the way between B and C. This gives us a way to calculate a midpoint, so we can calculate all of our midpoints here. And nothing says we have to stay within the segment: we could have specified 110% or so to go out along the vector that connects B to C. So this gives us a nice way to calculate midpoints and things of that nature.

We also have a path operation called let, and at first glance it seems not terribly useful. What let allows us to do is define a coordinate that's used for just one path, a temporary, throwaway coordinate. So we say we're going to draw this path, where we let \p1 be this point, \p2 be this point, \p3, and so on. The p in these names is required; the 1, 2, and 3 in the naming could be anything, but the macros we use to define these points have to begin with a p. We can then use them just as we would use coordinates.

Where this really shines is when we want to do some calculations, because the let syntax allows us to extract the x and y components of a vector, and it seems to be the easiest way to do so without descending to the lower level of PGF. Here we define some coordinates A and B, picking more or less random values, and draw the line that connects them; but now we want to draw the circle through them. There are a few ways we could do this, but one way is to find the distance between the two points. For that there's a math function called veclen, which gives the length of a particular vector; but I want the length of this vector here, so I need the difference of the two points. So we draw starting from A, but first we let \p1 be equal to the difference of B and A, which gives us our vector; we're just doing a coordinate calculation. Now, inside veclen, we can refer to \x1 and \y1, and since the 1s match, we get the x-coordinate and the y-coordinate of \p1. Again, we don't get to choose what those markers are named: they start with x or y and then follow whatever numbering we used in the let. This circle construction is sketched below.
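A hedged sketch of that circle construction (the coordinates are invented; the calc library supplies both the $...$ coordinate calculations and the let operation):

  % requires \usetikzlibrary{calc} in the preamble
  \begin{tikzpicture}
    \coordinate (A) at (0,0);
    \coordinate (B) at (2,1);
    \draw (A) -- (B);
    % let \p1 = B - A; veclen(\x1,\y1) is then the distance from A to B,
    % used here as the radius of a circle around A through B
    \draw let \p1 = ($(B)-(A)$) in (A) circle ({veclen(\x1,\y1)});
  \end{tikzpicture}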
TikZ can also find intersections in some cases: between two lines, between a circle and a line, and between two circles; it can also find tangents. This is another type of coordinate computation you can do. Here we take the intersection of the line from A to A' and the line from C to C', which intersect here; we find a new coordinate D that occurs at this intersection, give it that name, and then obviously we draw it. Now Bill's going to talk about some of the other topics.

We haven't talked too much yet about the production of these graphics. The simplest way is to create a tikzpicture environment, as we mentioned before, and just embed it within text; it gets typeset much like a mathematical formula would, as you can see here. Another possibility (of course you could also put it in figures and whatnot) is that you might want to end up with an entire collection of PDF files, one PDF file per image. One way to do this relies on another package you might already know about, the preview package. We process the original source file, which has all of the graphic images created with TikZ, with pdflatex, and by virtue of the preview package what you end up with is a PDF file of cropped images, one per page. Then you can burst that file into separate files in any number of ways; one way we found was the PDF toolkit program, pdftk.

If you haven't seen the preview package before, here's basically what you need to know about it. You say \usepackage{preview} with some special options, one of which is tightpage, which says that you want each image cropped as tightly as possible. If you don't like tightly cropped images, and you may not, because this might be just a little too tight for your needs, you can adjust that; in this case we add two points of white space around the four edges. Then you indicate what kinds of things you're trying to preview, which in our case is all of the tikzpicture environments: the first picture, the second picture, and so on. There's a sketch of this set-up below.

Another thing you might want: if you read the 560 pages of the TikZ manual carefully, it says that it's possible to produce SVG output from TikZ directly; and if you read more carefully, you'll see that the author admits that, well, it's not quite perfect, and if you try it you'll see that it's far from perfect and really not ready for prime time, I think is the way they say it. So if you're trying to get reliable SVG, this might be one way to go: basically do everything I said before, and then, as a final post-processing step, take each of those PDF images and send it through a PDF-to-SVG filter.
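Going back to the preview set-up, a hedged sketch (the border size and picture contents are arbitrary); afterwards something like "pdftk pics.pdf burst output pic-%02d.pdf" splits the result into one file per picture:

  \documentclass{article}
  \usepackage{tikz}
  % "active" turns previewing on; "tightpage" crops each image tightly
  \usepackage[active,tightpage]{preview}
  \setlength{\PreviewBorder}{2pt}   % two points of white space on all sides
  \PreviewEnvironment{tikzpicture}  % one cropped page per tikzpicture
  \begin{document}
  \begin{tikzpicture}\draw (0,0) circle (1);\end{tikzpicture}
  \begin{tikzpicture}\draw (0,0) -- (1,1);\end{tikzpicture}
  \end{document}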
The last time we talked about TikZ, I created a finite state machine with raw TikZ code, and it was very painful: I did a three-state machine and basically decided that I didn't want to do any more. Since that time, there's a new way to do this which makes it much more understandable to people who are in the business of thinking about finite state machines. Here are the kinds of things you need to know. For each state, you basically need to know where you want to put it on the paper and what text should appear within it. You'll notice that in this example there's one state that's special, the initial state or start state; and you can have some number of states, in this case only one, which are special final states. So you need to know about those. Then, for each edge: where does it start and where does it end?

And then there are extra little issues that most people don't think about when they think about state machines. For example: do I want this edge to curve, or just be straight? And if it is curved, which side of the line do I want the label on? If you think of going down a one-way street, you'll notice that if I go down this edge, the label is on my left side, and if I go from q3 to q0, the label is on my right-hand side. Those are the sorts of things we don't normally think of, but that you would want to think of for typesetting purposes.

All six of those questions are basically buried in here, and I don't want to belabor the details, but I'd like to say that it makes a lot of sense to people who think about finite state machines. Here's a node: it's a state, and an initial one, and here's what I want inside it; here's another node that's going to be an accepting or final state, and here's what's inside it. I'm also saying, relatively speaking, if you think of laying this out on a grid, where each node or vertex is positioned relative to the others. Then, to get the edges, I'll just pick out a couple of things. Notice that here, from q2 to q3, it says swap; that issue of swapping is, again, the right-hand or left-hand positioning of the label. There's also, from q0 to q0, a loop node, and you say where you want the loop; notice that some loops are above and some are below. If you have one of these examples to look at and you have ten finite state machines to generate, I think that with three minutes of careful copying and pasting a person can do this; you don't have to be a TikZ expert. A hedged sketch in this style follows.
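Here is that sketch (the states, labels, and positions are invented; the automata and positioning libraries ship with TikZ):

  % requires \usetikzlibrary{automata,positioning} in the preamble
  \begin{tikzpicture}[->,auto,node distance=2cm]
    \node[state,initial]   (q0)               {$q_0$};
    \node[state]           (q1) [right=of q0] {$q_1$};
    \node[state,accepting] (q2) [right=of q1] {$q_2$};
    \path (q0) edge              node {a} (q1)   % straight edge
          (q1) edge [bend left]  node {b} (q2)   % curved edge
          (q2) edge [bend left]  node {a} (q1)
          (q1) edge [loop above] node {c} (q1);  % loop placed above
  \end{tikzpicture}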
What we've just been talking about is an example of one of the libraries, and we've chosen a few of the libraries for today. Automata is one kind of library; here's another kind, the mind map. I will warn you, the next slide has a lot of code; don't look too hard, I just want to give you a general sense. I don't know much about mind maps. I looked on the internet, and there are lots of companies that make mind-map software, and everything I looked at looked really complicated. What I think of when I see these so-called mind maps is basically just tree structures, or hierarchical structures: you've got a root node and three children, and each child in turn has the opportunity to have additional children, as you see here. That hierarchical structure, as you might guess, appears in hierarchical form on the next slide. I've got one root node, and you can see that it's got one, two, three child nodes. There's some extra information here because, as with the finite state automata, we want to say where things land on the page: for example grow=0, grow=60, grow=120, which is just the usual tenth-grade geometry, going counterclockwise 0, 60, and 120 degrees. Similarly, when you get to the bicycle node, its two children also grow, in this case at minus 30 degrees for the first and plus 30 for the second. There are many other options you can use to make this a little more automated, but this gives you a preview of some of the capabilities.

Just for fun, there's a folding library that's part of TikZ, and as far as I know this may be the first talk here that has homework. So there is a homework assignment coming later; just to prepare you, it requires some printing, a little manual dexterity, and a bit of paper and sticky glue. Anyway, let's take a quick look at the code. It comes from the folding library: you specify how long you want each edge of the solid, and then you specify what goes in each of the 12 faces. In this case it's a rather boring choice, one through 12, but you could make those faces anything you can imagine typesetting with TeX; if you want to put in, I don't know, a calendar, or in this case graphics from TikZ, have at it.

Now, there are lots of other people getting involved, which I think is very positive. There's something called CircuiTikZ, and I think this very much follows the pattern of PSTricks, where other people get involved and extend the basic capabilities. Like TikZ, we get PDF, and you can embed it with other text. This package is used for electrical circuits, and here are some of the kinds of symbols you can get from it; they're essentially just special kinds of nodes that we put along paths. Here's a simple example. I was tagging along with my wife at the ALA convention recently, and Wikipedia gets a lot of derision among librarians, but I went to Wikipedia, it showed me an RLC circuit, and this is what I got. So then I thought, well, how can I do this with TikZ and CircuiTikZ? Again, not belaboring each and every point: what you can see, for example, is draw (6,4) to (6,0). If you count carefully, this is (6,4) and this is (6,0), so it's going to draw an edge there; but along the way, it's going to put a capacitor. In this case I chose C because that's what's on the Wikipedia page, but if you wanted something other than C there, it would go on the right-hand side of this option. The star-star gives you the little dots, the electrical connections. So what we see here is that we're moving away from the lower-level TikZ detail: we're not saying put a two-point filled circle at this particular place; it just knows that and does it. A hedged sketch in this spirit follows.
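Here is that sketch (the component placement is invented; the to[...] syntax with R, L, C and the *-* poles is CircuiTikZ's):

  % requires \usepackage{circuitikz} in the preamble
  \begin{circuitikz}
    \draw (0,2) to[R=$R$]      (3,2)   % resistor along this edge
          (3,2) to[L=$L$]      (6,2)   % inductor
          (6,2) to[C=$C$, *-*] (6,0)   % capacitor, with connection dots
          (6,0) -- (0,0) -- (0,2);     % plain wire closes the loop
  \end{circuitikz}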
In a similar way with graphs: another person comes along and says, well, I like the basic capabilities of TikZ, but I want to think about typesetting combinatorial graphs, and I don't necessarily want to dive down to the TikZ level. So there's a companion package, tkz-berge, named after the graph theorist Claude Berge. There are lots of well-known graphs, hypercubes and so forth, where you basically just say "I want this one", give some extra parameters, and generate all of these fancy-looking diagrams.

So let's take a look at some simple examples. I've chosen a five-vertex graph, and, as everyone would agree, each of these is basically the same picture, just stylized in a slightly different way. What's nice about this package is that you specify the graph once, and then if you change the style, you get a different look. Some people like this style where everything sits all sort of on the same level; it's not my style, but that's not for me to say. Anyway, one simple change, almost one simple change, will get you from one style to another. I say almost because when you switch from here to here there's an extra complication: where do you want the labels to appear?

Okay, so let's look at the details. Again, I'm only giving a thumbnail sketch of what's possible; there are lots of shortcuts to make these kinds of pictures require even less code than what I'm showing. I've taken a very simple approach: I imagine a Cartesian coordinate system and plant each of the five vertices exactly where I want them. If you want a grid-based system, that's possible too; if you want vertices arranged in rings or circles, all of these are shortcuts. So you plant the vertices and then specify the edges; what could be simpler? Then you initialize the system and indicate what kind of picture you want: I want just the old, boring, normal picture that I get here. If you change that to classic, then you get this. As I said before, you do have to change a little bit more, because you have to either accept where the labels land, which is probably a bad thing to do because the D will probably land on a line if you choose the default, or give each label a position. Again, you can thank your tenth-grade geometry teacher: you've got 360 degrees around each point, and there you go.

What if you want directed graphs, with arrowheads and possibly weights on each edge? This wouldn't be my style, but maybe somebody wants it; maybe you want to dress up each edge weight in some special way. So what has to change? Not too much. You provide a post-processing step, and the post-processing step basically says: put an arrow. So now I want an edge from A to B, and the arrowhead appears automatically. You can change the style of those arrowheads if you don't like that particular look; there are about 18 different arrowheads in TikZ. What if you want edge weights? Not a big deal: just add an extra option to each edge indicating what the label is. And if you don't like integer weights, notice that this is mathematical text, so it could be anything you want; you'd like, I don't know, integrals for your edge weights? Have at it. I didn't create these next pictures myself, but there's quite a large gallery of these named graphs. Everything I've shown you up to this slide had straight edges, but as you can see, that's not a requirement. A small hedged sketch of the vertex-and-edge input style follows.
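Here is that small sketch (the vertex names and positions are invented; \GraphInit, \Vertex, and \Edges are tkz-berge commands):

  % requires \usepackage{tkz-berge} in the preamble
  \begin{tikzpicture}
    \GraphInit[vstyle=Normal]   % change to vstyle=Classic for the other look
    \Vertex[x=0,y=0]{A}
    \Vertex[x=2,y=0]{B}
    \Vertex[x=1,y=1.5]{C}
    \Edges(A,B,C,A)             % chain the edges around the triangle
  \end{tikzpicture}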
The same gentleman produced a 2D geometry package, and this is very, very nice, I think, if you're interested in drawing pictures that can be thought of in two dimensions. Again, it moves away from the TikZ syntax toward a syntax which I think is more natural for lots of people. There are a few slightly unusual things about it. For example, if you've been doing lots of TikZ, you'll notice that every statement ends with a semicolon; that's not true here, you don't need them. Fortunately, if you're used to typing the semicolon, it seems not to matter, so put them in or leave them out. You can also blend straight TikZ with the 2D package; that's a little weird, because the syntaxes differ a bit, but you can do it. The one thing I find a little unpleasant is that spaces should not be used to separate parameters, and if you do have a space, a bad thing will happen, possibly including putting TeX into an endless input loop.

Okay, so here's a quick example of the 2D package's output, a construction originally due to Euclid, if I'm not mistaken: we start with two points A and B, and we want to construct the equilateral triangle on them. So how do we do it, again not in raw TikZ but in the 2D syntax? I start with a point A, arbitrarily at (1,1), and another point B, somewhat arbitrarily at (4,2). I indicate where I want the text A and B to be positioned; in this case, you can see it's to the left of one point and to the right of the other. Then I draw a segment; notice that this no longer uses the TikZ hyphen operator, you just say draw that segment. This looks a little weird, and maybe you get used to it: A slash B, not A comma B. Then construct two circles. Again, not TikZ syntax, but maybe more natural: I want a circle centered at A whose radius is implied by the second parameter, so the radius is the distance AB, and similarly a circle centered at B. This next one you probably have to look up in the manual: I want the two intersection points C and D computed, so you provide the information about the two circles and say where you want the intersection points stored. That's not something I carry around in my hip pocket, so I'd look it up. Then draw the point that was computed at C, and draw another point at D. Then what were we going to do? We want to complete the equilateral triangle, so that's a polyseg from points A to C to B. Why didn't I go back to A? Well, I already had that segment. From there, if you want to add this little fan, it's not too hard. I want to draw this dashed line; not a big deal: draw a segment from C to D (again, not a comma but a slash) with style equals dashed. And if you want to put in this right-angle indicator, there's a right-angle command where you specify the three points that define the angle. Again, not at all the TikZ syntax, but I think it's very comfortable for someone who's thinking geometrically. Here are a couple of pictures that we took from the 2D package's documentation, and a couple of pictures that we drew on our own.

GeoGebra has nothing to do with TikZ, except for one important thing: it can export to TikZ format. GeoGebra is free, and I think it was originally intended for high-school mathematics, but since that time it has gone beyond the original intent. It's an example of a dynamic geometry program, if that means anything: basically, you can draw things and slide points around, and all of the relationships are preserved. I don't have time to show you that in action, but here it is as a screenshot. I started, just as before, with two points A and B; using the menus, I constructed the two circles and put in the points; and then (you can't see it from up here) there's a way to export this: you export to TikZ format. Now you don't need to know anything about TikZ: you just take the output it produced, put it into your LaTeX file, and boom, there you go. There's another option that says export into Beamer format. You still don't have to know anything about TikZ, though you do have to know a little about Beamer, and what you get then is, boom, boom, boom, the exact sequence of actions that I performed when constructing this in GeoGebra.

TikZ, I think, is not particularly strong in data visualization; I think Till and others recognize that, and I believe work is being done to rectify it. One thing I looked at in preparation for this talk is a script called matlab2tikz, whose intent is to take MATLAB plots and produce TikZ code. You can do something with it, and I will just say you can do something, but it's not perfect, so I'll leave it at that. 3D plots, for example, don't work at the moment, and I believe that's being worked on.
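I won't guess at the 2D package's exact command names from memory, but the same construction in tkz-euclide, a later package by the same author, looks roughly like this (a hedged sketch, not the speaker's actual code; the point positions are the ones from the talk):

  % requires \usepackage{tkz-euclide} in the preamble
  \begin{tikzpicture}
    \tkzDefPoint(1,1){A} \tkzDefPoint(4,2){B}
    \tkzDrawCircle(A,B) \tkzDrawCircle(B,A)    % circles through each other
    \tkzInterCC(A,B)(B,A) \tkzGetPoints{C}{D}  % their intersection points
    \tkzDrawSegments(A,B A,C C,B)              % the equilateral triangle
    \tkzDrawSegment[dashed](C,D)
    \tkzLabelPoints[left](A) \tkzLabelPoints[right](B)
  \end{tikzpicture}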
So, to wrap things up: TikZ is a very nice system, I think, if you're interested in producing any kind of picture that you want to blend well with your text. All of the fonts look right, because they're the same fonts you use in the actual document, and I think that's what most of us are very sensitive to: that things are the right size, in the right fonts, and so on. Many outputs are possible, and for my money PDF is where it's at, so that's one thing that attracts me to the system. TikZ is evolving: we talked about the system three years ago; it had a large manual then and many capabilities then, and it's grown dramatically since. And, as I said before, what I think is very encouraging is that lots of other people are starting to take note of it and make their programs interact with TikZ. As we saw with those libraries, you can extend the basics. Originally, Till Tantau started with a low-level system called PGF; on top of that he built TikZ, which for him, I think, is high level, but for my purposes it's not high level enough; and then you get these other libraries and packages, which, for dedicated kinds of things like automata diagrams and so forth, make it a lot easier to use. Packages like the 2D one really are very accessible, and they make doing geometric constructions and the like a lot easier: you can do everything in TikZ itself, but the intersections and so on are handled in a way that is fairly close, I think, to the way a lot of people think about them.

So, like I said, here's your homework. Truth in advertising: when you do your homework, you will not get this size; it will be a little bit smaller. I made one special that was a little bigger, but I thought you wouldn't want to carry an 11-by-17-inch piece of paper around with you. Thank you. Thank you.
|
TikZ is a system which can be used to specify graphics of very high quality. For example, accurate placement of picture elements, use of \TeX\ fonts, ability to incorporate mathematical typesetting, and the possibility of introducing macros can be viewed as positive factors of this system.
|
10.5446/30815 (DOI)
|
Okay, so I'm going to talk about how some things are changing with regard to math in ConTeXt, in particular what we are trying to do in integrating Unicode math and OpenType math features and getting them into ConTeXt. There are two ways to look at Unicode math. One is what I'm calling the traditional TeX sense: we type in what Unicode specifies as Greek alpha, Greek beta, Greek gamma, which are easily accessible by keyboard shortcuts; we type them within math mode, and we expect math italic alpha, beta, and gamma to come out. We expect these letters to behave just like normal letters: if I type x underscore alpha, I should get x subscript alpha, and I should not have to enclose the alpha in braces, because I don't have to do that if I replace alpha by a; all characters should be equal in that regard. And I should be able to type the summation symbol, the product symbol, basically any Unicode symbol, and get the right output.

The other way to look at it is in the Unicode sense. Unicode specifies that a particular code point is math italic alpha, so if I want math italic alpha, I should be able to type that code point into my editor and get math italic alpha. Similarly, there are code points for math italic, math sans-serif, math bold italic, and a few other font styles, so if we want, we should be able to type those Unicode characters and get the output without any font switches, so to say.

Now, this is something which currently works in ConTeXt, both for traditional TeX fonts and for Unicode math fonts, and I'm going to give an example. This is a file in which I have the option of either loading Cambria as the math font or not; if I don't load Cambria, then Latin Modern is used by default. Ignore all the formatting, which is just for the type page. I have typed alpha, beta, gamma, and these are the usual characters: if you look here at the bottom, it says this is alpha, with a hex code of 03B1, so this is Greek alpha and not math italic alpha. And when I typeset it, this is what I get: the same things that I input, with the correct output here, in Latin Modern math. I can do the same thing with Cambria, and now I get Cambria math from the same input. So this is inputting Unicode letters in the traditional TeX sense and getting the expected output.

You can also input the Unicode math alphabet range directly. You don't see anything here, because the font that I'm using for the terminal does not have these characters; in fact, I could not find any monospaced font which provides the math alphabets, so I don't know whether someone would type these things by hand, but maybe in an automated workflow one may want to do that. And just to be sure that I'm not cheating, this is what I typed there. With Latin Modern I get the correct three glyphs: notice this is math italic a, this is sans-serif math a, and this is bold italic math a. I can do the same thing with Cambria and get the correct glyphs. What is interesting to note here is that Cambria is a Unicode math font, while Latin Modern right now does not have a Unicode math font, and I'm going to explain how we deal with these two different situations in ConTeXt.
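A hedged sketch of the kind of input shown (minus the page set-up), assuming a ConTeXt MkIV run under LuaTeX:

  \starttext
  % Greek text characters (U+03B1 and so on) typed directly in math mode:
  $α + β = γ$, behaving like ordinary letters: $x_α$
  % a character from the Unicode math alphabets, math italic a (U+1D44E):
  $𝑎^2$
  \stoptext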
So, first, a very simplified view of how ConTeXt reads files. The engine itself does not presuppose any input encoding. At the macro level, ConTeXt assumes the input is in Unicode, reads the file line by line, and does character combining according to Unicode. If the user specifies another input encoding, it reads everything and converts it internally to Unicode on the fly. As for font handling, I don't really understand it; all I understand is that it can read Type 1 fonts with their AFM metrics, TrueType and OpenType fonts, and that you can create virtual fonts on the fly. I don't know the internals of how these things happen, but they do.

I have a question; I don't know whether you can answer it. Suppose I want to create a strange thing: a font where I would take the letters from font A and all the digits from font B, so it would be composed. Can you do this on the fly? Yes, we do that for math, because Latin Modern does not have everything in a single font. So we do it for math, and I don't think the existing mechanism is fine-tuned for anything beyond the way traditional TeX math fonts are put together into virtual math fonts, but it's straightforward to adapt it if you want part of one font and part of another.

So what we need to do is somehow connect what the user has typed to the correct glyph in the font. You need to do that for text, and again, I don't really understand what happens in text; you also need to do it in math, where I vaguely understand what happens, so I'm going to explain what I understand. I hope that everyone can follow this even without knowing how the backend works, because in some sense that is not important: you can abstract all the details away into a layer where you don't really care what the engine is doing.

The approach is to map all input bytes to Unicode characters, and this is done mainly by a Lua table in a file called char-def.lua; there are some additional characters in private slots, and some auxiliary files, but this is the main thing. Then we need to map each Unicode character to the correct location in the font. For OpenType math fonts you don't need to do anything, because we assume the font is an OpenType font, so the glyph should sit at the character's Unicode slot; that mapping is already done. We need to do some extra work for traditional TeX fonts, because we need to create a virtual OpenType font.

On mapping input bytes to Unicode characters: as I said, char-def.lua is the main file, and it is huge, almost 150,000 lines of code. A lot of it was generated automatically by Hans from the Unicode tables, and then there has been a lot of work in trying to complete it, because the Unicode tables do not contain what ConTeXt calls the context name and the math name for the various glyphs. I'm going to show what some of this looks like. For a simple letter like a, we have an entry in the Lua table which looks something like this: the Adobe name of the character, the character category, something to do with CJK, the Unicode description, the direction, the lowercase code (and for uppercase characters there's an uccode), the line-break behavior, the math class, and the Unicode slot number. So this is relatively straightforward. For accented characters there is an additional field, the context name: if you do not want to type the accented Unicode character directly, you can type \aacute and get a-acute.
There are also hooks if you want to build such a character on the fly, using a plus the Unicode character for the acute accent; the rest of the entry is almost the same as the previous one. There are characters which are used in text as well as in math, for example the backslash, so for that we give a context name, textbackslash, and a math name, backslash. For each math character we need to specify what math class it belongs to, and instead of the numbers that TeX uses we give the classes easier-to-remember names; the backslash is simply math class "nothing", so that is the information for backslash.

Some characters are more complicated, for example the vertical bar, which in text is just a single character but in math can mean a lot of different things depending on how you want to use it. So we have a table for it: it can be an arrowvert, which is class nothing; a vert, which is a delimiter; an lvert or rvert, which are open and close; or a mid, which is a relation. All of this has to be put in, and what happens on the fly is that when you use the control sequence \lvert, a lookup happens which says it has to be of this class, and this is the glyph to pick up from the font.

There are entries for the Greek letters. This is the text Greek area in Unicode, and for these we have something like this, basically just the context name. Notice that we are not setting the math class or math name here, because according to the Unicode spec this is not math alpha, it is text alpha. In the math Greek range we do not set the context name but we do set the math class and math name, and similarly for the math Latin characters: for the text capital A we did not set any math class, but for the math range we need to, and there are separate entries for bold and sans-serif and the rest. All this data is in the file, and just to give an idea, I'll show it. This is what the file looks like: we have all these tables, and it is this many lines. The good thing is that even though creating it was a bit tedious, everything is now in the form of a Lua table, so post-processing it is easy: whatever we want to do with it is just a matter of loading the table and doing some simple manipulations.

The way we process the data, at least for math, is that we need to construct the underlying low-level commands that tell TeX what each character is. As I said earlier, easy-to-pronounce names are given to the math classes, and in the math module there is a table of the number corresponding to each class. Then there are simple functions: to define, say, a delcode from a given family and slot, you load that information, and this is the underlying TeX command which gets executed. There are a lot of these functions, basically one per math class, and they are run for all the math characters in the alphabets. Then there are math symbols, that is, anything which is a command name in math; for those we check what the class is and, depending on the class, define the appropriate underlying TeX command for that command name. \unexpanded here is ConTeXt's way of saying a protected macro, basically what you would call a protected macro in LaTeX; I don't know what the equivalent is in plain TeX. This is how it is done for symbols.
The thing is, and this is what happened when we were discussing it on the mailing list, Mojca actually shouted: you want me to type this to get alpha? This is not going to happen. So, for the user to still be able to use traditional TeX-style input, we need to map, when we are in math mode, all the Latin and Greek input to the appropriate Unicode ranges. This is done in the math-map file, and it is again a big Lua table which looks like this. We have a table called mathematics.alphabets: for the regular typeface, the tf range, all the digits start at this point, the uppercase letters at this point, the lowercase letters at this point, and the symbols at this point; that's for a normal text font. For an italic font, the uppercase letters start at the appropriate Unicode point for math italic. For the lowercase letters, this is a place where Unicode gets a bit quirky, because they decided that since lowercase h is also the Planck constant, they were not going to repeat it, so Unicode does not map it contiguously; one needs to enter the whole table manually. It could be done slightly more efficiently, but Unicode has these gaps in a lot of places. Basically, what the table says is: if we are in italic mode and the user types this UTF character, this is the location in the font that we should fetch. So this is what's happening behind the scenes when these things are typed. That was for the regular and italic faces; similarly, we have tables for bold italic and bold, and the same again for the different kinds of math faces. Everything is then built together using a simple Lua function which goes over everything, checks the character class, and determines how it needs to be mapped.

The other thing with math is that to get correct subscripts and superscripts we need kerning information, and we are looking at how Cambria does it. It has a different model than the traditional TeX model: it allows a glyph to have what I think is called a ladder, so that there are different places where you can attach the subscript, rather than just the box model of TeX. From what I understand, ConTeXt tries to read this information from the font and to kern the subscripts and superscripts based on it. These two pages are taken from the mk.pdf document, which is a kind of running history of LuaTeX that Hans mentioned: here is an example of how Latin Modern does it, how Cambria would place subscripts and superscripts if we did not use the correct kerning, and what happens when the correct kerning is used.

Okay, so how do we actually use an OpenType math font in ConTeXt? ConTeXt provides built-in support for Cambria, so if you want to use Cambria, it is as simple as adding two lines to your document, use the Cambria typescript and set the body font to Cambria, and you get your document in Cambria.
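Those two lines, as a minimal sketch (cambria is the built-in typescript mentioned above):

  \usetypescript[cambria]
  \setupbodyfont[cambria]
  \starttext
  $α + β = γ$
  \stoptext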
If you want to use Asana Math, which is another Unicode OpenType math font, one designed to be compatible with Palatino, ConTeXt does not provide built-in support, so this shows how much work you need to do to enable an OpenType math font yourself. The thing which is different is that we need to define a math typescript, saying that we want the math features at the particular size we are using the font at. Once we have this, we can define a typescript for Asana just like a normal typescript for other ConTeXt fonts: the text part is the same as before, there is a very minor change in the way the math font is loaded, and the main work is done by the typescript here. Then you can just use the Asana typescript, set the body font to Asana, and use Asana in your document.

To give an example: this is a slightly modified version of the testmath document from AMS, done in Cambria. I needed to change the input a little to run it in ConTeXt, and I really did not try to make it look exactly like AMS math; this is the default ConTeXt output, with all the headings and such replaced, and this is how it looks in Cambria. We can also see how it runs in Asana Math: this is how it looks in Asana, and, for comparison, this is the file downloaded from the AMS website, which is in Computer Modern. There are no other OpenType math fonts yet, so this is all we can experiment with.

The next thing you may want to know is: what about traditional TeX fonts? You may not want to use a text font comparable to either Cambria or Palatino in your main document, so how do you use a traditional TeX font? What we need to do is create a virtual font on the fly, and this is the code needed to create a virtual font for Latin Modern at size 10. All we are doing is saying which TFM file corresponds to which math vector (I will explain in a moment how these math vectors are defined), and, if the font is italic, what the skew character is and whether extension is enabled. This basically just puts together all the math fonts that are there and tells ConTeXt where to pick the different glyphs from; this partially answers the earlier question about composing fonts, and you can look at the definition of this function, which I think is in math-vfu.lua.

Usually when you do this kind of thing, you take one font, delete some characters, and add another font, so you need more granularity for this. That's what's happening internally: this thing is done, then this thing is done, and so on, in the order in which the Lua table is given. Can you make custom vectors? Yes, yes; I'm going to show the vectors in a moment. These vectors are just tables which say: this input, Greek capital gamma, is located at this slot in the font; this Unicode character is located at that slot; and so on, for a whole bunch of characters and a whole bunch of vectors.
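Going back to the Asana set-up, a hedged sketch of the typescript pattern described; the synonym and font names here are illustrative guesses, not the actual distributed files:

  \starttypescript [math] [asana] [name]
    % ask for the OpenType math feature at the current math size
    \definefontsynonym [MathRoman] [name:asanamath] [features=math\mathsizesuffix]
  \stoptypescript

  \starttypescript [asana]
    \definetypeface [asana] [rm] [serif] [palatino] [default]
    \definetypeface [asana] [mm] [math]  [asana]    [default]
  \stoptypescript

  \usetypescript[asana]
  \setupbodyfont[asana]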
A lot of times when we are playing around with this, we need to check whether everything came out right, and just using the document you are working on is not always the best way, so we need ways to test these things. One way is to view the entire font: there is a module in ConTeXt, fnt-10, which provides a function to typeset an entire font, and this is what that looks like. The function basically just reads the entire font; all the information was already in the Lua table, so it just types it out so we can read it. This file is about 75 pages, because all the Unicode characters present in Cambria are listed: the glyph is shown, along with the glyph's name, the Adobe name for the glyph, and the context name if we have given one. For something like percent there is percent; math entries are preceded by an "m:" prefix; and this goes on for all the characters in Cambria. You can do similar things for the virtual fonts that are created, just to check how complete the mapping is and whether all the mappings are correct.

Sometimes that is not enough, because you don't see everything in much detail there, so there are other ways of testing. Hans mentioned a bit about tracing, and this is one way tracing is done: there are simple hooks to enable or disable it. This thing, which you don't see here, is actually the correct Unicode slot for math italic lowercase a; the rest is typeset natively, and the colors show what is happening. Green is everything that was available in the font; this dark yellow is something that needed to be substituted: for example, this b was a normal text b, and ConTeXt substituted the math italic b on the fly, so it comes out in a different color; similarly, gamma comes out in a different color, and the delimiters come out in a different color. That shows how complete the font was, where fallbacks are being used, and what is missing in the font.

There are other ways of tracing, because this is not the only thing we care about in a math font. We want to know, for example, what would happen to a symbol like this in display mode; usually you get a different glyph for extensible math symbols. There's another command for this which shows the dimensions of the symbol, its math class, its math name, and what the next larger variant would be. For some characters the "next" part is more complicated, for example a curly brace: this shows the different sizes in which the curly brace is available and the Unicode slots they occupy, and then you see the brace being split into all the parts needed to build a big brace at a given size. All this information is accessible.

These are values from the private use area; how did you decide what to use? Just ad hoc. We don't know at this stage whether there is any uniformity on how these things should be done, so right now we are just doing it. I think Cambria does it in a different way, though I don't know exactly.
For the most part, if something is available in Cambria we follow it, but we don't know whether other OpenType math fonts are going to follow Cambria, because the specification leaves this open; this is one thing where I'm hoping that all the font developers converge on a uniform answer as to where these things should go. I don't think Cambria does it differently: these exist as glyphs in the font, they don't have character numbers at all, and the OpenType table basically gives you a linked list.

If we know the math class, a lot of the spacing rules can be worked out. Yes; we have the math class information explicitly for each symbol, so the spacing between math symbols depends on that. For subscripts and superscripts, Cambria has the kerning information I showed earlier. Is that spacing information included in OpenType fonts? For subscripts and superscripts it is included in Cambria; for other fonts I don't know. The spacing between characters in math can also be configured: right now we just use some defaults for, say, the spacing between two characters which are math ordinary, so that remains configurable, and I don't know exactly what OpenType provides there. In OpenType there are actually more parameters than in TeX; in LuaTeX we support everything that OpenType has, so the TeX math engine is extended in LuaTeX, and there is a detailed article about these math parameters.

Okay, so where are we right now in ConTeXt? If there is a complete Unicode math font, ConTeXt can handle it easily; there just aren't many complete Unicode math fonts. If the font is incomplete, we are still working out the right way to substitute, using the traditional TeX definitions, to make the glyphs on the fly; it's mainly a matter of having the right input interface, and the technical part of it is not that difficult. The traditional TeX fonts work, and about 98% of the AMS symbols are mapped. Barbara just told me that the AMS actually has a list of the names it would give to most symbols in the Unicode symbol list, so we can borrow from there; so far we were just looking at what the different math packages in LaTeX provide, and some of those conflict, so it's not easy to come up with the right name for a math character.

My wish list is that there were more Unicode math fonts. I don't know what it is with Unicode math fonts: a release date is announced, and they are never released on that date. Minion Pro Math first said February, then it moved to April, and it's well past April; STIX has been going on for quite a while now, but the fonts are never released. So it's not easy to test. Yeah.
I can tell you one thing that's holding it up a bit: a number of the symbols that currently have to go into the private use area are going to be put into Unicode proper. The codes are being finalized, and when they are finalized, the glyphs will have to be moved around in the fonts. Is Cambria a commercial font? Yeah, Cambria is a Microsoft font, for Office 2007 I think. Is it included in MS Office? If you download the free Word Viewer you get it, just the Word Viewer for Word; if you download that, you get the real Cambria.

What we need to do, I think, is have consistent naming for at least all the math characters which are in Unicode, so that not everyone who needs to use a symbol has to come up with a name on his or her own; that's just not going to work in the long term. And the thing which ConTeXt does provide now, to a large extent, is this file format with all the information needed to process these characters. I wish this were something that could be shared by the different engines, so that we only need to maintain one database of the mappings from glyph names to the various other names. As this is a Lua table, it's easy to extend: if another macro package wants a field which is specific to it, that can be added, and the different packages can simply ignore each other's fields; there is a lot of shared information which could be worked on together here. Also, Unicode does not cover all math characters, and we need a consistent scheme for naming glyphs in the private area. So that's it. Thanks.

Do we have time for a couple more questions? Yes. XeTeX, for example, does something with math to map everything to the proper slots. For each character we need to map a command name to the underlying TeX command which sets the math character, and the way I understood it, the XeTeX packages were doing this with a big list of \DeclareMathSymbol statements. What I'm saying is that it's better to have all of this as a Lua table, which can be post-processed easily, than to have it as TeX input; processing something written as TeX macros is much harder than processing a Lua table. This is my wish list; I'm not saying anyone has to follow it. But LaTeX does not have all this information about Unicode characters right now.

Just a quick thing about this ladder-like subscript kerning: if you do implement it, please keep a possibility to switch it off, because in many cases, especially in tensor algebra, you need to raise and lower your indices.
Just a quick thing about this ladder-like subscript handling: if you do it in your font properly (I think you already have), you need to keep a possibility to switch it off. And the reason is that in many fields, especially in tensor algebra, you need to raise and lower your indices, and if you run them together like this, you're going to confuse people. So that is currently possible, because this is, from what I understand, implemented as kerning, and it can be disabled as an OpenType feature. So you can disable it if you want to. Yeah, so please keep... Yeah, so everything right now is something which can be configured easily. Nothing is hardwired into the engine, so these things can be disabled or enabled on the fly as well. Let's say thank you one more time.
|
Lua\TeX\ provides the ability to process Unicode input and to work with OpenType fonts. These features are used in \ConTeXt to fundamentally alter the handling of math typesetting. The user can type math using Unicode symbols and use OpenType math fonts.
|
10.5446/30816 (DOI)
|
Well, my pleasure to be with everybody once again. Just as a kind of continuation of some of what Hans was talking about: this is a file which is called by another typescript file in order to get the features in the font. So this is called husayni-default. I've defined, actually I checked, up to ss54. So I'm up to ss54, although the OpenType standard only goes up to ss20. I've written John Hudson, who developed this particular spec, and asked him if he can just take it up to 99 and be done with it. Why is it numeric anyway? It's got to be one of the stranger decisions in the spec. Well, that I don't know, but the latest OpenType standard now allows for word descriptions of these kinds of feature sets. So with the next versions of FontForge and VOLT, instead of what you see here, where I keep a description in a comment, I should be able to put that description into VOLT or into FontForge itself. And then maybe there might be a way in ConTeXt where I can just give the description in brackets and it will know which feature it is, because it's very hard to remember; I can never remember any of these. But as you can see... and I'm going to do this one more time, because I can't see my own screen yet. So let's see if I can get my screen back and keep that at the same time. No. Is it coming? All right, there we go. So here, this gives you a number of features, and this is our Arabic font, code name Husayni, and you can see the features go on and on, but most of them are commented out, because you don't want to turn all of them on all the time. So the challenge is to find a reasonable default. After putting all these together, I came up with a set which, I figured, is not first order, it doesn't turn on every advanced feature, but it strikes a good medium for a good document, and it somewhat approaches the Bulaq style that we discussed before. So now let's go straight to Adobe Reader. Hans wasn't too pleased with the outline I showed this morning, with the white background and just the titles, so I decided to put a nice little arabesque border around this one, so that at least you capture some of the feel of the thing. Now, you saw that file with all the different features. If I want to go to first order, we define a font feature, just as in the husayni-default file, called first-order, and at least the way it works now, each feature that I don't use, I subtract. Okay, and then we apply it with this \subff first-order. This is a little bit old; you see this only goes up to 43, and if I want to update it I have to take it up to 54, so there's a lot more to add and subtract. But these are features I don't use, so I'm subtracting them out, and what this does is subtract them from the original defined font feature, the one defined in husayni-default. So it just subtracts those, which gives us a first-order effect, and with the tracing on, we can enlarge this a bit. Yes. That's the code name; the final name probably will not be Husayni, but that's the code name we're using for now. And what you see here is not too different from what you would get from, let's say, Linotype Lotus, for example. A few of the characters are different. For example, this kaf here, which overshoots the alif, gives you a little bit of aesthetic that you wouldn't get in a normal first-order font.
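To make the mechanism concrete, here is a rough sketch of this kind of setup; the \definefontfeature syntax is ConTeXt's, but the feature-set names and the particular ss numbers are made up for illustration, not the real Husayni ones.

    % a sketch only: set names and ss numbers are illustrative
    \definefontfeature
      [husayni-default]
      [script=arab, language=dflt, mode=node,
       init=yes, medi=yes, fina=yes, isol=yes,
       liga=yes, calt=yes, curs=yes, kern=yes,
       mark=yes, mkmk=yes,
       ss01=yes, ss05=yes, ss12=yes]  % a curated subset, not everything

    % a "first order" set built by subtracting the advanced features;
    % the three-argument form inherits from husayni-default
    \definefontfeature
      [first-order]
      [husayni-default]
      [ss05=no, ss12=no]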
So there are some minor differences, but for the most part, this is average. Are the colors clear, or are they alienating? Because I can switch to non-color mode. I think they're awesome, but could you say again what they do? You really want me to get into that? Let me ask a simpler question about color: is the color important to the meaning of the text? Not at all; this is for fun. However, not only for fun, but also for tracking: you'll notice INIT forms are all in red, MEDI are all in green, FINA are all in... I'm sorry, blue... and for isolated we have yellow. And then we have a couple of other features colored as well. And when Aditya was giving his talk, he also enabled the trackers and showed some of the color, so I'm using the same feature that Aditya was using, except here we have applied it to some of these Arabic-script features. I'm happy to share with you what this means, but it would probably take time away from this; maybe after the session. It looks cool. Yeah. It's a mystical treatise, by the way. Okay. A short mystical treatise written by al-Hasan ibn Ali al-Askari in the year four and fifty and two hundred, which basically means 254 after the Hijra date. Now, if we go to the next one: this is what I call Monotype Naskh style. Here we've basically imitated the distillation of Bulaq done by Monotype Naskh, and now you can take a look at what that looks like. If we go back here for one moment, and I'm trying to see what's a good example for this... I believe here: alif-lam-ha. If you look at that, and then in Monotype Naskh, yes: in Monotype Naskh you see the lam is on top of the ha. And in our Arabic lesson earlier in the day, we used exactly that example. So Monotype Naskh has that particular feature. It has another one: here's meem stacked on top of meem. And there are a few more. This is a nice one, where the lam comes over the ta, over the meem, and then back up. And here's something I've added to the Monotype Naskh style which, as far as I'm aware, no Arabic font does: here I set the two dots vertically instead of horizontally, which in a tight space is what's traditionally done, but you rarely see it. So these two dots here are the same as these two dots here, but because the space here is tight, you make them vertical as opposed to horizontal. And here's kaf over meem, and so forth. Feel free to slow me down or stop me as I move on; I want to try to get through most of these. Largely what I'm doing is showing you various samples of what can be done. Here's my default. There was no code to look at; this just applies the file I showed you at the beginning, without any modifications whatsoever. Here, of course, you'll notice the first line: we use a little bit of stretching. This is Bismillah al-Rahman al-Raheem, which is the first line used both in the Quran and in many Arabic books as well. Now here are some interesting things that happen, and I'm going to have to really close in on a couple. Here's an interesting one; there are a lot of subtleties here which may not be clear at first. This jeem that goes into the lam is not the usual one; this one connects into it. Of course, if you don't know Arabic, it might not be completely clear, but we can go back to the first-order style and show that same word. And if you memorized what I showed you before, you can see this is different. That's the default first-order style.
And here you can see the second-order style. So you start getting lots of little subtleties. Here's a nice one. Normally this is stretched out more in a typical Arabic font, but according to calligraphic tradition it should be a little tighter, which we do here and here. And you also see it's not perfectly horizontal; there's a slight slant here. Now, Arabic Typesetting, the font which comes with Microsoft Windows, implements a kind of slant, but it's very naive, because it applies the slant to all the letters, and that's really not correct: it should only be applied to the first couple of letters, so that the word comes in at an angle and then becomes flat. And that's what we've implemented here. Now, open source is sacrosanct; however, there's nothing better than Microsoft VOLT for doing this kind of work, unfortunately. I tried doing it in FontForge. I got a certain way in, but then every time I used a feature that nobody else was using, FontForge would predictably crash. So I would send a note to the list, and George would write back, well, nobody's used that feature before. Then he'd fix it and say, now you have to apply this patch. And I don't know what to do with these patches; I would have to hire a graduate student or somebody just to compile the fixes to FontForge. But the people who worked on VOLT, Sergey Malkin and the others, I have to admit, did a terrific job of distilling enough features of OpenType to really get one moving efficiently on font development. And as much as we like to bash Microsoft, I couldn't have gotten as far as I've gotten without using Microsoft VOLT. And at least it's free as in beer, as you beer lovers like to say. But now let's move on and look at some jeem stacking. Here we're adding a bit more, and I'll keep it in normal mode for now. Because we don't have time (there are a lot of features here), I can't show all of them, but here's one that I'm especially happy with. In classical Arabic, normally this is a jeem-shaped letter, it is a jeem, and you see, it doesn't come in flat: it comes in above and then comes down, almost like a zigzag. Arabic Typesetting does this to a certain degree... no, not Arabic Typesetting; ArabTeX has a certain limited ability to do this. But in this font we have, I believe, fully implemented a model for stacking, not only on top of jeem, but on top of a couple of other letters as well. Now I want you to look at this. This is the letter fa. And look here: the system knows the rules that make this first shape taller than the second. These are actually the same, they represent the same glyph shape, but this one comes first, so it goes higher; the second one goes lower. Keep an eye on that for one moment. And here's another one; actually, this is a very good example of another important feature. This is masabih; it means lanterns. And you see the dot which is red: that dot goes with the initial letter ba here. Normally this dot would go right here, but if it did, it would clash with the jeem shape. So what we did is make a contextual lookup that says: in this situation, move that dot. If this letter is followed by this letter and then this letter, move that dot so it doesn't clash. Sounds pretty simple, but it's cool.
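In Adobe feature-file notation, that kind of contextual rule might be sketched as below; the production font is actually built in Microsoft VOLT, and every glyph name here is hypothetical.

    # a sketch in .fea syntax; all glyph names are hypothetical
    lookup ShiftDot {
        # a variant of initial beh with its dot moved clear of the bowl
        sub beh.init by beh.init.dotshifted;
    } ShiftDot;

    feature calt {
        # if beh.init is followed by these two letters, use the variant
        sub beh.init' lookup ShiftDot jeem.medi yeh.fina;
    } calt;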
How many letters do you look at? For this one? No, in general; what's the maximum, in principle? I don't think there is a maximum, at least I'm not aware of one. But in practice it won't be more than one or two. In practice I've never had to use more than one or two; I think I've done three a couple of times, but usually only one or two. The limit used to be 255, but that's no longer an issue. It's fine. I want to show you another example of the same letter in a different position, since I redid this a little bit. Here's another good one to look at. Here we go: if you look here, you see this is the exact same glyph, and here you see the dot is much closer. And of course, here, because this glyph is thin, I use the vertical two dots instead of the horizontal two dots in order to get a tight feel. And here's another neat trick which very few Arabic fonts will provide; probably only DecoType by Thomas Milo does something similar, although I haven't seen it. In any case, here we have two letters coming into the lam, but they are above the baseline; then they come over the meem, and then they connect over. So that's another thing to watch out for as we move on, because we're only at the beginning, guys. Jeem-ha-meem stacking: now here we've added a few things. Let's see if we can isolate... actually, here's one. You saw this word before, and I asked you to look at that letter; here we've replaced it with an alternate. So not only have we implemented the principle, we've gone beyond it, and we can do this in different ways. For one word like this, sayunfajiru, I haven't calculated all of them, but there are probably about 20 different ways you can typeset it. And ideally, the paragraph builder should have instructions to choose among these ways and decide which one best suits the paragraph context. Now, we have different levels of stretching. What this is meant to illustrate is not how you would typeset a real document, but that the font knows, and ConTeXt knows, where it is legal to stretch letters. You can't just stretch anywhere, any old way: inside a word there are certain points at which you can stretch. This example has stretching in it, but it's small, so small that I could actually typeset a full document in it, and it would look quite good. But let's go to the medium level. Again, you define a font feature and then you add it, and in this case the main active feature is js12. The js features, js00 and up, I hope to register with the OpenType people, because what this basically does is: if you have a justification engine, you can choose this feature to add certain justification behaviors to the paragraph. Microsoft already has a JUST, or JSTF (I don't remember exactly how it's spelled) table. But in their specification you do justification before you apply any other features, whereas in the paragraph model we're using, justification is the last feature to be applied, because it's based on a paragraph analysis, as opposed to being just aesthetic spice, so to speak. And here, if we go to that same word, sayunfajiru, watch what we can do. Here we have three legal stretch points: one, two, three. Normally you would use only one stretch point in a single word, occasionally two. So one of the things we have to do is teach ConTeXt that, in a case like this, it should do either the first one, the second one, or the third one, but not all three at once.
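Selecting such stretch levels from a document could look something like the following in ConTeXt; the js numbers are the speaker's own provisional scheme, so the particular values here are illustrative.

    % a sketch: js feature numbers are provisional, values illustrative
    \definefontfeature[stretch-medium][husayni-default][js12=yes]
    \definefontfeature[stretch-max]   [husayni-default][js16=yes]

    % typeset a sample with the medium stretch variant
    \definedfont[husayni*stretch-medium at 14pt]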
But for the purposes of this document, this illustrates the three legal places in the word where you can stretch, although technically you would use only one of the three. And then here you can choose level one, two, or three of stretching; here, level one, two, three; here, level one, two, three. You can remove this feature and bring the whole thing flat, and then you can apply stretching to the flat version. So it's actually more like 40 different versions of this word that you can make. Idris? Yeah. How is the stretching implemented? I mean, are you adding...? This is pure glyph substitution. Glyph substitution, okay. Ideally, what we would like to do is be able to interpolate these: put all of these stretch substitutions in a table and then extend hz so that it looks at them and finds interpolated values in between, to squeeze or stretch in different places. So that's a long-term goal; but I shouldn't say that, it's an immediate goal, otherwise it may never happen. And then we have maximal stretching. Let's go back to that same word, sayunfajiru. Here you can really get the word stretched out, and you see, the dots and everything remain in the correct places, and everything is perfectly smooth. I'm not aware of any font system that can do anything quite like this. And here's another quick example; you saw this one before, where you go above the baseline and then come down again. Here we have two stretch points. There are other features as well. If you take a look at this word (I'm going to try to go through these quickly), here we have the letter ra. Now, each of these is in isolated form; these two letters don't connect to the left, and that's why all of them are yellow. But watch that middle letter: we've implemented a variant which is a sort of long version of that same letter. If we go to our word sayunfajiru once again, you can do it at the end as well. And this gives you the power of killing ligatures, because once you kill the ligatures, it's much easier to add different variants like this as you go along. If this had been a ligature right here, this would have been much more onerous to accomplish; but here, all I had to do was create a segment, replace the original segment with this segment, and then do a GSUB. That's all. So whenever I see a feature in an old manuscript or somewhere that I want to implement, it usually doesn't take me too long to go into FontLab, adjust one character to fit that size and style, and add it. Yeah. I'm just saying that I don't know anything about Arabic typography except from reading TUGboat articles. That makes you an expert. Yeah, wow. In the ones I've read, I don't know about yours in particular, but in many of the Arabic articles I've read, ligatures are a critical feature of Arabic typography. And I remember you could have ligatures of up to seven characters, and so on. That's insane. It's partly a question of terminology: people say ligatures to mean a particular rendering of a collection of glyphs, but it doesn't have to be implemented as a single ligature glyph, and it's much better not to, as you know. Yeah, exactly. But some people do.
I mean, I've read on the mailing list about somebody who had a font of 12,000 characters and implemented all these ligatures. Very macho, but it's a bit much. So here we don't have ligatures; what we have is what I like to call aesthetic strings. You can have an aesthetic string of two or three characters, but it can be flexible. In this particular case, this is an aesthetic string going here, like this; it's made of three characters. But I could replace these two and get a different aesthetic string without having to replace all three. So is this the aesthetic string? Is it these three? Is it the whole thing? The boundaries start to get less rigid once you get rid of the hard-coded ligature concept. In India and Pakistan they like to use this wide version of the letter ya, for example, which is this one; that's just another example, I don't need to dwell on it. But I'll leave it as an exercise for you to identify the wide letters here; some of them are right at the front. Yeah; particularly the kaf, which is very well known and common. Here's a nice little feature that I didn't emphasize before: if you look here, the two dots originally go here, but the way this word is written, the kaf pulls back, and then the two dots go on top of it, although they belong to this other character. And they're also turned at an angle so that they match the curve here; these two dots are at a different angle from these two dots. That's an example of the kind of subtleties we can add. This is basically the same, except we've added even more glyph substitutions, because sometimes you want to fill up space. You get an underfull box, as we say in TeX, and you want to use one of these wide characters to fill it in; you can fill with justification. So one of the things we eventually need to do is be able to set penalties and demerits and so forth that lean toward using these big glyph substitutions, or that lean toward using stretching. Eventually, if we get this done right, for a single paragraph you would have a virtually infinite variety of ways to typeset it, just as you can with a normal TeX paragraph using penalties and demerits. So here... yes? Is the algorithm for all of this stretching and typesetting and adjusting going to be written in Lua, then? I'll leave that to Hans to answer; I just give the specs, he does the machinery. The reason I ask is, and this came up because of a question on the LuaTeX list, a lot of times people were asking whether this should be hardwired in the engine or not. And so there were some not-quite flame wars, but close to it, over whether this or that should be hardwired. But my understanding is that with LuaTeX you can have the best of both worlds, where it's not really hardwired, but it's not really missing either, because Lua gives you that extension layer. Some people are saying, well, why are you doing this in Lua when it should be in the main engine? Well, Lua provides the extension layer so that macro writers don't have to see it, but it is there for those who need to access it. So you can easily imagine some other Arabic expert, Thomas Milo, for example, not agreeing with every one of your aesthetic decisions and wanting a different system. Exactly. So that's the question.
That's the question: how to define the penalties and demerits for this, because it's ultimately based on some aesthetics. Yeah, but as far as I know, and I'm sure Thomas Milo, if he's not watching now, is going to be watching later, as far as I know DecoType hasn't handled this particular problem. And this is worth saying something about briefly. Here's something from the Metaphysics of Avicenna done in the Bulaq font, and I've already told you how much I like this particular font. And here we have the same, or something similar, done in DecoType. Now, DecoType implements virtually every aesthetic feature of the Ottoman-Turkish Naskh tradition. However, from the point of view of paragraph justification, I personally think it's quite horrible, and if I had to read Avicenna's philosophy in this, I would get a headache very quickly, before I got down the first page. So the shapes are there, although, like I said, I'm not a particularly big fan of this style, but things like interword spacing and justification in the paragraph don't seem to be consistent. For example, right here, it's very fancy, but again, it seems like, in a macho way, everything is thrown in, and yet it doesn't look natural; it doesn't look like a natural Arabic paragraph. So all this fancy stuff is there, but in order for it to work, you have to have some system that makes it natural. And I think this is where TeX can play a very positive role, with its system of penalties and demerits: you can choose various parameters until you find the ideal Arabic paragraph that you want for one purpose or another. Okay, so that was Milo. And if we go back here, there are a few others. Here's a hooked form, which is rarely used in print but used a lot in handwriting. Here is something that I implemented... I didn't do it here, I haven't gotten to the point where I can do it yet, although I'm sure it wouldn't be too hard: a lot of times in the Quran you'll see a rule drawn between the lines, so the lines of text are separated by lines, and then the letter jeem gets chopped so it doesn't cross the rule. That way you can draw a line between two lines of text without any of the letters clashing. In some traditions this is chopped, so I added that particular feature as well. We could go through the same thing in black and white if you prefer, but I want to show now what I've been doing lately, which is trying to get the vowels in. Here's an example; I'm just showing you some experiments. This is the same word, bahid, done in different ways in this particular system. But I want to show you a little bit with vowels. Here is a good case: this is the name Muhammad. Here we have a four-character aesthetic string with the vowels placed on it. It's a bit subtle, but this one is contextually moved to the left as opposed to this one. In other words, if you look at this letter: the shadda goes on top of it, but here it's further to the right, and here it's further to the left, and it's moved further up as well. So this is an example of contextual positioning. If we don't want to use the full feature, we can do something like this: Muhammad, bi-Muhammad, Muhammadun, bi-Muhammadin. So I'm at the stage of trying to get all these vowels and so forth done correctly. And here is a good example of contextual positioning; it's very simple. Here's jeem; if we add the vowel kasra, we move the dot up and put the vowel there.
And you couldn't do this if the dot were hardwired in the font. So one of the things we've done is abstract the dot layer and the vowel layer so that we can control them. There are still a couple of bugs in this mechanism, but it's almost there; we're very close. And here is another set of experiments. This can be done in normal fonts, but I'm adding the Quran annotation symbols from Unicode and testing ConTeXt MkIV to make sure that if you type, for example, one of these symbols followed by three digits, the number pops up in the middle automatically, so you can do verse numbering and so forth. And one of the things we're trying to get to (there are still a few bugs in it): I implemented the internal numbers as marks, and the Quran annotation symbols as marks, and then you're supposed to be able to apply one and ignore the other in order to get them to combine properly. We're almost out of time. I'd like to open the floor for questions or comments or suggestions or criticisms or tomatoes. If anybody has any thoughts, questions, or comments about this... We've come a long way; we still have a ways to go, but I'm starting to see the light at the end of the tunnel. Yeah. You've mentioned this other person, an expert, who may or may not be watching. Is there a competition for who can put out the best typesetting system for this? Are there a lot of people waiting for yours? Or are there people who wish yours wouldn't happen because they like the other one better? I don't think there's anything like that going on. I mean, Thomas Milo and I are friends. His philosophy is a bit different from mine in this respect. I honestly don't think that the Ottoman style of Naskh is comfortable for reading long texts. Maybe when books were handwritten 200 years ago that was appropriate, but in the 21st century I just haven't seen anything done in that particular system that's comfortable to read for more than a page or two at a time. It's good for calligraphy, though, and they have their plug-ins: he has a plug-in for InDesign where you can use his system to do wonderful Naskh calligraphy. Yes? This is a question for... there was going to be a third section of this talk, or maybe next year, something. And these things are really useful, even to someone who doesn't know any Arabic. But I was curious about the input: for a LaTeX user, how do you enter the text to get the incredibly useful variants of the words here? How do you input that? Sure, two things. Number one, you can use any text editor that supports multiple languages on the operating system you use. So if you're on Windows, you just go to regional settings and select Arabic, and you can start typing in Arabic. The problem is when you really want to start working with vowels. When I was in San Diego, I discussed SC UniPad, which is a wonderful Unicode editor that gives you a lot of control over the vowels: the vowels are placed horizontally, not above, so you can select and control them. That's number one. Number two, I don't know if this is ever going to be implemented in LaTeX. This is purely a ConTeXt system, and if, when it's done, someone wants to port it to LaTeX, that's fine. But since I jumped ship to ConTeXt, I've never looked back. So this is a purely ConTeXt-centric project.
In that sense, although the code is available to everybody, anybody who wants to can implement the foundations. And Hans talked about LuaTeX and Plain, so the hooks are there to implement all the features and so forth in another system, whether Plain or LaTeX. Yeah? I think we ought to stop. And say thank you. All right. We're supposed to have a question-and-answer now; on the schedule it says question and answer. Well, let's just take one more. I want to hear his. Go ahead, just one more. The first question could be for Idris. Just one more: you're a real convert to ConTeXt; can you tell us very briefly why? Sure, and this is the short version. The short version is: I was looking to do better critical editions. I wrote Peter Wilson, who did memoir, and asked him if he would port edmac to memoir. He said he wasn't interested. And then Giuseppe, who was proselytizing ConTeXt on the newsgroup back then, said, well, I'm sure we can do this in ConTeXt. So I wrote Hans; Hans took an interest; I switched to ConTeXt. And two years later, Peter ported edmac to memoir anyway. He owes that to me. As for the long answer, we can talk about it. All right, thanks.
|
We discuss the present status of the Oriental \TeX\ project, particularly the problem of Arabic-script microtypography. This includes glyph substitution and hz parameterization.
|
10.5446/30819 (DOI)
|
Today I'll talk about getting started with plasTeX, which is something I use every day. I work for SAS Institute, which is an analytical software company that does business intelligence. The documents I work with are highly mathematically dense, so plasTeX has got a job to do. So we'll talk about what it is in the first place, and under what conditions you might find it useful; then how it works. I won't go into much detail about how it works, because probably you don't care, but you need to know a little bit about the underlying mechanism to take advantage of it. Basically there's an interface, and there are two steps, parsing and rendering; those are the two big parts of plasTeX. The goal of parsing is to generate a document object model, or DOM. This is a kind of tree structure of data, and you can extend it with Python classes. The next step, the rendering, has the goal of getting our output, and that's extensible with templates. We'll take a look at a demo so you can see it in action: the normal sample2e.tex that comes with the TeX distribution, just to have something to start with, and then a couple of papers I downloaded from arXiv.org. And finally we'll have a summary. So plasTeX converts LaTeX sources to HTML, XML, and other markup languages. It's released as open source under the MIT license, which is a very permissive license. And it's written in pure Python by Kevin Smith, who is also an employee at SAS. My contribution to this project was haranguing Kevin at lunch for about two years to get him to write it. I use it as part of the publishing workflow to create our statistical documentation at SAS. Just to give you a brief overview of where I'm coming from: at SAS, I have a system that I call SAS LaTeX. We take LaTeX markup, math fonts (these are MathTime Pro 2), and a custom tag set I wrote just to facilitate things, shortcuts for our writers. Because the people I work with are not technical writers working alongside the developers; that happens at SAS, but the people I work with are the subject-matter experts, and part of their job is writing. So they develop the software and they write, and they use this custom tag set to facilitate their writing while developing the software at the same time. That goes into the SAS LaTeX system, which is heavy with output and with SAS programming code, and we have a system in place to make sure that the code the reader sees is the code that generated the output. Then the .tex file gets compiled by pdfLaTeX to PDF. That's very important for our developers, because they are active in the statistical and mathematical community and they want to make sure their papers look good; our audience kind of expects that, too. And then we also send it through plasTeX to get HTML and XML, and we can also send it through the parser to get the sample library code for customers, so they can see the actual code that produced the outputs. So we have a customized tag set; everything is written in Python, all this other stuff, except the LaTeX. And the whole point is to get high-quality PDFs, and we give those out as books and as chapters. One of the books we produce is the SAS/STAT User's Guide. It's a pretty good trip test, because it's 8,000 pages and we produce it all as one document, and we create it as chapters: lots of hyperlinks, lots of output, a whole lot of math, and a lot of code.
And each night we build the system, and we get about 20,000 pages per night. Okay, so back to plasTeX. What it is good at is these conversions, and one of the main things I wanted to point out is pluggable math engines. It uses dvipng by default. And I know nobody likes having the math inside their HTML be images, and I understand that; but this is the real world, and I haven't seen anything that renders math in a high-quality way. I think SVG will, but I think we're going to have a problem with font embedding there, getting the right licensing agreements to work. So we're using dvipng. Once things change... this is a pluggable thing; it's not built into plasTeX, we just call dvipng, so we could call something else to create jsMath or MathML or SVG later on. So we can adapt as technology comes along. It has translation terms for most European languages; if you need more, or other languages, it's very easy: it's just an XML file that pairs the generated text with a key. And it supports multiple encodings, which is important for us, because everything comes in as Latin-1, but going out it might be Windows-1252; UTF-8 is usually what we do, but sometimes we have a restriction. So when does it make sense to use plasTeX? If you've got something that works on your project, there's no reason to change. But if you're starting a new project, it's an easy package to load and an easy package to use. And if you're dealing with projects too complex for other tools... for me, I needed something that was simple to understand. The Python classes are pretty simple. So when somebody comes to me and says the documentation is broken, you've got a nested paragraph coming out of this environment: with LaTeX2HTML, at least as it was 12 years ago when I started, that would just make me want to cry. But now, with this, I can just look and see: okay, it's coming from this class; here's the template; what's the problem? And, obviously, implementing a production publishing workflow: there's a lot that has to happen there, and this is a good way to support it. So the way we interface with plasTeX is through the command line. There's no GUI, but it's got a lot of switches you can use, and some of the most important ones are to specify themes and navigational elements: if we're going to JavaHelp with one flavor of HTML, then we need navigation that's different from what you might use in HTML Help. So depending on the application we're trying to support, we'll have a different theme and different navigation elements. We specify the input and output encodings. We set counters: the SAS/STAT User's Guide would be pretty hard to build into one document object model all at once, so we build it chapter by chapter, and I can set the chapter counter with each build. You can dynamically set the table of contents depth; for part of your document you may want only the first two heading levels, and in the detailed parts maybe the first three. You can set the section numbering depth. And I'll specify the image generation engine, which is another way of saying: specify how to handle math.
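Typical invocations look something like this; the switches shown match the options just described, but check plastex --help for the exact spellings in your version.

    # default XHTML renderer
    plastex sample2e.tex

    # plain-text output, without per-section file splitting
    plastex --renderer=Text --split-level=0 sample2e.tex

    # DocBook output with the article theme
    plastex --renderer=DocBook --theme=article sample2e.tex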
Schematically, there's a lot of stuff on here, but let me see if I can figure this out. Yeah. Okay. So the two big boxes: here are the LaTeX sources; first it does the parsing, and then it does the rendering. In the parsing part, we tokenize and then parse, and it returns the document object model after it's parsed all that stuff and put the data into the structure; then it comes back to the renderer. And these are just sets of templates for different output formats, so we can render to plain text, DocBook, or HTML (there's also a man-page renderer, but I've never had reason to use that), and we get our different kinds of output. So once you pass the LaTeX source in, it goes in two steps: parse it into data, then render it to output. Okay, the parser gets your document in, and it has to recognize everything it sees, so it knows how to put that data into a structure. It understands most of the built-in LaTeX and TeX commands. You can't pass it the memoir class and expect it to expand everything the right way. You can pass it simple things, just regular LaTeX: new macros, new commands, new environments, and it'll figure them out; but if it gets very complex, with catcode changes, it might choke. So what Kevin did was create Python classes to support the common packages, and those come with the plasTeX distribution. But I can tell you, sometimes these have no corollary in HTML or XML, like a4wide: we don't really have any use for that in DocBook or HTML, so that's probably an empty class inside the plasTeX distribution. It's just so plasTeX won't choke when it sees the use-package line: it knows, okay, I know what to do with that, or at least I think I do, and it can continue parsing. So some of these things are just meant to be ignored, although there's a lot of logic in amsmath and graphicx; there's a different level of logic in each of these packages. So just because it's in this list doesn't mean it does exactly what you think. What we're trying to do is make it parse a document and at least get through it, so you can start working from there if you need to. Boris? How difficult is it to write a Python class for a new package? That's a good question, and I'm going to address it in just a minute, because if it wasn't easy, I wouldn't have done it; I need something simple. So let's take an example: the \framebox command. This is already defined in plasTeX, but it's an example we can look at. We're going to see a backslash, the word framebox, possibly two optional arguments, and then some text. So plasTeX parses that, and this is the plasTeX code to create that class. That's the actual code: class framebox, a framebox element, subclassing a text-box command class, which is something Kevin wrote, a kind of lower-level thing. This is exactly the way I live: if I have a new command or a new environment, I look to see what Kevin already has that's close to it, and then just subclass from it. And often that's all I have to do; I've got a whole list of classes with some name that just say pass under each one, because plasTeX already knows enough to parse them. Another great thing about this interface is args equals and then the signature of the command: all you have to do is put brackets around an argument and it knows it's optional. We name the first optional argument width, we name the second optional argument pos, and the next part is going to be self, the content.
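Written out, the parsing side of that example looks roughly like this: a plasTeX command class whose args string names the tokens the parser should capture, with brackets marking optional arguments (a sketch, not a verbatim copy of the shipped class).

    # a sketch of the parsing side; plasTeX matches the macro name
    # to the class name, and "args" tells the parser what to capture
    from plasTeX import Command

    class framebox(Command):
        # \framebox[width][pos]{text}
        args = '[ width ] [ pos ] self'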
So once we have this, we know the data we're actually looking for when we've parsed that tag. It saves the resulting named tokens in the document data structure, with those parts already named. So there's the parsing; that was one command, and it does the same thing for every command in the document. Next we render it, and that's built on templating, which is the current methodology in web frameworks. That really means we take that document object, that data, and apply a template to every object in there, and automatically generate our document. It's extremely fast. So for example, say we come across this in our LaTeX source: \framebox, we want 15 ems of width, and "note that this is important" as our text. It comes out like that in Beamer, or like that in PDF. The data created by the parser needs to find a template with the same name: we've got the name framebox in the document, and plasTeX applies the data it has to that template. Well, I've hard-coded class equals fbox on this, so that doesn't have anything to do with the dynamic part; any framebox I have is going to get that class, because I'll deal with it later in the CSS. Then there's the templating language, which is called TAL, the Template Attribute Language, which came from Zope. tal:attributes sets an attribute: the attribute is called style, its value is a string, width plus the width argument we got in the data structure. And the content that goes inside this span tag is whatever content was in the framebox. So the output... can you see that at the bottom? span class equals fbox (it just took my verbatim boilerplate there), style equals width colon 15em (I didn't write the quotes; it generated them, like it should), and then "note that this is important" as the text. So we went from LaTeX source through the template and got our output, and it just continues to do that all the way through the document.
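Here is that template, written out as a sketch in TAL notation; the exact path expressions may differ from the template that ships with plasTeX.

    <!-- rendering side: a template named framebox (TAL sketch) -->
    <span class="fbox"
          tal:attributes="style string:width:${self/attributes/width}"
          tal:content="self">boilerplate text</span>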
So plasTeX can render the following formats. Plain text, which can be useful if you need to deliver a text file. HTML. Well-formed XML, which is really easy, because the internal representation of a document within plasTeX is well-formed XML; it doesn't follow any particular DTD, but it is well-formed. If you're careful, you can output DocBook 4.5, and I'm working on 5.0 right now; that one's in italics. But you can't just throw anything at plasTeX and expect valid DocBook: you'll get well-formed XML, but if you put a piece of math inside an index term, it just isn't going to be valid DocBook, and there's nothing we can do about that. But if you're careful about what you do (especially avoid nesting tags; I've found that's generally the hardest thing), you can output DocBook 4.5, and once you're in that, you can go to a lot of different things and interchange with other publishers too. S5 is a simple, standards-based slideshow system in HTML; if you have Beamer, I don't know why you'd want it in HTML, but Kevin wrote the renderer, so I'm telling you about it. There's also a Braille renderer in italics, because it's not really part of plasTeX: it's an open-source LaTeX-to-Braille translator, written by someone in the UK I think, designed to handle math, and it's on SourceForge.net. And I have one more renderer in italics, because I have it working at SAS but haven't put it up on SourceForge yet to make it publicly available; it's a lot harder to do for the general case than for the documents I know I'm dealing with. But that'll be there soon. Okay, let's take a look at a demo. Once again, we'll look at sample2e.tex and a couple of papers from arXiv.org, which, if you don't know, is an open-access archive of over half a million e-prints in physics, math, computer science, the hard sciences. Let's see. That'll be fun. Okay, just so you know what I'm going to do, and so I'll know what I'm supposed to do, I've got some little notes over here: examples. So we'll take a look at sample2e. Now, in this directory you don't have anything, but I'm going to say plastex sample2e.tex, because underneath, plasTeX uses kpsewhich to find your files. So it'll find this, and if you specify a style file, it knows where to go get it; if we haven't defined Python equivalents, it will find the TeX file and try to expand it. So it's using that under the covers. So we just hit return, and I'll show you a couple of things. Okay, I'll scroll up to this in just a second, because a lot happens right here. Before we look at the output, I just want to show you what happened on the screen. The very first thing is that it loaded article.py (.pyc is the compiled version), and that is the article class, which is what sample2e loaded; apparently it didn't load anything else, or we would see more packages here. That came from the plasTeX distribution. It's going to output its files to a subdirectory named after the job name. And it reads its templates from these locations; if we set an environment variable, we can make it look at our own copies, and they will override the distribution's. So it's reading these from the XHTML template library, because that's the default renderer. It tells us what it's going to use for an imager, and then it creates some HTML. It did not know what to do with the at symbol, which I think is backslash-at, for the spacing; Kevin didn't do that one, but it's an easy thing we could add. It created images.tex, and then you're all familiar with that log screen where we're just running LaTeX to get the DVI. Okay, so we've got images.log, and then dvipng created five images, named the way they should be, so let's take a look. Okay, so there's the example document. The default theme is this green; Kevin likes green, that's the default. So we get these bars and navigational elements all in there, and there's metadata at the top of this HTML file. Table of contents... we'll just take a quick look at this. What you would expect: this is the LaTeX sample file, and we get the italics, the quotes, the footnote, and there's the footnote here at the bottom. Let's see. Yeah. At the bottom? Yeah, the logo. How does it render the LaTeX logo? Oh yeah, subs and sups? Yeah, that's what it is. But that would be, yeah... I'll give you about a thousand lines of code to implement those. Okay, well, that could be done, or we could override it with whatever template we want; the way Kevin did it was: if you see \LaTeX, put in the subscripts and superscripts. So I'll blame that on him. But at the top you can see there are breadcrumbs, plus previous, next, and up.
So this is the standard stuff that you get. And then displayed text: we have unordered lists and ordered lists, the kinds of things you would expect. There's a little bit of math, and that's it. And if you click on contents, that takes you back, in our case, to the first page. The displayed math, those are PNGs? They're PNGs, yeah. The verse environment was an idea, but... Which one? The first. Oh yeah, verse is so horrible. But what do you do with verse in HTML? You'd have to do a lot. You could do it, but it would take a lot of work. Okay, now I'm just going to copy this over here so I don't make a mistake. What I'm doing is rendering again, this time with the text renderer instead of the default HTML renderer. It wants to split files up by section, which is why we had those several HTML files; there's no reason to do that for a text file, so I'm setting the split level to zero. And that's all the magic there is on the command line. It's going to do the same thing: it has to parse the same way every time, that part is always identical; what happens at the end, the rendering, is what differs. So let's just look at index.txt. Okay, it's doing as well as it can: things are still numbered, blank lines are where they should be, the characters are right, the quotes are right. The math it left as it was; whatever was in the math, it left there, which I think is probably the best decision you could make. The unordered lists get asterisks and the ordered lists get numbers, obviously. The math is there again, and the footnote is there at the end. Okay, one last thing with this little simple file: we'll use the DocBook renderer. Oh, there are three themes for DocBook: one is book, one is chapter, and one is article. This is an article, and if you don't use the right one, you'll get invalid DocBook. Oh, in the text. Good point. Okay, that's something that needs to be taken care of. Yeah, well, we try to keep them to a minimum. Okay, let's take a quick look at that. XMLmind (this version is free) is an XML editor, and it knows DocBook when it sees it. You can see up here it thinks: okay, this is DocBook, here's what I can do with it. And it's valid; there's a little mark here that says, yeah, it's okay. Of course, it's not very complicated, but it is valid. And let's see, there's something I wanted to show you, because I think it's a neat idea that comes from dblatex, which is a way to go from DocBook back to LaTeX. You can take this document and produce LaTeX again from it. I'm just going to switch the view from their attempt at rendering DocBook to something you can read: no stylesheet. Okay, so the sentence was "it's good at typesetting mathematical formulas like...", and we get an inlineequation with an inlinemediaobject that is composed of two parts, an imageobject and a textobject. The imageobject is obviously that PNG we just created; the textobject is the math. So we haven't lost the math: if we process this with dblatex, it will find the TeX source in the textobject and reproduce the math in whatever format is needed. I think that's about all there is to say about that, and it's the same for displayed math.
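The structure just described looks roughly like this in DocBook 4.5 terms; the role attribute on the textobject is the convention dblatex reads for the TeX source, and the file name and formula here are made up.

    <!-- a sketch of the emitted structure; details may differ -->
    <inlineequation>
      <inlinemediaobject>
        <imageobject>
          <imagedata fileref="images/img-0001.png"/>
        </imageobject>
        <textobject role="tex">
          <phrase>$E = mc^2$</phrase>
        </textobject>
      </inlinemediaobject>
    </inlineequation>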
Okay, well, let's see. Oh yeah, the output directory we've been writing these things to is right here. And if you're creating HTML, you see these CHM project files; does everybody know what a CHM file is? It's Windows Compiled Help. So you've got the files you need to create Windows Compiled Help along with the HTML: you just feed these files into HTML Help Workshop, which is a free download from Microsoft, and you've got a compiled-help version of your document. And the same for Eclipse: if you're familiar with Eclipse Help, or JavaHelp version 1 or 2, Kevin has created the auxiliary files you need, so you can feed those in and get your help in a browser, in a real application. And oh yeah, the main reason I changed to this directory is this file right here: I made plasTeX dump its internal representation, and I just wanted to show you what that looks like (a short script for doing this is sketched below). sample2e.xml: this is the way the document looks to plasTeX. It's another well-formed XML file, with a plasTeX namespace, and you can see maketitle, title, everything; every element has an ID. So we've got paragraphs, and so on; everything is named. The title has a modifier, but the modifier is blank, so we could have had a starred title, I guess. Everything in there looks somewhat like that: everything is named and everything has an ID, so we can get to each element. When we're rendering with a real template, it's not doing anything with this file; I'm just showing it to you so you can see what it looks like inside. Okay, I wanted to show you a couple of other things. This is a file that came from arXiv.org, so let's take a look at what it does. This is a file from the wild; it's not sample2e.tex, so there's a little bit more involved. One thing I found on arXiv is that a lot of the papers there use RevTeX, a class that I didn't know about and that plasTeX doesn't support. It looks like it would be easy to make it support it, but that's one reason I was having failures on a lot of RevTeX papers. This particular one, I guess, used article and amsmath. So it's creating the math there; that'll take a second, and then we'll see what it looks like. But to add a package or a class, it's as simple as writing the Python statements, which are about as simple as what I was showing you before: you just subclass off something else and write a template, and that's it; then you'll be able to parse that new class. It's as simple or as complex as the Python you write. Well, if I can do it, it's not that hard. Okay, so now we've got a paper, still an article. We've got our abstract and the table of contents. There's no reason to spend a lot of time on it, but you can see the citations are there; those were real cites inside the file. The math is looking pretty good, and it does the equation numbering outside. More math. And our breadcrumbs are going across the top, which is the default. And let's see: conclusions and discussion. So there are our bib items. These were actually put into the file as bibitems, so we didn't have to make a BibTeX run; we could go directly from the TeX file and nothing else, with no BibTeX step in the interim. A pasted-in .bbl file, I should say. And it looks like the titles are missing; let's see what that looks like inside. Yeah, these are the actual bib items.
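For reference, getting at that internal representation programmatically is a short script; the file keyword and the toXML spelling below follow the plasTeX API as I recall it and may vary between versions.

    # a sketch: parse a file and dump plasTeX's internal XML view
    from plasTeX.TeX import TeX

    doc = TeX(file='sample2e.tex').parse()  # build the document object
    print(doc.toXML())                      # the well-formed XML dump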
This is what it looks like inside the file. So it loaded article, amsmath, and graphicx. It did some stuff that we don't need to worry about, and then it generated the whole thing. How are we doing on time? Oh, like, nice. I can tell that you guys are mesmerized by this. This is another one that came from arXiv; well, I'm not going to run plasTeX on it yet, I want to look at it first. Because I had to do some tricks: I didn't know RevTeX before, but I could see that I was failing on preprint, on PACS, the email tag, affiliation, and acknowledgments. And I just wanted to get this thing done. So I made a new conditional, \ifplastex, and set it to be false (the block is sketched below). If I use this with LaTeX or pdfLaTeX, it pays no attention to all this stuff; it just uses RevTeX, like the original did. I added this block so that plasTeX actually uses it; it doesn't matter about the \newif, because \ifplastex is always true when plasTeX is running. And so in that case I decided, well, I don't know what to do; I think I'll just make preprint do nothing, make PACS put things in small caps, make email italic, make affiliation bold, centered, italic; and for this new environment, acknowledgments, I just wanted a line break, the word Acknowledgments in small caps, and then the bold text inside a quote environment. So I just did that in the preamble; when plasTeX sees it, it can expand this and know what to do. So I didn't have to write a template for this. If it's simple, you can just get away with LaTeX code; if it's more complicated, like changing counters on a section or something like that, then you need to get back into the Python code. So there's a lot you can get away with. But let's see if that's true. And EPS files, which I imagine might not look so good when we do this, but it'll convert them; I just don't think they'll look very good. Okay, so it's loading book and graphicx. Okay, so we know it's reading the right part; otherwise, believe me, it would be choking, because it doesn't know what RevTeX is. Okay, creating more math. I'll make sure I'm covering everything. Done. Okay, and here this file is, and we've got our email address in italic, affiliation, okay, and our abstract and so forth. We'll just go through this rather quickly. The cites are still working. Uh-huh. I think we'll make that one. Yeah. Oh, yeah? Not that it mattered. This one? Yeah, didn't we lose a couple of authors or something? Oh, yeah, we did. You guys are good. There is no system right now to put in every author; it just has room for more. It's a mistake. Also, the date is today: does that mean this particular article just had an empty date field, or is there a mistake in the translation? That's a good question; I never noticed that before. I don't think the draft sets such a date; it doesn't use it. Let's just take a quick look at the... yeah, so it just got the default header, which probably has today in it. Yeah, I bet you're right. Oh no, it does set a date: date today, yeah. Good catch. Yeah, I never thought about how that could mess you up.
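Here is the preamble trick written out; \ifplastex is the documented conditional (plasTeX forces it to true), and the fallback definitions follow the ones just described.

    % false under LaTeX/pdfLaTeX; plasTeX forces it to true, so only
    % plasTeX sees the substitute definitions below
    \newif\ifplastex
    \plastexfalse

    \ifplastex
      \newcommand{\preprint}[1]{}                      % do nothing
      \newcommand{\pacs}[1]{\textsc{#1}}               % small caps
      \newcommand{\email}[1]{\textit{#1}}              % italic
      \newcommand{\affiliation}[1]{%                   % bold centered italic
        \begin{center}\textbf{\textit{#1}}\end{center}}
      \newenvironment{acknowledgments}
        {\par\textsc{Acknowledgments}\begin{quote}\bfseries}
        {\end{quote}}
    \fi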
And EPS files — which I imagine might not look so good when we do this, but it'll convert them; I just don't think they'll look very good. Okay, so it's loading book and graphics, so we know that it's reading the right part — otherwise, believe me, it would be choking, because it doesn't know what REVTeX is. Okay, creating more math; I'll make sure I'm covering everything. Okay, and here this file is: we've got our email address in italic, our affiliation, and our abstract and so forth. We'll just go through this rather quickly. The cites are still working.

Didn't we lose a couple of authors or something? — Oh yeah, we did; you guys are good. There's no support right now to put in every author — it just has room for so many. It's a mistake. — Also, the date is today. Does that mean this particular article just had an empty date field, or is there a mistake in the translation? — That's a good question; I never noticed that before. Let's take a quick look at the... yeah, it's just getting the fallback book header, which probably has \today in it. Yeah, I bet you're right — oh no, there it is: \date\today. Good catch. I never thought about how that could mess you up.

What about the pictures? — Oh, the pictures, yeah. I knew you were going to ask that, because they're not very pretty. They need to be converted to something better. — Wouldn't that take a lot of time? — Well, I would assume that's what happened: plasTeX converted from EPS using the Python Imaging Library. — Is it possible to set up a resolution? It seems that if you just say 'do it with high resolution', it will be better. Is it possible to configure a different resolution? — Not off the command line; I'm not sure about that. — Which tool converts the EPS file? — It would be plasTeX itself: if it's not in a math environment — if it isn't specified that, hey, go use LaTeX — then Python is probably thinking it needs to render for a web kind of resolution. I never even looked at that, so I'm not sure about it; but I think if it was a real project of mine, I would have a PNG to bring in instead.

The only reason I'm lingering on this page is the acknowledgments environment that I set up with LaTeX code. And plasTeX understood what to do: put 'Acknowledgments' in small caps, and then a quote environment with the text inside it bold. So even though we changed what it did, plasTeX understood it just from the LaTeX code.

Okay — one thing I wanted to do was start my talk off with a command, but it slowed the machine down so much I knew that wasn't going to work; I'll tell you in a moment what I was trying. — I noticed that when you used \maketitle, the size of the title in the header was... — Not so good, yeah. — So, can plasTeX control that? — I think you could, but you might have to make some changes to your TeX file; I'm not sure, because I don't even know what font they're using. Right now it's Computer Modern, I think, but you could load any kind of font that you wanted to, or use MathTime Pro; the math will come out as images, but that's what's used to create the images. — I have the impression that it just doesn't pick up the text size outside and just renders at some default. — What's difficult for rendering is that the meaning of the various levels of section headings depends on the client, not on the producer of the document; you can't really have control over that. — Right, because we've lost context. That's why you can't really do them all; you can make a default mapping, with main headings large, and so on. — What might be worth looking at, however, is simple math — math that has only Roman letters and the limited set of operators available in HTML, subscript, superscript — you could translate that to HTML. — That would be good; it wouldn't be too hard. You're right, it's a simple string. And that's the way Kevin had things running at one time. But we had complaints from the writers that they did not like to see it mixed: they'd have one equation that happened to work out as simple HTML math, and the next one didn't, when they were supposed to be showing the similarities between the two with one difference — and it did look kind of odd that way.

I downloaded the Python library reference. At one time, Python's documentation was all written in LaTeX, so I just downloaded the whole thing and said 'plastex lib'. And 45 minutes later — which is why I didn't do that live for my talk — we got the Python library reference. It made it all the way through; it didn't choke anywhere.
Some things could look better, but let's go through here. Take a look at that — there we've got nested TOCs. It looks okay; it could look better. But that was just running plastex on lib.tex — all these TeX files are just included into lib.tex. For me, that was pretty impressive: 45 minutes, and it didn't choke on anything. And I know it's standard LaTeX — I mean, there isn't a lot of fancy stuff in there — but it did make it through, and we actually have a working library reference now. Okay. I think that's it. Thank you.

I have a question — two questions, actually. One: do you have any plans to support the Microsoft document format, the one that supposedly uses Open XML? If I need to write a paper and they will only accept it in Word, then... — Yeah, I know what you're talking about. Actually, I get requests to have things put into RTF, into Word. I think the thing to do is go to DocBook, and then you can output RTF; and I would imagine you could output Office Open XML. — But Microsoft is still in the picture. — Well, what are we supposed to do? What can you do?

Which MathML converters have you been looking at? — Well, I haven't spent much time looking at MathML converters, because I took a look at MathML renderers, and until we have something that does a great job, not just an okay job, SAS doesn't want to mess with it — but of course we've got customers to please. And right now, I don't know what the problem is with the math as images: to me it looks pretty good, unless you rescale it a lot, in which case, obviously, it looks bad — but our customers, I don't think, have that expectation.

Are you using this for your future documentation, or just for papers? — No, it's all the documentation. Online documentation. — It presumably has a lot of cross-references. — Lots of cross-references, bibliographies, links to code, listings, outputs, math. — So you actually do want MathML, because you want to link the math to that stuff, some day, no? — I want to change to MathML once it's workable, yeah. Definitely. I'm not against MathML. — The next question is: do you know if the plasTeX people are working on a converter? — They're not working on that. That's why it's meant as a plug-in, so we can take expertise from people who are more expert. — Well, you looked at TtH, and... somehow, all these other people who want to use plasTeX need to get a group together to do that. — I hear you, but we're customer-driven, and we aren't driven by that yet. I think we will be, but I don't think we're there yet. — A lot of these things — if customers don't know something is possible, they don't know to ask for it.
They don't know that you might want to click on an equation and get it sent off to your SAS program, or Mathematica, or whatever. — Well, that's neat, but I think we're concerned about how it looks as well, and so far, personally, I haven't seen anything that looks good. — Let me take an example from a not-exactly-rival of yours [the name is unintelligible in the recording], where they have all the documentation with these little links in the maths, because they've got all the math in MathML. And okay, the rendering is still the problem, but don't underestimate it — that would be great. — Any other comments? — Yeah, one comment: a journal about two weeks ago published an article on research funded by Design Science [the details are unintelligible in the recording], which went through every single step in a very detailed way. It's a great thing. — Nice. I'm looking forward to reading it.
|
We discuss plas\TeX, a software package for converting \LaTeX\ documents to other markup languages. We will cover typical usage with examples of how to create \acro{HTML} and DocBook \acro{XML} from \LaTeX\ sources. We finish with a demonstration of converting a simple \LaTeX\ source file and a brief overview of how to extend the package to handle custom commands and environments.
|
10.5446/30829 (DOI)
|
[The opening of this talk was not intelligibly captured by the automatic transcription. The first recoverable fragment concerns how non-TeX maths systems handle input:]
...the stuff in maths where you say backslash-sign or something like that — these systems just have a list of reserved words, as an ordinary programming language would, so the system knows those aren't just the symbols in a word; it's treated a different way. I might be doing an injustice to exactly what those maths systems do, but that's the way I see it; that's the way it looks to me. [A further stretch of the recording is unintelligible.]
I'll stick to the LaTeX language. So, a few more things. Some MS applications are — well, let's put it this way: the developers of the math stuff would like to accept some sort of standardised TeX input, but I'm not sure that they're actually pursuing that as part of the product at this moment. They want interoperability, basically: they want people to be able to cut and paste TeX into Word and have it processed by their system. But cutting and pasting has a lot of its own problems and needs standards and things. So in a sense, perhaps one of the reasons why they are not pushing that within Microsoft is the subject of this afternoon's talk: they haven't got a standard they can refer to when they're talking to the general Word people. You've got the math Word people and you've got the other Word people, who — well, they may like to invent standards, but they like to use them as well, shall we say.

And then you've got the whole range of software that actually sort of does maths — whatever you think doing maths might be: whether it's theorem proving, proof checking, what you might call the really high end, or just something at risk of becoming a calculator. A lot of these things — I don't know if HP have got a calculator that does this, but Texas Instruments would quite like a calculator that will accept TeX in the near future. No, what I mean is: I know people on the development side there who would like something like that, but that's not quite the same as saying it will happen on your desktop. But anyway, there's a lot of things that accept LaTeX that you would expect to understand it. And, well, there's the whole problem of what I call Excel-like spreadsheets — I mean all the open-source things that work in the same way and accept the same sort of language for formulas. A lot of people are trying to get them into the fold, because they are used by people doing mathematical-type things who actually communicate quite a lot with the rest of the maths world. But I don't know whether the obstacles are religious problems or serious technical problems. Sorry, I went too far there.

I put in GELLMU because William will get cross when he reads these slides otherwise. I'm not going to say much more about it: if you know about GELLMU, you know about GELLMU. I meant to say more, but in fact these slides were done in a bit of a hurry. You can read the title here, I hope. So, these are the sort of things that definitely process TeX — process it into something else, typically, or use it for their own purposes. There are a few examples down here; the bit that's off the bottom of the screen just refers to things that I've heard about here this week that I didn't have time to get into the list.
These two — Office 2010, which I've been talking about quite a bit, and OMDoc — let me explain why I put them down there. I'm including the linear format as a TeX-like encoding: it has various interesting differences and similarities, and Barbara will no doubt tell you what she does and doesn't like about it. What I'm getting at here is that it's a document production system completely separate from TeX, but with a similar sort of interface. And OMDoc is a system for producing quite specialised OpenMath and MathML documents. They decided that they would use LaTeX as their input: the documents look like LaTeX documents with a special style, but they're next processed by — well, I'm not sure it's actually processed by LaTeXML at the moment, but it's linked in with the people doing LaTeXML.

The real interest of both of these is this. Office 2010 decided that linear input is something that mathematicians — or perhaps the whole world — actually needs; that GUI menu-palette interfaces are not for power users. (And a nice thing about Office 2010, of course, is that it nicely integrates the palettes with the linear form.) For OMDoc, the decision to use LaTeX was partly that they want very careful control over highly structured documents, and their conclusion was that none of the structured document editors were usable — and also, they need the math as well. They decided that a good LaTeX environment, like TeXnicCenter, or maybe just a suitably enhanced Emacs, was much the best way, certainly for their audience, which is a team of mathematicians and scientists entering the stuff. So that's the interest there. I'm aware that it might be lunchtime soon, but I'm not sure what people want to do.

Then there are other systems that do use TeX to process LaTeX, but don't use the LaTeX format. An example: I don't know how many mathematicians there are here, but some of them write reviews for Math Reviews, and at some point, a few years ago, Math Reviews said: ah, you can use LaTeX when you're submitting your review — the web interface gives you that as one of your choices. But that beautiful LaTeX document isn't actually run through LaTeX; it's run through a slightly changed version of the system that they run non-LaTeX reviews through. So that's quite an interesting example of the ubiquity of people using the LaTeX way of doing things, without it necessarily being processed by a classic LaTeX. Again, people have written a picture environment that works with plain TeX, and some other things, so maybe you can use those without a real LaTeX underneath. There are other things around where the LaTeX language — the LaTeX syntax and everything — is being processed by TeX, but not by standard LaTeX. And then there are things from Design Science, like MathType and their other equation software, that might put things through a TeX — probably their own particular version — or might not, depending on what they're doing with it. I suspect there are quite a lot more things like that around these days.

Then I went on to say: in a sense, you might claim that if you use the package hyperref, you're not really putting the stuff through a standard LaTeX anymore, because so much of the kernel has been changed there.
There's also an idea that we've been discussing with Hans and Taco: that the right way to process LaTeX documents with LuaTeX may actually be to write a ConTeXt API, so that we can use the fact that ConTeXt has been developed in parallel with LuaTeX. Both Hans and I have said, yes, that's probably the right way to go, but we haven't pinned it down any further than that. So whether that interface is a sort of fairly low-level ConTeXt, or whether it really is ConTeXt reading LaTeX syntax, or something in between the two, isn't entirely clear.

In fact, another interesting example: when we process our LaTeX test suite, we run it through a standard LaTeX, because that's what it's trying to test — but we don't actually look at the DVI file. We're using TeX for a rather different purpose when we do that. It's another way in which these things are not necessarily just being processed through a standard TeX with PDF or DVI coming out the other end. Of course, plasTeX also uses TeX in ways that, even after this morning's talk, I'm not entirely clear about — exactly how it uses TeX and in what form; that's something I need to find out. So that comes into this category as well.

Then there's a lot of software that outputs LaTeX — what I was calling software fragments. TeXML does just that: it takes the XML and outputs TeX. And there are lots of other things around doing that, in turn, for other systems and so on. Nearly all of the software I was describing that actually does maths — I'm not sure 'does' is quite the right word there — will output a TeX or LaTeX fragment for you, unless it comes from Microsoft. Again, the typical examples are the Design Science and MacKichan products; in the case of MacKichan, it'll output a whole LaTeX file for you, which may or may not be what your publisher actually wants to see — that's another matter.

Then there's the whole area — the idea has been around quite a long time, I guess, but now there are actual real examples — of things that recognise mathematical input. The two main classes of these (there may be others) would be, first, things that try to recognise people's writing on a tablet of some sort; and second, things that try to extend OCR: recognising the mathematical symbols is hard enough, but you also want to recognise the structure, so that you can build up a representation — the system knows the visual semantics of the thing. But of course then you've got the question: what language do you express those in? That's often LaTeX, just because it's there, and people can then run the result through something else to see if what they get is actually what the thing scanned in the first place. So there's some interesting stuff going on in that area. A slight variant of that: rather than scanning a piece of paper, as you would in typical classical OCR, you scan a PDF file — in the sense that you have the PDF version of something and you treat it more or less as if you know nothing about it, except what the visual output looks like.

OK, now we come to the commercial break. I've already told you a bit about this stuff from Microsoft; in fact, they're building it in at the system level, starting with the new version of the operating system.
At that level, the math tool is in principle available to any application running on there, and if you actually want to get at it directly — I think the technical jargon is that there's an ActiveX control — it's available to anybody who's developing for the new system. So here's my challenge: can Mac OS do that? I mean, I love Macs, but I thought I'd get that in. An even more important one, for Karl: when do we get open-source emulations, or whatever we can get away with? I've actually asked the development people whether there are any specific papers on this stuff; I haven't got a definitive answer from them.

OK, back to my TeX reference — about five more minutes. So, this non-standard use. I think it's actually been going on a long way back; I haven't traced the history, but I think the current century saw a big explosion. It's now well over twenty years since the LaTeX manual came out, which is when I tend to date the start of LaTeX — although I know Leslie was working on it, and people at Stanford were using it, far earlier than that. So, some twenty years on, there seemed to be a big explosion of LaTeX flying around the place. The metaphor I'm using is that, in some sense, LaTeX escaped from the lion's cage — the lion being TeX. And I think this is something to do with achieving some sort of critical mass of people who just accepted it as the language of mathematics. Here I'm very much talking about mathematical fragments rather than structured documents as a whole.

So what about us lion cubs? Do we still feel that the TeX community should be interested in all these other ways in which the LaTeX language is being used? Well, on that, whatever I say will just add to the confusion, so I'll leave it for now — probably until the next slide. Oh yes, I was going to do a bit of wild speculation, but I can leave that until after lunch. So we've come to lunch, as I suggested, a little bit earlier than it is on the programme. Thank you for part one.
|
As some of you will be aware, and all should be, LATEX code, possibly with some variations, extensions or simplifications, has for a long time been used, raw and unprocessed, as a lingua franca for communicating mathematics via text files on computers. [I have even seen it used on napkins and coffee tables.] This led to a proliferation of LATEX-like input systems for mathematical information, and this in turn produced a reluctance by users of maths notation to adopt any other type of input. However, much of this math input is not intended (primarily) to ever be input to a TEX machine (it may get swallowed by a TEX-like system after, for example, some copy-paste actions). More recently, systems are being developed to produce whole LATEX-encoded documents that are to be processed by systems such as OMDoc or LATEXML and so will not necessarily ever pass through a TEX-like engine. Systems such as PlasTEX also belong in this category, despite using TEX as a helper utility in their implementation.
|
10.5446/30831 (DOI)
|
Hi. Karl and I, as Klaus said, are going to talk about the TUG interviews project and the book. (It keeps going away after every — The what? — Left-right arrow. — Okay, I'll try that next time.) Here's the outline of what we're going to speak about, and we'll start with the first point.

Basically, the idea behind this was inspired by the Mathematical People books. I don't know how many of you know these books; I believe they were originally interviews from the College Mathematics Journal. They're wonderful books, and I recommend them to you if you haven't read them and you're interested in math or mathematicians — probably many of the mathematicians in these books are the mathematicians whose books we studied when we were math students. My thought was that, since I was new to the TeX world and didn't know all of you people, it would be interesting to have a set of interviews so I could learn something about you — so it's a little bit selfish. I also had the idea that perhaps old-timers might be interested in what some of the other people had done, because they didn't necessarily know the backgrounds: they might have interacted with each other on some package, but not had much to do with the rest of each other's lives. And I believe that technology is done by people, and therefore you can get some sense of what the technology is about from learning a little bit more about the people. When I suggested this to Karl, I think it's safe to say he was quite encouraging.

So let's talk about the interview method. The first couple of interviewees were chosen carefully, because I was afraid that when I — somebody nobody had ever heard of — came and asked to interview somebody, everybody would say: who are you? Never mind. So I chose the first few people pretty carefully. I approached them; they did the interviews; and over time, the issue of being interviewed by somebody who hadn't been part of the TeX community for long became less important — plus, I had been part of the TeX community for a longer period of time. We choose the interviewees based on diversity: we want people who are users, developers, involved in the administration of the local organizations, and so on; we want people from different countries. But in particular, I want people who are willing to respond relatively quickly, so we do take a look at whether this is a person who writes a lot — if so, we're more likely to ask them to be interviewed, because maybe it's not so hard for them to send an email back.

We do plain-text questions and answers by email. For the first couple of interviews, I sent 12 or 15 questions, thinking that would be easy, but that turns out to be a big burden: when people see that many questions, they can hardly deal with it. So I changed the approach to: we'll send two questions, and what they respond to those two questions will result in three or four more questions, and those will result in three or four more, and maybe then we'll have about enough. At that point, we convert the emails into HTML — I did that manually for a long time. Karl proofreads them, and maybe we do a little bit of editing, trying to keep the interviewee's voice. I think it's safe to say we use the same style, in some sense, that TUGboat does, which is a kind of international English style: if you do it this way in England, or this way in Germany, or that way in the United States, that's the way we leave it.
At that point, I may sort the questions around a little bit to improve the flow, and I may change the questions to better suit the answers, to make things look more spontaneous: I may have had a long question to get a particular answer out, and then I'll change it to a little question, so it seems like the person answered quite spontaneously. We send it back, the interviewee reviews it, and what the interviewee says is what goes — we don't print anything they don't want.

A little while later, we got the idea that maybe we ought to have photos. So we began asking for photos, and we went back to the early interviewees and got photos from them, mostly — and made a mistake, but I'll come back to that.

Now, the next thing that happened: we began to think about a book. So we added to the things that we tell interviewees: if you agree to do the interview, not only will you have the final say about the interview, but you're also agreeing to participate in the book — and you won't have to do any more work than perhaps reread the typeset interview, if you want to. And then after just a minute we thought: well, if the book is going to be in LaTeX, and the interviews are going to be in HTML, how do we avoid doing everything twice? So we wrote some m4 macros. Presumably most people here know about m4; for those of you who don't, it's the nth iteration of a macro processor originally written in the United Kingdom by Christopher Strachey, published in the Computer Journal, I believe, in the 1960s.

The format of an m4 macro definition is 'define', open parenthesis, arguments separated by commas, close parenthesis. So here we're defining the macro _par, a paragraph, to be an HTML paragraph. (I may be able to point with this less straight than my hand... just an option.) And here we're turning _emph into italicized and un-italicized HTML. These square brackets are because we really don't always know what's in our arguments: we don't want a comma in there that will make another argument, so we put quotes around it. And we've probably got one square bracket somewhere in the text, so we'd better have two, because it's unlikely somebody used that. That's the kind of clunky, ugly convention we use. You can imagine how, with different m4 definitions, _par could turn into <p> or into a blank line, and _emph could turn into \textit of some text. So that was our idea — see the sketch below.

Now, some may say: why didn't you use XML, which could target this or that? Or — I was just reading, Tim, you're going to present something tomorrow that might have helped us. The simple answer is: Karl and I were lazy. We had a relatively straightforward problem, not too many different kinds of markup; we both knew m4; I could make it go to HTML, and Karl could make it go to LaTeX. So that's what we did.

Here are some of the commands we used: the interviewer's name and initials, the interviewee's name and initials, a header that places the photo, a first-question form that uses the full names, a second-question form where you just put the initials, footers, links, a few little markup things. Here are a few more: kinds of dashes, ampersand, some environments — begin-verbatim and end-verbatim, ordered and unordered lists — plus a whole bunch of definitions for people's names with diacritical marks in them, and other words with diacritics.
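(To make that concrete, a minimal sketch of the idea — the macro names match the talk, but these particular definitions, and the changequote line, are reconstructions for illustration rather than the project's actual files:)

    changequote([, ])dnl   use square brackets as the quote characters
    dnl html.m4 -- definitions for the HTML target
    define([_par],  [<p>])dnl
    define([_emph], [<i>$1</i>])dnl
    dnl latex.m4 -- the same macro names, retargeted at LaTeX
    define([_par],  [\par])dnl    the talk says ``a blank line''; \par is equivalent
    define([_emph], [\textit{$1}])dnl

Feeding the same interview source through m4 with one definition file or the other selects the output language.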
Here's Arthur's — where's Arthur? He's not here today. Here's a little bit of Arthur's interview. We defined the date, the interviewer's name, the interviewer's initials, the header, the first question; he gives his first answer, there's a paragraph, he gives the next paragraph of his answer; he gives the second question — remember, that just has the initials — gives the answers, and so on. So that's how it is, and Karl in a minute will show what came out of this.

Okay, then we decided to do the book. The first question was: how big should the book be? We chose seven by ten, basically because it was big enough that we didn't have to use too many pages, and small enough that we didn't have to do double columns. Karl then created some m4 macros targeting LaTeX, and he did stuff with — I don't know, all kinds of Unix tools — to take the previous interviews, which had not been done in m4 but just in HTML, and convert them to m4 first; the idea being that when we later want to convert the edited interviews for the book back to HTML for the website, it would be nice to have them all in m4. We circulated the page proofs to the interviewees — oh, I skipped a step: Barbara volunteered to edit all of these, and did. We circulated page proofs back to the interviewees, and many interviewees said at that point: well, if Barbara edited it, I don't have to look at it — because Barbara's editing abilities are quite renowned in our community. And I'd like to stop for just a second and appreciate all the work she did. We invited her to be a co-editor on the front cover; she declined. But at this point, we'd at least like to give her an autographed copy of the book. So, Barbara, thank you very much.

Okay. Now, this is my mistake: when we decided we should have photographs, I said, give me photographs that would fit the HTML page. I forgot we were going to need higher-resolution photographs for printing. This caused a bunch of problems: I had to go back to many people and get higher-resolution photographs, and some people didn't have them — we'll come back to that in a minute. And then finally, there's the question: do we order the interviews in the book chronologically or alphabetically? We ultimately decided on chronologically, and then made a second table of contents that is alphabetical.

I think it's your turn, Karl. Is that right? — Yeah. You push this button to get the next thing. — Okay. I'm turning it over to Karl for all the technical details. — Well, Dave said 'technical', and actually I wanted to mention that even though my slide is titled 'technical LaTeX details', Dave is a technical guy too: he did all the initial m4 and HTML. Another thing Dave mentioned in passing was targeting LaTeX — at the point we knew we wanted to do a book, we actually hadn't decided; we had no idea whether we wanted to do it in LaTeX or plain TeX or ConTeXt or whatever, and I'll talk about that briefly in just a minute. So, about the book: I organize all my projects with a makefile, so that I can just type 'make', everything happens, and the final book comes out if there are no errors. There's a top-level TeX file for the book; it just includes all the interviews — nothing very special about that. The only slightly unusual thing is that I then use some features in GNU make to extract the list of interviewees from that top-level file, to be the dependencies in the makefile, so that I don't have to write the list of 50 names twice, maintain it twice, and inevitably make mistakes.
There's also a style file for the book — which, actually, I guess... is it a class? No, that's not right; it is a style file. And that basically just loads a bunch of other packages, most of which Boris just mentioned — there's a rough sketch of it below. Geometry, to deal with the page size and all that: as Dave said, our criterion was single column and maximum size given that criterion, because fewer pages means less cost in the end. Graphicx, to handle images. Fancyhdr, as Boris mentioned, to deal with the running heads and the running foot. And microtype, which I'm not going to go into the details of, but it's some features that Hàn Thế Thành implemented, and the bottom line is that you say \usepackage{microtype} and most of your overfull hboxes go away. Oh, it's a good thing. This might be the place to mention that a lot of the editing per se was actually about that: in HTML, the questions of page size, correct hyphenation, a nice line break, nice page breaks — all that stuff just doesn't come up — so a lot of the editing was really getting that stuff to work right. The last change we made in the book was to fix one page break, so that you wouldn't have a widow line at the top or something.

So anyway, getting back to the packages: we used url, not hyperref — hyperref is not loaded. Not because I thought it wouldn't work, but because all the interviews are already online in HTML, so we don't need the PDF to be the online reference version. There are a few Vietnamese words in the book, in Phil Taylor's interview — it might be less than half a dozen words — but we ended up loading Babel and using all the machinery there to switch into Vietnamese. I tried various shortcuts, which turned out not to be shortcuts, and I was very grateful that all of that was there and all I had to say was \foreignlanguage{vietnamese}{...}. And again, the fonts — for the Vietnamese part in particular: Hàn Thế Thành, again, I don't know how he does all this stuff, but he has extended almost all the free fonts in the TeX world, as well as the Microsoft fonts (free in price but not free in freedom), with Vietnamese glyphs, so that they're all available in the standard LaTeX Vietnamese encoding.

The other thing about the fonts: I knew I wanted to use Inconsolata, a new typewriter font designed by Raph Levien. There's an interview with him in the book which is not yet posted on the website; it will be, as soon as we finish regenerating. He's quite an interesting guy: he was a prime developer of Ghostscript for several years, and he has several other font projects. I knew that if Inconsolata was going to be ready in time, we would want to use it, and I just wrote the little style file and so on for it; that's all on CTAN now, and it will be in TeX Live. For the main text, I ended up choosing Charter, which was designed by Matthew Carter at Bitstream. Bitstream donated it; it was one of the very first high-quality, freely available fonts — probably the late 80s or early 90s, I think, is when Bitstream donated it.
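(Pulling those pieces together, a rough sketch of what the book's style file amounts to — the package list matches the talk, but every option and setting here is invented for illustration:)

    \usepackage[paperwidth=7in,paperheight=10in,margin=1in]{geometry}
    \usepackage{graphicx}    % interviewee photos
    \usepackage{fancyhdr}    % running heads and feet
    \usepackage{microtype}   % most overfull hboxes simply go away
    \usepackage{url}         % deliberately not hyperref
    \usepackage[vietnamese,english]{babel}   % for Phil Taylor's interview
    \usepackage{charter}     % one of several ways to select Charter
    \usepackage{inconsolata} % Raph Levien's typewriter font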
And I used this very nifty thingy — we'll see if I can make this thing work — ah, the tug.dk font catalogue, which I highly recommend to anyone trying to choose a font for LaTeX. Palle Jørgensen, a Danish fellow, created this website, and it shows you not only samples of most of the interesting free text fonts that are available, it also shows you the exact LaTeX code you need to type to get each one, which is often very far from obvious. And in particular, Paul Pichaureau — who I know nothing about, and wish he'd either be interviewed or write an article — has extended Charter to be supported by math. I'm not exactly sure how much; there wasn't much math in the book, so I didn't really see very much of it. But that was what Palle's site told me to use, so that's what I used.

And it turns out there are several versions of Charter, even beyond Paul's math-extended version and the original version uploaded by Bitstream. There are at least two sets of metrics for Charter on CTAN: the original ones uploaded by Bitstream, and a new version — I'm not exactly sure where it came from — which is purportedly newer. But it turns out that the new version is bad: the kerning table is quite awful, so if you have a 'V' and a period next to each other, they'll almost be touching — it's way, way over-kerned. And Paul had based his work on the new metrics, because they were newer. So I just wrote a little script to take the kerning table out of the old fonts, put it into the new ones, and make everything work. And I should say that the makefile and everything interesting here is online — it's talked about more in the article — so anybody who cares can look at it.

I said I'd mention why we ended up choosing LaTeX, and the answer was precisely: dealing with fonts. When we were first trying to proof the book and decide which font we wanted to use — we had half a dozen or so reasonable choices — I just wanted to swap out a couple of lines in the style file, re-typeset the book, and get a grip on it. You cannot do that in plain TeX, and I did not know enough ConTeXt to be able to do it, so we ended up with LaTeX kind of by default. So that is kind of the story of the LaTeX development of the book, and here, I believe, is the output corresponding to the interview that Dave just showed us — a sample page of the book. And we have the actual book here; I guess Dave has been passing them out. — No, no, I've been holding them up until now, since I have some drama at this point. — Oh, okay. Well, I'll try not to steal your thunder.

Okay, so LaTeX was really good for lots of things, but as we all know, LaTeX is not completely good — no software is completely good. Pain and suffering with LaTeX: there were two significant items that I did not know about when I started this project and that are now deeply embedded in my brain. One is indentation of verbatim. A few people wrote little blocks of code — Phil Taylor again comes to mind; he was actually the one who insisted that the verbatim blocks be indented. And Phil was right: it looks better if you indent the verbatim blocks. It turns out that LaTeX typesets verbatim as a one-item list with no list marker. There's a rationale for it — I'm not saying it's the wrong thing to do — but it means that \leftskip and all the other usual ways of indenting something do not work. And I never did find a real solution to that.
I simply started to use fancyvrb — \usepackage{fancyvrb} — and it has a left-margin option: you set it and it does it. And I believe that, as far as I could tell, it re-implements verbatim from scratch in a different way. Another thing I just wanted to mention as an aside, since I also mentioned microtype — this didn't come up in the book, but it came up in the TUGboat issue which just went to print: if you have two lines of verbatim and you've enabled microtype, they might very well be shifted relative to one another, which is not at all what you want or what you expect — it can be most of a character's width. There's information in the microtype manual about how to work around that, and there might even be a fix for it in TeX Live, since I complained bitterly to Thành — though it turns out you need a new primitive in pdfTeX to really fix this. — If you use fancyvrb, do you still have the same problem? — I do not know. I believe you do, because I think it's not about the verbatim mode... well, I'll stick with my first answer: I don't know. Like I said, it came up in TUGboat, and in the TUGboat article fancyvrb wasn't being used.

So that was the pain, I guess. Now the suffering: vertical space above lists. We wanted to control the vertical space above lists. There is a LaTeX parameter for this, \topsep — there should be triple dangerous-bend signs around it in the manual, or something. It does work for a while, but as soon as you have a font size change in your document, the value of \topsep gets reset to whatever LaTeX's original value was, which is not your value. There are some packages to deal with this — there's a FAQ entry about it — but those packages always did either too much or too little for me. So I delved into the code a bit, and it turns out that a secret macro which LaTeX defines, \@listi (the 'i' just means first-level list — we didn't have any nested lists, thank God), is where all those things are squirrelled away. And if you redefine \@listi, and not just \topsep, then you can control the vertical space above lists and feel empowered. I felt empowered, believe me.

Okay, so that was LaTeX; on to m4. I just want to mention two things here as well. The first is a little thing about date conversion. In all the sources we used a numerical format, so that we could sort the dates and keep them uniform. For the actual printed book — I think it was Barbara who said one day: you know, it would be a lot prettier to say 'January 1, 2008' instead of a bunch of numbers. And we said: well, okay, if Barbara says to do it, better do it. So I poked around and found the m4 built-in esyscmd, which just lets you execute an external command — like \write18, you could say — and there's the GNU date utility, which I highly recommend to anybody working with dates; it can do the actual parsing and conversion (sketched below), so I didn't have to program any of that. The only thing that was hard about this was finding out about the existence of esyscmd. The GNU m4 manual is very thorough, very extensive, very long, and it's definitely a reference manual — you think some of our things are reference manuals; this is a real reference manual. It's good. But anyway, so that was good.
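(Something like the following — assuming the square-bracket quoting shown earlier; the exact format string is illustrative, and getting ordinal suffixes like '1st' would take a little more shell:)

    dnl  _date(2008-01-01)  =>  January 1, 2008
    define([_date],
      [esyscmd([date --date=$1 +'%B %-d, %Y' | tr -d '\n'])])dnl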
Dave mentioned in passing my second primary m4 pain, which is about commas. In the book, we had things like this — and this is something somebody actually wrote: Sure, "I can learn that." — with part of it in quotes. So we have one of our m4 macros, _quote (we named all our macros with leading underscores, just to keep it simple). We went along happily with this, and we didn't really notice any problem. And it's a macro in the first place because, of course, in HTML you want to emit the ldquo and rdquo entities, while in TeX you want two left quotes and two right quotes — or grave accents, if you want to fall in with the majority of the world, which we don't. So we went along fine with this. One day I was proofreading the output of the book, and I noticed that the 'I can learn that' and the comma had disappeared, and all we had was 'Sure' in quotes. And the answer, for anybody still in suspense: _quote being a macro, m4 separates its arguments at commas. The first argument was 'Sure', the second argument was 'I can learn that', so it dutifully quoted the 'Sure' and ignored the 'I can learn that'.

It's trivial to fix: you just put in those fancy double brackets which Dave talked about. That was fine, but there were probably 20 or 30 macros whose arguments could contain commas, and some of the texts could be quite lengthy, crossing line boundaries, so I couldn't use grep or perl — those mysterious utilities Dave may have used earlier to convert his HTML to m4; any of the line-oriented utilities were not going to fly. In fact, really the only tool which could parse the m4 input was m4. So I ended up writing a helper macro — a check-empty macro — and a great mass of quotes and parentheses that really boils down to passing the second argument of each such macro to this helper, and all the helper does is give an error message if that argument exists; then we just put the first argument in quotes. The sketch below shows the shape of it. This found probably 25 other places where we had lost text out of the interviews and hadn't noticed. So I was glad I did it.
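(A sketch of the failure and the fix — the checking helper's name and message here are illustrative:)

    define([_quote], [``$1''])dnl
    _quote(Sure, I can learn that.)      dnl => ``Sure''  -- the rest is eaten
    _quote([[Sure, I can learn that.]])  dnl => ``Sure, I can learn that.''
    dnl  Catch unquoted calls: complain whenever a second argument sneaks in.
    define([_checkempty],
      [ifelse([$1], [], [], [errprint([lost text: $1])])])dnl
    define([_quote], [``$1''_checkempty([$2])])dnl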
And now, back to Dave for the cover. — I think I missed a slide, but there's a slide somewhere in here that says the cover was once again inspired by the Mathematical People book. We did some cover thinking back and forth, involving Steve Peter for a while, and ultimately we decided: why don't we do something quite a lot like the cover of Mathematical People? We'll have some of our interviewees on the front cover; we'll title the book TeX People. And you can't see it, but these are the names of all the interviewees, repeated; we put the interviewees' names on the back. So it's not an exact copy, but it's kind of mimicking it. Anybody know who any of these mathematicians on the cover are? We know Knuth, of course. Well, here's the answer. This one is Ron Graham — the one standing on his head — the great combinatorialist who was at Bell Labs a long time; I think he's at San Diego now. He can juggle; I met him in the world of juggling, back when I was a juggler. In fact, Ron and I are in the same issue of Fortune magazine, in a column about what executives do in their spare time.

Okay, our cover. We looked in The PracTeX Journal, we saw the article by Yuri and his colleague, and we basically copied it. Basically what that does is: make a box, save a graphic in it, and then put the box at some Cartesian coordinate. However, we didn't have the wisdom to use TikZ, and we couldn't get transparency to work well in PSTricks. So for a little bit of what we did, I worked in Illustrator, and then I put the Illustrator image in a box and we put the other stuff on top of it. We tried to use vector graphics wherever we could. We were forced by the printer to use a different color space: we had to convert all the photos to grayscale, and exactly how we did that was done in Photoshop. I guess I won't go into converting color into grayscale, except to say there are a lot of different ways to do it, and it's a little unclear what works best in any case. In some cases we sent the color to the printer, asked them to print as-is, and in fact the printer's translation did a pretty good job. In some cases we just did a plain conversion to grayscale; in some cases we took the L channel from a Lab color space; in some cases we took the red channel from an RGB color space; we adjusted them — we did a lot of different things. And in three cases we actually manipulated the photographs. Yesterday you saw Hàn Thế Thành's photograph many times, on a very green background — kind of a green guy on a green background; he didn't show up in black and white, he was hidden in the trees, so we traced around him and made the background get light. We made Jonathan's hat pop out in the black and white, and we took away the great shadow beside David Carlisle's face — okay in color, but it didn't look good at all in black and white. — When Dave says 'we', what he means is 'he': I complained about all those three things and then he fixed them. — So here's the picture: as you can see, we've got TeX People, like Mathematical People, and we've got photographs of some of the people who are here — Jonathan, Kaveh. Our basic choice of who went on the cover was who matched, in their color photograph, the background gradient: it all had to do with colors rather than position in the TeX community.

The development environment: I work on Windows, Karl works on Linux, and we communicated through an SVN repository on the TUG server. I don't know about Karl, but I did a good bit of my compilations on the server as well — I did major edits back on my own machine, just because that's a lot easier, but I would do a lot of compilations on the TUG server. So we greatly appreciate Kaja and the TUG server. We decided to self-publish. The publisher is the entity that holds the ISBN — it turns out that's the technical definition — and we've got the ISBN in TUG's name. I'm going to talk about this in my next talk, so I won't talk about it more now, other than to say what I just said: we self-published in the name of TUG. Karl and I funded the small amount of development costs, and TUG will get any profits after we cover the development costs.

Okay, going forward. We already mentioned that we're going to recompile all the m4 files, with Barbara's and everybody else's edits and a few new endnotes people added, back into HTML. I have that working; now I have to make it work more automatically than one file at a time. We've got the m4-to-HTML definitions, which are much bigger now that we have to handle everything that was handled in LaTeX, but we haven't yet written, and don't have running, the outside machinery that generates the whole website and all the indexes. And a few years from now, we can consider another book.
We are definitely going to go on with the interviews, and if we do another book in a few years, we've built a big mechanism which should continue to work. So, reflections. I think we're relatively happy with the decision to use m4 for retargeting one language or the other. It could have been an interesting learning experience to learn some other program and do it some other way — maybe I'll do that next time — but I'm pretty happy with this. You agree with that, Karl? Okay. The thing that strikes me most is how TeX, being essentially command-oriented, fits into a bigger workflow: simply going from m4 to TeX and back, TeX is really a plus when you're going to fit it into a big workflow which doesn't have human intervention at each step. SVN is great — if you haven't used it, do; it's wonderful for communicating back and forth. Happy enough with self-publishing; I'll talk about that in the next presentation. Very happy with the interview series' reception in the community. I used Beamer to create this presentation — that went relatively easily, and it was the first Beamer presentation I ever made. And I think I'm overjoyed to be done with the book. Thank you very much. [Applause]

Questions? — Why was it necessary to use TeX for the cover? — To use what? I didn't hear the question at first — the cover? Karl insisted on it. I started by doing a little prototyping in Illustrator; Karl insisted we do it in TeX. Ultimately it was a good idea to do it in TeX, because we decided to use on the cover the same fonts we used in the book, Charter and Inconsolata, and those aren't available in Illustrator on my computer. — Actually, speaking about TeX — we discussed it in the workshop — you could do the color separation and the conversion to CMYK in TeX: you can create a picture in any color space, and then there's a TeX trick to go to grayscale or CMYK. It's magic. — But it was pretty easy to do in PSTricks, except for the transparency issue. The new PSTricks has transparency, but I never used it, and probably if we'd uploaded enough revisions we could have gotten it to work; but at that point we were anxious to get done for this meeting, and so workarounds were the order of the day. — I just want to say that color separation can lead to very expensive mistakes, because if you catch a problem after the fact, you may be done with the deal with no way to go back. — Other questions? Okay. For the few interviewees who contacted me in advance and said they wanted copies of the book, I brought that many with me. For any interviewees who didn't contact me in advance, I didn't carry more than I had actual orders for, but I'll be glad to provide them under the same deal I'm giving the other interviewees, which is essentially cost. We will shortly have up on the TUG website, under tug.org/interviews/book, how TUG members can buy it at a big discount from the Amazon price. Okay. Thank you. — You're next. — I think we have a break. Thank you very much.
|
We present the history and evolution of the \TUG\ interviews project. We discuss the interviewing process as well as our methods for creating web pages and printed output from the interviews, using m4 as a preprocessor targeting either \HTML\ or \LaTeX. We don't claim great generality for what we have done. Nonetheless, some of our experience may be educational.
|
10.5446/30832 (DOI)
|
This is the regular LuaTeX progress report. I think over the last couple of conferences the version number has slowly increased; I see the number there increasing. Is there a limiting number? Well, not so much limiting, but at some point we think that we will have version one. It's going to converge to something like... 1.414. Maybe in time you find a magic number. But we are not doing it that way. Did you finally get to one? Or is it now 0.999999? You started to hide this number. Okay. We did a couple of reports already at user group meetings, and since there is a little bit of overlap in the audience, I'm not going to repeat all the things that have been done so far. If Karl wants, he can have the full report in one of the TUGboats. I think the most important thing that we can say is that we made some progress, but we didn't really change or add real big things in the last half year. This is because there is the usual TeX Live code freeze, and there should be a version of LuaTeX on TeX Live. The 0.40 version is what we call the TeX Live version, and it's more or less the version that most users, if they want to play with LuaTeX, will play with. In the meantime, while we keep that version, and we apply patches and bug fixes and whatever to it, so the number will slowly increase, we are also working towards version 0.50, which will be released at EuroTeX in a month from now. And the big thing that happened between 0.40 and 0.50 is that the whole code base is now in C. It's not yet CWEB, because that also includes documentation, so that will be the next thing. But this is a major step, because it permits us to go further than we went so far; we will come to that in a moment. We promised a production version at the last conference. Production version means a version that you can really use to do something useful, one that doesn't crash after page 10 or 20. I think we can say that the version that goes on TeX Live is pretty stable, unless you start, of course, messing around with really dirty internals; but if you just use it and see it as a standard TeX engine, you can use it. An important issue that always comes up when you talk about using the engine in production is: how compatible is it with the existing thing? And then the next question is: what is the existing thing? I think we can safely say that we currently have two engines, pdfTeX and XeTeX, that we can compare with. And pdfTeX is a rather traditional engine, so you can say it's rather compatible, but it has a couple of extensions, so there are differences. XeTeX uses, at least at some points, a different rendering engine, so that's also not completely compatible; so how does LuaTeX compare to that? Well, first of all, if you talk about compatibility, you immediately come to the main virtues of TeX, and one of them is to build nice paragraphs. Paragraph building is related to hyphenation, and hyphenation is related to hyphenation patterns, and hyphenation patterns are in traditional TeX related to font encodings. And since we don't have font encodings, we live in a Unicode universe, you can expect differences in behavior between the engines, simply because you use different patterns, richer patterns or whatever that might be. Another difference is that in TeX there are some limitations in the accuracy of your font metrics: there can only be 16 heights and depths in one TFM font. I don't know if anybody ever noticed.
In the LuaTeX engine, we don't have that limitation. I'm not sure, but you probably also don't have that limitation, is it? You still carry the height and depth limitations? Not if you're using OpenType fonts. Right. So the same is true for LuaTeX: if you use a modern font, you don't get those limitations. It doesn't make sense anymore. Another important difference between pdfTeX, that's where we started from, and LuaTeX is that the three stages of hyphenation, ligature building and kerning have been separated. In TeX they are kind of interwoven, and there's a good reason for that. In general, the thing is, say you put something in a box; at that moment you want to know the width, height and depth, so to have the dimensions of the box. But at the same time, you can unbox this thing, and then you can do other things with it; it will be content, or it's added to something else, and that can mean that the dimensions come out in a different way. In TeX this has resulted in, let's say, a kind of interwoven system, where you have a sequence of characters that becomes glyphs, and you put them in some space. Well, in order to see if it fits, TeX has to know how such a word can be hyphenated, it has to inject the kerns to know what the exact dimensions will be, and it has to build ligatures. But at the same time, as soon as you undo this operation, it has to go back to this previous stage somehow. And that means that there is some ligature reconstitution going on, and things like that, which has a couple of side effects, which are probably, for most documents, not noticeable. And it means that if you have documents that really depend on those things, then you might get a different outcome. I think in practice you can safely say that it won't happen. And the reason for that is that most users will, in the meantime, have updated their resources. They will use different hyphenation patterns, they will have updated fonts, whatever; maybe the macro package they use has some different settings and whatever. If you really want a stable TeX, you should use Don Knuth's TeX with Computer Modern fonts, and stick to a basically 7-bit kind of engine. And that's probably not what most users want. Okay, the roadmap. Now I'm talking about version 0.50. The roadmap is the following. The first thing that we want to do is to open up the paragraph builder a bit more. You already have access to it, but there are a lot of parameters involved in building a paragraph, and in TeX a lot of these things are global variables. We can put them in structures and carry them around, so that you have access to the thing and can save it. For instance, this is one of the things that is needed for the Oriental TeX project. You say, okay, I want to have all the information that is available now to make this paragraph, and now I want to implement my own paragraph builder. And that's easier now that we have the C version. Another thing that we have delayed until after this conversion is opening up the output routine, because it also has some interwoven characteristics. And that will make it possible, I hope at least, to come up with better solutions for multi-column output and things like that. There are some directional issues that we need to clean up; I mean directional typesetting, right to left for instance. We used the Omega model, but I think that model had never been really applied to large-scale documents and whatever.
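As an aside on the separated stages mentioned above: in current LuaTeX they are exposed as separate callbacks. A minimal sketch using the documented lang and node helpers, say in a file stages.lua loaded with \directlua{dofile("stages.lua")}, could look like this; it just reinstalls the default behavior, which is the natural starting point for replacing one of the stages.

    -- stages.lua: hook the three now-separated stages
    callback.register("hyphenate", function(head, tail)
      lang.hyphenate(head)    -- apply the current language's patterns
    end)
    callback.register("ligaturing", function(head, tail)
      node.ligaturing(head)   -- build ligatures the default way
    end)
    callback.register("kerning", function(head, tail)
      node.kerning(head)      -- inject font kerns
    end)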
There are also a couple of bugs and things like that, especially in the back-end. There were a couple of surprises that we ran into. We try to clean that up as much as possible. I think the overall model will stay, but it will become more robust. Another important step that is actually already underway is cleaning up the back-end code. The back-end code is basically PDF back-end code, already extended to do a bit more. But it will be completely rewritten, separated from the main system, so that we can have multiple back-ends and things like that. It's a kind of ongoing effort. Actually, much of the back-end code was already written in C, so that makes it easier. We have a couple of small things that we want to add to the core engine, a bit more control over these things. Sometimes, let's say, the information is there, but you don't have access to it. A good example is things like inserts, which I use for instance to implement footnotes and things like that: you don't have the information at the moment that you want to make the decisions. Yeah, and of course we will take the freedom to do whatever we like. As I already explained, well, many times: we don't come up with solutions. Making the solutions is up to you. And if you don't want to make them, you should ask your macro package writers. Okay, and I think Aditya showed some of that already. If you want to use Lua, basically the Lua programming mechanism, you can already do that. So if you have version 0.40 and you just treat it as if it were pdfTeX, everything should basically work. If you use LaTeX, you usually load a compatibility style, because we kicked a few of the things out of pdfTeX that can be implemented in Lua more easily; so Heiko made a style for that. And then you can just go and play around with Lua. In ConTeXt, we go quite far in using Lua, and I think a lot of the mechanisms that have been opened up are also already used in the ConTeXt machinery. It even goes so far that we have a special version for that. We have now frozen the version for pdfTeX; it's used by pdfTeX and XeTeX. It probably will be adapted to XeTeX behavior, but for pdfTeX it is basically frozen, and all development is now in the version that is for LuaTeX. And I'm quite lucky that we have users who are willing to test that. It's surprising to see how fast users download new versions. We have what we call the ConTeXt minimals tree, which is a subset of TeX Live on the ConTeXt garden, where we also keep binaries, LuaTeX binaries, for all platforms that make sense. We don't have an iPhone version yet, but maybe if somebody makes an iPhone port, we can look into that. So users, as soon as there's a new version, can really be up to date quite fast. At the upcoming ConTeXt meeting I will tell more about that, but probably not so many of you will be there. Okay, the areas that have been touched: well, I can say all input and output is done by Lua, so TeX's own input/output system is completely bypassed there. Loading fonts; this includes not only OpenType but also old Type 1 fonts. In ConTeXt, all Type 1 fonts are now loaded using the AFM metrics and the PFB data, and no longer use the TFM files that normally are generated for such fonts. It saves a lot of trouble. So actually we only use TFM for the little bit of math that is still out there.
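Coming back to the remark about treating LuaTeX as a pdfTeX replacement: the one new thing you then get at the macro level is \directlua. A trivial, self-contained sketch in plain TeX (note that Lua line comments are best avoided inside \directlua, since TeX turns the line ends into spaces):

    % hello.tex, processed with: luatex hello.tex
    \directlua{
      local s = 0
      for i = 1, 100 do s = s + i end
      tex.print("The sum of 1..100 is " .. s .. ".")
    }
    \bye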
All kinds of manipulations, like for instance when you want to uppercase a word, or you want to uppercase the first character of each word in a sentence, or any manipulation that you want to do with that kind of input, are done in Lua. Actually it's delayed: some kind of tagging takes place, and the action is really done on the first incarnation of the internal node list that results from that. Much of math has been redone; we're completely re-implementing all the math machinery, and Aditya will tell more about that. Everything related to structure has been redone in Lua. The reason that I tell these things is just to give you an idea about how far you can go with Lua. So everything that has to do with numbering, multi-pass data, well, basically 50% of the code that matters, I think, is done using Lua for carrying around the data. Cross-referencing, things like that. It means that we have a completely different round-trip system now. Color and other attributes: something like transparency is one thing, but in PDF you can have a couple of things more. The PDF back-end: my target is to use as little as possible commands or primitives that have this \pdf prefix in front of them. So even hyperlinks and all kinds of things are done in Lua, by just passing node lists and looking at areas and things like that. And by separating all these things it becomes possible to bring in new back-ends more easily if I want to do that, because it's currently quite interwoven. Because in my daily work I mostly deal with XML input, one of the first things I did was, actually, a new XML parser. So we have the old one still present and the new one, which makes it possible to operate on the document tree, to use XSLT-like syntax for accessing pieces of this tree in the document, and do whatever you want to do with it. This also means that in ConTeXt we now carry along information about whether something came from XML or TeX or whatever, so that later on, when you reuse it, you know under what, and that's with TeX always important, what catcode regime et cetera things happened. Well, all the usual tools to run ConTeXt, a lot of convenience tools and whatever, are also written in Lua. Some of these changes are quite big. The slide is a bit small, but it gives an idea. All the files that deal with the LuaTeX version have the suffix mkiv, and the old stuff is mkii. So if you just look at the columns, this is the number of files involved. You see that there are quite some differences. Everything related to font encoding: 18 files here, 43 there. And just one, basically, compatibility thing that sits there. So you see sometimes enormous shifts from one to the other. The special drivers, all the back-end stuff, are completely gone, because it's done in Lua now. There's still a small set of files that is shared between the two machineries, but I decided to make a split just before we push it onto TeX Live, because that will make it more stable if we change things in the MkIV part, so that users are not affected. MkII is basically the frozen version. This is the same graph, but now it shows the number of lines involved. So these are the Lua files, and these are the MkII files. And there again you see the shift from TeX code to Lua code. The counts are with all spaces and comments stripped, to give an honest idea. The big ones, of course, are the ones that deal with fonts. Now I need to know how to go back. Completely gone.
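To illustrate the kind of node list manipulation mentioned at the start of this part, here is a deliberately naive sketch (ASCII only, purely illustrative; the real ConTeXt case-mapping code is considerably more careful) that uppercases glyphs just before line breaking:

    -- upper.lua: naive ASCII case mapping on a node list
    local GLYPH = node.id("glyph")
    callback.register("pre_linebreak_filter", function(head)
      for n in node.traverse_id(GLYPH, head) do
        if n.char >= 0x61 and n.char <= 0x7A then  -- 'a' .. 'z'
          n.char = n.char - 0x20                   -- map to 'A' .. 'Z'
        end
      end
      return true
    end)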
Okay, so this gives you an idea how far you can go if you want to really benefit from opening up the engine. The big question always is: can the code really be used? Because you can make a really nice system, and the first big rewrite of TeX is a good example: when it was there, it was unusable, because it was 50 times slower than TeX, and you don't even start experimenting with something that is 50 times slower. Well, of course, the fact that I'm spending a lot of time with it in ConTeXt is a demonstration that it can be used. Just as proof that it can be used in plain, we also ship a couple of files, especially dealing with fonts, that make it possible to use LuaTeX with plain TeX. Talking about production: I used the 0.30 version in a couple of projects, projects that I don't want to touch anymore, and you can imagine, if you re-implement everything related to structure, there's a temporary moment of instability in your system. It takes really a lot of time. That is because all the rewrites go through several stages: you rewrite something, the engine gets updated, and you rewrite it again, and things like that. I think all code passes through my fingers at least three times. So how do we measure usability if we have added something or done something and want to see if it's still a usable thing? Well, one is startup time. If you are using TeX in an editor and it takes ten minutes to start up, then that's a fail. And I'm not making any prediction how fast it will start up on an iPhone, but I would be curious. How well does it integrate in workflows? Well, I think that's often the easiest thing: if the tools around it are okay, then it normally works. I can safely say that all the tools that I've rewritten in Lua are two times faster than they were in Ruby, and they are smaller. How fast is a basic LuaTeX run? How many runs do we need? I will show more of that later. And when measuring these things, you quickly find out that there is a big difference between platforms, and it's very hard to measure precisely how well the system performs. The biggest impact we found is the terminal. TeX sort of outputs on a character-by-character basis, and if you have a terminal that likes to refresh every millisecond, that makes a difference if you have much output. So for Windows, for instance, I installed a shell that is a bit like Linux, without the large delays. On the Mac, it always depends a bit on what font you choose. If you choose a very sophisticated font, you can be sure that output in your terminal scrolls slowly. It's always interesting to see that editors do it fast. For instance, in TeXworks, it's not a problem. Somehow, I don't know what they do with delays, editors normally do it very fast. Okay. We have three test documents. If I have enough time, I can show them later. There's what we call mk; mk is sort of the history of LuaTeX. I decided to stop extending that one just at the 0.50 version. It's the one where we also collect some of the stuff that sometimes appears in articles in TUGboat and whatever. There is a rather dumb baseline test, just to test many pages. Then the other documents: there's the MetaFun manual, which is very extensive on graphics, some 1700 MetaPost graphics. Imagine that you have to generate them at runtime with pdfTeX; how much time that costs. I gave up on updating that manual because it took me so much runtime. And the LuaTeX reference manual.
So we have something like: if these run through the system, then we are quite confident that we can put the version online, because users will find the rest of the bugs. But the basic performance, the basic thing, is there. Okay. So this is the reference manual, just to give you an idea. At the end of a ConTeXt run you get a report like this. So this is the time spent on everything related to input. That also includes loading the file databases and things like that. There are by now some 148 Lua modules present in the system. They are all put in the format, so that you don't need to load them at runtime. Then there's some reporting about memory usage, and this is quite okay; this is not a leak. Everything related to horizontal list processing: this is Lua processing, that means building ligatures using Lua and that kind of stuff. On this document, one third of a second. That is with a total runtime, and this is always where we look first, of just over 12 seconds. So we get a throughput of a certain number of pages per second. Now, of course, there are always people saying, oh, this is slow, whatever. What counts is the speed as you measure it yourself: if you run it and it's okay, I'm comfortable with this. In most cases, okay. This document has lots of tables; there's lots of stuff to typeset. So it's quite okay. Font loading time: you see that this is a bit high. That's because we load a couple of extra fonts. To come back to one of the things from before: if you run from an editor, it's actually the startup time that counts, not so much the one or two pages. It's startup time, you do a few pages, and you save the stuff in the PDF file. Those are the bottlenecks. You always have them. Okay, so the memory footprint is 106 megabytes. That sounds maybe a bit high, but this is due to the fact that, well, if you store font information in memory in Lua, you have lots of tables, and it doesn't necessarily mean that all of this is always used, because Lua somehow allocates twice as much memory as it needs each time. Okay, this is the... MetaFun manual? No, this is the mk document. Okay, again, input load time is quite constant here. Horizontal list processing is a bit more. That's because here we do Arabic stuff. And this is, I can assure you, really a torture test, this document. You're talking about an enormous amount of fonts, 69 fonts, and these fonts are huge. If you think of some of those TrueType fonts, well, you can confirm how big CJK fonts are. If you load four or five of them, you quickly get a lot of memory consumption, and loading time also takes a bit. Did you really load 69 fonts? Yes, all of them; it's the mk document, so it's CJK, it's Arabic, everything, a lot of things. And there are fonts that are, on disk, 25 megabytes. That's in binary format. Arial Unicode, or... Arial Unicode, yeah, that's really a bad one, because it also has a lot of font features. Okay, now what you see here is why I also showed the 28 seconds: the document takes 50 seconds to process. Now, if I process it directly on this Mac, it takes 30 seconds, and this machine is just as fast as my laptop. The reason is, we found out, we think, that it's the garbage collection of Lua, which behaves differently on different platforms, and there seems to be some kind of interference with the CPU cache and the efficiency of the code. As soon as we added a couple of functions to the math handling, on my machine this document took 20 seconds more; don't ask.
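For those who want to experiment with this themselves: the Lua garbage collector can be inspected and tuned from standard Lua. A small sketch; whether the settings below help is platform dependent, and the values are just guesses to start from.

    -- standard Lua 5.1 GC controls; measure before and after changing them
    texio.write_nl("lua memory (kB): " .. collectgarbage("count"))
    collectgarbage("setpause", 200)    -- wait longer before starting a new cycle
    collectgarbage("setstepmul", 200)  -- do more work per incremental step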
Maybe it is a compiler issue or whatever; we simply cannot find out why it is. And we also run into situations where the opposite occurs. But so here you get effectively 6 pages per second. On the other hand, it's a complex document, so okay, I can leave it as it is. This is the MetaFun manual, and if any of you ever saw it, and saw the amount of graphics, you can imagine that I was quite pleased that it can be processed in just over 43 seconds. And that means, if you look at the numbers here, there are over 1700 graphics in there, processed at runtime, converted at runtime. The interesting thing is that that's not even the bottleneck. There are 15 seconds of external execution time, time in which I start another ConTeXt session to do some processing, in this case outline fonts, which need a separate kind of treatment, because you create a PDF and you run it through pstoedit, and then you get an outline font and then you can use it. So if I subtract the 15 seconds, it's 30 seconds for the MetaFun manual, which is 360 pages, full color, everything you can think of, which is, I think, not bad. So this makes updating such a manual doable again. Okay. And how does LuaTeX compare to the other engines? Now, I must say beforehand that the comparison is not entirely fair, because in MkIV I do some optimizations and I keep cleaning up and optimizing the code. For pdfTeX, well, it's less robust because you have less control, so I need to keep that code there. For LuaTeX, I could probably remove a bit of code, so the LuaTeX numbers should be a bit better in practice if I would go through the same optimization stage there. On that laptop, well, if I just process 30 pages, XeTeX does 16 pages per second, LuaTeX 20 pages per second. I think XeTeX has a little bit more startup time there, maybe because it's also communicating with xdvipdfmx or whatever, I don't know. If both of them start separately, you have two times the application startup time, which is actually not negligible in terms of... On 300 pages you see that it already gets a bit closer. pdfTeX is always the winner, yeah? There's no doubt about that. I think one of the things shown here is that because we move to a Unicode engine, internally there's more data, everything is wide, everything is large, so you lose performance; no question about it. Well, if you do 10,000 pages, then XeTeX becomes, on the laptop, the winner. You see? There's no way to predict it. That's really why I show something like this; it's more the impression that you get about these things. I think what you can say is that XeTeX and LuaTeX more or less perform about the same, and they both perform a bit slower than pdfTeX in general. On a server, a 64-bit Linux server, you get different numbers. Now it's interesting to see that XeTeX is faster on fewer pages, 30 pages, but on 10,000 pages suddenly it's slower and LuaTeX is faster again. So that's probably things like memory management and reclaiming starting to kick in. And these are averages, yeah? Well, I do many runs and I take the fastest, yeah? I give them the benefit of the doubt: I take the fastest out of 20 runs per sample. So that more or less eliminates caching and disk access and whatever. For me, well, I thought that there would be a pattern. There is a certain pattern in it, in the sense that pdfTeX is the fastest. Yeah, I think that's the only thing we can say.
On the other hand, what you can see is that the Unicode engines don't perform that badly. Excuse me, in the Windows case, what kind of executable did you use? Akira's. Akira's is the one; it's a very fast one. Yeah? You mean it's optimized? Okay, for Linux I don't know. I take the things from the garden, and they are compiled there from the sources. So I noticed, and this is interesting: when I did the 32-bit LuaTeX binaries on Linux, they are some 30, 40% slower. It really makes a difference. I don't know why; it has to do with things like the number of registers, that may play a role. So if you have a 64-bit machine, for a critical workflow it really pays off. For the Mac we did a similar test; it stayed 32-bit, 64-bit was slower there... Okay, so this is a bit of what I wanted to show you. I don't know how much time I have left. But this is one of the documents that we used for testing. I just told you that you get about 19 pages per second. These are pages with images that are fetched from the web, and things like that are also fetched from the web. So for this document you always have to take into account that it takes time to get things from the web. There's some caching going on, so normally it shouldn't matter. This is always interesting: if you do a speed test, with the first run on a system that just has a new version of ConTeXt, there's a good chance that the font cache has to be regenerated. So what I do here is, this is the development history. When we reach an important step, I just make a summary. We put the test in here. We also, as I told you, needed to check if things are... I'll put some error in here. At some point here is where we started with Zapfino. I can show some of that later in the plain TeX talk, if I have time. These are things about Arabic. What we actually do a lot is adding all kinds of options for tracing. So if you can see the colored stuff there, there's an indication of where the initial and the final forms are; things like that help a lot. So we have a completely new tracing subsystem. Some CJK stuff, et cetera, et cetera. So this is one of the documents. There is the reference manual. The reference manual, as I told you, has a lot of tables. They look simple, but these are tables that break across pages, so you have a couple of passes over these kinds of things. There's quite some color; it means that you have a lot of color switches. And you can see, on each page, the page number has its own graphic. This is the Lua moon going around the page number. So there are just as many MetaPost graphics in here as there are pages. Do we reuse them? No, I cannot reuse them; they are different. No, I mean, it cycles, so at some point you want to... No, no, no: the moon takes a number of pages for one cycle, so over the document you have exactly one round. You still could reuse it if you wanted to, but I'm pretty sure that rotating it takes more time than redoing the graphic, because in TeX, boxing and unboxing also take time. So that's that document. And I have the MetaFun manual. This is the MetaFun manual, which also has different graphics for each corner and lots of graphics. And the nice thing is, this is generated at runtime; you don't notice that it's processed. You can download these things from the web. Okay, this is a bit of what I wanted to tell you about LuaTeX.
So to summarize: we have a more or less stable version 0.40 on TeX Live. We are going towards 0.50, which is the official C version, so to say. And then, well, you can use it if you want to use it. And if you want to play with it, I would say just start using it as if you were using pdfTeX, and then see where you end up. Thank you. I have a question. You talked about opening up the output routine, and I wonder if this will potentially provide a mechanism for the holy grail of TeX development since day one, which is to optimize page breaking over the entire document. Yeah, one of the things that you could do is save all the pages. Right. What currently is a problem: one of the first things that I did was implement the ConTeXt spacing model, so that you can have weighted points where pages can be broken in a more clever way than with penalties and whatever. When you do that, you find out that you always have a problem in TeX: TeX is in an internal state. It says, okay, I already have this, and I look at this. I can look at the list, but you need to be able to feed it back into that state. Now, this is one of the reasons why we needed to go to C, because then you can have structures in which all this state is carried around. So that's the first thing that I hope to improve. And then there's a callback: as soon as something is added to the main vertical list, you get the linked list. You can just keep it, and maybe it makes sense to say, okay, I do it chapter-wise or so, not necessarily for the whole book. It opens up the possibility to do that. Another thing, for instance, is grid snapping: if you want to have two columns aligned, you can look over the list and adapt the height and the depth of each line to exact values if you want. You can even say, okay, this is too high, I snap it to one and a half or two lines, move it around a bit. Then you can certainly have perfectly aligned columns. So if you talk about the big things in ConTeXt: the next big thing is everything related to page building. And that is going to be a tough job, one that will take a couple of years, also because I want to retain some compatibility. I have no problem if things come out better, but you shouldn't really lose functionality. That's a bad thing.
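The callback mentioned above exists in current LuaTeX as buildpage_filter. A minimal sketch of hooking it, for instance from a Lua file loaded into the run:

    -- pagehook.lua: called whenever material is added to the main vertical list
    callback.register("buildpage_filter", function(info)
      -- 'info' is a string describing the trigger, e.g. "box" or "new_graf"
      texio.write_nl("page builder triggered by: " .. tostring(info))
      -- here one could inspect tex.lists.contrib_head and take decisions
    end)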
|
We're close to releasing version 0.50 of Lua\TeX. What's old (and stable), what's new (and experimental) and what is on the agenda (but can get off). In this talk I will give an overview of what has happened so far, what is currently being done and where we expect to end up. If time permits, I'll also do a quick update on the \acro{MP}lib project.
|
10.5446/30852 (DOI)
|
Good morning, everybody. I hope you can understand me. My talk is about dynamic reports with R and LaTeX. If you have any questions, please do not hesitate to interrupt me or ask afterwards. Okay. So the first question is, I mean, not everybody here is a statistician: what is R? R is a language that goes back to S, which was developed by Becker and Chambers at Bell Labs, I guess, in the early 80s. They developed the commercial version of the software, nowadays called S+, so maybe some of you have heard this name. And there was also a new implementation of the software that was done in 1992 by Ross Ihaka and Robert Gentleman. And currently, I'm wondering why there are these symbols inside my talk. I didn't put them there. Okay. That's strange. But, well, okay. There are more than 1,000 packages on CRAN; that's the central R repository, so it's pretty much comparable to CTAN. And there are more than 500 project members in the R core team, which does the development on R. It covers all the areas of statistics and data analysis, and I guess it's today the number one tool for open source data analysis. Well, it runs on multiple platforms; I mentioned a few here: Windows, Linux, Mac OS, et cetera. And the central homepage is www.r-project.org. So if you ever happen to do some data analysis, have a look at R. Here's a screenshot of the Windows version and one of the Emacs version. I mean, Emacs has a very powerful add-on, that's ESS, Emacs Speaks Statistics. And there are also various other interfaces for R, for example for Java, COM, and Python, et cetera. Okay. You can use R as a simple calculator; however, that would not really do justice to the capabilities of the software. I have just mentioned some of the operators here to give you a brief overview, but there are many more things that we can do with R. Very interesting is the way that R deals with data structures. The standard data structures are vectors, which have a length n and consist of only one data type; then we have matrices, which are m times n arrays, also only one data type; and there are the data frames. That's basically a list of objects, and each of these objects may have a different data type, like integer, string, et cetera. And there are a few lines in the listing that just show how an assignment of numbers to a vector can be done. Okay. Here are some other examples of how we can generate vectors and matrices. There are commands for generating sequences or for the replication of certain patterns. That's very powerful. Okay. And here's a small example of how you fit a linear model with the power of R. I just define a vector x from 1 to 10, I define a vector y with some random noise in it, and then I compute a linear model. So it's just a few lines of code, and again, you get the linear model for a certain data set. And it's also very easy to generate some graphics with R. Here, for example, I give a vector of 1 to 10, and I just say plot. That's basically everything that you need to do to get simple graphics. And now comes the interesting part where LaTeX comes into play, because R features the concept of graphics devices. There are devices for X11, Windows, Quartz, all kinds of PostScript, PDF, et cetera. There's also a version for PicTeX, but I guess that's fairly outdated, because I wasn't able to get it to work. And the other one that is mentioned here, highlighted in red, is the tikzDevice, because that's a graphics device that produces native TikZ code.
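A sketch combining the linear-model example just described with the TikZ device that is discussed next; the file name and dimensions are illustrative, and the slide code may differ in detail.

    # fit a toy linear model and write the plot out as native TikZ code
    library(tikzDevice)
    x <- 1:10
    y <- 2 * x + rnorm(10)          # a line plus random noise
    fit <- lm(y ~ x)                # least-squares fit
    tikz(file = "fit.tex", width = 3, height = 3, standalone = FALSE)
    plot(x, y)
    abline(fit)
    dev.off()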
We'll have a look at this in a second. This is just an example of how you can address these graphics devices. Here, for example, a PDF device that produces native PDF. I just say, well, which file I want the output to be in. I define the width and the height of the picture. I say, well, if there are multiple graphics inside the PDF, then I want to have them in one file. I choose the font family, I give a title, a PDF version, and define a special paper size. And then I have, again, my vector of 1 to 10, I say plot, and afterwards I close this PDF device to write the file. Yeah? Can you tell us where R tells you all this? I'm using it sometimes, and I had no idea. Yes. Well, the help in R is very extensive. There's a huge HTML-based system. But I agree with you that sometimes the help is not as helpful as it should be. But I guess that's not just R; that's also the case with other languages or systems. Okay. So that's the PDF output. And now I want to show you an example of the TikZ output, because that may be really interesting. The tikzDevice has its own project home page; it's written by Charlie Sharpsteen and Cameron Bracken, and they also maintain this package, so it's really under development. The R graphics code is converted to TikZ primitives, which means that you can afterwards easily edit these images if you have to change something. It uses the fonts from the document and also allows math code in captions. And how does it look? Well, I load the library tikzDevice; I have to install that beforehand. Then I say tikz. I say which .tex file the output should go to. And I don't want to have a standalone version, so in this way I generate a file that is embedded directly into other TeX documents. If you say standalone is equal to T or TRUE, then it generates a standalone file that you can just compile with your favorite LaTeX compiler. Then again, I have my plot command from 1 to 10; I just give the vector. And afterwards, I close this graphics device. Okay, that's everything. So here's an excerpt from the generated code, which even for these just 10 points is much, much longer. You have all the graphics primitives here. Well, I guess it's much easier to change the R code than to change the TikZ code, but there are maybe certain situations where you really want to edit the TikZ code. Okay, that was the part about the tikzDevice. Now I want to come to the real point of my talk: the integration of R and LaTeX by the use of Sweave. Sweave is a package in R that was developed by Friedrich Leisch, who's now at Ludwig Maximilians University in Munich in Germany. And it's part of the utils package, so it's just part of each R installation. And it works like this: the document contains the LaTeX code and the R code, and the special thing is that the R code is embedded in noweb syntax. I have only one line about noweb, because I'm pretty sure that there are many people here who know much better about this literate programming tool than myself. The source file is stored with the Rnw (noweb) extension, and it's processed in R with Sweave and the file name. And what this Sweave does is: it takes the R code out of the document, it runs R on the code, and afterwards generates the graphics, the tables, or whatever has been specified in this file. And afterwards, you can run LaTeX or pdfLaTeX on the generated LaTeX file.
And you have a document that includes the graphics and the tables together with the text of your paper. There's also another command, Stangle, which just takes the R code from the document and puts it in a separate file. Okay, so how does it look? I have a really brief example here. As you can see, I used the KOMA-Script article class, and I specified my title and the author. And then comes the chunk marker: two less-than signs and two greater-than signs with an equals sign. And I just say 1 plus 1. And the code chunk is then finished with an @. And that's all. So when I run Sweave on this document in R, what I get is a small LaTeX document which looks like the following. We have our standard commands, and then there are these Schunk, Sinput and Soutput environments, which are just wrappers for a verbatim environment. So if one wants to hook into this, to maybe use the listings package or something else, it should not be so difficult. And what do we get when we compile this document? Well, we get a small document which just shows 1 plus 1 and the result. So that's fairly simple. Within those angle brackets you can specify many options which tell Sweave how to deal with the stuff that is inside the chunk. We can suppress the R code; we can suppress the results. I mean, a combination of both might not seem very useful, but I can show you an example where, for instance, you just load the data: you don't want to see the R code and you don't want to see the result, you just want it done. So then, well, if we specify results equal to tex, this suppresses the verbatim output. This is especially useful when you have R code that generates LaTeX code directly. There's a package in R called xtable; it generates tables directly in LaTeX layout. There are options to create PDF versions of pictures in case I want to have pictures, and there's also the respective EPS option. And I can specify the width and the height of images. There's also a command, SweaveOpts, that sets options globally. I can also give certain parts of the R code a name and access those parts later; in case you have a larger data analysis, that might be really useful. Okay. If I have a scalar result, there's also the Sexpr command, which is useful when you want to have a result from R inside the text of your document. The only condition is that the R return value must be a string, or it must be convertible into a string. And I can show you an example in a second. So here's a really large example, or a bigger example, and since I guess nobody except me can read that, I will go into the details on the next slide. In the first part, I just load the data set; that's the iris data set from machine learning. I then want to have the dimensions of the data set, so I just say Sexpr with the number of columns of the iris data and the number of rows of the data set. That's the first hint of how I get an overview of my data. In the next chunk, I want to output the summary statistics for the variable Petal.Length. And in the next, I just say, well, I also don't want the R code itself, I just want to have the results, and I specify results equal to tex, because I know that the xtable command just generates me a table in LaTeX format. And I define a caption for this table, and that's it. And the final code chunk, that's a figure. So I specify the standard commands for a float here, and then I specify fig equal to TRUE.
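Put together, a sketch of such a document; the caption texts, the column choice, and the file name are illustrative, not the ones from the slides.

    \documentclass{scrartcl}
    \begin{document}

    The iris data has \Sexpr{ncol(iris)} columns and \Sexpr{nrow(iris)} rows.

    <<echo=FALSE, results=tex>>=
    library(xtable)
    xtable(head(iris), caption = "The first rows of the iris data")
    @

    \begin{figure}
    <<fig=TRUE, echo=FALSE>>=
    plot(iris$Petal.Length, iris$Petal.Width)
    @
    \caption{Petal length versus petal width.}
    \end{figure}

    \end{document}

Running Sweave("report.Rnw") in R and then pdflatex on the result produces the finished report.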
So I tell Sweave, well, the stuff that R does here generates a picture. And in the background, when I run this code, the respective PDF file or EPS file is generated. That's just the end of the document. And what is generated afterwards from this code? I guess it's not very visible, but you can see here in the first part the dimensions of the data set, just in the text. Then we have the summary statistics in the second part, the table which was generated by xtable, and finally the graphics. So I guess that's fairly nice, because you have the R code and the text of your paper in just a single document. During my time at university it sometimes happened that, well, the graphics did not match the text and so on. That's very dangerous. Okay. I prepared a second example, which also shows some of the capabilities of R, because in preparation for this trip I was interested in the exchange rate between euro and dollar. Well, during the last weeks it was not much fun to check this, but what can we do? Okay. The interesting stuff is here in the first part; let me just indicate that. I also extracted it on the next slide. What I do here is I use the system command from R to get the current data with wget from the European Central Bank. That's a CSV file, or an XML file, in a zip, and afterwards I use the R-internal unzip command to extract the data. So whenever I run this document from R, it always takes the latest version of the exchange rates and compiles them. Yeah. I have a question. In principle, yes; in reality, no. Because then you run the document twice or three times through LaTeX, not through Sweave. After the TeX code has been generated, of course, R is not necessary anymore, only when you change the R commands or the source document. And what we get here afterwards is just a plot which shows the exchange rates from, I cannot read it myself, but I guess it was the beginning of the 90s, to where the latest value was 1.35; well, that was a long time ago, unfortunately. Okay. I have a few literature remarks. If you want to know something about R, you can find a lot of really good books; I also found a good book in the bookstore just across the street. And if there are any questions, just please ask me. I'm done. What's an example of an Sexpr result that's not convertible? I guess a matrix. If you return a really big matrix, it makes no sense to put that in one string. So I would just use it for the number of columns and something like that, or for summary statistics like mean, median, et cetera. Also, the top level domain of the European Central Bank is .int. What's that? International, I guess. Do I keep going? So first, can we thank the speaker? Thank you. Two questions. Yes, question. When you generate plots with Sweave, is it possible to put TeX code in the labels, like in the title or something? Yeah, I guess you can put TeX code in directly. I would need to check it; I guess it understands TeX syntax, but I would really need to dig into it deeper. And second question. Is it possible, for example, when I tweak the wording of the document, to tell Sweave not to run R again, just take the results from the previous run, and then to switch the R run on and off?
And then generate a new LaTeX document and run it again. So I didn't change the R code, and I don't want to rerun it. Is it possible to say that? Well, I guess not, because the TeX file is generated from the R code and from the master document together. Yeah. Okay. Okay, thank you. Who competes with R? SPSS, S+, MATLAB, Minitab, Octave, Mathematica, et cetera, et cetera, et cetera. What are the virtues of R compared to those systems? In comparison with some of those? Well, it's free. Could I comment on that? Yeah, sure. That's the question I was going to ask before: S+. Well, you can go down the path. Back in the late 1970s, early 1980s, some people at Bell Labs, the same folks who brought you Unix and all that other great stuff, developed a language called S. And this later got improved and became S+. There's no S++, for those of you who would like to have something. And that's a commercial product which is available; it's been bought by another company, but the executable is still called S+. R is essentially a complete free re-implementation of S+. There are several textbooks that you can use for training students that use both of those interchangeably. There are slight differences between them, as always happens, but by and large you can write code that will work in both. And the beauty of R is that it has an extremely active development group. There are thousands of packages available for it. It has binary distributions for a number of platforms, and for many platforms it's relatively easy to build the code. And really, I think, for many people in the field of statistics it is going to be the common system they use. I guess there are also many packages in the area of biostatistics, where it's really, really active. Yes. One other thing: there is a complete bibliography of books and other publications about S+ and R in the TeX bibliography archive at Utah. Dennis? Just a quick comment. I think Rick Becker, who was listed on the slide, was one of the developers of S; I didn't even know he had done R. He was actually the shortstop of my softball team at Bell Labs. But so it's the same guy. It's the same. Yeah. I'm Dennis. To answer Boris's question about graphics: R has incredible graphics control built into it, and I use it quite a bit. It's actually easier to use R than LaTeX to adjust your figures. Just to add on the use of R: in Denmark, the largest sales modeling agency uses R as their primary product, because it's very snappy at doing time series analysis and forecasting, and much, much cheaper than using SAS. And it's easier to get help with R than it is to get help with SAS. Yeah, definitely. I just had another question back for Dennis on the graphics stuff. I guess I don't need to do this in front of everybody, but Boris's question was if you can get LaTeX labels inside the graphic, or if you could put an equation in your figure or something like that. Probably you could, but the labeling inside of R is well done. That I don't know. There is an R conference coming up in July at NIST, but if you didn't register by June 20, you can't get in. Please go back. Thank you. Thank you. Thank you.
|
R is a sophisticated statistical programming language available on various platforms. Since its initial development in 1992 it has become a major tool for many scientists all over the world. For the integration with LaTeX it provides various tools allowing a dynamic creation of reports. In my presentation I am going to present a hands-on demonstration of how to work with R and generate impressive reports using the packages Sweave, xtable and tikzDevice.
|
10.5446/30855 (DOI)
|
Okay, originally, when Karl and I discussed these things, we thought it would be nice to give an update on the MetaPost library project. The problem is that there is not really half an hour to fill with that. So I decided to wrap up at least my experiences with MetaPost until now, ending basically at what's called the MetaPost library. Before I do that, I will show you a couple of things that I do with MetaPost. If you attend the same meetings as I do, you normally will see a bit of MetaPost in presentations, things like that. So let me give you an idea about the things that are... This is the thing that I mentioned earlier: you know, in my work I sometimes have to do boring things with TeX, and these are the fun things with TeX and MetaPost. This is a recent one. This is a birthday card I made for my brother, who just got a little kid. Here you see some TeX on the right hand side and some MetaPost graphics on the left hand side. This is normally not the kind of stuff you will see in manuals. They actually eventually chose the colored one. The side effect of this kind of print is that if you do this kind of thing lots of times, you have to make sure that you use different colors, just to make sure that your printer doesn't end up with, well, one of the colors being used more than the others. These are other examples. This is the punk font, which we managed to process with LuaTeX and MPlib. This is a Christmas card, a typical kind of Christmas card that we send around. There's actually a hidden message in there, which is related to the year in which it was made. Well, the problem is that I think only a few people normally figure it out. In this case, it was 'yes we can', which was randomly put everywhere in the text. A few people figured it out. So, who can figure this one out? It took me really a while to get these small images right. The 2010 of these small things, these are CO2 molecules. So this was more or less inspired by the failed climate conference. My first MetaPost experience was actually the company logo, and here you see a couple of variants of it. If we make a letter, then the company logo reflects the time and date on which it was made. So the shape changes with the date and the time at which the letter or whatever was generated. This is actually my first MetaPost code. Can you recover the date and time? Yes, you can, if you know how it works. It's not visible here; there are small lines and you can really... Of course, I'm pretty sure that most of our customers don't know that it reflects the time. Nobody actually made a remark about it. Okay. A happy marriage between TeX and MetaPost. I made it a bit bigger than it normally is, so I hope it's readable. There's a funny artifact in there which is due to the reader. First of all, there's an evolution in usage. I think it's around 1996, and if I got John right, that's also about the time when the first version of MetaPost showed up. So I might as well be one of the first users. I started doing serious things with MetaPost, and this company logo is just an example. So in the beginning, I used it mostly for logos and pictograms and sometimes for covers, things like that; I think those are the things that most MetaPost users do use it for. But I liked it much, and then in ConTeXt I introduced a mechanism for using MetaPost inline, so you mix your TeX source with the MetaPost code. And I think it's actually Sebastian Rahtz from the UK who boosted things a bit.
It was the time that pdfTeX was being developed, and well, Sebastian liked MetaPost and I liked it, and we were in communication about pdfTeX things, and he said, well, why don't you write a MetaPost converter in TeX? And well, I did. And so in pdfTeX we could directly include MetaPost output, and there were TeX macros that converted that to PDF literals. And I think that's one of the reasons why MetaPost suddenly showed up in a lot of documents that were produced by pdfTeX users. There's another reason why, at least in ConTeXt, it became convenient: because you can run a program from inside TeX using \immediate\write18, you can directly process the graphics that you have coded in your documents. So you arrive at a definition, you process the MetaPost code, you get the dimensions, you can include it; TeX can use the dimensions to reserve the right space, things like that. And it was actually quite fast, even on the systems of that day. Of course, there are some limitations; limitations that, if you look at, for instance, TikZ or Asymptote (you saw some presentations of Asymptote this morning), are by now covered. I needed to deal with things like shading and transparency. I wanted to include images in a MetaPost figure. I wanted to have outline fonts, things like that. So I used a special mechanism to achieve those things. Text embedding with the btex ... etex that John mentioned is a bit awkward to use because, well, you cannot use it in loops, for instance; it's just parsed from the file, passed on to TeX, you get something back and something is done. So I had my own mechanism for filtering that kind of stuff and processing it as part of the run. And this is important, because if you have MetaPost as part of your TeX document, you want it to have the same fonts and to obey the same spacing rules and things like that. So you really need to do this as part of the run. Well, once you start in that direction, you also start thinking about using these things for more advanced purposes. And I think the MetaPost integration triggered the, I think by now, rather advanced mechanism that we have in ConTeXt for adding backgrounds. Imagine that we can have arbitrarily nested backgrounds behind text in columns, graphics and anything, basically, and that can only be done by delegating all this stuff to MetaPost. Well, year by year we added more stuff, and I think by the time that, well, just before we started talking about things like LuaTeX, et cetera, we had reached the moment that we were passing so much information between the TeX run and the MetaPost run that probably the amount of graphic code was negligible compared to the amount of TeX code being passed, because what we passed is information about the page, the page layout, page dimensions, fonts being used, things like that, so that you could really adapt the graphic to those kinds of things. But again, at that time things were possible, you could do a lot of things, but there were still two isolated programs. You have a TeX run, you basically go out of the run, process MetaPost, take the result in, have another run. Now, I don't know how many of you have seen the MetaFun manual. It's about 350 pages, 1700 graphics, and it takes a while to process if you use this method, something between 10 and 15 minutes, and you need several runs, of course, to get everything right. So, well, you change quite a lot of code and then for half an hour you basically cannot do anything, because the document is being processed.
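For those who have not seen the inline mechanism: in ConTeXt a graphic can be embedded directly in the TeX source, along these lines (a minimal sketch):

    % a MetaPost graphic embedded in a ConTeXt source
    \startMPcode
      fill fullcircle scaled 3cm withcolor .625red ;
      draw fullcircle scaled 3cm withpen pencircle scaled 2pt ;
    \stopMPcode

The body is ordinary MetaPost, and the resulting graphic is processed and included as part of the run, so it can use the fonts and dimensions of the surrounding document.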
So the demands were basically growing, at my end, but not only at my end. About the same time, and I'm speaking about, let's say, the beginning of 2000, the beginning of the millennium, we also started the font projects, the Latin Modern font project and later the Gyre project, and the people doing that kind of work were also quite heavy MetaPost users; they had their own toolkit called MetaType1. And as a result, we had, at the user group meetings, discussions about what direction we should go with, well, let's say, tools like MetaPost, BoF sessions. So after a while, I think we had some 10 points that we wanted to have added to MetaPost; some of them were, I think, too ambitious and will probably never be added. Nevertheless, what we did is we wrapped it up, and I think it was after some discussion with Karl that it ended up in a report, and Karl asked John to reflect on that. And we reached a state where John more or less transferred the further development of MetaPost to the workgroup, and at that time the workgroup was Bogusław Jackowski, Taco, and me. Taco did most of the coding, by the way; so actually he should be standing here telling you this. Then, I think probably some two years after that, when we started the LuaTeX project, we started also again thinking about what we could do with MetaPost. The LuaTeX project itself is quite closely related to ConTeXt, so it's quite natural that as soon as we talk about TeX, we also talk about MetaPost. This is for us a more or less natural couple, although they were still not married at that time. The first thing that I did, when we had the first version of LuaTeX, was rewrite all the converters, the ones I mentioned, that Sebastian triggered. I converted them to Lua code. That was done quite early in the project. Then we started doing some experiments, together with Fabrice Popineau, to see if we could actually pipe information between TeX and MetaPost. That is not trivial. We tried to do that on all these platforms, and all platforms are different, and well, you can see the nightmare coming, so that was not really the solution. The conclusion was: why don't we make a library out of MetaPost? The natural way to go then is to start a project and to talk to, well, the people with the money, so to say; that means talking to the big user groups. A project was funded to do this, so that Taco could spend company time on this project. He's not, at least at that time, using TeX in his work, so we needed to do it this way. The first thing that happened is that the code base was converted to C. We just took the web2c output. This was also meant as an exercise for doing the same thing with LuaTeX. As I mentioned a couple of days ago, LuaTeX is basically a merge of Omega, pdfTeX, traditional TeX, et cetera, et cetera. So you end up with something so complex that you basically have to go back to the roots, and then from there you can go back to literate programming again. But we go through this intermediate step, and MetaPost converted into a library was the first exercise in that. There were a couple of simple extensions added. I don't know if you are familiar with MetaPost at that level of detail, but there was a mechanism of specials: you can inject PostScript code into the output. But if you want to relate that kind of thing to a specific path, that doesn't really work that well. You can do some tricks.
That's the way I had achieved the extensions, but it didn't deserve a beauty prize. So prescripts and postscripts were added to paths: you can carry information around with a specific path. CMYK colors were introduced because, if you are into publishing, you cannot do without CMYK. There were changes in the prologues, the material that comes before the actual MetaPost output code. And one of the big things, I think, was that MetaPost graphics became self-contained: you can now take a MetaPost output graphic and use it in any application, because it no longer depends on the fonts installed on the system; the fonts can be embedded in the output itself. Of course, the output becomes much larger and you lose the charm of small files, but sometimes it's handy.

Once we could do that kind of thing and the library was there, it was quite natural to add a Lua interface to it, so that we could use it as a library in LuaTeX. MetaPost produces some kind of intermediate result, and that result finally becomes PostScript; the idea was to let the result become Lua, so that we can do things with it. There was another small sub-project, triggered after one of the conferences and funded by somebody who wanted SVG output. It was, again, a nice exercise in cleaning up the code and extending the thing. It means that you can use MetaPost to make graphics that you can put directly on the web, and they are nice and scalable.

After that we still had the regular MetaPost program, but I think this year's TeX Live will have the MetaPost that uses the library. So the library is now the centerpiece of everything, and the MetaPost program adds the btex...etex parsing and a couple of things like that on top. So it was not TeX and MetaPost getting married; it was LuaTeX and the MetaPost library. You could say the descendants, the kids, got married.

For me, the big benefit is that we now have a really tight integration of MetaPost in ConTeXt, and the runtime cost is really close to zero. In the experiments we did in the beginning, with graphics of, say, 10 or 20 different paths, some fills and other things, converted at runtime to PDF code, we were getting a throughput, at least on my machine, of something like 20,000 graphics per second. That's definitely more than enough for a regular run. This opens up a lot of possibilities. It means, for instance, that the MetaFun manual now takes 30 seconds to process, which is nothing if you consider the 1700-plus graphics and, I think, some ten sub-runs to TeX.

It doesn't end there. Already at the beginning of this thinking about extending MetaPost, there was the dream of MegaPost, something really big. This was related, for instance, to the limit on the maximum number that you can use in MetaPost: about 4096, which is fine for regular graphics, but sometimes you want to go bigger, especially for diagrams, where it really makes sense to have something big. And Giuseppe Bilotta, I hope I pronounced it right, had proved that he could do this kind of thing with Omega, because he merged Omega and eTeX into Aleph, and he said: I'm going to do the same with MetaPost. It never came that far. Instead, he did a lot of research and a PhD on envelope code that might, at some point, show up.
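To make the path-annotation and color extensions concrete, here is a small MetaPost sketch. The prescript text itself is made up, since what a backend does with such scripts is up to the macro package:

    beginfig(1);
      path p;
      p := fullcircle scaled 2cm;
      % A four-component color literal is CMYK in modern MetaPost.
      % The scripts travel with this specific path; a backend such as
      % MetaFun can turn them into transparency or other PDF features.
      fill p withcolor (0, 0.2, 0.9, 0)
             withprescript "mytag=begin" withpostscript "mytag=end";
    endfig;
    end.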
Anyhow, part of that MegaPost dream was that all memory management would become completely dynamic. This was written down in a project proposal, a second project was started, and Taco started working on it. Currently we have reached the state where things like the path memory, pools, stacks, knots, everything is already dynamic; that part is mostly finished.

But that was not the only objective. The "mega" part also had to do with the fact that we wanted really big numbers, and then quite naturally you arrive at arbitrary precision. So the version that will be released in, I think, a couple of months will have arbitrary precision in MetaPost: you can have numbers with, say, 512 digits of precision. There will be several libraries you can link in for this, and it will still be compatible with the old behaviour. That's the "mega" thing that's coming to you. It will still take a couple of months to finish, because all the numerical calculations have to be separated out so that you can plug in the specific calculation code that you need.

One thing that already went away is the memory dump: we now have such fast machines that it no longer pays to dump memory and load it again; you can just as well read the real thing. We are seriously thinking about doing the same in LuaTeX; we have done experiments with that already, but this talk is not the right place for it, so maybe next time I can tell a bit about that. And we are thinking about maybe adding some of the code that Giuseppe produced as part of his PhD thesis, to solve one of the problems we have now: if you make fonts, you need to know the outline of a glyph, and if you use a pen that is quite complex, getting exactly the outline you want needs special code. I'm not a mathematician, this is way beyond me, so I'm not going to explain those things. We might have a few other pending wishes, and of course all kinds of small things are being solved in the process, like the ability to change the output file name.

Okay, so how do I use MPlib? As already mentioned, I use it for charts and things like that. We use it for backgrounds, symbols, chemistry, flow charts, all kinds of things that are now integrated, and so far we could still fulfil all the wishes that we had.

So, when we were playing with the first version of the MetaPost library, you start looking for a real torture test. Some of you might have read in the user group journals about the punk font experiment we did; there was an article about that. The punk font is a font Don made long ago, and I think it was inspired by the work of Unger, people like that. What you see here is actually the punk font, and in the experiments we really generated the font at runtime, because it's random: you generate it at runtime. Of course you can do some caching there, but you're talking about something like 10 to 20 thousand characters per second of throughput that you need in order to get reasonable performance out of TeX. Once we had that, Khaled Hosny got involved; the name has come up a couple of times before. He is the guy who turned the STIX fonts into the proper XITS fonts, and also the guy who is going to complete the OpenType Euler font; that will be another project of the font project group.
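Returning briefly to the arbitrary-precision work: in the MetaPost 2 design this is exposed roughly as sketched below. This reflects my understanding of the planned interface, so treat the exact option and variable names as tentative:

    % Select a number system on the command line, e.g.:
    %   mpost -numbersystem=decimal figure.mp
    numberprecision := 100;   % work with 100 significant decimal digits
    show 2/3;                 % printed to the requested precision
    show 4096*4096;           % no overflow at the old scaled-arithmetic limit
    end.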
So, back to the punk font: Khaled said, well, why can't I make an OpenType font out of it? What happened is that he used MetaPost to produce the PostScript outlines and then fed those outlines into FontForge, and FontForge could make him a font. So that's yet another application of MetaPost, in font creation this time. Of course, you need an engine at the other end that can handle this kind of thing, and handling random fonts is not necessarily something that comes with every application. He was pretty sure that ours was probably the first tool that could provide this kind of randomization using OpenType features. So it's an OpenType font that uses the randomize feature.

And, because we have to use magic numbers: there are 32 variants in the font for all lowercase characters, 16 variants for all uppercase characters, and 8 variants for all remaining characters. Why these numbers? Well, it's the only way to convince Don to use a font like this; if you don't use those multiples of two, you won't get that. And Khaled is of Arabic origin, so I think he is seriously considering an Arabic version of punk, which would make sense because it's sort of a handwritten font and Arabic is handwritten, so who knows. And because he's also quite into math, who knows, maybe even a proper OpenType math punk font shows up. And then, I think, you can no longer deny that this font is there and you have to use it.

Okay. So in ConTeXt you do something like this: you choose a bodyfont, Punk Nova at 20 points, and then all these glyphs come out slightly different. Hopefully it's random enough that people don't notice there are only 32 variants of the lowercase letters. How is a variant picked? Randomly, just randomly: as soon as a glyph is needed, a random variant is chosen. I could explain the details, but it's a little bit involved. Technically it works like this: in OpenType there's a substitution table, basically an alternates table, part of the substitution mechanism. So there are 32 alternates of one character, and if you have the randomize feature on, then just before the typesetting you loop over the list and replace each glyph by one of its randomized alternates. It's actually a regular OpenType feature.

So these are 4096 copies of "Don Knuth". I got the impression that you can see hidden patterns in there, depending a bit on what scale you look at it. Is that due to the random number generator? Yeah, it could be, although I think it's probably due to the fact that it's "Don Knuth 1", "Don Knuth 2", and so on, so the strings become longer. But it could be. Anyway, the whole idea of this exercise was basically to convince Don and John to switch to LuaTeX and MPlib. So here you see the LuaTeX team, with Khaled in front.

Well, this is basically what I wanted to talk about, so if there are any questions: for the rand feature, for the 32 or 16 or 8 variants, did the variants have to be constrained, by width for example? No, there's no reason to constrain the characters, because in OpenType, say you apply the small caps feature: the small caps characters don't have the same width as the ones they replace. Or old-style numerals don't have the same width as the numerals they replace. It's the same here.
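For readers unfamiliar with that table: in AFDKO feature syntax, a rand feature built on alternate substitution looks roughly like this. The glyph names are invented and I have not checked Punk Nova's actual sources, so this is only an illustration of the mechanism:

    feature rand {
        # GSUB type 3: one-from-many alternate substitution.
        substitute a from [a.alt01 a.alt02 a.alt03 a.alt04];  # ... up to 32
        substitute A from [A.alt01 A.alt02];                  # ... up to 16
    } rand;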
To finish that answer: it's basically a rather generic thing. The whole typesetting, determining where you are going to break lines, comes later; first you do the replacement, and then the rest.

I think I know the answer to my question, but I just want to check: when you speak about integration, does that mean you can have some TeX counters and, when you make a MetaPost picture, use them? For example, I want to make as many circles as a counter says; is that correct? Well, that's normally not the way you would do it, because MetaPost can count by itself, but indeed, with the integration you can just pass the value to the MetaPost graphic. I would count in a different way, though. Thank you.

I think I once saw a page of text that started out as Computer Modern Roman in the first letter and ended up as Computer Modern Sans Serif at the very end, morphing with the MetaFont parameters the whole way through. Does anyone know where that came from, and does this sort of system make that easier to do? You mean changing the shape of a character? I don't know if I have an example of that; let me see. Well, Don, hold on a moment. I think I made this one for Wendy, and I don't know if Don ever saw it: when Don turned 64, I think, we made this one. We started with regular Computer Modern, and for each year we randomized the outline a bit more. It shows the age of the font. So you can go a long way in manipulating and changing; it doesn't turn into a sans serif, it turns into something completely different. By the way, with Computer Modern in general this is a bit of a problem, because of the way it's built; not everything is parameterized the same way.

No, I just wanted to say that the way Hans does it is much easier than the way I did it in my example, which was in the paper I wrote about the concept of a meta-font; it's in the Digital Typography book. We took Psalm 23, and we started with a very old-style font and ended up with a hyper-modern font with a large x-height and things like this. People said we should also have changed the translation: we should have started with King James English and ended up with something modern. And on the covers of Volumes A, B, C, D, and E we have examples of morphing: on one side it's sort of an Egyptian-style font, and it changes step by step, just to show the concept of parameters, of meta-ness, which was unfamiliar to font designers at the time. Now that you have these interactive systems, where things work with pipelines and everything is computed, it has all become a natural concept and fairly easy to do; but at that time, generating a font with 651 different characters in it was a bit of a nightmare.

But they could do it a long time ago, Don. This is what Karl had found on the internet; I'm not sure from what period it is. By the way, I have the microphone: I was wondering if anybody in the room was present when Unger gave his lecture on the day that punk was distributed. Was anybody here? There was a lecture at Stanford; it was the sixth of six lectures that Gerard Unger and his wife gave at Stanford. And the story is that in the first five lectures they had made the case that styles in type design lagged styles in the rest of the design world by ten years.
And so that morning, the morning of that lecture, I woke up and said: hmm, I get the point. So what was the style of fashion ten years ago today? And I looked back, and here it was: punk. So I spent the next seven hours at the terminal madly creating this font, and presented a sample of it that evening. I just wondered if anybody here in the room was there that day.

Back to your question: I think at the TUG conference in Dubna somebody presented something like that, because at that time multiple masters were the basic idea, and they tried to do all the parameter variations Don chose for his fonts, going through all the axes. You should find it in the proceedings of the TUG conference in Dubna. I guess that's all. Thank you.
|
MetaPost 2.000 is planned for release in the summer of 2010. This presentation is a short report on the project history and current status. MetaPost version 1.500 was released around BachoTeX 2010; in that release, all memory arrays have been replaced by dynamic memory allocation.
|
10.5446/30856 (DOI)
|
OK, just a quick comment on the last talk: the website was not Dead Reckonings; it's My Reckonings, so that's the actual website if you want to look it up.

Anyway, those websites were very pretty; my stuff is not nearly so pretty, but hopefully it'll be good anyway. Most of what I'll be talking about came out of some work I did for a professor who was writing a book and wanted me to manage his LaTeX macros. So these are the professors, and a lot of this was actually Andy: his ideas for the algorithms and how to lay things out in a good way. I basically just implemented it, which was harder than I thought it was going to be. I'll also preface this by saying that I have no idea what's going on in most of the LaTeX source code, and there are a lot of corner cases that I'm invariably missing. I'm also not familiar with ConTeXt; perhaps it already does what I'm talking about, I don't know. I intend to clean a lot of this up and release it soon, but not yet; maybe look for it in a month or so.

I'll start with a demo to show what I'm talking about. You can see the source up there, not too long or difficult. When we typeset this document, we have a long paragraph and a margin note at the end, and the margin note falls off the bottom. This is not something you really want. So how do we fix it? A typical first attempt is to put in a \vskip and see what happens. It wasn't enough; you have to know that the answer is 30 points. There we go. Or you could say: well, I'm using memoir, and memoir provides a nice \sidebar command, so let's try \sidebar. And I forgot to take off the \vskip, a common problem: you make these hand tweaks, then you change something else, and suddenly your hand tweaks are worthless. So here's the sidebar. It gets the job done, it doesn't fall off the bottom of the page, but of course it's no longer tied to where you called it out, and that's another downside. Ultimately, my solution is: include another package, and now it just works. So that's the motivation; the common workarounds don't really work so well.

What does \marginpar actually do? This is a little technical, and I'll try to keep the details down. Basically, when you call \marginpar, it stores the paragraph in a box and gets itself into the output routine, which is what LaTeX uses to build the page. In doing so, it knows where on the page you are; it keeps track of where the last margin note on the page ended, and it looks at where you want to put the new note. If the two would intersect, it pushes the new one down; otherwise it just attaches the note right where you are, off the side of the page, then and there, and goes on adding lines to the page until you reach the end.

The new approach that Andy thought of is: what if you save all the margin notes until the end of the page? Then you can make a more informed decision about where to put things. The basic goals are: all margin notes should fit on the page without intersecting; they should be as close as possible to where they were called out; and it's OK to float a margin note off to a later page if it's not going to fit. That last one is something that, as far as I'm aware, nobody else does.
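Since the demo source isn't reproduced here, this is a minimal reconstruction of the failure mode; the class, filler text, and the -30pt fix are stand-ins:

    \documentclass{memoir}
    \usepackage{lipsum}
    \begin{document}
    \lipsum[1] End of a long paragraph.%
    \marginpar{A long note called out near the bottom,
      which LaTeX lets run off the bottom edge.}
    % Typical hand fix, invalidated by any later reflow:
    % \marginpar{\vskip-30pt A long note ...}
    \end{document}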
So the strategy is: rather than attaching margin notes as soon as possible, we save them in a list. At the end of the page, when we're ready to build the whole page, we go through this list of saved notes, put each one where it wants to go, right next to its callout, push boxes down so they don't intersect, and then push them back up so they don't fall off the page. Not all that complicated. For the implementation, I use a token list, and basically I redefine the macro I use to store these things to mean various different things, so I can run through the list over and over with different meanings to accomplish the different steps.

Here's a demo of what it actually does. Suppose I've got a page with four margin notes of different lengths on it, rather long ones; some people would argue that's bad typesetting, but if an author wants to do it, so be it. We see there's an intersection here: three of these notes want to be in similar places, so we're going to have to push things down. I'll move this over and use it as a reference.

Now suppose we're getting to the end of the page and we're about to typeset it; basically, I need to fill up this column. I start with the first note. Note number one is 84 points, and I keep track of how much material I've got, so the total is 84 points. The page is 500 points long; that gives us a limit. Because I'm not attaching notes to the text (in LaTeX you attach something to a line and it sticks off to the side; here I'm filling a separate vertical box), I need to insert some space, so I put in a glue. And this is compressible glue: if we need to push notes up past it, we can squish it. So now I track not only the total height of the material but also of the glues, 142 points. We keep going: we add the second box right where it wants to go, and again there's a gap, so we add some glue. The third box intersects, so rather than adding glue, we push it down, and we see we're now off the page. That's OK, though, because the total length of the note material is only 428 points, less than the maximum. The fourth box, 96 points, is not going to fit, because that would put us at 524 points, so we leave it off: we put it on a deferred list and come back to it on the next page. The number-4 callout is of course still there, but most readers will realize the note isn't on this page and will keep looking.

So what now? First we went down; now we go back up. Internally I'm just running through this list, shuffling things around, pushing macros around, building the column forwards and backwards. We start with the 184-point box and put it at the bottom. Before, we saw there were 82 points of overflow, that's 582 minus 500, and we need to take that out of the glues for everything to fit. We add back the 160-point box; we can't compress a box. Now we come to a glue: it's 96 points, from where everything wanted to be; we take the 82 points of overflow out of it, leaving 14 points, and add everything else.
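As a language-neutral summary of those two passes, here is an illustrative sketch in Lua, not the package's actual TeX implementation; heights are in points and each note records the position of its callout:

    -- notes[i] = { want = callout position, ht = height }; sizes in points.
    local function push_up(placed, pagesize, parpush)
      local limit = pagesize                 -- lowest allowed bottom edge
      for i = #placed, 1, -1 do
        local n = placed[i]
        if n.top + n.ht > limit then
          n.top = limit - n.ht               -- shift up just enough to fit
        end
        limit = n.top - parpush              -- next note up must end above this
      end
    end

    local function place(notes, pagesize, parpush)
      local placed, deferred, bottom, used = {}, {}, 0, 0
      -- Pass 1, top down: push colliding notes down, defer what cannot fit.
      for _, n in ipairs(notes) do
        if used + n.ht > pagesize then
          deferred[#deferred + 1] = n        -- try again on the next page
        else
          n.top = math.max(n.want, bottom)   -- push down past earlier notes
          bottom = n.top + n.ht + parpush
          used = used + n.ht + parpush
          placed[#placed + 1] = n
        end
      end
      -- Pass 2, bottom up: absorb the overflow by compressing the gaps.
      push_up(placed, pagesize, parpush)
      return placed, deferred
    end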
So we end up with a nicely filled-out margin, with the notes close to where they want to be, and that's what it should look like. That's the demo.

Beyond this basic drop-in replacement for margin notes, a few other things are supported. We honor \marginparpush the same way LaTeX does: if you want five or ten points between margin notes, it inserts an incompressible space between them. \marginskip adds a skip to the margin, another incompressible skip beyond the \marginparpush, for when you want some blank space somewhere; a lot of this is built around adding blank space, because that's actually useful. \mparshift allows fine-tuning, though that shouldn't really be necessary. \extendmargin extends the margin the way \enlargethispage, I think, works for the page. \clearmargin is like \suppressfloats: stop putting margin notes on this page and defer everything to later pages. And the last few commands are kind of funny, but useful: if you have an oversized equation that sticks into the margin, you don't want margin notes to collide with it, so the package subdivides the margin into multiple sub-margins and fills each sequentially, which guarantees there's never any margin material inside these blocked-out spaces, margin phantoms as I've been calling them.

What's next? Floats. Now I get to the more speculative stuff. A lot of this came part and parcel with the margins; for one thing, they're closely related internally, but also, I had this one client who said "this is what I want," and so this is what I built, and I'm still sorting it out into something publicly usable in a more general way.

Margin notes are treated like floats, and even more than margins, floats are the cause of lots of confusion. For instance, show of hands: who could tell me, on the spot, what the bang does in the float specifier? OK. In case you're curious: I believe it turns off the checking of constraints like "you need this much text on the page" or "you can only have so many floats on a page," things like that. But most people turn to it in desperation: "I want this float here, and it's not here; why isn't it here? I really mean it." That's what the syntax seems to suggest, and it doesn't work when they use it that way. And the ordering of the letters turns out to be irrelevant, which is not obvious: you might think that putting h before t means "try here before top." So there are all sorts of things that could be improved. I don't have the answer, but I have some suggestions and experiments about how to try.

What does LaTeX actually do when you try to place a float, and in particular, why doesn't it end up where you want it? One possible reason is that you've got a float of the same type on the defer list that couldn't be placed earlier; now your new figure would fit, but LaTeX is not going to mix up your ordering and change your numbers, so it defers this one too. Or, as I mentioned, the bang helps when you have too many floats on the page, or not enough room, or not enough text, and so on. I'm really not going to claim my approach is better, but it's what Andy asked for, and it's what I put together.
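Going back to the bang for a moment, here is the usual form in which people meet it; the comment reflects my reading of what it actually does:

    % '!' relaxes the per-page float counts and area-fraction checks;
    % it does not mean "exactly here", and the order of h,t,b,p is ignored.
    \begin{figure}[!ht]
      \centering
      \includegraphics{diagram}% hypothetical graphic
      \caption{A figure allowed to exceed the usual top-area fraction.}
    \end{figure}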
Basically, we often don't care about the true order of the floats, and this is something I built as opt-in: you can choose to let a float be unordered, but by default floats stay ordered. As long as the printed numbers end up in the right order, that's what matters, and that requires going back and renumbering the floats. It takes a little black magic, but it works.

In addition, not specifying p can cause problems. If you have a float that you say should be "here, top, or bottom," and it doesn't work in any of those positions, then what does LaTeX do? It just holds on to it until you reach a \clearpage; certain commands cause the deferred floats to be flushed, but otherwise nothing is ever going to happen. Even if there's space for a float page, it's never going to make one, so you've got one float blocking things up, and if you're keeping strict order, everything else piles up behind it, and that causes problems. So in this case we just said: anything can go on a float page. And so what?

My last demo is a little more complicated. I have a paragraph, then I call out a large figure, seven inches tall, then a small figure, another paragraph, and then a margin note; while we're doing margins, we might as well look at that too. For the margin note I cheat by using the nonfloat package, and we see it already has the wrong number: it starts with figure three. What am I going to do about that? Moving on: where is my figure? This "here" float could have fit, but it ends up on its own float page, and so on.

So what do I do about this? I had all sorts of things I was going to muck around with in the source, but I've forgotten them all at this point. The main thing is to pull in this package and recompile. And what has happened? What's happening here is one of the stranger aspects of the package; I didn't list it before, but the specification was: if I say a float is going to be here, it's going to be here, darn it, and I'm not even going to bother putting it anywhere else. He actually wanted an "h" float to simply cut the page off and go in right now. This obviously has to come out before general consumption. So here's the result; but if I take off this h, that should fix it. And now, OK, great: this float is no longer causing problems. Again, we have the oddly numbered margin paragraph, and it looks just like what we had before, although, because I said anything can be on a float page, we're now ignoring the user's preference and inserting a float page anyway.

What's the point of all this? If I run it again, we at least have some improvement: now the floats are numbered properly, and that should be a good thing. But we had space for this "here" float; what if we want to move things around? I can allow figures to be disordered and run again, and here it is at the bottom of this page, despite the fact that I asked for the top; I'm not quite sure what's going on there. Nonetheless, it has flipped the order, and we now have slightly misnumbered floats. If I run again, and if you look at the output, there should be a little note somewhere, but I'm running in batch mode, so it's not even going to show me any warnings.
If you look in the log, it says there's a float out of order and you should run LaTeX again; and as I do that, the floats come out in the right order. What do you do with margins? If it's a left margin, you basically run the margin first and then set the main column; otherwise it would be very difficult to figure out vertically where everything goes. Or I think it would be difficult, anyway. That's all I've got for the demo.

So how do we renumber floats? The basic strategy: when the user creates a float, you keep track of the order it was made in; when you actually place it on the page, you keep track of what number it should be then. We store that in the .aux file and check for a difference, and if there is one, we emit a warning. That also lets us do the forward numbering; how I got to figure three on the first page was by using information in the .aux file, I think. The current implementation is done by changing the default float behaviour in some very strange ways, and it's a bit of a mess, with extra macros to allow a float to be disordered, and so on.

There are other features people sometimes want: "I want this float on an even page," "I want this float on an odd page." One way to reconcile all this more generally, I think, would be to overhaul the whole float-specifier business. If you let each letter, or even a control sequence, in the specifier map to a control sequence, something like \immediateH, \outputH, \newpageH, to be run as a hook at various times, the float routine would just run through the list and execute each of them. Maybe a hook could say "OK, you can put the float here," or "no, you can't; stop running the list," and so on. That would allow a lot of improvements: you could make the order matter if you want, and you could hook in extra specifiers. People are probably familiar with the capital H that often gets added to say "I really want this here, don't even bother putting it anywhere else"; of course, that's not a real float placement specifier at all, it just circumvents the whole float routine. I've experimented a little with this, but I haven't gotten very far, and I would love to have some conversations about it. And that's all I've got.

In the back? Hi. Very nice. I've written a couple of math books where I use marginal notes on virtually every page. When you do something like that, what you want is the marginal notes on the even-numbered pages on the left and on the odd-numbered pages on the right. But in, say, a 500-page book with consistent marginal notes, you'll find that maybe two or three times LaTeX puts the marginal note on the wrong side; I don't know if you've encountered that. It happens where the note comes near a page break, and it's extremely frustrating. You can fix it with \reversemarginpar, but then if you write something new, it's in the wrong place again. There's a package called mparhack that fixes it. My question is: in your package, have you fixed it? And if not, does it work with mparhack? That's one I was actually aware of, but I wasn't familiar with the issue it was addressing; or rather, I've forgotten about it.
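To make the hook idea slightly more tangible, here is the kind of mapping I have in mind; every name below is hypothetical, since this proposal isn't implemented:

    % Sketch only: each specifier letter expands to a handler macro that the
    % float-placement routine runs in turn; handlers can accept, veto, or act.
    \def\floatspec@h{\float@try@here}        % h: try at the callout
    \def\floatspec@t{\float@try@top}         % t: try the top area
    \def\floatspec@bang{\float@relax@limits} % !: drop count/fraction checks
    \def\floatspec@even{\float@defer@to@even@page} % a new, pluggable specifier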
To answer the mparhack part: I'm not compatible with mparhack, because I replace it completely. The way I build margins is: I make a vertical box, I put the notes in it, and then I butt it up against the page on the correct side. So basically, if a note is on the page, it's on the correct side.

I think there's a user community that would expect exactly what you are doing. There's a group around Tufte-LaTeX: they make a class that resembles the books of Edward Tufte, with lots of margin notes, margin figures, and so on. They have a discussion group, hosted on Google Groups; I subscribe to their list, and they talk about margins and margin problems a lot. If you contact them, they'll probably tell you a lot of user stories and comments you'd find interesting. Yes, thank you.

I'm curious about, not so much the float aspect, but the margin par itself. It is a paragraph, so ideally it should be able to cross over to the next page. Have you looked at that problem, or are you aware of anyone who has solved it? If you want to mimic, for example, traditional scribal books, the margins flow from page to page just as the main text block does. Has that been solved, either in your package or in the larger TeX universe, as far as you're aware? I can't speak for the larger TeX universe. I haven't thought about it myself, but it's certainly tractable.

And the numbering? No, the main text stays; the annotation moves, and then there's a number on this page with no note corresponding to it. It's sort of like "figure one": figure one might not be on this page, and you have to look forward or backward a little to find it. With the margin notes, it's forward: you know your margin note number four went on to the next page.

So I thought that note was going to get squeezed; it wanted to be on the left, on an even-numbered page, so it goes to the next page. Won't it end up on the wrong side? No. I don't treat margin notes as boxes attached to one side of the page or the other; each margin is its own little box, this box here is what I actually store, and I can push that box into whatever column I want.

What about two columns of marginal notes, on the right and on the left? I don't address that. Is there a good use case? If there is, I'd like to see it. All Renaissance books have them all over the place. Yeah, I imagine it'd be as simple as saying "this should be left material" or "this should be right material," specified ahead of time. It's not deciding on the fly which side a note goes to; if you're halfway across the line, that's a tough one. It would be as easy as adding a second column if you can tell me ahead of time which one a note goes in.

If you have a really horribly long margin note, longer than the page, what do you do? I believe my default is to just put it on the page anyway. It might be that that's what \extendmargin is for; I've forgotten. There was actually one margin note in this book that was too long, and we had to deal with it that way; at one point it was just never getting printed. I suppose we could shrink them.
And I guess we could spend many, many nights talking about the various approaches; it looks like very interesting stuff you presented there. One question I'm curious about: do you handle stretch? In other words, if you have a style that spreads the text out from the top of the page to the bottom, so some of the glue on the page stretches the paragraphs apart, do your margin pars move with the callouts accordingly, or do they just wander off because you don't take that into account? No, they're definitely going to wander off. I'd be interested in seeing how that would work; that's one of the tricky aspects if you want to decouple this kind of thing. In book styles you'd probably need to actually keep track of the glue setting on the page. Yeah. But a lot of nice ideas in there. And yes, definitely, the whole float business was broken in LaTeX 2.09, it is still broken in 2e, and it will stay broken at the 2e level other than via packages; I don't expect a change in the 2e kernel on that. But there's a lot of room for improvement, that's for sure. OK. Thanks so much.
|
Authors using LaTeX to typeset books with significant margin material often run into the problem of long notes running off the bottom of the page. A typical workaround is to insert vertical shifts by hand, but this is a tedious process that is invalidated when pagination changes. Another workaround is memoir’s sidebar function, but this can be unsatisfying for short textual notes, and standard marginpars cannot be mixed with sidebars. I will discuss a solution I put together to make margin pars “just work” by keeping a list of floating inserts and placing them intelligently in the output routine. Time permitting, I will also discuss some thoughts on improving LaTeX’s float placement specifiers.
|
10.5446/30857 (DOI)
|
Yes, I want to tell you about a vector graphics language that I think some of you have heard of, called Asymptote. Recently it has been extended to 3D with lots of new features, and I want to tell you about those features and a little about the history of the project. The work in this talk was done in collaboration with Andy Hammerlindl; Orest Shardt, who wrote the 3D PDF driver; and Michele, who has made recent enhancements to that driver. Much of this work was done at the University of Alberta in Canada, but also around the world: there are many active developers busy on this in France, in Germany, and in Brazil. Andy is now in Rio, lucky guy.

Some history: Asymptote was very strongly inspired by TeX and MetaFont, in particular by the idea of a path. We make direct use of John Hobby's control-point selection algorithms, and of Knuth's concept of a path. TeX dates back to 1979; the second version of MetaFont, which has the control-point algorithms I'm talking about, came in 1986. Then in '89, Hobby realized you could take MetaFont and, instead of producing font data, which was vector data by then, no longer bitmaps, actually produce PostScript with it, which made a lot of sense. The only trouble was that MetaPost, being part of the TeX family, was limited to fixed-point arithmetic. That was a perfectly reasonable decision back then, before the IEEE floating-point standard was settled; it was around 1985 or '86 that it was finally standardized. Because Asymptote is a more modern implementation of some of the same ideas, we were able to build on the IEEE numeric format, which has made it more numerically robust. We've also tried to improve some of the algorithms for, say, intersection and arc length, operations you want to perform on a path. A lot of work has gone into making these numerically robust, so there are fewer surprises when working with Asymptote.

But the biggest thing is the generalization to three dimensions: native 3D support is built into Asymptote. The first thing we had to do was generalize Hobby's algorithms, and I'll talk a little about that today; since John Hobby and Donald Knuth are both here, I think it might be interesting to see my take on how to do this generalization. There is some arbitrariness, but what we came up with seems like a reasonable solution. Then I want to show you how you can now lift TeX to 3D and embed it within PDF files so it's fully interactive: you can have TeX labels that always face the camera, if you like, or labels embedded into the figure.

Asymptote is portable; it runs on all the major platforms. The current statistics are about 4,000 downloads per month from the primary site, and many distributions package it as well, so we really have no idea how many users we have; it seems to be quite popular. Some statistics on the code: it's about half written in C++, and the higher-level material, most of the detailed graphical code, is written in Asymptote's own language, which is implemented at the C++ level. This is very much like Emacs having a Lisp layer in which some of the Emacs code itself is implemented.
This makes it easier for the user to go and modify that code; it's more difficult to modify the C++ code. The latest stable release is version 2.00, and we expect that it, or possibly 2.01, will be in the TeX Live 2010 distribution.

Now a quick tutorial on Asymptote, and then I'll get to the 3D material. This is 2D Asymptote, the original version we released in 2004. There are four principal commands, draw, label, fill, and clip, and these translate into PostScript operations. Following very much the spirit of MetaPost, but with a C++-like syntax, a drawing command looks like a function call. The units here are PostScript big points, which are an approximation of the real TeX point. The dash-dash means "join these two points with a linear segment" to create a path, and you can also have a cyclic path, in which case you don't need to repeat the final point.

Here's what's different from MetaPost, though: I find PostScript units pretty inconvenient. As a mathematician, I like to work on a unit square or something like that. So rather than hard-coding numbers like (100,0), I can draw the unit square in my preferred user coordinate system, and afterwards tell Asymptote that, because of the constraints of wherever I want to publish this wonderful figure, it should be at most 100 big points wide and 100 high; or you can specify the size in centimeters if you like. There's also a built-in path for this, so you can just write unitsquare, and you can transform that unit square if you want a box, scale it, and so on.

You can also add labels, the second of the four commands, and it's very easy: you don't have to do any of the btex...etex business anymore. You just give your TeX string, perhaps in math mode, and place it using compass directions, or an arbitrary vector if you want some other angle relative to the point. So this alignment says: align this label to the southwest of the origin, et cetera.

Now, to draw a curve, we follow MetaPost very closely: you use dot-dot, and you may specify two control points. This is a cubic Bezier spline with nodes z0 and z1; the control points determine the outgoing and incoming tangents, and how far along they lie says something about the velocity along the path. This is the actual curve, the blue curve being drawn here. As you know, cubic Bezier splines are very useful; they render very fast because of the midpoint subdivision property, so they're widely used in computer graphics.

The question is what to do if you don't know the control points: you only know the nodes, and you want control points that give a pleasing-looking curve. There are many ways to do this, but what Hobby and Knuth did in '86, in Discrete and Computational Geometry (that was Hobby's paper), was to argue that what you should try to minimize is the curvature.
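Pulling the commands so far into one small example, a sketch in the spirit of the tutorial slides rather than a copy of them:

    size(100,100);                       // final figure at most 100bp x 100bp
    draw(unitsquare);                    // built-in cyclic path
    fill(shift(0.25,0.25)*scale(0.5)*unitsquare, lightblue);
    label("$O$", (0,0), SW);             // TeX label, southwest of the origin
    draw((0,0)..controls (0.3,1) and (0.7,1)..(1,0), blue); // explicit controls
    draw((0,0){up}..(0.5,0.9)..(1,0){down}, red);  // Hobby picks the controls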
Well, strictly speaking, he didn't minimize the true curvature; it was a mock curvature, something easier to compute, and he came up with a tridiagonal system of equations to solve. I'll tell you a little more about that later. But in the end, what the algorithm does is compute the missing control points so that you get a pleasing-looking curve.

Now the two remaining operations, filling and clipping. Here's a star I've drawn; you can see the language has a C++-like syntax, and you might want to fill the star with the zero-winding PostScript rule or with the even-odd fill rule: you can have different definitions of "insideness," according to PostScript. You can also specify a region to fill by giving two paths, which is the PostScript concept of a superpath, something lacking in MetaPost. A picture can also be clipped, just as in MetaPost, and you can apply affine transformations to objects: to pairs, paths, pens, strings, and even whole pictures. I should add that you can also apply them to triples, triples being the 3D analogue of pairs, but here I'm talking about the 2D case.

To give you an idea of the sorts of things people are doing with this, there are modules for Feynman diagrams, data structures, algebraic knot theory, and scientific graphs with secondary axes and all kinds of features: logarithmic axes, images, contours, multiple graphs. This example was computed at a specified size, and everything had to fit; the lengths of these lines were computed by a somewhat involved algorithm, actually using the simplex method. Given the size, it had to figure out how to fit everything in, and you see different line lengths based on the lengths of the text labels. Asymptote communicates with TeX to find out the lengths of these strings: it opens a pipe to TeX and queries it for the width, height, and depth of the TeX box in question. Of course Asymptote doesn't know anything about the fonts, but that's enough information for the preliminary pass; and in the final pass, at least in two dimensions, all Asymptote really does is produce a LaTeX file for you and run it through LaTeX, or through your favourite engine: XeTeX, ConTeXt, and plain TeX are supported as well. In 3D the situation is different, because I'm not aware of a 3D version of TeX, so we had to work around that.

But first, a little about Hobby's 2D algorithm for picking the directions. If you're given three points z(k-1), z(k), and z(k+1) and you want a nice smooth path through them, you don't even know what the directions of the path should be at those nodes, the tangent lines. Hobby's prescription involves the incoming angle phi and the outgoing angle theta at each node, both measured relative to the straight line between the adjacent nodes; essentially, the angular deviation of the curve from the straight line.
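For the fill and clip operations just mentioned, here is a compact sketch close to the manual's star example, illustrating the even-odd rule and a two-path superpath:

    size(100);
    path star;
    for(int i = 0; i < 5; ++i)
      star = star -- dir(90 + 144i);         // five-pointed star via unit vectors
    star = star -- cycle;
    fill(shift(2.5,0)*star, evenodd + yellow); // even-odd, not zero-winding
    // Superpath: two concentric circles fill as an annulus under even-odd.
    fill(unitcircle ^^ (scale(0.5)*unitcircle), evenodd + lightblue);
    clip(box((-1.2,-1.2), (3.7,1.2)));       // clip the whole current picture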
Back to Hobby's construction: those angles should be related by the equations shown here, where l(k) and l(k+1) are the lengths of the straight line segments. That's essentially what his formula boils down to. You can modify it with optional tension parameters and curl boundary conditions, but this is the default; and of course it's only applied to directions you haven't specified yourself. You can certainly override these computed directions, with almost the same syntax as MetaPost.

Once you have the directions, you still have to find the control points: you know the line on which each control point lies, but not how far along it lies. This is the formula he used to find the two control points that define the blue curve here. The function f is a complicated, messy function given on page 131 of the MetaFont book; I refer you to that, or to Hobby's paper. By the way, notice the exponentials appearing here: just as in MetaPost, we treat pairs as complex numbers. That may be surprising at first, but these algorithms show why it's useful.

So how do we do the 3D generalization? We exploit the fact that Hobby's algorithm is always applied to a consecutive list of three nodes, and through any three nodes there is a plane, except in degenerate cases, which I won't discuss here; you can always find some plane anyway. What we do is apply Hobby's algorithm piecewise on these planes, computing exactly what he would have computed on each plane. This guarantees that our 3D algorithm reduces to the expected result in the 2D case, an important constraint, because we're generalizing 2D behaviour. The only ambiguity that can arise is in the overall sign of the angle: in a 2D plane you don't think about the fact that you could view it from below or from above, and that causes orientation confusion. To sort it out you need a reference vector, and we construct one, somewhat arbitrarily, based on the mean unit normal of successive segments; that's discussed in the papers, and I'll give the reference at the end. By the way, this lecture is downloadable from the Asymptote website on SourceForge; just click on "Lecture" if you want to follow along now.

For the 3D control-point algorithm, we had to re-express Hobby's formula slightly. He has the vector z1 minus z0 combined with the unit direction e^(i theta); if you take the length of that vector and combine it with the direction, you can rewrite the formula in terms of the length. Having done that, it became natural to interpret theta and phi simply as the angles between the corresponding path direction vectors and z1 minus z0; you get each angle from a dot product and an arccos. And here, fortunately, there is an unambiguous reference vector for determining the relative sign of the angles phi and theta.
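For reference, the 2D formulas being generalized here, as I reconstruct them from the cited sources (Hobby's paper and p. 131 of the MetaFont book; consult those for the authoritative statement), give the control points of the segment from z0 to z1 as

    u = z_0 + f(\theta,\phi)\, e^{i\theta} (z_1 - z_0), \qquad
    v = z_1 - f(\phi,\theta)\, e^{-i\phi} (z_1 - z_0),

with the relative "velocity" function

    f(\theta,\phi) =
      \frac{2 + \sqrt{2}\,(\sin\theta - \tfrac{1}{16}\sin\phi)
                         (\sin\phi   - \tfrac{1}{16}\sin\theta)
                         (\cos\theta - \cos\phi)}
           {3\,\bigl(1 + \tfrac{1}{2}(\sqrt{5}-1)\cos\theta
                       + \tfrac{1}{2}(3-\sqrt{5})\cos\phi\bigr)}.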
So we don't have an orientation issue in the second stage of Hobby's algorithm, where we figure out how far along the tangent direction the control points should be placed.

Okay, now some results. Our first test of the new 3D algorithm: we try to draw a circle. Well, it's not really a circle, but an approximation valid to something like 0.05%, which is certainly good enough for graphics, following MetaPost. You click on the figure to load the graphic, and there it is in three dimensions; we can zoom in. That's pretty neat, and this is just a PDF file, so you can download it and do this yourself, but you need version 9 of Adobe Reader. There is no other tool out there yet for reading these 3D PDF files, although we provide enough tools with Asymptote: when we learned how to write the 3D PDFs, we first learned how to read them, so we do have the pieces that somebody could take and hook up to their favourite OpenGL renderer. And here is what happens if you lift two of the four nodes the circle is drawn through: I get a saddle, and you can zoom in on it here.

Okay, that's enough of that. What about TeX; how did I lift TeX to 3D? A student of mine, Orest Shardt, did a wonderful job figuring out how to use the outline information that Ghostscript reports when it processes the output of DVIPS. We get the font outlines, but we had to fill them, and that turns out to be a difficult problem because there's no fill operation in 3D. You can, however, draw Bezier surface patches; a patch has four corner nodes and a total of 16 control points. So we had to split the glyph regions into curved patches. You could think of it as triangulation, but with curved pieces, so we call it "bezulation," kind of a strange word that's going to enter the dictionary.

Just to show that we really have TeX in 3D, nice and smooth and truly vector graphics, let me zoom in on something here: you can see beautiful, smooth glyphs that you can zoom in on arbitrarily. Now, why would one do this? Just because it looks cool? The real reason is that I do a lot of scientific computing in fluid dynamics; I draw a lot of 3D graphs that need labels, sometimes Greek labels, and I want nice-looking fonts. In the 3D graphics world, most of the fonts you see are bitmapped or polygonal, and they look horrible.

Plus, you can do features like this. Here's the difference between a billboard label and an embedded one: look at the billboard label on this cube compared to the embedded one. You see the problem: if you draw a 3D graph, at some viewing angles you can't read embedded labels anymore. So the billboard behaviour, where the label always faces the camera, is a very useful feature, and it's an option.
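A sketch of both demonstrations in Asymptote's three module, the lifted-circle saddle and the two label modes; I believe Billboard and Embedded are matched by type here, but check the current manual for the exact label signature:

    import three;
    size(200);
    // Four-node closed spline: a near-circle; raising two nodes gives a saddle.
    path3 saddle = (1,0,0)..(0,1,1)..(-1,0,0)..(0,-1,1)..cycle;
    draw(saddle, blue);
    label("$O$", O, align=N, Billboard);   // always turns to face the camera
    label("$x$", (1.2,0,0), Embedded);     // fixed orientation in the scene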
You can do anything you want. Here's an example of a 3D graph, showing the very smooth surfaces one can draw, with the labels all produced by TeX. And here are some arrows; I think this blue arrow actually comes from a TeX glyph that we managed to lift into 3D, which was kind of fun. Then we have a rotational arrow and a planar arrow. Sometimes, if a drawing is eventually going to be printed in 2D, people prefer the arrowhead to be planar rather than a cone; the only disadvantage of the planar arrowhead is what happens when you rotate the view and it ends up edge-on, whereas the cone doesn't have that problem. There are all kinds of issues to think about.

This presentation itself was prepared with Asymptote, using the slide presentation module. One advantage of that module is that you can embed these 3D PDFs directly, and you can also have high-resolution PDF movies, which are not lossy, unlike MPEG movies. That's kind of neat too.

Finally, a bit about the sizing, the automatic picture sizing I mentioned. When you publish in a journal, there are usually restrictions on how much space you can take, and yet some parts of your diagram are fixed: fonts, which the journal may require to be 12 point; line widths; perhaps arrowheads. We can decide which things are fixed and which aren't, and then, depending on the space the journal gives you, whether they're really skimpy or give you more room, you want to draw the best figure you can to fill it. This is not a completely trivial problem. It's linear, fortunately, but it does require deferral: you cannot draw anything until you actually know the scaling, until you have all the information. Everybody experiences this when graphing several curves: you don't know where to plot them until you have them all in memory, and then you can plot them. It's the same issue here.

Asymptote has higher-order functions; you can pass functions around as objects. In fact, functions and variables are treated on an equal footing, which is very nice, and it still allows overloading of functions and variables. So this is the deferred drawing. I can't take the time to explain everything going on here, but this is the queuing step: it says, add to the picture a routine that's going to draw onto a frame, returning nothing, a frame being like a canvas for drawing, using a particular scaling. The user's path, given in user space, first has to be scaled by multiplying by this transformation on the left; but I won't do that until later. I'm just saying: keep that around for later. And Asymptote has to keep track of the difference between user coordinates and true-size coordinates. For example, when you draw a path, that path has some thickness; presumably your pen has a width.
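The deferred-drawing idiom described here looks roughly like the following (compare the "deferred drawing" discussion in the Asymptote manual); the queued callback only runs once the user-to-PostScript transform t is known:

    picture pic;
    path p = (0,0)--(1,1);
    pic.add(new void(frame f, transform t) {
      // The path scales with t; the 1bp pen width does not.
      draw(f, t*p, red + 1bp);
    });
    size(pic, 100, 100);   // now the scaling is known...
    shipout(pic);          // ...and the queued routine finally draws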
Back to the coordinates: although the path will scale, the pen width won't, and you've got to keep track of all that, because it affects the bounding box of the final object. So we keep track of these two kinds of coordinates, scalable user coordinates and true-size coordinates. And as I said before, we communicate with LaTeX, or whichever TeX engine you're using, via a pipe to determine the label sizes. That's how we're able to put that box around that wonderful formula. So there are constraints we have to satisfy, and we use the simplex method to resolve them. These are just linear inequalities, fortunately, so it's a straightforward linear programming problem. You typically get a lot of constraints, but you only keep the maximal ones; there's often a dominant constraint that dominates the others. Here's an example of how deferred drawing is very useful. Suppose I want to draw a picture like this, but then I want to draw these infinite lines here, through points P and Q, for example, out to the boundary of the picture. I don't know the boundary until I've done the drawing, right? So that's where it's useful. This comes up in elliptic curve cryptography. And there are a couple of things we decided were useful. Integer division returning an integer is a nuisance in a lot of languages; why not make it return a real? That saves you a lot of bugs. It's happened to everybody so many times. In the few cases where you really do want integer division, there's a function for it. We use a caret for exponentiation. And there's implicit scaling of numeric constants, which is sometimes convenient when writing 2pi or 10cm, for example. And as I mentioned, pairs are complex numbers, so that multiplication is a true statement. Function calls are very nice because, unlike C, the default arguments can be in any position. We figured out a way of doing that which uses the types to resolve any ambiguities. So, for example, if you say drawellipse(2) here, the 2 matches the first argument, the xsize. No pen was specified, so it uses the default, which is blue. So it draws a big ellipse; of course, it's a circle, because ysize is automatically set to xsize, so it's also 2. That's what this is doing: drawing a unit circle and scaling it by xsize and by ysize, both 2 here. But here they're both 1 and the pen is red, so we get a small red circle, et cetera. And this is another example where we're getting ellipses with different sizes; it's the same idea. And these are the higher-order functions I mentioned, which we can pass to, say, a graphing routine; in the C language this is often a nuisance. So in summary: Asymptote uses IEEE floating-point numerics and a C++-like syntax. It supports deferred drawing for automatic picture sizing. It supports a number of popular color spaces, PostScript shading, pattern fills, and function shading, which is only available in PDF; it's a PDF feature. It can fill non-simply-connected regions. Very importantly, it generalizes the MetaPost path construction algorithms to 3D, to curves and also surfaces. And it lifts TeX to 3D; as I showed you, it supports 3D billboard labels. I didn't show you the PDF grouping.
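Stepping back for a moment to the default-argument ellipse example described above, here is a reconstruction; the name drawellipse is my guess at the slide's code:

  // Defaults may be omitted in any position; argument types disambiguate.
  void drawellipse(real xsize=1, real ysize=xsize, pen p=blue) {
    draw(xscale(xsize)*yscale(ysize)*unitcircle, p);
  }

  drawellipse(2);      // xsize=ysize=2, default blue pen: a big circle
  drawellipse(red);    // the pen matches by type; both sizes default to 1
  drawellipse(2,1);    // a genuine ellipse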
But maybe I can just go back to an example and show you. Why don't we take this one here? So, if I can find it again. There we are. Oops; no, I'm not touching the touchpad, it's just that Adobe is a little slow sometimes. Okay. Now what I'm trying to do here is show the model tree. It also doesn't help that the podium here is slanted, so my mouse slides down. I'll just have to open it up here, and you'll have to bear with me; there's not very much space. But you have a tree here. I'm trying to make this bigger. So let's go down to the bottom one. First of all, let's just see what that is, turning it on and off. Okay, that's an axis there; I'm turning it on and off. And we can go in to see what's inside that axis. There's a curve; there's the curve. And there's a tick, right there, turning on and off. And there are some labels; there we are, I turned those two on and off. And how about the axis label, turning on and off; you can see it actually tells you what the text string is there. So that's a new feature in the recent versions of Asymptote. That's all I have to say. Thank you very much. Questions, please. Did I understand correctly that the output is PDF? The question is whether the output is PDF. It can produce PDF; the default is actually EPS, PostScript. And it can also produce scalable vector graphics, SVG. So those are the various possibilities. Is it an interpreted language or compiled? It actually is compiled; there's a virtual machine running. But it gives you the feel of an interpreted language, because you can type in a command and it will execute it and give you the result. It's compiled, and it's done in a very clever way, not by me but by my students; I have some excellent students who wrote the virtual machine. And you said you solve something by the simplex method, but that doesn't give you a unique solution; what did you mean? Most of the time it does. The times when it doesn't are when you've specified an ill-posed problem. An example would be drawing just a horizontal line: because it's horizontal it has no height, only a pen width, yet you specify that you want the diagram to be one centimeter high. We're talking about different things: you had a bunch of linear inequalities, so in general the feasible set is some convex polyhedron. Which point of that polyhedron is the solution? Usually you're minimizing or maximizing something. The solution will be a vertex of that polyhedron; we are basically trying to maximize the picture size, and I think that's the objective function. Maybe it's something we should talk about afterwards. Any other questions? Just overwhelmed? Okay, thank you.
|
Asymptote is a powerful descriptive vector graphics language for technical drawing recently developed at the University of Alberta. It attempts to do for figures what (La)TeX does for equations. In contrast to MetaPost, Asymptote features robust floating-point numerics, high-order functions, and a C++/Java-like syntax. It uses the simplex linear programming method to resolve overall size constraints for fixed-sized and scalable objects. Asymptote understands affine transformations and uses complex multiplication to rotate vectors. Labels and equations are typeset with TeX, for professional quality and overall document consistency. The feature of Asymptote that has caused the greatest excitement in the mathematical typesetting community is the ability to generate and embed inline interactive 3D vector illustrations within PDF files, using Adobe's highly compressed PRC format, which can describe smooth surfaces and curves without polygonal tessellation. Three-dimensional output can also be viewed directly with Asymptote's native OpenGL-based renderer. Asymptote thus provides the scientific community with a self-contained and powerful TeX-aware facility for generating portable interactive three-dimensional vector graphics.
|
10.5446/30859 (DOI)
|
This is kind of a historical talk. It has a little bit of technical content, but not a lot. It's mostly about boxes.mp, but I am going to talk to some degree about the motivation behind the rest of MetaPost. And you'll see that I'm mostly concerned with what happened back in the 1990s, though I will also mention a few ideas for the future; I just don't have anything I'd actually be able to distribute. So the first step is some discussion of motivation. When I did MetaPost I was very much concerned about the prior art, and to me the prior art was actually pic, from the troff tools. So there'll be some comparison here of MetaPost and the MetaPost macro packages to what was in troff. Boxes.mp itself is kind of an orphan, in that some of the motivation behind the rest of MetaPost doesn't really apply to it. So to some extent this is an apology for something that didn't quite fit in. Most people would use an interactive method to do what the boxes macros do, and I'm afraid I'll have to agree that's probably best, although, as you'll see, I would hope it can be done more automatically. Okay. The motivation behind MetaPost itself was, of course, that I wanted a tool for mathematical diagrams. I was very aware of Van Wyk's Ideal, and I had hoped to use it. I hear that it wasn't really available to the rest of the world, but being at Bell Labs, it was available to me, and I seriously considered using it. But I was aware of all the power in MetaFont 84, the current MetaFont we use, and although Ideal can do some things that MetaFont can't, it was much more attractive to retarget MetaFont, which is what I did, rather than try to do everything with Ideal. I'll show you why. I was also concerned about the existing macros in troff for producing pictures, which were called pic. Pic is not something anybody would use anymore, but it was part of my motivation, so I'll show you what was going on there. Ideal dates to 1979. As I said, I seriously considered using it. Ideal has something called a box, which is basically a collection of drawing commands. The boxes do have boundaries, and they can be reused and repositioned, and affine transformations can be applied. So you see a lot of what is in MetaFont, but this is very much motivated by MetaFont 79, and these drawing commands didn't produce paths that were first-class objects. Interestingly, the equation-solving mechanism in Ideal is more general than what was in MetaFont and MetaPost. So here is an interesting view of history; it's sort of my view of history. The original paper on MetaPost was way back in 1989, and at that time the MetaFont-like system with PostScript output that it described was just vaporware, because I didn't start programming until 1990. And as you can see from this diagram, it took five years before it became really available to the rest of the world. Boxes.mp was the first real macro package outside of the plain macros, and it appeared pretty early in this development. One feature of boxes is that it's possible to extend it by adding different types of boxes. I actually did that once and created a not-so-well-known macro package, rboxes.mp, which is simply boxes with an additional option for rounded corners. And here you see that most of the features I came up with showed up. Well, here's the biggest one: it took a while to get the graph-drawing package going, since it required changes to the underlying language. And that's sort of the last major thing that happened.
And all this time, a whole year and a half, was how long it took to get permission to make MetaPost available to the public. It's not easy to get software out of Bell Labs. So here's the first aspect of comparison. This picture on the right is, I think, familiar: it's from the manual, although I have redone it with larger fonts so you can see it. I didn't want to redo the corresponding picture in the pic manual; basically, I don't want to touch pic anymore. But I don't think anybody has seen this side-by-side comparison, which was actually part of my motivation. In particular, the splines that were popular when pic came out were quadratic B-splines, which I don't think look nearly as good as what we have in MetaFont. Also, you probably can't see it too well, but these symbols up here don't have any mathematics in them; it just says n1, d1, et cetera. So a big addition is the ability to put math in the labels. I also changed the arrowheads, but I'll have to confess I made them more complicated. So this thing got noticeably longer when I converted it to MetaPost. Here's the first example of a diagram of the type one can do with the boxes macro package, and here's how the same thing is actually done in pic. It's actually simpler in pic, mainly because pic has more built-in knowledge: it kind of knows that you want these boxes to be aligned and that arrows go from one box to the next, so you don't have to say where things go. Of course, that means it has a bit less power. Also, this picture illustrates that the original troff tools were much more pipeline-oriented. That was the motivation for the funny way that btex ... etex is implemented, where it goes off and runs TeX, or LaTeX, on the side. What it basically is, is a sideways pipeline: instead of running a preprocessor that inserts the equivalent MetaPost commands for the TeX code, it puts that extra material in an MPX file on the side, but everything else behaves as if it were a preprocessor. Okay. So let me say a little more about what, technically, I wanted to copy. Pic has boxes: rectangular boxes, circles, ellipses. The rectangular boxes became boxit. The circle and ellipse were replaced by circleit, which does not actually create ellipses; it creates what I called ovals, approximate superellipses of the type we had in mind when we built the MetaPost paths. And MetaPost already had path syntax that was more general than what pic had. The boxjoin macro was created to make it easy to have boxes join up like they do in this picture, without a bunch of complicated commands in between explaining the relationship. This doesn't show it, but you can of course put labels on these connecting lines, and that was much easier to do in the troff version of this tool. The notion that you can talk about the corners of a box, et cetera, is very old; I didn't invent that. Dashed lines, of course, have been around for a long time, although what we have in MetaPost is more general because you can specify arbitrary patterns. We got rid of quadratic B-splines. And the text now is run through TeX or LaTeX, so it can be more general. So let me go on to talk a little about interactive tools, which I'll have to admit are probably the best way to do box diagrams like this.
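For reference, a tiny figure in the boxes.mp style just described might read as follows; this is my own sketch, and the 20bp gap equation is just one way to use boxjoin:

  input boxes
  beginfig(1);
    boxjoin(b.w - a.e = (20,0));   % each box 20bp to the right of the previous
    boxit.first("start");
    boxit.second("compute");
    circleit.final("stop");        % an oval, not a true ellipse
    drawboxed(first, second, final);
    drawarrow first.e -- second.w;
    drawarrow second.e -- final.w;
  endfig;
  end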
The reason is that you don't really care exactly what coordinates these ovals and boxes are at, so having to write equations for them is, I'll admit, a little unnatural. Of course, if you do it interactively you have to be careful to avoid problems like the one I've illustrated here, where the lines don't line up and they're not quite vertical. There are lots of interactive tools that allow you to avoid that, yet I have seen lots of presentations where things don't quite line up. So yes, it can be done, but it's something you have to be careful about. One of my big motivations for doing MetaPost was that you have everything written down in a file, and if you want to make a change, say completely change the margins, you can rescale your picture in a way much more sophisticated than purely geometric rescaling. That's harder if you work interactively: you have to have some kind of saved form, and then you basically lose some of the motivation you had for placing things where they went. And then there are node labels. I of course wanted the node labels to match the document, and that's always a little tricky. So clearly an option, and maybe the best option, is to use an interactive tool that outputs MetaPost. Such tools exist, and I'll confess I don't really have experience using them. My real problem is that it's difficult for me to install software on the machine I like to use; I don't own it, it's a big server, and it's hard for me to get good IT support on it. So anyway, I haven't been using these interactive tools, but I did once experiment with automatic graph layout, which ideally would be a way to solve this problem of not wanting to waste time on coordinates that don't really matter much. There have, of course, been papers on automatic graph layout in this context; I read one that appeared recently in TUGboat, from some people at IBM China, whose names I won't try to pronounce. So it would be nice to have a sort of declarative form where you just say what you really care about in the picture. Probably it's better for me to show you the example on the next slide, but the point I'm trying to make is: I care about low redundancy, I don't want to specify where things are laid out, I want very few parameters to set, and I want fairly simple and natural input. Now here's an example, a best-case example from my experiments. Here on the left is the actual code which I might want to transform into my diagram, and here on the right is how it turned out when I got lucky. This is a description of the data flow inside some specific piece of software, so it actually has some meaning. And if what I have specified here is everything one would actually know and care about as an author of the software, and you don't know or care where in particular these various boxes and ovals should go, since that's not really relevant information, then this description on the left is a concise description of everything I really care about in the diagram. It would be nice if you could get a good layout from it. But as I confessed, you have to be lucky to get one. So here is a description of the input language. Let's not go into all of this.
But I think this is pretty much general enough to say everything you want to say. It's certainly not nearly as general as what you can do in MetaPost, but it's specifically aimed at the problem relevant to automatic graph layout. So I talked about a few parameters. The few parameters are the width and the height of the diagram, some notion of density, and the text size. And you will have the constant problem of what to do if it doesn't fit within this width and height: you need to be able to shrink the text size, shrink the nodes, play with the density. So some of these things are upper bounds. But this is the level of parameters I'd like to deal with. So, first question: do existing graph layout tools meet this goal? Well, not too well, because we have all these complexities; I'll explain some of them on the next slide. I chose as my example a little-known thing called VCG. I could have chosen dot, which is much better known, but I had some problem with it. Maybe it wasn't installed on my machine, maybe I had trouble getting input and output in a little language I could read, maybe I had some intellectual property concerns with the agreements that go with it; I don't remember what it was. Anyway, I chose VCG instead, which is pretty much a similar type of thing. VCG has a few issues. It outputs some of the information only in PostScript, whereas what I'd like is to have its output in this other, simpler language that is much easier to parse. It can create splines, but you don't really have access to that, and you don't have good access to some of these edge labels either. Like all the standard existing graph layout tools, it wants to figure out how big the text is by itself, but the way we need to figure out how big the text is, is to go off and ask TeX or MetaPost. The way I did it in these experiments is that I produced little MetaPost programs whose sole purpose is to tell me how big the text is. And this diagram is supposed to illustrate that one could give the node sizes in a form more sophisticated than a simple rectangular box if one wanted to. So here's an example of the layouts you get from VCG. This is exactly the same diagram you saw before, but it has a very different flavor, because the VCG tool is designed to lay out directed acyclic graphs. This graph actually has a cycle in it, but it's laid out the same way anyway. And it's fine, I guess, except that you don't have much power to make it very compact; this is about as compact as it gets, here on the right. And of course the parameters in any of these graph layout packages aren't what I would like. They have lots and lots of parameters, only a few of which are directly related to things like the density and the height and the width. Basically there's a parameter for the horizontal space between things laid out roughly horizontally and one for the vertical space between things laid out roughly vertically; these are called xspace and yspace, and I just dreamed up formulas to relate them to the parameters I cared about. It was based roughly on trying to achieve the density, where density means the fraction of the overall bounding box that is occupied by the boxes and ovals. And it turns out that it can't really achieve the density very well. So I also considered an alternative approach.
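As an aside, the little text-measuring MetaPost programs mentioned above can be as small as this sketch of mine; run it, and the label's dimensions appear in the log:

  % a throwaway job whose sole purpose is to report a label's size
  picture p;
  p := btex $n_0$ etex;
  show xpart(urcorner p - llcorner p);   % width, in PostScript points
  show ypart(urcorner p - llcorner p);   % height
  end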
Well, if none of the existing off-the-shelf tools do it, what about the published algorithms in the graph-drawing community? I looked at one paper that appeared to be fairly simple, and the authors claimed it had good results. This paper advocated a layout scheme that has four parts: a so-called force-directed initial layout that just decides where to put the nodes, a step that removes overlaps, an initial edge-routing step, and an adjustment step where you change not only the routes of the edges but also reposition the nodes. Of course, it will turn out that I didn't actually want to implement all of this. Anyway, here's the rough idea of force-directed layout, which is a fairly standard scheme. It can produce compact layouts, but you just can't count on it to do anything in particular. This particular diagram shows how I modified it in my experiments to lay out the nodes in such a way that you can obey port specifications that say, for instance: I want this edge to leave on the southern edge of this box and come in on the northern edge of that box. In order to do that, the two boxes probably have to be roughly at the same y-coordinate, as I've drawn it here. Or, if the edge leaves on the east edge of the source (S stands for source) and arrives on the north edge of the destination, then you probably want some kind of diagonal relationship like this. And if the current arrangement isn't what you want, then so-called forces are added to try to correct it. Basically, the idea of force-directed layout is that you have repulsive forces when things get too close, and a general attractive force to keep the graph from getting too big. You look at the current positions of your boxes, compute the forces, adjust a little bit in the direction of the force, and then repeat and repeat and hope it converges to something. As you can see, this isn't going to be really reliable, but it has some chance of doing all kinds of things. When I looked at this paper, I found it actually had a few bugs in it. This slide describes the overlap-removal step, which is kind of a sweep-line algorithm. The authors are keen on efficient algorithms, which isn't something you really need in this application, because you can't fit a really complicated graph on a piece of paper, and something that fits in a document is the application I had in mind. What this illustrates is that boxes can come in all kinds of sizes and shapes, and that can be enough to confuse these algorithms. In particular, this one had a notion of ordering boxes along the sweep line, and this is an illustration of how that notion got confused unless you were careful. The initial routing was based on a fairly complicated visibility graph. I didn't want to implement that, so I cut some corners. Basically, the idea is that if you make the boxes you're separating a little bigger than the actual boxes, then you can guarantee that there's enough space around them that the edges can somehow be routed. So I was hoping one could cut a few corners in the edge-routing step. The edge-routing step they advocated was kind of interesting, in that it would try to minimize a quadratic function subject to these constraints. I think we don't want to go into the technical details, but anyway, it has some potential to do some pretty good things.
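In outline, the standard force-directed step being described looks like this deliberately naive MetaPost sketch of mine; it is not the paper's algorithm, and all the constants are arbitrary:

  pair p[];                      % node positions
  numeric n; n := 6;
  for i = 1 upto n: p[i] := dir(60i) scaled uniformdeviate(100); endfor

  for it = 1 upto 200:
    for i = 1 upto n:
      pair f; f := -0.01*p[i];                  % weak pull toward the origin
      for j = 1 upto n:
        if (i <> j) and (abs(p[i]-p[j]) < 50):
          f := f + 10*unitvector(p[i]-p[j]);    % push close nodes apart
        fi
      endfor
      p[i] := p[i] + 0.1*f;                     % small step along the net force
    endfor
  endfor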
Here's an example of some of the results. This is the good result I showed you when I said we got lucky, and as you can see, it's a version of the layout of the original picture we had at the beginning of this talk. You can also see that there were some problems with the edge routing here. I believe it would have worked quite a bit better if I had directly followed the algorithm in the paper, but it was pretty complicated, so I cut some corners. So in principle, I think automatic layout probably could mostly work, but I think you would need the ability to interactively improve the result. I don't have a tool that I could actually distribute, but anyway, I had fun with these experiments. Is boxes.mp the right way to draw pictures like this? Well, I confess probably not, but I still use it. The MetaPost-is-for-mathematical-diagrams philosophy is the reason it falls short: in this application, you don't really care about the exact coordinates of these boxes, and exact coordinates are what is fundamentally attractive about MetaPost in other applications. I confess that a GUI is probably best, but it's still not as good as an automatic layout that actually worked. So anyway, I think automatic graph layout is something we should consider. All right; wait, we have a hand over there, you missed one. Most of the diagrams you just showed are very simple and could easily be done in a GUI, or just by hand, and you'd probably save time. But obviously, if you had a really complicated diagram and it worked, it'd be great. So I was wondering what the most complicated input you've tried is, and whether it came anywhere near working. I have tried things complicated enough that you have trouble fitting them on a page. So basically anything that can fit on a page, but that's only on the order of a couple dozen boxes, though you can have quite a few connecting lines in there. And that's part of it: even something involving just a couple dozen boxes takes a long time to do in boxes.mp, believe me. Next question: you compared MetaPost with pic. There's one interesting thing which pic tried to get right at a very early time, and that is the dashes. For example, if you have a dashed box, the dashes always end exactly at the corners, whereas MetaPost just follows the PostScript philosophy. This is something one could improve, maybe by macros that calculate the dash lengths. So you're talking about getting the phase of the dashes right relative to the line length. Well, certainly we have these arc length operators, so I assume we could just change the macros and try to do that. Okay. Thank you again.
|
This talk explains the motivation behind boxes.mp and discusses some alternatives. Automatic graph layout can be combined with MetaPost in various ways, but this technology is somewhat hard to control.
|
10.5446/30860 (DOI)
|
When I submitted the talks, Karl's idea was that it would be handy to put them in a row, also because it more or less is a continuous story, followed up by Idris's talk about the Oriental TeX project and what happens there. The thing is, if you look at the description of the topic, it's not an easy topic; TeX's paragraph building is non-trivial. So what I've done is make a summary of what we have, of what we have reached so far with the LuaTeX project, which in its current state somehow culminates in what Idris needs for his work. The par builder will come along during the talk. Okay, let's wrap up what we have done so far. I think we can safely say that TeX can do a lot; I think it can do even more than you think it can when you first start using it. And this is not really a surprise: TeX has a programming language, so you can basically program solutions. And because it has some hooks for extensions, think of specials and writes and so on, you can go on extending it to some extent by using external applications. At my work I've been using TeX for quite some time now. Most of what I do is direct XML-to-PDF processing, so a lot of the things that come up in this talk relate to that kind of work. We go from XML to PDF, which is, to be honest, not the most challenging part of the job, because most of those documents are rather stupid. The nice things I do with TeX, and show off with TeX, I basically do in my spare time, on products that are not related to my work. Actually, it's the users who are the most demanding; it's not so much my work that is demanding when it comes down to TeX. So a couple of years ago I reached the moment where you say: well, I can do basically anything I want with TeX; do I need anything else? It was even a time when, given the things happening in the world of publishing, you could start wondering whether you should quit using TeX: is high-quality typesetting still needed? Anyhow, I use it in my daily work, and you have to program solutions there. The solutions did become a bit more complex, not so much because of typographic needs, but mostly because of the strange things people wanted to do, and it even became annoying at a certain point: you have to implement something, lots of code, you know you can do it, it doesn't look beautiful; do I really want to do it this way? We had quite some examples of challenging code in the ConTeXt code base, like typesetting tables the HTML way, which is tricky because TeX only works in one of the two directions, horizontally or vertically, and not in two at the same time; or doing advanced backgrounds behind running text crossing pages, which is also not something TeX likes to do, because it has no concept of a page, and definitely not of backgrounds. Then, thinking about those users, there is the issue of all the input encodings. Each language has its own particular input encoding, and while it's possible to implement all these, it's not always fun, and they somehow tend to interfere. To come back to my work again: we are dealing with automated typesetting, basically unattended automated typesetting, where processes have to run for years and you are producing tens of thousands of pages without even looking at them.
Well, you need stable solutions, predictable solutions, and that's not always TeX's strong point unless you go to some lengths in programming. But it can be done, and as I said, it was possible that way. And then I ran into Lua; probably most of you know that I used to use some of the Lua extensions in the editing programs we used. That was the moment I started thinking: wait a minute, maybe we can extend TeX in such a way that it becomes fun again to write complex solutions. So what was the first thing that could be done? What I'm more or less talking about now is, as I see it at least, TeX for the next 10 or 20 years; what will happen after that I don't know. Input encodings are one of the tricky things. Okay, we are here in the English-language area, so it's not so much of an issue here, but in Europe this is a big issue. Font encoding is an even harder problem. We could sort out basically all the encoding stuff completely, because we can use input filters, and all users somehow switched to UTF anyway, so why bother. Mixed font encodings could also go. Fonts are a nightmare in TeX, and we could kick out all the font encoding stuff; I'm talking about ConTeXt mostly now. It also means that quite a few of the weird hacks you have in your code can go away. So it's a simplification. There's no longer a relationship between font encodings and hyphenation; I don't know how many users are aware of it, but those two mechanisms are quite closely related. On the math side, well, everything has gone Unicode, as was already mentioned this morning. So we kicked out all our mixed math encodings and replaced them by Unicode. I'm mostly talking about LuaTeX here, of course, because if I were to do the same thing in, say, XeTeX, I would probably still need to use some of the old stuff; but we quit using that. By the time we were thinking about these issues, we also decided that the wisest thing was to freeze the existing ConTeXt, to say: okay, this is where we stop with the old stuff, and that's where the new stuff starts. I think, in retrospect, that was a very good decision. It saves me a lot of time not having to maintain the old stuff, it works quite well for users, and it gives a pretty nice benchmark, a sort of reference implementation: if you re-implement everything from scratch, you know what you have to achieve. But the old code is still there, and it's still used somehow. There are also things that change in a more fundamental way when you talk about such a new engine, and I will give some examples later of how we do these kinds of things. For instance, in TeX you always have this multi-pass model: you do a run, and then you write some stuff to an external file which is used in the following run, for table of contents generation, index generation, cross-referencing, all that kind of stuff. That's gone. We now maintain all this data in Lua data structures, which means that we have access to it at any time we want, and we can carry around much more information. It also means that all the low-level code has been rewritten for that. Initialization is another important issue. You have lots of characters and symbols and things like that; everything is defined in a more abstract way using a database, and we basically generate the TeX commands and everything related from that.
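As a toy illustration of the keep-multi-pass-data-in-Lua idea; everything here is hypothetical and much simpler than ConTeXt's actual job-data handling:

  -- collect structure data during the run instead of \write-ing it to files
  local collected = { toc = { }, references = { } }

  function register_section(level, title)
    collected.toc[#collected.toc+1] = {
      level = level,
      title = title,
      page  = tex.count[0],   -- \count0, the current page number
    }
  end

  -- at any later point the table of contents is just a Lua table,
  -- available for sorting, filtering, or typesetting as needed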
So, for instance, characters: we have a large character database with information about all the characters and their properties, including whether they are math characters or not, and we generate everything from that. We no longer do that in TeX code, which is much better maintainable. Everything related to structure is now supported by Lua code. This actually has quite a large impact on the whole system, also in terms of trying to keep it compatible. If you rewrite all this code in a working system, then "this used to work, it doesn't work anymore"; you have to explain that users now have more control, and then you say: okay, I need to provide some backward compatibility. So you add a compatibility layer again, things like that, but in the end I think this whole subsystem is completely new, with much more control. And structure is a broad topic: you don't only have sectioning, you have everything related to numbering, like floats and footnotes. I think virtually everything in the macro package relates somehow to structure, so everything is touched. In practice it means that 50% of the ConTeXt Mark IV code is Lua and no longer TeX. Floats, for instance: float management is also one of those areas. If you have documents with, say, 500 images, you somehow need to manage them; you need to make sure they end up in the right spot. This can be done in TeX, but I'm now moving that code to Lua as well. Register management is a similar topic: you have to sort things, and sorting can be done in any programming language; if the language is part of the engine, it becomes easier. Then there are some very specialized topics, like swapping case, making things uppercase or lowercase, or making sure that in tables, or wherever, the digits have equal width, or special language-related kerning, like in French, where you have some spacing before and after certain characters. These are all delegated to Lua code. We do that by manipulating node lists and carrying around attributes; I will show that later. Hyperlinks are yet another topic. There is some primitive support in PDF; actually, I never used much of that, only a few things, and now I don't use any of it anymore. I just handle hyperlinks and cross-referencing and everything at the Lua end, by parsing and manipulating the intermediate node lists before they end up in the final document. It also means I have a bit more flexibility. Something that Ross mentioned, for instance, structured (tagged) PDF: so far I've only looked at it and never done anything with it, because nobody requested it, but it's typically something I would program completely in Lua, and not with primitives or whatever. Numeric conversions, of course, are a good candidate. Graphics too; I'm not saying these are things that cannot be done by TeX, but it's easier to do them at the other end. Normally, if an image is embedded, you want to know its type, you have to control the backend, you have to do some scaling, you have to locate the thing on disk; all that is done in Lua. Then everything related to spacing, horizontal and vertical: the horizontal spacing issues, of which I will show an example, are done in Lua, and the vertical spacing is also being re-implemented, so we now use a more advanced model for vertical spacing, with weighted breakpoints and things like that. And of course, though this is something for a later talk, MetaPost support is completely delegated.
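Since the character database comes up repeatedly, here is what an entry can look like; this fragment is in the spirit of ConTeXt's char-def.lua, but the field names are from memory and not guaranteed to match the real file:

  characters      = characters or { }
  characters.data = {
    [0x00E9] = {                 -- e with acute
      adobename   = "eacute",
      category    = "ll",        -- lowercase letter
      description = "LATIN SMALL LETTER E WITH ACUTE",
      uccode      = 0x00C9,      -- uppercase counterpart
      shcode      = 0x0065,      -- plain 'e', e.g. as sorting fallback
    },
  }

  -- TeX-side definitions are then generated, not hand-written:
  for u, d in pairs(characters.data) do
    if d.uccode then
      tex.print(string.format("\\lccode%d=%d \\uccode%d=%d", u, u, u, d.uccode))
    end
  end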
Back to MetaPost: that delegation is due to the fact that we use the MetaPost library, which means that the runtime for that kind of stuff is reduced to virtually zero. An example is the MetaFun manual; maybe some of you have seen it. It has some 1700 graphics, and it processes in 30 seconds, including all the graphics and including calling out some four times to other instances of TeX. That used to be 10 or 20 minutes. This has large consequences for maintaining a manual: it's no fun to maintain a manual if it takes 10 or 15 minutes to run, but if it takes 30 seconds, it suddenly becomes interesting again. ConTeXt has always had a mechanism for buffering data; you could move buffers around and call them up at different places in your document. Everything now happens in memory instead of in files. Verbatim is another of those tricky areas of TeX. Well, verbatim itself is not a problem; it's a problem because you want to typeset the TeX manuals with TeX. Again, delegated to Lua. We have a couple of experimental features; these are the things I referred to at the beginning of the talk, where you ask: do I still want to do this in TeX, or do I want to do it in a different way? Some things I started doing, like multiple parallel streams in documents, I will no longer develop in traditional TeX, but only in the Mark IV way. There are some experiments with that. You get a couple of things for free. One is that you suddenly have a scripting language on board, so you are no longer dependent on external programs. The whole thing, including all the tools we use, is now done in Lua. Because TeX Live ships LuaTeX, every distribution automatically ships all the tools you need; you no longer depend on installing anything else. You don't need Perl or Ruby or whatever. Another thing with quite some impact, and we started doing these kinds of things already four years ago, is that the file I/O is no longer dealt with by the kpathsea library. I've written my own variant of that, so we can do things like reading from zip files, reading from URLs, accessing FTP files. That's a bit more flexible than waiting for something that will never happen, because I still remember the discussion at one of the DANTE meetings, four or five years ago, about extensions to kpathsea and whatever; nothing ever came out of that, and I'm not going to wait for it, because it's not going to happen. So it's now possible to run ConTeXt from a zip file; that's the whole idea. One of the first things we implemented, or re-implemented actually, was XML support. ConTeXt has always had XML parsing and processing on board, and it was a good exercise to redo that completely in Lua. It's integrated in such a way that we can access XML trees in completely arbitrary ways; we have a sort of XPath-like language on board. It has always been used for certain things, like the image databases that were part of the ConTeXt kernel. And you can imagine other uses: there's now an experimental mechanism where, for instance, we take BibTeX. We just load the BibTeX file, convert it into an XML structure, and then I can use all the machinery I have for accessing every node in every way, just by using XPath-like requests. And it works quite well, I think.
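In use, the Lua XML interface looks roughly like this; the function names follow ConTeXt's xml modules as I understand them, so treat the details as approximate:

  local x = xml.load("demo.xml")              -- parse the file into a Lua tree
  for e in xml.collected(x, "section/title") do
    print(xml.tostring(e))                    -- or hand the content back to TeX
  end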
That XML machinery is one of the things we actually need for our work; not so much the BibTeX part. Processing a ConTeXt document was always done with a runner script: you had something called texexec that processed your file and took care of multiple runs if needed. That's no longer done with texexec, the Ruby script; it's now done with the context script, which is actually a Lua script running on top of the LuaTeX engine. So, basically LuaTeX. Oh, is this power supply not on? The battery is good for less than half an hour. Okay, so that has already been delegated to Lua; we don't need other programs. I think BibTeX was the only external program that was still used, by the bibliography module, but we actually don't need it anymore. The whole toolkit that comes with ConTeXt has been rewritten, so there are quite a few scripts. In this respect, I think we helped people like Karl by making the dependencies easier. And, partly because of the development history of LuaTeX, there's quite a lot of tracing and debugging on board. It teaches me a lot about what TeX actually does; after that many years, I still run into a lot of situations where I think: well, interesting, I never knew it was doing this kind of thing. We really have a lot of debugging on board. Beyond that are all the big changes: we move more stuff into the kernel, like chemical typesetting and things like that. This is a step up to the next part. One of the driving forces behind the project is what Idris will talk about, the Oriental TeX project, and I will not spend many words on that, but it's actually one of the fun parts of the whole thing. And we have lots of plans. There is a lot of experimental code in ConTeXt Mark IV that we are now cleaning out, removing, simplifying, because LuaTeX evolves. You keep seeing things like: oh yes, that's really old code, it needs to go; some code is not even doing anything anymore. And this is still an exercise; we still haven't touched the really interesting and hard things, so at some point we will start doing more complex things. The consequence for ConTeXt as a system is that each subsystem will be touched and, basically, completely rewritten. But it's a step-by-step process: you need to be able to keep running the old stuff, so you cannot do everything at once. There will, for instance, be additional mechanisms for advanced font usage; we want to go a bit beyond OpenType, things like that. Language-specific typesetting is still an area where a lot of work can be done. Math: I know probably most people here will not use ConTeXt, and definitely not its math, but the users who do have quite some demands, and I think we will end up with more specialized sub-machineries, a bit like the OpenType math approach, with these areas and fields and dictionaries and things like that. The basics are there, we can use it, we have already made it part of the database; it's just a matter of time before it shows up. The long-term plan is to have everything more modularized, so that I can basically say: I want to construct a new, let's say, typesetting system; I want to make a specialized version of ConTeXt doing this or that. That project is called MetaTeX. It basically means isolating all the individual components. This is not trivial, because in TeX and typesetting a lot of things are pretty closely related. But you can imagine that at some point you have a machinery that doesn't see any TeX input anymore, just Lua or XML or whatever.
Anything we can dream of will probably happen, so we'll be busy for a while. Okay, I will now show some examples; it's a bit small on screen, and it scales a bit funny on my screen. Okay. I mentioned casing, for instance. This is one of the areas where TeX has always been a bit hard to beat, this uppercasing and lowercasing stuff. But it operates in a sort of strange way, and it's very hard to use it to, let's say, take a piece of your text and say: I want to uppercase this piece of text. There is some expansion going on, and, well, I won't bother you with the details, but it can interfere with basically everything you can imagine, especially if there are commands in there. So in ConTeXt Mark IV we don't use any TeX mechanism or primitive or whatever kind of support for this anymore. I just tag the text with an attribute saying: this needs to be uppercased or lowercased or whatever. And eventually, when the characters end up in the node list, at some point we intercept that list and manipulate it directly. It means that you can far more easily handle the messy cases. Now, you may wonder why this is needed. Normally it's not, but we work mostly for educational publishers, and those are not really the simplest documents; anything you can think of will show up there. So this kind of crap really happens, and it actually comes out okay. There's basically hardly any TeX code involved, just tagging. Normalizing is another example. If you have tables, you'd be surprised at the weird font specifications you can get from designers; in most cases they don't know what fonts do or don't do, so you often need to apply corrections. Examples of that: digits normally have an equal width. That's not much of a problem with the normal variant, but with an oldstyle variant the glyphs can have different widths. You can see, though this one is actually hard to see, maybe, that this one is smaller, and if we apply the equal-digits treatment, things become equal width again. Again, this is just tagging as far as TeX is concerned: there's an attribute carried around with these specific characters, which can basically be any character, it's not hard-coded. And at some point I intercept the stream and say: okay, now I know that these are the digits, and I can change them according to what's in the font. Again, absolutely no TeX code involved. Mathematics: it's debatable, but in school math, if you typeset coordinates, you don't want spacing after the comma. So again, all math is intercepted at some stage, at the intermediate node list stage, and at that point we can manipulate it any way we want; there are mechanisms available for that. It means that you can make it part of your document specification, and then you don't have to add all kinds of fiddly spacing commands anymore; I want to get rid of all that kind of stuff. Here's another one: compound words. Normally you can put some special code in there that tells TeX that it may hyphenate at that point, but you can also imagine not doing that. In this case, we just enable a mechanism, and the mechanism, again, will at some point intercept the node list (there's a lot of interception going on) and say: okay, this is a hyphen, it has been tagged by this attribute, so let's create a discretionary node at that point that can be hyphenated.
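In plain LuaTeX terms, the attribute-plus-interception approach to casing looks something like the following sketch; the attribute number is made up, and registering the callback directly is illustrative, since ConTeXt manages its callbacks differently:

  local CASE  = 857                  -- hypothetical attribute number
  local GLYPH = node.id("glyph")

  local function handle_case(head)
    for n in node.traverse_id(GLYPH, head) do
      if node.has_attribute(n, CASE) then
        local d = characters.data[n.char]    -- the database sketched earlier
        if d and d.uccode then n.char = d.uccode end
      end
    end
    return head
  end

  callback.register("pre_linebreak_filter", handle_case)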
Languages differ in how such compounds behave, and you can control that per language; this is what we steer with the tagging. So basically, if you know what the problem is, you can somehow solve it; that's my idea at this point with this kind of stuff. Of course, you always need ways to block it, because there are situations where you don't want it to happen. Then we come to things like what the French like. If there are macro writers here, and I know that there are a couple of them, then you probably know the trick for what comes next. I will first mention this one. If you have a bunch of characters and you want to letterspace them, and this happens often, in titles and things like that, then one way of dealing with it is writing a small parser that picks up the characters and says: okay, a character here, put some space after it, another character, more space after it, and so on. That doesn't really work well with other languages. If you have French or German or whatever, you often have something like backslash, double quote, and then a character. So where does your parsing start and end? It's already easier if you have, let's say, UTF, because then the character is one entity. In an unattended, automated workflow (if you do it manually you can do everything, but if it has to happen automatically, you don't know what comes in) you cannot rely on parsing. So again, we just mark a piece of text as to-be-spaced, intercept the node list, analyze it, and put spacing before and after certain characters, treating existing spaces in a certain way. This is actually an example which came up last week on the ConTeXt list, while I was preparing this. Somebody said it fails on this word, and it's a particularly nasty one. In ConTeXt you can have two ways of getting your characters turned into glyphs. One is what we call base mode, which is the traditional way and uses the TeX machinery; the other is node mode, which does the work in Lua; more about that later. This word is processed in base mode, so with the TeX machinery, and it has a ligature which is basically two ligatures: first the ff, and then from that the ffl, so it's a chained thing. In order to get the kerning between these ligatures and things like that, you need a little more trickery, but okay, it's doable. But here you don't want a ligature at all: you want the spaced-out letters, not an ffl clump followed by the rest. So the most natural way to do it is to turn off ligatures. You may wonder what ligatures are doing in letterspaced text anyway; I think it's mostly a leftover from earlier times. Related to this is another language issue: the French have this spacing before certain characters, which I was referring to before. So they key in this, and they actually want some spacing around these guillemets. Now, what was the normal way of doing that? Make them active. Then the character is basically a macro; it might be able to look back and ahead, depending a bit on the situation, and then it can insert itself plus some spacing. And that's a nightmare. Not so much for guillemets, but it's really a nightmare if you're talking about, for instance, colons and semicolons.
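A rough sketch, again in plain LuaTeX terms, of the kern-insertion alternative for such punctuation; a real implementation would be attribute-controlled and deal with line breaking, and the quad/6 amount is an arbitrary choice:

  local GLYPH = node.id("glyph")

  local function punctuation_space(head)
    for n in node.traverse_id(GLYPH, head) do
      if n.char == 0x3A or n.char == 0x3B then         -- ':' and ';'
        local quad = font.getfont(n.font).parameters.quad
        local k = node.new("kern")
        k.kern = quad/6                                -- a thin space before
        head = node.insert_before(head, n, k)
      end
    end
    return head
  end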
Why a nightmare? Because an active colon in running text is not so bad by itself, but colons are used everywhere. They are used, for instance, in cross references, where you may have something like chapter, colon, number, and then you have to suppress the active behavior. It really becomes messy; you can program it, but it's no fun. This is one of the areas where, as soon as this new mechanism existed, I could kick out a lot of code that was only there to take care of the odd cases. So in French you put some spacing there; I've exaggerated it a bit. I found out that more languages need this; interestingly, the users never asked for it, probably because they could somehow foresee that it was a nightmare. You can define this for basically any character; it's not hard-coded. And again, the reason I have this verbatim stuff in the example is that you don't want it applied in verbatim: you want it in your normal text and not in verbatim; there's always a complication. Now we come to a couple of more extreme cases. Don't ask me why, but underlining is sometimes demanded, and again, this is something that is a bit tricky to do in TeX. I'm not saying it's impossible; we could do it in Mark II, because we have a rather advanced background mechanism there, so we can underline arbitrary text, but it needs MetaPost runs in the background. This doesn't, it nicely crosses line boundaries, and it's not related to fonts. Of course, you can abuse mechanisms like this. This one is, well, the next one is especially for Idris; this one is still for myself. You can also make backgrounds this way. It's not the normal way I do backgrounds, but you can abuse the mechanism for it, and you can even reverse the order and put the bar in front of the text. Again, it's done by manipulating node lists. Here's another example, this one actually for Idris, because he wants to do advanced documents with lots of footnotes. So this is something where we reverse the thing: we don't put the footnote at the foot, but in the text, and then you need raised text crossing lines. And again, this can be done. Okay, I actually have an extreme example, and I still have ten seconds, so I can probably show it. You might wonder why I need this underlining. This is typically an example of a project that demanded underlining; actually, it demanded strike-through as well, because it was needed to mark in a document which pieces were added and which pieces were removed. Everything removed should be struck through, and everything added should get underlined. And because it's an automated flow, done from XML, unattended, it had to be applicable to everything; you don't know what it gets applied to. So if you look at this: I start the underbar at the top and end it at the bottom, and everything in between gets underbarred, no matter what it is: the text, the pictures, the captions, tables, whatever. There's no time to read the text, but this is really interesting stuff about California. In case you don't know it (somebody was referring to astronomy): how many of you know that Pluto is no longer a planet? Right. Well, in California it's still a planet. There are several reasons for that. One is the industry here; another is that it would cause trauma to the children if they removed it. Another one is that it would render textbooks, et cetera, obsolete.
And another one is that they need Pluto to get rid of certain criminal elements: they need a far-off place. It's really ludicrous, et cetera, et cetera. New Mexico is different: in New Mexico, Pluto is a planet whenever it crosses the sky of New Mexico. This one is a special one I also like. I actually put these examples in the ConTeXt distribution, so you can look for yourself at what happens here. This is about what I wanted to show in this first half. Am I out of time? All right.
|
All the time that I’ve been using TeX, I’ve been lucky enough to stumble into a solution just in time to save my day (or some project). In most cases it involved starting from scratch with the strong belief that TeX can do everything. After a while you reach a state where you can predict if something can be done or not. An extreme example of operating on the edge is backgrounds that span paragraphs and pages, adapt to paragraph characteristics, and can be nested. Another mechanism that made some projects possible was HTML-like table building. Imagine combining these two mechanisms.
|
10.5446/30861 (DOI)
|
Okay, so we have our history. My concern since about 1998 has been being able to write LaTeX and get not only print output but also HTML. And of course that involves MathML as well, since I'm a mathematician. And if we're talking about writing classical LaTeX, we have a history of efforts to translate; we've been in the game of trying to translate LaTeX into HTML for about 15 years now. And the question is, how do you make it go? What works well when you're trying to write LaTeX and get it to run through translation software? And there are various translators. I have my particular favorite for when I want to use a translator. But actually, pretty much since the fall of 1998, I've not been using translators. I've been following another route, which I'm not going to say too much about, but it'll come up. But if you want to use a translator, then what you need is profiled usage of LaTeX. You need a carefully limited command vocabulary, and the translation software that you're using needs to be tuned for your use. It's not entirely rigid; again, it depends on what translation software you're using, but some translation software packages allow the user, if the user is sufficiently sophisticated, to do some customization. So my suggestion today, and it's really aimed at the LaTeX community, is to take what we have learned in 15 years of efforts trying to get LaTeX to run through translation programs, this concept of profiled usage of LaTeX, and formalize it. And so I arrive at the notion of what I call a LaTeX profile. A LaTeX profile is a dialect of LaTeX with a fixed command vocabulary, where all macro expansions must be effective in that vocabulary. And a requirement that goes along with this concept, essentially part of the definition, is that the language be essentially equivalent to an SGML document type with a canonical XML shadow. The difference between SGML and XML is largely a technical one. Think of it as being something like the difference between classical HTML, which is HTML 4, and XHTML, both of which were extant by the turn of the century. So this is a simple example of a document under a LaTeX profile. Can I make that a little bit bigger? Well, maybe you have a chance. Very simple document. It looks like a LaTeX document. I won't bother to point out what's not exactly LaTeX, but it looks like LaTeX. It's no harder to write than LaTeX. A LaTeX profile is a special instance of what I called generalized LaTeX. And there's an example that I showed when I spoke at the TUG meeting in 2001 at the University of Delaware, which was roughly the time when I first submitted the GELLMU software to CTAN. I wrote up a catalogue entry; the TeX catalogue had just fairly recently gone over to using XML, and this is LaTeX-like markup for writing the XML that goes with the TeX catalogue. I do not consider this to be a LaTeX profile, however. It's generalized LaTeX, but not a LaTeX profile. Now, the title is "LaTeX profiles as objects in the category of markup languages". The word category is being used in the mathematical sense, although not in a very heavy, serious way. A category in mathematics consists of objects and arrows. There's a rule that says that an arrow followed by a second arrow is also an arrow. And the relevance of introducing this terminology here is not really to be able to do anything new, but to suggest new ways of thinking about what we can do, new ways of thinking about what is not rocket science, but what is here and now.
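Since the slide's sample document is not reproduced in the transcript, here is a hedged sketch of what an instance under a LaTeX profile could look like: ordinary-looking LaTeX restricted to a small fixed vocabulary. The command set shown is my own illustration, not the actual profile from the talk:

    \documentclass{article}
    \title{A simple profile instance}
    \begin{document}
    \maketitle
    \section{Introduction}
    Only commands from the profile's fixed vocabulary appear here:
    \emph{emphasis}, itemized lists, and mathematics such as
    $ e^{i\pi} + 1 = 0 $.
    \begin{itemize}
    \item every macro must expand within that vocabulary;
    \item the instance then has a canonical XML shadow.
    \end{itemize}
    \end{document}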
There are no plans, no present plans, for actually using category theory. The category of markup languages: the objects of the category are the markup languages. LaTeX is an object in that category. HTML is an object in that category. The arrows in the category run between objects, and the arrows in the category are translations. And one of the things about categories is that, given two objects, there's first of all the question of whether there is any arrow from one to the other. But secondly, there's the question: how many arrows are there from one to the other? And correspondingly, given two markup languages: is there a sensible way to translate one to the other? And if there is, in fact, I claim that there is quite likely more than one way to translate from a given markup language to another markup language. So that's something to keep in mind. So classical LaTeX: well, classical LaTeX is an object in the category of markup languages to the extent that it's a well-defined language. There is the question of just what we mean by LaTeX as a language. But as an object in the category of markup languages, it's a reasonable target for translation from other markup languages. But LaTeX is not a very good domain for translation to languages other than printer languages. We know that LaTeX can be translated to DVI; there's a canonical way to do that. LaTeX can be translated to PDF; well, maybe there's slightly more than one canonical way to do that. So you can have arrows going from other markup languages to LaTeX. That's not too hard to come about, and that's not been a problem in the community over the years since we've had both LaTeX and HTML. But getting arrows that go from LaTeX to languages other than printer languages is what has been so difficult. I want to focus on an old example, which is Texinfo. Texinfo predates HTML. Texinfo provided certainly one of the first, and maybe the first, example of hypertext. Texinfo is the language of the GNU documentation system. It is a good domain for translation; that is, Texinfo can be translated to other markup languages. Texinfo is in fact essentially equivalent to an author-level XML document type. That's a historical accident, in the sense that Texinfo is older than XML. I'm not sure when Texinfo first came into existence. '84. '84. So that's about the time that Charles Goldfarb was bringing up SGML. I think both things were going on in California in those days; I don't know whether there was much crosstalk. Okay, a little bit later; that's the ISO standard. Texinfo, though, has no author-level provision for mathematics, nothing serious: when you want to do math in Texinfo, it's for print only, and it's basically falling back to plain TeX math. SGML and XML: SGML is a subcategory of the category of all markup languages. There are markup languages, LaTeX is a markup language, that are not SGML markup languages. XML is a subcategory of SGML; it's basically SGML made sufficiently tame for use on the web rather than in-house use. And when it came along, there was this religious thing that said: you shall no longer use SGML. And I think this has been somewhat harmful, because there are advantages to using SGML in house as opposed to across the web. It's also possible to go back and forth, but I won't say any more about that.
SGML and XML languages are good domains, within what is sensible, for translation. Author-level SGML and XML document types are by design good domains for translation. Arrows can flow from these document types. There are libraries in most computer languages that facilitate the construction of these translations, and these translations are reliable. The arrows can be chained. That is, you can start with one SGML document type (and when I say SGML document type, I am including XML as well) and translate to another; and then you can take a translation that goes from the second to a third, and you can follow the one with the other and get a reliable result in the chain. So that's a very important feature to be able to use. And I think it's important for the LaTeX community to start thinking in a serious way about how it can make use of that. Of course, to do that, it has to start thinking about LaTeX profiles. And I would like to see the LaTeX project sponsor one or more reference profiles. And along with sponsoring a reference profile, I would like to see the project sponsor translations from reference profiles to DVI, PDF, and HTML. I want to encourage maintainers of XML document types to think about reaching HTML and PDF (and most maintainers of XML document types are interested in reaching HTML and PDF, at least if these are XML document types for documents as opposed to electronic data) by translating first to reference LaTeX profiles and then using reference translations from the LaTeX profiles to HTML and PDF. And I would like to encourage authors to submit articles to journals as LaTeX instances under reference profiles rather than under the profile of a particular journal. A particular journal will probably want to have its own LaTeX profile, which can be some kind of variation of a standard LaTeX reference profile. But the point is that one can use translations to move back and forth in a reliable way without intervention; and in the case of an article submitted to a journal, the editor of the journal is going to want to feed metadata in as a side stream to that translation process, for generating the instance that the publisher is going to use. So this is the vision that I'm encouraging. Now, there's this project that I've been working on since 1998 called the GELLMU project. The GELLMU materials are on CTAN. They were last updated there three years ago; not a great deal of development has happened since that time. The project's main usefulness, I think, is that it demonstrates that everything I've described today is possible, and not just vapor in the air. The GELLMU didactic document type, which is part of what can be found on CTAN, may be viewed as a LaTeX profile, and I think it can serve as a base for constructing reference profiles. These slides: well, you're looking at HTML with MathML. You can tell that it's MathML because the product symbol is not big enough; that's the situation we just happen to have these days. But the slides were generated with customized use of the GELLMU didactic document type, which, as I say, can be thought of as an example of a LaTeX profile. And I'm using the Slidy package from W3C for the slideshow. And that's time for questions. So, are you ready? Question? Thanks for sharing these ideas today. I am really interested in what you've been talking about.
And I wonder how you feel about fitting this into how LaTeX documents are traditionally written, in terms of lots of extra packages, say, and lots of fine-tuning of the output for typesetting purposes rather than semantic purposes. So I guess there are two questions in there, really. In some sense, there are infinitely many ways to go. You can have more than one LaTeX profile. A LaTeX profile is just going to be vocabulary; don't think about packages, it's what commands are available. And then, at the point where you have an instance written under a LaTeX profile, one of the things you can do is translate that to a classical LaTeX document. This is what happens in the GELLMU project. So that's one thing you can do for typesetting. But another thing you could do for typesetting is translate it to a ConTeXt document. I've not done that, but there's absolutely no reason why you could not do that. You could have two different young people working in different universities competing with each other to see who gets the best results. And then also, it's conceivable that somebody who likes to write LaTeX code, that is, beyond the user level, deeper than the user level, could write code to digest the source directly. There are lots of ways to go, if you're talking about reaching print. Can I ask another question, sort of to clarify the previous one? Do you think it's feasible for the reference document type to be extended? I mean, let's say we define a reference LaTeX document type that is approximately what LaTeX is now with a little bit extra, but then someone comes along with, say, you know... The answer to your question is yes, even though you haven't finished asking. Exactly, right. Yes, it can be extended. Yes. Thanks for jumping in. A little bit earlier, we heard a presentation on the biology of LaTeX. I'm wondering what the two authors feel might be the connection between these two presentations. Are they to some extent working on the same problem, which is sorting things out? I can answer. Actually, I'm not trying to fix any problem. I was merely saying that you've got to live with the problems. If you find a solution, that's great. I'm not looking for a solution. All I will say about that is: if you're going to introduce viruses, it will be done in the translators. The documents will be virus-free. Okay, so shall we thank our speaker now? Thank you.
|
The mathematical notion of “category” in the context of markup languages raises the idea of widespread use of reliable automatic translations between markup languages. LaTeX profiles, which are dialects of LaTeX with a fixed command vocabulary where all macro expansions must be effective in that vocabulary, are suitable domains for defining translations to other profiles and, where sensible, to other markup languages.
|
10.5446/30862 (DOI)
|
Thank you, Paul. Well, this was supposed to come after Will Robertson's talk, which was more technical; mine is supposed to be more typographical, about rare mathematical notations and some new mathematical notations. I'll start with a quotation from the METAFONTbook. I won't read the whole quotation here; the important part for me is the bottom two lines: "Will font freaks abuse the system by overdoing it? Is it wise to introduce new symbols by the thousands?" Well, as we all know, METAFONT didn't become widely accepted; there was a talk two years ago at the TUG conference in Cork on why METAFONT didn't catch on. And so, for a long time, TeX and Computer Modern plus the AMS fonts shaped the typography of mathematics in the past 30 years. I think it's a bit similar to reading and writing: all of us do a lot more reading than writing, and so it's a lot easier to use the existing symbols than to create your own. And most mathematicians seem to have been content with what the fonts offered: they used those symbols, but they didn't create new ones. Now with Unicode math, of course, we have symbols by the thousands, but Unicode gives us little explanation of how to use these symbols, when they should be used, or which symbol is superior to another. And many symbols are only described by their shape and not by their meaning. So it's a "circled plus" or whatever, and you don't get any information about the semantics of the symbol. Sometimes it's described as "circled cross" or something, and you don't know how to use that symbol. So what makes one symbol superior to another? What's a good math symbol, and what's a not-so-good or bad math symbol? I would like to give some quality criteria for that. A mathematical symbol or notation should be, first of all, readable, of course. But what is readable? So I added: clear and simple. Simple also means it should be short, and of course shorter than writing it out in words. Then, of course, it should be needed; it should be necessary. If there's already a good, established notation which is generally accepted, there's no need to create a new symbol; you don't have to reinvent the wheel. Then it should be international, or, if it's an abbreviation, I think it should be derived from Latin, because most of the common vocabulary is derived from Latin, sometimes from Greek, and most of the existing abbreviations, like sin for sine, cos for cosine, or log for logarithm, are also derived from Latin. So if it's not a symbol but an abbreviation, it should be derived from Latin. Also, it should be mnemonic; that is, once you've heard an explanation of the symbol, it should be easy for you to remember it. Then it should be writable by hand. Many mathematical characters, and mathematical typography in general, show a close relationship to handwriting in many places, and a lot of mathematics is still written by hand: in your own research you write it by hand, and on the blackboard you also write by hand. But it's not a must. In some cases it's better to differentiate in printing, because handwritten mathematics you can always explain: when you write on the blackboard, you can talk and explain what you mean by the symbol. In print, the symbol should speak for itself as well as possible; it should be self-explanatory, without the need to go to the context and look up what the symbol is supposed to mean. Also, the symbol should be pronounceable.
That's not that important, but I will show you a case where I think a symbol wasn't accepted because it's not pronounceable. Then, it should be similar and consistent: it should fit into the general system of mathematical typography, and in some cases it should also be similar to existing symbols; it should respect what symbols are already there. But of course, it should be distinct and unambiguous: if it's a new symbol, it should be really new, and not look like an already existing symbol which is used for something different. I will show you what I mean by adaptable in a minute. And also, it should be available. In former times, it often was a problem that a symbol was not available at the printer, and with many notations you can see that the mathematician just took what was available at the printer and transformed it somehow for their needs. Nowadays, with font editors and Unicode and so on, availability is not that strict a criterion, because you really can create a new symbol with a tool like a font editor, or METAFONT, or whatever. Okay. So, an additional remark on "similar and consistent". In many cases you have dual or complementary concepts in mathematics, and for these concepts you should also have a dual notation, to reflect those complementary concepts in your notation: just like less-than and greater-than, logical and and logical or, subset and superset, cap and cup, and so on. So if you have dual concepts, make sure that you have dual notations for them as well. And with "adaptable": a notation should also be fit for its purpose, so you should be able to calculate with the symbol, or manipulate it. As an example: for the greatest common divisor you often have this notation (I will show you a better notation later on), but you could also think about an infix notation, where you write such a symbol between the arguments. Okay, and you could also extend it to three arguments; still no problem with the notation at the bottom, you could write it like that. But if you extend it to a set, as here, and say you want to form the greatest common divisor over all elements of that set, then you can't do it with that notation. So the notation at the bottom is not adaptable to all cases. So, to illustrate these quality criteria, I will give some historical examples first. The first is the equality symbol. The symbol we currently use was invented by the Englishman Robert Recorde in 1557, and you can see, just above the display equation, that he gives the explanation of the symbol: he explains it as a pair of parallels, because no two things can be more equal than a pair of parallels. Actually, that's a very famous example, because this is a sort of birth certificate for a symbol: the author really explains why he chose it, and it's not very often the case that we have an explanation of a symbol. But there was a competitor to Recorde's symbol, used by René Descartes about 80 years later: this strange symbol, which you can see on the right in all the displayed lines; there's a letter starting the equation, and then comes this symbol as the second one. He didn't explain his symbol. Maybe it stems from a rotated ae ligature, as "ae" was often used as an abbreviation of Latin "aequalis" before we had a symbol for equality. Typographically, it rather looks like an oe turned around, or like an astrological Taurus symbol turned to the left. And for a long time, these two symbols struggled for supremacy. Well, both symbols are mnemonic.
Both have an explanation which links them, so you can remember them. But Recorde's symbol is much simpler than Descartes's symbol, and the main point against Descartes's symbol, I think, is that it's not symmetric, while equality is, of course, a very symmetric operation. So I think Recorde's symbol is really superior to Descartes's. But for a long time they really struggled: some used Recorde's symbol, some used Descartes's, and the general adoption of the equals sign as we know it today only came with Leibniz and Newton, as they used it in their important mathematical works. And so Recorde's symbol won. A second example: the symbols of Benjamin Peirce. I think these were the first American symbols of any significance in mathematics; I'm not sure about that, so if you have an older reference for an American symbol, please tell me. The important part is what I show highlighted here. He invented a symbol for what we today denote as pi, the circle number, and a second symbol for what we denote as e, Euler's number. And he explained it: "it will be seen that the former symbol is a modification of the letter c (circumference), and the latter of the letter b (base)". You can all see it, I think. These symbols weren't generally accepted, and possible reasons: I don't think that they were really needed; then, they weren't available at the printer; and I'm not so sure about the mnemonics, as I can't really remember which one is c and which one is b. Also, one of his sons, James Mills Peirce, used a different form of the symbols, to add to the confusion. On the right you can see the modern notation. James Mills Peirce also had a special symbol for the imaginary unit, but still, I can't see the mnemonic part of it. And here, I think, it's really difficult to pronounce those symbols: should we all say "ratio of circumference to diameter" and "Neperian base"? Or how should we pronounce them? So I think with these symbols that was really a problem. And then, what I said about dual symbols before: the two numbers certainly are related, and there are many interesting connections between these two numbers, but they aren't dual to each other. So here we have dual symbols for non-dual concepts, and I think it just doesn't work. A third historical example, a more successful one: for the floor function, for a long time this symbol was used, introduced by Gauss in 1808, and only about 50 years ago Kenneth Iverson introduced these two notations. And I think that is a very successful innovation, because the brackets of the old notation are used all over the place and not only for the floor function, and now we have a distinct, unambiguous symbol, and also a dual symbol for the dual concept: one for rounding down to the next integer and one for rounding up to the next integer. So that's really a very successful innovation, and it got generally adopted in a short time. It was in TeX from the beginning, it's used in many places, and by now it has almost completely won out over the Gauss bracket notation. So the next part is about unknown and little-known notations. These aren't new symbols; most of them are in Unicode already, but these notations aren't used very often, and I would like to advocate them. First: the usage of upright and slanted symbols, or roman and italic letters or fonts. There's a little-known rule which says: operators and constants with a fixed meaning should be set upright.
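A common LaTeX way to follow this rule is to reserve macros for the upright constants and operators. A minimal sketch; the macro names \eu, \iu and \dd are my own choices, not a standard package interface:

    \newcommand*{\eu}{\mathrm{e}}            % Euler's number, upright
    \newcommand*{\iu}{\mathrm{i}}            % imaginary unit, upright
    \newcommand*{\dd}{\mathop{}\!\mathrm{d}} % upright differential d,
                                             % with operator spacing
    % usage: the d then acts like a closing delimiter for the integrand
    %   \int_0^{2\pi} \eu^{\iu t} \,\dd t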
The important part is "with a fixed meaning", and that it only applies to operators and constants; it doesn't apply to, for example, functions. So this applies to e as Euler's number, pi as the circle number, i as the imaginary unit, gamma when used for the Euler constant, phi when used for the golden ratio. Then operators: d as the differential operator, the curly d for partial differentials, delta for the Kronecker symbol, Gamma for Christoffel symbols, for example, etc. But then a normal uppercase Greek letter should be slanted, opposite to the French style, or the text style, that Will Robertson presented. And I think the reason why uppercase Greek is sometimes used in upright form is purely historical, and I would always rate consistency higher than historical correctness. It also helps to make your formulas more readable and adds clarity. I would also suggest applying this rule to the integral sign as well, because together with the d of the differential operator you get some sort of delimiters around your integrand; it adds structure and better texture to your math, and I think it makes it more readable. I have a few examples here. The first line is the standard notation, without this use of upright and slanted; the second one has e and i in upright form, and I think it's much clearer. The second example, with differential operators: also, I think, much clearer, and it adds better structure to your formula. The third one, with integrals: again, I think you can see that it adds structure and helps to make your formulas more readable. The second little-known notation: the Vinogradov symbols. I think you all know the big-O notation, f of x is of the order of log of n, and an equivalent notation is with this symbol. These are known, to me at least and to other people as well, as the Vinogradov symbols, named after the Russian mathematician Ivan Matveyevich Vinogradov, and they are mainly used in number theory. These two symbols are in Unicode, but I think the Unicode names are misnomers: I wouldn't call them "double nested less-than" and "double nested greater-than", and at least to me they are different symbols from these two, much-less-than and much-greater-than. But many people use those latter two symbols for the Vinogradov notation, I think mainly because the others aren't available yet in Computer Modern and standard TeX. Of course, it's quite obvious what to call them as TeX macros: I would call them suborder and superorder, for "under the order of" and "over the order of". Well, both notations have their merits and advantages. The Vinogradov notation does not require additional parentheses; also, it's symmetric, so you can use it in both directions, and it fits better into the general system, for example with Hardy's symbol of asymptotic equivalence. But the big-O notation is usable in terms, in arithmetic expressions, so you can have a longer term and use it somewhere in an equation; and it's similar to the other Bachmann-Landau notations, like the small-o notation (you also have big Omega, small omega, Theta notation and so on), so it fits into that system. But of course it makes a strange use of the equality symbol. That is well known and has often been discussed, and Don Knuth says the equals sign should here be read as some kind of "is" and not as "is equal to". But still, I think it's strange and doesn't fit so well into the general system. Boris?
I just wanted to say that there is another requirement for such symbols: they should be easy to make by hand, and I don't think you can make the double nested symbols by hand; besides, they would really be hard to distinguish from much-less-than and much-greater-than. Okay, but I told you before that in handwriting you can always explain what you mean, while in print it's clear. In handwriting you have to be a bit careful, but most of the time it will be clear what you're writing, in your own research. So, the third one: notation of intervals, again with a quotation from The TeXbook: some perverse mathematicians use brackets backwards. Well, I must admit that I'm a perverse mathematician; actually, at least, I've been taught in school to use brackets backwards for open intervals, and I think most people in Germany are. There's an exercise in The TeXbook, and the answer reads: open intervals are more clearly expressed in print by using parentheses instead of reversed brackets. I don't think that's so much better, because the notation with parentheses is used all over the place and is overloaded with different meanings: an ordered pair, the greatest common divisor, and so on. So at least one improvement would be to use a semicolon instead of a comma, especially in languages like German, where the comma is the decimal separator; here you can see that a semicolon at least helps readability a bit. And when you use a semicolon in your intervals, it will always be clear that you have an interval and not something else in your math. But I think it would be better (again, Boris, that's not hand-writable) to use special delimiters, and there are two delimiters in Unicode which are used for intervals, and these are also used backwards for open intervals. I think these symbols are, if there is such a thing, an unknown standard: I know at least two important books which use these symbols for intervals, but the notation just is not very well known. I would really recommend using it. So again, a formula example at the bottom, the formula from before. It still doesn't look so good, but then I think it's not a very nice formula after all: you have the same interval twice, and maybe it would make more sense to abbreviate it with a letter and then have "S times S" or something. So it's really a bad example, but when you read the source of The TeXbook you can see that Don Knuth took it from some real-world example. Okay, so much for that. How much time do I have left? Okay, that's fine. So, some new symbols. Unicode is open-ended, and Barbara is keen on new symbols, or at least willing to accept some new symbols. And of course Unicode is a standard, and any standard by its very nature is deficient: there will always be things which aren't in the standard. Standardization people don't like to hear that, but of course that's just how things are. First, small examples from mathematical finance. There you have the so-called Greeks, the sensitivities of derivatives on stocks; you need them to correctly calculate the prices of stock options. There's a delta and a gamma, and then a vega. Sometimes it's denoted like this, just written out; sometimes people use a script V; sometimes they even use a lowercase nu, which is completely horrible. And so I would suggest a new symbol here, a new pseudo-Greek letter vega, which works well in uppercase, both upright and italic.
But of course, a lowercase vega would be quite difficult, because this area is completely overcrowded with v and nu and upsilon and so on. So it would be difficult. But people in mathematical finance are also quite inventive: they derive other pseudo-Greeks from "volatility", so they all start with a V for volatility, and they have a vanna and a volga, sometimes called a vomma. So maybe it's not such a good idea to have a special vega, because it's not possible to design new symbols for all the things that people come up with. Second example: one can often read sentences like this one, "let H, a subset of G, be a subgroup of G". I think it's bad style, because people try to say two things in one sentence: it says H is a subset of G and a subgroup of G. "Subset of G" is completely redundant here, because any subgroup is a subset; the second sentence would say just the same thing. But I think people like to use symbols, and that's why they write it in the first place. So my suggestion would be to set a small G above the subset symbol, to make clear that a subgroup is meant here. This also works for other algebraic structures: subalgebra, subring, subfield, sub-vector-space. And of course it can easily be done in TeX, with stackrel and a font switch and a font size switch, and I think it would really help readability here. But that's only a small improvement, and I think you don't have to add it to Unicode, because we can do it with a macro. Another small example: a field extension in algebra is often denoted by a colon, sometimes by a slash or by a vertical bar. But all three notations are overused, and they even occur within the same field; they all occur in algebra. So I would suggest a special field-extension colon. I think it's hard to see on screen, but it still looks like a colon, and if you look closely, these are two small triangles pointing towards each other. You have a similar colon with two triangles like this in the phonetic alphabet, to denote the length of a vowel. And I think it could work in print. Of course, it's not writable, again, and it's a bit hard to discern in print, but in an online PDF you can always magnify it and then see that a field extension is really meant here. But it's only a small improvement, and maybe it's not that usable. Then, Stirling numbers. Anybody who has read, or tried to read, Don Knuth's The Art of Computer Programming or Concrete Mathematics knows that Don Knuth likes Stirling numbers, and there have been many different notations for Stirling numbers. When I submitted the abstract for this conference, I thought I had an idea for an improvement of the notation for Stirling numbers. But I must admit that was a very bold statement, and only afterwards did I read what Don Knuth had written about the notation of Stirling numbers; I think my ideas aren't a real improvement. I will show you anyway, but I think it's not usable and not successful. On pages 66 and 67 of Volume 1 of The Art of Computer Programming, you have a reference to an article by Don Knuth where he explains the other notations for Stirling numbers, and those notations are really the best ones. My first idea was to have new special braces, with an S form at the top for the first kind and an S form at the bottom for the second kind. It sounded nice as an idea, and it still looked nice in my handwriting, but I tried it out early, for this conference.
And when you try it with the formula from the page before, I think it looks too disturbing, too obstructive; I think it just doesn't work. But I had a second idea, which also doesn't work: to keep the brackets and the braces but include an S form at the top and the bottom. I think it doesn't work either; it's also too disturbing, and it doesn't help readability, it confuses you. I think that, you know, you tried to make them symmetrical, as braces should be. But if you just kept the S, maybe your idea would finally get through. I don't know. You know what I'm saying? Again, it sounds nice as an idea, but when you try it out in a font... yeah, maybe it could work. I think the people here would really like... Try to write down how it should look, and we can discuss it afterwards. Sometimes a thing sounds nice as an idea and still looks nice in your handwriting, but if you try it out in a font and in printing, it may come out strange. Yeah, it's true, because the point must still be the point, so it's not as simple as I thought. Yeah. So it would be mnemonic here, of course, but I think it hampers readability instead of helping it. So I think it's not a successful innovation; please forget about it. Then my last point, and, well, my main point, which I think is the most important thing I'm talking about here: a notation for greatest common divisor and least common multiple. I think a standard notation is missing, and to me this is the most important shortcoming in all of mathematical notation; I think it's really scandalous that people never came up with a good notation for this. Of course, you have all these abbreviations: in English, gcd and lcm; in French, you have four-letter abbreviations, for "plus grand commun diviseur" and "plus petit commun multiple"; in German, you have the added ugliness of mixed-case abbreviations; in Spanish, you have mcd and mcm, and here you can see why a Latin abbreviation wouldn't work, because m stands both for maximus and for minimus, so only the d and the m differ, and it doesn't work. Also very common, at least in number theory, is this notation with parentheses, but again, that's completely overused: it's used all over the place for different things. And then you don't have a fixed notation for lcm; sometimes braces, sometimes brackets are used, and they always have to be explained. And the notation for gcd with just parentheses is not very readable; it always relies on context. You always have to check what is meant here: is it a gcd, or is it something else? So that's not a good notation either. The first formula, with these notations, is not very readable, and they are not self-explanatory. Well, the first is still readable, but of course it gets a bit lengthy, and the second version, just with parentheses, is not very readable at all. Here's a second formula, where lcm is involved. Here you have to know that, on the right, the inner parentheses are meant as a gcd, and the outer ones delimit the argument of the phi function. That's a real-world example, taken from a mathematical article. So here the parenthesis notation is mixed with lcm written out as an abbreviation, and still you have to explain it. So that's really not very good. And I would suggest new special delimiters, looking like this: pointing downward for gcd, and pointing upward for lcm. I like it. Okay. So now let's have a look at whether they meet all the quality criteria I gave you before.
And of course they do, because I made up that list of quality criteria. So: I think these are readable. They are needed, as I've shown you before; of course we could discuss the form of the symbols, and you could come up with some other form, but I think a notation is really needed for these. They are international. I'll tell you about mnemonic in a minute. I think they are writable. Of course, there's a possible confusion with the digit 7, and also with the floor and ceiling delimiters I showed you before, so again you need some careful handwriting, or don't show your handwriting to others. They are, of course, pronounceable: you can still say "gcd of a and b" and "lcm of a and b". They are similar to existing notations, to the parentheses notation that they improve upon. And they are consistent with the general system and also with each other: they are dual to each other, and lcm and gcd are dual concepts. They are distinct and unambiguous. They are adaptable; I'll show you again what I had at the beginning. You can adapt this notation to more arguments, or to a set, or whatever; you can also have bigger expressions, and larger forms of the delimiters around them. So they are adaptable as well. They are available, at least in my fonts. And these are such simple, almost primitive symbols: I made them in Computer Modern, and it's a three-line addition. You take a slash character, or a backslash character, add a third point, whose coordinates you already know, and then just connect those three points instead of two. So it's really simple to add them to the METAFONT sources, and it's really simple to add them in any font editor: you just take a slash character and add a little bar. So it's not very hard for font designers to make them available. And about mnemonic: well, this inequality holds in general for gcd and lcm; the gcd is less than or equal to the minimum of the arguments, and the lcm is greater than or equal to the maximum of the arguments. Let's look at an example. These symbols are meant to point downwards, to a lesser number, for the gcd, and upwards, to a greater number, for the lcm. They are meant as two vertices of a triangle, and I think they're really mnemonic this way; I hope you all agree. So let's look again at the formula from before: as before, and now, on the right, changed to the new notation. It's shorter than the other notation, and I think it's at least as clear. In the second example, it's now clear where the lcm and where the gcd is meant. Also, we could agree on saving parentheses here and write it like this, but maybe that's overdoing it, so maybe we should just keep the parentheses. And in the third example, first you could shorten it to this, and maybe you still have to explain that you now form the least common multiple over a sequence, because it's not standard to form it over a sequence; but maybe it's clear, and then you could boil it down just to this. And, going back again, it's much clearer, and of course much shorter, than before. So I think this is a successful innovation, and I hope you like the symbols, that people adopt them, and that Barbara puts them into Unicode. And let me just finish with another quotation from The TeXbook, the final exhortation on the last page of the main text, and show you again some example formulas with the notations I just presented. Okay, thank you for your attention.
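The proposed glyphs are not in today's fonts, but the markup side can be sketched already. A hedged LaTeX example using the floor and ceiling fences as stand-ins until a font provides the downward- and upward-pointing delimiters; \gcdof and \lcmof are hypothetical names of my own:

    \usepackage{mathtools}
    % stand-in fences: swap in the real glyphs once a font has them
    \DeclarePairedDelimiter\gcdof{\lfloor}{\rfloor}
    \DeclarePairedDelimiter\lcmof{\lceil}{\rceil}

    % usage: $\gcdof{a, b} \cdot \lcmof{a, b} = a b$
    % the starred forms, \gcdof*{...}, size the fences to the contents

The mathtools paired-delimiter interface also gives the adaptability discussed above, since the fences grow with larger expressions inside them.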
I really love your idea of changing the notation for least common multiple and greatest common divisor. Maybe I can be slightly contrary, having said that. The first thing: the floor and the ceiling functions, which are obviously really well designed, hit what you think of; the little extensions on the bottom of the floor and the extensions on the top of the ceiling carry through some of the meaning. And in some sense your notation is reversed: the smaller number, the greatest common divisor, should have the little extensions on the bottom, and the bigger number, the least common multiple, should have the extensions on the top. Having said that: if, instead of using your new notation, you used the floor and the ceiling, which are already there, I think you would have a marvelous idea, for the following reason. Of course, when you use the same thing twice, there's a possibility of misunderstanding. But the floor function is a unary function, right? You put a real number in there and you get something, whereas the greatest common divisor is binary, the least common multiple is binary. So there's no confusion there at all. The idea really follows through, and in some sense, you know, the floor of an integer is the integer itself, just as the greatest common divisor of a number with itself is that number; it fits really nicely. So I like your idea as an extension of the floor and ceiling. I don't know whether I'm paying a compliment or making a complaint, or maybe both at once, but I think you really wrote something that's really good. Thank you. Actually, it's not a question, it's rather a comment. When I was a kid, I really loved the book by Littlewood, A Mathematician's Miscellany, which probably some of you have read; there are a lot of stories and jokes about mathematicians, and one was about Jordan. You probably remember Jordan cells and lots of Jordan theorems and so on. Littlewood said that when Jordan wanted to introduce four quantities of the same kind, like a, b, c, d, what would he do? He would write them as a, M′3, ε2, Π″1,2. Which makes it absolutely correct. But Jordan was actually limited by the typography of his time: he could not express all he wanted in the way he wanted, so he had to go to such contortions. Now, when we have beautiful Unicode and we have METAFONT, he would be much freer in his expression. So my comment is that, when I listened to this lecture, I was really happy, mostly for Barbara, that Jordan cannot be on the Unicode committee to invent new things. Thank you.
|
Why have certain symbols and notations gained general acceptance, while others fell into oblivion? And why did mathematicians happily adopt TeX as a standard, while they hardly ever used MF (or other tools) to develop new notations? In this presentation I will give quality criteria for mathematical symbols. I will show many unknown, little-known or little-used notations, some of which deserve to be much more widely used. Also I will show new symbols and ideas for new notations, especially for some well known notions which lack a good notation (e.g., gcd and lcm, Stirling numbers, and more).
|
10.5446/30865 (DOI)
|
Whenever I have to run a workshop on page layout, the first thing I do is ask the participants this question: when you are doing a page layout, what is it that you are trying to achieve? What is your goal? What would you say? Uniformness. Uniformness? What else? Beauty. Present the material effectively. Present effectively, but what does that mean, present effectively? Ease of use. Ease of use. Okay. So I've heard several things; some go towards structure, some go towards aesthetics. For those of you who went for beautiful pages, here's my favorite quote. Oscar Wilde: "I have found that all ugly things are made by those who strive to make something beautiful, and that all beautiful things are made by those who strive to make something useful." So what I recommend we do in this presentation is worry a lot more about revealing the structure: make it usable, reveal the content. In other words, page layout is about visual structure. If you've ever taken a training program on page layout (I have, at TUG '94 in Santa Barbara; TeX was half as old as it is now at that time), one exercise you would typically do is something like this. You are given a spread, as in a magazine or a newsletter, and you have to arrange a number of things on the spread. For example, you have to put some text, a title somewhere for your article, probably a blurb, an abstract if you will, before the rest. Then you may want to put some references somewhere. You may have to include a large photograph with its caption, and, to make things even more interesting, how about we put in a graph, x and y, with the labels along the axes and everything. And so you might sketch on a piece of paper something that looks like this, and then you have to try to reproduce that with a typesetting system. If you are working with tools like Adobe InDesign, you would pretty much start like this: you can draw rectangles and then fill them, fill them with text, specify that the text from one is going to flow over to the next one, fill them with pictures, fill them with other things. The question is (and that's the question I had 10,000 binary years ago, in 1994): how do you do this with TeX? Because TeX, up front, looks much more like a scroll, something that goes top-down but not necessarily left-right, except for lines. And so I started creating a number of macros to help me produce pages in this way, by deciding this goes there and that goes there and that goes there. As you will see, compared to all the other presentations you've probably heard at this conference, these are really basic macros. It's extremely simple, but it's the discipline behind it that gets you to these designs. If you want to understand how I arrived at them, you have to know that I'm from Belgium. Belgium is a country that was created artificially to keep the neighboring countries from fighting with each other. It is the battlefield of Europe, and so unavoidably we have a lot of influences; in my case, the first one would be the one from France. Yes, I'm definitely a Cartesian mind, a rational mind, and that has a direct impact on the designs. The second influence is the German school: my PhD was in optics, so I had to do a lot of quantum physics. I could have mentioned Max Planck, but since it's optics, Albert Einstein is the one that influenced me. And then, in terms of going the aesthetic way, in terms of artists, I really like a number of artists from the Netherlands.
Dick Bruna is one who makes amazing drawings for children. Perhaps my favorite one is M.C. Escher, with all the impossible perspectives and drawings. Now, that would not be very good for page layout, because you don't want ambiguity. So my master for page layouts would be Piet Mondriaan, or Mondrian, as he has been known since then. Let's, in this presentation, go through these three influences. The most obvious influence, from René Descartes: Cartesian coordinates. If you want to do what we've just said a moment ago, which is this kind of schematic, then the most logical thing to do is to say: we are working in two dimensions, we need x, y coordinates. Example: you want to position the photograph there on the right page. How about you put a zero-zero somewhere on your page and you specify the position of that picture with x and y coordinates. And this should not surprise most of you. I'm sure that most, if not all, of you have little environments that you programmed yourself, or borrowed from somewhere else, to do x, y coordinates in TeX. And the one that I have is as simple as this: basically, you put the stuff in a box, you raise it, you move it, and you put all of that in another box that has zero width, zero height and zero depth. And that's what you put on the page, assuming that you are positioned, of course, at the zero-zero coordinates. What I like to do personally is make the box inside also of zero width, because then it allows you to do variations: for example, by putting glue left, right or both ways, you can easily say that you position it left of the x, y coordinates, or right of them, or centered on them. This is nothing new for you, I'm sure. Perhaps the new idea is just to say: well, if you do that locally in a figure, in a diagram, the only change is doing it globally for the whole page. We put things at x, y coordinates on the whole page, and you don't even need to redefine the output routine for that. The only thing you need to do is make sure that on every page you are at the bottom left and you stay there. So one simple way, if you like the beginpage/endpage kind of syntax, is to say: I go to the bottom of the page, I start a group where I don't want line skips, I reset the left skip just in case, and then I put the stuff in between; I end the page by closing the group and simply ejecting. So you could have a document that says beginpage ... endpage, positioning stuff, and then immediately after that another beginpage ... endpage, and so on. And that would be very easy for pictures. It would also be very easy for whatever is a well-defined block of text, like this block of references, like the caption: you just put that in a box and then you position the box at x, y; I'm sure a number of you have done that. But what about the text that flows? What about the text that goes around? Well, in that case, what you simply need to do is put all of it in a scroll, and then you chunk off pieces of that scroll and position those pieces on the page. In the macros I'm using myself (but you could have variations), there's a box that I call the galley; then I have a text ... endtext syntax. The text one starts a vbox where you can set a number of parameters, most importantly the hsize in this case; then you can close that box, and you can fill it, so that if you stretch it more than the content, you won't have problems there.
And then you can just chunk off a few lines from that box and position those lines at x, y coordinates on the page. The only little thing that you have to pay attention to, if you try to reproduce this, is that when you chunk off a piece of that vbox, the characters in the top line are going to be flushed against the top of the box; the line is not going to be where you expect. And a simple way around that is to put something systematically at the beginning of the box: I put an hrule, and then, as you close that box, you chunk off what you put at the beginning, and from there on the spacing is going to be as you expect. The other way: you could put a strut in the first line, but that's not necessarily something I want to do, because I don't know what I'm going to put in there. So this is the main idea for putting things in two dimensions; that would be the influence of Cartesian coordinates. You just position things like that on the whole page, and you chunk off text. The quantum aspect of it is just saying that, if you are going to present some things on the page, perhaps you can choose where you position them in a simplified way. This is actually more than Albert Einstein: my own quantum physics professor had this theory that you could also quantize space and time and the equations of physics would stay the same. So perhaps, if the world is quantized, you don't realize that if you put a pen on the table, it cannot be just anywhere: it can be here, or here, or here. Of course the quantum is very small, so you wouldn't notice, but perhaps it's quantized. That's exactly the kind of thing we can do on pages. If I go back to my sketch there, you could say that the most obvious one, as you would expect, is lines. Baselines: you would say, there comes the text; it's there or it's there, but it's not in between, at least in first approximation. This, by the way (and you may know more about it than I do), whenever I had to help people with Adobe InDesign, was a big problem: deciding, within a box where you put text, where the first baseline is; and then you have to add a few points to get it where you want, as far as I can tell. But of course in TeX that's the easiest thing in the world, because you grab boxes not by their corners but by their baseline. So you can do this. If you do it for text, you may want to do it for the rest of the page also, for some kind of harmony, but then you also have to pay attention. Imagine that I have quantized my space and I say: this picture is going to be an integer number of baseline skips, for harmony. The problem is, if I put text on those lines like this (I remove my grid), look at the alignment. You may or may not like the fact that the picture is aligned on the baseline at the bottom, but you probably agree that it looks wrong at the top: it sticks out too much compared to the text. So one thing that I found useful is to have two grids, one grid for the text and one for the pictures, in the sense that we are offsetting one grid versus the other. Here the picture is just a little lower, "a little" meaning in this case a quarter of a baseline skip, and then it looks much better aligned with the text next to it. So here are, in a sense, the two grids: the blue one is for the text, and the dark gray one would be for the corresponding illustrations. That would be a first way to quantize space.
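Here is a minimal plain TeX sketch of the two mechanisms just described: the zero-size positioning box and the galley that gets sliced with \vsplit. The macro names are mine; the talk only paraphrases the originals:

    \def\placeXY#1#2#3{% put #3 with its left baseline end at x=#1, y=#2
      \rlap{\smash{\kern#1\raise#2\hbox{#3}}}}% zero width, height, depth

    \def\beginpage{\null\vfill      % drop to the bottom of the page
      \begingroup \offinterlineskip
      \parindent=0pt \parskip=0pt \leftskip=0pt\relax}
    \def\endpage{\endgroup\eject}

    \newbox\galleybox
    \def\text{\setbox\galleybox=\vbox\bgroup \hsize=200pt
      \hrule height0pt } % marker at the top, so later chunks space well
    \def\endtext{\vfil\egroup}

    % usage: fill the galley, then slice it onto the page
    %   \text Some long flowing paragraph ... \endtext
    %   \beginpage
    %     \setbox0=\vsplit\galleybox to 10\baselineskip % chunk ten lines
    %     \placeXY{60pt}{540pt}{\box0}
    %     \placeXY{60pt}{700pt}{\hbox{A title}}
    %   \endpage

Because every placed object is smashed to zero size, nothing moves anything else: each chunk, caption or picture sits exactly at its own coordinates.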
That's natural, because text is on lines, and this is not a 21st-century invention: this is paper, this is a mechanical typewriter, it's everything you know. What we can do that's useful is do it in the other direction. If you've got this kind of quantum here, how about doing it the other way? For example, the picture you see there, or the placeholder, is just a little more than 10 times the line spacing. You could say: well, let's rationalize; let's cut a little bit, resize it one way or the other, so we have a grid in both directions. This grid, which is the one I'm using, is isotropic: you have the same dimension in both directions. You can decide to work differently; you could have a different quantum horizontally than vertically. That's a lot more difficult to use, but you can if you want. From this, you can simplify your macros. For example, before, when you positioned things at x and y coordinates, each time you had to specify the dimensions. If you think in terms of squares, square paper, then you can get rid of the dimension and just go with the units. The simple trick is: you define a new dimension, which I call backslash PC, because it's a kind of redefined pica. In my case it's 14 points, a little bigger than a pica, and 14 big points at that. Then you can have, say, an XY macro with capital letters, which is the same as the one you had before, except that you specify arguments without units, and the macro adds this redefined pica, this PC, this grid square, as the unit. You can just position by saying: I go three this way and two that way, and you are on the grid; a little bit like the snap-to-grid options in software applications that use a grid. You can extend that to the whole page. The spread I sketched earlier is entirely coordinated with the grid. Coordinated does not mean it's necessarily exactly on the grid: for example, the picture is bleeding out, but it's still coordinated with the grid. And this coordination is what gives you the flexibility. One remark I usually get whenever I discuss grids is from people who say: wait a minute, I see the idea, but sooner or later that is going to get you in trouble; that's going to look ugly. And some people also say: well, look at the title there, that's not the same baseline skip. True. But it's exactly twice the baseline skip. So it is coordinated with the grid in that respect; it's a subset of the grid, you use every other line, in a sense, and that means that visually it's also going to look coordinated with the rest. And the same for the caption over there: my experience with this kind of grid and the positioning of the picture is that, if you put the caption on the lines that are used for text, it's just a little too close or just a little too far. So what you probably want to do is say: well, instead of using every other line, let's go the other way; let's refine the grid and cut all the squares in two. So we use a half grid, in a sense. And that you can do in quantum mechanics also, with spins and such. But Albert Einstein was not just a physicist, he was also a musician, and here you can start recognizing the harmony that comes from music. When you do a flat or a sharp note, what are you doing? You are using the half grid, in a sense: you are saying, let's cut in between the two notes that I already had, or offsetting the grid, in a sense. And that is another indication, and it rhymes, of course, with the word harmonic: if you double the frequency, you've got the same note.
You've got a harmonic of your original frequency — that's why it's called a harmonic in physics. The same idea here. You can be flexible with your grid — cut every square in two if necessary, in four or in eight or in sixteen — and still have some idea that the final design is going to look harmonized. It's not going to look random. It's going to be structured. So you can refine it. The same for the columns. This looks like it's a two-column design with a margin there. In fact, as some of you may have guessed, it's really a five-column design. And then you use one column or two, or perhaps on a solid page, three or four or five, and so on. The only thing you have to pay attention to here, if you want to automate the way you size the width of your columns, is that it's not just times two, because you take a gutter with you. So if one column is five squares, then two columns is not going to be ten. It's going to be eleven — at least in the case in which the gutter between the columns is also one square of the grid. But with that, you can easily say, I want a text block that's one column or two columns or three columns, and make sure the macro defines that in there. If we quantize space, we could also quantize everything else. And again, those are things that I'm sure most of you do — for example, decide that you're not going to have just any font size. You're going to have normal and big and small and perhaps extra big, the way you have for curly braces and parentheses and so on. That's quantizing it. You could extend that to other things. And if it's a dimension, like the width of lines or the size of markers in your graphs, you could also say, well, by default, I'm going to use this factor of two for harmony. If I have a thin line, a medium line, and a thick line, perhaps I just do times two every time, and I'm sure my line widths are coordinated. You don't need many macros for that. You just need to have it in your head. Colors too: you could say, I've got light gray, medium gray, dark gray, and so on, and I'm sure many of you do that. But if you do, then the next step would be going towards the designs of Piet Mondrian. Look at this one here, perhaps one of the best known ones. This line here — that's what we've just said, right? It's twice the thickness of this line here. But the next step is: if the line thicknesses and line widths are coordinated among themselves, are they also coordinated with the rest of the grid? Because if they were, then it would be even more consistent. So if you want to push it, if you want to be as strict as Mondrian was in his design, you could go, wait a minute. If I make my lines thick enough, sooner or later they're going to look about the same order of magnitude as the grid. How about I start from that? I say that's an extremely thick line: it's one square of the grid. I divide by two, by four, by eight, by sixteen, and so on. And if I want to do that, I could say, instead of each time having to define a line thickness as 0.03125 of a square, it's probably better to scale it down by a factor of sixteen. And so you could redefine, in a sense, the point, with a new dimension that is one sixteenth of the square that you have. So my square is bigger than a pica; my point is smaller than a point, because it's divided by sixteen and not by twelve, as in the pica. And you can start defining your line thicknesses easily with that.
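Pulling together the grid unit, the unitless positioning macro, and the sixteenth-point he has described — a minimal plain TeX sketch, with invented macro names; only the ideas are his:

\newdimen\PC \PC=14bp                 % one grid square: a "redefined pica"
\def\XY#1#2#3{\vbox to 0pt{\kern#2\PC % go #2 squares down...
  \hbox{\kern#1\PC #3}\vss}}          % ...#1 squares right; #3 is the material
\newdimen\gridpt \gridpt=0.0625\PC    % the redefined point: 1/16 of a square
\newdimen\thinrule  \thinrule=2\gridpt   % line weights doubling each step,
\newdimen\thickrule \thickrule=8\gridpt  % so they stay coordinated with the grid
% usage: \XY{3}{2}{\copy1} puts box 1 three squares right and two squares down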
One simple thing you could do is have some macro do that for you — again, no units in it, because the units are provided by the macro itself. And then you can say a thin line, a medium line, a thick line, and you would have multiples — for example, a quarter of this redefined point, a half, or something else. Some of you will immediately have thoughts of scalability here, which means you can change at some point the size of your grid and everything will scale accordingly with those macros there. That's line width. What else can you coordinate with the grid, if you are going to quantize anyway? Well, one that I propose is actually the font size. Look at this list here. I always like to put square bullets in my designs, because I like very strict alignments. So this is strictly aligned there with the left margin. And of course, the indent for the items of the list is going to be one square and so on, as you expect. But what about the size of the bullet? Well, even before I worried about grids, 20 years ago when I was still in grad school at Stanford, I just decided that the simplest thing you can do is make it one x-height high. But if you want this x-height, which depends on the size of the font, to be coordinated with the grid, then you need to coordinate the font with the grid also. It's not an obligation — you could easily survive without this — but in a sense, it's the logical next step. Why not? Well, I'm using Lucida, as some of you will have noticed. And in Lucida, if you specify the font size to be 26.415 big points, that means that the x-height is going to be just 14 big points, which is my grid unit in this case. And again, you could define a similar type of macro in which you specify which fraction of the square you'd like to get. The title of these slides would have an x-height of exactly half the grid. So if you take this as the grid, take half of that, and that's the x-height of the title in red on the slide there. The catch here is that sometimes you will want to coordinate with the lowercase x-height; sometimes you may prefer to coordinate with the uppercase height. Example: this is a type of graph that I use a lot. Well, you would want the uppercase size to be the same height as the thickness of the bars, which are of course coordinated with the grid. So you could define a similar macro that is based on the uppercase height. In this case, you'd need 19.364 big points and so on to define it. The pity with Lucida is that the ratio of the lowercase x-height to the uppercase height is almost, but not exactly, three quarters. I wish it were just 0.75 — it would have made my life easier — but either you define the fonts when you need them, or, a trick that I've been using: my basic font is coordinated on the lowercase x-height, and then the script size and the scriptscript size are coordinated on the uppercase height, and I find that it helps me limit the number of different fonts — that's the quantization of it — and still do everything that I need to do. So we coordinate the line thickness, the font size; the logical next thing, for those of you using PostScript or PDF, is: what if at some point you do a little PDF illustration in there? This is an arrow I use in my book a lot. You would also want this one to be coordinated with the grid, of course. So it would be nice if you can have some environment in PDF that is set up in such a way that it corresponds to what you had in TeX, and you have the continuity of it. In this case you just scale it somehow, and then you can say 0 0 moveto, 1 1 lineto, −1 1 lineto, close the path and fill.
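That little filled shape could be produced with pdfTeX's \pdfliteral — a hedged reconstruction, since he doesn't show the exact code:

% q/Q save and restore the graphics state; "14 0 0 14 0 0 cm" scales the
% coordinate system so that one unit is one 14bp grid square
\pdfliteral{q 14 0 0 14 0 0 cm 0 0 m 1 1 l -1 1 l h f Q}
% m = moveto, l = lineto, h = closepath, f = fill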
But if you're going to have lines in there, not just areas, then you've got the extra catch that you would like the thickness of the lines to carry over. If you make a line with a TeX rule and then you make a line with PDF code in pdfTeX, it would be nice if it's exactly the same thickness. That's just a little trick here, not very difficult either, really. In a sense, you can scale to your grid size with just a change of coordinates like this one, and then to scale the line width — well, you'd first have to know what the line width is at that moment, and this is a trick that, again, I'm sure many of you would choose: you make p and t have catcode 12, and then you define a little macro that's going to strip the pt from a dimension (see the sketch below). And so in here you can expand the line width, strip the pt in there, you just have a number, and then you could set that to be the thickness in PDF — except, of course, that you need to scale it. And here you've got a double scaling. One scaling is you have to divide by 14, because you are scaling up by 14. The other one is that this is giving you points, TeX points, and in the PDF world you may prefer big points to be coordinated, and so you'd have 72 divided by 72.27, which I'm sure some of you have been struggling with if you do graphs and such. So what else can you do? Well, you can coordinate very many things. How about the next step, saying, how do you define the grid? Could you perhaps coordinate the grid with the page? Where do you put the grid on the page? Or which part of the grid do you use for a given purpose? And that too is something very old. Recognize this? Old Bible design — I'm sure some of you immediately see the diagonal of the page and the diagonal of the spread through that. Well, you can do similar things. You could either say, there's where I put my grid, or you could say, my grid is all over the place, but for some purpose that's the area that I use. This is a page from this book here, Trees, Maps, and Theorems, which is my best example of the use of grids. And this is all on the grid, but since it's the title page for a part — one of the main parts of the book — I wanted something a little more coordinated, and in fact, if you draw a diagonal through there, you'll see that it's not random at all. There is a logic behind it, but it's still on the grid at the same time. It's a subset of the grid that I'm using there. Another example of a subset of the grid: that's the grid of the book. Basically — there are a few more things, but that's basically the grid of the book — 36 lines in four columns. When I do slides such as these, I use the same grid, except it's a subset. That's my slide. It's got those dimensions. If you were to print this screen here real size, it would fit this much space on an A4 sheet, and that means I use a subset of the grid. Why? To reuse my illustrations. Example: here's a page from the book. Imagine that I want to show this on a slide. Well, with many of the tools available today, including Microsoft PowerPoint, you'd have to scale it up. And once you start scaling, the font size will no longer match the rest, and so on. Well, you can do it carefully, I'm sure. The other option is: don't touch the picture. Just make the slide smaller. This would be exactly the same graph on a slide. The same size. Of course, it's been blown up full screen, but it's the same size. And this is therefore the grid I'm using for slides. Those are half columns compared to the original design.
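The pt-stripping trick he outlines is a classic; a minimal sketch, with \stripPT as an assumed name:

{\catcode`\p=12 \catcode`\t=12
 \gdef\stripPT#1pt{#1}} % p and t must be "other" to match \the's output
\dimen0=0.4pt
\edef\lw{\expandafter\stripPT\the\dimen0} % \lw is now the bare number 0.4
% before using \lw as a PDF line width inside the scaled environment,
% divide by 14 (the grid scaling) and by 72.27/72 (TeX points to big points)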
I have eight half columns in four columns, and five of the eight are carried over to the slides. So either way, the axes would be on the grid for the illustrations, and the numbers here would be on the grid for the text. That would be consistent with everything else. One last thing about the Mondrian style is that if you want to make alignments with rectangles — if you want to make strict alignments — you need strict rectangles. That's easy with a picture, even with a graph; more difficult with text. The first thing — and again, nothing new for you, I'm sure — is hanging punctuation. Put those punctuation marks outside of the rectangle so you have a sharper rectangle. But the thing that was bothering me most is this. If this is the margin, the text does not come against it. There is a little gap. How did I notice that? Well, that was even before doing grids, when I first designed my square bullets. I put them at the margin, and then I looked at this and I thought, this looks wrong. Of course, at the time, I kept checking my macros, thinking I had done something wrong programming that. But no — what happens is, it would look better if you just tucked them in a little bit, and then it's aligned. And that was good at the time. It's not good enough for me now, because this is where a photograph down here would be aligned. You don't want to tuck the bullets in, right? You want to pull everything else out. I'm still fighting a little bit with that one. At the moment, the only thing I do is change the \leftskip and the \hsize, but that is font-size dependent. And so I know that on my long list of things to explore is character protrusion (a sketch of it follows below). I'm sure there would be a better way to achieve this, but at least it's working at the moment. And the same question — well, of course, it sticks out a little bit, but for the designs I do, that's not an issue. And the same question applies for dropped capitals. Do you align on the serif or do you align on the stem? And you know what? The best answer that I have is: it depends on the size. If it's a small drop cap, it actually looks better if the serif sticks into the margin and it's aligned on the stem. If it's a big drop cap and you really start seeing this edge of the serif, then it actually looks better — if you ask me, at least; feel free to disagree — if you align on the serif. So many complicated things. What have these influences produced? This book I keep telling you about. Well, you've seen this page before, which is on this four-column grid. This is a more typical page. You see I'm using two columns for most of the text, two columns for the rest, and everything's aligned on the grid. You can see almost the columns starting here, here, and here, and one more there. That would be the title of a chapter. But that doesn't mean it's condemned to text. The grid, as we've said before, if you know how to use it, is extremely flexible. That's a very different design, but it's still based on exactly the same grid, and it's very easily achieved with everything that we've been mentioning. This is a page within a chapter with, like, a section head, if you want, and there you'll have a left page also. That would be the spread. So I just wanted to point out how far you can go, if you are twisted enough, with grids. First, this grid on the left is actually tighter: smaller font, tighter line spacing. How do you coordinate? Well, this line spacing is seven eighths of the line spacing there, which is why the block of text there is 36 lines, because 36 lines is 35 line spacings.
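As for the character protrusion he mentions wanting to explore, a minimal pdfTeX sketch might look like this — the particular values are guesses, not his settings; with LaTeX, the microtype package wraps the same primitives:

\pdfprotrudechars=2   % enable margin kerning (2 = adjust line width too)
\rpcode\font`\.=1000  % let a period hang fully into the right margin
\rpcode\font`\,=1000  % same for a comma
\rpcode\font`\-=700   % and a hyphen, partially
% codes are in thousandths of an em; \lpcode does the same on the left edge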
That means if you put 41 lines here, the top line and the bottom line are going to be on the same baselines, and so the coordination left–right looks better also. That's one thing. Second thing: this is a spread about using effective redundancy in communication, and it covers the topic. So for every topic I cover in the book, the last word comes at the bottom of the page. Nothing goes over to the next page. There's no line missing — neither on the right, nor for the frequently asked questions here, nor for the additional information here. It stops there. Some of you will already have noticed that all the paragraphs are rectangles. If you want rectangles, you need to finish the last line. Now, technically you just set \parfillskip equal to zero (see the sketch below), but that's going to stretch something unless you have just the right amount of text to fill the rectangle. That's what happens here. And if that's not crazy enough for you yet, look at this little paragraph here. This one, I wanted to write in exactly eight lines. So this one, which is on the tighter grid, is nine lines — that is, eight line spacings — which, on the grid of seven eighths of the other one, makes them come out equal at that point each time. And there's more of it. I don't succeed every time, as you see. Sometimes, I think, yes, the content is still more important than the page layout, and perhaps we need to leave it so — but you can still find a way to put that on the grid. That's harmonious, including when you have lots of illustrations — that's the spread about slide design, as you can guess. This is a spread about graphs. And yes, you can get out of the columns. You'd still be coordinated with the grid if you do that. This is one of my favorite pages in terms of layout. And you realize that you've got to be writing in the layout to do that. You cannot write everything and then ask somebody else to put it in the layout. This has got to have the same size as this. So this one is aligned with those two, and so on. You need to write directly in there, which is possible. Now, if you thought I was crazy, this is the last step. This is a page about an executive summary with comments and an abstract with comments. There are several parts in the abstract over there. There is context, need, task, object of the document, findings, conclusions, perspectives. That's my template. Every one of them ends at the end of a line. So you can easily separate them. If you had to separate in between, that would have been a lot more difficult. So that's the next thing. I encourage you to look at one of the display copies there. Line breaks are optimized. I don't want to break a line between an adjective and the corresponding noun, or between an article and the corresponding noun, or between any things that are supposed to go together. This came very slowly. At first, I would just fix a few bad breaks here and there, and then it becomes a habit. One of the reviewers of the book actually wrote on the web that I had an obsessive-compulsive disorder. That may be so, but I have an excuse. I have been the victim of very bad influences in my youth, and one of them that you are easily going to recognize is this one here, because I'm just doing rectangles. Some people are doing circles. And so, yeah, that's the result of it. So let me just close this presentation by telling you that I believe in what I said at the beginning. Some people think that if you want to do a good page layout, especially a two-dimensional one, you've got to be an artist.
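A quick check of the two-grid arithmetic, plus the rectangle trick he names — the numbers follow from what he says; the code line is the standard TeX parameter:

% 36 lines on the main grid span 35 baseline skips;
% 41 lines at 7/8 of that spacing span 40 * (7/8) = 35 of the larger skips,
% so the top and bottom baselines of the two blocks coincide.
\parfillskip=0pt % no stretch after the last line: paragraphs become rectangles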
And it's true that if you are an artist, you probably can intuitively put the right things in the right place, at the right size, and so on. If you're not, you can do a lot of aesthetic things by going first for structure, but by adding this one thing that one of you had mentioned at the beginning, which is consistency. And perhaps the best way to be visually consistent, geometrically consistent, is to use a quantum space — is to use a grid. And so nowadays, I have many people who tell me, oh, yes, but for you, page layout is easy because you are an artist. And I keep thinking, tell that one to my mother; she'll be rolling on the floor laughing. At an early age, I was told I was a nerd. We were not artists in our family — so we are nerds, and you will be a nerd like everybody else. But I do get good compliments about my designs, and that's because of the rationality that's behind it. So let me encourage you, if you don't feel yourself being an artist, to go the rational way and do quantum spaces. Thank you. Very nice. I was wondering if you can suggest any approach, on the column level or smaller, to implement this with the golden ratio, because I know that occurs on the page size or the page block size, but if all of this is on a grid, then what opportunity do you have to use that, and do you think that's important or not? Yes. Don't worry. I've got the golden ratio covered, as you'd expect. The only thing I didn't manage to do yet is make it rhyme at the end of the line, but the rest I can do. Now, you can just use sub-blocks of the grid. In fact, the page I had shown you with the title of the part, with a block like this — that's very close to the golden ratio. The thing that you will have to do there is work by approximation. Example: one thing where I had a problem is not the golden ratio, but it's very similar — it's for the slides. I used the grid; I had a margin, and so I've got 25.5 squares, which I can live with — except if you want something with a 3:4 aspect ratio to be projected on the screen, then the height has to be 19.125, which is what it is. But then you've got something that's just off the grid that you have to handle somehow, and the same for golden rectangles. You've got a width. You can chunk a number of lines. That means that the resulting rectangle is going to be about a golden ratio. What I've been doing there is not golden ratios every time, but it's similar. You want a square, you want a 3:4, you want a 2:3, you want a golden ratio — you will just have to approximate with the grid that you have, possibly refine the grid. Is this addressing your point? Yeah. Okay. Let me just mention one last thing. Feel free to have a look at the display copies of the book on the back table. Two of you are going to win a copy of the book tonight at the banquet. And the other copies that I have with me are for sale — except this one, this signed copy, is of course for the person who made it possible. So here's the copy for that.
|
Most (La)TeX documents are vertical scrolls: essentially, they place content elements under each other, possibly running the scroll in two columns, but hardly more. With the exception of floats, they basically place items on the page in the order in which these are encountered in the source file: that is, they construct pages by piling up boxes horizontally and vertically, gluing them carefully together to achieve the desired (elastic) spacing.
|
10.5446/30867 (DOI)
|
And yes, I am using Keynote. I did try using SliTeX for a little while and decided that it wasn't quite the same for me, and I give a lot of talks. So this has been working well. Hello. And so, okay, we'll give this a go. I'd like to thank you all for having a slot for me. I learned about this meeting about two weeks ago from Barbara. But there had been something on my mind I'd wanted to do along this line. So I said, ooh, I could give a paper on that. And I said, well, it's two weeks before the conference; there's no way they'll have a slot. And I sent mail — I dashed off a title and abstract — and got back: sure, we'll fit you in. And even on the day I needed to be fit in, because I can't stick around, unfortunately. So all I had to do was get a travel clearance, which arrived today. And between the time I submitted the travel clearance — I work at AT&T Labs — and the time my boss said, oh yeah, that's been approved, the price for the airfare had gone up 600 bucks. So I paid almost as much money as it costs to fly from Newark to Sydney to get here today, and I don't like to do that. I'm a pretty cheap date, usually, but it means I'm flying first class out and back. I even bought one of those little power adapters for use in first class. Of the two legs out, on the first one, the brand new first class seats didn't have any plugs, and for the second leg, it was broken. So I didn't get to do anything there. Anyway, United thanks you. There was no question about my flying first class, though — I bought coach. It was a $1,600 seat, which is not usual. Anyway, I'm only here for a day. And this is my first TeX conference since, I think, 1988. My first real paper was at a TeX conference, when I did the permuted index for TeX some 25 years ago. I don't know if any of you use it. It's still there. You do use it? Not now. Well, so we'll get back to that in a moment. My problem was that I had learned TeX — I'd just started using TeX — and I don't know who really knows it; even Don seems a little sketchy on some of the commands. But I'd hack these things, and I'd have epic battles where I just want to move this over one em, and you add spaces and it won't do it, and so on. And I'd spent two days fiddling with this stuff. And my problem was that the answer to my question was the string command, but I didn't know how to ask the question or find where in The TeXbook it was. And that's what brought me to try to do a summary of every command in the whole thing, trying to use words that made sense, so that if you sort of knew the right words, they might guide you to the right thing. And I submitted it, and I guess it's still available. I don't know. It was generated on the Plan 9 operating system, which is long gone, but it could be easily resurrected if anyone cared. But what this really did was open my eyes to typography — that beautiful layout is important. Ligatures. Who knew? I'd been reading for 30 years and I didn't know f's were connected to i's. And I went back — wait a minute, can this be true? — and would take some of my favorite books down, and oh yeah, look at that. They're glued together. I never noticed that. But so it really gave me quite an appreciation, and a huge amount of humility, in terms of: this is something you can try. It's like user interfaces. Everyone likes to try it, but not many are very good at it. And in fact, that's one of the reasons I have a Mac here. I also became a nitpicker.
In recent years, I review a lot of papers, and my comments often slide into: and by the way, this layout is crazy. "Don't use footnotes in your books" — Don. I put that in some of my reviews, because a lot of people abuse them. And nowadays people are including fuzzy JPEG graphs — these are sometimes in visualization papers — with three-point fonts. And it's just awful. Anyway, so I became a nitpicker. I went on — this was just before I went off to Bell Labs, where I learned how to be somewhat of an academic. And it served me well. I wrote a firewalls book with Steve Bellovin in 1994, which you folks may have heard of. We invented a style for the book but said, okay, Addison-Wesley, have a shot at it. And Marty Rabinowitz was the typographer there, and he told us how to fix things, and we did that. But despite having done the second edition and published a number of papers, I haven't done much TeX since then — until about two weeks ago, when you folks said yes and suddenly I had to have some prototype to show. So I have been burning the midnight oil and telling my wife, no, I don't have time to go for a walk; this program has to run by next Monday at 2 p.m. Pacific time. And I made it, but it's a much better demo now than it was yesterday at this time. So, I am an early adopter. My iPad arrived — I ordered it by mail — and the UPS guy arrived on the day it first appeared. He was going around Somerset County, New Jersey, handing these out to many people: here you are — yeah, I've got a lot more. And I started reading it and really realized that this changes everything. It's not a big iPhone, it's not a laptop, it's something new. It's a lovely way to read a book, which means it changes the way you go to a bookstore, at least for me. When I go to a bookstore, I'll buy junk books that I know I'm going to read once. But now that I have the iPad — some books I want to own and treasure and have people sign and so on, and I'm going to reread them and give them to my kids. And others, like today's latest take on cosmology — I'm not going to want to read that in 20 years. That's like an old textbook. I just want to read it once. And why would I buy some pulp when I could just put it into the iPad — which I don't want to forget, you know — just put it in here, and now it's ready to read in some form or another. The typography in iBooks and the Kindle app on the iPad is okay. I mean, you can read it — I've read about a dozen books since late March — but it's not great. And I also thought, gosh, why would a book ever go out of print again? You just store it on disk. Well, heck, you know, the firewalls book — the first edition is still in print, even though the second edition came out in 2003. There are people who are still buying a book that talks about TIS's toolkit. You know, it's gone, it's decades gone, but they're still there, and I thought, gee, I'm willing to charge them a buck 99 for the first edition. We could split it three ways between the publisher, the authors, and Apple. It's free money. But okay, well, I've got the TeX sitting here. Is there a TeX-to-EPUB? Is there something that'll just do this? And the answer is no. Someone said, well, you could start by hacking latex2html. But of course, what it produces is not what Marty had in mind. I was also grading papers — March is paper-reading month for me; I was on three program committees — and it sure would have been nice to read some of these papers on the iPad. What I had was PDF, which I'll get to in a moment, of course.
So there's this early-versus-late layout tension. The early layout we all know and love. This is what I learned in the 80s: TeX; runoff, if you go back to the 70s; and troff. Versus HTML, XML: we'll figure it out at the last minute — grandma wants this font and this size, and we'll just lay it out for her — and that's how we read web pages and iBooks and that sort of stuff. But really, I want Marty to do this. I don't want grandma to do it. I don't care if she likes Garamond bold. Marty — I'm a fascist in this case — I want Marty, for large values of Marty, to figure out what looks beautiful and to present it that way, and I think the authors probably want that too. So, my goals for the project. I wanted Marty to design the pages — and again, large values of Marty. Typographic fascism. Each paper needs to be laid out separately for portrait and landscape; it's a different format. This thing is sort of 1024 by 768 or like that. It actually isn't — you can't just reverse the numbers, because there's a little header on the top that you want to keep, which means that's different. I do want, for grandma, a large-type edition, and Marty knows how to make those too. So it would be nice to do this, and it'd be nice to easily port old TeX documents to this, so that we could all be looking at beautiful layouts on the iPad. PDF did not appear to me to be the solution. Now, given two weeks, I didn't do a whole lot of related-work reading. So it was entirely possible this talk was going to be totally redundant, and maybe it is. PDF, as far as I'm concerned, produces pretty nice looking 8.5-by-11 or A4 papers that I print out and read. I don't want to read them on the iPad. You have to sort of scan up and down like a 1960s cataract patient with a Fresnel lens, and reading a column, page two is below page one, and you're sort of scrolling halfway up and down. That's not it. It doesn't know about portrait or landscape. It doesn't know how to do the right thing, and there's no large-type option. Now, I listened this morning with interest to what's happened to PDF, and maybe I'm all wrong about that. We'll get back to that, which is fine. I'm okay with that. So, when to do this? Well, you could run LaTeX on the iPad. Heck, the firewalls book runs through TeX — it's 480 pages — in a second and a half on a modern Pentium, which is just unbelievable. If you told me that in 1985, I would have said there's no way you could lay out a book in a second and a half. The DVI-to-whatever obviously takes a bit longer. How about sending the DVI to the iPad for processing? That's kind of close. You'd have to have the fonts lying around and stuff, but that's doable. Or a special version of TeX on the iPad — there was a paper done last year here that suggested something along these lines. It looked a little busy to me, but maybe a good way to go. Or do you just pre-compute the images? And that's the way I went. Pre-compute every image for everything grandma is going to want to ask for. That means there's a portrait version, a landscape version, a large-type portrait version, and a large-type landscape version, all of them laid out with separate style files (as sketched below) — so they might be quite different. Download a bundle — and the definition of that bundle, if we like this idea, ought to be standardized somehow. And then, a proof-of-concept iPad reader. My experience with programming iPhones and iPads is now eight days old. I've been programming in C for 20 years, and there are the guys who said that it's just like C. It's just almost completely unlike C.
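One way to drive the four pre-computed layouts from a single source is a flag set on the command line — a hedged sketch; the \variant flag and jobnames are invented, not his actual setup:

% in the document preamble:
\ifdefined\variant\else\def\variant{portrait}\fi % default if not set outside
% then run the same source four times, e.g.:
%   xelatex -jobname=book-portrait  "\def\variant{portrait}\input{book}"
%   xelatex -jobname=book-landscape "\def\variant{landscape}\input{book}"
% plus two more runs for the large-type editions, each selecting its own
% style file based on \variant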
But I have an iPad reader working. It's not Tangle and Weave, but Tangle and Weave's ugly evil stepsister hack that makes this available to you today. But that's okay. It's proof of concept. So it's strictly proof of concept — don't push this too far. The code is brand new. The beautiful layouts that I laid out are not quite beautiful; again, I'm not Marty. You're Marty. You can figure it out. And some of the code wasn't working till last night. There's also the quandary, when you're in a rush and you want to get a program done by next Monday: do you just sit down — it is sort of like C — do you just sit down and write the structs and mallocs and do the C? Or do you back off a little bit and learn how it's supposed to be done, and maybe get it done? So there's this quick way, that might be efficient or not, and then there's the longer way, where you actually learn something. For the astronomers in the crowd: you may know that if you want to grind your own mirror — I don't know if people do this anymore, but it used to be so — if you want to grind an eight-inch mirror and you've never ground a mirror before, the fastest way to grind an eight-inch mirror is to grind a six-inch mirror first and then the eight-inch mirror. You'll get done sooner than if you start with the eight-inch mirror. And so it's the same question of how much do you do. Also, I never was very good at TeX, and what I do know is 20 years old. So if I want to change the paper size, I'm in there hacking article.sty and art10.clo and these things. And they all have this ominous stuff at the beginning saying: don't change this, go change the class generation program — which I know nothing about. I suspect there's some place I can go where I just say, change the page size to this, go, and all this magic comes out. But I didn't know that. So do I take time doing that, or do I just hack what I'm doing? It turns out it would have been much faster to know what I'm doing. I hadn't heard of the geometry package, which would have made this so much easier. You know, Wednesday — well, I've got the iPad sort of working, so Wednesday I'll try to change this into beautiful stuff. That could have been Wednesday morning. I still have it shifted wrong, but I'm going to go back and fix it. The other problem is, when research yields a thousand blooms, finding the right bloom isn't always easy. I went first to dvi2bitmap, because I wanted to generate bitmaps. I'm guessing this is pretty old and hasn't been touched for a while, but I didn't know that at the time. So I was getting black-and-white bitmaps. It looked a lot like the output I used to get off my Epson dot-matrix printer. And that's what I was going to show you folks today, because, heck, you're the Marty — you can sort of envision what it would look like. And after a day of looking at this crap — oh, by the way, all of my specials, my PostScript specials, didn't work either — I said, well, let's do dvips, because, gee, I could do anti-aliased fonts and grayscale somehow. Give GhostView enough options and it'll probably come out looking almost as good as Apple's stuff. I was one guy in one week trying to compete with all of Cupertino's cleverness. And then I noticed, in passing, dvipng, which was exactly what I wanted — something that produced trimmed output. It had just the options I wanted. It worked great. Wouldn't it have been nice if dvi2bitmap would say — no, this isn't in the official documentation, but — you don't want to be here, Bozo; go look at dvipng. That would have saved me half a day.
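For the record, the geometry package he wishes he'd known about makes the page-size change a one-liner. A hedged example — the dimensions assume the original iPad's 1024-by-768-pixel screen, treat one pixel as one big point, and reserve the 17-pixel status bar he mentions later:

\usepackage[paperwidth=768bp,paperheight=1007bp,margin=40bp]{geometry}
% 1007bp = 1024 - 17 for the status bar; the landscape edition would be a
% separate run with the dimensions swapped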
So I've got a demo here. How am I doing? Pretty well. What to show? Well, I thought, gosh, I'll just lay out The TeXbook — a nice familiar thing. And I went to the web page and had forgotten the admonition by You Know Who, who says: don't print out The TeXbook, please. So I didn't do The TeXbook. Well, how about the firewalls book? Okay. Started running that through it. Turns out it dumps core in dvipng. Well, I'm in a hurry; I'm not going to go find that. I can find it — I think I know basically what it's missing. And so I've got to find a book. Went to Project Gutenberg, thought of Huck Finn, and said, no — how about On the Origin of Species? So that's what I got. Otoots, as I call it. And I took one of my old papers. So there's a paper and a book, just to see what it looks like. Now, the demo I'm going to show you is on the iPad simulator, which is on this MacBook. I also have it loaded in here, and if you ask me afterwards, you can actually touch and fondle my iPad and try it yourself. The user interface is extremely crude, because I didn't have much time. None of this swiping or clever page things. It just goes. But it gives you an idea. It's proof of concept. And so, let's see. With a little luck, this is just going to go — not like that. Okay. What do I do? Well, see, I'm usually pretty good at swapping these things in and out, but not always. So come here. We want this guy, and we're going to move him over to here. Oh, I don't even know if this works in this one. So we'll give it a try. In fact, we'll start it afresh. Here we go. Simulator, start. Boom. It's compiling right here. No errors. And it's up. It's stupid: On the Origin of Species. And if you click on the right side, it goes to the next page. This was — I basically took the text, cleaned it briefly. Barbara, this is not a final book format; this is just Gutenberg text with a few things done to it. And you can see I got the top a little wrong. I think geometry will fix that for me. Having trouble reading it? You click at the top and you get the large-type version. Large type in this version is 12 point, but maybe it should be something a little bigger. And you can go through here, and chapter one. And oh my goodness, Euler's equation. Did you know that was in the Origin of Species? That is used everywhere. It just keeps showing up. You thought he was just a biologist. And what's more, I found this delight, which you will never find in the EPUB version. You go a little further: we've got Maxwell's equations and the diffusion equation. So there you go. It all just lays out. It's in grayscale. I mean, you can't tell up here, but I think on here it looks just as good as Apple's stuff. Actually better, because it's got Don laying out the paragraphs and you folks doing the rest of the stuff, sort of. And of course, I'm not sure this is going to work, because I'm not sure it works on the auxiliary screen. If you go to this version — oh look, it's laid out in two columns, because it's a wide screen. And you can go back to previous pages. Now there's an interesting question. I'm in a different part of the book now. When you rotate, which one should it go to? I haven't really solved that. I do this proportional thing, but someone has to think about how that works. Anyway, so we've got — it's now landscape. I think we're a little bit beyond — see, page navigation is just clicking; you know, what do you want? It's been a week. There we go — chapter, anyway. You get the idea.
And I also have — let's switch back to vertical, which you don't have to do — I have, if you click in the middle, a paper, a regular academic paper. This one I published about 10 years ago, on visualizing the Internet. And it's about the right size, shows networks, nice pretty pictures. But of course, with a good reader, you'd be able to click on that picture and zoom in indefinitely and so on. Because, let's face it, we're not just dealing with a book here. This is something that's better. And of course, go back to landscape and — oh, well, double-sided seems to be about right for this. And you can go through, and I think there are even some overfull hboxes at the end. Oh, there was — actually, there was a rendering issue there, but who cares? Oh, vertical lines show up. So that's my demo. And you're welcome to try that. Those are the only two documents in there. There's no nice selection and download and stuff — it has the two preloaded in there, and not the whole thing. But it gives you the idea. Now we go down here, back to our regularly scheduled — oh, this is going to be so crude. It's going to start at the beginning again, isn't it? Oh, no. No, there it is. How about that? Okay. Cleaner than most demo switches, I think you'll agree. So, zooming in on images would be nice, and video. Little words — a dictionary, a glossary: in iBooks, Apple's iBooks, you go and tap something and a little dictionary option comes up and you can look up words. I have Podkayne of Mars, and I looked up three words, and none of them were in the dictionary. But they should be, and they should be hooked in with a glossary and the index, and you can imagine all this sort of thing going on. And I guess you do it with little marks in the DVI somehow interacting with it. Footnote navigation. I have Dave Barry's book in here, his latest book. And you tap on a footnote and it goes to the back, and you tap on 22. You go back and you've got a bunch of numbers and you read 22. And if you suddenly forget it was 22, there's no back-to-where-I-was button. You have to start at 21 — 20, no — oh, 22, there. That's where I was. Footnote navigation. I'd also like to have an error sound if you try to go beyond the page, and I was hoping to have Don record "Nope" and just have that be the error sound. I don't think he's here at the moment, but if someone can record him saying no, I'd like to use it for this. There are other problems. Obviously, I'm going to make geometry work real soon now. Other LaTeX styles need the new sizes. It was frustrating: if you're doing letter size or A4, that's fine, but nothing in there had a paper size of iPad or iPad landscape. And it's time to put that in there. And as I say, this is not the same as this, because there's a little 17-pixel band across the top. So you've got to adjust it. And I know you fussy people care about stuff like that. The styles can be a bit overconstrained. If you're tired of fixing overfull hboxes, imagine that now, when you go to compile your document, you not only have your regular letter-size one, but you have four other layouts all complaining about sizes and layouts. This may be overconstrained, but I know we can handle it. And there are probably tools that one could put in there to do a better job. It certainly would be nice to have Kindle and Nook editions. I suspect that would be pretty easy. I don't know if they go into landscape mode or not. The iPhone — the new iPhone is now twice the resolution of the iPhone that I have in my pocket.
One could imagine that the next iPad will do the same thing. So we'll need a high-res version of it, but it'll just look better, won't it? Page numbers, if you use them — that's sort of an interesting thing. It could use a much better iPad app than my one-week-old Objective-C — or whatever it is — attempt. So what should I do with this? Should I keep going? Is this great? Should I learn Apple programming, which is one of the reasons I did this? Do we try to get people to publish their papers in this format too? Do I get clearance for my software from AT&T? I haven't tried that yet. They're going to say, you want to clear what? This is not the mainstream. Or do we let the PDF folks do it? If PDF really is the answer, then the PDF reader is going to do all this stuff sooner or later, but I'm wondering if they'll get it right. So do we fork off and do it? I don't know. Also, iTeX — that's what I call the demo. iTeX, get it? The iTeX bundle — this bundle of how you put things together — could be a very easy gzipped directory structure; it would need support and definition and stuff, and we can come up with something pretty easily. So, rounding third here: the permuted index — I pulled it out for my hacking this time. I consulted it four times, and it helped once. The other three, I just couldn't find the words that matched, which was my first problem with it. So I don't know if that was a successful research project, but it was good to do. This seems like a useful idea. I'm actually inclined to download all of On the Origin of Species to this app before I leave for my flight home and read it using my app. It certainly is a pleasant thing to do. Suggestions and feedback are welcome. Have I missed things? Am I wasting my time? I'm willing to put more work into this idea, though you don't want me doing the style stuff. You probably don't want me doing the Apple iPad stuff either, because someone who knows how to do this stuff can rattle it off much faster than I can. I'd love to hear what you think, and I thank you for your attention and welcome questions. And that's what the talk is about. Thank you very much. Questions or comments? Where did you get the microphone? Yes, Mike? At the last TeX meeting in Cork, there was some behind-the-scenes discussion of how to make reflowable PDF in TeX. Reflowable PDF is the kind which can change pagination if you turn the device, if you resize it. There was even a mailing list, but the problem is that nobody was able to create an application which would work. So your approach, where you just pre-compute several sizes — we discussed it, by the way, and somebody told us this is the way. The more I see of that, the more I see that you are absolutely right. Probably the good way for us to create the... So I would suggest exactly that you contact Peter Flynn. Unfortunately, he's not here. He could make it happen, because you are the person he wants to talk to. Okay. Would someone send me email — and if you can't find me, I don't want to talk to you anyway. That's great. It's pretty clear that someone who's not very good at TeX and not very good at Apple can do this in a week and a half — this is not that hard to do. Yeah, yeah, yeah. Send me email telling me his name, and I'll be happy to chat with him and see what should happen, and of course I welcome other suggestions. Someone else? Yes? I'd like to congratulate you on doing in two weeks what it took Wired magazine and Adobe a couple of months to do. I don't know if you followed the Wired magazine app. No, I haven't. Okay.
The short story is, they took an entire magazine done in InDesign, flowed it out twice — once in landscape format, once in portrait format, formatting the pages by hand for each view — stuffed all of those into image files, wrapped it up with a page directory structure done in XML, stuffed that into an app, and sold it as an app for $2.99 a copy. Cool. Yeah, but the problem is the magazine is 500 megs. Well, okay, so I just did the first 20 pages of the Origin of Species. If you do the whole thing, then all the layouts put together — it's a big pile of PNGs — come to about 300 megabytes. This is a 64-gigabyte iPad. It's going to take a while to fill it up, and if I don't want the large type, cut that in half. So I've decided, in the last 20 years CPU speeds and disk sizes have just changed dramatically. I'm not worried about it. But also, the other thing is that this was a pretty good demo. Remember, there is absolutely nothing behind the curtain. They had to make it work, and they care about their layout and so on. But I don't think it's that hard, especially if you really know what you're doing. My concern with doing bitmap images is you're going to run up against a huge, countless number of permutations of different screen sizes and proportions, and eventually you'll run out of the energy to support them. It could be a problem. You might just do iPads for now. And again, the large type — switching to the large type was just my answer to grandma wanting to click for a bigger font. My web pages, which I redesigned for the first time in 15 years, use CSS that honors grandma's font size, because I've had too many web pages where I can't read them. And so I want to honor that ability. It's not a bad idea, but who knows? Anyway, I think the approach is doable. It takes a while to download, too. And it doesn't compress well, because the PNGs are already compressed. Other questions? We've got two here. We have time to get up and time to go. Lots of time. Nice. Impressive. Oh, thank you. Really nice. But I suggest a completely different approach. Excellent, excellent. We have time. Let's hear it. I don't know how much you know about HTML5, Canvas, JavaScript. My idea is simply: put the text into the browser. Do it all in the browser. Let the browser render the text file. Let it break the paragraphs. We already have a hyphenation algorithm running in JavaScript. We probably already have the paragraph builder in JavaScript. We know — we've just seen — that the browser can render mathematics beautifully. Jonathan here has done much for web fonts. So we can probably use OpenType features completely. We will live to see the day where we can use complete OpenType math in the browser. So forget about bitmaps. Do it all in the browser, because then your book works on the iPad, on the iPhone, on Android, or maybe even on Nokia. So it works everywhere. So that's interesting. So I guess you want a WEB-to-JavaScript converter, because you actually want to run TeX and LaTeX in the browser, I assume. Which means you're going to have to have all the stuff, all the auxiliary files you pull in. Now, it's doable, and it doesn't take that long. Do I mind if there were a 30-second delay before I start reading a book? Maybe not. So I think that's a feasible approach, and I don't know what the right answer is. I know this seems to work right now very well, but you may be right. Also, HTML5 is still gelling, of course. So we'll see how that goes. I want multi-column. But I think that's possibly right, yeah.
I just want to second his suggestion. In fact, the Origin of Species is available already on the App Store, using HTML5. I mean, we really have to get out of the custom of doing bitmaps and bitmaps and bitmaps, and go to vectors. Vector is the direction to go. So, another vote for a late computation — or a late computation of the early layout. You've got it right there. Okay. Thanks. I was thinking about something that's kind of a compromise between the two. Would you put on the iPad something that would do the image generation that you're doing — in the background, instead of packaging the images, let the images be formed within your own app? So it would come from the DVI files then? Yeah. Yeah, I think that might be a good way to do it. Well, then the resolution wouldn't matter anymore, because it would happen on whatever machine it was running. Well, yeah, you'd still want to have a portrait and a landscape version, right? You're going to have different layouts, because you should. Okay. Yeah — because Marty. Right, right, right. Yeah, it's a different form factor. But it may be. So now we've got votes for all three places, and that's fine. I was raising — maybe we should do a show of hands on that. Oh, I'd like to hear that. Oh, no, no, no, no. You have to use the mic. Yeah, the iPad can support PDF, right? So your step of converting to PNG is unnecessary — you could convert to PDF instead. If you have a PDF reader that does a good enough job, you can use it.
|
TeX and other traditional text layout markup languages are predicated on the assumption that the final output format would be known to the nanometer. Extensive computation and clever algorithms let us optimize the presentation for a high standard of quality. But ebooks are here. The iPad has sold more than two million units in under three months, and, combined with other book readers, offers a new way to store and read documents. While these readers offer hope to newspapers (and perhaps doom to many physical bookstores), they are an increasing challenge to high quality text layout. Ebook users are accustomed to selecting text size (for aged eyes and varied reading conditions) and reader orientation. We can’t run TeX over a document every time a reader shifts position. Do we precompute and download layouts for various devices, orientations, and text sizes? Do we compromise our standards of quality to use HTML- and XML-based solutions? These are new challenges to the TeX community.
|
10.5446/30868 (DOI)
|
My story begins about six or seven years ago now, when, on a whim, I decided to study Mandarin Chinese. I've had a lot of these whims over the years, and I know myself well enough to know that sooner or later these whims almost always burn themselves out, which is why this time I decided to focus on the interesting stuff at the expense of what I thought was the dull material. In this case, the dull stuff was the Chinese characters and their meanings, which I never thought would really contribute to my studying. Anyway, days, months, and now years have passed, and I stayed intrigued, and so I realized it was a mistake to have ignored them. However, studying Chinese characters is not an easy thing, and it turned out to be way too tough for me. I couldn't — it was just impossible for me to get a handle on more than a handful of characters, accurately enough anyway, to make any real headway in my studying. This was, as you can imagine, pretty frustrating, but before ditching everything, I realized it couldn't hurt to apply one of the great lessons of METAFONT. That is, instead of actually studying the material, I decided to step back and think about how to study the material. I looked around to see what other people had to say on this matter and what methods they used, and then I came up with the following scheme, which seems to work pretty well. I took as an initial pool the 2,000 most frequently used characters, using the so-called simplified characters that are in use in mainland China, and I imposed an order on them — not an alphabetical or numerical order, but one based on how complex, how easy or difficult, it is to write these characters. I arranged them from simplest to hardest, and I used an induction-based learning scheme relying on three essential principles. Number one: the form and meaning of any character depends only on the forms and meanings of characters that have appeared earlier in the sequence. Second: it's possible to remember the character and its meaning by means of relatively simple and straightforward mnemonic narratives, which use the prior characters as elements in the mnemonic story. These characters are already known, because they come earlier in the sequence. And finally: the initial item in the sequence has to be easy enough to be learned all by itself. And as far as these mnemonics go, any kind of story connecting the components, any pun, any play on words, any kind of outlandish scenario is all legitimate, as we create this admittedly ahistorical and unconventional method for remembering the characters' meanings. Anyway, let me just show you, in just a moment or two, how this system works. Okay, the first three characters in the sequence are, conveniently enough, already up on the screen. The first character over on the left is clearly the easiest character of all to write. And by great good fortune, its meaning is one. So there isn't much mnemonics involved. Why is it lying down? I was tired of fighting with the Windows computer to get it all working. The next two characters also, by great good fortune, are built only out of horizontal strokes. They have the meanings two and three, and I don't even have to tell you which is which. However, it's not possible to create any further characters without introducing new components. And here I'm using the word component to refer to a geometric form — hopefully simple, but sometimes not.
A geometric form that cannot stand independently by itself as a character, but which is used reasonably frequently in character formation. This is the first of about 100 components that need to be introduced, and it's simply a vertical stroke that looks like a stick. And in any one of these mnemonic stories, we'll let this kind of a stick represent either a tool, like a primitive hoe, or perhaps a scepter, to represent the authority of a leader or a political figure. The next character in the sequence is a simple cross comprised of a bar and a scepter. Its Chinese meaning is ten, and since it looks like a T, it's pretty easy to remember, because the T stands for ten. But also, if you find that somehow unsatisfactory, you can imagine trying to stand this character up on a platform, on a table. And naturally the character will tip 45 degrees to the left or right until it comes to rest as an X, which is the Roman numeral whose value is ten. The next character in the sequence — you can imagine it as the character for two with the scepter between the two strokes. You can do that, but it takes work to pry the two strokes apart. And the primary meaning for this character is indeed work. Using the scepter as a tool — as a primitive hoe that you use in the springtime to plant — you can imagine that the top bar represents the top surface of the soil, and the bottom bar represents the depth to which you have to hoe, or whatever you do with hoes, to turn the soil over. And as it happens, that's a convenient interpretation, because the primary meaning of that left character is indeed earth or soil. But sometimes, in times of drought, when there's not enough water to go around, the soil gets very crumbly and dusty — thank you. And you can imagine that when you try to hoe in those conditions, the hoe goes all the way down into the ground until the top of it is flush with the top of the soil, something like the right character shows. The primary meaning for this character is dry. Finally, the next character — you can imagine it consisting of the character for one and the character for two, both of those characters being transfixed by the scepter component. And one interpretation in this method might be, well, this represents somebody — someone meant to lead. And the primary meaning for this character is king. So in the same way — more or less the same way — I proceeded to go on and develop stories for 2,178 characters. Using TeX, or actually Jonathan Kew's XeTeX variant, it was a snap to typeset all this material as a book, and here is one of these pages. Okay — the book is organized as a series of panels, and each of these panels has more or less the same structure. Looking, for example, at the top panel on that page: you see all these panels are numbered consecutively; the number appears first, followed by the Chinese character, typeset twice in two different Chinese fonts. Now, I don't really know how the Chinese refer to those two different fonts. I myself tend to think of them as the equivalent of serif and the equivalent of sans serif. Following that is a primary definition, typeset in a bold font, followed by a decomposition, and then finally the pinyin for that character. So — that's not really especially legible. Let's see. How do you make it larger? I don't know if I did it the wrong way. How do you make it larger? No, that's a great idea. Oh, okay. Let's do that. There we go. Thank you. Okay, so...
That wasn't quite as easy as it was supposed to be. Okay, so, the decomposition refers to the fact that I thought it would be convenient to show how the prior characters contribute to the form of the current character. For example, in this topmost panel, the left part of the character is the earth character that we just saw a moment ago; in the book, it appears as panel number nine. The right part is a character I haven't discussed yet. It means inside, and it appears on panel number 71, which I guess is the bottommost panel on the preceding page. All the way along the right is the so-called pinyin pronunciation, which is the official transcription method in use in mainland China. The central part of any panel is, of course, the mnemonic story, which appears in the central position. And I use a couple of typographic cues, which I hope will make it easier for the reader to make sense of it: any references to the current character always appear in boldface, and any references to the components of the current character appear in an italic typeface. And finally, just for fun, some additional information: the number of strokes that Chinese scribes prescribe for that character, and the actual frequency rank for that particular character. (Not as easy as it looks. I have no idea what I want. That's a... Okay.) Well, I just wanted to make mention, because I'm so grateful, of the typeface that I used for this book, which was designed by Philipp Poll. It's called Linux Libertine, and he made it available, I guess under the GNU license, for free use. It was an absolute snap to install, at least under XeTeX on the Mac platform, and once you do so, you get the full range of all the proper TeX behavior: small caps, ligatures, bold, italic, bold italic, the whole suite. The thing I really like about this font is that you also get a very intriguing new ligature, the uppercase T followed by lowercase h, which, as you can imagine, gets used all the time. Anyway, the result of typesetting all this material was a book. It had always been a dream of mine, at least on a small-scale basis, to go into the publishing business, and so I took advantage of this opportunity to do so. Here's the slowly unfolding cover for this; let's see if we can do something about that. (Is this available here? Sorry? Is it available here? No. Books are heavy; I didn't bring any on the plane.) So anyway, this is roughly the cover for that first book. I'm having a blast setting up this new company. It's an experience that I recommend highly to anybody; all kinds of new things happen that you never anticipated, and it's really just a lot of fun. I set up a company; I call it EasyChinese.com. And I hope you can all make out my contact information. There's a reason for displaying that, and it has to do with why I wanted to make a presentation this afternoon: I realize I am no typographer. One of my reasons for making this presentation was to solicit advice, and so I really hope that anybody with suggestions for improvement, to make things look a little better, to make the presentation a little clearer, will get in touch with me, either in the next few days here in San Francisco or later on via email. This book is the first in what I hope will be a series of EasyChinese guides.
This one, the current one, the next best seller, is at the printers even as we speak. You can think of it as a translation guide: about 3,000 menu items given in Chinese characters and their pinyin equivalents, with translations, so that next time you find yourself traveling in mainland China, you'll be able to better partake of the dining experience. As you can imagine, the format for this page is a lot different; it's organized very differently. Again, the purpose of the format was to make it easy for somebody in a restaurant to figure out how to look things up, and to easily point things out to waiters and other assistants who can help them. You don't want to waste too much time; everybody's hungry, waiters are busy, and so you need it to go as fast as you can. A third volume, one that I'm hard at work on, is similar to the one I've been speaking about, but it deals with the traditional characters as opposed to the simplified characters. These are the characters that were in existence, and exclusively in use, up until the 1950s. I'm currently exploring slightly different formatting techniques for these panels, and I'd like to show you this proposed new format today, with some comments. The only really new information you can see in the second line; both of these panels refer to the same character. The new information refers to what are called stroke order diagrams. These are recipes that show you how to actually write one of these characters correctly. And although this is not really a handwriting manual, it turns out that for reviewing characters, to make sure you really know them, it's nice to be able to draw the character in the air, and so to that end, it's helpful to know the correct way to draw it. I've also experimented with some new fonts. It came to my attention just a few weeks ago that Google seems to be assembling a whole new bunch of fonts. I think their stated goal is to make the web a more beautiful place, but I suspect another goal is to increase legibility on small-size screens, on cell phones and so on, that are under the control of their Android operating system. Anyway, some of their fonts really are, in my opinion, absolutely stunning. And again, when you use them with XeTeX, they just work right out of the box. Well, it's unfortunate; let's see if I can bump up the... The topmost panel is typeset in a font called Old Standard, a font designed by Alexey Kryukov in Russia. I don't know why he calls it Old Standard, because to all intents and purposes it's clearly a Monotype Modern style font. These are a family of fonts that are long out of fashion, but really quite beautiful and very, very legible. And of course their spirit lives on, by virtue of the fact that one of those fonts served as the model for Computer Modern. The font below it is an example of the Droid Serif and Droid Sans fonts. These fonts are really quite handsome also, and they are significantly more legible at small point sizes than almost any other fonts. I recently switched to the Droid Sans Mono font in the version of Emacs that I use and in my terminal program, and my eyes are very, very grateful. It was only virtually at the conclusion of the book, a little bit too late, that I learned a very important lesson.
I'm a little afraid that old TeX hands in the room will laugh at me, but I'd like to bring it to your attention, because other people just starting out, or contemplating a project similar to this, can probably benefit. At the outset, I viewed this project as if it were just another letter or another article, somehow just writ large. This is where I went wrong. I now realize that it's far better to view a project like this as one centered around data management, in which typesetting, and whatever else needs to be done, occupies a more peripheral position. This project, and similar ones such as catalogs, dictionaries, encyclopedias, and collections of letters, comprise a sequence of records, each one of which has a very similar structure. In my case, each record corresponds to a Chinese character, and we can imagine that the structure for a record looks, in a very abbreviated and simplified form, something like this. Really, what's going on is that the typeset format you envision at the beginning of a project is essentially never the format you'll decide upon at the end of the project. But if you forgo the data format in favor of creating a typesetting command right from the get-go, then you'll find it challenging to force a command created when you had one format in mind to create a very different kind of format. In my view, it's much better to create a data structure like this, and accompany it with a script written in a language like Perl or Python, and have your Perl script generate the TeX source file whenever you need it. The big win here is that when you do change formats, you'll be able to really easily make revisions in your Perl script, and the result will be a TeX source file that parallels your document structure a little more closely, a little more carefully. Bear in mind, though, that a script written in a language like Perl or Visual Basic can't make any typesetting decisions; but almost always there's lots of other stuff going on for which Perl works just fine. Moreover, in specialized projects such as these, you find yourself needing to create auxiliary material, material that's typically not the kind LaTeX knows how to create, at least not right out of the box. In my case, such material included some electronic flashcard files, some review material, and some graded reading practice, but in other contexts it might include price lists or special-purpose indices or glossaries or something like that. And I just find that it's much easier to generate all this from a well-organized data file than from some messy TeX source file. Okay, well, that's about it. I have no more comments to make. I'd like to thank you for your attention during the talk, a lot of which was really just shameless self-promotion; I appreciate your patience. All right, thank you, Alan, for a very fascinating talk. Wait, I'm just reaching for the microphone. Thank you. So, your driver, besides this, was Perl, right? It was, yes.
So, you took this text database, did Perl processing, and got a book? Well, no. You do Perl processing, you get a TeX source file, you run XeTeX, and then you get PDF. Yes, that's what I meant. You had the text format, Perl gives you TeX, and from TeX you get PDF or DVI, whatever. Thank you. Thank you, Alan, for a fascinating talk, especially for someone who's just contemplating learning Chinese. But my important question is, do you need a field tester for the menu? A field tester, why? A field tester for the book on food, because I'm volunteering. Well, maybe we can arrange something later on. I'm a model of consistency. Alan, when you're trying to read a Chinese menu, what's the probability, in the visual process, of a typographical error on your part, and a surprise when the meal comes out? Well, you're always surprised when the meal comes out. Wait until I get some field testing done, then we can talk. Thank you.
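As a rough illustration of the data-management approach described in the talk, here is a minimal sketch of such a Perl driver. Everything in it is an assumption for illustration: the record file chars.dat, the field layout, and the \panel macro are hypothetical names, not taken from the actual book sources.

    #!/usr/bin/perl
    # Minimal sketch: turn a record-per-line data file into TeX source.
    # Assumed record layout (hypothetical): char|definition|decomposition|pinyin
    use strict;
    use warnings;

    open my $in,  '<', 'chars.dat'  or die "chars.dat: $!";
    open my $out, '>', 'panels.tex' or die "panels.tex: $!";
    my $n = 0;
    while (my $line = <$in>) {
        chomp $line;
        next if $line =~ /^\s*$/;    # skip blank lines between records
        my ($char, $def, $decomp, $pinyin) = split /\|/, $line;
        $n++;
        # \panel is a hypothetical macro defined in the book's preamble;
        # a change of page format then only means changing this one print
        # statement (or the macro itself) and regenerating panels.tex.
        print $out "\\panel{$n}{$char}{$def}{$decomp}{$pinyin}\n";
    }
    close $in;
    close $out;

The same data file can then feed other generators (flashcards, graded readers, indices) without touching the TeX sources by hand.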
|
I’ve recently used XeTeX to typeset and maintain a manuscript which develops a mnemonic technique for remembering the meanings for the 2000 most common Chinese characters. Following a brief introduction to this method, I discuss how painless XeTeX makes it to typeset Chinese and English together, and how TeX makes it (relatively) simple to implement this memory method in a handbook such as this. Some concluding comments emphasize aspects that are familiar to old TeX-hands, but may be overlooked by newer users. Because TeX source is ASCII text (or its Unicode extension), it’s easy to manage and maintain the information in these source files in a straightforward way via Perl or any other scripting language. TeX coding often becomes simpler, as it’s possible for Perl to make some decisions (not typesetting ones, to be sure) for you, so your TeX macros have less work to do.
|
10.5446/30869 (DOI)
|
Okay, yeah, actually there are some printed pedigrees for you to keep. They were all made by the students of Leila, but we'll talk about that later. But first, I would like to thank Michael, because he actually saved a lot of time in this lecture: he explained a lot of things I wanted to explain, so we can spend some more time on the demo. But let me remind you what GLAMP is. GLAMP is a combination of GNU Linux (so that's why the G here is not pronounced; it's silent), the Apache web server, the MySQL database, and P, which used to be Perl, though right now some people prefer Python and some people prefer PHP. And one of the common themes of many talks today is how to make them all friends with our beloved TeX. There are several examples we have seen today, and we'll see more. But before I start with my main talk, I wanted to show you another example. This is something we discussed in 2006, actually, at Practical TeX. It was an application which did automatic report generation with TeX. There was a big server for a company which had a lot of customers, and some of the customers, like NASA, like the Department of Defense, wanted to have exact reports of what people do and how people work every day. So engineers would log in, they would put the log of their daily work into the MySQL database, and then we had TeX as a back end to process it and produce well-typeset reports. This thing has worked for eight years with zero administration and absolutely no problems. Okay, and now our main talk. And as Klaus said, if you are missing pedigrees, here are some for you. Pedigrees are very nice charts. When I started working with Leila on genetic pedigrees, I thought I would be done in half a year or a couple of years, but it's now like six years, and we still have a lot of things to do and a lot of things to cover. So here is a timeline in TeX time, in the sense that I just listed the TeX conferences where we discussed what we did. We created the main algorithms; it turned out that standard tree-drawing algorithms would not work with pedigrees. We cleaned up the interface. We cleaned up the typography. And at the TeX Users Group conference last year, we discussed the fact that the only interface we could make was very much like Kaveh's first interface before he started to talk with real programmers: it was more or less a spreadsheet with spreadsheet data, and people would just fill this in and then get charts like this. And not many people are happy filling in spreadsheets. So we discussed this a lot, and then Karl said that this is not a problem, because instead of reinventing bicycles, why don't you use a web browser, and why don't you do this online? So what we did, we made a web interface. Now I want to show you the interface; this slide is for the case that we don't have a connection, but I hope we do. So this is the site pedigree.varphi.com, and it still has a spreadsheet-like thing, but it's probably easy to use. So let me create a very simple pedigree of myself. Okay, my father, and he is, obviously, male, and he was born in 1940; and my mother, Ida, female, 1941; and here is me, and I think I'm male, 1964; and let me also put in my little brother, born in 1972. Okay, now I want to say that my brother and I are sons of my father and my mother. So I want to say that my brother is A zero... let me see; sorry, sorry, thank you. It's A1, it's A1, it's A0. Would your program detect that?
Well, I thought about this, but then I thought that there are sex changes and so on, so I don't have any checks for this. And I am the proband, so... and this is... we don't need this problem. Okay, unfortunately the screen is small, but you can see that we have it like this, and I need to say, okay, I put the name on the chart, and I need to agree (I will talk about this) that I basically do it at my own risk. So I hope this will work; if not, we will return to my slides with the pedigree of the royal family of England. But okay, it says here's your pedigree, and you can get this as a TeX file, a PostScript file, a GIF or a PNG; let's take the PNG file. I hope it will... Oh, yes, here is Boris, and my brother. This symbol means that I am the proband, that is, the person who started the pedigree, and here are my father and my mother. Yes, a very simple interface; I hope that anybody is able to understand it. Here is a little more complicated example. This is the royal family of England; it has several generations. I spent about 40 minutes pulling the data from Wikipedia and putting it in, but I got a very nice pedigree. Okay, some challenges. Mostly security challenges. I wanted to discuss them, but Michael discussed them already, so I will just mention that TeX is too powerful to run on a server unchecked. And here is the same reference that Michael gave us, the famous USENIX paper about TeX-based viruses. Actually, TeX is a great language, because for other languages the time between the birth of the language and the first viruses is about a couple of weeks; for TeX, it was 32 years, which means that it's very difficult to program in TeX. So, okay, our solution was very simple: we don't run unchecked TeX code on the server. We run only the code we create ourselves, so it's security by avoidance. It might change in future versions, but right now we just circumvented any security problem. There is another problem that Michael did not have, and we do, and this is a legal problem. The legal problem is this. In the U.S., and not only in the U.S., now in Europe, now in Russia too, everywhere, there are some very strict laws about what you can and cannot put on the web about diseases, about medical history, and so on. And if we really wanted physicians to use the real names of their patients, we would have to spend a lot of money just safeguarding this information, and we don't have that kind of money, and we didn't want to bother. So we talked to lawyers, and they said, okay, here's what you should put on your site, and it basically says that whoever does this does it at his or her own risk and will not ask anything of us. And if you don't check this box when trying to make a pedigree, that's what you will get: the same gentle reminder. You must check it, and it will not create your pedigree unless you indemnify us. And I spent some time programming so that there would be no shortcuts; whichever way you try to create a pedigree, you will meet this legal checkbox. I'm not sure it would survive real legal challenges, but I hope so. Okay, a little bit about what's going on under the hood. It's very simple. You have user input, you have Perl, which takes it and creates a TeX file, and we have the chart in any format. And if you remember what GLAMP is, GNU Linux, Apache, MySQL, Perl, you will say, where is the M here, because there is no MySQL? There is no MySQL because we didn't need a database, but we didn't want to cheat you out of the M, so that's...
Okay, so here's what we did. Let me remind you what we do. We have a TeX file; from TeX we make DVI, from DVI we make PostScript, and then we can make PDF or GIF or PNG. And since I didn't want to overload the server (of course, since we run only our own TeX code, we don't have as strong a denial-of-service problem as Michael probably would have, but still), I wanted to create only those files that are required. If a user wants PDF, I don't want to create GIF, and if a user wants PNG, I don't want to create PDF. How can you do this? And here is your M: the great Unix utility make, which is really smart. You can tell make what you want and how to make it. Here's what our program does: it creates a special subdirectory and puts there a makefile, which says that if you want DVI, you run TeX on your TeX file; if you want PostScript, you do dvips; if you want PNG, you make it from PostScript, and so on. So there is still an M, and it's very fortunate that make has the same first letter as MySQL. By the way, one little trick which I did not know when I started working with pedigrees, and now I am very embarrassed, because all of you would probably say that you knew it from the beginning: I did not know what to do when a pedigree chart was too big for standard paper. And then I found out that if you call dvips with a calculated width and height, you can actually go up to, I think, 5 meters by 5 meters, or 100 meters, of pedigree. So what I do is calculate the pedigree dimensions in Perl, and then supply them to the makefile. It's a very neat trick, and sorry, I didn't know it; it would have saved me a lot of time if I had known it in the beginning. And we have another friend, by the way, because if you look at our makefile, here, at this line, ps2img: ps2img is a very nice script from the Ghostscript distribution. So we have another friend, Ghostscript, which does all the image processing for us, and again we are very lucky, because Ghostscript starts with G, so we don't need to change the G. Well, it's silent anyway. Okay, that's what's under the hood. I will answer questions, but again, thanks to Mike, we have a lot of time to discuss how users work with this, and now I want Leila to take over and explain what we did with this. (If you want, you would probably be better near the mic. If Kaveh wishes, he will be happy; he can sit somewhere.) Okay, so, as you know, I'm not a computer person, not a mathematical person; I'm a medical doctor, and I work at Bashkir State Medical University, which is in Ufa City in Russia. We did this preliminary pilot study and decided to ask our students if they like the software and the program that we made for them and for the doctors. It was a really small pilot study, and 33 students from our university participated. Our population of students was quite diverse. We had Russian-speaking students and English-speaking students, those who study in the English language and in the Russian language, and we had students from different years of study: students from the first year and students from the fourth year. Most of them answered the questionnaires while they were studying either neurology or genetics or biology as subjects.
So we made a questionnaire, and we asked them several simple questions; they had either to tick a box or to write something by hand. Several questions that we would like to share with you are here. The first question we asked was whether the student managed to create a pedigree using this online tool himself. Most of our students managed to do that: almost 84% didn't need any help and managed to do it themselves, which is very nice, I guess. 16% of our students mentioned that they needed someone's help; they managed anyway, but they needed someone's help. The second question was whether they planned to use this online tool for different subjects while they studied at our medical university. We were also very pleased to learn that almost 94% of our students answered yes; they were willing to use this software for different subjects, and they mentioned different disciplines, which I guess will be on a later slide. The third question we asked was even more important for me as a clinician and as someone who teaches neurology: whether they plan to use this program in their future work. All of them are becoming physicians, so the question is whether it is a useful tool for medical doctors. And almost three quarters, 76%, of our students answered yes, they would be happy to use this product when they work as medical doctors; 21% were not sure, and 3% were sure that they would not. So here we put the list of classes that our students mentioned. It was an open question: we asked them, if you say yes, if you say that you are happy to use this software as a medical student, to specify the disciplines, the subjects. Most of them, of course, specified the subjects they had already studied and the subjects in which we presented them this pedigree-making software. At the top of the list we had biology, genetics, and neurology. Some students also mentioned other disciplines, and one student mentioned that this software is useful for any clinical discipline he or she might study. Then we asked two classical questions, what you liked and what you didn't like about this software, and here are the most frequent answers. Most of our students (you might remember that we had 33 students in this pilot study) answered that the best thing they liked about it is that it is really easy to use, it is fast, it is functional. Some students mentioned that it is very nice that it is free, so far. Six students mentioned that you can get beautiful results, and that the chart looks beautiful when it is done. Among the things that my students did not like, in first position was that this software is English-only. But as you saw in the previous slide, all of them managed to figure out how to use it, although not everybody in Russia speaks English. There were some other things that my students did not like, but some of them have been corrected already: now we have a downloadable version of this software, and now we can save the input, so Boris did his best to make this happen. Then we asked our students how much time they spent creating their first pedigree. I know that this is a kind of silly question, because some pedigrees were very big and some were really small, but still, my students mentioned a minimum of two minutes, and for the maximum they mentioned 40 minutes. The average that we calculated was about 19 minutes for one pedigree.
There were written comments that it was the fastest thing they could do in a medical student's life, and that they had never made anything better or more beautiful without using this software. We asked them to rate the ease of using this software and their general impression. We gave them the possibility to rate it from 0 to 10, where 0 was miserable, bad, impossible to manage, "I did not manage", up to 10, which is no problem at all, "I am happy, it is outstanding, it is great, I really liked it." Our question was how easy it was to start using. The average was 8.9, which means that our students were really smart, not as dumb as they look. For general impression, the average was also 8.9, which is quite high, and we were very happy with that. But something that made Boris very happy was that all of them liked the result. So that is probably what I should tell you here, and I give the floor to Boris. Some of you got charts with pedigrees; you can keep them, they are all copies of the pedigrees our students created. They gave their informed consent, so you can keep them. Yes, we asked them whether we could use them, and they said, surely you can. They are actually the families of our students. Some of them are really nice; here is another one, here is another one. This is a big one. Yes, there were several so big that we couldn't print them on paper. Well, if you remember, last year we were able to print huge pedigrees, something like 10 feet or maybe 15 feet wide. We put one on a wall at Notre Dame. Yes, thank you. We did not do that this time, but we had huge pedigrees, probably from those who took 40 minutes to create them. Was that a real one? Yes, yes, yes. Well, everybody has big families, but in India they remember who their ancestors are, who their siblings are, and so on. They could just sit down, write it down, and put it in there. And the nice thing about our software is that you can have as many rows, as many entries, as you want. There is another new feature. Yes, this is a new feature. I started with, I think, 25 rows, and then Leila wrote me that I'm crazy, that it's not enough, because any self-respecting medical doctor would make this much bigger. So I said, okay, let it be as much as you want. Okay, so our main conclusion is that we now have a nice good company with a lot of nice good participants, and as long as you are security-aware, they behave nicely with each other. Let me remind you about the site; the site is pedigree.varphi.com. I really want to show you the phrase at the beginning of the site; it is from Pearson, and those of you who studied statistics probably know this name. Yeah, and we tried to make this a really easily produced work of art. We wanted it not to be a work of great labor, because computers should work and people should think, but we wanted to preserve the art as much as possible. The site is absolutely free. Well, it has some advertising, but to tell the truth, that was a big flop, because for three months of advertising we got exactly $5.63. People come to our site, but they like it so much that they don't click on the advertising links. Yeah, lots of it. And this is free; you can use it, you can tell people about it, and if you write to Leila and me and tell us what we should do (we have a lot of plans to improve this), we would probably be very happy. Okay? So you have lots of time for questions. Pearson? Yeah.
Yeah, it's actually from one of his papers; he wrote a lot of papers about pedigrees, and he created his own system of pedigree nomenclature. Last year we discussed the history of pedigree nomenclature and Pearson's papers about it, and it's in TUGboat. So I don't remember the exact reference, but if you look at the TUGboat issue with the results of the Notre Dame conference, you will find the reference. And actually, a lot of people we all know from different areas, like Galton, like Pearson and so on, spent time working with pedigrees, with genetics, with genetic statistics, and if you look at the references, mathematicians will probably find a lot of familiar names there. So the square for a male and the circle for a female, is that Pearson? No, no, he had a different system; I thought about this, but I forgot the details. There were lots of systems. The first system, by the way, used musical notation: they used half notes and full notes, because the printer had musical fonts but didn't have anything else. And the modern notation, I think, is the American notation of the 1920s. In England they still had a different one up to the 70s. Leila gave a lecture about this, but the problem is that after the lecture I forgot it. But you can read it. Can I download the royal family pedigree? Yes, you can download the royal family pedigree. It's somewhere in my... yes, in my screenshots. You can go to the screen... ah, yes, you're right, you're absolutely right. If you go to the screenshots here, yes, you can find... actually, Leila tells me there are two pedigrees here: there is Leila's pedigree, and there is the pedigree of the royal family. Here it is; let me... yes, it's here, and it's huge. Karl wants to know why it's filled in. Why are some of these circles filled in? It's a medical thing; the pedigrees are medical, too. They are used to see how certain hereditary disorders go through families, so it's not just who is the father of whom. Filled circles, or filled symbols, are people who have the disease. A symbol with a dot is somebody who is a carrier; there are asymptomatic carriers and so on. It's a lot of medical information, and this pedigree actually shows hemophilia in the children and grandchildren of Queen Victoria. So a filled circle is hemophilia. Right here; but anyway, if something is filled, it means that this subject has this feature. It could be a disease, it could be the color of your eyes, it could be the color of your hair, anything; it needn't be a disease. Yeah, it's somebody who has this feature in the phenotype, because it could be in the genotype but not in the phenotype; that's a different notation. Yes? This question is probably a little more for Leila. I'm curious: as a medical professional, a professional who runs into various ethical issues, do you foresee a time when, for example, insurance companies will make you and Boris multimillionaires by buying this and forcing all of their clients to fill out these pedigree charts, and then raising rates and canceling people's insurance on the basis of medical histories and so forth? I'm just curious what your thoughts are about possible abuse of what you've developed. So, Boris did a very good job telling you about, and making, these terms-and-conditions things.
And actually, here, if you wish to do something for a publication, for a medical journal, it is not necessary to put in the names. You can just put in some symbols, you can use acronyms, and Boris didn't show it to you because we didn't have enough time, but you can choose whether you wish the name to be saved and printed, or just the symbol, or just a number, or the year of birth. It is manageable. We really can protect information, especially because we told our users that this is on the web, and it is really your responsibility as a medical professional to keep your patients' rights safe. Do you think that maybe an insurance company might decide to buy the software and then force anybody who wants to get insurance to fill out one of these things? I don't really think so, but they really can do this even without this chart. If you are asking about pre-existing conditions and things like that: when somebody buys insurance, he or she has to mention if he or she knows about inherited disorders in the family. So I don't think that this will make it better or worse for the customers. Wouldn't this tool be used to price insurance more accurately, and therefore it wouldn't be an abuse? It's like everybody's got their cards on the table, and this is how much it should cost. It's an advance, not an abuse. So, I don't know if my question is addressed to you; I'm not a medical professional, but if I'm allowed to say something about this: I think the question here is not a question for physicians or programmers. It's a question for the lawgivers, a question for the government. In many countries they now have laws saying that companies cannot use this information in their decisions about giving you insurance or pricing the insurance, and this is probably a good thing. What we do is make tools which help physicians, which help researchers and so on, and it's the job of the government to see that this is not abused, because anything can be abused. Yeah, and I should just confirm that this is a tool, and you can use it... you know, in medicine we have a kind of proverb that you can use a knife for different purposes: you can use a knife for saving a person's life, or you can use a knife for killing a person. It's the same tool. And for medical professionals, I see really two great advantages of this software: it can help me make pedigrees beautiful, and it can help me make them nicely printable, and I can do it quickly now. Because I have dealt a lot with medical pedigrees for 15 years of my life, and each pedigree is really an artwork, it used to take me hours to make a simple pedigree that looks okay. But these ones are really beautiful. You talk about the pedigrees looking beautiful, and I know that automating something like this is often much more complicated than the output makes it look. Can you comment on the complexity of the algorithm? Okay, again, there is a paper in TUGboat. Very quickly, because we are running out of time: there is a classical algorithm for drawing rooted trees, trees where you have a root and go down from it. When we started doing this, we understood that pedigrees are not rooted trees; you can have several roots. So we created an algorithm, and now it works. The problem with this algorithm is that we have consanguineous marriages, and that means the graph is not necessarily a tree from the mathematical point of view; it could be any graph.
And if you know mathematics, graph theory, this means that you can make a pedigree that is impossible to draw without self-intersections. Right now, I have a hack inside this program which handles consanguineous marriages in, okay, let me say 80 or 90% of cases. I hope I can improve it; I have some ideas for making it better, some better algorithms. So in many cases, yes, but because people sometimes marry their relatives, my life is difficult. Okay, thank you again.
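To make Boris's make-based pipeline concrete, here is a minimal sketch of the kind of per-job makefile he describes. The file names, the resolution, and the 40cm-by-25cm paper size are illustrative assumptions; the real points are that dvips accepts a computed paper size (its -T option) and that make builds only the target the user asked for.

    # Sketch of a per-job makefile; pedigree.tex and the dimensions
    # below would be written in by the Perl front end.
    WIDTH  = 40cm
    HEIGHT = 25cm

    pedigree.dvi: pedigree.tex
    	latex pedigree.tex
    pedigree.ps: pedigree.dvi
    	dvips -T $(WIDTH),$(HEIGHT) -o pedigree.ps pedigree.dvi
    pedigree.pdf: pedigree.ps
    	ps2pdf pedigree.ps
    pedigree.png: pedigree.ps
    	gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r150 \
    	   -sOutputFile=pedigree.png pedigree.ps

With this in place, "make pedigree.png" builds only the DVI, PostScript, and PNG; the PDF is never created unless somebody asks for it.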
|
The acronym glamp is used to denote a combination of GNU Linux, Apache, MySQL and Perl, Python or PHP, which now is one of the most common technologies for dynamic creation of Web pages. In this talk we describe the use of this technology for automatic creation of medical pedigrees.
|
10.5446/30870 (DOI)
|
So people who have been at conferences before may have also heard me talk about XeTeX, which has been my other project in the TeX world, but this talk is not about XeTeX; I'll try not to mention it again. So what is TeXworks? I've titled this talk "TeXworks for Newcomers", and I think I can interpret that in two ways. Firstly, I want to introduce TeXworks to people here who may not have seen it before. I've talked about it at one or two meetings in Europe, but I don't think I've done one in North America before, so there may be people for whom it's new. But also, TeXworks is designed particularly to be appropriate for newcomers to the TeX world. It's designed to make TeX that little bit more approachable, easy to get started with. And then we'll also look at some things that are new in TeXworks now that weren't there when it first came out with TeX Live last year. So by way of background, I'm sure we've all seen many demonstrations using TeXShop on the Mac, with many thanks to Dick. TeXShop has been, I think, one of the outstanding success stories in the TeX world over the past I don't know how many years, and it's done a huge amount to make TeX attractive to Mac users. It's a great program. The one shortcoming it has is that it's only on the Mac. But it offers this very simple, minimalist user interface that I think many users find straightforward, simple to work with. It works directly with pdfTeX to create a PDF file, which everyone knows what to do with. Nobody knows what a DVI file is. Well, obviously we all do, but all those people out there who might use TeX if they weren't so scared of it have no idea what a DVI is or what to do with one. But PDF they understand. And of course TeXShop has this delightful user interface that fits right into the operating system and gives you lovely touches like the magnifying glass, and now SyncTeX support, to get between PDF and source. So TeXworks was conceived to try to bring a similar kind of experience to users who don't happen to use a Mac. Of course, an awful lot of people don't happen to use a Mac: they use Windows or Linux or BSD or whatever. So the TeXworks project, with much encouragement from Karl and Dick, began a couple of years ago, when we had dinner at a TUG meeting and decided to try to do this. The idea was to bring that kind of experience to new users, whatever platform they might happen to be using. So here's a typical example of what interfaces to TeX look like; this one's on Windows, isn't it? If you're a newcomer to the TeX world and you launch a program and it looks like this, I think for many people the reaction is one of horror. It looks complex. It looks intimidating. It presents you with I don't know how many dozen little buttons with cryptic icons on them. If all I want to do is type "once upon a time, in a distant galaxy, there lived a computer named...", the story from The TeXbook, or if I want to type a letter or something, this is, well, overkill. I'm not saying this is a bad interface, but it's not the right interface for every kind of user, and I think for many non-technical potential users, it's a scary interface. Here's another one, with special symbols and Greek letters and so on all over the place, and a zillion buttons, and a little area where you can type. Here's another example. These are just random screenshots I took a couple of years ago, so they may be out of date, but they show what various TeX interfaces look like.
And for the people they're designed for, I think they're great. They offer lots of tools, but I think there are also a lot of people for whom they're just intimidating, frightening, confusing. So this was TeXShop: just a whole lot simpler, cleaner, very little to get in your face and confuse you. Just the bare minimum you need to start typing a document, typeset it, and see some output. TeXworks aims to bring that same experience to users across a much wider spectrum. So the goal was to develop this using free software tools and components, so that it could be distributed freely with TeX Live or other distributions. TeXworks is built using Poppler, which is a free library that gives us PDF display, and Qt, which is a framework for building applications and made it possible to put together a reasonably modern GUI application in a finite amount of time. So what are the features that we have to offer? Well, in many ways they're a lot like the features TeXShop offers: a straightforward text editor with the usual kind of features people expect, including syntax coloring for TeX and LaTeX; auto-completion of TeX commands, which is available for people who know what they want to do, but you don't have to use it, and it's not going to get in your face if all you want to do is type. Then there's the ability to execute the typesetting engine to generate formatted output, and other tools like BibTeX or MakeIndex, and a window to view the output, which is based on PDF display. So by default we expect people to be running tools like pdfTeX or XeTeX, or LuaTeX now, that directly generate PDF. It shares a lot of interface features with TeXShop; I don't know what I would have done without Dick as a model. So we have the same ability to retypeset with one click from either the source or the output window. We can jump between the source and the output using SyncTeX. The interface is localizable; currently you can run the TeXworks interface in 19 different languages, thanks to a whole bunch of people who have translated it into their particular favorites. Of course, there's a lot of room for growth, a lot more power-user features that would be nice to have somewhere down the road. But one key consideration, I think, is that any power-user features we add must not get in the way of a newcomer. They must not complicate the interface so that someone coming to this for the first time, wanting to work through a little tutorial and type their first TeX document, is faced with a thousand and one features that are specialized for people writing a physics paper or a math paper or something, which they're never going to understand. Actually, one of those power-user features is there now: a scripting language for extensibility. The idea is that TeXworks itself, the core application, will try to stay simple and straightforward, but now that we have the ability to use a scripting language to extend it, users with specialized or more advanced needs can customize it to suit those needs, without us having to build that into the core program and complicate life for everyone. In fact, we have several scripting languages; we'll see that in a moment. You can now choose between three different languages to write your scripts, so hopefully that will avoid some of the language wars about which one it should be. Is it going to be Lua, or is it going to be Python, or is it going to be Ruby? Well, take your pick.
Actually, we don't have Ruby at the moment, but somebody could do it if they want. So TeXworks was released last year, at the time of the TeX Live release; version 0.2 was the first stable version we released. There had been experimental prototypes before that. For Windows it shipped with TeX Live. For the Mac it also shipped with the MacTeX distribution; MacTeX picked it up at about the same time, and so it has been installed as a standard part of MacTeX for the past year or so. And there are people packaging TeXworks for various Linux distributions and other environments; I think there have been some BSD packages. I don't know much about that; it's totally out of my hands, but obviously I'm very happy for people to do that and make it available through the appropriate channels for each environment. So what's coming in this summer's release, which will be 0.4? It's not out yet; we're still in the experimental release stage, but very, very soon now. There will be extensibility via scripting languages. What goes into the application is QtScript, which is like JavaScript, for those who have done some web development. But we also support plugins that can interface other scripting languages to the same core functions in the program, and we have plugins for Lua and for Python. There are two basic categories of scripts that are supported. What we call standalone scripts appear in the menu of the application and can do whatever you want them to do: insert a special bit of text, make changes to your document, or something. And then there are hooks, which are scripts that are automatically activated at certain times, such as when a new file is opened or when a typesetting job has just finished, and those scripts can step in and do some additional operation. So I'll show a few examples of those. So, demo time. Let's hope this works. Let me see. This is actually TeXworks that I've been showing the slides from here, and it's really confusing with the resolution being different. Let's open up a new file. So this is a simple LaTeX sample document that everybody's seen, sample2e. The TeXworks editor shows it with syntax highlighting, which can be customized by the user if you want. And if I run pdfLaTeX, it typesets it, and here's the output on the other side, which we can zoom in on and look at. Let's enlarge that a little at least. Now, what did I want to demonstrate while I have this open? Okay, well, first of all, let's just show that we can change the language that we're running in. So, for example, Bruno's here, so let's run it in French. There we go, and the TeXworks menus are now in French. Or if we bring up the preferences dialog, again, that's now in French. We could even go with Russian, for example. And now we're running in Russian. Or how about Chinese? Yes, that works too; here's TeXworks in Chinese. Let's get back to English so I don't get too lost. Because those are coming from a list that's not part of TeXworks itself; they're coming from the language resources. Potentially we could try to do that sometime. But yes, there are certain things that don't show up localized right now, but almost everything does. Yeah. Okay, let's see. SyncTeX: if I command-click at a point in the source here, my PDF view jumps to show the same point, highlighting it in the PDF. Or if I scroll to a different place in the view and command-click here, it will jump to the corresponding place in the source.
That's the SyncTeX feature at work, to navigate between source and preview. An experimental new thing we have in here now is an auto-follow option: as I move about in the source file, it will automatically highlight the corresponding place in the PDF. So when I go on to the next page, it will automatically follow where I'm editing. Personally, I don't much like that, but a user was asking for it, so we thought we'd give it a try. (Is the output paginated?) The output is paginated, yes. So that's viewing the full page, but if I zoom in a little more, then you can see it more closely. Let's enlarge that a bit more. Let's see. Okay. Features in the editor. One would be command completion, which again is modeled on TeXShop. If I want to insert a new section command in here, I could type it in completely manually, but if I just begin typing it and hit tab, it can complete the command for me. There's a customizable list of all those completions. If I wanted to insert a figure, so I need a begin-figure, end-figure pair, I could just use bf; that first expanded to \textbf, but there are alternate expansions that I can cycle through. And so it can be very efficient to enter text once you're familiar with a set of abbreviations, and again, those can be customized by the user; there's just a simple text file that lists them. Because it's a TeX-aware editor, if I turn on spell checking, let's make it English, it knows not to spell-check the LaTeX commands in here, so we're not seeing errors on those. That would show up better, of course, if we picked a different language: if we tell it this is French, then it's almost all spelled incorrectly, but it still doesn't flag \LaTeX here, because it's a control sequence. It will try to spell-check... well, the math will mostly be control sequences, so it will ignore them; it's not clever about that. Let's show a few examples of what you can do with scripting in the new version that's coming this summer. Start with something very simple. I don't have enough screen space here; okay, let's put that out of the way. I don't want to build into TeXworks a lot of features that are oriented to one specific format, such as LaTeX; then, for all the people using ConTeXt, those features would just get in their way and clutter the interface. So things like adding specific LaTeX markup can be added with scripts for the people who want that, and the ConTeXt users can do something completely different, and the plain TeX users can do their own thing. So, an example of LaTeX styles would be bold: I have a script that will apply bold tagging around the selected text. It's actually a script that's smart enough to toggle, so if I do it again here, it will take it away. So let's have a look at how that's done. If we bring back Manage Scripts: toggle bold. Is that at all readable from back there? This is written in QtScript, the JavaScript-like language.
And because I wanted it to be the smart thing that recognizes whether the markup is already there, so it can toggle it on and off, it has a function here, which I'm not going to try to go through, that looks at the text selected in the document to see whether the boldface markup is there around it; if so, it takes it away, and if not, it adds it. And so we can call that function and just pass it the markup. And I've got another one that does exactly the same thing for emphasized text, based on the same function, just adding \emph instead of \textbf. You can apply those, so we can make that bold, and we can even do both. If I toggle bold now, it won't take the bold away, because it's not directly nested; it will add it again. So we can undo them in reverse sequence. Another one that's again done with a script, so that it can be customized, is title case. There are built-in uppercase and lowercase operations that you can apply to selected text. Title case is a bit trickier, because it's very language-specific. I don't want to hard-code that into the application, but I do have a script that will apply a title-case transform to the selected text, and it knows about words like "of" that should be skipped. And because it's a script, it's straightforward to customize for your particular title-casing needs; it's not hardwired into the program. Okay, let's look at what you can do with hook scripts. I have several here, but they're currently turned off. So let me put a couple of errors into my document here: let's take that away, but we'll add in a break here, which is going to cause an extremely underfull line, and let's mistype a command here as well. Right now, if I run that, I just get the standard LaTeX log here, with error messages hidden away in the middle of it, rather hard to find. But if I find... where's my scripts window gone? Manage Scripts, here it is. If I turn on a script here, a hook that runs after the typesetting job, it analyzes the log to try to locate the error messages and present them in a nicer form. So if I rerun this now, I've got two panels: the standard console output, but also this LaTeX errors summary. That was extracted by the script from the log, and I can click on it to get to the place in the source where the message came from, or the undefined control sequence here, which we can go and fix. So that makes it a lot easier to navigate the messages that the job produced. And again, because that's a script rather than a hard-coded feature in the program, it can be adapted to suit different formats. ConTeXt, I don't know, probably generates error messages that are quite different from what LaTeX generates; it would be appropriate to have a different script to analyze the results of that run. Let's see, another one that a user asked for just very recently: if I open a .bib file, I have TeXworks set by default to show LaTeX syntax coloring, but in a .bib file that's not really very appropriate; it only colors occasional fragments. More important, I also have my editor configured to do smart quotes by default, so that if I just use the double-quote key on my keyboard, it will generate the TeX-style open quotes, and at the other end here, it will generate the close quotes. But if I'm editing my .bib file, I need to use real double-quote characters, and it's going to be wrong if we start entering a new record and put TeX-style quote marks around fields.
So to deal with that, how about having a special mode for .bib files? Well, again, that's getting very narrow and specialized, and I don't want to build it into the program, but I have somewhere here a script... let's see, Manage Scripts again, yeah: set BibTeX mode. It's just a very simple script which looks at the file name that's just been opened (this is a hook script that runs when a file is loaded). If the file name ends with .bib, then it turns off the smart quotes mode, and it turns off the LaTeX syntax coloring. (Can that be the default? It can; you can choose which defaults to have. Yes, it probably should be, and will be.) So if I turn that script on and then reopen my .bib file, it comes up without the syntax coloring, and if I type quote marks, I just get the plain ASCII quote marks. I haven't done it yet, but the syntax coloring again is customizable, so if somebody wants to write a BibTeX version, they're welcome to have a go. Okay, just a couple more things we can do with scripts. It's possible for scripts to present a user interface of their own. So if I had a block of lines here that I wanted to turn into a LaTeX list, I have a little script that can do that, and it puts up an interface to ask which kind of list you want. So let's have an enumerate, for example, and it will wrap that block of text in an enumerate, with \item on each line. We can do the same thing, but with maybe a custom MyList environment that I might have defined in the preamble, and that is entirely controlled by that particular script, which is written in Python; so there's a Python script that calls back into TeXworks to put up its own custom user interface, takes the results from that, and puts them into the document. So those kinds of features, I think, don't belong built into the editor, but the extensibility is there for people who want to go further and customize it to suit their particular needs. One more fun one shows that it's possible for a script to actually execute system commands. If I type "ls -l" in here and run a "system" script, it will execute that as a shell command and put the results back in my document. Obviously, the potential there is to do all sorts of external manipulations that you might want to do as you develop a document; running ls isn't particularly useful. Okay, let's quickly finish up here. Come on, yes, there we go. So yes, one minute, the right way up. TeXworks is a free, open-source project; I'd love to have more people contributing. I do want to give a special mention to Stefan Löffler, who over the past year has been a huge contributor. A lot of the new features that are there this year, that weren't in TeXworks last year, are thanks to his work, and I'm as much just monitoring and managing as writing code anymore. It's great. Also, many thanks to the localizers and those who've contributed in that way. Alain Delmotte has written a manual that came out with last year's release of TeXworks; I hope he'll have a chance to update it for the new version, but obviously that will depend on his time. That's really, I think, what I wanted to cover today. So there you have it. That's TeXworks. Thank you. (I'll save some questions for the break and just ask you personally.) Is there any BiDi editor support at all in TeXworks? Can you switch it, like in Notepad, to type from right to left instead of left to right? Okay. Yes, you can. That's one thing that works better on some platforms than others, I believe, because of how the underlying Qt frameworks work.
Probably the best is for us to look at that kind of thing hands-on and see just how well it works. I know there are some Persian and Arabic users who are using it, but I don't really know much detail of exactly how well it works. Can you switch it to right-to-left mode, or does it just follow the bidirectional algorithm? It should follow the bidirectional algorithm. I think the default direction switches if you choose the Arabic or Persian interface language, I think, but I'm not even totally sure on that because it's not something I use myself. All right. I know we don't have time. I'll save my next question for when I talk to you privately. You started by showing some other editor with lots of icons and whatever. If I look at what you showed, there's a danger that it gets replaced by a lot of menus and options and scripts and whatever. My question is, how easy is it to configure TeXworks in such a way that it starts with only the specific subset of features that you want? I tried to do something with ConTeXt by having a set of setup files that it would use for start-up, but it's not trivial because TeXworks generates some structure itself and things like that. Is it on the agenda to make it so that you can say: we can start up a ConTeXt beginners mode with only a limited menu? That was the original idea, to keep it simple. At the moment, you can't customize the built-in menu commands at all. The customization is all in the extension of syntax coloring modes, indent modes, scripts and so on. Those you could customize and ship as just a limited set or a more complete set. We do want to make it possible for scripts to actually not just add to the scripts menu, but to modify all the existing menus as well, but that's not in place yet. You would be able to use a script to customize the content of the edit menu or something and take away the commands you don't want to show. Can you customize the key bindings, say Emacs style or whatever the user wants? As far as I know, I don't believe you can customize the key bindings built into the editor. You can customize the shortcut keys for all menu items and so on, but not the basic editing functions. Those are built into the Qt editor that we're using. Hopefully that'll be a forthcoming feature. I mean, if you want to have a good user interface, we shouldn't have to relearn how to type. I'd agree that would be a good thing. I don't currently have a plan; well, we can override everything Qt does, and we do override a few things. I don't think it's impossible, but it's not currently being worked on. Thank you. The goal is to give newcomers an easy to use interface. Is there a way for them to upgrade their installation when the next version comes out? That would be nice. There's not currently any built-in support for that. I guess distributions like MiKTeX or TeX Live will replace it when a new version comes out, but there's not anything built into the program itself. I'd like to do that, but being a cross-platform product makes that pretty hard. Solutions to that tend to be very platform-specific. Okay. Thank you again. Thank you.
|
This presentation introduces TeXworks, a simple TeX environment based on modern standards, including Unicode text encoding and PDF output by default, with an uncluttered interface that does not overwhelm the newcomer. It is built using cross-platform, open-source tools and libraries, so as to be available on all today’s major operating systems, with a native “look and feel” for each.
|
10.5446/30873 (DOI)
|
All right, so I'm going to be talking today about typesetting Unicode mathematics with a LaTeX sort of interface. Now, I know not everybody here is necessarily a LaTeX user, so I'm happy to talk about these ideas in other sorts of contexts if you'll allow me to put that sort of a spin on it. But I also know there are plain TeX users here as well, and I'm not going to be talking about the underlying functionality that allows this sort of work, but again, I can talk to you one-on-one about it if you're interested. So, normally, I'm from the University of Adelaide down in Australia. I'm at the end of my PhD, doing vibrations and magnetics and things, but I've also been doing a little bit of work with the LaTeX 3 project. I won't be talking about that today. And before I begin, I really have to thank a few people for, well, me being up here right now today. The TeX Users Group was really, really generous with helping me to actually come all the way to America, and thank you very much for that, to everyone that's involved. Also, Barbara Beeton has done an enormous amount of work over the last however many years on the STIX fonts, and a lot of work with Unicode mathematics just in general. And obviously, Jonathan and Taco for XeTeX and LuaTeX; well, we wouldn't be where we are today without them either. So I'm not going to be talking in too much detail, because when I said it out loud, it went for a long time. So if you want a little bit more detail, there's an extended abstract printed in the conference booklet. So up here right now, I'm going to be talking about what Unicode mathematics actually is, what fonts are available for it as of today, how we can use those symbols and alphabets that are within the Unicode mathematics standard within LaTeX, and for selecting fonts, and how I think mathematical alphabets or styles should behave from the point of view of a LaTeX interface. Now, where did Unicode maths come from? Well, I guess you'd say that the Computer Modern math fonts were the origins of computer typesetting of mathematics, with a couple of minor exceptions. And back in the day, there were a bunch of fonts put together with, let's say, as many symbols as could be fit into these fonts that would encompass what most people would need to use most of the time in terms of accessing symbols in mathematics. The AMS math additions rounded out that set of symbols for a bit of a larger audience. Later on, there were some other fonts produced, with Euler and Lucida, MathTime, and so on, as well as other non-TeX fonts like Symbol and the fonts used in Mathematica. And the thing about all of these different maths fonts is that, with some exceptions where there were overlaps, all of these fonts had completely ad hoc encodings. So there was no consistent relationship between symbols and glyph slots. So if you wanted to get some sort of symbol in some sort of font, then you needed to know explicitly what font you were using and where that symbol was located within the font. So this was seen to be a bit of a problem, because it made support for new maths fonts very difficult, or not difficult but just very tedious, to implement. So the Math Font Group, way back when, started to put together an 8-bit math font encoding which had the aim of making it as easy to switch math fonts as it is to switch text fonts.
All you need to know is the name of the font that you want to use, and then all of the underlying internals will take care of all of the encoding business. Now, unfortunately (perhaps not unfortunately), this system was implemented back at the time, but it was never adopted, for a number of reasons. But the main reason was that Unicode was this big thing just coming up at the time, and it was seen to be much more important to get Unicode up to scratch with mathematics than to get this 8-bit math font encoding really out there working, because, well, after you've got that working, you still need to do the Unicode math part. So the project morphed from taking these 8-bit math fonts, which had 256 symbols each, or glyphs, I should say, which encompassed probably the greatest number of math symbols used at the time in any sort of TeX font, and expanded that as much as it could given the limitations of 8-bit TeX. And then the Unicode math people, I guess, expanded that to the limits of every single theoretically known maths symbol, with some minor exceptions. So at the time of speaking, I think there are some two or three thousand, four or five thousand, I forget, many thousands of Unicode math characters that have been identified and put into the Unicode standard. And so this is sort of where we are today. Next one. If you imagine this but multiplied by several hundred times, then you get an understanding of all of these symbols that have been identified and put into the Unicode math encoding. And the idea here is that you've got very good coverage of all of the math symbols that people want to use, almost everybody anyway. And you've also got a consistent way that font designers can incorporate these symbols into their fonts. Before the Unicode math people got together to extend Unicode for mathematics, this just wasn't really possible. So you see, all of those old fonts like the Symbol font or the Mathematica fonts back in the day just threw all of their symbols in there, but without any regard for interoperability between different programs. So the fact that we've got mathematics in Unicode means that you can move Unicode plain text that contains mathematics between different programs, like a browser or a PDF or a LaTeX source document, a little bit like Ross was showing in his earlier presentation. So around about the same time, a group of publishers got together to create the STIX fonts. And the idea with the STIX fonts was to actually instantiate, or create a reference for, all of these Unicode math symbols. It's one thing to identify them all. It's another thing to actually say: this is explicitly what we want them all to look like, and here's a font that everyone can use to actually render their mathematics. So talk to Barbara about how much of a problem it is creating an encompassing and consistent math font encoding, or math font, sorry, when there hasn't been something like this ever before. So these have just been released for the first time a couple of weeks ago, which is very good timing for my talk. And they're at the stage now where we have a set of reference glyphs which we can use to typeset any sort of mathematics that we wish to typeset. But maths needs more than just glyphs for proper typesetting. So I'm sure some of you are aware of the complicated algorithms that are incorporated within TeX, and also how these algorithms require certain font properties to be included in the math fonts that you're using.
So Jaco wrote a great paper illustrating all of TeX's math font placement algorithms in a graphical sort of manner, as you can see up here. And it's really good for visualizing all of the complicated things that are going on in there. But to go back to how this works for Unicode: back in Word 2007, Microsoft extended their OpenType font standard so that you could, for the first time, include in OpenType fonts all this extra information and the font parameters required to do the spacing of mathematics correctly. This was, you could say, a generalization of the algorithms in TeX, and also a little bit of an extension. So there's information such as the placement of subscripts and superscripts on the left of a thing, which in TeX you would just copy from where they would go on the right-hand side; in Unicode math they can be specified separately if you want. So with this extension of the OpenType format by Microsoft, OpenType fonts can now contain everything that we need. They can contain all of the glyphs from Unicode, and then they contain the math parameters, the information needed to actually typeset these fonts in a Unicode math rendering engine, such as Microsoft Word from 2007 onwards, and also in XeTeX and LuaTeX, which contain within them an OpenType math renderer. So I'm here today to talk about a proof-of-concept implementation of how Unicode maths will work in LaTeX. To run it, you can get it from CTAN or with TeX Live 2010, write \usepackage{unicode-math}, and I'll show you this package in practice in a minute. Because it's dealing with Unicode and dealing with OpenType fonts, it requires one of these TeX extensions such as XeTeX or LuaTeX. You wouldn't really be able to do this sort of thing with one of the older Unicode TeXs like Omega or Aleph, because they are only 16-bit, and the math characters in Unicode actually extend past the 16-bit limit. So right now this package works really well in XeTeX under LaTeX, or XeLaTeX. LuaLaTeX supposedly works, but it needs a little bit more testing. Okay, so we really want to know what fonts are available. There's no point actually writing this thing if there are no supporting fonts available to use. So the gold standard at this point in time is Cambria Math, which was commissioned by Microsoft for their OpenType math font rendering engine. And this is a proprietary font. You can buy the math font itself for $35, or the entire family of text fonts with the math font for $120, and that's from the Ascender font foundry. And the reason I call it the gold standard at the moment is because they've put a lot of work into it, to put a lot of glyphs in there (at this stage not as many as STIX), but they've also tuned up all of the spacing and made it look real nice for their Word 2007; and also, as you can see here, this is typeset by XeTeX. So it looks pretty good. Asana Math is the first Unicode math font made freely available. It was put together with the FontForge editor by Apostolos Syropoulos, and FontForge has added the extra tools necessary to add OpenType math information to fonts. And it's an extension of the MathPazo fonts, which are Palatino clones with math support which you can access in LaTeX. It works at the moment, but it doesn't have all of the polish that you see in Cambria Math. Now, as I was saying, the STIX fonts have been released, but they have not yet got the OpenType information necessary to do Unicode math typesetting.
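To make the setup concrete, here is a minimal sketch of the kind of document being described, assuming the XITS Math font is installed; it should be compiled with XeLaTeX, and the font name is just one possible choice:

    \documentclass{article}
    \usepackage{unicode-math}   % loads fontspec as well
    \setmathfont{XITS Math}     % any OpenType maths font can be named here
    \begin{document}
    \[ \mathcal{L}\{f\}(s) = \int_0^\infty e^{-st} f(t)\,dt \]
    \end{document}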
And while this is planned for the very near future, Khaled Hosny has started to do this himself, because the STIX fonts are under a free font license. So the XITS Math font is the STIX fonts, but with this extra information to do the OpenType rendering. And as you can see up here, it looks pretty good. Just by comparison, what I mean when I say you need extra information is that if you try and do this example right here with the STIX fonts, you get something very similar, but you can see this integral sign hasn't expanded up to fit its display-style requirements. And if you had large braces and so on, they wouldn't automatically change size. So everything is there in STIX; it just needs a little bit more information, which will be coming in the near future. Finally, the Euler font is in the process of being extended and put into Unicode with these extra parameters. Khaled Hosny, again, is the number one person behind this, but I think Hans and Taco are helping out a little bit. You might want to... yeah, I've got some confirmation on that. And I find this font extremely attractive, if a little bit unconventional, so I'm glad to see that it is surviving into the modern age. So we've got these Unicode math fonts, and we've now got support for actually using them in LaTeX with this package, but how does it all work? The fundamental philosophy behind this package is that you should just be able to load unicode-math and not have to change any of your mathematical source documents to take advantage of what it can do. So all of the maths documents that currently exist should work, but there's big fingers crossed on that, and I'm sure there are lots of edge cases where there's a little bit more work needed to be done. I just thought this morning that the current \bm command won't work properly at the moment, and also the breqn package is not supported yet either, but, you know, development is continuing. And so the way that you input symbols and characters into unicode-math with this package, for a start, is exactly the same as if you were using regular LaTeX. If you want a mathematical w, you hit w on your keyboard. You can also, however, as Ross was showing in his examples, if you've got the Unicode omega, sorry, the Unicode w, in a string of plain text, paste it into your LaTeX source, and it will behave correctly. Similarly, if you want to input a symbol, the traditional LaTeX way is to write \circledast. If you have the Unicode character for that, however, you can paste it into your source and it will work correctly as well. The biggest change is for Unicode math alphabet styles, where you have alphabets in Latin and Greek that might be in bold or italic and so on. And these are actually incorporated within the mathematical Unicode standard, so that you can copy and paste plain text of mathematics and a lot of the meaning will be preserved as you move this information around. You can refer to these characters with actual control sequences now. So if you want a bold x, you can write \mbfx. But there's also the backwards-compatible method of doing so, where you would use the \mathbf command to get the bold x. In some cases, this will be more convenient. And finally, again, if you have this bold x lying around in some text, you can paste the Unicode character into your LaTeX source and get the output. I'm sure you can see the pattern here. So as I was saying, there are lots of these alphabet styles within Unicode maths; just very quickly:
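A small sketch of the equivalent inputs just described, for a bold x (the literal character is U+1D431 from the Unicode mathematical alphanumerics block):

    $\mbfx$        % the unicode-math control-sequence name
    $\mathbf{x}$   % the backwards-compatible LaTeX command
    % ...or paste the literal character U+1D431 straight into the source

All three should produce the same glyph from the current maths font.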
You've got a Roman variety with Greek, in upright, italic, bold upright and bold italic. With sans serif, you have Latin in the normal weight, and for bold sans serif you also have a Greek alphabet. And this might seem a little bit strange, that the normal-weight sans serif does not have Greek. This is because the Unicode consortium is very strict about the requirements under which they will allow symbols into Unicode. Evidently, we haven't got any proof yet that mathematicians use sans serif Greek, without it being bold, in any of their documents. So at this stage, it's not allowed into Unicode. That might change in the future, in which case (and the STIX fonts, I think, indicate that this might happen) some of this support will be rounded out. The good thing about Unicode is that it is open-ended, and it will adapt to change as mathematicians and other scientists and technical writers use more symbols in their documents. So we've got bold italic as well. And then all of the other styles that we'd be used to seeing: typewriter or blackboard, script and bold script, fraktur and bold fraktur. Actually, to be honest, I didn't know that there were bold script and bold fraktur alphabets, but that's why we do what we do. Now the STIX fonts, as I said, have already started to expand on the number of symbols that are in Unicode mathematics. And so we have a bit of a rounding out of the alphabets that are already present, including the sans serif and the blackboard fonts, where we now have Greek styles of sans serif, as well as a complete alphabet of blackboard italic and bold blackboard and bold blackboard italic. I'm not sure if that's for consistency mostly, or because it's a need that has yet gone unfulfilled. I might talk to Barbara about that a little bit later. But one area where I've had recent feedback from an actual working mathematician is the calligraphic alphabets, where this mathematician was using \mathcal and \mathscr, so a script font and a calligraphic font, within the same documents, even within the same equations in his mathematics. And there's published proof that this sort of requirement is necessary. So the STIX fonts have a calligraphic font and a bold calligraphic font, and I'm sure within a certain amount of time these will also be included in Unicode. One point that I feel I should mention, although I don't think it's that important, is that the calligraphic fonts don't have a lowercase alphabet. And I'm not saying that they should. I'm just saying that this is the only Latin alphabet in Unicode mathematics that doesn't have a lowercase. Am I wrong about that, Barbara? Numerals? Okay, I might have misinterpreted one of the... I haven't looked into the STIX table properly to get all of the support working for calligraphic alphabets, so I might have misinterpreted some of the glyphs that I saw were present in the fonts. We'll talk about that later. Okay, so you can input symbols and mathematics just as you normally would, with a bit of an extension there. But one thing that I've tried very hard to achieve with this package is a separation of content and form that you don't traditionally see in LaTeX mathematics. So up here I've got a couple of Latin and a couple of Greek letters. And depending on how you load your font, or if you load the package with this style, you'll get a different output based on your preference of how your mathematics should look.
So LaTeX, I should say, traditionally has italic Greek letters, except for uppercase Greek, which is upright. But the ISO standards say that, well, we should be a little bit more consistent and use italic shapes for everything. With the unicode-math package (and there are also a couple of other packages on CTAN that were my inspiration for this), you can switch between a TeX style or an ISO style simply by changing a package option, rather than changing any of the symbols that you're using in your math source. There are a couple of other options for completeness. Some French mathematical typesetting uses upright Latin uppercase and lowercase Greek. And then there's an upright option to support the Euler math fonts particularly, and it has a couple of other uses as well. Along another axis, the bold fonts change their visual look depending on, well, who's doing the typesetting and what program is doing it. So in LaTeX, traditionally, you would use \mathbf to access the bold Latin letters, and you'd use \bm to access the bold Greek letters. I can't say how many times I've been asked by users: how do you get bold Greek letters? I've tried \mathbf and it just doesn't work. So in unicode-math, you can use this \mathbf command for both Latin and Greek, which I think is a big usability improvement. And again, you can choose the style that your bold math will appear in. So in LaTeX, traditionally, you'll have upright Latin letters, but you'll have italic Greek letters except for the uppercase. The ISO standards again say we should be a little bit more consistent and simply use italic for everything. And again, we can use upright in case that's what we want to do, according to the font that we're using. Right. So I just want to show you real quickly what this actually looks like in practice. So here we go. Make this bigger. Different sort of bigger. All right. You can see this is a XeLaTeX document, and you begin your document as with any other. And you start off by loading the package. It does have some package options, but you don't need to worry about them. And we can then set the math font using whatever Unicode math font we have available to us. In this case, I'm going to use the extension of the STIX fonts to demonstrate what's going on here. So this is an example of a mathematical equation from the AMS math test suite. And it looks like this. Just to show you. What I'd like to demonstrate is changing the style of the bold math in here. So if we take a look at the bold k that's here upright: I've just changed this to be in an ISO style, so when we retypeset the document, wait just a second, we see that it comes up in an italic bold format now. We didn't have to change the source of our mathematics to do this. I'll just change that back to the TeX style, because I prefer that. So taking a look at another equation, we've got this Laplace transform that I demonstrated earlier. And the source of this looks exactly as you would have typed it in real life. What I want to demonstrate is that you can now use Unicode symbols in your input source to write this mathematics. So I pushed escape just then to auto-complete the TeX control sequence into a symbol. This is just a function of the editor that I'm using, TeXShop. TeXworks will also support it. And this stuff can be automatically generated, so I'm hoping that we'll be able to get support for TeX editors to do this as (or if) this package becomes more widely used.
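In package-option form, the switch being demonstrated might look like this (a sketch; these are the documented unicode-math option names):

    \usepackage[math-style=ISO, bold-style=ISO]{unicode-math}
    % other values: math-style = TeX | french | upright
    %               bold-style = TeX | upright

Changing the option changes the rendered shapes; the maths source itself stays untouched.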
Similarly, your integral can be represented as a symbol, and the infinity sign and so on. Retypesetting this will produce exactly the same output, so I'm not sure if I even need to demonstrate it, but there you go. Similarly, in our previous equation, you can get this product sign and make that look like a product sign. Sorry? Not at this stage, no. But I'll think about doing that. I'm not exactly sure how that would work. As far as I know, there's only one bracket, sorry, parenthesis, within Unicode mathematics, so you can't differentiate between different brackets based on their Unicode value. Finally, if we take a look at some more examples of using Unicode input in our source, we can see a couple of magnetic field equations up here. And to take a look at the source in the LaTeX code, coming down a little bit... ooh, it's gone. I wonder why that happened. That's embarrassing. It showed up earlier, but I wonder where that would have gone. Oh, well. I wonder if we can copy this out of here. I don't think that's going to work. No. All right. Down here we can see another example where you have a Greek letter here that is pasted directly into the input source. And there are even a couple of subscript and superscript characters that you can use in your equations. I'm not saying that people want to be doing this, but when you're copying and pasting text between different areas, from a PDF or from a web page or something like that, you might end up with these superscript or subscript characters in your source unintentionally. If you do want to use them as part of your LaTeX source, that's great. And as you can see, it does help the readability here a little bit. There's another missing symbol in the source here, which is this differential d, which is part of... I think it originally came from... Well, that's a long story. I'll go into that later. So what I've tried to show you here today is that the traditional way of doing LaTeX maths still works. You can switch fonts without having to change anything. A new Unicode math font might be produced, for example, for Minion, in which case you just need to buy the font and plug the name into this package, and everything would automatically work, assuming that my package works perfectly all the time, which... the more people use it, the more chance there is of that being the case. Now, just a couple of final remarks. You're not always going to want to use the same font for all of your Unicode math characters. Some fonts will only contain a partial set of characters, and you can always use the STIX fonts to fall back on for any characters that you happen to be missing. So it is possible to load different ranges of Unicode symbols with different fonts. So here, you can see that I've loaded different coloured fonts for different ranges in the equation. You can select the equals sign and say that you want that to be green, or you could have a range in there, say, from character 100 up to character 1,000, and make that purple. I mean, this is all silly, but it emphasises the flexibility that you can use to select different fonts. So the operators and the script fonts are red, and you've got the brackets up there which are blue. Yeah, I mean, this might be of use in some sort of education setting where you're trying to teach the kids what the different symbols are, and you could use the colours to emphasise it, but really, this shouldn't be taken too seriously.
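A sketch of the range mechanism just shown, with arbitrary font choices standing in for the coloured-fonts demonstration:

    \setmathfont{Cambria Math}                    % base font for everything
    \setmathfont[range=\mathbb]{XITS Math}        % blackboard bold from XITS
    \setmathfont[range={"2200-"22FF}]{Asana Math} % an operators block, by code point

Anything not covered by a later range declaration falls back to the base font.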
What's going to happen next? Well, who knows? I'm not really sure where the future will take us, but I'm sure it's going to be fun to find out. Thank you very much. Is it possible to take a glyph from one font family and use another font family except for that one glyph? Yes, it is. I did have an example of this, in which you might want to load a Greek alphabet, for example, in an italic style, but only use, say, an upright pi, because in the ISO standards you want to use an upright pi to represent a constant. You can do this with this package; it hasn't been extensively tested, but yes, you can pluck just one single symbol from an alphabet. Thanks. Okay. Maybe we should just go on to the future. Thank you all. Thank you.
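A sketch of that single-glyph trick, picking an upright pi from another family (Neo Euler is upright by design, so its pi comes out upright; the font choices here are only illustrative):

    \setmathfont{XITS Math}
    \setmathfont[range=\pi]{Neo Euler}  % only pi is taken from Neo Euler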
|
Over the last few years I’ve been tinkering with Unicode mathematics in XeTeX. In late 2009 I spent a few weeks ironing out the significant bugs and think I’ve got a pretty good handle on the whole system now. In this presentation, I’ll discuss the advantages Unicode maths brings to LaTeX, challenges faced dealing with Unicode, challenges with maths fonts (including the STIX fonts), challenges with compatibility with amsmath and/or MathML, and assorted related remarks. In future plans, I hope to use this system as the basis for equivalent development in LuaTeX as well.
|
10.5446/30875 (DOI)
|
Good morning. I'm here today to talk about some really simple ideas for controlling widow and orphan lines. I call them orphan lines here; I'll get to that later. But basically the goal is to control them automatically: to have a program that would go and deal with them without human intervention if necessary. TeX has very, very sophisticated ways of doing this manually, if you know where you want to do it and how to handle it. But my colleagues and I wanted to do this automatically; we didn't want to have to go through that. I'll start by mentioning, in case anyone doesn't know (I imagine that almost everyone does), what orphan lines are. An orphan line is a single line at the top or bottom of a page. It comes from a paragraph: basically the paragraph is getting chopped off, and its last line is at the top of a page, or its first line is at the bottom of a page. And people have called these various things. Some people call one of those a widow line and the other an orphan line, and people actually get them backwards sometimes. In The Complete Manual of Typography, Felici just calls them orphan lines; he uses widow lines for something different. So that's what I'm following here today. We don't really like the names, but that's the standard. Basically, my colleagues and I have developed a simple little typesetting program that we call AML. And in doing this we had several targets. Primarily we wanted to generate very high quality, very dense PDF. And we also wanted to develop ebooks. And this was a while ago; ebooks have made a dramatic change in the time since we started, especially recently, with some recent product announcements. But our goal back long ago was to focus on creating PDF for these ebooks and tagging it. And we wanted to be able to look at it not only on laptops, but also on Palm Pilots and things of that nature. So this is why we were looking at this program, to optimize that. And at its heart, it is pretty much a PDF generator. We wanted to target PDF, and to make sure that we could support the features of PDF, and also do so very cleanly and very well: the various operators, the text operators, graphics operators, things of that nature. So when we were starting this, we started with TeX, which we loved. At the time, we didn't think that it was creating the PDFs that we needed or wanted. At the time, the memory on some of the handheld devices that we were targeting was very small, and we needed to heavily compress and take advantage of the operators there. Also, we wanted to try some different things. We wanted to process the document as a whole: for one thing, we didn't want to just go page by page; we wanted to see if there were some optimizations possible there. And we wanted to get beyond some of the limitations, the 256 registers and things of that nature. We wanted to try to expand things. So one decision that we made was that our macros are Scheme-like. I put scare quotes there because it's not a full Scheme: it probably doesn't implement tail recursion and all the necessary features, and you can probably mutate non-mutatable values and things of that nature. But the syntax is very much like Scheme, and so are the ideas. Another thing we wanted was for this to be purely line based. We did not want to go off the line grid. We did not want vertical glue, for example. We wanted this to be strictly lines.
We do have concepts of skipping half lines or fractions of lines, but the line is one of the fundamental points in this package. And as I mentioned before, we wanted to process the whole document. We wanted to be able to do several things with that, including forward references. If we wanted to refer to a section that's later in the document, we wanted to be able to process it, go back, and update that reference before producing the PDF itself. So, as mentioned before, AML includes this ability to backtrack. If we find a forward reference, we put a marker there, and once it is defined, we can go back, fill that in, and effectively reprocess the document. All this processing goes on prior to the PDF generation; we store it in an intermediate form before that happens. And there are various rules for backtracking, but it may only go back to the last unresolved point. But here's an example. One of the things that we were targeting with this was basically books for schools, children and things like that. This is a page from Pride and Prejudice, downloaded from Gutenberg; we wanted to have a copy available for kids, and so this was one of our example texts. And you can see at the bottom there's a line, one of the orphan lines that we're trying to get rid of. I should mention that my obsession with orphan lines, as I call it, comes from my master's thesis, where we had to present it to the document formatting people who would go and judge it, red-mark it, and tell us where all the bad parts were and what we needed to fix. This would be a standard reference. And two of us in graduate school decided we were going to go through perfectly, which had never happened; no one ever thought that could be possible. One of my friends did it his way; I used TeX, which was new to our university at that time. And I thought I did a wonderful job. And it came back, and one of the things they mentioned was the orphan lines, which was a minor thing that I had missed in the document format requirements. And basically I had a night to fix it. So I spent all night trying to fix these lines, and I'd fix one and another one would pop up, and it just snowballed on and on. It was an all-nighter, and even though I was much younger and could do that, it was still not a pleasant experience. So in the years since then, I've always tried to target these things in the hope that they would go away. This is an example of the output. Normally, as it turns out, most of our stuff is on 8.5 by 11 paper, because that's what people have to print out on. This I shrunk down to a more standard paperback size. And this is an example of some of the things we can do in AML, and some of our philosophies with it: with the embedded Scheme, we can actually annotate the text itself, giving features of it. On your right-hand side is the stretching or shrinking of space in that particular line. The reds are the lines that we consider to be out of control, and that's a threshold that's set in the language itself. And the third line (I don't know if you can see that or not) is zero; that means that all the space was used there perfectly. And on your left-hand side, there are both paragraph and line-number markers. And the number to the left of that is the penalty of the paragraph.
That's a measure of how good or bad that paragraph is, and we'll get to that later. And you can see a bounding box around the text. We have lines there at the text itself. And again, this is developed on the baseline. We wanted pages that were printed back to back to line up nicely. That unfortunately didn't work out, because the printers seemed to offset it, so we never quite got there. We tried various rotational schemes to handle different printers, but it was too random for us to deal with. And you might be able to see here that basically at the start of the program, we figure out how many lines there are going to be, and we actually adjust the spacing to fill the lines in different ways. At the top line there, we've optimized it on the x-height. We can also optimize it on the cap height or the bounding box. And these are some of the annotations that we do when we want to look at text. The program outputs log files that say: your paragraph so-and-so is out of tolerance, or: bad line at this paragraph. And what we can do is have it print this out, go back, look and see (say the first line of paragraph 512 there is bad), and figure out how we want to deal with it. For the most part, the text that we have we don't really want to change; we don't have the option to change it, so we usually just have to live with it. But when we go through it, we can get the reports and go back and see what's going on. So I'd like to talk about the Knuth-Plass algorithm, which is the basic typesetter that we use for breaking the paragraphs into lines. When we started this, we implemented a simple algorithm, basically a first fit. And it looked okay, but I got hold of this book, Digital Typography, and found the paper in there, which I think is a wonderful paper. We implemented this algorithm, and it made a dramatic difference. And I imagine that everyone here knows more about this than I do, but if you've never implemented this, I would suggest trying it, and trying the differences and seeing them, because it was quite a revelation to us. I'm just going to briefly try to describe it, because, you know, I think most people know more about it than I do, and also I can't do any better than the paper that presents it. I was rereading it last night, and I think it's a wonderful introduction to typography and also to the algorithm. But basically, the words in the paragraph and the breakpoints are converted into a graph, each potential breakpoint is given a penalty, and then the algorithm optimizes that from the beginning to the end. The penalties add up, and that is the number given on the far left-hand side in this example here. So it's a measure of the goodness of the paragraph. And the paper describes various features that assign penalties so you can optimize the output typographically: how it can change the spacing from line to line, and make sure the spacing is consistent from line to line. And it is wonderfully flexible. I cannot say enough about it. And speaking as someone who has tried both, the difference is remarkable. In the paper itself, they called it the total-fit algorithm. And I think the key concept in it is that it looks at the paragraph as a whole rather than line by line.
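Roughly, the quantity being summed along each path of that graph is the per-line demerits. In the common case of a finite, non-negative break penalty, the Knuth-Plass formula has the shape

    \[ d_j = (\rho + b_j)^2 + p_j^2 \]

where \rho is the line penalty, b_j the badness of line j, and p_j the penalty of the chosen breakpoint (a sketch of the simplest case; negative and forced penalties are handled slightly differently in the paper).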
When I think about first-fit and best-fit algorithms, they typically look at a line and then look at the next line, and so on forward. But this looks at the whole paragraph, and that gives it much of its power. So in AML it's basically very simple. Read the input. We translate it into lists of characters and boxes and glue. We use the algorithm to create line breaks. Where the line breaks are, we dole the material out into lines, and we look and see if any variable was changed; this is part of the forward referencing. And if one did, we go back to where we need to and go through again, just to make sure that nothing has changed. Obviously, there are some degenerate cases where a variable could change, and it would throw something off, and it would change back, and so on. We've never seen that. We've never tried to hunt for it. We mostly wanted to use this to create documents, and one of the nice things about having a group of friends do it is that if somebody found a problem and complained about it, they'd be told to go fix it. So that tends to limit people's meanness about the program. So, the whole concept of widow and orphan control with this algorithm was mentioned in the original paper. There is a looseness parameter that can control this, so you can go plus or minus one line to try to optimize, to get rid of those lines. And there's a reference to where that is mentioned. Thinking back many, many years ago, it wasn't very successful for me when I was undergoing my night of torture and almost-doomed graduation. So, the nice thing about the algorithm is that at the end you have a graph, and you go through the active nodes to find the best one. But different nodes have different lines; different breaking paths can have different line numbers. So what AML does is it stores that information. It goes through, and for each paragraph it will store not only the optimal breaking of that paragraph, but also, if it finds one that is one line less or one line more, it will store that one as well. And if an orphan line is found, it will go back and figure out which paragraph would take the penalty for being adjusted plus or minus one line to optimize the page, so that the orphan line goes away. So in doing this, we found that in pages there are natural page breaks. We unfortunately call them anchor points. It's a bad name, but we started down a bad path, and here we are. Chapter ends: typically in our documents we end chapters on a page, or begin chapters on a new page, I should say. So if there's a chapter break, that is a natural point; we're not going to optimize or shift anything prior to that. Also, if we find a paragraph that ends at the bottom of a page, we don't want to touch anything prior to that either, so we declare that to be an anchor point as well. So I will not try to adjust anything earlier, and this is just saying the obvious: if we tried to adjust anything previous, then we'd just be creating problems for ourselves rather than creating goodness. We've also found that two-line paragraphs are interesting: they can give us flexibility or they can create havoc, but they can obviously be moved up or down. So there's a little more opportunity to deal with them than there is with other things.
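For comparison, the standard TeX-level handles for these same ideas, which one can set by hand in plain TeX or LaTeX (a sketch of the built-in mechanism, not AML's):

    \widowpenalty=10000  % forbid a paragraph's last line alone at the top of a page
    \clubpenalty=10000   % forbid a paragraph's first line alone at the bottom of a page
    % and, per paragraph, request one line fewer (or more) if feasible:
    \looseness=-1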
So when we find a two-line paragraph, unlike others, we keep on going for a while, and then we figure out at the next convenient time whether or not we want to move it up or down. And we can get strings of pages that end with two-line paragraphs and delay the decision for a while. In doing this in AML (and these are oddities of our program), there are two things that work against this. For one thing, we handle tolerance a bit differently: we want the result to be as good as possible, but we want to place it if possible. So we don't want the user, as a default, to have to go in and increase the tolerance for a particular paragraph. By default we start with a low tolerance, and if no good set of breakpoints is found, we increase it by a small amount (the 0.025 there, but it's actually adjustable), and then we try again, and we ratchet it up until we find something that will work. And again, it's quite possible that there would be paragraphs that would never work; we have fixed limits, and if any one of us complains, that person's going to have to go fix it. So the problem with this in AML is that it typically gives only one possible line-break solution for probably more than half of the paragraphs. Doing it this way, it finds a breaking with the least tolerance for the paragraph, but it doesn't really find other options. So we do have ways of turning this off and starting with looser initial values, to try to find better breakings. The other issue is hyphenation. AML has a very poor hyphenation scheme at the moment. That's because one of us is convinced that there are natural ways of doing it; he believes he saw rules once for English-language hyphenation that he wants to implement, but none of us can ever find them. But the bottom line is: fewer hyphenation options, fewer breakpoints; fewer breakpoints, fewer line-breaking options. So for the text I showed, part of Pride and Prejudice, one page of it: there are 2,124 paragraphs in Pride and Prejudice, which is something you can wow the kids with. In that processing, we found that there were 125 orphan lines, 14 of which were two-line orphans, and in that entire text there were zero alternate breaks found. That is not a typical result. Typically we deal with larger paragraphs, for one thing; as it turns out, Ms. Austen wrote fairly short paragraphs most of the time. But it happens, and it is not unusual to encounter an orphan line where there have been no previous alternate break options. So we developed an alternate method for handling this. What we do is keep track of the paragraphs which require the least amount of stretching or shrinking to change, and then we go back and stretch them out, basically. Sometimes we shrink; that does not normally happen, because shrinking is a pretty difficult thing to make work. And when we find our target paragraph, or unfortunate paragraph, it gets stretched out or shrunk until we get rid of the line. And this is the example page. In this example, it was paragraph 511 that was victimized, as it were. The spaces were actually stretched out by about 2.5 percent, and that got rid of the bottom line there, as you can see. So basically the conclusion here is that we have found that orphan lines can be handled in many cases, which is a good thing if you are obsessed with them. Sometimes there are few good choices in the ways of dealing with them.
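TeX's own built-in analogue of the tolerance ratcheting described above is its multi-pass line-breaking scheme, tuned with three parameters rather than an explicit loop (a sketch of the standard knobs and their usual default values):

    \pretolerance=100      % pass 1: try to break without hyphenation
    \tolerance=200         % pass 2: allow hyphenation, accept worse lines
    \emergencystretch=1em  % pass 3: add phantom stretch if still infeasible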
So we had to take sledgehammers to things to get them to fit. And as always, looking at the text to see how it turned out is a good idea. That's all. Thank you. Correct me if I'm wrong. If I understand correctly, in your standard method you always look exactly one paragraph up, unless it's two-line. In the alternate method you go several paragraphs up. Is that correct? No. We start from the last anchor point. I'm sorry: we start from the last anchor point, which could be the start of a chapter, or it could be the last point at which a paragraph ended at the bottom of a page. And we backtrack to there. And there's a paragraph somewhere in between that is going to get affected. So it's foolproof. Okay. Thank you. Yes. I have a few questions. On one of the sheets, or whatever it's called, you mentioned that you make boxes and then you apply the algorithm. Does that mean that, because you make boxes, you basically lose hyphenation; you don't do any hyphenation at all? Well, it's a little more complicated than that. What we do is we have a list of characters, each of which is what we call a fixed box; I don't know if I have that terminology correct or not. But we have a list of characters and a list of spaces with shrinkability and stretchability associated: size, shrinkability and stretchability. And some embedded commands also, changing costs and things like that. And that is what we give to the algorithm. The algorithm goes through and adds up the character widths. I don't know if that's... So it's character by character? Yes. And then it adds up the space and the shrinkability and stretchability. And we have a little feature where we can have infinite-stretchability or infinite-shrinkability boxes, and if we wind up with infinities, we start adding factors of infinity, things like that. But that's basically how it works. The second question is, can you go back to the original page, which was at the beginning? Yes. If you talk about obsessions here, and maybe publishers being obsessed, how about the dangling "I" at the end of a line? I would say that's even more problematic than having... Yes. Well, thank you for mentioning that. I now have another item to deal with. My psychologist will be sending you a card. No, I agree. This basically took the online Gutenberg text and shoved it through. But you are right: there are many other things one could go through and optimize typographically. I certainly would not consider myself to be an expert typographer by any means or stretch of the imagination. And I think it's a good point. We somehow should figure out how to programmatically deal with that. Maybe stretch the space to the limit. Yeah. I just wanted to mention that I, and probably many others in the room, can give you a reference for those supposedly natural rules of hyphenation. But they don't work. You need to implement Frank Liang's algorithm. Yeah. I appreciate that. That's been a point of discussion, shall we say, amongst us. Yeah. Yes. One of the issues that hyphenation was developed for was to get away from what I think are called rivers of white space. And one of the things that I saw in your solution there was a lot of very large interword spaces, the spacing between the words. And so, are you addressing (apparently not) this issue and handling it? I mean, if you're going to now go and add the hyphenation algorithm in there, that's going to squeeze up the line spaces.
But the obvious solution is, well, just put one more word on the next line, and then you're going to get larger word spaces. So this looks like: damned if you do and damned if you don't. I can understand, having written a college program to do word processing and word wrap and stuff like that: you've got too many hard choices to make. So that's really commentary. Yes. I think we could assign penalties to orphan lines and figure out how bad we wanted them to be versus that. We just decided to try to eliminate them completely. I've been doing page layout since I was a teenager. And in all that time, I've had exactly one instance of a chapter coming out perfectly with no widows or orphans. It was the fastest 45 minutes of my life. In doing that, I found that the best strategy is this: you start by setting everything up as correctly as possible, including putting in lots of global replacements to fix things like that dangling "I", putting in non-breaking spaces after all the single-letter words that are solo, like Hans mentioned. And then, much as you have your algorithm start at the front and work its way back, one thing I found to look for (and I didn't see a mention of this) was also adding lines to paragraphs to fix stacks. So if you've got, say, a two-word stack at the bottom of a page, you back up a little, find a long paragraph, increase it by a line, and then you've broken the stack up by taking the second word off the other page. Have you considered looking for stacks and rivers in your lines and fixing those? We have thought about it, especially rivers. We haven't come up with anything good. So it's something we would love to figure out how to program away, but we don't know how to yet.
|
The Knuth-Plass line-breaking algorithm is one of the many exceptional features of TeX, taking a paragraph of text and converting it to a vertical list of well-proportioned lines. Through glue and penalty markers TeX gives the user almost complete control over the spacing and look of the paragraph.
|
10.5446/30877 (DOI)
|
Thank you very much. My talk will be more historical than technical. And I think some of the younger ones don't even know the times 32 years or 26 years ago, so maybe I should remind you how the situation was before TeX. So this is the PhD thesis of my colleague Urs Kirchgraber in Zurich. His thesis was written with an electrical typewriter, but of course no symbols, no formulas were available. So he had a wonderful handwriting. I could never have written that in this nice fashion. Then my PhD advisor, Peter Henrici, used an IBM golf-ball typewriter. And this is a part of the book that he wrote, Computational Complex Analysis. And as you can see, it was possible to write symbols and the Greek characters here. But of course the integrals looked ugly, because they had the same size as the other characters. And it was not possible to store text. And the typeset page looked like this; the book had to be typeset by a typesetter. The next step is the so-called Flexowriter, a solution of Heinz Rutishauser. Heinz Rutishauser wrote the first book in this series, Handbook for Automatic Computation. This series of Springer was meant to encode everything once and for all, and you would not have to touch any program anymore. Imagine that. So the first volume was the description of the language, the description of Algol 60. And Rutishauser wrote this book using a Flexowriter machine, with a tape reader and punch here. And why did he use this? Because he wanted to make sure that in the manuscript that he would give to Springer, there would be no mistakes in the programs. He would copy the programs directly to paper tape and copy them here on this thing. Anyway, the book had to be typeset. And I tell you, this Flexowriter killed Rutishauser, because I, as a student of his, had to write my thesis of course also with a Flexowriter. And how does this work? Look, this is a part of my thesis. Let's look at this fraction here. You write the thing sequentially from top to bottom. So what you write here is first h minus h. And then the second line comes, and you write this k underlined, and this k minus, right underlined. And then comes the third line; you write the minus and the big T and so on. So it's terrible work, you know, to do that right. Of course, we had our tricks: we went with the paper tape, we skipped some, and we went back and punched there and so on. It was possible. But the tedious thing was correcting. When you had to correct something here, you wanted to add a line or something, then you would make the Flexowriter retype this: the paper tape would be read, and then at the right moment you had to stop it. And you would get a heart attack, you know, because just before the character was printed, you had to stop the machine. And then you would type your new stuff in, and then you would skip the other thing and go on. So this was the advantage of this thing: the text was stored. You could fix it. So in 1977-78, I had the chance to be a postdoc at Stanford, and I wrote my habilitation. And I had another chance: Phyllis Winkler, the technical typist of Don Knuth, typed my habilitation. I had written everything by hand, of course. And if you look at this, she did a wonderful job. She was not only fast, she was also very accurate; you could rely on things being right. So she typed it using an electrical typewriter, but there was also no storage of the text. So in that year as a postdoc, I got to know Don Knuth and Jill Knuth.
And once, I remember that day, I saw for the first time a printed page with mathematics coming out of the printer. I didn't understand what was going on. I thought maybe it's a photocopy or something like that. It was really strange. So this was produced by TeX. And when I understood what it really was, I decided to learn to write using TeX. So when I went back to Switzerland, I got the first beautifully typeset document in TeX. This is a part of the PhD thesis of Nick Trefethen. This was 1979. And then in 84, I had a sabbatical. And I decided to go to Stanford, because in Switzerland you couldn't use TeX; it was just here. So I wanted to go to Stanford to learn TeX and write this book. So I went with a suitcase full of manuscript. I went to Stanford. And you see us here: here is Don and Jill, my wife Heidi, who sits also somewhere here, my two daughters, and myself. So the goal of the sabbatical was to write a textbook in TeX. But then a graduate student, Mark Kent, pointed out: Professor Gander, there is a version of LaTeX now that just came out; maybe you should use this for writing a full book. And look, here is the second preliminary edition, February 28, 1984. I have this manual at home still. And so I said, okay, you have to learn TeX or LaTeX, doesn't matter; let's learn LaTeX. So I started with LaTeX, and I wrote that book. And here are a few graduate students. You can see here Mark Kent, Chris Fraley; Nick Higham was a visiting postdoc here. Then Pat Worley, and Veronica Kent, the wife of Mark. This was a Belgian visitor; I don't know his name anymore. And Heidi and my two children and myself. We went in the Foothill Park for a walk. So the book which I wrote was a German textbook, with programs written in Pascal, and mostly focused on numerical analysis. The computer I used was a Unix VAX with a TeX and LaTeX installation. I had to learn Emacs; that's what they told me you have to learn. And I still love it. And I started with chapter four, which was entitled Polynome, because this chapter was mostly finished. Okay. But that was already quite a challenge. This is the chapter. And to typeset this as a LaTeX beginner was a nightmare. So here you see the code. And that's how it looked then, really. Sorry for the bad fonts here. Okay. Anyway, graphics: there was no \usepackage{graphics} and \includegraphics at that time. So I used basic LaTeX commands like \vector or \shortstack and so on, \put, \framebox. So this thing here is generated by these LaTeX primitive commands. Then, typesetting Pascal. Today, I think we are a bit sloppy or lazy: we use verbatim or \verbatiminput, and the program as written would just appear like that in the text. But at that time, I didn't want that. I wanted the begin and end to be boldface and nice, you know. So I wrote my programs like that. I had made a little Pascal program that would read a Pascal program, indent it, and set the reserved words in capitals. So this was my input. And so I had to ask how to indent, and Leslie Lamport recommended the tabbing environment. I corresponded with him by email. And I didn't want to retype the programs. So I used Emacs to put some additional commands into them without changing them heavily. So you see, this is the program from before. I put some backslashes in, and I put a backslash in front of each reserved word, which I had all redefined. I replaced begin by \begin and so on.
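The scheme described here might have looked roughly like the following (a sketch with illustrative macro names, since \begin itself is reserved in LaTeX; the actual macros used for the book are not shown in the talk):

    \newcommand{\BEGIN}{\textbf{begin}}
    \newcommand{\END}{\textbf{end}}
    \begin{tabbing}
    \hspace*{2em}\=\kill
    \BEGIN\\
    \>$s := s + a[i]$;\\
    \END
    \end{tabbing}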
And then from this thing here, we got this. So it looked quite nice, you see. So this is the printing room in Stanford — here's a photo from that time, 1984. And this is the laser printer, the Dover printer, one of the first laser printers. Now if you stand here and look at the Dover printer, you see this. And notice the sign here: it says, please do not put paper in upside down. In fall we had our Swiss National Day party, 1984. We had a big party, and Åke Björck was visiting from Sweden; this is a colleague of mine from Zurich, Don Knuth, and my daughter Beatrice, who serves here in a Swiss national dress. And in fall 1984 the book was finished — the manuscript was finished. There was a party at the Wiederholds': Voy Wiederhold invited everybody to celebrate, and Don Knuth said, now it's proven that LaTeX is useful. And I submitted the book to several publishers; all accepted it — they were thrilled by the nice-looking output. And then I decided to go with Birkhäuser. I had, of course, to do proofreading; I found a comma missing here, some mistake there, and so on. I did that. But then, in Switzerland, there was no way to correct this thing — I had no TeX installation. So what do you do? At Christmas I bought a plane ticket, $2,200, and I came back to Stanford. I lived with the Kents in the Quillen high-rise for a week. I corrected everything. And then I wanted to produce the final manuscript to give to the publisher. Don Knuth had not only the Dover printer; in the basement there was something we had no access to: an Alphatype phototypesetter. It must have been a huge machine. So I said, okay, I will pay, but we print it on this Alphatype. And of course, it was down that week; I could not use it. So what did we decide? We said, okay, we will do it when it is up again in 85, and I asked Mark Kent to print it and send me the rolls by airmail — it was on rolls. And a few weeks later I really got the book. But when I looked through it — my God, the page breaks were completely different from what I expected; it was not the same as I had in Stanford in January. Well, most of it was okay, it didn't matter so much, but one page was really terrible: half the page blank, and the table somewhere else. So I took scissors and glue and made copy-paste — real copy-paste — for those few pages, and sent that in. And then the book appeared. So that was this book here: first edition, 85, from Birkhäuser, and there was a second edition in 92. Okay. So in that book I had lots of exercises which the students had to solve, mostly programming exercises. So for the teacher you need a book with the solutions; otherwise you cannot give that book away. So I had to write a second book, the solutions to the exercises, with Turbo Pascal programs. Okay. I bought a PC, an Olivetti M24, for $6,000, with a 10 megabyte hard disk — I mean, not two floppy disks like they used before: a 10 megabyte hard disk. TeX used up 5 megabytes, so I could not install LaTeX as well; it would have filled my machine. Okay, so what to do? The solutions book had to be written in plain TeX. So I had to learn that. I wrote a few macros so that things looked nearly the same as in LaTeX. It was not too difficult, because it was really always the same structure: you have the exercise and then the solution, next exercise, solution — quite a simple structure.
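For illustration, here is a rough plain TeX sketch of the kind of macros this takes. These definitions are my own, assuming a simple alternating exercise/solution structure; \exercise and \solution are illustrative names, not the macros from the solutions book:

% Plain TeX sketch of a LaTeX-like exercise/solution structure;
% \exercise and \solution are illustrative names, not the originals.
\newcount\exno
\def\exercise{\advance\exno by 1
  \bigskip\noindent{\bf Exercise \the\exno.}\par\nobreak\smallskip}
\def\solution{\medskip\noindent{\bf Solution.}\par\nobreak\smallskip}

\exercise
Write a Pascal program that evaluates a polynomial by Horner's rule.

\solution
Use $p(x)=(\cdots(a_nx+a_{n-1})x+\cdots)x+a_0$ and accumulate from the
highest coefficient down.

\bye

Because the book is nothing but this repeated pattern, a handful of such macros is enough to reproduce the LaTeX look without installing LaTeX itself.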
And then I had a dot matrix printer to print that stuff out. It took forever, and it was very ugly to look at. And there was no previewer, as I recollect — you had to print things out to look at them. So when the book finally was finished, I had the problem of printing it. Take another ticket to Stanford and print it on the Alphatype? No. So I asked around, and I found Jan Olof Stenflo from ETH, an astronomer — he's a Swede. And he had a LaTeX installation at ETH in 1986; I think he was the first at ETH who had such an installation, and he had a laser printer. And so I could print it there. That was this thing there. So, epilogue. I will not use the full 35 minutes, I'm afraid. The book is no longer in print; the copyrights have returned to me. And I wanted to put it in the public domain, so I offered it to Google — my God, this is a long process; I haven't heard from them for half a year at least. So I decided to put it on a website myself, and it can be downloaded from this thing here. And the astonishing thing was: I compiled the book with pdfLaTeX — of course, I had not many figures and complicated things in it — but it compiled without major changes after 26 years. Just amazing. No other typesetting system is so stable over such a long time. So that's why I should just say: thank you, Don and Leslie, for what you have done. That's it. — Thank you, Walter. Are there any questions? Yes? — I was just wondering, on your Olivetti, did you use Emacs? — I'm still using Emacs, with AUCTeX, and I like it very much. — Okay. Very simple question: when you did it for the third time, now, did the page breaks change again? — No, it did not change again. It was the same. — 1986? — Yes, yes. — Okay. I find that really interesting, because in the very same year, 1984, I went on sabbatical to what was then Royal Holloway College in London, and actually imported a tape from Maria Code — I don't know if there really was a Maria Code or whether that was really clever. Oh, Barbara says there really was a Maria Code. What an eponymous name. And I put it on a VAX, just like you did, and used the VAX editor, which was really interesting. It was done in plain TeX. And I had to write my own driver for the printer. They had a dot matrix printer which printed a glorious 200 dots per inch. So I had to write a driver that would just put the dots on the page in the right place — tedious but not difficult, just long. And I do remember that when that thing was printing, I could hear what part of the page was going by, because when you had a lot of white space, you know, every time the thing started going you'd hear this little — you know — so when you got to dense text it would keep going, and when you got to empty spaces it would suddenly go quiet. But what I really remember is that the printing room was somewhere else, you know, not where ordinary mortals could go. And I knew I had it right the first time, because everybody stopped, and there was this crowd of people around the printer watching things come out. And things have never been the same for them since. This was in plain TeX, and I think it would compile like yours — I think it would compile to the same thing again. However, I'm not going to do it, because one of the things I realize is that I've learned a lot about typesetting since then, and I'd be way too embarrassed to let you see what I did. It was really bad. — I had a similar painful experience around 1983, when I started with TeX.
At Imperial College I was user number one, supported by Malcolm Clark, and so we were trying to work things out together. Of course, this was before any screen previewer, any laser printer. We had an Autologic typesetter, which was running off-site: we would have to send the job as a batch, and it would come back as a big roll. So I tried very hard to paginate it, but it was just too much work. So in the end I just printed a long galley and literally cut and pasted the whole thesis together. But it still looked good. — Well, what a glorious era we live in. No doubt about it. Thank you. Anything else? Okay, let's thank our speaker.
|
In 1984 I wanted to write a German textbook called Computermathematik using the typesetting system TeX developed by Don Knuth, which I had always admired and which I had been aware of since my first sabbatical year in Stanford in 1977. Mark Kent, a graduate student at Stanford in 1984, pointed out to me that Leslie Lamport had just finished a new typesetting system called LaTeX which I might want to use instead. I did, and in Fall 1984 I had finished the (at least I think) first book written in LaTeX. In this historical talk I will present some reminiscences of how the book was produced.
|
10.5446/20587 (DOI)
|
Hello — my name is Christian Zöllner, I'll be doing all of this in German, and I'm the first speaker here today. I'm here on behalf of The Constitute; The Constitute is me and my colleague Sebastian Piatza. We've run a design and research studio in Berlin-Kreuzberg since 2012 and mainly do independent art developments — that is, we produce art on our own initiative, at the interface between technology, design and research. We do design commissions, where people invite us, so to speak, to make things for them. We do a lot of workshops and academic teaching at universities, as lecturers or guest professors. We used to publish a lot in the field of human-computer interaction, and we give talks and lectures like this one right now. And the topic we have been dealing with a lot lately is narrative strategies in technology communication and above all in design; all of that is commonly subsumed under design fiction, and we have a lot of fun with it. On the subject of virtual reality — briefly, on our history with it: together with the Fraunhofer Institute we once developed a drawing application for virtual space, and in 2013 we did a project called the EYESECT. The EYESECT is a helmet with a virtual reality headset inside — the first SDK from Oculus that came out. What the device does: it's a mobile experience helmet, so to speak, and it's actually quite simple. You hold two cameras in your hands, and these cameras stream directly to the eyes in the headset. That means: as long as you stand like this, life is still relatively normal, but when you suddenly go up high like this, you swap your point of view — up to that height — or you can go down to child height. That's still the very simplest method; it gets difficult when you look out at yourself like this, or begin to look at yourself. And this was really an experiment with this first virtual reality headset that came onto the consumer market, and it's simply meant to show how we work. If you see virtual reality as a skill, an ability you work in, then we are less like Dire Straits or Eric Clapton, who sit in the basement forever and practice until the guitar is in tune, and more like the Ramones: we do VR punk, so to speak, and say, okay, we'll try it out — little code, a cool experience, and out. But what we paid attention to is that you don't see the virtual reality goggles. So where is the Oculus, the super cool tech thing that came out in 2013 and that everyone was so hot for? We hid it, somehow. And that is what this talk is actually meant to be about: why did we hide these virtual reality goggles? Why are they not visible? Or, the other way around: why are sunglasses cool and virtual reality goggles not? And can one actually learn from the development of sunglasses, and find a transfer to the use of virtual reality goggles? And here is a first picture that I found in my research on the topic: an early pair of dust goggles that were used at the beginning of the 20th century. These dust goggles originally came from the big industrial works, from welders and from people who worked in dangerous environments, to protect their eyesight.
And when I saw that picture, I thought: wow, that looks exactly like looking into a virtual reality headset from a distance. And interestingly, these goggles then shifted from practical use in heavy industry to practical use in early motoring. You have to picture early drivers chugging through the countryside in big vehicles; there were no asphalt roads, so they were constantly exposed to the adversities of dust and dirt and water. And these were open automobiles, so they had to wear these things. Interestingly, they wore them, and all the people who saw it — because these automobiles were something completely foreign — thought: oh, there goes someone in such a contraption, somehow totally daring, because everything shakes and everything steams and rattles. It was something completely foreign and at the same time something completely attractive, because it stood out from the norm. And the goggles that were used back then had this protective character on the one hand, but on the other hand they were already fitted with colored lenses — above all because a bit of UV protection was already built in, and because certain anti-glare effects simply worked. It went on: motorization on land progressed, of course, but then it continued into aviation, and in the 1920s and 30s flying, too, was elevated into a kind of romanticized cult. This is a picture from 1933, and what I find totally exciting here, especially with regard to fashion, is that the person is not wearing a flight suit and, so to speak, representing this wildness at the airfield, but actually has a business suit on. And this daredevil quality — flying, this masculine and infinitely cool thing — is already represented here. And this being up in the clouds, this being free, has a lot to do with the longings that played a role back then, at the end of the Weimar Republic and in the transition to the Third Reich. And that is still what advertising represents, or always tries to hark back to. This, for example, is Thea Rasche, and Thea Rasche was The Flying Fräulein; The Flying Fräulein was a stunt pilot and long-distance flyer in the 1920s and 30s. And what she did — going to air shows was hugely popular back then — was use these photos of herself fully for her own promotion: in a flying coat, with the goggles mostly on her forehead, rarely directly over her face. And so she feminized this daredevilry and at the same time used the pose with the device that stands for this daredevilry. It went on from there — I'm running through the sunglasses a bit quickly here. These are the first motorcycle goggles that came out of America. Here you can already see that it moves away from this whole strap-and-cap design toward a more filigree setup — the gauze protection is still attached there. And here the transfer is already there: from motorization as something foreign and new to a sporty outdoor activity. And sun protection then appeared in tennis, in bathing — it changed completely, along with a change in the cult of the body.
So one preferred to be tanned again rather than nobly pale. Which led to tinted sunglasses — tinted glasses — no longer being a sign of a disability, that is, of being impaired and needing a quasi-correction for one's sight; instead, they stood for daring, agile and modern behavior. Which in turn goes one step further with Marlon Brando in Der Wilde, or The Wild One, 1953. And he wears here — everyone knows it — the Aviator by Ray-Ban, and it is actually also a totally practical pair of glasses, because it was developed for the pilots in the Second World War: they no longer flew without a cockpit, like the first pilots such as Thea Rasche, but already had cockpits, so they basically no longer needed the wind protection, but the glare protection. And this teardrop shape allows looking down at the instrument panels and non-glare in different directions. At the same time, with this teardrop shape, it totally evokes the skull, and so it creates a certain disrepute and at the same time looks totally shady in that direction. The GIs, in turn, brought them from America to Europe — and they were the winners. Somewhere in my research I also realized for myself: this is simply the winners' eyewear. Whoever has such glasses on is the winner. And on the other hand — here, with the GIs who came back to America, who stayed in America — it passed over into the whole rocker culture, and at the same time into outsiders with a tendency toward the outlaw. And that is still the look that sunglasses embody, and it is a sign system that still works. Which then, in the 80s, with the Wayfarer by Ray-Ban, went into a super boom — where I also thought: yes, of course, the Wayfarer, that is THE pair of glasses. But Ray-Ban paid a lot of money in the early 80s to various set design and costume design studios so that they would use the glasses, so that they would appear in videos or in feature films. So it is a completely classic product placement, and above all it underscores, in the narrative, this outsider chic: there are the outsiders, and they wear the glasses, and they distance themselves from society. And it stays cool, and the variety of forms that has developed from these glasses is actually always a functional stereotype that can nevertheless turn completely in style, and is really already iconographic. If you look down here, these things are known to us not merely from real people, but also from people we know from a narrative, from a certain diegesis. And here, too, it is transformed again and again, reinterpreted and rearranged, and it is simply the individualized and individual accessory. And that is what we have actually been dealing with into the 90s — or actually still today — in the fashion or accessory field. Where we stand now, though, is here. Or maybe here, or like this, or like this — but actually mostly like this. And here is where the problem comes in, as I see it: it is always this clunky thing that you have on your head. And I believe that has a lot to do with the fact that virtual reality and its technology are a relatively young sign system.
We simply have nothing we can refer back to, similar to the way it just worked with the sunglasses. Because it is simply always a closed headset. In visual culture it appears very rarely; maybe at the end of the 80s, in the 90s, it slowly starts to show up. But the look that it embodies is really always slightly idiotic. You usually stand there immobile — because these are also always these 360-degree experiences — you usually look up a little, so your chin drops, and everyone stands around and gawks at you. And you don't see how you are being gawked at the whole time. And I consider that a big problem with virtual reality goggles. And the other thing, in turn — and here I want to defend the people who design these headsets — is that this is a completely different setting from sunglasses, because this is the setting in which virtual reality goggles are used. They are not sported out in the street; you don't walk down the boulevard representing your new outfit and completing it with chic sunglasses. It is a single-user experience, and this single-user experience is a private matter, and above all the real setting is totally private, whereas, through the networking of computer technology, the social setting is totally virtual. So you can be out and about in virtual reality with an awful lot of friends, in the form of an avatar, while in real life you can still be sitting at home in your sweatpants. The look is virtualized and out-of-body, and the virtual, social setting is inside the experience. And here I can also understand the designers: they want a long experience, they want it to sit softly, they want you to see well, they want it to be wipeable, and they want it simply to be a stable thing. But it is always product design — always designed according to functionality and pragmatics. And if you now look at where the first experiments are going in the direction of fashion, here, for example, is a Tommy Hilfiger stand, I believe from last year — or maybe it was just a promo studio, I couldn't really find out. And here I noticed, or noticed again, that fashion actually works differently from product design. Fashion is about the production of signs, about reinterpretation, about rearrangement, and less about pragmatics and functionality in the design. So a jacket should above all be a statement; whether it keeps you warm or not is at first a second or third question. And fashion externalizes representation, whereas virtual reality goggles internalize it. Everything is okay in this picture — only that these goggles look the way they look makes them a total foreign body. And in addition, I still see this perspective: in fashion it is really about seeing and being seen, but here the person sees, and is seen, as separate processes taking place. And I wondered, okay, how could one solve that? Maybe by first stepping away from virtual reality a bit, toward mixed reality and augmented reality. The picture is the start screen of the Magic Leap website.
Magic Leap is currently one of the most incredibly well-funded mixed reality startups that exist in America; they are based in Florida and just had a huge feature in Wired. And I think this could be an alternative — what they have in mind — except that you don't know at all what they have in mind; they have communicated it all rather shadily. In any case, the interesting thing is that it works via a different technology than, so to speak, a board strapped to your head: namely, they use these dynamic digitized lightfield signals. You put on a pair of glasses, like the ones I and we are wearing, and the light impulse is sent in from the side, and via nanoparticles the light beam is supposed to be refracted into my retina. Meaning: I could see all of you here, but I get all the pop-up information mirrored right in. And that is a totally exciting point, and here, via the iconography of glasses, one could actually come back and carry the technology into society. But that has already failed once, because this is Google Glass, which came out a few years ago — and it already tried to bring an augmented mixed reality into the consumer market, or into the early adopter market. But the general hype consensus was that everyone thought it was crap, and I believe that was due to a few points. On the one hand, as an experience of one's own it was really bad — it simply felt stupid — and at the same time you were highly suspect to other people. If you had one of these on, everyone thought: hey, are you looking at me right now, are you looking into that little chip, are you filming me, what are you perceiving, what are you not perceiving? So this seeing-and-being-seen had again tipped somewhat out of balance. But on the other front — and this is important once more from the fashion and product design point of view — the thing simply looks awful. I mean, who wants to look like that? If you think back to Marlon Brando and to those daredevil flyers, they all embodied something else; this, though, is the positivistic call center agent who sells you the super cool service. That is the look — but nobody wants to look like that, nobody feels like it, and that is why I see here a lack of identity transfer and simply no truthfulness in the design. One could of course consider — this is, as I said, the question of the sign system: what should we refer back to when we do this? We can of course be retro-futuristic and copy the Wayfarer and the Aviator and every possible eyewear shape all the time, but that doesn't really bring any joy either, because you are then hanging in a kind of reinterpretation vortex the whole time. Which can be okay, but actually what is needed here are diversity impulses. And I thought, okay, maybe we simply have to look at where virtual reality and augmented reality have already been presented to us, in user scenarios. And it occurred to me that something like this already exists in narrative: we know it from Star Trek, from Star Wars, from The Matrix, from a great many science fiction films. There, what is now today was, so to speak, already sold to us yesterday as the future — so actually we ought to have it
already, by now. But the question is always — and this is where the narrative comes back in, and with it the sign system — what is actually being sold to us? So, for example, in this case: Geordi La Forge from Star Trek: The Next Generation — that is really a VISOR; it's from the 80s, and up into the 2000s the device changed slightly, but it really always stays the same. Interestingly, though, Geordi La Forge is super uncool, because in Star Trek there is really no suffering in society — there is no misery one could draw on, so to speak, to adapt it and produce coolness from it. Everyone wears these pajama-like uniforms, is always up to date, never tired, never in a bad mood, always at their station — and it simply remains a totally socially positivistic utopia. And that is actually still close to Google Glass. And I think — or I would like to suggest — that one look again at where in science fiction there is something like recklessness and questionability and risk and a slight outlaw feeling. And there I see — staying with Star Trek — that it is actually always on the other side: there are always the bad guys, so to speak, and they are always much more interestingly designed, much more interestingly equipped technologically, and always a bit bolder and rougher. That is one thing. And the other — moving back toward the closed headset, and this was one of the moods we worked with on the EYESECT project — was the thought: okay, wow, this is actually what we want, because this is also what you are actually dealing with. This is the facehugger from Alien — 1977, 1978, 1979 — which quasi lives on your face and simply stays on it. Because this is a thing — it is also a term in sociology — this alienation, this estrangement you are dealing with when you put yourself, so to speak, into a certain situation. And here, with virtual reality, it is simply the case that you are immediately the stranger — and that is exactly what design has to play with. And as for who plays with it — perhaps now as a suggestion in that direction, and as an example I find very successful: the motion design studio FIELD, from England, from London. Last year they did a big exhibition in which they dealt with virtual reality and, precisely, the externalized representation of virtual content. In the project, you look into a far-out, disembodied virtual world — it is more an aesthetic experience than a narration being carried out. But the devices through which you perceive it are such far-out things: there is an Oculus system, a virtual reality system, inside, but there remains a link between the outside and the inside. And it is an artistic work; they themselves also describe these as interactive sculptures. And I believe that is where it gets exciting, because if you look around — also outside, at the big stages, there are these virtual reality headsets hanging there, and they are all somehow masked as well — that is where it gets exciting, that is where it is also fun to use them, because the play with identities, the way it
works in fashion, can function there again — and at the same time, and I credit FIELD highly for this, they do not take a product-like approach to their project. It is like fashion photography: with the whole look, the whole silhouette, all the fabrics, the pose — it all works on a non-product level. And there, I think, it can work, if there is simply more courage to say: okay, let's just completely dismantle these goggles and try to find new silhouettes, new symmetries, and also dare a new sign system. And as one last example: Marshmallow Laser Feast is also a design collective from England, and at the AND Festival last year, or the year before, they did this project called In the Eyes of the Animal. They made different virtual scenarios of how one can perceive nature in the forest from different perspectives — here, from different animal perspectives. And in order to overcome this technical, technological setting, they also designed these helmets, where at the front they simply attached another piece of forest. And there is simply a clear effort in designing the device and the connection to what actually happens in the interactive experience. And that is why I say: it has to go beyond the practical somehow, because this product-ness simply will not work in fashion like that. And it simply has to go in a multi-use direction — here several people are somehow using it together — whereas if you look into such virtual worlds, it is actually always about one person standing there and looking like this — again, the chin. One simply has to take virtual reality seriously on both sides: on the one hand in virtuality and on the other in reality. And if you somehow get both of those together, then in fashion design, I believe, you can really open up super cool new tendencies and above all set accents. I'll stop there, and thank you very much — and if there is anything, I think there is still time for questions. Thank you. Is there a question? Yes, there was a question back there — just raise your hand. Thank you. — Well, it's actually not a question but more an idea I just had, or a remark: with that one design where there is stylized hair, a new functionality actually comes in, because that could prevent — or might prevent — you from bumping into things somewhere. And I find it actually quite beautiful that through this primary non-functionality a space of possibilities then opens up to become functional again. — Great idea. Are there any further questions? Thank you, Christian.
|
Christian Zöllner, time machine operator at the Berlin art and design studio The Constitute, crawls in a stream of consciousness from sunglasses to the Samsung Gear VR... Always accompanied by the question: VR as a fashion accessory — can that work?
|
10.5446/20550 (DOI)
|
Katharina is a co-founder of the Migration Hub Network and is currently head of marketing at Kiron — welcome, and here's your stage. Thank you very much. As you already mentioned, I have two more people with me on stage, whom I will introduce in a minute: my friends Rafi and Kashif. Because I'm going to talk about the rebalancing of offline and online integration, I'm not going to talk about what integration means, or whether it's a good term to use or not; rather, I would like them to share their stories — what online and offline tools they were using, and where they see we have to proceed. For me, when I got into the whole topic — and you saw Paula on stage before — last summer I was actually part of the crew of the first Startupboat, and the reason I went there is that when Paula called me up, she said: hey, we're going to build digital tools for refugees. And I, coming from a startup background, thought: okay, of course, digital tools are the solution, because they're super scalable, and therefore we only have to implement them once and then they're going to solve a lot. So when we went to Samos, we wanted not only to build digital tools but also to learn from the people who are there, which is why we of course applied methodologies like design thinking, which involved being out there helping people and learning about their needs. And what we learned back then is that, apart from all the other things they needed, one thing they were lacking is information. So for us the consequence was that we built an information platform. It's still out there — it's not really up to date anymore, but I mean, that's part of the issue if you're dealing with information. So for me, the key insight I found there is: you have to talk to people to figure out what their needs are, and then build solutions which they really need — not something that just came up in our minds while thinking about what somebody else might need. What I learned when we were coming back — and that was last September, so I guess we can say there was a huge enthusiasm back then, a huge hype, a lot of people getting involved for various reasons — which was of course very good, that so many people got involved, but it also meant that a lot of things were being built, not only online but also offline; a lot of apps were being built. And what we realized when we came back is that a lot of people are working on a lot of things, and a lot of people are working on similar things — not only all over Europe, but also all over Germany, all over Berlin — and quite often people don't really know what everyone else is doing. What we also saw is that a lot of these tools are being built without ever involving those whom they are built for — in this case newcomers, or refugees. And what I also realized is that there are different kinds of tools being built, different needs that have to be fulfilled. Some of them are very short-term: when we were on Samos, of course, it was crisis response, meaning that the needs of the people were similar — they had just arrived in Europe, and they were in need of certain things like food and water, information, access to electricity, access to clothes, access to the internet. But that is only a very short-term solution that you can build then, and it's answering a need that the masses have.
So there's one solution for all; but if you move along the different steps, you realize at some point that the needs in the mid and long term get more diverse, which means there is not one solution for everyone. And a lot of the online or digital tools that were built last year were really addressing these crisis response topics, and really addressing a one-size-fits-all solution. And it's not only about where people are coming from — you can't lump everyone together; not everyone from Syria is the same, not everyone has the same needs. Fatuma was talking before about what women need; it's different from what men need, it's different from what kids need, and it's also very personal, depending on what kind of personality you have. And I will share a story about that in a minute. So I still see a lot of things being started which do not involve the users. And I find it a bit funny that these days, when everyone is talking about user-centered design — and I believe Rocket is not investing in any company that hasn't done some user research — there are still, especially from big corporates but also from a lot of our ministries, a lot of things being done for refugees that never involved refugees along the way. The other thing one can see, which I find funny, is that a lot of people build online tools for refugees, but on the other hand, at some very specific points there are no tools at all. For example, if you go to the LaGeSo, the process there is still not very digital — which means somebody could focus on that. I know it's very hard to do, but just think about this: a solution for refugees does not always mean that you build an app for a refugee; it can also mean that you're building an app for an organization or for the volunteers. One example that I wanted to give, to make it a bit more concrete, concerns legal matters. All of our ministries have information on their websites, quite often also addressing refugees — which means it's in Arabic, it's in Pashto, whatever — but it's even hard for us Germans to understand what's going on there. Which also makes it look like legal information is once again one-size-fits-all, and maybe Rafi and Kashif can share in a minute that the issues are very individual, which means there is not one legal status for everyone; it's really based on the individual story. I have a friend — I actually met his brother on Samos when we were there last summer — he's a former lawyer, or rather he studied law in Syria, and he kind of knew what that language is like, so he really looked into things. But everything he does is: he looks things up and then calls me or somebody else to check whether he really understood correctly. So at some point I asked him whether it makes sense for him to actually read all that information online, and he said: basically not, because I'd rather talk to you about it, or talk to another friend who explains to me what's going on, because I don't really trust the information I get there — I'm never sure if I understood correctly. So there are people, for example from the Refugee Law Clinic — law students from all over Germany who come together and specialize in immigration law — who consult refugees who come here, for free, meaning they are doing that one-on-one. Sometimes they're doing meetings on a specific topic, so people can come and say: okay, I'm at the beginning of applying for asylum, so what is it, what do I need to do?
But they're doing it one-on-one; they're not trying to put a manual together and hand it out, saying: hey, this is all the information you need. What they digitize is actually a manual for the people working for them — basically for the German lawyers — so they know what they need to tell people. So this is one example where you can see that one might think the legal issue is very scalable because it's all the same, but it's very individual, and in this case it makes much more sense to condense the information for those who are helping rather than for those it's actually for. Paula mentioned before — and it was also said in my introduction — that we founded Migration Hub together as a place where people and initiatives and also refugees can come together and cooperate and create synergies. This place still exists, and for more than half a year now we've been bringing people together on a regular basis, because we realized: people can connect online and join Facebook groups as much as they want, but basically they have to come together in person to join forces. And to mention maybe a few of these initiatives that are combining the offline and online components — and I saw that somebody from Wefugees is sitting in the back — Wefugees, for example, is an online tool, a Q&A platform where refugees can ask questions. They collect questions, and for the very general questions you can follow a thread, so maybe you look in there, look them up, and you see: oh, my question has already been asked and it has been answered, maybe I find my answer there — or you just ask that question again, and it's actually people answering. Which means it's an online tool, but it's very direct communication. Another very much online tool, which was not aiming at refugees but actually at the volunteers, is the Volunteer Planner — some people might know about it — which was built so that all the organizations can organize their volunteers, and it has been super successful. In this case, an online tool was the perfect answer; it's helping refugees, but it's not for refugees. Some other very offline solutions which are still scaling are the initiatives called Start with a Friend or Über den Tellerrand kochen, which are both aiming at bringing people together. The way they are scaling is not by building something online; they created blueprints out of the work that they are doing here in Berlin and are scaling it all over Germany and all over Europe at the moment — which means they wrote a manual and they are training people, and so the idea spreads without it having to be online. I told you that I wanted to introduce my guests here, and I would love for Rafi to come up. Rafi and I met a while ago — last October, I believe; he'll share in a minute how we met. Can you come up? Rafi is from Gaza, so from Palestine, and maybe you can actually share the story of how we met, because I think it explains very well how you were actually using online tools when you came here. Is it working? Yeah. Hello again. My name is Rafi, I'm from Gaza. I came here about a year and a half ago. Before I came to Europe, it took me just three days to decide to be in Europe. So, first day: okay, I'm going to be a refugee in Europe. All right — so I searched everywhere, Facebook, websites, asking people how to go there, how to be in Europe. The second day: okay, which city do I want?
There's Greece, Sweden, many countries — and you can't travel everywhere. For me it was very exciting, because from the Middle East, from Gaza, it's really hard to travel anywhere. So it took me one day to choose, and then I chose Germany, and Berlin. The third day, I started to check where and how, what I can do there. So I found this application called Vamos, and when I came to Germany I started using it — it's an application showing events from Facebook, ordered by time, by place. So for me, every day when I was in Berlin, every morning I checked and did my research, then I started going out. I bought a bicycle, so I was riding everywhere from eight until twelve, then I came back home. Like that for three months. So I was moving a lot, and I met a lot of people. Then I met a friend just through Facebook — someone who wanted help with her boat, someone to paint it — and okay, I wanted to do it. Because, for me, I had spent a lot of time not being allowed to work, not allowed to travel. So I felt: I don't need a permission to work. I don't need to have papers first — it's not about: okay, I'm an engineer, so I must work as an engineer. No, that's hard. I just wanted to do something. Then I met my friend Kristin, we became friends, and she invited me to an event. Katharina was there. Kristin talked to Katharina: Rafi, he wants a job, please give him a job. She said: yes, I have one. So then we started to work together, and we did a lot of projects. I'm doing many things now. So I learned to just start with a small application. — But the thing is, as far as I remember, the application you were using is not for refugees, right? It's a tourist application. — It's a tourist application. — So why did you use that one and not a refugee app or whatever? — Believe it or not, until now I don't know any application for refugees. And, I mean — why? I don't know. Like, okay guys, if you had an application just for white people, would you use it? So yeah, why should I have an application for refugees? I feel like I'm normal. I can read English. I can search. Why should I need something special for me? I don't want to feel like I'm special or different. I also have the knowledge to be normal. — Awesome. Thanks, Rafi. So, to maybe summarize a little bit what Rafi just said, and what I also wanted to point out: what I learned over the last year is basically that the app is not the answer. Which doesn't mean that digital tools are not part of the whole integration process, or of tackling the refugee crisis — but they're definitely just one part of the solution and can never be the whole one. And I also think there needs to be a stronger focus on building digital tools for those who support, instead of for those who are actually seeking information. I was very glad, for example, when the guy from the Refugee Law Clinic at some point shared his document with me, which had all these immigration law issues in it, because then I could tell my friends: hey, this is where you have to go — and actually connect people. Another example that I want to give — and I mean, I was introduced as head of marketing at Kiron, so just briefly, I don't want this to be all about Kiron, but as an example: what Kiron does is enable refugees to get access to higher education. And the way we do that is by having them study online, wherever they are, for two years.
We do that because we have partnerships with MOOC providers — massive open online courses. We gather all that content and have four study tracks at the moment: Kashif is actually studying engineering with us, and we have business administration, computer science and social sciences. And after those one to two years that they study with us online, we transfer them to a partner university, so they get an accredited degree from a partner university. One other thing that we do is something called direct academics. One of our teachers actually shared a very lovely post on Facebook this morning, and that's why I included it. Direct academics works in this way: he's a teacher — I think he's a professor in Aachen — and, I think, 10 to 15 people signed up to his course. I don't know if you studied with Burkard? I don't know — he's doing some engineering stuff, but maybe at some point, yeah, you'll get to know him. So, for example, he was teaching a lesson last night. People join in via Skype, and he shared this morning that he had people in there from Syria, Eritrea, Afghanistan, Thailand and Bangladesh, based in Italy, Sweden, Germany, the Netherlands and Austria. So in this way, this is a beautiful online tool, but on the other hand it enables very direct interaction. Because I can talk a lot about how Kiron works, but to give you a better insight into how it works, I would actually ask Kashif to share some of his stories. And the first one would be: how did you hear about Kiron, and when did you get involved? — Hello, everyone. Yeah, when I was coming from Pakistan, when I started my journey, I met a Canadian girl who was a social worker in a camp in Greece. We discussed my future planning — my future and my destiny and what I want to do. So when I arrived, she messaged me that there is such a platform which can help you, and you can get access to higher education. And then I found that it is Kiron, and it was like a new life and a new identity for me: to be a student, not to be only a refugee. And I really dislike the word refugee — but in fact, every child is a refugee, because when we are born, we come into the new world with no status, and then our parents give us a status. — So, you've been with Kiron for half a year now? Maybe you can give people an insight into what your life as a student looks like. — Oh, yeah, sure. When we came, we were just lost. So when we got the platform, I asked myself: you have only two options, do or die. I preferred to do. So now I'm doing. And I chose engineering, because engineers are the architects of the world — they decorate and model the world. Well, the world started with the wheel and has now become a global village. And I would say something about digital integration: Kiron is a transition between digital and traditional, because it digitalizes. As we know, in the era of modernism, robotics, 3D tech and artificial intelligence, ignorance is not an obligation — it's an option. So for you we are all refugees; when we came here, we had nothing, but we are thankful to you people. You gave us love, respect and a meaningful life. So now we are one nation — nicht nur Belastung für die Gemeinschaft, sondern beitragen und so weiter: not just a burden on the community, but contributing. So we are also part of you. We also want to step forward and participate and contribute to the host community. Thank you so much.
So, to summarize what this whole talk is about and what Kashif and Rafi were sharing: basically, the idea is that we are somehow building bridges. Kiron, in this case, is building a bridge between being a refugee without the ability to access university and the perspective change that Kashif was talking about — because we want our students to be students; they are not refugee students, they are students. And we are not only doing that online, but also offline, and this whole mix, which I was also emphasizing with the different examples, is what seems to be the right solution. I wanted to share data about how many online solutions are out there and how many offline solutions are out there, but unfortunately there is not really data available to see how many projects have been started. I can tell from the experience of being involved for almost a year now that I have seen an enormous number of them, all over Europe, all over the world. I'm happy to also see a lot of friends here that I met all over Europe — friends from Italy are sitting here; we met last year, for example, in Rome, where we were working on things. There were people from South America; some people from Tel Aviv are joining in right now, with whom we are also working between Berlin and Tel Aviv. So it's obviously a global issue that can be solved globally, but quite often taking into account the needs of the very specific location and the needs of the individuals. Some very small bits of data that I can share: a friend of mine did research for a political foundation; she had a look at 40 different apps for refugees, tested them and how much they were being used, did interviews, but also looked at download rates and all that kind of stuff — and the results are basically what Rafi was already saying: very few of them are actually being used. Another number that I can give: there are people from an initiative called Social Collective — they are looking more at bringing everyone together. They also did some research, and they figured out that of all the initiatives they interviewed — in this case I'm talking about initiatives for refugees — 67% don't have refugees in their core team. They might involve them sometimes, and they would love to involve more people, but they don't really know how. So this is the other issue we are tackling. So I guess, for the final part, I would just love the two of you to come up here. Again, what I would like to ask you is: what do you think is missing, and what would you like to pass on to those who are trying to build tools or trying to support refugees — though I know the term is not very much liked? — I think human life is always problematic, but the main thing is your attitude, how you deal with it, because success is not a scale, it's an attitude. So we have a lot of opportunities, we have a lot of things, a lot of tools — we just have to utilize them, we just have to take advantage of them. And the main thing we need is mutual understanding, and the main thing is harmony, because we have to work together, and we can make Germany proud — and let's make our generations proud, I would say. And the main thing is to work as a team — then man can do anything. And the word impossible is no word; there's no room for failure, only success is our destiny — just make it. — Awesome. Rafi? — Has anyone seen Grey's Anatomy? Yeah, most people saw Grey's Anatomy. Yeah, really smiling, but very shy.
There's a scene with doctors from Syria who want to perform an operation, and the American doctors tell them: please, simulate what you were doing in Syria. So they tell them: okay, there are a lot of tools here that we didn't have — we used just two things. And there was no electricity; we just had a flashlight, and we did a lot of operations. So this is the idea: people don't have a lot of tools, but they're doing a lot of stuff. Can you imagine — if you give them tools, if you give the same people what you have here — what they're going to do? — Thank you very much. Thanks, you guys. Yeah, you should stay. No, you should stay here. You should stay here. So the final words that I want to say: I guess, if you look at the numbers — how many people came and how many things were done — we have a lot of numbers and a lot of data, but sometimes we forget that there are a lot of stories behind them. And I believe telling the individual story matters not only because it's such a nice story to tell, but because it's the way we engage. And I personally am very happy to have Kashif and Rafi as my friends here — not only because Kashif is a student with us and Rafi is working with me and I heavily rely on his work in the marketing team at Kiron, but because it's also about coming together and being friends. And I believe the way to move forward is: yes, we can use a lot of online and digital tools to start the conversation or to keep it up to date — I guess, as everyone does to stay connected with friends all over Facebook — but at the end of the day, it's about meeting in person and finding that way to learn from one another. And I believe that's what it's all about. So we are family. Yes, we are family. That's right. So thank you guys so much for being here and sharing your story. And yeah, thank you all for listening, and I hope we somehow start that conversation about what the right, or a good, balance is between the online and the offline — not only when it comes to integrating refugees, using two terms that I actually don't like, but that's just what we're talking about — and also how we can all engage together. Thanks a lot. Thank you very much. Thank you very much.
|
The engagement of civil society for refugees since last year has been enormous and still keeps a lot of people busy. Especially in Berlin a lot of digital projects were started, but which tools are really being used? What is the right balance between online and offline support, one-on-one and mass communication and information?
|
10.5446/20556 (DOI)
|
Thank you. Welcome, everyone, to this session in the health track. Before we actually start with our fight club and the combatants of the fight club — regarding eHealth and regarding health apps — I have the privilege to share a couple of thoughts in order to introduce the session: a couple of thoughts on the transformation of healthcare delivery. So let me start by just asking: we hear a lot about disruption in many, many industries. The question for health is: is healthcare also an industry that's going to be disrupted, or is healthcare going to be different from other industries — is it going to be more about transformation? Remember that, in essence, healthcare is a conservative industry, for good reasons. It's about health, it's about people, it's about people's lives, it's about privacy and sustainability and many other things. So the question really is: do we want to disrupt such a system, or do we want to transform that system? Later today we're going to talk a lot specifically about diabetes, as this is one of the big chronic diseases that can benefit from digital health and healthcare transformation. So I want to share with you some thoughts on diabetes specifically. We have a huge diabetic population all around the world, with an amazing increase in prevalence — about a 55% increase in prevalence up to 2035. Today we're talking about 420 million people worldwide with diabetes, and the prevalence increase is not only in the emerging countries but also in the US and the European countries, with quite significant numbers. Now, the interesting thing is that diabetes has been around for quite a while, but still, if you look at the clinical outcomes and what has really been achieved so far in the treatment of diabetes, we must say that only 50% of people are actually diagnosed, only 20% have access to care, only 12% get decent quality of care, and only 7% actually reach their clinical treatment targets and outcomes. So this is a big challenge out there that, in our opinion, needs to be addressed, and it holds huge opportunities for digital health. Now, if we look at the innovation that is around — and here I'm talking about Europe specifically — we have a lot of medical and clinical innovation, and more of it is coming to the markets every year. In Europe we have about 320 different oral antidiabetic drugs, we have over 150 insulins — insulin types and brands — we have other, more modern pharmaceutical treatment options such as GLP-1 and DPP-4 therapies, and we have well over 200 different types of medical devices — medical device technology at the disposal of people with diabetes to treat and manage their disease. But yet, as you've seen on my previous slide, only 7% of people actually reach the treatment targets. So there must quite obviously be something missing; there must be something else that can connect the dots and actually lead to better outcomes at, hopefully, lower cost for the healthcare systems. On the other side, traditional care delivery is all about healthcare professionals — doctors and nurses treating people. With the prevalence and the increase in the number of people with diabetes, and the, let's say, struggles that most of the public healthcare systems have in terms of their financing, these resources are already not enough today, and for sure will not be enough in the future, to take care of this disease.
Let's have a look at the cost that we have in the healthcare systems, and maybe put that into today's perspective of the disease and also of digital solutions. In Europe we spend about 1,000 billion euros just on healthcare; this is the total direct cost for healthcare. We're spending 700 billion euros on treating chronic diseases. We're spending 120 billion euros in direct costs for treating diabetes, and you can add to that another 30 to 50% of indirect costs coming from absenteeism, reduced productivity, disability and condition-induced unemployment. Now the interesting part is that 80% of those costs can potentially be saved, because they come from potentially preventable complications and the overall cost of the healthcare system to operate. So there's a huge potential that we see for making the healthcare systems more sustainable, if we find ways to use the innovation that is available, connect the dots with digital health solutions, and apply them for the better of the patient and for a more sustainable healthcare system. Now if we then look at what we call the mHealth market and the projections that we have for it, by 2017 the market is projected to be at around 5.7 billion. These are basically apps; these are basically direct-to-consumer revenue models, mostly based on app sales, subscriptions and the devices and services that are sold with them. Now compare the classical market estimation for mHealth with the potential that could be out there by really engaging in getting better outcomes and helping healthcare systems to better provide care to patients. There's a huge gap, and I think that gap can also be addressed with digital health solutions. Coming back to the 120 billion that we spend in direct costs for treating diabetes, there's an enormous increase in the cost per patient if you compare the different stages of disease progression, complications and comorbidities, going from twice the cost of the general population up to 24 times the cost for treating this population of patients. So what is important here, and I think this is also where the opportunities lie with new digital solutions, is to find the adequate balance and the right tools for integrating self-care and professional care, supporting behavioral change processes, avoiding disease progression and comorbidities, and promoting healthy living, prevention and good health in general. As you can see, I'm kind of already addressing one of the topics. The apps will be a very important part of that, but the apps will not be enough if we want to, let's say, take care of the triple aim, which is taking care of the cost of providing care, having a good patient experience with the care provision, and also managing population health in the different markets. And for that, we believe that digital health technology and IT will be enablers for a better process of delivering care to patients, and as apps will be an important part of that, we will have this discussion right now with some concrete examples. Thank you very much. Well, thank you, Lars, for your presentation. I think it really sets the context for what is going to happen now. You are going to be witnesses of the first Health Apps combat. We have brought together two influencers that are really big in mHealth, especially for diabetes. These people really think that new technologies and mobile health can change the world.
But we want to confront them with the difficulties and the shadows of these new trends that are really reshaping how we deliver healthcare. For this combat I have with me Min-Sun Kim, please come on stage. He is going to be the referee of this special combat. But first of all, I need to ask you: do you really think that health apps are really valuable? Truly: please, hands up, those of you who really think that these apps are giving us big value. Come on, get up. Okay, I don't see many hands up. Okay, no: who thinks they are not really adding value, that they are just tools? I don't see hands. One, two, three. Well, what about the rest of you? Do you have an opinion? No? Well, let's see this discussion, and maybe this will change a little bit. Can you present our fighters? Yes, sure. We'd love to do that. Just to give you a little bit of my background, why I'm standing here: my name is Min-Sun Kim. I'm a venture capitalist from XLHEALTH. We solely focus on digital health startups. From our fund we invest 500K to 5 million euros in series A companies. I've seen 2,500 digital health startups, and I invested, or we invested, in 5 of them. The last one was MeeDoc, a telemedicine company from Finland, 3.5 million last year. Happy to have them on board. Also, I'm very honored to announce that I am the successor of healthcare-startups.de. If you want to know about digital health, please look at the blog. Yes, so thank you very much for having me. I will now present the healthcare experts. I will be guiding you, or I will try to guide you, through the session here. So now, to make it a little bit more fun, since we are here in a combat, I will try to moderate it in a more, let's say, boxing way. So I will try it. I'm not a professional announcer, but I will try this. From the cold northern part of Germany, with 1.93 meters and 94 kilos, please welcome the founder of dedoc and K-POM survivor, Bastian Hauck. Last but not least, from the cold northern part of Sweden, please welcome, with 1.75 and 64 kilos, the co-founder of mySugr, Fredrik Debong. All right, all right, all right. Carla, please explain the format, how this battle will go on. Okay, this is how this is going to be done. They have a first round, 5 minutes each, to present a little bit of their common ground. Fredrik will be discussing why health apps are really, really good, and Bastian will go for the difficulties and the shadows. Then we have round two; they have 3 minutes each for replying. And finally, we'll have a discussion; we'll have Lars again on the stage so they can really fight hard. That's why we need to stick to the schedule. You can also join the discussion with our hashtag: please go on Twitter, #healthappscombat, and let's try to do this really, really nicely. No blood, please. All right, then, you said 5 minutes. You will start, Fredrik. Round one, go. Hi. That's how we start a boxing match, right? Well, to give you some background: for the last 31 years I've lived with type 1 diabetes, meaning my life depends on blood, needles and sweat and tears and everything else that comes with doing this therapy. My body is broken; I need to replace a bodily function, yeah? It's not easy. Technology helps; technology keeps me alive. A little bit of hormone when I need it, injected through a needle: it keeps me alive. Through technology and medicines, I can control this disease. Yet it is always a struggle. In the past 10 years, I have taken this disease I have and this therapy I live with to a professional level.
I've become a working diabetic, or what should we call it, a professional diabetic. I've started working on making diabetes suck less using technology. This has launched me into the science of diabetes, into the science of bringing technologies to market, and, as an entrepreneur out in the field, into speaking with doctors, patients and grassroots organizations across the world. What we've built is an app which helps people in day-to-day life. But to do so correctly, we needed to start with the science. We took a look at what was out there and were astounded by the fact that it wasn't a new thing. People have been building apps and electronic systems to manage diabetes for 25, 30 years: apps or things like that, computer programs. Looking at the number of trials out there: over 700 of them. We distilled that down to the most impactful studies and trials in this field and started seeing a pattern. Even among the 31 top-notch trials that have taken place over the past 25, 30 years, there was a difference. You see, if the quality of the trial was good, it still didn't mean that it was a good thing for the patient's diabetes. But if the quality of the system which was used in the trial was good, if it was based on feedback, if it was based on solving real problems and not just keeping a journal of therapy data, then it actually had an impact. It had a positive impact, though not across the board, due to the quality difference. The well-built and well-thought-through systems actually drop that little magical number we all strive to drop, the number of hypos. They drop the risk for heart complications and heart disease down the line. That's the outcome number everyone is looking for. So it does help. Because apps drive engagement. If you engage in therapy and the data becomes useful in day-to-day life, oh, that's when magic happens. Now, science tells us it kind of works if it's built well. Now let's take this to a patient's perspective, away from the science and into day-to-day life, because we don't care about outcomes. What we care about is living life, is taking that feedback in the therapy and making it a short-term and a long-term positive feedback loop. And that is possible through apps, because they are so interwoven in our day-to-day lives. Now, this little app I've been part of creating, and this ecosystem of apps I've been part of creating, has now helped a bit over 600,000 people. 600,000 people have now experienced a different way of seeing diabetes therapy: of seeing it as the monster you contain instead of this drag and eternal grind. It's a philosophical change, really, using technology. I could tell you hundreds of user stories, because I've emailed over 35,000 of our patients, of our users, of my brothers and sisters. But it all distills down to a moment which I shared two years ago here at re:publica: an email I got one day from one of our users in Germany saying nothing, but it had an image, an image of an app's logo tattooed on a forearm. That is powerful. What's happening with the time? Yes, exactly. What is it? We have five minutes. You got your five minutes. Then my closing remark, and that is: this is not the only time a user of an app has tattooed the logo on his forearm. If you look at the impact of an app in real life, and it distills down to the number of tattoos, I think we have a winner. Thank you. Well, thank you, Fredrik. This was basically the perspective of a co-founder of a diabetes startup.
I'm very curious about your opinion from the perspective of a patient. You have five minutes. I will count the time. Go. All right. Hello. Thanks for coming. I'm happy to be here. If this was a real boxing thing, I guess I'd be used to fighting somebody with tattoos, be it of an app or whatever. I feel like I'm fighting Robocop, the digital thing and the professional diabetic. I'm not. I'm Bastian. I'm a diabetic. I'm type one. I do think I know quite a lot about my own diabetes, but that's about it. I care about myself and my diabetes. Before I get into the argument, I'd like to reframe the question a little bit in three small points. First point: some of the arguments I will make are not truly mine, because I'm not really against apps, right? I actually do use at least one app on a very routine basis, but there are a couple of points that I think need to be considered, so I'll get into those. Second thing is, I will really look at apps from a patient perspective, and I will look at them from today's perspective. So I'm not going to get into where apps could be in the great new future, five years, 10 years from now; let's look at health apps as they are today. And most of these apps, at least as far as they concern me as a type one, are about collecting data. I'm not talking about fitness trackers; they do that as well, and that's important as well. But these apps collect my, or they want me to collect my, blood sugar levels, my insulin doses, my carbohydrates, my activity, my mood, whatever. They actually want me to give them all that data by whichever means, mostly manually. I'll get to that later. The reason why that is important, from a patient perspective: I know we're at re:publica, right? So many of you are probably into this whole quantified self thing and playing with data, and I've seen a workshop here on food and stuff. So everybody here is very pro doing all these things, and that's great. Do that if you're into it, because it's a hobby or it's fun or you like doing it or it's your passion or you want to prove something or whatever it is; in a way, it's fun. For me, it's not. I don't do this because it's fun. I have to do this. I have to do this 24/7, every day, and I fucking hate it. And that makes a big difference. Now, when you look at all these apps that ask me to enter all this data, and they come along making it more fun with gamification, et cetera: you know, it doesn't really help me. It's still not fun. It sucks. That's the one point. The other point is that I enter all this data into all these apps and then it's kind of stuck there, because nobody makes any use of it. So I enter all this data into an app, I go to my doctor, I show her my phone, and she's like, yeah, right, where's your written logbook? And I kind of understand her, because she can't deal with these 1,600 apps out there dealing with diabetes. My doctor can't know all of them. Now, I'm using this one, somebody else is using another one, et cetera. The data is stuck in an app and it doesn't get anywhere. What we need are ecosystems where this data kind of comes together and makes sense, right? So far, where we are right now, it doesn't. I enter data, I collect data, I have to do this, I want my doctor to make use of it, and it's not happening. I go to my endocrinologist at least four times a year and she doesn't even look at my phone. She doesn't care. Now, that doesn't really make sense to me.
What we need is some sort of standardization here. We need interoperability. And that's something that I figured out this morning going to the restroom: even the toilet paper factories have figured this out, right? If you go to the loo in the morning and you take your toilet paper roll and it's there, that's great. If it's not there, you can ask your girlfriend to knock on the neighbor's door and ask for toilet paper; she'll give you a roll of toilet paper, and yes, it fits on your toilet paper holder. It's that easy. And that works all over Germany, it works all over Europe. It's great. It doesn't work in healthcare. If I use my app with data and I want to share this data with somebody else, that's where it ends. Thank you very much. Thank you very much. All right. Now we enter round two. So, Fredrik, now it's the second round. You have only three minutes. Are you ready? Absolutely. All right. Go. Where to start? Ecosystems. I love that one. So, ecosystems are right now being developed. And you know what? It is not the electronic health record systems developing them. It's not the industry, the medtech industry, developing them. It's technology. It's technology companies developing them. Right now we are right before a big bubble bursting and opening these systems up to each other. And it's super exciting. Right now, at my little company, we are in discussions about integrating with a number of different hospitals. But still, that's just a number of hospitals. I think this change is going to be bigger, now that technology companies are stepping into the game. And I love it. We are also at a point in time where the medical industry is changing. The medtech companies are opening up. A few weeks ago, we launched integrations with a market leader. We already get data into our little app from CGM systems and blood glucose testing devices. Soon also other gadgets; I won't go into detail there yet. But data is slowly becoming passively gathered. So it becomes a question of making use of the data, not just with an HCP, with a healthcare professional, but of the systems becoming intelligent enough to make use of the data directly facing us, the patients. And that is super exciting. Now, I believe that we need to get away from this belief that to keep on top of stuff, you need to sit there manually entering data all the time. One minute left. Apps are marvelous if you want to gather data on a problem you experience in your therapy and make the best out of it, if they help you make decisions based on that data. See, I'm the co-founder of a mobile health company in diabetes and a diabetic myself. I use our own app about two weeks a month, when I see something starting to change and I don't know what it is. I use it for two weeks to figure out the problem, solve it and go on with my life. And then I come back when I see the next problem occurring. This is the level we need to be speaking about. This is where they come in super handy: to solve problems, and not as a lifestyle. Thank you very much. Thank you. All right, Bastian. The same rules apply to you. Three minutes. Please go. I agree with your second statement, that apps should move away from me having to enter data, and data should come through sensors. I actually wear a sensor. That's the kind of app I use on a daily basis. I do use it daily, many, many more times than I've ever used a blood glucose meter. I agree with that.
I don't agree with your following statement, that we should go away from using apps on a daily basis, because for my condition, and for yours, I would say I do need it on a daily basis. Diabetes is not a problem that you fix now and then don't need to think about for the next couple of weeks. An app will not move that problem to somewhere in the future. I need an app that will help me every day, but it needs to be with me without bothering me, playing around with me and asking me to do stuff all the time. So I kind of agree and don't agree. But okay. I also like the fact that you put up this ecosystem argument, because I think, yes, that's where we're going. However, when you're saying it's not the medtech industry that's going to be developing this ecosystem, but it's going to be others, then I get a little bit cautious. But we're not going to get into this whole data security and privacy debate, because that would just derail things. This could be the knockout punch, right? If we start talking about data security and privacy, then this is dead. We're not going to get into this. However, we do need good data security and privacy protection rules in place. I think I have one more point that's maybe farther away from the app discussion as such, but it's something that I feel is often overlooked, and that has to do with the human side. And that's something that maybe we can keep for the debate. We had this whole discussion about software as a drug in other forums; with the whole app thing and sensors and automation and EHR systems, we're getting towards software as a doctor, right? Or apps as a doctor. So apps start actually doing diagnosis, and you can actually use apps to find out if your heart rate is okay, or maybe in the future if your diabetes is all right. One minute. So what does that do to me as a patient, and to my doctor? Yes, maybe I don't want to see him every three months, but in a way it's good to know that he or she is there. We already have too few doctors out there. If we start introducing more and more automation, apps, et cetera, then we will have even fewer. I talked to a doctor just a few weeks ago and he said, you know what? I'm a Landarzt, a country doctor, in Germany, right? He likes driving around seeing patients. He says, you know what? I actually like my job, because I go out there, I see patients and I work with them. He said, you know, I was introduced to telemedicine, and now I sit at my desk all the time. I sit in my office and I look at screens and data. He doesn't like his job anymore. Now, is that going to be the ultimate result of digitalization and more apps: that my doctor doesn't see me anymore, I don't see my doctor anymore, and all I talk to is apps that keep on annoying me with beeps and messages, and I don't have anyone to turn to anymore? I don't know. Thank you very much, Bastian. Thank you very much. So I think what we're going to do now is bring Lars up on the stage and start the discussion. I think you already mentioned something very interesting: you think that the ecosystem will be... maybe we should go behind it, or how should we do it? Let's do so. Good. Yeah? Should we go behind it? Don't hide. Don't hide. No, no, no. Don't hide. Maybe these people want to come up also and fight with you. Do you have gloves? So, Lars, about the statement that the ecosystem will be developed maybe not by medtech companies, but by companies such as Apple and Google: what is your position on this?
Well, obviously we see loads of those companies, big ones and also startups, coming into the arena, an arena that has previously, you know, belonged only to the medtech industry. And for sure, what they can provide, and where their strength is, is in, let's say, hosting, collecting and managing data, interoperability issues, standardization issues and connectivity issues. And for sure, they are the experts in those kinds of things. Now, the medtech industry still needs to develop the technology, the sensors, the drug delivery technology that actually has an impact on how the treatment is being delivered. And with the medical knowledge that the industry has, I think it's important to take that also into account in terms of the regulatory side, the quality, the way healthcare systems work, and how we can best support patients and healthcare professionals in delivering care. So there's a lot of, let's say, intangible knowledge available in the medtech industry that other industries obviously don't have. But I must say, digital is just the next step in the evolution, and the medtech industry also needs to, let's say, add the data and the digital part to the hardware, the sensors, the technologies it has today, in order to make it a complete picture. I mean, you have over half a million diabetes patients on your platform, on your app, mySugr. So how would you like, or how would you see, the future? How would you like to work together with these medtech companies? We already work together with quite a few of them. Just two weeks ago, we integrated with the world leader in diabetes blood glucose measurement. We already integrate with these automated systems, which both Bastian and I wear: small sensors inserted in our skin. We get those data points already. And I think that when the medtech hardware companies start working with the software companies, that's when magic happens, because what you connect is internet technology with medical technology. And in that intersection lies a lot of innovation. So software isn't just an app; just an app is not enough to create relief. I mean, that's a question. Yeah, it is already an ecosystem, and it has just started to grow for real. The question about growing the ecosystem: who do you see pushing this interconnectivity, let's say for instance between the medtech companies and the health apps, or even with the healthcare professionals? What do you think, Bastian? Well, I think that's, as I said, what we need: integration, interoperability and data exchange between different platforms. And I'm happy to see that you as an app company are working on connecting with others. And I'm happy to hear that the medtech sector thinks that they have intangible knowledge that will kind of protect them from the speed of this whole digitalization development. But I'm not quite sure that's really going to be the case. In a way, I wish it were, but four years ago I was in Brussels at the MedTech Forum as a keynote speaker. And I was pulling out my blood glucose meter and my iPhone 4S at the time. Four years is a long time in IT, and in mobile especially. And I said, you know what, guys, beware: my next blood glucose meter will be my iPhone. And today, the device I pull out 20, 30 times a day to check my blood sugar is no longer my blood glucose meter, which still sits here, which I use twice a day to calibrate, but my iPhone.
If you ask me a similar question today regarding apps and ecosystems, I would say: beware, guys, because my next diabetes app, and that goes for both of you, is not going to be an app. It's iOS. You know, Apple already introduced HealthKit, ResearchKit, and now CareKit. That's where things are going. You will be delivering data points. You will be collecting them through hardware. You will maybe be upgrading them through your app. You will be delivering them there. And that's where I will see them. That's what I think. Lars, what did you think four years ago? Did you think that the iPhone could be used as a glucose meter? Actually, yes. I mean, for me it is very clear that consumer technology and medtech will converge at one point in time. And obviously, if you already have a device today that is capable of connecting and of doing that, why should you have two or three or four? Nobody wants that. Now, the challenge is actually the quality and the regulatory setup. As we know, consumer devices move much faster and face lower, let's say, regulatory hurdles than medical devices. So we need to find ways to bring that together and to assure quality for the safety of the patients. That's one thing, but we see that happening. Now, coming back to the medtech industry: I do believe that the medtech industry has also evolved in the four years since you were there. And we need to sit not where the data is collected. We don't need to sit where the data is hosted. We need to sit where actionable decisions can actually be made with the data. Now, there has been a lot of discussion of, you know, let's collect all the data, have all that data integrated into one ecosystem. But the more data we collect, the more the healthcare professionals as well as the patients will be overwhelmed by the data they have. Already today they are actually not capable of processing all that data in a traditional way of care delivery, as we see it in a doctor's office. So what we need here is actually automation: algorithms that go into retrospective analysis of the data that is there, go into prospective analysis to see, you know, what could happen and what could I do in order to prevent something, and then eventually go into predictive or prescriptive analysis to actually, automatically, give the right suggestions on what could be done. But the question is also, when you have an app like mySugr, are you going in that direction, to make suggestions for the users, and further down the line, are you going to replace a doctor? We already help make decisions, because we have already dealt with that regulatory and quality assurance which is needed in medtech. We already have the quality assurance systems in place and we live up to the criteria of the FDA, the TGA, the CE mark, the TÜV, et cetera, et cetera. Little app companies are audited by TÜV twice a year, sorry, once a year, and blind audits too. It's kind of exciting. But we're now at a level, also as app companies, where we deliver class IIb medical devices, meaning actually telling you how much insulin to inject. And that's pretty hardcore for a little app company. So there are now two or three apps doing so, also at the quality level which is needed: Roche and mySugr, well, they're actually just two. So even startups and medtech are coming together and living up to those quality criteria; it's already happening. Now, replacing the doctor, I don't think that's possible. You see, diabetes is not just about the data.
Diabetes is so much about the psychology. When I go on stage here, my blood glucose rises because of stage fright. But that little tidbit of information is hard to capture with apps and data points. You need someone who knows it and who is able to help you get through that. But that's human. When I see so many digital health apps, they are trying to give more suggestions and, basically, to dehumanize the whole healthcare process. I mean, what are your thoughts on this? You've also seen a lot of diabetes apps. Well, like I said in the beginning, I'm not really against apps. I just think that they can't stand alone. They need to feed into larger systems. And I think we need to think about them a little bit differently. I agree with your point as well, you know, about making things more automated. The dehumanization part, I don't know. I mean, I don't want to deal with myself in terms of looking at different apps and figures and thinking so much about it. If there is an app that really helps me in an unobtrusive way to make better decisions that I'm still the master of: I want to make my own decisions. I want to look at what I eat or the sports I do and then decide whether I need more insulin or less. If an app can really help me make better-informed decisions, and if my doctor through that app can maybe even assist me, then I'm all for it. And I think, now to leave that combat format a little bit, yes, this is where we're going. But as I said in the very beginning, I was looking at where we are today. And today we're not really there yet. Yeah, I mean, when I talk to healthcare professionals and patients about the whole notion of digital health and apps and all that is included in that, they basically always come up with two barriers to adoption. On the healthcare professional side, it's the fear of obsolescence. And on the patient side, it's the fear of dehumanization. And I think if we do it right, if we design the right technology and put it at the service of the system and the people with diabetes, what we're going to achieve is exactly the contrary. There are not enough healthcare professionals anyhow, and with the growing prevalence it won't be possible to take care of all those people in a traditional way. The second point is, if we look today at a normal consultation, it's about 10, if you're lucky 12, minutes, four times a year. Now, if of those 10 minutes your physician spends six minutes looking at his screen and the data and trying to figure out stuff, and only four minutes in meaningful interactions and conversations with the patient, that is dehumanizing. But if you actually have smart data algorithms, if you go into automation of decision making, of things that the human mind cannot process at quick speed anyhow, well, then you might be reducing that type of activity to a minute or two and have more time for those meaningful conversations with the patient. So it's completely the contrary that will be achieved with well-designed systems. So basically he has more time for the patient and less time for all this administrative red tape, basically. Exactly. As Bastian said before, exactly: physicians are physicians because they want to be close to the patient. And they're forced into administrative and, let's say, other types of work that could be avoided if we designed the right systems.
Now, the bottleneck that we're facing when we're talking about this ecosystem thing: you know, you are here, Fredrik is here, I am here as the patient, but we actually lack a doctor, because that's kind of the other side for me. You guys do the ecosystem and the tools, but I am talking to my doctor, and he or she should be standing right here. That's a bottleneck we will be facing, or you are probably facing already, because, you know, I do use my iPhone or whatever every day, and I do use it for my diabetes data. And most of the type ones I know deal with one or the other app as we speak. It's the doctors so far who don't. And, like I said, I can understand them, because the ecosystems that we are talking about here are not in place. So it makes it almost impossible for them to deal with and interpret all the data. But we need to convince them to come on board. You need to provide the better ecosystems, but you also need to convince the doctors to actually want to do this. So far... my father is a doctor. He just retired; until his last day in the office, he didn't even use a cell phone or a PC. He was doing it all manually. That's the state of affairs in Germany for some of the healthcare providers. I mean, the average age of a doctor is 60. So I just wanted to ask you: when you started this company, did you try to convince doctors to use your app, and how was the result? How was the process? We focused exclusively on patients from day one. We didn't even build a doctor's portal or something like that. Why is that? Well, one of the things we found out, already six years ago, was that if you ask at a hospital, if you ask the nurses, the diabetes educators, the doctors and the administrators about using the IT systems currently on the market for managing patients and the administrative parts, the administrators loved them. The nurses, the educators, the physicians bloody hated them, because they want to spend time with the patient. So why start there, when currently it's only seen as a burden by the physicians? When software comes in which solves real problems for physicians in their practice, so that they can focus on the real patients, on the real issues, as soon as possible, then that's going to work out. Thanks, all right. Sorry to interrupt, but I don't know if you have seen what's going on on our Twitter wall. I think that maybe in the audience there's someone who has a question or wants to share an opinion. Are there any questions? Are there any questions from the audience? Or opinions you want to share, besides the ones that are on Twitter? Yeah. I'd like to know more. Sorry, I'd like to know more about that, and where the hurdles are that you need to work on as companies. Can I take it? So, I mean, diabetes is decision making, and physician decision making in diabetes is very much a data thing. So basically, what you do as a person with diabetes is measure your blood glucose several times a day. Today, most write the values down in a little book and go to the doctor, who looks at the patterns and at the values in order to make therapy adjustments, in a nutshell. Now, this still happens in about 80% of the cases today. The data is not gathered automatically; it's patients who write it down on paper to show it to their physicians. And 60% of the data that is in there is already wrong. So you're actually giving the physician a data set to make therapeutic adjustment decisions where the data, the way it is presented, is almost impossible to understand, and 60% of it is even wrong.
Now, the next evolutionary step, and that's about 20% of the cases, is that data is already collected from all the different devices that a person with diabetes uses and is aggregated into all sorts of graphs and graphical ways of showing that data, which should be helping the physician to make better decisions. Yet, as a recent study that we have put out shows, even in the cases where you have the right data, collected automatically, no errors, no flaws, and displayed in a graphical way that you can change and analyze, still up to 80% of clinical decision making is wrong: not understanding, not getting the right data patterns for making the right clinical decisions. So here the big challenge, and also the big opportunity, that data can provide to physicians is actually not only adding more data; it is adding meaningful algorithms in order to filter and data-mine and show the physicians where the problem lies and what possible solutions to that problem could be. So, when we talk all about data, I think there is one question that we want to take from the Twitter wall. If we introduce more and more apps, do doctors become data analysts? Who wants to... And that's the last question. That's the last question, I forgot. Who wants to tackle that? I touched on that issue in my second round. It's something that I wouldn't want, and the doctor I interviewed a couple of weeks ago, the Landarzt who actually liked having real contact with his patients and was introduced to the great new world of digitalization and telemedicine, was kind of complaining about now having to sit in front of the computer, looking at graphs and visualization charts and FaceTime interviews or Skype sessions, and kind of losing the joy in his work. If that were the result, and that remains to be seen, then I think we do have a problem. But as Lars pointed out in response to that, the idea is something else. So wait and see. All right. Just to sum up the whole thing: like Lars said, there is a limited number of healthcare professionals and doctors, but a rising number of diabetes patients. Therefore, health apps or digital health solutions can provide, or are, an opportunity for relief for society. Moreover, digital health solutions can save the doctor time on all the administrative parts, to focus on the patient, to have more time where it really matters. On the other hand, Bastian said, you know, as a chronic patient, as a diabetes patient, it's not fun to collect data, unlike me, for instance, with this fitness thing. He has to do it, due to his chronic condition, and therefore he doesn't want to only collect the data; he wants to share it. And therefore, and I believe that's what Fredrik also said, we need an interconnected system between the healthcare professionals, the care providers, and the patient. Only then, and I think we all agree that we need a holistic solution together with the ecosystem, can health apps be a relief. I think in the next five years, or in the next four years, we will also see a lot of dynamics within this market. We mentioned Google and Apple coming into Europe, entering the health market. So I am looking forward, with you guys, to watching the digital health ecosystem carefully over the next four years, and I also invite you to think about data privacy, about health apps, about the ecosystem. Thank you very much. Thank you. Thank you.
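Fredrik's point above, that the next diabetes app "is not going to be an app, it's iOS", with HealthKit as the shared data layer, can be made concrete with a small sketch. The Swift snippet below is a minimal illustration under stated assumptions, not mySugr's actual integration: it uses Apple's HealthKit API to read the most recent blood glucose samples that any connected meter or app has written into the shared store.

import HealthKit

// A minimal sketch of reading shared blood glucose data from Apple's
// HealthKit store, the kind of platform-level ecosystem discussed above.
// Assumes the app has the HealthKit entitlement; error handling is minimal.
let store = HKHealthStore()
let glucoseType = HKObjectType.quantityType(forIdentifier: .bloodGlucose)!

store.requestAuthorization(toShare: nil, read: [glucoseType]) { granted, _ in
    guard granted else { return }

    // Fetch the ten most recent samples, newest first.
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate,
                                       ascending: false)
    let query = HKSampleQuery(sampleType: glucoseType,
                              predicate: nil,
                              limit: 10,
                              sortDescriptors: [newestFirst]) { _, samples, _ in
        let mgPerDL = HKUnit(from: "mg/dL")
        for case let sample as HKQuantitySample in samples ?? [] {
            print("\(sample.startDate): \(sample.quantity.doubleValue(for: mgPerDL)) mg/dL")
        }
    }
    store.execute(query)
}

Any app the user authorizes reads from the same store, which is exactly the interoperability Bastian asks for: the data no longer lives and dies inside one vendor's app.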
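Lars's "meaningful algorithms" that filter and data-mine instead of just adding more data can likewise be sketched. The function below is a deliberately toy example, not any vendor's actual method and not a medical device; the names Reading and hypoHotspots are hypothetical. It scans a list of readings for values below a hypoglycemia threshold and reports the hours of day where lows cluster, the kind of pre-digested summary meant to free consultation minutes for conversation.

import Foundation

// A toy illustration of the retrospective pattern mining discussed above.
struct Reading {
    let date: Date
    let mgPerDL: Double
}

// Counts readings below the threshold, grouped by hour of day.
// A cluster of lows around 3 a.m., say, points at overnight dosing.
func hypoHotspots(in readings: [Reading],
                  threshold: Double = 70.0) -> [Int: Int] {
    var countsByHour: [Int: Int] = [:]  // hour of day -> number of hypos
    let calendar = Calendar.current
    for reading in readings where reading.mgPerDL < threshold {
        let hour = calendar.component(.hour, from: reading.date)
        countsByHour[hour, default: 0] += 1
    }
    return countsByHour
}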
|
"Health apps, add real value? True or Fake – Join us in this exciting session where top influencers on mobile health will ‘fight’ to reveal the truth about this outstanding trend. Health apps are hot, moreover, they are a powerful market, the past years, an enormous increase in the number of available health-related applications (apps) has occurred, there are more than 165,000 mHealth apps in a market worth $489m. However, little is still known regarding the effectiveness and risks of these applications."
|
10.5446/20557 (DOI)
|
A wonderful good evening, everyone. I'm loud. A wonderful good evening. Hello, and welcome to the rp TEN icon session tonight. You already warmed up this afternoon with our colleagues, with Britta and with Ralf, I believe. Who was there this afternoon at the sketchnote session? Hands up. Very good, all right. Half of you are already warmed up, have been diligently practicing sketchnotes all day, of course, in order to learn even more now. Okay, we've brought you ten years, and it's nice that the session before ours was about sex. A good warm-up, too; we'll have the full range for you, from privacy to porn. And what you need now, first of all: pens. Do you have pens in front of you? Are you all equipped? Do you have something to draw on? The person in front of you will do as well. Yes, exactly. The back of the person in front of you, if need be. Tuck it all in. Wonderful. Then let's get going with the year 2000. Who are you, anyway? Oh, oh God, I don't want that. Lama, that's... I've never seen you before. Well, I know you from the internet. Ah, you can't tell by looking at you. Yes, exactly. I have a big... So I've heard. Well, I'm Anna-Lena. Oh, her? Yes, exactly. Sometimes people come up to me and say: you're "the one who draws" ("die, die malt"). And that's... You're the one who draws? Yes, I'm the one who draws. And that has actually been my job almost since re:publica began. re:publica is partly to blame that I now have this funny profession; I've been doing sketchnotes and graphic recording since 2008. For a long time. And for many years here, too. You started that in 2010 at re:publica. Right. And that's where I met her. Back in 2010, I'll come back to that later, I saw her on stage for the first time. And she infected me with this sketchnoting fever. And since then I've been in on it, too. And dear Tanja, who are you? Me? I'm Frau Hölle. I think if anyone knows me, it's probably under the name Frau Hölle. Even though I have nothing at all to do with that place; I'm very kind and nice. Anyone who knows me knows that. And yes, I also use a lot of pens and paper, and nowadays iPads too. That's why we've brought our iPad for you today, to show you everything nice and big, when it cooperates. Yes, so everyone can see it. There, so it's nice and big for you. When we give workshops, the first question, whenever we draw on the iPad, is always: which app is that? And so you don't all have to ask afterwards, we'll tell you right now: it's the Paper53 app. Exactly. Let's have a look at your home screen so you know what the icon looks like and can download it. Let's see. Down here on the right, a small round circle with a 53 in it, in orange, if you'd like to get it from the App Store. It's only available for iOS; sorry, you poor Android users, you're out of luck. Or you Windows Phone users. Anyone who still needs tips is welcome to come up after the session and pick up a few app recommendations. Okay, let's get started. Let's have a look. 2007. First of all, who was already there in 2007? One, I count one. Honestly. Who has been to re:publica more than three times at all? Okay, okay, okay, more than five times. Yes, still ten.
And more than seven times: Henning. Or in the back there, yes, at least. How many times for you? My first time was at the eighth one. I haven't skipped any. Well, I've actually made it nine times. No, not just five times. Anyway. Nice that you're all here today. I'd say, let's start with 2007. Let's start with 2007. Henning, what happened in 2007? You don't remember, do you? So long ago. It's not that hard, though. You slept through 2007. There was no Club-Mate yet, right? All right. And no WiFi either. That all comes later. 2007: what there was a lot of, and what a great many sessions were about back then, was blogs. Blogs, blogs and blogs, monetizing blogs, writing blogs, and moving around the internet with blogs. And how do you draw blogs? Are you ready? Quite simple, you can for example... one pen isn't working, Tanja. No. Look, maybe it's the wrong one after all. I know why. Bluetooth. Right, pens need Bluetooth too these days, otherwise I can't talk. So, is this the one for mine? So, I'll take it out for a second, careful. We'll improvise. Careful. Any moment now the tech will fail on me too. Anna, keep them entertained for a bit; I'll connect quickly. All right. So, Bluetooth. 2007 was about blogs, and we've come up with a few icons for you that you can copy nicely in a moment, and now the thing is working again. Right after this session we'll write an Apple Pencil review and will surely be outfitted by Apple for free for the next five years. All right, here we go. What you can do quite easily and quickly: a pen as a rectangle, a tip, and a nice speech bubble all around it. A blog can be that simple. And another option: look, I'll draw that with this app now, a nice tool. For that you have to select for quite a while. In two seconds. Okay, next one. What else was there back then? 2007. In 2007, Twitter had just started. Right, exactly. That odd bird. Yes, exactly, that odd bird, and in the beginning it still looked rather scrawny. You may remember it dimly. It looked roughly like this: it has a big eye, a little belly at the front, and little wings like these. My first Twitter bird. Back then still a bit pointy. And today you have to put in a bit more of a curve. It's gotten fatter, too. So over the years Twitter has simply grown fatter, even though they still don't make a profit. A little head on top, don't forget the eye. And then they have one, and they get smaller and smaller: two, three little wings. Okay, seven years in between. Then let's continue with 2008. Yes, that was the next year. Whether this one likes me too, we'll see. Yes, speaking of Twitter, by 2008 it was definitely already being used excessively. We all noticed that. Right, Henning? 2008, were you there? Yes, exactly. Very nice. Of course, the Mr. Fail Whale we all know comes into play here too. The whale, the dear good whale, who looks so friendly, whenever we had no internet available anymore, or rather no Twitter, in that case. And he always looked a bit sad. But he was carried through the air, the dear good fail whale, by those little Twitter birds. And then the fail whale doesn't look so sad anymore. Even if it was always very annoying. In 2008, besides the wicked, sad fail whale, there were also
people like Henning, who woke up then, or the fail whale, whom you had to wake up again... sorry? Oh, wrong year. We had established that. But not bad. Let's go to the drinks counter first. Oh yes. Yummy, yummy, yummy. What do we all always drink here at re:publica? Still do. In 2016 as well. Yes, of course. What else could it be? So, a bottle is drawn quickly. Looks a bit like a gin bottle, but fine. We'll change that now. The dots down there, I don't want those. For that you can turn the page over and simply draw it again. True. And so, quite simply and without much magic, the gin would become the Club-Mate bottle. And what does it do to us? Before the Club-Mate bottle we're of course always a bit tired and sad. We've already seen a good number of sessions, after all. That's why we need Club-Mate. Afterwards we always feel very, very good. We look forward to the next session coming up. Club-Mate, then. And one more; you get to draw this one. All right. Good. An unforgettable one. We keep having iconic moments and things and people that we of course can't assign to particular years. But one thing you're all familiar with: Die Ärzte had such a good song back then, about the hairstyle. First, of course, you have to draw a bit of a mustache here. Who's already guessed it? A full mustache. In the old Schönhauser back then. Yes. The old hands. And very quickly, whoosh, the hairstyle goes on top once. Done, and maybe a little ear as well. That's all it takes. That's really all it takes. Then let's move on to the next year. 2009. And now comes the good old Poken. I had the thing; I didn't know it at all. How does it work? You tell them. That was one for... who of you owned a Poken? Okay, I'll join in too. Poken, it was great. You needed no WiFi, no internet, nothing. And you could connect with anyone, at least in the sense of: I'll give you all my contact data. And data protection and all that wasn't an issue back then either. Not yet. And bam, everyone had your data. A big paw like that, with several fingers. Yes, I had a panda one. You hold it up to the other one, and then it beeped, I think. And then it's a bit like ring-a-ring-o'-roses with touching. Exactly. Just with pandas and such. It was great back then. Well, it never caught on. We're in 2009 now, are we? Exactly, and I was there as a perfectly ordinary attendee. And the problem back then was: we were in the Friedrichstadt-Palast, and the internet didn't work all that well yet. Exactly, just like today. The WiFi was planned, and every day it was promised that it would get better. But no such luck. Do you want to continue? Yes. Good, because in the meantime... I'm still on 2009. We're still in 2009, but we've changed locations in the meantime. Oh yes. Because before that we were just in the Kalkscheune with a few hundred people, and at some point it all got too big and we moved next door into the Friedrichstadt-Palast. The Friedrichstadt-Palast: if you wonder about it, the Friedrichstadt-Palast is a tricky building; I don't spontaneously have a picture in my head of it as a palace. Yes, glitter and razzle-dazzle. Yes, right at the door. But we don't even need that. You simply think about what makes a tricky-looking location like that so special and so unmistakable. And in the example of the Friedrichstadt-Palast, that is of course...
How could it be anything else? Nice legs; you have to laugh at it yourself. Did we mention this is fun? Are you having fun? Well, we are, right? So, to make it unmistakable now: the Friedrichstadt-Palast. We never actually saw the show; even after 2010 I never saw it, but... No, 2010. 2010, yes. Exactly, 2010 was also in the Friedrichstadt-Palast. 2010. And what came up there for the first time were the cats. Really? Yes, yes, yes, exactly. Cats are part of the repertoire of internet illustrators, so you should all have them down, cat content in particular. And now let's look at what that can look like in combination. So, of course this one is laughing too. Cat: a round head, two pointy ears on it. It looks a bit sad, Tanja. Looks a bit sad, right? Yes, we'll change that in a second. First it goes into the content. It's not just any cat, after all. It's the content cat. And we can also... a happy content cat. Wonderful. But the content cat, yes, exactly: there was much more back then than the content cat. There was also the lolcat. Back then already. The lolcat. Okay. 2010. Twenty ten. We're still in the same year, and what broke over Germany, or over the internet, looked roughly like this. And still does today. They actually still exist, but it was one of the first encounters of its kind, I'd say. I was told I should draw eyes into it too, with little lines going down. Yes, that's what a shitstorm looks like. It has something nice about it after all. And so you still know which year you're in: that's when it started with the bigger devices that we now use ourselves. While we were all still fiddling with smartphones, Apple came up with something bigger. And it looked like this. The iPad arrived in 2010. And back then there were, I think, two or three people at re:publica with an iPad, and everyone came into the Kalkscheune and said: can I touch it, can I touch it? And today it's all completely normal. Okay. And do you know what else happened? For the first time at the closing event: Biz Stone of Twitter was supposed to speak, and Johnny stood on stage and it didn't work, and Biz Stone wasn't there, and nobody knew what the problem was. And as a bridging strategy, YouTube was put on, as a popular karaoke device. And we sang Bohemian Rhapsody, which has actually since become the great cult anthem of re:publica. And you can draw that quite easily by drawing a crowd like this: little arms at the sides, with a few little figure heads each, slightly offset. That, by the way, is an evolutionary stage of the star people we already saw this morning from Britta and Ralf the whole time. And maybe a few notes on top, so you can tell that the people are singing. They're having fun and they're singing. And what do they always sing? They're singing it now too. Galileo. So. Figaro. See how easy visualization can sometimes actually make life? And that was the origin of the closing event. On to 2011. We already discussed beforehand what to do. Big topic, revolution: 2011 was about Facebook, about the events in Egypt. And what's the easiest thing to draw? A little revolution fist. A few boxes on it. Bam. A hand like that should have five fingers. We heard that this morning as well. The limbs should be right, the joints should be right.
And the number of fingers. That's already half the battle in visualization. So, next one, since we were just talking about Facebook. Over the years we've tried out loads of social media. Twitter we already had. And if you don't feel like drawing this logo every time, the old Facebook logo, you can of course also combine things rebus-style. For example like this: you simply draw a head and a book. Head-book? What do you mean by that? I don't see it. And then these heads go into the book, these faces. Bam. And now I have a face book. Ah, Facebook. Yes. Maybe. Oh, great. And now we go on. Some... anyone can do that one. So, let's continue. Yes, let's continue. Do you want to? I'd say let's do one more from the year 2011. A very important topic that has carried through to this day, and is therefore still usable today, probably also tomorrow for the speakers, in the sessions taking place at re:publica. A very important topic: privacy and data protection. Protection, security: of course always a subject that you'd first like to lock away. We lock our data far away first. And we're not at just any conference here. We're at the internet conference. That means it's about the data that is safely locked away in this internet. Shall we do that one too? Draw it. And then we'll go on. At least as important as locking your own data away is not letting data out of our mouths in the first place. In other words, censoring yourself, or being censored, as the case may be. The little figure, we've all got that already. Now comes the black censor bar again. Sadly censored. The little figure isn't allowed to say anything anymore. 2012. Onwards. What was going on then? Okay, we started with self-tracking. And what does it look like when I measure myself? That's you? Yes, my man. And the best way is to stand on a little scale for it. With dry topics or terms like self-tracking or Quantified Self, it's always important to bring in a bit of fun and emotion yourself. Then it doesn't come across as dry and dull at all. And of course, what must not be missing: the buzzword of the year. And I'm sure you can all manage this one. A pile of shit? Ah, no. A beautiful cloud. And you can draw it in all sizes. And maybe with a bit of a thunderstorm as well. Do you use that one? Or when there's yet another data leak in the database, then little bits of data rain out of the cloud. Oh dear. Ah, then that's a shame. Okay. And since we were just on censorship, let's move on: if I decide for myself to stay anonymous, I simply move the bar up one level. Shall I do that one too? Okay. We thought a bit about venues, and in 2012 we moved into Station. 2011 or 2012? I think '11. '11, right? Okay. And this was very prominent. And you can see it in the wonderful posture of many people: bent over their devices, or lying down with the laptop on their belly. You still see that today, right? re:publica is always great for posture marks and back problems. The good old Affenfelsen. Okay. So, I'd say, since time is getting a bit short, let's hurry so we still get through all of the last three years, and in the year 2013... In the year 2013.
For the very first time on this... well, it was a different stage back then. Never mind. The most important topic of re:publica. Ralf. Ralf, that's your cue. Sketchnotes. Yes! Great. Sketchnotes. Sketchnotes with Anna and Ralf, 2013. They came on stage for the first time. Sketchnotes: a very important term that you all absolutely have to be able to visualize. And it's really easy. Simply a square with a bit of a phone-cord spiral on it. What do we do? Anna already showed it earlier. We draw with the pen in our book. Exactly. And very important: sketchnotes don't consist only of text, anyone can do that, but also of visual terms and pictures. For example, the sun. Because today there wasn't much sunshine. But not only that; that year was also about other exciting things. Things started to talk to each other. Things started to talk to each other, right. And we know that milk-carton-and-fridge example. But the much nicer device is actually the washing machine. And the good old Internet of Things was born. We don't need that one, right? No. Then let's definitely do this one: a very important medium, a very important channel, to this day, on which, among other things, we're all being recorded right now. But you can also draw it a bit more old-fashioned than it actually is. Did you have an antenna like that at home too? No, but I like it that way. But what must not be missing, of course, is the unmistakable trademark, what this is all about, namely: YouTube. Bought up by Google; already by then, I think. 2014: we're jumping through the years, we have two more years ahead of us and I think seven, or five, minutes of time. There was a lot about intelligence agencies, about Edward Snowden, about the NSA, and how can you depict that well? You draw three or four rectangles. A few lines in them, so they become columns. A rectangle on top and one underneath. A box around it; Ralf will like that a lot. Yes, he'll like it, but now I'll add something on top. That's fine. A little pedestal. An eye on top. And there I have a nice intelligence agency. And what you must not forget, of course: spy devices for private home use. The drone. A small cross. At the ends, little wings, so it can move. In the middle, something with electricity, and a few more small lines so it can fly. And whoever would like my robot: also very easy. A box, another box on top. It's a lovable robot. And don't forget these nice arms, little gripper arms. And two wheels on it, so it can drive too. And robots naturally have hearts as well. A heart for robots. Okay, then we'll do this one, and then that one. And: great excitement, quality-wise rather so-so, I've been told. You thought it was good? I'll just keep drawing. Only because of that. Fine, you get to guess. First I draw something like this. 2014. Yes, that's enough. Go on. A middle finger. A middle finger. Okay, we'll see the rest. Come see us afterwards and we'll do a private viewing. 2015. Last year, a year in which I wasn't at re:publica, but I've been told what I sadly and definitely missed last year. Also an important person, one who was a bit more exciting than the other person we just saw. Very important: astronauts always wear full suits. And astronauts are also in outer space.
That's why a few little stars go up here. Alex — it was Alex. And last year, also a very important topic. That's where this nice instrument comes into play. This one, right? The roller. I'll do the other one as well. And in times of WhatsApp vs. Threema vs. Telegram — what's important? Encryption. And very simply: a nice little old key figure. Very nice. 2016 — but one more from... no, the most important one, that one's nice too. The most, most, most important word. We already saw it on this stage earlier. And it runs through all the years. From the early years on there were already plenty of sessions on this topic. And it comes back every year; there's always something about it. Nice and big too. Is that enough? That's actually enough. A few things are still missing, though. And a few more boobs. Boobs? Yes, come on. Now I'll quickly move on. Now you're fully equipped for everything around the topic. Around sex. 2016? Now. Okay. Yes. So what topics do we have this year? What do we have this year? We'll simply leave out Periscope, but here's the thing that's the most fun for us right now. Yay! Okay. A little lid on top, two little arms. And at the bottom something like an old skirt. And the Snapchat ghost is done. And... But it's not wobbling yet. But it wobbles so nicely, exactly. Again a little lid on top. Two little arms. And the skirt at the bottom again. And then it sometimes gets eyes in the app and sticks out its tongue and starts vibrating and dancing. Okay. Good. Yes — tonight, at the latest tonight, and tomorrow evening and the evening after, what will we be doing then? After the work is done for us and after the work is done for you. There's something cool in the back, and beer too. Beer and Club-Mate for everyone else. Beer again. But of course very important: dancing, with a small, nice disco ball. With lots of glitter. Looks a bit like a globe, doesn't it? But it's hanging. That's okay. So, bam, a few more stripes. Now it's a glitter ball. And for everyone who still wants to drink a bit — either from a little wine glass or from a little champagne glass. Cheers. Sparkling cold drinks. We're all looking forward to those. Very good. And then, last but not least. Last but not least. The motto of this year's re:publica. We already heard it this morning at the opening. re:publica TEN — I'll write it down once more. Now I have to concentrate really well so that I don't... Do the outer strokes first. Yes, exactly. That was totally right. And this one now is also for you. Very important: the mirror. The one that tells you — the way Tanya, the other Tanya, not me, already said it this morning: Hey, you are re:publica. You all are re:publica. And in this spirit I'd say we've fulfilled our mission. We've shown you the most important rp-TEN icons of the last ten years in fast forward. We've hopefully warmed you up properly so that you stay fit today, tomorrow and beyond, and above all can sketchnote everything into your books with the pens you've also been given. Have fun. We'll upload this PDF in case you ever want to look up again how that Snapchat thing works, or how best to draw a mirror, or a David Hasselhoff — we'll reveal that one too. Our Twitter accounts are @FrauHoelle and @AnnaLena. Have fun. See you online — until later.
Thank you very much. Yes.
|
We take a visual walk through 10 years of re:publica and learn, by way of symbols and icons, how re:publica history is not just written but drawn. Ideal as a complement to the workshop "Sketchnotes für Einsteiger" – or simply for laughing and drawing along.
|
10.5446/20562 (DOI)
|
So, good afternoon everyone. The idea of hosting this panel about a circular approach for the maker movement was basically to try to bring up a discussion that's really important for this bubble of a maker movement that's growing and growing around the world. It is often worried about how you can make new products, how you can disrupt some industries, how you can make this fourth industrial revolution and everything — and sometimes we forget to think about which kind of system we are building with these new tools, which kind of systems we can make with all these powerful tools that we have in our hands nowadays. And circular economy, as a concept that is becoming more popular, is trying to bring the idea that people should look at materials in a way that products are designed to be designed again, so that everything can be a circle, and we should use things in a way that is more connected to the environment, be more friendly and not produce a lot of waste. So we at Olabi, a makerspace in Rio, have been working with this kind of concept and this kind of idea for almost a year and a half now, and there are more people around the world working on this — we are just some of them, but this community is growing and growing. Our idea was just to show a few examples of how you can use this kind of concept day by day in our innovation hubs, in our spaces, to try to stimulate this kind of discussion — and not just the discussion, but also some actions, some projects and some products that can be useful in trying to make another system, and not use all these wonderful new tools like 3D printers, robotics, electronic components and everything that we are really proud to have in our hands nowadays just to do the same thing and end up with the same concentration of power in a few hands. So this is why we started this discussion, and we could start with Sénamé, actually. We're going to talk like five minutes each, really quickly, because it's just a short panel to introduce the subjects, and then we'll be more than open to follow up the discussion outside later and give our contacts. After that, Mattia will talk for five more minutes, and then I'll finish by showing our project in a few minutes, and we can open it up for a few questions — we don't have much time, but at least we can try to answer some things and then discuss more outside. So: Sénamé, from Togo.
Hello. Okay, thank you — thank you, GIG, for this new invitation. I will talk to you very briefly about what we are doing in Lomé, Togo. So we created the first of this new generation of spaces, which is WoeLab, and in WoeLab we have about 11 startups created there, and I would like to show you some video about these startups. WoeLab is supposed to transform the one kilometer around it: it is supposed to be the energy center for the one kilometer around it, to be the greenery for food for ten kilometers around, to be the space that addresses the waste issue for one kilometer around. So we have one startup, for example, that works on the e-waste problem. We transform e-waste into new machines, we transform old machines into new machines, and the idea is to collect all the e-waste from one kilometer around, build new machines with it, and maybe sell them or send them back to you — because all this waste comes from the West. This startup is WoeBots, and the first project we developed is the W.Afate, which is the first 3D printer made in Africa, made entirely from e-waste. We have another startup that works on the plastic issue, which is Scope. The idea behind Scope is to collect all the plastic from one kilometer around, stock it in the lab, and transform it into new products like accessories, clothes, etc. So that's quickly what I wanted to talk about, and if you have questions about this, just ask. — Hello, hi, good afternoon. Thanks for coming, and thanks, Gabi, for inviting me at the last minute to show a project that I've been working on for the past year — I think we can all get something out of it. The project is called Precious Plastic, and we launched roughly a month ago. What we aim to do is boost plastic recycling. How to do this? We thought that the best way to boost plastic recycling is to empower people with tools to recycle plastic. So we've developed a series of machines that allow technically anyone in the world to recycle plastic, and we basically put it all online for free; anyone can access it and build the machines in a matter of days and with a couple of hundred dollars. And I think a video will do much better than I can to explain the whole process. That's a sign that we need to collaborate. I don't know what to do with the sound. Yeah, I'm just trying to find it, sorry. Sorry everyone, let's do it again. This bottle is made from plastic — actually, a lot of the things we have are made from plastic. It's used everywhere, but it also ends up everywhere, damaging our planet. Which is weird: it's a precious material lying around everywhere for free. We could also turn this waste into something new. Unfortunately, we haven't got the machines to do this ourselves; they're only for the big boys. So we developed machines that enable everyone to work with plastic and start their own little factory. Waste is shredded into small flakes, which will be used to create something new. You can work freely with your hands or make your own molds to set up a small production — it's like being a craftsman in plastic. You can play with the colors you recycle or keep it plain and simple. You can create little containers and pots, make your own functional tools, or use them to create new raw material. Now you can turn plastic waste into something valuable. We'd love to see plastic getting recycled all over the world, so we share the blueprints of these machines online, but also step-by-step instruction videos, lessons about plastic, tips and tricks, and useful templates — a complete package with everything you need to get started, ready to download.
The machines are specifically designed using only basic tools and low-cost materials that are available in every corner of the world, and they are developed modularly, in different parts. This way you can always upgrade, repair and customize them, or even adapt them to your environment. Wherever you live, you should be able to build them yourself or find a local handyman to help out. These machines allow everyone to create new things from plastic, set up a production, start a business, and clean up the neighborhood. With this project we want to try and boost plastic recycling by providing people the tools to get started. So this is all the basic information people need to start their own little recycling business anywhere in the world, and they can download it all for free. But in order for this to have an impact, we need to make sure that people actually know it's now possible and that they can just download it and start. Yeah, as I said, the video can do much better than I can to explain what the whole project is about. Since we launched, we got huge attention globally, and I think that tells us that we have a huge global issue with plastic waste — and so far we haven't been given many solutions to it. We have some solutions to collect the plastic, but we haven't really come up with solutions that could tackle the issue of recycling this plastic and turning it into new objects that could be used in many different fields. So I'll open it up to questions later, and if you have any questions regarding my own work — I was responsible for the website and the identity — don't ask me too many engineering-based questions, as I may not be able to answer them. But I'm also living proof that these machines can be built by anyone, because last week, in three and a half days, a friend and I built the injection machine, which is exhibited just over there, and I have no understanding of engineering whatsoever. So if I could build it, anyone here could too. The machine is in the maker space — the GIG maker space, isn't it? Yeah, absolutely, it's next to the fab lab area. So if you want to check it out, it's just over there in the GIG area outside, where there are some hands-on workshops — the GIG maker space, you can find it around re:publica, and there is one Precious Plastic piece over there showing people how it works. Great, so I will show a little bit of the projects that we have at our lab. Now I have to find it here. I don't know — this German keyboard is not going to make it easy... okay, we can live with it this way.
So, as I told you before, we started to work with some events and some actions connected to this reuse, repair and recycling movement. We basically wrote a manifesto and started trying to spread it around our community and the makers around the maker space — how could they use this mindset to build new products. We hosted different kinds of workshops: like this one, where people built furniture with pieces of wood that you can find on the streets; do-it-yourself craft workshops to build a new jacket from one you are not using anymore, or to repair things so you can build something new and not just throw away the things we already have; workshops to repair guitars, electronic components, or whatever you have. We hosted different things in this way. We also built a really simple system where you can collect rainwater and use it again — in Brazil we were facing a really big water problem, so at that moment it was really important to spread this kind of content, to show that everyone can do something to have water in their spaces and homes, to at least have something to do. And we also ran a program called Gambiarra Favela Tech, which started about seven months ago — we are still working on it, we've done two editions already and we are planning more around the country. Basically, what we did was try to use this gambiarra concept, which is a really Brazilian thing. Gambiarra is a word we have — I won't even translate it, because it's a kind of local slang — for fixing things in a simple way, with simple materials and things you have at home. It's really popular around the country; a lot of people do it, basically because it's the only way they can repair their TV or air conditioner or whatever. And we are working on the idea that this gambiarra mindset — using the materials you have to fix things and to build something with creativity — is a really powerful mindset that could transform a lot of things, not just simple repairs. So what we did was a kind of ten-day artistic residency, where we tried to turn people into gambiologists, which means those who make gambiarras. We worked with the idea that gambiologists are waste collectors, people who use all the materials of our times to give new meanings to those materials and invent new machines that can be useful — or sometimes just play with words, or sometimes just express themselves. So we are not exactly — oh, sorry, I don't know what I did, but I'll try here — we are not exactly working to give people skills to build new projects that solve problems in a more technical way, even if sometimes it ends up working that way. We are much more trying to bring the concept, to give space for people to think in a different way, to have a different mindset, to express themselves — to show there is a new way to do things. So basically, in this program we are much more focused on culture and arts than on technical knowledge or thinking about products. And it was really great to see: for ten days people just went to different kinds of places where you can find waste and then built a lot of installations, and after a while people were saying, oh,
that's so great, because I've been doing this for quite a long time, and now I find out that it's important, that it's great, and that it can be a maker thing — not just something I do because I don't have other resources. So this is why we are trying to push and show that we can use local culture, all this creativity we already have in our local contexts, to promote a different way of looking at the world and to make a better impact. And this idea of focusing on our culture got us connections around the world with people who are interested in the same approach — because sometimes the maker culture approach doesn't focus on local cultures; it just tries to give the same answers to everyone, and sometimes the context is different. So we ended up, in January of this year, in New Zealand, helping some local NGOs, together with the government, to build a project with basically the same concept: trying to strengthen and stimulate the crafts and all the do-it-yourself work which they already had in their community, to make some social change and a better impact around it. And we designed, together with them and with a global innovation consultancy — which was an amazing process, they were leading it — this project called Maker Hoods, which basically tried to bring this concept of repair to the people on the streets around South Auckland, which is the most vulnerable area in the city of Auckland, New Zealand. So those were just a few examples of things that are already working around the world; there is much more we could talk about, but maybe it's better to open it up if people want to ask some questions — we have ten minutes for this. Do you have a microphone? Okay. — Thank you. My name is Stephen Kovats, from the agency for open culture, and I know some of you guys — it's great to see you here again, and thank you so much for this really amazing food for thought. I have less of a question; I just want to give you a little bit of feedback. You were mentioning, for example, the open source circular economy thing — are you part of the Open Source Circular Economy Days event? Are you familiar with that? There are people here who are participating in it, so I'm just wondering. It's an event that takes place in Berlin and in a number of cities around the world, focused specifically on the issues that you take up. So if your cities are not part of that event — which will be in June, I think 9 to 13 — it would be cool to have you in that global network. The other thing I just want to mention is that in this sector of culture we are always using the word open, and in fact what you were talking about, in terms of the urban scenario, is really a closed system — we actually try not to bring extra resources in. So there are times when using the word closed is actually quite good. The whole notion of the circular economy in that urban sense, in terms of resources, is to keep the system as closed as possible in order to reuse, upcycle and repurpose materials and resources — especially resources, because one of the problems of urbanism is the waste of resources. So just the notion that it is okay to use "closed" sometimes as well. That's all I wanted to mention — and if you become part of Open Source Circular Economy Days, that would be cool. — Yeah, I think Lars from the open source... yeah, he's over there, he was here before, I don't know.
I didn't... yeah, he's over there. Yes, he's just over there — he's one of the people behind the Open Source Circular Economy Days. We couldn't be part of it last year because we didn't have time — we found out just a few days before — but this year we are trying, and definitely we need to be more connected globally. Sometimes it's really difficult to implement things and do all the work we have to do locally while at the same time trying to connect with initiatives globally. So it's definitely a great point to bring up, and something I think everyone should improve on — that's the big conversation around GIG: how can we improve this collaboration. So yeah, it's great to be reminded; we should definitely be more involved. — Yeah, I'm Emmanuel, and with Bismo we're from J-Hub in Juba, South Sudan. We also do kind of the same things — we now focus mostly on open source and recycling. But finding the resources for doing the recycling and things like that seems to be very hard. I don't know — how do you find yours? — Adding to your previous question: with Precious Plastic we're going to be at the OSCE Days in June, and we're planning a couple of workshops with Simon, and this time we're planning to actually build a machine there. We're going to gather all the parts beforehand, and then we'll get a small group of people to actually build a machine during the festival, just to show how easy it is to make this machine and get started. So anyone who is interested in learning more about the Precious Plastic machines should join — 7th, 8th of June, something like that. — Your question is where we find the resources? For the e-waste, for example: we now have a lot of e-waste dumps in West Africa. The US and the EU send their e-waste to Africa, so e-waste is now quite a local material — you can find it easily. — Yeah, in Brazil it's the same: you can find it pretty easily, there are a lot of places where you can just go and find it, and if you start looking for it you'll be really impressed by how easy it is, how big the dumps are and how many things are inside — sometimes really useful things. And sometimes we also work with things that we just find on the streets, just wasted on the streets. — Okay, I don't mean the waste; I mean the materials for building the machines. — Yeah, well, Dave has been going to Africa a number of times, Ghana specifically, to check the waste materials there and what's available in that location, so that the machine can be built everywhere in the world, or in Africa. For this we would love to partner with Sénamé, because he knows way better than what we can imagine from here. However, most of the parts needed to build the machines are available globally. There are some issues with countries that don't have access to eBay, or where eBay doesn't ship, but otherwise it's just metal that you can technically find anywhere, and bolts. All the machines are based on the plumbing system, which is available anywhere in the world and is the same across the planet. And on top of this, all the machines are also modular, so if you can't find a specific bit or part of the machine, you can hack it — you can build it according to your needs, your conditions, or what you have available around you. So I think the potential of these machines
is that, you know, we build a version and we tell you how to build it, but then it's up to you to adapt it to your own needs and to whatever is around you, basically. It's just getting started, and as you go along and find issues, you'll be able to adapt the machines and resolve them. It's pretty accessible to anyone, I would say — even though I haven't built them in Africa, but I'm planning to come over. — Okay, so I've been told already that we don't have more time, so we will be around for the next hours... okay. One more question? — Can I? My name is Stephanie, nice to meet you, and thank you for your talks. I'm also interested in the topic of e-waste, particularly informal processing in the developing world. I'm curious whether, in the development of the plastic recycling machines and in your maker projects, there are groups working on machines that can do similar e-waste processing — like lead smelting in a safe way, e-waste disassembly in a safe way — and whether that's something that's even needed, or whether it's just too unsafe, or whether there are any people doing that sort of work. — I am personally not aware of anyone doing anything with metal. Yeah — actually, I was asking whether you know of any projects that do similar things to what we do with plastic, but with e-waste. — I can't think of one right now, but there is a lot of information around the internet. If you go to large events and all these communities around, there are a lot of ideas coming up. Precious Plastic became really popular on the internet the moment they launched the video, because I think a lot of people were impressed by the final product. A lot of people are trying to do things — I've been part of a lot of forums and communities that are trying to build stuff and prototype — but they made something that is a finished product, and I don't yet know of a reference that is that well designed through to the end of the process. But I'm sure that in a few years, month after month, we will find more and more solutions like that — hopefully. So yeah — do you want to finish? — Okay, so yeah, it's time to finish. If you have more questions, or if you want to contact or talk more with one of us, we'll be around. It was great to bring this discussion here — and thank you, Geraldine, and the whole GIG community.
|
"From Brazilian Gambiarra to African DIY and repair culture in Europa, makers are exploring more sustainable production methods around the world. This new mindset can contribute significantly to the drive towards a more circular economy and also to validate a creative work done in the peripheries of the world."
|
10.5446/20571 (DOI)
|
So, hello everyone. Today we'll talk about big data, algorithms and personal information. The motivation for this talk is quite simple. Today, algorithms already control many aspects of our lives; in particular, they decide many things that affect us as internet users and consumers. And soon, algorithms will also make decisions that affect our lives on a much deeper level. Yet most of us don't really know how algorithms work and how they analyze our personal information to make decisions about us. With my talk, I want to change that. I have a technical background, so I will approach the subject from quite a technical perspective; however, I will always try to highlight how the choice of technologies affects the outcome we get. There will be some graphs and technical details in the talk — please don't leave the room, I will keep that to an absolute minimum, and I will try to explain everything that you see on the slides. The talk is roughly divided into three sections. First, I will briefly talk about what big data really means to us on a personal level. Then I will try to explain how algorithms actually learn things about us, and why they probably know more about us than we would think. And finally, I will discuss a few new ways to think about personal data that might actually be useful in the big data age. Usually when I talk about big data, I show graphs that explain how the volume of data that we analyze is so-and-so many exabytes today and will be so-and-so many zettabytes in ten years. This is nice, but I realize it is also very unintuitive, because no one can put that kind of information in relation to oneself. So today I want you to think a bit differently about big data. Think about the difference between our modern industrialized society and medieval societies 500 years ago. One key difference is that today the amount of energy — be it electricity or fossil fuels — and resources that each one of us can consume is higher by a factor of a few million than what we could consume back then. And with data it's the same. I think we are the first generation of people that leave behind a data trace measured in gigabytes or terabytes, instead of maybe a few kilobytes as for our ancestors, who left behind only photos, documents and a few text files. So I think we really live in a new age of data, and I put a smartphone on the slide because I think that's the device that embodies this new paradigm best. It's a small device that everyone has in their pocket, a few hundred grams in weight, yet it measures data across more than 20 different channels. For example, we have the obvious things like the GPS position, the device ID, the phone calls that you make — the duration, the time, the callee — and the apps installed on your device, but you also have things like the temperature, the humidity, acceleration data, your contacts, and a whole host of behavioral data: when you use your phone, when you switch it on for the first time in the morning, when you switch it off in the evening, how long you use it for browsing, what kind of things you're doing. So there are a lot of interesting data streams that we can use and analyze. At the same time, what is also amazing is that today we not only have these data sources available, we also have the means to analyze them. If I gave you $2.50, you could do several things with that today.
You could either go out and buy a nice coffee with it, or you could go to your favorite cloud provider, sign up there, and have one hour of private time with a server that has 40 cores and 160 gigabytes of RAM. Alternatively, you could have 100 gigabytes of storage for one month. That's really impressive, because it means that today we can use computing power the way we use a faucet: if I want to do some data processing, I just turn on the faucet, do my processing, and when I'm done, I close it again and stop paying. So it's really a change in the way we can analyze and process data. In addition, many of the tools that we need for our data processing are also freely available, either as open source or as free images that we can use with these cloud providers. This means that a lot of projects which would have been really, really difficult 15 years ago — like analyzing a human genome, for example — are now well within the reach of single individuals, and in 10 or 15 years I think many of the things that big corporations like Facebook, Google and Twitter are doing will also be doable by single persons. So if you're still asking yourself when is a good time to learn about algorithms, I would say now is a good time, because algorithms will affect your life whether you like it or not. The first step to understanding how an algorithm works is to learn the basics of machine learning, and this is what I'll attempt in the next few minutes. I was looking for an example that would be easy enough to understand but at the same time realistic enough that it could really come from a real-world scenario. What I decided on was to tell you a bit about what we can learn from clicks, because a click is probably the most fundamental building block of the World Wide Web. It's how a user navigates from one website to another, it's how you like something on Facebook, it's how you interact with many of the applications on your phone. So it's a prime form of interaction between the user and the web, and any software today. And here we have our user. He's a bit unhappy that we're doing experiments with him, but no worries — it's just a simulation, so no privacy will be harmed. Our user has an opinion about a certain subject, which we quantify between zero and one, where zero would be the most left-wing opinion and one the most right-wing opinion we could think of. Now this user interacts with a number of articles online, which also express a certain opinion — they can be left-wing, right-wing or moderate. And of course, if you interact with an article, the likelihood that you will like it is higher if the article expresses the same opinion as you have. That means that the opinion in the head of the user affects his behavior as he interacts with the articles we present to him. In our simulation we have 40 users interacting with 60 articles, and we only record whether a user likes an article or not — and then we'll see what kind of information we can learn from that. If we want to plot this, we first need to think about how we measure whether two users are similar, and the easiest measure we can think of is simply to compare the number of articles that any two given users both like or both don't like.
This is what I plot on the Y axis, and this is also what a machine learning system could measure. The more interesting information for us is the difference in opinion — the opinion of the users themselves — which is plotted here on the X axis but which we can't measure. If you look at the data, it looks pretty chaotic; there's a lot of noise, but you can also maybe see that there's an upward slope between the difference in opinion and the difference we have measured through our click rates. Now we can use a machine learning algorithm to extract this information and learn something about our users. The algorithm we use here is called k-means clustering, and it's basically an algorithm where we tell the computer to divide the users into three groups, where each group is as similar as possible within itself but as different from the other two groups as possible. The result is quite interesting: the algorithm returns three groups, and here I again plot the number of users in each group as a function of the opinion of the given user. As you can see, the algorithm has correctly classified our users into left-wing, moderate and right-wing opinion groups. And this is basically the essence of what machine learning is about: we have not told the algorithm anything about opinions or about the different types of articles. We have just given it some click data and asked it to group a number of observations in a way that seems reasonable to it. The emerging information about the opinion of the users was generated by the system itself — it's not something we explicitly programmed into it — and this is what we call machine learning in this sense. What this means for any of us is that the information and data we give to the apps and websites we visit contains a lot more information than we think. To visualize this a bit better, I made a proposal for a redesigned permission screen, because you all know this screen from your mobile phone, where some app asks for permission to read your contacts, to see your identity or to see the installed apps on your device. That data is basically only a means to gain more information about you, and I would really like it if we had a second permission screen that would show us what kind of information the app could infer from these things. And it's quite a lot that you can actually learn from simple data like installed apps and the clicks you make on a service: for example, your religious beliefs, your political and ideological ideas, where you live, where you work, where you study, maybe your sexual orientation, your income, your social class, your ethnicity, your relationship status, and whether you're cheating. I could show a lot of examples where researchers went and extracted this kind of information from social media data; for example, here I have a link to an interesting paper doing this with Facebook likes. So I just want to stress the point that as a user you should try to think about the data that applications ask of you not as something isolated, but as something that can convey additional information about you.
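Returning to the clustering experiment for a moment, here is a minimal sketch in Python of how such an analysis could look, assuming NumPy and scikit-learn are available. All simulation parameters below — the opinion distribution, the like-probability function, the random seed — are my own illustrative choices, not the speaker's actual code:

```python
# Simulate 40 users with a hidden "opinion" in [0, 1] liking 60 articles,
# then let k-means recover three opinion groups from the clicks alone.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_users, n_articles = 40, 60

user_opinion = rng.uniform(0, 1, n_users)        # hidden variable, never shown to k-means
article_opinion = rng.uniform(0, 1, n_articles)

# Liking probability decays with the distance between user and article opinion.
distance = np.abs(user_opinion[:, None] - article_opinion[None, :])
clicks = (rng.uniform(size=distance.shape) < np.exp(-4.0 * distance)).astype(int)

# Cluster the users into three groups using only the observed 0/1 click matrix.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(clicks)

# Sanity check: mean hidden opinion per cluster separates into left/moderate/right.
for k in range(3):
    print(f"cluster {k}: mean opinion = {user_opinion[labels == k].mean():.2f}")
```

The point of the sketch is the same as in the talk: the hidden opinion variable is never given to the algorithm, yet the clusters it finds from raw clicks line up with it.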
Another interesting aspect of machine learning is that we can't always understand how our algorithms make their decisions. To discuss that, I want to talk about a newer machine learning technique that has gained wide adoption in the last ten years. It's called deep learning, and it basically works by mimicking the structure of our own brain in order to learn higher, more complicated relationships between data points more efficiently. The algorithm I showed you before is quite powerful, but unfortunately it's also quite limited, because we have to give it an explicit representation of the data — we basically have to pre-digest everything we feed to the algorithm in order to get a useful result. Deep learning tries to overcome this by letting the algorithm itself build high-level concepts: while it's learning, while it's ingesting data, it can construct its own new representations, new concepts from the data, which it can then use to accomplish its goal. The technique has been quite successful for image recognition, which is a very simple task for people but very difficult for computers. Probably no one in this room would have any difficulty distinguishing the parrot on the left from the bowl of guacamole on the right, because we can see that this thing has a beak, it has feathers, it has an eye and there's some sky behind it, whereas this is a bowl with some fruits and vegetables inside. A computer, on the other hand, would have a much harder time distinguishing the two, because it only sees the pixel values — and in terms of color composition and the shapes in the picture, those two images are actually quite similar. So let's see how deep learning tackles this problem. The deep learning architecture is quite simple — I show it here. Imagine that the information flows from the left of this graph to the right: we have one input layer, where we feed in for example the image data, and then several so-called hidden layers that take this image data, crunch it together, process it and generate new representations from it. At the end of our processing pipeline we have one output layer, which in many cases contains only a single output value — which, for example, would tell us "yes, this is a parrot" or "no, this is not a parrot". As I said, the way the algorithm constructs new representations of the data is similar to what our brain does when we see things: on the left we have the individual pixel values of the image, in the middle we would have higher-level representations of different shapes, for example ovals or rectangles, and on the right we would have high-level concepts such as feathers or the concept of a beak. For this technique to work, it needs an immensely large number of parameters. Typically, for a 500 by 500 image, we would have 250,000 input parameters, and then in each of the intermediate layers we can again have several hundred thousand parameters. All in all, this means we have tens of millions of different parameters in this scheme, and that means this kind of machine learning can only work if we have a very large input data set to train our algorithm on. And the strength of this system is at the same time its weakness, because this parameter space is much too large for us to understand.
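As a rough illustration of where those parameter counts come from, here is a back-of-the-envelope sketch for a small fully connected network on a 500 by 500 image. The hidden layer sizes are hypothetical, chosen only so the total lands in the tens of millions the talk mentions:

```python
# Parameter count for a toy fully connected network on a 500x500 image.
# Layer sizes after the input are illustrative assumptions, not from the talk.
layer_sizes = [500 * 500, 200, 100, 1]  # input pixels -> hidden -> hidden -> yes/no

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params = n_in * n_out + n_out  # one weight per connection, one bias per unit
    total += params
    print(f"{n_in:>7} -> {n_out:>4}: {params:,} parameters")

print(f"total: {total:,} parameters")  # ~50 million for this toy configuration
```

Even with just 200 units in the first hidden layer, the input layer alone contributes fifty million weights — which is why no human can inspect such a model parameter by parameter.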
So this is the first time, I think — or one of the few times — that we have an engineered system where we know all the components but still can't understand exactly how the system makes its decisions. In fact, you might have seen those quite beautiful images from the deep learning project at Google; those are actually an attempt to understand how such a deep learning neural network works, because they are the result of feeding information backwards through the system — giving some signal into the right side of the system and then amplifying whatever makes the signal stronger on the left side. So today we are really in the position of having to reverse engineer our own systems in order to understand them. And this can be problematic, because if we have a system we don't understand, and possibly input data with uncontrolled information content, we cannot know how the system will process it. As Kate Crawford said yesterday, for a machine learning system we always need some human input — some training data that we feed to the system in order to optimize the outcome of the machine learning process. And if this data is contaminated, as I call it — for example, if it contains discrimination against certain groups of people, and the algorithm has information about these groups available through the data we give it — then it can use that to perform the same kind of discrimination a human would. In that sense, the algorithm can pick up our bad habits if we don't control carefully what kind of information it processes. And with the systems we have today, this kind of control is not guaranteed. As a final aspect, I want to talk about why it's more and more difficult to ensure the privacy of users the more data we have about them. To do that, we have to understand what a fingerprint is. You probably all know real-world fingerprints — this one here is from the CCC — but what I want to talk about is a so-called database fingerprint. A database fingerprint is something we can put together from the various data attributes you have, which is suitable to identify you uniquely, or with sufficiently high probability, in a given context. How exactly does that work? The math behind it is delightfully simple. Most of you have probably played the game where you think of somebody famous and another person tries to guess who it is by asking a series of yes-and-no questions. Each of these questions, answered correctly, narrows down the number of people who could correspond to the person you're thinking of. A database fingerprint works the same way: we define a number of attributes, which can be binary or categorical, and determine them for each individual. If we have enough of these attributes, we can generate a unique attribute vector for each individual we want to track. Then, to check whether somebody is really the same person, we take those attribute vectors and compare them: if they're not equal, we say no, this is not the same person, and if they are, we say that with high probability it is. So let's again have a look at some real-life data.
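Before turning to the real data, here is a toy sketch of the attribute-vector idea. The population figure and the attribute values below are purely illustrative:

```python
# Each yes/no attribute halves the candidate pool, so k binary attributes can
# distinguish up to 2**k individuals; ~33 bits already cover everyone on Earth.
import math

world_population = 8_000_000_000
print(math.ceil(math.log2(world_population)))  # -> 33

# Comparing two database fingerprints is then just an equality test on vectors.
known_user = (1, 0, 1, 1, 0, 0, 1, 0)   # eight hypothetical binary attributes
observed   = (1, 0, 1, 1, 0, 0, 1, 0)
print(known_user == observed)            # True -> probably the same person
```

This is the quantitative core of the guessing game: surprisingly few stable attributes are enough to single a person out of a very large population.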
This is a data set from Microsoft Research called GeoLife, and it contains the GPS data of about 200 people, measured over a time frame of about two years. The question is: how easy is it to re-identify single users through their data here? Let's have a look at the data. As you can see, the individual trajectories are plotted over time. There are different modes of transportation: some people are walking, some driving, some even flying. In case you were wondering what city this is — it's Beijing. And in case you wondered who would be so crazy as to give up his or her data for this — I think this is the university district. You can see that this data set is quite rich: if you plot the individual trajectories, you can see that it contains a lot of diverse information about each subject. Here each color encodes a different person. Now, how easy is it to construct a fingerprint from this kind of data? We're going to take a very naive approach by simply putting a grid on the data — an 8 by 8 grid — and measuring how often a given individual has been observed in each of these quadrants. If we do that, we can plot the result as a color-coded diagram. Here you see the results for about 60 — almost 100 — of our individuals, and you can see that the data footprints are actually quite different. In some cases they look very similar, but in other cases we see very different patterns for different individuals. Now, if we want to compare two of these fingerprints, we can simply multiply them together; the result is a third fingerprint that we can then sum up, and the sum we obtain acts, so to say, as an indication of whether this is the same person or not. So now we're going to test how easy it is to re-identify a user through his or her data. Imagine that somebody uses a mobile phone, then throws it away and gets a new phone, but apart from that doesn't change any of his or her life habits. The question is how easy it is to re-identify that person even though she or he has a new mobile phone. We use 75% of our data for training and the remaining 25% to test our assumption. The result is shown here: the rank of the correctly identified user plotted versus the percentage of cases, for different grid sizes between 16 and 1024 grid units. As you can see, the identification rate is quite good already with this very simple method: it's between 20 and 30%. That means that with this very small, very crude method we can already uniquely identify 20% of the users. And if we count all the correct identifications where the user we're looking for is within the first 10 proposals of the algorithm, the success rate is even higher, at about 60%. Remember, this is only one data dimension. In real life there are many more dimensions we could use — your GPS data, your email, your phone number, your browsing behavior, your social network, the installed apps on your phone — so it would be even easier to construct a richer and probably more unique fingerprint. And you can ask yourself whether this is a problem or not.
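A minimal sketch of how the grid fingerprint and the multiply-and-sum comparison described above could be implemented, assuming NumPy. The Beijing bounding box and the normalization step are my own assumptions; the talk only specifies the grid counting and the comparison:

```python
# Grid-based location fingerprint: count GPS points per cell of a bins x bins
# grid, then score candidate users by summing the elementwise product.
import numpy as np

def fingerprint(lats, lons, bins=8, extent=(39.6, 40.4, 116.0, 116.8)):
    """Histogram of GPS points on a grid; extent is a hypothetical bounding box."""
    lat_min, lat_max, lon_min, lon_max = extent
    hist, _, _ = np.histogram2d(lats, lons, bins=bins,
                                range=[[lat_min, lat_max], [lon_min, lon_max]])
    return hist / max(hist.sum(), 1.0)  # normalize so heavy users don't dominate

def rank_candidates(query_fp, known_fps):
    """Rank known users by sum(query * candidate), highest score first."""
    scores = [(uid, float((query_fp * fp).sum())) for uid, fp in known_fps.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)
```

The re-identification experiment then amounts to building `known_fps` from the first 75% of each user's points, querying with a fingerprint of the remaining 25%, and recording the rank at which the true user appears in `rank_candidates`.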
I think it can be a problem, and to illustrate that, I can compare it to something from my earlier life as a physicist: my colleagues in astronomy always worried about the risk of the satellites they send into space being destroyed by junk flying around up there. Today, as you can see behind us, there are quite a few objects in orbit around Earth, and it can happen that two objects collide, so objects get destroyed and leave behind more debris. The most catastrophic failure mode one can imagine is the so-called Kessler syndrome, or Kessler cascade, where the collision between two pieces of junk creates enough debris to set up a chain reaction and basically destroy everything in orbit, making space unusable for future generations. And you can really ask yourself, with all the data leaks we have today — where several hundred million or even billions of user records are published in the wild, or centralized by governments or other organizations — whether there could come a time when it is no longer possible to be private, to be anonymous, on the internet. Because every bit of information about you that you can't simply change — your behavior, your character traits, maybe your facial features — can be used to piece together a full picture of you, regardless of whether you try to be anonymous or not. This leads to the question of whether we should think of private or personal data more like a toxic asset, instead of a precious resource that we need to exploit. This is an interesting notion, which I first read about maybe a year ago in a US blog, and which is becoming more popular with the recent leaks. I think this way of thinking about data has some merit, because it tells us to be more cautious with what we do. When I was learning about data analysis, it was mostly about having fun exploring different data sets and squeezing a few more percent of success rate out of the given data with some new algorithm. What nobody wanted to talk about was safety — whether we could actually do harm with the things we do in data analysis. I think this is something we have to change, because data analysis is now becoming so pervasive that it has a really big effect on real life, and we as data analysts should be careful about how we handle the data. On the other hand, as users, I think what we need today is a better quantitative and intuitive understanding of our data. Everyone already knows that giving away data can be dangerous, but what we still lack, I think, is a good understanding of how exactly our data can be used. So this is something where all of us have to put in a little effort: to understand better what can be done with our data, how algorithms work, what the lifetime of our data is, and how it can affect us in the future. At the last conference where I spoke about this subject, someone asked me whether it would help if we just acted randomly, in order to confuse the algorithms and make them think we're somebody different. I've thought a lot about this, and I have to say I still don't have a quantitative answer as to whether it works or not — but since I think it's a fun thing to do: maybe, why not do it.
To end this presentation on a good note: I really think that data analysis is an amazing tool, and it can really help us improve our lives in many ways that we can anticipate, and in many other ways that we can't even grasp today. So I would like us to try to use the technology to its fullest potential, and to be careful not to destroy the trust in it by abusing it. Thank you very much. — Thank you so much, Andreas Dewes. One question: is John Fass, the next speaker, here already? Please just come up on stage. And if there are any questions, I think Andreas will be at the side of the stage answering every question you still have.
|
When asked about their online privacy, most people think they have nothing to hide. With my talk, I want to show that this is probably not true. To do this, I'll show a series of experiments that demonstrate how easy it is to learn interesting and sometimes very private things about people by analyzing the data trails they leave behind. I will discuss the risk of permanent de-anonymization of user data and propose technical as well as societal strategies that we can employ to protect our privacy.
|
10.5446/20576 (DOI)
|
We have two guests here who will talk about how science fiction can change the world. I'm as keen as you are to know how this works. Welcome: Anne Schüßler — she is a software developer — and Uri Aviv, the director of the Utopia festival, who begins with the keynote. Welcome. Thank you. Thank you. We go first? Sure. All right. I'm a bit excited. Thank you all for coming, and I'd like to thank the re:publica conference and the lovely people at this stage that I've been harassing for the past five, six hours — so thank you. And there's a foundation that has been lovely enough to arrange for me to get here. A little bit about myself: I'm a creative consultant. I'm the founder and director of the Utopia Festival for Science and Science Fiction in Tel Aviv. I was recently program manager at Geek Picnic Jerusalem, and last year I was program consultant at the Frankfurt B3 Biennial of the Moving Image — which means I have thick geek blood. But my first and foremost love is science fiction, and I consider myself a science fiction evangelist. My goal today is to have you all walk out as better ambassadors of science fiction. I'd like to start with the philosopher Slavoj Zizek. This is him speaking at Occupy Wall Street, in Zuccotti Park, back in 2011, where he told a joke to the protesters that I'd like to repeat. I should say I heard it first from Edwin Kupremont, another lovely Utopia speaker, who will be speaking tomorrow, so I encourage you to visit his talk as well. Back to Slavoj Zizek — the joke. So he goes: it's an old joke from Communist times. A guy is sent from East Germany to work in Siberia, and he wants to keep in touch with his friends. He's going to send them letters, but he knows they will go through censorship. So he tells his friends: let's establish a code. If I write you a letter in blue ink, it is true. If I write the letter in red ink, it is false. After a month, his friends receive his first letter, and it is all in blue ink. It says: everything here is wonderful. Stores are full of good food, movie theaters are filled to the brim and show great films from the West. Apartments are large and luxurious. The only thing one cannot find in stores is red ink. So language is paramount. It is key. We supposedly live in free societies, but we lack the words to articulate our non-freedom. The language used by those who would sustain the status quo — the powers that be, mostly those of capitalism and, recently, the war on terror — misuses and distorts the meanings of the words we use to describe our society: words like democracy, information, terror and freedom itself. And we desperately need red ink to voice ourselves. Now, science fiction. It is a highly creative storytelling art form. It is a source of unbound inspiration for tech entrepreneurs, for scientists, for designers. It is a wonderful platform to engage kids with STEM education and the general public with science and technology. But all these lovely attributes of science fiction I'm going to put aside, and instead suggest to you science fiction as our red ink. Now, science fiction creatives have a much harder task than their colleagues in realistic fiction — and we'll do another talk about the paradox of realistic fiction at a different time. Their stories don't take place in the Tel Aviv of the 1980s, or Berlin in the 1950s, or New York in the 1920s. There are no ready-made blueprints for science fiction. They need to imagine, to design, an entire new world.
It can be a future, an alternative present, an alternative past, or a completely new world. And they need to design its physics and ecology, its history and economy, its technology, its pop culture, its language and its slang. And then, on top of that, they need to create compelling characters that will lure us into the story they want to tell. Now, world or scenario building — the what-if question, and I hope Randall is okay with me promoting his book — is the basic framework of speculative and fantastic creativity, and it enables science fiction creatives to ask basic questions about the structure of reality and society. Indeed, for many of them the goal is to debate those structures, be it the nation state, the military, the corporation, the city, the family, religion, citizenship, democracy, police, justice, money, age, gender, sexual orientation, privacy, identity, mortality. All of these and many more have come under the inspection of science fiction creatives. My co-speaker for this talk, Anne, will go in depth with a few examples, and I'll try to quickly cover post-capitalism, anonymity, child soldiers, crime, refugees and space. Time? It'll be fine — ten minutes. So let's start with post-capitalism. The first two science fiction examples were not chosen by me — I'm glad to say they were chosen by Yanis Varoufakis. And I chose him. Over the past year, I became very attentive to what the former Greek finance minister has to say, and in his recent TED talk he imagines a world beyond capitalism as we know it. He refers to that world with two diametrically opposing scenarios, describing them, and I quote, as a Star Trek-like utopian society, where machines serve the humans and the humans expend their energies exploring the universe — and, opposing that utopian scenario, a surveillance-mad hyper-autocracy, a Matrix-like dystopia. Now, these two visions of the future are so well known, so well defined and powerful on their own, that they need little or no introduction by me or by him. And as a science fiction evangelist, I'm proud and quite unsurprised that he chose these modern myths to propagate his ideas. Moving on to anonymity — and perhaps the most well-known example in recent memory: V for Vendetta, of course, the graphic novel by Alan Moore, the film adaptation by the Wachowski sisters. While carrying extremely different messages, the novel and the film both discuss the power of anonymity. The film is currently celebrating its 10th anniversary, and it actually gave the literal and virtual face to an entire movement, first and foremost dedicated to anonymity. Going on: military service. The 1985 Orson Scott Card novel Ender's Game, with a 2013 film adaptation by Gavin Hood, was prescient in many ways. The power of the video game is there; online debates and the blogosphere are brilliant ideas presented in the novel. But one of the major topics it evokes is military conscription and child soldiers, presenting a world where genetic and psychological tests determine whether, from childhood, you'll be drafted into the army. Most wars were and still are fought by teenagers and young adults — as well as, sadly, children, still children. And we should remember that childhood itself is a social construct that only recently came into prominence. Ender's Game suggests an existential threat to all of humanity — a defensive war, a war to end all wars. But isn't that always the case?
Our next topic is crime, and let's recall the 2004 Steven Spielberg film Minority Report, starring Tom Cruise, based on the Philip K. Dick story. The production did a marvelous job working with scientists and engineers to imagine a tech-plausible future for the year 2053. Now, ubiquitous personalized commercials and touch screens have arrived much earlier than expected, and autonomous cars will soon be with us, but the truly interesting idea explored in the book and in the film is that of pre-crime. What if one could stop and arrest people a few seconds prior to them committing the crime? Now, the ability to do that in the story is fantastical, but the idea is now explored by law enforcement agencies from the US to China, and by anti-terrorism and intelligence units everywhere: utilizing big data analysis to look for patterns and calculate whether a community, a group of people, or an individual has a chance of committing a crime or a terrorist act. It would definitely cut lines at airports and at major sports events, it would increase profits. But what would it do to the average non-white person, or the ex-convict, or any member of a marginalized group? And what does one say when confronted with the fact that these methods will save lives and stop terror attacks? Once again, the need for red ink. I would be remiss if I did not speak about mass migration and the refugee situation, with two magnificent examples. The first would be District 9. The 2009 film by South African Neill Blomkamp takes the alien invasion story and turns it on its axis, telling a story not about attackers or infiltrators, but about helpless refugees. The marketing campaign that preceded the film contributed immensely, as you can see, to the film's conversation about segregation and racism. And remember, the director is South African. The second film I want to explore with you is a work of art: Children of Men by Alfonso Cuarón, 2006, a masterpiece of cinema and science fiction. It asks a very simple yet very profound what-if question: what if, all over the world, women stopped having children, stopped having babies at all? With that single question, Cuarón takes us on a journey into a bleak future devoid of laughter, devoid of naivete, and with no hope for the future. And it's no surprise that the most powerful scenes depicting massive migration and a refugee crisis from recent cinematic memory are from that film. It's an image from that film, obviously. Now, a little bit about space, because I've been very bleak. Apart from the Star Trek utopia, most of these visions are dark and pessimistic, and there's a reason for that. We're all interested in what can go wrong. We're all interested in critiquing how things are today, and not big on positive visions for the future, and so is our science fiction. Not to mention that science fiction is storytelling, so it requires conflict. And thus utopias will appear more in philosophical essays than they would in science fiction narratives. But for a recent, somewhat positive outlook towards the future and the human spirit, I should definitely mention Interstellar, a sober and inspiring, I felt, return to space travel, post our disappointments from the visions of the 20th century's science fiction space escapades and the NASA missions. Whether we like it or not, our future in the long run is in space. The cinematic return to the stars, Gravity in 2013, Interstellar in 2014, Matt Damon in The Martian in 2015, corresponds with the new space initiatives of recent years.
Whether it's Elon Musk's SpaceX mission to colonize Mars, the Google Lunar XPRIZE returning us to the Moon, or the newly established Breakthrough Starshot mission, led by Stephen Hawking, aimed to reach Alpha Centauri with unmanned, as yet unmanned, unfortunately, spaceships in less than 40 years' time. To start summing up and setting up Anne: science fiction is a laboratory for big visionary ideas, a place celebrating the possible, but even more so the impossible, the grotesque, the taboo, the ludicrous. The what-if question may seem naive, but it is highly subversive. It is the possibility for possibilities, the radical notion that things actually can be different. The only way to discover the limits of the possible is to go beyond them into the impossible. That's Arthur C. Clarke, and that's the heart of science fiction: a voyage of exploration, a challenge to our conceptions. Science fiction creatives are explorers. Every step they take into the unknown expands our imagination and our language, be it Big Brother, cyberspace, virtual reality, the computer virus, the technological singularity, the robot, Newspeak, hive mind, pre-crime, the Matrix, and I could go on and barrage you with so many other examples. They enable conversations that we were not able to have before. They are the purveyors of our red ink, which we so desperately need. I'll end with a quote from Bruce Sterling: If poets are the unacknowledged legislators of the world, science fiction writers are its court jesters. We are wise fools who can leap, caper, utter prophecies and scratch ourselves in public. We can play with big ideas because the garish motley of our pulp origins makes us seem harmless. Very few feel obliged to take us seriously, yet our ideas permeate the culture, bubbling along invisibly, like background radiation. That's me. APPLAUSE Can you hear me now? All right. Yes, thank you very much. You and I were kind of... how do you say? We didn't know each other before. We just had a very similar topic, so that's going to be interesting. I'm coming from a more personal angle, a little bit. That's what we decided. And from a more literature angle. And I decided to call this: why science fiction is good for us. When I told my husband the thesis that I was going to present today, he said, well, that's interesting, do you think you can bring that across? He is very critical. He doesn't let me get away with things that I just say, like, I think that's that. I'd like to start with saying something about me and science fiction and fantasy. For me, it's both. I like both science fiction and fantasy. I'd like to focus a little bit on science fiction, but I think most of it is true for both genres. I'm a big reader. I started reading before I got into preschool, for some reason. And it was only at the age of about, I don't know, 23, 25 or something that I realized: oh, I think I like science fiction and fantasy, because those are the books that I like most. It wasn't like I thought I was a geek. I just suddenly realized that this is the genre I like. I didn't pick these books on purpose. And that's how I got into all of this. I'm now reading books with an online book club, and it's a lot of fun to compare all of our ideas about science fiction and fantasy. They're still my favorite genres, but I'll read books from all over the place, so it's not just that. And I know there are a lot of prejudices against science fiction and fantasy.
It's a lot about: is it escapism, and is it all just entertainment and made-up worlds? It doesn't have anything to do with the real world. And I don't think that's true. They are made-up worlds, but they can tell us so much about the world we live in and the society we live in. They give us so many ideas about how to change things and how to make the world better, and that's what I'm going to tell you all about today. How do I do this? Ah, yes. This is a quote: Reality is already there. Why should I picture it? It's from an interview with a very famous German fantasy writer, and you're going to have to stay until the end, and then I'll tell you who it is. That's how I'm going to keep you here. And I think it's a very good quote. It's the exact opposite of what a lot of people say: why do you read about made-up worlds? And he said (so he's a he, now you know that already): well, it's already there, why should I picture it? I'm going to make up new stuff and not write about the things that are already there. When I prepared for this talk, and like I said, I'm in an online book club, I asked around. I said: do you have any examples? Do you have any examples of books that meant a lot to you, that taught you something? Do you have any examples of moments when you suddenly realized: oh, this story gives me so much more than just the story, it gives me an idea about something in the world today. And this, I think, is a long quote, I'm sorry, I know you're not supposed to do this, but here's someone who actually made a lot of remarks about David Brin. I'm not sure if you know him. I read Existence because my boss told me it's a great work. I usually have bosses who read science fiction and fantasy and make me read their favorite books. That's how I had to read six books of George R. R. Martin, because my boss wanted someone to talk with him about that. And Existence really is a great book. It has so many ideas. And he said it's wide-ranging: it takes a look at the internet, at journalism, at social media. It looks at climate change, at genetic engineering. There are so many topics. And David Brin invents worlds that are not dystopias. They're kind of like utopias, even. But of course there are things wrong with them, because there has to be some kind of conflict, otherwise the story doesn't work. And I'm actually going to focus on two parts that I think are important today. One is surveillance. That's the Plaça de George Orwell in Barcelona (I don't speak Spanish), and yes, I think you get the irony of this sign. And when I look at today, I get why a lot of people don't understand why it's very important to question the little changes, to ask: why do we have to have cameras everywhere? Why do we have to have surveillance everywhere? Because it's so easy to say: well, I'm not doing anything wrong, it doesn't hurt me, I want security. And I get that. I really get how a lot of people don't realize that all these little changes take freedom from us. Because they're just little. There's just one camera. It doesn't hurt me. I don't have to go there. I'm not robbing anyone. I'm not killing anyone. So why should I be worried about this camera? But if you read a lot of science fiction, you see where it all ends. And then you realize: okay, maybe we have to be more careful. Because now I don't think it's dangerous.
But if we install that, and then maybe we watch that, and then maybe someone installs some kind of surveillance there, and suddenly everything is watched, and then you don't have any freedom anymore. There is a lot of young adult fiction nowadays that is really always dealing with dystopia. And I'm kind of a young adult fan, I like reading these books. And a lot of them deal with exactly these topics of losing your freedom because everything is watched. You just can't be yourself, and you can't do anything that's remotely different from anyone else. And that kind of schools you in thinking: maybe it's not that bad now, but where will we end up if we keep on doing this for the next 10 or 20 years? And I don't think I want this world. This is maybe the best book you can read now about surveillance. It's Little Brother, by Cory Doctorow. And I think it must still be free on the internet if you want to read it, because Doctorow is a really cool guy. And this is a quote from the book that says: it's not about doing something shameful, it's about doing something private, it's about your life belonging to you. And that's what all these surveillance topics are actually about. They're not about being on camera, they're about taking your freedom from you one step at a time. And here's another example. Maybe some of you remember something that Eric Schmidt said, in all these Street View things. He said: if you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. And I think that's a very wrong thing to say. And I think he retracted it a little bit afterward, or at least said, well, it's not actually what I meant, but maybe it was what he meant. And I think it's important to realize the difference: of course it's not about me doing something wrong, it's that I don't want to be watched in all of the things I do, even if they're not wrong. They're private, they're mine, nobody has to know about them. This is another great book, I think. It's set in the near future. Has anybody read that? Just a few; you really should. Although I've heard different opinions, I really liked it. It's set in the near future, in New York. They have those little devices called äppäräts, and it's a love story, basically, but everybody knows the other person's credit score, and also, I'm not sure if that's a word I should use, their fuckability. So they have scores for that. And if you're someone who's not really cool, not really rich, you have a problem in this world, because everybody knows that. And that was the book that really got to me, because all of the things you notice there are already here. All through the book you keep thinking: well, we're not quite there yet, but maybe in five years, maybe in three years. This future is so close that it starts to scare you where we're going. And there was recently an app, I don't remember the name, but it was about rating your friends, or rating people. So not just your job or something, but really about rating people. And I think there was a lot of backlash, and they realized, okay, maybe that was not a good idea, that everybody could rate anybody else and say, well, he's a jerk, and then put it on the internet for everyone to see.
And that project, I think, kind of got shut down. But when I tried to find out what it was, I stumbled upon this, and this actually seems to be a thing. It's an app where women can rate men for dating, and as a woman, I think that's maybe kind of great, but I'm not sure if the general idea is a good one. We'll see how that plays out. Preparing for the talk, and I'm going to move to another topic now, I was really, really glad when, in a podcast, I heard a fantasy writer talk about why he writes fantasy and not realistic books. And this is what he said: the question of what is evil gets clearer when we don't talk about terrorism, old age, poverty, or racism directly, but shift it to a world where all of this can be felt, but you can't put your finger on it and say: I know this, I read about this in the newspaper, I understand this. And it's exactly this alienation that helps us understand what it's really about. I translated it from German, so I hope all of this gets across. When I heard this, I thought: okay, I have this quote, and that's my talk summed up in one quote from another person. That's great. And actually, how many minutes do I have left? Four minutes, okay. I read this book. It's not a fantasy book, it's not a science fiction book. It's a German book about refugees, and it was nominated for the German Book Prize. It's a great book, but I had a problem with it, because I noticed that I was comparing it to my life. I was comparing it to my experiences, and I was starting to question the book and its ideas, because I was constantly caught up in trying to find out if it was real, if it actually held up to what I thought was happening in the real world. There's another, simpler example, something more superficial maybe: when you see sitcoms. This is apparently Carrie Bradshaw's apartment, and you know the prices of apartments in New York, and you say: how can she afford that? And it takes you out of the story, because you think: this is unrealistic. This is maybe great for the series, but once you try to compare it to real life, you notice all the little things that don't add up. And that's a problem with books and series set in the real world, because you have the real world right there and you can compare. This is me. This is more awkward for me than for you now. And this is me with my history, with my experiences, with the stuff I know (I don't think knowledge is the right word, I'm not a professor), and my prejudices. Of course, everyone has prejudices. And all these things add to what I can take from a story, because I'm constantly comparing it to my history. I have my knowledge, I have my history, my prejudices. And this is the book by N. K. Jemisin. It's very great, and it's quite recent; I think it was published in 2015 or 2016 even. There are people there called orogenes, and orogeny is some kind of power: you can sense the earth and the faults and the shifts. And there's a derogatory term for them: roga. And when you hear the word roga, you notice there's something there. And this is also from my online forum. It's Joanna who said that the term roga means inhuman, if not lesser. She said: I'm white, and I'm never going to completely get it, but I like to think that this book helped me get it a little better. And that's a good example of why fiction has value besides just escapism and entertainment: it can also promote empathy.
And that's what I think it's all about: empathy. It's about understanding something, and sometimes this works better if you take something completely out of your own context, if you give yourself nothing to hold on to but a completely new world, and then you have to make up your mind about all these things and get an idea of what it's really about. Here is the quote again. It's Walter Moers. He said this in an interview in 2001 with the German newspaper Die Zeit, and this is one of my favorite quotes of all time. So thank you, that was it. And I think we don't have any time for questions. Thank you. Thank you very much. If you have some questions for them, please, because we are running out of time, catch these two persons. I guess you're still around; catch them face to face and get your answers.
|
In this session we will explore how science fiction is important for civic society, as it is at the forefront of the freedoms of thought and expression. Science fiction explores the ludicrous, the impossible and the unthinkable, and by doing so it expands the possible and, eventually, the plausible, probable and real. We will talk about how, by taking a viewpoint a bit more detached from current events, we are able to see different concepts of society that might seem utopian, but which could provide us with new ideas regarding how to handle, or at least understand, problems in the real world.
|
10.5446/20580 (DOI)
|
Hello. Yeah, I'm very flattered to be here. Thank you for coming. Today I will talk about my favorite subject: inflatables, and why they are such a great tool for actions and political organizing. As I was preparing this PowerPoint, I was thinking: what do inflatables have to do with immersion? And then I thought about my first experience where I got really excited about inflatables. That was in 2009. I cycled to the Copenhagen climate conference to join the protests during this big summit about climate change and what to do about it. And there was this no-border demonstration. It stagnated in front of the parliament in Copenhagen, and no one really knew what to do. And then suddenly they unleashed the ropes of this big balloon. The police tried to hold it, but then the wind caught it, and people started to run with it. They were choreographed by the wind. It was like a big social amoeba floating through the city, going up and down. And people said: this was the best experience ever, this was the nicest protest. And I was also part of this running, and that was an immersive experience. There I also saw the power of inflatables: as an object, they can really create spontaneous crowd unity. Something that trade unions work on for years, people just do in a second. So, yeah. Then the next year, there was the climate change conference in Cancún, Mexico. And as artists asking questions about how we can contribute to society, how we can be kind of useful, we were thinking: what can we do, how can we do something for these protests too, as we did in Copenhagen? And that was actually the start of Tools for Action. We did a 10-day workshop. We invited all kinds of people, and we made this 12-meter inflatable hammer. It was very intense work. It was a bit like a party; all kinds of people helped. Then we put it in a suitcase and sent it there. So we stayed in Berlin, but the hammer went there. We had contact with a Mexican activist group, and we didn't really have a clue what would happen. But this happens. And, yeah. While delegates are trying to hammer out a deal on climate change at UN talks in Cancún, Mexican protesters have been carrying a giant silver inflatable tool down the road outside. On Wednesday, one group arrived with a 12-meter inflatable hammer. The blow-up hammer was sent by the German-based Eclectic Electric Collective of artists, hoping to use it to symbolically stamp out the talks. They say it's to allow the demonstrators to symbolically stamp out the talks. The conference should be cancelled. But then I see a silver hammer pushed by the fucking resistance that broke the spell of the state's invincibility and gave a nice reminder to all those who attended. That's the right gear, Claude. Police were in no mood to deliver the hammer to the delegates. As it was chucked over the gate, they descended on it, tearing it to pieces. So what you saw was a compilation of different media footage. What happened was that this group took the big inflatable hammer and ran with it to the fence of the conference complex, and then they threw it over, just like that. Police tore it to pieces, and there was a Reuters cameraman, and within two, three hours the inflatable hammer became the icon of the protests of the day. So there we suddenly understood how inflatables can create media spectacles.
The inflatable is almost like a media spectacle itself: it blows up into giant proportions and then deflates again as if nothing happened. And yeah, basically we were art students, we had a bunch of press contacts, and we were just spamming our press release all the time. We didn't know, and then this happened. And that was the beginning of Tools for Action. And we made different tools. For example, here in Russia, with a Russian art activist group, a socially engaged group called Partizanin, we made this 10-meter inflatable saw, because the saw represents corruption, how it divides the budget. And it was one year after a year of silence; there had been no protests in Russia. Pussy Riot was in jail, and 30 organizers had also been in jail. So this was the first demonstration, and for this it was really good to come with something surprising, something the authorities don't know how to cope with, and that's why it works. And yeah, sometimes it didn't totally work out the way we wanted. I was in a project in India with Tilly, my collaborator, who is here, and we somehow got involved with a theater group. It was just the time after the Delhi gang rape, when a woman was raped in a bus by a group, and the response of the authorities was very patriarchal, and there was a big feminist upsurge. So a lot of women started to protest around the country, and with one theater group that was doing work around domestic violence, we made this slipper, because the slipper is also a tool. If you get hit by a slipper, it's a big offense in India, because the underside is dirty. So we tried to find symbols that work in the local culture, and even when the object didn't totally work, it still kind of worked. Another example was the inflatable cobblestone, which we first tested in a general strike in Barcelona. The whole city was on strike because of austerity cuts, and then two weeks later we tried this as well at the First of May demonstration in Berlin, just to try different creative tactics, because this First of May thing is always a ritual that happens in the same way, and we thought: hey, let's try to change it. Tilly brought one already. Look, that is it. And yeah, this is the movie that shows what happened. And it was interesting, because at first even the people from the demonstration didn't know what it was and were not sympathetic to it, and then the whole dynamic changed. I'll try to show the movie. No, it doesn't work. Shall I show it a different way? One second. Can I get in there? Here we go. Okay. George, George? Come on! Come on! You hear that now? Come on! Come on! Come on! Come on! Come on! Come on! So basically we did a performance, a theater piece intervention, in this demonstration and mixed it up a little bit. And yeah, I'm very interested in how these objects create situations. Adults suddenly become children, because they get reminded of their childhood, when they were playing with big balloons, and suddenly there's this big balloon. So we're playing with scale and proportion, and with how the street turns into a playground, and it is de-escalating; all kinds of things play into it. Yeah, are there any questions? Maybe later. Okay. And basically this tactic then got replicated by different groups. Yeah. So I'm getting into the PowerPoint again. One second please. And now I would like to make a jump. This was in 2012. And as you see, we somehow got invited to participate in protests just to make a performance.
And the last one was in December, for the United Nations Climate Change Conference in Paris. It's an important summit: reducing emissions, making binding agreements. And yeah, being skeptical about this whole process, the group decided that at the end of the conference there would be a big day of civil disobedience. And how were we going to do this? We were going to do this with inflatable barricades, because Paris is actually the inventor of barricades. The first barricades are from the 16th century. The word barricade comes from the French word barrique, meaning barrel. Hollow barrels were rolled out into the streets, filled with stones and secured with metal chains. And these were the first barricades. Then, after a while, this tactic spread through Europe. And the French people are really proud of this heritage, so it's again a symbol that really resonates within French culture. Here are some examples of barricades: at the left, from the Paris Commune, 1871, and at the right, the Second World War. Even in the Second World War the French were building barricades. And when we were there, there had just been the terrorist attacks, and there was a ban on protests, with the argument that when many people come together, it is dangerous, so that's why protest is not allowed. We saw this as a suppression of freedom of speech, of the right to protest. But basically this whole state of exception really mixed up the alliances and the politics within the protest parties. So then we as Tools for Action decided: we make the inflatable barricades, but we send them out to the world. We exported them for this day of disobedience. So we sent them to New York and Westchester, which is north of New York City, where they blocked the construction site of a fracked-gas pipeline. And the protests happened in New York and, at the same time, in Portland and in London. And now there are also different groups making this inflatable barricade around the world, because we exported it. So there's a really active London group. And here in Paris, although there was a ban on protests, it still happened. And actually it was very peaceful, and contrary to what you would think, it was actually a party. I can show you the video. This was shot on a phone, so it's an immersive video. So basically this is a barricade of the 21st century, because it's light, it's mobile, and it's not secured with metal chains but with Velcro. And you can transport it all over the world, so it's more fitting for a globalised economy. And this was in December, and now I would like to tell you about the future project that will be in Dortmund. On the 4th of June, there will be a big neo-Nazi march happening. The organisers call it the Day of the German Future, and they demonstrate against the alleged alienation of German society. Basically 1,500 neo-Nazis are expected to come to Dortmund from all over Germany. And the Dortmund citizens really want to do something against it, but they don't really know how. And this is how we get in: by creating different ways people can engage. One is that they can build inflatables with us, and now we give barricade workshops in schools. We do the workshops in the theatre, and then we do trainings every Sunday. And we also do a crowdfunding campaign, so people who cannot make it there can still support it in a different way.
For me, the school workshops are the most interesting, because thinking about future generations, the way to do something against the rise of xenophobia, right-wing politics and right-wing extremism is by working together with their main target audience: 16-year-olds, 20-year-olds, adolescents who are still finding their identity and then maybe slide into extremism. So by working with them and letting them really take action, I am really excited; I think this is the thing we need to do. Because I felt it doesn't help to chew over the same historical topics again and again. And yes, we all know history was bad, but we need to change political attitudes and ways of working. So now I want to show you one of our first workshops. This was Thursday last week. At the end of the workshop, we had all built this. And these kids are from the Phoenix Gymnasium; they're from a working group called TV Courage. So basically a workshop consists of making the cubes in four hours, then going outside into the schoolyard and doing a training with them. And this training I want to do with you in the afternoon, at two. Because what I find very interesting is that you have to put the barricade together and then you can work with it. You can do different formations and choreographies, and this creates instant cooperation, working together. And so you start to get a sense of collective action on a very basic physical level, just by working together. And the fronts of the inflatables have mirrors, so that if we intervene in the march to block it, we put up a mirror against the xenophobia in our society, so to say. And I have a last crowdfunding movie. The crowdfunding just went online on Friday, and I would like you to support me, maybe after the Q&A. And here's the movie. On June 4, 2016, around 1,500 neo-Nazis want to march in Dortmund to demonstrate against the alleged alienation of German society. We don't want to give such brown thinking a place in our city. It's just great, it's just beautiful. The project is by Tools for Action, planning together with the Schauspiel Dortmund a lively action: an inflatable mirror barricade. We want to build 200 cubes with civil society, and we need your help. I find it very exciting that we are building a community here, with the construction of inflatable cubes, so that we can block the neo-Nazi march. The goal is to prevent the neo-Nazi march in Dortmund and to hold up a mirror to our society. To be able to successfully perform the action, we are looking for supporters to help us with the financing of the material. Built together, the 200 cubes become the mirror barricade in Dortmund. Infos at the Schauspiel Dortmund, at mirrorbarricade.de and at facebook.com. So please retweet mirrorbarricade, come to our crowdfunding campaign. We need support, and I'm very interested in all your questions. Thank you very much, Attila. We immediately open for your questions. We would like to ask you to always speak into the mic, because we're streaming live and otherwise you won't be heard by the wide internet audience. Hello. I just have a short question about the lifetime of one of these cubes, because I guess it depends on the riots that it gets into. I would like to know how long the lifespan of one of these cubes is, and how many cubes you need to hold up a situation where people actually interact with it.
Oh yeah, I can. So they are ephemeral objects, and that is also the beauty of it: it is fragile. The fragility is the power of it. The lightness is the power of it. And in the demonstration in Paris, people were playing with it for hours and hours, a whole day of intense playing, and we still have the cubes. You get holes in it, but you can repair it. Yeah, so it's a bit like this; it's not forever. And as you have seen, we also plan interventions or performances where the object gets destroyed, because that is part of the theatrical drama. Hey, I like the idea of how those inflatables are temporarily reshaping public space. I was just wondering whether this technique has the challenge that corporations and marketing activities would also adopt it, and that this way it would lose its effect, or whether you've seen any response from that perspective. So this is basically the main issue. The Macy's Thanksgiving Day Parade started in New York in the 1930s; basically it was promoting the shopping time before Christmas, and this parade is still going on. One of the groups I'm very interested in is from the 1970s, the Eventstructure Research Group. They did events and, oh, what happens? I think your time is up. Ah, okay. I wanted to show that picture. Anyway, so they did events, and then afterwards all these events got incorporated into event marketing in the 90s. So what we try to do is have subversive content and create a subversive situation, because what advertisement is always doing is more like doing a parade or having it as an installation, something like this. And what we try to do is create situations of dialogue, of interaction, or create decision dilemmas to really provoke a kind of reaction. But I think provoke is not always the right term. I'm sorry, but we have to finish now. I wish you the best of luck in Dortmund, and it's not far away if you want to go and play with that. Yeah, well, come at two and try the inflatable barricade. We have 17, and we're going to do analog Twitter, because basically every cube will be a letter, so you can make your analog hashtags. So I would like to invite you to come to the relax area and we will have some fun. Thank you. Thank you very much. Great having you.
|
Civil disobedience with inflatable cobblestones? Those are the tactics of the collective "Tools for Action". This talk will address their artistic practice of linking action, politics and art, and reconnecting them with life.
|
10.5446/20581 (DOI)
|
Thanks for the introduction. I'm pretty excited; this is my first time at re:publica, and thanks for the opportunity to present some of the concepts that we have thought about in the last year. So before I really start with my presentation, I see a few familiar faces. How many of you have been to the Bosch exhibition area where we have showcased our show car? Okay. Good. So I'll talk about this Bosch concept car during my 40-minute presentation. The way I'm going to present: I'll start with the major trends which are impacting the automotive industry, and then I'll talk about what we call beyond driving. I hope something comes on the screen. Yes. So, beyond driving. The topic beyond driving is about how the cars of the future are going to look. I'm going to talk about how automotive interiors are going to look, and we at Bosch call it the third living space. I'm going to explain what we mean by the third living space, and at the end I'll show you some of the use cases. If some of you have not really been to the Bosch show car, then you can see some of the use cases right here. So let's start off. Automated or autonomous driving is something most of you have heard of. This has the potential to be one of the most disruptive innovations for the automotive industry. The technology for automated driving has progressed so fast in the past years that we no longer wonder whether there will be driverless cars in the future. It's not whether, but when, it will happen. And when it happens, this potential disruption is going to impact the way we travel and the way business models are going to look. It has an impact on traffic congestion, emissions and so on. So what you see here is really a vision: driverless cars. But the way towards driverless cars is not going to happen overnight. There are going to be different levels of automation. For example, the regulatory authorities in the USA have defined five levels of automation. Level zero is completely manual, and level four is completely driverless. In between, you will see the automation levels in the car progressively moving forward, where the driver still has some kind of responsibility in the car. So one typical example could be that you are in a traffic jam and the car wants to take over control from you, because however passionate you are about driving, when you are stuck in a traffic jam, your passion comes down considerably. And when this passion comes down, if the car can take over that part of the drive which is really boring, where you would rather do something else, that would be something very nice to have. So now, for the user inside the car, what does it mean? When the car says, I will take over from you, fine, I can give control to the car. Now, when the traffic jam is over, the car has to give back control to me. During this time, what am I doing? Am I reading a newspaper? Have I gone drowsy? Am I sleeping? These are the kinds of factors which we should consider when the takeover and handover functions take place. So I will go to the next trend, which is electrified, and which has been there for quite some time.
But in the next 10 years we will see a considerable amount of change coming here, especially because of the advancements in battery technologies: whether it's in terms of cost, volume, size or weight, we'll see considerable improvements. Also, there will be more and more regulatory restrictions coming on emissions, and this will also accelerate the push towards electrification. And finally, the third key factor will be the development of the infrastructure. So these three key factors will really push the automobile industry further in this direction. The third trend I wanted to talk about is connected. Connected means always online. It's not just about accessing your apps and your social media in the car; it goes a little bit beyond that when you talk about the car. When we talk about connected, we talk about predictive driving, which means that you get real-time data from the Internet about what lies beyond your line of sight. Which means: you're driving, and beyond the curve in front of you, is there a traffic jam building up? Or in the tunnel in front of you, is the road getting more icy? These are the kinds of things which you get from the Internet in real time, and they help you to drive much more predictively. It also brings about a kind of predictiveness about how much range you have. So if you have an electric car, you can also determine: if there's going to be a traffic jam, what does it mean for me getting to the destination? So connected really brings a lot of real-time information which you can use for planning your current journey. The second part of connected is what we call predictive diagnostics, which means that the functioning of the components of your car can be monitored on a regular basis, and information can be given to you ahead of time about whether you need to do preventive maintenance on your car: when do you really have to take your car to the service station? These are the kinds of services or functions which we will see in the future. Of course, there are challenges. When we talk about connectivity, one of the key challenges is security: whether someone is going to hack into my car. Can this person start operating the car, sitting in his small room in a hotel? These are challenges. From the security perspective, there are a lot of advances happening at the automotive manufacturers and at the tier ones, where we are looking at security in a really holistic fashion, which means that we look at security in the way we develop our products, in the way we manufacture them, and also during operation, whether it's connecting to the cloud or giving updates of software over the air. So these are things which are happening in the area of connectivity. Last but not least is the impact of consumer electronics on the automotive industry. I talked about our show car, the Bosch show car. Its so-called world premiere, I would say, was at the Consumer Electronics Show in Las Vegas. The reason why we did it there is not just because we wanted to, but because all the automotive manufacturers and suppliers now showcase their latest technologies at CES. So the automotive world, the number of functions, the kind of electronics and hardware that's coming into the automotive world, is strongly influenced by the consumer electronics world. So we showcased this in the consumer electronics world.
But there is a major gap which we see between the consumer world and the automotive world. If you look at your smartphones, you see a new model coming out every six months. The lifecycles of consumer devices are very short. However, when you have a car, the lifecycle is pretty long. You might not like your smartphone and chuck it out three months after you buy it; however, the lifecycle of a car is much longer. You'd ideally like to hold on to it longer than those three or six months. Now, the challenge here is that users are used to getting new features and new updates on their devices, so you would also expect to see new functions in your car, in the way the car looks. You would like to see that. So this is one big challenge which we in the automotive industry are looking at. So these were the key trends which are impacting the automotive industry. Now I'll come to the core topic of my talk, what I called the third living space. What is this third living space we are talking about? We see the third living space as a space which redefines the boundary between the car, the place where we work, and our home. The third living space is a space where you, as a driver or a passenger in a car, feel more secure, more safe. The third living space is a space where the interiors are so personalized for you that you feel it's almost an extension of yourself. The third living space, we feel, is a space where, when you sit inside the car, you have a seamless interaction with the different devices in the car in a very holistic fashion. Now, all these were kind of marketing statements. So let me go a little bit into the four key areas which define this third living space. One aspect is what we call no frontiers. What does this mean? I talked about connectivity earlier, and about how connectivity can enable different features inside the car. No frontiers means that the boundaries which exist today are slowly disappearing: the boundaries between what I can do in my car, how I can connect from my car to my home, from my car to the place where I work, between the real world in the car and the virtual world, or the communication between cars, or from the car to the infrastructure. These are all the areas connectivity is affecting, and we believe that in the future these boundaries will slowly disappear. I'll give you an example of how connectivity and automated driving can come together. I talked about a traffic jam earlier. Again: passion low, traffic jam, I'm sitting there. The car asks, please, can I take over? I say yeah, and the car starts driving by itself. Now I have a whole lot of time with me. What do I do during this time? This is where no frontiers comes into the picture. I decide whether I want to look at how my home is: is there someone around my home, is it secure? Or I say, should I access some documents that I was working on? Or you can go to the extent that you say, maybe I'll do a video conference with my colleagues from my car. These are the kinds of features which we do not think about today, but these are the features which could be made possible in the future. I'll go to the next one: I take care of you. The car says, I will take care of you. What does that mean? It's all about safety and security.
If you have noticed, when you get into your car, most of the technology that enables these kinds of safety functions is all under the hood. You will not be able to notice it. But with the number of sensors, maybe radar sensors, video sensors, laser sensors, ultrasonic sensors, there are so many sensors being integrated in the car today that the car is able to have a fantastic perception, a 360-degree perception, of your environment as you are traveling, far, far superior to our own perception. When you are driving, your perception is here; maybe when you look here, you're here, but you'll not be looking behind. So the cars are equipped with so much technology that they're going to give you this perception. And how can we use this perception to make you feel more secure, especially when we go in the direction of automated driving? I'm not sure, this seems to be an elite crowd: how many of you have actually sat in an automated car? Great. One, two, not bad. That's pretty good. This transition of going from a manually driven car, where you have complete control, to getting into a car which seems to have its own will, which seems to have its own mind, that transition is going to take some time. I'll give you an example. My wife drives my car during the weekend, and I sit beside her. And, for example, when she starts to park, I get some kind of a funny feeling. And it's probably the same way for her when I park, right? And just imagine: I trust her completely, she trusts me completely, but when I hand over my car to her, why do I develop this funny feeling? Another example: I'm driving, and as I drive, I see a pedestrian crossing, and I see a small child with a dog crossing at the pedestrian crossing. And when I see it from here, I know, hey, there's a child coming. I see the child, I'm going to brake, and I'm going to ensure that only after the child passes will I drive on. Now, when I'm driving in an automated mode, I'm not sure whether the car has seen this child, whether the car will really brake when it nears the pedestrian crossing. These are the kinds of things which come into my mind when I'm sitting in the car. So we need to somehow propagate this trust from the car. The car has to say: hey, I have seen this person, I've seen the child, don't worry, I'm going to brake. That gives a tremendous amount of confidence to me inside the car. So that's what I was talking about when I said the car takes care of you. The next topic I'm going to speak about is the extension of myself. This is all about personalization. You can start with simple kinds of personalization. You can say: I like pink. My colleague Janine likes pink. So when she sits in her car, if she gets everything in pink, hey, great, her day is going to be great today. It can go beyond that. It's not just about the color, but also about the kind of music that I would like to hear. So in our Bosch show car, we have something called a mood weave. You have so many different kinds of music: you might have music in your cloud, you can have music on your SD card, you can have music on board. And when I would like to listen to some particular music, I say, hey, I want to listen to Madonna. And the car starts to detect a pattern: this guy listens to Madonna every Monday morning. And the next Monday morning, it automatically gives me a choice of all Madonna songs, and I say, hey, great.
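As an aside, the habit detection described here boils down to simple frequency counting over recurring time slots: a (weekday, time-of-day) key plus a play-count threshold is already enough to surface a "Madonna on Monday morning" pattern. Here is a minimal illustrative sketch in Python; the class and method names are invented for this example and are not taken from the Bosch system:

```python
from collections import Counter
from datetime import datetime

class MoodWeaveSketch:
    """Hypothetical sketch of habit-based music suggestions."""

    def __init__(self):
        # (weekday, 4-hour slot) -> Counter of artists played in that slot
        self.history = {}

    def log_play(self, artist, when=None):
        # Record one play, bucketed into a coarse recurring time slot.
        when = when or datetime.now()
        slot = (when.weekday(), when.hour // 4)
        self.history.setdefault(slot, Counter())[artist] += 1

    def suggest(self, when=None, min_plays=3):
        """Artists played at least min_plays times in the current slot."""
        when = when or datetime.now()
        slot = (when.weekday(), when.hour // 4)
        counts = self.history.get(slot, Counter())
        return [artist for artist, n in counts.most_common() if n >= min_plays]

car = MoodWeaveSketch()
for _ in range(3):
    car.log_play("Madonna", datetime(2016, 5, 2, 8, 30))  # a Monday morning
print(car.suggest(datetime(2016, 5, 9, 9, 0)))  # ['Madonna']
```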
So this is one kind of personalization. The next level of personalization is how the car can start understanding your behavior, understanding where you go and when. It might be a little dangerous sometimes, but it will get to understand you far better and make you feel that, hey, this car is really an extension of myself. In our Bosch show car, we have done it in a slightly dramatic fashion. You enter the car; there are seven different screens. You have a central display, and you just take your thumb, with a fingerprint, and press on the screen. And it's as if your DNA flows into the screen and distributes itself across the car. And suddenly, with a nice welcoming sound, you feel: ah, I've been welcomed so nicely by my car. You can name your car, you can call it nice names, and it'll also respond with your pet names. But it's all about the extent to which we can personalize. So we'll see a lot of intelligent HMI solutions in the future which will make this possible. Don't worry, we'll not see the driver in mid-air suspension in the future; it's just a way of representing it. By holistic interaction, what we mean is this: the number of functions being integrated in a car is rising exponentially. And if all these functions want to interact with the driver at the same time, it results in information overload. You are actually supposed to keep your eyes on the road and your hands on the wheel, but with all this information overload, it goes in the negative direction. So we have to somehow ensure that the kind of information which is passed on to the driver, and the way we interact with the driver, is controlled in some fashion. The second thing is that we talk about a lot of interaction technologies, whether it is touch, whether you use gestures. I'm not sure how many of you have heard about this: how many of you know that we can also control things using our eyes? Could you raise your hands? Pretty good. Wow, I'm impressed now. The first time I really used eye-gaze interaction, I had a laptop in front of me, and they told me you could control things with your eyes. I said, let me check it out. And it was a game of Space Invaders, where you had to save Earth from invading spaceships. And while this was happening, I was just sitting in front of my desk, looking at all those wicked ships coming and just exploding them, blasting them out of space. It gave me an immense sense of satisfaction that I could do that just by looking at them. I was just imagining, if I had that in my car and some person overtook me in a wrong fashion, I'd just look at him and, you know, of course, it's not possible, right? But the thing is, you need to see how we can integrate these kinds of technologies in a car in a meaningful way, in a way which really helps you. And we have done it in our show car in one fashion; I'll show you a use case later. So this is one part. But when we talk about holistic, what does holistic mean? Holistic means that we are adapting the interaction with the person not just from the function perspective or the technology perspective, but also looking at the state of the driver: whether he's feeling drowsy, whether he's overloaded with information. These are the kinds of things we look at. The second aspect is the kind of situation the driver finds himself or herself in. We look at the situation. And also the environment. How's the environment?
Is the drive currently happening at night? Is it foggy outside? Is it raining? Just imagine: an old lady driving late in the night, it's foggy, she comes to an intersection, and the car wants to give a warning that there is a speeding car coming from the right. How do we give the warning? Or there's a young guy listening to loud music, in a similar situation. How would you like to warn this person? So these are the kinds of things you need to consider: the state of the driver, the environment, the situation, and you adapt the interaction in that fashion. So this is what holistic interaction is going to look like. I'll now quickly talk about how we created the third living space in our car. In the automotive industry, for a long time, we have been focusing on usability. Usability is all about efficiency, effectiveness and satisfaction of use. All the things have to be done in a way that happens inside six seconds. I should always have my hands on the wheel, eyes on the road. It's very important. In a car, it's very important, because your main task is driving and not interacting. When you're using your mobile, your main task is interacting with the mobile. So there's a fundamental difference between how you interact with a device and the way you interact with a car. Keeping this in mind, the automotive industry has always had usability in mind. But after the iPhone, the expectations of users about how they want to interact with devices drastically changed. And the automotive industry, I would say in the post-iPhone era, has also looked at what we call user experience, which means usability plus many other things. We talk about emotions. We talk about perceptions. We talk about beliefs. How can we make the complete experience of the person sitting in the car better? How can we give more positive experiences? How can we reduce negative experiences in the car? Usability, if you really look at it, is about when you are using something. For user experience there can be many, many definitions; this is ours: user experience is about before the use, during the use, and after the use. How can you define the interaction, the experience, for the complete cycle? So this is something which we also do. What we do is really discuss with users. We do user research. They tell us what they feel. But we also observe the users, because sometimes you ask, is it good? Yeah, it's good. And you can see from the body language that this person was not really convinced that it is good. So you really need to try to read between the lines when you are doing user research, and especially in the context of use, which means that you do it at the place where the user is really going to use that function, so you know how the user is going to feel when that use actually takes place. I'll quickly just go through this slide, because this is what we did for our Bosch show car. We started sometime in April with the user research and so on, but the actual car was developed from August until we flew to Las Vegas in January. What you see in this spiral here is really our user experience process. We plan the user experience activities at the beginning. Then, like I mentioned, we understand the context of use, which means we extensively discuss with the users, observe the users, and try to understand what it is that they are feeling, and which are the areas where they feel they really require a solution or see a problem. And then we synthesize insights.
So we form clusters, what we call opportunity areas, where we can offer solutions to the users. And then we ideate and realize, which means we do rapid prototypes, test them with the users, and if the users find something good, then we integrate it in the car. Good. So now I'll really go inside what we call the third living space. The first use case I would like to show, which I talked about earlier, is a video conference in the car. Let's see if this works. We are entering the highway. Incoming video conference call. Automated driving available. Some colleagues are trying to reach me in a video conference call. Since I'm driving in manual mode, I cannot pick up the call. But the system offered me: we are driving on a highway, the system could take over for automated driving, and that's what I'm doing right now. You can also see it here in the cluster display: we are still in manual driving mode, but the A indicates that automated driving is possible. So again, by putting both thumbs on these buttons: automated driving engaged. Now the system has taken over, I can take my hands off the steering wheel, and I can accept the call with a simple gesture. Here you can see two of my colleagues in a video conference. And since we are now in automated driving mode, I can make use of this great infrastructure in a completely different way. I can use it as a complete working environment, also addressing the passenger display, for example. I can also select some documents which are being discussed during the presentation and expand them over the displays. And as you can see here, with the help of the different interaction technologies that we have and the large number of displays, I have a better working environment than I have in the office, because this position is optimized for the driving position and everything is around me as a driver. So this was the use case I told you about earlier. What you might have noticed here, or might not have noticed, and that's why I want to make it clear: you saw at the beginning that the car informed the driver that automated driving mode is now available. So it's information to the driver that it's now available; now it's up to the driver whether he or she wants to use it. And this handover from manual to automated should also happen in a particular fashion, which is, let's say: I do not want my wife deciding, he's driving so badly, and switching the car into automated mode. It should happen in a very deliberate fashion. So here the way was: you hold the steering wheel, there are two buttons, you hold both buttons pressed for three seconds, and then the car goes into automated mode. So there's a particular way you go into automated mode. The other thing which you noticed, of course, was the video conference, where you were really able to use the complete infrastructure like a workplace. You had different displays at the side, where the person could choose documents and then, with simple gestures, push the content of the documents onto the different screens. So this is one use case. Let's go to the next use case. Here we talk about connectivity to the smart home. Incoming smart home call. Someone is at our house. Automated driving available. Someone rang the bell at our house, and the home security system is forwarding the call directly into our car. Since I'm driving in manual mode, I cannot take the call. The only option that I have is to cancel the call. But the system offered to take over, since we are driving on a highway. So that's what we're doing right now.
Let's go to the next use case. Here we talk about connectivity to the smart home. Incoming smart home call. Someone is at our house. Automated driving available. Someone rang the bell at our house, and the smart home security system is forwarding the call directly into our car. Since I'm driving in manual mode, I cannot take the call. The only option that I have is to cancel the call. But the system offered to take over, since we are driving on a highway. So that's what we're doing right now. Automated driving engaged. I gave control to the system. Now I'm being driven in an automated way. I can take my hands off the steering wheel, and now I can start the conversation with whoever is waiting there at the door. So let's see what he wants. Oh, I know him. He wants to deliver a package. For this he needs access to my house. Let's check different perspectives from the video cameras. There I can see he's alone. I know him pretty well. He comes at least two times a week to deliver some packages. Now I can open the door for him. Again, I have to confirm this with a fingerprint. So in case someone else took the car, he won't be able to gain access to the house. And as you can see, the door in the background closes and the front door opens. Now he can step in. He can drop off the package. He needs a kind of confirmation that he has delivered the package. For this I am sending a fingerprint to him. With the help of the magic mirror in the hallway, he receives the confirmation and takes it over into his system. So he has a confirmation. He's leaving the house. The door closes automatically. Again, I check the different camera perspectives to confirm that he has already left. Everything's fine. So I can now stop the conversation. So here it was a combination of how the car is able to seamlessly connect to a smart home, and also some kind of security, where you really used your fingerprint to authenticate that you can open your front door. You saw gestures with which you can control the cameras of your house to see whether the person who is standing in front of your house is alone, or whether there are other people with him who are probably outside the first camera's range. So these are things which you could configure, manage and monitor from your car. The next use case is about cooperative driving mode and safety. We are driving in cooperative driving mode, which means that with the help of the driver assistance systems, we are helping the driver guide his attention. For example, there's a pedestrian crossing the street. And our system helps him, with ambient light and the information in the head-up display, to guide his attention at the right time, to the right place. This is also something that beginner drivers have to learn. And with the intelligence and knowledge that we have with all the driver assistance systems in this car, we can support him. Whether it's with the help of lane keeping support to keep his attention high, or making use of the ambient light if a car passes nearby as an extension to the blind spot detection. Also, young drivers might easily get distracted, by other devices or by whatever they have on their mind. And if they get too close to another vehicle, we can also make use of ambient light and acoustic and visual warnings in the head-up display to keep the attention high. So here the use case was about how we can personalize the car for someone who is learning to drive, learning to use all these safety systems; it's all pretty new. How can we communicate it in a way that works? For example, in our show car we used ambient light for when the person is not really paying attention and comes very close to the car in front of him. How can we warn him?
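The warning scenarios from the start of the talk (the old lady in fog at night, the young guy with loud music) amount to rule-based selection of warning channels from driver state and environment. Here is a minimal sketch of that idea; every profile, rule and channel name is an assumption for illustration, not the show car's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Context:
    driver_age: int
    night: bool
    fog: bool
    cabin_loud: bool     # e.g. loud music playing
    attention_low: bool  # e.g. flagged by driver monitoring

def warning_channels(ctx):
    """Pick warning channels for a hazard based on driver state and environment."""
    channels = ["head_up_display"]         # baseline visual cue for every hazard
    if ctx.cabin_loud:
        channels.append("seat_vibration")  # a voice prompt would be drowned out
    else:
        channels.append("voice_prompt")    # calm spoken warning
    if ctx.night or ctx.fog:
        channels.append("ambient_light")   # a light cue stands out in low visibility
    if ctx.attention_low:
        channels.append("acoustic_chime")  # escalate for a distracted driver
    if ctx.driver_age >= 70:
        channels.append("early_warning")   # give older drivers more lead time
    return channels

# The old lady at a foggy intersection late at night:
print(warning_channels(Context(82, night=True, fog=True, cabin_loud=False, attention_low=False)))
# The young guy with loud music:
print(warning_channels(Context(19, night=False, fog=False, cabin_loud=True, attention_low=True)))
```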
The last one is about a different way of interacting with the car. Let me see. As you can see there in the center display, there is some information about one of my colleagues. I want to get additional information about him. But since I'm driving in manual mode... Prepare to turn right. Turn right. Since I'm driving in manual mode, I want to access this information in a non-distracting way. So, by looking at the road, with a simple gesture I can activate the eye gaze feature. As you can see, depending on which screen I'm looking at, the center display or the cluster display, it gets highlighted. And now, with a short gaze towards the central display and with a gesture, I can expand the information... Prepare to turn left. I can now expand the information to the cluster display. And now it's up to me as a driver to decide when it's the right time to read this. And here I can easily see that my colleague will be in the office on time and we can have the meeting as scheduled. With a swipe gesture, I can move the information back to the central display. And with another gesture, I can switch off the gaze feature. And this is one example of how we combine different interaction technologies to access such a level of information, and how to distribute it among different displays. It's always the same type of gesture with which we address the system. And depending on where we are looking, the system decides whether the gesture affects an application in the cluster display or in the center display. So this was an example of how we combined eye gaze interaction. It's not for controlling. We found out in our tests with users that it is easier to use eye gaze to identify where the user is looking. So in this case, the car is able to make out which display I am looking at. And then, if I want to shift information from one display to the other, I just make a simple directional gesture. And this gesture is now interpreted by the system as meant for that display. And when I do not want that information, for example, in my central cluster, I just do this: it knows that I am looking at this display, and then the directional gesture. So a combination of these, from our perspective, is going to make it more intuitive (see the sketch below). So what we have done with this show car, the kind of concepts, the kind of use cases I explained to you, is not what we think will definitely come in a car. These are ideas, these are concepts which we developed after discussing with users. And we use this car and the use cases as a means, as an instrument to discuss with people like you. How do you feel about it? Is this in the right direction? Do you think it is absolute bullshit? If it is absolute bullshit, then we would like to understand why. If you say it is great, we would like to understand why. So it is a means of dialogue for us to get to understand users and technology and bring them both closer together, because if we just focus on technology, we will come out with great features which in the end will lead to no satisfaction. So bringing users and technology together is one of our main aims with this instrument. And that was also one of the reasons why we gladly took the opportunity of presenting this here at re:publica. So thank you for listening.
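A minimal sketch of the gaze-plus-gesture display routing just described, under the assumption that gaze only selects a target and the hand gesture carries the action. The display names, cabin layout and `DisplayRouter` API are invented for illustration; they are not the show car's software.

```python
class DisplayRouter:
    """Route directional gestures to whichever display the driver is looking at.

    Gaze only selects (and highlights) a display; a separate hand gesture
    then moves content towards a neighbouring display. Neither input does
    anything on its own, which is the point of combining them.
    """

    # Illustrative cabin layout: which display lies in which direction.
    NEIGHBOURS = {
        ("center", "left"): "cluster",
        ("cluster", "right"): "center",
        ("center", "right"): "passenger",
    }

    def __init__(self):
        self.gaze_target = None

    def on_gaze(self, display):
        self.gaze_target = display   # highlight only, never an action

    def on_gesture(self, direction, content):
        if self.gaze_target is None:
            return None              # no gaze fix, ignore the gesture
        target = self.NEIGHBOURS.get((self.gaze_target, direction))
        if target is None:
            return None              # nothing lies in that direction
        return (self.gaze_target, target, content)

router = DisplayRouter()
router.on_gaze("center")
# The same gesture means something different depending on where the driver looks:
print(router.on_gesture("left", "colleague_status"))   # ('center', 'cluster', ...)
router.on_gaze("cluster")
print(router.on_gesture("right", "colleague_status"))  # ('cluster', 'center', ...)
```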
Thank you very much for this very interesting presentation. Now there is a possibility to ask some questions. Thank you very much for that interesting talk. I have two main questions. First, there is lots of talk about information overload, about people always having to be reachable for work. And to be honest, I find the idea of having to take conference calls even while on the road quite scary. And I wonder how much research has been put into how much connectivity is really wanted by customers, especially while driving, while going from A to B. That's question number one. Question number two would be more general; actually, it's the same thing. Do people in general (because I know it's not true for me) really want to focus on everything but driving while being in a car? Because all we've seen so far was about: okay, I engage autonomous mode, I take my eyes off the road and I just focus on whatever else. Music, taking a parcel delivery, whatever. So I'm kind of wondering: what about driving? Yeah, so I completely share your opinion. There is a set of users who say: when I'm in the car, I would like to focus on my driving. I would like to focus on what I'm doing. There's a conscious acceptance that when I enter the car, I should not access certain things, certain functions. But we also see a set of users who say the car is an interference in my life: when I go from point A to point B, I would not like to get disconnected from what I'm doing. So what we are trying to find out is: what are the segments of users? We will have all segments of users using the car. And the idea is, can we offer what you call mass customization, which means that depending on the profile of the user who sits in the car, is it possible to personalize the car for that person? Which means that you can have that feature, but that feature will probably not be available or recommended to a user who would not like to use it. But there is no single silver-bullet answer saying that this is exactly what a user does in a car. So there will be different segments of users, and it will be our goal to address as many user segments as possible inside the car. I have a question concerning security, because I'm a little bit concerned about all the information I have to give the car in order to use it. And I just wonder if you're thinking about very secure systems on the one hand (well, probably, of course), but on the other hand, do you think about systems where the car, and only the car, needs to know that information? For example, what is my point A, what is my point B, what is my fingerprint, what is everything else? So does the car have to talk to the internet, talk to your company, talk to any other company, or do you have a closed security environment around the car, so that maybe I trust my car, but I can get rid of all the information that surrounds it? Thank you. So I'll answer that in two parts, because I think your question had two parts: one is about security and the other is about data privacy. So about security: when we develop our products, we address from the start how our software architecture is going to look in order to ensure security. For our hardware, we define something called trusted hardware modules, trusted zones. And we do what we call a comprehensive security risk analysis: what are all the kinds of use cases, what are the kinds of access points one could have, whether it's over the communication channel or when you are updating the software. So we look at all this and build it into the product development. So we make it in a secure manner. The second one is, for example, you are using, say, key encryption: you use key management, a public key and private key kind of infrastructure. How do I integrate that into my device in a secure manner?
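A rough sketch of the kind of per-device key provisioning he goes on to describe: an offline server at the plant generating key pairs and injecting them into each device before it ships. The `cryptography` package and every name here are illustrative choices, not Bosch's stack; `inject_into_hardware` is a hypothetical placeholder, and a real ECU flow would use hardware security modules rather than a Python script.

```python
# Assumes the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

def provision_device(device_id):
    """Run on an offline server at the plant: generate one key pair per device.

    The private key is injected once into the device's secure hardware and
    never leaves the factory network; only the public key is kept by the
    backend, so later software updates can be authenticated per device.
    """
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        # A real flow would wrap the key for the secure element instead.
        encryption_algorithm=serialization.NoEncryption(),
    )
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    inject_into_hardware(device_id, private_pem)  # hypothetical injection step
    return device_id, public_pem

def inject_into_hardware(device_id, key_material):
    # Placeholder for the physical provisioning interface at the plant.
    print(f"device {device_id}: {len(key_material)} bytes of key material injected")
```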
So even at our production plants, you have a server which is offline, which can generate these keys and insert them into the hardware we are delivering. And the third one is, when you are actually doing software updates, how can you ensure that this update, this connectivity, is secure? For example, Bosch has bought a firm called Escrypt, which specializes in security. They have been doing extremely good work in the area of security for the past ten years. So we are really integrating these solutions into the product. The second part, about data privacy: Bosch itself has recently announced our own cloud solution, our own server, which is made available to the users in the car, for example via a car manufacturer. And the cloud location, the server location itself, is in Stuttgart, Germany. So everything is under German and European Union data privacy law. There are four key pillars of this, apart from the fact that you have this secure private cloud. One is the data of the users: is it very transparent to the user when my data is getting uploaded or used by the system? Where does it get stored? What is it used for? And when the user says, I do not want this data, can the data be deleted? So these are the four key aspects which we look at from the Bosch perspective. So I hope I have answered your question. First of all, I would like to thank you for the nice presentation. In my opinion, it was a very concrete dream that you showed us. So thank you very much. My question goes in the direction of communication. I think the current situation is already such that you do conferences, you do communication, you do several things in the car. So the next steps are very clear. My question is about the realization of all this communication which is possible in the car of the future. Think of the current situation: you make a phone call, and quite often you don't have any connectivity. I worked for three years in Chennai, in India, and there the infrastructure is even worse. So all these features are nice, but they rely on an IT infrastructure, on a network infrastructure, which even now, in 2016, we do not have. So how do you deal with this problem? Yeah, I share your views completely. I'm also from India. I've been staying here for the past six years, so I can understand what you mean. I think the one aspect we can control is, of course, the kind of technologies that we can build up, but I completely agree that we also need a back-end and a network infrastructure to be provided. That's why I mentioned earlier in my presentation that when we talk about these trends like electrification or connectivity, I think it's very important that over the next ten years, optimistically speaking, these solutions become concrete solutions... I was wondering if there are thoughts about who's responsible if some accident happens while I drive in an automated way. Very, very difficult to answer this question. There are multiple aspects of this. One is in terms of liability, who takes the liability, and the other one is ethical. For example, there have been questions. I'm only going to answer you with questions, because I really do not have the answers. One is about liability, who's responsible. There have been manufacturers in the past who have said: for liability, I will take responsibility. So probably that might get clear. There are other aspects, ethical aspects, where there is an unavoidable situation in which the car is going to crash.
Maybe there's a bicycle in front of you. Do I crash into this bicycle rider, or do I take a left? There's an old lady going there: do I crash into her? Or there's a child playing on the right side: do I go right? It's a very difficult situation. I do not know whether we'll find the true answers. We are confronted with other questions as well. These are questions which we will probably not be able to have answers to, at least not very quickly. But there are other questions which come up, like: you are driving automated, and there is an accident which could have been avoided. Now, the responsibility, who takes that? Automated driving itself is predicted to bring down accidents by a huge, huge percentage. I don't know; some studies say it's 50%, some studies say it's 80%. But it's also going to cause some accidents. So if you look at it really objectively, leaving the emotion out of it: for all those people who are saved, you do not even know that they have been saved. So there are maybe tens of thousands of lives which are saved. But there will be some lives which will be taken because of this technology, because it was not able to foresee something. And those will be very few. But the people who are affected will say: the car caused it, the technology caused it. So these are all situations which we'll probably encounter in the future. Very tough to get used to this. That's why I mentioned it is disruptive. It has the potential for really disruptive innovation in the industry. What you are doing to cars reminds me pretty much of what Apple did to telephones when creating the iPhone. But when I think of my iPhone, I don't think of it as a phone. So taking or making phone calls is just a minor use case. But your use cases seem to be related pretty much to driving. So did you think about different use cases? Because you're still speaking about cars, speaking about drivers, and I'm still referring to my iPhone. Well, it has "phone" in the name, but it's not a phone any longer. So did you think about that? Let me tell you what we have thought about the iPhone, or an Android phone, and our car. What we feel is that this nice little thing which we have in our hand is not going to go away. So what we thought about are the use cases where you come and sit in your car with your device. How can I now use the infrastructure in the car, but the functions of your phone? Which means, can I have... My question was referring to... Although the iPhone is called a phone, I basically don't use it as a phone. So it means something completely different to me. It's my brain extension, in a way. And does the car, in your opinion, have the same potential to develop into something which is not referred to as a vehicle to move from A to B, but maybe something... You were referring to a living space, like a new whatever. So I was wondering what these use cases could be. We have looked at two aspects of it. One is the trend regarding ownership of the cars themselves. Whether in the future, if you really drive automated, if you have driverless cars in the future, would you still want to own a car? That is not certain. So on the very core aspect of what you talked about, ownership: there might still be people who would like to own cars. And for those who would like to own cars and would like to drive, we really see the third living space as a solution. Which means it's like a room.
It's your own room which moves from A to B. The other part, like I mentioned, is the moment it goes into a different model where you are just part of the ecosystem: there are companies which make cars available for you, and then it's no longer as personalized as what you would have with your own car. I might not have answered your question regarding the use cases, but these are the two concrete scenarios we looked at when we were doing this. Mr. Ladi, I was wondering: you showed us all these beautiful videos and you explained that it's only a show car. So my question is, do you have some experience in reality? Could this car already drive? Or when do you expect it to drive on the street? I've got a second question. We talked about digital infrastructure and we talked about the question of insurance. What are the most important fields for the government to act in, and to give you as a company a frame to work in? And I have a question from my colleague next to me. He was wondering who is the owner of the data you're collecting. Is it the user? Is it Bosch? Is it you? Thank you. Okay, so there are three questions. Let me try to see whether I can sequence them right. The first one was about the features. The reason why I said it's a show car is that we want to have a dialogue with the users. If they say this use case is good, then we'll be able to put it in a car within five years. Some of the features we will in fact be implementing this year, in October. So we are using this as a trigger for a roadmap in which we define what we can offer. So some features will come this year, some features will come in two years, some features will come in five years. And that makes a transition to your second question, about what kind of infrastructure is available. For example, the colleague talked about the network infrastructure: can I have seamless connectivity everywhere I go? Can the regulations be made in such a manner that, for example (I talked about the infrastructure for electrification), we have enough charging stations available, so that I can say I can move towards an electric car? And the third question was about the data. Right now, we give the choice to the user. The user can say whether he provides the data to us or not. So based on the acceptance of the user, we use the data. And like I mentioned earlier, we make it very transparent to the user what we are doing with the data. And if the user wants to delete this data, we give that option and ensure that the data is deleted. And this is the key aspect for the user to feel much more confident and to gain acceptance for giving the data to us. Thank you very much, Mr. Mahaladi. Thank you again.
|
The world of mobility is on its way to disruptive changes. Cars of the future will be connected with the outer world and will drive autonomously with an electrified power-train. This will have a major impact on what future automotive interiors are going to look like and how they are going to be used. Based on these trends, at Bosch we see automotive interiors evolving into what we call the 3rd living space. See and discuss the future car Human Machine Interface and user experience.
|
10.5446/20585 (DOI)
|
Music. Brigitte Strahwald: she's a physician, graphic designer and researcher in the field of health communication and data visualization. And she's the person behind the idea of Citizen Pharma. Thank you very much. And welcome. But first of all, I'd like to thank the organization team for making this re:health real. I can't even imagine how much time and effort you've spent making this happen. So thank you very much for that. I've been working as an emergency doctor for years, and I think that all of us working in healthcare remember certain patients, situations, moments that are a bit special. One of mine was a call during the night, 2 a.m., 3 a.m. We arrived at the apartment: an amazing little lady in her mid-80s. And she complained of pain in the chest, she had severe trouble with her blood pressure, felt dizzy. And so, well, we took care of her. And I asked her, among many other things, of course, about her medication. And she gave me a big bunch of pill boxes and packages and tablets, and I went through it and checked it. And I realized that about two-thirds of it was past the expiration date. And I asked her, did you notice that? It could be at least part of the problem you have at the moment. And she said, well, yes. I always try keeping medication from my friends and relatives who don't need it anymore, don't take it anymore, because I can hardly afford to buy the prescribed drugs I'm taking. And she felt so embarrassed about that that she started to cry, which was a really heartbreaking moment. And one lesson out of that is that many of the problems we have in healthcare are low-tech to no-tech. She obviously didn't understand the system, how to get reimbursed; and obviously we have a problem with people taking wrong or expired medication. And here we don't need an app, we need people who care about that. And on the other hand, the other lesson is that we are living in a world of stereotypes. Starting a discussion about fair access to medication immediately turns into a discussion about developing countries. And you have the image of poor Africans in the middle of nowhere needing donations from the richer countries, because this is the target group. But of course we have that problem at home as well. We have it globally; in a globalized world, we have this as a globalized problem. In different ways, and with different characteristics of course, but, well, we have it. And this has been unmasked, at least in parts, in a shitstorm last year. So what happened? It started as a usual shitstorm, with headlines and messages, Twitter messages like this one: "Ex-hedge funder raises price of AIDS drug from $13.50 to $750 per pill." "Detested ex-hedge funder increases price of pill." "Greedy emo kid raises price of Daraprim from $13.50 to $750." So who is that guy, this ex-hedge funder, detested ex-hedge funder, greedy emo kid? Well, that is Martin Shkreli. And he was at that time the CEO of Turing Pharmaceuticals. And he became, overnight, the most hated man in America. Clinton talked about him, Sanders talked about him during their campaigns. Even Trump talked about him; he didn't like him either, which is a quite interesting alliance, I think. And he said of this hedge fund guy who jacked up the drug price: "He looks like a spoiled brat." Which might not be the point. But anyway, Martin Shkreli makes it really hard to like him. He had answers like: well, I've done the right thing, it is so important, I should win the Nobel Prize for this action because it's for the good of mankind.
And one of his answers was this one: capitalism can seem ugly if you don't understand it and you are not prepared for it. It's capitalism, stupid. So I think he confirmed all the clichés we have about big pharma and capitalism. But nevertheless, looking at a shitstorm, it's always worth going back to the beginning and seeing what it is all about. And in that case, it is about Daraprim. It's a prescription drug; the brand name is Daraprim, and the compound is pyrimethamine. It's a medication for infections like malaria and toxoplasmosis; in the US it's mainly used to treat toxoplasmosis. That's a protozoal infection, a parasitic disease, and an interesting disease, because many of us might have had it. We assume that about 30% of the world population are so-called seropositive, meaning we have had contact with that disease during our lifetime. And usually, we don't even notice it, meaning there are mostly no symptoms, or flu-like symptoms, and for the healthy ones among us, it's really no problem. But 30% is quite a lot, so: one, two, three; one, two, three. That seems like quite a lot. But for the healthy ones, as I said, no danger. It is a real danger, though, for those with a severe immune deficit, meaning patients with HIV or during chemotherapy, and also for newborn children, who can be infected by their mothers. And here, toxoplasmosis can cause severe brain damage, eye damage; it can even be fatal. We have 750 deaths per year in the US due to toxoplasmosis. So it's relevant, and therefore Daraprim is relevant, and therefore it is on the WHO list of essential medicines. This list has been a breakthrough, because on it you find all the medication needed in any healthcare system all over the world, and the aim was that access should be given, affordable access to the medication on this list. So that is the idea. Now, what about Daraprim? It was sold until 2015 by CorePharma for $13.50, as we've learned, and then sold to Turing Pharmaceuticals in August, and in September the price hiked up to $750, meaning 55 times higher, 5,500%. So, I imagine the usual re:publica ticket, the regular one, is, I think, €200. Next year it would be €11,000. I mean, it would be quite interesting to see who would still come, but it makes a difference, obviously. Or a contraceptive pill: usually it's an average of some €10 per month, and I imagine next month it would cost €550. And of course, that would have an impact on the use. Many women would no longer be able to afford it, and that would have consequences as well. So it's dramatic. It's obviously a dramatic increase we have here. And in that case it's not even the whole truth, because before CorePharma, it was owned by GlaxoSmithKline and distributed by them, and they sold it for $1 per pill. So in fact, we have an increase from $1 to $750 in five years. And one of the immediate questions we have is: how can they do that? What the heck is going on? Is that legal? So how does that work, and in that case, how did that work? Well, the first reaction usually is: drugs, price increase, that must have to do with patents, because a patent means someone has a monopoly on it. It's like in IT: if you have a patent, you can sell it and no one else can. But here with Daraprim, we have an old drug. It's been available since 1953, and the patent expired a long, long time ago. So no, it is not about patents. Is it about rights, because Turing Pharmaceuticals bought the rights? Yes and no.
Yes, because no one else would be allowed to sell Daraprim, the brand Daraprim; but other companies would be allowed to produce and sell a generic version of pyrimethamine. A generic version usually means you have the same compound, the same quality, the same indication, but much, much cheaper. That's the principle of generic products. So why not a generic? For Daraprim, in fact, we do not have a generic version available in the United States. So yes, it's a bit about rights, but the main problem is that we don't have competition. We have no generic version of Daraprim, so the only company producing and selling pyrimethamine is Turing Pharmaceuticals. And this means they have not a patent monopoly, because the patent has expired, but a de facto monopoly on the market. Again, the first reaction is: I mean, it's capitalism, as we've learned. Why don't other companies start producing it? If they didn't do it before, why don't they immediately go into the market and produce it? Well, then we have another bunch of problems. We've learned why there is no generic version available. We've learned that toxoplasmosis is a common disease, but fortunately only few patients need treatment. Few patients, for big pharma, equals rare disease equals small market. This is not good for big pharma. It can be good (a rare disease can be great for them) if they have a patented drug. Then they can establish what they call a nichebuster and sell it extremely expensively. We have that in Germany at the moment with hepatitis C medication. But here, with Daraprim, as we've seen: no patent. So we have a small market and no patent. Big pharma is not interested in it. But maybe, if not big pharma, a small producer, a tiny little producer? Well, here we face another issue: any production of medication needs an approval. And in the US, it's an FDA approval. The FDA is the Food and Drug Administration, and this is the agency responsible for the approval process. And this approval process is time- and cost-consuming. So for small manufacturers, even then, it's too much. So having these three points here (small market, no patent, approval needed) resulted in this: once Turing bought the rights for Daraprim, they could raise the price to any amount they wanted. And that's what they've done. Now, Daraprim produced the shitstorm we've seen, but in fact it's not the first time this has happened. We have dozens of other companies doing more or less the same thing. Doxycycline is an antibiotic, a really important one, also on the WHO list of essential medicines, also an out-of-patent medication. In 2011, it was sold for 4 cents per pill. In 2015, it was $3.70 per pill. So we have a price raise of around 9,000%. Digoxin, a medication for heart diseases, was sold for 11 cents per pill, and in 2015 for $1.10. So compared to the others, that looks really cheap, but it's 1,000% (see the quick check below). And I could show you dozens and dozens of other examples like that. So obviously, we have a new business model: finding out-of-patent medication without a generic version on the market. It seems to be very, very attractive. And obviously, the traditional market model fails here. Now, seeing that, what are the consequences? Well, one reaction could be: we accept it. I mean, it is like it is, and it's always been like that. We could be outraged, blame Big Pharma, which to me is not much more than accepting it. Or we could at least try to change something. And the question is: how can we do that? And what can we do? Or more precisely: how can we guarantee access to affordable essential medicine?
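As a quick check of the fold-change arithmetic quoted in this talk (prices as given by the speaker; her percentages are rounded), a short sketch:

```python
def fold_and_percent(old, new):
    """Return (fold change, percent increase) for a price move."""
    fold = new / old
    return fold, (fold - 1) * 100

for name, old, new in [("Daraprim", 13.50, 750.00),
                       ("Doxycycline", 0.04, 3.70),
                       ("Digoxin", 0.11, 1.10)]:
    fold, pct = fold_and_percent(old, new)
    print(f"{name}: {fold:.1f}x, +{pct:.0f}%")

# Daraprim: 55.6x, +5456%      (quoted as "55 times, 5,500%")
# Doxycycline: 92.5x, +9150%   (quoted as "around 9,000%")
# Digoxin: 10.0x, +900%        (quoted as "1,000%", i.e. ten times the old price)
```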
Let's take the WHO list of essential medicines, because it's a global list, and, to make it easier and more pragmatic, at least the out-of-patent drugs on this list. By the way, the easiest solution would be a trade agreement, meaning free import and export. In Germany, Daraprim is still produced by GlaxoSmithKline and is sold for 1 euro per pill. So the easiest idea would be to send care packages with Daraprim from Germany to the United States. But obviously this is totally illegal, and in times of TTIP it becomes even more of a fiction. So this won't work, but fortunately we have a wealth of ideas and organizations. These are only some of them, caring for access to essential medicine. But all of them care about access for developing countries. And it's mainly work to get access to patented drugs, which are very, very expensive. They're making deals with big pharma to donate the drugs or reduce the price. And this is great work; it's absolutely great work. But it's not really a global target, if you want. And it also depends on the willingness of big pharma and/or the willingness behind government regulations. And all of them still rely on the traditional manufacturing paradigm of the pharmaceutical industry. This is not wrong. But could we find alternatives to that? Could we find another way? Could we find solutions driven by needs and not by markets? And could we influence and transform the pharmaceutical industry, meaning we, as the civil society and the digital community? And could we use maybe not primarily the wisdom of the crowd but the money of the crowd, with crowdfunding? And when we use crowdfunding, how could we do that? What could be the aim? And how could we find really good solutions for that? So the biggest one would be a vision: the manufacturing facility. I mean, let's buy it. Let's make it on our own. Let's make an open world pharmaceutical organization, bought by the civil society, financed by the civil society, a not-for-profit organization. So, can that work? Just two examples. There's been the Institute for OneWorld Health in the United States. It labels itself as the first non-profit pharmaceutical company. And it works. But they are focused on the developing world as well; they do not produce for the richer countries. Their target is just the developing world. But they got 200 million to run their company, not with crowdfunding in that case, but with traditional fundraising. Nevertheless, they obviously achieved the goal. Another example is Cheap Drugs, which was a typical crowdfunding campaign. They wanted to get $1 million as the seed investment for a public benefit drug company in the United States. They totally failed: they got $610. So obviously it is difficult. But difficult doesn't mean it's impossible. I mean, the idea is still fascinating, I think. The biggest challenge anyway would be: let's say we buy it, we build it in the United States or in Europe or in India; we will still have to deal with normal trade regulations, meaning no free import and export to other countries. So at least we would have to find solutions for that issue. But the biggest advantage of such a manufacturing facility would be that we could produce many, many drugs, not only in quantity, but also in diversity: many, many different drugs from that WHO list, which would be wonderful.
Another possibility: the approval process, crowdfunding the approval process. You remember that this was one of the negative incentives for companies not to produce generic versions. So what about funding this process? Making contracts with existing generic manufacturers and helping them through that approval process, so that they can produce the drug. The biggest challenge here is that we have different approval processes all over the world, so we would have to find solutions for the United States, for Europe, for Asia, for Africa; it's a complex thing, an expensive thing. But the real advantage would be that we would not have to care about trade regulations, because when we produce in the region, we don't have a problem with import and export. So this would be a quite pragmatic solution, I think. One step further, when we go deeper, we could also crowdfund specific drugs. What I mean by that, just an example: there was a crowdfunding campaign for Open Insulin; we've heard about insulin in the discussion. Insulin is for the treatment of diabetes; it's one of the most important medications all over the world. It's old. Until today, we don't have a real generic version; it's a shame. And we know that about half of the people worldwide who need insulin do not have access to it. So there is a significant need. And they started an amazing project: they crowdfunded $16,000 to develop an open source protocol to produce a generic version of insulin, meaning the first step for the production. They started it, and they work, and they do a great job in that project. So it's fascinating. The limitation is that this approach works only for a few drugs; we cannot do it for all of them. But these are really important ones; there's a significant need for some of those drugs on the list. And it would be helpful, because the pharmaceutical industry will never do that job. If all that seems or sounds too old-fashioned, what about another idea? Downloadable drugs: crowdfunding, well, the 3D printing of your own medication. If that sounds fictional to you: in August 2015, the FDA approved the first 3D-printed medication. So it works, in a way, but to be honest, there is still a long, long way to go. It's one medication, it's one drug, it's enormously expensive, it's nothing for mass production, and of course we have a bunch of safety issues when we talk about that. But nevertheless, it has the potential to radically transform the pharmaceutical production and distribution process. The biggest problem, from my point of view, is that the company which pioneered that 3D-printed drug already has, I think, more than 50 patents on it. So we are currently facing the same problems as with traditional pharmaceutical production. The other problem is a more practical one: yes, the FDA has approved this drug and this 3D print, but this doesn't mean that in Europe, in Asia, in Africa, this will work the same way. So the approval process will be quite tricky, I think. But of course, I mean, it would be cool to crowdfund a drug printer for legal drugs and to produce open source software for it, and hardware, of course. So all of these are ambitious projects, and I think there are others around; but which of them are worth a try? Here, we have to take a closer look and analyze the potential, the feasibility, but also the visionary aspects of the different projects. So this is a call for action. Let's connect and share, and learn from Big Pharma.
They consolidated and got more and more powerful. We can do the same thing. Let's connect and share. But I think we should not forget that, if we're looking for an alternative way to produce, we also need a lot of information first. We need to know the prices paid worldwide for medicines on the WHO list, for example, meaning data from real people in real-life situations all over the world. Even in times of big data, there's a lack of a lot of those tiny little data. Some of the drugs on this list are really old, so they don't conform to the current standards of what we call evidence-based medicine. And we should do a lot of research there concerning side effects, interactions, things like that. No one will finance that, because usually pharmaceutical research is paid for by Big Pharma, which is another problematic issue; and of course, they will never pay for out-of-patent drugs like this one. Here, the power of the crowd could be a real help. The digital community could make it work, so that we could take really big steps on that. And that is why we should always think about not only crowdfunding these projects, but also integrating citizen science, meaning citizens working together with scientists, with universities, with NGOs, to make it real. Again, this is really a call for action. And yes, the pharmaceutical industry is a big player. And yes, it will certainly take time and effort to change something, even in the slightest steps. But, I mean, we're 7 billion. So it should be possible to do that, or at least to start it. And it would be really great to initiate a fair pharmaceutical production, away from shareholder value, aiming instead for something like humankind value. So may the force be with us. Thank you. Thank you very much. Brilliant talk. Any questions from the audience? Please, here's your mic. In your opinion, which one of the crowdfunding options is the most promising? I think the two in the middle looked good, but... That's a very difficult question, but a good one. I think the most pragmatic ones are those where you have a specific drug, of course, a specific aim, and you can calculate how much it will take, and it's kind of sexy to invest there. I think the approval process would also be quite promising, because again, it's pragmatic. You need good contracts, but I think this regional aspect is really, really great. Still, nevertheless, I think we can also dream about the first version. Why not find a solution like a tax-free zone where we can produce, at least for the essential medication on the list? It would not be a pharmaceutical industry like the other ones on the market. So, limited to these productions, why shouldn't it be possible to find a solution there, in our dreams? I don't know whether it's feasible, but still, I would like to dream about that. Thank you. Next question. Here and there. I have a question regarding the other medicines on this list. Are there many medicines without a patent, or with a valid license? Or do we have to buy the license for some of them, which would be impossible for crowdfunding? That's true. We have about 320 medications on the list for adults and about 230 for children. I think about 90% are out-of-patent drugs. But the problem is increasing a little bit, because the pharmaceutical industry has developed some really important medications for the treatment of cancer, for the treatment of hepatitis, and these are still patented drugs, and they are nevertheless on the list of essential medicines.
That will change a little bit, but nevertheless, I mean, 80% or 90%, the majority of the medication on the list, is out-of-patent medication. We could focus on that. I wouldn't care about those patented drugs, because I think the situation is so clear: we can produce them, we can produce generic versions (that is legal if you get an approval), and then, hooray. Okay. One very short last question. I guess the processes for approving medication are not transparent. Wouldn't that be a good place to start: open-sourcing the standardization process? The approval process? The approval process, yeah. The approval process is transparent, meaning the rules are clear. If you think about the people behind it who are deciding yes or no, there will always be doubts, but the process per se is transparent. It's a bit different everywhere: we have different processes in the United States; in Europe, with the EMA; in Asia it's different, and in Africa it's different. I don't know the details about the Asian way, for example, so I can't say, but you're totally right. This would be one of the tasks, I think, the homework, if you want, before starting: to have a look at these processes and regulations. Absolutely important. Thank you. Okay. Thank you all very much. And thank you.
|
Even within the industrialized world we cannot guarantee access to affordable essential medicine for everyone. In developing countries the situation is even worse. Can crowdfunding provide money for the production of out-of-patent medicine? Can citizen science be used to supervise the pharma industry?
|
10.5446/20597 (DOI)
|
Okay, for our final talk for today, I am very happy to announce Eden Kupermintz. He is working at the Utopia Festival and is based in Tel Aviv, which is very nice to hear. He is also passionate about science fiction. Welcome, Eden. Okay, is this on? Yes, it is. Wow. Okay. So, hi. As our moderator said, my name is Eden. I am soon to be 29. I study at Tel Aviv University. I am doing my MA in general history, which actually means European history; we will talk about that in a second. My BA is in philosophy and history. One of my professors from the BA, whom I hate, always told me to never apologize when you are giving a talk. So, just to piss him off, I am going to start with three apologies. The first one is that later on in the talk, I will be giving some examples of science fiction. Now, I would like to admit my ignorance of non-Western science fiction and just say that those fields are massive: French science fiction, Chinese science fiction, Japanese, Eastern European, and all over the globe. These examples are just those that are closer to my heart. The second apology is to the thinkers that I will quote, because I will completely butcher their theory for the sake of brevity. And I encourage you to go online; all of these texts are by, unfortunately, deceased people, and some of them have been released into the public domain, so you can read them all for yourself. And the last apology is to you, because I will be covering some basic terminology. And if any of you are familiar with it, then I apologize for repeating things that you already know. That being said, let's get started. What am I here today to tell you, or convince you, or even sell you? I'm really grateful to be giving this talk on the second day, because I heard many speakers, Julia and Eva, who gave an excellent talk before us, Richard Sennett, who spoke yesterday, and others, say that there needs to be new thinking about what a city is. And I couldn't agree more. But what are the blueprints which we can use to rethink this very basic idea that governs the way that we live? I'm here to tell you that science fiction represents a source for those blueprints. It's not the only source, but it's definitely one which is accessible to all. So the way we will be doing this is by following this trail. Our first stop will be postmodernism. I will talk very briefly about what postmodernism is, and even try to create something which one of my professors whom I like called the bumper sticker version of postmodernism. Then we will talk about urbanism. And I will ask the question: what is urbanism? And how can we use it to be radical? And lastly, we will answer that question by looking at science fiction. Okay? So that's the plan for today. Let's get started. So the first stop that we have is the bumper sticker version of postmodernism. Now, there's a reason I chose this image, because it's not actually a pun. It looks like a pun, but it relies on the fact that you don't know how to pronounce Immanuel Kant's name. It's not pronounced like "can't", which is what the pun is supposedly meant to be, but "Kant". However, and I apologize in advance if I'm offending anyone, if you're American and you read it, the pun works. This drives home the point which postmodernism would like to make: there is no such thing as objective knowledge devoid of context. Everything depends on the eyes of the observer who reads the sentence. Now, is that to say that I am a college student having a mind-blown moment that nothing exists? No. The sentence clearly exists.
The meaning which you derive from it depends on your perception of reality. Now, is that perception of reality naive? Is it free of outside influences that would like to shape the way in which you see the world? Of course not. You are constantly being shaped by the forms around you. You are all sitting very nicely now and being quiet while I'm talking, because I'm the authority figure. If you wanted to ask a question, you would raise your hand. When was the last time anyone enforced these rules on you? Way back when you were in kindergarten? Ever since then, you have been silently modeled to fit these structures. And who does the modeling? Power. More than that: power is the modeling. The power structure in this room doesn't make the chairs into these neat rows. It is the neat rows. And what this power does is activate other forces, at little cost to itself, that monitor and cultivate society. Now, you can just take "society", cross it off and fill it in with anything you want: this room, your life, your relationship, school, work, whatever, society, the city. Power doesn't need to operate on you. There is no police officer here, or someone from the convention, telling you how to sit. And yet, here you are. It activates other forces like shame, anger, hate, and positive forces as well, like happiness, love, compassion, to make sure that you follow the structures which make it live. This is the bumper sticker version of postmodernism. And believe me, it's a bumper sticker. There are a lot of side questions that can be asked, like: what is power? Who is power? How does power operate? The question that I would like to focus on is: how can we resist power? How can we make sure that these hidden forces which we have no control over shape our lives in ways which we find optimal? Now, this train of thought called postmodernism, as a very general term, is essentially an urban train of thought. All its speakers, thinkers, and writers grew up in the city. And that's no accident. It's post-modern, and the modern experience is entirely influenced by the city. Ever since the 12th century (and we won't go into the history side of things now), cities in Europe, but also in China and Turkey and other places, became, if not the main way in which we live our lives, then the main influence on how we live; even the villager living outside of the city would sell his produce inside the city. So it's no wonder that postmodernism is urbanism. And when we say it is urbanism, what do we mean? It is the way in which we think about being, living, producing in the city. What you see here is a picture. This guy is Michel Foucault. This guy is Jean-Paul Sartre: two great French thinkers. And what they're doing here is taking part in a protest. This protest was in '72, and it protested the shooting of a Maoist activist, shot by a guard working for Renault, one of the biggest automotive companies in the world. The claim was that it was no accident: Renault hired the guard to assassinate this activist, who was working for workers' rights within the city of Paris. This protest did not end here. They marched through the streets of Paris. And when the protest was done (and this is documented), Foucault and Sartre took a seat at a Parisian cafe to discuss the next steps. All of their ideas, all of their thoughts, all of their theories were formulated within the city. Now, you think these chairs are power structures? Think about the street. Think about where you walk, who you talk to.
I don't know how many of you here have experienced the shocking day-to-day reality of being a stranger in a city, or being in a place where you're not supposed to be, where the skin color of the people around you is different from yours, where your gender is not the right gender, and how segregated our cities have become. Personally, I come from Tel Aviv, one of the most segregated cities in the world. Five minutes from the high street, there is one of the poorest neighborhoods. And if one of those poor people came to the main streets of Tel Aviv, there wouldn't be a police officer to take them down. There wouldn't be someone to shoot them, but they would feel the full force of shame, of being perceived and being looked at as something weird. So here, urbanism tells us, is where resistance begins. You must say to power that there are other ways to live, that there are techniques and technologies which enable us to be different from what power wants us to be. New ways in which to live. And by live, I mean everything: eating, having sex, giving birth, going to sleep, collaborating. Every time you think about a new way in which to be, you are resisting. Okay, great. What does science fiction have to do with it? Science fiction is the art of thinking about new ways to be. It is the art of asking what in our lives could be different: the way we travel, the way we talk, the way we do everything. Diversity and richness and multiplicity are the name of the game. You are seeing an illustration by Moebius, one of the greatest comic book artists of all time, with Geof Darrow, another amazing comic book artist. And when you look at it for the first time, it looks amazing. It looks exactly like what I'm talking about. Look at all these people. Look at the colors. Look at their faces. Look at their skin colors. Everything is so diverse. But if you look a bit deeper, you will see that one thing has been maintained here: gender. All the gender roles are as you know them. This woman has a baby. This woman is scantily clothed, and she's engaging not in the trade, but in the background of the trade. So is the clothing: the men's clothing is practical. It's still flamboyant, but it's practical. So this shows us how science fiction can both resist the power structures and preserve them at the same time. The way in which it preserves them is if it takes for granted the facts of our lives and projects them, with its vision, into the future. So it says: this one thing is different, but these ten things are the same. So it reinforces, reenacts, reinvigorates the current power structure. To drive my point home, we'll take a look at some examples. They are somewhat in chronological order, but that's just an accident. I'm not making a chronological argument here, say, that science fiction in the 50s was better than the science fiction of today. It is, but that's not the point I want to make today. Our journey starts with one of the fathers of science fiction, Arthur C. Clarke. He wrote many books. Some of them you know: Childhood's End and Rendezvous with Rama; his name is a mainstay. But he also wrote this book, which is a bit underrated and underread, called The City and the Stars. Can we dim the lights a bit? A bit. Oh, perfect. Okay. So in it, you have a city that is controlled by a computerized entity which encodes the DNA of every one of its citizens. Should a citizen fall to disease or violence, it simply recreates that citizen from the database.
It also maintains the city itself, leading to a very interesting fact that you can see clearly here on the cover. The city is eternal. It is self-sufficient. It is its own object, and man is the small, the transient, the momentary. But then one of the citizens of this city breaks the mold, goes outside (and I won't spoil the ending, because it's a great book), and it does something which changes everything. The cover itself and the story tell us all sorts of things about the city that Arthur C. Clarke would like to imagine us living in in the future. First of all, there are two polar opposites: the city and man. This is a way of thinking which would be familiar to anyone here who studied the architecture and urbanism of the 40s and the 50s. The structure and man are opposites. We create the interface, but in essence, they stand apart. The city is grandiose. Like Schopenhauer said, architecture is music which froze in time. Think about that saying. The grandeur of the city is maintained in Clarke's work. And even though it tells the story of a radical, someone who breaks away from the system, he's the genius. He's the prodigy. He's one in a million. You aren't rebels. You aren't fit to be rebels. You are code in the database. Amongst you somewhere (you're starting to see the Matrix comparisons), amongst you somewhere, there hides the One, the aberration. That's what The Matrix tells you as well. It's not one of my examples, by the way. What The Matrix tells you is: be docile, except for that one of you who is the genius. That's a mainstay of the non-radical science fiction group. Sure, rebellion is possible, but it's only the exceptional human who can perform it. The second example is by Ursula K. Le Guin, one of the best science fiction writers to have ever existed. She wrote a lot of books, but she also wrote The Dispossessed. The Dispossessed tells of two planetary bodies. One is the mother planet; the other is its moon. The moon is very stark; it is a desert. And because of its starkness and lack of resources, the community which it creates is a communist one. Now, when I say communism, I don't mean Marxism-Leninism, and I don't mean Stalinism. I mean the old concept of the soviet, the working agricultural body whose members work in unison towards a goal. It existed for like two years, but it was idolized for a hundred. So these Chomskyan syndicates operate on the moon and regulate their own resources, while the planet remains a distant partner to which they export all their goods. Until a philosopher, who is somewhat of an aberration within this soviet society, decides to leave the moon and go back to the planet. Now, he was born on the moon. His grandparents moved there, but he was born there; it's all he knows. And when he arrives at the planet, he sees a horrible reality. The planet is paradise. Resources are everywhere. They're so everywhere that society is free. Read: free equals capitalist. Everybody can consume as much as they want, but of course, a few hold the faucet. So he begins to interrogate the society and disassemble it and destroy it from within, although he doesn't want to do that. And everything comes crashing down in a very beautiful and also very Le Guin-esque way. But Ursula took a good step beyond Arthur. She moved forward. She saw that the city is not this eternal object to be worshipped, but a very real object, grounded in the politics of every society which builds it. But what she missed was that everything is inexorably part of the city. There is no escape from Ursula's city.
Even if you are against it, even if you are everything which the city hates, you are within it. And there is no way to operate outside of it. That philosopher, who leaves his moon for the capitalist planet, never once thinks of the possibility of running away into the rugged wilderness. He never once thinks of the possibility that there is something outside of this mega-capitalist city in which he has arrived. Ursula's tale begins to chip away at the boundaries of the objectified city. Wow, that was a big sentence. But it doesn't go the full distance. The experience is still completely understood through the lens of the city. And along comes this madman. He has no problem with me calling him a madman. He quite likes that. He's still alive. Ursula is still alive as well. Arthur C. Clarke isn't. M. John Harrison. M. John Harrison writes weird books. What he does is he takes a genre and he deconstructs it completely. He wrote space opera. He wrote sci-fi adventure. And he also wrote Viriconium. He wrote Viriconium from the 70s and up until the 2000s, when the final compendium was released. And I'm actually cheating a bit, because this is not actually the cover of the book. This guy I found on Tumblr, he made a post, possible covers for Viriconium that I would like to see. This is actually an illustration for Edgar Allan Poe's story, The Man of the Crowd, made by a guy called Harry Clarke in 1928. But it's perfect for our needs, because think of the eternal city from before, and then of the desert moon, and look at this. This looks more like a city the way that I know it. It is degraded. It is chaotic. It is brutal and violent and beautiful and ugly. What John Harrison does in Viriconium is say: look, you think that if you get a board large enough and you draw the lines of the streets, you will know the city. But the city is unknowable, because what knowledge does it give you to know the name of the street? Listen to that, 22nd Street and 34th Street. Friedrichstraße and the countless streets which cross it. What does that tell you about the actual place? Nothing. The city is always moving. It makes Viriconium a very hard book to read, because literally the hero lives on a different street corner every chapter. It's the same house, but it's on a different intersection of streets. And what John Harrison is trying to tell you is: stop thinking about the city as an immovable object. It is not an immovable object. Every time that one of us perceives it, they see something else. Instead of letting the central power tell you what this and that corner mean, define it yourself. And even if you don't want to define it yourself, it will happen anyway. So you might as well take control of the process. Instead of letting it operate beneath the surface, own it. And let's make cities that are fluid. The last author is very much alive. He's also very active on Twitter. He's a great guy, Jeff VanderMeer. He wrote a trilogy of books with which he started a genre: climate change horror science fiction, in which the environment becomes the enemy, or something to be understood. But before he did all that, he wrote Veniss Underground. It's not a typo. It's not actually the name of the city. It's just what its denizens call it. And in this very cyberpunk city, as you can see from this, there exist two layers to Veniss. In Veniss proper, well, I'm taking a chance here, people like you live.
Students, upper middle class, middle class, maybe some lower middle class, but that's about it, and of course the rich. They live and they operate and they breathe and they work above. And underground are the unwanted, those for whom the city has no use. But if that was the point of the book, it wouldn't be that interesting, because we've said it so many times before. We get it. The city has its wanted and its unwanted, it is segregated, and the wanted distance themselves from the unwanted. But what VanderMeer says is even more interesting. The city exploits both the wanted and the unwanted to the same degree. The lines which you think separate you from those who are unwanted are superficial. Even if you have the right skin color, even if you have the right name, even if you have the right body type. You are still being exploited. But more than that, stop worshiping the oppressed as the free. And that's what the three stories that we showed do, to varying degrees. The oppressed have always been romanticized in what is essentially, and here I'm self-deprecating like I like to do, bourgeois thought. Oh, the worker has a hard time, but at least he has his freedom. He goes back home and he can drink his Guinness and he can go to sleep how he wants and he can curse and he can swear and he can be physical, while we the bourgeoisie are so limited in our social shells; at least they have their freedom. And Jeff VanderMeer says: what the hell are you talking about? There's a reason they're called the oppressed. There's a reason that they are called the exploited. Now to summarize, just in case this all seems to you like science fiction in its derogatory sense, that it doesn't matter and it's all just ideas, I'd like to tell you a story about a city that you all know. It's called Jerusalem, and it's one of the most hotly contested cities in the last 4,000 years. But now it appears that even this ancient struggle has ways to create new dimensions to it, because apparently, remember Veniss Underground? Now there's Jerusalem underground. One of Israel's leading newspapers, its liberal newspaper, Haaretz, ran a story about the structures that are being built beneath Jerusalem as archaeologists drill down to discover the secrets of the past. But in those hallways, they haven't just found repositories or storage houses. They found spaces to make their point. That point happens to be: Jerusalem belongs to the Jewish people, of course. They use those spaces which were created underground to tell a story, to say this city is ours. It's just one example of how space is not just a place in which we sit or put our things or sleep. It is the backdrop to the stories which we tell about ourselves and which society tells in our name, for us. Only by reclaiming those spaces and understanding them as these radical or conservative vectors can we ever hope to actually implement our ideas about open societies in smart cities and communes. The first step is to reimagine the city. Thank you very much. Thank you very much, Eden. We have time for a very short question and a quite short answer, maybe. Are there questions? I'll also be around. Yeah, sure. Pardon. Could you wait for the mic? Sorry. Pardon. Well, short, simple question. If the city is everything now and everything is the city, to start reimagining it is quite a big task, because it involves reimagining everything. So I'd like you to give us a hint about how you'd start off doing that. Perfect question. Every time I give a talk about this subject, this is the question that's asked of me. So you got us pumped.
What now? What should we do? I don't have a good answer. The answer that I do have is that you have to break down the unit as little, as small as you can, from the city to the neighborhood to the street. I'll give you a personal story. In Tel Aviv, there was an old police station left there by the British. And when they left, the military built a base there. And when they left, that territory was promised to the citizens of Tel Aviv as a park. But instead, the city wanted to make some money. So they said, let's not build a park. Let's build five towers. The citizens heard of this, and one of them, and I'm biased here, my mother, decided to found for the first time a neighborhood committee. And that neighborhood committee, consisting of 10, 15 people, worked tirelessly, day in and day out, to block the construction of those towers and get the park built. And they succeeded. They succeeded not by tackling these grandiose issues. They succeeded by working in the field and politicizing their neighbors. Why did they need science fiction for that? Why did they need to reimagine the city for that? For that first move, for that first standing up to say, these streets belong to us and not to you, and this is a negotiation, you need that switch in your thought. I'm not going to take credit for what my mom did, but from discussions with her, she told me that she's also a reader of science fiction. It enabled her to look at the reality and say, why should this be the case? Why should we live as those that came before us have lived? We can live in new ways, not big ways, a neighborhood committee. And from that committee can be sparked great things. It's not a good answer, because it's more complicated than that, but it is an answer, and I think that's where we need to start. Thank you. Thank you.
|
Science fiction has long known this and has bequeathed this knowledge to digital culture: to liberate is to create new ways of being, but first one needs to describe a new place for them to be. And in what space is the (post)modern world defined? None other than the city. This too, science fiction has long known and has bequeathed to digital culture. From its genesis, science fiction has been obsessed with the city, imagining and re-imagining it in beautiful, ugly, invigorating and depressing ways.
|
10.5446/20599 (DOI)
|
Oh hey, can you hear me? So in the meantime: I suspect maybe some of you are familiar with using a laptop computer. Maybe some of you are using a backpack every day. Maybe some of you have ever experienced neck pain, back pain, maybe aches that go all along the arm, to the elbows, because of too much typing, you know. I learned some years ago these very simple exercises that really changed the way I related to my upper back and shoulders, and I have never experienced pain again since, as I practice them quite regularly. I want to show them to you. If you agree, you can do them at about any time. It's quite quick. I'll do kind of an abridged version here. Whatever happens, be careful. You know, everyone's mileage is different, but the principle is simple. Look, you put your shoulders up, then down, up, then down, like on, off, on, off. And you do that a couple of times, actually 10, 12 times maybe. We don't have time now. Then you roll your shoulders like this, gently at first, then depending on how much you suffer, you can vary that. Then in the other direction, then in a disjointed way, it goes very well with music, like funk music. Then the other direction, again, you should do this 10, 12 times, whatever works for you. And now the point is to really warm up the upper part of the back, the spine, and well, mostly the shoulder blades. Now that they're warm, you can turn your head all the way to the left, then all the way to the right. You may hear and feel a cracking that may feel really good, actually. And that is the tension going away. Then all that is left is to bring your head forward, like a turtle, and back, forward, and back. So to summarize: on, off, on, off, then rolling in one direction, rolling in the other, disjointed in one direction, disjointed in the other, all the way left, all the way right, and again, then forward and back, like a turtle. You can do that several times a day when you feel a bit of pain, or anytime. I hope it helps, like it helped me. Well, it is now time we begin. So I'm Jérémie Zimmermann. I would define myself as a hacker, an enthusiast of technology and systems in general who likes to understand how they work and hopefully make them work better. I used to be an activist for seven years, full, full, full, full time. I was the main coordinator and spokesperson and campaigner and analyst and graphic designer and whatnot for La Quadrature du Net, a Paris-based citizen organization defending freedom online. I got in many fights in France and in the European Parliament: the telecoms package, ACTA, net neutrality, and so on. And I came very, very, very close to a very bad burnout, and I learned quite a bit about this. So, hi. Am I on? Yes. Hi, my name is Emily. I'm a well-being massage practitioner. I have been doing that for 10 years now. So I make, I share, and I study massage, and recently classical Chinese medicine as well. I like to work with a combination of techniques in a rather open, freestyle way. And I like to collaborate with people outside my field, like musicians, for example. And I like to write massage protocols. And I am not especially a tech enthusiast, and I was not computer literate. But I improved on that second point and had much pleasure learning, thanks to Jérémie and the lovely friends he introduced me to. So when we met with Emily, I figured that even if I knew about hacking computers, networks, parliaments, legislations, press releases, graphic design, video games and whatnot, I had no clue what massage was about.
So since then, I have learned a lot from being around her. But when we met, I was so badly burnt out, I slept three hours here and three hours there. I had my phone constantly burning on my ear, was running from place to place, and wasn't really myself anymore. Emily offered to give me my first massage, and after one hour, I figured I ended up in a state where I was so much re-centered on myself, so much myself again, in a slow, deep breathing mode. I opened my eyes and I remember looking at her and saying, well, hey, you're a hacker. Right, so we've been talking already a bit about hacking, activism and what it is that we do. But it's actually when Jérémie called me a hacker that I really understood the meaning of the word and that I could really relate to it. So then I really gladly accepted his invitation to join him and La Quadrature du Net at a hacker summer camp in the Netherlands in 2013. Well, La Quadrature du Net was experimenting with its first tea house, now famous, and other friends were there as well who'd been reflecting on the same questions. So that's basically where Hacking with Care started for the first time. So that was three years ago. So what's hacking, what's care, and what's Hacking with Care? To me, hacking is an emancipatory practice of humans versus systems or tools. It is a systemic approach where you have to understand the whole box in order to be able to think outside of it. It is sometimes about breaking, it is about remixing, it is always about inventing. It is, before everything, to me, a set of ethical values. It is about the free flow of information, the free sharing of knowledge, and it is about enabling others to participate. Care: there would be much to say about care, but simply put, I would say that care is about goodness, even about common goodness, actually, and nurturing that. It's also about creating and sustaining harmonies and finding balance between one and the world, one and the other beings in the world, and one with oneself as well. I would say that care is not defined by a particular set of skills; rather, it has to do with a presence and attention and an intention. What is it that you are going to do with what you love to do, what vision of the world are you going to support with what you do? And so, in that view, care, community care, self-care, for us, will be all the same thing. So this is what we mean by Hacking with Care. It's shared vision, shared ethics with common good at heart. It is our way to create situations, moments, formats in which to exchange and share. It is about producing resources to embed this knowledge we want to share, and it is about giving us the ability to engage in research. This is how we plan to hack care. This is how we plan to be hacking with care. This is how we do it already. And so basically, from this very happy encounter between us and other friends, with this proposition we would like to scale and bring more people together and make alliances, because actually we think that even if from a distance it doesn't seem at first that we're doing the same thing, well, in fact, we are. So let's do this together. So, care for hackers, activists and whistleblowers. What's common to activists, hackers, and I guess whistleblowers as well, I think, is passion. Passion as an engine that makes you do things better, that makes you do things more. Passion as a fuel that takes all of yourself into the battle, into the cause, into the project.
All of yourself means all your resources, your mental and your physical resources. This passion is something burning hot. It is something of tremendous power. We used to joke in the European Parliament that one passionate activist equates to ten, maybe a hundred robotic lobbyists. So this passion is something very beautiful, very precious, that we must preserve. But the flip side of this passion is that, like everything that is tremendously hot, it burns. And it burns, and the risk is that it ends up burning you. When you're exhausting all your resources and it keeps burning, the result is what we call a burnout, and a burnout has terrible consequences. It has terrible consequences first of all on your cause, on your project, on whatever this is you're defending. But it also has terrible consequences on yourself. It makes you stressed, anxious, therefore it makes you make mistakes, therefore it makes you lose your confidence, therefore it will make you be more aggressive, and therefore it has consequences on the people around you. It makes you make mistakes in the way you behave with people around you. And this is what leads to anger, this is what leads to tension, this is what could lead to infighting. And we know how much infighting is one of the main causes of failure of social movements and projects alike. So mitigating burnout, of course, is a major objective. Also, keep in mind that activists in many contexts, and increasingly, we hope, will have to handle security seriously. And this brings extra constraints on the body and on the mind. Handling those keys, keeping those keys on a USB stick at all times, keeping one or several laptops with you at all times, retaining physical control over those machines. How do you go to a doctor? How do you go to a sauna? How do you lie down on the beach when you feel tired if you have to keep those three, four, five laptops with you? It's something also we should keep in mind. Also, what is very interesting to me after my own experience is that when you're burning out, you're subjected to fast time. Everything around you is going faster and faster. Those phone calls, those tweets, those emails, you have to take care of them. You have to handle them. You have this feeling that if you stop, everything is going to crumble. So you have to be on top of it, which means that you become a slave to your environment. Your environment defines the time in which you live. While we all know that it is in other moments of time, in those long moments of time, when you decide, when you control, when you sit down, when you breathe deeply, that you can look at the future, that you can think for your well-being. It is in those moments of longer time that we can think strategically. And strategic thinking is what we need today, whatever your cause, whatever your project. This is what we badly need today. So it is about collectively finding ways to mitigate this burnout, to detect it when it comes, but also about taking back control of our time and of our timelines. So, on the subject of care for hackers: why care for hackers and activists? Of course, we're interested in care for everyone, but we do have at heart to focus on hackers and activists and whistleblowers, because not only do they wear themselves out in very specific conditions, but they also face pretty intense repression from the systems of power that they disturb with their action. People, companies, states who don't have the public interest at heart, unlike our friends, and rather favor their profits and privileges.
So it is a fact that hackers, activists, whistleblowers are closely monitored, they are intimidated, harassed, and they get all sorts of abuses, whether made-up accusations, disproportionate charges and whatnot. So how do we bring care in such a complex and sensitive context, on top of all the specificities that Jérémie has described? How do we facilitate that access to care? As a caregiver, that will mean learning and experimenting and adapting with security, because obviously, as we have understood here, access to care will be closely tied to security aspects, which will have to do also with anonymity and privacy and also much more. And actually, later we will argue that much of it would actually benefit everyone and should be the default setting for everyone, but right now we're focusing on hackers and activists. And one other very important motivation for us to bring care to hackers and activists is gratitude. So, our care as an expression of our gratitude for their action. And with that, the idea is that care is transitive, it communicates itself, and although we are very aware that a massage cannot get someone out of prison, cannot win a case, cannot bring the justice which would actually be really good for their health, a massage can maybe help a lawyer, a journalist, a friend, a family, someone who's out there kicking ass for us and for them. And this very notion of care as gratitude, I think, demonstrates that care is not only hacking, that it is also actually activism. So, hacker ethics and tools for caregivers. When we speak of hacker ethics and tools for caregivers, we refer to, for example, very concretely, access and circulation of knowledge and resources for care, as opposed to their privatization and restrictions around them with paywalls, copyrights, trade agreements and whatnot. We refer to, as we've evoked already, best practices with regards to anonymity and privacy, and even amnesia, that would be a good one as well. And here, let me paraphrase the publishing organization WikiLeaks: we want transparency for the powerful and privacy for the weak. And who are the weak? Well, the list is very long, unfortunately, very sadly. But if we listen closely here, privacy for the weak is part of the Hippocratic oath, basically. So that should bring to mind the doctor-patient relationship and the contract between them. And how is this done today, in a context where there is massive data exploitation for commercial or social control purposes? And how is this done today with the Internet of Things and of Medical Things, which increases the potential for misuse and failure? And also, how is this done today when we're in a context where there are abuses of power in the name of anti-terror, and basically caregivers and other state bodies are being asked to turn into thought police and therefore breach confidentiality, breach privacy with their patients? So that's the context. And when we talk about exploitation of data, it's important to understand that it has very concrete implications for everyone's freedoms and what they can do with their lives. So, for example, are you going to get this job if you're planning to have a baby? Are you going to get fired because you smoke weed? How expensive is your health insurance going to be if you can't show that you walked 10,000 steps last month? And can you enter this country, or another one, if you're HIV positive? So that's very, very concrete repercussions on freedoms.
As a quick question here, who encrypts their communication with their doctors and other caregivers? You see, there's one hand in the room. So for all the others here, what's happening when you're asking a question about an STD, about an abortion, about mental illness? What's happening is that it's going unencrypted, right? So we know now that all our communications, all our behaviors are being recorded, aggregated into profiles, and are potentially being used against us for commercial or political purposes. But you know what's worse? The worst consequence of mass surveillance, maybe, is self-censorship. It's the things we don't say, but also the things we don't do, the places we don't go, the people we don't associate with. So what are we going to do when we feel low? Will we stop asking for help? When we're asking questions about this thing that is growing somewhere on our bodies, will we stop looking for answers? Will we stop caring for each other because we are too afraid that those things will be known and used against us? So hacker ethics and tools provide a concrete path of action. With free/libre software, we will take back control of our machines. With decentralized services, we will take back control of the infrastructure and know where our data is. With end-to-end encryption, we will get hold of our keys and secure ourselves and our communications. So our hope is that by transmitting those tools and those values to the caregivers, maybe they can help, maybe they can help by leading by example. You know how the white gown is a symbol of authority for many people. Maybe this authority could be put to good use. So this is a bit dark and I'm sorry, but our hope is that we can do all this in a joyful way, because hacker culture is mostly about a playful enthusiasm, about doing things in an inventive way. And that's precisely the point and the way we want to do it with Hacking with Care. So, what we've done so far and what's next? For the past three years, what we've done is very concretely offer care to individuals and organizations. And we cannot tell you much about this, because privacy, right? And we've also been present at hacker congresses and camps and conferences, with workshops, care sessions, documentation, and often teaming up with others like La Quadrature du Net, very good friends, the Centre for Investigative Journalism, Tactical Tech, Courage. And we have created some resources as well. You click. So this hands massage manual for everyone, for example, which you can find and download on our wiki, available in English and Portuguese for now, and soon Spanish and more. And there are also other massage protocols on the wiki that you can find. And we've compiled other resources that other people have put together in the same spirit on the wiki as well, for hackers and caregivers. So what else did we do? That was a screenshot of a video tutorial that we shot for freedom of movement exercises. And we dedicated it, as you can maybe see, to Julian Assange, who will have been arbitrarily detained inside the Ecuadorian embassy in London for four years on the 19th of June this year. So that's dedicated to him, and it will help you with your freedom of movement. So what we want to do in the future is basically keep doing this and doing more.
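To make the end-to-end encryption idea above concrete, here is a minimal sketch using PyNaCl, a Python binding to libsodium. The library choice and the toy keys and message are our illustrative assumptions, not a tool the speakers name; the point is only that each party holds their own private key and that only the intended recipient can open the message.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Library, keys and message are illustrative assumptions, not named in the talk.
from nacl.public import PrivateKey, Box

# Each party generates a keypair and never shares the private half.
patient_key = PrivateKey.generate()
doctor_key = PrivateKey.generate()

# The patient seals a message to the doctor's public key...
patient_box = Box(patient_key, doctor_key.public_key)
ciphertext = patient_box.encrypt(b"Question about a test result")

# ...and only the doctor's private key can open it.
doctor_box = Box(doctor_key, patient_key.public_key)
assert doctor_box.decrypt(ciphertext) == b"Question about a test result"
```

Whatever travels between the two, servers included, only ever sees the ciphertext, which is exactly the "get hold of our keys" point made above.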
We want to experiment, keep on experimenting with new formats, new formats for actions, for events, some of them directed towards organizations, going directly into the field with activists and hackers to try to transmit this knowledge through practice in the field, but also formats and events directed towards caregivers themselves, to help them encrypt their communications, for instance. We also want to be able to better respond to crises, individual and maybe collective. And we want to engage in more research, maybe starting with the very question of burnout in activist communities. Our capacity to scale and grow all of this will mostly depend on the variety of skills and the material support we collect from everyone (hint, hint: material support). It will depend also on the organizational skills, the way we will manage to make all this work together, arrange all this beautiful diversity in a joyful and meaningful way. And maybe more importantly, our capacity to grow and scale will depend on you and the way you will feel like participating and contributing. Yeah, there are basically as many entries to this project as there are hearts and skills. So you can contribute whichever way you feel. We have lots of ideas, and many ideas we haven't had yet as well. And we can discuss this after the talk. We will be behind the dome in the relax area, where we will meet with another hacker, Sarah, who is talking right now on gender and medicine. So we all meet together in the relax area after this talk. And so we're very grateful for the occasion to speak here. And before we open the floor, we'd like to take some time for a little shout-out to everyone who cannot be here in the room today. All the ones who just can't be here, maybe because of the price of the ticket, and all the ones who are watching the streams. But a particular focus on all the ones whose freedom is being restricted right now. All the ones we wish could be exercising all their freedoms with us today, the way it should be. As you were saying, what we really need for everyone is justice, and care is just what we can do in the meantime. Right, so we send our love and support to Julian Assange, to Chelsea Manning, to Edward Snowden, Jeremy Hammond, Barrett Brown, Lauri Love, and all the ones we are not naming. And many, many more. Thank you very much for being here. Thank you for everything, Hacking with Care. Join us. And we'll be now opening the floor for questions. Be aware that we have a list of keywords that, if they are pronounced during this session, will immediately lead to a song being sung by Emily and I. Please. Who has a question? I don't see anybody here. Don't be shy. Somebody wants to ask a question? It's now. Well, there are no questions. Be aware that if there is no question, you will have the singing. Okay, now we have a question. There is a question. It's good as a threat as well. Hi, thank you for this talk. I would like to suggest nonviolent communication as a way to learn how to talk to each other with more empathy, as one of the caring activities that I would like to help you with in the future. Great. Very interesting. Thank you. Thank you. Is there another comment or another question? No? So, well, thank you very much. We have this song actually. It's a song, okay. Yeah, we have this song that we sing. It's not by us, but we sing it because it gives us a lot of courage. It shakes off the stress. And also we like to think that when there is someone listening in on us, maybe, we like to sing this song.
Singing as a form of care. Yeah, singing is care. Singing as a form of care. It's in my face. Thank you.
|
Hacking with Care is born from the magical encounter of Emily King, a massage artist, Jérémie Zimmermann, an internet activist, and friends with common good at heart. The collective explores well-being and care as components of hacking and activism, while also seeking to liberate care, and to inspire alliances between "caregivers" of different competences.
|
10.5446/20606 (DOI)
|
How are you? Have you seen interesting talks so far? Like also today here? I haven't made it here today. So okay. Yeah, Fashion Hack Day. It's, as I was introduced, thanks, the first, the world's first fashion hackathon. And let me quickly introduce you to our event and explain a little bit what it's about. And after my introduction, we will see, I think, four finalists on stage, and they will pitch the stuff that they made to you. So it's a little bit like the best, the cream of the weekend for you. Fashion meets technology. I mean, I guess you're experts in this, since you're sitting here now. If you have seen other talks, I guess you learned a lot maybe about this, or maybe your background is from fashion or from tech. Who of you is from fashion? One, two. Tech? One, okay. Two, that makes two teams. And the others, what's your background? More like media or programming? What are you into? Okay, we will talk afterwards. So yeah, the thing is fashion meets tech. How can you bridge these two worlds to form up something new? That was the question. And we invited 50 participants from all over the world to come to Berlin. There was an application form online, and we had about 100 candidates that sent us their ideas. And we had to select 40 of them, and in the end we were 50. So we brought them to Berlin. Fashion people, tech people from Boston, Italy, the Netherlands; even from Russia we had people there. Then we have these 50 people. We gave them 20 high-class mentors. People who have already shown their success in fashion tech; especially in the Netherlands you have a really cool scene for that. And we brought the best people from this industry to Berlin, and we put them together with our 50 candidates. We gave them 48 hours. 48 hours. In 48 hours you have to come up with something new. You have to convince the jury of your idea. And yeah, if you have never heard about electronics, no problem. There's somebody that has never heard about fashion before, but you can form a team. And since, well, maybe I should explain what a hackathon is. A hackathon normally takes place more in a software setting. You take 100 programmers, give them a challenge like, okay, program the next app for selling fashion online. And give them 48 hours, and they come up with an app, and they get a prize, and then it's over, more or less. Since this was the first time in the world that we brought together real fashion designers from the fashion industries with software developers, there was a question coming up. How do fashion designers work? I mean, this is a totally different workflow than software. You have to prepare, you have to think about materials. And so we dived into the whole thing, into fashion weeks and so on. We talked to these fashion designers, and we decided to give them a preparation phase of about three weeks. It was really rapid, like a minimum viable product, so to say: we just founded a secret Facebook group and invited all of them to share their ideas, and the fashion designers, they could start working on some ideas, just to get the process running. So that's how we prepared. I want to share with you some experiences that we made, what we learned and what we found out about the participants, and just what's interesting about the whole thing. So here are some insights. We had 63% women in our hackathon, which is really unusual.
And we're really proud of that, that we could have so many women in tech, getting people interested in this topic and getting them to also be part of our event. And we didn't select by gender, we selected by the ideas. So then, their background is design and tech in the first place. Design is a third, tech is a quarter, and the rest is something else, like from the textile industries, from the media, project management, event management, these things. So we thought this is a pretty cool mixture of people that should get to know each other. What can you do with them? Our briefing for them was, there are some in the audience, so this was one of the first slides we presented to them: We are here to create the future of digital fashion, which sounds pretty big, but it's true. It's one step in this direction. And oh my God, let's have some fun doing it. That's also a big thing about hackathons. You want to have fun and get to know other people and just have a good time. And of course, we want to break some rules. This is not meant to work for the industries. Well, if you have an idea that works for the industries, it's cool, but just break the rules if it's necessary. No problem for us. Before we start, a few words about us, about me and my team. Our company is called Wear It Berlin. We are hosts; we have founded the first conference on wearable tech and fashion tech in Berlin, which is called Wear It Festival. And we work together with the industries, and we also set up performances, exhibitions, workshops, networking events. Our goal is to bring together these people from these two worlds. This is the stage at our festival from last year. And we bring in people from the Fraunhofer Institute, science, but also companies. Maybe a little bit like here, but I guess it's more innovation-driven, I don't know. So there are some participants that showed what they have done. Our speakers, they are experienced in this field. And for example, Moritz Waldemeyer, who was our keynote speaker last year, he did the dresses for Take That. If you know Take That. Who knows Take That? Wow, you're pretty old, guys. Take That. And he did, yeah, the dresses for the Olympic Games. And as I told you before, I really love how he uses electronics like jewelry. And yeah, we got really inspired by Moritz Waldemeyer. I know him personally, and we worked together in Barcelona, and it's great fun. So it's about connecting people. This is another scene. We also have these makers and these exhibitions, performances with wearable tech. And the second thing that's our background is TrafoPop, which is an LED bicycling club. It's open source. We have developed software for Macintosh and also Arduino, where you can configure your jacket. And it's about meeting at night time, going out for rides, joining together, meeting each other. And yeah, it has been very successful. We have now founded a chapter in London. And then we also do commercial work. This is a still from a commercial for TV, or for the internet. It's for E.ON, and we built for them a light suit that's water resistant, and they used it for skiing and making their videos. E.ON, like electricity, it's light, it's movement and so on. So it was perfect for them. And then, for example, we work with companies like Deutsche Telekom. This is in Barcelona. We set up a whole maker space for them at the Mobile World Congress. And it was also really successful. And we did it again three weeks later for them.
So even at the Telekom booth, our goal is to bring people together; Mobile World Congress is for CEOs and CTOs, all these people. So our goal is: we connect people in fashion and tech. So let's talk about Fashion Hack Day. It started with a hack phase. So yeah, we brought together the people in FabLab Berlin. Here's a picture. They got machines, laser cutters, 3D printers. We organized all kinds of materials for them. Conductive yarn, conductive fabrics, sensors, electronics. And yeah, and then they formed up teams. This is a mentor showing, explaining what electronics can do for the participants. Then we saw the first products coming up. And we had workshops that were given by our partners and our mentors, like how to use a 3D printer. And at the right side, you can see people working on a data glove. So yeah, the teams found each other on Friday evening, and Saturday and Sunday were for working. Yeah, what we have learned up to this point was that we gave the participants a great freedom to do what they want, to select the topics they want, to select the sensors they want to use, or select the materials. And I think for next time we should think about narrowing it down, giving them some challenges, because that makes it easier for them to discuss what to do. You have a fashion designer, and he thinks about all the sensors, and he doesn't really know what you can do with them. And then you have the electronics guy, and he just wants to do his idea or something. So think more about ideas. The ideas connect the people, and together they can work on something if you give them a challenge to find new ideas. Our partners were Autodesk; they supported us with the software Fusion 360. The Kingdom of the Netherlands was a big partner, because from the Netherlands we had all these mentors, and there's a big scene in fashion tech going on there. I think they are one or two steps ahead of us now. And I don't know why they have this cool scene there, but really many successful people come from there. And we had some cloud services from SAP. We had Sourcebook, which is a fashion network. They supported us. Yeah, and we had many, many more material partners. You can find them all on our website, fashionhackday.com. The projects: so until this moment I talked, I was here for you on stage. And now I would like the participants to present some of their projects. In total at the Fashion Hack Day we had 13 projects. 13 projects that got presented on Sunday here on the stage. And tonight we have four projects for you. All winning projects. And it's like the cherry on the cake for you. So we start with the Stethosuit. And yeah, thank you. Thanks for your attention. I'll speak to you later. And please give a warm applause for our participants. So this is our project, the Stethosuit. It started with this idea that our bodies are complex systems that are just as mysterious as the universe as a whole. And the cosmos and the bodies represent a massive scale difference, with humans stuck in the middle. And normally, to understand this difference of scale, we rely on our visual sense, on our vision: to look at diagrams, to look at any kind of visual aids, to understand how small even the insides of our bodies are as compared to the massive cosmos. So that started an interesting question for us. What if we used another sense besides vision and relied on sound?
Are there visual limitations that sound can extend even further, to interrogate the experience of our bodies and the experience of being in the cosmos at large? Are there patterns in sound that would allow us to see similarities emerge between the world of our bodies and the world of the cosmos? And the idea behind this was to privilege an aural mode of introspection and discovery across these massive scales. That is the sound of space. Now technically, space has no sound, but these are transmuted electromagnetic vibrations picked up by NASA's Voyager spacecraft as it was traveling through our solar system. And some of these sounds that you're hearing now are the sounds of solar flares. The interesting part about this is that it sounds remarkably similar to our bodies. The cycles and the patterns, the whistles, the bleeps and the bloops. A little hard to hear at the moment, but I imagine some of you already know what this sounds like. So this led us to think about how we could build a wearable that addressed these similarities and some of these ideas, which is when we hit upon the stethoscope and the space suit as a way to kind of bridge, aurally, this massive landscape of space with the small inner workings of our bodies. So we started thinking about how we would design it. We started with a pretty common reference point, 2001: A Space Odyssey, with the hats designed by Frederick Fox, a very famous British milliner. What we liked about this was the oval shape. It kind of suggested a bit of an elliptical pathway that celestial bodies would take through space. Also, the egg was very evocative of an organic new life form. And the color story: we stuck with eggshell white and space gray. And this is exactly the only thing that we started with at the beginning of the 48 hours: a $120 stethoscope that we immediately took apart. It was a delightfully analog technology. We decided that we wanted to focus on the round piece, the actual sensor bit of the stethoscope. And we started sketching out what it would look like. And from the very beginning, we decided we wanted an array of stethoscopes attached to a garment, with several tubes that would feed into one ear. And into the other ear, we wanted to have the space sounds coming. So in our construction phase, we went over to the laser printer, I mean the laser cutter. And we started with strips of small four-millimeter pieces of wood and started cutting out eggshell shapes. In here, the hole in the middle is where we would start placing the stethoscopes. This is a diagram showing how we would build up the different layers of the stethoscope for each one of these pods. There are different heights, and some are open and some are closed, to give a variable pattern to this, to the design. So we didn't want it to be a uniform, a uniform constant visual. And then we 3D printed the stethoscopes with a MakerBot. Since this is just a prototype, further iterations would be a little bit more sophisticated. We also 3D printed the hinges to create an exoskeleton underneath, and were able to assemble them into a pretty sturdy device. And for the space side of it, we had an Adafruit Sound FX board with two speakers and a LiPo battery and a switch, and we preloaded some of the sounds from NASA onto it. And that's in the left ear; all of that's contained in that spot. And this is the flat lay of all of our components together. When we put them together, the pods ended up looking like, remarkably like, a constellation, which was pretty surprising to us.
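For readers curious how a board like that is driven: the Adafruit Audio FX boards play a preloaded file when one of their trigger pins is pulled to ground, so no code is strictly required. If a companion microcontroller were added, a minimal CircuitPython sketch, with pin choice and timing as our assumptions rather than the team's actual wiring, might look like this:

```python
# Hypothetical CircuitPython sketch for a companion microcontroller.
# The Audio FX board plays a preloaded file when a trigger pin is grounded,
# so the code only has to pull that pin low briefly. Pin D5 is an assumption.
import time
import board
import digitalio

trigger = digitalio.DigitalInOut(board.D5)  # wired to one FX trigger input
trigger.direction = digitalio.Direction.OUTPUT
trigger.value = True  # idle high means "not triggered"

def play_space_sound():
    # Pulse low for 100 ms to start playback of the preloaded NASA audio.
    trigger.value = False
    time.sleep(0.1)
    trigger.value = True

while True:
    play_space_sound()
    time.sleep(30)  # replay every 30 seconds, purely for demonstration
```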
And here is the model, debuting for the very first time this Stethosuit on Sunday night. And this is my partner Rosie. We worked together for the first time on this project. We've never done anything together before. So the entire process was fabulous. We completely merged our design, fashion and technology backgrounds together. And as a result, one of the most interesting things about this collaboration process was focusing on wearable technology, fashion technology, not so much on the tracking or monitoring, on what that can do for us quantitatively in our lives, but on how something could change our lives from a qualitative perspective. To have a different experience for the wearer and go beyond measuring or any kind of metrics. This has been a wonderful experience. I look forward to the next iteration. And please keep in touch if you'd like to learn what happens next with the Stethosuit. Thank you. Thanks. Are there any questions from the audience? Do you have any questions? Okay, I have a question. I know that you came over from the US. So can you tell us a little bit how you found out about the hackathon and what made you come here? Absolutely. It was on Twitter. And Adafruit had tweeted about you guys. I know this is the first time he's hearing about this. Adafruit tweeted about you guys and said it was the real deal, for anyone who's interested in playing with circuits and fashion to take a serious look. So we applied. Yeah, cool. It's super. Okay, nice. Yeah, we were really happy that you came over and that, yeah, we had so many people from all over the world. Yeah, thank you so much. And I'm really looking forward to the second edition, maybe to the next hackathon. Thanks again. Big applause. Thank you. Thank you to our model Sarah. Okay, thanks. Okay, thank you. We have the next team coming up. So keep in mind, everything was done in 48 hours. Okay, wait one moment. Okay, big applause for Emotional Fashion. Hello, I am Jasna Rok of team Emotional Fashion. Our team consisted of a scientist programmer, a mechatronic engineer, a textile designer, a gospel player and a fashion designer. And together we created an avatar called Emotional Fashion. We started by building a mask that visualizes your brainwaves. So what you saw right now were his brainwaves, visualized by changing lights right on me. So actually, I become a mirror for him. And he's getting real-time neurofeedback. And tell me, Jose, how did we do that? We built this mask back in Belgium with the help of the Centre for Microsystems Technology in Ghent. And yeah, they have this patented way of connecting circuits and components to polymers and thermoforming them into each form, whatever form they want. Also, we think that fashion should be an extension of your mind and your body. And your muscles are an extension of your mind, right? So we also, all together, added a special mechanism to create wings. These wings are actually responding to your body movements. To make this happen, we also used some special sensors. We used EMG sensors to monitor the muscle tension and contraction. That way we could actuate the wings, via Bluetooth, on the other person, or avatar. So this is our project, and we are looking at how we can bring fashion into the digital and virtual world. And if you want to see that, you can have a look at the Facebook page Jasna Rok. There is a 360-degree virtual reality view of the project. We also got picked up by the newspapers, and this was actually published today. And we are very proud of our team.
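The team's actual firmware isn't shown in the talk; as an illustration of the EMG-to-actuator idea, here is a hypothetical CircuitPython sketch in which muscle tension read on an analog pin drives a wing servo directly. The board, pins, calibration values, and the omission of the team's Bluetooth hop between wearer and avatar are all our assumptions:

```python
# Hypothetical sketch of the EMG-to-wings mapping; pins and thresholds
# are invented for illustration, not taken from the team's build.
import time
import board
import analogio
import pwmio
from adafruit_motor import servo  # from the Adafruit CircuitPython bundle

emg = analogio.AnalogIn(board.A0)           # EMG amplitude on analog pin A0
pwm = pwmio.PWMOut(board.D9, frequency=50)  # standard 50 Hz servo signal
wing = servo.Servo(pwm)

REST, FLEXED = 12000, 40000  # raw 16-bit ADC readings, found by calibration

while True:
    # Map muscle tension linearly onto the wing angle, clamped to 0..180.
    fraction = (emg.value - REST) / (FLEXED - REST)
    wing.angle = max(0, min(180, fraction * 180))
    time.sleep(0.02)
```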
So thank you very much. And thanks to the fashion hackathon, because it brought us all together. Thank you very much. Oh, this is for you. So maybe let's stay on stage for one or two minutes. Do you have any questions about this? I mean, it's pretty exciting, right? You have brainwaves connected to fashion. It's maybe not the first thing you would do if you think about fashion technologies. So what do you think? I mean, is it something that we will see in five years, in your opinion, on the streets? If I buy a hat, will it track my brainwaves? Well, I think in five years technology will be smaller, so we won't want to wear this big thing on our head. But yes, it's super important, because we can already solve problems of today. For instance, think about people with a concentration disorder: by seeing their own brainwaves as lights, they could train to focus themselves better. So there are already problems of today that you can use this for. Super. And what do you think about this playground? Because we are talking about how this technology can solve a problem. But also, on Fashion Hack Day, this was more like a laboratory, right? It was more like playing around with tech. How do you see the connection between this kind of playground, which maybe at first sight is more arty, and the next step, how to get a product that's relevant for us? What do you think? Well, I think the fashion hackathons as a playground are very, very useful, because we also had a participant, she's sitting right over there, and she never did this before. And because she saw it and it was fun to do, she got in touch with it. And now she really loves it, and she's going to take it to a further stage. So I think it's very attractive also for people for whom it's the first time, who aren't really familiar with it yet. And then they try it and they start, and you're getting a community. It's really the future. I'm sure of that. Okay. Thank you so much. That was beautiful. Thanks. The next project is Smooth. We're going into fashion, deeply into fashion design. So yeah, please, big applause for project Smooth. Thanks. Hello. My name is Mona Weber. What my project was, was to create a smart necklace, which you see here. Unfortunately, I don't have anything to present it on. The idea was to create a fashion accessory that looks beautiful and that does not look so techy. A lot of accessories like Fitbits and Apple Watches and whatnot, they all look so technological. For the average woman or man, well, a woman wants to buy an accessory because she finds it beautiful. Not because she wants it to look like technology, not necessarily anyway. So that was the idea. So we created this necklace together: I was designing it, and my teammate was helping me program everything. The idea was to create a necklace in brass. So, a very sleek and simple design, but, as you see here, with illumination underneath it. So this was the illumination part, and it is smartphone compatible. So it was completely Bluetooth; it has a Bluetooth receiver in it. So you can power it on and off and control the illumination underneath it with the Bluetooth and with your app, which we programmed throughout the hackathon. And the idea was also to be able to be sitting in a meeting or at a dinner party or something like that, where you don't want to have your smartphone in your pocket, or you don't have a bag with you or something.
But you still want to know if somebody is calling you or sending you a text message. So, as opposed to having a bracelet that is vibrating and letting you know, oh, you are getting a phone call, you have your necklace vibrating and letting you know that you are getting a phone call. The vibration is felt at the neck, so it's very discreet. Nobody else knows it, and you can just sneak off, or say, well, excuse me, I have to go to the ladies' room, and be discreet about it and not be rude by having a look at your cell phone to know who is calling you. The other thing is that the illumination can also show that you are getting a text message or call, but then you are not being very discreet anymore. And then we have a 3D printed clasp in the back, which is the closure, for optimal protection of the technology, because it has to be so small and it has to be optimized to a person. And a person doesn't want to constantly change batteries or stick the cable into the outlet and power it up and recharge it. So the idea is to create a 3D printed clasp that is perfectly molded around the back of the necklace, together with the battery and all the technology in it, and you can recharge it wirelessly on a base, on a little socket that recharges the necklace, because you don't want to have any wires sticking out of your neck. So, yeah, that was the project. Thank you. Thanks. So I know that you are working at a company in Berlin. You do wearable tech, ready-to-wear stuff. Can you tell us one or two sentences about this company maybe? The company is called Moon Berlin. It has been around for a couple of years now. The designer has been doing this for about 10 years already. So he specialized in fashion technology, but in a very wearable way. So it's ready to wear. It's very elegant, very beautiful, and it uses technology also for aesthetics and not just for functionality, which is what I wanted to elaborate on with this project and kind of expand. So, because you know a little bit about ready to wear, that means stuff you can actually sell or buy in a shop: what do you think about the step from a prototype like this to a product that you can market, a product that's out there that we can buy? Because we are all waiting for these products, right? When can we buy a necklace like this? I think the big step is going to be getting rid of all of this and making it very small and really incorporating it into the back of the necklace. For the end consumer to really have a very useful, very easy to use product, the user doesn't want to think about how do I charge it, how do I wash it; he doesn't want to think about all of that. He or she wants to wear the necklace like any other regular piece of jewelry that she already owns. So that's what we are used to with fashion, right? You just exchange your t-shirt or something and you don't think about it. You don't think about it. You want to stick it into the washing machine, you want to travel with it. Okay, good. Let's take it to the next step. Okay, thank you. Thanks. Now we have two more projects coming up. First we have the shirt. So next, I'm really happy to present, and I'm really looking forward to that presentation as well, because it's something that's really useful and maybe usable tomorrow. Yeah, big applause for Artie.
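Before the next project, a brief aside on the necklace just shown: the talk doesn't reveal the firmware, but the notify-by-vibration idea can be sketched. In this hypothetical CircuitPython version, a BLE-to-serial module is assumed to hand the microcontroller one byte per event; the pins and the one-byte message codes are invented for illustration:

```python
# Hypothetical sketch of the necklace's notification bridge. The phone app
# is assumed to send one byte per event over a BLE-to-serial module.
import time
import board
import busio
import digitalio

uart = busio.UART(board.TX, board.RX, baudrate=9600)  # serial side of BLE module
motor = digitalio.DigitalInOut(board.D6)              # vibration motor driver
motor.direction = digitalio.Direction.OUTPUT

def buzz(pulses, on=0.2, off=0.2):
    # Short, silent pulses felt at the neck rather than seen or heard.
    for _ in range(pulses):
        motor.value = True
        time.sleep(on)
        motor.value = False
        time.sleep(off)

while True:
    data = uart.read(1)  # returns None when nothing has arrived
    if data == b"C":     # the app's invented code for an incoming call
        buzz(3)
    elif data == b"T":   # the app's invented code for a text message
        buzz(1)
```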
I went to the FabLab with my boyfriend to represent Sourcebook and then I saw all those people talking about their amazing ideas and I was like, I have an idea too and I'm a speech therapist so I have nothing to do with technology or fashion but I'm very interested in working with kids, especially kids with autism and a problem that these kids have is they can't really read facial expressions, emotions, because they don't like keeping eye contact and I thought it would be really great to actually have a tool that makes it easy for the kids to understand and also communicate emotions non-verbally. So I talked to a guy who also had an idea that was aiming at like social responsibility and I told him about it and he said, it's great, let's do it. So I stayed the whole weekend in the FabLab and so that's the t-shirt and it has six basic emotions and six corresponding colors and you even have different stages of the emotion. That means I'm really, really angry or careful, I'm getting angry and there are actually only two colors working and the thing was okay, we can make this t-shirt pretty easily, we used like a vinyl cutter for the emotions and then we thought about how can we actually like use LED lights and we were like trying to fix things and then it was like no, it's not working and then I was like okay, we need LED lights that you can just switch on and off, bike lights. So we took some bike lights in different colors, we have them here in red and they're in blue and yeah, so we just fixed them inside the t-shirt and that's the result. Thank you. This is a real hack because you know where this word hack comes from, they were playing around with these little trains like for children and somewhere in England or so, they made something out of it that was not the case, the inventor didn't think about this, they invented something new out of these wagons and so this is a typical hack, somebody invented a bike light but now you can use it for autistic children in a very different way and this is perfect for us, so the Turi was really convinced by your idea and by the project so how was it for you when you saw all these tech people in the fab lab, I mean how did you get in touch with the other guys there, you said you met this other guy and then he said I help you but how did you meet him and how did you connect because I was basically organizing and I just saw everybody's connecting so it would be interesting to see to hear about this perspective of how did it take like what happened there, how did the teams form, how did they meet the people, how was that? So I asked somebody who kind of had the same approach and he was a designer so I knew that he could help me with like the look of it but then we had no tech person in our team, that's why we were struggling but I have to say those two days they were like a playground for me, there was stuff I have never seen in my life before like 3D printer, laser cutter, I mean it really depends on what you are doing in real life and I have to say that this weekend was amazing because I felt like a kid, I was exploring things, I learned new stuff, I met new people, I would have never met and that was such a great opportunity and then we had this little presentation here and I won the second prize and it's like wow I mean you just have to try things out you know that's what it's all about and you have to let kids try things out and that's how they can also achieve great things and become inventors and great human beings. Cool, thank you, thanks. 
And now we're coming to the first prize. The jury was convinced by this product's design because of its multifunctional aspects and because of the great teamwork. A big applause for the project Knowledge is Power.

Hi, my name is Nayeli, and this project is called Knowledge is Power. It is about how hard it is for women to move around on public transportation in cities like Mexico City, where I come from. I took this data from the Thomson Reuters Foundation, which made a study of how dangerous it is to travel by subway or bus and where it is most dangerous. The study asked women questions like: have you been verbally harassed by men when using public transport, or have you experienced any other form of physical harassment when using public transport? It found that Mexico City is the place in the world where women experience these kinds of attacks most frequently. In fact, another study, from the United Nations, says that 65% of women there have experienced sexual violence while traveling on public transportation. Every day in Mexico City, millions of people move through public transportation. The problem is that public transportation is part of the city, and cities need to be better places to live: cities cannot be places where women are scared of living or of getting around, and this directly affects quality of life. It means women don't feel confident going out alone, or they are segregated; in the case of Mexico City, they have to use spaces reserved for women only, and we think that is not the solution. The United Nations and other international entities have declared that one way to fight this problem is to get women to report it, because women are not reporting: they don't want to go to the police to say "somebody said something mean to me" or "somebody touched me on the subway". That is not something women in Mexico, or in other Latin American countries, want to do. So we started this project by asking ourselves: how can we invite women to stay anonymous but keep reporting anyway? Because when women report, there are statistics, there is information, there is knowledge about how to fight this problem, an invisible problem that everybody nevertheless knows is there.

We started at the Fashion Hack Day. I was working with a colleague I had never worked with before, whom I had invited, and there we met Sophie, who was also at the Fashion Hack Day. We made a perfect match and said: okay, we like the project, we want to keep working on it. First we showed what we had in mind. We knew that what we wanted to do was pretty ambitious, so we aimed to make a statement, to present our project in a final act and to say how much we want to raise awareness of this problem, and not only in Latin America: by 2050, around 70% of the global population is going to live in cities, and as the population grows, more public transportation and better public spaces will be necessary. So we worked through sketches, and we were such a good match that the process flowed very easily.
Then we talked about drawing the Mexico City subway, about how to represent it, and we found we could represent it with LEDs. This is the subway of Mexico City, represented with an alarm: a single red light that indicates something is going on at one of the stations. We built it using the techniques and machines we had at the FabLab, because we wanted to use all the resources: we made our own textiles there, and we also used the laser-cutting machines. This was the final presentation of our statement. One purpose of the alarm would be to locate the event in time and place: as soon as a woman has access to Wi-Fi, the report can go up to the cloud, where all this information is gathered to build statistics and to predict where these cases are happening or will happen. In the future, policymakers would then have very clear information to say: okay, this is a problem affecting women in Latin America, specifically in Mexico City, and we can make a better city, one that is not only for men but for everyone, including us women. This is an example of how it could work, reporting all the cases with their time and frequency on a map, so that it becomes possible to start making predictions. Here is another photo. We are the team Knowledge is Power: Nayeli Vega, Sophie Kellner, Sarah Hermanutes. It was an incredible experience for us to participate in the Fashion Hack Day; it was very much about play, about trying out materials and using technology to communicate this social issue, because we think fashion and technology can also help solve social problems. Thank you.

Thanks so much. I think this is very interesting, because fashion is political, right? With fashion you express your status in society, a certain mood, yourself, your individuality, and then suddenly you come up with an idea that also expresses a political problem. And it looks good. You did it in 48 hours; with two months or half a year it would of course be perfectly finished, but you get the idea: it's decorative, yet it combines a function with an idea that is really relevant to us, and I think that's what convinced the jury. The idea is very powerful.

I forgot to mention that our concept is based on making a kind of armor here, carrying the meaning of protection and safety, and then we use translucent material to express vulnerability: we wanted to show these parts of the body, and that's why we chose these materials. Black because it is very strong, and white because you can see through to the body; that is the feeling many women have every time they take the subway in cities like Mexico City.

Do you have a question about this from the audience? No? So what are you going to do next, and what is your personal background? Are you from fashion or from tech? I am a product designer; my partner Sarah works in the tech field; and Sophie works in textiles, she is studying textile engineering.
So we were all a kind of designers, and that's why I felt we had such similar ideas while coming from different places: Sarah is from Canada, Sophie is living in Syria, and I come from Mexico. And it would be super cool, because I know that SAP, one of the partners, offered to bring this to the cloud using their system, so it's really possible to do. Yes, and it's so possible that I have received an invitation to participate in a call from an organization of Latin American countries, to develop a technology-based project against violence against women. So this can continue; it really is a project that can go further, and I would really like to see it working. It's good to give that topic a spotlight through fashion, through something that's also fun and not so heavy, while still keeping the project serious. Thank you so much, thanks for taking part. Thank you.

So, thanks everyone. I hope the presentations were enjoyable for you as an audience and gave you some insight into how this new digital fashion can be developed. The ideas that came up are so different, and especially the ones for autistic children or against sexual harassment show something: you need people who actually work in these fields, who have these ideas, because they come up with things that are really needed; it's their daily life. As a designer or engineer, how can you have these ideas if you're not in touch with those environments? I think it's really important to connect industries more. Why not connect Adidas with Airbus, or something else? That's what we are working on; the next thing we are going to do is think tanks, to connect people even more, and I think that's our mission. These are all the people that found each other during Fashion Hack Day, right here on this stage, last Sunday. Thank you so much. If you have any idea or need, write to thomas at wearit-berlin dot com. Thank you so much. Thanks, and thank you very much for the presentations.
|
Wear It Berlin presents Fashion Hack Day: Meet selected finalists of Europe's first Hackathon for fashion and technology.
|
10.5446/20612 (DOI)
|
So welcome, hi everybody. I'm very excited to be here; I'm usually on the other side of the stage, because I'm an event producer when I'm not here. Anyhow, we have 60 minutes now to recap Music Day and the program here at the labOratory, which is such a brilliant pun on Berghain, here at re:publica. Before we get started: this is a live conversation on stage with my wonderful guests and with the audience. Who of you is familiar with what a fishbowl is? Can I have a quick raise of hands? Okay, one there. So I'll quickly explain what we're doing as a panel: we have one free seat here, and it's for you, dear audience, to join. At any point in the discussion where you feel you have something to contribute, or there's something we're leaving out, or you have a question to throw in, please feel free to stand up, get on stage, take a seat and join us. And then, obviously, go back to your seat and leave it to the next person, because the idea is to rotate people in and out and hopefully have a more involved and inclusive discussion with all of you. By the way, I'm speaking English; I know we are partly German here, at least four or four and a half of us. Who needs an English panel? All right, an English panel it is. But if you have questions, throw them at us in either English or German; we'll answer both.

I'm here with my four great guests, and the idea is to reflect on today's program: what were the key insights, the key learnings, what can we take away? We have heard so much today about virtual reality, about startups in the music-tech space, about apps as a new marketing platform for musicians, and we're here to condense it all and extract some of the insights we've learned. So let's do a quick round of introductions, starting with you, Eric: where do you come from, what is your background, and what brought you here?

Hello everybody, I'm Eric. I work with Music Pool Berlin, where we do consultations for musicians in Berlin, and besides that role I'm involved in curating events here in Berlin focused on music and tech.

I'm Roxanne de Bastion, a singer-songwriter from London; that's mainly what I do. I'm also part of an artist organization called the FAC, the Featured Artists Coalition. It's a group from and for artists that does lobbying work and aims to give artists a united voice, and it also runs lots of educational grassroots events.

Hello, I'm Pavel. I'm a musician and songwriter, and I'm here with Soundtrap, the online recording studio; I recently gave a presentation about Soundtrap, and I work in marketing and communications there. Yeah, that's me.

Hi, I'm Horst Weidenmüller, the founder of K7 Records. We were founded in '85, so we went through everything that has happened to music over the years, and I'm here, and happy. We run three labels, which are K7, Strut and Ours; we run a label-services department where we take care of 25 labels; we run a management company; and we work with 30 people out of offices in Berlin, London and New York. I'm personally a founder of Merlin, the global rights agency for independents, and I'm also on the board of IMPALA, the European independent labels' association. And yeah, I'm here.

Cool, thank you.
So, a really diverse group of music experts from different fields. We have heard so much today about music being influenced by new technologies, especially virtual reality. What have you taken away from this? Which new technologies do you foresee defining the future of the music industry?

Want me to start? One important takeaway for me is something I had a feeling about beforehand, and when I heard it today on one of the panels I thought: okay, I was right. I have the feeling that this whole virtual-reality thing, which totally reminds me of the Second Life hype of the mid-2000s, might be a bubble, and that the augmented-reality approach of adding more context to the music experience seems much more relevant from my point of view. That is definitely something I'd love to stress and push forward. What's much more relevant? Can you repeat that? The augmented-reality aspect; I have the feeling this could be a very important part of the whole story.

I know your label K7 has been a real cultural reference in combining the visual side with audio very early on. I don't know if anyone here is familiar with the X-Mix series that K7 did in, I think, the early 90s. Maybe you can tell us what that was, how you started it, and, as we chatted about earlier, how you're taking it further and looking into virtual reality as well.

K7 was founded as a music-video label, and we recorded live performances of bands like Nick Cave and Einstürzende Neubauten. Then suddenly this new type of music called techno bubbled up in Berlin, and for me, as a video label, that's what I wanted to produce videos for. The music was made on computers, so the idea was that the visuals needed to come out of computers as well. You can imagine the technology of the late 80s: there were just 8-bit Amiga computers, and rendering a single frame sometimes took two days. What developed with the X-Mix was this: there is a DJ, the DJ makes a mix of twelve songs, and these twelve songs come from the underground, from what is really happening in the clubs; we gave that music to computer artists all over the world to visualize. That was essentially the start of computer animation connected to techno music, and I can probably proudly say we created the imprint of what a techno video needs to look like. More interestingly, X-Mix was the only platform for computer art in those days. In total we produced ten videos, starting in the early 90s and ending in the late 90s, and we had all the big techno producers, from Laurent Garnier to Richie Hawtin to Dave Clarke. MTV was really soaking up these videos, because at that time there was no YouTube, there were just all these Eurotrash videos, and they made special shows about the X-Mix and the real club sound. On the one hand it was played at all the chill-out parties; on the other hand it was a forum for computer art, and I think that made X-Mix quite special. We ended X-Mix in the late 90s because we had the feeling we needed to move on; newer and better concepts were coming into the market, and we wanted to move on with DJ-Kicks and the other visions we had, and we left visuals completely behind.
For me it is of course interesting, after so many years, to suddenly be in touch with virtual reality again, because that is what we were always referencing back then; Neuromancer was a bible we all had to read. It's interesting that this art form is becoming so present again.

Do you see virtual reality as something that could replace live performances, as in: I'm sitting in my living room, I put on my goggles, and I experience a concert live without actually being there? Is that where you see VR's potential, or is it more in the music-creation space, where, as we heard on the panel earlier, some really interesting work is being done?

I think it is an add-on experience. I don't think live concerts filmed with 360-degree VR cameras are going to replace the live experience, but they will be a very strong add-on: I've been at that concert, I've been at that festival, and now I want to be the stage camera and see it from the stage into the audience. That is certainly a strong experience, and it will mainly be an additional fan product. There is the risk that creating more fan products soaks up diversity: if football goes into VR and all the major acts go into VR, they take the money away from people experimenting with new music, and there could be a concentration that is unhealthy for the market. But in general I feel that VR adds a new element, and a good one.

Can I add something very unsexy to that? I'm really excited by it too, and I've experienced a Paul McCartney concert from the viewpoint of his piano, which was amazing. But especially for new music, if we really want to take advantage of this, the licensing has to be made simpler. It has to be a one-click thing, and it's too complicated at the moment. It's almost like: come on, music industry, get your act together, because this thing is about to explode, and it's a massive opportunity we're going to lose out on again if we don't make it simpler to monetize, with one-click licenses and payments that actually filter through to the people who matter.

I'm 100% with you. It's always difficult when you talk about supporting innovation, giving away control, embracing innovation by giving rights away; it's difficult when you speak about "the music industry" as one thing. You have to separate out those whose business model is driven by market share. If you come into a market controlling 40% of it, like a company such as Universal, and you give a license to a venture-capital-backed company, you can get a kickback on the capitalization worth many millions of dollars. But only three companies can do that, and all the other companies out there are very much in favor of giving licenses away and embracing innovation; that separation needs to be made. I'm 100% with you that we have to make licensing much simpler, but luckily licensing nowadays doesn't seem to be the problem anymore, because the many new music ventures coming into the market all manage to get a global license.
So I think the days, five or ten years ago, when you couldn't get a license when you came into the market are pretty much over; nowadays you just need the capital to satisfy the people holding the market share. Yeah, anyway.

K7 is definitely straightforward with regard to new business models, so I have the feeling you might be the expert to tell us more about the new business models that might be possible in this sphere. For instance, Universal has its startup unit, as far as I know, so they already have their platform for coming up with new business models and ideas, but it would be cool to hear from your end what you are currently planning in that direction.

For us, in a way, nothing has changed in music: music creates an emotional bond with a fan, who then starts to spend money. Back in the day it was vinyl, then came CDs, and now different things come, perhaps a VR experience. As K7 we just want to create these emotions; that is the most important part, and we want to be part of how people spend money and bring it back to us.

But is it just emotions? What we heard today is that there is a big marketing aspect to this technology at the moment; at the end of the day it's about selling. No: of course there is a big marketing machine, but what still happens is that the music touches you, and through that touch you start to spend money, because you want to be more closely connected to the music. The marketing comes on top of that. What we see coming into the market is more the kind of combination where consumers can become producers. With messaging, for instance, more and more business is moving toward people wanting to attach music to messages, and there is a blurring line between getting a toolbar of pre-produced sounds, which are subject to a license, and creating the sounds yourself. We see that more and more, for instance with Snapchat adding music.

Just curious: do you think that in the near future we'll be able to buy, let's say, a special type of concert ticket and watch the whole concert at home on the couch, without going to the gig, as an application of VR? I'm sure you can. What we see is that consumer choice in how you consume music is completely fragmented, with new business models coming in every day, whereas in the old days you had only one linear option: one record, one concert. And hopefully it stays that way, because the more choices you can give the consumer, the more each person finds their own consumption model.

You already have that to a degree, not in virtual reality, but in practice. I do online shows, and there is a myriad of platforms offering that kind of service, where you can have a set ticket price or a pay-what-you-want model.
Either it's a full online concert, me just sitting in my living room playing songs (well, not my living room, I try to make it a bit more exciting than that), or it's an actual gig that you stream live, and you can monetize the live stream. So that has already been a really great add-on and an additional form of income. Then you just enhance it with a device: if you have a headset, it would hardly make a difference whether you were actually at the gig or not.

One of the critiques of VR, though, is that it creates really private spaces, and music as we know it has always been a very collective experience; throughout its history it has been a collective form of sharing an experience together. What do you think about that?

Okay, now I'm coming back with my augmented reality. I always had the feeling that if you could embed some music in your environment, this would be fantastic, and also a wonderful way to share music with other people: for instance, just place a song or a tune somewhere in your neighborhood and let other people discover what you left there as a little statement. So I really don't see this walled-garden virtual-reality thing, headset on and all alone with the music; I'd rather see you exploring things in your environment. I also like this idea with regard to getting more context along with the music I like: a mix of reality and information, context that might help me buy more things from a band or a producer, rather than being in a virtual-reality bubble all by myself.

It's almost like going back to the days of having a record sleeve and being totally immersed in that, and it's really cool in that way. I think you've hit on a trend there that we heard today for sure, well done, Eric: it's all about added value, about having added information with the music. And how cool is that? Even if you do have the isolated experience of your headset in your room: people always say nobody has the attention span anymore to listen to an album, but if you've got your headset on and you're in a virtual-reality world of Pink Floyd's Dark Side of the Moon, I can see people spending that half hour in their room exploring and enjoying it.

At the very least it is a really exciting new way of experiencing music. As we heard earlier, electronic music especially has always been about forming soundscapes, and even if it happens only in your mind, there is a visual attached to it. So it is a very exciting technology which, as we also heard today, is still at an early stage: the hardware is there, but not much content, and what I've seen so far in the music space has been videos, with Google Cardboard or the like, that are crossing the line into virtual reality.

And I think it's important, because what I'm really looking for is music that makes you listen to music, and perhaps VR is the answer, because what we experience is that music has become more or less emotional wallpaper.
We put it on because we like the emotions, but we immediately start to talk, and we even believe we are listening to the music while we talk. Coming back to what we heard before, to the record sleeve and the experience of going to a record store and sitting at home listening to the lyrics: what is missing, to me, is the experience of giving 120 percent of your attention to the music for that moment. Perhaps VR can create a new experience where people give their full attention, which I think would be great, because that full attention is what music is missing a little.

Also, let's face it, it's super escapist and fun. From everything I've seen today, people have a lot of fun exploring virtual reality, at least for the first ten minutes, until they get very seasick. And start puking, exactly. It's a fun experience, a new way of experiencing music.

But how about the creation side of things? Your startup is basically about bringing musicians together in a collaborative environment. Could you see applying VR to that at some point, meaning you meet your co-creator in VR and create together? Definitely; in the near future it's very possible. A short intro to what we're working on: the company is called Soundtrap, and we have a recording studio in the browser, a collaborative recording studio, which means it's pretty much GarageBand plus Skype plus text chat in your browser, in the same window. You can play together, record together, and video-chat and text-chat as you do it, all in the same window. And of course, if in a few years we apply VR to that, it will be amazing: you won't just see somebody in a chat window, it will basically be like being in the same recording studio together. I don't know how long it will take to get to that point, but that's definitely the future of the product as I see it. Currently we have quite a few virtual bands on the platform, which means: say I recorded a guitar riff and I need to add bass and drums, and maybe ukulele at some point; I can find musicians all over the world, connect with them, invite them to my project, and we design this virtual band together. So if at some point we all wear VR devices and play together, that will be mind-blowing. I think that's where it's all going eventually.

One last thing on this: I know we're a fishbowl, we have an empty chair here, so please feel free to join us, especially on the subject of VR, which has been dominating re:publica, or at least Music Day. Is it just a hype? If you have opinions, come and join us. There's someone. We can hear you, thank you.

Good afternoon, ladies and gentlemen; I'm no stranger to this stage. There has been a lot of talk today about AR and VR, which is all very exciting and very stimulating, but we had a lot of buzz about 3D in the cinemas about ten years ago; where is that now? The add-on is always interesting in the moment, but I want to come back to something this gentleman said, because we are missing a point: music is essentially to be consumed by hearing, not necessarily by seeing. Of course we've always had the live performer, the street musician, the classical concert for 600 years, and we are in a position where we can still go to see rock musicians, or any musicians, play live on stage, thank goodness.
We may well explore the opportunity of seeing a virtual-reality performance and enjoying the sight, the sound, and possibly even the smell of being next to someone who hasn't washed; we'll see how far technology brings us. But I've been working in music and with young people for the last forty years, and I have noticed one trend which is quite sad: in those forty years, people have stopped listening to music and have started consuming it visually, and for me that is a problem in many ways. I don't want to expound on that problem; I'd just like to come back to the basic point, which is that music is an audio art, it appeals to our audio emotions. Of course we can enhance that, and whatever technology comes along will be used and, unfortunately (I use that word deliberately), exploited. But let's be thankful that the wonderful equipment we have here, our ears, still hasn't been surpassed by technology. And maybe one small PS at the end of my delivery: the way we make music has changed very little over the last fifty years; the way we consume, market, appreciate and listen to music has changed far more dramatically than the way we make it. I've tried rehearsing with someone using Skype; it's not perfect. It can be done, you can send files across the world, but it's not instantaneous, so we have a long way to go before music-making is totally digital and doesn't need the human interface. Maybe that's a good thing. I'm going to stop now; thank you very much for listening. I just wanted to remind people that we have ears for listening to music with, and long may we continue to do so. Thank you. Wonderful, thank you. So we stop here.

The other main recurring theme we've taken from today is how technology can empower artists, and there's been a lot of talk about shifting hierarchies. What are your key ideas here; what have you taken away about how technology can actually empower artists and startups? And especially if you look at the streaming market: is technology really empowering artists? I know this is a really broad question, but we haven't discussed streaming at all.

With regard to streaming, I think a lot of the artists who have spoken out with rage against the concept just don't know very much about it; they haven't been well informed, and if you're not informed, you get scared. I'm going to go ahead and speak on behalf of the artist community here: innovation is great, tech startups and streaming are great, and there's no way of going back; we can only go forward. The problem with streaming is that artists have been excluded from the conversation. How do you define a stream? We still don't have a legal definition of what it should be; it's somewhere between a radio play and a sale, and it's really neither, it's something else. But the contracts have been drawn up, with NDAs, non-disclosure agreements, between three massive companies and the streaming services, and a whole lot of revenue is generated. Spotify pays out 70% of its income in royalties, but that doesn't trickle down. So: yes to streaming, but no to excluding artists from the conversation; that's really where the disconnect is. It's the same with YouTube right now in the States, where we've got all these high-profile, dare I say quite old-school, artists saying: boo, YouTube, you're not paying out.
Universal is suing SoundCloud. Yes, we need to look at the safe-harbour rules, and yes, we need to move away from tech companies saying "it's got nothing to do with me, what's on my platform is not my responsibility", but the innovation itself is good. I'm aware that I'm rambling, because it's beer time now, but the one thing that gets me is that all these new things are great, yet they are just more middlemen between the artist and the audience, and it feels like the one who has to pay out first is always the artist: get this service, then you can do that; this new service will answer all your questions; be on this new platform; be on this new social-media channel. As fantastic and positive as all these developments are, it needs to be streamlined and simplified. This new digital age should mean simpler music consumption for everyone, and that should result in generating money that ensures people can continue to make music.

Isn't it also that the music business is run by big technology companies who don't come at it from a music point of view? What they want is to attract more consumers into buying their hardware. I'm going to steal a quote from Fran Healy, without attempting the Scottish accent: music is like candy that people lay out to attract customers, come to my shop, come to my restaurant, buy my thing. And that is the problem, because music connects with people on such a deep emotional level; we need to recognize and remunerate the people actually creating it fairly, we need to update that whole system.

We had a round-table discussion, albeit a private one, about streaming and where the artist community stands on it right now. If you look at the rough split: who says the record label should get roughly 50 percent of it? That's an outdated figure from a time when major labels still spent a lot of money and time nurturing and developing acts, and they don't do that anymore.

I think it really depends, Roxanne. What you can always say these days is: whoever holds the risk holds the rights, and that works both ways. If you're an artist who says, I can carry the risk of my career myself, you're in a position to keep 90% of all your income. If, on the other hand, you're a young artist who says, I could do it on my own, but I would like a proper video and I need support, you'll probably end up on the other side, getting only 15 or 20 percent. What you're saying is very easy to attack with, but I think it's always about the risk and the rights.

You're right, and I agree with you, but where is the financial risk if the person you're investing in gets nothing until your entire investment has been recouped? I see; I think that always comes down to the individual contract, and luckily we now have 15,000 to 22,000 independent labels and probably 400 self-releasing chart artists all over the world, and you are no longer dependent on the gatekeepers. People who sign stupid deals should not complain that they signed a stupid deal.
I totally agree with you on that, but even the standard indie deal is a 50-50 split on the profits after the entire investment has been recouped; in which other industry do you have that? Please correct me if I'm wrong, but that is a standard indie deal. No, I don't necessarily think it is standard. I have deals that are 90-10 to my disadvantage and I'm happy with them, and I have deals that are 90% in my favor which I think are the worst deals I ever did. You can't say one size fits all; it's very individual: where does the artist stand, where is the risk, and what is the investment? There are many artists out there with a 50-50 deal who love the amount of money they make, and there are others who say we should actually get more.

What I take from today is that the music industry is accused of many things that don't happen, and I think that's fundamentally wrong, because nowadays the access to market is extremely flat. Everybody can produce very inexpensively, release their music very inexpensively, and market their music very inexpensively through social media; you have access to the market. All the self-releasing artists we sign created their own fan base, the only way to do it now really, and built a heat level that made us say: come on, we want to work with you. It's all possible, and we have to be careful not to generalize, especially when people claim the majors have a business model of squeezing all artists; those days are over.

And also, to push back on something you said today, Roxanne: look, I run a company, and I would love to have more women in my decision-making processes. When I advertise a job, I get 90% male applications and 10% female applications, and I struggle to find well-qualified women for my positions. The problem is not that I don't want to employ them; the problem starts with socialization in childhood, with what they study, with what they... I couldn't agree more. I had the UK Music board say the same thing to me about race: we would love to have more black musicians on the board, but we can't find any. And of course it's an education thing. So we're kind of going back to the gender conversation now, right? From a really early age, both boys and girls are done a total injustice by being targeted: this is for you, and this is for you. Girls don't grow up with those role models; they don't grow up with the chemistry sets and the war toys and the robots and the guitars advertised at them; they get the toy cooking stoves and the pretend Barbie dream house advertised at them. So yes, absolutely, it's first and foremost a matter of education and of where our society is, and of reinforcing positive role models. We've got to stop telling children what they can and can't do depending on their gender or their race. I can't agree with you more.

Okay, then I misunderstood your presentation, because I read it a little as saying the establishment doesn't give women access to music. I can only say I would love to have more women, and it's very difficult to find well-qualified, educated women in the music industry.
Well, there is certainly no shortage of very well-educated, skilled women in the music industry. Perhaps there are, but when we look at the applications coming in, and we always prioritize women for our positions, we still struggle to find them. Where do you advertise, out of interest? LinkedIn, CMU, Music Week, you know. Okay.

Just a quick aside, because I feel like I'm monopolizing the conversation here: I was just in the main hall at a talk about the data imprint we leave and its future, and it was so interesting. It was about artificial intelligence: if you type "CEO" into a Google image search, the top pictures that come up are white men in suits, and if you fed that information to an algorithm, it would conclude that this is what a CEO looks like. And with targeted advertising for jobs in certain income brackets, they found that those ads are not served to women, so you don't even see the job ad. I'm certainly not saying that's happening with your adverts, but I thought it was a really interesting aside. But yes, I totally agree with you, except that there is a plethora of highly skilled women out there in the music business, and hopefully more soon.

I think to some extent you're right that it's a socialization thing. In dance music, for example, women are only breaking through now, really surfacing in the past few years. We can speak from personal experience: girls are not given two decks and a mixer when they're two years old. When guys got into making music or DJing ten or twenty years ago, it wasn't as accessible to women; it is much more so now, and it's definitely changing. But you do have a fair point.

I would like to come back to streaming, because what I find very interesting in streaming is that every consumer is looking for their own streaming-service partner, and I think that gives the market big potential to grow. What amazed me most is that the Apple Music advertising mainly drove Spotify subscriptions, which clearly showed that the real music lovers and all the music nerds go to Spotify. Yet Apple Music is now close to 20 or 30 million subscribers with a completely different kind of audience: more fashion-driven, and strong in China, because people have iPhones. We never expected Apple Music to grow such a fan base outside the Spotify fan base without cannibalizing it. And it's going to get more and more interesting: if Netflix, for instance, went into music streaming, that would again be a completely different audience from Apple Music's and Spotify's. One of the questions today was what comes next, whether there is still a market for streaming, and I think yes, there is, because streaming is really going into your social environment, and wherever you are is where you want to stream your music, with the brand you trust. There is plenty of space for new music-streaming services, and for how you are going to program them and bring them into the market.
Which brings me to a discussion I had recently, also with Barbara, on the Twitter timeline; I was just about to ask you. We are seeing this big problem of exclusive content in the streaming business, and all these pre-release marketing ideas, like Beyoncé releasing her new album only on Tidal, and so we get all these walled gardens again, each for itself. My feeling as a consumer is that every high-profile artist needs their own app or streaming service nowadays. What's your perspective on this phenomenon?

I think exclusivity is driven by market share. The reason Apple Music and others go into exclusives is mainly to drive their market share up. You have everything on Spotify, and Spotify is very much against exclusives, because they have a lot of consumers. I'm personally also against exclusives, even though I'm always tempted by these marketing deals, because exclusives drive people back into piracy. If you have your 9.99 Spotify subscription and the exclusive you want is not on Spotify, you go into piracy again, and again you drive people toward a business model of using music without sharing the income, and that's not good for any of us. So it's a difficult one: it's a great marketing tool to be able to say, I'm giving something exclusive to Apple Music, I'll have posters all over the city, I get access to a marketing budget I could never afford by myself; but the flip side is that you're encouraging piracy.

I agree. I think you can do that if you are Beyoncé or Adele, and, as you were saying, people go with the brand they trust; it's very difficult to change people's listening behavior. A fun fact I heard: Radiohead's In Rainbows, the album they put out for free, or pay-what-you-want, was the most illegally pirated album in the month of its release. Wherever you get your music from, that's where you're going to get it from.

Okay, we're slowly coming to an end. Again, this is a fishbowl, so if someone has something to say, please get up and join us. Do you really think there's room for startups to enter the streaming market? Oh yes, there we go.

Hello. Talking about streaming: I wear different hats, and now I'm putting on my user and digger hat. On exclusives: I've talked a lot with friends, and some of them have Amazon Prime for movies but also Netflix. I think users are willing to pay more for movies than for music streaming; they really don't care if they have three subscription models for movies, but music is just one thing, like the candy you mentioned before, which is really a great thing. With exclusives you don't drive anything. As an indie artist you can do an exclusive if you want to promote your new Bandcamp account or something like that, but the moment you're a big artist like Beyoncé: last week I pirated it immediately. Usually when I steal music, I buy it as soon as it's available on my service or whatever; Beyoncé, maybe not, I don't know. But I think it's very important to see that young people will get music wherever they can get it. They are not loyal to companies; they really don't care whether it's Amazon or Spotify. They want to hear this Beyoncé because it's the shit.
And adding to you, and commenting on your point about audio: I always have the 18-year-old daughter of my girlfriend in mind, and she doesn't care whether it's audio, video or whatever; it's all the same to her, it's just music in some form. If you have the capacity, you watch the video, but basically you don't care, you just grab what you get.

The acoustics are really bad here, huh? I really liked what you said, but I would like to add one more thing: with all these new possibilities, music also reaches people who are not looking for it, so I think we are getting a new audience on top of everything. The audiophile people, the ones who really love vinyl and quality, are still there, and they no longer have to fly to Brazil to get a record; they can have it at home. So the audience is growing. What we have to take care of now is to share the money more equally and to bring more transparency into everything behind all this. But we are at the beginning, so there is time for that to be solved. Top-ten chart pop music is a totally different thing we don't have to talk about, but beyond that I think there are far more people out there now who are willing to pay. I have a lot of friends, and I tell them almost religiously: use Bandcamp, use Bandcamp. And they come back to me saying, it's really cool, Barbara, people are actually buying my music.

On the sharing side, there has been this initiative by the Worldwide Independent Network of signing a code of conduct on how labels pay their artists, and I think that's a step in the right direction: as independent labels we can agree that we share all income with our artists and agree on fair treatment. Organizations like IMPALA and WIN are also talking about developing a stamp that says: this music is released under fair trade. That is really where we need to get to, to be vocal about the fact that there are different types of companies in this industry, because I get a little nervous when everything is lumped together as "the bad music industry". I could imagine three companies with a rather different business model, but I wouldn't necessarily call them bad.

Okay, thank you. Are there any more questions from the audience? No? Then do you have anything to add? Only that I was wondering whether we were going to mention blockchain, but we've probably run out of time now. I think we need to make this work for everybody. I certainly think there's a place for labels; I even think there's still a place for major labels. I just think there's no place for intransparency anymore, and I do think we now have all the tools to make it work for everybody; it's just going to take a bit of shaking up.

You know what I always wondered: why has Google or Facebook never bought a major label? What keeps them away from that? Why would they do that?
Because their business model is based on the devaluation of music: the cheaper music is, the more people use Google; the cheaper music is, the more traffic there is, and the more traffic there is, the more advertising Google sells. So there is a conflict of interest with companies that use the emotions of music to drive the value of music down in order to boost their traffic and sell more advertising. That's why companies like Google hate copyright: they know copyright makes music expensive, and whatever is expensive has less traffic and brings less advertising.

Just one announcement before we stop here, because there's another talk coming up right afterwards on blockchain technology and the music business, and I think it's just about to start. If you're interested in new technological approaches and in transparency from an artist's perspective, you should definitely stay for it. Cool, thank you so much, everyone. Thanks for an inspiring talk, and I wish you a wonderful rest of the evening. Thank you.
|
Berlin is a city of creatives: music has always been one of the most important cultural arteries running through Berlin, radiating outward and attracting music fans, musicians and music-industry professionals from all over the world. Join the fishbowl and be part of the discussion. What are your ideas and opinions on linking music and tech? Take the chance to talk with speakers and players from the music industry.
|