10.5446/30608 (DOI)
To my knowledge there is no reference database of simulator sickness questionnaire results for 3D TV, so we administered the questionnaire both before and after the test to have a comparison. Our questionnaire has one more level on the rating scale than the original version. This is how it looked: it had a column with the different symptoms that can arise when you are in a simulator, and each symptom is graded on five levels: none, slight, moderate, strong, or severe. There were 68 participants who completed the questionnaire both before and after the test, with ages between 16 and 72, a mean of 34 and a median of 29. One third were females and the remaining 48 were males. For the statistical analysis we converted the grades into numerical values: zero for none, one for slight, two for moderate, three for strong, and four for severe.
And then we looked at the individual symptoms before and after. We did pairwise statistical tests, both parametric and nonparametric. We also performed an ANOVA with post hoc tests for the pairwise differences between before and after. In addition, we applied the analysis that Kennedy and colleagues developed in their 1993 paper, where they grouped the symptoms together and suggested a way to quantify the strength of these symptoms in a combined manner. In their statistical analysis they found that the symptoms could be grouped in different ways, and one of the stable solutions was three groups, which they identified as Nausea, Oculomotor, and Disorientation, each with its own weighting coefficient for computing a combined score. For the individual symptoms, Fatigue, Eye-strain, Difficulty Focusing and Difficulty Concentrating were significantly worse after the test than before. As you can see, and as confirmed by the statistics, the grouped scores were also significantly higher after than before, and the biggest difference is for the Oculomotor factor, so the oculomotor system shows the strongest effect.
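As a hedged illustration of the scoring described above, the sketch below converts the five verbal grades to the numerical values 0 to 4 and computes weighted factor scores in the spirit of Kennedy et al. (1993); the factor weights are the published ones, but the symptom-to-factor grouping shown here is abbreviated and only illustrative, not the exact item lists used in the study.

```python
# Minimal SSQ scoring sketch (assumptions: five-level scale from this study,
# factor weights from Kennedy et al. 1993, abbreviated symptom grouping).

RATING_TO_SCORE = {"none": 0, "slight": 1, "moderate": 2, "strong": 3, "severe": 4}

# Illustrative (not exhaustive) grouping of symptoms into the three factors.
FACTORS = {
    "nausea":         ["general discomfort", "nausea", "stomach awareness", "sweating"],
    "oculomotor":     ["fatigue", "headache", "eyestrain", "difficulty focusing"],
    "disorientation": ["fullness of head", "blurred vision", "dizziness", "vertigo"],
}

# Unit weights per factor and the total-score weight from Kennedy et al. (1993).
WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(ratings: dict) -> dict:
    """Convert verbal ratings to numbers and compute weighted factor scores."""
    numeric = {symptom: RATING_TO_SCORE[level] for symptom, level in ratings.items()}
    raw = {f: sum(numeric.get(s, 0) for s in items) for f, items in FACTORS.items()}
    scores = {f: WEIGHTS[f] * raw[f] for f in FACTORS}
    scores["total"] = TOTAL_WEIGHT * sum(raw.values())
    return scores

# Example: one participant's post-test questionnaire.
print(ssq_scores({"fatigue": "moderate", "eyestrain": "slight", "nausea": "none"}))
```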
This work was sponsored by the governmental agency for innovation systems. Thank you. Question: was the effect related to age? Could you repeat, please? You said there was 40% that were largely unaffected; was that related to age? Oh yeah, okay, I haven't analyzed that combination. Sorry, I don't know. Good question. Andrew, check your switch. That would do it, wouldn't it? Thank you, Stephen. I got it, so please join me in thanking the speaker. Thank you.
The MPEG 3DV project is working on the next-generation video encoding standard, and in this process a call for proposals of encoding algorithms was issued. To evaluate these algorithms a large-scale subjective test was performed involving laboratories all over the world. For the participating labs it was optional to administer a slightly modified Simulator Sickness Questionnaire (SSQ) from Kennedy et al (1993) before and after the test. Here we report the results from one lab (Acreo) located in Sweden. The videos were shown on a 46-inch film-pattern-retarder 3D TV, where the viewers used polarized passive eye-glasses to view the stereoscopic 3D video content. There were 68 viewers participating in this investigation, with ages ranging from 16 to 72 and one third females. The questionnaire was filled in before and after the test, with a viewing time ranging from 30 min to about one and a half hours, which is comparable to a feature-length movie. The SSQ consists of 16 different symptoms that have been identified as important for indicating simulator sickness. When analyzing the individual symptoms it was found that Fatigue, Eye-strain, Difficulty Focusing and Difficulty Concentrating were significantly worse after than before. The SSQ was also analyzed according to the model suggested by Kennedy et al (1993). All in all, this investigation shows a statistically significant increase in symptoms after viewing 3D video, especially related to the visual or oculomotor system. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
10.5446/30609 (DOI)
Good morning. My name is Andrew Ellis. I'm a third-year doctoral student in mass communication at Florida State University. However, more importantly I guess to our topic today, I am a proud member of Florida State's 3D media team. Before I get started, and with the full agreement of my colleague Sophie Yanakey and Dr. Art Rainey, I'd like to thank the conference for giving me a few minutes to present our research. We are most excited at Florida State about having the opportunity to study 3D and so certainly cherish these opportunities to get to talk about it. So what are we doing with 3D at FSU? Well, we work in a larger discipline called media psychology. Essentially, media psychology is nestled at the crossroads of human psychology and mediated communication. It's a rich field, a field with 100 years of research into the 2D world. Essentially, we study how people feel, what they think, and how they behave when engaged with media. Needless to say, we're quite eager to sink our academic chops into this phenomenon of 3D. Why is that? Well, I don't think it takes a room full of PhDs to see that the consumer age of 3D is upon us. A few stats that I'm sure most of you are familiar with: by about 2016, half of all homes will have 3D TVs. 42% of 3D-capable TV owners watch five or more hours of 3D content a week. I just learned that; it came from the recent study done by the Consumer Electronics Association. 42% of 3D TV owners watch live programming as a regular 3D source. 48% of 3D TV owners report having a Blu-ray disc or a Blu-ray player. Heck, I think the demo session last night was all the evidence we need that 3D is here. Now, we could probably debate for quite a while whether this adoption is due to the push of supply or the pull of demand, but the truth is, it doesn't matter. 3D is making itself at home in our homes, whether we like it or not. Unfortunately, academic research into the effects of 3D technology has not kept pace with the technology itself, especially in our discipline. This lag has left us with what is becoming a mainstream technology, yet very little understanding about what that technology is or does or how it influences the media process or the consumption process. The consequences of this ignorance are far-ranging and significant. Fortunately, by sharing research at events like this, we can begin to understand 3D's powers, whether they be from shifting attitudes and beliefs to filling seats in a theater. Today, I'll be talking about three pilot studies we recently conducted, featuring a variety of 2D and 3D content. Each of these studies examined some of the psychological and physiological variables known to influence the media experience. And it's important to note that while I will show you some statistical findings, these experiments are probably better thought of as conceptual proofs of concept. They represent our best efforts to begin mapping out how to do academic investigation into the 3D world, from selecting the right content to using the right measures to finding the right delivery system. These pilot studies are essentially our first steps toward more robust research efforts. A lot of the work we do, both in 2D and 3D, focuses on narratives. Why? Because narratives or stories do far more than just entertain. In fact, we use stories in almost all of our communication, whether it's to tell our spouse how our day went, or whether it's the media telling us how a bill just made it through Congress. Narrative stories are used.
So what's that mean? It means we learn a lot from narratives as well. We shift our beliefs based on stories, both as individuals and as societies. In fact, many people credit the fall of slavery in the U.S. to a book, Uncle Tom's Cabin. With this in mind, our first study examines the influence of 3D on narrative persuasion. Specifically, we wanted to see if people were more likely to believe facts presented in the storylines of 3D narratives. For 2D media, research has shown that a phenomenon known as transportation has been consistently linked to the psychological mechanisms associated with narrative persuasion. Transportation occurs when a viewer forgets the world around them and feels they are actually part of the narrative. What happens is people get so involved, so focused in the narrative, that they don't have many cognitive resources left to critically argue the details or the facts presented in the storyline. As a result, with their guards down, their attitudes end up getting shifted, their beliefs end up getting shifted. We hypothesized that the rich and immersive environment of 3D would lead to a greater sense of transportation and consequently greater persuasion. We also measured three other variables that the literature has shown to influence transportation, assuming that each of these variables would be elevated in the 3D condition. To do this, we selected two primetime television dramas, Numbers and Grey's Anatomy. Each of these episodes dealt with the common issue of organ donation, but they presented this issue in different ways. They used different facts. They had different storylines. They said different things about it. I use the word facts loosely because really some of the information presented in these episodes had nothing to do with fact. We call them facts because that's how they were presented to the viewer; however, in real life they are far from true. One of the key ones that we looked at was that one episode presented a thriving black market for organ sales in the United States, and that simply doesn't exist. We converted both of the episodes using a real-time processor. Our sample consisted of 196 undergrads in communication. The participants were randomly assigned to both episode and 2D or 3D condition. The first thing we did once we collected our data was confirm what we knew, or what we thought would be the case based on the 2D literature: that is, that narrative persuasion does occur. If you look across both formats, we found that to be the case. People believed differently about the facts based on what episode they had seen. You can see that right there in the middle: belief that the organ black market exists was elevated for Numbers, because that's how it was presented in that episode, indicating a shift or an adoption of new beliefs or attitudes based on it. So narrative persuasion does exist in 3D. However, surprisingly we were unable to find any significant differences in the variables that have been shown to contribute to narrative persuasion. Because this lack of findings disagrees strongly with the existing literature in 2D, and because we are trudging through uncharted territory in investigating 3D, we began looking for other explanations for our null findings. Here are a few. Despite our best efforts, the conversion process compromised the quality of our 3D content.
The relatively low quality, especially when compared to the pristine HD that the other conditions saw, may have distracted viewers, caused discomfort, or even weakened the credibility of the narrative. Secondly, these shows were scripted, shot, and produced for 2D viewing. Many people believe that to realize the full potential of 3D, the content must respect the format through all stages of development. Essentially, not all 2D content works in 3D. And finally, as is frequently the case in media studies, we also looked at genre as a possible explanation (not gender, genre). Some of the things that we did figure out, though, are that transportation is greater in narratives presented in 3D. So while persuasion wasn't enhanced or elevated, transportation was, and I would continue to hypothesize that that would lead to more enjoyment in the future. This should improve ratings and profit, with possible implications for narrative persuasion, product placement, and the promotion of social issues. And then we also have to say again that 3D storytelling is critical for the production of successful narrative entertainment. Next we looked at enjoyment. Except this time, we hoped to correct for some of the possible problems with our last study by using narrative content designed for consumption in both the 2D and the 3D world, native 3D narrative content. This allowed us to skip the conversion process entirely and, at least to some extent, lessen our concerns regarding cross-format viewing. We also worked to recognize genre in the study by comparing sports content, a highlight segment from ESPN, to a big-budget 3D movie, Resident Evil. Again, we used transportation. We theorized that transportation would be an effective predictor of enjoyment. However, this time we also included measures for presence, attention, and emotional arousal, all shown in previous research to predict enjoyment. In this study, we had 60 participants randomly assigned to 2D or 3D conditions. Overall, we found the 3D condition to be more enjoyable, as we suspected. However, when you look individually at each condition, the significant difference in overall enjoyment was attributable entirely to the difference in the sports highlight. In the sports condition, people reported a much greater degree of enjoyment. The movie trailer did not show a significant difference in enjoyment. What made the sports clip more enjoyable? We found emotional arousal to be the leading predictor of enjoyment in the sports clip, but we also found a significantly lower heart rate in the 3D sports condition. Now, to many of you, that may seem counterintuitive, but the truth is, based on the literature, heart rate is actually a great measure of attention; essentially, the higher your level of attention to content, the lower your heart rate will be. So we use pulse frequently as a measure of attention. For the movie clip, transportation was significantly higher in the 3D condition, yet this did not result in greater enjoyment, as the literature suggested it should. Also contradicting the sports clip, attention was shown to be lower in the 3D condition for the movie clip than in the 2D condition. People paid less attention in 3D when watching the movie clip than they did for the sports.
Now, one possible explanation for this is that the added richness of the 3D environment, the added sensory experience of the 3D world, may have caused people to kind of shut down from this trailer, almost like when you're watching a horror movie: you cover your eyes a little bit, or you look away, to kind of step back from too strong of an emotional reaction. People may have limited their attention. Finally, we decided to look at video games, enjoyment and video games. Now, why did we select video games? Well, video games, we believe, and I think history shows, have an almost perfect track record of leveraging new technology for improved gameplay. Even now, game marketers are working to package and promote new games as being in 3D and therefore having enhanced gameplay. There's plenty of literature suggesting the psychological variables most critical to gaming enjoyment. Based on the literature, we believed 3D would increase enjoyment by increasing a sense of presence, which is similar to transportation; however, it just involves the feeling of being there. Immersion and involvement. We also measured, there's our suggested model, a few physiological measures: self-reported physiological arousal, heart rate, again as a measure of attention, and physical discomfort. Physical discomfort, of course, is interesting. This is really our first attempt to start looking at the discomfort associated with, or believed by some people to be associated with, viewing stereoscopic content. Here is our model. As predicted, and our results were far more consistent this time, 3D play resulted in greater enjoyment, presence, immersion, and emotional involvement than the equivalent play in a 2D gaming environment. There's my colleague Sophie in our lab actually playing a video game. Again, this shows what I just said: enjoyment, presence, immersion, and involvement were all higher in the 3D condition. Yet at this point we still didn't know how those variables work together to predict enjoyment. What we learned was that presence, as we suspected, was the greatest or leading predictor of game enjoyment, and that presence was most associated with immersion. Involvement, while elevated in the 3D condition, did not contribute to or elevate the sense of presence. We also looked at pulse again, and we found pretty remarkable differences in pulse between the 2D and 3D conditions. As you can see, for both games, which were Killzone and Gran Turismo (I failed to mention, again 60 subjects here, randomly assigned to condition and game), the 3D condition resulted in lower heart rates, again kind of shocking. As you think about bullets coming at you in 3D, or getting around cars in 3D in a racing game, you'd think that your pulse would be elevated. But the literature and our findings suggest otherwise: that it's actually a better measure of attention. So which of the physiological variables contributed to game enjoyment? We only found physiological arousal to be an effective predictor of game enjoyment. While heart rate was lower in 3D, it did not predict game enjoyment. And interestingly, physical discomfort was in no way correlated with game enjoyment; that means people who did report a sense of discomfort did not necessarily report liking the gameplay any less.
We kind of call this the roller coaster effect: when you step off of a crazy roller coaster, you could be on your way to vomit, yet you'll do it with a smile on your face, saying that was the best thing I've ever done in my life. So there may not be the one-to-one relationship that you would imagine would be there. Based on our studies, we found that narrative persuasion does still function in a 3D world. We found that transportation is higher in the 3D condition and that video games appear to be more enjoyable in 3D. We also learned that content and conversion may play a significant role in the appearance of 3D effects. Content made for 2D may not have the same effect in 3D, and vice versa. Essentially, we learned that 3D is a finicky medium and that new and special efforts must be made to control these sensitivities during research. Still, it's not stopped us from moving forward. In fact, my soon-to-start dissertation will focus on viewers' ability to free recall information presented in 3D versus 2D. I'm excited to keep studying 3D and hope to share these findings and others with you in the future. Thank you. Thanks. We have time for one question. Any questions or comments? Please. Hi. Andrew Hogue from UOIT. Just wondering, what were your measures for presence and immersion? Were you using the GQ or something else? I'll have to look up exactly what we used. We used the standard measures of those variables that you will see in all the 2D literature. I don't have them with me. Just a moment. So, a question from K-Zero. My question is, is cost a factor? Like, people go to the theater and 3D movies cost a bit more, are more expensive. Sure. I think it's a factor. How I think it's a factor is that when people pay $7 more for a ticket, they expect an over-the-top effect. That in some way is kind of driving how content is created. In order to make a return on movies, content creators must in some cases provide an over-the-top experience, which may not be working in the best interest of 3D. So please join me in thanking the speaker.
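For readers who want a concrete picture of the kind of between-subjects comparison these pilot studies rely on, here is a minimal sketch with entirely hypothetical ratings: participants randomly assigned to a 2D or 3D condition, then the two groups compared on an enjoyment score with an independent-samples t-test. It is only an illustration of the general approach, not the authors' actual analysis pipeline.

```python
# Hedged sketch of a 2D-vs-3D condition comparison (hypothetical data).
import random
from scipy import stats

random.seed(1)
enjoyment_2d = [random.gauss(5.0, 1.2) for _ in range(30)]  # hypothetical 1-7 ratings
enjoyment_3d = [random.gauss(5.6, 1.2) for _ in range(30)]

t, p = stats.ttest_ind(enjoyment_3d, enjoyment_2d)
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p would suggest a condition effect
```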
With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology’s rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
10.5446/30610 (DOI)
I would also like to mention my colleague Martin Hammer; we did this work together. He is actually a high-dynamic-range expert, and the idea was to bring 3D and high dynamic range together. This is what came out of it. 3D is introduced to enhance the viewing experience; that is also the intention of the work. People experience the content as more natural, and there are a few more benefits, which are of course very positive. But it also has a few side effects, such as visual discomfort. It is always difficult to convey 3D discomfort on a 2D projector; I find this image particularly effective at doing that. But perhaps a more realistic example is crosstalk, as you see on the slide. It is essentially the incomplete isolation of the left and the right image, so that one image leaks into the other. As you can see, it degrades image quality and it also introduces visual discomfort. It is an important factor, because it affects almost all stereoscopic displays that we know of, and it is the topic of this presentation. To give you an overview, I will give some background information on crosstalk thresholds in the literature, something about the human visual system, and our research objective. I will of course explain the experiments we set up, how we measured crosstalk thresholds and what quantization effects we ran into, and at the end I will reflect on this in the discussion. To begin with the visibility thresholds of crosstalk: what you see here is a graph in which I have plotted the visibility thresholds from different studies. You see the crosstalk visibility threshold on the vertical axis, as a function of contrast on the horizontal axis and as a function of disparity. The disparity is color-coded here, so the darker the color, the larger the disparity. What we immediately see, and what we already know, is that if you increase the disparity, the visibility threshold decreases, and if you increase the contrast, the threshold also decreases. And if you increase the disparity beyond about 30 arcmin, the threshold does not decrease any further, so it essentially settles. If you look at the data, you see that the contrast only goes up to about 100. So what happens beyond that, when you increase the contrast further, is at this moment not really known, at least not to us. If we go into the details of those studies a bit more, you see the authors, and they all actually find a similar visibility threshold of 0.2 to 0.3 percent. A few things are interesting. For one, they did not measure that threshold directly but extrapolated it from the existing data points, and the reason for that, we think, is the limitation of the displays. That is also a second point of interest: the display characteristics are all of a low dynamic range. They have a maximum contrast of about 100:1 and a maximum peak luminance of about 100 nits. If you compare that with the human visual system, you see a large mismatch. We can perceive on the order of 3 to 4 orders of magnitude of contrast, and if we look outside during the day, we experience roughly 100,000 lux. That is a shame, because there are already displays that move towards what the human visual system can handle. For example OLED, which has a much higher contrast.
But there are also high-dynamic-range displays, with a contrast of a million to one and a peak luminance of 4,000 to 5,000 nits. So in my opinion this warrants new research on crosstalk visibility. And if we look at how contrast sensitivity and luminance relate to each other, this graph is always very useful. What you see on the vertical axis is contrast sensitivity as a function of luminance, the background luminance. A few things are noteworthy. For example, as the luminance increases, the contrast sensitivity also increases. What you also clearly see is a kink in the curve, which basically comes from the different processing of the rods and the cones in the retina. But interestingly, you also see a kind of saturation of the contrast sensitivity around 100 nits. That is interesting, because that is where TVs and displays actually operate. I will come back to this graph in the discussion. For now, to summarize the literature: we have visibility thresholds of about 0.2% at a luminance of 100 nits and a contrast of about 100:1. Our objective was to determine the crosstalk visibility threshold for displays with higher luminance and higher contrast. We hypothesized that luminance levels of about 100 nits and above would not affect the visibility threshold, and that contrast levels over 3 to 4 orders of magnitude would. As for the experiment we performed: we determined the crosstalk visibility threshold with a staircase method on a Wheatstone viewing system. You see a screenshot on the slide; the participants had to indicate which image had more crosstalk, the top one or the bottom one. If they gave the correct answer, the crosstalk decreased; if the answer was incorrect, the crosstalk increased again. We had 14 participants with good visual acuity and good stereo acuity. We had three images, of which the black-and-white one is of most interest because it has the highest contrast. We had three luminance levels, 125, 500 and 1,500 nits, and two contrast levels, 1,000:1 and 2,500:1. The display we used is a high-dynamic-range display. It had a maximum contrast of 100,000:1, a peak luminance of more than 4,000 nits, and roughly 1,100 backlight segments. We did not drive these separately, because then you would get a spatially dependent contrast. But what is important here is that the backlight is independent of the panel, so we could set luminance and contrast independently of each other. And that is very important, we think, because what we are actually doing is simulating different displays on a single display, so contrast and luminance have to be independent. But it is also important that luminance and contrast stay constant while the crosstalk is varied. And here comes the first difficulty, because very often, and also in our case, crosstalk is modeled like this: the luminance mixture is essentially the black level plus a leakage of the white level, and the leakage is alpha. So if you increase the crosstalk, you increase alpha, so more white is added to the black, and that also means that you change your contrast, which is not what you want. Without going into detail, I can come back to this in the Q&A; I have an extra slide on it. What we actually did is make both the black level and the white level dependent on alpha.
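A minimal sketch of the staircase procedure described above may help. It assumes the 'one-up/two-down' rule named in the abstract (crosstalk is lowered after two consecutive correct answers and raised after any incorrect one); the simulated observer, step size, and trial count are arbitrary illustrations rather than the actual experimental settings.

```python
# Hedged sketch of a one-up/two-down crosstalk staircase with a simulated observer.
import random

def simulated_observer(crosstalk: float, true_threshold: float = 0.2) -> bool:
    """Return True (correct) with higher probability the more visible the crosstalk is."""
    p_correct = 0.5 + 0.5 * min(1.0, crosstalk / (2 * true_threshold))
    return random.random() < p_correct

def staircase(start: float = 1.0, step: float = 0.05, trials: int = 60) -> float:
    crosstalk, correct_in_a_row = start, 0
    for _ in range(trials):
        if simulated_observer(crosstalk):
            correct_in_a_row += 1
            if correct_in_a_row == 2:          # two-down: make the task harder
                crosstalk = max(0.0, crosstalk - step)
                correct_in_a_row = 0
        else:                                   # one-up: make the task easier
            crosstalk += step
            correct_in_a_row = 0
    return crosstalk                            # rough estimate of the threshold (%)

print(f"estimated threshold = {staircase():.2f}%")
```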
So in the equation, when crosstalk is defined this way, both the black and the white level depend on alpha. So again we have the crosstalk model, and what you see here is alpha, which is a floating-point parameter, which is nice, but the luminance levels are quantized, and that is something you have to take into account. So we have a few algorithmic steps that I will show now, if this works, yes. First we characterized the opto-electronic response: for every drive level of the panel we measured the luminance output. We have a number of desired luminance levels, as I said on the slide. Then through the forward look-up we find the closest quantized drive level D, as you see here in red, and through the inverse look-up we find, via that relation, the quantized luminance level. Okay, the results, beginning with the impact of contrast. You see the threshold on the vertical axis and the two contrast conditions. And as you can see, contrast does not have a significant impact. We have the same graph for luminance, and here you do see a significant impact. You also see that the threshold values are relatively high; that is because they are averaged over all three images. What I would like to do is focus on the black-and-white image. That is what you see on this slide here. You see two panels corresponding to the two contrast conditions, and in each panel you see the three luminance conditions. You again see the luminance impact: higher luminance means a lower threshold. But what you also see is that the contrast has an effect. It is not significant, but for the higher contrast condition the thresholds are also somewhat lower. I will come back to that in a moment in the discussion. To wrap up, we have the thresholds for the three images, of which the top one is of most interest to us. If we go back to the first hypothesis, in which we stated that luminance of 100 nits and above would not have an effect: well, we actually did find an effect. We think it can be explained by the fact that there is a difference between the luminance we adapted to and the luminance that was on the display. The luminance we adapt to is actually dominated by the fovea, which covers roughly 2 degrees. If you project that visual angle onto the stimulus, you get the red circles, and what you see is that only a part of those red circles is covered by the white object. So the luminance we adapted to is much lower, and if it is lower than 100 nits, as I showed you earlier on the slide, your contrast sensitivity drops again, which can explain the significant effect of luminance. The second hypothesis stated that contrast over 3 to 4 orders of magnitude would have an effect. We did not find that; we only found a trend. So with respect to this hypothesis, the effect of contrast above 100:1 is not absent but rather small; we would perhaps need more participants to make it significant. The last part of my discussion that I want to focus on is that the quantization of the display luminance of course also propagates into the crosstalk measure. That is visualized here in this graph. You see the crosstalk on the vertical axis and the alpha parameter on the horizontal axis. If you compute the crosstalk at floating-point precision, you get the red line, but if you use the quantized luminance levels, you get the black dots.
At first glance it looks fine, we have enough black dots. But if you zoom in on the region of the thresholds for the black-and-white image, you see that between 0 and 1 percent only about 15 levels remain, so it is not very accurate in that sense. So essentially what we found is that the finite discrete levels of transmission in the LC panel limit the accuracy of the measured crosstalk. This holds especially for high-contrast displays, because that is exactly where you run into this problem. Now, to conclude: we found a crosstalk visibility threshold of about 0.2 percent. This is in line with previous research, where it was extrapolated, so this is in agreement. High luminance has an impact, while high contrast does not. For example, reflecting on OLED panels: the system crosstalk will increase by definition, but we will not see it. And of course we pointed out the quantization effect, which should be taken into account; what you can do, for example, is increase the number of bits in future research. I think that was it. Thank you. Number 2, please. Number 2. Test, test. I am here. Thank you, that was good. Crosstalk, high-dynamic-range displays; I want to buy one. If I buy a high-dynamic-range display, what properties does it need for 3D viewing? What is the result? The high-dynamic-range display that we used is custom-built; there is only one available. I am not sure how much money you have, but I think it is more than 60,000 euros. But what is on the market? I do not think there is much yet, but I think that is where this kind of research is heading. So is the conclusion that we do not have to worry about using high dynamic range for 3D? Not with respect to crosstalk, I would say. Yes, that is the proposition. Charles. Thank you for the talk. Do you know, or do you know of anyone who has measured stereoscopic depth perception as a function of crosstalk? Because much of the research is about the perception or visibility of crosstalk. If you run a Howard-Dolman type test on a sample display with crosstalk, how much does the crosstalk degrade stereoscopic depth perception? I am sorry, I did not get the first part of your question. Have you measured stereo depth perception as a function of crosstalk? No, and I am also not really aware of such studies. I think Inna Tsirlin, at York University in Canada, presented a paper here last year or the year before. Inna Tsirlin, T-S-I-R-L-I-N; she works with Laurie Wilcox. Crosstalk above about 1 percent directly affects it, which is good to know, but 0.2 percent would be a great target, if 0.2 percent can be reached. Yes, I believe so. A little more time, Andrew? Ordinarily LCDs are not known for their good contrast ratios; their blacks are not very good. I am curious how you got to a 100,000-to-1 contrast ratio. How was that achieved with LCD technology? Ordinarily it is done with local dimming, so how did you do that? I have to say that I cannot answer that about the technology. It is covered a bit in the paper, but I do not have an answer for that, sorry. Okay, that is it.
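To make the quantization issue concrete, the sketch below maps a desired luminance onto the nearest panel drive level through a synthetic, gamma-like opto-electronic response (in the study the response was measured, not assumed) and then evaluates the crosstalk that can actually be shown, using the definition (BW − BB) / (WW − BB) from the abstract; all numeric values here are illustrative assumptions.

```python
# Hedged sketch: luminance quantization via a drive-level look-up and the resulting
# displayable crosstalk. The response curve, bit depth, and levels are assumptions.
import numpy as np

peak, levels = 4000.0, 256                       # assumed peak luminance (nit), 8-bit panel
drive = np.arange(levels)
measured = peak * (drive / (levels - 1)) ** 2.2  # stand-in for the measured response

def quantize(target_luminance: float) -> float:
    d = int(np.argmin(np.abs(measured - target_luminance)))  # forward look-up
    return float(measured[d])                                 # inverse look-up

WW, BB = 2500.0, 1.0                             # intended white/black (contrast 2500:1)
alpha = 0.005                                    # intended leakage of 0.5%
BW = quantize(BB + alpha * (WW - BB))            # black image with white leaking in
crosstalk = (BW - quantize(BB)) / (quantize(WW) - quantize(BB))
print(f"displayable crosstalk = {100 * crosstalk:.2f}%")
```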
Crosstalk is one of the main stereoscopic display-related visual perceptual factors degrading image quality and causing visual discomfort. In this research the impact of high display contrast and high display luminance on the perception of crosstalk is investigated by using a custom-built high-dynamic range LCD (liquid-crystal display) in combination with a Wheatstone viewer. The displays’ opto-electrical response was characterized and the display calibrated, to independently vary luminance, contrast, and crosstalk (defined as (BW − BB) ⁄ (WW − BB)). The crosstalk visibility threshold was determined via a ‘one-up/two-down’ staircase method by fourteen participants for three different images that varied in luminance (125, 500, and 1,500 cd/m2) and contrast (1,000:1 and 2,500:1). Results show that an increase in luminance leads to a reduced crosstalk visibility threshold to a minimal value of 0.19% at 1,500 cd/m2. The crosstalk visibility threshold was independent of the tested contrast levels, indicating that contrast levels above 100:1 do not affect crosstalk visibility thresholds. Important to note is that for displays with high contrast, the finite discrete levels of transmission in the LC-panel quantize the luminance levels, which propagates into and limits the accuracy of the crosstalk visibility threshold. In conclusion, by introducing OLEDs (high contrast), the system crosstalk will increase by definition, but visibility of crosstalk will not. By introducing high-dynamic range displays (high peak luminance), the crosstalk visibility threshold will be lower. As the absolute threshold levels of low-dynamic range displays are already very low (at or below 0.3%) this will result in little perceptual effect. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
10.5446/30612 (DOI)
The horizontal motion parallax, and then this prevents having a natural view in some situations. Of course, we could talk a lot about the 3D TVs, which is a little provoking statement in my view: they are still 2D devices. As for the autostereoscopic multi-view systems, they are already 3D displays, and again we are talking a lot about 4K, where 4K will be the solution, because we can generate a lot of views; 28 views, for instance, have already been announced. But don't forget that this is a game with the numbers: if you multiply 9 by the 3 colors, you are immediately there at this number. So my observation, again, when people are watching glasses-free systems, is that the real problem is that there are still very few spots where you can get the real 3D. As you see, most of the locations are blended or overlapped or invalid zones, and you really have to position yourself. And that tells me that we are still in the evolution phase. So we still have a lot of work to do, so this conference will live for long. And just to tell you how we think about 3D: the longer the definition, the more suspicious I am. So I used to say that 3D is when you look out of the window, and not more than that. If you are making a longer statement, then you are already restricting it. I made one point here, the word open: it means that you not just look out of the window, but the horse could push his nose into the window. I mean, objects could be in front of the screen. To see a perfect view: for instance, if you had a display like this and you hung it next to your window and you were not able to differentiate, then you had this ultimate display. And it means that when you look out of the window, you don't see the light beams, sorry, from the garden, but you see the light beams from the internal glass surface of that window. So once you are able to reconstruct all those light beams, and this is the way holograms work and so on, with all the parameters, then you have the perfect 3D view. And this is a quite general approach. Just to very quickly define what the light field is: you really represent a 3D scene with objects, I mean light sources, point light sources. And as you have the light beams coming off the points, you have an intersection with a surface, practically the screen. And you can describe each light beam with a spatial, let's say, location position and two parameters, which is the angle. The angle over which you emit the light beams, this is the field of view. The number of light beams emitted in this range, this is the angular resolution, and mostly this defines the depth budget of the display. Okay, so this is the very general definition. So once you have a 3D display, it should have some kind of, sorry, this is here, direction-selective light emission. And that's the point where I would separate displays: if you have this direction-selective emission, those are 3D displays, in my terms, but if you don't have it, they are 2D displays with some specific 3D features. Okay, let's check that point. And exactly this is what we are doing. The whole thing is based on the same principle: we have a lot of optical modules in the background, let's say, and we hit the same screen point with a large number of light beams coming off those optical modules. And the holographic screen introduces a certain diffusion, so it diffuses the light only into a very narrow window.
And if this angle, how the optical modules are arranged, is equal to, I mean, the angle between the modules is equal to the screen diffusion angle, then you will have a continuous view. So this is what we are doing. And this is the reason that we do not have any deflection at the screen. So it is not like the lenticular systems, where you have sub-pixel structures and you make the deflection there. We just cross the screen with a large number of light beams, and that is the reason we can make a wide field of view. So this is a totally geometry-driven system. One more point here: what is the difference between multi-view and light field? Because it is a frequently asked question. We never reconstruct views. So if you are in front of our displays, you never see an image coming from a single projection module. If you have a point, then this is always reconstructed in combination: all the modules contribute to address this point. So those points are addressed physically, and all the modules contribute to the whole image organization. In that sense, you will never see discrete borders between views, but as you move in front of the display, the view will be totally continuous. Okay, how does it look in practice? Now you can see these are some historical recordings. We have displays from the monitor size up to the large-scale systems. This is the very first monitor from Holografika. And maybe you can have, I hope that works. Yeah, it was the other one. Another example of a larger-scale system; as you can see, it is really continuous as the camera moves, and you can see from many sides a totally continuous 3D view. Okay. And now I arrive at the point that transmissive and reflective systems are optically equivalent. It means that all of those arrangements with the projectors and screens can be configured differently. For instance, you can have a reflective holo screen, which, if you recall the drawing from the former slide, you can see is absolutely the same. Or if you use retro-reflective surfaces, basically the same. Again, please note that you have the direction-selective emission. And of course, you can have periodic screens, and then it is sufficient to use fewer projection modules, which means that, for instance, for a cinema situation, this will be fine. So as you see, this is where our patent portfolio is: any kind of projection system where the projector arrangement and the screen diffusion angle are in a certain relation, this is where the Holografika activity is. And we make rear-projection and front-projection style arrangements. Okay, front projection is very good where you have space requirements and critical space issues, because you do not have to pack in the big box. But as in a cinema situation, your viewer can be in the range where your light beams are coming from the optical modules to the screen, so it absolutely corresponds to a normal cinema geometry. The point, on the other hand, is that while theoretically equivalent, in practice front-projection systems are much more sensitive. The screen should be really very carefully manufactured, because you can see any kind of cosmetic defects immediately. Let's see our system. So this is the HoloVizio glasses-free 3D cinema system, with concrete facts. Our current model has a 3.5-meter diagonal. You can see here the distributed projection unit, which is basically an 80-channel projection system, and the 80 channels of XGA resolution are all together 63 megapixels. The field of view is 40 degrees.
And then with this angular resolution we have, practically, a depth budget of 1.5 meters out of the screen and 1.5 meters behind it. Of course, we have other components in the system. I just remind you that this is an absolutely 2D-compatible system, so we can also project 2D films on it. At the moment there are no light-field movies, but this is very good for 3D simulators or promotion applications or even rental, so this is where we are using it. Okay. So the system architecture: we have the content acquisition side, you can see the display system, and of course we have the cinema cluster, where all the software is running and doing the conversion and whatever operations are needed. For the content acquisition there are basically three ways. One is when we have the 3D models; then we can have a dense camera array, like a light-field camera; and of course we can have fewer cameras. If you see, that is the way we can represent 3D: we can make a geometry-based, I mean model-based, representation, that is done, it is clear; or we can have a large number of views. So we could take, let's say, 100 photos of this room, and then I could build you up a very high-quality 3D image. Of course, this is not always practical, so you use fewer image streams, you have fewer cameras, and then you have to interpolate or extrapolate between them. Or, very similarly, if you have a few image streams and depth or disparity maps, you can also build up the 3D content. A depth map is already a kind of 3D model, by the way, an implicit 3D model. I used to run a very easy questionnaire; I am just running through it quickly because time is running as well. So probably no one would like to wear glasses. Agree? Yeah. If you are sitting in front of your display, would you like to be positioned? No. Thank you. Do you expect that multiple people can see the same? Okay. I love you. Do you expect invalid zones in the field of view? Who is expecting, or who is accepting that? No one. Thank you. And you expect that the field of view should be reasonable, as for 2D? Okay. Then you said it; basically, not me, it was you. So if your expectation is like this, then you have to collect the information from the same range. Today we try to collect the information from a camera pair or maybe a few cameras; on the display side, however, we have a much higher expectation. So this is definitely a conflict. Hard to resolve, but there are some solutions. Okay. From that point of view, it means that to have live content or natural content on your 3D displays, you definitely need a wide-baseline camera arrangement. This is the basis for free-viewpoint television. The good point is that in studios, that is reality, because you have a lot of cameras; these are permanent arrays, it is not a problem. For the cinema, that is a question for the next couple of years. Now they are happy with the two cameras, but one day they will face the fact that they have to use more. So that is a real question again: whether you make a full acquisition and then a total view reconstruction using all this information, or you sample with a few cameras and try to synthesize the views to recover the information you lost. So let me show you one way, for the live exhibition. This was made at Holografika. We are using a dense array of cameras, and you can see the first results here. It was shot with a 27-camera array, very basic cameras. Let me quickly go to the other alternative. This is developed in the Muscat project.
This is a European project with more participants in the consortium. The camera rig was designed by HHI together with Cookfilm. You can see this is a 4-camera array: we have a stereo pair in the center, to be compatible with stereo, and we have two satellite cameras with an adjustable baseline. So this is the theory, then this is the reality, and then this is the content: we have 4 views with 4 depth maps. I am just quickly running through. So when you have 4 views and 4 depth maps, of course you have to interpolate. That is done, that is more or less clean today. But the real problem is to extrapolate. So we have to heavily extrapolate, and then you can see the very extreme range where we really calculate our views. I do not go into the details; that is very typical R&D. When we have holes in the image, how do we inpaint them? We try to keep the structures. You can see some examples here; it is not just simple inpainting that you can do. And here you can find an example, and this is running a little bit. Maybe some views of the cinema, some static images: you can see, as the camera moves again, that it has real parallax. And you will see some natural content soon, maybe in 60 seconds or 20 seconds. It is very interesting. Here you can see the screen size, the glasses-free images, and the depth scale; it is remarkable. And then just imagine that a normal movie cinema screen can be replaced by a holographic screen, and a single projector can be replaced by a distributed projection. And here now you can see the sequences from the Muscat project. This was shot by four cameras, and this is heavily extrapolated, up to 40 degrees. That is not a normal task: normally there is just a few degrees of range for the autostereoscopic systems, and this is very extreme. If you wish, I have finished my presentation. I will just leave you with a video here, because I always try to show something new when I am here with you. There is a prototype developed by Holografika recently; there was some news about it already, and this is it, to show you and let you judge for yourself whether to have a full-angle 3D display, and this is a 180-degree monitor. You can see the camera moving from one very extreme angle to the other, whether it is too much or not. In my opinion, this is the first monitor which can really be used like a 2D monitor. So this is, I used to call it, the explanation-free monitor: I do not have to explain that you have to step here and you can see the 3D. You can be everywhere, and it gives you a very natural and real view. The image never disappears. So I tried to be on time. Hope you enjoyed it. We have time for just one question. I am curious how you actually execute all that with the screen. For every one of those screens, did you have a big field lens behind the screen, or were you projecting all those images even on the huge screen? No, no, no. That is the point, it is not. So we have the whole light-field arrangement behind the screen, but that is a little bit tricky. We do not use lenses, because if you had a Fresnel lens, whatever, it is visible and it gives you side effects. Okay, but what concentrated one projector's image out at one viewing position so that they were not contaminating each other? It is interesting: for us the projectors are not the source of an image, just the source of light beams. An image comes from the combination of the light beams from the projectors.
And it means that if I am able to generate an extreme angle within the display, and that happens in that monitor, then I can give you a very wide angle. So I can have tricky light paths in the display, so it is a lot of tricky stuff, and I can come in under very steep angles. Thank you. I would ask for follow-up questions maybe during the break, because we are running into the next presenter's time. But thank you very much. We are here. So, continuing along the theme of paths to making large-scale displays, I am sorry, Jorg Ritterer will be talking about a large-scale autostereoscopic display. He holds a master's in electrical engineering from the Vienna University of Technology and is now a PhD student at their Institute of Sensor and Actuator Systems. Thank you.
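As a rough sanity check on the figures quoted in the talk (80 projection channels, roughly 63 megapixels, a 40-degree field of view), the short sketch below reproduces the pixel-budget arithmetic and the implied angular resolution per screen point. The assumption that each channel is XGA (1024 x 768) follows from the talk, and the link to the depth budget is only the qualitative one the speaker mentions.

```python
# Back-of-the-envelope arithmetic for the light-field cinema system described above.
channels, xga = 80, 1024 * 768

total_pixels = channels * xga
print(f"total addressable pixels = {total_pixels / 1e6:.1f} MP")   # about 63 MP

field_of_view_deg = 40.0
angular_resolution_deg = field_of_view_deg / channels               # beams per screen point
print(f"angular resolution = {angular_resolution_deg:.2f} deg")     # about 0.5 deg
```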
We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support larger audience than the existing autostereoscopic displays. We introduce and describe our new display system including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is first of its kind, equipped with front-projection light-field HoloVizio technology, controlling up to 63 MP. It has all the advantages of previous light-field displays and in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet simpler to set-up. The software system makes it possible to show 3D applications in real-time, besides the natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system on the GPU accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this light-field glasses-free cinema system, interpolating and extrapolating missing views.
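Since the abstract mentions interpolating and extrapolating missing views from multi-view video plus depth, here is a minimal, hypothetical sketch of the general idea behind depth-image-based rendering. It is not the project's actual pipeline: the rectified horizontal-parallax setup, the function names, and the crude left-neighbor hole filling are all assumptions made only for illustration.

```python
# Toy depth-image-based rendering (DIBR): warp one color+depth view to a
# virtual camera, then patch the disocclusion holes. Illustrative only.

import numpy as np

def warp_view(color, depth, baseline, focal):
    """Forward-warp a color image to a virtual camera shifted horizontally.

    color : (H, W, 3) uint8 image from the reference camera
    depth : (H, W) float depth map in units consistent with baseline/focal
    Returns the warped image and a boolean mask of disoccluded (hole) pixels.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)       # z-buffer so near surfaces win
    disparity = np.round(baseline * focal / depth).astype(int)

    for y in range(h):
        for x in range(w):
            xt = x + disparity[y, x]     # horizontal-parallax shift only
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                out[y, xt] = color[y, x]
                zbuf[y, xt] = depth[y, x]

    holes = np.isinf(zbuf)               # pixels nothing was mapped onto
    return out, holes

def fill_holes(image, holes):
    """Crude hole filling: copy the nearest valid pixel from the left."""
    filled = image.copy()
    h, w, _ = image.shape
    for y in range(h):
        for x in range(1, w):
            if holes[y, x]:
                filled[y, x] = filled[y, x - 1]
    return filled
```

In a real system the naive hole filling would be replaced by the structure-preserving inpainting described in the talk, and the warping would be done on the GPU across many virtual views at once rather than pixel by pixel in Python.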
10.5446/31475 (DOI)
Hey everybody, how's it going? Yeah, hope lunch was good. Was lunch good? Okay, awesome. It's okay, you can say no, that's totally fine. I don't work for these people, so I think it's okay. I'm a software engineer at Airbnb. I work on the risk team. I'll be talking more about that and my own history and the different things I've done in my past because that really informs a lot of what this talk is about. I don't know that I'll answer the question of why software engineers disagree about everything, but I think this talk is going to be kind of an exploration of this question from a number of different angles. Another thing that's going to happen in this talk is I'm going to disagree pretty strongly with DHH's keynote. That might be interesting, but hopefully, at the very least, even if I don't convince you of what I'm saying, maybe I'll get you thinking about some things. This talk in large part is going to be about philosophy. I know it's RailsConf; usually people come up here and they talk about controllers, or someone else is talking about sorting and augmented reality. I'm not going to be offended if you get up right now and want to go into another talk. That's totally cool, but we are going to go way into the weeds today because that's what I'm all about. Specifically, I want to talk about the field of philosophy known as epistemology. The easiest way to define epistemology is as the study of the nature of knowledge, of justification, and of the rationality of belief. This diagram here is of phrenology, which is this old science of trying to figure out what parts of the brain are responsible for what, which is an ancient version of epistemology which we've now thankfully displaced. Central to epistemology are the two questions: how does anyone know anything, and how do they know that they know it? These two questions are kind of navel-gazey, but that's fine. Let me tell you a little bit about me. I studied English and philosophy in school, and before I ever came into the tech world, I used to be a professional poker player. I did that for about five years. I kind of took a very different path into programming than most people. Then I worked as a programming instructor. I taught programming at a coding boot camp. For a little bit over a year now, I've been working as an engineer on the risk team at Airbnb, basically fighting fraud. Because of this, I've kind of snaked my way through many different subcultures. I've been something of a chameleon in that I've learned the different norms and the different beliefs and the different knowledge systems that many different worlds have and try to assert on you if you become a part of that world. One thing that I've noticed, and you can't help but notice the more worlds you step in and out of, is that knowledge is deeply cultural. In the world of programming, the kind of things you might hear people tell you when you come into this world, they might tell you that full-stack JavaScript is the future. They know this for sure. They'll tell you that everyone should know C. If you don't know C, what are you doing? Go out there, crack open a book and learn some C. They'll tell you Rust is the best language for systems programming. They'll tell you that relational databases don't scale. They'll tell you that TDD is a fantasy and nobody serious tries to do it anymore. It's not practical. Now all of these are interesting questions and all things that people disagree on, obviously.
But they're things that people hold with very, very strong conviction, that this is clearly and just irrefutably true. Now I'm not interested in convincing you of any of these claims. Each of these could probably be great talks. What I'm interested in is why we disagree about them. Why isn't it that we all converge on an answer to these questions? Now it's funny because when I was in the poker world, I remembered this. This was very familiar to me when I was learning how to be a poker player. Because in the poker world, people will tell you things like no-limit is dying, you have to learn mixed games. Or they would tell you everybody needs to use a HUD. Or they would say only fish play loose passive styles. They would say, GTO is a fantasy, nobody actually plays like that. And there's a wonderful analogy between these two worlds, not just in that people argue a lot and that people are maybe kind of whiny and excessive. But there are all these things that people fundamentally disagree on. And each side of them holds tremendous conviction that their side is clearly and obviously correct. And so when most people hear stuff like this, when they hear someone say, hey, everybody should know C, and if you don't know C, you're not a real programmer, their natural reaction is, oh, God, is that true? And if that's true, what should I do about it? But for me, having gone through this song and dance so many times, my reaction is, why do they think they know that? And what is it that implanted so much confidence in them, so much conviction that the thing they're saying is actually true and universally true? And that's very interesting to me. It's almost like I tend to ask a more evolutionary question than a question about the world as it is right now. The thing that's kind of weird about programming in particular is that nobody agrees. There's so many things that people disagree on. They disagree about functional programming, object-oriented programming, TDD, old, robust, proven frameworks, shiny new ones that solve new problems in different ways, serverless architectures, blah, blah, blah, right? You guys know all this. Nobody agrees. Why? That's really weird. Now, you might think it's obvious that people don't agree. That's an understandable reaction. You might say, well, disagreement is a normal part of a society, this divergence is a source of discourse, and that's why we end up having a healthy society. So you might think that's obvious. Of course people aren't all going to agree. I think it's weird. I think it's weird that people don't agree. And I want to develop in you an intuition to also believe this is weird, that people don't agree. Because in a way, it makes more sense for people to agree than for them to disagree. Another way of putting what I'm saying is that systems in general tend to converge. When you see a system, you should assume that over time it's going to converge on what's optimal, what's sort of the optimal state for that system. Let me give you an example. So I'm going to give you an example related to poker, but you don't need to know anything about poker in order to understand this. I can explain it in a couple of sentences. So in poker, there's a strategy called set mining. And set mining is very simple. So in Texas Hold'em, you get two cards. And if you want to set mine, what you do is you wait for two cards that make a pair. So let's say you get dealt a pair of threes. Then what you do is you wait to see if you make three of a kind.
If you make three of a kind, you bet it really aggressively, and if you don't, you fold. That's set mining. Super, super simple. So set mining was a strategy that was pretty simple and pretty stupid. Almost anybody could do it. And it worked. It worked really, really unreasonably well given how simple the strategy was. And so pretty soon, what you would see in the world of online poker when people started talking about this is that almost everybody started set mining at low to mid stakes full ring games. This strategy just kind of took over like wildfire. And what you could say is that the game converged on set mining. Everybody saw that set mining was the high ground and they all moved in that direction. And so this is kind of one of the features that happened to poker after internet poker took off. So internet poker took off really around 2004, 2005, maybe a little bit before that as well. And if you think about it, before internet poker ever existed, there was live poker, right? Playing poker in a brick and mortar casino. And live poker was fundamentally really different than online poker in ways that people didn't really predict before it happened. So you can imagine live poker, let's say in the 80s and 90s, there are some people playing poker out in Phoenix maybe, there are people playing poker in Vegas, there are people playing poker in Dallas, people playing poker in London. And these different groups of poker players weren't really communicating with each other. Ideas that were germinating in London weren't going over the pond and showing up in card rooms in Dallas, right? People were kind of isolated to their little groups and they were just playing poker and figuring things out as they went. But with the advent of online poker and the communication that it enabled for a lot of people, sorry there was like a fly running around me, with the advent of that communication, what happened was that it allowed the system to converge, where suddenly somebody has an idea for a strategy somewhere in Dallas or New Hampshire or wherever, and suddenly they can share it and everybody can learn about this strategy very, very quickly. And so what this meant was that the state of poker strategy for a long time was pretty static. The way that people played in the 40s and 50s was not that different from the way people played in the 80s and 90s. Information just wasn't able to evolve that effectively. But with the advent of online poker, suddenly if you look at the curve of the complexity of poker strategy, it just takes off right after 2003 when online poker comes to town. And so right here is a graph whose audio is not playing, but that's fine. He's saying, look at this graph. This is a joke whose audio didn't play, that's fine. We're going to roll with it. So this is actually a graph that goes up and to the right. It has nothing to do with what I'm talking about, but I thought it would give a sense of legitimacy to the slides. It's like, oh, I guess he knows what he's talking about. Look at this graph. Cool. So the point is, the point is online poker converged in a way that live poker never did. It was never able to. And this, to me, makes a lot of sense. Like of course online poker is going to converge, right? When you have these people able to communicate, to see what each other is doing and move around in this terrain, of course they're going to settle on what's optimal. And nature is full of convergent systems like this.
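As a quick aside on the set-mining strategy just described, here is a hedged Monte Carlo sketch of why it is so simple to execute: it only asks how often a pocket pair improves to a set on the flop. This is purely illustrative; the deck encoding and function names are my own, and the ~12% figure it converges to is just the standard combinatorial result.

```python
# Estimate how often a pocket pair flops a set (three of a kind or better).

import random

RANKS = list(range(13))
SUITS = list(range(4))

def flop_set_rate(trials: int = 100_000) -> float:
    """Monte Carlo estimate of flopping at least one more card of our rank."""
    # Assume we hold rank 0 in suits 0 and 1 (say, two threes);
    # the remaining 50 cards form the deck the flop is drawn from.
    others = [(r, s) for r in RANKS for s in SUITS
              if not (r == 0 and s in (0, 1))]
    hits = 0
    for _ in range(trials):
        flop = random.sample(others, 3)        # three community cards
        if any(r == 0 for r, _ in flop):       # another card of our rank
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(f"flopped a set about {flop_set_rate():.1%} of the time")  # ~11.8%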
So let's say for example, you're walking downtown and you buy a loaf of bread, maybe you're French, I don't know. And let's say you eat half a loaf of bread and you toss the rest on the street. Well, what happens is pretty soon a bunch of pigeons, just from, it's not like they're all in one place standing over watching you throw the loaf of bread, right? But pretty soon one pigeon comes, three pigeons come. And then the, a chorus of pigeons from presumably all over town just come in and start, you know, just, just binging on the piece of bread that you've left. And this, this kind of makes sense, right? Like the pigeons are able to very quickly come down to whatever is the optimal place for them to be. They, they eat as much food as they can and they disperse back to wherever it was they came from, maybe to the next best place that they can find somewhere downtown. So this is an example of a convergent system. So, you know, there's also this great Ramstein album that actually I love when I was a kid. This is not, this is actually just cell membrane in German because that was the only one I could find like 10 minutes ago. But the cell membrane you guys might remember from high school chemistry, you know, there are water molecules on one side, they permeate through this membrane and soon you have the exact same pressure, the exact same density of water molecules on each side of the membrane, the system converges. And you see the same sort of thing in stock markets, a lot of natural phenomenon. It seems like this sort of thing is everywhere. So when I started working as a risk engineer at Airbnb working in the fraud space, naturally I started looking for convergence too. Because it just seems like one of those things that is a sufficiently complex system should eventually have some optimal state and it should find convergence. So working in the fraud industry is its own little subculture, its own little world. And the subculture is not just the subculture of people working against fraud, which is interesting but that wasn't really what fascinated me so much. I mean it is very interesting but the really fascinating thing about fighting fraud is that you're actively fighting against a community of people. You're actively fighting against a culture that's optimized to take you down and to basically exploit your defenses as effectively as they possibly can. And so really there is actually a subculture that I can't see and don't have direct access to that's organizing and trying to attack all these major online companies and they're trying to make money. And that's really interesting. And so I wondered how do fraudsters figure out what to do? I mean what they're learning is pretty non-trivial. They're learning how to script, how to do, you know, sometimes cross-site request forgery stuff, XSS, whatever. They come up with all these different ways to try to probe our defenses and attack us and ideally try to make some money. How do they learn this and how do they communicate this to each other? How does this knowledge spread in the world of fraud? And fundamentally the question I want to ask is, is there convergence in fraud? And this actually pretty soon after the new MacBook Pro replaced the escape key we saw fraud go way up. So that was one person laugh at that. That was, all right. There we go, there we go. Just a little bit, that's all I need in order to keep going. Awesome. So I just want to run with you through a quick example of kind of what a standard fraud scheme looks like. 
This is some site called Shipmunk. Does anyone know what Shipmunk is? Anyone familiar with this esteemed company? Great. Then we're going to shit on them. That's great. So Shipmunk, let's say they cut some corners, they kind of implement a feature without really thinking it through. You know, they're not really thinking about fraud when they implement this, right? Like some product manager is like, hey, you know, we've got this story, we've got to get it out by the end of the week, we're all going to look good once we ship this thing. Great. Engineers who work on this just push it out, don't really think about it. So this feature is essentially micro-deposit verification, where they verify that you own a bank account by depositing two small amounts which you then report back, right? Pretty standard for some financial institutions to do this sort of thing. So let's say that they don't implement rate limiting. Easy thing to overlook. Well, you know, they go to sleep at night feeling great that they've launched this new feature. And pretty soon someone gets alerted, you know, let's say at 2 a.m. that night, that somehow we've lost $100,000, which really should not have happened. And the only way you could lose that amount of money, obviously, is if fraudsters are at scale repeatedly hitting this API, repeatedly micro-depositing money into their accounts, and then just keeping it. And so people scramble, you know, people are like, oh my God, I can't believe this is going on. They go and they try to patch this hole, implement some rate limiting, do some rules, maybe reverse whatever transactions they can. This is a common type of attack, a very easy vector; if your site implemented something like this, it's a kind of attack you would be vulnerable to if you didn't implement rate limiting. So let's say they get that under control, that's fine. Now, what happens is that once you patch that vulnerability, of course the fraudsters disperse. They don't keep hitting it. Maybe they try to verify that in fact it is patched. They try a few ways around it, doesn't work. Cool, you've patched the vulnerability and the fraudsters disperse. But kind of like the pigeons, they don't just go randomly to different places and just kind of, you know, leave your platform. What ends up happening is that they then go to the next best place to defraud. So what they'll do is they'll find, okay, that endpoint wasn't that good, but there's this other thing that we can do, like this maybe longer, more complex, or more expensive fraud scheme, and we're going to go do that. And basically what you see is that the fraudsters head down to the next highest peaks in this terrain of fraud, which, you know, totally makes sense. This is convergent behavior, right? This is exactly what you'd see if the terrain suddenly changed, this peak became a valley, and they would go towards the next highest peak. So this seems to me like convergent behavior. It seems like fraudsters converge on what's optimal. But the more that I thought about this, the more I realized that this didn't totally make sense. And the reason it didn't totally make sense is because I asked the question, why are they defrauding us at all? Why are any fraudsters attacking Shipmunk? This might seem like a weird question, right? Like, of course, they're doing it so that they can make money, and that's what fraudsters are motivated by. But if you think about it, the terrain of all the companies that they can defraud is huge.
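To make the rate-limiting gap in the Shipmunk story above concrete, here is a minimal, hypothetical sketch of the kind of check the team would scramble to ship; it is not Airbnb's or anyone's production code, and the names like `MAX_ATTEMPTS` and `allow_verification_attempt` are mine.

```python
# Fixed-window rate limit on micro-deposit verification attempts per account.

import time
from collections import defaultdict

MAX_ATTEMPTS = 5            # guesses allowed per account per window
WINDOW_SECONDS = 3600       # one-hour window

_attempts = defaultdict(list)   # account_id -> timestamps of recent attempts

def allow_verification_attempt(account_id: str) -> bool:
    """Return True if this account may try to confirm the deposit amounts now."""
    now = time.time()
    recent = [t for t in _attempts[account_id] if now - t < WINDOW_SECONDS]
    _attempts[account_id] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False        # too many guesses: block, and ideally page on-call
    recent.append(now)
    return True
```

In a Rails app this check would more likely live in middleware such as Rack::Attack rather than hand-rolled code, but the shape of the logic is the same: count recent attempts per account (or per IP) and refuse once a threshold is crossed.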
There are many, many, many companies that are vulnerable to fraud, and pretty much all of these companies experience fraud to one degree or another, and they're all these different sites that you could attack if you so wanted. And so what you can say is that there's some topology to that fraud, right? There's some peaks and valleys. There's some sites that are really, really lucrative to defraud, and other ones that, you know, are really not so much, and they're not a good place to be spending your time if you're a fraudster. And so you'd think that everybody would converge on the peaks. What you'd think is that almost all of the fraud would go towards the very most profitable, most attackable websites. But what you see instead is that fraud is just kind of dispersed everywhere. There's just a sort of ambient level of fraud, so that if you start an e-commerce site, chances are that you're going to get some amount of fraud. Why? This is weird. This should strike you as weird. Why? Is it that those fraudsters just don't care about optimizing? Are they not trying to make the best use of their time? Is there something that they're misapprehending about the terrain? Like what's going on that we don't see this convergent behavior within fraud? Now, you know, it occurs to me that actually you see the same sort of thing in software. You know, even just looking at open source solutions, there's so many different software packages and so many different open source solutions to many, many different problems that are effectively, you know, trying to do the exact same thing. Why? It seems like software doesn't converge either. You don't get just one, okay, this is the best way to solve this problem, and everybody converges on that solution. What you see instead is many, many, many competing solutions and it's not really clear which one is supposed to win. And when one does win, it's actually really surprising, you know, like, so, you know, React, you can say, kind of won front-end UIs, insofar as you can say anyone won front-end UIs. And we're genuinely amazed by that on some meta level. Like, holy shit, someone won. No one ever wins anything anymore, but React just won front-end UIs, you know, and, you know, maybe that kind of relaxes you because you're like, oh, hey, you know, as long as I learn React, I'll just be employable forever and, you know, that's fine. But I think this is somewhat counterintuitive, that more things aren't like this. Why is it such a rare story that React wins or SQL wins, that one particular way of solving a problem just is clearly the best and we all adopt it? So you might have an obvious objection to this analysis, which would be that, well, of course, you know, things aren't going to converge because software doesn't just solve one problem, right? The obvious answer is that there are actually multiple terrains. So, you know, you can say there are different terrains for different kinds of apps, one for social networks and so on, and maybe the tools and the solutions that you use for these different apps are different, and that's why you see these multifarious solutions to effectively the same problem. But I don't think that's sufficient, because even within a single terrain, you don't see convergence. You know, even just looking at CRUD apps, right? CRUD apps are, you know, they're the majority of what people build.
The majority of web apps are just, you know, glorified CRUD apps, maybe with like Elasticsearch on top, and even within this vast majority of what people build, there's a vast amount that people disagree on. So it doesn't seem to me like that's satisfactory to explain why we don't see convergence. And the thing is you should want convergence. Convergence is actually good because convergence means that we all see the underlying terrain. We all understand it and therefore we all go and do the best we can. In the world of software, for the most part, we're not actually competing with each other. We're actually all kind of, at least to some degree or another, motivated by each other succeeding. You know, when someone invents a great open source solution, actually everybody benefits from that. And most software kind of works this way. So the question I want to ask is why do these systems not converge? I think there are four reasons. And I'm going to go through each one. The first reason why these systems don't necessarily converge is because the terrain is actually unstable. The terrain is changing. It's not just one configuration that you can just see, you know, see something out in the distance and say, okay, that's a peak, I'm going to go over there. The fact that the terrain is changing means that you're not really sure, if you do go in this direction, that by the time you get there, the terrain isn't going to be different. It also means, of course, that your terrain is moving underneath you. So when you originally came to Ruby on Rails, you thought, wow, this is the hottest, newest, shiniest web framework anyone has ever come up with, and now it's 2017, and Rails is not the shiniest, newest, coolest framework anyone has ever come up with. And so that terrain has changed underneath you. And that makes it hard for people to actually converge when there's so much change going on. There was this article that someone wrote on Medium a little while back, How It Feels to Learn JavaScript in 2016. Even if you didn't read this article, I'm sure you have an intuition of what this thing said, right? And we see this becoming more and more, I don't know if I want to say a problem, but more and more of a characteristic of software, that things are changing very rapidly. And it might be that the pace of change is even increasing, which wouldn't be that unreasonable to expect, actually, as technology grows more and more rapidly. And you see the same sort of thing in the world of fraud. So say, for example, that Facebook has some kind of hole, some kind of thing that fraudsters can attack, so that it becomes really easy to spam Facebook and do some kind of referral or click fraud, whatever. So maybe if you're a fraudster, you're making $20 an hour defrauding Facebook doing this. Well, Facebook goes in and they patch the hole, and now, instead of making $20 an hour, you can only make $3 an hour. But in order for the fraudsters to disperse and find the next highest thing, they might be just incentivized to say, hey, I don't actually know how easy it's going to be to defraud anything else, because that might get patched too, and the terrain might just change out underneath me. So you know what, I'll just keep defrauding Facebook for $3 an hour. That's fine. At least I know this is working during the time that I have it. So maybe that's some part of the explanation why you don't see this optimizing across the terrain.
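Here is a tiny, hedged sketch of the terrain-with-peaks picture that runs through this talk: a greedy climber on a made-up 1-D landscape keeps taking whichever neighboring step goes up and stops at the nearest local peak, which is often not the global maximum. The terrain function, the bounds, and the names are all invented purely for illustration.

```python
# Greedy hill climbing on a bumpy 1-D terrain: different starting points
# settle on different local peaks instead of the global maximum.

from math import sin

def terrain(x: int) -> float:
    """An arbitrary bumpy landscape with several local peaks."""
    return sin(x / 3.0) + 0.4 * sin(x / 1.3) + x / 60.0

def greedy_climb(start: int, lo: int = 0, hi: int = 100) -> int:
    """Step left or right while it improves; stop at the first local maximum."""
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbors, key=terrain)
        if terrain(best) <= terrain(x):
            return x            # no uphill neighbor: stuck on a local peak
        x = best

if __name__ == "__main__":
    global_peak = max(range(101), key=terrain)
    for start in (5, 35, 70):
        peak = greedy_climb(start)
        print(f"start={start:3d} -> settles at x={peak:3d} "
              f"(global max is at x={global_peak})")
```

The point of the sketch is only the shape of the behavior: once the climber is on any local peak, every immediate step looks like a loss, which is exactly the switching-cost problem the talk turns to next.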
So the second reason why I think you see a lack of convergence in some of these domains is because of high switching costs. Okay? So let's say that this is the terrain, let's say this is software, this is the map of the software world. And let's say that you live all the way on the bottom right, and the bottom right, let's say, is Rails Land. And you know, you go and talk to one of the developers there, this wide-eyed, bushy-tailed developer who tells you, you just need to learn Haskell. And if you learn Haskell, you know, all these type errors, and nil checks, and all that stuff is just going to go away, and you are going to live in a land of pure functional programming. But this peak that they're pointing to is very, very far away. So in order for you to get there, you're going to have to go down into a valley, traverse a really, really long space, until you can finally actually reap the fruits of what they were claiming was so great. And who knows? Because the terrain is changing, by the time you get there, it might not even be a peak anymore. Something else might have become a peak. Or maybe the place where you were at could have become a peak. Who knows? And so this instability and this uncertainty makes people really unwilling to incur the risks of traversing the terrain and exploring. You see, of course, the same thing in fraud, just like I just mentioned with the Facebook example. And this whole thing, of course, is exacerbated by specialization. The more specialized you are, the harder it's going to be for you to convince yourself to engage in those high switching costs. Because really, specialization is just basically you finding your way to some local maximum. That's what specialization is. It's climbing as high as you can onto a local maximum. Once you're there, it just becomes really uncompelling to climb all the way down from that peak of specialization you've arrived at to go find the true global maximum, which might be very far away. And we've already talked about the fact that it's changing and uncertain. And so this makes it harder, the more specialized we are. Same thing with fraudsters. There are fraudsters who are specialized in attacking one site as opposed to another. And it's harder then to switch; if they have to learn a lot of new things, they have to start over in their knowledge. So the third reason why I think you don't see convergence in fraud or in software or in these other fields is information sharing, which is a very important part of how you get convergence. If you think about it, not all of us can actually clearly and lucidly see that underlying terrain. There are these peaks and valleys. We know they're there. But there's sort of a fog of war, and we can't see beyond our local environment. Because we just don't know that much about what Haskell Mountain looks like, or some other language or framework that you're not familiar with. So in order for us to really get that sense of what the terrain is like, we have to share information with each other about what the terrain is. That's how we learn what happens if we go out far enough into the terrain and whether the costs are going to be worth it. So different cultures have different amounts of information sharing.
And that makes it harder or easier for them to converge on different things. So if you imagine a graph of different cultures, you can sort of graph them on how closed versus open they are in terms of information sharing. So if you look at a very, very closed system, a good example of this is the fraud industry. So if you're a fraudster, then actually, it's very hard to learn and get access to the information that you need in order to learn how to become a fraudster. So there are all these underground fraud industries. So fraud is an industry in many places in the world where basically you can get access to courses. You can buy them. You can get primers on how to hack this site or that site. There are various tools that you can buy. You can pirate whatever. There's all the stuff you need to get up and running as a fraudster. And it's not easy to get this stuff. You actually have to make your way into communities. You have to prove yourself. You have to gain reputation. You can't just decide, hey, I'm going to go on Amazon and buy a textbook on how to commit fraud. It doesn't exist. You can't do that. You have to go in through a very specialized way. And not all information is actually readily up for grabs. There are some fraud rings that just don't share their information with anyone outside of it. And then it's not for sale. And so that makes it very difficult if you're somebody who's wanting to learn more about fraud to actually figure out what is the optimal place for me to be spending my time defrauding people. Now, so what's more on the open side is you can look at a world like poker. So poker is kind of a more open system. So there are all these forums. There are different places where people can exchange ideas. There's certainly books written about poker you can just buy if you want to. But the very best players, the very best ideas, the very best theories and strategies about poker, they're generally not for sale. The people who hold them and the people who profit the most from them tend to keep them close to the chest. So you get a lot of resources that are OK or that are really crappy, that are openly available. But the very best stuff sometimes is hard to find and hard to actually gain value from. And then on the other hand, if you look all the way to the right, you find the world of software. The world of software is in a lot of ways kind of staggeringly open. You have companies that are just releasing the source code for their entire application. Or security libraries that are, again, like completely open source. And companies will just say, yeah, we use open SSL. Groovy. If you find a weakness in open SSL, that's a weakness in us. And this is really about as open as you can get. And there are blog posts. There are all these things that are shared about software that make it seem like, wow, there's an enormous amount of information sharing. That should really make it so that people see really quickly what is actually the best solution for any different problem. Somehow in the world of software, it doesn't really seem like that happens all that well. And so I don't know. I think there's somewhat of an open question of, even though there is a lot of sharing on the surface, if in fact there are some things that people aren't that open about sharing. 
Like I think when it comes to what a lot of large companies are doing when they're putting together a lot of open source solutions to solve problems, they actually don't immediately go out and tell people, oh, hey, we solved this problem, here's how we did it. Very often, the way that companies share this information is pretty selective and pretty strategic. And the moment you solve a cutting edge problem, you generally don't go out and share it unless you think there's some strategic value in doing so. And so that, I think, to some degree, kind of exacerbates the problem of, why is it that we don't get this convergence in the world of software? So reason number four. And I think this is a really interesting reason that kind of goes to sociology; it's basically the problem of group identities. And we heard some of this this morning from DHH about the value of group identities. And I'm going to go at it from a completely different angle. I'm going to talk more about the dangers of group identity. So you can imagine that the world of programming is kind of demarcated into these different kind of arbitrary groups. And one of the groups might be Rubyists or Rails programmers or whatever, and you can categorize me in there. Then you have Java lovers over here, and Pythonistas over here, and then Scala people over there. And there are kind of these special norms that dictate what you can do inside these different worlds if you want to fit into these groups. These groups kind of say, well, you're a Java lover, you can explore this area, but you're not really supposed to go over there. That's kind of weird stuff that we don't really do in Java land. And so you get these kind of arbitrary cuts across the terrain that make it hard for you to just freely traverse and explore this terrain without violating some kind of social norm associated with your group. Turns out you get the same kind of thing with fraud rings. So there's a fraud ring that just defrauds Facebook. That's all they do. And they're talking on some sort of secret channel where only fraudsters that are part of this group can communicate, and they share information about just how to defraud Uber or just how to defraud Google. And if you're one of the members of the other groups, maybe you can't get into that group. Maybe fraudsters have just decided, no, we're this group and you're that group, and you're not going to get our information. And so if you want to explore, you only get to explore the terrain over there. And I think in the world of software, you kind of see this when, instead of having a blog post that's just, here's how Kafka works, there's the blog post Kafka for Rails Engineers. Or the blog post XYZ for Rubyists. And this, again, is kind of reinforcing the demarcation that, hey, I know you want to go explore that, let me show it to you in the way that's appropriate for our group. And this is really fascinating to me, because being somebody who's relatively new to the subculture of software, I can immediately recognize this behavior. And I think it's pretty well explained by this theory of psychology called social identity theory. So the idea of social identity theory, it's pretty simple; it essentially suggests that the way that we construct our identities as human beings is largely as a result of the groups that we adhere to. So this kind of goes in several stages. So the first thing you do is you start categorizing the world into social groups.
OK, so you first have to say, OK, so these people are the Christians. These people are the Goths. These people are the meat lovers, however you want to draw up those boundaries in the space of what people can be. So first, you have to draw those boundaries. Next, you have to identify which of those groups you belong to. Do I want to be a meat lover? Do I want to be a Pythonista? Do I want to be somebody who loves red or whatever? You have to decide which of those groups you're going to identify with. Then once you do that, the last step is social comparison. Now you have to do the pretty hard work of deciding why the other groups are bad and your group is good. You have to make this distinction between your in-group and the out-group and invent some kind of story or narrative that goes along and reinforces why you are good and they're bad. So there are all these classic examples of this sort of thing where basically there's some arbitrary distinction that you've arrived at as being important to your social identification. And there's no intrinsic reason why that should be important, but we're Rubyists and they're Java lovers. And because of that, they're bad and we're good. And we have to come up with some sort of story why that's the case and they have to do the exact same thing. Now you might object to social identity theory by saying, OK, well, that should mean that all Rubyists and all Rails developers are the same, but I don't really feel like that. I'm not the same as the people around me. If you look around, you all don't look like a completely homogenous group of people. And this is true. And so there's this other theory that kind of complements this really well. And it's called differential psychology. And differential psychology essentially examines the way that people within groups try to make themselves different from each other, as a way of somehow strengthening their bond as a part of that group. So for example, if you've ever seen the movie West Side Story, you look at these characters, they're all part of a gang in West Side Story. And they're all together. They're part of the same group. And you can tell they kind of have a look. If you just saw these people in the street, you'd be like, OK, these people are doing something together. There's something that somehow unites them. But notice, they don't all wear exactly the same outfit. And they could. They could all wear the exact same outfit. They could all style their hair the exact same way, but they don't. Why don't they? Why don't they do that? You'd think that maybe that would strengthen their group identity. If they all literally did the exact same thing, they would be a stronger part of that group. But it turns out there's something intrinsic to us as human beings that even though we're a part of groups, it's important for us to differentiate ourselves. We actually spend a significant amount of energy just differentiating ourselves within the groups that we're in, as a way of almost maintaining our identities within that group, as a way to not make ourselves feel, like, hey, I don't have any identity outside this group because I seem to be wearing the exact same thing everyone else is wearing and doing the exact same thing everyone else is doing. We'll expend a lot of energy in order not to feel that. So that's exactly what you see these people in West Side Story doing. And so I want to draw a little bit of an analogy here.
There's something kind of similar going on when you look at something like this, where we're expending this energy. If you imagine this underlying terrain of software, and let's say this right here is Rails Mountain, there's a lot of energy going into making it so that it kind of looks like we're exploring this big terrain, but really everything is actually still in the bounds of this group. Even though we're talking about Kafka or Elixir or whatever it is, we're still keeping you as a part of this group. And that identity is actually reinforced by you being here in every single way; even the fact that you're going to these different talks talking about different technologies, you're still seeing it all as a Rails developer. And I think this is bad. I think this masks the severity of the problem of social identities making it harder for us to actually converge and actually find what is genuinely optimal. It allows us to kind of distract ourselves with this story that, hey, we're exploring these different things, but really, underlying it, we're not. So I think we should really want to find convergence. We should want to find the true global maximum. So as software engineers, what ought we to do about this? And I don't know that I have perfect answers to this. I think these are all really intrinsically hard problems, but I do have a couple of pieces of advice that hopefully might be instructive to some degree. So the first piece of advice is an adage originally from Paul Graham, where he said, keep your identity small. And really what this means is to, as much as is possible, jettison the labels that you've very easily come to identify yourself with. And so that's to say, don't think of yourself as a Rails developer or as a Ruby developer, but instead, think of yourself as a software engineer, such that whatever ends up being the right tool for the job, and that tool might be Rails, might be Ruby, might be something else, that's what you fundamentally use. You solve problems in the world of software. And right now, it might be very beneficial for you to go climbing up this hill of learning more about Rails and more about Ruby, but eventually it won't be. You can imagine 10 years from now, you'll be working on something, and Rails probably won't be the tool you wanna use 10 years from now. In fact, right now, I love Ruby, I love Rails, I think they're really awesome and wonderful tools. But I would probably be pretty disappointed in myself if, 10 years from now, I was a Rails developer, and that was what I considered myself to be: a Rails developer. When DHH was talking about the article about COBOL programmers, that there are people still making money for banks working on these super antiquated COBOL applications, you can bet that there will still be Rails apps 10 years from now. And I'm sure that you'll probably be able to fetch a pretty penny basically managing these 10-year-old, 12-year-old, 15-year-old Rails apps. But is that fundamentally what you want to be doing? Or is what you want to be doing to solve problems with software, however those problems end up changing, and however those tools end up changing? The second piece of advice I want to give, besides keeping your identity small, is pretty obvious: it's just to explore the terrain.
And exploring the terrain, to me, means more than just kind of paying lip service to different things, where you're like, oh okay, I don't know what Kafka is, I'm gonna go to this talk or that talk. It means fundamentally to do things you've never done before. It means to do things that are kind of scary to you. It means to take real risks. When I say real risks, I'm juxtaposing that against fake risks, which I think are a real thing and something you should caution yourself against. A fake risk is one where you actually retain all of your safety, all of your comfort, all of your prestige, all of your knowledge, all of your abilities, where it's like, you know what, I'm still a really awesome person, everyone respects me, I know everything that I'm doing, but I'm also taking this risk. No you're not. No you're not, right? The risk comes when you give something up, when you actually walk down the hill. And walking down a hill is uncomfortable, it's scary, it makes you nervous. And if you're not actually doing that, then you're not really taking a genuine risk. Another way of saying this is, go to DjangoCon. Don't actually go to DjangoCon, it probably sucks. But if RailsConf is the only conference you're going to this year, reflect on that. Reflect on what that means, whether it means you are actually taking risks that are important. And finally, of course, I think the most important part of exploration is just to have fun. When you let go of the idea that you constantly need to be moving up, and accept that in fact it's okay to move down and to take risks in a way that potentially makes it harder for you to get your job done, and allow yourself to have fun doing it, I think that makes the whole process a lot easier. So that's it for me, I'm Haseeb Qureshi, a software engineer on risk at Airbnb. If you're a senior software engineer or a data scientist, we're always hiring. And yeah, thanks for listening.
Why are there are so many disagreements in software? Why don’t we all converge on the same beliefs or technologies? It might sound obvious that people shouldn't agree, but I want to convince you it’s weird that we don't. This talk will be a philosophical exploration of how knowledge converges within subcultures, as I explore this question through the worlds of software, online fraud, and poker.
10.5446/31477 (DOI)
Hi, thanks so much for coming to my talk. It's called There is No Spoon, Understanding Spoon Theory and Preventing Burnout. A couple things about me. My name is Jameson, but you can call me Jamie. I'm here, as he said, from Buffalo, New York, which is the home of bad sports. I'm really happy to be here in Phoenix for the first time. I work for AgriList. We're based in Brooklyn, New York, and we do farm management and data software for indoor farms and greenhouses, which is really cool. And you can find me on Twitter at Jamie Bash. My slides are pinned there. One other thing I wanted to introduce about myself right at the beginning. This is something I'm going to go into further in the rest of the talk, but I like to kind of get it out of the way. I am transgender, specifically, I identify as genderqueer or nonbinary. Either of those terms is fine. If you don't know what that means, it basically means three things. Number one, I don't identify as either a man or a woman. Number two, I use the neutral pronouns they. And number three, I have really great hair. I like to bring this up at the beginning, even though I'm going to talk more about it when I go into diversity, because I know people like live tweet and stuff like this. And if you want a live tweet, that's super cool, but just keep in mind my pronouns when you do it. But let's talk about spoon theory. That's what we're here to talk about. So what is spoon theory? It's basically a social metaphor that refers to how much energy we have in a day to do things. And this is both physical and like mental emotional energy. And the spoon itself is the unit of measurement that represents that energy. So what do you notice about these spoons? There's six. And six is kind of an arbitrary number. But the key here is that it's finite. You know, that's how many spoons you have, however many, and that's the limit. That's how many you have. Spoon theory was originally coined in an essay that was published in 2003 by a woman named Christine Miserendino who suffers from lupus. And she kind of came up with this metaphor to describe her life as someone with a chronic physical illness. She had a good friend who, you know, she confided in about what she was going through and who accompanied her to doctors appointments and things like this. And so she kind of thought, oh, well, my friend has an idea of what it's like for me living with this disease. But the problem is, like, if you don't have, if you haven't experienced it for yourself, it's actually really hard to know what it's like. And so this is a quote from her original essay. I wanted something for her to actually hold for me to then take away, since most people who get sick feel a loss of a life they once knew. If I was in control of taking away the spoons, then she would know what it feels like to have someone or something else, in this case lupus, being in control. So spoon theory is kind of referring a lot to, like, invisible illnesses. And one way to think about that is, like, if you have a disability, there might be some things that you just can't do, you know, you just have those limits. Like, for an example, if you were in a wheelchair and you can't really walk at all, you know, you wouldn't be able to take a staircase. Like, you'd need to have accessibility accommodations in order to do that. But, you know, nobody in their right mind would go up to someone in a wheelchair and be like, well, why aren't you just taking the stairs? 
If someone did that, like, everyone around would be like, wow, what a huge jerk. But, like, that's a really obvious case, and it's not always so clear-cut and obvious. Like, what if there was someone who had a prosthetic leg, and you'd seen them take the stairs before, but then on another day they couldn't do it. You, like, you might not even know that they have a prosthetic leg. And then it's going to be way more tempting for you to be like, well, you did it before, like, why can't you do it now? Even though you don't actually know how much energy it's taking them, how much pain they might be in, like, how many spoons they have to use to go up the stairs, you can't know how many spoons someone else has just by looking at them. And what happens when you run out? I said they were finite, and I mean, the key to that is you have a certain amount, and if you spend them, like, you could hit zero. And once they're gone, they're gone. And your spoons cry, like in crying breakfast, friends. You can borrow extra spoons from yourself sometimes if you run out, but there are consequences to doing that. I like to use video games to describe this, because a lot of us like video games, and energy mechanics are pretty common in video games, and it's a very similar metric to spoons. So this is from one of my favorite games, Stardew Valley, which is a farming simulator, because all I think about is farming. But as you can see in the bottom corner, you have this energy bar. So when you wake up in the morning, it's green. You have your full energy, and you go tend to your crops, and then you do other things, and it's less since it's in the yellow, and the day goes on, and it's in the orange. And then maybe at the end of the day, you're going out to have a drink, and your energy's really low. It's in the red. So the way that this would work in Stardew Valley, this mechanic is if you went to bed right now, and your energy was at this little red, you would wake up the next day, and you'd have the big green bar, like, in the first slide. Or if you kept going, it will let you, like, go below the bottom of that energy meter. But if you do that, then when you wake up the next morning, you're not going to be at full energy. You're going to be, you're going to already have your bar partially depleted. And, like, you got a lot of do. You got a lot of farming to do. You got work to do. So it's, like, really hard when you're not starting out with your full amount. Who does spoon theory affect? I think everyone can benefit from an understanding of spoon theory. Christine Miser and Dino talked about the idea that, like, healthy people have an unlimited number of spoons. And I don't think that's true, because spoons are energy, and nobody has an unlimited amount of energy. But the key here is that, like, an average, healthy person probably has enough that they can get through the average day without having to ration it or really think about it. But I also don't think it's just the chronically ill and, like, physically disabled communities that can benefit from this metaphor. The mental illness community has already started to use this as, like, a communication technique. And I also think that there are, like, marginalized groups of people that it's a good way to express the discrimination that comes along with being part of those communities, too. It gives us a shared language. This is my favorite thing about spoon theory. You know, we're going to talk about some pretty heavy stuff. 
And sometimes that's stuff that's, like, hard to talk about. Like, it's hard to go up to someone you don't know that well or boss and say, like, I'm having trouble with my health. But if everyone knows spoon theory and is familiar with that language, it's much easier to say, oh, I can't really do that. I don't have enough spoons today. And it makes hard conversations easier. It gives us a greater empathy for others, even if you're not struggling with this, maybe you can have a better understanding of how other people might be struggling. And it gives us a better understanding of our own limits, which is really important, too. I want to talk about, like, the three communities that I mentioned and some of the ways that spoons are expended. Just as a caveat, it's really different for everybody, you know. There's lots of different disabilities, diseases that people have, lots of different marginalized communities that have different problems. So this is kind of generic. For people with physical illnesses and disabilities, you know, constant pain is really exhausting. I feel like this is an obvious one, but it's really important to say. Being in pain makes literally everything you do harder. Like, there's so many things we do every day that are part of our routine. We don't even think about them. But if you have a condition that gives you chronic pain, like, you can't not think about those things. Everything is harder. Personally, I've struggled with intermittent chronic back pain and, like, the fact that sometimes I have it and sometimes I doesn't really highlight this idea that, like, I'm taking for granted the fact that when I'm not having pain, I can tie my shoes and it's no big deal. But then it can feel insurmountable at another time. The emotional drain of worry. I mean, when you're sick, that's, like, a really heavy burden on your shoulders to think about. Reduce mobility. Decrease accessibility to places. This is another thing that just makes it hard to do daily activities, like, go places. And if you need special treatment, special accommodations, you know, that can garner unwanted attention, which is also really stressful. And frequent health care. I mean, health care is great because it helps us manage things we've got going on in our lives with our health, but it can also make you weak. Like, think about someone who's going through chemo. They need that to manage their illness, but it's taking up a lot of their spoons right there. Mental illnesses are similar in many ways to the last slide, because, like, mentally ill people are also chronically ill, but society treats it a lot different. So I think there's different things to talk about. Constant emotional distress is also exhausting. I feel like this should be obvious because everyone's had some emotional distress and it's very awful. And being told to get over it is kind of what I was getting at when I say it's not treated the same. There's kind of this weird perception that people with mental illnesses should pull themselves out of it and get over it. And it's not treated as a disease in the same way. Being told to get over it is a huge burden because people are taking the burden and putting it back on you, expecting you to, like, minimize your own struggling. And it also can cause people to second guess their right to even be sick. Like, if people tell you you're faking for long enough, you might start to feel like you are faking or like it's some sort of failure on your part rather than you have a disease. 
And it can be socially ostracizing, partially for those reasons, partially because some of the symptoms can cut you off from your support network or make you be perceived as selfish or flaky. And also because some things are socially taboo in and of themselves, like eating disorders. And there can be a physical aspect, too. Panic attacks are really hard to describe, but I'll kind of try. They're one of the most physical things that has ever happened to me. I feel flushed, feverish, heavy breathing, high heart rate, sometimes I'll hyperventilate, I'll feel really nauseous, sometimes I'll throw up. It's very, very physical and it looks scary from the outside. It looks enough like I'm having a stroke or a seizure that I have to warn people in my life. It's like, no, it's okay. You don't have to call an ambulance. But the point is it's really serious and it can be really scary. And the last thing, marginalized groups. Again, less common to apply spoon theory, too. But I think it really works in this relevant. It turns out being discriminated against sucks. And if you're experiencing that every day, you know, racism, homophobia, transphobia, or whatever, you're starting your day from a place of fatigue. I think emotionally dealing with the political climate has been hard for a lot of people lately. But for marginalized people, it's even worse. It's really exhausting to watch the news every day and see bad news about your rights and people you know, getting murdered, committing suicide. It's so tiring and such a burden. And then people expect you to talk about it because it's like, you're a minority and you should talk about minority issues, even if it's like not something you want to talk about or even think about. As a non-binary person, I have to justify my own existence a lot. I have to explain to people what non-binary means all the time. And I don't mind doing it in theory. Not everyone knows, and I think everyone should know. And the easiest way is for me to tell them. But sometimes there's a subtext where if I don't do a good enough job explaining it, then they're not going to accept the explanation and then they're not going to accept me. So it ends up feeling like I'm begging for people's approval by educating them. And microaggressions are a thing that I've literally done an entire talk about, so I'll try not to be too long-winded. But basically microaggression is a scary word, but it's kind of less blatant examples of discrimination that are often coming from like well-meaning people that don't want to be jerks or don't realize it. Something that if it happened to you once, it would be annoying and you would get over it. But since it happens so many times, it becomes like more and more hurtful, like a mosquito bite. A few examples of microaggressions, every marginalized group of people deals with different ones. You don't sound black. I think we all know why that's like not a cool thing to say to anybody, hopefully. You don't look transgender. Okay. How did you expect me to look? For people with different ethnic backgrounds, well where are you from anyway? Chicago. Where are you really from? Chicago. This one is for like lesbian couples. Who's the guy in your relationship? Nobody. We're lesbians. That's kind of what that means. And for women in tech and attending tech events, are you the secretary? No. Why would you assume I was non-technical? I wonder why. 
So when I said it was like a mosquito bite, like if someone had one mosquito bite and they complained about it, you might be like, well come on, it's just one mosquito bite. But not everyone gets bit by mosquitoes at the same rate. And if you're covered in those bites from head to toe all the time, like yeah, you're in your right to complain about that because that sounds awful. I want to do a short exercise about like going through an average morning from the perspective of a few different people and see where spoons might get spent. Again, just an example, but I think there's something to be said for it. So here's our control group morning. The average morning for like a healthy, advantaged person. Getting up, getting ready, commuting to work, getting to work. Nothing special, nothing crazy or overly difficult. You might lose a spoon or two on this ordeal potentially. Maybe it's really crowded on the train or you get to work and the elevator's broken, you have to take the stairs, you're kind of out of shape, so it's a huge pain. But overall, you've pretty much got it under control. Now let's imagine that you have an illness that gives you chronic pain. First of all, you're going to start off with less spoons in the beginning because you didn't sleep well, you know, you were in pain overnight again. So you get out of bed and you get ready. It hurts a lot to sit up, you're getting dressed, it's taking a lot longer because you're in pain. Maybe you take a bunch of medications, potentially some of them have side effects that even though you're used to them, it still sucks that you're nauseous in the morning. You're spending a spoon on that for sure at least. Maybe you're commuting to work on that crowded subway from the previous example, but you're getting jostled and it's really painful every time, but nobody thinks to give up a seat for you because you're not obviously disabled. You're carrying your bag and your back hurts and it's really rough. And now you get to work and the elevator is out for you too, but instead of like a mild inconvenience, this is a huge problem. You know, climbing stairs is really hard for you, but what else are you going to do to get to work if there's no accessibility? So you climb the stairs, it takes you 20 minutes, you have to pause for a rest halfway through, you're late for work, you're at your pain threshold, and you've spent a couple of spoons on that ordeal. You get in, you were late, so you don't have time for coffee, and you have extra work piled up partially because of that and partially because you had doctor's appointments that you had to take half days for earlier in the week, which you have almost every week. So that's a lot less spoons. We're going to do it again. It's like Groundhog Day. We just have to keep living out the same day over and over. Now imagine if you had anxiety and panic disorder. This is what I'm talking about because it's kind of what I've experienced and what I know. Oops, you had a panic attack last night and you went way over your capacity and borrowed spoons from yourself, so you're also starting with less. When you wake up, you know, all these bad feelings from having a panic attack are kind of coming back to you like a bad hangover, even though you haven't been drinking. Also keep in mind that clinical depression is linked to like a whole long list of other mental illnesses and disorders, so it's really common for people to have to deal with that on top of whatever else specifically plagues them. 
Getting out of bed and getting ready. When I'm having a bad anxiety day, it can be really hard to get out of bed, and it's not just like I don't feel like it. I have sometimes really intense inexplicable fears of like leaving my house or even leaving my room, but unfortunately that's not really something that you can call off work for, so you'll have to push through it in this scenario. I can't drive when my anxiety is bad either, so a crowded subway sounds kind of terrifying, but it's the lesser of the two evils because I can't crash my car into anybody. So you're on the subway, it's crowded, your heart is racing every time someone bumps into you, you feel like you're starting to panic. On really bad days, you might even feel like a paranoia that people around you are like following or tracking you somehow, which is really terrifying even if it's not true. By the time you get to work, you've already worked yourself up into a state and you're feeling kind of nauseous. Anxiety has a compounding effect. Once you're feeling anxious, everything around you is going to make it worse, even things that normally would be okay. Now you're freaked out, you're overanalyzing the tone of the receptionist when she says good morning to you, you're overanalyzing wordings in your emails. Having a panic attack yesterday actually makes you more susceptible to having another one today because of this stupid thing called secondary anxiety, which I call stupid because I know it's stupid even when I'm experiencing it. It's like, oh, I'm feeling anxious, I hope I don't have a panic attack, oh my god, what if I have a panic attack, and then I get so nervous about it that I work myself up into it just because I didn't want it to happen. It's something that I know is stupid even in the moment, but I just can't make it stop. So you're losing at least a spoon and freaking out about nothing, and if you actually work yourself all the way up to a panic attack, it's pretty much game over. I'm not really useful for anything else for the rest of the day. One more time. This is the last Groundhog Day, I promise. Now you're a member of a marginalized group. Again, I'm going to go with what I know and we're going to talk about what it might feel like to be transgender. You get up and get ready. Maybe you wake up in a pretty good mood, but then when you're brushing your teeth, you know, you look in the mirror and don't super recognize the person that's looking back at you. Gender dysphoria is another thing that's kind of hard to describe. But kind of imagine if like tomorrow morning you woke up in the body of like not the gender that you've associated yourself with your whole life. You know, every time you see your reflection or hear yourself speak, it's going to feel wrong and it's going to be impossible not to notice and think about it. Getting dressed can also be a burden. I wear a binder, which is hard to get on and it's uncomfortable to wear during the day. So that's at least a spoon getting ready. You take the subway and you get catcalled. Somebody calls you a freak and you don't feel good about it. Four or five strangers passively misgender you; they just don't know any better. They're not trying to be jerks. But it may have you feeling down. Then you're reading the news. They passed a bill that limits the rights of trans kids in schools and another trans woman of color was murdered in your city over the weekend. 
So now that catcall that you felt crappy about earlier is making you feel like really legitimately nervous. You get to work. You climb the stairs. The receptionist misgenders you again even though you've told her a million times. So you correct her again and she makes a comment about how being trans is the only thing you ever talk about. But it's really hard not to think about it when it's jarring every time you hear someone make a mistake. Think about the metaphor from earlier. If you suddenly woke up and everyone was using the opposite pronoun for you, I think you would notice it every time. Now you get to work. But you feel like you have to work harder and talk louder in order to get noticed for anything you're doing. Which means you have to spend more spoons to get the same amount of work done. Incidentally all women in the tech industry have to deal with this one. As you can see privilege is a big factor. Nobody has unlimited energy. Nobody is the terminator. You know there's going to be times in everyone's lives when there's things going on that you have to spend more spoons on. Maybe your mom is in the hospital and you're taking care of her. It's taking a lot of your energy and you're really worried. Maybe you're in the hospital because you got sick. Maybe you have a newborn and you haven't slept through the night in three or four months. These are definitely things that are going to require you to spend more spoons. The difference between these kind of things and the kind of things from the previous example are that A, most of these kind of things are temporary. And B, someone in those other three categories is just as likely to have to deal with one of these things on top of all those other things. And then it's just compounding. So the lesson here is basically saying just because you have privilege, I'm not saying that your life can't ever be hard. Everybody's lives are hard sometimes. But it's not hard because you have privilege. It's hard even though you have privilege. Well how does this affect me? I'm not accusing you of being a Brids rights activist, although a lot of people are like that. But it depends on who you are. If you're in one of the groups I just described, I'm sure you're already managing your spoons every day, even if you don't think about it like this. But I do think this metaphor is useful to help you think about it. I mean it's tiring to manage your spoons, but it's also tiring to watch other people not have to manage their spoons. And it makes you feel like there's something wrong with you that you have to do it. So thinking consciously about it is a good way to start practicing self-care, which is super important. And actually it's more than just important. It's like necessary to do in our lives. I used to feel like taking care of myself was like a really selfish thing to do. I think we're trained to feel like we have to sacrifice our own well-being to please other people. And especially in our industry and especially in the startup culture, I feel like a lot of places have a social status involved with being a martyr. But it turns out we can't give anything in our lives 100% unless we make sure that we're taken care of first. And there are two major things that helped me change the way I think about it. I saw a talk about burnout last year by my friend Mary Thangvong. And she talked about this metaphor of putting on your own oxygen mask before you put on others. And it really changed my life to think about it that way. 
You're not just helping yourself first because you care about yourself more. You're helping yourself first because then you'll be equipped to help other people. And also this quote by Audre Lorde. Caring for myself is not self-indulgence. It's self-preservation. And that's an act of political warfare. The way I see it as a trans person, like just existing, is kind of an act of political resistance. So it's kind of my responsibility to protect myself so that I can keep standing up for my community. The greater I am and the more I succeed, the more powerful my resistance is because I'm doing it in the face of people who want me to fail. I think a lot of people picture self-care like this. And it can look like this. But on other days, it can look like this too. I did an art project recently where I kind of intentionally practiced self-care every day for a month and analyzed how it was making me feel. And the thing that really struck me is how different it can look on different days. Like on days when I can and I have enough spoons, self-care can really mean accountability, holding myself accountable for doing the things I have to do and the relief I feel when I get things done. But on other days, you know, I can't do that. I don't have the spoons. And then self-care can look like giving myself permission to rest and forgiving myself when not everything I might do gets done. So the art of self-care is kind of being able to figure out which you need at the moment at any given time and being kind to yourself. Well, maybe if you're not one of the people in the groups I was talking about, but you're an employer or a manager, I will guarantee you that some of the people under you are in those groups. And I hope that the exercise I did going through the morning kind of made it obvious why this is a huge problem. Like, did you see how many spoons some of your employees might have to spend before they even get started working at the beginning of the day when you want them to like feel refreshed and ready to do stuff? So it's affecting productivity. You know, everyone needs energy to do their best work, of course. But it's not just work. You know, their whole lives are going to suffer around that. We all have stressful jobs, jobs that are stressful at least sometimes, and if you have to use all your energy to stay afloat in a busy job, like what's happening to your work-life balance? Like, it's destroyed. So it's about empathy too. Like, hopefully you care about your employees and you want them to lead fulfilling lives outside of their jobs. So this is basically just a recipe for burnout. Now let's talk about Agile. This is the chart you need to understand to understand the next section of my talk. Just kidding. You don't need to know about Agile. But a lot of us use at least a little bit of Agile in our days. And I think that spoon theory and Agile methodology are kind of similar and kind of related. I think a lot about velocity at work. That's an intangible unit of measurement. Sound familiar? And if you want to prioritize your team dynamic, like Agile pushes, you know, you can't be a cohesive team unless everyone understands each other. And that includes knowing each other's limits. How can you predict velocity if you don't have an understanding of what types of things are holding some of your employees back or slowing them down? Obviously, if you can remove those kind of barriers, velocity is going to increase. And Agile methodology says that your employees are your number one resource. 
I believe that. And if you also believe that, you should be really focused on treating them well and making sure that they're okay. All right. So maybe I've convinced you. You're like, Jamie, I'm in. But how am I supposed to fix it? Like, maybe I have employees that have had illnesses their entire lives or like they're experiencing systematic racism in their lives. Like, how am I, I can't fix that for them. Obviously, nobody expects you to do that. But when you talk about spoon theory, you're really talking about levels of burden. And there are definitely things you can do as an employer to lessen people's burdens. You have to recognize that their health is more important than their work. This is the martyr thing again. The reason that that's such a pervasive idea is because a lot of companies encourage it. But it's not healthy to expect your employees to think of work as the most important thing in their lives. Provide adequate health care for them. Not only is being sick tiring, but it's also really expensive and really stressful. So good health care is going to help people manage their disease. But it's also going to give them peace of mind, which is going to help conserve energy also. Treat mental illness as an illness. Here's a pro tip. If you see someone freaking out about something that you think is nothing, they wish that they weren't freaking out more than you wish that they weren't freaking out because it's more of a burden on them than it is on you. And make accessibility a priority. You know, nobody wants to be an afterthought. But making your employees comfortable is going to involve considering what needs they might have before they have to come and ask you for special considerations. If you're planning your office, planning events, and thinking about accessibility, it's going to make everybody more comfortable. And it's not going to make people feel like they were called out because they had to come ask you to change something. Accessibility here is like a huge umbrella that is also a whole another talk. But some of the things I'm thinking about are access to your building and facilities, appropriate bathrooms. That's a big thing in the trans community, but also for people with mobility issues. And huge kudos to RailsConf for having gender neutral bathrooms, I almost cried. Transportation considerations and events that your employees can enjoy. Not everyone has fun doing physical activities. Some people have special diets. Some people can't drink alcohol. Some people have extreme social anxiety and social events that are mandatory are going to be a real stressor for them. So just think about it. This is basically all about keeping people's needs in mind when you do stuff. You might not want to give the impression of special treatment for certain employees, but you know, some people just need different considerations to perform at the same level. Fair isn't always equal. I really love this graphic. Like if someone who has a chronic illness is late to work a lot or misses more work than someone else and you're more lenient with them, you're not giving them special treatment. You're giving them a consideration they need to be able to do their job. And you also got to keep it in mind when you're looking at their work. You know, if someone isn't as productive as someone else, it doesn't necessarily mean that they're not as smart or not as hardworking. It could be because they have barriers caused by their situation. 
There's an example that I want to use because it's really real and really relevant, but I do want to preface it a little bit because there's kind of this current trend that I've been seeing on, like, Medium where men will write articles that's like, I pretended to be a woman and now I believe in sexism, but I didn't before. And it's annoying to a lot of people because like, why can't we listen to people without having to experience everything for ourselves before we believe it? But I'm still going to use this example because it's really good. We did an experiment. For two weeks, we switched names. I signed all client emails as Nicole, and she signed as me. Folks, it fucking sucked. I was in hell. Everything I asked or suggested was questioned. Clients I could do in my sleep were condescending. One of them asked if I was single. Nicole had the most productive week of her career. I realized the reason she took longer is because she had to convince clients to respect her. The point here is that you might not even realize what kind of advantages you have over other people. And that's what empathy is. He learned empathy by doing this experiment, but if we could think more about what other people might be going through, maybe we could learn empathy before we get to this point. Company culture is a phrase that kind of makes me cringe sometimes because it's like really often used as an excuse to like justify some of the crappy things I've been talking about, like intolerance and making people work too hard. But you can promote a company culture that is accepting and is empathetic. So how do you do that? Actions speak louder than words. It's not enough to just say that you want a good culture. You have to actually do stuff to back it up. I want to tell a short story about where I work at Agrilyst. And on my first day, I was really nervous. I was already out as trans when I joined Agrilyst, and everyone knew and was very accepting. So I wasn't too worried, but you still get worried. And I was filling out paperwork on my first day, and on our payroll form, there was like a gender question that had two options, and I was like, oh, I don't know what to do. So I emailed my boss, like the CEO of our company, on my first day, and I was like, oh my god, this is suicide. But I was like, I don't know what box to click. What do I do? And she wrote me back, and first she was like, oh, I'm really sorry. I think it's really crappy that they only have two. Like it's 2016. They should have more. And I felt pretty good. I was like, okay, she agrees with me. She knows where I'm coming from. But then she said, in fact, I'm going to email TriNet right now and tell them that I don't think it's right that they only have two and that they should add more. And I was like, whoa. Like she really wanted to put her money where her mouth is with that, and it made me feel like I was going to be really safe working for her. Don't demand that minorities do all the diversity work. I don't think this comes from a crappy place. Like I think if you want to do better, it seems like obvious to go ask marginalized people like, hey, can you educate me on this? How can I do better? But the problem is that a lot of people just want to go about their day without doing a bunch more free work when they're already having trouble managing their spoons. So it's not always like a cool thing to do to dump that on them. I'm kind of the exception. I really like, I give talks about it. I like to talk about it. 
I like when people send me questions and I can help them. But I'm kind of an exception and I don't want people to think that because it was okay for me, it's okay to do that to strangers. So I always make this caveat. But do listen to your underrepresented employees. And if they tell you stories, you have to believe them. There's a culture of people not wanting to come forward to tell about how they've been like harassed or discriminated against because they feel like it's not going to be taken seriously. The best way to combat that is to just always take everyone seriously. There's kind of a joke in tech that all women in tech know each other. And it's a little bit true in the sense that it means that word travels fast. Like if you're a jerk, people are going to know. But the converse is also true: if you're really not a jerk, people are also going to know. And just don't tolerate crappy behavior. This is actions speak louder than words again in a different form. If you say you're not going to tolerate something and then you do, you're just showing that you're untrustworthy, that you care more about appearing to be a good ally than actually being a good ally. Inaction also speaks louder than words. So think about if your inaction is saying what you want to say. This is one of my favorite quotes. If you're not willing to remove a toxic contributor from your project, you're accepting toxic behavior as a culture and norm. So you have to think about how you want to act, how you want to be perceived, and how those things link together. I have some recommended reading. This is the original spoon theory essay at the top. We have an article about invisible illnesses and an article about diversity in tech. The next one is a video of the burnout talk that I mentioned by Mary Thangvong. And the last one is a really great video from the Cerebral Palsy Foundation. It's called The Quest for the Rainbow Bagel and it's about navigating New York City as a physically disabled person. Thank you so much. I hope your spoons aren't crying. Hopefully they're a beautiful flower. You can follow me on Twitter or see my website. Thank you.
Spoon theory is a metaphor about the finite energy we each have to do things in a day. While a healthy, advantaged person may not have to worry about running out of ‘spoons,’ people with chronic illnesses or disabilities and members of marginalized communities often have to consider how they must ration their energy in order to get through the day. Understanding how 'spoons' can affect the lives of your developers and teammates can help companies lessen the everyday burdens on their underrepresented employees, leaving them more spoons to do their best work, avoid burnout and lead fulfilling lives.
10.5446/31478 (DOI)
[Music] Hello. Alright. Good morning. Hopefully you all enjoyed the keynote and the break that we just had. We are continuing today with our panels track. And really excited for this session on Ruby's killer feature. I'm going to introduce Chris, who's the moderator. Chris is the VP of Engineering at Radius Networks, where he builds mobile proximity tools and services. He co-founded the Arlington Ruby Group and helps organize both Ruby Retro Session and Ruby for Good events. Enjoy the panel. Hello. Alright, so I'm going to go through and we'll do quick introductions and then we'll get started. It's you. Oh, it's me. Introductions. Hi everybody. My name is Zuri Hunter. I am a Howard University alum, a computer information systems major. I'm also the Women Who Code Ruby on Rails lead in Washington, DC. And I am a junior software engineer at DigitalGlobe. Hi everybody. My name is Latoya. I am the founder of SheNomads, an inclusive space in tech for people who want to travel while working remotely. And I'm also a principal Rails engineer at Daily Kos. Hey, I'm Sean. Sean Marcia. I have slides for everybody. Now we go to Sean. That's me. Not the Sasquatch. Yeah, I help organize Ruby for Good and Arlington Ruby and I work for the government. Government. Latoya, you already introduced yourself. Allison introduced me. No one needs to hear any more about that. One thing is we'd like to have your questions and I put some index cards in the front two rows up here. And if you can, I'd love it for people to come up, grab an index card, write down a question, and then you can just come and hand it up to me if you'd like. Please feel free to do that. I'd like to be able to go through some of those. All right. So the actual panel. So before we go into the questions, let's get a little bit of context. So can you just give, probably, the elevator pitch for one of the communities or organizations that you guys organize. Let's go in reverse order from before, since Sean's holding the microphone. So the pitch I like to give, I like to talk about Ruby for Good. And you've probably heard it and seen people wearing the shirts here. It's a long weekend, long event where we get a lot of people like us together and we help nonprofits, places that really need our skills, but would never be able to afford you and me. And we build them software for a long weekend and it's a lot of fun. So as I said before, SheNomads is a space in tech for people who want to travel while they work remotely. And I think a big part of working in tech is constantly working on your skill set. So we have free coding classes, study groups, an accountability group. And we also do a remote work and wellness retreat because I think those things are also important for us as people in tech. All right. And Women Who Code is a global nonprofit organization that is dedicated to creating a community and a network for women in tech or women who would love to join tech. And for what we do, it depends on the chapter, but in the DC chapter, what we do is we have weekly meetings across multiple subjects. So Python, Java, Ruby and Rails and front end work, and we host workshops, talks, and just basically give a support group for women who are interested in or in the industry and just share our knowledge. 
So one of the things I'd like to kind of set up is, originally when I pitched the idea for this panel, it was that I find it very intriguing, folks working in and, like, helping with the community and moving it along, starting more groups, spinning things off, encouraging people to move up. We had a wonderful panel discussion on the first day about getting involved in the community and how you can do that. This is a little bit more focused on the next step. You're already involved in the community or you're participating in the community, and how can you step up and organize, or if you're organizing, how can you evolve the groups you're in, spin out new groups and basically build up the infrastructure that we as Rubyists have to rely on. So with that, the first thing I'd like to talk about is member engagement. And so my first question is just how do you get consistent members in the organizations that you help with? So for Women Who Code, the way we get consistent members is, of course, we start off with feedback when we first get the ladies on board. So we like to figure out what topics they're interested in, or what topics specifically, for our case Ruby on Rails, also the content, like we try to put forth the best content whatsoever. So we really hone in a lot on the beginners. So as we all were once beginners, we all think of the things that beginners would like to know or don't know and just clarify it in our first time this night. And then give them footsteps, I mean steps, into what is the next thing that they can do to improve on this and we go along the way with it. So I think having that constant engagement with them, we use Slack, we leverage Slack to the 100% extent, but keeping that constant engagement with them outside of our events is what helps them come back more. So it's really on the leads as well as the community members to keep that going. I just pay attention to what people want. I noticed that a lot of people were leaving other spaces because they didn't have a code of conduct, so I got a code of conduct. Or I noticed that people were having problems getting jobs specifically at remote companies that would allow them to travel, which is a ton of fun. So I started a job board and we started pulling in sponsors for companies that were hiring to work in our newsletter. So I think just listening to your community and meeting their needs is super important. Yeah, and I'd also add that you need to make your members feel like they're part of the community, engage them, have them all speak when you have an event. Like something we do at our meetup is all the people come, we have them do an icebreaker at the beginning, tell us something interesting about themselves, and if they were trapped on a desert island, what book would they bring? Things like that. And don't just leave when the meetup's over, engage the people, go to coffee, go to a nearby coffee shop, and build community. So kind of spinning off of that, how do you encourage people to actually either present at a meetup or step up and lead a project or participate at the next, more higher level? So I'm a big fan of just voluntelling people to do things. She's been voluntold. Many times. So a lot of people really want to volunteer, they want to, but I feel a lot of them are self-conscious, and so just sit with them. It's like, hey, how about on this day, I know you've been working on this, how about you come and give a talk on this cool Nokogiri scraping thing you've been doing? And then they do. 
I think for SheNomads, because our events are typically remote, it's just a really convenient way for people to contribute to the community. If someone wants to come in and teach a class or lead an AMA, they can do that. And then if there happen to be a bunch of people who are in New York one weekend or Mexico City or Lisbon, then I'm like, yes, please organize a meetup and just have it under SheNomads, and it works really well. So I want to make it a little more specific for us, Zuri. So you run a lot of workshops. How would you convince someone who feels like they might be too junior to lead or help other people out? Encouragement. Like talk to them and be like, hey, you can do this. You can do this. We can help you. We can help you put together a talk and everything. But I think the issue with that is they're not confident in their skill when actually they know their stuff, even though they label themselves as a junior and whatnot. So what we do is we talk to them and of course talk about the stuff that they want to cover and just guide them on that. And then hopefully push them to the next level in setting a date and promoting it for them to come through. So similar question for Sean. So at Ruby for Good, one of the problems is finding people that can lead projects. Is there a good way to take somebody who might feel like they're underqualified and encourage them into that sort of role? Yeah, definitely. Something we've started doing is, like if you're a junior or you don't think you have the skills, and if you're here, you have the skills, so come lead a project. And we'll find someone more senior in the community, like a senior developer, and pair them up and say, hey, this is your senior mentor, and they will guide you through the process. If you have any questions, anything at all, come to them and they'll help you out. Excellent. So I'd kind of like to change the topic a little bit. So Latoya, you mentioned the code of conduct. So how important is it, or are there any specific language or points that need to be made about a code of conduct when working with a community like this? Yeah, I think there's two things. Number one, have one. Number two, enforce it. When people start acting up or playing games, you need to remove them from your community. I think it's really important to provide a safe space for people. And I think Ruby in general does like a great job at that, but I would love to see more communities really step up and have a code of conduct, but also enforce it. So have any of you had any specific times when you've had to actually deal with a conflict in your organization? How was that resolved? Specific names would be best. Well, so far since we started, we've actually had a really great community. Everybody was respectful to each other, but we did recently have an incident. And how we handled it, I think, was absolutely phenomenal. But first, we thought we should have something like a strike system, a three strike system. So if this is your first offense and everything, we talk to that person and it's like, hey, we would like for you to not do that. Here's the reason why, this is the code of conduct. We want to keep this as a positive, open, relaxed community, because that kind of behavior will scare people away. And if they repeat it again, remind them, but the third time around, we would put them on probation and keep them away and explain why. But the key thing is for us is to reach out to the individual to let them know this is not okay. Here's how we can work with each other. 
If you're misunderstood, let's have this conversation. But I realize it's really good to have not just, oh, you're out of the community immediately, but more of a strike system, or just give them a chance of some sort. So one of the counterarguments I've heard sometimes is, we don't need more rules. Why don't we just be nice or be polite to folks? Would you have a counter to somebody making this statement? I mean, I would say people have been fighting for equality in tech for over 50 years. So obviously we haven't figured out how to play nice. When they're ready to do that, then we can maybe have that discussion. But until then, I think that's the least, like not having something, not having a framework for people to reference is like the least important part of that discussion, I think. I agree. Alright, so changing to more diversity and culture. So just in general, how do you encourage diversity in the organizations? I'm black, so I literally just show up. You know, it's really not a problem for us. I think when we first started SheNomads, we really wanted to make sure that everyone was either already a part of an underrepresented group in tech or an ally to that group. And I think because of that, we turned off a lot of people and our growth was very small, but it ended up serving us really well because we don't have those issues. And it's been a really great experience. We actually just had a conversation about this, like the leadership team of our meetup. And we recognize that we're not the most diverse group. And one of the things we're going to do is we're going to add more organizers to our meetup, more people of different backgrounds that don't look like us. So when people come into the meetup and they see the person running it looks like them, they're more inclined to stay and take part rather than people who just look like us. For Women Who Code, like clearly, it's all women, but we do come from very different diverse backgrounds. One thing we've been like really noticing or like trying to keep in mind is like the area where we host our meetups as well as certain barriers that would prevent a certain particular group of women from attending. So that would be like, you know, people from different economic backgrounds who don't really have a car and they need like public transportation to get to our locations. So we try to, you know, put together events or talks and workshops that will eliminate those barriers for those women who have that type of background and hopefully continue having that diverse background from there. So, Sean, since Ruby for Good isn't aimed specifically at a group that might be underrepresented, it's more of a general thing. Are there any steps that you take to encourage the diversity other than, you know, organizers that might not look the same? No, definitely, definitely. And, you know, before registration opens, I'm always reaching out to diverse groups. And if you do come to Ruby for Good, you'll notice that it doesn't look like a typical technology event. So we have a very diverse, like this year, I guess not including sponsor tickets because we have no control of who those go to, but 44% of the, I guess 56% is male and 44% is female. So that's a pretty good mix for a tech event. So are there any specifics for anybody? Are there specific suggestions if somebody here is a meetup organizer and they'd like to encourage more diversity? What's a concrete example of something that you can do that would help encourage that? 
I feel like if you see somebody that's like, you know, doesn't really typically show up to your events, like reach out and talk to them and try to bring them under your wing or on board with what you're trying to do. I like when you like talk to them and it feels like they're more included, even though they're like, oh, why are you talking to me? I just want to look around and see what you're doing. But I feel like if you reach out to them and let them know, like, hey, I see you, I would like more people like you to come to our event, that will really help encourage them, make them a little bit more comfortable, and like change the whole dynamics of your meetups. So I'm going to chime in, even though the moderator's not supposed to, but specifically about this panel, I wanted to find, you know, somebody that wasn't a white dude like myself. And the way we went about that was I talked to Allison who helps with the conference and I knew that she had connections, and kind of like Marco was saying in the keynote, you know, I know a person that knew a person and then that was able to come through. And in the end, I was super psyched and happier with how the panel turned out than what would have happened if I would have just gone out and tried to find somebody myself. So I think relying on those connections is a good go to. Okay, so we talked about diversity of the people inside the meetup. I think there's also an important thing about diversity of meetups or groups that you're involved in. How do you encourage your members to kind of expand out to other parts of the community or things that you're not even necessarily involved in? We usually like organize meetups to crash other meetups. That's one way of how we do it. Like, say, I think it was Arlington, or I think there was some other meetup not too long ago, that we decided, instead of like having our own little one, let's go and attend theirs, and we just show up in a group. And we do the same thing for DC tech events too, like a whole bunch of ladies will come to the event and attend it. And hopefully that like encouraged them to feel comfortable in an environment where they don't have to come with us. But that's our way of like trying to branch out outside of just the Women Who Code organization. So, like, even though we're a Ruby meetup, one thing we do is we encourage other disciplines to come give talks at our meetup, maybe talk about Elixir, talk about one of the 600 JavaScript frameworks, or something, because then, you know, people come, they get introduced to these different things, and a lot of times it's like an organizer or a member of these other meetups, and they give them the resources to then go and take part. I absolutely love linking up with other meetup organizers and just combining and doing events. I don't think I do it enough actually, I try to do it quarterly. But for example, we linked up with Chicago. And we did an event with them called 100 days of commits and we had them come in and teach a class for us too, on building Twitter bots, which was really cool. So I think just joining forces is a great thing. I definitely, you know, there's a certain amount of overlap between different meetups or maybe similar meetups within the same geographic area. So a couple of us are from DC and we have an amazing tech community there. However, I remember it was something like six years ago this month. 
I'm just guessing that Arlington Ruby was started, which, you know, there was already the DC RUG Ruby users group and there was NoVA RUG. And so Arlington is kind of between both of those. So obviously Sean was vindictive, trying to take away from both DC and NoVA and crush them and take over the whole area. How did you feel when Reston on Rails started and decided to do the same thing to Arlington Ruby? I was happy when Reston launched their inferior meetup. I'm joking. No, no, I was happy. It just shows the community is growing. There's more people. And I was really excited because more talks, more chance for me to learn and just more people coming in the community. And like the reason we started Arlington Ruby was, like, DC Ruby at the time was always full. People couldn't get in. And so we were like, well, let's, you know, let's start our own, and we all live in Arlington. We didn't live in the district. And so it just made sense. So, Zuri, I know that there are very similar groups in DC also doing some things. Do you feel like more options is better? Of course, of course, there are certain topics that they will probably cover that we don't get a chance to cover. So the more options the better. So we can have like the ladies explore what they want to like be interested in. But yeah, the more options the better. And obviously, Latoya, you want to be the only online. I want to be the only online meetup. Yeah, forever. Ever, anything. No, I mean, it's the same thing for me, especially I would love more help actually. There's so many people that are like on the other side of the planet who want to do things and I'm sleeping at that time or vice versa. So, like, I would love to be able to say, hey, I have these like 10 users on your side of the world. Here you go. The reason I'm asking these hilarious, leadingly worded questions is I just want to impress upon the folks here that as organizers, we're thrilled when somebody else opens up next door doing the exact same thing. It's the more groups, the better. And the more we build out the community, it's going to be slightly different for different people. And they'll just click with other folks and then they can work together. Right now there's lots of overlap, as you know, with Arlington Ruby and Silver Spring Ruby and the DC RUG and Reston on Rails, and then, you know, it can all come together for local conferences and other events, crashing meetups, which is fantastic. Super fine. Yeah. So if you're like thinking, I'd like to do this because I really don't want to drive for 20 minutes to go to a meetup, you should start one, and I highly encourage that. All right. So let me go through a couple of questions. So I want to start with Latoya. So SheNomads is an online virtual meetup. But somehow you ended up doing a co-located event where you brought a lot of people to Mexico. How did that come about? So when I first started traveling or working remotely, I found that I couldn't find community, right? It's like if I'm in New York or something, there's a ton of meetups, I can go and meet people. And I just found that a lot of people were having the same problem, you know, it's like, you want to get out of the US and work somewhere else where it's cheap and the food, I mean, hello, Mexican food, just amazing, right? So you want to go and have that experience and still be able to get your work done. How do you find community? 
So I kept going down to Mexico City and then I kept trying to convince my friends who are working remotely to come with me and they kept saying no. So finally I said, you know what, screw you guys, I'm just going to like throw a website up and see if anyone wants to sign up, and no one showed up and then like a ton of people ended up applying, which was great. And then I ended up hiring a yoga instructor who taught us yoga like twice a day and we got to explore this amazing culture and get to know each other and we all worked as well. So did you find there was a lot of value in that actual co-located face time? Absolutely. I feel like I've gotten to know them a lot better and it just energizes the community, I think, as well. So this is for Sean. So originally you organized a Ruby Retro Session and somehow you were able to parlay that into Ruby for Good. So first can you kind of give a little background of what Ruby Retro Session is and how you used that to evolve into a much more complicated thing to organize? Sure. So Retro Session, it's a one day conference, which means we just all get together, decide day-of what the topics are going to be, and then we just talk about it. It's a lot of fun, it's great for community building, and I'm not quite sure how that evolved into Ruby for Good. Something to do with my kind of efficient brain and hating inefficiencies, because you talk to, you know, work with nonprofits or meet nonprofits and you hear about the horrible way they're doing things, and we, you know, as software developers, we have this amazing ability to help and it doesn't require us to do much, just for how little help they need. So that's kind of, it's like, hey, we can help these people. And probably a little bit of guilt in there too because, you know, as software developers we have it pretty good, we make a lot of money and everyone's trying to give us jobs, but not everyone's so lucky. So, Zuri, I'm kind of curious what the secret sauce is in taking someone that's like a total beginner and helping them through all the way to like getting their first job. The secret sauce is sugar, spice, and everything nice. I'm kidding. Mostly it's really encouragement and boosting their confidence. So really figuring out what they really want to do. Do they want to do front end or back end, or do they want to do Ruby on Rails. And then from there just guide them, check in on them. It's slowly almost like, you know, a micro mentorship going on. So you're not at like 100% interfacing every week, but just like doing checkups to figure out like where they are, if they come across any issues. And just really guide them and push them and encourage them and then like have them come out to more meetups, have them actually become a Ruby on Rails lead. So far we like have people who become leads within Women Who Code, who started off like coming from a different industry, or who started off as like self-taught developers. And we watch them grow, we watch them give their talks and everything and we encourage them. And then from that point on they're able to build that confidence. And then like they start applying to jobs and then they're finally in the industry. But I think when you bring them in, more involved in the community, it helps really push them to the next level within their tech career, or get into the tech industry. 
So really just having that constant connection, monitoring, you know, encouragement, the network, like really, the network really helps us out a lot to bring them to the next level. So that's our secret sauce. So that's sugar and spice and everything nice. So it seems like you get a fair amount of control over the beginners. You help them level up. You can kind of help them to become a lead and get more technically savvy. But that last step is really hard, going from, you know, or even just getting the interview and then, you know, starting the job. And the terror that can sometimes go through people's minds is they feel like they're jumping into this big commitment. How do you handle that? It's really like the constant reminder to let them know we are here for you. If you need to vent, if you need to talk about like your first day or the interview process, we are here. Like this is what Women Who Code is for. Like we are here for you to communicate, to really boost up your confidence, be your cheerleaders, actually. So that's how like we really like tackle that. And what else? Yeah, we really like tackle that part. I really went blank. Latoya, do you have a similar? Do you ever work with that first interview? So the interesting thing is that I think because we're so focused on people who work in tech who can travel, we get a lot of mid and senior and like director level people in our community. So when the new people come in, I don't have to do much work to be perfectly honest. I can kind of sit back and we have a lot of people that are often willing to help mentor them on the fly. I mean, you can, you know, screen share, you can use tmux to SSH into somebody's computer and we definitely encourage that. So Sean, I know at Arlington Ruby, we've definitely had folks walk in the door and they said, I don't know if I'm even supposed to be here. And we've watched them move all the way up. Just in watching those folks, do you think there's something specific that organizers should be doing to help or encourage those people? Definitely. Mentorship, trying to find members, mentors, try to just encourage them and just be there for them. Like one thing I know a lot of the organizers of our meetup do is a lot of mock interviews with the people. I probably do one or two a month with different junior people before they're going out for their first interview or their second. And so just like help them any way you can. So changing gears a little bit to talk about mentorship, which was a fantastic segue. Thanks, Sean. So how do you go about finding people to be mentors? Sean? So, I'm going to talk about mentorship. So the first thing about mentorship is like it is a relationship. And you have to understand that and maybe it'll work, maybe it won't because it's two personalities. And I was talking to my mentee and she said to me, I knew you were the mentor for me because of all the great advice you've given me. I've given great advice? And then she clarified, and I wrote them down because I wanted to share them at some point. And the advice I've given is: don't be nervous about speaking at RailsConf, just get drunk first. That seventy-year-old lady at the nonprofit you work at is bugging you? Just fight her. When interviewing, tell whoever's interviewing you that you're good luck to hire. Tell them how every person who hasn't hired you has had a house fire afterwards. So obviously there's a particular sense of humor and we match up. 
But yeah, so it is a relationship and you need to find the right person. So as far as mentorship goes, I would not be here if it weren't for mentorship. I dropped out of college twice and I was bartending until five in the morning when I started to learn how to code. And I was like lost for a year and then I ended up getting a mentorship at a company called 8th Light. And I was able to take a lot of the good things that came out of that program, I think, and kind of implement them. And I think just because of my experience, I'm just always willing to help people who want to learn. So I think finding other people who, for some reason, want to help people learn is a big help. Yeah, for Women Who Code, we are still trying to figure out like a nice formal process for doing this. But more so, I think it's more like people don't know that like that person's actually my mentor. But I haven't told them yet, but I do go to them for questions. I do go to them for like, you know, some advice on something. But I think for us leads, what we do is like we reach out to them afterwards, like after our events, and see like, hey, would you like to, you know, have one-on-one discussions in regards to, you know, whatever you're interested in within the field. And I've done that like several times, having FaceTime, not FaceTime, yeah, Skype, Google Hangout. And I just talk with them and figure out like, hey, what are your goals? Like what do you want to do? Let's, you know, try to meet every two, three weeks on like whatever subject or whatever project that you want to do. And we can help hold you accountable for it and we can work through this and any questions that you have. Like I'm here for you. So really taking like our extra time outside of just organizing our meetups and reaching out to our members to help mentor them and like give them like advice is like one of our informal ways of like doing our mentorship cycles. So one more quick question about mentorship and then I'd like to open it up to audience questions. So do you hope to always have a mentor? Of course. Do I hope to always have a mentor? Yeah. Yeah, absolutely. I have so many unofficial mentors. Jenny Hendry was one of mine. Ray Hightower, I don't think he even realizes that he's been my mentor for like five years. And I'm always looking to, you know, grow and expand myself. And I think the only way that you can do that is through mentorship. Definitely, definitely. I hope I always have one for the rest of my life. Right. Yes, everything she said. Actually, of course, yes, I hope to have mentors. I hope to like have mentors across different realms in industry or different topics, like not necessarily always in tech, but you know, a career mentor or someone on like life. But I think for me, like what I would like to work on, and I'm sure everybody would like to work on, is getting mentors with different tricks and trades who will really give you like well-rounded advice and will help guide you in navigating wherever you're going. Great. So we have about five minutes left. I saw one person take a card. Did you have a question? Yeah, my question is how important are the meetups to maintaining community and keeping connected? So the question is how important are the meetups to maintaining a community and keeping connected? Well, in this context, what is a meetup? Like are you talking about meetup.com? Actual events. Actual events. I think they're really important. I think they're the foundation. 
So is it kind of actual events, like, as opposed to a conference? As opposed to, like, you have the Slack and, you know. Oh, okay. I think the in-person like stuff really, really truly goes a long, long way. I think that's like really fundamental because you really see the person, you see the face. We can read body language a lot better as opposed to, you know, doing this through text. Honestly body language is basically close to 90% of communication. So the physical location part is like super, super important and it really helps out with everything. Yeah, definitely. So you can create opportunities for mentorship, like have new people bring code samples or bring code that they're working on that they need help with. Like we always encourage our new members to like, hey, if you're stuck on something, bring it and someone will help you. And oftentimes the person that's helping them, that will blossom into a mentorship situation. I just want to add to that really quick. Since everything we do, or 90% of what we do, is remote, I will say that like for us a meetup might be, there's a bunch of people online on Saturday asking questions about GitHub. Why don't we just do a giant GitHub review? And then there's like 10 people that are just doing it then, and I feel that that works really well for us. So my question is kind of actually about remote. How can people join like a global community, or people that can't go to meetups or don't want to go to meetups, what options are there for them and like how can they feel like they belong to them? Come hang out in my community. So just to repeat the question, that is how can people join a more global or remote community and be involved in that? Sounds like a question. Yeah. So I think when people come into the community, I try to welcome them personally, ask them a couple of questions. We try to get people to do introductions and I just talk to them a little bit and see like what we can do to help them feel a sense of community, if they're looking for anything specific. And since everything's online, like I said before, the turnaround time is really quick. If they're like, I need help with my resume. Okay. Then like let's throw something on the calendar for next month or next week. So Ruby for Good has done remote leads. Can you say a little bit about that? Yes. So in past years, we've, like, done one of the teams, or we'll do one team where we'll get a remote lead and have a remote team. And we're debating doing it this year too. So if that's something really appealing to you, come find us afterwards. I think we have time for one more question. Anybody? All right. Great. Thank you so much for coming out. Appreciate it.
What makes Ruby so wonderful? The Community. The community around Ruby is really what sets it apart, and the cornerstone of it is the small local meetups. Come learn how to get involved, help out, or step up and start a local group of your own. We will discuss how to develop and nurture the group. Share our experiences in expanding a small group to larger events like unconferences or workshops. Find out how community leaders can help everyone build a solid network, assist newbies in kick-starting their career, and most importantly ensure that everyone feels welcome and safe.
10.5446/31480 (DOI)
I am Nathan Walz and I am a software developer at VitalSource Technologies. The primary office I work out of is in Raleigh, North Carolina, but we've got offices here and there. We're owned by Ingram Content Group and they're headquartered in Laverne, Tennessee. Today what I'm going to be doing is talking about a history of a suite of our applications organized around Common Core and this core dates back to the very early days of Ruby and Rails. So I'm going to start out by doing a little bit of stage setting. This here on this chart is a rolled up aggregation of seven different repositories that we have dating from August of 2005 all the way to this past week, showing the number of commits per week. It's not broken up by repository, this is just like an aggregate number of commits. It represents a substantial but fundamentally incomplete view of one segment of what our company does. So overall as you look from the left side to the right side, you kind of see that there's a general trend line that goes ever so slightly up and to the right. And then you look just past this area around 2015 into 2016 and you see a giant spike there on the right side of the graph. We're going to take a deeper look at that a little bit later in the talk, but the nice thing is that with these seven repositories, we have a lot of history starting from diversion going all the way through our transition to get. So I'm going to start with a little bit of a prologue here. VitalSource does a number of things around ebook publishing. We allow publishers to send us books to build. They can set prices, determine availability, different things along that nature. And then we will sell them onto end users, particularly students in learning applications. My start with this began in, actually I have this wrong. This should be October 2015. We're in the middle of an upgrade from Rails 2.3 and Ruby 187. And I was getting experience across the app suite by working on this project. And one of the pieces that we needed to do was upgrade a lot of JavaScript. Reason being is that the app used a lot of prototype, which was going away in favor of jQuery. We had a lot of the old RJS style, link to remote and link to remote JS calls and a few other underscore remote methods that we were using. And you have these magical pieces that weren't really JavaScript. It was these magical Ruby incandations that became JavaScript in the browser. So this led to us upgrading that to non-Ruby flavored JavaScript. Basically consolidating on one version of jQuery and so on. And as part of this process, I come across some very early code and I run Git blame and that leads to a discovery. And part of what I see is some code from the original maintainer or the original committer on the project that I'm working on. And so that leads me to go to Twitter and say, hey, I've looked at some very old code today. And this was originally started by Dan Benjamin, who you may know from podcasting fame. And he actually wrote back in, was a bit astonished that this code that he had written back in 2005 and had worked on for a little while was still in use and substantially the same as when he had written it. So I got a bit curious about the history of this. 
Looking at the code across several of our connected code bases and preparing for this talk, I've talked to several members of the development team past and present, and talked to members of the business team who were present for the inception and the continuing growth of these applications. And that's what we're going to talk about today. We're going to talk about team growth and changes at a high level in practices, interactions between the business and the developers, code growth a little bit, and then how that code has changed. We're going to go into some code examples, not many, but we'll get to see a little. We're going to see how developers and business interact. And then we'll spend some time looking at reflections and available lessons. And then we'll also talk about some tools that folks can use to come to some of these understandings and lessons that they can draw out of their own code bases. As I was putting this together, there were two quotes that came up. One of which was just kind of an offhanded comment, and that was, "in the absence of logic, there's history." And the second one came up a couple of times from two different people, and that was, "it seemed like a good idea at the time." So in a 12-year-old code base, there are going to be a lot of lessons learned. There are going to be a lot of changes to best practices; what the best practices are is going to evolve. And you're not going to start out having all of these in hand. But the nice thing is that this code has been around for 12 years because it has actually been successful for the business, and the business has been successful. So that's why we still get to work with it. So I'm going to go through the history in four acts. The first one is the appearance of Rails. David touched on some of this this morning in his opening keynote, but this was the original announcement to the ruby-lang mailing list about Rails 0.5.0, that this thing he had been talking about was actually out there, and people could look at it. Now remember, this is before RubyGems was a thing. This is before GitHub was a thing. This was just throwing a library out there and seeing who would grab on to it and go with it. So this is July of 2004. Over at VitalSource Technologies, there had been a series of cobbled-together automation steps around building and assembling eBooks. It was basically a lot of Java in place, but there was a sense that it was a very limited process, and the folks who were working on that were basically overwhelmed with its limitations. And there was also a desire to have a very visual process. So something like Rails coming along actually made it feasible to build a robust web application. So Rails comes in, and Dan had actually been experimenting with Rails, and he found that he liked working with it more than working with a combination of a PHP front-end and a Java back-end. And so he and another developer, Damon Clinkscales, basically started building. They started building this at the Apple Worldwide Developers Conference — I believe this would have been 2005 — and they skipped most of the sessions to start writing this new Rails application, with Dan focused on writing the front-end piece, and Damon focused on writing the back-end piece. Now looking back at this, there's a lot that we count on in Rails today that was not there. Ajax was new.
Prototype and jQuery did not exist, and the REST conventions that we're so familiar with were not actually part of Rails yet. What people were used to with APIs was very much XML-RPC style or SOAP style, or some combination of the flavors, and if you look at the original Rails code base that this spun up with, you actually get a WSDL file that comes up in your routes file, which I found really interesting. The other thing we didn't have at that time is Capistrano — it wasn't around. So deployments did not have the well-worn path. There are a lot of these things that we think of as fully baked into Rails, or into how Rails applications get shipped now, that had to get figured out. But they got it figured out. And so this is the first commit message. I apologize for the readability — this ended up a bit smaller than I thought — but this is about 400 files from that first pass at the app that got committed back in August of 2005. And what the changelog has is that this was using Rails 0.11.1 from March 27th of 2005. One of the controllers that got committed here was a library controller. We still have one of these in our app today. As you can see here, it's still somewhat recognizable as Ruby code and somewhat recognizable as Rails code, but it looks very different than how we might ordinarily write a Rails controller today. So that's the start of Phoenix, which was renamed to Connect in time and then later became an application called Manage. But there's another application, and that's P2 Services, which is essentially a services layer, and that came along in January 2006. Around this timeframe, Rails hits 1.0 in December 2005. Rails 1.1, which brought in RJS and being able to write those magic Ruby incantations that would give you JavaScript for Ajax and the remotes, came around in 2006. 1.2 brought in REST, and the REST conventions that we're more familiar with today started there. It hit 2.0 in 2007 and 2.3 in 2009 — I'm skipping over a few releases here. Act 2: growing the business, the code, and the frustrations that come along with it. Rails marches forward. Heroku gets founded. GitHub gets founded. Passenger comes out, which helps out the deployment story versus — if I remember, at the time there was a lot of Mongrel in use. And Rails is kind of riding high. It's like, oh, hey, if you're building an app, you're building it in Rails. It was kind of the default thing. Over at VitalSource, the growth of these applications was actually starting to outgrow how the development organization worked. And so there's a tension between building out features versus time taken to go research business problems and answer customer questions and that sort of thing. And all of this ended up being very business driven and very interrupt driven. And so some of the developers who were present at the time found it kind of frustrating to actually get work done. Developers had managers at the time, but their direct interaction with their managers was fairly minimal. The applications grew. And developers typically had areas and tasks of specialization. So, I mean, it's kind of a nicer way of saying folks got siloed. Folks developed areas of the code that they were comfortable with, and areas of the code started developing their style.
So you can look through different parts of the application and tell: this developer was involved in writing this substantially, or wrote the whole thing. The number of applications grew. So Phoenix, that original kind of front-end application and the database layer, started in 2005. There were VST Models and P2 Services — VST Models was a way of saying, hey, we're going to take our models and we're going to share them between this code base and the services layer in a third repository. In time, the migrations moved out into their own application called Goose. And then we had a couple of reporting applications for basically providing business data back, and that was Reporter and Uber Reporter. And then a third version of our API layer, because P2 Services involved two different versions, came out in 2010. There's a limitation along the way, and that is that this suite of applications, along with others that were running Rails, were all going to basically be locked together in terms of versions. So everything was going to be running Ruby 1.8.7 and Rails 2.3. And the idea there was: we only want to have machines that look one way. So if we need to repurpose machines, we can shift them to any other application and there's nothing to change about them — any application can run on any machine. This started to become a limiting factor, because Rails hit 3.0 in 2010. I'm sure folks who were doing Rails development back then remember this was the big community merge with Merb. There were also a lot of Active Record changes around that time. Ruby itself hits 1.9.3, and we're starting to try to get out of the perceived performance hit of Ruby 1.8.7. Ruby Enterprise Edition was out around this time too, which was kind of a performance-optimized version of 1.8.7 that the folks behind Phusion Passenger sold. Rails 3.2, Ruby 2.0, Rails 4 and Rails 4.2 all followed. With the company, it's now about 2013, and a new CTO has arrived, and not long after, there's a new development director as well. And here we start seeing some process and organizational changes. The team is starting to get out of the mindset of shipping everything as soon as possible and thinking more about having batched work, making things more testable — we have much further to go with this — and really trying to have the technical management provide kind of an umbrella for development, to basically shield them from incoming requests. Those are important questions to answer, but maybe it's like: are you happier having folks answer questions, or do you need features shipped? And trying to figure out a good balance of that. And then we had product managers come in as well, to basically take input from the business about how to shape feature work and that sort of thing — bugs that needed to get addressed, questions that needed answering — rather than taking that input directly from the business itself. And so work continued, but the upgrades did not. So in time, the requirement to have that lockstep that I talked about, where everything is going to be on Ruby 1.8.7 and Rails 2.3, got lifted. But as consensus was building for an upgrade, to say, hey, you know, we really need to catch up with where the community is at, VitalSource bought its primary competitor in 2014. The competitor was CourseSmart. They had roughly the equivalent size business as VitalSource did.
And VitalSource then had to engage in the work of basically absorbing and digesting and changing all the applications to take on this new business. That took about 18 months, from 2014 to mid-2015. And that was everything from migrating users, picking up new publishing agreements, identifying functionality that existed in one place but not in the other and making sure that it was present for what was going to exist going forward, and then beginning work on a new combined storefront with features that CourseSmart had but VitalSource did not. And of course, Rails and Ruby kept moving. And so now these seven, five-to-ten-year-old applications were still stuck at Ruby 1.8.7 and Rails 2.3, and now had an 18-month hit on being able to upgrade. We come to Act 3, where we actually get to the big upgrade. It's fall 2015, and most of the work of integrating CourseSmart has been completed. And a plan was made to get the applications upgraded. Part of that was our CTO Al — he's really an operations fellow at heart. He wants us to focus on minimal downtime, no-downtime deployments, that sort of thing, really avoiding operational impacts on the business where we can. And at that time, it was getting very hard to get continued support for Ruby 1.8.7 and Rails 2.3. There were some corners where you could still get security support for them, but by and large that had ended. So the upgrade process got its start. And it was mostly a parallel effort across each of the applications. We took it as a stepwise progression: from 2.3 to Rails 3.0. When that was done, we went to 3.1. When that was done and settled, we went to 3.2, then 4.0, then 4.1 and 4.2. We did a similar stepwise process through Ruby, largely interspersed with the versions of Ruby that corresponded with the version of Rails at the time. And that took, I think, about seven months to get through everything and get it shipped, tested, and out and stable. Some of the challenges that came along with that were mixing urgent work and upgrade work. We had a problem keeping divergent code bases shippable: with the bulk of the work on these upgrades happening, there were further-back changes that would have bug fixes applied, and then we had to bring those bug fixes forward, or we would have fixed bugs in the upgrade version and then might have to backport something. And just trying to keep all of that balanced was fairly challenging. Further to that, these seven applications all shared a database. And so we had to coordinate all the cross-application dependencies such that the applications could go out in the right order, such that they would all continue to function when they shipped on the upgraded code bases. So, some notable challenges with that. There were a lot of things about upgrading Rails that kind of forced their hand at 3.0, where there was functionality that came in before 1.0, got deprecated at 2, and was finally removed in 3. We had to move off of RJS templates to native JavaScript. We had to transition fully to ERB. We incorporated the asset pipeline at long last. And then there was a lot of restructuring of Active Record queries, because there had been a lot of work that had gone into Active Record in that amount of time, and we finally got to take advantage of that. The teams also shifted.
So what had basically been a few silos here and there: there was an API and platform team that focused on back-end stuff. They had had a changeover in personnel, and then that team grew with some additional hiring. And then for the front-end applications like Connect there were additional hires brought in, and then there was additional technical management and product management brought in to help those applications as well. And then, using some of the lessons learned that we'll talk about in a little bit, a new application was started, and it basically took the hard lessons of database coupling versus API layers, and we were actually able to say, okay, we're not going to go down that same path. And so when that new product started in early 2016, instead of hooking into the same database, it had the hard requirement of: we're going to use an API layer instead, and whatever data it needed locally, it would save locally. But anything that was in the core database over in Phoenix had to go over an API and would not actually directly touch the database. And how Rails is being used has also evolved in that amount of time. It's Rails on the server side, but it's delivered as a single-page app using React on the client side. And that's been a model for more of our apps that we're using going forward, and it seems to be working really well for us. We also started getting into some technical culture shifts here, and the beginnings of an increased code review culture, more collaborative work, which is really good. So we come to Act 4, and that is growing the code with a guided technical approach. The review culture gets stronger, and here we start doing our code reviews earlier. We don't have a single person responsible for the final sign-off, and so we're not waiting for that sign-off to happen and potentially throwing work back. We started a version 4 of our API layer, and here we're taking JSON as the first preference, with XML supported for external APIs but as the second choice. JSON, at least in our opinion, is far more pleasant to work with than some of the large XML pieces that we've been working with. We start returning meaningful error messages with our APIs, and we're basically able to take the experience of the earlier versions and integrations and incorporate that into the design. We're doing more opportunistic refactoring now. So lots of logic extraction into classes, a lot of, you know, hey, let's set up controllers in this different way. We went very controller-heavy with our code early on, and so now it's probably closer to what folks would expect now: more use of service objects, use cases for some of the newer applications, and really trying to have only the controllers handle the interaction with the business object instead of trying to do everything directly, as they had been doing before. As part of that, we're also moving towards treating the API as how you interact with the core data in the central database. I think in time what we're looking to do is get away from having shared models between code bases. We're going to get away from having shared migrations that have to run in one repository, where you then have to take a copy of the database schema over to the other repositories to get that in sync. And then basically any data interaction takes place over an API transaction.
And as folks have been using the UI pieces more, we're also increasing the amount of work that we do for API and integration work, both internally and externally. The other nice thing about getting to this point is that each team has ownership of when they upgrade to new versions of Rails and Ruby. And so it's possible for them to update to 2.4 and 5.0 basically when they feel that they're ready to and want to take that work on. The team that I'm working on currently, we're probably going to take this work on in the next month or two. And then finally we're going to take on a UX refresh of the core application, because it hasn't changed a whole lot since 2005 when it was initially written. We're going to be increasing our automated test coverage, basically filling pieces in. Development culture is shifting towards: keep the build green, make sure there are tests, write code that is easier to test. And when we're writing new features, let's actually lead with having the automated testing done when that feature ships instead of trying to fill it in later. And process-wise we're refining. Most teams have gone from Scrum to Kanban, and each team gets to iterate on its own practices and how they handle their business in terms of taking new work in from the business, when they retrospect, putting new processes in place or taking processes away, that sort of thing. And in time we're moving some of these apps towards a well-earned retirement. Our early service layer is due to be replaced with our V4 layer, with more up-to-date authentication mechanisms and more RESTful APIs. Our two reporting applications that were written against this code base are being replaced with a more capable ETL-based reporting application that's being written by another development team. And essentially we're taking the opportunity to say, you know, it's okay to let these go. We don't have to keep these applications going in order to have the value for the other things that we're still working on. So, some tools and methods that I think are generally applicable to teams that are working with code bases that are older than, say, six months or two years or what have you. Code Climate, if you use it, has a churn and complexity graph. What can be really neat to look at with this is to find out what you have in the upper right-hand corner, and then figure out how you want to dig in: okay, why are we changing this a lot, and why is the code quality suspect for the changes that we have to make? And accordingly, what kind of changes can we make to pull that complexity and churn score down? Can we extract logic? Can we break apart some complexity into more easily testable things, things that don't have to change as much? GitHub has code frequency graphs. This is for the Phoenix application, and if you look on the far right around 2015, you'll see some spikes there. That was where we were doing our upgrade work. You'll remember — if you go back, I'll have it up on the screen again — we had a big spike in commits, but overall the code added and subtracted didn't change a whole lot, or at least nearly as much as what we had back in, say, 2009. So if you're looking back through your code base, you can look at: oh, what happened around this time frame that necessitated so many changes going on in the code base? That's kind of an area to dig into and look at more thoroughly.
Here's another one from P2 Services, where we see a bunch of spikes at different points, and so those are also things that we could dig into. GitHub can also show you the complete history for individual files. So if you go into a tree view for any commit hash, you pick out a file that you want to look at, and there's a nice history button, and that will give you a nice graphical look at everything that has happened to that file in that repository. You'll see the commit message, but you can also dive into context with the other hashes and see what's going on there. I also think it's very helpful to look at project release history. So we're using Rails, we use a number of gems, we have our own project release history. These are all things that we can draw from to figure out what's happened in a project, and figure out how those changes in our dependencies influence the direction of our projects. We also have git log, and I used this to assemble the commit chart that I put up next. You basically have an incredibly rich tool to dig into commits by committer; you can get a range of commits after a certain date, before a certain date, show me the last 100 commits, show me commits that touch this file — on the command line, and it's all incredibly fast. But the nice thing is that you can also do some data exports with this, and that's how I actually got to this here. You can format and export from git log, which I'll show you here, where you basically define the format that you want to use, and then I do a little bit of cleanup after the fact, and I end up with commit lines that look like this as a pipe-delimited file. And then I wrote a small Rails application to pull all that in, and then I can throw that at a library called Chartkick, which actually spits out this graphic right here, where you can see what the commits are over time. You can also do grouping by week, by month, by thing, and then depending on how you import that data, you can do some other nifty slicing and dicing with it as well. Git itself is a very wonderful tool. I wanted to get into this for this presentation, but I couldn't quite get to it: Git is a time machine. So I could go back to any point in time in these projects, and basically start over from how those projects looked at a given point in time. So if I want to go back to 2010 and see what those projects looked like, I could, in theory, spin those up. My thought there is, if you can get a virtual machine using a version of Linux from back then, or a Docker container with the right version of Linux, you can get the right version of Ruby installed, the right version of Rails installed, and then set that code base up as it would have existed at that time, and be able to look at things and say, oh, okay, I can put myself back in context for what happened back then. That could be valuable to you if you want to ask, hey, where did this happen? Why did we do it this way? You can learn from that. And then as big a piece of this for me was talking with folks who had been involved in this from the very beginning or over the life cycle of these applications, both from a development standpoint and from a business standpoint. They're not always going to be available. You may not have access to every participant.
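To make the git log export described above concrete, here is a rough sketch. The exact format string, the pipe-delimited field layout and the helper names are assumptions (the speaker's own version fed a small Rails app and Chartkick rather than Python); it only illustrates the general "format, export, then group by week" idea.

import subprocess
from collections import Counter
from datetime import datetime

# Hypothetical pipe-delimited export: short hash | author | ISO date | subject
FORMAT = "%h|%an|%ad|%s"

def commits_per_week(repo_path="."):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--pretty=format:{FORMAT}", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter()
    for line in out.splitlines():
        sha, author, date, subject = line.split("|", 3)
        year, week, _ = datetime.strptime(date, "%Y-%m-%d").isocalendar()
        counts[(year, week)] += 1  # one bucket per ISO week
    return counts

if __name__ == "__main__":
    for (year, week), n in sorted(commits_per_week().items()):
        print(f"{year}-W{week:02d}: {n} commits")

From there the weekly counts can be handed to whatever charting tool you prefer, in the same spirit as the Chartkick graph shown in the talk.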
I got good responses from most of the people I contacted though, and that was very helpful. And where you can talk to folks to get context for why certain business decisions were made, why current certain code is structured the way that it is, that can all be very helpful. I do find it very helpful to kind of approach this in a I'm curious manner, not an accusatory fashion. So my questions were, I tried to phrase my questions with kindness, like, hey, can you tell me about what happened around this? Can you tell me about what it was like working on this project back when everything was brand new? And some of the feedback that I got around this was pretty interesting. And two pieces I got from one of our developers, Tommy Stokes, was that we all learn as we go and get better with time. And so don't look at code from six years ago as an evaluation of how a developer is now. And then you can also look at the code and think that, you know, if this could have been implemented in a better fashion or it could have been tested better, there's probably a really good reason why it wasn't at the time. And that could be because of business requirements and deadlines. It could be because of a developer's own experience. It could just be general tactical limitation. And so some lessons that are available with this. And just a thing about lessons available. Oftentimes we hear, like, lessons learned. But I saw something recently where they said lessons available just because you weren't necessarily guaranteed to learn just that that lesson was available to you. So here, getting to MVP was critical because there was overwork and we needed better automation and tooling and shifting gears to allow external access to the application is something that was kind of a big change for the teams and one that is still kind of rippling out. And then the upgrade itself took a long time and that stalled innovation. There's always more that we can do in terms of refactoring business logic, particularly moving towards a single responsibility principle. These migrations and models have not served as well. It's a headache for all of us trying to move things intelligently through the application that causes test breakage. The apps are overly coupled for how they really could be. But there's a joy in writing Rails versus other software development. For Dan Benjamin, when he started the project, it was preferable to writing a hybrid PHP and Java solution. And for one of our other developers, Erin Zarpa, working with Rails was the first moment she felt she enjoyed software development. If we'd understood where we would be now, we probably would have, you know, in terms of allowing external access to the number of customers we would have, how much stuff would be coming in, we would have looked at structuring our data and our applications bit differently. At the API layer, making the errors more clear and descriptive, having good documentation, not treating every response as a 200 okay, regardless of what happens with it, is a big one for us. We have to keep that in some areas because that's what the expectation is, but that's not one that we want to keep. And I think one of the most important things I'm left with is, expect that, you know, my current, my present, my prior colleagues did the best work that they could, given their knowledge and understanding and circumstances at the time. 
And so I approach this with love and kindness to try and really put myself in the shoes that they were in when this happened, and try to understand this contextually. And understanding that everybody learns as a project goes on, best practices evolve, and new perspectives are going to come in and add distinct weaknesses as people come and go on a team. But Rails has served us very well for the last 12 years, and we'll continue to do so for these applications. And you know, it's been surprising that, you know, everything has been as stable as it has been. We are changing the UX. This is how it looked back when Dan started this project back in 2005, and we are recently kind of giving it a UX facelift, and so soon it will look like this, and that this caused audible gasps when this was demoed last week. People were astonished that it was like, oh wow, it's changing, finally. And the other thing that was interesting, and I've got just a little bit of time left so I need to wrap up, but the business took a big risk. Rails was not at 1.0 yet, and so they had a business critical need, and they took a leap of faith in going with Rails when it would have been very easy to say no. So it's kind of with the mix of astonishment that it's gone on so long, and it has more potential to fulfill. So I have some thank yous to run through really quick, but I got some invaluable research assistance with some interviews. The software that I used to build this presentation was on Rails 5 and so on, and I'm online at base 10, and these slides will be available at walscorp.us slash presentations, probably sometime in the next week or so, and you'll see stuff when the conference releases slides as well. Thank you so much. I appreciate you all being here, and enjoy the rest of your conference.
Come on a journey backward through time from the present all the way to August 2005 to see how a living and evolving Rails application started, changed, and continues. Find out some of the challenges and temptations in maintaining this application. See how different influences have coursed through the application as the team changed, the business grew and as Rails and Ruby evolved. We'll explore history through code and learn from some of the developers involved in the application over its lifecycle to build an understanding of where the application is now and how it became what it is.
10.5446/31444 (DOI)
Okay, so now we can start the presentation. We are going to talk about the use of a machine learning approach to design a full-reference image quality assessment, to measure the similarity between two images. After introducing the problem, I will show you the machine-learning-based approach used to design our full-reference image quality assessment algorithm. In the third part, I will present the performance of the developed approach, and we conclude with some open questions. First, if we consider full-reference image quality assessment, there have been many, many studies on this kind of problem over the last 20 years; by 2009, no fewer than 100 metrics had been developed to respond to this kind of problem. Those metrics work on many data representations: purely spatial (RGB, Lab, and so on), frequency (DCT, DWT, and so on), and spatial-frequency domains. Among all those existing metrics, we could ask one question: which is the best one? Several comparative studies have been performed to answer this question. And recently, maybe three years ago, a MATLAB package was developed, named MeTriX MuX, in which we can find 12 of the best-performing metrics. We find MSSIM, which is the metric presented by the first speaker this morning, PSNR, VIF, and so on. For all those kinds of metrics, the final quality score is obtained by a pooling approach. And recently, a new way has been developed based on neural networks; I mean, a machine learning approach has been used to find the final quality score of an image. So when we speak about quality, we can see that this term can be used in two ways. The first is that quality can be applied when we have no reference image, as in the first case here. I mean, everybody knows what quality means for this kind of image: it's beautiful, it's not beautiful, it's bright or not, and so on. For the second approach here, we use the same term, quality, but we do not have to judge the quality; we rather have to see where we can find differences. It's like the game where we have two images and we have to find the seven differences between them — except here we have not only seven differences, but more than seven. So instead of speaking about quality, I think it's better to speak about fidelity, or more precisely about similarity. And it's not the same approach. So for full-reference image quality assessment, I think it would be better to speak about full-reference image similarity assessment or image fidelity assessment. From this idea — it is not natural for a human being to directly score the quality of an image — maybe it could be better to learn how we can predict the quality score. And this is the approach we are following in this work: we used a machine learning approach based on SVMs to define a new quality assessment algorithm. So, starting with two different images, a reference one and a degraded one, the first step is to find the differences; that is, to define the features that will help us to measure the similarity, or the fidelity, between the two images. To do this, we performed the computation of the similarity features within two different spaces. The first one is the spatial domain, where we find the MSSIM defined by Wang and Bovik and explained by the first speaker — so we can go on quickly. And the second one is done in the frequency domain.
Why this kind of domain? Because the human visual system is sensitive to different frequencies and different bandwidths. So for the frequency domain we used the steerable pyramid decomposition, on three frequency levels and on four major orientations — I mean 0, 45, 90 and 135 degrees — even if we know that the human visual system is sensitive with a bandwidth of about 10 degrees. So finally, we obtained 27 full-reference features to define our vector. Once this has been done, we have to use our classifier — a multi-class classifier, where the number of classes corresponds to the number of similarity classes into which we have to classify the differences between the two images. To do so, we used SVMs. And since an SVM is a binary classifier, we use a decomposition strategy to build a multi-class classifier. We have two ways to do so. The first one is one-versus-all, where the number of induced binary problems is equal to C, and C is equal to the number of final classes. And the second one is one-versus-one, where we have to train a classifier for one class versus another class. We decided to use one-versus-one, and here is presented the associated decomposition matrix for four different classes. The first row corresponds to the training of the yellow class against the examples of the green class, and so on. The second row corresponds to the training of the red class against the examples of the green class, and so on. So finally we obtain a given number of binary classifiers — C(C − 1)/2 of them — that we have to combine to take our final decision, to say whether the differences between the two images correspond to very similar images or to not really similar images. We have many ways to do so; for example, the most commonly used combination rule is majority voting, or an ECOC decomposition with a distance measure, and so on. In our case, we decided to use the evidence framework, in which we are able to put a belief and a confidence level on a particular classifier. So the evidence framework helps us to model the partial knowledge we have about the example we have to classify. Here, this helps us to say: okay, we have, for example, nine classifiers that say it is class number one, and one classifier that says, no, no, I don't agree with you, it's class number two. So instead of taking the decision from the nine classifiers, we say: okay, maybe it is effectively the class corresponding to the nine classifiers, but if the probability that the example belongs to class number two is very close to the maximum probability, then maybe it could be class number two. So this equation helps us to create a subset of possible classes of assignment. To do so, we used a mass function that represents the belief we have in each class hypothesis. And the decision is taken from the maximum of the pignistic probability, which helps us to spread the ignorance — represented here by the mass of the empty set — over all the classes; finally, we take our decision from the maximum of the pignistic probability. So we decided to use a classification step under the uncertainty constraint, to take into account the recognition rate, or classification rate, of each classifier.
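To make the feature-plus-classification pipeline concrete, here is a minimal Python sketch. Assumptions: scikit-image's SSIM and a Gabor filter bank stand in for the paper's spatial features and steerable-pyramid decomposition (so the vector below is shorter than the 27 features actually used), and plain one-vs-one voting from scikit-learn stands in for the evidence-theory combination of the pairwise SVM outputs.

import numpy as np
from skimage.filters import gabor
from skimage.metrics import structural_similarity
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

SCALES = (0.05, 0.10, 0.20)        # stand-in for the 3 frequency levels
ORIENTATIONS = (0, 45, 90, 135)    # degrees, as in the talk

def similarity_features(ref, deg):
    """Feature vector for a (reference, degraded) grayscale pair."""
    feats = [structural_similarity(ref, deg, data_range=ref.max() - ref.min())]
    for freq in SCALES:
        for theta_deg in ORIENTATIONS:
            theta = np.deg2rad(theta_deg)
            r_band, _ = gabor(ref, frequency=freq, theta=theta)
            d_band, _ = gabor(deg, frequency=freq, theta=theta)
            # simple per-band similarity in [0, 1]
            num = 2 * np.abs((r_band * d_band).sum()) + 1e-8
            den = (r_band ** 2).sum() + (d_band ** 2).sum() + 1e-8
            feats.append(num / den)
    return np.asarray(feats)

# One-vs-one decomposition of the 5-class problem with RBF-kernel SVMs.
# The paper then fuses the C(C-1)/2 pairwise outputs with belief masses and
# pignistic probabilities; simple OvO voting is used here instead.
classifier = OneVsOneClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"))
# classifier.fit(X_train, y_train)            # y_train: ITU classes 1..5
# predicted_class = classifier.predict(similarity_features(ref, deg).reshape(1, -1))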
Since we have to classify the results into different classes, we use the similarity classes defined by the ITU, which are represented by the impairment scale: from class number five, where we have no perceptible difference between the two images, down to class number one, where we have very high differences between the two images. But up to now we have no quality or fidelity score, since we have just a classification. So to obtain a final quality score, we used regression functions for each quality class, or similarity class. We use epsilon support vector regression, which is based on the same kernels as the support vector machine, to define a quality score. So after the classification process, we use a regression function to obtain a scalar score of the quality, or fidelity, between the two images. Once we have done this, we now have to test our method and measure the performance of the proposed approach. We decided to use the LIVE database, developed by the University of Texas at Austin and the LIVE laboratory. The training sets are extracted from the LIVE database: one set to train the SVM classifiers, and five sets — one per class — to train our regression functions. The test set is drawn from the LIVE database, without the training set of course, and from another database in which we can find different kinds of degradation — degradations that were not learned. A cross-validation has been done to determine the parameters of the SVM classifiers and of the epsilon support vector regression. The kernel used is a standard radial basis function. And to be sure to obtain a quality score for each similarity class, we use a numerical range associated with each class: for example, for the first class, where we observe the biggest differences, the quality is scored between zero and one; for the second class, between one and two; and so on up to the fifth class. The comparison protocol has been based on the Spearman rank-order correlation coefficient, and we tested our approach against four commonly used full-reference image quality metrics: MSSIM, which is supposed to be the best one, VIF, VSNR and the famous PSNR. The results on the LIVE test set are the following, where the red curve represents our approach. Using a statistical Fisher test, we can observe that for two degradations, JPEG and Gaussian blur, there is a statistically significant difference between our approach and the best one. One interesting thing is that for all the data, we are better than MSSIM or VIF or PSNR and so on. To avoid questions about dependence on this database, we tested our approach on TID, the Tampere Image Database, where we can find more than 17 different kinds of degradation. And one interesting thing is that our approach is better for non-learned degradations such as correlated noise — sorry, it's written in French — high-frequency noise, and so on. And another interesting thing is that, for any kind of degradation, we are better than MSSIM or VIF or VSNR and PSNR and so on. So, to conclude my talk: we addressed the problem of measuring image quality using a machine learning approach, for which we learn how to predict the final score; the classification has been done under the uncertainty constraint, and the computation of the final score has been done using a regression scheme. An open question is: okay, you define an image quality score, but what about the color dimension?
This is much the same open question as the first speaker's. And we also have to improve the process with a feature selection step, to say, okay, maybe we can add many more features, and to improve the approach we can apply feature selection to extract the best features and obtain higher-confidence results, and so on. Thank you for your patience and for your questions. Hello, thank you for the presentation. We have time for a few questions. Anyone? Thank you. I noticed — sorry, this is a slightly technical question — but I noticed you use cross-validation to estimate the regression parameters, and sometimes cross-validation can give rise to underestimation. So did you test the cross-validation against any other estimation procedure? Yeah, it has been tested, and we found no really significant difference. So this is why we use cross-validation. Okay. Hi, actually I was doing a similar thing, with RLS, regularized least squares. It seems to be more straightforward, instead of doing all the clustering stuff. I'm just calculating... Sorry. Okay, so let's stop; maybe you can discuss afterwards. More questions? Thank you. How did you test significance? You mentioned that your approach is significantly different and better. Yeah, we used a Fisher test, with the significance level at 0.05 — p under 0.05 for a Fisher test. So a variance approach. Okay. I think we should stop the discussion there. It's time to thank the speaker again. Thank you so much.
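To tie together the classification-then-regression scheme described in this talk, here is a small sketch of the per-class epsilon-SVR scoring and the Spearman evaluation. The class-to-range mapping follows the talk (class 1 scores in [0, 1], ..., class 5 in [4, 5]); the class name, hyper-parameters and helper structure are otherwise assumptions.

import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR

class ClasswiseScorer:
    """One epsilon-SVR per ITU impairment class, each scoring within its range."""
    def __init__(self, n_classes=5):
        self.models = {k: SVR(kernel="rbf", C=10.0, epsilon=0.1)
                       for k in range(1, n_classes + 1)}

    def fit(self, X_by_class, y_by_class):
        # X_by_class[k]: feature vectors of training pairs judged to be in class k
        # y_by_class[k]: their subjective scores, rescaled into [k - 1, k]
        for k, model in self.models.items():
            model.fit(X_by_class[k], y_by_class[k])
        return self

    def score(self, features, predicted_class):
        raw = self.models[predicted_class].predict(features.reshape(1, -1))[0]
        return float(np.clip(raw, predicted_class - 1, predicted_class))

# Evaluation against the subjective scores, as in the comparison protocol:
# rho, _ = spearmanr(predicted_scores, subjective_scores)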
A crucial step in image compression is the evaluation of its performance, and more precisely the available ways to measure the quality of compressed images. In this paper, a machine learning expert providing a quality score is proposed. This quality measure is based on a learned classification process in order to respect that of human observers. The proposed method, namely Machine Learning-based Image Quality Measurement (MLIQM), first classifies the quality using multi Support Vector Machine (SVM) classification according to the quality scale recommended by the ITU. This quality scale contains 5 ranks ordered from 1 (the worst quality) to 5 (the best quality). To evaluate the quality of images, a feature vector containing visual attributes describing image content is constructed. Then, a classification process is performed to provide the final quality class of the considered image. Finally, once a quality class is associated to the considered image, a specific SVM regression is performed to score its quality. Obtained results are compared to those obtained by applying classical Full-Reference Image Quality Assessment (FRIQA) algorithms to judge the efficiency of the proposed method.
10.5446/31445 (DOI)
Hi, good afternoon, everyone. I'm Changjun Li from the University of Science and Technology Liaoning, in China. The title I'm going to present is the new version of CIECAM02 with HPE primaries. This is joint work with Ronnier Luo from the University of Leeds and Pei-Li Sun from National Taiwan University of Science and Technology. What I'm going to talk about is basically four parts: firstly, the background of CIECAM02; secondly, the new version of CIECAM02; third, its performance; and finally, my talk ends with the conclusions. Firstly, about the background. CIECAM02 was recommended in 2002. Since then, it has enjoyed wide popularity in scientific research and industrial applications. It has been used for predicting color appearance under a wide range of viewing conditions, for quantifying color differences, for providing a uniform color space, and for providing a profile connection space for color management. But it is not perfect; it has problems. The main problem with CIECAM02 that has now been found is computational failure — something unexpected, mainly coming from the lightness computation. The problem is with the ratio in the bracket, A over AW. We have shown that AW is positive, but unfortunately A can sometimes be negative. Therefore, when we compute the ratio raised to the power c multiplied by z, we get some problems. So this is one of them. Another problem was identified by Michael Brill and Sabine Süsstrunk: the so-called yellow-blue and purple problems. This problem can be illustrated by this diagram. The solid lines form a triangle called the CAT02 triangle, and the colored dotted lines form the HPE triangle. The black dotted curve is the CIE spectral locus, and the black dotted line is the purple line. And the... can't find the... is this the pointer? Oh, yes. Look at the CAT02 triangle, the solid lines. This part here is also located in the domain enclosed by the CIE spectral locus and the purple line. That's the source of the so-called purple problem. And you can see another problem on this side. On this side you can see the solid line and the dotted blue line, and also the spectral locus in the red and yellow part. It looks like they coincide, overlapping each other, but if you blow up this part, then you get something like this. And they found that ideally this solid line should be located in between the two dotted lines. So that is the problem at the moment, called the yellow-blue problem. Those cause some unexpected computational failures for CIECAM02 as well. So in 2007, at the CIE meeting in Beijing, a technical committee was formed, named TC8-11, and I'm the chair of this committee. In 2009, we showed that if the CAT02 matrix, which was built into CIECAM02, is replaced by the HPE matrix, and if the sample has chromaticity inside the domain enclosed by the CIE spectral locus, the resulting CIECAM02 has no computational failures and is also simpler than the original. During the CIC meeting in 2009, this TC also had a meeting, and during that meeting it was agreed to further evaluate this model. So this talk reports the new version and its performance when predicting the psychophysical experimental results. So now, the new version.
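For reference, the lightness equation being referred to above is, in the published CIECAM02 (reconstructed here from the standard, not from the slide):

J = 100 \left( \frac{A}{A_W} \right)^{c z}, \qquad z = 1.48 + \sqrt{n}

where A and A_W are the achromatic signals of the sample and of the adopted white, c is the surround factor and n is the background induction factor. Since A can turn negative while A_W > 0, raising the negative ratio to the fractional power c·z is where the computation breaks down.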
If you're familiar with CIECAM02, you know you need to input values into the model: the X, Y, Z of the illuminant, the background, and also the sample's X, Y, Z. The viewing conditions, all shown on this slide, are the same as in the original. Furthermore, some parameters can be computed based on the input information — those are equations 3 to 7. They are independent of the sample (of the image pixels, if we are talking about an image), and they are also the same as in the original. Now we come to the main steps of the new version. The first thing is the chromatic adaptation. Equation 8 is the transform from X, Y, Z to the RGB space, and this is a step that differs from the original: only the matrix changes. Now it is the HPE matrix. With the original, we normally say the RGB is in a sharpened sensor space, because the RGB values can be negative; but in this version, RGB can be considered to be in the cone space. This is followed by computing the D factor. After that, we do the chromatic adaptation. In the new version, the adaptation can be considered as taking place in the cone space, compared with the original, where it is in the sharpened sensor space. The second step is the luminance adaptation. With the new version, you go directly, using this nonlinear transform, from Rc, Gc, Bc to R'a, G'a, B'a. Compared with the original version, the new version is much simpler, because with the original you need to transform from Rc, Gc, Bc to the corresponding colors Xc, Yc, Zc by a matrix transformation, then from Xc, Yc, Zc to R', G', B' using the HPE matrix — that is the transform from XYZ to the cone space — and finally do the nonlinear adaptation from R', G', B' to R'a, G'a, B'a using these three equations. So in comparison with the original this is much simpler. It also has a nice property: if we choose the sample within the domain enclosed by the spectral locus, then R'a, G'a and B'a will all be not less than 0.1. This is a very nice property. The next step is to transform to the opponent color space, using equations 17 to 19; A is the achromatic signal. With the new version, A will be non-negative, which is what we wanted. The formulas are the same as in the original, but we have the nice property we wanted. In this step we compute the hue: the hue angle, little h, and the hue quadrature, capital H. This is the same as the original, and all the formulas and parameters are in the paper as well. Since it is the same as the original, I don't want to say more. Finally, we predict the attributes, for example lightness. The formulas are the same as before, but some problems are overcome. For example, for the lightness J in this new version, the ratio in the bracket will be non-negative, so there is no problem computing the lightness J. Another nice property of the new version concerns the t formula. In the original, the denominator is R'a + G'a + (21/20)·B'a, and in the original formula that can be zero; if it is zero, you have another computational problem. With the new version, it is always positive. Finally, you can use the formulas to compute the chroma, the colorfulness and the saturation, which are the same as in the original.
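As a rough illustration of how these steps fit together, here is a minimal numpy sketch. It assumes the standard published CIECAM02 constants with the Hunt-Pointer-Estevez matrix used in place of CAT02; the exact normalisation of the cone-space adaptation in the authors' revision is an assumption here, so treat this as the shape of the computation rather than the model itself.

import numpy as np

# Hunt-Pointer-Estevez matrix (cone fundamentals), used here instead of CAT02.
M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.00000, 0.00000,  1.00000]])

def forward_sketch(xyz, xyz_w, L_A, Y_b, F=1.0, c=0.69):
    """Simplified forward steps (sample XYZ, white XYZ -> lightness J)."""
    xyz, xyz_w = np.asarray(xyz, float), np.asarray(xyz_w, float)

    # 1. XYZ -> cone-like RGB with the HPE matrix (the step that changes).
    rgb, rgb_w = M_HPE @ xyz, M_HPE @ xyz_w

    # 2. Degree of adaptation and a von Kries-style adaptation carried out
    #    directly in the cone space (normalisation assumed, as noted above).
    D = F * (1 - (1 / 3.6) * np.exp(-(L_A + 42) / 92))
    gain = D * xyz_w[1] / rgb_w + 1 - D
    rgb_c, rgb_cw = gain * rgb, gain * rgb_w

    # 3. Luminance-level (post-adaptation) compression applied to the adapted
    #    cone signals; outputs stay >= 0.1 for non-negative inputs.
    k = 1 / (5 * L_A + 1)
    F_L = 0.2 * k**4 * (5 * L_A) + 0.1 * (1 - k**4)**2 * (5 * L_A)**(1 / 3)
    def compress(v):
        t = (F_L * v / 100) ** 0.42
        return 400 * t / (t + 27.13) + 0.1
    rgb_a, rgb_aw = compress(rgb_c), compress(rgb_cw)

    # 4. Achromatic signals; with each channel >= 0.1 the result is >= 0.
    n = Y_b / xyz_w[1]
    N_bb = 0.725 * (1 / n) ** 0.2
    z = 1.48 + np.sqrt(n)
    def achromatic(v):
        return (2 * v[0] + v[1] + v[2] / 20 - 0.305) * N_bb
    A, A_w = achromatic(rgb_a), achromatic(rgb_aw)

    # 5. Lightness: the ratio A / A_w is now guaranteed non-negative.
    return 100 * (A / A_w) ** (c * z)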
So this is the new model, which compared with the original is much simpler and also overcomes some problems. Now we test the performance of the new model against the original. Two sets of data were used: one set is corresponding-color data, and the other is the color appearance data set. Firstly, this slide shows the prediction of the corresponding-color data sets. In this bar chart the horizontal axis represents the data sets — 21 of them, all of which were used for deriving CIECAM02. The vertical axis is the color difference, that is, the difference between the model prediction and the visual result. So the lower the bar, the better. The blue bars represent the original and the dark purple bars represent the new one. As expected, the blue bars are overall lower than the dark purple ones; that's as expected, because the original version was optimized using those data sets. But overall, the new model performed only a little worse than the original, about 1.3 CIELAB color-difference units worse. That was the corresponding-color data set. Next I want to show the color appearance data set. Firstly, I need to introduce a measure of model performance: the CV value, computed using this formula, where V is the visual result and P is the model prediction. The smaller the CV value, the better the model performs. If the CV value equals 20, there is a 20% difference between the visual results and the model predictions. The first column represents the data sets; all of them were used for deriving the color appearance part of CIECAM02. There are three attributes: lightness, colorfulness and hue. Comparing the results on each data set, we can't see much difference between the new version and the original. Overall, the new version was slightly worse than the original, but on each attribute by not more than 0.3 CV units. So that's the good news. I can conclude with the following. Firstly, I presented a new version of CIECAM02. The new version is simpler than the original and overcomes the computational problems — that's the good side. The bad side is that it performed relatively worse on the corresponding-color data sets, by about 1.3 CIELAB units overall. And considering the color appearance data sets, it is only slightly worse, by less than 0.3 CV units. So that's all. Thank you. Thank you. Is there time for a short question? No, so I think we are out of time. Thanks to the speakers.
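The CV formula itself is only shown on the slide; the form commonly used for this kind of model testing, and assumed here, is

CV = 100 \times \frac{\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} (V_i - P_i)^2}}{\bar{V}}

where V_i are the visual (subjective) results, P_i the corresponding model predictions, and \bar{V} the mean visual result, so that CV = 20 corresponds to roughly a 20% disagreement between model and observers, as stated in the talk.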
CIECAM02 has been used to predict colour appearance under a wide range of viewing conditions, to quantify colour differences, to provide a uniform colour space and to provide a profile connection space for colour management. However, several problems have been identified with the CIECAM02. This talk is to present a new version of the CIECAM02 with the Hunt-Pointer-Estevez (HPE) matrix. The new version overcomes some problems and is simpler. The performance of the new version is verified.
10.5446/31446 (DOI)
[Transcript unintelligible.]
[Transcript largely unintelligible; only the closing exchange could be recovered.] If you put them at a third and they are facing or moving in the right direction, towards the space, they tend to look a lot better. So it offers a kind of potential? That was an arm-waving explanation — I have to say I have no idea whether there is any scientific validity to it, but it's something you could test. With suitable funding, of course. In fact, there has been some psychophysics done using cartoon-type figures, where people have changed them from looking like they are stationary to looking like they are moving, and there is a definite effect where the moving object has space to move into. Very good. Well, thank you indeed again. Thank you for your comments.
Being able to score the aesthetic and emotional appeal of digital pictures through the usage of ad-hoc computational frameworks is now affordable. It is possible to combine low-level features and composition rules to extract semantic cues devoted to isolating the degree of emotional appeal of the involved subject. We propose to assess aesthetic quality on a general set of photos, focusing on consumer photos with faces. Taking into account the local spatial relations between the involved faces and coupling this information with simple composition rules, an effective aesthetic scoring is obtained. A further contribution of the proposed solution is the novel usage of the involved facial expressions and relative pose to derive additional insights for the overall procedure. Preliminary experiments and comparisons with recent solutions in the field confirm the effectiveness of the proposed tool.
10.5446/31455 (DOI)
Good afternoon everyone. I would like to demonstrate our research results, which sit a little bit on the edge between engineering and aesthetics, so there will be some threads related to our common sense. First of all, the key and primary question for us was: what is color harmony? According to Webster's dictionary, it is some whole consisting of several parts that are pleasing in some way. The most common harmony is usually the musical one, and it describes the coexistence of two or more tones; but everyone should be aware that these can be either harmonic or dissonant. People use both — tones that conform to each other, and very distant tones that stand in strong opposition to each other — and both variants are used for aesthetic purposes. We will be looking for both of these cases without differentiating whether one is harmonic and another is not; we will be looking simply for human patterns in thinking about color harmony. So color harmony is still an open problem. Since Johann Wolfgang Goethe there have been numerous approaches, and, as I already said, we will be looking at the coexistence of two or more colors — not at specific colors, but rather at the relationship between those colors. I mean that, for example, red and green are in the same relationship as blue and yellow; we will be looking at it in such terms. That task is important especially for computer graphics, visualization and similar tasks, which is why we looked at it. Probably the most prominent model nowadays was proposed by Johannes Itten, a Swiss color theorist connected with the Bauhaus. He noticed that color harmony depends mostly on the hue relationship — on the relation of angles in the chromatic plane. That model is quite commonly used by artists, and it was also used with quite good results by Matsuda, who identified such patterns. Please note that all these patterns can be rotated freely — oh, sorry, where's my mouse? Sorry for this — at an arbitrary angle; it is only necessary that all the components are rotated by the same angle. But one can see that in this set several important patterns are missing, such as triads — red, green, blue — or some quadruplets; there are no such obvious patterns. So we decided that, since there are many color palette databases available on the internet, created mostly for aesthetic purposes and containing numerous color palettes, we can just use them: get them and apply some data-mining technique to infer such templates. Here is a very brief scheme of the data-mining processing. First we need to get a representative set of data, then preprocess it in some way, then apply some data-mining technique, and finally interpret it. It is always necessary to have human involvement at the end, because these techniques are usually very general — they are not suited to very specific tasks — so we always need to tune the algorithms with some parameters. That is represented by the loop back with the gray arrows: we do some experimentation, look at whether we get anything meaningful, and tune the parameters once more if needed. That is the overall view of our approach.
We decided to use the Adobe Kuler database. It is an Adobe project and it is very useful. Why? First of all, it has a really massive database — just have a look: today, from nine o'clock until noon, it has grown by 49 palettes, and it keeps growing continuously. The second reason is that this project is integrated into Creative Suite, so artists, designers and professional graphic artists can all collaborate and contribute their own palettes. These palettes are probably created with an aesthetic intention — we can assume that those people want to share something they consider nice or beautiful. That is the main reason we use it. Next, we downloaded the whole database; it consisted of 450,000 elements. We chose two random samples: a smaller one just for testing purposes — the results on that smaller sample will be shown in further parts of this presentation — and a larger one of 5,000 elements, which is not a large number but is relatively larger, for the final experimentation. We chose five-component palettes only, and we converted them to the hue-saturation-value color space. Our pipeline in more detail looks like this. First of all we normalize the data to get just the relationship between the first, second, third and subsequent components. This step can lead to some confusion: we always rotate the whole palette so that its first component sits at the zero angle, so it is possible that exactly the same palette, started for example at a different angle, would be rotated just a little bit and lead to completely different results. That is why, at the final stage, it was necessary to assemble identical results. The intermediate steps cluster similar palettes into groups and then, within those groups, cluster the separate hues into meaningful ranges. The key problem for us was how to measure the similarity of two palettes. A palette is a set — there is no significant order of first component, second and so on; it is not a vector, it is a set. For comparing sets there is a well-established measure, the Jaccard distance. But of course, for human purposes, something that is a little bit red and something that is red but a little more yellow are almost the same — while for the computer they are not. So it was necessary to employ fuzzy sets, which are better related to our human understanding of reality. How did we do it? The first formula is just the usual Jaccard distance. As the union of two fuzzy sets we take the maximum of the membership functions; as the intersection, the minimum of the two membership functions; and as the cardinality — colloquially speaking, the size of the set — just the integral of the area under the membership function. In this figure we see two palettes, marked in blue and green, and their components are located here along the hue dimension of the HSV color space; each of those lines represents a single color component. Using those values we now span a membership function consisting of isosceles triangular membership functions, and each membership function is 30 degrees wide. That width was selected because of the linguistic naming of colors: if something is red and something is orange, their distance in the hue dimension of the HSV color space is about 30 degrees, and that is why we chose that width for the membership functions.
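To make the formula concrete, here is a minimal sketch (not the authors' code) of the fuzzy Jaccard distance between two hue palettes: each hue spans a triangular membership function on the hue circle, the union is the pointwise maximum, the intersection the pointwise minimum, and the cardinality the integral — here a discrete sum over a one-degree grid, which is an assumption made for the example. The 30-degree width follows the description of the membership functions above.

    import numpy as np

    HUES = np.arange(360.0)          # 1-degree sampling of the hue circle (assumed)
    WIDTH = 30.0                     # full width of each triangular membership function

    def circular_diff(a, b):
        d = np.abs(a - b) % 360.0
        return np.minimum(d, 360.0 - d)

    def membership(palette_hues):
        """Pointwise maximum of triangular membership functions centred on each hue."""
        mu = np.zeros_like(HUES)
        for h in palette_hues:
            tri = np.clip(1.0 - circular_diff(HUES, h) / (WIDTH / 2.0), 0.0, 1.0)
            mu = np.maximum(mu, tri)
        return mu

    def fuzzy_jaccard_distance(palette_a, palette_b):
        mu_a, mu_b = membership(palette_a), membership(palette_b)
        intersection = np.minimum(mu_a, mu_b).sum()   # cardinality of the intersection
        union        = np.maximum(mu_a, mu_b).sum()   # cardinality of the union
        return 1.0 - intersection / union

    # Example: two palettes whose components nearly overlap
    print(fuzzy_jaccard_distance([0, 120, 240], [10, 125, 250]))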
So if something is red but shifted slightly towards orange, and something is orange but shifted a little towards red, they will overlap — and then, please have a look, the resulting value of the Jaccard distance will show it. The intersection of the two fuzzy sets is demonstrated by the gray area: it is just the minimum of both membership functions. Meanwhile the union is just the envelope of all those lines — it is slightly less visible, but it is the black line that always runs along the tops of the triangles. That was the key of our research. Next we employed the Jaccard distance, calculated in the way I just showed you, for the clustering, using a very basic hierarchical algorithm which simply joins the closest objects in the data set. It joins iteratively — joins, joins, joins — until it reaches some declared threshold or number of clusters. Here we see the results for the very preliminary data set. It is pretty hard to interpret, but in the next step we grouped those single hues into ranges, and then it becomes visible that it works quite well. For that step we employed the DBSCAN algorithm. DBSCAN just looks into some neighborhood of each hue, and if there is another hue there it extends the prospective cluster; if the size of the prospective cluster is larger than some parameter — in our case 10 elements — then it is accepted as a cluster. We used DBSCAN because it has one serious advantage: it filters out single outlying elements, which we consider as noise. In this case we used a simple angular distance between elements. Applying that grouping to the preliminary resulting clusters, we rediscovered the known templates — the L, V, T, X and Y types — and we also discovered other, in fact already known, patterns: triads and irregular triads. They are marked with those colored or black-and-white squares. That is how it works. Once we were convinced that our method gives meaningful results, we used the larger data set of 5,000 elements and processed it in our pipeline. We determined the number of necessary clusters by increasing the number iteratively until there was no more than a single garbage or junk cluster, collecting all the palettes poorly fitted to any other; it looked like a set with a single range wider than 180 degrees, covering at least half of the hue circle. Then, finally, came a really painstaking task, because we had to sit through all those results manually and assemble them into single categories — it took several nights. We discovered the following templates. Some of them are obvious, some less so. The proposed type P has a mirror sibling — just look at it, it is simply flipped about the main axis. All those final results were in fact assembled manually. To summarize what we obtained: we proposed a method for comparing not only single palettes but also templates; we discovered several new harmonic templates; we verified that this approach gives meaningful, reasonable results; we tested the proposed algorithms; and we showed that such massive databases can be really useful. We have some other ideas about what else to use. What we would like to do, first of all, concerns that final stage — the assembly of the results.
In fact, we have solved that already — we have some code and some results, and in further articles we will probably demonstrate it. Then, of course, we would like to determine precisely the location of those ranges — to determine precisely the angles between them. What is not discussed here, but would be important, is to find some way to identify inclusive clusters: for example, we had that D-shaped or V-shaped pattern, and it could also include I-types within it. That is what we should look for. What could our results be used for? Especially to enhance the model with both saturation and lightness, and to categorize linguistically whether those results should be considered harmonic or dissonant. We would also like to test some other clustering algorithms, and we could probably get better or more interesting results if we used, for example, the highest-rated palettes from the Kuler database: we just randomized 5,000 samples from the database, but Kuler also offers scoring of palettes by its users, so if we chose the 5,000 best-rated palettes the results might be different. Another perspective for research is to make a similar analysis not on palettes but, for example, on hue histograms of images from portals like Picasa or Flickr. So, thank you for your attention. Perhaps one or two short questions. This is a slightly technical question: you did your clustering on the circle, so did you parameterize by angle? It depends which step. I was going to ask what happened at 0/360 — how did you handle that? All of those computations were done in the circular domain, so if something is, for example, at 359 degrees, its distance to angle 0 is just one degree. All the computations were in the circular domain. Was that a question? Okay, one more. I think it's really impressive that you were able to use such a large publicly available database to do this analysis, but you're looking only at the one dimension of hue — combinations within hue only. Would you consider extending it to lightness and chroma? Certainly, we would like to do that, but Itten's concept was based mainly on hue, so for now we focused on this aspect of color. Further on I would group the results — say I've got the Y type, so I take all the palettes fulfilling that scheme and then try to do something similar on the lightness and on the saturation. Your result is very interesting, so I was curious whether it's possible to produce new optimized patterns from your results, because you collect the experience of other designers: they propose colors that fit, and after your analysis you can see the pattern of how they did it. So is it possible, based on your results, to produce some good patterns? Well, first of all those designers are usually from different countries and have different cultural habits. But probably yes, these results could be used, for example, for some assistance: there are some templates, and it would be possible to enhance them with these results. There is no single template, though — there are various templates — so what we can say is that this one is the most popular, this is the second, this is the third and so on. So we can offer assistance in the sense that, if you would like something that will probably be popular, do it in this way. Okay, so let's thank the speaker again. I have one question.
Why did you choose the DBSCAN algorithm? Sorry — I think you have to go to the right. Okay. Thank you.
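Regarding the DBSCAN question, here is a rough DBSCAN-style sketch of grouping single hues into ranges on the hue circle, as described in the talk: an angular distance, a neighborhood radius (the eps value is an assumption of mine, not from the talk) and the minimum cluster size of 10 mentioned earlier; points that end up in no sufficiently large cluster are treated as noise. This is a plain re-implementation for illustration — in practice a library implementation of DBSCAN with a precomputed distance matrix, such as scikit-learn's, could be used instead.

    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def dbscan_hues(hues, eps=10.0, min_size=10):
        hues = list(hues)
        labels = [-1] * len(hues)           # -1 means noise / not assigned
        cluster = 0
        for i in range(len(hues)):
            if labels[i] != -1:
                continue
            # grow a candidate cluster from point i by repeated neighborhood search
            members, frontier = {i}, [i]
            while frontier:
                j = frontier.pop()
                for k in range(len(hues)):
                    if k not in members and angular_distance(hues[j], hues[k]) <= eps:
                        members.add(k)
                        frontier.append(k)
            if len(members) >= min_size:    # accept only sufficiently large clusters
                for m in members:
                    labels[m] = cluster
                cluster += 1
        return labels

    # Example: two dense groups around 10 and 200 degrees plus one outlier at 90
    print(dbscan_hues([5, 8, 10, 12, 15, 355, 358, 2, 7, 11, 90,
                       195, 198, 200, 202, 205, 207, 210, 197, 203, 199]))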
Color harmony patterns are relationships between coexisting colors where human psycho-perceptual visual pleasantness is the judging criterion. They play a pivotal role in visualization, digital imaging and computer graphics. As a reference we assumed the Itten model, where harmony is expressed in terms of hue. The paper demonstrates an investigation of color harmony patterns using clustering techniques. Our source data was the Adobe Kuler database, consisting of hundreds of thousands of color palettes prepared for creative purposes. For measuring the dissimilarity of color palettes we propose to use the Jaccard distance, additionally treating colors as elements of a fuzzy set. Then, in the next step, separate colors are grouped within each group of palettes to specify each scheme of relations. The results are schemes of relationships between colors within palettes.
10.5446/31456 (DOI)
Thank you. Thank you for your introduction. My name is Yuanyuan Qu; the co-author of this presentation is my supervisor, Sasan Gooran. Unfortunately he cannot come this time. I am going to present our work on using fewer training samples in a color prediction model. Before I start, please allow me to briefly remind you what a color prediction model does. It is like a black box for any kind of printing device: it learns the combined properties of the ink, paper, printer, illumination, halftoning and so on, and predicts the printed color for any given reference ink combination. The full title of this presentation — and its main contribution — is investigating the possibility of using fewer training samples in a color prediction model based on CIE XYZ values using an effective coverage map, while keeping satisfying prediction results. As you see, the highlighted phrases are the keywords of this work, as well as the outline of my presentation. Before I introduce the proposal we used to reduce the number of training samples, it is necessary to introduce the color prediction model based on XYZ values that we used, as well as the effective coverage map used in this work. For the color prediction model, the most famous and almost simplest model is the Murray-Davies model. My colleague Daniel has already introduced this model very well, but I want to add that the Murray-Davies model assumes the inks and all the materials are ideal — which means, for example, that the spectrum of cyan should be exactly the solid cyan line shown in this figure. In reality the materials are never ideal, and we know that dot gain is always present in printing. Suppose we have a color patch with only 30% cyan: according to the Murray-Davies model the calculated spectral reflectance is this curve, and we can see that it is far away from the measured one. To account for dot gain, we can simply increase the fractional coverage of the cyan step by step towards 0.6, and with each change the curve gets closer to the measured one — but they never overlap. Yule and Nielsen's research on ink penetration and light scattering showed that the relationship between the spectrum of the color patch and the spectra of the ink and the paper is nonlinear and can be described by a power function, where the exponent is the n-factor. By optimization we can then find the curve closest to the measured one; this accounts, roughly, for the optical dot gain together with the physical dot gain. In our opinion, the measurement of a color patch in fact includes both the optical and the physical dot gain in the data. For example, if we print eleven cyan patches with reference coverages ranging from 0 to 1, measure their XYZ values and use the Murray-Davies model, we can calculate three different effective coverages for each sample — for each amount of cyan. If we plot the three groups of data in this figure — one based on the CIE X value, one on the Y value and one on the Z value, together with a single optimized dot gain curve — we can see clearly that they are quite different from each other. So it might not be completely correct to use a single dot gain curve for each ink. Therefore we propose using all three of these curves instead of a single dot gain curve; this proposal works well, especially for printed colors involving only one ink.
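As an illustration of how each measured patch yields three effective coverages, here is a small sketch (not the authors' code) of the Murray-Davies relation written per tristimulus value; the XYZ numbers below are hypothetical measurements.

    import numpy as np

    def effective_coverage(patch, paper, fulltone):
        """Per-channel effective coverage a = (paper - patch) / (paper - fulltone)."""
        patch, paper, fulltone = (np.asarray(v, float) for v in (patch, paper, fulltone))
        return (paper - patch) / (paper - fulltone)

    # Hypothetical XYZ of unprinted paper, solid cyan, and a 30% (reference) cyan patch
    paper    = [88.0, 92.0, 95.0]
    solid    = [18.0, 25.0, 60.0]
    patch_30 = [58.0, 64.0, 82.0]
    # Three effective coverages (from X, Y and Z); all exceed 0.30 when dot gain is present
    print(effective_coverage(patch_30, paper, solid))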
More details and results can be found in the references listed at the bottom. According to our experiments, the dot gain behavior of each ink changes in no simple way when ink superposition happens. For example, if 30% cyan is actually printed on paper, the dot gain behavior of that amount of cyan is different from its behavior when it is printed together with magenta, with yellow, or with both. For this situation the effective coverage map was proposed. Imagine a coordinate system whose three axes refer to the three primary inks: cyan, magenta and yellow. If we apply 0, 25, 50, 75 and 100% as the reference coverages for the three primary inks, we get 125 points located evenly in this coordinate system. The map is created by filling each point with the effective coverages of the three primary inks, based on the CIE X, Y and Z values respectively. Take this sample point, which contains full yellow and full cyan: obviously its effective coverage for cyan and yellow equals one, no matter which data you use, and zero for magenta. As soon as all of these points are filled with suitable values, then, given any reference coverage, we can calculate the effective coverages of the involved inks by interpolation — cubic interpolation. Then Demichel's equations are used to calculate the effective fractional coverage of each primary and secondary color, and the extended equation is used to calculate the XYZ values of the color patch. How to create this effective coverage map is described in a coming issue of the Journal of Imaging Science and Technology, but a brief introduction is given here. We start with the situation where the color patch contains only two inks, for example cyan and magenta; we call this an effective coverage grid. Because the calculations based on the X, Y and Z values are similar, here we only use the X value to illustrate. In this grid, the points marked by the blue stars in step one are color patches involving only one ink, so their effective coverage can easily be calculated using this equation. As soon as they are filled, the blue points marked by step two can be filled using values from their neighbors; the assumption is that neighbors have close effective coverages for the ink they share. Once those points are filled, we can go on to the points marked by steps three and four. For color patches involving three inks, we calculate using those three effective coverage grids — we get three groups of possible effective coverage values — and by matching and optimization we can finally fill all the points in the map with effective coverage values for the three primary inks, based on CIE X, Y and Z respectively. As I mentioned before, if we use this series of values as the reference coverages for cyan, magenta and yellow respectively, then we need 125 samples to build the map. So we asked: can we reduce the number of training samples while keeping satisfying prediction? Choosing the training samples is really the same task as choosing the combinations of reference coverages of these three inks, so we began by trying to remove or change a few values in this series.
For example, take cyan again: we prepare nine cyan patches with these reference coverages, and we plot their dot gain curves based on the CIE X, Y and Z values respectively. Here we use two printing devices: a laser printer with A4 uncoated paper and an inkjet printer with photo-quality paper. We can see that for the laser printer the dot gain of cyan — all three curves — is symmetrical, but for the inkjet printer the peaks look shifted a little to the left. So how about using this series for cyan instead of that one for the laser printer, and this other series for the inkjet printer? We did the same procedure for magenta and yellow, and we went through almost all the possible series for cyan, magenta and yellow, making sure the number of elements in each series was less than five. Then, according to the prediction performance for single-ink printing, we found suggested reference coverages for cyan, magenta and yellow. For example, for the laser printer the suggested reference coverage for cyan is this one, for magenta that one and for yellow that one; for the inkjet printer we also obtained three suggested series for cyan, magenta and yellow respectively. Using our model with the suggested reference coverages, we calculated the prediction results and list them in the first column of this table. Compared with the results we got using the previous reference coverages, they are actually not bad, both for the laser printer and for the inkjet printer. We also tried other interesting reference combinations — for example, this series for magenta instead of the suggested one — and we still got acceptable results. But notice that for the inkjet printer the choice of the reference coverages for magenta affects the prediction of magenta quite a lot: when we tried this series instead of that one, the results got worse. So here we come to the conclusions. For the laser printer, the number of samples was reduced from 125 to 64 while still giving quite good results, and it is possible to cut this number further to 53 with satisfying results. For the inkjet printer, it was reduced from 125 to 79 or 64, both giving satisfying results. Our approach would be useful for building an automatic computer-to-plate calibration system in a printing workflow. Thank you for your attention. Thank you very much. Any questions? Thank you for this presentation, very interesting work. I didn't really understand whether you make spectral measurements or whether it is enough to do XYZ measurements. We only use CIE XYZ values. Okay, so that's a very strong point, because you can build simple sensors from which you compute XYZ, and that makes the whole thing much simpler. Maybe this should be stressed in your presentation. That's a big progress. Thank you very much. Any more questions? No? Well, thank you very much. Thank you.
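As a concrete note on the prediction step described earlier, here is a hedged sketch: given effective coverages for cyan, magenta and yellow (interpolated from the effective coverage map), Demichel's equations give the fractional area of each primary and secondary colorant, and the patch XYZ is predicted as a weighted sum over the eight Neugebauer primaries. The weighted-sum form used here is the classical one and stands in for the extended equation of the paper; the primaries' XYZ values are hypothetical.

    import numpy as np

    def demichel(c, m, y):
        """Fractional areas of white, C, M, Y, CM (blue), CY (green), MY (red), CMY."""
        return np.array([
            (1-c)*(1-m)*(1-y), c*(1-m)*(1-y), (1-c)*m*(1-y), (1-c)*(1-m)*y,
            c*m*(1-y),         c*(1-m)*y,     (1-c)*m*y,     c*m*y,
        ])

    def predict_xyz(c_eff, m_eff, y_eff, primaries_xyz):
        """primaries_xyz: 8x3 array of measured XYZ, ordered as in demichel()."""
        a = demichel(c_eff, m_eff, y_eff)
        return a @ np.asarray(primaries_xyz, float)

    primaries = [[88, 92, 95], [18, 25, 60], [35, 20, 18], [72, 78, 12],
                 [ 8,  9, 30], [15, 22, 14], [28, 16,  8], [ 6,  7,  8]]
    print(predict_xyz(0.42, 0.15, 0.55, primaries))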
The goal of the present work is to reduce the number of the training samples used in our color prediction model based on CIEXYZ using an Effective Coverage Map while keeping satisfying prediction. A general approach is proposed in this paper to choose the best reference combination for the training samples. The approach is based on the dot gain behavior of each primary ink, which is characterized by three curves using CIEXYZ tri-stimulus values. The proposed approach is built in our model to predict the color values for the color prints using two different devices, i.e. a laser printer and an inkjet printer. For the laser printer the number of the training samples is reduced from 125 to 64 while still giving quite good result. The approach also shows that for the test laser printer it is possible to further cut this number to 53 with a satisfying result. For the inkjet printer the number of training samples for our model is reduced from 125 to 79 or 64, both giving satisfying results.
10.5446/31457 (DOI)
I'm going to talk — well, the title is up here — and I'm going to start with this image, and, yes, this projector shows it very well. I think most of you will agree that there's something strange about this image, in that the orange here is a little bit brighter than you may have expected. You may think that it's emitting light, or, if you're familiar with oranges, you may be more inclined to think that there's a special light shining on that orange. Obviously neither of those is true. The question is why you think this is strange, because in fact this could happen: except for knowing what fruit actually does, the rest of the things around that orange could simply not reflect much light, and so this could be completely normal. If any of you don't think the orange looks strange in this image, then you'll probably think that these mushrooms look rather gray instead of looking white — and that's actually what I'm going to talk about. If we have this image — normal, though it doesn't look normal from here — then the mushrooms are clearly sort of white, and they are the brightest thing in the scene. Generally, when you think of how to scale luminance, the most commonly accepted idea is that the highest luminance is considered to be white. That makes sense, because of course the problem can't really be solved at all: all these fruits could reflect almost nothing while an enormous amount of light shines on them. But we know that doesn't happen very often, and so people seem to go with the lowest possible illumination that could have caused this image or this scene — and so the highest luminance is white. That works very nicely. And of course the reason the white thing would have the highest luminance is that in order to get color you have to remove some of the reflectance, and removing reflectance means less luminance. That's good enough, except when there are no white things in the image. So what if we have an image without any white, so that the highest luminance is, for instance, something red? I don't have an image showing that, but imagine something red. Then there are two things you could do. You could say: who cares, the highest luminance is white, and just match to that luminance. Or you could say: if it's a red thing with that luminance, then a white thing would have to have a higher luminance than that, because you've taken out some of the light in order to make it look red. These two options are what we decided to study — which of the two happens. So I'm going to move away from nice fruit and go to the sort of stimuli that I love using, which is just boring eight-by-twelve square tiles on a CRT screen. This is just a pattern of squares, the sort of pattern that I use: no familiar objects, no problems with highlights, familiar colors or that sort of thing. A lot of you will probably consider that a disadvantage, but in a way I consider it an advantage, because I can study only the aspect I'm interested in. Then one target is presented here, and in this experiment the subject's task is simply to tell me whether it looks gray or white. They press a key on the computer keyboard — W for white, G for gray — and we run an experiment in which, if they press white, the next time this sort of stimulus is presented, the luminance of the test patch is going to be a bit lower.
And if they say it's gray, it's going to be higher, and in that way we try to find the border between white and gray for that subject. Okay, so that's the way the experiment goes. Then, of course, there are a couple of different conditions — in fact three, the first two being the important ones and the third just a control — and I'm going to take you through them in a couple of steps. I'm going to start with the test condition rather than the baseline, just because it's easier to explain that way. Up here you have that background again. What I'm telling you now is all about the colors in the background; that's the only thing I'm going to change between the conditions. Each color in here is presented eight times, so each of the twelve schematically represented patches actually stands for eight of these squares. In the test condition, shown here, you have bright red, bright green and bright blue patches — I think you can probably see them up there. Then you have darker ones of the same colors, at one third of the intensity of the bright ones. And then you have gray patches at two thirds of the intensity. If you look carefully here, you'll see there are different grays; I'll get to that in a minute, because it's easier to explain after this step. Now, if I move to the baseline condition, all I've changed is which half are colored. The patches drawn with three colors in them just represent gray, to indicate that what I've done is mix those three. So the other half — I've matched the luminances exactly — are now the colorful ones: these are now red, green and blue, and these ones are white. According to what I've been telling you, the most important difference is the one on the left: here the patches with the highest luminance are colorful, and here the patches with the highest luminance are gray. Other than that, you have exactly the same luminances in the two conditions, exactly the same colors, and anything else you'd like to compute that is nice and linear and additive — the total output of the guns, the cone excitations, the average over the whole background — is the same in the two conditions. That means that if you just take the highest luminance, or even the average luminance or something like that, as your scale, these two should give the same effect. But if it matters that the brightest things are colored, then you may get a difference between these two conditions. Now to the different grays. I've drawn this schematically, normalized, so that the contributions of the three guns of my computer monitor that add up to the grays look equal, but of course they are not: in fact these are different maximal luminances, and these are also all different. So all these grays are different from each other, but that doesn't change the additivity argument. The third condition is just a control: it's identical to the baseline but 5% darker. The reason for including it is just to make sure that our method works at all — if we find no effect here and also no effect here, then there's something wrong with the way we're doing the experiment, because obviously if you present the same thing darker, then what you consider white should change.
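For readers who want the procedure in concrete form, here is a minimal, hypothetical sketch of such a one-up one-down staircase: "white" answers lower the next target luminance and "gray" answers raise it, so the presentations end up concentrated around the white/gray boundary. The simulated observer, the boundary value and the step size are assumptions made purely for the example, not values from the talk.

    import random

    def simulated_observer(luminance, boundary=60.0, noise=3.0):
        """Answers 'white' if the noisy internal signal exceeds the observer's boundary."""
        return 'white' if luminance + random.gauss(0.0, noise) > boundary else 'gray'

    def run_staircase(start=80.0, step=2.0, trials=60):
        luminance, history = start, []
        for _ in range(trials):
            answer = simulated_observer(luminance)
            history.append((luminance, answer))
            luminance += -step if answer == 'white' else step   # one-up one-down rule
        return history

    trace = run_staircase()
    print(trace[-5:])   # the last trials hover around the transition luminance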
Okay, so this is data for one subject in the baseline condition. We present different target luminances, using a staircase procedure that goes up and down depending on the previous response for that condition, so most data points are near the transition between white and gray. Here is the percentage of white responses for a given target luminance — these are the target luminances — and the size of each symbol shows how often that condition appeared, so we have a lot of data here. We fit a cumulative normal distribution to this and determine the luminance at which the curve crosses the 50% point; that is my estimate of where the transition occurs between gray and white. That's for the baseline condition, and of course the interesting thing is to compare that across the conditions. For this subject, in the test condition — the condition where the brightest things are colorful — more light was needed for the transition to occur between white and gray, so more light was needed in order to call it white; and in the third, darker condition, of course, less light was needed. So that's data for one subject. Now I can take the values along here and average them across subjects; but because different subjects have their transitions at slightly different luminances, I don't do it in luminance, I do it as a percentage change — it doesn't really matter that much. So this plot summarizes the effects. The left column shows how much more light people needed to see a change from gray to white when the highest-luminance squares in the surround were colorful than when they were neutral, and you can see that there is an effect. The bar here is a 95% confidence interval; it's not an enormous effect, but it's about 4%. The second bar shows the effect of the baseline compared to the darker condition. We would actually have expected a 5% change there, and the effect is slightly smaller — not significantly smaller, and there is a good reason to expect it could be a bit smaller: I'm assuming in my experiment that people adjust to every new stimulus, ignoring what they've seen before, and if there's any carryover from previous presentations you could expect the effects to wash out a bit. So the left column is the important one, because it answers our question: people don't just consider the highest luminance, they actually take into account that if that highest luminance is colorful, there must have been more light. That's actually all I have to say. The conclusion is that judgments about the intensity of the illumination, and thereby about surface reflectance, are influenced by the association between color and luminance in the scene. Thank you. Any questions? When you're taking the luminance, you're focusing on a single patch at a time, in a sense. Things like Retinex and max-RGB color constancy all look for the maxima across the channels independently; they don't have to happen at one spatial location. If you normalised by that — I'm not sure, I couldn't quite follow quickly enough in your talk — how would that affect the results? You mean comparing to the maxima at different spatial locations — so comparing to the maximum of each cone throughout the scene separately, right?
Comparing each cone separately to its maximum — not to the average — might explain these data. It's not the mechanism by which this works: it's not the maximum in luminance, and it's not the average of each cone, but the maximum of each cone would give this sort of result. Right, okay. The microphone is just coming over there. Thank you. Could you tell us what you did in terms of randomisation of the location of the background patches and the white test patch? Yes, indeed, I forgot to say that. On every presentation you always had the same number of patches, but they were randomly positioned on each presentation. The test patch was always at one of those intersections, so that you could easily recognise it, but it could be anywhere in the central part; it couldn't go to the edge, no. Any more questions? Sorry, I didn't see — there was a question. There have been some attempts to combine the influence of the maximum and the average by using an Lp metric, so you have a kind of mixture of the two. Have you tried experimenting with weighted combinations of the impact of the maximum and the average? The very simple answer is no. We haven't even calculated yet whether the maximum would predict this quantitatively, so it's something that would be good to look at, but no, I haven't done that. So let's thank the speaker again, and we'll move on to the next talk.
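As a concrete note on the analysis described in the talk — the cumulative-normal fit to the white/gray responses — here is a short sketch with made-up numbers: fit a cumulative normal to the proportion of "white" responses as a function of target luminance and take the mean of the fitted curve as the 50% crossing, i.e. the estimated gray/white transition. (In practice one would also weight each point by its number of presentations.)

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    luminances = np.array([50, 54, 58, 60, 62, 66, 70], float)          # cd/m^2 (hypothetical)
    p_white    = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.90, 0.98])   # proportion "white"

    def psychometric(x, mu, sigma):
        """Cumulative normal: mu is the 50% point, sigma the slope parameter."""
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, luminances, p_white, p0=[60.0, 4.0])
    print(f"50% transition at {mu:.1f} cd/m^2 (slope parameter {sigma:.1f})")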
In order to judge whether a surface that one is looking at is white or grey, one needs to consider the intensity of the illumination. We here show that people do not simply use the maximal luminance in the light from the scene as a measure for the intensity of the illumination but also consider how luminance and chromaticity are associated. We suggest that they take into account that there are physical limitations to the luminance that reflecting surfaces can achieve at high chromatic saturation. These limitations arise because chromaticity is the result of surfaces selectively reflecting light of different wavelengths, so that the luminance of the illumination must be higher than that of the brightest patch in the scene if that patch is not white.
10.5446/31458 (DOI)
Thank you. The topic of my presentation is full-reference image quality. In full-reference image quality you usually have one image that you treat as the reference and a transformed image, and you want to quantify in some way the quality of this reproduction relative to the reference image — it can be better or worse; you're just interested in relative quality. In the last decade there has been one work that is really seminal in this domain. It comes in different flavors; I'm referring to the 2004 version by Wang, Bovik, Sheikh and Simoncelli. Once again, the scenario is comparing a reference image and a reproduction. You use corresponding sliding windows — neighborhoods of pixels — and calculate three functions. The first describes the absolute level; the second, the variation within these windows; and the third they call structure — that's where the name comes from — and it compares the correlation of the deviations: if the deviations all go in the same direction, that is less important than if they vary randomly. These three functions are then combined multiplicatively with weighting exponents, and finally the measure is summarized across the whole image by taking the mean. This measure is called the structural similarity index measure, SSIM. In practice it is mostly applied to the luminance channel only. That is due, historically, to the fact that they mostly looked at distortions like noise, blur and compression, which express themselves at high spatial frequencies, and these are better visible in the luminance channel than in the chromatic channels. But there is one "but": there are image transformations that leave the lightness almost the same but, as you will see, differ in chromatic content. Gamut mapping — adapting colors to a target device's color gamut — is one example, but a change of illumination would produce the same kind of image distortion. One idea was to take this structural similarity measure and simply apply it to the three color planes of any color space that has a chromatic plane and an orthogonal luminance dimension. This was done by Nicolas Bonnier about six years ago, and I quote from his conclusions: they compared the results of the experiments with image quality metrics, among them SSIM, and found that none presented a strong correlation with the observers' scores. So simply applying the same metric to the luminance and chrominance channels seems not to be a good idea. What we propose is to use different metrics for the chrominance plane. One feature we calculate within these sliding windows is the distance from the gray axis to the pixel, in one image and in the other, and we take the difference of these two distances: this gives a radial chromatic distance difference. The second is the difference between the Euclidean distance of the two pixel values and that chromatic distance — more or less an approximation of the hue angle. You plug all of this into a function that maps the range to between one and zero, since the whole SSIM measure is between one and zero: one means the two images being compared are identical, and zero means infinitely different, if that is possible for an image. There is a second point where we differ from Wang et al.: they combine their features per pixel. Their idea is to know where the images differ, so by combining per pixel they can make maps that look almost like the original images being compared — just by seeing where the map is white, you can tell where the two images differ.
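Going back to the two chromatic features for a moment, here is a rough sketch of how they can be computed for one pair of corresponding windows in a color space with a chromatic (a, b) plane, e.g. CIELAB. The exact mapping into [0, 1] used by the authors is not given here, so a simple 1/(1 + x) squashing is assumed purely for illustration; window extraction and averaging over the image are omitted.

    import numpy as np

    def chroma_features(win_ref_ab, win_test_ab):
        """win_*_ab: (N, 2) arrays of (a, b) values for corresponding pixels."""
        ref  = np.asarray(win_ref_ab, float)
        test = np.asarray(win_test_ab, float)

        # 1) difference of the distances from the gray axis (radial chromatic distance)
        chroma_ref  = np.linalg.norm(ref,  axis=1)
        chroma_test = np.linalg.norm(test, axis=1)
        radial = np.mean(np.abs(chroma_ref - chroma_test))

        # 2) difference between the Euclidean distance of the two pixels and the
        #    chroma difference -- a rough stand-in for a hue-angle difference
        euclid = np.linalg.norm(ref - test, axis=1)
        hueish = np.mean(np.abs(euclid - np.abs(chroma_ref - chroma_test)))

        squash = lambda x: 1.0 / (1.0 + x)      # assumed mapping into (0, 1]
        return squash(radial), squash(hueish)

    # Two identical windows give (1.0, 1.0), i.e. "no chromatic difference"
    w = np.random.default_rng(0).uniform(-40, 40, size=(25, 2))
    print(chroma_features(w, w))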
Usually you map one to white and zero to black. We were more interested in which dimensions the images differ in, so we first average across each dimension and then combine the features. To evaluate, we use paired comparison: we have one reference image and compare it to two transformed images, asking the observers which one is more similar to the original reference image. The result is choice data, and the model we would like to have should predict this choice. As I said, SSIM was evaluated on image databases with distortions like noise and compression, but for the kind of distortions we are interested in — call them chromatic distortions — there are no real, let's say, official reference databases. So we just started one. We added data from existing studies from Zurich, Gjøvik, Munich and Vienna, mostly with images of real-world scenes. The images are available, already transformed, and there are more than 50,000 paired comparisons. The observers were an expert public and computer science students who are mostly not really experts in color. We are interested in including other studies as well. Now, how did we evaluate? We took all this choice data and split it into two sets. We learned the model on the first set and tried to predict the choices on the other set — in statistics you usually call that hold-out data; in the machine-learning community it's a training set and a test set. You repeat this splitting process many times, and this gives a distribution of hit rates, where a hit rate means the percentage of choices you could predict correctly. We applied this to the different feature sets you see in the table. The first were those marked in gray — the luminance, contrast and structure features — and then we added both of the newly presented features, and also just one of the two as a control. We calculated these features in two color spaces in order to check whether there is any dependence on the color space. What you see in the box plots is the distribution of these hit rates. For these plots we took just one study, about 10,000 paired comparisons, and we split the data according to the reference images, so the hold-out data were all from the same original reference image, which was not present in the training set. Just looking at it as a rule of thumb, you can see that the red line, corresponding to the median hit rate, is slightly higher when we use the chromatic features as well; the whole distribution also tends to be less varied, so using these features adds robustness. These data come from a linear-kernel support vector machine — support vector machines attribute weights to observations. We did the same calculation with linear regression, which attributes weights to dimensions; the results were comparable, with the support vector machine being a bit better, since it is also a more complex model. Now we are interested in arguing whether this difference in the distributions is significant, and we do that for feature sets 3 and 4, or 7 and 8: we compare the models learned only from the luminance features with the models including both chromatic features. Here we used linear regression models. What you see on the horizontal axis is the difference of the means between these two feature sets, so if the hit rate using the chromatic features as well is better, you get a positive difference.
While if using only the luminance features is better, you get a negative difference. The error bars come from 1000 cross-validation runs and show the 99% confidence interval for this mean, so if an error bar does not touch the zero line marked in gray, the difference is significant — and you can see that, with the data we used, using the chromatic features as well is mostly significantly better. There is one exception: the study called Mixing 6, which was first done with lay observers; it is the same as Mixing 4, but Mixing 4 was done with expert observers. What they did in these studies was to apply gamut mapping to separate regions of the images and then glue them together. If this gluing together of regions produced artifacts, they were quite visible, and visible in the high-frequency range of the image — so the luminance features are expected to do better there. So our conclusion, at least for our data, is that chromatic information is not negligible in full-reference image quality. We would be interested to check other chromatic distortions, like changes in viewing conditions or changes in illumination. Then there was an open question that we have answered in the meantime: whether the performance of the model, when we use the chromatic features as well, decreases for the classical distortions. It does, but in my opinion that is not really crucial, since you are also covering a wider range of distortions. The last point: if you look at natural images, you have an idea of how skin should look, how grass, sky and so on should look — is there any effect of that? So the next thing we are planning is a mixed study with abstract images by the painter Paul Klee. They have been digitised and they are in the public domain for research under a Creative Commons licence, so if anybody is interested in carefully digitised art images, please contact any of the authors. And as a last thing, I'd like to thank you for your attention. Thank you very much. Do we have any questions from the audience? Could you say something about the interdependence of the explanatory variables? In particular, did you allow for the increase in degrees of freedom that having the two extra variables gave you in the fitting? If I understand your question right, you ask: I have two parameters more, so I should have... no. More questions? If I get you right, you made a renewal of the structural similarity index — a color version of SSIM? Yes, in a way. As I said, SSIM is not one single measure — yes, yes — you can make it multi-scale, and so on. Would you release your results and your code publicly, as they did? Is it possible to use them, for example, for my own purposes — would you put them somewhere on the web? I think so. I need to ask; can we talk afterwards? Thank you. We have time for one more question. Anyone? Well, I have one. If I understood correctly, you put different weights on the different parts of this SSIM measure — for the contrast and luminance, and also for the color parts that you add. How important are the color parts — what is the weight of the color part compared to the other SSIM parts? That's a good one, because what you actually have in gamut mapping is that you map everything towards gray, so the chroma part and the luminance part are in a way correlated too. We will have another presentation that goes into detail — the last one of this session — and I don't want to take the thrill away. So we are waiting, excited, for the last presentation.
So thank you very much again.
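As a schematic picture of the evaluation loop described in the talk — repeated train/hold-out splits and hit rates — here is a sketch with synthetic data and an assumed feature layout, purely to show the mechanics: learn a linear model on the training part of the paired-comparison choices and record the fraction of correctly predicted choices on the hold-out part, yielding a distribution of hit rates.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_pairs, n_features = 2000, 5        # e.g. luminance, contrast, structure + 2 chroma features
    X = rng.normal(size=(n_pairs, n_features))                   # feature differences per image pair
    w_true = np.array([1.0, 0.8, 0.6, 0.4, 0.4])                 # hypothetical "true" weights
    y = (X @ w_true + rng.normal(scale=0.8, size=n_pairs)) > 0   # simulated observer choices

    hit_rates = []
    for seed in range(100):                                      # repeated cross-validation runs
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        model = LinearSVC().fit(X_tr, y_tr)                      # linear-kernel SVM
        hit_rates.append(model.score(X_te, y_te))                # hold-out hit rate

    print(f"median hit rate: {np.median(hit_rates):.3f}")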
We present a corpus of experimental data from psychometric studies on gamut mapping and demonstrate its use to develop image similarity measures. We investigate whether similarity measures based on luminance (SSIM) can be improved when features based on chroma and hue are added. Image similarity measures can be applied to automatically select a good image from a sample of transformed images.
10.5446/31459 (DOI)
Good afternoon everybody. Oops, a little loud. So the title is Metamer Mismatch Volumes, and it's by three of us. Two of us are here; unfortunately, Logvinenko is not here. And I want to say that this is collaborative work between three people, and we each have our roles, but certainly the key insight in this paper is Logvinenko's. Now my goal today is to give you some intuition about what this is about. I'll tell you what the problem is, but you'll see that I'm going to give a kind of hand-waving description. I'm going to use words like red, green, and blue, and in the paper you will find nice Greek symbols. The paper looks kind of like this, and I'm sure if I gave you the talk like this, you wouldn't like it at all. Okay, so I'm going to go with red, green, and blue. I'll be informal and hope that you'll get the basic idea, so that you can understand the paper if you go read it in the proceedings. So we're all used to metamers in general. We use them all the time; we're using them right now. And the issue with respect to surfaces is slightly different than when we're using them with lighting, with illumination, that is, with projectors and such things. So we have two kinds of metamer mismatching that occur when we're looking at surfaces. So here I've got two peachy, pinky-looking surfaces, two patches, the circular ones, under a light, and if we change the light, then it's possible that they will look different. Fortunately, they look different there, too. So we had two surfaces that matched under one light that then don't match under a second light, and that's what we're calling metamer mismatching. This is a well-known phenomenon. This example right now is illuminant-induced mismatching, and we can also have observer-induced. So if we have one camera where the two give you the same output, the same RGB, and we say they're matching, then under a different camera they might not match, and vice versa. So that's observer-induced mismatching, and we have that same issue between observer and camera, between camera and human, and between different humans, and so on. So the illuminant and observer cases are mathematically equivalent, because we have reflectance times illumination times sensor, and the illumination and sensor are in symmetrical roles. And so we'll just take those two and call that a color mechanism. So I'm going to talk about a color mechanism, or mechanism, from now on in the talk. So that's illuminant or sensor; either one can change, the problem's the same. Okay, so what we're after is the metamer mismatch volume. That is, given an RGB color from a first color mechanism, what is the corresponding set of RGBs — and I'm going to call them primes, R prime, G prime, B prime — in a second color mechanism? So we're going to have the non-primed and the primed throughout the talk. So what's the set of possible ones that we can come up with? In other words, we can take every reflectance that could have led to the RGB under the first circumstance, the first color mechanism. What are the possible ones we get afterwards? Because we had two things that matched; the only way they're not going to match is if they weren't of the same physical property to begin with — they had different reflectances. So two things that were metameric to begin with become non-metameric under the second. And what's the set of all possible ones that we could have?
That set of all possible ones is the metamer mismatch volume, or often it's referred to as the metamer set, the set of metamers. I like the term volume better, and we're going to talk about the boundary of the volume. Now, there have been a number of metamer mismatch volume (MMV) methods, and each of them so far has had restrictions: they're either approximate or they put restrictions on the set of reflectances, etc. What I'm talking about today is a general solution for all possible reflectances. So, some background concepts we're going to need, and I will cover these in just a moment — I'm just giving a hint as to what's coming. We need to deal with the object color solid, which we were partly hearing about in the last talk, which is the set of all possible RGBs that can occur, and optimal reflectance spectra, which are the ones that are on the boundary of the object color solid. So let me start with an example. Here we have a two-channel, red-green color solid; this is the set of all RG pairs from all possible surface reflectances under some illuminant. And when I say all possible, that's all functions that can be 0 to 1 anywhere. The same kind of thing happens in three dimensions: then we have the set of all possible RGBs under some illuminant — in this case D65. If we move to a second illuminant, A, then you can see it hidden in behind there, but the set of all possible colors changes. So the object color solid under A is different from that under D65. Now the metamer mismatch volume is this kind of example, as we're going from one of these situations to another. So if we had a color represented by the black dot in the D65 solid, the upright volume there, what's the set of possible colors it could become under A? Okay, so this is the three-dimensional case. Today we're also going to need six-dimensional color solids. So we've got three-dimensional ones, which I can plot for you; for the six-dimensional ones, you're going to have to use your imagination. That's for the interim — I'm working on a cool new 6D viewing technology for my next talk, and I thought if I could put together red and green glasses and polarized glasses, we might get there. So the next question — that was the color solid question — is the issue of optimal reflectances. Of all the reflectance spectra that are theoretically possible, which ones lead to RGBs, to colors, that are on the gamut boundary, on the exterior of this solid? And Schrödinger's answer was: all of those functions which have two transitions. That is, they go like this example: it starts off at zero, goes up to one, stays one for a while, and goes back down again. And there are two transition wavelengths here; they're lambda one and lambda two. And you can do it in reverse as well, where you start at one and go down to zero and back up to one. Both of these are two-transition functions. Okay, so for the metamer mismatch volume calculation, let's consider the monochromatic case. If you understand this case, you're fine; the basic intuition is in this case. So let's consider a single-channel mechanism, and I'm going to call it red, though we know that monochromats aren't going to see anything. So we have a single channel, and I want to call it red. And let's then have the second system. So we have R and an R prime, which I'll call the rouge system. So we have the R and the R prime, red and rouge.
So it's two color mechanisms, but each one is single channel, each one is monochromatic. And if we were, for example, saying, well, if the red system, the R system, responds with 40, what's the possible set of responses we would have gotten from the rouge system, the R prime system? It's some range, and I've indicated the range with that rectangle there. Okay, so the question is what is the range, even in this single-channel case? In other words, what is the metamer mismatch volume — the volume being a line segment in this case — in the R prime space corresponding to R equals 40? Okay, so to calculate this, we're going to consider all possible pairs of responses, R and R prime together, all possible pairs that could ever arise. In other words, we're going to take all possible reflectances and put them together into pairs, and that's going to give us an object color solid. It's like the red-green one I showed you earlier, except this one's a red-rouge one, but it's the same idea. Okay, so we're taking the two sensors — two color mechanisms — and they give us a two-dimensional color solid. So if we have this color solid, then we can consider the cross-section. Say we're looking at the R equals 40 example. If we measured R equals 40, R equals 40 is represented here by the red vertical line, and if we take the cross-section of that line, how it intersects with the red-rouge color solid, all of the points on that line are R prime, R pairs. Every R prime on that line is a possibility: for a given R, R equal to 40, all of those R primes on the line could have been possible, because we had all pairs R, R prime to begin with. So in this example, the metamer mismatch volume induced by R equal to 40 is from 0 to 198 on the R prime axis, as indicated by the green. Okay, so note that the metamer mismatch volume, which is that segment on the red line, is completely specified by its boundary, which in this case was the R prime equals 0 point and the R prime equals 198 point, which I've shown with the little blue circles there. Okay, so that metamer mismatch boundary is the intersection — this is the crucial thing in this whole talk — the intersection of the object color solid boundary with the R equals 40 cross-section. So it's the intersection of the cross-section with the color solid boundary. So let me recap that, because the three-channel case is just the same — it uses the same intuitions, just more dimensions, and you can't plot it. For any given R, what's the possible set of R primes? Pair them up and that becomes a 2D color solid; intersect its boundary with the R equals 40 cross-section, and that intersection gives you two points in this case that define the boundary, and that's what we're after: the boundary of the metamer mismatch volume. So let's move on to the dichromatic case. As before, we have two color mechanisms, but now each one is two-channel, each one is dichromatic. So we have RG and R prime, G prime. We have a 4D object color solid — we're putting them into four-tuples, so we've got R, G, R prime, G prime, and that's a 4D object color solid, which I can't draw for you — but the metamer mismatch volume boundary is once again the intersection of the cross-section of that 4D solid with the solid's boundary.
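As an aside, a brute-force numerical sketch of the single-channel case just described might look like this. The two single-channel mechanisms and the flat light are hypothetical Gaussians, and the search only approximates the mismatch interval by sweeping simple two-transition reflectances that are (within a tolerance) metameric to the target response, rather than solving for the exact boundary as the paper does.

```python
import numpy as np

wl = np.arange(400, 701, 1.0)                     # 1 nm grid so the sweep is dense enough
# Hypothetical single-channel mechanisms ("red" and "rouge") and a flat light.
red = np.exp(-0.5 * ((wl - 600) / 40.0) ** 2)
rouge = np.exp(-0.5 * ((wl - 620) / 60.0) ** 2)
light = np.ones_like(wl)

def response(sens, refl):
    return float(np.sum(sens * light * refl))

def band_reflectance(l1, l2, start):
    """0/1 reflectance that flips at l1 and flips back at l2 (two transitions)."""
    r = np.full(wl.shape, float(start))
    r[(wl >= l1) & (wl < l2)] = 1.0 - start
    return r

white = np.ones_like(wl)
target_R = 0.4 * response(red, white)             # 40% of the response to a perfect white
tol = 0.02 * response(red, white)

rouge_values = []
for i, l1 in enumerate(wl):
    for l2 in wl[i + 1:]:
        for start in (0.0, 1.0):                  # band-pass and band-stop shapes
            refl = band_reflectance(l1, l2, start)
            if abs(response(red, refl) - target_R) < tol:
                rouge_values.append(response(rouge, refl))

print(f"approximate mismatch interval for the rouge mechanism: "
      f"[{min(rouge_values):.2f}, {max(rouge_values):.2f}]")
```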
So, for example, in this 4D case, suppose we wanted the metamer mismatch volume for the input R equals 20, G equals 50 — that pair. Then the cross-section is all four-tuples 20, 50, R prime, G prime. So the R, G part is held constant at 20 and 50, and then we're considering all possible R primes and G primes. Now that becomes a 2D, planar cross-section, and the result is a 2D volume. Okay, now I can show you the idea of the cross-section, but I can't show you a 4D thing, so I was stuck with 3D. So here's back to 3D, but you've got to imagine 4D. We're going to take a planar cross-section out of it, and we're asking — in this case I just have B equal to 40 — if we ask for all of the cases of B equal to 40, it's all the RGs on the B equals 40 plane, and the volume here is a region of the plane, and its boundary is a curve in the plane. Okay, so it's just like the single-channel case, but now we've gone to two channels; in both cases it was two color mechanisms. So, as before, on to the trichromatic case — same idea. As before we have two color mechanisms; now it's RGB and R prime, G prime, B prime. We have a six-dimensional color solid, all six-tuples of this sort arising in response to all possible reflectances, and the metamer mismatch volume boundary is again defined by the intersection of the cross-section with the boundary of that 6D solid. Okay, so in this case our object color solid is 6D. Let's suppose we wanted to solve it for R equals 20, G equals 50, B equals 40. The cross-section is all six-tuples holding that 20, 50, 40 constant, and then it's all of the R prime, G prime, B prime that follow. They're the ones that will form the boundary of the metamer mismatch volume. The result is a 3D volume in this case, which is what you'd expect, and its boundary is a surface in 3D. So what are the reflectances on the boundary of the 6D object color solid? In the 3D case, they were these two-transition functions. In the case of the six-dimensional color solid, we have the same idea; it's just that instead of two transitions, they're going to be five-transition 0-1 functions. So they look like this kind of thing — five different transitions will get you onto the boundary of the six-dimensional object color solid. So for the trichromatic metamer mismatch volume boundary, we're looking for the intersection of the 6D color solid boundary with the cross-section defined through it by this particular example of 20, 50, 40. So again, you've got to imagine 6D, but this is the same idea, right? We're taking a cross-section and we're looking for the intersection with the boundary, and that gives us a volume, not a curve, in this case. So the points on this boundary are five-transition optimal reflectances, and what this boils down to, in other words, is that we have five transition wavelengths we need to find. That is, we need to know which ones are on the boundary. This means in essence that we have six equations that we can deal with. The first three equations constrain the solution to be a metamer, in our example, to 20, 50, 40. So this equation here is saying that the integral of the R channel — where OPT stands for an optimal reflectance defined by the five transition wavelengths — the five transition wavelengths are defining for us a reflectance.
So if we know what those five transition wavelengths are, they define a reflectance that leads to a point on the boundary of the six-dimensional color solid. And similarly for the G and the B. What these three equations ask for is that the optimal reflectance we're finding is metameric to the 20, 50, 40 — has the same RGB as we wanted to start with. And now we're looking for all the R prime, G prime, B primes that satisfy that. So these equations define the metamer mismatch boundary implicitly. And so we need to set up a coordinate system to orient ourselves. So now I'm into a 3D color solid, because the metamer mismatch volume lives within it, and this is in the prime space — we started with the RGBs, and now we're into R prime, G prime, B prime. Here's the volume of it. And within that object color solid, there's a metamer mismatch volume. So it's a 3D volume within a 3D volume. And let me zoom in on that interior piece. So we've got a piece; we're looking at the volume that's inside. And looking at that piece, we'll put in an arbitrary origin. It doesn't matter where you pick it; any point inside will do. We will set up a spherical coordinate system around it. And then from that origin, we can look out in each direction and say, how far is it to the boundary? And each one of those points on the boundary is represented by a five-transition reflectance. So we have six equations — well, three to begin with, and three more that are specified by our spherical coordinate system. And if we specify a direction in that space, we can then solve for the distance and the five transitions. So there are six equations and six unknowns in essence, although they're a little messy to solve. So then the metamer mismatch volume is in essence described as a radial distance as a function of orientation, of where you're looking. Let's see how I'm doing. Okay, so I can give you some examples. The metamer mismatch volumes of 100 Munsell samples going from D65 to A look like this. The biggest volumes are near the achromatic axis. Smaller volumes tend to be toward the object color solid boundary — at the boundary there are no metamers whatsoever, so the volumes get smaller and smaller. We can project that from the side so you can kind of get a sense of the shapes. But it's just the same; anyway, that's a cross-section. So let me just review the single-channel case, and then I'll stop because I'm running out of time. The single-channel case — because it's just like the others, and this is the one where if you understand it, you're okay — was this: we're going from a single channel, that is, we have two color mechanisms going from R to R prime, and we had the R equals 40 case. We have the volume, which I've indicated with the rectangle, and that was just defined by the cross-section, and the line here is the cross-section across the gray volume. And what I didn't have in the original slides when I showed these two earlier is this yellow spot. The yellow dot is like the origin that we're setting up for a spherical coordinate system, and the radial distance R goes out to the boundary.
And if we go to compute it for a different spot — say we do R equals 260; well, it should be 160, and I wrote there 260 — then you can see that the metamer mismatch volume goes from 25 to 210, and so its boundary is the 25 and the 210, as indicated by the blue dots. So, anyway, in conclusion, we have an exact description of the metamer mismatch volume. Previous descriptions have been approximate or at least based on restrictions of one sort or another. The trichromatic metamer mismatch volume computation involves the cross-section of a 6D object color solid and its intersection with the boundary of the 6D solid, and we wind up with six equations and six unknowns for each location on the metamer mismatch volume boundary. So, any questions? Do you have... One back at the back there. I was just wondering, from what I understood, the metamer mismatch volume is computed — since it is the boundary — as basically the minimum and maximum values along a certain axis, right? If I get it right, it doesn't matter what the distribution of the actual metamers inside is along that axis; we just get the boundary, so the minimum and maximum values. Right, so there's no bias about what's inside the volume. The volume is filled, but if you were out in the world sampling reflectances, nothing says anything about what the distribution would be. So this is the theoretical limit of what could happen. Okay, so it's the theoretical limit, it's not the actual distribution. We cannot determine the actual distribution of metamers just by knowing the metamer mismatch volume. No, you can't know that from it. Okay. No, it's a theoretical result. We're talking about the theoretical limits of what could possibly happen. Right, you're never going to find these zero-one reflectances in the world; they're always going to be smoothed out somehow. So it's going to shrink in some kind of a way. But still the question is, what is the theoretical limit? It's kind of an open question. I think we have to move on. Okay. Okay. That's okay. Very quick. Yeah, really interesting, Brian. Continuing on from the previous question: these little volumes in six-dimensional space, they're continuous, is that right? There'll be no holes in them. Well, the metamer mismatch volumes themselves have no holes in them, no. But they're not isotropic? In other words, the density, the number of metamers per unit volume, is not uniform throughout. We're making no claim about that; it doesn't say anything about it. We're talking about the boundary. The boundary would be filled, but that's filled in theory. It's sort of like the previous question: what the world is like, what you find if you just go sample the world, is a different question. Yes, but within one of those little volumes, you could then apply a density distribution to the mapping. If you wanted to do some density, you could, sure. Okay.
A new algorithm for evaluating metamer mismatch volumes is introduced. Unlike previous methods, the proposed method places no restrictions on the set of possible object reflectance spectra. Such restrictions lead to approximate solutions for the mismatch volume. The new method precisely characterizes the volume in all circumstances.
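As a rough illustration of the six-equations idea from the talk above — not the authors' actual implementation — the sketch below parameterizes a smoothed five-transition reflectance and asks a generic root finder for the boundary point along one chosen direction. The mechanisms are hypothetical Gaussians, the smoothing exists only so that numerical derivatives can be taken, and there is no guarantee this naive solver converges for every direction.

```python
import numpy as np
from scipy.optimize import fsolve

wl = np.arange(400, 701, 5.0)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical trichromatic mechanisms (sensitivity already multiplied by its illuminant).
mech1 = np.stack([gauss(600, 40), gauss(550, 40), gauss(460, 40)])   # R, G, B
mech2 = np.stack([gauss(610, 45), gauss(545, 35), gauss(455, 35)])   # R', G', B'

def five_transition(lams, sharpness=0.3):
    """Smoothed 0/1 reflectance that flips at each of five transition wavelengths.
    The true optimal reflectances are hard 0/1 steps; smoothing only lets a
    generic root finder take numerical derivatives."""
    r = np.zeros_like(wl)
    sign = 1.0
    for lam in np.sort(lams):
        r = r + sign / (1.0 + np.exp(-sharpness * (wl - lam)))
        sign = -sign
    return np.clip(r, 0.0, 1.0)

def responses(mech, refl):
    return mech @ refl

def boundary_point(rgb_target, center, direction):
    """Distance to the mismatch-volume boundary along `direction` from `center` (a sketch)."""
    direction = direction / np.linalg.norm(direction)

    def eqs(x):
        lams, radius = x[:5], x[5]
        refl = five_transition(lams)
        e1 = responses(mech1, refl) - rgb_target                 # metamer under mechanism 1
        e2 = responses(mech2, refl) - (center + radius * direction)
        return np.concatenate([e1, e2])

    x0 = np.array([430.0, 480.0, 530.0, 580.0, 630.0, 1.0])      # naive starting guess
    sol, info, ier, msg = fsolve(eqs, x0, full_output=True)
    return sol[5], ier == 1

# A sample reflectance gives a target RGB under mechanism 1 and a center point
# (its color under mechanism 2) guaranteed to lie inside the mismatch volume.
sample = 0.3 + 0.2 * np.sin(wl / 40.0)
rgb_target = responses(mech1, sample)
center = responses(mech2, sample)

radius, converged = boundary_point(rgb_target, center, np.array([1.0, 0.0, 0.0]))
print(f"radius along +R' direction: {radius:.3f} (converged: {converged})")
```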
10.5446/31460 (DOI)
Hi. Okay, so the one thing I love about working in industry is that you get to meet real problems that real people have. And this is what you have in this presentation. Okay, so my colleagues from Spain, by the way, are all working on big printers — big, high-quality printers for artists. And what they saw was this: an artist comes to print, and he has an exhibition tomorrow. It may be a photographer, it may be a painter who had his painting photographed. He has an exhibition tomorrow; he needs high-quality images for the show and he needs to print them. And what they came across over and over again is that even though the printers are well tuned and you have the ICC profile, when the artist prints what he wanted, the photo that he took, he gets something that is accurate — the sky is blue, everything is accurate — but it's not his picture. It's not what he intended. And this person here says it, if I can make it work — never mind. He says: sometimes I have not even recognized my own photographs. I have even hesitated to call them my own. And whoever controls the editing controls the artist's fate. This is not my work. This is not what I intended; this is not as good as I wanted it to be. So this is an example. This is the image when you just print it, and this is after the ICC profile. And you can see that the skies are blue, the colors are correct — not on this projector, but basically they are. And this is what he wanted. He might have preferred the sky to be more blueish, he might have wanted the grays to be more reddish or the contrast to be bigger. This is what he wanted. The color is still accurate, but it's not that. That's his own taste. He's an artist; he might not even be able to tell you why he likes it. But that preference, that's his art, and you should give him what he wants. It goes without saying that the printer should be consistent and accurate; but it should also match his taste, the artist's taste. So that's the problem they were facing, Morovič and colleagues. And this is the state of the art. You have the pipeline: this is the image on screen in the RGB space, you translate it to Lab, and then to the printer color space. And today what the artist can do is just go over the image, tune it in Photoshop, print it, look at it — is it okay? If not, tune it again, print it, and so forth until he is happy. And then he can save the image with the new profiles that he adjusted, and then he has the image printed the way he wanted it to be printed. But you have to do it all over again for each of the images that you want to print — you redo it for each image. And another problem with that is that you have two copies of the image: one is the original and the other is a copy to be printed for this show on this printer. And this is the ground for a lot of trouble, when you have two copies of the same thing which are slightly different. So what Morovič et al. propose to do is to take the first image, tune it thoroughly, and then save only the profile — not the image with the profile, but to learn the new profile, the new taste of the artist, and keep it and use it to print the other images. And this is how they do it, in the pipeline sense. They have the original image; once again, they translate it from the screen to Lab, and this is the original transformation to the ICC printer color space.
And this is the transformation after tuning the image, after adjusting it to the taste of the artist. And what they propose to do — it's actually in the next slide, but it is easier to see here — is to make a transformation from here to here, to learn this transformation and to use it on all the other images. And they do it by looking in the source image: they look for all the pixels with exactly the same color in the source image and ask what happened to those pixels in the target image. And they take the mean over those pixels, for each of the colors in the image. And you can go further: you can do a 3D transformation, you can do separable transformations — they have several methods to do that. But basically they just learn the transformation. Okay, of course it has limitations. The first limitation is obvious: if you tune it to one artist, it will not work for another artist, because they have different tastes. The second problem is that it's an average. So if I have, for example, a photo of someone in a mirror, and the mirror colors are totally identical to the person, and the artist wants to make only the mirror look darker, then it just wouldn't work: you have the same color in two places in the image and you made it different in just one region, and the mean will not match either of them. So it wouldn't work in such a case. Another problem is that if you are not sampling the whole color space, then you'll have problems. For example, if you're tuning a portrait of a person and then you get an image of a sky, you don't have any knowledge about the artist's preferences in blue, so you just don't know what to do with it. The way around it is to tune over a combination of two images that samples the taste of the artist. And they also say that this is a rare scenario — artists are usually very conservative, and they either like portraits or scenery; they usually don't mix. But if you have this scenario, you can get over it. Still, this is something you should know is not handled. And last but not least, you should bear in mind that these are all local changes, because they assume that the ICC profiling was already performed and that these are minute changes, not something very big — it's all about the differences. Okay, so this is the test they did. And this is Morovič and his family; this is because they had copyright issues with using the artists' photographs, so they took their own photographs. And they gave them to 24 artists, and they asked them to tune the first image according to their preferences, their taste. And then they applied it to all of the set of images. And this is just a sanity check: look at the adjusted image with the original profile and compare it to the unadjusted image with the computed profile. So this is a sanity check, and it worked — there was no noticeable difference between the two. And the other part is that they asked them to judge other images that were printed, and they all liked the result. They said that this does reflect their taste in images. So this is what it's all about. And this is a more quantitative test: they took the images, they let them tune, and they checked whether the transformation works well. And they really succeeded in capturing the transformation.
And they did it by comparing, again, the adjusted image with the original profile to the unadjusted image with the computed profile. The blue bars are the median color difference and the red bars are the 95th percentile of the color difference. You can see that most of the values are below one delta E, with this small exception — and even that is still under 2.5 delta E, which is barely a noticeable difference. So, to conclude: the problem was capturing the taste of the artist, and they did that by capturing this transformation. And they showed that it does capture the taste of the artist, with 24 artists — that passed the test. And, well, thank you. So we have some time for questions. Yes. So I'm also working with artists, in another domain. So is your conclusion that an artist working with printed images has a single taste for all kinds of images? That was the original assumption, and this is what they showed in this test, basically — that was the original assumption, which they tested here. So yes. Which is interesting; you're right, you would think not. So then I have one, if you can answer it. Why was the work done in Lab and not in some more perceptually equidistant color space, like the CIECAM02 model? Okay. So what I understood — and I'm not sure I got the whole of it — is that it's because they are doing only modulations, small local differences, and then the big distances in the color space don't really matter. They want the local neighborhood to be linear, and that's why they work in Lab. So that's what I understood about this choice. Okay. Thank you. Thank you very much. Thank you.
While there may be no point in arguing about taste, creative professionals make a living from sharing theirs. Making specific, individual color preferences that a creative professional knows how to achieve when creating content on a display also propagate into print is a significant challenge since it lacks real-time feedback. The present paper introduces a method for allowing creative professionals to use the tools they know and love to also personalize the color behavior of their devices. This is achieved by analyzing color changes applied to images and applying them to a device’s ICC profile. As a result the personalized device results in customized color behavior regardless of the workflow used. The paper describes the ICC profile transformation algorithm in detail and provides a color error analysis of its performance.
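A minimal sketch of the per-color averaging idea described in the talk — not HP's actual implementation — might look like the code below. The Lab image arrays and the simple binning are hypothetical: colors are quantized into bins so that "pixels with the same color" can be grouped and their mean edit recorded.

```python
import numpy as np

def learn_color_adjustment(src_lab, dst_lab, bin_size=2.0):
    """Map each quantized source Lab color to the mean edit (Lab shift) applied to it.

    src_lab and dst_lab are (H, W, 3) CIELAB arrays of the image before and
    after the artist's adjustments. Returns {quantized color -> mean Lab shift}.
    """
    src = src_lab.reshape(-1, 3).astype(float)
    dst = dst_lab.reshape(-1, 3).astype(float)
    table = {}
    for s, d in zip(src, dst):
        key = tuple(np.round(s / bin_size).astype(int))
        total, count = table.get(key, (np.zeros(3), 0))
        table[key] = (total + (d - s), count + 1)
    return {k: total / count for k, (total, count) in table.items()}

def apply_adjustment(lab_pixel, table, bin_size=2.0):
    """Shift a Lab color by the learned mean edit for its bin (identity if unseen)."""
    lab = np.asarray(lab_pixel, dtype=float)
    key = tuple(np.round(lab / bin_size).astype(int))
    return lab + table.get(key, np.zeros(3))

# Hypothetical demo: a global "taste" shift is recovered from an image pair.
rng = np.random.default_rng(1)
original = rng.uniform([0, -50, -50], [100, 50, 50], size=(64, 64, 3))
edited = original + np.array([0.0, 2.0, -3.0])
table = learn_color_adjustment(original, edited)
probe = original[0, 0]
print(probe, "->", apply_adjustment(probe, table))
```

In the actual method the learned changes are folded into the device's ICC profile rather than applied per image, and a smoother 3D or separable mapping with interpolation for unseen colors would replace this naive binning.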
10.5446/31463 (DOI)
Good afternoon everybody. In this presentation, I'm going to talk about the effect of outliers on spectral reconstruction and introduce a method for multispectral data reduction in the case of having outlier spectra. This research is a collaboration between Simon Fraser University and Tehran Polytechnic. First, let's have a quick look at the background of this research. As you know, devices like monitors, projectors, scanners and digital cameras define their colors in the RGB color system. But the first problem with RGB values is that they are device dependent. In addition, RGB color information depends on the scene illuminant; it means that changing the illuminant can lead to the metamerism problem. This issue can be very important, particularly when high-quality color reproduction is needed. Color reproduction of fine art paintings is a good example of such applications. So in order to overcome the drawbacks of conventional RGB values, we can move to multispectral imaging by capturing extra information from each pixel and making a multi-layer image where the color of each pixel is defined by the corresponding spectral reflectance. Spectral data provide us with very useful information. As spectral data are a kind of physical property of an object, they are device independent, and so there is no metamerism in this space. But in spite of all these benefits, the large volume of data provided by a multispectral imaging device can be a problem in terms of storage and communication. So during the last few decades, a number of researchers have tried to address the problem of digital image compression and introduce efficient solutions for spectral data compression and reconstruction. Among these methods, principal component analysis has attracted a great deal of attention. PCA, as a powerful statistical tool, determines a linear transformation from the high-dimensional spectral space to a low-dimensional spectral subspace, which guarantees the best possible representation of the high-dimensional spectral vector in the low-dimensional subspace. There are several tricks to enhance the efficiency of principal component analysis in compression and reconstruction of spectral data. For example, we can increase the number of basis vectors; obviously, the more basis vectors, the more accuracy. In addition, we can use weighted principal component analysis by applying weighting factors to individual samples. We can also cluster the main dataset based on a predefined criterion, and so increase the similarity and correlation between data and therefore improve the performance of the reconstruction process. In this research, we tried to enhance the efficiency of PCA by removing or reducing the effect of outliers before applying PCA to a dataset. Indeed, we studied a number of spectral datasets and found that there are some elements that may be far from the bulk of the data or don't conform to its correlation structure. So in this research, we decided to first separate the outliers from the non-outlier data and then use standard PCA data reduction on the non-outlier data to extract the first three eigenvectors that account for most of the variation of the data. As outliers are a part of the main dataset, we couldn't simply ignore them, so we decided to analyze these data separately.
For this purpose, we did a pre-processing step first by clustering the outlier spectra into a few groups, to somehow increase the similarity between data in each group and therefore improve the performance of compression and reconstruction in the next step. And finally, we applied PCA data reduction to the clusters individually to extract the first three eigenvectors of each cluster. We implemented our method on five different multispectral images. The first one, which is called fruits and flowers, includes about 19,000 spectral samples from 400 to 700 nanometers with 10 nanometer intervals. The four other spectral images were taken from another dataset with the same sampling rate. In order to detect the outliers, we used two different metrics. The first one was the Mahalanobis distance, which is a traditional method for detecting outliers in multivariate data. In this figure, you can see the outlier spectra detected by the Mahalanobis distance in the fruits and flowers multispectral image. Obviously, samples having a large Mahalanobis distance were detected as outliers. But this method is not a reliable measure for detecting outlier observations, because the Mahalanobis distance is itself affected when there are multiple outliers; some of the samples cannot be detected as outliers because it's not a robust method. So we needed to use a robust technique for detecting outliers, and in this research we used the minimum covariance determinant, which is a computationally fast method for finding outliers in multivariate datasets. The MCD objective is to find the h observations out of n whose classical covariance matrix has the lowest determinant. In this figure, you can see the observations detected by this method in the fruits and flowers image. If you look at this slide, you can make a comparison between the performance of MD and MCD in finding outlier observations. Obviously, the total number of outliers detected by MD is totally different from what is detected by MCD. Also, in this figure, you can see the robust distance versus the Mahalanobis distance; obviously, there is a weak relationship between the two. As I mentioned a few slides back, after separating the outliers, we did a pre-processing step by clustering the data. For this purpose, we used k-means as the clustering method, and we chose the cosine of the included angle between spectra as the distance parameter. Determining the number of clusters was an issue in itself. For this purpose, we first used MATLAB's subclust function to find the optimum number of clusters, which is 12 in this case, in the case of the fruits and flowers spectra. And then we decreased the number of clusters gradually and calculated the root mean square error between the original and reconstructed spectra. As can be observed in this figure, data clustering has a great impact on the reconstruction error when the number of clusters is increased from one to four, but beyond four the error goes down slowly. So we chose the number of clusters equal to four as a trade-off between accuracy and data redundancy. In order to evaluate our method, we calculated the spectral accuracy of reflectance reconstruction in terms of the RMS (root mean square) error between the original and reconstructed spectra and also the GFC (goodness-of-fit coefficient). The yellow bars show the results of classic PCA and the green bars display the results of the proposed method.
As can be seen, there is a considerable improvement in the reconstruction error when we use the proposed method and remove the outliers from the bulk of the data before applying principal component analysis. Also, we evaluated the colorimetric accuracy of reflectance reconstruction by calculating delta E 2000 under two different illuminants, illuminant D65 and illuminant A. Again, you can see considerable improvement in the results from the colorimetric accuracy standpoint. As a conclusion, I can say that large spectral datasets, such as multispectral images, can be represented in lower dimensions by the use of principal component analysis. But the presence of outliers, which are samples far from the bulk of data, can have a negative effect on the results of reconstruction. So the accuracy of reconstruction can be improved by identifying outliers, clustering the outliers into groups, and applying principal component analysis to each cluster individually. Thank you for your attention. Thank you very much. Any questions or comments? Eva. Thank you. Do you think it would be interesting to try to apply your research to other dimensionality reduction techniques such as independent component analysis or non-negative matrix factorization as well? Yeah, it could be. But because most of the previous research says that principal component analysis gives the best result for reconstruction in comparison to the other techniques, we used this technique in this research. But yes, it's possible. And the second one: when you do the clustering, if I understood it well, you are using the cosine of the angle as the distance for performing the k-means. So maybe it would also be interesting to use a combined distance, including also the root mean square error, because afterwards you are evaluating your results in terms of the root mean square. There is this metric called the spectral similarity value, for instance, which combines both. Right. Actually, we can choose any other distance as the clustering parameter, but the main goal is increasing the similarity and correlation between data. And for correlation, the distance doesn't matter; just the similarity matters. We used the cosine angle to choose data that are similar; it doesn't matter that they are different in terms of Euclidean distance, for example — we just want to gather data that are similar to each other. But anyway, we can use any other metric and compare the results. It's up to you. Yeah, because there are some other studies that suggest that the clustering works better if you use such a combined measure. Yeah, that's possible. Okay, thank you. Thank you. Thank you for your presentation. I have a question: your results depend on the finding or detection of the outliers. And as you explained, your outliers are based on the similarity within the samples. I want to ask whether there is any criterion for finding outliers — how you define and detect them. I mean, in different databases your outliers will be different. The criterion was the Mahalanobis distance. And also in the other method, again the Mahalanobis distance, but with the difference that we calculated the mean and covariance with a robust technique. So the criterion for detecting outliers was just the Mahalanobis distance, which is a measure for finding outliers in multivariate data. Okay, I mean, sorry, I just want to know: depending on the hue, what will happen to your outliers? Depending on the hue?
For example, your dataset may consist of different hues, or consist of just a group of samples with the same hue. Actually, here we are talking about another type of outlier that is based on the correlation. So in this method, we just find as outliers those samples that are inconsistent with the majority of the data from a correlation-structure standpoint. And maybe some samples with a different hue fall within the bulk of the data, or maybe some are detected as outliers. Thank you. Thanks, James. If I got what you presented right, you reduced the dimensionality of the spectral images. So what was the number of principal components that you used? Three. Thank you very much. Thank you. Okay, please. Just a simple question: so you used the k-means clustering algorithm. In the beginning you have to specify the number of clusters in advance. So how many clusters did you assume? Actually, as I explained in my presentation, first we used the subclust function in MATLAB to find the optimum number of clusters, and then we gradually decreased the number of clusters and calculated the root mean square error. And then, at the point where the error didn't decrease significantly, we chose that number of clusters as a trade-off between accuracy and data redundancy, because more clusters means more basis functions. So in the case of fruits and flowers, as I showed here, we used four clusters, and this situation was valid for the four other images, more or less. And so for all of them, I considered the number of clusters equal to four. So I am asking: the initial condition is between 4 and 31, right? Between 4 and? So 4 is the minimum number of clusters, and the maximum is 31, right? No? Or does MATLAB automatically specify the number? The goal is determining the number of clusters that provides us with the lowest reconstruction error, but on the other hand, we don't want a large number of clusters, so as not to increase the number of basis vectors that have to be stored for later use. Okay. Thank you very much. How did you select the initial spectra for the clusters in the k-means clustering? Pardon me? How did you select the initial state for k-means — how were the spectra for the clusters selected? The initial spectra? Yes, the first round in the k-means, when you have four clusters — how did you select these four spectra? Actually, four clusters, not four spectra. First, for example, in this case, we had 12 clusters, and then we calculated the reconstruction error, as you can see in this figure, and we decreased the number of clusters gradually, and then we reached a point where we realized that, with a decreasing number of clusters, the error doesn't increase significantly. And so it was just a personal decision; we decided to stop at this point. Yes. Yeah. Other questions or comments? Thank you very much once more. Thank you.
Large multi-spectral datasets such as those created by multi-spectral images require a lot of data storage. Compression of these data is therefore an important problem. A common approach is to use principal components analysis (PCA) as a way of reducing the data requirements as part of a lossy compression strategy. In this paper, we employ the fast MCD (Minimum Covariance Determinant) algorithm, as a highly robust estimator of multivariate mean and covariance, to detect outlier spectra in a multi-spectral image. We then show that by removing the outliers from the main dataset, the performance of PCA in spectral compression significantly increases. However, since outlier spectra are a part of the image, they cannot simply be ignored. Our strategy is to cluster the outliers into a small number of groups and then compress each group separately using its own cluster-specific PCA-derived bases. Overall, we show that significantly better compression can be achieved with this approach.
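A compact sketch of the pipeline described above might look like the following, using scikit-learn's MinCovDet and KMeans as stand-ins for the fast-MCD and k-means steps. The reflectance array is a hypothetical placeholder, the chi-square cutoff is an assumed outlier criterion, and the cosine-based clustering is approximated by unit-normalizing the spectra before Euclidean k-means.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def compress_spectra(spectra, n_components=3, n_clusters=4, alpha=0.975):
    """Outlier-aware PCA compression of an (N, bands) array of reflectances.

    Returns reconstructed spectra; a real implementation would keep the PCA
    bases and coefficients instead of reconstructing immediately.
    """
    n_bands = spectra.shape[1]
    # Robust Mahalanobis distances via (fast) MCD; flag spectra beyond a chi-square cutoff.
    robust = MinCovDet(random_state=0).fit(spectra)
    d2 = robust.mahalanobis(spectra)
    outlier = d2 > chi2.ppf(alpha, df=n_bands)

    recon = np.empty_like(spectra)

    # Standard PCA on the non-outlier bulk.
    pca_main = PCA(n_components=n_components).fit(spectra[~outlier])
    recon[~outlier] = pca_main.inverse_transform(pca_main.transform(spectra[~outlier]))

    # Cluster the outliers (unit-normalized, i.e. cosine-like) and PCA each cluster.
    out = spectra[outlier]
    if len(out):
        unit = out / np.linalg.norm(out, axis=1, keepdims=True)
        labels = KMeans(n_clusters=min(n_clusters, len(out)), n_init=10,
                        random_state=0).fit_predict(unit)
        rec_out = np.empty_like(out)
        for c in np.unique(labels):
            members = out[labels == c]
            k = min(n_components, len(members), n_bands)
            p = PCA(n_components=k).fit(members)
            rec_out[labels == c] = p.inverse_transform(p.transform(members))
        recon[outlier] = rec_out
    return recon

# Hypothetical placeholder data: 5000 smooth-ish random reflectances, 31 bands.
rng = np.random.default_rng(0)
spectra = np.cumsum(rng.random((5000, 31)) - 0.5, axis=1)
spectra = (spectra - spectra.min()) / (spectra.max() - spectra.min())
rec = compress_spectra(spectra)
print("RMSE:", np.sqrt(np.mean((rec - spectra) ** 2)))
```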
10.5446/31466 (DOI)
Thank you, Marco, and thank you, everybody, for still being here for the last presentation. I'll try not to take too long. I'm going to present my work; this work is actually a collaboration between the Color Imaging Lab of the University of Granada in Spain and the Polytechnic University of Milano in Italy. My co-authors are Eva María Valero and Javier Hernández-Andrés in Granada, who are my supervisors, and also Giacomo Langfelder from the Polytechnic University of Milano. And the topic is spectral reflectance estimation from Transverse Field Detector sensor responses. This is the outline of the presentation. First, I will go through a brief introduction of the problem I'm addressing and what a TFD sensor is, in case any of you don't know them. Then I will pass through the methods: the sensors I use, the spectral data I use, the capture simulations, and the algorithms. Then I will show some results followed by some conclusions, and at the end I will also show some future work to be done. So this is the first part, the introduction; I will speak about the problem and what the TFD sensor is. The problem is nothing new — it has been addressed by many authors. This is spectral reflectance estimation: we have some sensor responses from some camera, we put them into some algorithm as input, and we try to recover the spectral reflectances. This is a mapping from R^K to R^61, K being the number of sensors, and we map to 61 dimensions, which is 400 to 700 nm in 5 nanometer steps. But what is new in this study? The new part of my approach to this problem is the utilization of these new kinds of sensors. And maybe to understand them, it's easier to start from the sensors we all know, like the Bayer filter sensors in normal RGB color cameras. So basically what we have here is a silicon sensor composed of many pixels with the same sensitivities, and in different spatial locations we have either blue filters, green filters, or red filters. So it means that we have three different sensitivities, but there's no full spatial resolution, so we have to interpolate to get it. And if we go one step further, there are sensors like the Foveon X3, which take advantage of the penetration depth of photons in silicon, depending on the wavelength. So, as we can see here, every pixel has a blue channel, a green channel, and a red channel, so we have full spatial resolution. And if we go one step further, there are the TFD sensors, which are this new technology. The basic principle is the same as the Foveon sensor: they also take advantage of the penetration depth of photons in silicon. But now, by applying a transverse electric field, we can control the sensitivities. So if we apply some voltage, we can have, for instance, one blue channel, one green, and one red. But then if we change this transverse electric field, we can change the sensitivities. So in a few shots, we can get plenty of different channels. So let's have a look at the methods I used for this study. First of all, which sensors did I use? Here we can see a set of eight different setups. Each setup consists of a blue, a green, and a red channel. So we could get all these sensitivities — well, in this case, quantum efficiencies — in several shots; in one shot, we get three channels. The way we selected the four channels which we use for the spectral recovery was by trying, with easily optimized algorithms, to get the best results.
So we took, in this case, the pseudo-inverse approach and the matrix R approach, and we looked for the first shot with the best performance. In this case it was these three sensitivities we can see as the continuous lines. And then we looked for a fourth channel to include one more channel, which was the best one of the remaining 21. So this is the dashed line we can see here. These four channels were the ones we used. Here somebody could ask an obvious question: why, if you take the second shot, don't you get the other two channels as well, so you have six instead of four? This was because there was no improvement in performance when adding those two extra channels, but the optimization time could be longer. So in this case, there was no point in using the other two channels, and we just took this one; we can simply ignore the other two. Now talking about the spectral data: our illuminant for the simulation was the standard D65. We took 160 reflectances, randomly selected from these four databases. The first one is the well-known Nascimento and Foster database of rural and urban scenes. Then we also have the REST database, which is composed of reflectances of common objects. And then the Munsell Book of Color and the Macbeth ColorChecker, which is this one. This dataset was chosen so we can use it for general purposes; actually, we can see here that the 160 reflectances are spread more or less all around the color space. Then, how we simulate the capture: we have this equation for the raw sensor responses, which are composed of the sensitivities times the illuminant times the reflectance, plus some additive noise. Here we have the formula for the variance of the noise, which is intensity dependent: R_w is the response of the k-th channel to the white, to the perfect white, and I_k is the response of the k-th channel to a sample. This simulates flicker and shot noise; the thermal and dark-current noise were neglected, because they can be discounted either by cooling the camera or by subtracting a dark image. Then the algorithms which were studied are these five. The pseudo-inverse approach is quite an easy and simple approach; it's a linear model, and it was mainly used to have a baseline of performance against which to compare the rest of the algorithms. Then we have the matrix R approach, which is based on Wyszecki's hypothesis, and it basically performs a colorimetric and a spectral transformation to get the reflectance estimation. Then we have kernels, which is quite a recent proposal; it hasn't been used much so far for spectral reflectance recovery — we can see one approach in this paper. It's a way to linearize a nonlinear problem, to be able to use linear methods for the regression, like ridge regression. Then we have the projection onto convex sets, which is an iterative algorithm: we depart from an initial estimation, and iteratively we calculate the difference between the responses of the camera and the responses from the training set and try to reduce this difference by changing the estimate of the reflectance spectrum. And finally, the radial basis function neural networks, which are basically one-hidden-layer networks in which we can control both the number of neurons in the hidden layer and the spread as free parameters. Here we can see a table with some properties of each of the algorithms. As we can see, pseudo-inverse is the only one which has no parameter to be optimized.
And then the only one which needs the responsivities is POCS, the projection onto convex sets. Then only kernel has regularization terms to be able to control the performance against the noise. And then for the optimization time, pseudo-inverse doesn't need any because there's nothing to optimize; matrix R is quite fast to optimize; POCS is more or less medium compared with the others; and kernel and neural networks are slow to optimize. Let's show some results. Here we can see on the left side the noiseless case and on the right side the noisy case. We can see in both of them the five algorithms, and we can see the goodness-of-fit coefficient and the root mean square error as spectral metrics and also the delta E*ab as a color metric. And we can see how, for both cases, noiseless and noisy, the pseudo-inverse is the one performing worst — and actually, since it has no parameters to be optimized, there is no way to enhance this performance. On the other hand, we have kernel, which performs best for both cases after the optimization. We can also see in this plot of the delta E error how both pseudo-inverse and neural networks are the least robust to noise: the change between the noisy case, which is the dark bars, and the noiseless case, which is the light bars, is quite big both for pseudo-inverse and for neural networks, while on the other hand kernel is more or less the most robust against noise. Here we did a comparison with narrower sensitivities. If we look at the shape of the TFD sensitivities, we can see that they are quite spectrally broad, and we believe that this may be a reason for the not very good performance. We wanted to check what would happen if, in the same conditions, we were able to somehow narrow these sensitivities. We just took these sensitivities, which correspond to another RGB system — they are basically an unscaled version of a real RGB system — tried exactly the same algorithms and saw what happened. As we can see here, we have in the green bars the noiseless case and in the red bars the noisy case, for both the narrower sensitivities here and the TFD sensitivities. We can see how the performance is much better when we simply narrow the sensitivities, compared with the TFD ones. Let's have a look at the conclusions. First of all, an accurate hyperspectral image can be estimated with full spatial resolution with only two shots of a camera with this kind of sensor. I have to say that the integration time of this kind of sensor is quite low so far — we have calculated around 10 to 15 milliseconds per shot, plus the shifting of the sensitivities, which is more or less as fast — so we can take two shots very quickly and get four channels, as we have done for this study. Then, out of the five algorithms studied, we saw that the pseudo-inverse, even though it's the easiest algorithm to implement and needs no optimization, is the least robust to noise, while on the other hand kernel performs best for all color and spectral metrics. We also saw that by narrowing the sensitivities, better results are found compared with the TFD sensitivities. As future work, we want to implement and test some other algorithms as well, like the Wiener algorithm, which is quite widely used by many authors; it's a physical method.
Then we also want to study how to spectrally narrow the sensitivities, maybe by including an optical filter in front of the sensor — a kind of filter which would affect all the channels equally, because let's remember that each pixel has all the channels, so we have no separate filters for different channels. Then we also want to study the performance when including more channels, which would mean including more shots. So maybe here we should have a look at which application we want this TFD for, because by increasing the number of shots we also increase the time for the capture, and it may not be useful anymore for some applications, for instance video or moving scenes. Then we also want to perform some error analysis in order to look for the optimum training sets, because here we worked with a training set which was for general purposes, but if our application requires it, we can narrow this wide space of possible applications, so our training set could be more specific. Then we also want to study different applications — not only spectral reflectance estimation, but also color calibration with this kind of sensor, so we can take a picture and get directly the XYZ or CIELAB color coordinates. And then also include a more specific TFD noise model, which we actually have now but didn't have at the time of this research, so that it better explains the real noise performance of these sensors. And that's it. Thank you for your attention. Thank you very much, Miguel. So I have a question, or maybe also a comment. This TFD technology: based on some publications by the authors at the Polytechnic of Milano, they also came up with another design in which you can have more than three sensitivities. So you might consider that case as well, instead of maybe filtering, because you're going to lose sensitivity in that sense. Yeah, actually with the TFD sensors, with only one shot we can get up to five channels, which is what they are using. The point is that here I was only working in the visible wavelength range; those extra two channels are in the infrared part of the spectrum. So that's why I didn't use them, but for sure it's interesting, and for other applications they can be very useful as well. Okay. Thank you. Is there another question or comment? Please. It's maybe more like a comment: so obviously the noise — you have very broad sensors. These sensors are broader than Foveon sensors, and Foveons are known for these problems, although they were quite clever in addressing them. The reason why, even in a color application — let's say you want to take the RGB image, and you want to use your four channels to get a better RGB image or XYZ — the reason why broad sensors amplify the noise is the heavy matrixing you have in the color correction matrix. You want to go from raw RGBs to XYZ, let's say. If you have broad sensors, the terms in the color correction matrix are much heavier, and that's why you amplify the noise. In your case, this is even worse, because the sensors look even broader than Foveon. So that's the reason. There are some ways — I guess Foveon must have found a way, which we don't know. But you could look — I think this may help you — there is, I think, the Lee and Silvershine paper from a few years back at CIC. They thought about noise with regard to color correction. The color correction is obviously what you have to do with your...
When you look at your raw data, it is completely desaturated. Yes, it looks like there is almost no color there. So that's why heavy matrixing will amplify your noise. Thanks for the comment. Actually, yes, it is one of the main problems we are facing with this kind of sensor for this application, because the sensitivities are so wide that it is sometimes difficult to recover the spectra and to deal with these matrices. Thank you for the comment. Thank you, and with that I will close the spectral color session. Thank you, Miguel, and thank you to all the speakers in this session.
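As an aside on the matrixing point raised in this discussion, the rough sketch below (purely hypothetical Gaussian sensitivities, not TFD or Foveon data) illustrates why broader, strongly overlapping responsivities lead to heavier color-correction matrices and therefore more noise amplification; the Frobenius norm of the matrix is used only as a crude proxy for how much independent per-channel noise gets amplified.

```python
import numpy as np

def gaussian_sensitivities(centers, width, wavelengths):
    """Toy RGB-like responsivities as Gaussians (for illustration only)."""
    return np.stack([np.exp(-0.5 * ((wavelengths - c) / width) ** 2) for c in centers])

def correction_matrix(S, T):
    """Least-squares matrix M mapping raw responses to target responses.

    For a reflectance r: raw = S @ r, target = T @ r; fit M so that M @ S ~= T.
    """
    return T @ np.linalg.pinv(S)

wl = np.arange(400, 701, 10.0)
T = gaussian_sensitivities([450, 550, 600], 25, wl)         # target (XYZ-like) curves
S_narrow = gaussian_sensitivities([460, 540, 610], 20, wl)  # narrow capture curves
S_broad = gaussian_sensitivities([460, 540, 610], 80, wl)   # broad, overlapping curves

for name, S in [("narrow", S_narrow), ("broad", S_broad)]:
    M = correction_matrix(S, T)
    # Larger norm -> heavier matrix terms -> more amplification of sensor noise.
    print(name, "noise gain ~", np.linalg.norm(M))
```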
The main aim of this study is to investigate which would be the best algorithm for spectral estimation from Transverse Field Detectors (TFD) sensor responses. We perform a quality check of the estimation accuracy of five different algorithms, most of which are recent proposals. Some modifications are introduced as well in their implementation to simplify calculations or to increase the performance (see subsection Spectral estimation algorithms for details). The results obtained have allowed us to introduce relevant suggestions for enhancing the TFD sensor performance for their use in multispectral capture devices. This work paves the way for the practical development of a fully automatic multispectral device based on sensors with reconfigurable responsivities.
10.5446/31467 (DOI)
Good morning. I will start off by discussing some of the basics. As most of you probably know, the spectral distribution of the light coming from a scene and reaching the eyes depends not only on the surface reflectance, but also on the spectral distribution of the light source. However, the perceived color of an object is quite independent of the color of the light source; this is called color constancy. If, on the other hand, the visual system wrongly assumes that a bias in the spectral distribution of the light reaching the eyes is caused by the light source and tries to compensate for it, we misperceive the color of an object, and this is known as chromatic induction. So chromatic induction can actually be seen as a misdirected attempt of the visual system to achieve color constancy. However, if you view induction as a mistake the visual system makes in order to achieve color constancy, you run into a certain problem, namely that chromatic induction is quite weak. If you use a very simple scene, simply a target with a plain surround, you see that induction is quite weak. If you compare this effect to color constancy, you will see that color constancy is quite robust. One of the reasons why induction is weak, or might be weak, is that the visual system has almost no indication that there is a change in the light source, and it is quite evident to the subject that it is actually the surface reflectance, the colors surrounding the target, that has been changed. However, if you use a more complex stimulus, with pictorial depth cues or a changed scene layout, with specular highlights, mutual illuminations and illumination gradients, you see that subjects discount more of the change in the color of the light source. So the research question is: would induction be larger than normal if you make it more evident that there is a change in the color of the light source? In other words, do subjects then discount more of the difference in the illumination? As I said, this research question is important because we still do not know exactly the relationship between color constancy on the one hand and induction, as it is usually studied with simple stimuli, on the other. So what we did is a color matching experiment. Twelve subjects matched both the color and the luminance of a reference disc, presented on the left screen, to that of a matching disc on the right; I will show you the stimulus in a few seconds. They sat about two meters from the screens in a dark room. The backgrounds of both the reference and the matching disc consisted of a tile pattern, and the reference disc could have two colors: either a chromaticity (these are the CIE coordinates) at six candelas per square meter, which I will refer to as the dark disc, or the same chromaticity at a higher luminance, which I will call the bright disc. So this is the stimulus. We had two sessions. The first session I will refer to as the aligned screens; this is how induction is normally measured.
So it is quite evident to the subjects that there is no change in the light source. Here is the reference disc and here is the test disc; subjects could change the color by moving the mouse and change the luminance by pressing the arrow keys. On the other hand, in what I call the simulated lamp session, both screens were situated behind a black curtain, and subjects viewed them through a hole in this curtain. We were hoping that with a black surround the subjects could interpret the transition from the border of the screen to the surround as if this black surround had a different illumination compared to the reference and the test. What we also did is change the orientation of the reference disc; there was actually a lamp shade here, and we simulated a gradient of illumination with a specular highlight. By doing this we hoped the subjects would be quite convinced that there was a change in illumination. In addition, on half of the trials we did not simulate a difference in the light source, and on the other half we did suggest a change in the illumination, so the lamp was on in one trial and off in the next. We did this to make it even more plausible that there was a change in the light source. So the only difference between the aligned screens and the simulated lamp condition was this: the average color and the local color contrast were the same in both conditions, but in the simulated lamp condition we had this gradient and the specular highlight to make it more plausible that there was a change in illumination. In other words, for the aligned screens condition, the difference in the color of the background could just as well be explained by a difference in the surface reflectance of the surround. A bit about the illuminations that we simulated: when the lamp was switched off, there was only an ambient illumination, standard illuminant C, for both screens. When the lamp was on, we had an additional local tungsten lamp. The rest I have already explained to you. So in the analysis we first determined the color matches in CIE color space, and then we computed a chromatic induction index using this formula: it is the match with the lamp on compared to the match with the lamp off, so this perceptual difference, divided by the color match you would expect if the subjects had attributed the change in the color of the background to the illumination. Here on the left I explain it in more detail. These are the settings of a single subject for the simulated lamp condition, for the dark target. The cross over here is what you would expect if the subjects simply matched the color of the target, whether the lamp was switched on or off.
And the cross over here is the color match you would expect if the subjects made a match in which they attributed the color change in the background to the color of the light source. Here are the actual color matches of the subject when the lamp was on, and here is the color match when the lamp was switched off. Of course, you can see that with the lamp switched off the subjects could make a very good color match, but that is not so strange. And here you see that the subject's color match is shifted in the direction of the color of the background, by something like 10 to 40 percent; because we simulated a yellowish lamp, the subject's color match is shifted towards blue. So the color constancy index, or the induction index if you will, is simply the ratio of this length to this length. Here are the results for all twelve subjects, divided into the results for color and the results for luminance. This is the color constancy index, or the induction index if you like. Here are the results for the dark target and here for the bright target. Of course, we are mostly interested in the difference between the aligned screens condition, in which there was no suggestion of a change in the light source, and the simulated lamp condition. What you see is that there is hardly any difference between these two sessions. You can also see that the induction is a bit higher for the dark target compared to the bright target; this has been found before. Looking at the luminance, you see that the subjects attribute more of the difference in luminance in the background to a change in the light source, and once again the induction is higher for the dark target than for the bright target. So, to discuss: the extent to which the color of the background influenced the color of the reference disc was similar to that found in many other studies of induction in which there is no suggestion of a change in the light source. As I said, there was a slightly larger influence of the background color for the decrements, but that has been found before. These results are in agreement with results of some other publications showing that the visual system seemingly does not try to make an estimate of the illumination, but uses illuminant-invariant properties such as color contrast or adaptation. However, the manipulations that we made did introduce some small changes to the retinal statistics. For instance, one difference between the aligned screens condition and the simulated lamp condition is that we shifted the monitor of the reference disc slightly, so the visible size of the reference disc was a bit smaller. However, there was still enough of the color of the background observable, and apparently these issues are not very important, because the amount of induction was about the same for both conditions.
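The induction index described above can be written down compactly; the following is one possible formalization of the speaker's verbal description, with symbols of my own choosing rather than those of the paper:

```latex
% m_on, m_off: the subject's chromaticity matches with the lamp on / off
% p_on: the match predicted if the background color change were fully
%       attributed to the light source
\[
  \mathrm{CI} \;=\; \frac{\lVert \mathbf{m}_{\mathrm{on}} - \mathbf{m}_{\mathrm{off}} \rVert}
                         {\lVert \mathbf{p}_{\mathrm{on}} - \mathbf{m}_{\mathrm{off}} \rVert}
\]
% CI = 0 means no induction; CI = 1 means the change in background color is
% completely attributed to the illumination.
```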
So to conclude, it is simply what we state in the title of the paper: a suggestion of an illumination difference between the two scenes does not enhance color constancy. Thank you. Thank you. Questions? Anyone? Hi, Jeroen. Very nice, very interesting. I just wondered, did you have any measure of how the observers perceived your suggested change in illumination? I mean, did you measure whether the realism was actually improved in your simulated lamp condition? No. Simply by asking them, or any other method? No, we simply asked whether it looked convincing, but we do not have any data about this. So no. Thank you. Any other question? So it's good to know that color constancy is constant. Let's thank the speaker again.
Color constancy involves correctly attributing a bias in the color of the light reaching your eyes to the illumination, and therefore compensating for it when judging surface reflectance. But not all biases are caused by the illumination, and surface colors will be misjudged if a bias is incorrectly attributed to the illumination. Evidence from within a scene (highlights, shadows, gradients, mutual reflections, etc.) could help determine whether a bias is likely to be due to the illumination. To examine whether the human visual system considers such evidence we asked subjects to match two surfaces on differently colored textured backgrounds. When the backgrounds were visibly rendered on screens in an otherwise dark room, the influence of the difference in background color was modest, indicating that subjects did not attribute much of the difference in color to the illumination. When the simulation of a change in illumination was more realistic, the results were very similar. We conclude that the visual system does not seem to use a sophisticated analysis of the possible illumination in order to obtain color constancy.
10.5446/31469 (DOI)
Welcome to my talk. I present the paper "The Impact of Image-Difference Features on Perceived Image Differences." As a motivation, assume we have an original image and we need to do gamut mapping into a smaller gamut. One possibility would be to reduce the chroma; another possibility would be to shift the hue. What we want to know is which image is perceptually closer to the original one. Maybe some observers think that the reduced-chroma image is perceptually closer; other observers might say that the hue-shifted image is perceptually closer. Running such an experiment, choosing one pair or the other, is exhausting and time-consuming, so an automatic image-difference measure would be preferable. What else do we need, besides the two images, to determine their image difference? The viewing conditions are also very important. In this example we see in the upper row a continuous-tone image and in the lower row a tone-mapped version of the image. In the middle we see the images under an office environment; you can probably see that there is a perceptual difference between the two images, but it is only a small one. If we magnify the images, as on the right, there is a really big difference between them. So the viewing conditions, for instance how close we are to the image, matter. On the other hand, the left side seems to be very dark. This simulates going outside into bright sunlight: you look at your tablet PC, for example, and the display is nearly black. What is the difference between these two images? I think there is perceptually no difference. So again, viewing conditions are important. What do we need for our image-difference measure? As already mentioned, we need two images and the viewing conditions. Our image-difference measure D then yields a characteristic value as a result. For an image-difference measure, a difference of zero means that both images are visually equivalent, whereas the higher the number gets, the bigger the difference between the two images. In our paper we present an image-difference workflow. In the first step, as input, we have two images in RGB color space. These two images are then processed by an image appearance model, which considers the viewing conditions. In the next step, after applying the image appearance model, the images are transformed into a working color space; one requirement for this working color space is that it is perceptually uniform for our image-difference measure. Starting from this color space, we do an image-difference feature extraction; some examples are structure or contrast, and I will go into details on the next slide. The next step is to combine these features, and in the end we need to obtain a characteristic value, for example by taking the mean. Now we want to know which image-difference features are important. We were looking for well-performing image-difference measures, and we saw that the structural similarity index works very well on gamut-mapping databases. The SSIM index also provides three excellent features, and I will now show how we obtain them; more details about the structural similarity index can be found in the literature. Given two images, we apply this formula. The formula is adjusted to the working color space, which is perceptually uniform, so it is slightly different from the one in the original paper. As a result, for each feature we get an image-difference map.
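As a minimal sketch of the workflow just described (the function names and the particular models passed in are placeholders, not the authors' code), the pipeline from two RGB images plus viewing conditions to a single difference value might look like this; each stage is swappable, and the specific choices used in the paper are described next.

```python
import numpy as np

def image_difference(img1_rgb, img2_rgb, viewing_conditions,
                     appearance_model, to_working_space, features, combine):
    """Generic image-difference workflow: appearance model -> working color
    space -> feature extraction -> combination into one scalar
    (0 = visually equivalent, larger = more different)."""
    # 1. Account for viewing conditions (e.g. spatial filtering).
    a1 = appearance_model(img1_rgb, viewing_conditions)
    a2 = appearance_model(img2_rgb, viewing_conditions)
    # 2. Transform into a (nearly) perceptually uniform working color space.
    w1, w2 = to_working_space(a1), to_working_space(a2)
    # 3. Extract image-difference feature maps and reduce each to one number.
    values = [np.mean(f(w1, w2)) for f in features]
    # 4. Combine the per-feature values into one characteristic value.
    return combine(values)
```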
In such an image-difference map, the dark regions show a small image difference (black means no difference in this feature) and the bright regions, like here, mean a big difference in the feature. The first feature we extract is the lightness comparison; in this map the lightness comparison is shown as an image-difference map. How do we get this map? For the lightness comparison, in the first step we take the lightness axis of the working color space, such as the lightness axis of the LAB color space. Then we take a sliding window in each of the two images; the windows correspond to each other and have the same center pixel. The center pixel of the first sliding window is denoted x, the center pixel of the second is denoted y. Over both sliding windows we take the variation in lightness and the mean of the whole sliding window. In the end we get one pixel as a result and put it into our image-difference map. We repeat the same computation pixel by pixel for every sliding window we can place in the two images, and after doing this for the whole image we get this image-difference map. The next image-difference feature we extract is the contrast comparison of both images. Again we get an image-difference map, this time for the contrast comparison. The procedure is the same as before: we take the lightness channel, look at the sliding windows with the center pixels x and y, and do a pixel-wise computation of the image-difference map. This time, sigma x and sigma y denote the standard deviations of the two sliding windows. Again, step by step, we get our image-difference feature. The third feature from the SSIM index is the structure comparison, shown here with its formula; sigma xy in this case is the cross-correlation of the two sliding windows. But the SSIM index only works on grayscale images, and we want to know whether color is important: does color have an impact on our image-difference measure? In this example we see two images, the original on the left and a gamut-mapped version on the right. A difference can be seen: this region and this region are different. If we look at the grayscale counterparts, I think we cannot see a difference. So we assume that color is also important. For our image-difference workflow, I will now describe the experimental setup, which we applied to an image gamut-mapping database. It follows the same workflow as before. As the image appearance model we used the spatial filtering of S-CIELAB, because for the database we only knew the viewing distance, not the illumination. As the working color space we chose the LAB2000HL color space, which is approximately perceptually uniform and hue-linear. From these images we extracted five features. The first three, as shown before, are the lightness, contrast and structure comparisons on the achromatic channel. The next two features are the chroma comparison and the hue comparison; the formulas are the same as for the lightness comparison, but they are computed on the chroma channel and on the hue channel, respectively. In the experiment we first took the mean of each image-difference map, and then we combined these image-difference features into a single number by multiplying the characteristic values with each other and taking one minus the product of all five image-difference features.
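Here is a rough sketch of how such sliding-window comparison maps could be computed on the lightness channel. The uniform window, the stabilizing constants and the function names are my own assumptions, following the generic SSIM formulation rather than the exact implementation and parameters of the paper; the same comparison formula used for lightness can then be applied to the chroma and hue channels to obtain the two chromatic features.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_style_maps(L1, L2, win=11, c1=1e-4, c2=1e-4, c3=5e-5):
    """Lightness, contrast and structure comparison maps for two lightness images."""
    mu1 = uniform_filter(L1, win)                       # local means
    mu2 = uniform_filter(L2, win)
    var1 = uniform_filter(L1 * L1, win) - mu1 ** 2      # local variances
    var2 = uniform_filter(L2 * L2, win) - mu2 ** 2
    cov = uniform_filter(L1 * L2, win) - mu1 * mu2      # local cross-covariance
    s1 = np.sqrt(np.maximum(var1, 0))                   # local standard deviations
    s2 = np.sqrt(np.maximum(var2, 0))

    lightness = (2 * mu1 * mu2 + c1) / (mu1 ** 2 + mu2 ** 2 + c1)
    contrast = (2 * s1 * s2 + c2) / (var1 + var2 + c2)
    structure = (cov + c3) / (s1 * s2 + c3)
    # Values near 1 mean "no difference"; 1 - map gives a difference map.
    return lightness, contrast, structure
```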
In the experiment we used the gamut-mapping databases presented before by Matthias, and as before we divided the images into training images and test images. On the training images we adjusted the parameters of our image-difference features to the working color space; we trained these parameters. To get a result, we tested on the second half, which is independent of the training images; these are different images from the ones used for training. How do we evaluate our image-difference measure? We use the hit rate. The hit rate is the number of correct predictions divided by the number of all choices made. Here is an example. For these four image pairs, the observers' choices were these. In the middle is the reference image; to the left and to the right of it are the two test images, and the observer had to choose the one which is perceptually closer to the reference image, for example this one, this, this and this. For the prediction of our image-difference measure, we computed the image difference between the reference and the left image and between the reference and the right image, and the image with the lower difference value is the one predicted as perceptually closer to the reference image, for example these two images. Because this is the same image pair, an observer may choose one image on one decision and the other image on the next, but an image-difference measure always has to make the same choice for the same image pair. So we see that our image-difference measure is right in three of the four decisions, which means the hit rate is 0.75. Another advantage of the hit rate is that it allows a significance test: assuming a binomial distribution of the observers' choices, we can use a binomial test to assess significance. The results of our image-difference workflow can be seen here. The y-axis shows the hit rate; the x-axis shows the combinations of our five features, here the features standing alone and here the features combined with each other. I only want to show the most important results of the experiment. The first is that contrast seems to have the biggest impact among the image-difference features, because it always shows the best results: within the combinations of two, three or four image-difference features, the one containing the contrast difference is always the best. The next interesting result concerns using only the achromatic features. This is the result for the achromatic features, lightness, contrast and structure, like the SSIM index, and we see a significant improvement over this line, which refers to the SSIM index. Why are we better? The only things we changed compared to the SSIM index were using a working color space and considering the viewing conditions. What if we add the chromatic features? We get the best results if we add the chromatic features to the achromatic features; the combination of all features gives the best results. But there is still room for improvement, because the best possible hit rate that could be achieved in this experiment is 0.8, which means we are still some way from the best possible hit rate.
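A minimal sketch of this evaluation (prediction by the smaller predicted difference, the hit rate, and a binomial significance test); the data values below are made up, and scipy's binomtest is assumed to be available:

```python
from scipy.stats import binomtest

def hit_rate(pred_left_diff, pred_right_diff, observer_chose_left):
    """Fraction of trials where the measure's choice (smaller predicted
    difference to the reference) matches the observer's choice."""
    hits = sum((dl < dr) == chose_left
               for dl, dr, chose_left in zip(pred_left_diff, pred_right_diff,
                                             observer_chose_left))
    return hits / len(observer_chose_left)

# Hypothetical example: 3 of 4 predictions agree with the observers.
hr = hit_rate([0.2, 0.5, 0.1, 0.3], [0.4, 0.3, 0.4, 0.6],
              [True, True, True, True])
print(hr)  # 0.75

# Binomial test against chance level 0.5, assuming independent binary choices.
n_correct, n_total = 300, 400
print(binomtest(n_correct, n_total, p=0.5).pvalue)
```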
In summary, we introduced an image-difference workflow and proposed five image-difference features that are based on the SSIM index, with chromatic features added. In the results, we saw that the achromatic features significantly improve the prediction accuracy compared to the SSIM index if we consider the viewing conditions and the working color space. And as the last result, adding chromatic features further improves the prediction accuracy significantly compared with using only achromatic features. Thank you for your kind attention. Questions? I know that it is always debatable whether to choose one grayscale representation over another. In this case you use lightness, as in the original SSIM paper. Do you think that a different grayscale representation, say the average, or one that carries some of the chromatic content, or some other measure, could be combined to produce a similar result without actually considering chromatic features and contrasts? Indeed, I think using better grayscale images could improve the prediction, but what we tested, and what we saw, is that adding the chromatic features really improves the prediction. So we need to consider more than the achromatic features; we were also wondering whether to use the grayscale images or only the lightness axis. Indeed, it has an impact, but for our workflow we chose only the lightness axis and computed the chromatic features on the other channels. Thank you for an interesting talk, Jens. I have a two-part question. The first is: how did you decide on the sliding-window size? The second part is: would you, or do you think there is, justification for using a different window size for the chromatic features compared to the achromatic features? Okay, to the first question: we used the sliding-window size proposed in the SSIM paper, 11 by 11 pixels. And to the second question: we did not look at other window sizes so far. What we looked into in further detail was applying a multi-scale approach in our workflow; we are working on this and the results will be shown soon, I think. Okay, so thank you very much again. Thank you.
We discuss a few selected hypotheses on how the visual system judges differences of color images. We then derive five image-difference features from these hypotheses and address their relation to the visual processing. Three models are proposed to combine these features for the prediction of perceived image differences. The parameters of the image-difference features are optimized on human image-difference assessments. For each model, we investigate the impact of individual features on the overall prediction performance. If chromatic features are combined with lightness-based features, the prediction accuracy on a test dataset is significantly higher than that of the SSIM index, which only operates on the achromatic component.
10.5446/31472 (DOI)
Thank you for coming to this talk. This talk is about human color constancy, and this work has been done at the Computer Vision Center in Barcelona in collaboration with Maria Vanrell and Alejandro Parraga. Okay, color constancy: you see a Rubik's cube illuminated with green illumination. Then, after some time, human adaptation corrects for this illumination, and if color constancy were perfect, we would see a scene more or less like this. So we may wonder whether the effect of color constancy is the same for each color category, and also whether the interactions among the perceived colors are constant. We have designed a psychophysical experiment in order to answer, or at least approximate, these questions. I will explain the paradigm that we have used to measure this adaptation. The paradigm that we used is very similar to the achromatic setting paradigm. The achromatic setting paradigm is simply to adjust a test patch in the scene until it appears gray, and then compare these adjustments under different adaptations. It is intuitive and has good precision, but it only measures one point. So we introduced a new paradigm, called chromatic setting, in which instead of measuring only one point, we measure nine points in color space, more or less corresponding to red, brown, pink, orange and the other basic categories. It is also intuitive, has good precision, and measures multiple points under the same immersive adaptation. I will go into the details of this paradigm. Subjects have to adjust nine colors under a particular adaptation state, for instance under a chromatic Mondrian background, and they select a particular set of colors for the basic categories. Then, under another state of adaptation, they select another particular set of colors, and the correspondences between those selections give an idea of the illumination shift, or the extent of color constancy. Because we implemented this paradigm using a CRT monitor, we have some constraints, for instance the CRT gamut. When subjects were asked to select the most representative color of each basic category, they selected highly saturated colors, and the problem is that when these colors were to be re-rendered under another illuminant, this was not possible because they fell outside the gamut. In order to solve this problem, we introduced a new methodology, which is simply to set a boundary around the region where the subjects had to select the best representatives. So they selected the colors within a limited saturation range under one illumination, and the boundary cylinder was aligned with the lightness axis of CIELAB space. The adjustments were then inside the gamut, within the cylinder boundary, and the other part of the methodology was that, instead of selecting the best representative of each basic category again, they simply reproduced the previously selected colors. By doing it that way, we expect the locations of these newly adjusted colors to fall inside the CRT gamut. Then we ran the experiment using a combination of backgrounds and illuminations. Our backgrounds were simple, just Mondrians. The first type of background used seven intensity levels of the D65 chromaticity; this is this kind of background. The second kind of background used the eight colors selected in each reference session, without gray, which means that this stimulus was different for each subject. It was something like that.
Then another kind of background, type 2, where we selected colors with hues in between the previous colors; it is more or less the same. We then illuminated these backgrounds, simulating the illumination using the typical spectral product between each surface reflectance and the illuminant spectrum, assuming a Lambertian model. So we have the Mondrian illuminated with D65, which is the same condition in which the subjects selected the colors, and then we illuminated it with greenish, yellowish, orangish and purplish illuminations, more or less in order to span the whole CRT gamut. This is a particular set of results for one subject; sorry, this is a particular set of stimuli for one subject. So we have nine adaptation conditions, sorry, eleven, and in each of these conditions subjects made their adjustments. We ran our experiments in a dark room with the CRT monitor, and subjects viewed the monitor from a distance of one meter. We used four subjects, and they were unrestrained, so they could move their heads freely. The adjustment was done using a gamepad: to make the adjustments on the monitor, they navigated inside CIELAB space along its three dimensions using six buttons, two for each dimension. For the particular implementation of this paradigm we used the following procedure. First, a uniform adaptation field of up to two minutes, followed by a stimulus adaptation of three minutes to one of the previous scenes. Then we had a trial loop where, inside each loop, the subject first heard a color name and also read the name on the screen. They had to adjust the requested color using the gamepad, and the adjustment was applied simultaneously over a set of multiple patches on the screen. Once they finished that, a stimulus adaptation scene followed that lasted 10 seconds, and the trial loop began again. This was done 45 times, five times for each color: gray, green, blue, yellow, the basic colors. That was the procedure of one session, and it lasted more or less 50 minutes. Okay, then I will give some results before the data analysis. To begin, I will show the results obtained in the reference session, when the boundary cylinder was present. These are the results for one particular subject. You can see that the red circle is the boundary and the round shapes are the averages of the five trials. You can see that green, yellow, orange, brown, red, they are all in their respective locations despite being limited in saturation. This means that even though subjects had a limited range of saturation, they found reasonable colors to represent the categories. Then we are interested in looking at the distances between those colors. This is a three-dimensional view of the previous image in CIELAB space, and this was done for each subject. Each subject has a particular choice of colors; they were different for each subject. For instance, you can see that the first subject selected a light blue while the other subject selected a dark blue, and so on for the rest of the colors. So the experiment was particularly tuned to each subject. Okay, then I will show, for the same subject, the results for the regular session. This is when the cylinder was not present, so the only limitation in selecting the colors was the CRT gamut.
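As an aside, the spectral simulation of the illuminated patches mentioned above (reflectance times illuminant under a Lambertian model, projected onto color matching functions) can be sketched as follows; the arrays here are placeholders for real measured spectra and CIE tables.

```python
import numpy as np

def render_patch_xyz(reflectance, illuminant, cmfs):
    """Lambertian rendering: radiance = reflectance * illuminant, then project
    onto the CIE color matching functions to get XYZ tristimulus values.

    reflectance: (n_wavelengths,) surface reflectance factor
    illuminant:  (n_wavelengths,) illuminant spectral power distribution
    cmfs:        (3, n_wavelengths) x-bar, y-bar, z-bar at the same wavelengths
    """
    radiance = reflectance * illuminant
    k = 100.0 / (cmfs[1] @ illuminant)   # normalize so a perfect white has Y = 100
    return k * (cmfs @ radiance)

# Hypothetical flat data just to show the call; real use would load measured spectra.
n = 31
xyz = render_patch_xyz(np.full(n, 0.4), np.ones(n), np.ones((3, n)) / n)
print(xyz)
```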
The adjustments were done under D65; the squares are the averages of the trials, and the black lines enclose the region in which the trial adjustments fell. The black arrow indicates the distance between the origin of CIELAB space and the gray adjustment. You can see that purple has a similar distribution of colors, as does green. It is also interesting to note that most subjects reached a surprisingly high precision for some colors like brown or pink; you can see that it is even better than for gray. The same holds for yellow and orange. So adaptation followed the illuminant shift as expected, and the inter-distances among colors seem to be preserved; the data analysis will quantify that. The mean precision is around four delta units, meaning that the standard deviation between the trial adjustments in CIELAB space was four delta units. And gray is not necessarily the color with the best precision; depending on the subject and the case, there were colors with the same precision. Now to the data analysis. We considered the adjustments done using the type 2 background, which means that in this case the background was not made of the colors selected in the reference session, but of the hues in between, and we are interested in how different the inter-distances among the adjusted colors are. We have used graphs to model that. It is just a simple approach, and because it is simple, I think it is interesting; we only need to define a node, an edge, an edge weight and a graph distance. This is the typical scenario where we have two sets of adjustments under different illuminations. We define a node as each adjusted color, an edge as each pair of adjusted colors, and the edge weight as the Euclidean distance between two nodes normalized by the average distance to gray. This normalization was done in order to be able to compare results among subjects, because we are not interested in absolute distances among colors, but in the proportions between colors. We also define a distance between two graphs; that is simple, just the average of all corresponding edge-weight differences. Now I will present the previous results under this new approach. Under D65, this same subject adjusted these particular colors, and likewise under green, yellow, orange and purple. We are interested in quantifying the graph distance between the D65 adjustments and the green ones. These results are quantified in this plot: this is the average over subjects using the type 1 background, obtained by computing the distance between those two graphs for all subjects. The same was done for the yellow case, the orange case and the purple one, and also for the type 2 background, where you can see that the results are quite similar. In addition to these results, we also approached this issue by considering the colors that were adjusted in the type 1 session; these were the colors adjusted in the reference session, and we applied the same analysis. These are physical colors in the sense that they were simulated colors, not adjusted colors. So we have a similar figure, and doing the same analysis we get a little more deformation. In this plot the axis is a proportion, going from zero to 100, and you can see that for the perceptual results the deformation proportion is more or less around 30%, and for the physical ones more or less around 20%.
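The graph descriptor behind these numbers is simple enough to write down directly. The sketch below follows the definitions given above (Euclidean edge weights normalized by the average distance to the gray adjustment, and the graph distance as the mean absolute difference of corresponding edge weights), with all names and data values my own, not taken from the paper.

```python
import numpy as np
from itertools import combinations

def edge_weights(colors, gray):
    """colors: dict name -> CIELAB coordinates of an adjusted color (3-vector).
    Returns normalized Euclidean distances for every pair of adjusted colors."""
    pts = {k: np.asarray(v, float) for k, v in colors.items()}
    gray = np.asarray(gray, float)
    norm = np.mean([np.linalg.norm(p - gray) for p in pts.values()])
    return {(a, b): np.linalg.norm(pts[a] - pts[b]) / norm
            for a, b in combinations(sorted(pts), 2)}

def graph_distance(colors1, gray1, colors2, gray2):
    """Average absolute difference of corresponding edge weights of two graphs."""
    w1, w2 = edge_weights(colors1, gray1), edge_weights(colors2, gray2)
    return np.mean([abs(w1[e] - w2[e]) for e in w1])

# Hypothetical adjustments under two adaptation states (values are made up).
d65 = {"red": [55, 60, 40], "green": [60, -50, 45], "blue": [45, 10, -40]}
greenish = {"red": [54, 55, 38], "green": [61, -48, 44], "blue": [44, 12, -38]}
print(graph_distance(d65, [55, 0, 0], greenish, [56, -5, 2]))
```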
So perceptual colors are stable to around 80%, 87% on average, and physical colors are stable to around 77% on average, and no differences across backgrounds or illuminations were found. To sum up this talk: we introduced a novel experimental paradigm that measures multiple points under different states of immersive adaptation. We measured and quantified the interrelations among these perceived colors, and we have seen that physical colors show a little more deformation than the perceptual ones. Our results are also in accordance with previous studies using different paradigms; the literature on color constancy is very large, and there are previous studies that have examined the interrelations among colors. As a main conclusion, we can say that color perception maintains a high degree of structural invariance under illuminant changes. As further work, we think it is interesting to find out where the reason behind this roughly 30% deformation of the perceptual colors comes from. In order to get a better understanding of that, we think it is interesting to reduce the number of contextual cues in our stimuli, because our stimuli are flat Mondrian backgrounds, which means that only the colors matter. Our approach is to reduce the number of colors, maybe from eight to three, in order to let these deformations increase if they are going to, and also to play a little bit with the illuminants in order to promote these differences. Thanks. Thank you very much. Questions? Sergio. Thank you for your nice talk. You are using the CIELAB space for your analysis, so you are using the same space for different illuminants. But CIELAB is optimized for D65, and its metric changes a little bit with the illuminant. I wonder if this can affect your analysis. Yes, I guess that somehow it will affect it. But if you make some measurements in CIELAB color space and you want to compare those distances, I think it is better to do it using the same white reference point. Because otherwise, for instance, if I compute distances in the graph using not D65 as the reference point but each subject's adapted gray as the reference point, and compare those distances, I think that is not... You are always using the same white? Yeah. I also have a question. In the experiment, the subjects adjusted one color at a time using the game controller. Yes. Did you consider having them adjust all eight or ten simultaneously? Okay, no, I did not consider it. But doing that, I think, would strongly modify the stimuli. Oh, yes, it would constrain them, but constrain them equally. Each selection of each color would constrain the next selection, and maybe, in order to study the structural approach, it would be better to do that, because you are probing the same thing. It also depends on whether you are interested in differences or in uniformity. I would like to keep the stimuli as original as possible. And there are some colors which are known to be more robust than others, and your method doesn't reveal those differences. Any other questions? Okay, well, thank you very much. Thank you.
Color constancy refers to the ability of the human visual system to stabilize the color appearance of surfaces under an illuminant change. In this work we studied how the interrelations among nine colors are perceived under illuminant changes, particularly whether they remain stable across 10 different conditions (5 illuminants and 2 backgrounds). To do so we have used a paradigm that measures several colors under an immersive state of adaptation. From our measures we defined a perceptual structure descriptor that is up to 87% stable over all conditions, suggesting that color category features could be used to predict color constancy. This is in agreement with previous results on the stability of border categories [1,2] and with computational color constancy algorithms [3] for estimating the scene illuminant.
10.5446/31371 (DOI)
Welcome to today's panel, the title of which is "3D TV: Dangerous, Truth or Fiction." We know television has lots of truth and fiction, but here we're talking about, I guess, psychophysics or psychology. We have on the panel, to my immediate right and your left, Eli Peli of Harvard Medical School, followed by Marty Banks of the UC Berkeley School of Optometry, and Chris Riemann, a medical doctor from the Cincinnati Eye Institute, and finally Peter Lude of Sony. Originally Chris Tyler of the Smith-Kettlewell Eye Institute was going to be on the panel, but he cannot make it today, and I thank Eli Peli for filling in. Before we knew Eli was available, I had thought we would ask the parrot to come up here and see what he would do. But it's almost the same. It's almost the same. In my case, it's probably a monkey. So I'll just give you a chance to speak your piece. Everybody's got a couple of minutes and we'll just move down the row, and we'll begin with Eli. You can tell us a little about your background and how you feel about 3D TV and whether or not it's a danger. I am an electrical engineer and an optometrist. I've been working on the problem of 3D and comfort actively since about 1990, when I published my first paper on that issue in connection with the first commercial HMD, the Private Eye. I've worked on that issue with many of the companies, small and large, that developed or tried to develop HMDs over the years, and I've also done and published some research in my own lab in addition to that. You want to go beyond that; you might want to make a few remarks about how you view the subject. So I think there is no question, after this session, that some discomfort comes from watching 3D, or stereo to be more accurate, from watching stereo displays at distance or at near, most likely more discomfort at near, and there is a directional effect, and all of this. I think anybody who has tried it knows that there is some discomfort. What is really of great interest and concern is whether that discomfort, cumulatively, can lead to some injury or sustained discomfort. If people simply experience discomfort in something they don't like, the industry may be upset because people will not use the devices or will not buy them, even if the discomfort is noted at the point of sale. But the main concern, I think, is whether this could, over time, cause some damage. And when we talk about that, there is clearly a distinction between adults with a fully mature visual system and children, who are expected to be the first adopters of these things, as they usually are, and whether this would be an issue of concern. Many of you have probably read about this; it has gotten some interest in the media, and the New York Times had an article about Nintendo's announcement that there may be some risk, restricting use to children over the age of six. I think something we may or should discuss here, as laid out in the program, is whether indeed there is a risk, what that risk is, and if there is a risk, what can or should be done about it. This is something I would be interested in. Okay. So now we'll hear from Marty, Marty Banks. Just a moment about your background and how you feel about the issue. First of all, I have to apologize: I've got laryngitis. I've been in England and caught a cold there. I'm over the cold, so I'm not symptomatic, but I can't speak very well. Maybe that's good.
Eli said something I'd like to come back to. So who am I? I'm a vision scientist. I've studied the development of vision in infants as young as one month of age, and I currently study mostly adults. I'm most interested in depth perception, particularly stereo perception, and in the integration of visual cues with other cues such as vestibular, auditory, hand, et cetera. Eli mentioned something briefly; it flew by so quickly I don't think anyone really heard it, so I want to highlight it. We keep calling this 3D TV or 3D cinema. What's that about? It's been 3D all along. There are 50 depth cues in conventional cinema or TV, and we've added one, so it's really inappropriate to call it 3D. I know that might be good for marketing, et cetera, but once we get the next thing, which will be light field displays or integral displays, something like that, what are we going to call that? 4D? And then when it gets beyond that, what's that going to be? 5D? So what I'm going to encourage people to do is call it stereo 3D. That's what it is, or S3D if you like abbreviations. Let's try to adopt that, because that's just what it is. Risk factors: sure, discomfort's a real issue, because most of the venues where we see this stuff coming out are for entertainment, which is where the money is, and it's not entertaining to leave after half an hour with a splitting headache or with blurry vision. So I think everyone realizes that this could kill the goose that might lay the golden egg if we make people uncomfortable and don't take care of that up front. What we need to do is understand those sources of discomfort better. We heard two talks just now that are getting at some of those issues, and there needs to be a lot more done. If you think of all the things that we don't know, it's scary. Let me kind of list them. What are the possible, reasonable ways that people might become uncomfortable with stereo 3D specifically? Vergence-accommodation conflicts; we heard about that. Visual-vestibular conflict; the second speaker mentioned that in passing. That's where the visual system is saying you're moving but the vestibular system, the inner ear, is saying you're not, and that can give people motion sickness. It's a problem. Why are we specifically concerned about that in S3D? Because the addition of stereo makes the visual motion cue more compelling, so it's logical to me that this might lead to more nausea, more susceptibility to sickness, et cetera. Another one that has not been studied, and that we're currently studying (in fact, the guy sitting in the audience is starting on it right now), is the effect of head roll. That's a serious problem when you're looking at stereo imagery, because it forces the eyes to make a vertical vergence eye movement, and the eyes are not happy doing that. Other things are ghosting, window violations (I had a list here somewhere, so forgive me if I missed any), flicker and motion judder. To my mind, none of those is that serious a problem. They're a little annoying, but they're not going to cause someone to be discomforted or fatigued. That's my personal opinion. What I'd like to encourage, and I think it's really obvious when you come to a conference like this, is this: there are people who are fundamentally engineers. They know about displays. They have some brilliant ideas about how to do cool things in software and hardware. But they don't really know about human vision.
And they don't know how to do experiments on human vision very well. I'm sorry if I offended someone, but maybe most of you believe that. In vision science, we know a lot about vision. We know a lot about stereo vision, actually, and I think we know how to do experiments. What my field doesn't know about are the issues you guys, the engineers, are really concerned with; it's surprising how little my field knows about that. You're seeing a couple of people here who are exceptions to that, I think, to some degree, but most people just don't know what the issues are. So we need to partner. We need to get the engineers and the industry together with the scientists to figure out what we need to know and what the best way is to find it out. And unfortunately, that takes people being interested in it. It takes money. We have to find some ways to partner, and I think that's a very important thing. Thank you, Marty. We move to Chris Riemann. Chris, a little about yourself and then what you think about this issue. So I'm Chris, and I'm a retinal surgeon at the Cincinnati Eye Institute, a full-time-and-a-half practice, and I have a lot of interest in all things engineering, because I was an engineering undergraduate back in a previous life. I'm an avid stereoscopist by hobby, which is what brings me to this conference, and I'm trying to apply all things stereoscopy, as per my talk a little bit earlier today, to the medical field. My sense here, the angle that I'd like to bring to this, is that of somebody who, although it's not 3D digital TV stereo displays, uses machine vision. I look through a stereo microscope all day, and I see things that are completely aphysiologic in terms of what normal people see on a normal basis. For example, my stereo base goes down to about 15 millimeters when I'm looking at somebody's eye through a slit lamp or through an operating room microscope. So I'm somebody who uses machine vision every day, all day, and I'm not broken, and neither are any of my friends and colleagues. That having been said, I think this is a very complicated question, and I think it's a question that really needs to be broken down. There are health effects: dangerous versus unpleasant. There are effects of the viewing venue and viewing geometry; we've already had great talks about the difference between a handheld device and an IMAX screen. There are different effects of different types of 3D hardware; autostereoscopic displays are a different beast than head-mounted displays or a surgical microscope, or active versus passive glasses. And the one thing I'd like to maybe stimulate some discussion about today is the quality of the 3D content. I think so much of these symptoms that we've heard about and seen, they can't be eliminated, but they might be mitigated by careful attention to quality stereoscopic content, limiting things like the errors my co-panelists have described. And then perhaps we can talk a little about the potential legal implications of all of this. Peter Lude of Sony, a bit about your background, Peter, and how you feel about this issue. Thank you very much, Lenny. Pete Lude, Sony. My day job is here in Silicon Valley, not too far from where we're seated, heading up the engineering operation for Sony's professional business.
That means we're developing new technologies for a variety of filmmaking, movie-making, content creation and display technologies, a lot in terms of cinematography and digital theater systems. My background is in systems engineering for media systems. I feel like the odd man out here, coming from media technology, and I'm very honored to be on this distinguished panel, being a relative newbie, because I've only really been working on items related to 3D for the past decade or so, since it first came up in digital cinematography. And as you mentioned, in my nights and weekends I'm busy with things like being a founding board member of the 3D@Home Consortium, and I'm also honored to be serving as the current president of the Society of Motion Picture and Television Engineers, so I get exposed to a lot of viewpoints. In terms of 3D, I'm really fascinated by what I've been learning from Marty and other vision scientists who understand, and have explained to me, a lot about the things we perceive. But I have to voice resounding agreement with what Chris just said. When I see symptoms described in terms of eye strain, fatigue and nausea having to do with entertainment stereoscopic content, the majority can be very readily traced to errors in filmmaking, and I'll just be that blatant about it. It's not an inherent attribute, it's not a core function of stereoscopic viewing; it is that somebody made a darn mistake in making that film. The common ones are conflicting depth cues, which reveal themselves in things like floating-window errors, divergence, and background depth grading for the wrong screen size. These are things you can look at and say: this will cause eye strain, it's just a given. I think that factor, and the learning curve in the creative community, has masked some of the other factors here, so a lot of the confusion that we read about in the press, and the complaints, are not inherent to stereoscopic viewing but are curable problems, cured by education, and that's one of the major things that Sony has been working on. That's a very interesting point. It's my hope to throw this open to the audience as quickly as possible, but you raised some provocative points. Dr. Yang's paper, which we heard, used what I would consider to be excellent content: Cloudy with a Chance of Meatballs is a beautifully produced, stereoscopic, CG-animated film. It's absolutely lovely, it has low parallax values, and I can't believe that it's the source of any discomfort. On the other hand, when I was listening to Dr. Yang's paper, I was thinking about that Samsung monitor, which has, to my way of thinking, a tremendous amount of crosstalk. So when you evaluate the perception of individuals, you've really got to ask yourself: is this stereoscopic display device a neutral device? This is a very, very difficult problem. I've talked to Marty about this particular point recently, actually, at a board meeting we were at. So I'd like to just throw that out and see what the panel thinks about it. Anybody? Hello? Let me take up what Peter and you just said. I totally agree that fixing the content, fixing the quality, is going to help. But I'm going to agree with what Lenny said: it's not going to eliminate the problems. For example, if the viewer chooses to look at the display with the head tilted like this, you have no control over that; the way you design your content has nothing to do with it, and that can make you feel quite ill when your head is tilted. Or just the accommodation conflict.
Yeah, you can have some control of that in the content by keeping the disparities toned down, thinking about where the viewer is likely to look, et cetera; so there's a case where we could do better. The visual-vestibular conflict: if you want your content to be exciting, to make people feel like they're flying through canyons as you were in Avatar, then you've got to go there, so I'm not sure to just what degree we can eliminate those problems. But absolutely, some of them are technical problems that can and should be eliminated. That's what the people who design displays can address; we can design the stereoscopic transmission system. But in terms of creating the content, that has to go to the artistic side of the house, so that's an educational issue. Next? Well, I think we've talked about the hardware being important, and I very much agree with that. I also think that we need to encourage our content creators to create various outputs of the content that are geared towards different devices. There are certain types of near and far disparity that you can get away with if you're looking at a movie screen that's 10 meters away, but that you're not going to be able to get away with if you just take that image and put it on a handheld device, or even on a small autostereoscopic viewer. So I think something we may all agree about is that motion picture projection is an easier case; it's less offensive to the visual system than viewing a home television monitor up close or a handheld device, because of the vergence-accommodation issue. Would you agree? I think, yes, distant targets are better; distance viewing is better, or easier. But the way people use this content now suggests that they will get the movie and want to see it on their cell phone, and therefore we may have to come up with a way to change it, if there are solutions, and there may be. But the parallax is a function of magnification, so on a cell phone the parallax is going to be small, so the breakdown, the conflict between vergence and accommodation, has got to be small. Well, but the design, as we've seen, would be optimal with depth in front of the screen for the movie theater or television, and that would be the wrong position, the wrong polarity, for the handheld device; it needs to be behind the screen there. So it's not the magnitude, it's the polarity, whether it's crossed or uncrossed, not the absolute value. And of course, what we really care about is the relative depth in the image; we don't care that much about the absolute depth. I mean, there is this wow of the arrow coming towards your head, but most of what we care about is relative depth. And the relative depth could be kept, so devices could be designed to modify the content in its absolute depth without modifying the relative depth, which is one of my projects: a design to overcome the conflict of accommodation and convergence by always moving the point of regard onto the screen, so there is no conflict, while maintaining the relative depth across the screen. So there are issues that have to do with engineering the overall system and issues that have to do with the content, which may have to be adjusted for the particular display device. Yes, there are. The problem there is that we don't yet have the data and the understanding to set up those guidelines. Basically, we're talking about giving the content creators some kind of guidelines.
Yes, defining for them what they may need to do to reduce the chance of causing discomfort, reduce the chance of causing more than discomfort. So we don't have good numbers. Everybody knows. Do less. So move to 2D and resolve the problem in the limit. But this is really not a practical one. So we need to develop guidelines. And they may not be that simple. It may be different for stills than for motion, obviously, even for the depth. And so before we ask the content developers to do a better job, somebody should tell them what would be helpful. And the data for that is being collected in Marty's lab in a few hours. It's just starting, though. The data that we have, as the talk presented, is from wearing glasses that are improperly produced or designed or something like that, which is a completely different situation. And we need data that is specific to the situation of display so that from it, we can derive new zones of comfort. Any other comments down the line here? I'd like to add to that just a bit. It's hard to disagree with the basics of what everybody is saying, although I think, taking the perspective of the content creation community, there has been a lot of study and thought, although not as scientific as we might be used to, in terms of what makes good stereoscopic viewing experiences. And I think that because of the very large investments being made into film, we have to keep in perspective that there are now 60 movies that are being made in stereoscopic 3D in the pipeline. These are tens of millions to hundreds of millions of dollars in investment each. So you could be darn sure that there are people that have paid a lot of attention to what they look like, at least in the controlled circumstances of the theater. I wanted to point out also, though, that when we talk about multi-versioning, it's a huge technical challenge to properly convey the depth budget going from a large screen to a small screen, because it really requires a geometric model of all the objects in the image, not just making one simple disparity adjustment. And people have tried it both ways, and there have been a lot of dramatic failures. Content creators are already making multiple versions of their cinematic releases. The last Spider-Man release had over 230 different versions, different languages, screen sizes just for 2D, different intercuts. So nobody's looking forward to making a whole lot of additional releases for 3D. And that's really the compromise that's going to have to be made. Well, to return to Dr. Yang's paper and his experiment with Cloudy with a Chance of Meatballs, and you've raised the point, Peter, that movie was produced for a 40-foot screen, and it's being shown on a 50-inch monitor. And that means it's going to be a different experience. The parallax values will change. I don't know how it was adjusted for a video release. Most of the content that we're familiar with is on big screens, because very few of us have 3D televisions, and there's very little 3D TV content. But there's a bifurcated universe. The people who are doing CG animation, like the people at Sony Pictures Imageworks, or Disney, or DreamWorks, those people are producing CG animated movies that are very moderate in terms of parallax and are designed primarily to do no harm. And I know the stereo supervisors; they will work very hard to get beautiful effects and modulate the effects. And they're mindful of not producing unpleasant effects. Then we've got the people who are shooting live action.
There are a lot of problems with conversion. People complain about that. There are major issues with regard to stereoscopic capture of live images because of inadequacies in the present camera systems. Also, you can't control production. You've got independent producers. Every studio is not Sony, where people are going to be careful. You've got all kinds of schlocko outfits that are trying to get 3D productions out there. And they're going to be mostly concerned with having dramatic, drastic effects. So I don't think we can control, at least in this country, you can't control content production. I agree completely. But I can't help but wonder whether or not a sub-segment of the people in this room might not come out with some conservative criteria, that it doesn't guarantee a pleasant experience, but it makes it a little bit more likely, and perhaps have some sort of mark that this content is stereoscopy.org approved. Or Lenny Lipton.org approved, that it meets certain criteria. They're good at being so approval. We are so far short of knowing enough to do that. Not only do we not know what to do, but it would be an exercise in folly. You just can't control that. Not yet. It's doable. There's just too many things we don't know. We don't know what the effect of is if you work last, as in don't. We don't know the effect of, you know, a little bit about age now, but there's a whole bunch of things we just don't know enough about. We don't know the effect of if you have big changes in disparity short over time versus small changes long over time, which one is more fatiguing. We don't know. And there's just a whole bunch of things like that that we just don't know. So once you get down to it and try to produce an index that's meaningful, you'd end up with just too many questions that you can't answer. We've got a group of very smart people out there in the audience. And I'd like to hear what's on your mind, what questions the folks have out there. Is anybody? Yes, we have some itchy fingers. OK. We can't modify that. We want that to trace the police that tomorrow. I don't know where it is. Come on down. We're on screen and everything. So we have a big issue. OK. Hi. I'm Neil Dodgson from University of Cambridge. I'm sure that several of us, like me, have been asked by the media, about Nintendo's warning that under six-year-olds must not use their device, how much danger there really is. So can any of you tell us for sure at what point a kid's vergence and accommodation motor mechanisms are actually set in place? At an age above that, they're not likely to have any trouble. Never. Your vergence and accommodation: the coupling between the two changes throughout your lifespan. And if you get a new set of glasses, you change it again. It's plastic. And so the question as you posed it, the answer is your whole life. Right. I'll comment on that also. From the physician-clinician perspective, I'm asked questions that I don't know the answer to all the time. And I think it's really important that I tell my patients what I know, but it is even more important that I tell my patients what I don't know. And the honest answer to that question is we don't know. And there's all sorts of issues. And maybe we can talk about amblyopia and how amblyopia relates to stereo vision. So do you want to move to further questions or deal with this one? Why don't you deal with that one? Well, I think the answer as asked and answered is correct.
But of course, if anybody wants a 100% guarantee, which is what all Americans want for everything, then it's not going to work. So the answer that Marty gave is correct. But the question maybe could be posed slightly differently: at what point does the danger of an effect become small or diminishing? And so we talk about critical periods in the development of vision. And Marty did a lot of work on that many years ago, and is still working on that. And the numbers go between, it depends what you want to know. If you want to know when stereo vision is fully developed, then it's what? Age 3? Maybe? It's fully developed. If you want to know when we think you cannot develop amblyopia most likely, then people will tell you 6 to 9 is the range. If you want to know when you cannot really fix an amblyopia after it's been caused, there's another. So the concept of critical period that we deal with: there are multiple critical periods. You have to pose the question, which critical period you want. But it's pretty clear that three-year-old kids are still developing the visual system. They're still possibly susceptible to developing amblyopia. And whether playing a stereo game or watching a stereo movie will cause it, we don't know. At the moment, to my knowledge, there's only one report of one kid in Japan who became strabismic and amblyopic after watching one movie in 3D. And it's published. And so it's in the public record. I looked at the paper as much as I could through translation. And it's quite doubtful; it's not sufficiently documented. I mean, there's no question the kid is amblyopic and strabismic. What is not documented is that he really didn't have it before he went to the movie. And in any case, even if it was the case that he wasn't and he became so at some point after, the causality is still not established in my mind, even if the documentation was there to do that. But the body of knowledge about the development of binocular vision, the things that can break it, says that you've got to be cautious, and why Nintendo put this out is to let the parents make a decision, which is what we'd like parents to do. They can then decide if they want the kids to use it in 3D, then by all means; some parents do with their kids a lot more dangerous things, and we let them do it. If they were really trying to help the kids and the parents, that warning wouldn't have been written that way. What they're trying to do is protect themselves. And some of the claims are so extreme that what most of us do is we look at it and say, well, I'm just going to discount the whole thing. Because it doesn't have face validity that that's a reasonable warning. They admitted to having some intention of protecting themselves in their literature or in their interviews, and it's a legitimate activity for a company to protect itself. But I don't think that it's preventing people. So you think that the warning is so strong that people simply will disregard it. Is that your? That's my concern. Certainly the case for the Samsung warning in Australia, that was just on face value. It just looked ridiculous. You know why pregnant women shouldn't watch 3D? That's a very good question. I don't know. I was wondering about that myself. Maybe there's something about pregnant women we don't know. Yes. You know, I think that there's some kind of, we don't know. It's true. But we do know, for example, that too much of anything is bad and too much 2D TV is bad.
And there's been a lot of writings in the educational literature talking about how little kids spending enormous amounts of time in front of two-dimensional representations of three-dimensional objects is adversely impacting their three-dimensional thought patterns later in life. That's data that's been out for at least 20 years. And kind of from the educational perspective, and some of us in surgical training seem to think that today's recruits may have a little bit more difficulty because they spent more time watching TV and less time knitting. Let's go to the next question. Bob Boudreau from Corning. When I first heard that the Nintendo put out that announcement and the industry was worried, I was kind of disappointed because I know that when the legal profession gets engaged, they like to have everything bend in different speed limits and things like that. And I was just a little concerned that the 3D industry might get dumbed down to some lower quality level. So the suggestion that I heard from the physician, I think, may be a solution. That is to have the industry itself have some friendly body that possibly rates the quality of the 3D, not the quality of the content of the storyline or anything like that, but the quality of how well the 3D is done so that stories that are like some of the Dreamworks and avatars that are very good 3D get higher ratings and some of the 2.5D movies get lower ratings so that people are at least aware before they watch something. And then that pushes some of the responsibility on the audience rather than putting all the risk and responsibility on the studios. Well, Peter, what do you think about that suggestion? Well, I think in spirit, it's hard to disagree with the intent. I think in practicality, it's very difficult to achieve anything like that. As Marty says, we're not quite sure what the boundaries would be or even what the metrics are that you would choose to contain. There have been some, I would characterize them as beginning attempts at that sort of thing. For example, some of the broadcasters have defined the amount of horizontal disparity or parallax that should be built into motion content. For example, when the World Cup was produced last summer, there was a guideline of 2.5% of horizontal pixels was a maximum disparity. I think that just skims the surface, though. And when you look at all the other variables and the creative content, and even if you were successful at defining some of these parameters, you'd find that they were largely ignored or it was too difficult to measure by the creative community. I'm aware of at least two private companies that claim to be doing 3D quality assessment on those sort of things. They're both these very different methodologies, and I don't really see a lot of traction on that. So I think guidelines that help cinematographers would be very helpful, I think education, to help people know what causes some of these problems. But trying to codify that into some sort of good housekeeping seal of approval, I think, is not very likely. OK, so I'd like to go to the next question, please. Hello. I'm Alma Green from Alma Green Studios in Boston. And I have hundreds of questions for you, but I'll try to keep it to one. He's available for consulting. 
Regarding that last topic, I have been in contact with the College of Vision Development and suggested to them that perhaps an optometry-based organization would be well suited to work with the industry to evaluate motion picture content on the acceptability or problems that might be associated with vision. But my question is really more direct than that. It is that according to optometrists, there may be as many as 15% of the population at large who have trouble with fusion, who have trouble seeing stereopsis. And indeed, there's a tremendous amount of variability in terms of the level of stereopsis that different people have. What is your opinion of how the industry should address people who can't fuse? And does the industry have any role in helping people to overcome that, as perhaps Dr. Sue Berry was able to gain fusion at 50 years of age? Thank you. Somebody would like to answer that? I've got something to say about that. For one thing, there's eye care professionals in optometry and ophthalmology. And the degree to which they don't get along would stagger you. Stagger. And that's unnecessary. Really? No, no, no. It's totally unnecessary. But since the city, we get along just great. So this is an opportunity where actually those two communities could do something good for society. And here's what it is. As you said, there's some number of people out there who have a binocular dysfunction of some kind. And a lot of them don't know it. They've never had a proper eye exam. If we could bundle a binocular eye exam in with content or with your new TV or whatever, it wouldn't be that hard to do. And then if you don't do well in the test, say, you should consult someone, an optometrist or an ophthalmologist. And you get your money back on your TV if there's so much in it. UC Berkeley and UC San Francisco get along pretty well. So we've got the administration of optometry and some of the administration of ophthalmology on board to do just that. And we're working with the International 3D Society from the industry side to try to get them to contact industry and see if we can get a consensus that would be a good thing to do. And the argument for it is, I think, obvious. On the one hand, here we are talking about all these things we might be doing to people that would cause them discomfort or even harm. Well, where is the public service we're doing for them in return? And this is the case where you actually could do something in principle that is a service. Any other comments? First of all, the number of people that don't see stereo may be around 5%, I believe, not 15. But still a large number. The issue that I think relates to that is, can we find them with this? Yes, we could implement those tests. Can we do something for them after we've found them? Probably not. Probably not. So most of the time not. So whether this is going to be a great service for them. We could educate them. We could educate them and explain to them, oh, you have a binocular dysfunction that you were born with. That's why you don't enjoy movies. And keep them from being unhappy clients that complain about symptoms that are due to underlying preexisting conditions that have nothing to do with the content. It's a strange proposal to ask Samsung or Sony to offer a vision test with their product. Which would deter people from being customers. It's ludicrous. It's never going to happen, fellas. It's a great idea. But they want to sell hardware. They're not interested in curing people.
You're not talking about the Pope. But they want a body of customers that's happy. But most of these people will watch the movies without difficulty. They just will not have the stereo. They will not have the symptoms. They will not have difficulty. All 3D TVs have a 2D mode also. I want to mention the American Optometric Association has a letter of intent with 3D at Home on this very topic, to try to educate and figure out ways to collaborate with the industry. Look, how big a problem is it? Is this really a huge problem? In what sense? In terms of comfort. OK. You made a very good distinction between discomfort and harm. OK. So do you want to elaborate? How does this group feel? I think there's no question there is discomfort. And I think the industry should be very worried about this discomfort because people would sense it at the point of sale, or they may not, because it comes on fast enough that you can't get it at the point of sale. In which case, there'll be no sale. It'll be just a point of sale. And so they need to consider that they should get better advice on how to set up the point of sale. You know, that is such a great point. Because as somebody who loves this stuff, as soon as it started coming out, I've been going to all the consumer electronics stores and kind of sitting in there, half of their displays. They're broken, the glasses don't work. Obviously, the sales staff has lost interest. And it's one of the reasons why the prices have come down. And because they're trying to move these products, because they're not selling them well. You're exactly right about the point of sale. So this is the first point. There is discomfort. The second point is that much of that discomfort may go away with adaptation. If there is something that biological systems like us do, it is adapt, and we know from optometric and orthoptic treatment that these kinds of symptoms that come to people that have weak or somewhat misaligned systems can be improved with exercises that are to some extent just like the conflict of convergence and accommodation. And we know from other things such as simulator sickness that repeat exposure reduces that. So there's a good chance that the discomfort will fade away if you can get past that point of sale or use at the beginning, and that things should be designed for that. But again, there's no data on that. I just want to get to the next point, which is harm. What is the group's opinion about lasting harm, real damage to the visual system, to the human? Maybe I could say something about that. Some of the claims that have been made about harm are, in my humble opinion, ridiculous. For example, I read somewhere, and then some reporter called me and said that he had been told, that watching stereo 3D, that when you do that, you must turn off all of the other cues to depth that exist, linear perspective, texture gradients, shading, et cetera, in order to turn up the stereo cue. And that is ridiculous. So the concern, of course, is that what you've done is you train the visual system by watching stereo 3D to only pay attention to disparity when we have all these other depth cues that are incredibly useful in real life. That, in my opinion, is just ridiculous. Agree strongly. Another one that I think is ridiculous, probably too strong a word, but is pretty silly, is that the fact that a lot of these technologies are presenting images to the two eyes at different times is going to uncouple the binocular correlation that's happening in the brain.
That, too, is really silly because we know that these images are flipping back and forth between the two eyes very rapidly. Why? To reduce flicker. Flicker is a luminance process. That's something you see changing in brightness. The stereo system itself is very sluggish. It computes disparity slowly over time. So if you can't see something flickering badly, you can be sure the stereo system sees it as smooth as silk. So that's silly, in my opinion. Ones that maybe aren't silly are that we could be changing the coupling between vergence and accommodation. That doesn't necessarily have hugely deleterious effects, but it can affect the way the world looks to you temporarily until you re-adapt. That could be happening. And inducing these vertical vergence eye movements when you roll your head to the side, you adapt to that, too. So you actually adopt new postures with your eyes after you've done that for a while. Maybe you'd be concerned that now you take off the glasses and you go outside and your eyes are like this for a bit. You have to tilt your head now, though. And then you've got to tilt your head. And so those are the only two that I've heard that kind of say, well, I can at least see an argument for that. But some of the other ones are silly. I've got to chime in here. That is very true. But I think it's a big step to say that this is something that's harmful. We actually did a nice study that we published in Ophthalmology about 10, 15 years ago looking at vertical fusion amplitudes in patients that have very different glasses prescriptions in one eye versus the other. Turns out if you're wearing glasses, when you look up and look down, the glasses don't only focus light, but there's a prismatic effect. And if that prismatic effect is different in the vertical direction because you happen to have a different prescription in one eye from the other, it turns out people that have that glasses prescription have adapted and they have greater vertical fusion amplitudes. So I don't think that anything that we're doing is going to be more in terms of severity than, for example, being somebody who adapts to a different glasses prescription in each eye. I really like that example. Thank you for doing that because the analogy would be, I've got my glasses. I've had to learn how to make these different vertical movements because of my glasses. Now I take my glasses off. Am I really at serious risk to get in a traffic accident, to trip and fall on the stairs? Generally not. And that's actually doing something that, arguably, is a stronger effect than anything stereo 3D viewing brings. And I think we can also take some degree of comfort in the simple fact that we've been watching 3D movies now for a really long time. And there is no history of thousands of people watching Avatar getting in their cars, or in the 1950s watching the 1950s 3D movies, driving home from the drive-in and getting into wrecks because their 3D was all messed up. Well, at StereoGraphics for over 20 years, we sold hundreds of thousands of systems to people who used them for molecular modeling and aerial mapping mostly. And they used them for a day of work. And we never had, not a single complaint, about anybody having eye fatigue. We had complaints from Sikhs not being able to put the eyewear over their turbans. But we never had a complaint about anybody having that. Let me, I'll disagree with you slightly. I think that for many ophthalmologists, it's different equipment.
But learning how to use slit lamps and microscopes and optometrists, for me, it took some time for me to figure out how to use the equipment and how to not see double. And it took me about a month or two into the beginning of my residency to not get headaches and not have double vision. But, you know. Well, maybe the guys who were using the equipment were paid to endure. And they learned how to use it. I don't know. Perry's been waiting up there. He's a lot. Perry Holbroom in USC. So I mean, I think if we're going to talk about 3D TV dangers and other people are going to talk about it, we're just as justified in putting out the message that 3D is good for you. I mean, some of the things you said about how once you've seen, if you see it more, it's almost a kind of, it exercises your eyes. Not that I actually think that, but I think we would be perfectly justified. It's just as reasonable as saying it's dangerous. We don't know. We need to find out. Yeah. We don't know. Sue Berry wrote an editorial where she points out another positive thing about 3D kind of hitting the mainstream in that a lot of kids who have strabismus and other problems, it's misdiagnosed. In her experience, she was misdiagnosed as having a cognitive disorder because she couldn't focus her eyes correctly. And so, I mean, getting basically that kind of testing and all of that, it goes beyond just good vision. It goes into just how we actually diagnose problems. The thing I wanted to ask, though, and get your opinion on is, I mean, of course, there are all the comfort problems with the displays not being perfect, with sitting, seeing it from different angles. But as far as the content, all we're talking about, we're mostly talking, I mean, there's vertical disparities and huge positive disparities and so on, which we can control. We know their mistakes. We know their hard to look at. I wonder, though, how much discomfort and headache comes from just a cognitive sort of mismatch between stereo cues in bad stereo and the other depth cues. In other words, in conversions, in fact, in almost all converted stereo, unless it's done with a huge amount of attention, some parts of the image match the stereo and other parts don't. Typically, details don't because it's pretty much a cardboard cutout. They just put things at different depths. And I wonder, a lot of times, certainly, I'm thinking of examples like explosions, where the main part of the explosion is the smoke. So they give you a big kind of round stereo shape with smoke, but they're sparks. And the sparks follow the smoke, even though in reality, the sparks would be way out in front. And that kind of stuff happens a lot, especially in stereo conversions. I just wonder how much those kinds of mismatches might cause discomfort the same way that a combination of other stuff. I think looking at things that don't make sense gives you a headache. It gives me a headache. And as somebody who kind of taught myself, well, like many of us did, how to do some of this stuff and used my poor family as guinea pigs, early on, five, six years ago, they had run screaming from the room when I showed up with 3D glasses in my hand. Now they come running screaming into the room when I fire up the projectors. And a lot of the kinds of errors that you talk about are exactly what I've learned to not do. 
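To put a number on the prismatic-imbalance example raised a few exchanges back (different spectacle prescriptions in the two eyes inducing vertical prism when the gaze leaves the optical centres), the standard relation is Prentice's rule. The sketch below is illustrative only; the prescriptions and gaze offset are assumed values, not figures from the study mentioned in the discussion.

# Hedged illustration using Prentice's rule:
# induced prism (prism dioptres) = decentration in cm x lens power in dioptres.
# The prescriptions and gaze offset are assumed example values.
import math

def prentice_prism_dioptres(lens_power_d: float, decentration_cm: float) -> float:
    """Prism induced when the line of sight passes `decentration_cm` away
    from the optical centre of a lens of power `lens_power_d` dioptres."""
    return abs(lens_power_d) * decentration_cm

right_eye_d = -1.00   # assumed prescription, right eye
left_eye_d = -4.00    # assumed prescription, left eye (3 D of anisometropia)
gaze_offset_cm = 1.0  # looking about 10 mm below the optical centres, e.g. reading

right = prentice_prism_dioptres(right_eye_d, gaze_offset_cm)
left = prentice_prism_dioptres(left_eye_d, gaze_offset_cm)
imbalance = abs(left - right)  # vertical prism difference between the two eyes

# 1 prism dioptre deflects the line of sight 1 cm at 1 m, i.e. atan(P/100).
print(f"vertical imbalance: {imbalance:.1f} prism dioptres "
      f"(~{math.degrees(math.atan(imbalance / 100)):.1f} deg of vertical vergence demand)")

With these assumed numbers the spectacle wearer is asked to sustain about 3 prism dioptres of vertical vergence during ordinary reading, which is the kind of everyday adaptation the panelist is comparing against stereoscopic viewing.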
And I think it's really important, and this gets back to what we were saying earlier, that especially now, when we're still kind of in the early adopter phase of this technology as far as consumers are concerned, I think it's really important that we tell content providers a little bit less is fine; there's gentle 3D. And you can have a beautiful, immersive 3D experience without things jumping out of the screen. Assuming that we have good visual acuity and that we're throwing 4,000, 8,000 pixels up on the screen, that gives us unbelievable power to create nuanced 3D that isn't aggressive. It doesn't have a lot of near far disparity, and people really enjoy watching it. They can train themselves that way. And then we can become more aggressive in 3D movies 2.0, five years from now. OK. We have another question from another gentleman. Hi. I'm Vikas and I work for Qualcomm. I had a question about content creation. So someone spoke about specifications or guidelines for cinematographers. I was wondering, if you throw a 3D phone to an end user, to a consumer, would it be the same guidelines, or would it even be possible to have a set of guidelines for an end user in that case? Because you don't want that: you give a 3D camera to someone and they shoot something randomly and you would end up having very bad 3D when they look at it. So you would want to fix that at some point. Well, it's a really interesting point because we're talking about creating quality 3D content that doesn't cause more eye strain or problems than you might otherwise encounter. In regard to the previous question, I might point out that it's been pretty well demonstrated that you can screw up live action 3D just as much as you can screw up synthesized 3D. So there's a lot of these kinds of problems you have to be watchful for. That really raises a point that I've been wondering a lot about, because we're talking about trying to maintain quality standards or certify or publish guidelines. But for those of you that went to the CES conference a couple of weeks ago in Las Vegas, you saw a half dozen consumer 3D camcorders. And my experience is seeing home movies and watching YouTube. People haven't figured out framing and zooming yet, and focus, let alone the subtleties of stereoscopic image capture. So I think that it'll be very interesting as people start to experiment. And in some cases, there are some real limitations: some of these cameras, for instance, are built with a very wide interaxial distance, which only works when your subject matter is relatively far. And so when you're talking about these 7 centimeter interaxials, it really limits how these could be used. If somebody is trying to do the kind of close-ups you often see, you will be creating problems. So the short answer is that, yes, there are guidelines that can help consumers create better 3D, just as there are for creating better high definition. I don't see a big appetite for consumers to go through a lot of education on that, unfortunately. We're going to take our last question from Dr. Yang, please. Thank you. I feel like I need to clarify some of my data to make sure everybody understands it. First of all, for the 2D and 3D groups, we have about 100 individuals. And those who report increased symptoms, regardless of 2D or 3D, are only a small fraction. So what I'm trying to say is we have a small group of individuals who are having problems seeing 3D and experiencing symptoms. And 70% of them, they don't.
That means some aspect of their visual ability or motor system is having problems. And what we should do is focus on solving those problems. Because those problems are not only causing problems in 3D viewing, but also in other aspects of our lives. And so I guess what the AAO is trying to say is, it's not the content, it's your eyes. And I think it's partially true, meaning that it is indicative of something. And the second thing I want to mention is we've just completed a 3D gaming study in which we had four individuals vomit during the experiment. Now, but for 2D gaming, same time. So did my 9-year-old. Excuse me? So did my 9-year-old. But the same kind of thing can happen in 2D gaming. And so the question we need to really ask is, is the experience so real that it induces symptoms? Or is the content, or the way of rendering it, so much worse that it's inducing symptoms? And I guess we need to do a study to find out. And we really need the manufacturing companies to test their new systems as soon as possible to figure out what's the problem here. Well, thank you, Dr. Yang. I'm going to give everybody on the panel one minute to sum up their thoughts and comments. So let's start with Ellie. Final comments. This final, that's it. I've been told by the boss that we're out of time. I think we started, when I raised the point of the children, we didn't really cover that issue. Or at least didn't come to much of a conclusion on that. I think this is something that we need to explore. It's not going to be easy to do that, in various ways. Just doing the research is going to be complicated. Well, I completely agree with that. There are too many things we don't know. And the only way you can find them out is to do appropriate research. It's good that a lot of people showed up here, so evidently there's some interest in what these issues are and getting them resolved. But there's a lot of work to be done yet. Agree strongly on the one hand. On the other hand, I don't think that this is something where we need to take 3D stereo and movies off the market. I think that 3D stereo is fundamentally going to be no different than any of the multitude of things that we've already done to impact the visual system. And people are going to be just fine. Fully agree on a lot to learn, but I'd modify it slightly. It has to do with learning about the intersection of visual science and the creative content. I've had the opportunity to review some of the historic literature in my profession and find the same kinds of things were discussed on the introduction of sound back in the early part of the century, about how that disturbs the human system, and also with widescreen. So I think that we have to take into consideration the variability of the entertainment and consumption process. Well, I want to thank the panel and the audience. And Andrew, we have 15 minutes before the next session starts. OK, so folks who want to continue to talk to panel members or myself, we will be up around this area. And you can continue. You can come up and talk. Is that OK, Andrew? Where do you want to go? Definitely. And please join me in thanking the panel. We restart at.
There has been a lot of recent discussion in the media about the potential dangers of 3DTVs and 3D Movies – and yet stereoscopes have been with us for over 150 years, 3D movies for over 50 years, and 3D viewing is also widely used in industry. 3DTV is, however, transitioning from a special event to a 24/7 experience and becoming available to a wider demographic. Where is the truth in the concerns being expressed, where are the falsehoods, and where are the gaps in our knowledge? The panelists gave their views on this important topic.
10.5446/31397 (DOI)
I'm going to start with a computer topology here. Well, it might start by having to connect to the internet again. One moment. I flew in yesterday, and right now my body thinks that it is a little while ago. We'll see if we can compute this. We might not. So, there we go. Right now, this is where I think I am. This is where I think I woke up this morning, but we'll press on from there. As I was preparing for the summit, I was thinking about all the different kinds of data, computations, and comparisons that I was processing in my daily life. So there are personal details about the trip, and there are estimates of time, distance, path I would take to get here, currency exchange rates, time zone conversion, so I don't accidentally call my wife and wish her a good night at the wrong hour. And of course, reading the news presents a whole host of other things that you want to get some better understanding of through computation. This is a shot that we've seen a lot of recently, the riser pipe for BP. You know, you check six different sources recently and you get a half dozen different estimates of how much oil was bubbling up through that pipe each day. 25,000 barrels per day is a recent estimate that seems reasonable. It sounds huge, but what does that number really mean? What's the context for that? Can you really understand what 25,000 barrels of oil is? The raw data to answer all these questions is out there, and there are tools of varying efficacy that can make it easier to answer some of these specific questions. But they tend to be scattered among lots of different sites, lots of different sources, with lots of different custom single-purpose interfaces to get at them. Wolfram Alpha is really an attempt to try to bring all that sort of computation together in one place. So in response to simple natural language queries, our very modest goal, as other people have noted, is to make all the world's systematic data instantly computable to anyone. It's a goal we're never really going to reach, but we've made some really great strides in the last year that we've been in existence. So what I want to give today is a little more of an insider's view of Wolfram Alpha. I think a lot of you are pretty familiar with the system. You understand some of the cool things it can do. But what I want to talk about is really the process we go through going from raw data to really useful computable knowledge. I'm also going to focus mostly on socioeconomic content, since that's my focus. I think other people today will fill in the blanks with maths, some of the other things that are coming up for Wolfram Alpha. So it starts with the data. Wolfram Alpha started with the goal of trying to cover a majority of the knowledge domains that you can find in a good reference library. So there's a backbone of basic socioeconomic, geographic, scientific data. We went to some of the largest, best known repositories for this sort of thing, intergovernmental organizations, statistical agencies of most countries, major scientific bodies. A little more than a decade ago, I started working in reference publishing at a time when there was an explosion of data on the internet from a lot of these agencies. I still remember having to fax requests for data to the Census Bureau or the UN and then wait around for a stack of these to show up. In a very short period of time, though, huge amounts of that data went online.
So we could spend more time getting current data more quickly, disseminating it more efficiently, spending more time curating, editing, analyzing data, than just dealing with mundane details of data formats and trying to get it onto the page. So we're in the midst of a push now with more governments and other organizations pushing their data online. It's great to have all these data catalogs open to the public, but the thing that we discover when you start to explore this open data is that, as Conrad mentioned before, available doesn't always equal accessible. There may be a few hundred, a few thousand, a few hundred thousand new data sets online, but in order to get a specific answer from that data, you need to download a huge data set, understand the file layout, understand the structure, understand the methodology, or if you're lucky, maybe there's some sort of custom tool that will help you navigate it. This is one: the Bureau of Labor Statistics in the US has great data on numbers of people employed in different occupations, median wages, lots of data. If you want to access that data, you either have to download an enormous data file or you can use this tool: you can select a geographic area, refine that geographic area, and then it takes you to this endless list of occupations that nobody could ever find a sensible occupation in. Really what you want to be able to do is ask a simple question, a job, in a place, and get a specific answer, history, some more context, so you understand the range of values that are possible for that. That's really the kind of thing that we're trying to do with data in Wolfram Alpha. In the process of dealing with these data sets, we go to the data providers. People who are giving us data, we work with them, we talk about the kinds of computations that people really want to be able to do with that data, the ways people talk about that data. And increasingly a lot of keepers of these big data repositories are coming to us first. There's more than one person who said, we care about usability, we just can't afford it right now. So the next step that we always go through, what we call data curation, is what to my mind picks up where traditional curation ends. Traditional reference work is often about choosing essential facts, having an editorial eye, picking things that are going to be relevant to some particular issue that's going on right now. But Wolfram Alpha is really about making all systematic data available, not just the little bits and pieces that look pretty good and are kind of interesting. So the first step in this is so obvious, you might not think it's worth repeating, but we look at the data. We use Mathematica to generate a lot of plots, visualizations, comparisons, some programmatic checking for outliers, and kind of strange qualities of a given data set. And more often than not, in data sets that people quote frequently and that play heavily in forming public policy, we find some very strange inconsistencies. One of my favorites: a few months back we were working on livestock data from the FAO, and this is not Andrew's how many sheep in the field question, but at least how many sheep in England, maybe. You could answer with that. And so we were thinking, what would be a good example of our natural language capability when this hits the site? So somebody said, well how about how many turkeys are in Turkey?
So we checked the answer for this, and you can actually see on the site what we saw then, which is turkey, livestock population, turkeys, there's an estimate, but then there's this turkey boom in 1998. There's a sudden jump from about 3 million to just over 5 million turkeys. You expect some fuzziness in data, especially things like this, estimates of livestock populations; somebody is not going out and tabulating every single turkey in the country. But still, we thought there must be a story: some sort of poultry festival in Turkey that year, a short-lived meat packaging company, tax breaks for turkey farmers, something. This turned into a several month project. We don't normally chase down data this much, but we couldn't let this go. We went to the FAO, who sent us to TurkStat, who sent us back to the FAO, because they said the FAO must have better data than we do. The FAO said, no, we get the data from TurkStat. We went back to them, they combed through their records and said that there was no way to find out what happened that year, because the records that should have been kept 13 years ago from those surveys were lost. So this turkey boom is going down in Turkish national history as official statistics. This has led to, we're working on a system now, so that we really want to be able to annotate things like this. We're discovering this so many times that we think that it's important to give some of the story behind the data. And so when we find these things ourselves, we're also working on building up our volunteer network. So we have people in other countries who can help us in analyzing and figuring out what happened with some of this data. I really enjoyed this piece in Turkey. And beyond this sort of basic but essential set of checks, we also analyze the completeness of data sets. You know, we're looking at, if it's socioeconomic data, how many countries are covered, how many properties are there. Does every country have a full complement of data for those properties? If it's time series, how dense are those time series? Or are they very sparse? Is it very old data? Will we be able to do some sensible comparisons between entities, or is it such a checkerboard of data that that's not really possible, and we have to find some other way to teach Alpha to analyze it? You know, another thing is that we always look at units. And we think that we have one of the largest and most comprehensive collections of current and historical units anywhere. You can convert kilderkins to puncheons, if that's your sort of thing. And getting units right isn't always a no-brainer. One of the things people often overlook is whether a unit is a flow variable or a stock variable. You know, people talk about units in ways that are different from the way they actually behave, but in Wolfram Alpha, a piece of data has a kind of a life of its own. We have to know not just the piece of data, but where it fits into some context, where it fits into time, where it fits into place, how the unit behaves in relation to other units. Or simple things like dates. You know, we're not just dealing with recent data, we're dealing with deep historical data sometimes. So we have to say, if we have a date for something, what was the calendar system in place at that time? We have to know, you know, when did this country switch from the Julian to the Gregorian calendar?
When did they change the new year from March 25th to January 1st, so that we can make accurate computations for what the date was at that time, what the equivalent date is now, things like that. Linguistic curation is the next part, and this is the thing that, if we're doing our job right, most people don't think about when they're using Wolfram Alpha. I mean, what we want is for people to type in a query, type in a question, and it just works. But there's a lot of work that goes into that. If we knew that everyone was going to type complete, perfectly formed, interrogative sentences that use the official names of entities and their properties, we'd have a much easier job. But in practice, people type as they would into a search engine. They dump a basket of words into the input box and leave us to unscramble it. So they use colloquial names for things and objects. Or on the other end of the spectrum, experts in particular fields will have their own idiosyncratic syntax and vocabulary for dealing with the data in their field. So we have to spend a lot of time talking to experts. Wikipedia and other crowdsourced sites are not a good source of data for us, but they have been great for linguistic curation. I think when we're trying to understand local names, colloquial names, alternate ways of expressing certain concepts, mining things like Wikipedia has been really useful to us on that front. So we learn some interesting things sometimes when we do that. This one I just learned about recently. This is actually a common name for a fruit fly gene. And if there's a mutation at that site, it apparently gives fruit flies a low tolerance for alcohol. Also known as amnesiac, because mutants apparently, also maybe unsurprisingly, have a very poor memory. So we work hard to accommodate other kinds of nicknames, obvious things like population of the Windy City divided by the Big Apple. We know that you're talking about Chicago, you're talking about New York. We can compute a ratio, we can see what that ratio was over time. Or things like, there are a lot of different ways people can ask a question. China, France population is a perfectly balanced set of keywords. Compare population in China, France, or the fully formed, how many people live in China and France. We need to be able to understand all those things and process them in some sort of consistent, predictable way. Somebody asked this morning about the linguistic, semantic framework for Wolfram Alpha. There is a general linguistic parser that has been improving throughout the past year. And actually it's been a large part of the reason why we've got fewer fall-throughs now. We've cut the percentage of queries where we throw up our hands and say, I don't know what you mean. And a lot of that is because we've improved our ability to understand questions, not because we've added the answer to that question. They were always there, we just didn't know how to answer them. But in addition to that general framework, there are also domain-specific grammars. So there are ways that people talk about and ask questions, and certain kinds of computations people might want to do, in a particular domain. And we can handle specific syntactical structures for those domains. There's another level of complexity that comes in related to disambiguation of terms. How big is France? How big is France's what? We need to be able to say there are some plausible ways to answer this question.
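As a toy illustration of the linguistic curation just described (and emphatically not Wolfram|Alpha's actual parser), a few lines of code can show the general idea: map nicknames to canonical entities and strip filler so that several phrasings of the same question reduce to one (property, entities) request. The nickname and property tables here are tiny invented samples.

# Toy sketch only: normalise nicknames and keyword-order variants to one request.
# A real system would need proper tokenisation for multi-word entity names.
import re

NICKNAMES = {"the windy city": "Chicago", "the big apple": "New York City"}
PROPERTY_WORDS = {"population": "population", "people": "population",
                  "gdp": "gdp", "dentists": "dentists"}

def normalise(query: str):
    q = query.lower()
    for nick, canonical in NICKNAMES.items():
        q = q.replace(nick, canonical.lower())
    # strip filler so "how many people live in x and y" and "x y population"
    # end up looking the same
    q = re.sub(r"\b(how many|compare|in|and|of|live|the|versus|vs)\b", " ", q)
    tokens = [t for t in q.split() if t]
    prop = next((PROPERTY_WORDS[t] for t in tokens if t in PROPERTY_WORDS), None)
    entities = [t.title() for t in tokens if t not in PROPERTY_WORDS]
    return prop, entities

for query in ("population of the Windy City",
              "China France population",
              "how many people live in China and France"):
    print(query, "->", normalise(query))

All three China/France phrasings come out as the same ('population', ['China', 'France']) request, which is the kind of consistent, predictable behaviour described above; the real system layers domain-specific grammars and disambiguation on top of this.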
We assume and give weight to some that we think are the most likely answers, but also have to have some way for people to... I think this may also be related to another question earlier today, that people can refine their queries. We didn't quite get it on the nose, but these are some of the other things we might have meant. Another thing where disambiguation comes in for us is that it's often a great opportunity for content discovery. London, we assume the city, obviously. But there are a lot of other interpretations. There are other Londons in the world, and you could have meant any one of those. But then there are also other occasions where we have data for London in some other context, like as a female given name in the US, which apparently has had an enormous popularity spike over the last few years. I'm not sure why, it's something we're going to have to keep an eye on, and maybe it's the run-up to 2012, we'll see. So once Wolfram Alpha understands what you're talking about, it's gone out, it's pulled the relevant pieces of data out of this enormous database, it's grabbed all those bits of data, the rest is, as we say, because we're a mathematical company, the rest is just math. Because it's really an enormous Mathematica program, Wolfram Alpha can take advantage of all the years of curation of algorithms that are built into Mathematica and compute some specific and relevant answer. And that's a matter of choosing the appropriate set of custom visualizations out of a constantly growing collection. And this is another place where we spend a lot of time working with experts in given fields to understand, here's the data, how should we visualize it, what are some new ways that we can come up with, so that Wolfram Alpha can give a response in a way that an average user can understand and that an expert is going to find useful. So there are a lot of things here. These can be highly complex, highly technical, computing the precise current location of a satellite, for example, from recent orbital elements, or even pulling in some GeoIP data and telling you the current sky position of that object from your present location. Some of these can be a little bit more earthbound. So given a number of drinks, an amount of time, your weight, your sex, you can find out whether you should be driving or doing anything else for that matter. So maybe you'll find that useful later. Dates: if a result is a date, if a query is a date itself, there are lots of things that we do because we understand what these units are. Time difference from that date, observances for your current location, historical events, anniversaries, famous people who were born on that date who we have information about in our database, phases of the moon and so on, or other kinds of comparisons. You know, size of Easter Island: we know it's this size, and here are some other things Wolfram Alpha has found that are within the same sort of scale. So you understand Easter Island, it's this fraction of Rhode Island, it's about 1.3 Disney Worlds or two Manhattans. I'm not sure how it ranks in terms of fun compared to either of those places. Or 25,000 barrels of oil. And what you discover with things like this is that 25,000 barrels of oil is about one and a half Olympic size swimming pools. Or because we actually have up-to-date socioeconomic data, you could even say 25,000 barrels of oil, what fraction is that of total US crude oil production? Nope. Which worked fine for me before. And sometimes Wolfram Alpha isn't sure what you're talking about.
But 25,000 barrels a day compared to 5.064 million barrels a day, we're not talking about quite as staggering a figure as many people might think. Still a serious problem, but in order to put this into some sort of context, there is data right there that can help you to sort that out. So socioeconomic data, there are things that we can do not just with regards to general sorts of units and lengths and measurements, but if you ask a question like this, France versus Japan dentists, we have an absolute number. We also know Wolfram Alpha can go in and pull in other relevant topical information. So other numbers of healthcare workers, other sort of general useful healthcare indicators for that country, or something like underweight children in Africa. And here's another case where it's a larger set of entities. Wolfram Alpha knows that there's a lot more interesting computation and visualization you can do about this. You can generate a summary and get a distribution of all those values. There's a heat map that can let you see exactly where this problem is most pronounced. There's a ranked list of all those countries that we have data on. So all kinds of data that Wolfram Alpha can pull in to help you get a really precise answer and also some useful context to that. So I may have made this sound kind of daunting, but when we have good data and we have knowledgeable experts, which we have more and more of all the time, we can do some really interesting things with complex information in a very short amount of time. So the source information, and I'll see if you've got one mistake that you've used. Up here. Details for the data. It's actually data that comes because many of the different countries have the latest available data from a different range of dates. And so the source data should all be there. And this is actually data that we've recently put in from WHO. Also, comparing also South America and South Africa. South America versus Africa? This is actually comparing two classes of entities is something that we can't currently do. I'm not going to try to get on a VPN connection here, but if I could, I could show you that we are going to have that capability very soon. And in addition to that, trying to let people be very more specific about the kinds of visualizations they'd like to see. If you'd like a linear regression of maternal age versus infant mortality in Africa versus Europe, that's something that you're going to be able to do very soon. So not just leaving it to Wolfram Alpha to figure out the right kind of visualization, but having some more control over the kind of analysis you can do. So in addition to those, lots of other kinds of data that are coming into the system. Everything from global trade to sports, expanded data on plant and animal species, cinema revenues, international cost of living data goes on. I don't think we're going to quite hit that goal of all systematic data in the world this year, but it'll be an interesting time anyway. I'll try to keep moving and cut it off there. Sure.
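A quick back-of-the-envelope check of the two oil comparisons made above. The conversion factors are assumptions (about 159 litres per oil barrel and 2,500 cubic metres for an Olympic pool); the 5.064 million barrels per day figure is the US production number quoted in the talk.

# Sanity check of the comparisons above; conversion factors are assumed values.
BARREL_LITRES = 158.987                 # approximate volume of one oil barrel
OLYMPIC_POOL_LITRES = 2_500 * 1_000     # ~2,500 m^3 in litres

spill_bbl_per_day = 25_000
us_production_bbl_per_day = 5.064e6     # figure quoted in the talk

spill_litres = spill_bbl_per_day * BARREL_LITRES
print(f"{spill_litres / OLYMPIC_POOL_LITRES:.1f} Olympic pools per day")          # ~1.6
print(f"{100 * spill_bbl_per_day / us_production_bbl_per_day:.2f}% of US "
      f"daily crude production")                                                  # ~0.49%

With these assumptions the spill works out to roughly one and a half to 1.6 Olympic pools a day and about half a percent of US daily crude production, consistent with the figures the speaker cites.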
I will give an insider’s tour of Wolfram|Alpha, a unique project designed to make all systematic knowledge available to and computable by anyone. Attendees will learn how Wolfram|Alpha’s teams of Mathematica programmers, knowledge-domain experts, and data curators have been able to transform raw data—from public and private sources, both on- and offline—into “computable knowledge” that can be accessed and manipulated through natural-language input.
10.5446/31398 (DOI)
Modern businesses and organisations have access to an increasing amount of information. This includes private internal information such as product usage, user profiles, and sales history, as well as public external information such as data on demographics, economic growth, and weather patterns. Despite this opportunity, traditional technologies and approaches often restrict access to only a small number of specialists. To reach out to a broad set of users and democratise this knowledge requires fresh thinking in areas such as data access, computation tools, interface construction, and deployment. This talk will review the general issues and show how Wolfram Research’s experiences with Wolfram|Alpha and Mathematica give some innovative techniques to solve this key problem.
10.5446/31400 (DOI)
Here's a simple example of this. I've got here a nice, simple infographic and it shows the GDP versus population of countries of the world on a log scale. I've done my very best to get as much information in as the space allows, so I've got all the points on there. I'm also labelling each one so that we can see which are the countries that are doing well in GDP versus population and which aren't doing so well, and we all know our 150 flags of the world by heart, don't we? Well, how can we get more into this? Well, first of all, we can give it more dimensionality in terms of time. So I can wind this back and see how that's changed over time by being able to slice through the information. And you can see all the different countries jostling, some overtaking others, and you can trace their progress through this time slice that I've got from 1975 up until 2005. And of course we don't know our flags of the world, so if we're really wanting to drill into this information, it might be very useful to know that this sort of greeny-whitey flag is up at somewhere around 9 and somewhere around 12. What's the actual value? So we need to be able to embed ways of drilling into the information. And so here I've embedded tooltips into each of the flags, so I can see this is India, and that in 2005 its GDP was $809 billion and it was 1.1 billion people. So we've got effectively a depth into the document, but in response to the way I want to view that information. Here's another infographic; I was trying to find some things that were relevant. So here I've got the parties in charge of the country since 1900. And here I've got a little bit more freedom about how I want to slice the information. I can decide that I want to see the actual names, so we'll see it stacked, and I can decide I only want to see those years when the Conservatives were in power. Or I can bring all the information back, and again I'm packing as much information as possible into the document. So if I can't remember who, say, Harold Wilson was, then I can just drill down and remind myself he's the one with the pipe. And here's another thing which is sometimes simply a preference thing. So this is information visualised as a density plot, which is not necessarily the style in which I want to see that information. I might actually want to see the raw data, so here are the numbers it represents; I might want to see it as a contour plot; I might want to be able to see it as a 3D plot.
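A hedged sketch of how the GDP-versus-population view with tooltips and a time slider described above might be put together in Mathematica, using the built-in CountryData, Tooltip, Manipulate and ListLogLogPlot functions. This is not the code behind the infographic shown in the talk; the choice of country group, the year range and the missing-data handling are assumptions made for illustration.

Manipulate[
 ListLogLogPlot[
  Select[
   Table[
    Tooltip[{CountryData[c, {"Population", y}], CountryData[c, {"GDP", y}]},
     CountryData[c, "Name"]],                  (* hover a point to see which country it is *)
    {c, CountryData["G8"]}],
   FreeQ[#, _Missing] &],                      (* drop countries with no data for that year *)
  AxesLabel -> {"population", "GDP (US$)"}, PlotRange -> All],
 {{y, 2005, "year"}, 1975, 2005, 5}]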
They're all the same information but depending on my personal preferences, whether I want the details of the numbers or whether I'm a sort of 3D geometric thinker or whether I've got some kind of mapping background that I want to see contour. I can decide the style of presentation at the point of view. Now there's a flipside to embedding more information and that's hiding information that very often you want to present a minimal amount of information and traditionally that has been writing down a minimal amount of information in the document and keeping the rest or throwing it away. But we don't want to throw away information, we want to keep it, so we want to put as much into the document as possible. But we don't necessarily want to confuse the person who's reading or consuming the document with all this extraneous information and not to now the best that's traditional publishing has managed is to have appendices and footnotes that push the information to the back of the document. But hiding information is vital. In this particular presentation, every single diagram that I've shown so far will, all of the information required to reproduce it or to modify it is in the document. But you're not seeing that because I've hidden it because it's not a part of this presentation. But when you download this later after the conference, if you have the appropriate tools, you'll be able to get the raw data out and analyse exactly where these things came from. Which was a question earlier about the interactive thing that Conrad showed. I'll actually show the code for that in a minute. So the next thing is about being able to feed back into the document. Traditional documents are for reading, but we want to be able to change the information in them. And I don't mean editing here, just writing my own content over the top. I want to be able to take something and use it on my own information. So I've got this document about photographic image detection, it's the first step towards image recognition. And it's a very traditional document, we've got an explanation and we've got some maths. And then the author set up an example to show what this looks like. Well, it might be if I'm thinking about factory processes and I want to see how much this actually works on recognising a fault in a clock that's coming off a virtual. And this is not a very useful example. I want to see this process applied to something that's relevant to me. So I should be able to take my image and feed it in and use the knowledge, rather than having to read the knowledge and somehow replicate it. I should be able to feed what I've got back in. Well, I decided for the demo purposes I'd take this to a somewhat extreme level and I'm feeding it in live. So here's the exact same thing implemented taking information from me as a reader. So I as a reader now decide what happens with edge detection when I put up 10 fingers and whether it seems to pick them out appropriately. So now this document isn't a one directional thing, it's two directional. I'm having a conversation with it, literally actually as it happens. And this ultimately all comes down to the theme of the conference, that what it should really do for publishing is to compute. Computation isn't the act of producing a document, it's part of consuming it as well. So I bought a few examples here. Here's one that's trying to illustrate the response of a particular circuit. If I was in electronic engineering I might be interested in understanding this principle. 
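A minimal sketch of the kind of live edge-detection loop described above, using Mathematica's built-in CurrentImage and EdgeDetect functions (available from version 8). It is an illustration rather than the actual document shown in the talk, and the update interval is an arbitrary choice.

(* grab a frame from the default camera, run edge detection on it, and keep refreshing *)
Dynamic[EdgeDetect[CurrentImage[]], UpdateInterval -> 0.5]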
Now, I've got a nice infographic that we can see some weird oscillation happens here, but really get an intuition for it. You need to be able to play with the model. And so the ability to go into this and change some of these parameters and see, okay, if I turn that up it seems to oscillate less. What about this one? I'm not so electric engineer so I don't know what these refer to. You can see here that I've got what, 6, 8, 10 sliders and I've got maybe 1000 degrees of freedom each. That's a literally astronomical number of different permutations. So it's fundamentally different from these interactive things I showed at the beginning, which were pre-generated. And very often when you talk to publishers and say, oh wow, content's interactive, what they mean by that is they've pre-generated a sequence of frames and then you can click through them as an animation or using a slider as I have there. But you can't ever show something that hasn't been pre-generated and embedded in the document. This is generating the visualisation as I asked for it because it cannot possibly hold 100 trillion different versions of the image depending on all the permutations of the input. Conrad stole my first example so I won't go into this one. But I will just answer that question if you want to know the model, here's what he was supposed to be showing, which is the base model. But I go further and say, well that's a editorial decision the author has made to show that little snippet of what the model is for the utility function. But in view of how you can hide information, I don't throw anything out. So here is the entire source code for creating the interface, the maths and the graphic. So it's reasonably verbose, but it's all there so that you can understand exactly where that model comes from and exactly what computations are going on. And at some point they start merging into the idea that we are getting closer to applications because here we are actually trying to find a useful answer directly from within the document. This isn't about helping me to understand this model, this particular one is working out how to negotiate a lawsuit. You don't want to go to court, so you want to settle out a court, but what's the right number to suggest? And it's got lots of degrees of freedom where I can decide who's bearing the costs and I can decide that the plaintiff's got high litigation costs and the defendant hasn't. And the probability of having a zero outcome is actually zero, we're definitely going to get something out of it. Although it's a bit small to see on the projector, down here we've got the actual range in which I should be negotiating. So if I'm going to make a proposal to them, I'm going to be somewhere in this interval and obviously I want to be at the lower end of that. Obviously the cramator said I'm not realistic here because it suggests the plaintiff might actually be willing to pay me, which obviously isn't realistic in this model. But here I'm actually using the document, not just to explain to me the model, and there might be some text supporting this, to explain how I should use this or how one goes about negotiating or even whether I should stamp the table as I negotiate. But I'm actually getting answers that are useful to me. And all of that takes computation. So, why don't we see more of this? Why is it that most documents are like the ones I showed at the beginning? 
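To give a flavour of the "astronomical number of permutations" point just made, here is a hedged sketch of a many-slider Manipulate in Mathematica. It is a generic damped oscillation, not the circuit model or the negotiation model from the talk; every parameter name, default and range is an assumption made for illustration.

(* each slider spans a continuum of values, so frames are computed on demand, not pre-generated *)
Manipulate[
 Plot[a Exp[-d t] Sin[w t + p] + b, {t, 0, 20}, PlotRange -> {-2, 2}],
 {{a, 1, "amplitude"}, 0, 2},
 {{d, 0.1, "damping"}, 0, 1},
 {{w, 2, "frequency"}, 0.1, 10},
 {{p, 0, "phase"}, 0, 2 Pi},
 {{b, 0, "offset"}, -1, 1}]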
And even away from the formalised world of academic papers, the average PowerPoint presentation has bullet points with words in and pictures. Well, one thing that pervades the whole problem is the paper mindset. Now we've moved on from necessarily having to print things on paper, but we haven't moved away from the mindset that paper defines our idea of publishing information. And it affects all the decisions we make in an editorial, in a design sense for how we compose the content and lay out a document, because we're still having our back and wide contents, this A4 rectangle of paper that notionally at our lines we're laying the thing out to fit on. So we have to think in a much broader sense about publishing information. So at the lowest end you have still got real means for putting things on pieces of paper. So that is part of our output. There are static formats like PDF, which are essentially a directly an electronic version of paper that we have to worry about in images that sometimes even in an electronic world we're still thinking in terms of paper. But then there are things like web pages where the whole navigation and expectation of interactivity is different. There's presentations like this, which are essentially another way of navigating the information, whether it's a page at a time with a slideshow or whether it's automated as a movie. But there's other purposes as well, like data transfer is a real purpose for documents. Excel gets used this way very often. It's not about explaining something to a human, it's about taking a bunch of numbers from one place to another. And that shouldn't be a different application. That should be the same thing as transferring the information for human consumption. And then all the way round at the end we've got things that are applications, or used to the idea of a software application, but we're thinking of them somehow differently. So we've got to get away with it out of this paper mindset because it imposes lots of limitations conceptually on what we do. Some of them are very obvious. It's two dimensional, whereas the digital environment has an arbitrary number of dimensions to it. The capacity is small and this affects our desire always to find summarise and to aggregate information. And just as in data analysis, you want to do aggregation as a last step, so it should be with knowledge that the summary is useful, but you don't only have the summary because you've thrown away all the detail of the knowledge, and of course in the digital world we have every increasing amount of capacity. Also the data and the assumptions in the document are frozen when you think in terms of paper. You've got this idea that you've put the numbers in and generated the visualisation or the summary, and that's it. No one's going to change it. They want to change it, but they're going to have to rewrite what you've done. In the digital world we have to think in terms of being able to update the data so that I don't have to worry about the fact that when an author wrote a paper about the GDP of the country, he didn't know what the GDP was going to be in 2012. I should be able to update those assumptions or bits of data. And also what's communicated is now merged together, that this idea that you have word for communicating with people that excel for somehow having moving data for more computational purposes, there should be the same thing. 
The document should be both human- and computer-usable, and of course in the paper mindset it's for reading, it's for human consumption only. And the information transfer, as I've shown, is one-directional when you think in terms of paper. It's author to reader, and in the digital world it's bi-directional. The reader should be able to contribute both in information and in assumptions and interact with it. And in the end it means that now the reader gets to control, to do the driving, to decide where you go with the document. When you're in the paper mindset the author decides everything: the logic of the argument goes from line one through to the last line, and it's all predetermined, apart from the reader possibly sticking a finger in the page to remind himself of the useful page he needs to go back to. In the digital world it's much more up to the reader to decide what they need to spend more time on, what they need to work with, and what they can skip. Is it a danger to even present these options to readers that don't use them? That's a very good point, and in the end it's a question of presentational style. Just as you can write a plain text book that people don't read because they find it impenetrable or boring, if you have some level of interactivity that is not using the kind of metaphors people recognise and know what to do with, they'll be confused and they won't interact, or they won't see the point of it. In my experience, I don't see that very much actually. What I see is more of a problem where I talk to publishers and they say, I want an electronic version of this book, and I say, what should it do? Do? I want an electronic version of the book, didn't you hear me? You've got PDFs or something, what can you do? For me, the problem in my experience is on the creation side; the consumers will use it in so far as it's useful. That might not be worth the cost of producing, and we'll come to that in a minute. I'm going to take a little jump here to introduce somebody else who hopefully will show another example of not falling into this electronic book mentality. I'd like to introduce Max Whitby, who is the CEO of Touch Press and has addressed this question: when you have a platform like the iPad, what can you do with it that is more compelling than simply mapping information from a book? I have to say, being back in this room tends to bring me out in a cold sweat, because in a former life I was a science producer at the BBC, and many of you may know the wonderful series of Christmas lectures which are given here for children; they're fantastically well done. There's quite a lot that goes on behind the scenes to make six hours of effectively live television happen. I have a vivid memory of being in this very room one Christmas Eve at about half past three in the afternoon and being asked by the great scientist who was giving the lectures, who will remain nameless, to procure for him a jar of pickled sweep's testicles from the Victorian period which he wanted to demonstrate. I managed to do it as well. I'm not here to talk about that, I'm here to talk about a book which we published two months ago on the Apple iPad.
It's a book written by Theodore Gray who many of you may know co-founder of Wolfram Research with Stephen Wolfram and Teo and I have been good friends for quite a few years now and I share with him a irrational and deep obsession with the periodic table of elements. When the Apple iPad was announced by Apple in January we knew we had a fantastic opportunity to make an electronic book which I believe demonstrates a lot of the things that John's just been talking about. The two particular advantages we had is that between us we created a great collection of objects to do with the periodic table. We had the good fortune to photograph all those objects for our website periodictable.com as very high resolution rotations, 720 views per object, gigabytes and gigabytes of data. That was sitting on our servers and just ripe to be published. Secondly, we had the whole connection with Wolfram Research and Wolfram Alpha and we knew that we could create a book which would pull this wonderful curated data from the Wolfram Alpha resource and also the ways in which it can be presented. Let me share it to you. I'm going to ask my colleague Fiona Barclay to join me and we'll see if we can get this showing on the camera. I'd like to just make it, I'd like to emphasize that seeing the iPad on a screen is very, very different from the experience of holding it. For those of you who haven't used the iPad, we'll have two or three of the machines out in the coffee break and please, please come and grab us and actually have a look because it's very different. We'll see first of all if we can get the camera working and I've got so many apps loaded I need to find the elements which is here. We'll choose first of all the home screen of the elements which is a moving periodic table. Fiona, if you zoom in a little bit onto one section you can see that the quality of the image is very high and what you're looking at is all the bytes of data of our rotations that we filmed for all these samples and objects to do with the periodic table. If I choose one of them, say gold, we bring up one of those rotations that we're playing back. We have here some facts and figures about the properties of gold and its crystal structure and all kinds of other things. Down at the bottom here we have the good old Wolfram Alpha logo so if I select that we're now actually going live to the Wolfram Alpha servers and with a little bit of luck in a moment or two it will pull up lots of information about gold and it will not only present that in numerical form but it also will pull in graphical displays that show for example the electron that we're playing. The electron orbitals of gold which in fact is one of the reasons it has its beautiful colour and its chemical inertness and in fact just here if you can zoom in Fiona at the very top we can see that the current price of gold is a stonking 853.76 pence as of a few minutes ago on the London Metal Exchange. So it's a good example of a book where we're publishing information which obviously changes all the time. Another relevant example if I go to a different element is one you might not have heard of and that's this one just here. This is Copanissium which in fact was named after we began work on the book and if we bring up Wolfram Alpha to find out a bit more about Copanissium we actually can see that it's one of those elements right at the far end of the periodic table that are made artificially by bombarding helium and helium nuclei at other elements and it has a half-life of 40 minutes. 
Now that's a piece of information which was only very recently discovered and confirmed and published in peer-reviewed journals but Wolfram Alpha has that knowledge and can therefore incorporate it in. Now unfortunately Copanissium is a very dull element in the sense that you wouldn't really want to have any of it around it would instantly kill you. Highly radioactive so we'll go back to something maybe a bit more attractive let's try I think the greatest underrated element in the universe which is copper and if we turn the page and go to page 2 for the copper you probably saw all those illustrations fell into the page and in fact every single one of them is live I can actually touch it with my finger and spin it round and actually if I give the iPad a shake we wrote the code with a few loose sub-routines and all of the objects will actually spin. I can also double-click double-tack on the screen and you'll notice there are two dragons that I can independently move round and there's another incentive to come and say hi after this. It's a complete gimmick but it's absolutely marvellous to watch. We have 3D glasses and if you look there you get the most fantastic dragon popping out of the screen in front of you. So this is an example of what I believe is a new genre of book and it's one where we at Touch Press are working in partnership with a number of different publishers, a number of different authors and we're really excited to be creating something which keeps all the good qualities that a book has especially the authorship but enhances the information and the knowledge that's contained and in particular brings the kind of tools which Wolfram Alpha has shown to bring that information alive. Thank you. I have to say it is great fun to play with. First step, get ourselves out of that mindset. Think about what we can do with information rather than just thinking about how we're going to print it. Now there's been two practical problems that also slow this down. One is that programming has been quite hard. It takes a lot of effort to write programming code. So authors and programmers have traditionally been very different people and that's percolate in the right room to organisations where you have publishers who produce content and books and then you have software developers who make programmes in the software and they've been very separate types of organisations employing very different kinds of people. Even within an organisation, when you get into large organisations, the skills required to deal with different kinds of outputs, these different deployments of the information have often been quite different as well. A typical large organisation will have a print production department and a website development team and then a separate tools team who build applications for use within the organisation. Even the content that needs to get delivered to these people for people with the knowledge often has to be replicated and shared out and explained three different times and put in three different formats. So what can we do to overcome the challenge? Well the first thing of course is let's get out of that mindset. Let's think in terms of documents and applications being the same thing. There's no longer this distinction that you have for static, it's a document, it's interactive, it's an application. As you've seen through these fairly simple examples, there's all great shades of grey between one and the other and they can all work together. 
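The elements book's own code for its live lookups is not shown in the talk, but the same kind of lookup can be sketched in Mathematica with the built-in ElementData and WolframAlpha functions. The particular properties and the price query below are illustrative assumptions, and the price query needs an internet connection.

ElementData["Gold", "AtomicNumber"]        (* curated element data, available locally: 79 *)
ElementData["Gold", "MeltingPoint"]        (* melting point, returned with units in recent versions *)
WolframAlpha["current price of gold"]      (* live query to the Wolfram|Alpha servers *)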
Automation is another key part, computational automation. And that I think is perhaps the tipping point that we're reaching at the moment, that just as word processors, automated the task of typesetting so that my author at the beginning did not know what to do. Author at the beginning didn't have to get in a team of printers the way that Newton would have done with his pencipia. Now we can automate the task of creating knowledge applications. And I think I'm going to have to skip out my demo here. I was going to make one for you right now but Zoe told me that I'll get cold coffee if I go too long. But the idea is that by using computation you can automate not just the computations within an application that described the logic of your knowledge, but also the construction of the whole application so that in 60 seconds I literally can make an application. Now is that automation good enough? Can we get people to do this? Well here's the little project we started to see if that's true. It's a particular format of publishing so there are little mini applications and mini bits of information so they're meant to stand alone demonstrations of a particular topic. And as of today we have 6169 of these so that's a pretty serious number of people, some several thousand people who have taken their knowledge and created little applications and done and it's been low enough effort and cost that they can give them away free as part of this collaborative project. And I'm just going to jump into one here as an example. There's a school teacher in the US who thinks this is a great way of making her maths class do the equivalent of writing an essay. She tells them go out and explain a mathematical topic by making a demonstration so I'm just going to load this demonstration into the browser. And so here I am in Firefox with this demonstration written by three high school students, somewhere there are credits on the page. James Chae, Brian Halforddon from Torrey Pines High School. And this is supposed to explain basically their calculus problem, how big is the shadow on the wall as a person walks towards it and they've calculated the rate of the shadows changing and made this cute animation. And these are high school kids making these kinds of things that only a few years ago would require professional software developers to be able to make. Workflow automation is also very important for having multiple outputs that can hit all of these different things like applications and webpages and print and so on. What you need to do is to create a document that's the superset of all of these possible outputs. So this is part of the task is to hide information. Really that's hiding from one particular output. The information I've hidden in this document is hidden from the presentation type output. But it's absolutely vital if I'm going to turn this into a set of applications that live on a website or if I'm going to make flash animations of them. Then having that hidden information becomes more important than the visible information and things like the pretty banner at the top become unimportant. So we can use automation to take the superset of information and depending on the deployment to throw away the bits that don't matter. You throw away the interactivity if it's destined for creating on a piece of paper but in another case you throw away other information. And to do that you need a document format that supports computability. 
Both the ability to embed computing as I've been showing, but it itself has to be computable. So this is a data structure that can be transformed and analysed. There's a lot of, in the same paper mindset, a lot of formats for documents set up with purpose when they're up on paper. So the idea of having a computable structure hasn't seemed important. So internally it's very hard to do something with a document other than what it was intended for to print. So we're working very hard to bring this together. You've seen a lot of technology already that does a lot of this, but our big plan for the year is to release a much more open extension on this format, which has a codename of computational document format at the moment. We'll have to wait and see if it actually sends out to be that. That will make it much more accessible and easy for people to be able to boost the kinds of things that I'm showing in demo mode here just as a part of their routine workflow. But you'll have to watch our company's activities to see that come out later in the year. So let me wrap up so that we can have some coffee. The key thing if there's anything to be gained from this is changing the relationship between the author by encoding all of the author's information. Everything that's in their head can be put into that document so that it's no longer a summary and an aggregation of the end result, but it's set out there for the reader to interact with it, to have a dialogue with the author, even if they're not there in person, rather than making it a reader, simply a consumer of summaries. It's computation for the theme of the day, the computation that makes all of this possible. I'll leave you with one question, which is to think about your organisation. It's going to vary from place to place, but where is the value in your intellectual assets? I doubt if it's the words or the pictures that get produced in the documents that they're clicking around, is in the ideas and the methods that it's the expertise. That's what you've got to think when you try and publish. What are you trying to publish? Are you trying to say, look how clever we are, we know this stuff, but we're not telling you any of it? Or are you trying to actually publish information that is useful in a knowledge economy? I will not wait for an answer from that. Do we have any time for questions, or do we have to go straight to coffee? Straight to coffee then, which is outside? That's what I'm going to say.
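As a footnote to the student demonstration mentioned earlier (the shadow on the wall), here is a hedged sketch of how that kind of related-rates illustration can be put together as a Manipulate. It is not the students' actual code; the geometry (a light at ground level, a person standing between the light and the wall) and all the parameter names and ranges are assumptions made for illustration.

Manipulate[
 Module[{s = Min[h d/x, 3]},   (* similar triangles: shadow height = person height * d / x *)
  Graphics[{
    {Orange, Disk[{0, 0}, 0.15]},                      (* the light source at the origin *)
    {Thick, Line[{{d, 0}, {d, 3}}]},                   (* the wall *)
    {Blue, Thick, Line[{{x, 0}, {x, h}}]},             (* the person *)
    {Gray, Thickness[0.02], Line[{{d, 0}, {d, s}}]}    (* the shadow cast on the wall *)
    },
   PlotRange -> {{-0.5, d + 0.5}, {0, 3.2}}, Axes -> False, ImageSize -> 400]],
 {{x, 3, "distance of person from light"}, 0.5, 9},
 {{h, 1.8, "height of person"}, 1, 2.5},
 {{d, 10, "distance of wall from light"}, 9.5, 15}]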
While computing has revolutionised the productivity of authors, the output has mostly not changed. By using readers’ computing power to do more than just deliver and render words and pictures, it is possible to increase the bandwidth of communication between authors and readers. But how can we enable authors to put the interactive richness of a software application into their documents without training them as programmers? This talk will examine technologies and workflow principles to overcome this barrier.
10.5446/31402 (DOI)
I want to talk about education with mathematics and how mathematics is getting used. So I want to talk a bit today about my views as to how mathematics needs reform in education and what we should do about it. So let me start off. Let me actually, before I get to what is maths, talk a little bit about why we teach people mathematics to start with, and let me just go back for a moment so I can address that. I think if you're brutal about it, there are about three reasons, practically, roughly speaking, for learning mathematics. One is technical jobs and all the industries and technology that depend on that. One is everyday living. The world is now vastly more mathematical. You cannot survive today successfully in any sense without some quantitative understanding, vastly more than in previous decades or centuries. One needs an understanding, in the way that Andrew Dilnot amongst others described this morning: the world relies on understanding things like government statistics, and if you don't have any quantitative understanding of the world, it's quite hard to function these days. And the third is what you might describe as logical thinking, a process that has been so successful in the world at producing results and allowing people to understand new relationships and new ideas. It's extremely important that people understand that process of logical thinking. So in a sense, I would argue that those are sort of the reasons why we teach people mathematics. Let's talk a little bit about what mathematics actually is. Now I would argue there are roughly four pieces to mathematics, and I would argue they're roughly this: posing the right question. That is by far the most screwed-up aspect of mathematics in the real world. People ask the wrong questions and, surprisingly enough, they end up, for that reason if not for others, getting the wrong answer. By the way, for some reason I've left the s off "maths"; it depends which side of the Atlantic one is on as to whether you have the s or you don't have the s, I suppose. The next step I consider is a transforming process. You take the real world (and when I say real world I might mean the very practical real world or I might mean some theoretical model of the world) and you want to turn that into a kind of maths formulation. So this is very powerful, because I guess in a sense I think of maths as this very powerful way of getting answers to things. But you've got to get things from the form they're in into this mathematical form. And then step three, which I will come back to at length, is the computational step. You take what you've set up, and in a sense in step two you've got to understand where you're trying to go at some level too; you take what's in step two and you transform it somehow into an answer to the question you were asking. And then what you do is you go the other way. You take the maths formulation and you turn it into some real-world result that you've been essentially trying to work out.
And crucial step you might label as 4B is verifying your answer, figuring out did you get an answer that made any sense? Does it answer your question? OK, so what are we doing at the moment in maths education? Let's just ask that question. What are we doing with these four steps? And what I'm arguing we should be doing is the real step I want to focus on for a moment is step three. Right now in maths education I reckon we're spending about 80% of the teaching time on step three by hand. Getting people to learn processes for hand calculating results. Not setting the problem up, not asking the question, not dealing with what the answer comes out as, but just hand calculating, solving an equation, inverting a matrix. And my argument would be this is the wrong way round. We should be spending the 80% of the time on steps one, two and four. And the 20% of the time perhaps on a mixture of hand and automated calculation. And that if we did that we could have many more positive results out of maths education. And so I would argue that we should really be using students to do steps one, two and four and computers to do step three. Now, you know, I don't, there are reasons I'll come back to why you sometimes want to do step three by hand, particular estimating things and sort of rough calculations. I think it's extremely useful to do by hand as well. But in terms of very formal mathematics, the sort of mathematics you do if you have a piece of paper and a pen in front of you, rather than doing things in your head, I am very far from convinced that we should be spending anything like the amount of time we are in education on step three. The crucial thing I'm arguing is that maths is not equal to calculating. In fact maths is a much greater subject than calculating. The problem is that over the last few hundred years, well in fact, I'll rephrase that, mathematics up until the last 30, 40, 50 years, those two have been completely intertwined. Because the only way to do calculating was by hand. There was no choice. But of course now and increasingly we have calculating done on computers to an extent where the computer is just vastly better and essentially any kind of step three that I'm talking about than almost any human after years of training. So you seriously got a, you've got a disentangle mathematics in general from if you like calculating step and decide what it is we really want to do with that. Now one thing I want to make quite clear is of course there's one place where this, or one place I say, the majority of places where this really has happened, outside education. You want to look at what's happened where you have computers applied to mathematics and you use them for the calculating step. Look outside education. Look at everything that happens outside education. Look at the fantastic increase in use of mathematics over the last 10, 20, 30 years in every, in every swath of life. So that's what's happening outside education. If however you decide that inside education somehow we should regiment or regulate that instead of following what the outside world has done and use computers for calculating, we instead decide everyone has to be locked into manual calculating, you can see that it is highly easy to generate an increasing size of chasm because the world outside education is forging ahead. It's getting, producing more complicated results, doing things in a more sophisticated way because they can use computers and you can't replicate that if you do everything by hand. 
Just today, today is a school day still, yes I should know that, my daughter is, no no it's not, it's half term, no it's school, it's finished, I'm confused, okay. Anyway if today were a school day, I reckon 16 UK lifetimes have been spent learning how to calculate just today. So there's a lot of time we're spending. So you know if one's going to argue that we should be hand calculating teaching or teaching what I call the history of mathematical calculation to people, it's, one's got to justify how one's spending this inordinate amount of time on that given what we're trying to achieve and given the apparent failures of such a system. You see I keep coming back to the idea that calculating in a sense is the method, it's the machinery of mathematics, it isn't the whole subject. And usually what the whole advantage that automation gives us is it allows us to separate in a sense the methods from the tasks, the machinery from what we want to do. You know if you want to drive a car nowadays, you know 100 years ago you needed to know everything about you know how the ignition timing worked and the details and mechanics of the car in order to actually get it to move anywhere. You don't need that anymore because cars have automation layers between you and the, and the process, here's the point I want to make, the process of driving is very separate to the process of designing a car or mechanics. They weren't separated long time ago, they are now. So I wanted to bring those out in, in some of the issues that people bring up about what I'm suggesting here. People say well you know you can't use automated stuff before you know the basics, you've got to get the basics first, that's one concern people bring up. You've got to be very careful, what do you mean by basics? And I think what often happens is people confuse historical order of invention with what is the most fundamental to the subject. Just because we, you know pens were invented before computers, it doesn't mean that necessarily they are the right way to do mathematics, they're not necessarily more basic use and way of formulating mathematics than computer use. It doesn't follow. So what I think basics are, are a fundamental understanding of the process and some intuition about what will happen in different cases of that process. And I would argue that the basics of the subject are not the mechanics, the machinery of calculating, but they are this whole process that mathematics are beautifully lays out which allows us to calculate so many things in the world and have moved our societies and our economies forward so far. These steps one to and four that I was talking about. So another argument that actually this is the one that I find most distressing of all the arguments. This one, many other arguments I have legitimate agreement with certain aspects of them so to speak or I agree with legitimacy in them. This one, you know, that somehow the idea is if you get people using computers to do mathematics that somehow it will become mindless and that if you do it by hand it's all somehow very intellectual and brain training. You know, do we really believe that most people studying mathematics right now don't believe it's fairly mindless? 
Most of the actual students studying mathematics, and I'm not talking about the top echelon, I'm talking about the average student in school right now, would basically think they're running through a bunch of processes they don't really understand, that they don't really understand the consequence of or the reason for doing, that seem rather dull, and nowadays, which is distinct from maybe 50 years ago, I would also argue are rather pointless in practical use. So that's kind of where we are; so in a sense the baseline we're starting from, with teaching hand calculation all the time, is relatively low on this one. Let's talk about computers and whether it becomes more or less mindless with computers. Computers are a great tool for doing mathematics, and like all great tools you can use them really badly and you can make stuff really pointless. And a way to make it really pointless is to turn everything into a multimedia presentation and to wrap up the use of the base tool in some kind of cotton-wool system, so that you're somehow leading the student in some way that's very sort of discrepant from the base tool that they could use. So here's an example: I was shown not so long ago, by somebody who was rather proud of it, a presentation showing how a computer program helped the student solve an equation by hand. So what it had was, it was showing somebody, well, you know, I can't remember what the equation was, but x plus two equals four, that you can, if you move the two to the other side of the equation, change its sign, which incidentally is the way I understood how to do equations for many years. It has nothing to do with what you're actually doing, but anyway. It showed you a nice graphical way of how to do this. This strikes me as utterly nuts. Why are we using computers to replace the teacher to show a student how to do a calculation by hand that they ought to be doing on the computer anyway? This seems completely backwards in every kind of way. I mean, there are justifications, when we have shortages of teachers and so forth, for trying to get computers to help in the provision of education. But as a way to do computer-based maths this just seems completely the wrong way round. What I'm arguing for is open-ended use of tools, tools that have increasingly become easy to use and ubiquitous. And I think we'll see over the next few years, I mean already with, for example, Wolfram Alpha on the web, which is free and open for everyone to do maths stuff, and with the iPads and future things, that the stuff is ubiquitous. Everybody has calculating machinery around them and will do in the future in every school. By the way, the UK of course is not doing badly on that front anyway. So somehow the idea that mindlessness is introduced by computers is utterly false. The correct use of computers does quite the opposite. And one of the reasons I think it does quite the opposite is because it allows you to try much harder problems. You can solve equations instead of your, what was the one we used to do, x squared plus 2x plus 1 equals 0, and that's right, you can work out that x has a repeated root at minus one. That problem doesn't usually come up in real life. It's much muckier. But the principles that you might use to set up an equation or to analyse the results of the equation, those principles are the same.
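For the equation examples being discussed here, a hedged sketch of what "letting the computer do step three" looks like in Mathematica: the tidy textbook case goes to the built-in Solve, and a deliberately messier equation of the kind the talk goes on to mention goes to a numerical root finder. The messy equation and the starting point for FindRoot are made up for illustration.

Solve[x^2 + 2 x + 1 == 0, x]                   (* the textbook case: a repeated root at x = -1 *)
FindRoot[x^17 + 0.2 Sin[x] == 1, {x, 1}]       (* a messier equation, solved numerically near x = 1 *)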
It doesn't matter if it's x to the 17 plus 0 to sine x plus et cetera et cetera, same principle in terms of actually working with the mathematics. With a computer you can actually do that. And I would even go slightly further and say that some of the disasters in recent time we've had with the misuse of mathematics, most particularly in the local environs here in financial risk calculations and the like. Some small aspect of that, I'm not claiming that they're bigger than that, but some small aspect is because people are in fact driven using hand calculation to try very simple problems because that's all they're allowed to try. It's improved a bit with arithmetic as a calculator, but in terms of symbolic problems you still have to have things that work out neatly. Well guess what, risk calculations in finance associated with different risks of what don't work out neatly. They're horrible. And you need to get used to trying horrible things with horrible, messy results that don't work as you want, where sometimes you can't do the calculation, rather than always trying to simplify everything down because that's what you were taught at school with hand calculating. OK, well another argument that comes up, and I think this one is actually I have a lot more sympathy with this one, applying procedures teaches understanding. So I don't particularly, so one of the questions is, you know, if you somehow learn, even though let's say solving an equation by hand, might itself not be particularly useful, doesn't it teach sort of logical thinking that is useful for other aspects of mathematics or other conceptual parts of the subject? I think the answer may be to some extent yes, but I would basically argue the following. I think for any impractical example you can think up, you can think up a much more practical example that does the same job. So if you are going to argue that, you know, people should learn, so I think, you know, I think there is a perfectly valid case that people should learn things that help train their logical thinking. Absolutely I agree with that. I'm not suggesting that everything is just for a purely immediate practical outcome like that. But I don't think that simulating historical ways to do this is, you know, is the most imaginative thing to do, and I believe that with some careful thinking through the different cases one can come up with mixed computer human ways to do this that are actually much better at training it. And I'll pick one in particular that is absolutely critical. You know, the way in the modern world that procedures really get noted down is programs, writing programs. That's what really happens. And we've talked today a lot about, you know, whether it's formally a written program. I mean, as in you type in a piece of computer code in language that the computer understands, whether it's somehow human linguistic programming. But whichever it is, the idea, you know, if you really want to learn about procedures and how proceduralising things work, you get a computer to do it through a program. I mean, that's what the world is built on these days in many respects. And so I think one of the best ways of teaching procedures, their importance, the importance of following them, the importance of understanding how to set up a procedure and so forth is to get people to write programs in whichever form is the easiest. And I think that's very undervalued in what we do. And again, one of the great things about that is it's much more fun. 
I mean, program stuff that comes out, you get much more happening because nowadays with computers you can build really quite nice programs quite easily and get really nifty results out of them. And you can get much more experience. You can try many, many more things through the program. And this whole thing, by the way, of experience and intuition is really important. You know, in the time it takes you to a manual calculation by hand, you can run 20, 30 calculations that are much more complicated to get graphical results on a computer. And by doing that, you can build up some intuition of what kind of results you should expect, what makes sense, what you're expecting in the case, in a way you can't do by hand. In fact, even worse by hand, if you're anybody like me when I was doing these things, I got a good fraction of them wrong. And then you look at the wrong answer and then you finally get something right or you get told three days later by a teacher that you got it wrong. And then you can't really remember in your mind which of the results it was. It kind of reinforces the negative. You can't quite remember which one was the wrong answer, the right answer, in building up your intuition. And it really doesn't help you go forward in remembering which of these is kind of right and building up that experience. So, you know, again, I think there are many positives to doing that. So I'm really arguing that mathematics, if it is computer-based in the right way, has a rather unique possibility to move forwards of becoming both more practical and more conceptual simultaneously. Many subjects, you know, you can argue there's a sort of trade-off between the vocational and the intellectual. And I think perhaps strangely right now in mathematics, we have a chance to improve both simultaneously. And I think it's, you know, and, you know, so simple example, very, very trivial example of this. Let's teach calculus early. Why do we wait for so many years to teach people calculus? Well, I'll tell you why, because it's damn difficult doing integrals by hand. Right? I mean, you know, it's tricky. I spent years trying to learn all the little tricks for working out an integral. Now, calculus is a really useful subject. It underlies much of the technical mathematics that's used in all walks of life. But, you know, working out technically how to calculate an integral is very distinct from that. So I argue, you know, you can start to build up intuition about calculus rather early. And this was a little thing I made for my four-year-old, five-year-old daughter. This is perhaps a little bit early, but... So, and I was, you know, we were talking about sides on polygons like this. And by the way, an important feature of such demonstrations is you have to be able to change the colour. It's a sort of important characteristic. And I was told that I had to be able to make it sit on its bottom or sit on some other side or point or whatever. Anyway, the point of this was, you know, as you increase the number of sides, right, to a large number, what happens? Hey, it turns into a circle. Okay, it's very simple. It's very early differential calculus. But, you know, that's a sort of view of the world that I don't think many people get in current mathematics education for an awfully long time. And there's no reason for it. I mean, you can see it's very obvious now, you know, having discussed a bit about infinity. It's easy to hear, oh yes, a circle has an infinite number of sides if you want to view it that way. 
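A hedged sketch of the polygon demonstration described above, not the original but the same idea: as the number of sides grows, the regular polygon visibly approaches a circle. The colour choices and the range of n are assumptions for illustration (and yes, you can change the colour).

Manipulate[
 Graphics[{EdgeForm[Black], col,
   Polygon[Table[{Cos[2 Pi k/n], Sin[2 Pi k/n]}, {k, n}]]},   (* a regular n-gon on the unit circle *)
  PlotRange -> 1.2],
 {{n, 5, "number of sides"}, 3, 60, 1},
 {{col, LightBlue, "colour"}, {LightBlue, Pink, LightGreen}}]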
And, you know, et cetera, et cetera. So, I think there, I mean, this is a trivial example, but I think I hope it gives some flavour of what I mean by the conceptual and the practical together. Now, of course, a big bloc to transforming maths education is its examination and how we assess people in maths. Clearly, if we examine people on how well they can hand calculate and we then get them to calculate during their education on computers, they're not going to be particularly good at the hand calculating. So, it is vital over time that computers get integrated into examinations and other assessments. And I'm pleased to note that the Danes, although not in yet in mathematics, but the Danes are starting experiments in this where people can have computers in the cluster. And by the way, in exams, and by the way, this doesn't just apply to mathematics. I mean, to my mind, somehow the idea that everybody has a computer and it's available to them, but they somehow can't use it at some other point in time during an examination, during a test doesn't strike me as in the long term particularly sensible. I mean, this is why I picked this one particularly, but this would be a kind of question you could have if you had computers and exams which you can't have without them. I'm not suggesting this is the only kind of question by any means, but it's one style that you might have. What's the best life insurance policy to get? Here's the model. You tell me how to optimise it. So, that would be a typical kind of question. Well, what's the death benefit? What does that mean? How do I look at this chart? How do I change these years of protection? What does this mean? I mean, these are the practical questions people actually get in real life. There are other kinds of practical questions like how to set up such a model. That's a different kind of question, equally valid. So, right now people are very poorly optimised to answer these kinds of questions in the mathematics they learn. I'm arguing that we could make them far better optimised to do that and to understand the answer. So, finally, I'd like to say here is as well as improving all the practical aspects of mathematics and, in my view, improving people's intellectual engagement with it, all of those benefits, I actually think we can make maths more fun. It's not a word that regularly gets associated with maths. I actually think that one can enliven maths a great deal, make it much more graphical, start people trying examples that they're actually interested in themselves. One of the things that John said earlier was the school in California where they'd be making demonstrations. I remember one of the... I talked to one of the kids there who was building that, some of those, and they were telling me, oh, yeah, one of the problems I wanted to work out is if I go to the movie theatre with my friends, how likely it is that two people sitting together will be in love versus these other two people who hate each other. And he wanted to work out the sort of mathematics of that. And he built a demonstration that was related to that topic. So that's fine. It's an interesting problem to work out that teaches that it's much more engaging than somehow sitting down and solving the process of manual calculating and also much more relevant to the kinds of things that need working out in real life. And unfortunately, as we are maths sort of folks, we tend to make everything kind of computational in the end. So I guess this is my little fellow here to change. 
I guess that's what he thought of my talk. Let's see if we can change it a little bit. And, well, hey, you can't get it too smiley. This was a talk about maths. Thanks. We have a question. One second. I'm sure we do. Yes. Thank you very much. Could you flick back to your first slide where you articulated what is maths? Yes. That one. Yes. The arguments that you've put forward today were rehearsed in 1632 by William Oughtred, the inventor of the slide rule, and Richard Delamaine, a practical teacher of mathematics who published first. I'm sure you can decide what they were saying to each other. Interesting. That hasn't changed much. But what I wanted to ask you really is: you have separated out processes two and three in your slide here. Unfortunately, I don't agree that that's quite so simple, because to an expert in computation such as yourself, your mathematical formulation is based heavily on your knowledge of what kinds of systems of equations can be solved. There are maths problems which can be solved, there are maths problems we know can't be solved, and there are maths problems nobody knows how to solve. Let me give you one example of a problem posed to school students recently. It was this. It's in your direction: "What's the best life insurance policy?" The actual question was: how many people can comfortably stand on a soccer pitch? If you model them as squares, it's trivial. If you model them as equally sized circles, then that's a bit more challenging. If you model two different sizes of people, then nobody can yet solve that problem. Your mathematical formulation is intimately linked to computation, and I think recognition of what problems can be solved, a key part of any practical mathematics, is deeply linked to familiarity. Familiarity comes with practice. How do you ensure that those things are connected? It's an interesting question and I somewhat agree with it, though I would have a slightly different way of thinking about what you posed. Instead of saying that somehow number two and number three are pushed together, I would say that if the problem you want to pose, with the different sizes of people on the football pitch, if that's the problem you actually want to work out, let's at least formulate that. That's step one. It may be a problem that is unsolvable with current mathematical computation methods, on a computer or by hand or whatever. But then the question is, okay, so that's unsolvable. That's part of experience. What are the tools you have to use? If I take my car, to use that analogy: you get used to the fact that you can drive around a corner at a certain speed in the rain with your car and not come off the side of the road. That's a sort of experience with a car. Now, at some level I would agree that that's linked to the physics and the mechanics of the suspension and the tyres and so forth. But the practical use of that isn't really very linked to that, unless perhaps you're a racing driver. The practical experience of driving is what you need to gain the experience in, in my view, to be able to drive your car, not the physics or the mechanics of the car itself. Because I think, you know, I somewhat agree with what you're saying, and I think there is therefore some need for understanding the computation underneath. I think one has to be really careful about separating those two, the driving versus the mechanics. And I sort of think that that's the way to look at it.
And again, I think one of the problems is because people have pushed these steps together, like a little bit like my finance risk example, they have got used to always thinking, okay, well ahead, what can we compute and therefore how can I formulate the problem? And the problem formulation may then just be wrong for what they're trying to work out. So in a sense they've sort of pre-done the work of figuring out, well they can't compute this, so therefore that's formulated the problem wrong and then they kind of get the wrong answer in some funny sense. So there is a feedback loop there that I think one has to be careful of. I mean one thing to understand, you know, in a sense I'm not arguing that nobody should ever learn any manual calculation of any sort. What I'm arguing is you've got to have a really good reason to be teaching somebody that. You know, it's conceptually really important or it's practically very relevant, like estimating, which I believe is a really... I mean I do this every day. People say, you know, oh I don't know, salespeople sometimes tell me, you know, my sales figures have gone out by this percentage and I'm thinking, hmm, does this make sense? Or is this the right calculation? Or you know, the web statistics of this? Does this basically make sense to you? And I often do though, though I could sit at a computer and type them in, I often do those basic sort of estimations by hand because it's efficient and easy to do and it sort of, you know, gives me some feel for it. So I'm not saying you should never do it, but I'm saying, you know, I suppose one further argument I put into this is, we've got to be very careful, so the argument about Ancient Greek, as I call it, should we be forcing everyone in the world to learn in education, to learn Ancient Greek as a compulsory subject between the ages of, you know, five and sixteen? I think the answer is no. Now, does that mean I think Ancient Greek is a worthless subject? No, I don't, not at all. I think it's a very valuable subject for people who are excited about that sort of subject and for people... But you know, there's an argument of trying to engage people a little bit in Ancient Greek to see whether they're kind of excited in it. But I have to say that that's where I think much of the hand calculating that we teach people, it's in that category. So I'm not saying, you know, you should ban people from learning hand calculating or anything like that. I'm saying people get excited about how calculations are done and the history of calculating or how they were done by hand. That's great. They should pursue that excitement. That's a valuable thing. I just don't think it's the compulsory subject that is going to bridge between what people are learning and what they need outside. And, you know, one final thing on this, I mean, you know, politicians love saying, you know, we're making, you know, we need more people who are technical scientists on maths for the knowledge economy. There's something very wrong in that because the maths they're teaching isn't related very strongly to the knowledge economy use of maths and that's why this sort of chasm has rather opened up in my view. We have a couple more questions. Conrad, perhaps you've heard what's happening in GCSE Math 2010 in England. Perhaps not. I imagine you are the advisor because so many things are being approached. 
For example, there's a new target where students will have questions where they have to choose and think about what methods they should use, rather than "do this, do that". And a third target where it's really much more problem-orientated: much more open-ended, multi-step problems. Functional, everyday maths is coming in too, and more calculator time in the papers. So that's at least four places where things are changing. Well, that's good. They're coming to meet you. That's good. I mean, look, obviously I'm pleased that things appear to be moving in that direction. But, you know, the frustrating thing is that, in the end, to take the bulk of this time out from the hand-calculating business, you've got to make a pretty radical set of reforms. It's sort of hard to teach everyone most of the hand-calculating, allow them a bit of extra calculator time, and really solve this problem. You've really got to be pretty radical about it, in my view, to make the significant change, because you just can't do it otherwise. People ask me, you know, why can't you just add more and more computers in? Well, you've got to take stuff out. There's already too much stuff of different sorts. And by doing that, I think you can raise things up. So, obviously a good direction, but far, far more, in my view, could and should be done. And the trouble also is that the temptation has been to solve this maths chasm, as I call it, by trying to really dumb it down, which is rather what people accuse computers of doing. But computers, in my view, have nothing per se to do with that. It's to do with the fact that enough people aren't doing the subjects that apparently we want them to do, et cetera, et cetera. So I think one's got to be cautiously optimistic, and pretty cautious at this point. By the way, I'm very pleased to see somebody has made the rhombic hexecontahedron. You built that in real time. That's very good. The rhombic hexecontahedron has been made. I'm very impressed. Conrad, would you like to answer another couple of questions? We have a couple more. This gentleman on the back row. I mean, shout so we can hear; the microphone is just for the video. Okay, all right. I'll oblige you. Interesting that you mentioned politicians, first of all. A lot of them advocate maths education. Very few of them are mathematicians; a lot of them are lawyers. A second point is, how do we gain this sort of ability to be sceptical about models? Treasury models, the Black-Scholes option pricing model, which has probably cost us all a few bob each. I don't disagree with some of the lines of argument you're taking, but I think there is a need somewhere to promote this sceptical... I entirely agree with that. Yes, I couldn't agree with that more. The sort of scepticism that Andrew Dilnot mentioned earlier, and in a sense that Alan Joyce mentioned earlier, in how we look at the data when you actually get the data, to try and turn it into Wolfram|Alpha. This scepticism is an absolutely essential part of the education, but we don't teach it right now. Realistically, we just don't teach very much of it in maths education. Where do you see it in maths education? You get bits of it, but you don't get: okay, why do you believe this answer is right or wrong? Another question which comes with this, by the way, is whether, if you're trying to verify an answer, you must somehow need to work it out by hand to verify it. I don't really agree that that's the way you verify things in the modern world.
You do sometimes do a rough estimate, to see whether you think it makes any sense. That's a really powerful manual thing you can do, and that needs to be taught. I think we should have many more maths classes where people do stuff in their heads, trying to do rough estimates to get to whether they believe something that's being told to them. I think that would be a very powerful thing. But at the other extreme, if you're really trying to check the details, you're building a bridge and you want to know whether you worked it out right, the way people check that is they run the model in different ways, they run different models, they try different things. You can't recalculate a bridge by hand nowadays; it just can't be done. You either have to estimate very roughly to see if you've got roughly the right thing, or you've got to use computers several ways to corroborate your answer. In the end, I think what this is about is experience. If people get more experience at actually running calculations, running maths, rather than doing the calculating step alone, seeing the formulation of the problem through to the answer, through to what doesn't work, and if you try that fifty times as many times on more complicated problems because you can use a computer, I think you're going to build up much better intuition than if you do it the traditional way that's being done now. I think that's much more likely to build scepticism. A little bit like Andrew's talk this morning: if you've heard some of the issues that Andrew brought up about ways that governments get confused, or put out statistics that we don't think are actually really valid, you start to see questions you can then ask about other statistics that come out. But if you've never really heard those questions or thought about them or tried to analyse anything, it's tough to do that from first principles for most people. Conrad, haven't we got a problem in that, at least in primary school and partially in secondary school, we don't have specialist maths teachers? So how do you change the whole education system in maths? It's tough. Let me bring up perhaps an even worse topic than that, which is whether the subject I'm talking about should be labelled maths, because some people would go to the even greater extreme of saying one should have a parallel subject, at least temporarily, alongside the main subject. The subject I'm talking about, I'm quite clear, is the mainstream subject. That is what the majority of people should be focused on. Whether, for reasons of brand, so to speak, or reasons of practicality, one calls it maths in its current form, I'm not sure. So the question of people who are actually able to teach it, this is a problem. One thing I would generally say, though, which I think affects to some extent both teachers and particularly students, is that the kind of mathematics I'm talking about uses a somewhat different set of skills from traditional mathematics. Now, what will that do? I think that will mean that there are people who before were considered bad at mathematics who are now rather good at it, and there will also be people in the other direction, who were considered very good at mathematics because they were very good at the manipulation of the calculations, but who don't see the bigger picture as well.
In terms of the volume of students, I strongly believe that the flow will be towards a much larger number, because I think there are many more people who are engaged by things that are somehow quantitative than are necessarily interested in inverting a matrix by hand, or see the point of it. I think there may also be parts of teaching where that's true. There are teachers who are science teachers or geography teachers who actually could teach a subject formulated like this rather better than they could teach traditional maths. I don't think it's an easy problem to solve. This is one of the problems in changing anything in maths education: there are so many stakeholders to fit together, and in many ways it may be, sadly, that some of the developing countries will in a funny way actually leapfrog some of the developed countries in solving this, because there's less baggage there. I'm hoping that when people see the arguments and think about this in some detail, one can at least make some line of progress and start things going. Conrad, you guys have in your hands the most powerful maths teaching tool in the world, and for you not to see how to use it seems to me utterly crazy. There were two gaps in your talk. The first, and these may not be central to your main theme, but they're essential, the first was confidence: the kids have got to have the confidence to know that they can screw up in any which way and they will not be dumped on by the teacher in front of their peers. Because that leads to the problem of dissidence: one dissident member of a class and the whole effing thing falls apart. So that's, in my opinion, the first thing: confidence. The second is courage, the courage to try things out, including the courage to tell mum she's a twat when you go back home and you say, I couldn't do maths today, and mum says, don't worry dear, I couldn't do it either, let's go and do art. So getting the confidence and the courage up is there, but the next step is this using of this incredibly powerful tool that you have. I did a trial in a Bristol inner-city primary school a couple of years ago, in my retirement, and we had five days with the five most difficult classes, who hated the school and hated themselves and everybody else. And the kids came back the next day saying, I've done your homework, it was wonderful, I want to stand up and tell the whole class about it. These were the ones we had been warned were unteachable, unspeakable, who hated it, and the homework was to go and use Wolfram Demonstrations to find something interesting and come and tell us about it tomorrow. It's as simple as that, and it worked like an absolute treat. How can you stand there looking worried, man? You should be reforming the world. Go for it. Well, I can't get more of an endorsement than that, that's for sure. It's very nice to hear that. I personally believe in, particularly, the Demonstrations site that allows all these interactive examples to be tried. Before I end here, there are two things I actually wanted to show. One I completely forgot to show, sorry, this is a side thing from your question, but just while I remember it: this morning somebody had built a very nice demonstration, although a rather complicated one, about the oil spill, and I meant to show it in my talk this morning and forgot. He built it over the weekend, so this is actually a rather nice way of visualising the oil spill, the oil spill everybody knows about.
This is just to show that one can build applications and things which are rather interesting. This is actually a little bit more sophisticated than you might think because it requires a lot of image processing off of satellite photos to actually pick out the oil as against the water so to speak. Anyway, I just think there's some data and things as to how it compares and so forth. I wanted to show that mainly because I'd forgotten to show it earlier. Just to show you the demonstration site that we're talking about here, I think demonstrations is a great teaching thing and I forgot to... I'm out of ethernet here somehow so let's plug that in and see if I can do it again. This is the site that has 6000 or so items on it. The great thing is it is a very visual exploration of maths and I think it's a fantastic thing for people just to play with and gain some of that confidence that you're talking about. I think it's a great thing to try things. I know my daughter is just one point in the example space but she has great fun playing around with different things. Does she understand what's happening with different parts of mathematics? No, not at this stage but that doesn't matter. I'm not going to manage to get this up again. You can start playing around with kids and fun and you can start playing around with these different items and optical illusions. That was a favourite I know, just playing around with how that works and what you do with these things. In a sense what's so frustrating about this is we've got the tools and they're pretty much ubiquitous, many of them. The way that people actually do mathematics in the real world is to do things like play with things that are already there and then they start formulating results in that. That's the way round that people do it and yet for some reason we're locked in this reverse of how to do it. I think building confidence is, I completely agree about that. That's a crucial step. It's interesting to hear, for example, of how some of the most difficult children actually came out being most excited because they found a form of expression perhaps that was interesting to them. I would love to see demonstrations and things that are interactive in general really help us in that direction. Time for one quick comment and then we are going to break for coffee. I just wanted to say that you mentioned Denmark introducing computers into exam rooms. I don't see that as just a coincidence that they are as a country at the top of the World Bank's Knowledge Economy Index. Yes, it's a very interesting point. Denmark I think has had a long history of being rather innovative in educational together. It isn't a surprise to me, I agree that Denmark is a place where they think, well, let's test what happens with exams and yes, there are issues about cheating but let's deal with the real issue at hand that people have computers in real life. The UK is a place where the stuff could get going. Well, we have good computer provision. We haven't traditionally been as a head in some of these educational reforms or in some other aspects we have, I think. In a sense, we are in a good place on the starting grid. No country has done this yet fully. I am pretty convinced that whichever country really can adopt this approach, with all the complexities of implementing it and things that I accept are there, first we will see a massive improvement to their economy. 
Quite apart from whether or not you believe that education is for many things other than just generating people who can do jobs in some economy, and for sure I believe that it is, even if you just look at the practical question of how you move to a higher-tech economy, and at some of the computational knowledge things we've talked about today, it must give people a big step forward, and I think it would also close this chasm that exists and improve the numbers and all those things that we are talking about. Yes, I think Denmark is a good place. I think that China will be interesting to watch as well. They have an innovative approach in many respects, although in some respects not. Let's see whether something can happen in the UK. Thanks very much. Thank you very much, Conrad.
Computers have revolutionised the conduct of maths, science, engineering, finance—and now potentially democracy. Increasingly, what limits progress is asking the right questions, specifying problems effectively, testing, and imagination—not students’ abilities of manual calculating, most prized in maths education. This talk will discuss why and how our technical education needs to change for this new world.
10.5446/31403 (DOI)
[The opening of this talk is unintelligible in the recording.] Before I start, let me ask you a few multiple-choice questions, and do register your answers. First: roughly what gross income puts a childless couple in the top 10% of the UK income distribution? Is it 40,000, 60,000, 80,000, 100,000 or 125,000 pounds a year? Second: by roughly how much has real GDP grown since 1948? Is it 50%, 100%, 200% or 250%? And third: how many of the single parents on income support are under the age of 18? Is it 20,000, 40,000, 60,000 or 80,000? [A further passage here is unintelligible in the recording.] I'll wait until the end of the next part of the talk and give you the answers then. So, let me move on. The single most important thing about numbers, I think, and this seems relevant at this conference, is the broad context in which they sit; context is the only way you can understand them. The most important piece of context is normally whether a number is big or not. This is something which is repeatedly misunderstood in the media and by the public at large. We had a story this morning about a particular change to smoking policy, which might save £8.4 million a year of the NHS's budget.
This was presented on the BBC's Today programme. [A long passage here is unintelligible in the recording.] I had, in my bag, 50 dice, and we would have played a game. I'll describe the game, even though we won't be able to play it; I do have the dice here with me. I would have asked every member of the conference to imagine that they were a piece of road, and we would then have used the dice to simulate how accidents occur on each piece of road. We would give out the dice, one each, and ask you to roll your die twice and add up the two scores. I would tease you by asking how many people had scored one, because nobody can score one from two rolls of a die. And in a group of fifty or so dice, what I tend to see is about five people who score 11 or more, and we would designate those as the accident black spots.
Then I would take a sheet of paper, draw five cameras on it, rip the paper into pieces, and give one of the cameras to each of the people who had scored 11 or 12. Then I would ask those people to roll their dice twice more, to see what happened once the cameras were in place. What happens, of course, is nothing to the dice: the cameras cannot affect them. When you play this game with children, they think that you're Derren Brown, because what you see is that the number of accidents at the black spots falls from 11 or 12 to around five or six, and that is what happens. I've played this repeatedly over the last two years, and I've never seen the aggregate number of accidents at the black spots do anything other than fall. It's bound to happen to me one day; maybe it would have happened today. This is simply regression to the mean, and all of you will recognise it as regression to the mean: if you take a chance event, identify the cases where something extreme happened, and then run it again, you find that you move back towards the mean. Why play this game in the context of speed cameras? Well, in the early part of this decade the Department for Transport published work estimating the impact of speed cameras on reductions in accidents. And they did exactly what we have just done. They looked for bits of road where there had been more than the average number of accidents, they put cameras there, and they looked to see what happened to the number of accidents. Well, to the extent that accidents are the result of chance processes, a driver having a heart attack, a driver's telephone going, something exciting happening on the radio, speed, weather conditions, a deer running across the road (not entirely chance, that one), the process will be exactly the same kind of process as the one I have just described. I made the point, very loudly, on the radio programme which reported that work, and the Department for Transport said that this was not an issue. Two years later they republished their data, having taken regression to the mean into account. 20% of the reduction in accidents they had previously claimed, they now accepted, was simply the result of the long-run downward trend in the number of accidents. 60%, they accepted, was simply regression to the mean. And only 20% of the original claim, they now said, was genuinely the result of the introduction of safety cameras. Now, Alistair Darling, then the Secretary of State for Transport, made the point that any reduction in accidents and deaths is welcome, and that, of course, is right. But if you have a public policy that is only one fifth as effective as you previously thought, and if there are alternative policies, like putting fences around the playgrounds of primary schools, more street lighting, or more uniformed officers in marked cars out on the roads, then you will certainly be misallocating resources if one of your policies is only one fifth as effective as you believe. This is an example of where statistical understanding, not just the numbers themselves, is important. It's not a subtle point, but it is a very important one, and one that is routinely ignored.
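The game is easy to simulate. Here is a small sketch (not something from the talk itself) that follows the description above: fifty roads, two dice each, a camera for every road scoring 11 or more, and a second round of rolls for those roads.

```ruby
# Simulation of the dice game: each "road" rolls two dice for its accidents,
# the highest scorers are declared black spots and get a "camera", and then
# those roads roll again. Their totals fall, purely through regression to the
# mean; the cameras cannot affect the dice.
def accidents
  rand(1..6) + rand(1..6)   # one road's accidents for a year: two dice
end

roads = 50
year_one    = Array.new(roads) { accidents }
black_spots = year_one.select { |score| score >= 11 }   # roads that get a camera

# The black-spot roads roll again after their cameras are "installed".
year_two = black_spots.map { accidents }

puts "Black spots: #{black_spots.size} roads"
puts "Accidents before cameras: #{black_spots.sum}"
puts "Accidents after cameras:  #{year_two.sum} (each road's expected score is 7)"
```

Run it a few times: the total at the black spots almost always falls, even though nothing about the underlying process has changed.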
The story that we heard this morning about the impact of the smoking ban, and I should say in passing that smoking is one of the most stupid things anyone could do, was about the impact of the smoking ban on myocardial infarction: heart attacks. The latest data suggest that it might have led to a 2.4% reduction in emergency hospital admissions for myocardial infarction. There was some data published, relating to Scotland I think, suggesting that in the first year the ban had perhaps led to an 8% reduction in the number of emergency admissions for heart attacks. But that analysis ignored the fact that the data themselves go up and down. If you went back over a number of years, there had been reductions in the number of heart attacks that large without any ban being introduced. Far too frequently, when we interpret numbers we forget that they go up and down. What other kinds of context do we need? Well, now I want to talk to you about where numbers come from. Every winter we see a story a bit like this: the winter vomiting bug has gripped the United Kingdom, and 3 million people have the vomiting bug. But how do we know how many people have the vomiting bug? Are we snooping with webcams to count how many people are being sick? Are we taking measurements at the sewage works? Of course not. There is some underlying data. Here is the data that appeared at the beginning of 2007, and you'll see that there is a seasonal fluctuation. But if you look at the y-axis, you'll notice that the number of laboratory reports peaks at around 300. And there are no zeros missing here: 300 is the number of weekly laboratory-confirmed cases of norovirus that come through. And yet you get from 300 to 3 million. That seems like quite a big jump. Well, the first thing you do, which is typical of this kind of exercise, is add 10 weeks together, which isn't what the headline suggests; the headline implies 3 million people have it right now, at any one time. If you add the 10 weeks together for the year in question, you get to around 2,000 laboratory-confirmed cases, and that still looks like a very long way from 3 million. So we apply a grossing-up factor, because we know that not everybody who has the winter vomiting bug will end up being seen by a doctor and having a sample of the offending material taken and sent off to the laboratory. So we multiply by quite a big number: we assume that there is one laboratory-confirmed case for every 1,500 cases in the community. From the 2,000 lab cases, we multiply by 1,500 and that gets us to 3 million. But of course there is considerable uncertainty about whether we should multiply by 1,500 or by some other number. The 95% confidence interval runs from as low as 1 in 140, which would mean just 280,000 cases of the winter vomiting bug, to 1 in 17,000, which would mean around 34 million cases over the ten-week period, slightly more than half of the UK population. A confidence interval that broad is, frankly, useless: we have no real idea how many people actually had the winter vomiting virus. And that is because the study linking laboratory reports to the gastroenterological experience of the whole community was a small one. This survey, like many others, simply cannot pin the number down; we just don't know.
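The arithmetic behind the headline figure, using only the numbers quoted above, is:

```latex
\begin{aligned}
\text{central estimate:}\quad & 2000 \times 1500   = 3{,}000{,}000\\
\text{lower bound:}\quad      & 2000 \times 140    = 280{,}000\\
\text{upper bound:}\quad      & 2000 \times 17{,}000 = 34{,}000{,}000
\end{aligned}
```

The point estimate rests entirely on the assumed multiplier, and the 95% interval on that multiplier spans more than two orders of magnitude, which is why the 3 million figure tells us so little.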
Now you might say, well, that's only a problem with samples; it wouldn't arise if you asked everybody, if you had a census, for example. Well, even if you ask everybody, the responses you get can be garbage. This is some data from the US census, and if you look at the columns in the middle, for 1970, you'll see that in 1970, 106,441 US citizens claimed to be aged 100 or more. The US Census Bureau believes that the true answer was about 4,800. So even asking everybody doesn't give you the right answer. How could they get it so wrong? Well, if you're quite old, then it's possible that you're confused; if you're really old, that possibility increases. So maybe some of these really old people were just confused. Maybe some of them wanted to be thought older than they really were, and that may be part of it. But a lot of it seems to be that the way in which the data were collected required people to mark a very small number on a rather closely printed page, and one of the things about getting older is that your eyesight, and your ability to mark the number you really mean, diminish over time. These data turned out to be garbage, and the US Census Bureau had to throw them away. Whenever we look at numbers, we need to think very, very hard about where they came from and how they were collected, for every single number that we look at. That doesn't make numbers useless, far from it, but recognising their error is terribly, terribly important. Now, I was going to talk about this next, but I'm not going to. Instead I'm going to ask my friends to give me the results from the multiple-choice questions, because very soon I'm going to tell you what the answers were, so it wouldn't be much of a test after that. So, help me now to answer some questions. That's it; I've got your responses. It sounds as though, for the first question, how much does a childless couple need to be in the top 10%, the majority said 80,000 or more. For the second question, how much has real GDP grown by since 1948, it's pretty even, with most thinking either 100% or 200%. I've played this with many, many groups. We'll start with the GDP question. I asked this question at the inaugural conference of all of the Treasury's economists five or six years ago, and their views were not dissimilar to yours, with nearly half thinking the answer was 50% or 100%. Here is a chart showing you the answer. This is a chart showing real GDP from 1948 to 2008-2009, indexed to 100 in 2002. I've been slightly unfair, because the correct answer is rather more than 300%, and I didn't give that as a possibility. Any of you who said 250%, or even 200%, can give yourselves a pat on the back. If you said 150% or less, then you have massively underestimated the rate of growth of the British economy. The UK economy grows on average at between 2% and 3% a year, and if there are any actuaries among us, you'll know that 1.028 raised to the power 25 is 2: something growing at between 2% and 3% a year doubles over about 25 years, which is roughly what happens to GDP in the UK. Many of the Treasury's economists thought the answer was 50% or 100%, which would imply the economy growing at either less than 1%, or only a bit more than 1%, every year since the war: a misunderstanding that is perhaps forgivable in attendees at a computational knowledge summit, but rather less so in the people who run the economy. You all know the programme Who Wants to Be a Millionaire. It sounds to me as though, if we'd asked the audience on that question, we wouldn't have got to the right answer.
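The actuarial aside can be checked directly: an economy growing at a constant annual rate g doubles over 25 years when

```latex
(1+g)^{25} = 2 \quad\Longrightarrow\quad g = 2^{1/25} - 1 \approx 0.028 \approx 2.8\%\ \text{per year}.
```

And the sixty years since 1948 amount to more than two such doublings, so output more than quadruples, which is growth of well over 300% and why every answer on offer was too low.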
Here is a picture of the UK income distribution in 2005-2006 for the group that I mentioned, childless couples. The grey stripes and the yellow stripes show the tenths of the distribution, and you'll see that the top 10% starts at about 38,000 pounds a year in that year, so around 40,000 pounds a year. The top 10% of the UK income distribution for this type of family starts at 40,000 pounds a year. The median income for them was 18,800, so call it about 20,000 pounds a year. The great majority of such households live on incomes of less than 30,000 pounds a year. So here we have two pieces of information, both of which on average surprise this group. The first is that we are four times as wealthy, in terms of income, as we were immediately after the war. The second is that we are much, much poorer than we believe ourselves to be, on average, as a population: that top 10% starts at only 40,000 pounds a year. These responses are uniform across every group I have done this with. I did it a few months ago with the bishops of the Church of England, and they gave much the same answers as you. Now we come to gymslip mothers: a tricky question and a subtle one. There was a fairly even split in your responses, with the two most popular answers being 80,000 and 20,000. If you said 20,000, then you win the prize, though again I have been unfair to you, because I didn't give the true answer as an option: the true answer is only about 6,000. Only a small fraction of the single parents on income support are under the age of 18. Why do we believe that the number is so much greater, which is certainly the prevailing view in every group? Well, because the media tell us so. And that is why, above all, we need the kind of work that many of you will be talking about today: work that gets the truth about many of these things into the public domain. One last example of this. This is a picture that shows the national debt as a percentage of national income. Gordon Brown, when he was Chancellor, said that 40% should be the limit, and you can see that we were above 40% from about 1995 to around 1999; then it fell below, and it is now above it again and rising steeply, and this has led to enormous anxiety. Let me show you another picture. This is the national debt as a percentage of national income going all the way back to 1692, and the chart I just showed you is just the bottom right-hand corner of it. What you'll see is that in the period from 1750 to 1850, throughout that period, which was a period when the UK's economy dominated the world, the ratio of national debt to national income exceeded 100%. If people are to understand what is going on with the national debt, they desperately need this context. My own view is that we have got very confused about the problem here. The stock of debt, the ratio of national debt to national income, is not really something about which we need to be very anxious. What we do need to be very anxious about is the ratio of annual borrowing, the flow of borrowing as a share of national income, which has reached unsustainable levels and has got to be tackled, whether by tax rises or by spending reductions. We focus far too much on the ratio of national debt to national income, which at the moment is really not the problem, and setting that number properly into context makes us realise that we are a long way from the levels that we have seen for much of the past two centuries. One last point to make before I go quiet and see if there are any questions, and that is that, as I hope some of those questions have shown you, our understanding, both yours and mine, of what the world really looks like is a long way off.
I think the most important reason why it is so far off is the mirror held up to us by the media: the picture it gives of the world in which we live is unbelievably distorted. It doesn't portray anything very accurate about the context in which we should see ourselves. Here is a chart that shows you how many deaths there have to be from different causes before you get a story on the BBC News: 8,571 deaths from smoking before there's a story on the BBC News; 4,714 from alcohol; 7,500 from obesity; but one quarter of one death from measles; one third of one death from variant CJD; nearly 20 from HIV; 1,122 from mental health problems. This is not just a broadcasting issue. If you look at newspapers, you see essentially the same pattern, although you need far fewer deaths from alcohol to get a newspaper story; newspapers are quite interested in celebrities who drink themselves to death. Why does all this matter? Much of what I've said has suggested that numbers can be wrong; I think all numbers are wrong to some degree. But they are also much the most powerful way of understanding the world in which we live, much more powerful in almost every case than describing it without using numbers. And so, although numbers are flawed, numbers set in their proper context actually allow an understanding of the world in which we live, of the policy problems that we face, and of the consequences of tackling them in different ways. If we don't understand them, we will make terrible mistakes, as has often happened in the last 25 years of public policy in the UK. So the kind of initiatives that those of you who are speaking today are taking have my wholehearted support, and my deep desire is that we should make a success of this kind of thing, because until we do, we will go on with an impoverished debate that fails to take account of much the best way we have of describing the complicated world in which we live. Thank you very much.
[A question-and-answer session followed; the recording at this point is unintelligible and could not be transcribed.]
These are strange times. The range and quality of data that exists is richer than ever before, and by some considerable margin. And yet we seem to fail to make good use of much of what is available, and to be frightened in the face of numbers. What kinds of strategies might help?
10.5446/31224 (DOI)
My name is Simon. I work on the infrastructure team at Shopify. And today I'm going to talk about the past five years of scaling Rails at Shopify. I've only been around at Shopify for four years. So the first year was a little bit of digging. But I want to talk about all of the things that we've learned. And I hope that people in this audience can maybe place themselves on this timeline and learn from some of the lessons that we've had to learn over the past five years. This talk is inspired by a guy called Jeff Dean from Google. He's a genius. And he did this talk about how they scale Google for the first couple of years. And he showed how they ran with a couple of my SQLs. They sharded. They did all this stuff. They did all the no-SQL paradigm. And finally went to the new SQL paradigm that we're now starting to see. But this was really interesting to me because you saw why they made the decisions that they made at that point in time. I've always been fascinated by what made, say, Facebook decide that now is the time to write a VM for PHP to make it faster. So this talk is about that. It's about an overview of the decisions we've made at Shopify and less so about the very tactical details of all of them. It's to give you an overview and mental model for how we evolve our platform. And there's tons of documentation out there on all of the things I'm going to talk about today. Other talks by coworkers, blog posts, readme's, and things like that. So I'm not going to get into the weeds, but I'm going to provide an overview. I work at Shopify, and at this point you're probably tired of hearing about Shopify. So I'm not going to talk too much about it. Just overall, Shopify is something that allows merchants to sell people to other people, and that's relevant for the rest of this talk. We have hundreds of thousands of people who depend on Shopify for their livelihood, and through this platform we run almost 100K RPS at peak. The largest sales hit our rail servers with almost 100K requests per second. Our steady state is around 20 to 40K requests per second, and we run this on 10,000s of workers across two data centers. About $30 billion have made it through this platform, which means that downtime is costly, to say the least. And these numbers, you should keep in the back of your head as the numbers that we have used to go to the point that we are at today. Roughly these metrics double every year. That's the metric we've used. So if you go back five years, you just have to cut this in half five times. I want to introduce a little bit of vocabulary for Shopify, because I'm going to use this loosely in this talk to understand how Shopify works. Shopify is at least four sections. One of those sections is the storefront. This is where people are browsing their collections, browsing their products, adding to the cart. This is the majority of traffic. Somewhere between 80 to 90% of our traffic are people browsing their storefronts. Then we have the checkout. This is where it gets a little bit more complicated. We can't cash as heavily as we do on the storefront. This is where we have to do writes, decrement inventory, and capture payments. Admin is more complex. You have people who apply actions to hundreds of thousands of orders concurrently. You have people who change billing, they need to be billed, and all these things that are much more complex than both checkout and storefront in terms of consistency. The API allows you to change the majority of the things you can change in the admin. 
The only real difference is that computers hit the API, and computers can hit the API really fast. Recently I saw an app for people who wanted their order numbers to start at an offset of a million. This app will create a million orders and then delete all of them to get that offset. People do crazy things with this API. It's our second largest source of traffic after the storefront. I want to talk a little bit about a philosophy that has shaped this platform over the past five years. Flash sales are really what has built and shaped the platform that we have. When Kanye wants to drop his new album on Shopify, it is the team that I am on that is terrified. We had a sort of fork in the road five years ago. Five years ago was when we started seeing these customers who could drive more traffic for one of their sales than the entire platform was otherwise serving. They would drive a multiple: if we were serving a thousand requests per second for all the stores on Shopify, some of these stores could get us to 5,000. And this happens in a matter of seconds. Their sale might start at 2 p.m., and that's when everyone comes in. So there's a fork in the road. Do we become a company that supports these sales, or do we just kick them off the platform, throttle them heavily, and say, this is not the platform for you? That's a reasonable path to take. 99.9-something percent of the stores don't have this pattern; they can't drive that much traffic. But we decided to go the other route. We wanted to be a company that could support these sales, and we decided to form a team that would solve this problem of customers who can drive enormous amounts of traffic in a very short amount of time. This, I think, was a fantastic decision, and it happened exactly five years ago, which is why the time frame of the talk is five years. I think it was a powerful decision because these sales have served as a canary in the coal mine. The flash sales that we see today, and the amount of traffic they can drive, say 80K RPS, is what the steady state is going to look like next year. When we prepare for these sales, we know what next year is going to look like, and we know that we're going to be laughing next year because we're already working on that problem. So they help us stay one to two years ahead. In the meat of this talk, I will walk through the major infrastructure projects of the past five years. These are not the only projects that we've done; there have been other apps and many other efforts, but these are the most important to the scaling of our Rails application. 2012 was the year when we sat down and decided that we were going to go the antifragile route with flash sales. We were going to become the best place in the world to have flash sales. So a team was formed whose sole job was to make sure that Shopify as an application would stay up and be responsive under these circumstances. And the first thing you do when you start optimizing an application is try to identify the low-hanging fruit. In this case, as in many cases, the low-hanging fruit is very application-dependent. The lowest-hanging fruit on the infrastructure side has already been harvested for you, in your load balancers, in Rails (which is really good at this) and in your operating system; they take care of the generic optimization and tuning. So at some point that work has to be handed off to you, and you have to understand your problem domain well enough to know where the biggest wins are.
For us, the first ones were things like backgrounding checkouts. And this sounds crazy: what do you mean, they weren't backgrounded before? Well, the app was started in 2004, 2005, and back then backgrounding jobs in Ruby or Rails was not really a common thing. And we hadn't really done it after that either, because it was such a large source of technical debt. So in 2012, a team sat down and paid down that massive amount of technical debt to move the checkout process into background jobs, so that payments were captured not in a request that took a long time, but in jobs running asynchronously. This, of course, was a massive source of speedup: now you're not occupying all these workers with long-running requests. (I'll sketch below what this pattern looks like.) Another of these domain-specific problems was inventory. You might think that inventory is just decrementing one number really fast when you have thousands of people buying, but MySQL is not good at that. If you're trying to decrement the same number from thousands of queries at the same time, you will run into lock contention. So we had to solve this problem as well. And these are just two of many problems we solved. In general, what we did was print out the debug logs of every single query on the storefront, on the checkout and on all of the other hot paths, and basically start checking them off. I couldn't find the original picture, but I found one from a talk that someone from the company did three years ago, where you can see the wall where the debug logs were taped. The team at the time would go and cross queries off, write their name on them, and figure out how to reduce this as much as possible. But you need a feedback loop for this. We couldn't wait until the next sale to see if the optimizations we'd done actually made a difference. We needed something better than crashing at every single flash sale; we wanted a tighter feedback loop, just like when you run the tests locally and know pretty much right away whether it worked. We wanted the same for performance. So we wrote a load-testing tool. What this load-testing tool does is simulate a user performing a checkout: it goes to the storefront, browses around a little bit, finds some products, adds them to its cart, and performs the full checkout. It effectively fuzzes this entire checkout procedure, and then we spin up thousands of these in parallel to test whether the performance change actually made any difference. This is now so deeply embedded in our infrastructure culture that whenever someone makes a performance change, people ask: well, how did the load testing go? This was really important for us. Something like Siege, just hitting the storefront with a bunch of identical requests, is simply not a realistic benchmark of real scenarios.
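Here is the promised sketch of the checkout-backgrounding pattern. It is a hypothetical illustration, not Shopify's actual code: the controller and job class names, the current_cart helper and the PaymentGateway client are assumptions made for the example.

```ruby
# Hypothetical sketch of moving payment capture out of the request cycle.
# The web worker enqueues a job and returns immediately, so slow payment
# captures no longer tie up request workers during a flash sale.
class CheckoutsController < ApplicationController
  def create
    checkout = Checkout.create!(cart: current_cart, status: "pending")
    CapturePaymentJob.perform_later(checkout.id)
    redirect_to checkout_status_path(checkout)
  end
end

class CapturePaymentJob < ApplicationJob
  queue_as :payments

  def perform(checkout_id)
    checkout = Checkout.find(checkout_id)
    # PaymentGateway stands in for whatever payment provider client is used.
    result = PaymentGateway.capture(checkout.payment_token, checkout.total_price)
    checkout.update!(status: result.success? ? "paid" : "payment_failed")
  end
end
```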
So you have databases that feed off of the one that you write to, and you start reading from those. And we tried to do that at the time. Using read slaves has a lot of nice properties over other methods. But when we did this back in the day, there weren't any really good libraries in the Rails world. We tried to fork some and tried to figure something out. But we ran into data corruption issues. We ran into just mismanagement of the read slaves, which was really problematic at the time because we didn't have any DBAs. And mind you, this is a team of Rails developers who had to turn into infrastructure developers, understand all of this stuff, and learn it on the job, because we decided to handle flash sales the way that we did. So we just didn't know enough about MySQL and these things to go that path. So we decided to figure out something else. And deep inside of Shopify, Toby had written a commit many, many years ago introducing this idea of IdentityCache, of managing your cache out of band in memcache. The idea being that if I query for a product, I look in memcache first and see if it's there. If it's there, I'll just grab it and not even touch the database. If it's not there, I'll put it there so that for the next request, it will be there. And every time we do a write, we just expire those entries. That's how the cache is managed. This has a lot of drawbacks, because that cache is never going to be 100% what is in the database. So when we do a read from that managed cache, we never write that back to the database. It's too dangerous. That's also why the API is opt-in. You have to do fetch instead of find to use IdentityCache, because we only want to do it on these paths (there's a rough sketch of what that looks like below). And it will return read-only records, so you cannot change them and corrupt your database. This is the massive downside with either using read slaves or IdentityCache or something like this: you have to deal with what you are going to do when the cache is expired or stale. So this is what we decided to do at the time. I don't know if this is what we would have done today. Maybe we've gotten much better at handling read slaves, and they have a lot of other advantages, such as being able to do much more complicated queries. But this is what we did at the time. And if you're having severe scaling issues already, IdentityCache is a very simple thing to adopt and use. So after 2012, and what would probably have been our worst Black Friday and Cyber Monday ever because the team was working night and day to make this happen, there's this famous picture of our CTO face-planted on the ground after the exhausting work of scaling Shopify at the time. And someone then woke him up and told him, hey, dude, checkout is down. We were not in a good place. But IdentityCache, load testing, and all this optimization saved us. And once the team had decompressed after this massive sprint to survive these sales and survive Black Friday and Cyber Monday that year, we decided to raise the question of how we could never get into this situation again. We'd spent a lot of time optimizing checkout and storefront, but this is not sustainable. If you keep optimizing something for so long, it becomes inflexible. Often, fast code is hard-to-change code. If you optimize storefront and checkout and had a team that only knew how to do that, there's going to be a developer who's going to come in, add a feature, and add a query as a result. And this should be okay.
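To make the IdentityCache idea above concrete, here is a rough sketch of the opt-in API, based on the public identity_cache gem rather than anything Shopify-internal; the Product model and the handle column are just examples.

    # Gemfile: gem "identity_cache" (plus a memcached client such as dalli)
    class Product < ApplicationRecord
      include IdentityCache

      # Primary-key fetches come for free; secondary lookups are opt in.
      cache_index :handle, unique: true
    end

    # Product.find(id) still goes straight to MySQL.
    # Product.fetch(id) checks memcached first, falls back to the database on a
    # miss, fills the cache, and returns a read-only record.
    product = Product.fetch(params[:id])
    product = Product.fetch_by_handle(params[:handle])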
People should be allowed to add queries without understanding everything about the infrastructure. Often the slower thing is more flexible. Think of a completely normalized schema. It is much easier to change and build upon, and that's the entire point of a relational database. But once you make it fast, it often becomes more inflexible as a trade-off. Think of, say, an algorithm, a bubble sort. N squared is the complexity of that algorithm. You can make that really fast. You can make that the fastest bubble sort in the world. You can write a C extension for Ruby that has inline assembly, and this is the best bubble sort in the world. But my terrible implementation of a quicksort, which is N log N complexity, is still going to be faster. So at some point, you have to stop optimizing, zoom out, and re-architect. So that's what we did with sharding. At some point, we needed that flexibility back, and sharding seemed like a good way to get it. We also had the problem that, fundamentally, Shopify is an application that will have a lot of writes. During these sales, there are going to be a lot of writes to the database, and you can't cache writes. So we had to find a way to handle that, and sharding was it. So basically we built this API. A shop is fundamentally isolated from other shops. It should be. Shop A should not have to care about shop B. So we did per-shop sharding, where one shop's data would all be on one shard, another shop might be on another shard, and a third shop might be together with the first one. So this was the API, and it's basically all the sharding API internally exposes (it's sketched below). Within that block, it will select the correct database where the product is for that shop. Within that block, you can't reach the other shards. That's illegal. And in a controller, this might look something like this. At this point, most developers don't have to care about it. It's all done by a filter that will look up the shop, wrap the entire request in the connection for the shard that shop is on, and any product query will then go to the correct shard. This is really simple, and it means that the majority of the time, developers don't have to care about sharding. They don't even have to know that it exists. It just works like this. And jobs work the same way. But it has drawbacks. There are tons of things that you now can't do. I talked about how you might lose flexibility with optimization, but with architecture, you lose flexibility at a much grander scale. Fundamentally, shops should be isolated from each other, but in the few cases where you want them not to be, there's nothing you can do. That's the drawback of architecture and changing the architecture. For example, you might want to do joins across shops. You might want to gather some data or run an ad hoc query about app installations across shops. And this might not really seem like something you would need to do, but the partners interface for all of our partners who build applications actually needs to do that. They need to get all the shops and the installations from them. So it was just written as something that did a join across all the shops and listed them. And this had to be changed. And the same thing went for our internal dashboard that would do things across shops, like find all the shops with a certain app. You just couldn't do that anymore. So we had to find alternatives. If you can get around it, don't shard.
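As a rough sketch of what a per-shop sharding wrapper like this can look like (the Shard class, the shard_id column, and the filter below are hypothetical stand-ins, not Shopify's actual internals):

    # Hypothetical wrapper: inside the block, Active Record talks only to the
    # shard that holds this shop's data.
    Shard.with(shop.shard_id) do
      shop.products.find(product_id)
    end

    class ProductsController < ApplicationController
      around_action :wrap_in_shard

      def show
        # Transparently goes to the right shard for @shop.
        @product = @shop.products.find(params[:id])
      end

      private

      def wrap_in_shard
        # The shop lookup itself has to live somewhere global (a master database).
        @shop = Shop.find_by!(domain: request.host)
        Shard.with(@shop.shard_id) { yield }
      end
    end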
Fundamentally, Shopify is an application that will have a lot of writes, but that might not be your application. Sharding is really hard, and it took us a year to do and figure out. We ended up doing it at the application level, but there are many different levels where you can start. If your database is magical, you don't have to do any of this. Some databases are really good at handling this stuff, and you can make some trade-offs at the database level so that you don't have to do this at the application level. But there are really nice things about being on a relational database: transactions and schemas. And the fact that most developers are just familiar with them is a massive benefit, and they're reliable. They've been around for 30 years, so they're probably going to be around for another 30 years at least. We decided to do it at the application level because we didn't have the experience to write a proxy, and the databases that we looked at at the time were just not mature enough. I actually looked at some of the databases that we were considering at the time, and most of them have gone out of business. So we were lucky that we didn't buy into proprietary technology, and solved it at the level that we felt most comfortable with at the time. Today we have a different team, and we might have solved this at a proxy level or somewhere else. But this was the right decision at the time. In 2014, we started investing in resiliency. And you might ask, what is resiliency doing in a talk about performance and scaling? Well, as a function of scale, you're going to have more failures. And this led us to a threshold in 2014 where we had enough components that failures were happening quite rapidly, and they had a disproportionate impact on the platform. When one of our shards was experiencing a problem, requests to other shards, and shops that were on the other shards, were either much slower or failing altogether. It didn't make sense that when a single Redis server blew up, all of Shopify was down. This reminds me of a concept from chemistry, where your reaction rate is proportional to the amount of surface area that you expose. If you have two glasses of water and you put a teaspoon of loose sugar in one and a sugar cube in the other glass, the loose sugar is going to dissolve in the water quicker because the surface area is larger. The same goes for technology. When you have more servers, more components, there are more things that will react and can potentially fail and make it all fall apart. This means that if you have a ton of components and they're all tightly knitted together in a web where one failing component drags a bunch of others with it, and you have never thought about this, adding a component will probably decrease your availability. This happens exponentially. As you add more components, your overall availability goes down. If you have 10 components with four nines each, you have a lot more downtime if they're tightly webbed together in a way where each of them is a single point of failure. We hadn't really, at this point, had the luxury of finding out what our single points of failure even were. We thought it was going to be okay. But I bet you, if you haven't actually verified this, you will have single points of failure all over your application, where one failure will take everything down with it. Do you know what happens if your memcache cluster goes down? We didn't, and we were quite surprised to find out.
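To put rough numbers on that (four nines per component is just an illustrative assumption):

    per_component = 0.9999            # "four nines" of availability for one component
    components    = 10
    minutes_per_year = 365 * 24 * 60

    overall = per_component**components      # ~0.9990 when every component is a single point of failure

    (1 - per_component) * minutes_per_year   # ~53 minutes of downtime a year for one component
    (1 - overall) * minutes_per_year         # ~525 minutes, almost 9 hours, for the whole system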
This means that you're only really as weak or as good as your weakest single point of failure, and if you have multiple single points of failure, multiply the probabilities of all of those single points of failure together and you have the final probability of your app being available. Very quickly, what looks like downtime of hours per component becomes days or even weeks of downtime globally, amortized over an entire year. If you're not paying attention to this, it means that adding a component will probably decrease your overall availability. The outage looks something like this. Your response time increases, and this is a real graph of the incidents at the time in 2014 where something became slow, and as you can see here, the timeout is probably 20 seconds exactly. So something was being really slow and hitting a timeout of 20 seconds. If all of the workers in your application are spending 20 seconds waiting for something that's never going to return because it's going to time out, then there's no time to serve any request that might actually work. So if shard one is slow, requests for shard zero are going to lag behind in the queue, because these requests to shard one will never, ever complete. The mantra that you have to adopt when this starts becoming a problem for you is that a single component failure cannot compromise the availability or performance of your entire system. Your job is to build a reliable system from unreliable components. A really useful mental model for thinking about this is the resiliency matrix. On the left-hand side, we have all the components in our infrastructure. At the top, we have the sections of the infrastructure, such as admin, checkout, storefront, the ones I showed before. Every cell tells you what happens to the section if the component on the left is unavailable or slow. So if Redis goes down, is storefront up, is checkout up, is admin up? This is not what it actually looked like in reality when we drew this out. It was probably a lot worse. And we were shocked to find out how red and blue, how down and degraded, Shopify looked when what we thought were tangential data stores, like memcache and Redis, took down everything along with them. The other thing we were shocked about when we wrote this was that it is really hard to figure out. Figuring out what all these cells are and what their values are is really difficult. How do you do that? Do you go into production and just start taking down stuff? How do you know? What would you do in development? So we wrote a tool that will help you do this. The tool is called Toxiproxy. And what it does is that, for the duration of a block, it will emulate network failures at the network level by sitting in between you and that component on the left. This means that you can write a test for every single cell in that grid. So when you flip a cell from being red to being green, from being bad to being good, you can know that no one will ever reintroduce that failure. So these tests might look something like this: when some message queue is down, I get this section and I assert that the response is a success. At this point in Shopify, we have very good coverage of our resiliency matrix by unit tests that are all backed by Toxiproxy. And this is really, really simple to do. Another tool we wrote is called Semian. It's fairly complicated exactly how all of the components in Semian work and how they work together.
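Before getting to Semian: as a rough sketch, one of those Toxiproxy-backed resiliency tests can look something like this. The proxy name, route, and test class are made up, but the Toxiproxy[...].down block is the shape of the Ruby client's API.

    require "test_helper"
    require "toxiproxy"

    class StorefrontResiliencyTest < ActionDispatch::IntegrationTest
      test "storefront stays up when redis is unavailable" do
        # Traffic to Redis goes through a Toxiproxy proxy named :redis; inside
        # this block the proxy drops all connections, emulating a dead Redis.
        Toxiproxy[:redis].down do
          get "/products/example-product"
          assert_response :success
        end
      end
    end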
So I'm not going to go into it, but there's a README that goes into vivid detail about how Semian works. Semian is a library that helps your application become more resilient, and to find out how it does that, I encourage you to check out the README. But this tool was also invaluable for us in becoming a more resilient application. The mental model we mapped out for how to work with resiliency was that of a pyramid, where we had a lot of resiliency debt, because for 10 years we hadn't paid any attention to this. The web I talked about before, of certain elements dragging down everything with them, was evident. It was happening everywhere. The resiliency matrix was completely red when we started. And nowadays, it's in pretty good shape. So we started climbing the pyramid. We started figuring things out, writing all these tools, incorporating all these tools. And then, when we got to the very top, someone asked the question, what happens if the data center floods? That's when we started working on multi-DC in 2015. We needed a way such that if the data center caught fire, we could fail over to the other data center. But resiliency and sharding and optimization were more important for us than going multi-DC. Multi-DC was largely an infrastructure effort of just going from one to N. It required a massive amount of changes in our cookbooks. But finally, we had procured all the inventory and all the servers and everything needed to spin up a second data center. And at this point, if you want to fail over Shopify to another data center, you just run a script. And it's done. All of Shopify has moved to a different data center. And the strategy that it uses is actually quite simple, and one that most Rails apps can use pretty much as is if the traffic and things like that are set up correctly. Shopify is running in a data center right now in Virginia, and one in Chicago. If you go to a Shopify-owned IP, you will go to the data center that is closest to you. If you're in Toronto, you're going to go to the data center in Chicago. If you are in New Orleans, you might go to the data center in Virginia. When you hit that data center, the load balancers in that data center, inside of our network, know which one of the two data centers is active. Is it Chicago or is it Ashburn? And they will route all the traffic there. So when we do a failover, we tell the load balancers in all the data centers what the primary data center is. So if the primary data center was Chicago and we're moving it to Ashburn, we tell the load balancers in both data centers to route all traffic to Ashburn, Ashburn in Virginia. When the traffic gets there and we've just moved over, any write will fail. The databases at that point are in read-only. They are not writeable in both locations at once, because the risk of data corruption is too high. But that means that most things actually work. If you're browsing around a Shopify storefront looking at products, which is the majority of traffic, you won't notice anything. Even if you are in the admin, you might just be looking at your products and not notice this at all. And while that's happening, we're failing over all of the databases, which means checking that they're caught up in the new data center and then making them writeable. So very quickly the shards recover, over a couple of minutes. It could be anywhere from 10 to 60 seconds per database, and then Shopify works again.
We then move the jobs, because when we move all the traffic, we stop the jobs in the source data center. So we move all the jobs over to the new data center, and everything just ticks. But then, how do we use both of these data centers? We have one data center that is essentially doing nothing, just very, very expensive hardware sitting there doing absolutely nothing. How can we get to a state where we're running traffic out of multiple data centers at the same time, utilizing both of them? The architecture at first looked something like this. It was shared. We had shared Redis instances, shared memcache between all of the shops. When we say a shard, we're referring to a MySQL shard, but we hadn't sharded Redis. We hadn't sharded memcache and other things. So all of this was shared. What if, instead of running one big Shopify like this that we're moving around, we run many small Shopifys that are independent from each other and have everything they need to run? We call this a pod. So a pod will have everything that a Shopify needs to run: the workers, the Redis, the memcache, the MySQL, whatever else there needs to be for a little Shopify to run. If you have many of these Shopifys and they're completely independent, they can be in multiple data centers at the same time. You can have some of them active in data center one and some of them active in data center two. Pod one might be active in data center two, and pod two might be active in data center one. So that's good. But how do you get traffic there? For Shopify, every single shop usually has a domain. It might be a free domain that we provide, or their own domain. When a request hits one of the data centers, the one that you're closest to, Chicago or Virginia, depending on where in the world you are, it goes to this little script that's very aptly named Sorting Hat. And what Sorting Hat does is look at the request and figure out which shop, which pod, which mini Shopify this request belongs to. If that request is for a shop that belongs to pod two, it will route us to data center one on the left. But if it's another one, it will go to the right. So Sorting Hat is just sitting there sorting the requests and sending them to the right data center. It doesn't care which data center you land in; it will just route you to the other data center if it needs to. Okay. So we have an idea now of what this multi-DC strategy can look like, but how do we know if it's safe? It turns out that there just need to be two rules that are honored. Rule number one is that any request must be annotated with the shop or the pod that it's going to. All of these requests for the storefront are on the shop's domain, so they're indirectly annotated, through the domain, with the shop they're going to. With the domain, we know which pod, which mini Shopify, this request belongs to. The second rule is that any request can only touch one pod. Otherwise, it would have to go across data centers, and potentially this means that one request might have to reach Asia, Europe, and maybe also North America, all in the same request. And that's just not reliable. Again, fundamentally, shops and requests to shops should be independent, so we should be able to honor these two rules. So you might think, well, that sounds reasonable. Shopify should just be an application with a bunch of controller actions that each go to a single shop. But there were hundreds, if not thousands, of endpoints that violated this.
They might look something like this. They might go over every shard and count something, or maybe they're uninstalling PayPal accounts and checking whether any other stores use them, or something like that across multiple stores. When you have hundreds of endpoints that are violating something you're trying to do, and you have a hundred developers who are doing all kinds of other things and introducing new endpoints every single day, that's going to be a losing battle if you just send an email, because tomorrow someone joins who's never read that email and who's going to violate this. Raphael talked a little bit about this yesterday. He called it whitelisting. We called it shitlist-driven development. The idea is that your job, if you want to honor rules one and two, is to build something that gives you a shitlist, a list of all the existing things that violate the rule. If something violates the rule and isn't on the shitlist, you raise an error telling people what to do instead. This needs to be actionable. You can't just tell people not to do something unless you provide an alternative, even if the alternative is that they come to you and you help them solve the problem. But this means that you stop the bleeding, and going forward you can rely on rules one and two, in this case, being honored. When we had this for Shopify, rules one and two honored, our multi-DC strategy worked. And today, with all of this building on top of five years of work, we're running 80,000 requests per second out of multiple data centers. And this is how we got there. Thank you. Do you have any global data that doesn't fit into a shard? Yes. We have a dreaded master database, and that database holds data that doesn't belong to a single shop. In there is, for example, the shop model. We need something that stores the shop globally, because otherwise the load balancers can't know globally where the shop is. Other examples are apps. Apps are inherently global, and they're installed by many shops. It can be billing data, because that might span multiple shops. Partner data. There's actually a lot of this data. I didn't go into this at all, but I actually spent six months of my life solving this problem. So we have a master database, and it spans multiple data centers. The way that we solve this is essentially that we have read slaves in every single data center that feed off of the master database, which is in one of the data centers. If you do a write, you do cross-DC writes. This sounds super scary, but we eliminated pretty much every path that has a high SLO from writing to this. So billing has a lower SLO in Shopify, because the writes have to be cross-DC. But the thing is that billing and partners and the other sections of this master database are in different sections. They're fundamentally different applications, and as we speak, they're actually being extracted out of Shopify, because Shopify should be a completely sharded application. And if they're extracted out of Shopify, then you're also doing a cross-DC write, because you don't know where that thing is. So it's not really making the SLOs worse, and it's okay that some of these things have lower SLOs than the checkout and storefront and the admin, which have the highest SLOs. So that's how we deal with that. We don't really deal with it. How do you deal with a disproportionate amount of traffic to a single pod or a single shop? So I showed a diagram earlier that shows that the workers are isolated per pod. This is actually a lie.
The workers are shared, which means that a single pod can grab up to 60 to 70% of all of the capacity of Shopify. So what's actually isolated in the pod is all the data stores, and the workers can move between pods fluidly, like they're fungible. They will move between pods on the fly. The load balancer just sends requests to a worker, and it will connect to the correct pod as appropriate. So this means that the maximum capacity of a single store is somewhere between 60 and 70% of an entire data center. And it's not 100%, because that would cause an outage because of a single store, which we're not interested in. But that's how we sort of move this around. Does that answer it? How do we deal with large amounts of data? Yeah, like someone importing 100,000 customers or 100,000 orders. Well, this is where the multi-tenancy strategy, or architecture, sort of shines. These databases are massive. Half a terabyte of memory, many, many tens of cores. So if one customer has tons of orders, then that just fits. And if the customer is so large that they need to be moved, that's what this defragmentation project is about: moving these stores to somewhere where there might be more space for them. So basically we just deal with it by having massive, massive data stores that can handle this without a problem. The import itself is just done in a job. Some of these jobs are quite slow for the big customers, and we need to do some more parallelization work. But most of the time it's not a big deal. If you have millions of orders and it takes a week to import them, you have plenty of other work to do during that time anyway. So this is not something that's been high on the list. How much time do I have? Done? Okay. Thank you.
Shopify has taken Rails through some of the world's largest sales: the Super Bowl, celebrity launches, and Black Friday. In this talk, we will go through the evolution of the Shopify infrastructure: from re-architecting and caching in 2012, sharding in 2013, and reducing the blast radius of every point of failure in 2014, to 2016, where we accomplished running our 325,000+ stores out of multiple datacenters. It'll be a whirlwind tour of the lessons learned scaling one of the world's largest Rails deployments for half a decade.
10.5446/31225 (DOI)
I hope you're doing well. Thank you for coming by. I'm Justin Weiss, and I work at Avvo, where we help people find the legal help that they need. At Avvo, I help our software developers get better code into production more quickly. I also write articles and guides to becoming a better Rails developer on my site, justinweiss.com. And I wrote a book, Practicing Rails, which can help you learn Rails without getting overwhelmed. Hey, everyone. Hope your day has been going well so far. Thank you for coming by. I'm Justin Weiss, and wave at me if you've seen this part already. What if that was your experience on the web? Like, imagine if your site couldn't know that the same person visited two different times, or if everything you knew about a user just disappeared as soon as you returned that first piece of HTML. Now this might be fine for a site that doesn't do much, you know, a site that only cares about handing out the most generic information. But most of us, we don't really live in that world, do we? We need to know about our users, and we need to store some data about them, whether that's a user ID, a preferred language, whether they like the mobile or the desktop version of your site better, or what their favorite breed of cat to make memes with is. Now you could solve this the functional programmer way. You know, if you can't store state, then pass all the data you have along with every single request. Whoops. But since we're all using Rails, this problem is pretty easy to solve. We put data in the session hash, and it magically comes back to us on the next request. And with that, we never have to worry about which user is accessing our site ever again. So this has been my talk, an intro to sessions. Thank you all for coming. But wait a second. How does this even work? How does it stick around? I mean, I thought HTTP was stateless. Now Rails makes using sessions really easy, and that's great, but it's also a little bit dangerous. For a long time, I treated sessions like a database that I didn't have to set up. I didn't understand sessions, but I didn't really need to, because they were a magic hash that I could always depend on. Sometimes depend on. And that sometimes, that flakiness, meant that I hated using sessions. I mean, how many of you get unreasonably frustrated when session stuff doesn't work? When you see session exceptions or session bugs or missing data for a user. Why is that? In my first web programming job, a lot of people didn't have accounts, and so we had to use sessions a lot. And I caused so many problems with them. Nil pointer and couldn't-find-data-for-this-user exceptions showed up like ten times as often as any other. And I actually got reprimanded more than once for the problems that I caused around sessions. So this is what I did. I didn't understand it, so I just did what I was already doing, harder. The code I wrote wasn't working, but that's okay, because I'll just write more code. Nil checks everywhere. Now, when that didn't work, I tried to avoid them. Sessions are terrible. Let's not use them. I have an idea. Let's just make the user log in on every page. It'll be great. But then I got a little bit more mature, and I realized that this problem wasn't going to go away. So instead, I spent the time to really understand sessions at a deep level and construct a mental model of them that worked. And after a while, I started to be able to write code that avoided a lot of these problems in the first place.
The nice thing is, to understand sessions, we really don't need to get too complicated. Really, this is what we want. We want to know about a user. We want to know about them securely, so that nothing can mess with that data. And we want to know about it until they leave. Once they stop using the site, we don't really need to keep that data around anymore. All we need is some way for your user's browser to coordinate with your Rails app and make everything connect up. Now this problem, the problem of not being able to keep track of information about users, people realized this was a thing pretty early on. And as more people wanted to use the web, and especially to buy things on the web, because money, developers needed a way to keep track of things like shopping carts and user preferences. Now I was joking a little bit earlier about passing in all the data you needed along with every request through the URL. But that's actually not too far off from what we ended up doing. If those params were in the URL, though, it's easy to see them, it's easy to lose them, and it's easy to fake them. You have another option, though. When a browser makes a request to a server, it sends some headers along with that request, some information about that request. So what if browsers sent some user data along with the rest of the headers, so the server could see it? Once the server saw that data in the headers, it could use it, it could change it, and it could send new data back to the browser. Then the browser could send that modified copy back to the server, and they could just kind of ping-pong it back and forth, changing it as they needed to. Now this idea, the idea to use a header that would be automatically sent from the browser, came out of Netscape in the early 90s, and Netscape called these special headers cookies. And about a year later, it was supported in IE, so, yeah, even 20 years ago, we had to deal with that. But the neat thing about this is that the server really doesn't have to manage any of this data at all. This data is stored with the browser and managed by the browser. So how do these cookies work? Like, what do they look like? Well, let's say you make a request to Google through your browser, or, because these are slides, through a program like curl. When Google sends you the page, it also sends you some HTTP headers, you know, that metadata about a request. And there's one line I want you to focus on right now, this one. When Google returns a page, it also sends your browser that Set-Cookie header. When your browser sees that, it stores it along with information about which server it came from, in this case, google.com. That's so that the next time you request a web page from Google, your browser will send headers like this. And if we put these side by side, you can see it sends that exact same cookie the next time your browser hits Google. That way, your browser and your server have some shared piece of information. They have that connection, they can keep their conversation going, and not have to reintroduce themselves to each other every single time. Now all cookies have a few different parts. They have data, which is the information that your server wants to remember. And they have metadata, which the browser cares about, which determines how and when that cookie should be sent to a server. This first part, the part before the semicolon, is the data.
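For example (the values here are made up):

    Server to browser, in the response headers:
        Set-Cookie: last_search=cats; expires=Wed, 07 Sep 2016 00:00:00 GMT; path=/; domain=.example.com

    Browser back to server, on every later request to example.com:
        Cookie: last_search=cats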
When you use the session object in Rails, it's storing data in that part, and it's reading data out of that part. The rest of it is all metadata. For example, you can give cookies an expiration date. In this case, after September 7th, the browser won't send this cookie anymore, which means that the server won't have access to the data inside that cookie anymore. If you don't set an expires date, the cookie will usually disappear as soon as the browser closes. And these are called session cookies, because they last for one browser session and then get deleted when you close the browser. They're sometimes even just stored in the browser's memory and not actually persisted anywhere, because you don't really need to. Cookies that have an expires date are sometimes called permanent cookies, because they last until the date you set. They're not automatically cleared when you close the browser. Now sites can't read each other's cookie data, so if I were visiting goggles.com instead of google.com, this cookie wouldn't be sent. With path and subdomain, you can go a little bit further. You can go down to subdirectories and subdomains, which is especially helpful if you end up running multiple apps run by multiple users on a single domain, like, think GitHub Pages or WordPress. There's a leading dot there, but .google.com is the same thing as google.com. It's safer to have it, because older browsers still care about it. Now when you set a domain, it also includes all subdomains. So here the cookie is valid for google.com and all the subdomains, drive.google, docs.google, all that stuff. And there are also some extra attributes you can give cookies, like HTTP-only, that I'll go over a little bit later on. But you didn't just come here to learn about cookies and get hungry before lunch. What does all of this have to do with sessions? Well, sessions are built using cookies, because cookies are a pretty reliable way to keep track of users without having to keep track of params. So if you can understand cookies at a deep level, it's going to be much, much easier to understand sessions and how they work and how they don't. We'll see in a minute that, just like a hash, you can build way more complicated things on top of cookies. But out of the box, they're pretty limited. A single cookie can hold only a single value, so you need a separate cookie for every key-value pair. And because all of the information is on the user's side, on the browser side, you can't necessarily trust what they give you. Like, what if you stored a username inside a cookie? The server would send this to your browser, then you go into your browser's cookie database and change it to this. Your browser sends this cookie to the server, and all of a sudden you can mess with everybody's data, because the server doesn't know any better. Like, it can't know any better. It has to trust this. Now this isn't any safer than having it in the URL like we saw before. So how does Rails get around these problems? Well, let's dig into an example. Here we have a pretty simple controller action (roughly sketched below). It takes whatever's in params[:name], like whatever's passed as a name param, and puts it into the session under session[:name]. This way we should be able to see some session data get created. So let's hit that and see what we get back. You can see that we're sending the name just into the controller. Again, our controller is going to take that name param, put it into the session, and then hopefully return that session back.
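The slide itself isn't in the transcript, but the action being described is roughly this sketch; the controller name and route are made up.

    class SessionsDemoController < ApplicationController
      def show
        # Copy the ?name= param into the session, then echo the whole session
        # back so we can see what Rails stored.
        session[:name] = params[:name] if params[:name]
        render json: session.to_hash
      end
    end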
And that's maybe what we see over there. You can see that Rails stores the session under a single key, like a single cookie, this session_my_app cookie. And if you search the sample codebase for session_my_app, you'll find it in an initializer, session_store.rb. If you change that option, it'll change which key your session data is stored under, and also break all your old sessions in the process. So not necessarily a great idea, but you can certainly do it. Now what about the rest of the cookie? Like I said, we maybe have the data in there, but we can't really tell. It's totally unreadable. I mean, somebody can try if they want, but it's probably not going to be too successful. So how does it do that? Why does Rails do that? Well, the why is pretty easy to answer. The value looks like that so that users, or anybody else, can't mess with their own cookies. In modern versions of Rails, session cookies are signed, and that means that if anybody tampers with them, they become invalid. And they're also encrypted, so that nobody can even see the data stored inside them. But that's not going to stop us. Let's try to get into this cookie and see what's inside. Now all Rails apps have a secret key. These are the things that constantly get leaked onto GitHub. And the key is used for all the encryption that Rails does, which is why it's such a disaster when it gets leaked onto GitHub. But that includes encrypting and signing cookies. And Rails generates some default secret keys for dev and test. You can generate new ones for production with rake secret. When your app boots, Rails puts the secret key into the key generator, here at Rails.application.key_generator. Don't bother writing any of this down; I have links later on. But you can see we use that key generator, that Rails.application.key_generator at the top, to create some secrets. And then we use those secrets to create an encryptor object. And this encryptor object is the same kind of thing that Rails uses to encrypt and decrypt cookies. So with the encryptor object we have at the bottom here, we should be able to decrypt our cookie (there's a rough sketch of this below). So we can paste that big giant string of encrypted text into this encryptor object, and this is what we get. It looks like JSON, right? So why is it JSON? Well, it turns out you can also configure that. It's set inside another initializer, this cookies_serializer one. And you can also use the symbol :marshal, if you'd rather use Ruby's Marshal. But most people just use JSON; it's the default. It's also even useful if you're, like, trying to share cookies between apps written in other languages, because every language understands JSON at this point. So what do we know now? We know that Rails stores all the session data inside a single cookie. We know that it does this by turning it into JSON, which gives us the opportunity to put multiple keys and values inside a single cookie. We know that Rails signs and encrypts the cookie, so you can't tamper with it or look at it. And we know that the session key and the serializer can both be configured to be something different if you want. So now we can see that the cookie actually does contain that name param that we passed way back then. And what we should be able to see now is that if we stop passing that param and we pass this cookie instead, we should be able to see this data come from the session data and not from the param. And so let's check that out.
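As a rough sketch of that decryption step for an older (pre-Rails 5.2) encrypted cookie; the salts, key length, and serializer all depend on the Rails version and configuration, so treat this as illustrative rather than copy-paste:

    config        = Rails.application.config
    key_generator = Rails.application.key_generator

    secret      = key_generator.generate_key(config.action_dispatch.encrypted_cookie_salt, 32)
    sign_secret = key_generator.generate_key(config.action_dispatch.encrypted_signed_cookie_salt)

    encryptor = ActiveSupport::MessageEncryptor.new(secret, sign_secret, serializer: JSON)

    # cookie_value is the big unreadable string from the session cookie,
    # CGI-unescaped first because it arrives URL-encoded.
    encryptor.decrypt_and_verify(CGI.unescape(cookie_value))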
Here we pass that big blob of encrypted cookie back to Rails, and it should remember who we are without a parameter. You see, we dropped the parameter off of the URL at the bottom there. And that's what we see. And we didn't have to pass any params. This is exactly the sort of thing that a browser would be doing in this case to store that data across multiple requests. So step by step, to kind of tie everything together: your browser hits a server, and the server stores data into the session. Rails turns that session data into JSON. It encrypts the JSON, and it signs it, and then it sends the encrypted cookie back to the browser in that Set-Cookie header. The browser then stores it, along with the fact that it came from your Rails app. That's so that the next time you hit your Rails app, the browser will send that cookie back. Rails will verify and decrypt it and then turn it into that session hash that everybody can use. It's like params that are passed on every page, but managed automatically so that you don't have to think about it. Finally, Rails can change the data and send it back to the browser, which will overwrite the previous cookie, and they can just keep passing that data back and forth. But if passing cookies back and forth was all there was to sessions, there'd be no reason to call them sessions. I mean, you'd just say, hey, this cookie, this one is your session cookie. It's just the same thing. But cookies aren't always the right answer. Remember how your browser sends that cookie along with every single request? Well, what happens if you start storing a ton of data inside of that cookie? What happens if you store a 4 meg PDF inside the cookie, or, like, the full text of Moby Dick for some reason? Every request to your server would include that 4 megs of data, even if your server didn't care about it right now, even if your server didn't read it during that request. So cookies are limited. You can only put 4 kilobytes of data in there. If you store more than that, you're going to get an exception, this ActionDispatch::Cookies::CookieOverflow exception, which also happens to be the most delicious of all of the Rails exceptions. But even 4K is a whole lot bigger than most HTTP requests. I mean, most requests are only a couple hundred bytes. This is like 10 times that size. So if you care about performance, you probably don't want to get even close to that 4K limit. But what if you needed to store more data than that inside your session? How can you keep your cookies small, but make your sessions big? Well, let's think about how you're already dealing with users. If your cookie stored a user, you're probably storing a user ID in there. You're not storing their email address in there. You're not storing their full name in there. You're not storing their list of cart items in there. You're just storing their user ID, and then you'll use that ID to look up other information inside the database later on. But what about people who don't have an account? They don't have a user ID, so you can't do this. But you could generate a random ID, and you could store that in the cookie, like this. Then you could use that ID in the exact same way that you're using the user ID, to look up information from the database later on. It's not really a user ID, though. So you should probably call it something different, in this case, a session ID. So now we have two different options for storing data persistently across multiple requests.
You can store the data right inside the cookie, or you can store a reference to that data inside the cookie and store the actual data someplace else, like inside a database. Now what would that second option look like? Well, let's say that, just like the rest of our data, we want to use Active Record to store our session data. And let's say we called session[:name] = "justin" to create a brand new session. What would Rails have to do in order to store this in Active Record? Well, Rails could generate a new random session ID, so it has something to look it up with. It could turn the session hash into a string, so that you don't have to have, like, separate columns for everything you could possibly store in the session. You just stuff it into one string and then put it in a single column. It would save the ID and that data to a row in your database, so you could look it up later. And then it would return the ID with Set-Cookie, so that the next time your browser hit your site, you could use that ID, look up your session data, and get your session hash back. So let's take a look at that in action. First we'll change our session store to the Active Record store, which is a gem. Like that. Then we'll add some data to the session using curl again. Remember, when we pass that name parameter, it just takes it out of the params and puts it into the session. And this time, we're just getting a short string returned instead of that big mess of encrypted, signed data, that a6c49 string instead of, yeah, instead of that big mess. And that comes from, if you look inside your database, you'll see a session ID alongside some encoded data. And you'll notice that the session ID and the string returned to the browser match. And that's how we'll use that ID to look up the session data later on. When your browser sends that cookie data back to the site, it remembers who we are, again without passing in a param. And this is how that works. It grabs the session ID out of the cookie, it looks up the session ID in your database, it pulls the data that's associated with that ID, and then it transforms that data back into your session hash. You can even store sessions in memcache and Redis and MongoDB, or, like, pretty much anywhere else. And they all pretty much follow the same process. Your cookie is now just a session ID, and then your app uses that ID to look up the rest of the information. And you can even write your own session store. You just need to tell Rack how to find sessions, how to create new sessions, how to write session data, and how to delete sessions, by implementing a couple of methods (sketched below). And Rails even includes a simple cache store that uses your Rails cache to store sessions. And it's a really, really simple and good example to follow if this is something that you're interested in. I'll have a link to that in the notes that I'll share at the end. And that's really the gist of how Rails stores sessions. There are two different strategies it uses. There's the cookie store strategy, and there's the everything-else strategy. No matter what, you're storing some data inside the cookie, because you have to. It's the way that the browser keeps that relationship with the server. But while the cookie store stores all the data inside the cookie, the other methods just store a reference to the data inside that cookie. And then they can store the data however they want, like in a database, on disk, in memory, wherever. But now we have a choice to make, because we have a few different ways to store sessions.
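For reference, a bare-bones custom store might look roughly like this. MySessionBackend is a made-up stand-in for wherever you keep the data, and the find_session/write_session/delete_session names follow the newer Rack API; older Rails versions use get_session, set_session, and destroy_session instead.

    class MySessionStore < ActionDispatch::Session::AbstractStore
      # Given the session ID from the cookie, return [sid, data].
      def find_session(env, sid)
        if sid && (data = MySessionBackend.read(sid))
          [sid, data]
        else
          [generate_sid, {}]
        end
      end

      # Persist the session hash under the ID, and return the ID to put in the cookie.
      def write_session(env, sid, session, options)
        MySessionBackend.write(sid, session)
        sid
      end

      # Remove the data and hand back a fresh ID.
      def delete_session(env, sid, options)
        MySessionBackend.delete(sid)
        generate_sid
      end
    end

    # config/initializers/session_store.rb
    Rails.application.config.session_store MySessionStore, key: "_my_app_session"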
And this is an important choice to make, because changing session stores is not an easy thing to do. So which one should you choose? Should you choose the cookie store, the cache store built into Rails, or the database store? Well, storing your session data in cookies is by far the easiest way to go. You don't need to do any extra infrastructure setup. It just kind of works out of the box. It's also nice because it syncs with your user lifecycle. And by that, I mean, while your user is visiting your site, it's active. When your user stops visiting your site, if they never visit your site again, you have no cleanup to do, because the cookie is on the browser side, not the server side. No other method can guarantee that, and it saves you some cleanup. But it's also limited. You can only store 4K of data, and you probably don't want to go anywhere near that. And it's also more vulnerable to certain kinds of attacks, which I'll go into a little bit later on. But if the cookie store won't work for you, you have two options. You can store sessions in a database, or you can store them in your Rails cache. Now you might already be using something like memcache to cache your partials or some, like, API response data, or that kind of thing. And if you are already using a Rails cache, then this is pretty easy too. It's already set up for you. You don't have to do any extra infrastructure work or any of that kind of stuff. You also don't have to worry about your session data growing out of control, because most good caches are going to evict older stuff when new stuff comes in. And it's fast, because if your cache is slow, you probably have bigger problems to solve. But it's also not perfect. Your sessions and your cache data are going to be fighting for space. And if you don't have enough memory, you could be facing a ton of early cache misses and early expired sessions. And if you ever need to reset your cache, like, let's say you upgraded Rails or you made a big sweeping change around your site and you just wanted to wipe everything and start over, you can't do that without also wiping your sessions. Still, this is how we tend to store data on our main Avvo.com site, and it's worked pretty well for us so far, with those caveats. Now, if you want to keep your data around until it legitimately expires, you probably want to keep it in some sort of database, whether that's, like, Redis or whatever you're using for Active Record or something else. But storing sessions inside your database has some other problems. Sessions won't get cleaned up automatically, so you'll have to go through and delete old sessions on your own. You also have to know how your database is going to work when it's full of session data. Like, are you using Redis as your session store? Is it going to try to keep all of your session data in memory? Does your server have enough memory for that, or is it going to start swapping so hard that you can't SSH in to fix it? It's happened to me. You also have to be more careful about when you create session data, or you'll fill your database with useless sessions. Like, for example, if you accidentally touch the session on every single request, when Googlebot crawls your site, it could be creating hundreds of thousands of useless sessions that are never going to be hit again. And that would be a bad time. So most of these problems don't happen super frequently, but they're all things that you need to think about if you're storing the session data semi-permanently.
Now, if you're pretty sure you won't run into any of the cookie store's limits, the cookie store is my favorite. You don't need to set it up, and it's not a headache to maintain. Cache versus database, I see as more of a choice of how much maintenance you want to do versus how much you worry about accidentally expiring sessions early. I tend to treat session data as pretty temporary. I tend to program pretty defensively around sessions. So the cache store works well for me. And my personal preference is cookie store first, then cache store, then database store. Now, in a lot of these examples, we've used sessions to identify users. And that's actually one of the more common things you're going to use sessions for. And that also makes it a super juicy target for hacking. That means that on top of the pretty simple key-value pairs that make up cookies and sessions, there's a lot of extra stuff that somebody needs to worry about in order to keep your cookies secure. Now I talked about how the Rails server trusts your cookie. It kind of has to, because it's the only thing it has to go on. And that means that if somebody else can get your session cookie, the Rails server has no way to tell that they're not actually you. Now on lots of public Wi-Fi networks, you can pretty easily snoop on other people's network traffic. And so if you're sending cookies over an insecure network to insecure servers, somebody might be able to grab your cookies and pretend that they're their cookies. Now this became a pretty big deal a couple of years ago when a guy, Eric Butler, released a proof of concept called Firesheep that would grab cookies over open Wi-Fi networks. And check this out. You click on somebody and you're instantly logged in as them. That's scary, right? I mean, Facebook used to have this happen to them. Now the only way to really prevent this is to run your site over HTTPS. That way all of your cookie data and all your session data is secured along with the rest of your internet traffic. On the Rails side, you can turn this on pretty easily. There's some extra infrastructure setup you'll have to do, though. But you flip this config.force_ssl = true in your production.rb. And with free SSL certificates from Let's Encrypt and this whole ecosystem building up around them, and I think Heroku now supporting SSL on all paid dynos, there's really not a great excuse to run a site without SSL anymore. After you force SSL on, Rails will automatically add this attribute to your cookies, this secure attribute. What this means is that your cookies will no longer be sent to HTTP protocol sites. They're only going to be sent over HTTPS. It works just the same way as if you were trying to send a cookie to a different domain. But snooping a Wi-Fi connection isn't the only way to steal somebody's cookies, because JavaScript can also read cookies. That is, if you're Google, you can use document.cookie to read google.com cookies. And anybody else that can run JavaScript on Google can also read Google cookies and send them to whatever server they want. Now MySpace is probably my favorite example of this. MySpace is my favorite example of a lot of things, but this was particularly fantastic. That site used to have scripting vulnerabilities all over the place. It was really easy to get JavaScript or Flash embedded on your profile. And once you did that, you could grab information about any of the people that hit your page, like name, profile URL, account ID, all that kind of stuff.
I'm guessing that you probably could have even logged in as them, but nobody that I knew figured that part out. Rails protects you from a lot of these attacks automatically by escaping your HTML. And Rails also marks session cookies as HTTP-only by default. What that means is that when a cookie is marked as HTTP-only, it's only going to be accessible to your server. The browser's not going to make it accessible to JavaScript anymore. So that helps with a lot of these things. But that's not enough. You can't even trust your own users. Like, say you run a music store and your customers can earn credit to buy songs. Your boss read an article saying that forcing signups drops conversion rates, so no more signups, we're just going to put everything in the session instead. Seems great, until one of your users gets this brilliant idea. But Justin, you probably say, we already went over this. Rails encrypts and signs the cookie so you can't tamper with it. But in this case, you don't actually have to tamper with it. Imagine the cookie is encrypted, and you can't actually get into it. Somebody buys a song, sending this cookie with 400 credits in it. You respond with a new cookie that has 300 credits in it. Your user ignores that new cookie and sends the old one again, with 400 credits. Now they have infinite credits, because the number is never going to go down. Now this one doesn't really have an easy fix. You can store a unique number in the session and then check to make sure that you never use a number more than once, which is not really my favorite thing to do. Or you can switch to a database store or a cache store, which doesn't really have this problem, because all of that data is stored on the server side. But a better idea is just not to put this data in the cookie to begin with. That's what we have databases for, for storing this kind of data. Now these are some of the more interesting attacks, but there's a whole lot more. And I'm a big, big fan of the Rails security guide for learning more about this kind of thing. I'll have a link to that in the notes also. Now it might seem like there's a lot to think about around cookies and sessions, but there are a few good rules of thumb that I've picked up over the years that tend to keep problems to a minimum. The first is: prepare for the session to disappear at any time. And this happens because sessions are on the user's computer. And that's a problem, because that means you have absolutely no control over when they clear their cookies or when they switch devices or any of that stuff. So keep in mind, every time you use a session, that session might not be there anymore. Program defensively, because it's going to happen, and if you're not prepared for it, it can cause big problems later on when you're not seeing the data that you expect. The second is: don't store actual objects in your session (there's a sketch of the safer pattern below). So why would this be a bad idea? Well, let's say you store a cart item in the session that has a title and a quantity. And later on, you rename title to name, because, frankly, title is a terrible name for a cart item attribute, and I have no idea who came up with that. This is probably going to work great for you in dev. You probably don't even have a cart item in your session in dev mode. But then when you ship to production, cart items in old sessions are going to try to turn into new cart items. They'll try to put data into a title attribute that no longer exists. Then everything will explode.
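A rough sketch of the safer pattern (the model and session key names are made up): keep IDs in the session, look the real objects up per request, and assume the session can be empty at any time.

    # Risky: a renamed or removed attribute on CartItem breaks every old session.
    session[:cart_items] = cart.items.to_a

    # Safer: store only IDs, and treat a missing key as an empty cart.
    session[:cart_item_ids] = cart.items.pluck(:id)

    def current_cart_items
      ids = session[:cart_item_ids] || []
      CartItem.where(id: ids)   # quietly drops IDs that no longer exist
    end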
Now I've personally taken down large chunks of a site because of that stale-object-in-the-session problem, and I know I'm not the only one. I've seen this happen many times. And when it happens, you really only have two options and they're both terrible. One is you could reverse the change, which probably isn't going to work because now you have people with old cart items and new cart items in their sessions, and so you could try to come up with some grand unified cart item that deals with both — the whole time the site's falling down and everybody's hair is on fire and it's just a bad time. Or you could just say we're going to start from a clean slate, we're going to wipe all the session data and we're going to log everybody out and lose all their data. So I usually go for this third one over here, at least for a little while. Now the bigger your objects are, the more likely this is to happen. This never shows up in dev and test, because you probably aren't using sessions in the same way in dev and test as you are in production. It's the ultimate works-on-my-machine and it will wreck everything when it ships. So just don't do it. Prefer storing references to objects — object IDs — in the session, not the objects themselves. And finally, be deliberate about what you use the session for. Only use the session when it makes a lot of sense. Because sessions are so easy, it's really easy for them to become a dumping ground of random data, and that's when things start to go really wrong. One of the worst bugs I ever investigated started like this. We shipped something and we all of a sudden started to see exceptions coming from what seemed like a completely random part of the app. Like a lot of session bugs, we couldn't repro it locally and we couldn't debug it remotely. And after adding a bunch of logging, we ended up discovering that some code that we deleted a long time ago used the same name for something in the session as something that we just recently shipped. We had two completely different pieces of data that were stomping on each other and causing problems. And it turned out that neither of those things needed to be in the session. We just put it in the session because it was convenient and we didn't want to roll a whole new database table for them. And that convenience ended up costing us way more dev time, and the experience of some of our users, than it would have cost to just do it right in the first place. Just like code, if you don't use sessions for something, it can't cause a problem. So use it with intent. Even when you follow these best practices, though, things are going to go wrong. They just will. So how do you start debugging when you're not seeing what you expect? Well, the best trick I've ever learned to help me debug any kind of problem is to isolate the problem area as quickly as possible. And by that I mean: is a function getting the right input? You probably don't need to go any higher than that. Is a function sending the output you'd expect given its input? You probably don't need to go any lower than that. And you just keep cutting those closer and closer until you really narrow in on the place that's causing the problem. So the best tools I've found for debugging session issues are all about showing me what my server is sending and what my server is receiving. So how many of you are using something like curl or Postman or Paw in your web development? Yeah, that's many, many people. So these are great tools for debugging session issues.
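Those two rules of thumb look roughly like this in code — the CartItem model and the session keys here are hypothetical:

```ruby
# Store an ID, not an object, and assume the session can vanish at any time.
class CartsController < ApplicationController
  def show
    # Bad:  session[:cart_item] = CartItem.new(title: "Vinyl", quantity: 2)
    # Good: keep a reference and look the record up on every request.
    @cart_item = CartItem.find_by(id: session[:cart_item_id])

    if @cart_item.nil?
      # Cookies cleared, device switched, or the record was deleted -- recover gracefully.
      session.delete(:cart_item_id)
      redirect_to products_path, notice: "Your cart was empty. Start shopping!"
    end
  end
end
```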
With those tools, you can see the session data your server is sending, and you can send arbitrary sessions back to the server and see how it responds. If curl, Postman, or Paw are telling you that your server is working OK, you can usually assume that it's a problem with the browser not sending something you expect, or something else weird going on. And for debugging weird internet problems, mitmproxy is my favorite tool. mitmproxy is a little server that sits between your browser and your app, and it shows you all of the network connections that go on between the two of them. So you can see a list of network requests. You can dive into each of these things and start to see the request headers, the response headers, all that kind of thing. You can see all of the real stuff that's going back and forth, which is really great at helping to debug these kinds of problems. And even just last week, I was debugging a session race condition where we had Ajax requests that were stomping on session data. And with mitmproxy, I was able to, within about a half hour, construct an actual timeline of how these requests were going out, when they were coming back, and how they were conflicting with one another. So I'm a big, big fan of this tool. If your browser isn't sending cookies correctly, nine times out of ten the domain settings on your cookie are wrong. This is really easy to mess up. It's also really hard to test in dev, because in dev mode you're probably not running the entire DNS structure of your production website. And it's also going to be hard to debug your session and cookie data if you can't see what it is. So if you're using cookie sessions, I have the beginnings of a gem that you can install in dev mode. If you include the gem and run rails console inside your Rails app, you can paste in your cookie strings and decrypt them using the Rails key generator. And I'll have links to all this stuff in the talk notes too. From all this, we can kind of see that sessions are core to the modern web. And by modern, I mean since like 1995, so modern-ish. And when you run into problems with session data, it might seem like they're big, they're complicated, they're flaky, they're frustrating. But like we saw, session data isn't that big of a thing. Sessions are based on a pretty simple primitive, you know: a single key, a single value, and some metadata. And on that foundation, you build new features bit by bit and piece by piece. First you serialize the values, so you can store more data in a single cookie. Then you encrypt and sign it, to prevent tampering. Or you use the cookie values as a reference to data somewhere else. Sessions get big and complicated, but really, at their core, they're built out of a few simple parts that all can be combined together. And this is one of my favorite things about software development: it's all just code. And things that seem super complicated, like Git or sessions or how the web even works, they were all built by somebody. They were built for a reason. They were built to solve a problem. And when you understand what that problem is and how they were built, they're all, at their core, understandable. We usually only dig into these things when we have problems with them, and that stress and that confusion makes them seem completely insurmountable. But in the end, they almost always turn out to be way more simple than you'd expect.
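The manual version of that decode-your-cookie trick might look roughly like this in rails console. This is a sketch for the pre-Rails-5.2 encrypted cookie format (AES-256-CBC with the default salts); newer Rails versions derive keys differently and your serializer setting may vary, so treat it as illustrative only:

```ruby
require "cgi"

# Paste the value of your _your_app_session cookie here (URL-encoded, as the browser sends it).
raw_cookie = CGI.unescape("paste-cookie-value-here")

key_generator = ActiveSupport::KeyGenerator.new(
  Rails.application.secrets.secret_key_base, iterations: 1000
)
secret      = key_generator.generate_key("encrypted cookie")[0, 32]  # default salt, AES-256 key size
sign_secret = key_generator.generate_key("signed encrypted cookie")  # default signing salt

# Use serializer: JSON if your app sets config.action_dispatch.cookies_serializer = :json
encryptor = ActiveSupport::MessageEncryptor.new(secret, sign_secret, serializer: JSON)
puts encryptor.decrypt_and_verify(raw_cookie)   # => your session hash
```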
So the next time you get unreasonably frustrated when something doesn't work, don't be like me, skip straight to that last phase, spend some time and force yourself to learn it. Turn that frustration into a mystery to solve and dig into the pieces until they're small enough to understand. Until you'll transform bugs that seem confusing, random and unfair, into some new and exciting piece of knowledge you can use for the rest of your programming career. And if you've recently learned something new and exciting or want to talk programming or really anything else, I would love to talk with you. My email address is up here. Use it. I love getting email. I read and respond to everything. Let me know if you're ever in Seattle. I'd love to grab a coffee with you. And that last link up there, if you're going to write one thing down from this, write that down. It's a link to resources for the talk with the slides, gem to decode, encrypted sessions, and some other notes that didn't quite fit in and some useful session-related links. So it looks like I have about eight minutes or so for some questions if anybody has them. I may have to answer them or deflect them in some creative way. All right. Well, thank you so much again for the time.
What if your Rails app couldn’t tell who was visiting it? If you had no idea that the same person requested two different pages? If all the data you stored vanished as soon as you returned a response? The session is the perfect place to put this kind of data. But sessions can be a little magical. What is a session? How does Rails know to show the right data to the right person? And how do you decide where you keep your session data?
10.5446/31226 (DOI)
All right, let's get started. I can't see very well out there, so shout if you feel the need to ask questions or interrupt. How's the conference for everybody so far? Good? All right, excellent. All right, this thing's not going to work apparently. So I'm Alex Boster. I work for Apfolio, which is a company in Santa Barbara. I work in their San Diego engineering office. We're hiring. Come talk to me. So the last I would have said one particular project inspired this talk, but really it's also the culmination of the kind of experience you get after a few years of doing web development. This is a survey of surprisingly difficult things. What things do I mean? Commonplace things that you know all about in your day-to-day life, so they're easy to model and you model them and they're great, and then it turns out it's actually a lot harder than that. Things where the obvious implementation may very well cause problems. And in fact, you know, these are things like timestamps, time zones, physical addresses, human names. Now when you hear these terms, what do you think? Does this sound easy? These are solved problems, right? Easy. No problem. No complications. What am I not going to talk about? I'm not going to talk about cache invalidation or distributed systems or other also surprisingly difficult things, but this is about real-world stuff. So one of the things, you know, one of the reasons I wanted to give this talk was that developers fall into these traps all the time. We spent months cleaning up old buggy code and then new bugs of the same type are introduced six months later by other developers. So maybe, you know, with a laundry list of things to be aware to watch out for, maybe you won't fall into that trap. And even very senior developers, you know, it might help to just have the occasional reminder here. If you're not a senior developer and you haven't dealt with this stuff much before, then hopefully this talk will save you some time in the future. Another good thing is that if you follow these best practices or similar things, then your app can be more inclusive. So I know that when I start dealing with these world-world things, you know, it causes me to want to drink. Might just drive you to drink, too. So let's start with time. So there are a bunch of time and date classes available to you. The only one I really want to draw attention to, this is mostly for reference later, but the one I want to draw attention to is the last line where there's just no good cross system standard for duration. It's different in every database. It's different in Ruby than it is in the databases. So pay a little attention to that and check out active support duration. But what makes time actually hard to deal with is not this. It's time zones. So again, isn't this a solved problem? You can just have a sufficiently large integer to represent seconds or fractions of a second. And now you have a time value and you're good, right? No problems. Let's all go home. Well, again, the problem is in time zones. So how many time zones do you think there are? Anyone? Thirty something? Say it again? Forty? Well, you may be right, you know, at a certain level. Actually the time zone database, which I'll talk about in a minute, defines 385 time zones and then has a further 176 internal links, which are aliases basically to give them different names. So something to remember, there are half hour time zones. There are quarter hour time zones. You've got daylight savings time to take into account. 
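On that duration aside: ActiveSupport::Duration at least standardizes things on the Ruby side, even though every database still has its own idea of an interval. A small illustration:

```ruby
duration = 2.days + 3.hours              # => an ActiveSupport::Duration
duration.to_i                            # => 183600 (seconds)
duration.parts                           # the individual components (days, hours)
Time.current + duration                  # time-zone-aware arithmetic
ActiveSupport::Duration.parse("P2DT3H")  # parses an ISO 8601 duration string (Rails 5+)
```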
And more on daylight savings time: a place may start observing daylight savings time that previously didn't — Arizona currently doesn't, for example. A place may change its schedule, like the entire United States did 10 years ago or so when it shifted when daylight savings time started. I don't know if this is true currently, but certainly in the past we've seen examples of two-hour daylight savings time changes. This was called double summer time in the UK. And a place may change time zones entirely. They may just switch. So this time zone database I spoke of is used in many Unix-like systems. It's used all over the place, maintained by a small band of dedicated developers. It tracks all geographic time zones since 1970, and they define a zone as, you know, an area where all the clocks have agreed with each other since 1970. Seriously, before 1970 they don't care as much, but they do have historical data. And it's updated several times a year. If you think this stuff is static — no. It's updated several times a year, and here's an example of some release notes. You can read through that quickly. Mongolia no longer observes daylight savings time. This region moved from one zone to another year-round, and the clocks starting at this particular time hived off a new zone, which also affects part of Antarctica. This change fixed many entries for historical time for Madrid before 1979. And it noted that Ecuador actually observed daylight savings time for a particular stretch. The exact details aren't important; just know this stuff is really complicated, and thank goodness somebody's keeping track of it. And that's an example of all the actual regions defined. That's, again, just unique regions since 1970. Another little bit of trivia: how many time zones are in the United States according to this map? Just the continental United States. I count at least six. So we use UTC as a way to standardize things — you know, when things happen at the same instant in time regardless of what you actually call that time in a particular place. Hopefully we know that it stands for neither Coordinated Universal Time nor Temps Universel Coordonné; it's a compromise between the two, named by diplomats no doubt. UTC is not a time zone, but every time zone has an offset from UTC. And as a rule, you should store your time values in UTC. So also, before I proceed: what are the possible offsets from UTC? UTC is kind of in the middle, and you can go forward from it and you can go back. How far out do they go? Anyone want to take a guess? That is exactly what I thought. So in 1995, Kiribati got tired of having their country in two different days. So they moved the time zone of some of their outlying islands from minus 10 to plus 14. So there are plus-14 time zones on that side. So keep in mind: without a time zone, any time value you have without context could take place within a 26-hour range, in possible half-hour or quarter-hour increments. So how do you handle this? Well, if you don't explicitly provide a time zone, a time you provide could be interpreted using the operating system's default, using your database's default, or using your application's default time zone. That should be configured in your Rails app. Okay, now I need a slightly bigger drink.
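The application default the talk keeps coming back to is a one-line config. The app name here is made up, and the UTC setting shown is already the Rails default:

```ruby
# config/application.rb
module MyApp
  class Application < Rails::Application
    # What Time.zone and the time-zone-aware helpers use unless you override it.
    config.time_zone = "Pacific Time (US & Canada)"

    # Keep ActiveRecord reading/writing timestamps in UTC (the default).
    config.active_record.default_timezone = :utc
  end
end
```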
So as we said, keep your system and database time in UTC. Rails will store its datetimes in UTC, and the time-zone-aware methods in Rails will use the application's default if you don't override it by expressly providing one. So for example, if you have users, be sure to store a time zone on the user model and always use that in your views if you care at all about when things occur for said users. And just as an example here, you can see that you want to use Time.zone.now. I'll talk about this a little more in a minute. And ActiveSupport provides some really sophisticated stuff around it. So don't just use bare Ruby Time — use the Rails classes for all this stuff. So these time-zone-aware methods: you can see we're parsing two different times in different time zones, but they're actually the same time. So it all works. That's cool. And here are some examples of methods you should use. hours.from_now, days.ago — those are all good. Always do Time.zone.parse, don't do Time.parse. If you use Time.strptime, like in the middle here, always tack in_time_zone on the end or you will be screwed. And definitely prefer Time.current to other methods for getting the current time. The utc.iso8601 one is for if you're providing something to an API. These examples are all stolen shamelessly from a blog post that I note there on the bottom. So dates are simpler, right? Dates don't have a time zone. How do you know if you should be storing something in a date or a time? Ask yourself: does it matter what time of day? This seems really basic, but people make this mistake all the time and just convert a date to a time willy-nilly. So don't do this. So what are some examples of dates? Well, birthdays. A birthday occurs on a day. We don't generally observe the actual minute of the day a person was born. All-day calendar events; maybe holidays, you might think of. So for example, to take a Western example, Christmas is on the 25th regardless of whether you're in Beijing or Toronto or wherever. So as I said, don't store dates in datetimes. You will have problems. Be very leery of converting back and forth. You almost never want to do that. The one case I can think of offhand is if you have a calendar that you've written, and somebody's editing an event and it goes from being, say, an all-day event to a timed event — then maybe. So let's see, where are we? So this is fine. Where did that come from? I don't know. So you want to use Date.current, because I ran these two lines in the middle of the day, seconds from each other, and that's what I got. What happened? Why did it behave that way? Anyone? Sorry? Yeah, basically. So I'm going to use Date.current. The other one is basically telling me that in London it's the 24th. But I see Date.today all over the place in code. So these are really the only two date methods you should be using. And you absolutely have to avoid Time.now, Time.parse, Time.strptime without the in_time_zone at the end, or Date.today. Something that can help you with this — who uses RuboCop? Okay. How many of you put that into your actual process formally? A few. Good. Cool. I gave a lightning talk a couple of years ago on this and depressingly few people did that. So there are services like Hound — and probably others, like Codacy, that I'm not thinking of or I'm not familiar with.
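Pulling the time-and-date do's and don'ts together in one place before getting back to those linters. The current_user, its time_zone column, and @order below are assumptions, not anything from the talk's slides:

```ruby
Time.current                          # do   (Time.now  -- don't)
Time.zone.parse("2017-04-25 09:30")   # do   (Time.parse(...) -- don't)
Time.strptime("2017-04-25 09:30", "%Y-%m-%d %H:%M").in_time_zone  # ok, with in_time_zone
Date.current                          # do   (Date.today -- don't)
2.hours.from_now                      # do -- time-zone aware
Time.current.utc.iso8601              # when handing a timestamp to an API

# Showing times in a user's own zone, assuming users have a time_zone column:
Time.use_zone(current_user.time_zone) do
  @local_due_at = @order.due_at.in_time_zone   # uses the zone from the block
end
```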
Back to those linting services: you can make the style check a blocker, so that you can't merge a PR unless it passes. And amongst the many benefits of that is that RuboCop will actually catch some of these time-and-date errors for you. Any more comments, questions about dates and times? Let's move on to human names. So many of you may have read — there were a couple of related blog posts. One was called "Falsehoods programmers believe about names." I believe it was actually inspired by "Falsehoods programmers believe about time." And to take a few examples from this — none of these statements are true. People have exactly one canonical full name. Nope. People have exactly one full name which they go by. No. People have, at this point in time, exactly one canonical full name. People have, at this point in time, one full name which they go by. No. People's names do not change. That's not true. People's names change, but only at a certain enumerated set of events. No. People's names are assigned at birth. Not true. People's names are written in ASCII. Absolutely not true. I'm guessing in this room there's a bunch of people whose names are not actually written in ASCII. Although I'm guessing very few written just in emoji yet, but that'll come. People's names are written in a single character set. That's not true. People's names can all be mapped to Unicode code points. That's not true. Two different systems containing data about the same person will use the same name for that person. That's hopefully pretty obviously not true either. This is terrible. Now I'm feeling a bit crabby. Really the only thing you can do here with names is validate as little as possible. Just don't bother. Why are you trying? Right? Yes, their cardholder name probably has to match when you submit the credit card, but that's their problem. If you can avoid first name, last name, consider doing so. Just use full name. I have no idea what's going on. Maybe use given name, family name to be a little bit less English-specific. Also, store things in Unicode. Remember, you can't guarantee real names are used. Don't assume, just because you might have a US-based and US-centric business, that your users will be primarily English-speaking or even have ASCII names. These things are true in the US as well as overseas. Physical addresses. How do you model physical addresses? Right? Yeah. That's good. Well, there are a lot more variations on this than you might expect, even just within the United States Postal Service. Even in the US, remember there are rural routes that look like this. There are military addresses that look like that. That doesn't quite fit the city-state paradigm. Remember the US Postal Service serves Puerto Rico, which has a very different address structure. You can actually have a surprising number of lines in a valid address. I saw one example that was supposedly valid that was 12 lines long. It was an international one, but still. Basically, don't do this. Until I moved recently, my address had a slash in it. These are actually pretty common in California; they have one-halves in street numbers. It doesn't validate with Southwest and it doesn't validate with quite a few legacy systems that you'll see out there. I had to have a bunch of banks and things send stuff to "apartment one half." You can standardize these addresses via the US Postal Service. They'll convert them for you and give you some of the right abbreviations. Remember that special characters are still allowed even after that.
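Backing up to names for a second — "validate as little as possible" might look like this sketch, where the column name is illustrative and presence is the only hard rule:

```ruby
class AddFullNameToUsers < ActiveRecord::Migration[5.0]
  def change
    add_column :users, :full_name, :string   # UTF-8 text, no arbitrary length or format rules
  end
end

class User < ApplicationRecord
  validates :full_name, presence: true
  # No "must be two words", no ASCII-only regex, no "letters only" --
  # every one of those rejects somebody's real name.
end
```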
And back to addresses: for example, cities can have apostrophes, and addresses can have slashes or dashes and so forth. Things to know about US postal codes: don't use just ZIP codes. In general, try to use "postal code," which is the international version of the term. Don't just make the field five characters long — or ten characters long if you wanted to do ZIP+4 — because, again, you're excluding the ability to store addresses from other countries. Including Canada, which is close enough that you might actually want to be able to ship to it. Also, as a bit of trivia, remember that you can't even use ZIP codes to map to states. There is a database you can buy, or possibly download for free, that will attempt to give you city information, but ZIP codes not only can cross city boundaries, they can cross state boundaries, and here are the ones that currently cross state boundaries. That's all because ZIP codes map to postal routes, not to geography. It just so happens that most postal routes are geographically constrained. Let's say you want to validate addresses. Again, my first instinct is to say, why are you doing this? Okay, great. The US Postal Service has a database of them. However, these are not always the same as physical addresses. There are entire towns and communities that have no physical addresses in the US Postal Service database. For example, I'm in San Diego; one of the very wealthy communities in the center of San Diego County — it's actually its own town — is Rancho Santa Fe. The Postal Service delivers everything to their post office and that's it, because they didn't want ugly postal trucks driving around going to people's houses. Yet UPS and FedEx will deliver to people's houses there. So if you're shipping something, maybe you should let them enter their home address even though it's not going to validate. Oh, geez. Any comments, questions about fun stuff with addresses here? Anyone? Bueller? All right, money. Yay. We all need to get paid, right? How do you model money in your Rails apps, database schemas? That's a given. No, correct. Not as a float. I've seen that, though. Any other possibilities? That's a good one. And decimal. So I heard those two. Those are sort of the other two approaches that I'm familiar with. So you may use decimal values. That's what a migration doing that would look like. And there are some issues with that — not that it's invalid. Here we have a line of Ruby code that will render your product for your API. We'll send this out, and because it's a decimal, it'll look like that when rendered. And now in JavaScript, we do this. What's wrong with that? I'm sorry? Well, maybe. Anything else? Correct. Ding, ding, ding. He said floating point. The current product price in JavaScript is now naively a float. That's how the JSON works. So you can — well, I've had that bug come up three different times. You can get around it if you always remember to use a decimal library on the JavaScript side; there's no problem with that. But inevitably someone will forget to cast to a new decimal object and the bug will be introduced. Also, you'll get very strange rounding when, for example, you're multiplying by, say, a tax rate or something like that that maybe has three significant digits. And then suddenly you'll have IEEE rounding issues in your money. Not good. So I prefer to just store cents. Use integers everywhere. You won't have rounding errors.
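A sketch of the integer-cents approach. The column and model names are illustrative, but keeping "in_cents" in the name makes a forgotten conversion obvious on screen:

```ruby
class AddPriceToProducts < ActiveRecord::Migration[5.0]
  def change
    add_column :products, :price_in_cents, :integer, null: false, default: 0
  end
end

class Product < ApplicationRecord
  validates :price_in_cents,
            numericality: { only_integer: true, greater_than_or_equal_to: 0 }

  # Convert at the last minute, for display only.
  def price_in_dollars
    price_in_cents / 100.0
  end
end

Product.new(price_in_cents: 1999).price_in_dollars  # => 19.99
```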
I recommend you keep "in cents" as part of the name everywhere, and then only convert at the last minute, for display purposes, to dollars or whatever your currency is. Do be aware some currencies have mills instead of cents. So if that's important, just remember you're multiplying by 1,000 instead. And in this case, it's usually obvious when you've forgotten to convert, because it's right there on the display — your totals will be wrong. Easier to test. Email addresses. Those are easy, right? Yeah. So this is now a valid email address. Just keep that in mind. This is now a top-level domain. There are still places out there that try to validate .com, .net, .edu. You can validate that there's an at sign — there has to be an at sign in an email address, and there has to be a dot somewhere in the domain name. That's it. Otherwise you can do the whole verification-email dance, right? Everybody knows this. But again, people still try to over-validate, and it's kind of like, again, why are you doing this? I'll also just mention as a sidebar, for people who aren't aware of this, most email systems, particularly Gmail, will allow you to add a plus and a tag after your username — it'll still get delivered to you. But now you can have an infinite number of email addresses without creating new users. This is great for testing, or for creating a thousand free trials, or whatever you need to do. So, internationalization. I'm not going to talk much about internationalization because it's a huge topic. There are literally entire conferences on it. I do have a suggestion, which is that — particularly if you have a greenfield application, you've just done rails new — start putting your hard-coded strings in your config locales from the beginning. Even if you have no particular plans to go international or support other languages, a cool result of doing that is that, one, changing copy is easier. You don't have to search all over the place; it's all in one spot. And furthermore, you can even turn over the keys to a product person or a non-dev, and they can make copy changes themselves directly in the code instead of handing it off to you, and then you just review their change. Also, if you add a locale to your user model and always use it, then again, you won't have to backfill this stuff later. You can just start, if you're in the US, with en-US or whatever makes sense for you. And then it's there. Payments and credit cards: another huge topic with a lot to talk about. Read up on PCI compliance. Again, there's an entire industry around PCI compliance. I think what most people know about it, and what you need to take home, is: never store credit card information — and don't even send it from the client browser to yourself. Use a service that will let you send it directly from the client browser and then give you a webhook back. That way you're not even relying on the browser that somebody's using on an airplane. They're on a Chromebook on an airplane — just imagine that scenario — and now you're relying on them getting the token back and forwarding it to you. No, just use webhooks here. And be sure to consider what happens if the call to your provider, Stripe or whatever, times out. That can happen, and depending on your architecture, if you don't use webhooks, you have to actually escalate timeouts to a human being to go and look and see what really happened: did the call go through or did it fail? Recurring calendar events. That's easy, right?
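Before recurring events — that bare-minimum email check ("an at sign and a dot in the domain") might look like this sketch; anything stricter belongs in the confirmation-email dance:

```ruby
class User < ApplicationRecord
  validates :email,
            presence: true,
            format: { with: /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/ }  # something@domain.tld, nothing more
end

User.new(email: "dev+free-trial-42@example.com").valid?  # plus-addressing passes, as it should
```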
So for recurring events, you just have, like, a day of the week, and you say when it recurs or something like that, and you're fine, right? Now, read this RFC and consider that most recurring events have no end date. So how are you going to model that? There's an infinite number of them. The rules can get pretty complex. This is a fairly simple one, like "every month on the second-to-last Thursday." And individual instances of a recurring event can be edited or moved or canceled separately from the actual recurring event. Yeah, now I need this particular Bloody Mary. That's right. Check out The Garage in Seattle. I'd like to just point out that this particular one has, as a garnish, a second Bloody Mary for you to get through while you're working your way down to the main Bloody Mary. And in case you think onion rings, chicken wings, a submarine sandwich, a 12-inch pizza, french fries, and a lime are not enough — here's the back view. There's a cheeseburger, an onion, a lime, a lemon, and two grilled cheese sandwiches. But you might need this if you start working on recurring events. I highly recommend it. So I talked a little fast and ran a little early. In conclusion, the main takeaway is: don't over-validate. People just have this need to check the values of these things when you can't, and I don't understand it. Even US-only products will have global-ish problems. Be culturally aware. Be aware that your experience isn't universal and probably isn't typical. And just don't make assumptions. And there are some references, if you get my slides later, for the blog posts I was talking about. Thank you. I have a request for that slide. Thank you.
Many seemingly simple "real-world" things end up being much more complicated than anticipated, especially if it's a developer's first time dealing with that particular thing. Classic examples include money and currency, time, addresses, human names, and so on. We will survey a number of these common areas and the state of best practices, or lack thereof, for handling them in Rails.
10.5446/31230 (DOI)
I'm probably going to make the AB guy freak out with how loud my voice is, but... So my name is Michael, as she said. My studies largely centered around music before I joined the Rails community. My undergraduate is in classical percussion and vocal performance. And while I was an undergrad, I fell in love with jazz. And for me, the appeal of jazz was it was improvisation, right? So we get to spontaneously compose music on the fly in the moment together. And they would like that immediacy of the connection to music and the moment for me is what was attractive. So I wanted to illustrate how some of the concepts of improvisation can be applied to the development team. But first, I want to show you some overwrought musician headshots. In being a musician, you have to advertise yourself. So of course, I did that. So I call this one the jazz messiah pose. You can see how it showcases my absence of a chin. This one we call it cool and continent or the squatty potty pose. I'm not sure how this sold me as a musician, but that was pretty funny. Then was the jazz frog pose, holding my mallets like a bouquet of flowers. And of course, the Fabio. I look more like I'm trying to sell jeans at a gap ad, but hey, that was getting me gigs, whatever. So ideally what we're talking about here is the people that I'm hoping are in the room. You guys are on a development team and you're looking for ways to be a leading force on your team. Maybe you're a senior developer who is trying to find ways of mentoring the more junior members of your team. You want to be able to coach the people on your team so they can ultimately become better developers. Or perhaps you're a team lead. You're looking to improve the culture on your team. How can you get your team to work better together to create more quantity and quality of committable code? So it comes down to what would an idyllic place to work look like? What would the ideal team be? You'd ideally have clear expectations, right? You'd know what you were doing on a given day. You would agree that what you're doing makes sense as far as the application or the business requirements concerns. You'd have to be like, I'm developing things that I know are going to make a difference. You have a grip on the technical and business aspects of the thing that you're trying to create. So it isn't just simply, here's a feature story that does some mysterious thing that the product owner wants. You have an idea of I know what this is supposed to do. It's supportive ultimately. You have leaders that should work with you, not against you. So they're trying to support you in your efforts as a member of the development team and the development team as a whole. Conflict is resolved together, right? So people are not dictating to you, like your code is wrong and you need to do blah, blah, blah, and your attitude needs improvement, young man, and why are you acting like such a baby because you don't know what you're doing? Like you want people to resolve the conflicts that you have within the team and perhaps among the other teams and organizations you work for, you resolve those things together. And the values that your company holds and the values that your team holds are complementary. So if as you, when you're working on a team, like you feel like the mission of the team is ultimately in alignment with the mission of the company. And ideally, your work day is fun, right? You want to enjoy coming to work and doing what you do. 
You're free of unnecessary distractions or meetings that are not related to anything that you're doing at all. And your work life and your personal life are complementary. So the values you have as a person line up with the values that you bring to work and the values that are incocated to you on your team are somehow related. So what would it look like if you worked on a team where you wanted to find out what removed for us all would look like on your GitHub repo, for example? So management. So you have little, if any, idea what you were supposed to do when you came into work. The user stories would be absent, perhaps, or you'd be thrown all around on any given day. When you ask for help or guidance from the management team, whether it's regarding your day-to-day work or working with other teams, you get meaningless platitudes or nothing at all. How many people work for, like, really, really big organizations? Anybody? So have you ever seen those, like, mission statements? Like, this is our mission. And it's like this vague business gobbledygook that, like, really has no application to what you do. So we want to sidestep all that. So leadership is also unsupportive. So it comes in two flavors. You're either a micromanaged, like somebody's over your shoulder trying to tell you what to do all day, or there's no management at all. Like, you wouldn't even know who to go to if you had a problem. Out of your team, management does nothing to foster communication among other teams. So it's all like an episode of Game of Thrones or something, right? So your team is like the Lannisters, and, like, they're fighting for turf with, like, this other team, and, like, it's all about using you as a means to undercut somebody else. And it's not fun. Working in a place like that is not fun. You either have nothing to do and are just bored all day, or there's too much to do, the deadline is too short, and these requirements have seemingly manifested out of thin air. And the only real interest leadership has in you is in your ability to produce. So you have no life work balance. You come to work, they squeeze you like a turnip for all the code they can get out of you and throw you on the train back to your house. So we don't want to work in a place like that. So what would be the ideal model for a development team? How could you have people that are actualized as individuals while contributing to a meaningful and productive team? Well, ultimately, that model could be the jazz band. So how does that work? So let's cover what a jazz band is in 60 seconds. You can think of a jazz band kind of like agile squads, right? You have the rhythm section that has their job, and you have the horn section that has their job. There's three components, really, to music in general. So you have melody, harmony, and rhythm. You can think of melody kind of like the core language, right? That is the meat and potatoes of what music is all about. It's like what you're using to communicate your message. Harmony is a lot like the framework, right? It's the thing that gives melody context. So kind of like Rails is to Ruby, right? You have a framework which allows you to communicate and interact with a user in this particular framework. Rhythm is kind of like DevOps, right? It's the thing that allows all of this to occur. So music is what we call a temporal art form. It happens over a period of time. So we have to consider time when we're being creative. 
The vast majority of jazz standards are using one of the three patterns I have there on the slide. So think of it kind of like convention over configuration. We have very specific templates that we use in jazz. Now, not all jazz music does this, but the majority of music that you're going to hear in a particular, say, oh, Stan Kenton or whatnot, some of the great big band composers very much use templates like this. So typically you have a melody that's stated. People get to improvise on the solo. The melody is stated again. Everybody claps. Drinks are served. People dance. Yada, yada, yada. Good times. So how can we apply this to our field, right? So company style guides. Now this goes not just beyond the style guides that we have as a community, but like how many of you have style guides within your company? Are there particular, so we got to show hands. We got some people that do that. So you have particular colors that you choose to use. There's particular, so there's a lot of different ways of getting work done, right? Maybe your team has picked one specific method, right? You use size over count or whatever you guys do. So I mean, I think we heard Justin mentioned it in the keynote today, right? That the idea of creating this level of consistency among our teams is going to ultimately positively impact our work. So if I don't have to think about what colors I have to use or what particular way I have to do my job, that thinking has been done for me, I can just focus on being creative and doing the things I want to do. So you can think of it like freedom to versus freedom from. So I'm free from the decision making process now and I'm free then to be creative in my own way. And the idea behind that is as a group, we've all agreed upon kind of like we're all playing the same song. We've all agreed upon how we're going to move forward as a team and we all get to be creative in our own way to do that. So I think of it a lot like as a sandbox. Everybody playing a sandbox as a kid? My parents were very specific about the sandbox. You can do whatever you want, within reason, in the sandbox. You can play all day but here's the box in which you get to play. So company style guys are a lot like that. They provide that framework for us to be as creative as we want within a specific scope. Now guidelines are not rules. There is a time and place to bend the guidelines. So like Charlie Parker said, you learn the changes and then you forget them. The idea being that guidelines are servants, not masters. We ultimately want to afford people the opportunity to be as creative as they can be and not stifle that creativity. So sometimes you've got to color outside the lines and that's okay if you have a good reason for it. Guidelines also serve as barriers to the unknown, right? So again, Justin had mentioned earlier in the keynote today, considering your future self. Like how can you be creative in a way where you're setting up the next developer or your future self, setting them up for success? So what do guidelines versus rules look like in the jazz world? This is what rules look like. You have very specific notes to play. You play them at a very specific time. This is exactly what it's going to sound like, which is very specific and rigid, right? So my attraction to the jazz world was I get to be creative and expressive right now in a very specific manner. So this is what guidelines look like in jazz. They provide you just the framework. Here are the chords that you are supposed to play within. 
You can play inside, you can play outside, but ultimately this is the guide we want you to follow. So it provides just enough structure for us to be creative without stifling our creativity. So what do guidelines versus rules look like in development? So I would say this is what a rules-based user story looks like. As a user, given I'm logged in, I want to see our company logo as a 90 by 45 PNG file in the West pane of the nav bar, with the nav bar color being this hex value, with Helvetica font as the default text, displaying the turn links in a highlight text, which I've also provided the color for you with a maximum load time of 566 milliseconds. Who wants to develop a feature like that? I don't. Definitely because it's, I mean, this is exaggeration of course, but it ties my hands as a developer. Because as far as I'm concerned, my job is to make the code serve the user. So if you look at the guideline-based one as a user, given I'm logged in, I want our company logo to display in the nav bar with highlighted links. So it's very explicit on the what I want to see, which allows us the freedom in the how. How are we going to accomplish this goal? Because I would presume as developers we have more knowledge base when it comes to coding than our product manager does. They might disagree. And that's okay. But our job is ultimately to be caretakers of the code. So it provides less clutter and confusion, right? The guidelines base. I might have to make some choices while I'm developing that would might be contrary to a rules-based user story. But if I have good reason, I'm ultimately still serving the purpose of the user story at the end. So say you buy into guidelines. Guidelines are great, Mike. I think this is an awesome idea. How do I implement this on my team? Like, what's the next step for us? So I would say we don't develop in a vacuum, just like we don't solo by ourselves. So the key is be aware of your surroundings. Code to me is like a pebble in application lake, right? You drop your pebble in and it's going to ripple through the whole application. So we need to be aware as we're moving forward, how is my code affecting the ecosystem as a whole? So a great way to work with that is if you do TDD or continuous integration. Where I work, we're pretty stringent about anything you commit has to have a test. Anybody else work like that? Okay, a lot of us do. And continuous integration, we use that too, affords us the opportunity to, I know I can submit my pull request and it gets peer reviewed. And when it goes into the code base, we have checks and balances that will make sure whatever I submit, whatever pebble I throw into the code base as it ripples through the application, it's clear on how I'm going to affect the rest of the ecosystem. Considering your future self, right? It can be very tempting to over develop things like if I'm thinking about the future, maybe I'm going to need this feature. And if I need that feature, I've got to set up this module just in case and I've got to write this adapter in the event something else comes down. And we can tend to over develop in a sense. So to quote Russell Olson from Design Patterns of Ruby, you ain't going to need it. So the idea is you develop what you need right now. Because the freedom to develop also means freedom of being responsible for the next person in line. And I think I heard Justin just say that very thing. While you're working, are you considering the person coming up behind you? 
Are you considering your future self and try to develop in that manner? Listening is also important. So just like when I'm taking a solo in a jazz band, I can hear the bass player, I can hear the keyboard player, I can hear the drummer, and I'm aware of what's going on happening around me. So collective freedom means that the team awareness is even more important. Do you know what your coworkers are doing while you're developing? Are you aware if anybody else is touching the files that you're touching while you're developing your feature? So I know in my experience, like I have come into conflicts, when I'm not aware of a feature somebody else is working on, we've both touched the routes file and now we have this like merge conflict we have to deal with, which ultimately slows everybody down. So being aware of who is working on what among your team will also help you to be faster as a team and be considerate of those around you. So how does soloing work? So if I'm being creative in my environment, what does that look like for me? So the soloist on their branch will say they would be driving the bus. They should feel as though the decisions I make will ultimately be supported by my team. Now that's not to say they're going to co-sign bad code, but you should ultimately feel like I'm being supported by those around me. So you should also adhere to a style guide. Again, this is where style guides can come very handy when it comes to your team. If I know ahead of time what the rules are, or get more like what the sandbox looks like for this feature, by the time my code goes up for peer review, I've already answered a lot of the more nominal questions that would come up had I not done that in the first place. So comments on code. This can be a very touchy subject if anybody's ever committed PRs to open source. This can get hurt sometimes. So how do we deal with that? I would say telling ain't selling. So we say that a lot in the education world. So as a teacher, when I'm trying to get you to do something different, I'm not going to tell you, I'm going to ask you the right question so you find the answer that I know is correct on your own. So how can we do that? So when you're peer reviewing each other's codes, do you say to your co-worker, no, that method's wrong. Don't use a try block, use dig. Or do you say, hey, did you know that the method dig is around and maybe of a better use here? What do you think? Now we all, if you guys have ever worked with navigating objects, dig is definitely a better solution than a slew of try blocks. But the idea is you want your co-worker to come to that conclusion on their own to foster the creativity. Because creativity in itself is a very vulnerable thing, right? So time. How can we all be considerate of the environment in which we work? So freedom implies vulnerability, right? Being creative is a vulnerable act. I'm taking this thing I made and I'm putting it out in front of all of you. So while we are working together in that manner, it is important for us to be aware that there is some level of emotional attachment. Well, I mean, I'll say keep it for myself. I am attached in some way to the code that I produce because in some way I feel it represents me. So if people attack my code, I can feel attacked sometimes. Now I know in my head my co-workers are not telling me I'm a bad person because I made some less than ideal choices in my pull review. But it can feel as though you're being attacked. 
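As a quick aside, the dig-versus-try example from that code review looks like this on a nested payload (the hash here is made up):

```ruby
payload = { "data" => { "user" => { "address" => { "city" => "San Diego" } } } }

# The pile of try calls the reviewer was questioning:
payload.try(:[], "data").try(:[], "user").try(:[], "address").try(:[], "city")
# => "San Diego", but noisy and easy to get wrong

# The same thing with dig -- returns nil safely when any key is missing:
payload.dig("data", "user", "address", "city")   # => "San Diego"
payload.dig("data", "account", "plan")           # => nil, no NoMethodError
```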
So when we're peer reviewing each other's code, it's important for us to be aware of that emotional attachment. And then, MINASWAN, of course: Matz is nice and so we are nice. Knowing your role in the ensemble, right? So if I'm the saxophone player, I'm not going to try to play a drum solo — not my role. So what section of the team are you on? We all have our skill sets, right? Some of us are really good in the back end. Some of us are really good in the front end. So where do you naturally gravitate? Just like a big band, there are particular skill sets we each have and parts of the stack where we gravitate. So you're encouraged to play to your strengths while you strengthen your weaknesses. How can the leaders on the team — your team lead or your manager — encourage you to level up on some of the skills where maybe you're not as strong as you'd like to be, while you still make contributions to the code base by capitalizing on what you're strong at? And then the contribution to your team has a lot to do with where you want to go as a developer, right? Do you have any idea where you want your career to go? What do you want to be doing next year? What are you going to be doing three years from now? Five years from now? What you do today, and what role you play on your team today, is ultimately going to be in service of those goals. Band leading. So one of my favorite band leaders is Duke Ellington. And the reason why is because when he wrote tunes, he didn't write for trumpet three. He wrote for Cootie Williams. So he wrote specifically for the people in his band. And that to me was a lot like servant leadership, right? I want you — I want your voice — to be a part of the product. So how can we do that in our field? We as leaders are very much stewards of the culture. You ultimately want to spotlight your devs' strengths and help shore up their weaknesses — their opportunities to grow. Potential and passion, in my experience, are far more important than skill. Skill can be taught. You can learn how to be a good back-end dev. You can't really learn to be a passionate person. So passion and potential are important. Encouraging your strong players to mentor your greener players — this is a big tradition in the jazz field. The idea that I've been playing in the Basie band for 20 years and I'm rubbing shoulders with a guy that just got here. How can we, the more senior members of our team, be encouraged to mentor those that are younger, and how can the junior devs on the team be open to that level of mentorship? That conversation has to happen, because not everybody is welcoming of unsolicited feedback. I'm sure we've all been there once or twice. So encouraging that kind of culture on the team largely starts with team leads or with managers. How can you create a culture of mentorship on your team? So inevitably, when we're working, things break, right? Creativity is a wonderful thing. The product of creativity isn't always great, and that's okay. So how can we deal with failure? Part of dealing with failure is accepting failure. Now I'm not saying that if you push a bad commit to prod and you break prod, we should be like, oh, you're learning, that's great. I'm not saying that. But the idea is creativity will yield failure sometimes, and that's okay. Can you learn from the failure experience? You push code to a sandbox, you break sandbox. Okay, that happens. What did you do? How can you fix it, and how can you learn from that experience?
That means I'm not pointing fingers at you for breaking something. You did something bad. You made a choice. It wasn't the right choice. So how can we choose better next time? I think of it like ugly babies. Anybody ever seen an ugly baby? I know I'm not the only one that's seen an ugly baby. If you don't want to admit it, that's fine. It's cool. I'm sure you never, unless you're a jerk, you didn't say to your parents, the parents of that baby, damn, that is an ugly baby. No, that's a hurtful thing to say, right? Now that kind of relates to code. When you create this thing, sometimes the thing devs create is kind of ugly. It might work, but it's ugly. You don't say that, dev, man, that is ugly code. For the same reason why you wouldn't tell somebody their baby's ugly. It's something that they created. They think it's beautiful. They made it. So how can you deal with ugly babies? It kind of goes back to what we were talking about before. What questions can you ask about that ugly code that perhaps might bring that dev to a better choice? Say, oh, I see you used a beginning rescue end here. Why did you do that? I'll let them explain it to you. Maybe the explanation is they didn't realize there was a better way of doing that. So then we're back to asking again, did you know there might be a better way? What does that sound like? So I'm guiding the dev to the right decision without dictating what that decision is, ideally maintaining that spirit of creativity in the moment. My uncle Peter was an actor. One of the most valuable lessons he taught me was, you are not your work. And this kind of relates to ugly babies. I am not my code. So what that means is when I submit code and it's bad code and people tell me it's bad code, that's not necessarily a reflection on me or my abilities. It's a comment on the choices I made in that moment. Just like when you're taking a solo and you play some notes that are not even related to the song you're playing. You might get some hairy eyeballs from the band and that's okay. But there are choices you made in the moment. And as long as we realize that, we can be critical of the choices and not of the developer. Because when you criticize the developer, that ultimately impedes the creative spirit of that person. So now you're getting less code and more timid code from that developer. Also, don't touch the keyboard. Any keyboard touches in here? No keyboard touches? Okay. Band leading do's and don'ts. So you buy into the concept of, yes, my team is like a band. I want everybody to be creative and feel the creative spirit and create really interesting and rich features. How can I do that? So it comes from managing from a position of trust. You need to trust that the people on your team have the desire to create the best features they can. People don't push bad code because they're like, you know what, today screw this guy. I'm pushing this bad code. We don't do that by and large. So one must trust that your team has the best interests of the code based on mine. When it comes across less than performant code, again, how can you ask questions about why the devs made the choices they made? Each one teach one. It's key. How can you as a leader encourage your senior developers or encourage your more seasoned members of the team to coach and encourage the less than junior members? That idea being there's kind of like an attitude of Kaizen on the team or continuous improvement. We are all about getting better and part of getting better is making mistakes. 
You're going to make mistakes, and that's okay, as long as you make them once and you learn from them. Things you don't do: creativity, like I said, is a very vulnerable process. People are attached to what they create. So naturally they will shut down and get defensive if we attack instead of question and coach. Don't let your stars outshine the ensemble. This is key too. We all have people that are very strong and can commit like the 10x programmer — we've all heard this legend of the guy who can push ten times the amount of a regular programmer. There are stars out there, and then there are people on the team who might habitually struggle. So make sure that you publicly praise and privately criticize — privately coach — and that kind of spreads the love around. Make sure everybody gets a chance to shine on the team in a way that they are capable of. Creativity can also be messy and unpredictable. So we don't want to kill creativity with the sledgehammer of quality. We want to encourage people to be creative and make interesting code choices and features, but we don't want to squash that while we inevitably have to kind of sand down the edges when it's required. So in summary, what can you take with you today? Trust. Trust is a fundamental part of maintaining a creative atmosphere. I must trust, as a team member, that my team lead has my best interests in mind. I must be able to trust that my team members, my team leads, and my senior developers are going to coach and develop me when I'm struggling. As a team lead, I have to trust that my team is going to make the best choices they can with the information they have available to them. And you should explicitly create that as culture on your team. Stewardship. So this talk is based on a book by a guy named Max De Pree called Leadership Jazz — I'll put a link up somewhere. He was the CEO of a furniture company for 30 years. And he talked about steward leadership: as a leader, as somebody in charge, the group comes first. It can be very easy for us as leaders to think about what's good for me. How can I run this team in a way that's good for me, or feels good to me, or provide feedback that I would want if I were receiving feedback? The concept is like the platinum rule. So I had a very brief career at Macy's. One of the things about being a musician is you need a day gig — you're hustling all the time. So I sold suits for, like, two days. But through training, they talked about this concept that I'll never forget: the platinum rule. We've all heard of the golden rule, right? Do unto others as you would have them do unto you. The platinum rule is: do unto others as they would like to have done unto them. So what does that mean? As a leader, there are particular ways people want to be coached. Do you know what those are? Some people want to be coached as if they're playing on a football team. That's very much my style — I was in sports. Some people need a more delicate approach, a more consultative approach. And that's okay. As leaders, we need to be aware of what those are. And then guidelines — style guides. I very much encourage this. Does your team have a style guide? Does your team know about the style guide? Do you regularly return to that style guide as a team to refresh yourselves or to make different choices as far as the style's concerned? Do you have orientation for new developers? So that comes in two flavors.
I know when I started my new job, they just gave me a laptop and said, cool, here's the app you're working on. Here's the list of features. Go. I was like, okay. And I just had to flounder until I figured it out, which can be discouraging. So do you have an orientation process? Do you have a way of, this is how our team does stand up. This is how our team works on features together. Can you acclimate people on your team in that way? And this is all stuff that can be wrapped up through Kaizen. So thank you very much for attending. Very much appreciate your time. Enjoy the rest of your conference. What instrument did I play? So my undergrad experience was classical percussion and voice. So I was an opera singer on one side and I did all the orchestral stuff. When I went to graduate school, I studied the vibraphone, which is the flower of mallets in my hideous head shots was all about. So the question is, if you're in a band setting, a lot of times the lead singer gets all the credit and the backup band can feel some resentment. Is that right? So it kind of goes back to don't let your stars outshine your team. Same way you don't let your singer outshine the band. So I know when I've led bands that have had lead singers that get all the credit, a lot of it is, can you break off like a chorus to let somebody solo? Can you drum solos do a lot? It's like 30 seconds and that drummer is going to feel great for the rest of the night. So how does that apply to the team? You're going to have people that have more of a supporting role on your team and people that are the seasoned rock stars. How can you offer the younger people an opportunity to shine? And then when they do their work, shout them out. That's like praise is free. It's free and it's one of the most valuable things you have as a leader. Hey, so and so just did this feature and it's great. Everybody check it out. And then platform and stand up. That I'm telling you, that has significant value. The question is how do jazz musicians handle conflicts? Conflicts in what sense? Like personality or like while you're in the middle of making music? Okay. So personalities, right? In my experience, particular types of people, particular personalities like compliment or clash. So one of my heroes is a guy named Gary Burton. He's a jazz vibrato player. He told me once in a lesson, he's a very famous jazz keyboard player, Kenny Barron. They just clash. The manner of playing just runs into each other. So a lot of times is can you, can you, so for us, can you segregate the work in a way where maybe conflicting personalities or development styles don't bump into each other? So that's before the music starts. Today you're in the middle of playing and there's conflict. I want to play outside. The keyboard player doesn't want to support that. It kind of comes back to roles. So my attitude whenever I let a jazz man was, the soloist drives the bus. If they want to play out, we play out. If they want to play inside, we play inside. So how does it relate to us? If it's my branch and my feature, I'm driving the bus. Now we're different than music where it's very like ones and zeros. There's like better ways and not great ways. But it comes back to how do you, how do you bring that soloist back into the band? It comes down, it comes back down to comments. Can you ask the right question to get them to make a choice that kind of gets back to where they need to be? It's kind of critical with junior devs, especially when they find a new method, right? I'll do this. 
Like, we, where I work, we deal with a lot of APIs. So I found this method dig. Dig is great. I want to use dig everywhere. Well, maybe dig isn't the right choice for that thing at that time. So my team kind of had to like get me off the dig train for a second. And a lot of that was asking the right questions. You chose to use this method here. Why did you do that? And normally when I find myself explaining why, I'll hear, oh, this isn't what I intended. Yeah, you're right. I should do something else. So the question is, have I worked with either as a musician or a developer with somebody who doesn't seem to care about the dynamic of the team? Is that, is that fair? The outcome. So they want to do whatever they want to do and whatever comes out of the other side is like whatever. Okay. Um, musically, you, it kind of comes down to choice. Like I don't necessarily have to choose to work with that person, right? That's beforehand. Say we're in the middle of it. One of the most powerful things you can do in music to communicate, stop playing. So somebody's taking a solo, I'm trying to interact and they're just not listening to what I'm doing. I'll just stop and let the absence speak for itself. Because if you find yourself playing all by yourself, it's just you and the drummer, you kind of have to ask yourself, well, what am I doing where everybody wants to stop? So how does that apply to us? If you're working with a developer that just doesn't seem to care about what anybody else thinks, wants to just push their code and bypasses peer reviews, is there a way where you can like quarantine that person? If you have that level of power or on the development team, maybe you don't comment on their code. So they're going to find themselves like I feel very isolated from the group. Now, and that's just, that's just what happens. If you start choosing yourself over the team, you're going to end up by yourself. And I don't know about you guys, but one of the attractions to this field for me was creating as a group. I'm a member of a team, as a team, we're building this thing and this thing is great. So if you find yourself no longer a part of the herd, a lot of people are going to ask themselves why? And maybe that can be addressed. I think we are about out of time. So thank you again for coming. We'll see you on Confreaks and I would love to talk to anybody afterwards if you've got any questions. Thanks again for coming.
The ideal workplace, with motivated employees, supportive managers and a clear vision in the "C-suite", is where we'd all like to work, isn't it? The question then, is, how do we create it? How do managers walk the fine line of "micromanaging" and "anarchy"? How can we, as employees, maximize our contribution to our company and love what we do at the same time? The secret is in the big band. Inspired by Max Dupree's Leadership Jazz, this talk will show you how to apply the principles of improvisation to your company/team and make your workplace more efficient, effective and fun!
10.5446/31232 (DOI)
All right, everyone. I think we're going to get started and let some people trickle in. Welcome, and thank you for coming. My talk today is called Beyond Validates Presence Of. I'm going to be talking about how you can ensure the validity of your data in a distributed system where you need to support a variety of different views of your data that are all, in theory, valid for a period of time. My name is Amy Unger. I started programming as a librarian, and library records are these arcane, complex, painful records. But the good thing about them is that they don't often change. If a book changes its title, it's because it's reissued, and so a new record comes in. We don't deal with that much change and alteration within the book data. Users, obviously, are a different matter. When I was first developing Rails applications, I found Active Record validations amazing. Every time I would implement a new model or start work on a new application, I would read through the Rails guide for Active Record validations and find every single one that I could add. It was a beautiful thing, because I thought I could make sure that my data was always going to be valid. Well, fast forward through a good number of consulting projects, some work at Getty Images, and now I work at Heroku. Unfortunately, this story is not quite as simple, so I wanted to share today some lessons I've learned over the years. First, speaking to my younger self: why would I let my data go wrong? That's how the me of five years ago would react to this talk. What did you do to your data, and why? Next is prevention: given that you've accepted that your data may look different at different times, how can you prevent your data from going into a bad state if you don't have only one good state? And then finally, detection: if your data is going to go wrong, you'd better know when that's happening. If you were here for Betsy Haibel's talk just before me, a lot of this is going to sound familiar, just a little bit more focused on the distributed side of things. So first, let's talk about causes, and how your data can go wrong despite your best intentions. I'd like to start by reframing that and asking: why would you expect your data to be correct? Five-years-ago me would say, but look, I have all these tools for data correctness. I have database constraints, I have ORM code, I have Active Record validations. They're going to be in my corner. They're going to keep me safe. So let's take a quick look at what those would be. With database constraints and indexes, we're looking at ensuring that something is not null. For instance, here I'm trying to say that any product we sell to you should probably have a billing record. For the health of our business, it's kind of important that we bill for the things we sell. So this statement keeps us safe in the sense that before we can actually save a record of something we have sold to you, we also need to build up a billing record. The corollary there would be an Active Record validation, the inspiration for the title of this talk, where the product validates the presence of its billing record. Although, after I submitted this talk, I realized that syntax is a little bit dated now. So here is the version you may recognize; clearly I need to review some things.
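A sketch of the two guards just described, the database constraint and its Active Record corollary, might look like this. The table and column names are assumptions for illustration rather than the speaker's actual schema, and both the older validates_presence_of helper from the talk title and the current validates syntax are shown.

class AddBillingRecordToProducts < ActiveRecord::Migration[5.0]
  def change
    # Database-level guarantee: a product row cannot be saved without a billing record.
    add_reference :products, :billing_record, null: false, foreign_key: true
  end
end

class Product < ApplicationRecord
  belongs_to :billing_record

  # Older helper:  validates_presence_of :billing_record
  # Current syntax:
  validates :billing_record, presence: true
end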
So why would this go wrong? Well, first, we get a product requirement: gosh, it's taking too long for us to sell things. There's so much work going on between the user clicking a button and getting what they want, and we really want to speed up that time. So you think: I've already extracted my email mailers, I'm doing everything I can in the background. And billing only needs to be right for us once a month, at the midnight hour at the beginning of the month. Until then, we have a little bit of leeway. So why don't we move that into a background job? Well, that leads to a kind of sad moment where we have to comment out this validates-presence-of-billing-record, because we want our product controller to have this particular create method. What we're doing in that create method is taking in whatever the user gave us, saying, all right, we now have a product that we have sold, enqueuing a job to create the corresponding billing record, and then immediately responding with that product. And that's awesome for them: they can start immediately using their Redis, their Postgres, their app, whatever they want. It just leaves us with the fact that, within a few milliseconds, we need to get that billing record created. So it sounds great.
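The arrangement just described might look roughly like this, assuming Active Job; the class and attribute names are illustrative rather than the speaker's actual code.

class ProductsController < ApplicationController
  def create
    product = Product.create!(product_params)
    # Respond immediately; the billing record is created asynchronously.
    BillingCreatorJob.perform_later(product.id)
    render json: product, status: :created
  end
end

class BillingCreatorJob < ApplicationJob
  queue_as :billing

  def perform(product_id)
    product = Product.find(product_id)
    # For now this is a local write; later in the talk it becomes a call to a
    # separate billing service, which can fail independently of the job.
    billing = BillingRecord.create!(plan: product.plan)
    product.update!(billing_record: billing)
  end
end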
Unfortunately, what happens if that billing creator job dies? You're in a tough spot: you've sold a product that is not, in fact, being billed for. Then we have another fun complication. Your engineering team thinks: gosh, it kind of sucks that we're doing all of our billing and invoicing in a really legacy Rails app; that does not seem like the right engineering decision. So let's pull all of our billing out and move it into something that can be scaled at a far better pace for that kind of application. Well, now our billing creator job gets a little more complicated, because when it runs it finds the product, builds up the data, and then calls out to this new billing service. And now we have two modes of failure: your job could just fail, or most of the job could succeed but your billing service could fail horribly, which leads to our fun discussion of all the ways your network can fail you. Some of these are easier than others. You can't connect? Okay, you can probably try again. No harm, no foul, just give it a shot. What happens if it succeeds partially on the downstream service? It doesn't fully complete, you get back an error, and you think: I'll retry. Well, is it going to immediately error, because it's in a terrible state and refuses to accept anything? Or is it going to say, that looks weird, maybe I'll create a new one? You have another option: the service completes the work, but the network cuts out in such a way that it thinks it's done and you don't see that. Do you retry and risk billing for something twice? And this final one is a corollary to that: do you know which of your downstream systems will roll back their work if they see a client-side timeout error? So with all of these aspects that are critical to designing highly performant, distributed systems, I think we have to move to accepting that your data won't always be correct, or at least that it will be correct in a variety of different ways. It is perfectly fine now for a product to not have a billing record, because all that means is that the billing record is in the process of being created. What we want to be able to express is that eventually we expect things to coalesce to one, maybe multiple, valid states that we expect them to spend the majority of their life in. Now, of course, that's not always true. People create products, buy things, and then decide, whoops, that was exactly the wrong thing to buy right now, and immediately cancel. So you may not even get to see the thing finally coalesce into something you might consider valid. But what if you don't always know what correct is? So let's move to prevention, where it's more about handling those errors. We've stopped caring so much about making sure that everything is in a perfect state; let's just handle the errors we're seeing in a sophisticated way. We have a number of strategies. The first I'd like to talk about is retry. I mentioned this earlier: if you can't connect, you might as well just try again. But this brings a couple of issues into question. First, you want to be aware of whether the downstream service supports idempotent actions. If it does, you're good: keep on retrying, even if it already succeeded, it's fine. The next strategy is that if you're doing mostly background jobs, you can implement some sort of sophisticated locking system. I haven't done that; it seems like more work than I would want to do, but if you are only doing jobs within one system, that might be the right solution. If you don't trust your downstream service to be idempotent, you get to choose between retrying your creates or your deletes. Please do not retry both, or have far more confidence than I do that your queuing system will always deliver things in order. The reason you might think you don't have to choose is that, sure, if you put them on a queue, you get first in, first out really well. But most of the time, with a downstream service, you're going to want to retry multiple times, right? Why retry just once? What if the service has a 15-minute blip? Should that require manual intervention? Probably not. You probably want to say: retry this thing five or ten times, and if it fails on the tenth time, that's fine. Well, what happens if your delete call takes far longer to fail than your create? That means that by the second time around, the delete that is being retried is higher up in the queue than your create, and by the eleventh time, who knows which one is going to come off first. If you end up in the unlucky position that your delete gets pulled off before your create, you're left with someone who just wanted to quickly buy something, realized they did something wrong, deleted it, and yet is being billed for it ad infinitum, and nobody is happy. A final thing to mention with retries: if you are going to do many, many retries, do consider implementing exponential backoff and circuit breakers. Don't make things worse for your downstream service, if it's already struggling, by increasing its load. Another strategy you have is rollback, which is a great option if only your code has seen the results of this action. If your code base and your local database are the only ones that know that this user wants this product, absolutely roll back. But what about external systems?
And the fun thing here is you need to start considering your job queue as an external system, because once you say, hey, go create this billing record, even if the end result is that that billing record is going to be in the same local database, you can't delete the product, you can't just have that record magically disappear. So roll forward would say you have a number of options, right? You can enqueue a deletion job right after your creation job. You can, once you create something, you can delete it. You can also have cleanup scripts that run, that detect things that are in a corrupted state and clean them up, hopefully very quickly. But rolling forward is all about accepting that something has gone wrong, but that something existed for just a short period of time. And we can't make that go away because something out there knows about it. All right, so you say, okay, this kind of makes sense, maybe. What does this look like for my code? Well, first let's talk about transactions. So transactions will allow you to create views of your database that are only local to you. So let's say I want to create an app, create a Postgres, create a Redis, I don't know, register like five users for that app, and also call like two downstream services with all those. If you wrap that all in a transaction and any exception is thrown and bubbles up out of that transaction, all those records go away. Now, you're downstream services, you still need to worry about those. But it's a nice tool for making local things disappear. With that in mind, there are a couple things you might want to consider. First is understanding what strategy you're using, usually this will be the ORM default. So if you were in Betsy's talk earlier, you saw active record dot base dot transaction do. That chooses by default one of four transaction strategies. If you're in Postgres, if you read Postgres' documentation, you'll see they choose a sophisticated default. But please understand which one you are using because it has implications for what things outside of the transaction can see in and what they can't. The next thing I'd like to suggest you consider is adding your job queue to your database. Now, if this causes you absolute horror, because of the load that you foresee putting on your database, you are correct. And this is a little bit like me, you know, if I were from LinkedIn in the days when they had rumors had it, like 20 people working on Kafka and then they told people everybody should use Kafka, Heroku has a decent number of very intelligent people working on Postgres. That being said, if this doesn't totally terrify you, you should definitely absolutely do it. Because what it means is you do not have to worry about pulling deletes off of the queue. They just disappear. So instead of having that crazy race condition of a delete possibly outrunning a create, it just never happened. You can write code as if you just were able to go ahead and think. But then if you have an error, it's as if that job never got enqueued. Next suggestion is to add timestamps. And I would suggest adding one timestamp to an object for every critical service call. So for a product that you sell, you might want to consider adding billing start time and billing end time. And what you do is you set that field in the same transaction as you called the downstream service. If the downstream service fails, it will raise an error that you choose not to catch, which will exit the transaction and result in that timestamp not being set. 
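A sketch of the two suggestions above, a database-backed queue participating in the transaction and a per-service timestamp set in the same transaction as the downstream call, might look roughly like this. The model, job, and service names are illustrative, and the enqueue-rolls-back-too behavior assumes a database-backed queue such as Delayed Job or Que.

ActiveRecord::Base.transaction do
  product = Product.create!(product_params)

  # With a database-backed queue, this enqueue is just another row in the same
  # database, so it disappears if the transaction rolls back.
  BillingCreatorJob.perform_later(product.id)

  # Call the downstream service and set the timestamp in the same transaction.
  # If the call raises, and we deliberately don't rescue, the transaction
  # unwinds and billing_started_at stays unset: a signal the call never succeeded.
  BillingService.create_record(product_id: product.id)
  product.update!(billing_started_at: Time.current)
end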
Timestamps obviously allow you some fun debugging knowledge and they do help you with additional issues debugging across distributed services. But the nice thing here is if the timestamp is not set, you know the call never succeeded and you should be able to retry if you know that it is safe to do so. The next one I want to talk about is code organization. And this is one where I don't have any panacea and is really hard. But I want to advocate very strongly that you think about writing your failure code in the same place as you write your success code. And what I mean by this is if you have a downstream service, let's say you're calling Slack. In the next few slides I'm going to talk about creating a new employee. So let's say you're uploading or Slack so you're creating a new employee within your company Slack. The same place that you are writing that create call, please only a few lines away have the code to do the wind back so that no matter where your call, whether it's further down the line from Slack, wherever your employee creation fails, the code, the path of the code goes right back through that. So what it helps do is it helps your developers think about failure paths at the same time as they're doing successes. So what would this look like? So let's say we're going to create an employee and we have this beautiful app. This is a completely contrived example. So we're going to have a local database. We're going to register them in Slack. We have a HR API. We're going to upload a headshot to S3. We have another bunch of jobs, I don't know, maybe getting them all set up in GitHub. So what happens if, let's say, S3 is down? Lovely thing that I'm already standing up, right? So if S3 is down, and I wrote this slide the day before S3 went down, let's say S3 goes down, then your employee creator class has a pretty clear path for unwinding this all, right? You call the downstream HR API, you pull the user from Slack, and then you cancel the transaction that will have created the employee. And that's lovely. You can think through that, right? But this is kind of more like the code we write. And if this does not look like any code you've ever seen, congratulations. This is awesome. You should give a talk. You will get all the job applicants. So do you know what to do to unwind this mess if it fails right there? I don't. I have absolutely no idea. And sure, I can stare at this long enough and try to figure out what's going on, and I'd probably get close. But if I'm tired, if I haven't spent time with the Slack API since they updated it, I'm probably going to make a mistake. So something I'd like to suggest you consider is something called the Saga pattern, which allows you to create an orchestrator that essentially controls the path that things walk through and then keeps all of your rollback or rollforward code encapsulated in the same spot as the creation code. All right. So with that in mind, that obviously it's hard, and we're going to mess up, how do we detect when things have gone wrong? So the first thing I want to talk about is SQL with timestamps. And since we have added at a previous date, timestamps saying delete it at, create at, and billing started at, billing ended at, we actually have some degree of hope of trying to reconcile things across a distributed system. So we may never get to this. We're definitely never going to get to this. But with a bunch of different small SQL queries, we can get maybe close. So let's say we want to tackle one small aspect of this. 
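Before the SQL example that follows, here is a rough sketch of the saga-style orchestrator idea mentioned above: each step declares its compensating action right next to the step itself, so the unwind path lives beside the happy path. Every class and method name here is hypothetical.

class EmployeeCreator
  STEPS = [
    { run:  ->(ctx) { ctx[:employee] = Employee.create!(ctx[:params]) },
      undo: ->(ctx) { ctx[:employee]&.destroy } },
    { run:  ->(ctx) { ctx[:slack_id] = SlackClient.invite(ctx[:employee].email) },
      undo: ->(ctx) { SlackClient.remove(ctx[:slack_id]) if ctx[:slack_id] } },
    { run:  ->(ctx) { HrApi.register(ctx[:employee]) },
      undo: ->(ctx) { HrApi.deregister(ctx[:employee]) } }
  ].freeze

  def call(params)
    ctx = { params: params }
    completed = []
    STEPS.each do |step|
      step[:run].call(ctx)
      completed << step
    end
    ctx[:employee]
  rescue StandardError
    # Unwind whatever finished, in reverse order, then re-raise.
    completed.reverse_each { |step| step[:undo].call(ctx) }
    raise
  end
end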
Shockingly, you all do not want to continue paying for things that you no longer have on Heroku. If you delete an app, we probably shouldn't continue billing for it. So this query may look a little bit complicated, but what it does is it says, hey, for our billing records and the things we have sold, find all the billing records that are still active, that are attached to products that are not active, as in canceled, someone deleted them. But only those where the product was deleted 15 minutes ago. And what that does is it gives us 15 minutes for us to become eventually consistent into a state that we're pretty confident in. I say pretty not because we want to continue charging you for stuff, but because, let's say, the billing API goes down for longer than 15 minutes. This thing is going to start yelling at me. And that's a pain for me, but most of the time, I mean, 15 minutes is a pretty darn long time. We're likely going to be safe. So SQL with time stamps has a lot of benefits. Some of them are incredibly subjective. The first is absolutely subjective. I am far more confident of my ability to write business logic in really short SQL statements than I am about writing a very large auditing code base. That SQL statement to me was far more readable and something I can maintain confidence in that it will continue to run successfully than I am about writing the same thing in Ruby. That's probably going to be something that your team is going to be different on depending on where you work. The other nice thing about SQL with time stamps is that you can set them up to run automatically. Betsy was talking about Sidekick earlier. We have just an app that will run these. We also have drag and drop folders. So make this easy to write new ones. It shouldn't be hard for someone to think, wow, that record looks weird. Let me write a check to see if there are any others like it. So these drag and drop folders will take SQL and they'll make sure it runs. Alerting by default, if you have ways of making it really easy and consistent, for us that means wrapping our SQL in Ruby files that say, hey, alert me if there are zero of these or alert me if there are any of these, the more common. And then finally, documented remediation plans. As an engineer on call, I have really no interest in relearning our credit policy. So I mean, I'm happy to do it because it means that my mistake is cleared up. But let's not have to talk to our head of finances every time. He's not going to be happy. So some of the challenges here, you might suspect, are non-SQL stores. And I specifically say non-SQL because you could be shoving structured JSON files in S3. I don't know what you're doing. But yes, so no SQL, non-SQL, who knows what. And everything I've talked about so far has been built on the concept of the big, beautiful reporting database. And every large organization I have worked at has one of these. Like, you have so many distributed services and someone has just decided there will be a central one. I think it's probably a corollary of Conway's law somehow. But in any case, what happens if, let's say, one of these is red-us? For us, we usually try to just do a quick ETL script. And if we need to, get it into Postgres, there's also the fully functioning model of just flipping this on its head. If you don't want to use that big, beautiful reporting database and you are fully confident that you can write good auditing code, then you open the doors to so many other options. 
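As a concrete version of the fifteen-minute check described above, wrapped in Ruby the way the talk mentions: the table and column names, the alert! helper, and the Postgres interval syntax are all assumptions for illustration.

class OrphanedBillingCheck
  QUERY = <<~SQL
    SELECT billing_records.id
    FROM billing_records
    JOIN products ON products.billing_record_id = billing_records.id
    WHERE billing_records.ended_at IS NULL               -- still billing
      AND products.deleted_at IS NOT NULL                -- but the product is gone
      AND products.deleted_at < now() - interval '15 minutes'
  SQL

  def run
    ids = ActiveRecord::Base.connection.select_values(QUERY)
    alert!("#{ids.size} active billing records for deleted products") if ids.any?
  end
end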
So you can talk directly to Redis, with a direct Redis connection string, or hit some API that is backed by Redis. You can hit arbitrary APIs, and you can also hit all of your other distributed systems. For me, the concern is that writing an application that will talk to every single one of your distributed systems seems a lot more bug-prone than just running SQL off of one big, massive, giant database. But I've done it, so it really depends on the scenario. As I mentioned, some of the challenges are non-SQL data stores, where you can pull, transform, and cache. Those are usually the verbs we're using, but it's really just ETL. You can end up writing code against non-SQL stores, which may be the right choice. The other challenge we run into is systems that do not have timestamps, so you can't say: I expect this thing to be in flux for five minutes, but once it's been created for five minutes, absolutely start checking it. If you can't get timestamps added, then I would move to a strategy close to snapshotting: analyze the whole gosh darn thing and write records that say, at this time this thing was correctly configured, at this time it wasn't, but maybe next time it will be. And then we threw together some SQL to determine whether things are coalescing. You may want to, again, do this in code. That SQL was about 60 lines long and included a self-join on a table, and it's a little scary. The other option I want to talk about, in addition to SQL with timestamps, is using event streams. This may sound somewhat similar to log analysis, which it absolutely is, so if you're doing that, this will be very similar. So let's walk through the events of buying a thing on Heroku. Each time we hit one of these events, Heroku will emit an event to a central Kafka, and we can read all of these events from one consumer. For buying a product, we'll first see an event that says: hey, someone really wants a Redis. That's cool. We then move through events asserting: okay, are they authenticated? Is that product available? Are they allowed to install it? And this goes on; many, many events are emitted even for the smallest requests, until we get to the end, which looks roughly like: hey, this Redis cluster is up and available, billing has started, and a user response has been generated, either to send them a web hook to say it's available, or because they were waiting in line for us to do all this work. And you can start to see patterns. If the user is an average, authorized user, we can create the list of what events we should see and in what order. And we can use this to determine whether something was actually successfully created, and whether we should expect the data to be in the correct form at the end. So, some benefits of event streams. It's a single format: you're not having to negotiate, oh, that thing is backed by Redis, and that thing, why are we still on flat files? It is one place, and you can just register a new consumer to walk a stream, or walk many streams. It has the added benefit of essentially black-box testing your application. Again, this is similar to log analysis, where you're trying to determine whether your application is successful based on: if someone hits the search button, we should probably see some results returned, and we should see that kind of structure in the log.
And therefore we're going to validate that this A/B deployment can slowly be scaled up. This is very similar, just used for a different purpose. I do have concerns about this approach, and we're not using it explicitly for any business-critical auditing right now. But it's something we've discussed heavily, and it's the direction we want to go in as we refactor things. So I wanted to share some of the concerns I have with going down this road. What do you do if you emit the wrong events? Data on disk is something I have far more confidence in than whether we're continuing to emit the right event. I write typos. Anything that sounds similar, I'm probably going to swap in at some point. I've been known to exchange cash for cats; in my defense, there were cats on my lap. But you might have random failures like that. What if you continue emitting events even when you're not actually doing the work? People make mistakes. It's one thing to scale up an A/B test and say, hey, this canary deployment is great, we're going to go full out with it; it's one thing to rely on events and log analysis for that. It's another thing to trust the health of your business to the accuracy of your events. And then finally, and this gets back to: do you want to be writing code that validates code? What if the stream consumer code is wrong? What is your confidence level that your team is going to be able to write really good auditing code? So this is the end of my talk, but I wanted to leave you with a caveat about what I have been proposing, especially towards the end, which is that everything I've been talking about is a lot of engineering effort. Especially building the beautiful big reporting database if it's not there, or building an auditing system that will touch every single component of your distributed system. My time isn't cheap. The reason my company has chosen to invest in some of these is that there are certain things we just fundamentally cannot get wrong. We've talked a lot about billing because it's a pretty easy example; it's kind of visceral, us charging you for something that you should not be paying for. That's bad. But this also applies to security concerns, and for us those are absolutely business critical, and that's why we're willing to put in this effort. But if you're building something that's a little more lightweight and is not going to take down the business if you get it wrong, maybe consider a lighter-weight solution. In any case, I wanted to say that I hope I've had something that was relevant for everyone in the room, whether that's talking about why your data might go wrong, how you might prevent it, or detecting when mistakes inevitably happen. I wanted to say thank you. I really appreciate you all sitting through this talk. And I have about five minutes for questions. Nola's trying to clap. We can.
You've added background jobs. You have calls to external services that perform actions asynchronously. Your data is no longer always in one perfect state- it's in one of tens or hundreds of acceptable states. How can you confidently ensure that your data is valid without validations? In this talk, I’ll introduce some data consistency issues you may see in your app when you begin introducing background jobs and external services. You’ll learn some patterns for handling failure so your data never gets out of sync and we’ll talk about strategies to detect when something is wrong.
10.5446/31233 (DOI)
Welcome everybody and thanks for coming. Today I'm going to talk a little bit about Rails and how awesome it is. I think part of why it is so awesome and great is its community and the huge ecosystem around it. When I try to explain Rails to newcomers to the industry, or even to colleagues, I think of Rails as a city map, a huge, massive city map. It's big and consists of different areas: over there is the Active Record district, there is initializer town, and so on. In our day-to-day work it can feel intimidating sometimes. You're surrounded by all this functionality, you don't know what's happening, and you can even feel constrained. And while it's true that Rails has its certain ways and patterns for how things should be done, this does not mean that we cannot find or create our own little areas of freedom. Today I'm going to talk about some of the areas we explored at 8th Light in order to help us work with Rails in a more efficient way. At the beginning I want to talk a little bit about the status quo, then about breaking some things, followed by a quick wrap-up about general application architecture, and then some of the trade-offs. So, quick question to the room: who here knows Status Quo? A couple of hands? All right, cool. In contrast to Rails, Status Quo had a very simplistic approach: they only ever played three chords. And while I'm not going to spend today talking about the band Status Quo, I thought, coming here from London, I should at least bring a British reference. So we're done with that; we continue with the actual status quo, as in the current state of affairs. Two years ago, DHH mentioned at RailsConf that Rails, for him, means that he himself can still write whole web applications with it. And I think that's a really cool idea, because everything comes with Rails. Batteries included: you can start writing web applications without the need for anything else. I'm more of an ultra-light backpacking person myself. While I'm not prepared for the zombie apocalypse, I always have just enough gear with me for my current adventure. And this is reflected in the Pareto principle, mostly known as the 80-20 rule. For example, 80% of your users use only 20% of your product's functionality. Or, in Rails' case, 80% of Rails applications out there use only 20% of what Rails offers as features. The background of this talk has something to do with omakase Rails applications. Throughout several years, having done a couple of Rails projects, we felt the same pain over and over again: omakase Rails applications lead to slow test suites. This is a problem for us at 8th Light because we are used to quick TDD cycles. We want to be able to write a little code and run the test suite to get feedback immediately. This cycle of writing a little code and running the tests happens five or six times a minute, sometimes, if I'm typing sloppily, ten times. I want this fast, quick feedback loop. Unfortunately, the bigger your Rails code base grows, if you follow the regular omakase style, you lose out on these things. A previous application where we felt the pain, the last application we built in this way, will be our reference application throughout this talk.
We ended that project with around 4,800 tests, and the full test suite ran in about six minutes. Waiting six minutes until you know whether a refactoring worked or not is too long, in my opinion. I need my code base malleable in order to make a quick decision: yep, this refactoring worked, continue; this bug is fixed, or it's not fixed. I want to get this feedback really, really quickly. So after having seen that throughout a couple of projects, we went back to the drawing board, put on our thinking hats, and started to really dig into what the actual issue is. And this is me with my thinking hat on. One hotspot we identified was the Rails boot process, which looks roughly like this: at the beginning, the Rails framework is loaded, then it loads all the dependencies from the Gemfile, and then it kicks off its initialize! process, which loads all the code from all configured Rails engines and so on. During that process, something called eager loading (that's what the docs refer to it as) kicks in. Eager loading is responsible for requiring all the code that is inside your app directory, for example. So we started to modify our configuration a little bit. In a typical Rails application, in application.rb, you see a line like require "rails/all": require the whole world and continue. We changed that to only require the two railties, the two frameworks, we actually needed for this particular application, which were only Action Controller and Sprockets for asset management. It turned out, which I learned after we made this change, that there is already support for this kind of configuration in rails new. If you run rails new with the --help flag, you get a list of all the options you can provide, and it already allows you to skip a couple of frameworks: you see --skip-active-record, --skip-action-cable, and so on. If you don't need them, you can tell Rails not to generate the new project with these dependencies. I ran this command with --skip-test because I usually use RSpec for my testing. What rails new generates then is an application.rb that doesn't require rails/all anymore, only rails, which is the bare minimum, and then it tells you: pick the frameworks you want, pick and choose. It also commented out the test_unit railtie for me, because I was skipping that in the previous step. So that's all good and well. The next thing, after requiring the initial frameworks, is a line that looks like this: Bundler.require(*Rails.groups). In previous versions of Rails it looked a little different, requiring the default group and then the Rails env; it has been changed, I don't know in which version, but this tells Bundler to require all the gems from your Gemfile for your current environment. The problem is that this line adds linear load time to your boot process: the more gems you add, the more time will be spent when you load up your application. And this is unfortunately a hard fact and a hard truth we need to accept.
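The pared-down application.rb described above might look roughly like this; the application name is made up, and the exact list of railties depends on what a given app actually needs.

# config/application.rb
require_relative "boot"

require "rails"
require "action_controller/railtie"   # controllers and routing
require "sprockets/railtie"           # asset pipeline
# deliberately no active_record/railtie, action_mailer/railtie, etc.

# The linear-cost part: every gem in the Gemfile for the current environment
# gets required here.
Bundler.require(*Rails.groups)

module MovieOrganizer
  class Application < Rails::Application
  end
end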
So we continued. There are a couple more settings we started to tweak, and one of them is Active Support. It supports a bare flag that says: don't load everything from Active Support, only the dependencies that are actually needed to boot Rails. Then there are two more settings: cache classes, to keep a class cache (if a class has been loaded, keep it loaded), and then we specifically disabled dependency loading.
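Put together, those settings might appear in config/application.rb roughly like this; treat it as an illustrative sketch rather than a drop-in recipe.

module MovieOrganizer
  class Application < Rails::Application
    # Load only the parts of Active Support that Rails itself needs.
    config.active_support.bare = true

    # Keep classes cached once they have been loaded...
    config.cache_classes = true
    # ...and switch off Rails' constant autoloading machinery.
    config.dependency_loading = false
  end
end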
Maybe it's okay, but at least you can start having a conversation and think about, is this thing doing too many things? And with all these settings now configured, and we were able to start a new green field project, which will be our example one. So green field project, we had everything at our disposal. We could do whatever we want because we were in complete control of what we want to do and how we want to approach it. So we did all these configuration settings. And here's the existing or the previous application again with the six minute runtime of the suite. I was not part of the project when we kicked it off. I joined the project when there was already 5,500 specs that ran in 23 seconds. And while I don't have a percentage here, I think this is a pretty big increase, decrease. Additionally to that, with all the tweaks about how many files need to be loaded at the beginning, loading or testing one controller in isolation used to take around 18 seconds until you knew, okay, this controller works or it doesn't work. Now with the new approach and the new configuration settings, we only needed to wait two seconds, which I think is still pretty good. One caveat here, though, is we made also a choice to not use Active Record at all. We switched to the SQL gem and a repository pattern in that way, which you don't need to discard Active Record for that. It's mostly about that you can switch your implementations for your persistency depending on the environment you're in. So when we were testing, we had an in memory repository in order to help us gain more speed. In production, of course, we use a real repository that actually connected to our Postgres database. But we could still have used Active Record for that. It's just we switched to SQL. No particular reason. It was just the way we started working with it. It's worth mentioning here, though, that this adds double burden on us. We needed to maintain two separate versions for each repository. We needed to maintain a Postgres implementation as well as an in memory implementation. But the benefits for that was that we gained a lot of speed for our test suite. So it was worth for us investing into this double maintenance. The way we did that was with shared RSpec examples that ran for both implementations. So we actually only needed to write the test ones, but implemented twice, basically. The second example I have was a Rails application we took over from a new client. And that started with 220 gems and a liberal usage of Active Record. The application itself came with 2,600 tests out of which were 730 controller tests and the rest of them were not just plain or Ruby object tests. There were a bunch of Cappie Bar tests as well as tests that integrated or verified some elastic search behavior as well as verifying that the side gig job was kicked off. So it wasn't just controller and the regular Ruby world. And the full suite ran in around 8.5 minutes. So we thought, okay, let's optimize it our way in order to gain more speed. And running a single controller with the existing application with the require Rails all way took around 14 seconds in that case. After we did all the configuration changes and all the required statements because now we actually needed to add all the required statements manually back into the code base which weren't there before because it was a regular or MacArthur Rails application. We were only able to cut it down to nine seconds. Still a little bit better but not the benefit we had before. 
And for a non-controller spec it looked a little bit better. If we were running a single non-controller spec it took around six and a half seconds. And we were able to cut this down at least to around half a second. So there was more optimizations. There was a better optimization for us than with the controller test but we weren't able to just apply it throughout the whole code base because manually adding all required statements took a lot of effort. I did all the benchmarking for two controllers. And at the end of it I had like my Git status showed me I had 150 piles changed. So because it's a client project we can't build the client for refactoring that we wanted to do for refactoring sake. So we would need to find a way to slide it in as we add more features. So we couldn't just flip the switch and say yep now your application is faster and we have a better development experience. But we could do some projections. So around eight and a half minutes was the previous runtime for this week. And with the knowledge about like it's not just controller test and non-controller test for us we projected numbers that we were able to cut it down to at least five to six minutes which is still fairly long too long for my own taste but a little bit better than before. And with all the changes and all the explicit requires I know that not many people would agree with me. The main point I'm trying to make here is that launching a test suite should not be a daunting task. You should you want to be able to run your test suite multiple times a minute to get this fast feedback. And if you only take one thing away from this talk is split up your spec halvers. Split your spec halvers in a spec halver that has just that just loads our spec and maybe define some global test halvers that you want to be using throughout your test suite and a separate rail spec halver that requires the existing spec halver and also requires the rail environment. This way you have this is a really low hanging fruit in order to at least optimize all your non-rails test. Like if you just test the class that does not require anything from rails you get a huge speed benefit. Having said all of that let's talk a little bit about the rails way. So what does this actually mean? In Rails's doctrine we can read that there is a value convention over configuration. We made a very conscious decision in our team that we favor expliciteness over implicitness. I like to read explicit and boring code. Like magic code does not I'm not a you I don't really like it. That doesn't mean that I don't like rails but it's just I favor explicit code over knowing that okay something will be loaded and something will be done for me in the background. It's worth mentioning that conventions are heuristics. They are not rules. You can break them and if you learn something new you might even realize how much this convention we used to follow for three years. Maybe it's not a convention anymore. We should use something else and it's not a bad thing. This comes the 80-20 rule applies here again. Like all these conventions we find in rails work very very well for 80 percent of rails apps out there but the remaining 20 percent might need something else and that is fine. It's not bad. It doesn't mean that rails is not the right fit for them. It's just that rails still offers enough flexibility in order to support the remaining 20 percent. You just need to maybe work a little bit harder to reap these benefits. 
As I mentioned earlier I want to talk a little bit about general architecture as well and while I'm not I don't want to bore you with 90s 70s material about what software architectures should look like there are a couple more recent architectures out there. First one is clean architecture which defines some entities and the core of the application that will be orchestrated by so-called use case implementations and then these use cases will be exposed through controllers in our rails application for example through the web. In a similar vein there is a thing called hexagonal architecture coined by Alistair Coburn which goes in a similar direction like at the core of your application is your actual application and you provide different adapters for different clients. Like you can have a GUI adapter in order to connect the QT GUI to it or an HTTP adapter to connect maybe a browser to it like think of rails here and these adapters do not only work for like inbound connections also for outbound connections like you have an adapter in order to connect to your persistence layer Postgres on MySQL or you have an adapter for the Arrest adapter that connects to Salesforce for example and last but not least in Martin Fowler's book Patterns of Enterprise Application Architectures this is a thing called service layer same idea again you have a user interface that connects to service layer which guards your domain model which then it itself has access to this it's called data source layer here it's your database at the end of the day and when we look at these three architectures they look very very similar they all have the same idea behind them and it's not something completely new even because if we take the service layer as an example and we focus on one particular area and we zoom in on that it's pretty much a layered architecture again like nothing new to learn here we're back in the future back in the 90s it's fine like new words for old concepts like isolate your application from the outside world that's pretty much it it's important to remember though that Rails is not your application and this is why I was emphasizing on the architectures a little bit and I want to show you an example of a pictures e-commerce platform here again in e-commerce platform we might have a user class that if user logs in we have a customer class representing this particular user and the user can have a basket a basket can consist of a couple of products and each product has a category and while we're checking out we need to select a delivery method as well as a payment method and at the end of the payment process we get an invoice at a PDF download for example and this is our core application like we don't see a controller there we don't see a session hash or any gnarly rails of all web details like this is the core of our application and this goes in the direction of the manager of design a book I highly recommend to read and in order to tie that back to the architecture we saw before let's have a look how that applies to our res application how that would apply to our res application so if we start from top to bottom our user interface is usually a browser and this browser connects through HTTP to our res application or our checkout controller and the checkout controller then translates this web request to a thing called our checkout which is our use case implementation in this instance and the checkout process the checkout use case then mediates between several domain model objects which ideally should just be 
plain old Ruby objects that define some behavior and then at the end it will create an order which is or which acts as our gateway to our database and this is pretty much it like top down one request comes in and you might have something new in the database how does that tie back to Ray as this MVC idea MVC has become this more of an abstract concept than a design pattern or anything like that and we've seen that a couple years back in the reds community even where there was talk about oh yeah we should pay for fat models over two skinny controllers until we realized okay these models get really really hard to test so we went to the other extreme and said okay let's keep the models really thin and put all the behavior in the controllers which made it really hard to test the controllers and it's not enough to only think or talk about these three buckets because I see this as a balloon that you just squeeze on one end you're not magically removing air or responsibilities from from things and you're just squeezing it here with and it will expand the balloon on the other side so it's important to not think about or focus on skin or fat anything we should focus on a healthy everything because spreading the responsibilities into small chunks will help us make create an easier to understand code base like if we only worry about classes that have five to ten lines it's really really it should be easier to understand than following a 50 method long method for example and when I was preparing this talk I had a chat with a friend over coffee and I was going through the topics and areas I want to discuss with him and he was asking me okay so we did this type of rails style for a couple of years now and he was asking would you do it again would you still do all your applications like this and I was thinking a moment and I thought well all the pre loading and loading optimizations yes for sure but the idea with the screaming architecture like removing the app directory and moving everything into lib probably not I think this is utterly overrated because if we think back to the default directory structure I don't necessarily need to know that this is a movie organization database or e-commerce platform for flowers or something like that it's okay to start with okay it's a res application if I want to know more I probably look into lib or if there's a source directory there's source and then I have my nice namespace responsibilities laid out there and with the separation of concerns or like separating your application from everything that is rail specific I actually like the idea to have these physically separated now like my core application is in lip nothing rails specific should leak into lib everything rails and web related should stay in app like this makes it not trivial but at least easier to upgrade rails as well because we don't need to worry about anything that is in lip maybe we wrap some active job dependencies inside lip but this is really about it lip is our application nothing of rails should leak in there so everything comes with a tradeoff right I'm not trying to sell you the promised land here like it's not that magically all of a sudden we will end up writing more and more awesome rails apps there are tradeoffs and there are hard tradeoffs you make for example explicit requires this is something that is not necessarily used in a typical rails application and if you take back the city example or the city metaphor at the beginning like we put up some big construction sites and just cut 
Not without thinking, but we need to be aware that it comes with a cost, because we usually work in teams and we're not an island. It needs to be a team decision, because in the end the team's effectiveness is more important than our own idealistic view of how an application should be structured. At the end of the day, whether a controller file sits in this directory or that directory is a technical detail — who cares, really? Would I start every project like this now, with this new approach? Probably not. I would wait until I feel the pain of having a slow test suite and then worry about how to optimize it. If I were putting up a Rails application for people to sign up for my birthday party, I'd put it on a free-tier Heroku dyno, scaffold the hell out of it, deploy it, and call it a day. It's really about thinking how much maintenance you expect for this particular problem and how much benefit you get from all these optimizations. And we should remember not to just follow whatever has been done before us: question things, break things, fix them again. Take the Leaning Tower of Pisa, for example — it's a great attraction, and massive numbers of people go there every year. Is that an indicator that we should build every tower like this now? Probably not. Last but not least, and more importantly: know your tools. Know why you follow certain rules, but also know why and when to break them — make it a conscious decision. And on that note, I think it's time. I will leave some room for questions, but first: thank you for listening to me.
With Rails being over ten years old now, we know that the Rails way works well. It's battle tested and successful. But not all problems we try to solve fit into its idea on how our application should be structured. Come along to find out what happens when you don't want to have an app directory anymore. We will see what is needed in order to fight parts of the Rails convention and if it's worth it.
10.5446/31234 (DOI)
Hi everyone, I'm really excited to be at RailsConf this year. I'm Eileen Uchitelle and I'm a senior systems engineer on GitHub's Platform Systems team. That's quite a mouthful, but it basically means that my team is responsible for working on internal and external tools for the GitHub application. We work on improving Rails, Ruby, and other open source libraries, and on how those tools interact with GitHub. I'm also the newest member of the Rails core team, which means that I've finally gotten access to the Rails Twitter account — because we all know that Twitter is the only important thing, ever. You can find me on GitHub, Speaker Deck, Twitter, anywhere at the handle eileencodes, including my website; it's all the same. So today we're going to talk about the new system testing framework in Rails that I spent the last six months building. We'll take an in-depth look at my process for building system tests, roadblocks that I hit, and what's unique about building a feature for open source software. But first, let's travel back in time a few years. At the 2014 RailsConf, DHH declared that test-driven development was dead. He felt that while TDD had good intentions, it was ultimately used to make people feel bad about how they wrote their code. He insisted that we needed to replace test-driven development with something better that motivates programmers to test how applications function as a whole. In a follow-up blog post titled "TDD is dead. Long live testing," David said: today Rails does nothing to encourage full system tests. There is no default answer in the stack. That's a mistake that we're going to fix. It is now three years after DHH declared that system testing should be included in Rails, and I'm happy to announce that Rails 5.1 will finally make good on that promise, because system tests are now included in the default stack. The newest version of Rails includes Capybara integration to make it possible to run system tests with zero application configuration required. Generating a new scaffold in a Rails 5.1 application will include the requirements for system testing without you having to change or install anything — it just works. This is probably a good time to address exactly what a system test is. Most people are familiar with Capybara being referred to as an acceptance testing framework, but the ideology of system testing in Rails is much more than acceptance testing. The intention of system tests is to test the entire application as a whole entity. This means that instead of testing individual pieces or units of your application, you test how those pieces are integrated together. With unit testing, you'll test that your model has a required name, and then in a separate test that the controller detected an error. With unit testing you assume that the view must be displaying the error, but you can't actually test that. With system testing, all of that becomes possible: you can test that when a user leaves out their name, the appropriate error is displayed in the view and the user actually sees it. System tests also allow you to test how your JavaScript interacts with your models, views, and controllers — and that's not something you can do with any other testing framework inside Rails right now. Before we get into what it took to build system tests, I want to show you what they look like in a Rails application. When you generate a new Rails 5.1 app, the Gemfile will include the capybara and selenium-webdriver gems.
Capybara is locked to 2.13.0 and above so that your app can use some of the features pushed upstream to Capybara, like minitest assertions. In your test directory, a system test helper file called application_system_test_case.rb will also be generated. This file holds the public API for your Capybara setup for system tests. By default, applications will use the Selenium driver with the Chrome browser and a custom screen size of 1400 by 1400. If your application requires additional setup for Capybara, you can include all of that in this file. Any system test that you write will inherit from ApplicationSystemTestCase. Writing a system test is no different from writing Capybara tests, except that Rails now includes all of the URL helpers, so you can use posts_url and posts_path instead of "/posts" without doing any additional configuration. A simple test navigates to the posts index and asserts that the h1 selector is present with the text "Posts". Then, in your terminal, you can run system tests with rails test:system. We don't run system tests with the whole suite because they're slower than unit tests, and everyone we talked to runs them in a separate CI build anyway. If you want system tests to run with your whole suite, you can create a custom rake task that does that. Let's take a look at system tests in action. I had to record the demo because the tests run too fast and you wouldn't be able to see them unless I slowed them down. First, we're going to write a test for creating a post. The test visits posts_url. Then we tell the test to click on the "New Post" link in the view. Then we fill in the attributes for the post: the title gets "System Test Demo", and for the content we just put in some lorem ipsum. Then, just as a user would, we click on the "Create Post" button. After the redirect, we assert that the text on the page matches the title of the blog post. Then we run the tests with rails test:system. You can see that the Puma server is booted and Chrome is started, and the test fills in the details that we specified. And that's the index test. As you can see, it's super simple, and they run really fast.
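For reference, the generated helper and a test along the lines of the demo look roughly like this — the Post scaffold, its field labels, and the content are the hypothetical example from the demo:

```ruby
# test/application_system_test_case.rb -- generated by Rails 5.1
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
end

# test/system/posts_test.rb
require "application_system_test_case"

class PostsTest < ApplicationSystemTestCase
  test "creating a post" do
    visit posts_url                        # Rails URL helpers work in system tests

    click_on "New Post"
    fill_in "Title", with: "System Test Demo"
    fill_in "Content", with: "Lorem ipsum dolor sit amet"
    click_on "Create Post"

    assert_text "System Test Demo"         # Capybara assertion after the redirect
  end
end
```

Running bin/rails test:system boots Puma, starts Chrome, and executes only the tests under test/system.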
You may be wondering why it took three years to build system tests — that didn't seem that complicated. If you're familiar with the pull request, you know it didn't actually take three years of work: it took me six months. There were a few reasons that system tests took three years to become a reality. The first is that system tests needed to inherit from integration tests so they could access the URL helpers that already exist, and integration tests were really slow — the performance was abysmal. There was no way the Rails team could push system testing through integration tests without a major backlash from the community; nobody wants their tests to go from five minutes to ten minutes. That kind of performance impact isn't acceptable, so speeding up integration tests had to happen before implementing system tests. In 2014 and 2015 I worked with Aaron Patterson on speeding up integration test performance, and once we got integration tests to be only marginally slower than controller tests, system tests could inherit from integration tests. Another reason it took three years is that, contrary to what many may think, the Rails core team does not have a secret Rails feature roadmap. Rails is a volunteer effort, so if there isn't someone interested in implementing a feature, it's not going to get implemented. Of course, individually we may have an idea of what we'd like to see in Rails 6 or 7 or 8, but I'd hardly call it a roadmap. We each work on what we're passionate about, and often features grow out of real problems that we're facing with our applications at work — and system tests are a really good example of this. Prior to working at GitHub, I was a programmer at Basecamp. When we were building Basecamp 3, we decided to add system testing through Capybara, and I saw firsthand the amount of work it took to get system testing running in our application. This was a major catalyst for getting system tests into Rails 5.1: the work required to get system testing into Basecamp 3 reinspired the motivation to work on this feature, so that others could do less work in their applications and focus on what's really important — writing software. So this past August, David asked me if I would be interested in getting Capybara integration into Rails 5.1. These are the exact words he said to me. Most of my work on Rails had been in the form of performance improvements, refactorings, or bug fixes, so I was really excited to work on a brand new feature for the Rails framework. There was just one caveat: I had never used Capybara before. I know that sounds ridiculous, but beyond writing three or four system tests at Basecamp, which I admittedly struggled with, I had never set up an application for Capybara nor written an entire test suite. This did have some pros, though. I got to experience firsthand what was hard about setting up Capybara, especially from a beginner's standpoint. I had no assumptions about what was easy or hard when I began development on system tests, and having no experience with Capybara allowed me to see the feature I was building solely from the perspective of what works for Rails and Rails applications. This is not to say that Capybara does anything wrong, but Rails is extremely opinionated about what code should look and feel like. For Rails, it's important that implementing system tests is easy and requires little setup, so the programmer can focus on their code rather than test configuration. When you're implementing something that you're unfamiliar with, it's best to have a set of guiding principles in order to make decisions about design and implementation. Without these goals, it's easy to get sucked into scope creep or bikeshed arguments about the details. Having guiding principles means that for any decision you can ask yourself: does my code meet these guidelines? For guidance on building system tests, I of course used the Rails Doctrine. This, as mentioned earlier today, is a set of nine pillars that drive decision-making and the code that goes into the Rails ecosystem. While I was building system tests, I would regularly base decisions on the Rails Doctrine. System tests meet all of these requirements in some way, but I want to take a look at a couple of the principles and how system tests meet those specific requirements. The first is "optimize for programmer happiness." This pillar is the overarching theme in all of Rails: Rails' entire goal is to make programmers happier, and frankly, I'm spoiled because of this. Rails makes me happy, and I'm sure it makes all of you happy too, because you wouldn't be here at RailsConf otherwise. But you know what didn't make me happy? All of the implementation required to get Capybara running in our Rails applications.
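What that implementation looked like varied from app to app, but a typical pre-5.1 Capybara setup was roughly along these lines — a hedged approximation, since option names and versions differed between applications:

```ruby
# test/test_helper.rb (pre-Rails 5.1) -- boilerplate each app had to carry itself
require "capybara/rails"
require "selenium-webdriver"

# Serve the app under test with Puma instead of Capybara's default server.
Capybara.server do |app, port|
  require "rack/handler/puma"
  Rack::Handler::Puma.run(app, Port: port)
end

# Register a Selenium driver that uses Chrome.
Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.default_driver        = :chrome
Capybara.javascript_driver     = :chrome
Capybara.default_max_wait_time = 5
```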
The code here is the bare minimum that was required for your application to use Puma, Selenium, and Chrome for system testing. Many applications had to do this multiple times to be able to use multiple drivers with their test suite, or had much more setup because they wanted to support different browsers with custom settings. Rails 5.1 system tests mean that you can use Capybara without having to configure anything in your application: you can generate a new Rails 5.1 app, and all of the setup to run system tests is done. Programmer happiness was the driving force behind getting system testing out of your application and into Rails. You don't need to figure out how to initialize Capybara for a Rails application, you don't need to set a driver, you don't need to pass settings to the browser, and you don't need to know how to change your web server. System tests in Rails abstract away all of this work so that you can focus on writing code that makes you smile — and all you need is this simple little method, driven_by. When you generate a new application, a test helper file is generated along with it; if you've upgraded your application, this file will be generated when you generate your first system test or scaffold. All of the code we looked at previously is contained in this one method, driven_by. It initializes Capybara for your Rails app, it sets the driver to Selenium and the browser to Chrome, and it customizes the screen size. Rails values being a single system — a monolithic framework, if you will — that addresses the entire problem of building a web application, from databases to views to WebSockets and testing. By being an integrated system, Rails reduces duplication and outside dependencies, so you can focus on building your application instead of installing and configuring outside tools. Prior to Rails 5.1, Rails as a whole didn't address the need for system tests; by adding this feature, we've made Rails a more complete, robust, and integrated system. As DHH said in 2014, Rails was incomplete when it came to system tests. Rails 5.1 closes that gap by adding Capybara integration — you no longer need to look outside of Rails to add system testing to your applications. Rails also values progress over stability. Yes, this means that betas, release candidates, and even final releases often have a few bugs in them, but it also means that Rails hasn't stagnated over time. Rails has been around for many years, and the progress we've made in that time is astounding. We care about our users, but we also care about the framework meeting the demands of the present and the future, which means sometimes adding improvements that won't be immediately stable. You also don't know if a feature is just right until someone else actually uses it. I could have spent years testing and improving system tests, but ultimately I merged them when I knew there were still a few bugs left. I did this because I knew the community would find answers to the problems I didn't know how to solve and would find new issues in the implementation that I just hadn't thought of. By valuing progress over stability and merging system tests when they were 95% done instead of 100% done, many community members tested the beta release and provided bug fixes for things in system tests; a few features were even added, and some functionality was moved upstream to Capybara instead. System tests progressed more by being merged with a few bugs left than they would have by waiting until they were perfectly stable.
Now that we've looked at the driving principles behind system testing, let's look at the decisions, implementation, and architecture in the Rails framework. We're going to look at why I chose specific configuration defaults, and at the overall plumbing of system tests in the Rails framework. The first configuration default I want to talk about is why I chose Selenium for the driver. The barrier to entry for using system tests should be low: zero setup, and easy for beginners to use. In Capybara, the default driver is RackTest. I didn't think this was a good default for system testing, because RackTest can't test JavaScript and can't take screenshots — it's not a good default for someone who's learning how to actually test their system. I also had a few folks tell me on the pull request that they thought Poltergeist was a better choice because it was faster and that's what they used in their apps. While it is true that Poltergeist is popular and faster, ultimately I chose Selenium for a few reasons. Selenium doesn't require the same kind of system installs: Poltergeist requires PhantomJS, and Capybara WebKit has a system dependency as well. Those system installs aren't something that Rails can take on, and since Selenium doesn't have those requirements, it made sense for Selenium to be the default over Poltergeist or Capybara WebKit. One of the coolest things about Selenium is that you can actually watch it run in your browser. Poltergeist and Capybara WebKit are headless drivers, which means they don't have a graphical interface: while they will produce screenshots and they can test JavaScript, you can't actually see them run. Watching Selenium tests run in a real browser like Chrome or Firefox is almost magical, which also makes it better for beginners. New programmers, especially those learning Capybara and Rails, can physically see the tests running, so it's easier to discern what's happening or what they might be doing wrong. The best part about system tests is that if you don't like Selenium, the driver options are extremely easy to change. To change the driver that system tests use, open your system test helper and change the driven_by method from :selenium to :poltergeist. Of course, you're going to need to install PhantomJS and add the gem to your Gemfile, but changing the driver setting itself is super simple. Rails won't stop you from passing whatever you want here, but Capybara will only accept Selenium, Poltergeist, Capybara WebKit, and RackTest. Another decision that differs from Capybara's defaults is that Rails uses the Chrome browser with Selenium instead of Firefox. Chrome is widely used and has a greater market share than Firefox, and in general I think most development is done in Chrome, so it seemed like a sensible default from that standpoint. Another reason I chose Chrome was that for a while Firefox was broken and didn't work at all with Selenium 2.53. This has since been fixed, but when I started working on it, it was one of the motivations I had for making the default Chrome — there was literally no way I could merge system tests and have the default configuration be broken. Firefox now works with Selenium, and if you upgrade both Firefox and your selenium-webdriver gem, you can use Firefox. If you want to use Firefox instead of Chrome, you can simply change the using keyword argument from :chrome to :firefox.
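Concretely, swapping the driver or the browser is a one-line change to the generated helper. The two variants below are alternative contents for the same file, shown together only for comparison:

```ruby
# test/application_system_test_case.rb

# Variant 1: use Poltergeist instead of Selenium
# (requires the poltergeist gem and a PhantomJS install).
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :poltergeist
end

# Variant 2: keep Selenium but drive Firefox instead of Chrome.
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :firefox, screen_size: [1400, 1400]
end
```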
The using argument is only used by the Selenium driver, since the other drivers are headless and don't have a GUI. I'd love to support more browsers in the future, like Safari or Internet Explorer or whatever it is you're using. driven_by has a few optional arguments that are supported by Selenium. The screen_size argument sets the browser's height and width, which is good for testing your website at different browser sizes or setting the size for screenshots. driven_by also takes an options hash, which is passed to the browser initialization. This can be useful for passing options that aren't explicitly defined in Rails but are accepted by Capybara, like the url option. One of the coolest features of system tests is that they automatically take screenshots when a test fails. This is good for freeze-framing failures so you can see what went wrong, and it works with all drivers supported by Capybara except RackTest. Included in the system test code on the Rails framework side is an after_teardown method that takes the screenshot if the test fails and screenshots are supported. Let's take a look at those screenshots in action. First, I'm going to change part of the test to say "Failure Screenshot" instead of "Demo". Then we run the test just like we did before: it boots the Puma server, and now you can see that the test failed. In the output of the test there is a link to an image, and we can open it. You can see that the screenshot shows "System Test Demo" on the page, but the test is looking for "System Test Failure Screenshot", so we can actually see why it failed. You can also take a screenshot at any point while your test is running by calling take_screenshot. This can be useful for tools like Percy for comparing front-end changes, or for saving a screenshot of what your website looks like at whatever point in your test run.
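Here is a small sketch of those options and of take_screenshot in use; the remote Selenium URL is just an illustrative value for the options hash, and the test body is hypothetical:

```ruby
# test/application_system_test_case.rb
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  # screen_size sets the browser window dimensions; the options hash is passed
  # through to the driver, e.g. to point Capybara at a remote Selenium server.
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400],
            options: { url: "http://localhost:4444/wd/hub" }
end

# test/system/screenshots_test.rb
require "application_system_test_case"

class ScreenshotsTest < ApplicationSystemTestCase
  test "taking a screenshot mid-test" do
    visit posts_url
    take_screenshot   # saved under tmp/screenshots, in addition to the
                      # automatic screenshot taken when a test fails
  end
end
```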
One of the less obvious features of system testing is that Database Cleaner is no longer required. Those of you who have worked with Capybara before will know that transactions during Rails test runs with Capybara wouldn't get properly rolled back. It's been the status quo for a long time that Active Record was just broken and unable to roll back transactions when tests were threaded. The basic gist of the problem: when a test starts, the Rails testing code starts a transaction on a database connection, and then the web server opens a second connection to the database on a separate thread. When the test runs, database inserts or updates happen on the second thread instead of on the thread holding the transaction. When the test finishes, those inserted or updated records won't be rolled back, because the fixture thread cannot see the web server thread — and if the inserted or changed records aren't rolled back, subsequent runs will fail due to uniqueness constraints or other issues with leftover data. I spent an embarrassing amount of the six months building system tests trying to solve this problem. It took me a while to understand the real issue in Active Record, and I was surprised how many years users had simply accepted that this was an issue with Rails. I wanted to solve it so that we didn't have to force users to use yet another dependency. The problem was that I didn't know how to solve it, and I'll be honest that concurrency isn't one of my strengths, so I had to ask for help from the two people who know more about Active Record concurrency than I do: Aaron Patterson and Matthew Draper. This is definitely a picture of me. The problem was that Aaron and Matthew had differing opinions on how to fix it. I first tried to fix it Aaron's way, which was to tell Active Record to just check the connections back in when it was done with them, but this broke something like 75% of the Active Record tests, so that wasn't going to be acceptable — you'd be trying to check the connection back in while the transaction was still open and needed. Matthew came up with a different solution, which was to force all of the threads to use the same connection. When the test starts, the transaction is started and a database connection is opened; when the Puma server starts, it's forced to connect to the database using the already existing connection instead of creating a new one, so the two can see each other. Then all of the database inserts and updates happen on the same connection as the original test transaction, and they can all be rolled back when the transaction is closed. Without Matthew and Aaron's help I would not have figured out how to fix this problem, and you all would have had to use Database Cleaner forever. We've spent a lot of time looking at individual settings in the public API for system tests, so it's time to look at the plumbing that makes all of this work. None of the code we look at from this point on is anything you should ever have to touch unless you find a bug — this is just everything Rails has abstracted away so you don't have to worry about configuration and can focus on writing your tests. System tests live in Action Pack under the ActionDispatch namespace and inherit from integration tests, so they can use all of the URL helpers that are already implemented on the integration side. The entire class can't fit on one slide, so I'm going to go through the methods as they're called when a test is started. When you run a system test, start_application is called first; this boots the Rack app and starts the Puma server. Your system test helper file then calls the driven_by method — this is where the default configuration settings are implemented. When driven_by is called, a new SystemTesting::Driver is initialized with the arguments you passed in: the name, browser, screen size, and options. When the running test is initialized, it's important to call use on the driver there, so the driver is set when the test is initialized and not when the class is loaded. The use method calls register if the driver is Selenium; register is how Capybara sets the browser to Chrome with the options and screen size passed to driven_by. And finally, use calls setup, which simply sets Capybara's current driver to the driver you passed into driven_by. This is the basic plumbing required in Rails to get system tests running. It's relatively simple, but it's great that none of you have to put it in your applications anymore.
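As a rough paraphrase of that flow — simplified, and not the actual Rails source — the driver plumbing amounts to something like this:

```ruby
# A simplified paraphrase of the flow just described -- not the real Rails code.
module SystemTesting
  class Driver
    def initialize(name, using:, screen_size:, options:)
      @name        = name
      @browser     = using
      @screen_size = screen_size
      @options     = options
    end

    # Called when the running test is initialized, not when the class loads.
    def use
      register if @name == :selenium
      setup
    end

    private

    # Teach Capybara how to build the requested browser
    # (screen size and the extra options hash are omitted here for brevity).
    def register
      Capybara.register_driver @name do |app|
        Capybara::Selenium::Driver.new(app, browser: @browser)
      end
    end

    # Point Capybara at the driver that was passed to driven_by.
    def setup
      Capybara.current_driver = @name
    end
  end
end
```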
One of the things that struck me about working on this feature for Rails was how different it was from building features for a product or a client. I think this is in part because the work is so public, whereas client or product work is usually a secret until the big reveal. You're probably thinking: duh, it's open source, so it's by definition public. But almost all of the work I'd done previously was related to performance improvements, refactorings, and bug fixes — things that folks don't come out of the woodwork to comment on. That isn't the type of work people care a lot about when it comes to style or implementation; it doesn't change their application unless you're touching the public API. But adding a brand new testing framework — that's something everyone has an opinion about. In the three months that my system test pull request was open, I got 11 reviews and 161 comments, and that's not even including all of the conversations we had about it in the Basecamp chat. This highlighted one of the big challenges of open source for me: it's really difficult not to constantly feel like you're being judged. Doing open source work makes you extremely vulnerable; it can feel like every commit, every comment, and every code change is open to public scrutiny. This is one of the hardest things about working in open source, and one of the things that I think keeps new contributors from coming to work on open source. I'm on the Rails core team and I still get an adrenaline rush when I push to master; I still start sweating if the build fails after I merge a pull request. I've been doing open source for many years and I still feel vulnerable. It's difficult to remain confident and keep your cool when doing publicly visible work. I often had to fight the urge to rage-quit all of it because I was tired of debating choices that I had made. Even if the other person is right, it's still exhausting to feel like you're having the same conversation over and over again about implementation. But public debate is an inherent part of open source, and you're always going to have to argue for your position. When your confidence is shaken, it can be tempting to look for ways to find consensus among everyone who's reviewing your work. The desire to find consensus isn't unique to open source, but what is unique to open source is that the stakeholders you're trying to find consensus with have varying levels of investment in the end result. When you're building a feature for the company you work for, or for a client, you usually know who your stakeholders are — the people who care most about the feature you're working on. With open source, you don't really know who's going to care until you open that pull request. Of course I knew the Rails team was a stakeholder and cared a lot about how system tests were implemented, and I knew the Capybara team cared about the feature as well. But I wasn't prepared for all of the other people who would care. And of course caring is good — I got a lot of productive and honest feedback from community members — but it's still really overwhelming to feel like you need to debate everyone. Rails' ideology of simplicity differs a lot from Capybara's ideology of lots of features, and all of the individuals who were interested in the feature had differing opinions as well. Which driver was the best default? Was it okay to change Capybara's long-time default from RackTest to Selenium? Was it even desirable to include screenshots by default? Was it fine to change the default browser from Firefox to Chrome? I struggled with how to respect everyone's opinions while building system tests while also maintaining my sense of ownership. I knew that if I tried to please all three groups and build system tests by consensus, I would end up pleasing no one. Everyone would end up unhappy, because consensus is the enemy of vision.
Sure, you end up adding everything everyone wants, but the feature loses focus, the code loses style, and I lose everything I felt was important. I needed to figure out a way to respect everyone's opinions without making system tests a hodgepodge of ideologies or feeling like I'd thrown out everything I cared about. I had to remind myself that we all had one goal: to integrate system testing into Rails. Even if we disagreed about the implementation, this was our common ground. With this in mind, there are a few ways you can keep your sanity when dealing with multiple ideologies in the open source world. One of the biggest is to manage expectations. In open source there are no contracts: you can't hold anyone else accountable except yourself, and no one else is going to hold you accountable either. You are your own boss and your own employee. You're the person who has to own the scope, and you're the person who has to say no. There were a ton of extra features suggested for system tests that I would love to see, but if I had implemented all of them, system tests still wouldn't be in Rails today. I had to manage the scope and the expectations of everyone involved to keep the project in budget. While I really respected everyone's opinions on system tests, ultimately I was building the feature for Rails, and system tests needed to fit into Rails' look and feel. To do that, I had to work with Capybara's ideology — that system testing should be robust and have many options. There's nothing wrong with that approach, but it doesn't follow the Rails Doctrine. Because Capybara didn't provide a clear enough path for getting system tests into your Rails application, Rails had to take that on. In the end, it was the Rails team who would decide when system tests were mergeable, so since I was building the feature for Rails, I honored Rails' principles first and everyone else's second. When you're building open source features, you're building something for others, and if you're open to suggestions, the feature might change for the better. Even if you don't agree, you have to be open to listening to the other side. It's really easy to get cagey about the code you worked so hard to write. I still have to fight the urge to be really protective of the system test code when someone wants to add some code or change how it works: I wrote it and I put a lot of time and effort into it, so I'm protective of how it looks and feels. But I also have to remember that it's no longer mine — and never really was mine. It now belongs to everyone who uses Rails, so I need to be open to suggestions, tweaks, and changes. Which brings me to my last point: open source doesn't work without contributors. A perfect example of this is how I merged system tests when I knew they weren't 100% stable. I did this because I didn't know how to fix a few bugs, like displaying the correct number of assertions, or how to run system tests separately from rails test. The best part was that the system worked. You can't push an unfinished project live and ask your client to contribute to it, improve upon your work, and fix bugs — but with open source, you can build a foundation for others to work off of, even if it's not perfect. I didn't merge system tests with known issues because I was lazy; I merged them because I knew that contributors could help me fix the problems once they tested them in a real application.
And because of this, by the time the Rails team released release candidate 1, less than a month after the first beta, all of the known issues I knew about before merging had been fixed by other people. twalpole, the Capybara maintainer, added minitest assertions to Capybara. Previously, users running system tests with minitest would see an incorrect number of assertions and failures, because Capybara handled those the RSpec way. This was one of the things I wasn't sure how to fix: I knew fixing it in the Rails code was wrong, but I needed Capybara's buy-in to fix it upstream. So it was great that the maintainer was willing to add this for Rails, and it's a huge win for system tests. Robin850 sent a pull request that changed system tests so they don't run with the whole suite. We didn't want system tests to run when you run rails test in your application, because they can be slow since they use an actual browser, so by default we run them with a separate test command. Two other contributors helped change driven_by so it could be used on a per-subclass basis rather than globally. When I originally designed system tests, it was intended to be a global setting, because I assumed that if you were a Capybara power user you just wouldn't be using driven_by anyway. Once system tests were merged, it became clear that others were really excited about using the driven_by method for setting up multiple drivers, so these contributors helped make system tests a lot more robust. FrenchApp helped improve screenshots, making them configurable and displaying them differently based on environment settings — your CI versus your terminal. I knew this was a problem when I merged, but I really wasn't sure how to fix it, and he came up with an elegant and easy-to-use solution. This was also his first contribution to the Rails framework. All of these improvements to system tests highlight the real beauty of open source: it doesn't work without contributors, and it doesn't work without you. Open source is for everyone, by anyone. Contributing to open source isn't easy, especially to a mature framework like Rails, but Rails has an astounding past and a bright future because of all of the contributors who care. And don't believe anyone who says Rails is dying, because frankly the future has never been brighter. Rails can and will benefit from your contributions in the future. Just like the contributors who pushed changes to system tests, you too can make system tests and Rails better, and by contributing you can help define Rails' future. I hope that next year, at RailsConf 2018, I see some of you talking about your contributions to Rails and open source. I hope you enjoyed learning about system tests, how they work, and my process for writing the feature. I also hope I've inspired you to contribute to open source, especially Rails. And if you're looking to learn more about contributing to Rails, stay for Alex Kitchens' talk, which is up next after the break. Thanks for listening. The question was: is the Rails team going to take a position on testing and push system tests more? The first part of that is that system tests first needed to exist — and now they exist. There are different kinds of testing, and because system testing can be a lot slower, since you're using a browser and JavaScript and all this other stuff, it doesn't necessarily make sense for us to push system tests as "you have to system test and you can only system test."
We're not going to delete unit tests from Rails tomorrow, as far as I'm aware — I think that would make a lot of people really mad, so it's unlikely we would do that. You should use what you want to use, but now you don't have to look outside of Rails to use system tests, and that's the best part of it. The next question was how I find time to contribute to Rails and open source alongside my full-time job. That's really hard, and I think it's one of the reasons I would sometimes feel really burned out on system tests — I was just tired of spending my weekends working on it. My balance is sort of weird: at GitHub I do have some time to work on open source, but it's not my main job. I find the balance by only working on things I'm really excited about when it's not for GitHub — or for Basecamp, when I was there — or usually stuff that Aaron is excited about, because he likes to pawn things off on me. So yeah, it's hard. One thing that would help is convincing your company to give you some open source time, even if it's just one day a month where you give back; you could do it as a group, so it's easier to get ramped up on a project. Also, if you're using a Rails app at work, staying up to date actually makes you more likely to find bugs in Rails, because not everyone is running up-to-date applications. By doing that you might find a bug, and if you can fix it, you get that as a contribution as well. Features obviously take a lot more time and effort than fixing a bug — well, it depends, but a bug is usually a bit more contained than a feature. Does Rails have an opinion on how you structure your system tests? The driven_by method structures your configuration and setup, so we do have an opinion on that. For everything else, we didn't add different ways of writing Capybara tests — we didn't override visit and change it to get or post; we left that the way it was. So generally, write them the Capybara way unless there's a compelling reason to add a simpler method — and if we ever wanted to do that, we haven't done it yet. So write them the Capybara way, until and unless we change that. The next question was: what was the most exciting or fun part about building this feature? The day I merged it. It went on a long time — the pull request was open for three months — so merging it was the most fun part, because you have this final "yay, it's done, everyone gets to use it." This was also one of the most well-tested betas, which was really exciting. I don't know if it was just because everyone was really excited about system tests and encrypted secrets and a few other things we had, but we had more bugs found and fixed in our first beta than I think we've ever had, so that was also really exciting. The question was how I settled on a name for this; I actually left that out of the talk because I thought it wasn't interesting. Originally we had picked one name — someone had picked it a long time ago in the Basecamp project that we have for Rails, and I said, sure, I'll make it that — it was something like Rails::SystemTestCase.
But then it didn't fit anywhere, because you don't want to put it under the railties namespace, which is the only place where there's a Rails namespace under the Rails umbrella — there's no other testing framework inside railties, so that didn't make sense. It needed to inherit from integration tests, and before I fixed the Active Record issue, we thought there was going to be an Active Record dependency, or possibly a Database Cleaner dependency. So then I took everything and moved it into its own gem, and right before the merge somebody said, hey, what about the name? There was a whole debate, and it got moved back in next to integration tests, and that's how it ended up as ActionDispatch::SystemTestCase. That also highlighted that integration tests are really weirdly named — they should probably be called IntegrationTestCase, not IntegrationTest, because everything else is a test case. So maybe that'll get fixed one day. I am out of time, so come find me after. I'm sorry for those of you who didn't get your questions answered.
At the 2014 RailsConf DHH declared system testing would be added to Rails. Three years later, Rails 5.1 makes good on that promise by introducing a new testing framework: ActionDispatch::SystemTestCase. The feature brings system testing to Rails with zero application configuration by adding Capybara integration. After a demonstration of the new framework, we'll walk through what's uniquely involved with building OSS features & how the architecture follows the Rails Doctrine. We'll take a rare look at what it takes to build a major feature for Rails, including goals, design decisions, & roadblocks.
10.5446/31235 (DOI)
Oh, there you are — I've been looking for you. Welcome to your first day at DeLorean. You're going to love it here. Here at DeLorean, we're revolutionizing the industry of time travel: with one push of a button on our app, you can summon a driver in a DeLorean. We come roaring around the corner, you jump in, and you get taken to whatever time period you want. Now, I should mention, it's a little bit messy around here because we've got a lot of teams, you know? So you might be a little surprised when you open the code base — but don't worry, it's totally normal for everyone to feel a little surprised when they first join. One thing you should also know is that we have several of these code bases, so when you implement a feature, you're going to have to check out this code base and that one and that one and that one. And not only that, we have some funny naming conventions around here: when your product owner tells you about the FizzBang widget, don't forget it's actually called the FooBar Doohickey — which someone actually calls the Bar Thingamabob, for some reason. Don't worry about it; it's just the way things are around here. And I know what you're thinking: we really need to clean things up around here, which I promise we're going to get around to. But for the time being, I just really need you to be heads-down on our biggest, latest product development: puppy deliveries. I guarantee you it's going to be a hit. Well, okay, it's time for stand-up, so I'll see you around — you'll get started with your team. Welcome to DeLorean. Hi, I'm Andrew, and I'm a software developer at Carbon Five. Like many of you, I've been a Rails developer for several years, and at Carbon Five and in prior jobs I've been part of teams working on large Rails code bases that have struggled to scale as they've grown in size and complexity. I've been thinking a lot about beautiful systems: what are the things that make a system beautiful? We've talked a lot about beauty here at RailsConf. Beauty, as many of us Rubyists might think, comes from the language — its syntax, its form, its expressiveness, whether we have nice DSLs that read like English. Or it could come from the tooling: developer ergonomics, beautiful error messages that are genuinely helpful, an amazing debugger — or, in a different language, a great type system or a compiler. Some of us might consider beauty to come in the form of tests: whether our community encourages us to write great tests, whether there is a test suite that makes our code resistant to breaking changes. But what I want to propose today is that a system is beautiful when it's built to last — when it has longevity and it stands the test of time under changing business and product requirements. These long-lasting systems are just large enough: they know their boundaries, they don't grow past them, and they know what they're responsible for. They are highly cohesive and loosely coupled. What that means is that they contain the necessary set of concepts within themselves, and those concepts are all close together — there's no need to reach outside to go fetch a concept somewhere else. And when I say they're loosely coupled, I mean that they minimize their dependencies on each other. And they have precise semantics that fully express their business domain.
So when you jump into the code base, there's no confusion as to what it means — as to what business process it's representing or trying to implement. Now, some coworkers and I have spent the past couple of years reading papers from computer scientists of the past, and we came across David Parnas' paper from 1972, "On the Criteria to Be Used in Decomposing Systems into Modules." He called this criterion information hiding. Here's what it said. He described a software system whose job was to process text: it took words as input, did some processing on the text, shifted things, and alphabetized things. And he compared two approaches to decomposing this system into modules. In the first approach, he treated the program like a script: step one goes into a module, step two goes into a module, step three goes into a module. He says that's probably the approach most people would have taken with this program. In the other approach, he divided it up by responsibilities: this module is responsible for line storage, this module is responsible for alphabetizing, this module is responsible for writing things out to disk. What might seem obvious to us 45 years later is that this is a good idea. So what he concluded in the paper was that we should divide modules along difficult design decisions, or design decisions which are likely to change — and by doing that, we insulate the rest of the system from those modules when they change. I wanted to draw this out a little more, because I think it applies very well to software systems, meaning the systems in our business: where are the design decisions that are going to change in our company? I want to put forth that this happens within the business groups that generate those changes. Here's an example. My team, marketing, wants us to generate 5,000 promo codes. Your team, finance, wants you to write a new audit log entry every time someone's credit card is charged. And on your team, product wants you to implement food delivery. Then my team, marketing, says: actually, we want 2,000 of those 5,000 codes invalidated. And finance needs us to add yet another attribute to the log. And your product team wants you to launch in a second market. To me, that sounds like change divided up along the parts of the business that are driving it. If you've ever worked in a nice greenfield Rails app, you know it feels really nice: marketing asks us to do this, finance asks us to do that, and it's easy to add features as they get spread across the code. But as time wears on, we know it starts to feel a little more like this. So the question becomes: how do we get out of this large system that's doing too much? Well, I've heard about microservices, and I know they're not easy. If there's anything I've learned from attending these conferences, it's that they come with an operational complexity that most people fail to consider and realize only too late, after stepping in. How much do we extract? Do I extract one little feature? Do I extract an entire area of the code base? Where should I draw those boundaries? What if I extract something that's too specific? And on the other hand, what if I extract something that's too generic? I once worked at a Rails company with a very large Rails monolith.
And we realized, as the engineering team, that we needed to show our CTO something about where we wanted to take the architecture. Well, we had no idea. We didn't know whether, in the end, we needed something on the order of 10 systems or 90 systems. If only we had something to help us visualize what we needed. Well, in 2003, Eric Evans came out with a book called Domain-Driven Design, and in it are both a set of high-level strategic design activities and also very concrete software patterns. I must warn you there's a lot of enterprise-speak in it, and the Java or .NET code you'll find on the internet can be very confusing — it certainly confused me when I first got my head into it. But a coworker told me at the time: you should really look into domain-driven design, because I think it will help us. So today we're going to pick an activity from domain-driven design called building a context map. Through the context map, we're going to learn some concepts from domain-driven design that will help us understand our systems, and then we're going to learn some refactoring patterns that we can apply to incrementally organize our systems around the boundaries we find in our context map. So let's get started. Domain-driven design is very, very much focused on language — that's the first distinction I usually point out when people ask about the subject. In it, we have something called a ubiquitous language. A ubiquitous language is not meant to be a global language for the entire company to use; it's simply a shared set of terms and definitions that your team, in your area of the business, can agree on, and we typically use this language to drive the design of the system. Through the development of something called a glossary, we get people together in a room and simply write out a list of terms and definitions. It seems like a very straightforward exercise: we come up with the nouns and verbs we use in our domain, or within our team. For example, over lunch we might sit together and write on the whiteboard: okay, a driver is this thing, the driver owns the DeLorean, the driver drives around and provides driver services. The rider does this — and then your product owner might say, wait, we call that a passenger. So you scribble out "rider" and write: oh, actually, we're going to say passenger. Then we might talk about events as verbs: there might be an event in which we hail a driver, or an event where we charge a credit card, and so on. The idea is that this list of terms and definitions is something we codify, either in a document or in the code, so that we can all agree on what words to use. And then we go on and start renaming things in the code to follow the business domain. For example, we might have something in the code in which a user requests a trip. There are two language things in there that we realize don't actually follow the business domain, so we rename them: it's actually a passenger, and the passenger hails the driver.
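As a hypothetical illustration of that renaming — these classes are made up for the example, not taken from the talk's slides:

```ruby
# Before: names that don't match how the business talks about the domain.
class User < ApplicationRecord
  def request_trip(driver)
    Trip.create!(user: self, driver: driver)
  end
end

# After: renamed to follow the ubiquitous language -- a passenger hails a driver.
class Passenger < ApplicationRecord
  def hail(driver)
    Trip.create!(passenger: self, driver: driver)
  end
end
```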
Now let's move on and visualize our system. I'm going to generate an entity-relationship diagram for us; there are gems that do this, one called railroady and another called rails-erd. The goal right now is simply to get a lay of the land of the system's architecture, using Active Record relationships to drive the diagram. So here's what something like that might look like. It's very hard to read — nobody will be able to read it — but don't worry, I've done the work for us. Most likely, if your company has a very large code base, this diagram is going to be gigantic. We once printed ours out at a prior company, and it was maybe six feet long on roll paper. It may or may not actually be usable; if it's not, you may have to generate one by hand or something. So let's start by defining a few core concepts around domains. The core domain of your business is the thing the business does that makes it unique. At DeLorean, our core domain is transportation. If you were Google, your core domain would be search; if you were Flickr, photo sharing. Then there are supporting domains: areas of the business that play supporting roles to make the core domain happen. Here at DeLorean, we have a team devoted to driver routing — all they do is come up with maps and fancy algorithms to route drivers to the right places. We have a domain for financial transactions, in which we charge users' credit cards and pay the drivers. We have an optimization team that tracks business events and makes recommendations to the rest of the business on how to optimize certain business processes. And we have a customer support team, which manages user tickets and keeps people happy. Now we're going to discover these domains by using the diagram to help us think: we look for clusters on the diagram, and we might discover a few domains we haven't thought about. So take a look here — there may or may not be clusters that pop out at you. I've done the work for us here, but as a team you might get together and make this a group exercise. Most likely it will not be as clean as I've made it look here, but for the sake of illustration, let's go with this. Now we've got a list of domains in our system and a rough idea of which models belong in which domains. Next, let's talk boundaries, because boundaries are an important concept that will help us divide up our systems. In a Rails or Ruby app, we might have the boundary of a class: a class is a definition for a certain concept, concrete in code, and meant to be a single boundary around one concept. A module can be a boundary around a collection of concepts — simply a namespace for multiple classes to live in. Gems are another way to package up code that belongs together and ship it around. And finally, things like Rails engines, Rails applications, external applications, and external APIs can also be boundaries within which concepts are contained. So, a bounded context: practically, when I say bounded context today, I simply mean a running software system somewhere in production, or a software system that could be run within your business. But since this is domain-driven design, there's a language component to it.
Linguistically, a bounded context is a part of our domain in which concepts live and are bounded in their applicability. What that means: you might think of it as a playground for concepts to live in, but they're not allowed outside of that playground. I'm going to spend a little time explaining this. The bounded context allows us to have precise language, because it lets us take conflicting, overloaded terms, separate them, and give them their own playgrounds to run around in. Here's an example. We have a Trip class, and to us a trip is simply the thing where a passenger jumps into a car and goes for a ride. There's a time and there's a cost on the trip. However, it's a little overloaded. In the financial transaction world, trip time is the time the vehicle is moving — the folks in our finance department decided we're not going to charge the customer while the car is stopped; I don't know exactly what that means in practice, but bear with me here. Trip time in the routing context is calculated while the passenger is in the car, whether the car is moving or stopped. So you can see that depending on which context of the business you're in — whichever software system is using that concept — there are nuances in the behavior of the same concept. Or what about trip cost? How much money the customer is going to be charged — that's what cost means in finance. But when you go to the routing domain, trip cost is a completely different term: it's some made-up metric for trip efficiency, some sort of scalar score. Those two concepts have similar names but wildly different definitions. So what do we do? If you're like me, what we might have done in the past is simply make the method names a little more specific, close the box, and walk away. But what we've done there is force engineers to understand the nuances of these methods — that one is meant to be used in this context and one in that context. We can fix this instead by introducing two bounded contexts: there could be a trip that belongs to the financial transactions bounded context and another one that belongs to the routing context. But we'll get into that a little bit later.
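A hedged sketch of what those two bounded contexts could look like in code — the method bodies and helpers (segments, rate_per_minute, pickup_at, and so on) are invented for illustration:

```ruby
# The same "trip" concept, separated into two bounded contexts so each
# definition of time and cost lives in its own playground.
module FinancialTransactions
  class Trip
    # Billable time: only while the vehicle is moving.
    def time
      segments.select(&:moving?).sum(&:duration)
    end

    # Cost: the amount of money the customer is charged.
    def cost
      time * rate_per_minute
    end
  end
end

module Routing
  class Trip
    # Trip time: whenever a passenger is in the car, moving or stopped.
    def time
      dropoff_at - pickup_at
    end

    # "Cost" here is a routing-efficiency score, not money.
    def cost
      distance_travelled / ideal_distance
    end
  end
end
```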
And there's some marketing because marketing is targeted to emails to people. And then finally, I'm going to draw out dependencies between these bounded contexts. So I'm connecting these bounded contexts. And then I'm drawing a, I'm writing a U or a D. The U stands for upstream, the D stands for downstream. What that means is the upstream system is a system that is the source of truth for certain types of data. So the upstream system may provide the API, the upstream system may fire the message. Whereas if you're the downstream system, you are consuming or you are dependent on whatever the upstream service provides. And drawing out these directionality dependencies will actually help us understand the lay of the land to understand the relationship our system has with other systems in the world of our business. You might notice a few things. A few things about the context map. You might notice that one bounded context has multiple supporting domains. So this is very intuitive to many of us because we felt the pain of the monolith. That monolith did too much because it was trying to manage the code for all these different domains. Another thing we might notice is that there's multiple bounded contexts that have to support a single domain. So over here we might notice that financial transactions and customer notifications both span several software systems. And that's just kind of a call out to us to make us realize that if we ever have to implement a feature in any of these domains, we're going to have to end up touching a few of these systems. And then in the end, there is an ideal or a suggested outcome that DDD suggests to us that every domain is matched up with its own bounded context. So this might look something like this. This is certainly not a practical architecture for many of us, but if we took it to the extreme, every domain would have its own software system running behind it. You might also imagine this to be maybe the ideal microservice architecture. When I said that, I almost immediately want to take it back. But the idea is that everything is segregated to be highly cohesive within itself. Okay. Now let's get to the actual code. So when we begin our first refactoring step, we only want to change a little bit of things at a time. So what I'm going to do now is I'm going to draw out one domain and I'm going to make it a module. So let's say I'm going to implement a feature somewhere in ride sharing. And while I touch those features, I'm going to actually bring in some of those concepts into my ride sharing domain. So I'm going to start with the model and maybe it's related classes and I'm going to moduleize it. I'm going to introduce a ride sharing module. And then I'm going to have to do the Rails-y things to get the rest of the application understanding that this thing is now namespace. Now I'm going to go move that code into a new folder. I've now made a second level folder called domains. And then within that, I'm going to start a new folder for every single module I've introduced. So over here I've made a ride sharing domain. And I'm basically just dumping in all the code that I've collected from my models, my controllers, my views, or maybe even my services. And the idea is to move all that code that's related together into their own folders. This will temporarily make that ride sharing folder a little messy, but I want to put forth that it's okay in the interim. Now there's something also called aggregate roots. 
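Before digging into aggregate roots, here's roughly what that "namespace it and drop it into a domains folder" move can look like — the paths and the autoload tweak are a common approach for the classic (pre-Zeitwerk) autoloader of the Rails versions the talk implies, not necessarily the speaker's exact setup:

```ruby
# app/domains/ride_sharing/trip.rb — illustrative layout
module RideSharing
  class Trip < ApplicationRecord
    self.table_name = "trips" # keep the existing table while the class moves under a namespace
  end
end

# config/application.rb — teach Rails about the new second-level "domains" folder
module Monorail # hypothetical application name
  class Application < Rails::Application
    config.eager_load_paths += Dir[Rails.root.join("app", "domains", "*")]
  end
end
```

With that in place, `RideSharing::Trip` resolves from `app/domains/ride_sharing/trip.rb`, and the domain's controllers, views, and services can migrate into the same folder over time.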
And the idea behind this is that an aggregate root helps us address the problem of God objects in Active Record. We oftentimes have objects that know too much about the outside world, or the outside world knows too much about our object. So over here we have an Active Record model that might be explicitly bound to other models in the ride sharing domain. And this payment confirmation, as I've illustrated here, has a lot of relationships that may not be necessary. The fact that they're all explicitly defined here makes it difficult to refactor, makes it hard to write tests, and it's just kind of awful to look at. Additionally, outside actors may actually know a lot about the internals of my domain. So I might have a payment flow that's a web UI that calls in and updates all these models. Or I might have an external ETL process that runs nightly and picks and chooses what it wants out of my domain. Or I might have a thing that pushes notifications to drivers and has to reach into my domain. And so the idea of an aggregate root is that I'm going to expose only a single graph of objects to the outside world. I'm going to simplify what I expose so that the outside world has a reliable interface into my data models, and I protect my internal data models from change. So over here I've decided, you know, my trip is going to be the root. And then the aggregate is going to be all these other models that flow out from the trip. The idea here is that this trip will expose everything else, but only through itself. So any time someone makes a direct method call to me, every time someone asks for the trip, I'm only going to expose that aggregate root, through a JSON payload or through an API endpoint. You might have multiple aggregate roots per domain, which is okay. But expose just enough to the outside world such that it makes sense. Now here's a quick thing that we can do. We can build a service object that will provide this aggregate root. The idea is that I'm going to make a service object that creates an aggregate root that is basically a facade. So here's what I'm going to do. I'm going to introduce a thing called FetchTrip. And this FetchTrip essentially wraps what over here I've written as an Active Record query that simply returns passengers and drivers on top of the trip. But alternatively it could be a Ruby Struct or something like that; just something to have data to pass back to the outside world. And now callers which used to be tied to my domain through Active Record relationships will simply call the service object, and the service object will return to them the related data models that they need (there's a rough sketch of this below). Finally, let's talk about event-driven architectures. In the past, we might have had to go somewhere else to do a side effect after we finish processing code in our domain. So over here, when a trip is being created, this code then reaches out and does something in an unrelated domain, which ends up coupling the two domains. So over here you can see that a ride sharing concern has to perform something in the analytics domain as well. What if we flipped the data dependencies? Instead, what we're going to do is publish an event, and then have the other domain subscribe to that event. And so therefore we lower the coupling between our domains. I will now introduce a thing. So I'm essentially introducing a message bus.
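Here's a rough sketch of that FetchTrip facade before we get to the message bus — the query and names are illustrative, and as mentioned it could just as well hand back a plain Struct instead of Active Record objects:

```ruby
# app/domains/ride_sharing/fetch_trip.rb — illustrative sketch of an aggregate-root facade
module RideSharing
  class FetchTrip
    # The single, sanctioned way for the outside world to get at a trip:
    # the root plus just the associations callers are allowed to see.
    def self.call(trip_id)
      Trip.includes(:passengers, :driver).find(trip_id)
    end
  end
end

# A caller in another domain no longer walks Active Record relationships directly:
trip = RideSharing::FetchTrip.call(42)
trip.passengers.map(&:name)
```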
And within this message bus, I'm going to introduce a publisher, and we're going to have subscribers, or handlers, to handle these events. And I'm doing this through a gem called Wisper. Wisper provides publish-subscribe semantics for Ruby applications. So here the domain event publisher simply passes through an event and then calls through Wisper to publish the event. And on the other side — or sorry, here in the original code — instead of reaching into the other domain, I'm just going to fire the event. Now on the other side, every bounded context is going to handle or respond to this event, depending on whether it needs it or not. These handlers will also use things called command objects to actually perform their side effect. So over here I've made a domain event handler. This domain event handler will listen to the trip created event through the definition of a class method called trip_created. And so every time an external publisher publishes trip created, this domain event handler turns around and fires the log trip created command. Here's the glue code in which Wisper is set up to subscribe to events — the event handler subscribing to events from the publisher. And this is an illustration of what the command object looks like. The command object is a simple, lightweight service wrapper around a specific side effect that has to happen. You might also see that other domains need to respond to these events. And so over here we are introducing some extra behavior that is now decoupled from that controller. So in the financial transaction world, we would also do things like creating audit logs, or we deduct gift card amounts, et cetera, et cetera. So this is kind of the end result of our new architecture, where we introduce a message bus. It should also be noted that this is technically not a message bus in the asynchronous sense yet. Wisper just has some nice wrappers that make it look like it's async, but it's actually still synchronous within the web request — until you introduce the ActiveJob wrapper, and now your handlers will be queued up as ActiveJobs. And so this allows you to make your side effects asynchronous with your worker queue framework of choice, like Sidekiq or Resque. And then here's the glue code to make that happen with ActiveJob. Finally, you can also introduce a real message queue. So if you actually want to decouple your systems from other external systems, you might introduce a thing like RabbitMQ. And there are a few nice gems that let you do that. Stitch Fix has a nice one called Pwwka. There's also a gem called Sneakers. Let's talk about a couple of more advanced topics. The first one is: what happens when we want to share models between domains? Let's say I have a system that's off here and I have a system here, but they actually still need to access the same data. Well, there's a concept called a shared kernel, in which we simply make it okay to ship around a certain shared package. And so what I suggest in that case is we simply namespace our models under that shared namespace. And this could actually become a gem if you have to ship it to external libraries or not. But I would also say that if you find yourself sharing a lot, you're maybe not thinking about your domain clearly enough, because there might be an actual thing that you want to do, which I'm going to talk about next: when you have one model that actually needs to belong in two domains.
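Here's a condensed sketch of that Wisper-based flow — publisher, handler, command, and glue — before we get to splitting shared concepts. Class names are invented; the Wisper calls (`include Wisper::Publisher`, `broadcast`, `Wisper.subscribe`) follow the gem's documented API, and the ActiveJob wrapper mentioned above would be layered on top of this:

```ruby
# Illustrative sketch; class names are invented.
require "wisper"

module RideSharing
  class TripCreator
    include Wisper::Publisher

    def call(params)
      trip = Trip.create!(params)
      broadcast(:trip_created, trip.id) # fire the event instead of reaching into analytics
      trip
    end
  end
end

module Analytics
  class DomainEventHandler
    # Wisper invokes a method named after the event on each subscriber.
    def self.trip_created(trip_id)
      LogTripCreated.call(trip_id) # a small command object that performs the side effect
    end
  end

  class LogTripCreated
    def self.call(trip_id)
      Rails.logger.info("trip #{trip_id} created") # stand-in for the real side effect
    end
  end
end

# config/initializers/domain_events.rb — the glue code
Wisper.subscribe(Analytics::DomainEventHandler)
```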
Sometimes you have a concept that just has to be broken up. And how can you get these concepts codified within their respective domains? Now, there's something called an anti-corruption layer, which I'm going to introduce, which is simply an adapter that maps a concept from the outside world into a concept that we can use within our domain. So here's an example. So remember that trip that I introduced earlier on? Well, we know that it actually has the semantics for two domains. So what if I introduced a nice, very expressive domain model for the routing context here? So over here, that's a really beautiful domain model that has language that really reads and flows nicely and matches the business domain. And what I'm going to do is I'm going to make an adapter that simply maps us from that legacy data model into our internal data model. So there's simply a mapping function that just maps things together, and then it converts and instantiates our pure domain model internal. And then we're going to add a repository in which we simply grab the thing from the outside, and then we instantiate an adapter which converts us to the internal model. And now internal domain code is going to call the repository instead of directly reaching for the outside world. Let's talk about what happens next. So one would imagine that you can apply this incrementally. So the beauty of this is that you can apply one or a few or maybe all of the refactoring patterns, and you might move things first into incremental domain-oriented folders. So you might simply pick and choose, move things and drop them into folders. Next, what you could do is you could turn those folders into Rails engines. So they're actually self-contained applications. And then next, you can move these actually into their own Rails services. And then finally, you can move them to whatever language or whatever you want to use of your choice. And the idea here is that actually as you continue decoupling these things, you allow your systems and your teams to scale because they're going to actually be able to run in isolation from each other slowly but surely. Okay, I want to throw up a few caveats because this happens to be very true. DDD will work very well for you if, one, you have a complex domain. You need the linguistic precision. You find yourself really caught up or tripping up over words or names or meanings. Or maybe your business is just very complicated. You have a lot of regulation in your domain or something. There's just a lot of nuances to your domain. Second of all, you might work in a very large team. You just might have a massive team working on a massive code base. So this might actually help you isolate your systems well. Third, you're open to experimentation. You have buy-in from your product owner. And fourth, your whole team is willing to try it out, including other teams. So if you're the lone wolf that wants to kind of slide this under the door, this is not going to work. You need to have buy-in. You need to have agreement that, hey, we're going to try out this new architecture and we're going to try out a few of these refactoring steps. How does it feel? If it doesn't feel good, maybe you need to have second thoughts. Or if somebody is really against it, this may not work. This is in response to a presentation I gave earlier. There was a conversation on Twitter and somebody said, hey, you're just doing Java. And then I had this thought, oh my God, are we becoming the thing we hate? Just kidding. 
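(Stepping back to the anti-corruption layer for a moment, here's a bare-bones sketch of the adapter-plus-repository shape described above — every name is made up for illustration, and the legacy `::Trip` stands in for whatever Active Record model you're wrapping.)

```ruby
# Illustrative only.
module Routing
  RouteLeg = Struct.new(:origin, :destination, :duration_minutes)

  # A pure domain model with the routing meaning of "time".
  class Journey
    attr_reader :legs

    def initialize(legs)
      @legs = legs
    end

    def total_time
      legs.sum(&:duration_minutes)
    end
  end

  # Anti-corruption layer: maps the legacy record into our internal model.
  class TripAdapter
    def self.to_journey(legacy_trip)
      legs = legacy_trip.segments.map do |segment|
        RouteLeg.new(segment.start_location, segment.end_location, segment.minutes)
      end
      Journey.new(legs)
    end
  end

  # Repository: domain code asks here and never touches the legacy model directly.
  class JourneyRepository
    def self.find(trip_id)
      TripAdapter.to_journey(::Trip.find(trip_id))
    end
  end
end
```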
I actually have a lot of love for Java. But I would say that in Ruby and Rails, we oftentimes have this search, we're on this quest for simplicity. But that may not necessarily be the best thing in every case. Because in domains where there's some necessary and essential complexity, maybe what we need to be searching for is clarity. So with that in mind, I want to share with you some things to be on the watch for if it doesn't feel right. When do you stop? So hopefully you've been applying these incremental refactorings. But at a certain point, you might feel like, wait, I feel like I'm over designing. I feel like there's a little too much going on here. Or it feels like it's kind of silly just to make this thing do that thing and then plug this thing into here. Second of all, you might feel like maintaining these abstractions is kind of a burden. You're just like doing things just for the sake of following the patterns of the book. It actually might be okay to simplify things instead of creating a pure domain, instead of creating a service that does this to an adapter that does this to this other repository, you might be able to get away by smashing those three things together into one object and then calling it a day. It's okay. And then finally, if other teams are silently grumbling or they're not, they're very obviously grumbling about it, then maybe it's time to stop and have a conversation. It's okay. You don't have to follow this by the book once again. The beauty of this once again is that we incrementally refactor, we incrementally apply things. So in summary, we discovered the domains in our business and we developed a shared language with our business. We built a context map and so we saw some strategic insights and we saw the lay of the land. And then finally, we found some refactoring patterns and some organization strategies to help us organize our code bases to hopefully get us to that next step so we can build out systems that will really scale. And with that, I want to thank RailsConf for inviting me here. Here's all my contact information. If you'd like to talk, I'd love to talk with you afterwards down here. Thank you very much.
Help! Despite following refactoring patterns by the book, your aging codebase is messier than ever. If only you had a key architectural insight to cut through the noise. Today, we'll move beyond prescriptive recipes and learn how to run a Context Mapping exercise. This strategic design tool helps you discover domain-specific system boundaries, leading to highly-cohesive and loosely-coupled outcomes. With code samples from real production code, we'll look at a domain-oriented approach to organizing code in a Rails codebase, applying incremental refactoring steps to build stable, lasting systems!
10.5446/31237 (DOI)
Hey, good morning everyone. My name's Dave, this is Dennis. Before RailsConf, show of hands, who knew what Tuft & Needle was? Anybody? Nice, okay. For those that don't know, Tuft & Needle is a mattress company. We have a presence here in Phoenix and at RailsConf this year because our office is here. Tuft & Needle has some interesting elements about it, and one of them is our culture of learning, and that's what we wanted to talk to you about today. What does it mean to cultivate learning and why is it important? It's sort of easy to say that growth and learning are important tenets, but what does that actually mean? How do you actually carry that out in your business? And how do you take that to your team as an investment in them and their sort of collective and individual brain trusts, and why do it, right? So when we were thinking about this and thinking about the kind of culture we wanted to create at our own business, three things sort of came to mind: it gives people purpose, it allows them autonomy, and it provides them with growth, and all of that kind of leads back to happiness, right? And happiness is kind of hard to quantify, but we sort of think of it in ways like this. If you're working on a problem and you get stuck, that's a bad feeling, right? And you want to feel like you have a place to go to get unstuck. We feel like underscoring this culture of learning helps people realize that it's okay to ask for help. As a software developer, as a designer, you can't know everything, right? If you're here, if you're working with us, we already have confidence in you. We've already assessed that you have the skills that you need to work with us. So from there, it's really about empowering you to know more and to share more with the people that you work with. So as I said, we're David and Dennis, we're from Tuft & Needle. A little bit about me. This is my newest daughter, Rory. She's nine months. I have four kids. Y'all pray for me. I've been actually working with Rails for around ten years. I found it sort of right at the beginning. And my story with Rails and as a programmer, as I thought about it, is very relevant to this talk. I was on my own for a long time. I had done some work with the military, and then I was basically doing freelance consulting. And I wasn't really looking for a job. And I stumbled upon some guys at this company called Hashrocket at a coffee shop that I would go to to work. And I think at the time, I felt like I was doing okay, but I didn't really know that I needed something more. I was a Python developer, I was doing some Ruby stuff, things were fine. And I had this chance encounter, and they invited me down to the office through a sort of weird situation that involved guitar lessons, of all things. But anyway. And to be honest, I don't think I was really qualified to work there at the time. You know, I had some Rails experience, and I knew Ruby and all of that, but I think that Hashrocket was sort of the first place where I encountered this culture. And they took a chance on me, right? Like they saw something in me that they thought that they could cultivate, and they took a chance on me and did that. So, you know, ten years later, I'm still working with Rails full-time at Tuft & Needle and leading the team there now. So like Dave said, my name is Dennis. I know that picture is incredibly Asian, but I have a relatively similar kind of experience to Dave. I actually met Dave at Hashrocket.
As a designer, it's a little weird that I more relate to people in the technology field and the traditional, like, like band posters and CDs and all that type of design. But it's always been an interesting culture to kind of be a part of. I've been designing for a little over 16 years. I kind of started off like a lot of people, like self-taught. And one of the reasons that, like, creating culture learning and like trying to, you know, make that a core part of our culture is important to me is that a lot of you, like, for a lot of us, the pathway to where we are right now is not very straight or clear, right? There's probably a lot of you that had a history degree or something, or an English degree, or something completely off. You taught yourself or you went to school for something completely different. But without those straight and clear paths, it's much more important to have people along the way that will kind of either guide you, give you a break, and just kind of take you here in the wing, whether it's for one project for a day or years. Those people are pretty crucial and they've helped me get to the point where I am in my career. And so part of why, like, doing this is kind of giving back and giving other people opportunities to grow and hopefully, you know, surpass and go much farther than me. So one of the ways that we kind of handle, like, growth at our company is we use, like, two different methods which are mastery programs and then apprenticeships. So I'll let Dave kind of talk a little bit about the apprenticeships. So apprenticeships, I don't think that word is hopefully something that's new to a lot of people in our community. You know, it's something that we pull obviously from other industries. But I think what's important for us is that it sort of starts with the language, right? So across the board, whether you are a software developer or you work in our customer service area or, you know, at the retail store or whatever, we use this language. So it's not, you know, you're a junior developer, you're a senior developer, like, we don't put people in those buckets. It's, you know, you're apprentice, you're a journeyman, you're a master, that kind of stuff. Side note, and this is a personal pet peeve, I don't think you can call yourself a master. I think that's kind of something that somebody says about you. And I know, like, for me, that's, you know, it speaks to, like, you know, the life lived instead of the life attained. And that helps me think about, like, you know, continuous growth and learning and how to prioritize myself instead of, like, thinking about how I get to the next level and, you know, how I get to the next compensation bracket and that kind of stuff, you know. It helps me focus on learning and growth and personal growth and that affects my happiness, right? Because if I'm not worried about getting to that next ladder and, you know, that next step on the ladder, you know, it's just one less thing to worry about, right? So finding talent is hard. I read a statistic recently that last year there were over 223,000 vacant jobs in our industry. And there are not enough people coming out of computer science programs to fill those spaces, right? So hiring is always a problem. I mean, you see it here at RailsConf every year. That's why, like, people are, you know, sponsoring and, you know, trying to get your attention and trying to get you to come work with them. But I think there's other ways. 
And one of them is to be on the lookout for people that are right in front of you, right? People that you're already in your organization. They already have the values that you have. Maybe they don't have that exact skill set, but I think if you pay attention and you listen, you'll find people that you can cultivate and bring into other departments, right? Bring into software development, bring into design. So, like I said, you know, this is something that I don't know how widespread it is, but it's something like we really care about. You know, I think going back to previous jobs that I've been in, I wish there were people that were sort of paying attention to that. Like, I'm just starting out, you know, maybe I'm just doing this thing over here. And, you know, if there was somebody there that, like, could just see that, you know, it could be a spark for somebody, right? So, guidelines. How do we actually do it? How do we approach our apprenticeship programs? So this is something that's, like, constantly changing and something we're thinking about and working on every time and iterating on, because that's, you know, kind of what we do as software developers. And it's also what we do at Tuff and Needle with our products. We meet quarterly, usually, and talk about their strengths and weaknesses. We set milestones. We think that goals are really important, especially at the beginning, to get people on the right track. We set the right expectations from the beginning, which I think you really need to consider, because if you set unrealistic expectations, it can really throw the whole thing off. People can get nervous and they want to quit and, you know, all that stuff. And then we actually compensate people based on this. So when they meet their goals and they meet their milestones, we pay them more, right? And we think that that makes sense, because, you know, the more you learn, the more value, you know, you can add. And we want to recognize that. So this is Tommy. We have a large customer experience group at Tuff and Needle. We have something around 150 employees. We have about 10 developers, you know, some designers, and then a lot of the people are in this group, customer experience, right? And that's where Tommy was. Tommy has a cool background. He was a schoolteacher before he came to Tuff and Needle, middle school, which I imagine is like crazy stressful and hard. He's shaking his head no right now. And he was kind of interested in technology. And, you know, after a couple years of being a teacher, he decided that it just wasn't for him. He needed to do something else. And so he was kind of exploring HTML and things like that. And then he got a job at Tuff and Needle in the customer experience department, you know, when he was hanging out in retail stores and helping customers and, you know, helping people, you know, email and all that kind of stuff that they do. And he was working on this, you know, this stuff, learning, you know, launch school stuff, that kind of thing on his own. And he, you know, kind of started asking us questions. And he came to RailsConf with us last year. And at some point it was like, okay, like, this is happening. Like, we need to take this guy under our wing. We need to develop him. Like, he's obviously got the aptitude for it. Super smart. Already knows the business. Let's, you know, let's make him part of the development team. So that's what we did. We actually have one of the guys that writes a course for launch school, the JavaScript course. 
His name is Shane Riley and he works at Tuff and Needle. And we are super lucky to have him. I think he's like top tier front-end developer, crazy good. But, you know, Tommy had access to Shane and we have his material to use. So I think that's, it's awesome for Tommy. Obviously it's awesome for us. But like, I think, you know, those kinds of formal learning experiences are good. And you should encourage your employees to, and your coworkers to use them and pay for them. And, you know, take care of that so that people can have that training. Like I said, this is sort of a constant thing that we are trying to iterate on and we're making mistakes and failures with. Some of the lessons that we're learning are, you know, it's really important to get people and apprenticeships on to real-world projects as soon as possible. We think it, you know, it helps build confidence. It gives them ways to apply what they're learning in real ways. You know, we've all taken those tutorials and, you know, you get to the end. It's like you've made this thing, but how do you actually translate that into something real in the world? It stretches them and it creates structure and, you know, and helps them transition. So yeah, Tommy is part of the front-end team and he's like a full-fledged member now and he's awesome. And he built, he built a project. He released it just like last week, is that right? First like big project. It's awesome. And he did a great job and I know he feels like super accomplished. So I'm going to talk from the design side of things and Rachel actually came from a very similar experience too. She was part of our customer experience team. She was, she actually didn't have any formal training in design. She knew of like the programs and could kind of do stuff, but like she didn't go to school for, you know, for a graphic design or anything kind of like similar to that. I think it was design history was her degree. So like one of the big things that we kind of learned with her is that like a lot of students who are self-taught tend to do like tutorials or do things where it's like, you know, you use this one, you learn one technique. That technique allows you to do one thing, right? And then that's all you really kind of know. And then, you know, you don't really understand a lot of the fundamentals where it's like typography or grid or pacing or rhythm or all that stuff is kind of like left alone. And hopefully you get it on your own or, you know, maybe you don't. It's kind of like akin to, I don't know if it's a perfect metaphor. It's like if someone just knew how to use Rails alone but couldn't write any Ruby from scratch at all. So that's basically the kind of situation that we should do this in. She had a couple teachers that were helping her out, but not on a consistent level and a lot of them were like high level like, hey, get this flyer done versus like, hey, I'm going to help you like learn how to lay out a grid and how that's like helpful for future projects. So a lot of them, and we didn't get it just like, with like, you know, Tommy, I don't think we really got it perfect the first get go. I think we tried to kind of like load her with a bunch of work and tried to just kind of have this pretty brutal routine where we like set up a curriculum and every week she would basically meet with me and go over stuff. But I think it's kind of the sign of a poor teacher if you blame your student for like, they're like, for them not really getting to where they need to go. 
So kind of took a second to reassess the situation, pared down the curriculum to a point where it was more concentrating on fundamentals and like the basics of visual design before like, so she was trying to learn like front encoding and design and then stuff about web and so it was just like too many things at once. So I think one of the biggest lessons learned is just kind of setting realistic and clear goals so that you're not just throwing someone in water and hoping that they kind of like make it work. And then setting expectations and timelines is also a big deal to you. Like for her, I think there was a lot of looseness around, you know, you're working on this project but never saying, hey, this is, you know, you need to finish this by like two weeks from now. Like, whether it's done or not, like, come to a point where you kind of get there because projects before they were given to her often like just open timelines like they weren't very important. So they would last for like four to five months or something and like there would be no clear endpoint. And the thing about design and I'm sure the same thing with development is that you can pretty much tinker on anything forever until someone gives you like a stopping point. So it's really important to kind of get that idea to younger designers to make sure that they understand that you need to, you know, done is better than perfect if you will. So the next slide kind of shows like the progress that she kind of made. To designers, this was a big deal because typography is like a very big tell of like how mature you are as a designer. And from where she started to now, although albeit these be very simple pieces, her ability to lay out stuff and like do it on her own without much guidance is dramatically improved and like she's become a very, you know, a very important member of a design team to kind of help us accomplish all the things that we need to do. So like I said, there's two parts to kind of what we do a tough needle for the culture of learning. And the second part is mastery programs. I'll let kind of Dave go from here. So back to happiness, you know, continuous growth. We think it's key to increasing happiness. And, you know, it works for everybody. It works for the person that's learning. It works for the person that's teaching. You know, if you've ever tried to teach anything, you realize quickly that you don't know anything. And it's just a really great way to sort of solidify your thoughts and knowledge about a subject. So we have this thing called mastery programs at Tough to Needle. And it's, again, it's across the board. It's not just in software development and design. We use it in every aspect of the business. And basically how it works is it's a little different from team to team, but there's sort of some guidelines across the board. You know, again, we meet with everybody every quarter and we sort of try to figure out what their individual learning path looks like. But we also try to do things as a group. So we've tried a few different things, but sort of what we're doing right now is carving out a specific day and time during the week that we can, you know, structure as learning time. And I think this is really important. You have to be intentional about it. If it's always ad hoc and just kind of whenever and loosey-goosey, you're never going to get to it. So we block out that part of the calendar every single week. And we use it, you know, to learn and to grow. Design team does it on Wednesdays. 
We do it Thursdays, usually around lunch. But we kind of talk every week about whether or not that's still working. You know, we're open to change it if we need to. Mob programming. Anybody done this? Anybody? The mob mind. This is really fun for us. I love it personally. So what we do is we're mostly remote team, so we get together, you know, on a hangout. And we use, we've done a couple of things. We mostly use them mostly. So we try to get on teammate or something like that, pull up a terminal, and we work on a problem together. And we all participate. And we, you know, even if we try to make it kind of interdisciplinary so that, like, you know, folks that are writing JavaScript all day can spend time with folks that are, you know, writing Ruby all day. You know, we can work on problems together. And this, it leads to some really interesting things. Like, I think when you're, when you're working individually, you don't, you know, you don't see all the perspectives of the team. I think when you get together and work on something, it really enriches yourself and it enriches the team because you're passing little tidbits. I mean, even little things like, what was that Vim command you just pressed? How did you do that? You know, that's, that can be really, really valuable time saver for somebody. Another thing we do is book club. I think, you know, this is something that I've done at other jobs. I like it. I always want to read, but kind of never make time to do it at home because I have four kids. But at work, you know, we set aside time to get together and talk about a book that we choose. And, you know, it helps us to sharpen our technical skills. Sometimes we get kind of tired of that and we do a soft skill, kind of a book, break it up. But yeah, it's fun. So, similar to those things, the design team does stuff that's kind of like a parallel to that. One of the things that we do is do critiques. So, it's kind of borrowed a little bit from design school where, I don't know how familiar some of you guys are from it. So, there was usually like 20 students in class. And you'd basically work all night and day on a project for about two weeks. And then you'd present it in front of all these people. And then they basically criticize you for about 20 minutes. And it's about, it's painful and terrifying as you imagine. A lot of people were assholes in college. But we try to do something similar without the other aspect of just like people's egos kind of getting away. And a lot of it is geared towards some kind of comment or feedback that drives your design one step further, right? So, constructive comments, something like, have you tried this? This isn't as clear as I think you're making out to be. Or like, are you achieving the goal that you're trying to do with this layout or this interaction? All those things hopefully give them ideas on where to go from there. Because the worst thing that would happen for a designer and for anybody really is to be like, I don't know what to do next. I'm stuck. It feels like it's wrong. I don't know how to fix it. So, having the entire team kind of share that experience, present their own work, help other people. Not only makes our designs better, but also makes us better designers because we're sharing knowledge. So like the technique that I suggested someone go check out to help with this particular, you know, site might help someone on another site in the future. 
So all those things are really, really important for shared learning because like you don't, like the group needs to get better as a group. Or you're just going to have a lot of issues where only one person can do the task versus like everybody can kind of jump in and chip in wherever is needed. One other kind of program that is kind of related to the, like some of the things we spoke to before is the quarterly skill plans. We actually treat these like first class citizens like projects. So there's a very easy tendency to be like, all right, you know, someone write in a notebook and then like, good luck. We'll talk a little bit later. But what we try and do is basically like track these things in, we use a program called Asana. Some people love it, some people hate it. But it's, we basically put milestones. We say deadlines of when you're going to basically accomplish these specific goals. And then the biggest thing is setting realistic goals. So like an example of a bad goal would be like, I want to get better at JavaScript. Like, cool, everybody does. Like that's great. But like saying like, I want to write an API note or something. So that's much more tangible and it's much more attainable. And it's much more realistic to say that versus like, I'm going to master an entire aspect of something. So treating those like, and having them like show up alongside, like get this project out with, I need to learn this this week. It's been like really important to make sure that it's baked into everybody's everyday life. And it's made a huge difference, I think, for me compared to other environments where we try to do the same thing, but didn't have that level of, that level of like checks and balances, if you will. So again, the one thing we want to talk about, and I think another way to really kind of make this much more, much more ingrained in the culture and not like it's just like one group that's kind of like doing it, is all of our skills are tied to compensation structure. So like, if you put together a skill plan in a quarter and then you reach that plan, we basically will give you a raise based on how many of those things you've accomplished. Sorry, did you have a question? Yeah. Yeah, so I'll use one of mine, for example, like, well, actually no, let me use another one. One was a designer who was like, I really want to get better at like writing, right? Like, because like, I don't know how many times you know, but like, writing is design, is user interface. Like, that is a very core skill that you need. So they set out a project where like, all right, I'm going to write, I'm going to write specific like ads or like copy for a page and then like, I'm going to do the specific like, like a project where I help generate like the layout, which in the, like the labels and elements and stuff like that. So we would go through and then say it and then like, they would basically put a deadline like, all right, this project will be completed by the first month, maybe two weeks after that and whatever. And then by the time I meet with them again, I'll be like, all right, how did it go? How much of this stuff worked? And then we'd be like, all right, you know, since you did this, you accomplish a goal, you'll get like a 2% raise this quarter. And then the good news about the way that we do the structure of the raises is that we don't wait until the end of the year to give you like, like a raise because you're, you're already like contributing more because you've learned that skill. 
So do you exercise or do you find it a project? Yes. Yes. And it's like, it's a project and I believe that it's a non-credit. It's more like to do lists, not necessarily like how many hours we're not like doing it to that level, but just like kind of judging the success of the project and what they learned. So that's a particular project that's in for that, that competition. Yeah, for that skill or for that person. And for that company, for that competition, for that competition, what are the things that you learned from that? Well, that's an interesting point too. Like at first we did things where people were like, I'm going to learn four skills this quarter and like it just was not tenable. Like we really focused it down to like, hey, you need to concentrate on just like one thing. What was the tenable? Tenable. Yeah. Most of the time people would only hit one or two. Like and then like management wise, it's hard to like, if you have our team has started to grow and you're trying to manage like that many aspects of people's growth. It's kind of tough. I mean, there has to be somewhat of a business case. It can't be like, I really like bedazzling stuff. Like that's not like, that's not like very useful to the business probably. Like, like, so for instance, I mean, a more practical one, like I really like, I was into like learning more about VR stuff and I want to do more stuff. Like we couldn't really justify that like, hey, like let me go buy this $2,000 rig and get better at this to the company because of what our goals are. But you know, like I focused on in my personal case was like learning more JavaScript, which is much more prayer. But copywriting is a little bit more tangential to design too. And some of the guys have gotten better at like, like woodworking, which is good for some of our product design members and other stuff. So there is a certain level judgment of like, it's not just whatever you want to learn, but it is kind of targeted towards your job. Oh, no, not necessarily. Not necessarily. We don't think it has to be like the deliverable could be a made up project that. I think it depends. I think it depends on on the goal. I think also to, you know, we're talking a lot about software and design, but the other parts of our business do this. So like a real world example for one of those is if you work in customer experience and you want to understand more about the operation side of the business. So we have, you know, different shipping situations, dark store FedEx, you know, all this other stuff. We might say that that leader might set a goal for that person to learn about those things and then deliver them, you know, some write up about what they learn, you know, that kind of thing. So for instance, like some of our customer experience people would learn how to use SQL or write SQL to write queries to use to track things so they could build like their own dashboards to to kind of monitor some aspects of their particular daily life. So it definitely to your point, it doesn't always like be that direct and like it's a real life project that like the business needs. But sometimes like imagine it not imaginary ones, but like ones that are just like a little bit more fun stretch the person a little bit more. So it's kind of like a like a value call on both the person and like one of us to kind of like say, hey, like which one do you think is going to grow you more or like basically get you farther along that path. Yeah. That's a good transition. I mean, you know, that's it. 
That's the whole thing. Like, you know, we think it's an important investment. We think it's great for the business. It's great for the people. Yeah. And I think, sorry. The last bit is like one of the like one of our founders is a developer and that's where a lot of this stuff kind of like comes from like it was imbued in the culture from the beginning. And it was very important for him to try and create an environment where oftentimes a lot of us shift from job to job right we go every year it's a new job or something. And that's the only way you can kind of grow and we're kind of used to but I think his vision and a lot of us and why we're attracted to this company is that if you can provide an environment where people are constantly growing and you know, obviously hit all the other marks of like, you know, their needs, whether it's compensation that hopefully they stay there for a long time 10 years 15 years. I mean, the idea is like you would be here for a lifetime that you we would be able to provide you with the opportunities and maybe that's slightly naive or like slightly optimistic. But we would prefer to move with that intention and that vision versus like the pessimistic version where we just basically don't invest in our people and at all. Basically, we figure they're going to leave whenever and just kind of like make it sort of a not you know, just like a bad place to work. Yeah, I mean, I think if you think about it, job hopping is one of the symptoms of it when the reasons why you do it is because you're bored. You know, if we can create a we can create an environment where people don't really get bored, they always feel challenged and we have a better chance of them wanting to stick around. I know that's true for me. Sorry, we can open up to questions too. Thank you everybody. Thank you everybody.
Tuft & Needle is a bootstrapped, Phoenix-based company that pioneered the disruption of the mattress industry using a software startup's mindset when it was founded in 2012, and has grown to over $100 million in annual revenue. A commitment to skill acquisition has led to a happier and more productive team, and is core to the company's success. In this session, learn how to cultivate a culture of continuous learning and skill acquisition through apprenticeships and group learning sessions.
10.5446/31240 (DOI)
Okay, we're about to begin. It's good to see that the room is not, you know, overbooked, so no one's getting dragged out of the room. Welcome to Engine Yard's sponsored talk: Deep Dive into Docker Containers for Rails Developers. That's a mouthful, so let's take a look at the title. Deep dive — this is me and my wife scuba diving in the Philippines. We're advanced open water certified, and it's beautiful underwater, and when you go deeper, it's actually even more beautiful. So we're going to talk about Docker containers. Who among you has used Docker before? Good, it's more than half. But who among you has used a container, but not Docker? Okay, we got one, two. Okay, so this is not an introduction-to-Docker talk, but we will look into container internals. What are containers made of? And then, I have to be specific — this is for Rails developers — because when they announced RailsConf would be in Phoenix, I was just thinking, oh no, a lot of Phoenix jokes, right? So you've probably heard a lot of these jokes already. There's Evan Phoenix, one of the organizers, and, you know, the Phoenix framework. Some people have moved on to other languages or frameworks, and that's fine, but we're here to say that we use Rails, and a lot of people still do. This talk is sponsored by Engine Yard, where I work, and we're celebrating our 10 years this year, so please join us tonight — there will be a party at 7 p.m. — and we also have a booth tomorrow, on Thursday. Engine Yard is a great place to run your Rails applications, where you can easily scale from one to hundreds of servers. We have 10 years of Ruby and Rails optimization on top of AWS, and top-notch 24/7 support. But let's get into the talk. These are the topics we're going to cover: the reasons for using containers, what containers are made of, and how you run containers in production. There are a lot of uses for containers, but here we're going to focus specifically on deploying your Rails app in a container. I remember when I started Ruby back in 2006 — or a few years after — one of the most popular deployment tools back then was Capistrano, and it probably still is. In some shape or form, we still use the Capistrano way of doing things at Engine Yard. We have deployed a lot of Rails applications using Capistrano — big customers, big applications — and it works, and it still works now. With Capistrano, you SSH into a server; if you're using Git, you're going to do a git clone or git pull, install the gems, pre-compile assets, and maybe run migrations. And it's fine, it works — we have big apps using that approach. But sometimes, when GitHub goes down, no one is able to deploy, right? This is not a knock on GitHub; we use them, it's a great service. But when they go down, a lot of people notice, because a lot of people use them. So we actually get a lot of tickets when GitHub goes down — nothing's wrong with the Engine Yard platform, but when GitHub goes down, a lot of our customers can't deploy. It's only a small reason, though, why you should use a container. But let's take a look at what's involved in using a container.
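To make "putting the app in a container" concrete, here's a generic, simplified Dockerfile sketch — it's not Engine Yard's or the speaker's actual file, and the base image, packages, and paths are placeholders:

```dockerfile
# Illustrative Dockerfile for a Rails app; versions and packages are placeholders.
FROM ruby:2.4

# OS packages the app needs (a JS runtime, client libraries for your database, etc.)
RUN apt-get update -qq && apt-get install -y build-essential nodejs

WORKDIR /usr/src/app

# Install gems first so this layer is cached between code-only changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the application code and precompile assets
COPY . .
RUN bundle exec rake assets:precompile

EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```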
Here you would see that you still need to install Ruby, install the packages, copy your code, install the gems, pre-compile assets. It's very similar to Capistrano, right? So it's not a silver bullet that removes all these steps. You're still doing them, but now you're putting the result in a container. And once you have that container, your server only needs to know how to run that container. It doesn't even know what's inside it; it just runs that container. And you could run it alongside other containers. It could be another Rails app — if you have another one, you could run it on the same server. Or it could even be something like Redis or a database, although, you know, our DBA is here and he wouldn't like that. You shouldn't run your database in a container. But it is possible, right? Whatever you put inside it, as long as your host knows how to run it, it should work. Then you could also have multiple servers. There's no real-world analogy to this, but you could duplicate a container easily and run it on multiple servers. So now when you try to scale — and you know that Rails can scale, right? — you just run a lot of different servers, and on those servers you run your containers. Containers start faster, and you'll be able to easily run any code that you could put in a container, which makes the whole process faster. Your developers will be able to release code faster in staging or in production, and you get to focus on your business problems. But what are containers? There are a few descriptions that I keep hearing when people discuss containers. The first is "lightweight VMs". And a lot of people don't like this description, because technically a container is not a virtual machine. When you have a virtual machine, you could have a host that's running Linux, and you could have a virtual machine that is a Windows box, right? You could have a guest that is different from your host. But with containers, when you have a Linux host, you could only have Linux containers. There are Windows containers, but we're not going to discuss them; that's outside the scope of the talk. So we're specifically looking at Linux containers. But I like the lightweight-VM description because of what I described earlier: in a container, you put everything in it. In fact, you need to put Ruby in, you need to put your packages in — if you need MySQL client libraries, you need to put those inside your container. So for me, it's a good description. So: lightweight VM. Next is "chroot on steroids". With chroot, if you have a directory, you could make that your new root. You would still be using the same Linux kernel, so technically it's one OS. But if you have different subdirectories and you change your root into them, you could do a lot of interesting things. Let's take a look at this. So here, I have an Ubuntu directory. And let me just pause that. You could see that the directories in that Ubuntu 17.04 tree are similar to what you can see in your Linux box, but here they're just subdirectories. And you could run chroot. So let's just run it again. You have an Ubuntu directory, you chroot into that, and now you're inside a different OS — you think you're inside 17.04. So I'll check /RailsConf. It doesn't exist. It exists on the host, but not in the new root. So here, I also have a CentOS 7 subdirectory.
And I could chroot into that as well. And you would see that it's in its own... you could see the version of the OS. But since it's a CentOS root, I now have yum inside it. So I have an Ubuntu box, but I have yum running. It all shares the same Linux kernel, but you could see that you could run whatever distro you want. So here at the end, I just have another directory, Debian, and you could see the version. So now I have one Ubuntu host — I think it's a 16.04 LTS — but I've shown you three other distros that I could run by using chroot. And chroot is one of the things that a container uses. You have file system isolation, where when you're inside it, you can't see anything outside of it. However, it's not built for isolation. You could not see files outside, but you could see other processes, as I will show you later on. This is a very old technology, released in 1982. And it was used mainly for testing, or for building software where you don't want to pull in any stray dependencies. So it's like having a pristine OS inside your existing OS. So the third description is namespaces and cgroups. And this is the meat of the topic: what containers really are are namespaces and cgroups. These are kernel features. When your processes run inside a namespace, they think they're on their own system. They don't see the host; they see their own system. So a container — you could look at it as a different root, a namespace, and a cgroup. There are tools to create namespaces, but we'll look first at the higher-level tools that create them. And these are the things that people are familiar with; we call them the container runtimes. LXC, for example — it has been popular, and it existed before Docker. Docker at the beginning was using LXC to create a container, so it was just a wrapper. For sure it provides a lot of different advantages, but at the beginning it was using LXC. Then you also have rkt (Rocket) and systemd-nspawn. But in the end, you're just creating namespaces and cgroups. None of these tools added new features to the kernel; they are using namespaces and cgroups. So when you're in a container, there's an illusion to the user that you are on a different OS. As I showed you earlier, the process thinks it's in its own OS. That is the goal for containers. So here we'll see the chroot again. I'm using Ubuntu 17.04. And you would see that inside it, I could see all the different processes that are running — I just cleared the screen very quickly — but I could grep for top. I could see that process inside the new root, and I could kill it. So if someone on the host was running top, and I'm inside the new root and I killed it... well, I'm sorry to that person running top. So what namespaces do is provide you that isolation. First, let's look at the PID namespace. I'm going to introduce a program called unshare that creates namespaces, and I'm going to combine that with chroot. So I'm going to say: unshare, make a new PID namespace, chroot into Ubuntu 17.04 — I'm using the same tree — and mount the proc file system. And after I run ps, you would see that I only see the bash process and the ps process. So now inside it, it thinks it's PID number one. But in fact, it's not process number one on the host system; it's something else.
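The PID-namespace demo he's describing boils down to something like the following — a reconstruction rather than his literal commands, with the rootfs path as a placeholder:

```sh
# Reconstruction of the described demo (paths are placeholders).
sudo unshare --fork --pid chroot ./ubuntu-17.04 /bin/bash

# Inside the new root, give ps a proc that matches the new PID namespace:
mount -t proc proc /proc
ps aux   # only bash (PID 1) and ps itself are visible
```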
So it's just mapped to something else, but inside that namespace we created with unshare, it thinks it is PID number one. You've now created a namespace that can't kill the processes running on the host. And why is this important? When people run containers that were created by someone else, you don't want that container to be able to reach over to the host and kill any process it likes.
Next is the mount namespace. When you create a mount namespace, you inherit all the mount points of the host, but when you make changes to them, the host isn't affected. Why does that matter? When Docker, for example, creates a new container, it changes the mount points for things like /proc, /sys, and /dev, so containers won't have access to the host's. For example, the container won't have access to the host's disk. Why is that important? Well, if a container has access to the disk, it could corrupt it, and every container running on that host would have a problem. So you don't want your containers to be able to reach certain mount points, and that's where the mount namespace helps.
Another namespace we'll look at is the user namespace. This one is relatively new — even Docker only added support for it a few years ago. It's like PID mapping: when you're running as a user inside a container, you're actually a different user on the host. A lot of containers run as root inside, and that can be a problem, because when you're running as root without user namespaces, you're also root on the host — and you know why that's not good: with host privileges you can do a lot of damage. When you enable user namespaces, you can be root inside the container without being root on the host.
Next is the network namespace. Inside a container, you use your own network interfaces, so by default you won't have any connectivity. What Docker does, for example, is create veth pairs and use a bridge on the host: one end of the pair in the container, one end on the host, and that's how you get your network connection. We'll see how that works later on.
There are seven namespaces right now. We started with mount and the latest is the cgroup namespace, and this has been more than ten years in the making: mount was added in the 2.4 kernel, user in 3.8, and the cgroup namespace recently in 4.6. There wasn't one moment where the kernel said, "OK, we're releasing containers" — namespaces were released incrementally.
So let's take a look at how to combine everything to create your own container and run Rails inside it. We're going back to the same example with unshare — and here I'm just showing that there's a typical Rails app under /usr/src/app. We create the namespaces using unshare, but now we pass mount, UTS, IPC, net, and pid, then run chroot — which is what we've been running this whole talk — and we mount proc.
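Spelled out, the command being described is roughly this — one "hand-made" container. The environment variables and the Rails server come next in the demo; the hostname line is just my illustration of the UTS namespace.

  # Mount, UTS, IPC, network, and PID namespaces, plus a new root.
  sudo unshare --mount --uts --ipc --net --pid --fork \
      chroot ubuntu-17.04 /bin/bash

  # Inside the new namespaces:
  mount -t proc proc /proc
  hostname rails-container   # UTS namespace: renaming here doesn't touch the host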
And next I'm going to add a lot of environment variables, but these are just what my Rails app needs — things like DATABASE_URL and SECRET_KEY_BASE. I'm setting them here so it's easier to see. Now I run bundle exec rails server, and I'm inside a container running a Rails app, right? So I try to curl it to see if I can access it — and it fails, because I haven't set up the veth pairs I mentioned. You can see there's only one loopback interface.
So now I have to create those veth pairs. On the second tab, on the host, I create them using the ip command: I name the host end h plus the process ID, and the container end c plus the process ID. Now I have the two ends of the pair. I move the c end into that process's network namespace — that's the container side — and I attach the h5140 end to the Docker bridge running on the host. Now you can see there are two network interfaces. Then I bring the interfaces up: bring up the loopback interface, bring up the container end of the pair and name it eth0 inside the container, add an IP address — I just chose one from the bridge's range, since of course you want to be able to reach the container — and add a route through the bridge so traffic can get out. After that, I can curl the Rails app inside the container. Note that inside I'm using localhost, 127.0.0.1, but outside of it, on the host, you need to use the IP address I assigned. You can also see the Puma process running — that's the default now with 5.1 — and it's PID 9 inside the container but a different PID on the host. So there's the PID namespace at work.
Next is cgroups. Cgroups are used to limit resources: you can set a memory limit, a CPU limit, or even restrict access to devices. You can also cap the number of processes that can be forked, because you don't want something inside a container exhausting the number of processes the host can run. Cgroups were added in the 2.6 kernel. Let's look at setting a memory limit. The beginning is the same: we create the namespaces — mount, UTS, and so on — mount proc, and set the environment variables Rails needs. But before running Puma, we use cgroups to set a memory limit. Here I'm using /sys/fs/cgroup/memory, which is the cgroup filesystem; it's already mounted on my box, I think by systemd. Unlike namespaces, where you use a program like unshare, with cgroups you just interface with a filesystem. So I create a rails directory, and you can see that although I only created a directory, it creates all these control files for me — those are the limits I can use; you can see the memory limit in there, among other things. What I need now is the process ID of my container, so I grab the PID of bash — that's 10458 there — and I write it into rails/tasks. Tasks in cgroups are the processes, so I'm saying process 10458 should be under the rails cgroup.
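To recap the two command-heavy parts of that demo — the veth pair and the cgroup — here is approximately what runs on the host. CPID stands in for the container shell's PID as seen from the host; the interface names, bridge, and addresses are illustrative rather than the speaker's exact values.

  CPID=10458                                      # the container's bash PID on the host

  # Networking: one veth pair, one end in the container, one on the Docker bridge.
  sudo ip link add h$CPID type veth peer name c$CPID
  sudo ip link set c$CPID netns $CPID name eth0   # container end becomes eth0 inside
  sudo ip link set h$CPID master docker0 up       # host end plugs into the bridge

  # Inside the container: bring the interfaces up, pick an address on the
  # bridge's subnet, and route out through the bridge.
  ip link set lo up
  ip link set eth0 up
  ip addr add 172.17.0.10/16 dev eth0
  ip route add default via 172.17.0.1

  # cgroups: creating the directory creates the cgroup (and its control files,
  # including memory.limit_in_bytes); writing a PID into tasks puts that process in it.
  sudo mkdir /sys/fs/cgroup/memory/rails
  echo $CPID | sudo tee /sys/fs/cgroup/memory/rails/tasks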
So there's nothing special here: I created the rails cgroup, and now I'm going to write 40 megabytes into rails/memory.limit_in_bytes. Who wants to guess whether that's enough for a Rails application? It's a very basic Rails application. So now I'm back in my container and I run Puma — bundle exec rails server — and it says "Killed." With a limit of 40 megabytes, our Puma process can't even start. So I increase the limit to 80 megabytes and try again — this is a fresh Rails app, so I think this will work — and now the process runs, and you can see Puma is up. That's how you use cgroups with your Rails app.
The next description — the last one, and the most accurate — is that containers are processes. You might have heard this: they're not VMs, they are processes, and that's the correct description. If you take away nothing else from this talk: you could always run a lot of processes, but containers make it easier to run those processes together on the same host. Let's take a look at this next video. You can see that I have a lot of Puma processes. I'm showing you — I'm not sure how easy that is to read — that their PID namespaces, which you can check on the proc filesystem, are all different. So these processes are all in different namespaces. And what's interesting is that I have all these Puma processes running, and I don't even have Ruby installed on the host. Your host doesn't need to have anything. In fact, there's an OS, CoreOS — I think they've renamed it to Container Linux — that doesn't even have a package manager, because they want you to run everything in containers. So run all the Puma processes you want. I think here I'm using the same container, so it's the same Ruby version, but you could run whatever Ruby version you want, whatever app server you want — mix and match Puma and Unicorn — and containers make all of that easier.
So containers are processes. But a container — a new root, namespaces, and cgroups — isn't actually enough on its own. Whenever you create containers, you have to make sure you know how to secure them. So let's talk about container security. The way security works with containers is that you apply layers of it; there's no single setting that makes all your containers secure. You have to do a number of different things. For example, there's AppArmor, a Linux security module — or, if your host doesn't support it, SELinux — which limits the actions a given program can take. It puts a lot of restrictions on the container, and actually, once you start using user namespaces, some of AppArmor's restrictions aren't strictly needed anymore — but you keep them anyway, as another layer of security.
Next is capabilities. In the beginning, there was only root and non-root: as a regular user, you don't have the privileges to do a lot of things. Linux introduced capabilities so that a process can hold some specific privileges — some capabilities — without being a full-fledged root user. Containers need some capabilities, but you don't want to give them all of them, which is also why you shouldn't run your containers as root.
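For what it's worth, when you use a runtime instead of wiring this up by hand, the capability and cgroup knobs being described here surface as plain flags. A hedged example with Docker — the image name and the numbers are made up, and which capabilities to keep really does depend on the app, as discussed next:

  # --user:      don't be root inside the container
  # --cap-drop / --cap-add: start from no capabilities, add back only what's needed
  # --memory / --pids-limit: the same cgroup limits, set up for you
  # --read-only: read-only root filesystem (a real Rails app may also want --tmpfs for tmp/)
  docker run --user 1000:1000 --cap-drop ALL --cap-add NET_BIND_SERVICE \
      --memory 512m --pids-limit 100 --read-only -p 3000:3000 my-rails-app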
And by limiting capabilities for your containers, you limit what those containers can do. However, how do you know which capabilities to restrict and which to allow? If you search GitHub — on the Docker project, for example — there's a lot of discussion about which capabilities to allow or deny. There's no one answer: LXC gives you one default set of capabilities, Docker gives you another, so they differ.
The other layer is seccomp. This is a Linux kernel feature that filters system calls. Docker, for example, disables 44 system calls out of 300-plus. One example of a blocked system call is open_by_handle_at, because it can be used to escape the container — so the solution was simply to disable that system call. But again, which calls should you block? Those 44 come from years of running the Docker project and learning which system calls to disable; when a vulnerability appeared, certain system calls had to be turned off.
The last part is running containers in production. I've shown you namespaces and cgroups, and I hope I've convinced you to look at containers for running your Rails app — but I hope you don't leave this talk creating namespaces and cgroups on your own with unshare, because that would most likely be insecure and full of bugs. For example, I've shown you chroot, but that's not actually what Docker uses: it uses pivot_root, which is more secure, because, as I said, chroot wasn't meant for isolation. So don't write your own. It's like cryptography: you let the pros do it. Use a container runtime — I've shown you Docker and Rocket — and that's actually a good first step for running containers in production, because they create the namespaces and cgroups and give you sensible default security.
But then you'll have other problems. If the Docker daemon dies — and I've had to restart Docker a lot of times — all your containers are gone. What do you do? The site would be down, and that would be bad. So you run something on top of it: an orchestration system. Here you have Kubernetes, Mesos, Docker Swarm — take your pick; we like Kubernetes. When you run your containers, this system chooses a host with resources. If you have ten servers and you say, "I want to run this container with the Rails app," Kubernetes picks a host that has the memory for it. And when a host dies — a server reboots or becomes unreachable — Kubernetes moves all the containers that were running on it to other hosts. With just a container runtime, you'd have to manage that yourself; that's why even Docker has Swarm — they know that running Docker on its own on one server isn't enough. Kubernetes also gives you zero-downtime deploys: when you have containers, you want to be able to roll out new containers with newer versions.
But for all of this, you still need an image, and I haven't given you the technical details of that yet. You still have to create that image. I told you: install Ruby, install the packages, copy your code, install the gems — but how do you do that?
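One common answer is a Dockerfile that encodes exactly those steps. This is a minimal sketch, not the speaker's actual setup: the Ruby version, the package names, and the app layout are assumptions on my part.

  # A sketch of the image-build steps listed above.
  FROM ruby:2.4

  # "install the packages" -- e.g. a MySQL client library and a JS runtime for assets
  # (exact package names vary by base image / Debian release)
  RUN apt-get update && \
      apt-get install -y default-libmysqlclient-dev nodejs && \
      rm -rf /var/lib/apt/lists/*

  WORKDIR /usr/src/app

  # "install the gems" -- copy the Gemfile first so this layer caches between builds
  COPY Gemfile Gemfile.lock ./
  RUN bundle install --deployment --without development test

  # "copy your code" and "pre-compile assets"
  COPY . .
  RUN SECRET_KEY_BASE=placeholder RAILS_ENV=production bundle exec rails assets:precompile

  EXPOSE 3000
  CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

From there, something like docker build -t my-rails-app . produces the image that the runtime and the orchestrator then run.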
And some people just don't want to do that. Of course, you can automate it — a lot of you, more than half, are using Docker or containers already, so you could use docker build, tie it into your CI, and get an image out the other end. But what if you don't want to think about any of this? As a developer, you may not want to think about containers, cgroups, or namespaces at all. Then you can use a platform. There are plenty of open source projects for this — Deis, OpenShift. You don't need an image; you just push. You run a command — git push, or Cloud Foundry's cf push — your app is sent to the platform, and it runs containers for you. In that case, the containers are just an implementation detail. You don't care that it's running containers; you care that it works, that when you push you see the new version, and that it scales automatically. And yeah, that is the goal. So you now know about namespaces and cgroups, but you don't even have to use them directly.
And in fact — this is just a plug — Engine Yard has a platform that does that, or will have one, and there will be an announcement in our keynote on Thursday, where you'll hear more about it. So we work at that level. We also ran a workshop on Kubernetes, so we can work at the orchestration level too. But most people would just like to push their app and be done with it.
So, in closing: deploy your Rails app in a container. Look into the technology — it's mature enough, and a lot of people are using containers. It also still has a long way to go in places, like databases: I don't think you should run your databases in containers yet. It's possible, but it's still early. And that's it. Thank you. Thank you.
This is a sponsored talk by Engine Yard. Containers have gained popularity the past few years but they have been around much longer than that. In this talk, we'll dive into the internals of a container. If you have used or heard about Docker containers but are unsure how they work, this talk is for you. You’ll also learn how to run Rails in production-ready container environments like Kubernetes.
10.5446/31242 (DOI)
I'm here to talk to you about local and remote teams. Who here was in the last talk as well? It was an awesome talk, wasn't it? They really did a great job with that. I'm going to cover some of the same ground; it'll be a little different, from my perspective, but I really enjoyed listening to Glenn and Maria share their experience too. My specific angle on this is hybrids: teams where some members are local and some are remote, and how to get the best of combining those two styles.
I want to start with a quick poll. Raise your hand if today you primarily work locally in an office — maybe not every single day, but most of the time you're in an office with the rest of your team. OK, maybe half. And how many of you are primarily remote? The other half — makes sense. And how many of you would like to be doing the other one? OK. So that's what we're going to focus on: the advantages of each, and why you might choose one versus the other. I think there's a lot of value in both. I've done both, and I enjoy both.
A quick introduction first. My name is Ben Klang. I've been doing Ruby for ten years and open source for twenty. A lot of my appreciation for remote work comes from open source, where most of the time your collaborators are in other parts of the world anyway. Previously, I founded a company called Mojo Lingo, a software consultancy that was a remote-first company from day one — about twelve people, not a huge organization, but we definitely learned a lot from that experience. Today I'm vice president of business technology for Power Home Remodeling. It's a much larger organization: about 2,000 people in the company, 51 in the technology department. We have seven scrum teams across six states and four countries, and it's really about half and half when it comes to local versus distributed. And I want to stress that this is my experience — what we've done and what's worked well for us. Everything you hear today comes from something we've learned, often from something that didn't work, and I'm sure some of the things we do today we'll find better ways to do tomorrow. That's OK; it's part of the process.
So, a little background: what kinds of teams exist? What do we mean when we talk about remote teams? When I say "team," I'm thinking of a fairly small unit — three to five people. Not the entire department, but the people you depend on day to day, the kind of people where, if they're not around, you end up blocked.
First, local teams. That's an easy one: people in the same building. Really, not just the same building, but the same floor — the same room, even. I remember reading that people separated by as little as one floor start to lose some of the benefits of being a local team; some of that face-to-face interaction is lost when you have to go out of your way to get it. So by local teams we mean people who are physically co-located, preferably in the same room. Then there are remote teams — again, pretty intuitive: people spread out across many different locations. Sometimes at home, sometimes in a coffee shop.
Sometimes it'll be in a co-working space. We actually do have one guy on the team who for some reason just loves coffee shops. He will spend his entire morning in a coffee shop. I couldn't deal with the noise, but he works for him. And then there's a third type, which are mixed teams. So this is a type of team. Where you have most of the people on one physical location, and then one person who's kind of an outlier. To me, this is an anti-pattern. I don't really want to talk about it much other than to say, I strongly advise that you don't do it. This can lead to isolation. This can lead to this one person being left out of conversation. This can lead to people not having the same level of understanding about what's going on within the business, what's going on within the team, what's going on within the software being developed. And sort of an anti-pattern. And sort of another spin on the same topic is where you have one team that is split into two parts. So you have a part of the team in one location, part of the team in the other. Again, you'll end up forming clicks. There'll be small communication patterns, the reference to Conway's law earlier, right? When people are communicating together, the organization will develop that way. So again, I don't recommend this style either. So I'm going to say something that may be controversial. Maybe you agree with me. I just ask that you hold your pitchforks until the end. And I'll explain why I think this. But I feel pretty strongly that local teams win. They're the most effective, most efficient way to develop software. I don't think that all else being equal, if you can control for everything else, having everybody in the same room is going to be the fastest way to deliver what you want. And I say this myself as a hybrid employee, spending about half my time on site, half my time working from home. I say it as someone who founded a business that was remote first and successfully remote first. I was very happy with that company. And I say this as a manager of three remote teams and one local team. I think it's better for communication. I think it's better for camaraderie, better for brainstorming, and for resolving issues. And even Scrum teaches. We follow Scrum pretty religiously. And one of Scrum's big things is get your teams co-located. This isn't just me saying this. This is kind of the wisdom being taught. So then why even talk about remote teams? If local is clearly the way to go, why would this even be a conversation? That's because we want to be remote. A lot of us want to be remote. Just from your hands I saw earlier, half of you are doing it, and another good chunk of you would like to swap whatever you're currently doing. So there's a really great Stack Overflow article, or rather survey, and it came up with 53% of people looking for jobs want some kind of remote option as a top priority in seeking a new position. Which means if you're trying to hire these people, this is a major competitive advantage for you. If you have remote options in your employment, then you have the attention of a very large number of job seekers right away. Additionally, 11% of remote workers report higher job satisfaction. Or I should say the job satisfaction is 11% higher among people of remote options versus purely local. So it's not just about acquisition. It's also about retaining the teams that you have. Keeping them happy. Happiness is correlated with productivity and with longevity within the organization. So this is a big deal, right? 
And I do want to touch on something I think that Glen and Maria said very well, which is that not every position can be remote. And within our organization, we've called out a few specific types of positions that we will not consider remote applicants. The two that are kind of obvious are application support and infrastructure support, the people that day to day have to interact with some of the end users. We are self-hosted, so we have our own physical infrastructure. We have the teams that manage that infrastructure. We want them local in case something breaks. They can unplug it and re-plug it as necessary. And the third one, which is a bit of a shift for us, is junior development teams. So we actually did a first time for us experiment a year ago, where we put together a team of developers who were junior and paired with them a mentor to bring them up to speed, and we did it as a remote team. And it was successful. I want to say that the people who went through that program are all with the company still today, and they're absolutely valued members of the team. But what we found was it was harder to support them and mentor them in the ways that they deserved. It was harder to establish the kind of communication, to jump over to the whiteboard and explain some kind of complex process than it would be if they were local. So going forward, we've started saying that all junior developers, we want to put them at headquarters where they can be mentored with other team members, and given the attention that's necessary there. So what are the benefits? What are the benefits of enabling teams to be remote? I'll start with benefits to employers. First of all, obviously, it's a broader applicant pool. If you're looking in one location, best case scenario, you're kind of saying, give me the best person within 50 miles. That's just a small pool to begin with, no matter where you are. If you can open that up and say, give me the best person plus or minus three hours, suddenly you have a lot more options to choose from. And ultimately, what you really want to say is, who's the best person for the job? So as your organization continues to expand, and as you can start to form teams in multiple locations, as you can start to organize teams around locations, you're not just looking at one geographical location or even one time zone, you can actually look around the entire world. This has a bunch of advantages. It has advantages like taking better advantage of referrals. So the people on your team are going to refer their friends, hopefully. If you're a place that they want to work, hopefully their friends also want to work there as well. And if those friends happen to not be in your city, you might not have the opportunity to work with them. It definitely improves things like time to hire. So as we grew, we had to grow. We doubled in size twice over the last two years, basically double each year. And the only way we could do that was looking remotely. We just couldn't get a big enough stream of candidates looking only locally. So that was a major shift for us. But it also increases the opportunities for diversity. I think that the last talk mentioned this well, if you're a mother with a newborn child and you can't easily get to the office, that's a limiting factor. Right? When I was a single father, my daughter would come home from school at 3.30. And I could be home to be there with her. That was a big deal for me. That meant that if I was tied to an office, I would have to drive home to do such a thing. 
And that's not feasible, right? But if I was able to work from home, I could be there for her. So that was a big benefit for me. So let's start with the benefits to the employees as well. I talk about being home when my daughter got home from school, but it's more than just that. It's also a choice of lifestyle. For myself, I like living in a city. I like being in an urban area. I like not having to have a car. I like being able to walk to shops and restaurants. That's the quality of life for me. A very good friend of mine, he has five kids. They lived about two hours away from the office. So two hours one way to drive from his house to the office. And at the time, the company we worked for, this was 10 years ago, they did not have remote options. It wasn't on the table. So every day, he would drive two hours into the office, and he would work full day, and he would drive two hours home. So four hours of his day. By the time he would get to the office, he had already been up for at least two, probably three hours. He was tired from the drive. He was stressed from dealing with traffic. And then even at work, sometimes he would be trying to figure out how to make his commute better, trying to find just the right time to leave to avoid traffic, or started looking at train schedules to try to figure out, is there another way for me to get to the office? No doubt that impacted this productivity. There's just no way you can be as effective when you're spending that much of your energy getting yourself physically to a location. So getting to live where you choose and being able to do so remotely gives you that flexibility, gives you the option of picking the lifestyle that's right for you, and being able to have the job that is right for you as well. So one size does not fit all. And then there are mutual benefits. Things that benefit both the employer and the company. Life is not static. The one thing we can count on is that it will change. What has actually happened with a lot of our team members is their life situations have changed, and they have needed to move. So one of my favorite examples is a very good friend of mine. He was living in the UK. He got married to a Brazilian, and they wanted to live in Brazil. If his employment had been tied to his physical location, we would not have been able to keep him within the organization. And he's a very strong developer. So that was important to us. Like having him be a part of organization while he still was able to live the life he wanted to do, and allow his life to evolve in the way he wanted to evolve it, that's important. So that gives him the ability to transition his life while retaining the stability of his employment. And it gives us longevity of someone who's a valued team member, somebody who knows, and we've already invested teaching him about the organization. Those are things that benefit both sides pretty well. So location independence adds longevity to employment, which is a win for both sides. All right, there are challenges. No surprise, right? Remote's not all easy. I hope everybody had brought a pencil and paper. I'm going to give you the three most important bullet points. Remote work. Everybody ready for this? OK. Number one, communication. No surprise. Communication is the biggest challenge to remote work. What may not necessarily surprise you is that number two is also communication. And number three is communication. Right, got it? OK, sweet. Communication challenge number one, my favorite, time zones. Love time zones. 
Hate time zones. Time zones were the biggest factor in figuring out how and where to grow the team. Because while we don't particularly care where you live, we do care that you have the ability to communicate with your team. We do care that you can support each other. We do care that you can ask the questions and get answers and get unblocked. So we have a rule of thumb. A team should not spread more than three hours total. Not plus and minus three hours. Three hours total, right? So we try to group things. And these circles are not exactly representative. But the point is that we try to group teams such that no one is more than plus or minus three hours than anyone else in their team. And that gives a nice amount of overlap to the workday. Now you'll notice that some of those circles are decidedly longer than three hours. And as I mentioned earlier, one of the benefits of smaller teams is you can orient your teams in such a way that the teams themselves are within that three hour rule. But across the organization, you can have multiple teams and you can spread the work around that way. So that does work. Communication challenge number two. Being able to understand things deeply. Not just superficially, but truly understand the work you're doing and why you're doing it. I love this quote by George Bernard Shaw. The single biggest problem in communication is the illusion that it has taken place. So you might spend a bunch of time trying to explain what we're building and why. And depending on how you do it, you may or may not have actually been heard. I think this is one of the reasons we love whiteboards. Because when you can get up and draw something, a picture's worth a thousand words. It makes it so easy to explain relatively complex concepts when you can map them out like that. Problem is that doesn't really work so great when you're remote, right? You can't just pull up a whiteboard. I have tried lots of online whiteboarding tools, and they all are kind of disappointing in one way or another. But you have to replace that somehow. So we lean a lot on video conferencing. We lean a lot on screen sharing. Screen sharing is huge. Code reviews are a big deal. And then my CIO's favorite, flow charts. Don't be afraid of the flow chart. This maybe took me 15 minutes to put together. There's no color. It's not very pretty. It's a bunch of boxes, a couple of arrows, and a little bit of text. But it really conveys very quickly some things we're trying to get done. And so this has become part of our culture. And we're even trying to encourage more of it. The point here isn't to burden you with documentation. The point here is to communicate as clearly as you would on a whiteboard. And if you start to build that into your culture, you can partially solve this problem of deep understanding. All right, communication challenge number three. Perceptions and distractions. Everyone knows Dilbert? If I can't see them at their desk, how do I know they're working? It's a real problem, right? Anyone dealt with this? Yeah. Point to your bosses everywhere. There's a flip side to this, though. As a remote employee, my team leader is always multitasking. How do I know if she's taking me seriously? A lot of our communication happens in video conference. Video conference, for most of us today, means working on a laptop. 
So if I'm speaking with someone on my team and we're having a great conversation, it is really tempting, really tempting just to alt-tab, pull up email, answer something, answer a quick, instant message. That is so destructive. It's so destructive that the conversation is happening. It's even worse when you do it in a stand-up or a retro. And you're not paying attention to the team. That's something you have to fight. It takes conscious effort. It doesn't come automatically. You've got to put that away. In addition, you need to make sure that, back to the first point about if I can't see the team, how do I know they're working, you need to also make sure you proactively manage up. Talk to your supervisors. Let them know what's going on. For us, a lot of that comes through dealing with tools like pivotal tracker and code reviews. A lot of it has to do with stand-ups and sprint planning. Those are opportunities to communicate within the team about what's going on, but also to communicate about progress, so people know what's happening. And I'll talk even more about that in just a second. So that's kind of an overview of some of the benefits and challenges. I want to be really specific and talk about how we do it. The next few slides are going to talk about what we do at power to work through these challenges. We understand that there's value in both styles of work, local and remote. We understand there are business realities of acquiring and retaining the people that we want on the team. And then retaining has a lot to do with happiness. So we needed to find a way to get the best of both of them. First, we looked at structure. Small teams, three to five people. That is a big deal for us, is keeping it small. And if a team starts to get too big, we don't hesitate to split it apart into smaller teams. And also consistent teams. I'm a big believer in not trying to mix local and remote. You can put two people in one location on the same team. But if you do it, they need to act as if they're remote. They need to be on video conferences on their own computers so that they have parity at the communication layer with everyone else in their team. So it doesn't feel like there's just two people and then the rest of the team. Then we have process, scrum, as I said, for big believer in scrum. And there's this sort of related concept of Shuhari, which is where in the beginning you do it exactly like the book says. And we're probably at that point now. Most of what we do is doing our best to stick to the principles exactly as written without too much variation. And then the next step for that is Ha, which is where you understand the principles and you can start to bend the rules where necessary to adapt to your situation. And then there's a mastery level, which is where you completely understand everything and why the rule exists in the first place. And then you can break the rules whenever you want, because the rules no longer exist. Sort of like the matrix, and it flexes his arms and the walls bend. I want to do that. So for us, that process, daily stand-ups and retrospectives are really critical parts for managing the communication, ensuring the communication with remote teams. Stand-ups, if nothing else, give you an opportunity every day to make sure you communicate with everyone on your team. It's really easy to do, right? If we were all remote and just focused on our pull requests, it's so easy to get into the editor, push the code, move on to the next thing, and not even talk to anybody. 
But stand-ups, at least, if nothing else will give you a chance once a day to communicate. And retrospectives as well will give you kind of an early warning system. If things aren't going great, retrospectives will bring that to the surface. And it's just, again, another way to checkpoint communications, make sure they're happening regularly. Of course, you have to emphasize remote, friendly communication. We do a lot with code reviews. Every single piece that goes to production has had a code review. It's one of the three criteria we require to ship code. Obviously, text chat. A lot of people use Slack. We have a tool internally called Connect that's the same concept. But it's critical not only because it facilitates communication within our development team, but also within the larger company. Anyone can talk to anyone at any time. That's critical. Video conferencing and screen sharing. I mean, I've already given the examples for that. But again, without the ability to pull up a white board, being able to at least look at the same thing and talk about it makes a big difference in facilitating communication. And then, as I said, diagrams, love diagrams. And then culture. Face-to-face meetings. It is my goal to at least once every 90 days spend some time face-to-face with everyone on one of my teams, just to have that personal relationship with them. We also do lunch and learns. And I got to give credit to Jill, who's in the audience, and probably more defied. I'm calling her out right now. But she did a lot to make lunch and learns happen for our team. She did a great job with that. And that has been not only helpful for onboarding and bringing new people into the organization. It's also been really helpful in just getting people talking across teams. As we emphasize, small teams, and as we grow, we end up adding more teams. They don't necessarily communicate as much as they would otherwise. So lunch and learns are a great way to get a larger group together. And then she was telling me yesterday she wants to do coffee dates, which I think sounds like an awesome idea. So I think we'll be doing that when we get back. All right, so we've talked about how we're doing it. I want to talk about one more topic, which is how we optimize for the different styles. We'll talk first about optimizing for local teams. For the teams that we have decided will be local, whether they're development or support or DevOps, there are a few things we can do to make them more productive, to maximize the benefits that they have. First, we built a headquarters that has a lot of collaboration space. I mean, a lot. Probably at least within our department's sort of area of the building, at least half, maybe more than half of the space is dedicated toward collaborative areas. Lots of whiteboards, lots of tables where you can pull up, there's power to plug in. There are, I don't know if those little puzzle piece chairs are. They're sort of like beanbags, but it's sort of not. But it's just a more casual space where you can discuss things. And we use that space not only for internal conversation. And I think this is really important. We use it to also talk to the business. These two rather serious looking fellows are from our talent acquisition department. And we're in the process of building some software to help them automate some of the work that they're doing. This is them coming into our area and speaking with the guy in the red shirts and application support guy. 
And he's explaining some of the features we're delivering and some of the bugs we're addressing. So this is communication not only within the team and within the department — it's also communicating out to the business. There's a real danger, if you have an entirely remote team that nobody can see, that stuff just happens and nobody knows about it. This is part of our effort to make sure the business knows what we're up to, and that they can appreciate and understand what we're prioritizing and why.
In the same vein, and even more outbound, we have an area called the Knowledge Dojo, very much modeled on the Apple Genius Bar concept. It sits in a very public space — a hallway a lot of people walk past — and anyone who has a technical issue of any kind can walk up and ask questions: "I need help with a mouse," or printer ink, or "I'm having trouble with the application." Again, it puts a very public face on what we're doing and why. Ultimately, if you do this right, if you build these kinds of connections with the rest of the business, it lowers resistance. I don't think I mentioned this early on, but the business we're in is home remodeling, and the company is in many ways traditional: very local, people come into the office — you saw the people in suits, and that's because they actually wear suits. We're a little different. Not only do we not wear suits, a lot of us just aren't even there. So we need to manage that. We need to let them know we're working with them, that we've got their back, and this kind of outreach is a big part of that. It lowers the resistance to enabling remote work.
And finally, as developers, sometimes you just have to get away from the noise. So we built these little pods — there are twelve of them. For the people who are local, they become their office, and for the people who are remote, whenever they're at headquarters they can use them as office space. They're quiet, they're away from the noise, and the colorful lights, besides looking really awesome in this picture, have a purpose: if the light is red, it means you're busy, you don't want to be disturbed, please come back later; if it's green, you can come in and have a conversation.
All right, that's what we do for local. What can we do for remote or distributed teams? Daily stand-ups — I'm just going to hit that point again. That is a big part of how we manage communication and keep it healthy with remote teams. And we don't have to take them too seriously, as Darren is showing you here. The other thing I want to say about this picture is that it's kind of an interesting team. This team has since been reorganized so that they're now entirely local, but at the time this was taken, three of the people were physically in the same location and two were remote. The reason I show it is that it's a good example of getting everybody on the same playing field: you'll notice everyone is at their own workstation, on their own computer, talking and communicating as peers. The guy in the bottom left, the guy in the bottom center, and the guy in the top right are roughly ten feet apart from each other in their cubes, but whenever it came to team communication, they leveled the playing field to ensure that parity. I talked about getting face-to-face as a way of enabling remote communication.
And this is a big part of it. It's not just for work — though we do a lot of work. Three times a year we hold something we call Create: we bring all the developers, all the remote workers, to headquarters, along with the application support teams and the infrastructure teams they work with. It's not exactly a hackathon — how do you describe it? Basically, we take goals and projects that are hard to get to in the normal cycle of development. Sometimes they're bigger-picture architectural things; sometimes it's a cool feature we just couldn't work into the normal flow. And we go build it — we spend a week building it. What's cool about this, and not something we necessarily planned but a beautiful bit of emergent behavior, is that the teams self-organized, and they did it in different configurations than their daily teams. In the field you might have this group over here and that group over there, but when they came to this session at headquarters, they picked completely different people to work with. And that's wonderful for the kind of personal connections you need both to grow and to get questions answered beyond your own team. And of course, it's not just work — it's social, too. We have both structured and unstructured time. This is us doing some kind of weird game-show thing, one of the structured examples. We've done go-karting — almost killed somebody, but it's OK, he's all right. It's important, because these are the things that are hard to do when you're remote: building those social connections and the trust that comes from goofing off with somebody.
So, in summary, there are three things I would say. When you have a local team, optimize those face-to-face interactions — that's your core strength. You've got people there; give them as many opportunities as possible to communicate with very high bandwidth, both internally and externally. Second, develop the tools and practices necessary to make remote work successful. That doesn't happen automatically; it takes effort, it takes conscious thought, and it takes evolution — which is the third point. Don't accept the status quo. Whatever you're doing today is great, but there's always something you can do better, and for us, retrospectives and Create are big parts of analyzing that process: finding what's working, what's not, and what can improve.
I want to give you two links for further reading. I want to thank Martin Fowler for a really great article that guided a lot of my early thinking about how to build distributed teams. It's from 2015, not that old, and he goes into more detail on some points I glossed over here — highly recommended. And We Work Remotely (weworkremotely.com) is where we advertised when we were hiring specifically for remote developers, and we got great responses — it's such a focused demographic that it helped a lot. It's also associated with the book the 37signals guys put out called Remote; the website goes along with that book. That's it. My name is Ben Klang, you can find me at bklang just about everywhere — GitHub, Twitter, et cetera. Thank you very much.
OK, I'll try to repeat the question. In one of the slides I had a team that was hybrid — the mixed kind I said was a bad idea — and the question, I think, was: when we brought them in to make them a local team, how did we handle that transition, and what happened to the people who were remote? Did I get that right? OK. A couple of things. One of the people just started coming into the office more regularly; he was already in the area, so it wasn't a big deal to start commuting. The other one was in Australia, and he's an interesting story. He came to the organization by a fluke — to be really honest, I missed that he was in Australia when I started the interview process, but by the end of it I was so impressed with him that I said, we've got to make this work. And he was great, because he worked really hard: for probably seven or eight months he lived on East Coast time while in Australia, which — god bless him, I couldn't do it. But ultimately we actually sponsored him and had him move to the United States. So no one on the team was let go. We did reorganize to put them all together, but we made that happen organizationally, and they were all on board with it. For that team in particular, it was a positive thing.
And the question, I think, was when we did bring them in to make them a local team, how did we handle that transition? And what happened to the people that were remote? Did I get that right? OK. So a couple things. One, one of the people just started coming into the office more regularly. And he was already in the area, but it wasn't a big deal to start commuting. The other one was in Australia. And he's kind of an interesting story. He came to the organization by a fluke. To be really honest, I missed that he was in Australia when I started the interview process. But by the end of the interview process, I was so impressed with him that I said, we've got to make this work. And he was great, because he actually worked really hard. He, for probably seven months, eight months, lived on East Coast time in Australia, which, god bless him. I couldn't do it, right? But anyway, we ultimately actually had sponsored him and had him move to the United States. So no one on the team was let go. We did reorganize to put them all together. But we made that happen organizationally. And they were all on board with it. That team in particular, it was a positive thing for them. So to repeat the question, when you're in a mixed situation, and I'm not sure if you were the only person that was in the outsider or one of a. I was on the local team. OK, so you were in that picture where you had a bunch of you in one location, one person remote. And then to your point, you were having conversations and decisions being made, sometimes without that person, just because they were spontaneous. And then how do you deal with that? It's tough. If you find yourself in that situation, the only advice I can give you is that you have to be very conscious about it and try to force more of your discussions to happen in a mode where they can be observed by the person remotely. Text chat is probably the best place for that. I don't think there's an easy answer. I wish I could give you one. Our philosophy has been just don't do it, wherever possible. You can somewhat help that by keeping your team small. There's less of a risk of that if you're only having to manage three or four people. If you're keeping the team small, it's harder to have one person remote. You're either all going to be remote, you're all going to be local just by virtue of having a few people to adjust. But yeah, I hear you. That's a difficult situation. And I would, best thing I can say is just try to avoid it. That's a good question. So the question is, are there certain types of projects or I guess to extrapolate certain kinds of work? Or projects? The technology you're using, or the type of work you're doing? The technology or the type of, OK. Is that are better suited to local versus remote? Yeah. OK. Well, so projects that require you to physically touch things like putting a server into a rack, that's an easy one, then that's really what led us to wanting those teams on site. In the development space, I'm trying to think of any project where it's purely writing code, where we have that issue. Nothing comes immediately to mind. Everything, once you get past that physical realm, as long as you have a way to communicate, I can't think of any, I think that either can be made to work effectively. With the proviso that you're doing all the other things, that you're doing stand-ups, you're doing retrospectives, and that you're gathering the team so that the communication is occurring naturally. 
Oh, one thing I didn't mention is besides just the three times a year, we also, if we have a big launch for a big feature, we'll bring the team to headquarters for a week and we'll focus on the stories of that feature. And the payoff for that lasts. We might spend one week on site figuring all the pieces out. And then for three months, they have a really good, clear picture of what they're trying to build. And that helps a lot as well. So the question is, for teams that are mixed right now, what's my recommendation for getting past that? To make them work. To make them work. When we did it, we had the same problems that I've talked about, where people felt isolated. People were left out of decisions. And what we tried was a lot of video conferencing, a lot of text chat, a lot of notes in wikis, in pull requests, written documentation. And then we'd try to encourage people to remember when you had a pull-up conversation to go dump the artifacts from that into something that everyone else can see. It can be done. My personal opinion is you've just got to be superstar, like focused on proactively pulling those people closer and closer to make sure that it happens. And I think that's hard. I just think human nature, it's too easy to have a quick conversation and then to act on it without actually informing anyone else. My advice would be if you can, try to reorganize such that you don't have the mix. And that takes time. I'm not saying it's easy or automatic, but it can be done. I just think it's hard. And I don't think we will do it again if we can help it. That's interesting. So you're saying that in your company, it's mandated that you have teams that are mixed, some remote and some local, with the goal being to ensure consistent culture. So what are the ways you can address the culture arguments while not doing it in a mixed team mode? OK, so tactically, some of the things I talked about, the lunch and learns are good because they cross teams. And because they happen regularly, in our case, every two to three weeks, depending on how excited people are about finding something to present, at least everybody's coming together for that. The other thing we do, and I didn't mention this, but the other thing we do is every week we do demos on Friday. And again, that's the entire department. Everybody gets together. Everybody sees what everybody else is doing. And everybody gets to ask questions. And in some cases, in this case, right now, we have two teams who are working on alternate sides of the same feature. And so one happens to be local, one happens to be remote. And in those cases, we are trying to do more. We put them in a single chat room. Those are all really small answers to your questions. I think the bigger answer is the answer to culture itself is culture, is getting people to understand each other, to respect each other, and to want to communicate with each other. All of these other things kind of grow out from that. I don't think culture is easy. I think it's conscious. And I think it can be done. And again, bringing people together so that they get that face-to-face time helps with that. Does that help? OK, good. Anyone else? Awesome. You've been a great audience. Thank you very much.
Our company is traditional in many ways, one of which being the need to come into the office each day. Our team of software developers bucks that trend, spreading across 6 states and 4 countries. Dev teams consider themselves "Remote First", while DevOps and Application Support are "Local First." Each has adopted tools, habits, and practices to maximize their configuration. Each style has learned valuable lessons from the other. This presentation is about how our teams have evolved: the tools, the compromises, the wins and losses, and how we successfully blend Distributed and Concentrated teams.
10.5446/31246 (DOI)
All right, everyone. Thank you for coming. Sorry for starting a little bit late. The last time I gave this talk, it ran about an hour. And we've got like 25 minutes left. So hopefully it will be okay. I squished it a bit, but I added some more too. So thank you again for coming. This is Google Cloud Platform. Loves Ruby because we do. And of course, you're here in the sponsor track. So this talk is brought to you by Google Cloud Platform, the most favoriteist, wonderfulist Cloud Platform according to Remi. 4 out of 5 Remi is a pro. I don't know about that other Remi. So hi, I'm Remi Taylor. I'm a developer programs engineer. I work in DevRel for Google Cloud. And if you didn't guess, I'm a Rubyist. And fun fact, I'm also from Phoenix. Is anyone a Phoenix Rubyist here? I'm just curious. No, a couple. Yay. OK. I did my first Rails app here. I did my first lots of things. So it's really good to see you guys. So this is a sponsor talk. I appreciated when I looked at this that I put in the program. I put Ruby developers welcome in my RailsConf talk. Maybe because I've reused this abstract. But Ruby developers are definitely welcome. What we're going to talk about, though, is this. So we've put together a dedicated team of Rubyists at Google Cloud. A lot of people are surprised to hear that we have a Ruby team. But we do. And it's growing. It's been growing for the past two years. We've brought together a diverse set of engineers of all different kinds, product managers. And our goal, our mandate, if you will, is to make the experience for Rubyist on Google Cloud as best as possible. We use the cloud all the time. We call our APIs for Google Cloud. We deploy our Ruby code. And so we feel any of the friction. And we try and work to make that the best possible. And what that's been looking like is we've been building a number of things. So that includes Ruby libraries. We're going to spend a lot of the first part of this talk on looking at those. App Engine, which is one of the many places you can deploy your Ruby code to and debugging. So once you take your Ruby code and you've got it up and running, how do you maintain it after that? If there's one takeaway that I'd like you guys to leave with today, it's cloud.google.com. Slash Ruby. You'll actually see this printed on the back of our t-shirts. If you check out our GCP booth, which I highly recommend. There's a bunch of us. Please feel to inundate us with all of your questions. And we've been putting together articles and resources on this site to help people get started. So it's a good jumping off point. And we continue to add more information there. So today, what I'm going to walk through is libraries. I'm going to spend a bunch of time there, because it's one of my favorite things we've been working on. Runtimes to get your application up and running and debugging. So first, let's look at some libraries. Google Cloud has a lot of products. I didn't fit them all here on the screen. This just fit with my OCD and looked good. And a lot of these products have really powerful APIs. So as a Rubyist, I want it to be as easy as possible for me to call out to those APIs. So for Ruby, we've put together a number of client libraries. I don't know if you can see which ones were highlighted here on the projector. But there's about 15 of these that I highlighted that today we have Ruby client libraries for that you can go and use. And please do and give us feedback. So what does it look like to use one of these? Let me get you started. We're Rubyists. 
So if you tell us to use an API, we're probably going to look for a gem. So we're going to let's try Google Cloud Storage, which is one of our products. It's kind of a file object store. So for that, we'll install Google Cloud Storage for a lot of our other Google Cloud products. So we'll install Google Cloud Vision or Speech and so on. So pretty obvious. Oh, no. Don't show the cat yet. So let me show you what we're going to do with our little code sample to get up and running. What I have open here, if you haven't been to our platform, is the Google Cloud Console. If you're using Google Cloud, you will live there a lot. Here I have the storage browser open. So in Cloud Storage, we manage our files into buckets of buckets. And here, we're browsing one of them called My Cat Pictures. If you can see, it says there are no objects in this bucket, which is very sad. So what we will do is fix that right now. Presuming you have the Google Cloud Storage gem installed or in your gem file, what your Ruby code will look like to have this cat is we're going to require Google Cloud Storage, just like the gem. Pretty simple. And then we want some kind of a service object to make our calls out to the API. So we'll make a storage object. And that is a Google Cloud Storage. Hopefully, you're noticing a pattern here. For the bucket, we need some kind of representation of that to be able to, say, add a file to that bucket. So we get a bucket object here. And then to add the file, you can just say bucket new file. And assuming you have a My Cat ping, then that'll work. Which kind of interesting? I mean, it's very cool. Part of what I want to show to you guys today is what makes me passionate are idioms. So we're Ruby-ess. And if you don't want to call a new file, because you don't like the name and the method, you don't have to. Maybe you want to create the file or upload the file. For anyone who happened to see DHH's keynote yesterday, a lot of what he talked about were kind of the belief systems of Ruby. And for example, we don't necessarily like there to be one way to do something. We love aliases. And we are Ruby-ess who made these client libraries. So you'll see a lot of those Ruby idioms in there. Because we made them. We want to use these. So another thing, for example, is if you don't want to pass a file string, like a path to it, you can pass a Ruby file object. And we actually just treat that as an IO, which is a very Ruby-ish thing to do. So if you want, you can pass a string IO, which if you're not familiar is kind of a way to take a string and fake it and say, here's an IO. Treat this as a file. So those are the kinds of very Ruby-ish things that we do. Once we've done that, we get our Mycat. And the world is a great place again. It would be better if it were a dog. I don't know why I use cats. So I want to point you to a couple of places where you can get up and running with these and find some of the reference docs, et cetera. So all of our client libraries are, of course, open source up on GitHub. They're in our Google Cloud Platform organization. So all the Ruby libraries are in the Google Cloud Ruby repository. I know what you guys are asking. And the answer is yes. We are accepting contributions. Please send us your pull requests or file issues. We love hearing feedback and actually number the issues. Even recently, I can think of some that have been filed. 
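As an aside, the upload example from a moment ago looks roughly like this — assuming the google-cloud-storage gem is in your bundle, default application credentials are configured, and a bucket called my-cat-pictures already exists (the bucket and file names are just the placeholders from the example):

    require "google/cloud/storage"
    require "stringio"

    # Project and credentials are usually picked up from the environment
    # when running on Google Cloud.
    storage = Google::Cloud::Storage.new

    # The bucket we were browsing in the console.
    bucket = storage.bucket "my-cat-pictures"

    # Upload from a path, a File object, or any IO -- they all work.
    bucket.create_file "my_cat.png", "my_cat.png"
    bucket.create_file File.open("my_cat.png"), "my_cat_again.png"
    bucket.create_file StringIO.new("pretend this is a cat"), "note.txt"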
For some of those issues, we've directly, pretty much immediately, made changes to some of our APIs when possible and made the developer experience as good as possible. Those IOs were a recent one we did with Storage. When you go to GitHub, click through to our website. This is our website on GitHub for all of the client libraries. I spend a lot of my day job using these client libraries, giving feedback on them, making samples for them. We've got new ones coming out all the time. And for some of the younger ones, the APIs are changing all the time while we iterate on making them amazing. So I live in this website. And I'm actually really proud of it. It's really fun. I feel like once you get up and running, and you're like, OK, now I need the method references because now I need to do something real with this, I think you'll find yourself pleased here. We're not missing any docs. And then similarly, if you're coming from the product side of things, you know that you want to use Google Cloud Vision to upload images and have our artificial intelligence and machine learning systems say, hey, there's a dog and the Eiffel Tower in this image. Pull up that product page. For most of our products — and it's really exciting, I'm giving a talk about docs, but honestly, they're really cool — we have this client libraries page, which for Ruby will tell you everything you need to know in one small page. What gem to install, including how to authenticate. The first time you call out to the APIs, you're going to install a tool and set up authentication. You'll do it once. But that's here on every page. And there are code snippets and links to references that link out to places like that GitHub page. Cool. So you're pretty educated. Now you've got good places to go. And one thing I like about these client libraries is we try to follow the same idioms and conventions amongst all of them. So when you learn one, it should be really easy for you to switch to any other. Let me just give you a couple of examples. Here's an example of using Storage again. So just a reminder, we require storage. We make a client. We make a bucket, in this case for Storage. And here, we're just looping through the files in the bucket. Pretty idiomatic, pretty much what you would expect. Here's a completely different product. I don't know if you guys noticed that I switched the slide. But this is our NoSQL database solution, Datastore. Here, we're making a service for Datastore. And here, we're making a query, running the query, and looping over the results — in this case, printing out the names of dogs. Pretty simple, pretty similar. Completely different product: this time, we're managing DNS entries. Here, we grab a zone from the DNS service and print out the records. Super simple. These look almost identical. Here's our Pub/Sub product. We grab a subscription. Whenever you publish messages to a topic, those get fanned out to subscriptions. So a common use case is, for a given subscription, you may want it to kind of block and just listen and hang out and be like, I'm going to process this, I'm going to process this. We use it for background jobs, for example. So here's a nice little teeny idiomatic method: listen to the subscription and it will process every message. A couple more, just because. Here's one of the machine learning ones. Definitely go and play with our machine learning APIs. And one of my coworkers will be talking about natural language processing later today.
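A rough sketch of that Datastore example — the kind "Dog" and the property names are made up for illustration, and this follows the classic google-cloud-datastore interface:

    require "google/cloud/datastore"

    datastore = Google::Cloud::Datastore.new

    # Build a query, run it, and loop over the results.
    query = datastore.query("Dog").where("good_boy", "=", true)
    datastore.run(query).each do |dog|
      puts dog["name"]
    end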
And here, we're doing something a little bit more complicated. We're taking one sentence, but it could be a whole document. And the language API is getting called. And for every sentence in that document, it gives you the sentiment of it. Really simple, really powerful, right? So here's another thing that the code looks trivial, in my opinion. Upload an audio file. Upload like a wave file. And this prints the text that our speech API detects in your audio file. So what was said? As easy, upload an image. And in this case, if you upload an image of your dog, labels would be things like dog, golden retriever, mammal, any of the things it notices there. Or also for vision, almost the same exact thing, let's print out the landmarks and print out the lat long. So I love that these are so simple and in any amount of care. Print out the faces. They're just about the same. They are easy as pie. Horrible, horrible joke. So besides being really easy, one of my favorite things about using these and working on building them and making the great developer experience is how rubyish they are. For our data store, you saw a small query, but here is a bigger one. You all have probably seen something like this. This looks like a lot of our libraries that we use for interacting with SQL or no SQL like document stores. Should feel pretty familiar, and that's one of our goals. Here is that cool little method that I love. This is one of my favorites. There's a lot of code on this slide, but I would wager that anyone who knows DNS, even if you're not a programmer, I think you'd be able to figure out exactly what's happening here. So this is a cool example of a DSL of ours. So this happens to do all this in a transaction, but if you know DNS, we're adding an A record, right? For www to an IP. We're changing some MX records, changing the TTL on one CNAME. I love this one. I'll wait for you to take a picture with your phone. I'll make sure these are available later. And there are other cool things we do. Like when we started using our logging product, we realized all of our actual Rails applications and our other applications, they use a normal Ruby logger. So we started using a logging product, and we had to call these custom API methods. And we were like, no, this is stupid. Why don't we have our logging client library? Just give you a logger. It just gives you a standard Ruby logger, and you can put it in your Rails apps or wherever you would normally expect to. So this is just another cool example of something that you get when a bunch of Rubyists hang out and use these products, feel the friction, and it's coming from us. This isn't Java written in Ruby, which I know some of us have seen. And finally, I want to end on this. BigQuery has data sets with tables. At RailsConf, I'm sure all of you are very familiar with the syntax, so it looks a whole lot like an active record migration. This is the syntax for our migrations for updating or adding BigQuery schema. So based off of tons of idioms like that, used in active record and SQL. OK, yay, educated. While I watch the time, let's click through a couple of just like refreshers for how you can file up on this when you get back to your hotel and you want to start playing with this. Of course, cloud.google.com slash Ruby. It's a great jumping off point, but especially for the libraries, going to GitHub, opening up that website that we have on GitHub with all the references is a great place to get started. I love these, and similarly for the product docs. 
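In the same spirit, a sketch of the Vision labels example — the file name is a placeholder, and this follows the 2017-era google-cloud-vision surface, so method names may differ in newer releases:

    require "google/cloud/vision"

    vision = Google::Cloud::Vision.new
    image  = vision.image "my_dog.png"

    # Prints labels such as "dog", "golden retriever", "mammal", ...
    image.labels.each do |label|
      puts label.description
    end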
Not only do we have these cool client library pages, but for a lot of products, we have really simple little lists of how-to guides. So I showed you how to do landmarks. Here's how to get some of the text from an image, or the landmarks, or some faces. They're really simple little snippets. It looks really simple. It looks like not a lot of work went into them. We put a lot of work into them to make them that simple, so that is really easy. And all of these are copy-pasteable, which is wonderful. And these client library pages exist for a lot of our products. And you'll notice it's not just for Ruby. We've got most of these six other languages that you can use in case you've got other languages at your businesses. Cool. So let's jump into some run times and get up and running. So I want to, with the little time we have, stop and tell a really short story. I want to go back in time. I want to go back to 2009. 2009 was a really cool year for Ruby and for Rails. For Ruby, Ruby 191 had just been released. This is important because this was the first stable version of the 19 series. That brought us YARV, the new bytecode interpreter. It brought us a lot of syntax that we're familiar with today, like the new hash syntax. The arrow syntax for lambdas. And a lot of other important things came from this. Also, Rails 2.3. This was one of the, maybe I was most excited about this Rails release than any sense. And that's because this updated Rails to be fully based on top of Rack. Before that, it hadn't been. And when Rack was created, kind of shoehorned it in. This updated everything to be fully based on it. And it gave us, as a side effect, things like Rails engines. And metal. And just for a brief context for the time period we're in, we probably use Bundler today. Bundler integration for Rails wasn't a thing yet. So that's cute. We went back in time eight years. But why do I care? I care because in 2009, in Phoenix, I gave my very first talk about running Ruby on App Engine. Of course, App Engine was Java then, so I used JRuby. But these are some of my old slides. I've remastered them a little. But this is my old slide from Desert Code Camp in 2009. And it's interesting. I spent a lot of time getting my Ruby running on App Engine. And you might think, why? I mean, this was kind of an uphill battle. Why am I trying to run my Ruby in an environment that totally wasn't made for it? We weren't the target audience. When I answered it in my old slides, the answer here was just magic. Pure magic. Here was one of the quotes from the App Engine website at that time. It says, App Engine uses multiple web servers to run your application and automatically adjust the number of servers it's using to handle requests reliably. In 2009, that was amazing. That was a dream come true. I mean, at the time, most of my deployment was kind of this rat's nest of wires basically managed and hacked together with the Scotch tape that is Capistrano. And don't get me wrong, I was really proud of my Capistrano scripts, but I would prefer the magic. I would prefer to not have had those. So that's why we use those then. And I wasn't the only one asking for it. So back when App Engine was released, the 29th issue filed on the issue tracker. Here in 2008, year before, was for RubySport. And the request kept coming in. So now, nine years later for that, in 2017, on Google Cloud, we have a much better story for Ruby. And we have a lot of different options. So today, if you want full control, you can use Compute Engine. 
This will give you full virtual machines and you can manage them how you like, with your load balancer, et cetera, et cetera. If you're already using containers and Docker, you may be interested in using Google Container Engine. This is essentially Google's hosted version of Kubernetes. And then of course, what I want to briefly look at is App Engine. I'm an app developer, so this is kind of the model for me. I want to give Google my application and say go. Like, I'm an app developer, I'm gonna update this and send changes, but just please keep it working. So the new App Engine, the App Engine flexible environment, runs a lot of different languages, and we GA'd the whole product, including the Ruby runtime, this year. So you can use them, we will support you. And for the Ruby runtime, a benefit is we still get all that magic. So now, many years later, I can run my Ruby on App Engine and still get that scaling, still get a wonderful experience where I just give it my application. How's it work? Short answer — let me check our time — is cloud.google.com slash ruby slash rails. It's a good place to get up and running. We also have our Container Engine and Compute Engine tutorials there, but this has a page that'll just walk you through the requirements and get you up and running with Google Cloud if you haven't already. The TLDR for this, if you want to click, click, click, is: you need a Google account, okay, duh. You need to log into the Google Cloud Console, so go ahead and do that. Once you're in there, make a project. Google Cloud projects are what most of our resources are associated with — a storage bucket is part of a project, App Engine applications are associated with a project. After that, install the Google Cloud SDK at cloud.google.com slash sdk. This gives you the gcloud command line tool as well as a couple of others that are really powerful, and it's what we use to interact with all of our cloud products. So if you want to do orchestration — if you want to, from the command line as opposed to our UI, make a bunch of SQL instances, Postgres instances, whatever it may be — that's where you can do it. Once you've got it, cd into your app. This could be a Rails app, a Sinatra app, a Rack app, whatever. Run gcloud app deploy, and it knows what to do. It's like, hey, this is a Ruby application, I know what to do with this. The only thing we give you a prompt for is how do you want to boot it? We currently use rackup here, so this will just work for Rails — hit enter. But you could also say bundle exec ruby my_sinatra_app.rb or whatever you would want to do. Wait a few seconds, maybe a little bit more, and then your app is deployed. Hooray, fantastic. But what I'm interested in here is those cool Rubyisms that we've put into this, the love. Once you've actually deployed your application and you really start to use it, what do you need? What are the pain points and the friction that we've been working on? One of the first things you'll probably need in the real world is to specify the Ruby version. So how do we do that? Well, we didn't reinvent the wheel. We use the idioms of the community when they're there, so use a .ruby-version file. If you're not familiar with these, this is the file used by rbenv or RVM — these idiomatic tools, which are version managers — to manage a Ruby version and associate it with a given project. Of course, you can just make the file by hand too. You just put, like, 2.4.1 in it.
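Pulled together, the whole flow is roughly the following — a sketch only, with a placeholder Ruby version, and the prompts and flags may differ slightly by SDK version:

    # one-time setup: install the Cloud SDK from cloud.google.com/sdk, then
    gcloud init                    # log in and pick or create a project

    # inside your Rails / Sinatra / Rack app
    echo "2.4.1" > .ruby-version   # pin the Ruby runtime version
    gcloud app deploy              # answer the entrypoint prompt, wait, done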
That .ruby-version file is a good example of something that just works that we use from the community. Another pain point that came up right away: installing gems. We've got your Nokogiri, but also, anybody who's deployed Ruby to remote VMs and things like that a lot maybe feels a little bit of pain associated with at least one of these gems up here. Most of these require some kind of native dependency. They can hurt you on your Mac, but just as easily on a server as well. So these usually require some kind of apt package or something like that when you deploy them. So to take away that pain, what we did was — we were like, forget going in every time we found an error and saying, we'll fix that and add the dependency. No, we just went to RubyGems and we got the top 1000 most downloaded gems off of RubyGems and we just made sure they all install. So the top 1000 most downloaded RubyGems totally install and build properly on our runtime. Done, make it work. Okay, what if you do want to install some really custom thing that no one else in the Ruby community would be using? You want to install cowsay? Seriously, even the cow thinks that's ridiculous, but sure, we got you covered, can do. So as opposed to running gcloud app deploy, you can run gcloud app gen-config --custom. It's a little bit of a mouthful, but it's pretty easy. And what this does is it'll spit out a Dockerfile in your directory. So behind the scenes, when you gcloud app deploy, we make a Dockerfile. It's based on our base Docker image, but we also can look at your directory and be like, oh, here's the entry point — we want to give some recommendations. Instead of making the Dockerfile behind the scenes and deploying, what this command does is it just makes the same Dockerfile and spits it out. And then the next time you deploy, your deployment won't use the bare Ruby runtime, it'll use this. So you can edit this to your heart's content: add whatever apt-get packages you want, some daemons in the background, whatever. App Engine Flex is really based on Docker. And there are other benefits that come with this. So when you want to take your application off of App Engine — maybe you're ready to put it into a bunch of containers on Container Engine or Kubernetes — you can take our same Docker runtime and those work fine there. Okay, rushing a little bit — we're fine. So to follow up here, when you start using it, and after you've gotten your Hello World deployed, and you're ready to start focusing on answering some of the real problems that you're having in production, go ahead and look up the docs for the Ruby runtime. And this has everything I just mentioned here: how to set a particular Ruby version, how to make a custom Dockerfile, but also, what if you need to SSH into a container running on one of the machines? This will help you out in case you really need to do that. What if you need something like a cron job? It also has walkthroughs for connecting your App Engine instance to a fully managed Postgres or MySQL instance. A lot of those things that come up after your application is up — how do you split the traffic between multiple versions and things like that? — this will answer for you. Okay, we're running low on time, but we will run through this next section on debugging. Once you've got your app up and running, you're very happy, but what happens when that first issue happens? What happens when things start to not go so well? For example, your clients all start calling and they see this glorious page.
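Backing up to the custom runtime for a second, the generated file looks roughly like this — the base image path and entrypoint are from memory and purely illustrative:

    # gcloud app gen-config --custom writes a Dockerfile next to your app
    FROM gcr.io/google-appengine/ruby      # Google's Ruby base image
    RUN apt-get update && \
        apt-get install -y cowsay          # your very custom dependency
    COPY . /app/
    ENTRYPOINT bundle exec rackup -p 8080  # or however you boot your app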
That glorious error page is all of our favorite page in the whole wide world. What do you do? So there are two things, and this will kind of point you in the right direction. We're going to look at some of our products that'll help you through this. This is our Stackdriver suite of tools for logging, error reporting, debugging — which is very cool — and latency tracing. Looking at logging, one quick note is that after you do your gcloud app deploy — I didn't scroll down to the bottom of the output earlier — it points out that as soon as you deploy, you can tail your logs from gcloud. I point it out because not a lot of people spot it and I use it all the time: tail my logs, grep for something, and watch for those exceptions. But for the most part, you're probably going to be in our UI. So in the Google Cloud console, search for Logging or click on it in the nav, and you'll find yourself in a screen like this. It has a lot of kind of busy logs, but there are a lot of things that your applications log, and I want to point your attention to this. These are just some of the things that we log by default for your applications. So, standard out and standard error — a benefit of this is that in your Sinatra app, your Rails app, your whatever, puts "hello" will totally show up in the logs. So that'll get ingested and you can search for it later. Of course, you can make a logger based on standard out as well, so you can use a logger. And for Rails, lo and behold, there's an environment variable that you can set that will pre-configure its logger. So Rails.logger will go to standard out, and that's kind of a minimal way to get up and running with logging that just works. So here, if we deployed that, we'd be able to find our standard out "hello" — it's really busy with a lot of other requests, but there's full text search so you can drill down and find those too. When exceptions happen, if you're set up in the same way, those would similarly get logged to standard out and you'd be able to find them. Here you can't see the stack trace, so it's kind of not the best scenario — it's not the best way to do your error reporting. So now that we have cowsay installed, is there a better way? And there most certainly is. Gem stackdriver — this is one of my favorite things that we've been working on for the past couple of quarters (what a corporate thing to say). To get up and running with it, there are two things to know. One — and this is a big gotcha, I still mess this up — to turn on the Stackdriver integration for one of your projects, you can opt into which of the tools you want. So on the project side, you have to enable these APIs. Most of them are enabled by default, but I know error reporting isn't, which is kind of good — you may not want the whole error reporting system turned on in our console and start getting notifications unless you opt into it. But turn those on, and then in your app, throw gem stackdriver into your Gemfile, bundle, redeploy. Once you've done that, everything should just work. And note that all you needed was gem stackdriver, and this will work when you deploy to a flex environment, but you can also configure applications, if you want to, to log and send error reports to Stackdriver from your machine or your other clouds and things like that.
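Concretely, the minimal wiring described above looks something like this — RAILS_LOG_TO_STDOUT is the standard Rails 5 switch for sending the Rails logger to standard out, and the app.yaml snippet is one way to set it on App Engine flexible; treat the exact keys as illustrative:

    # Gemfile
    gem "stackdriver"            # pulls in logging, error reporting, trace

    # app.yaml (App Engine flexible environment)
    env_variables:
      RAILS_LOG_TO_STDOUT: "true"   # Rails logger -> stdout -> Stackdriver Logging

    # anywhere in the app
    Rails.logger.info "hello"       # shows up, searchable, in the Logging UI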
So in this case, with the gem installed, now our errors are much prettier — you notice the severity comes through, which it didn't before. And for errors, if we blow up — this is more complicated than it needed to be, but I use it for debugging — if the request is invalid and we explode, we hit the raise, which is good in this case because we want it to explode; it's like a test in development. One of my favorite things is, after an error occurs and you've done your gem stackdriver, and I want to go pull up the error reporting section of the cloud console — when you go to the cloud console, you don't even have to get that far. Our dashboard has all these cards when you log into the console, and as soon as an error pops up, you'll get this card and it's like, you had 37 of these runtime exceptions — sure, I would love to click into error reporting. This is a test project, so I only have one type of exception, you may have more, but this gives you a list of all your different types of exceptions. You can click into one, get all the details, the occurrences, the normal things you would expect, as well as getting the stack trace, which is what we want. There's one really interesting thing about this page. You notice that these are links. There's some magic here, and I put dragons on the screen intentionally because this is alpha — please kick the tires on this, we're still playing around with it — but if you click one of those puppies, it'll take you to Stackdriver Debugger. And now here, by default, it says it couldn't find the file and you need to set up some source code, but that's okay. Right now the debugger isn't part of that stackdriver gem, so you have to install it as a separate thing for now. So if you want to play around with the debugger — and please do — add google-cloud-debugger to your Gemfile, and then require, in your App Engine, your Rails — sorry — application.rb, Google Cloud Debugger Rails; this kind of loads the Railtie. Our gem has a Railtie, but it also has middleware you can use in Sinatra or your Rack apps or whatever. So redeploy, and then when you click the link, you'll find yourself in a really pretty debugger that I wish I could spend a little bit more time demoing and getting into here. I've been having a lot of fun using it, but please go and play. It's great to have your application live deployed to Flex, and then go and click on an error report, click on a stack trace, and when you've got it set up, it'll go directly to your source code and set that breakpoint for you. Super duper cool. And I want you to know, because I didn't have enough time to talk about this: tomorrow, one of our coworkers is talking about debugging with a number of these Stackdriver products, so check that out — I've got a slide at the end here where I mention it. Finally, I'm gonna wrap up, but there's a trace tool that you get for free with gem stackdriver to do latency tracing. I shouldn't be glossing over this, because it's really cool, but you'll get a great little chart if you highlight any of your requests that maybe had a high latency. You'll get your Rails instrumentation, a timeline of latency over there — everything you would normally expect. If you click into any of those, you'll be able to see, for each one of these calls, how latent they were and what the actual SQL is. This is really, really useful for debugging the performance of your applications.
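The debugger wiring just described, roughly — the gem name and require path are as I recall them from that era of the library, so double-check against the current docs:

    # Gemfile
    gem "google-cloud-debugger"

    # config/application.rb
    require "google/cloud/debugger/rails"   # loads the Railtie

    # redeploy, then click a stack frame in Error Reporting to jump into
    # Stackdriver Debugger with a snapshot point set on your source line.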
So, refresher there: gem stackdriver, go play with it, check out cloud.google.com slash ruby. We've got good docs and launching-off points for a number of things we looked at. Check out the open source library on GitHub, and definitely check out this website. As I mentioned for our client libraries, this is one of my favorite sources of docs, because I use those libraries a lot and, if you can't tell, I love them. So these docs are awesome. And of course, go check out these client libraries pages. They're my favorite recommendation for when someone says, hey, how do I get up and running quick? Because it shows you everything you need. Some shout-outs to some of my fellow Googlers. Today at 2:40, there's a talk about natural language processing from one of our coworkers who is awesome. And that's talking about natural language processing in general, not necessarily a gem. And then also tomorrow, what is my app really doing in production? Check that out. I'm really personally excited to see both of these. All right, so, got a little bit of time. Thank you, that's all I had. Good question, really good question. And definitely come by our booth and ask about that, which I say because I know one of my coworkers would love to answer it. Oh, sorry, the question is, do we offer things for Chef and other existing orchestration tools in the environment? So it's a question like, do we offer recipes and things like that? We really rely on the open source community to have created those already. We do have an internal team that is an active contributor to some of those projects, like Fog. But currently, we don't push anything to Chef or anything like that. If you have ideas and would love anything, definitely let us know. Good question. The question is, how tightly is Stackdriver tied to Google products? And I want to go so far as to say not at all. When you go to the Stackdriver docs, you'll find at the top level ways to integrate it with AWS or call it from any other cloud. I would love to have put the configuration options in here for that. So of course for Google Cloud, we make it implicit and simple, but everything we did here, you can totally do on any other cloud. Somebody maybe will tell me that's not the case, but I'm pretty sure every single thing works, because we just call out to those APIs — and they're saying yes. The question is, how does the pricing work? So it's based on compute hours or minutes or compute time — someone would be better at answering this than I am, but it's the same that we use for Compute Engine. The thing that I get the most questions about, or I like to point out, is we have a sustained usage discount. One of the things that I believe is different with us is, as opposed to having to do a bulk buy and figure out how many compute hours am I going to need in 2018, which some providers may do, with us you can start using all those resources and get a sustained discount. But pull up the billing pages for Flex — I don't have the exact details for you. I can definitely answer your questions a little bit later. Yeah, it's based on our compute model. Good question. Yeah, the question is, I love all the stuff that you've made, but how come it took so long to incorporate Ruby into App Engine? So I can't speak for priorities for the standard App Engine team, but I will say that one of the really exciting things for this new App Engine flexible environment is that it's based on Docker. So that really opened the door for us.
That let our team focus on making a perfect Ruby Docker image. That also opened up the door — you can deploy .NET applications now, actually, to Docker containers on Flex — and a number of other languages. That's what really opened up the door for us to be able to do that. So it was really Docker that let us do it. And that's super cool. I hadn't used Docker a lot before I joined Google Cloud. And now, of course, I love it, because not only can you use it there in your App Engine Flex, you can take it wherever you want. There's no lock-in or anything to this. Any other questions? And of course, I'll be here after. And a shout-out: please check out our booth. We'll all be around and available to answer questions. Look for the Google Cloud shirts with the rubies and Google Cloud Ruby on the back. Thank you so much, you guys. Could we just go back and keep the picture painted?
Ruby developers welcome! Our dedicated Google Cloud Platform Ruby team has built a great experience for Ruby developers using GCP. In this session, we'll walk through the steps to deploy, debug and scale a Ruby on Rails application on Google App Engine. You'll also learn about some of the exciting Ruby libraries available today for adding features to your app with GCP services like BigQuery and Cloud Vision API.
10.5446/31247 (DOI)
Thank you for coming. My name is Braulio and I work for ActBlue Technical Services. I would like to start with a question. How many of you have used ActBlue to make a donation? Wow, quite a few. How many of you tipped? Not so many, but for the ones who tipped, thank you. I stole the title for the presentation from Bernie Sanders. He was saying that we need a political revolution. Sanders made small-dollar donations popular, but he's only one of the more than 17,000 organizations that have been using the service for the last 13 years. And it's not only political; we also provide the service for non-profits. In fact, ActBlue is a non-profit. In the first quarter of this year alone, we had 3,000 organizations using the platform. And our Rails application is 12 years old. So how does it work? Let's say that Jason in the back, who works here, wants to run for city council. And I'm sure he would be a good city councilor, but he needs money to promote his campaign. So how would he get donations? Of course, he cannot process credit cards by himself. So what he will do is go to ActBlue and set up a page, and from that point we'll process the credit cards using that page. Once a week, he will get a check. We also take care of the legal part, which is very complicated, and we also do the compliance — there are multiple reports that have to be sent when you are doing political fundraising or fundraising for non-profits. We also provide additional tools for the campaigns: statistics, A/B tests. Someone who donates can also save their card information on the website, and the next time they donate — to the same organization or a different one — they don't have to enter anything. It will be one click, a single-click donation. We have 3.8 million ActBlue Express users. So far, we have raised $1.6 billion in 31 million contributions in these 13 years. And we like to see ourselves as empowering small-dollar donors. How many of you don't know what Citizens United is? All of you know. Okay. Well, in case someone doesn't know and didn't raise their hand: Citizens United is a ruling by the Supreme Court, and it allows unlimited amounts of money to promote a political candidate. So a few people with lots of money will have a lot of power in the political process. We also do non-profits, but on the political side — this is how we started — we would like to have lots of small dollars, which means a lot of people with little money having the same power. This is how the contribution page looks. This is for Jon Ossoff. The person here has never visited the website before. It will be a multi-step process: getting the amount first, then, in the next step, we will get the name, address, credit card, et cetera. This is a non-profit, by the way. In this case, it is an Express user donor. It says, hi, Braulio. So it recognized that it's me. And if I click any of those buttons, the donation will process right away. I don't have to enter a card number. That's what we call single click. During February of last year, Bernie won the primary in New Hampshire. That was the second state with primary elections, after Iowa, and he won big. He gave a victory speech that night, and I'm going to show a little clip from there. "Right here, right now, across America, my request is please go to berniesanders.com and contribute." Right away, we felt the Bern.
So the first one is requests per minute — about 330,000 per minute at the peak. And the second one is contributions, credit card payments — about 2,500 per minute, which is roughly 42 per second. That's a lot, because a credit card payment, you see, is expensive to do. And the reason why I chose these graphs is to stress the fact that improving performance is a continuous process. It's not something that happens from one day to the next. We were able to handle this spike pretty well. Some donors didn't see the thank you page — they only saw the spinner, and after donating, they never got to the next page — but we never stopped receiving contributions. And you can see that there is no gap, so the service was never down. And that's because of what we've been doing all these 13 years. Every time we have high traffic for any reason, we have been analyzing: why do we have that? Is there a bottleneck? Can we improve it? So this presentation is about all the experience we have gathered during all these years. The first thing we have to do is define what we're going to optimize. And that will depend on the business; in every case, it will be different. For an e-commerce website, for example, it's very likely that it will be the response time for browsing the catalog. In our case, it's very simple: the contribution form. And we have to optimize two things. One is how we load it, and the other one is how we process it. Loading the contribution form is no secret — it's what you think of as loading a form. It's very simple. But processing is a little different. In the center, I have our servers. And around that are web service calls that I have to make in order to execute a payment. We have a vault for the credit card numbers, and the vault is the only place where we have the numbers. Outside of that, it's all tokens. So the first thing we have to do is get a token — we have to tokenize the card. That's number one. That's a post and a response. Then I have a fraud score; I have an external service which will provide that for me. Then, with the bank, I actually have two steps. All credit cards are processed this way: you first make what is called an authorization, which is a post again, and the bank will respond whether it's approved. In that case, it will give me an authorization number, or it will say decline. But there is no money transferred at that point. I have to do a second step, another post, and send the number I got — if it was approved, of course — and the bank will respond with a confirmation. Also, I have an email receipt I want to send, and every organization wants to know — most of the organizations want to know right away when they get a contribution. So they also want to be informed. The thing looks like this. Do you remember those? How many can you do in a minute? There are too many young people here who have never seen this; only the old people can remember. Okay. So this is the high-volume track: because we have so many donations, we have a scaling challenge. We have to be able to process this fast and efficiently. What I'm going to do now is present one approach at a time — I will show several. For each one, I will explain how it works, how it is implemented, maybe some code, and there will always be a cost, and I'll tell you how to deal with it. The first one: metrics. Here is the part I'm going to show. Can you see the graph? No? I thought that was going to be the case.
How about that? Well, we have dozens of these. I'm going to show only a few, the most important ones. This is contributions per minute; on the X axis is the time, on the Y is the number. And something happened there. Actually, what happened there was Bernie won Indiana and there was a spike. These were called Bernie moments, by the way. So, that one. Now, I have another one here, which is traffic. It will be correlated. By the way, when you have metrics, numbers are not enough; you need graphs like this. When you have graphs, you can correlate, and between traffic and contributions there is a correlation. But this one is the number of contributions that are being processed. Because we have so many web services we have to touch, there will always be a certain number that is in the middle of that process. So I'm counting those. And if you look at these two, the contributions and the pending, there is no correlation, which is great. That's how it should be. If for some reason pending was also going up in the same way contributions are going up, it means the service is saturated: I cannot process them as fast as I receive them. In this case, it's wonderful. That's how it should be always. Sometimes it wasn't. But that's the goal. Then the last one is latency. That is a time interval. It is how long it takes between when I create a contribution and when I receive an authorization from the bank. That's also an important number. In this case, it's about two to three seconds. This is how I do metrics in Ruby. There is a gem called statsd-ruby. I use the Statsd class and create a new object. I pass the host name — I will have multiple calls, and I need to know where this is happening. The second instruction is a gauge. The gauge will generate — not draw, generate — a data point, which is an integer. In this case, how many pending authorizations I have. And the timing method — both gauge and timing are Sidekiq, excuse me, are statsd methods. I have a time interval, which, as I mentioned before, is the distance between when the contribution was created and when it was approved. Very simple. But if you have lots of these, you will be able to have those graphs. And the way you render the graphs is something called Graphite; there are all sorts of tools you can put around it. You also want to measure CPU, memory, disk. So there is something called collectd that has plugins to gather that information easily. I mentioned logs. They are not really metrics, but they are very important. Don't forget them. Good. We covered the first one. Now, multiple servers. If you start with one host, which is normally the case, even the fastest computer in the world won't be able to handle all the load. You will have to put in a second, a third, et cetera. This graphic shows, on the right, that I have three machines running. Inside each machine I have little circles; those represent threads. So I can have a web server running in each thread. I have multi-threading, which is simple. But the important part here is that I have different computers. And because I have that, I need to have this piece in the middle, which is a load balancer. The load balancer can be a piece of software or hardware — well, in the end, anything is software, but don't get too technical with me. The browser — the DNS — will resolve to the IP address of the load balancer. The request will get there.
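Those two calls with the statsd-ruby gem look roughly like this — the host, the metric names, and the Contribution model are placeholders:

    require "statsd-ruby"

    statsd = Statsd.new("stats.internal.example", 8125)

    # gauge: a point-in-time integer, e.g. contributions still pending
    statsd.gauge("contributions.pending", Contribution.pending.count)

    # timing: an interval in milliseconds, e.g. created -> authorized
    statsd.timing("contributions.auth_latency", elapsed_ms)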
Back to the load balancer: it will pick one host, and it will pass the request along. How you implement it will depend on the hosting company. What I have here is what I call the poor man's version of a load balancer, because it's free. It is using Nginx. I can configure Nginx to do load balancing, and you see two blocks. Can you see? Well, yeah. The first block defines the IP addresses of the three hosts. And the second block, the server block, is telling me that I will be listening on port 80 and all the requests will be passed to the backend block. There is an algorithm that will define how they are picked, but in this case it's whatever — sequential, random, round robin. It doesn't matter. You can define it if you want. So there are costs involved when you are doing this. The first one: if you have used Heroku, the first surprise when you start with Heroku is that there is no file system. And this is why. You upload a file in the browser, and it will be on one host. Later, there will be a different request on a different computer, and it will try to see the file on that computer, but it's not there. So you need to provide a mechanism to fix that. One way is to use Amazon Web Services S3, which is basically remote disk. Another option is something called sticky sessions, where the load balancer can always pick the same host and send those requests there. But that's for the second problem — I'm getting a little ahead. I'm talking about persistence. You can replicate the files; you can do that. With persistence, I don't share the memory, and I can have the sticky sessions I was talking about, but Rails is very good: you have out-of-the-box Rails sessions. That's how you share state. You can also use Redis if you want your own data store. The third problem you're going to have: because you have more servers running, all of them connected to the same database, you're going to start running out of connections. The database has a limit on how many you can connect. What we do is, in Postgres, you can easily define replication. That means there will be copies of the database. The replica will be a little behind, but not too much. They're read-only, but I can still use them. And if there is a host that doesn't need read-write access, that one doesn't have to connect to the main database; it will connect to the replica. The last one is: it doesn't matter if all the hosts are up, it doesn't matter how many I have — if the load balancer is down, I have a problem. And in our case, the solution is a combination of our CDN — I will explain what the CDN is — and the load balancer provided by the hosting company. Good. We did two. We have the next one, caching. Caching is the most popular one. Every time you hear about performance, you will hear, hey, you have to do caching. Caching is basically making a copy — keeping a copy somewhere to save time. There is a cache in the browser, there is a cache in the web server, and there will be caches in between as well. And if you have money, you can hire a caching service and have something up and running very quickly. That's why it's the highest value for the effort. We use something called Fastly, which is a content delivery network. There are several: Akamai, and another very popular one, Cloudflare. And I will explain how it works in the next slide. But this is very good. This is something that works very well.
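The poor man's load balancer described a moment ago looks roughly like this in the Nginx config — the addresses are placeholders:

    upstream backend {
        server 10.0.0.1;
        server 10.0.0.2;
        server 10.0.0.3;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;   # round-robin by default
        }
    }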
And the loading part of the form is the part that gets all the benefit of this caching. One time, we had a distributed denial of service attack, and we handled it very well only because the CDN was there for us. We couldn't have handled that with our own servers. This is how it works. I have a browser in Boston. And the boxes on the right are pops — points of presence — that belong to the CDN. They don't belong to ActBlue. I have a pop in New England. So the browser in Boston will make a get — if you follow the numbers, you will follow the sequence. The pop will get the request. Because this is the first time this document is requested, the pop will make another get to our own server, the ActBlue host, to get the document. We will respond — that's number three — and we're adding two headers there; I'll explain what they are. And the pop will, in turn, respond to the browser with the document. There is another header in that last response. Now, later, there is probably a visitor in a city maybe 100 miles from Boston. That visitor will also go to the same pop and make a get. But because the pop has the copy, it won't request the copy from us, so we are not going to see that second get on our server. The pop will respond with the copy it has. There will be other pops. The pops are distributed all over the world. For example, in this case, I'm putting one on the West Coast. So if someone in LA is browsing ActBlue, they will go there, and that pop will have its own copy. This is a map where the red dots represent pops. This is a dashboard from Fastly. And the size represents how many hits I have. A hit means that the pop has the copy — it's a cache hit. The biggest are in the US, but there are also red dots in Europe, Asia, Australia. The gauge on the left indicates that there is a 97% hit rate. What that means is that only 3% of the requests will get to my server. All the requests go to the CDN first, and 97% of the requests that the CDN receives will stay there and never touch my web server, which is great. How do you control the cache? You need to control two things: how long the copy will live in the cache, and how you do the purge. Purge means you force a refresh. You specify how long the copy will live, but sometimes you want to refresh it right away; you don't want to wait the whole time. You do this with headers. I'm showing a few here. Cache-Control is the first one, which is the most popular one. It says max-age=400 — by the way, I will put the slides online, so you don't have to worry about copying this down — and 400 seconds means that the copy will live 400 seconds in the browser, in anything in between, and in the CDN as well. Surrogate-Control is longer, 3600 seconds. That one is the same as Cache-Control, but it's only for the CDN. There is a specific specification for how CDNs work — it's called the Edge Architecture specification — and that's where this header is defined. Another one: Surrogate-Key. For each document, I can define something like a tag, which is great because at some point I might say, hey, I want to force a refresh on all these pages. If all the pages have the tag key2, I can do it in a single call. There is another thing called the Varnish Configuration Language, VCL, which is a script. The script has access to the whole request and also the URL. I want to show something. The VCL script will run at the point where the pop is getting the request from the browser, and it will also run at the point where it's getting the response from the host.
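From a Rails action, setting those headers looks something like this — the values mirror the examples above, and the controller and surrogate key are placeholders:

    class ContributionsController < ApplicationController
      def show
        # Browser and anything in between: keep the copy for 400 seconds.
        response.headers["Cache-Control"] = "public, max-age=400"
        # CDN only: keep it longer, and tag it so we can purge by key.
        response.headers["Surrogate-Control"] = "max-age=3600"
        response.headers["Surrogate-Key"] = "key2"
      end
    end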
And here is an example of what I can do with the VCL. I can check the URL, and in this case it starts with /videos. In that case, I say, hey, I would like to use a specific backend for videos — and that's the name of the backend, F video. This is what I'm trying to avoid: I don't want to mix videos and the contribution form. I can also respond right away. I can say return a 400, for example, without touching the server. There is also an API. With the API, you can purge keys, one or all of them. The costs: it is expensive. If you want fine control over how the copies are kept and purged, it will get complicated. Also, if you are doing SSL — in our case, we always do SSL — it's complicated because all the pops will need to have a copy of the certificate and also the private keys, and I need to maintain that. The other thing is — you remember from the slide, which slide was it, five? Yes — it says hi, Braulio. I can bet that checkout pages on regular websites are not cached, because you have this personalization. If you cache this and my neighbor is seeing "hi, Braulio", it doesn't work. I need to handle that. In our case, we cannot follow that approach — we have to cache, because it's the most important form. We have JavaScript: we cache everything except those little pieces, and they are filled in with JavaScript. Great. We have covered three. We are doing fine on time. Separation of concerns, also known as SOA or microservices. It's a very simple idea: I will have different applications to handle different parts of the system. The first example here is the tokenizer, the vault, which is an application written in Node, completely separate. It's even in a different hosting company. I can also have multiple copies of the database that way. One of the advantages of having separation of concerns is compliance. The fact that we have this vault means that if I don't have access, I don't need to comply — I don't need to have an antivirus on my laptop. That's why I have never seen a credit card number: because I don't want to have an antivirus on my laptop. The cost, as anyone who has done microservices or SOA knows, is the fact that it's very difficult to implement and very difficult to test. Great. One, two, three, four — we have covered four. Now we are going to cover deferred tasks. In our case, we're going to talk about tasks that are slow. I don't want the web server to be doing something that's slow, because it will hold it for a long time and that server won't be able to handle other requests. For example — this is slide number nine — all of these things are slow. Let's say I'm talking with the bank. That shouldn't be done by a regular web server; it will take several seconds. What I do is save that job for later. If I want to do it later, I need to save it somewhere: I need a queue. In our case, almost everything will be a deferred task. An extra benefit of doing this is isolation. If the bank is down, for example, I cannot process authorizations or settlements, but because it's a different task, it doesn't matter: the customers will still be able to donate. They won't know if it's approved or not, but they will get the thank you page. They might even get an email saying thank you for the donation; we'll tell them later if it was approved or not. The other advantage is increased reliability. If for some reason something failed on the authorization, for example, I have all the information saved and I can re-run it.
In our case, the deferred batch settlements — I put that there because it was a big gain for us. We decided that the contribution was going to be considered paid, realized, at the authorization point. Although we didn't have the money yet, because we hadn't done the settlement, we were going to consider it paid right away. If we do that, we can do the settlement deferred — we always do — but we can also do it in batches. Instead of having one settlement per post, we can send one post with 400 settlements. Big gain. This is how a queueing system looks. On the right, I have the processes doing the work. They are called workers: authorizations, settlements, sending email. I have the queue in the middle, where I save the jobs. On the left, I have the web servers putting the jobs in the queue. We use Sidekiq. This is how you use Sidekiq — we have two blocks. One: you define a class for the worker. In this case, we're doing the settlement. You have to define a method called perform. Settlement.find will take an id and load the Settlement model, and the method that will talk to the bank is settle — with an exclamation point. That first block is who is doing the job. The second, the line that's in the middle, is how to put it in the queue. I use the perform_async method, which comes with Sidekiq. I give it the id of the settlement record and put the job in the queue. The worker will process it. Rails 4.2 has Active Job incorporated, and Action Mailer has it integrated out of the box. You want to send an email, you say deliver_later — the line at the bottom. Very simple. If you're doing deferred tasks, you have some costs. Queueing systems are unreliable — except Sidekiq. That's because Mike, who wrote Sidekiq, is around. Just in case. But it's not just the queue: the job can die, the computer can die, the bank can have a problem, the communication gets disconnected — all those things. You have a deferred task and the job didn't run: what are you going to do? Sidekiq will do retries automatically. But maybe you retry all the time — all the time — and never succeed. What do you do? If the settlement — let's say a $100 settlement — never settles, you're going to lose that money because you never transferred it. What else? Coordination. You cannot do the settlement before the authorization; that will fail. And it's difficult to debug. It's kind of crazy, because things happen at any time, and because we have multiple hosts, they happen anywhere. You remember I told you about the logs? This is where they are important. That's the only way to know what happened. Great, we're doing great. Now the last one. We have covered everything but the last one: scalable architecture. This is the most important one, and I put it at the end because I am a developer, and as developers, we always overlook architecture. But it shouldn't be like that. The idea is, when I'm writing software, I have to be thinking it has to be fast. I'm sure all of you write fast software, but that's not enough. You also have to think: how am I going to scale this in the future? And if you don't think this way, you might make a mistake, and it's going to be difficult to fix because you'll have a whole system written that way. So I'm going to give you two examples. You saw the first contribution form — I have these suggested amounts. So I might say, hey, I would like to have a process that on the fly will calculate the best amounts to show, depending on the organization and depending on the user.
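Going back to those two Sidekiq blocks for a second, a rough sketch — the Settlement model and settle! come from the example above, while the worker and mailer names are placeholders:

    class SettlementWorker
      include Sidekiq::Worker

      def perform(settlement_id)
        settlement = Settlement.find(settlement_id)
        settlement.settle!   # the slow call out to the bank
      end
    end

    # how the web server puts the job in the queue
    SettlementWorker.perform_async(settlement.id)

    # and with Active Job / Action Mailer (Rails 4.2+)
    ReceiptMailer.thank_you(contribution).deliver_later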
So I can say, okay, let's start developing this. I'm going to use machine learning. Oh, great. And right away I say, you know what? If we have a central system to do this, it would be great. Things would be easy. Oh, great. We do it that way. That doesn't scale. At some point I will have so much load on this system, I would want to have two of these and I can't because there is a central one so I cannot have two. There is another example. The deferred, excuse me, the batches for the settlements, we decided it was a decision. It's an architectural decision. We are going to consider the payment, the contribution process at the authorization, at the end of the authorization. And that is huge because I don't need to do the other part. I can say right away. After that step, I can say, hey, we're done. And that's the list. Scale of architecture at the top. And that's all I have. If you are interested in what we do, come talk to us. We have stickers also somewhere there. Those are my colleagues, by the way. But we have time. We have, yeah, seven minutes or six minutes for questions. Anyone have? Okay. The question is when you have multiple servers, how do you handle the logs? You will have many computers generating them. We use paper trail and it works very well. And with paper trail, you define, you have to, the systems generating the log will connect with their system and through a web interface, you see everything in a single page. You can filter if you want, of course. That's the way to go. Okay. We have, we could do that. We don't do it on purpose. We don't have a system to simulate load. Excuse me. The question was how do we simulate load or how do we prepare for the future record like this one? And we don't do it on purpose because we have something called recurring contributions. Every day at four in the morning, we run, you can define, I make this contribution and I want to make it every month or I want to make it every week. So we have lots of them. And at four in the morning, we run them all together. And that's a lab in itself. So it's pretty close to reality and we can analyze and compare how the system is handling. In fact, in some cases, the bank cannot handle the load because it's only one single time. We can do it very close. We have to gauge it, throttle. We have to throttle it and make it a little spread. Also, we have the end of quarter. On every end of quarter, the organizations have goals and they will push until midnight. After midnight, all the traffic will go down. So we also use that. So basically, we don't do our own simulation but we are very careful to study those cases. Another question? Great. Thank you so much.
Bernie Sanders popularized crowdfunding in politics by raising $220 million in small donations. An example of the challenges with handling a high volume of donations is the 2016 New Hampshire primary night, when Sanders asked a national TV audience to donate $27. Traffic peaked at 300K requests/min and 42 credit card transactions/sec. ActBlue is the company behind the service used not only by Sanders, but also 16,600 other political organizations and charities for the past 12 years. This presentation is about the lessons we learned building a high performance fundraising platform in Rails.
10.5446/31248 (DOI)
Hi, everyone. My name is John. Today I'm going to be talking to you about mutation testing. I'm the CTO at a small tech company in Palo Alto called Cognito. Sorry if there's a little flickering here. We're not sure what's going on. But, all right. So, before I get into it, I want to give you a quick outline of the talk. I'm going to give you an introduction to what mutation testing is. I'm going to show you how it can help you improve test coverage. Then I'm going to show you how it can teach you more about Ruby and the code that you rely on. I'm going to show you how it can be an x-ray for legacy code. How it can be a great tool for detecting dead code. How it can be probably the most thorough measure of test coverage. How it can help simplify code. Then I'm going to wrap it up by talking about the practicality of mutation testing day to day and how you can incorporate it at your job. So, before we talk about mutation coverage, we need to be on the same page for line coverage or test coverage in general. Usually when we're talking about test coverage, we mean line coverage. So, line coverage roughly means the number of lines of code run by your tests over the total lines of code in the project. There's different variations like branch coverage, that's sort of the gist of it. Mutation testing asks a different question. It says, how much of your code can I change without failing your tests? If you think about it, that makes sense. If I can remove a line of code or meaningfully modify a line of code in your project without breaking your tests, then something's probably wrong. You're missing a test toward that said code. So, before we actually dive into how to automatically, you know, with a tool, do mutation testing, I want to give you a good intuition of what mutation testing is by doing it by hand. So, I've got some sample code here. You can take a second to read it over. So, I've got this class called Glutton's at the top. I just initialize it with a Twitter client. Then I do a search on the Twitter API using that client. And then I get the first two results, grab the author from it, and return that. And that's basically what the test specifies down here. It's got a fake client and then some fake tweets. All right, on the left here, I've got the same code, but in sublime text on the left. And then on the right, I've got a script that's going to run whenever I modify the file. That script is going to output a diff of the code against the current output or the current code in sublime text as well as the result of running the tests. So, first, I'm going to go in and try to modify the hashtag. That does not fail the test. I can also remove the search string entirely and that doesn't fail it. And I can actually call it with zero argument and that also does not fail the test. If I change first two to first one, that does fail the test. That's good. But if I change it to first three, that does not fail the test. All right, so going over those again, I can basically change the input to the search method. However, I want, I can remove the hashtag, remove the entire search string, call it with a different number of arguments. It doesn't matter. If I change the first two to first one, that does fail it. That's because we're giving those two fake tweets in our fake client. But if I change it to first three, then that does not fail the test. That's because we only have two fake tweets in our test. So that's manual mutation testing. 
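For readers following along without the slides, the example looks roughly like this — a reconstruction, with the hashtag, method name, author names, and spec wording made up:

    class Gluttons
      def initialize(client)
        @client = client
      end

      def top_authors
        @client
          .search('#food')     # hypothetical hashtag
          .first(2)
          .map(&:author)
      end
    end

    RSpec.describe Gluttons do
      it 'returns the authors of the first two tweets' do
        tweets = [double(author: 'alice'), double(author: 'bob')]
        client = double(search: tweets)    # fake client: any search returns the fake tweets

        expect(Gluttons.new(client).top_authors).to eq(%w[alice bob])
      end
    end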
You can imagine that doing that day-to-day at your job would be pretty tedious. This is just one method, but if we're adding 100 lines of code, trying to do this for every single part of the code that we're adding would be a lot of wasted time. And it's also going to be pretty hard to outsmart yourself: if you just did the best job you can writing this code and writing the tests for it, then it's going to be hard to come up, 30 seconds later, with things you didn't think of before. All right, now I'm going to show you how to do mutation testing with an automated tool. The main tool for this is called Mutant. It's been around for years; I learned about it about two years ago, it's how I got into mutation testing, and I've since become a core contributor to the project. A friend and I also just started a fork of this project recently called Mutest. It is pretty similar right now, and you'll notice throughout the presentation I'll probably refer to them interchangeably, but you can use either one. All right, so in this example here, I'm invoking the Mutant command line program, passing in a flag saying to use the RSpec integration, and telling it to mutate the class that we just saw. There's going to be a lot of noise in this output, so don't worry about it; we'll go over all the results again. All right. Each diff here is a mutation that survived while running my tests. So, it found some things that we also found during our manual mutation testing run: it can remove the entire argument and it doesn't matter. It also pointed out that we can pass in a different type of value to the search. We can also pass in nil, which is interesting. Something that we didn't catch in our manual run is that it can change first two to last two. And if this is a method that finds the most recent tweets, then this is probably a pretty bad change — if we care about finding the most recent tweets, we probably don't want to return the oldest ones. You can also remove the first two call entirely, which is interesting. We probably want to specify that behavior too, because if we shipped this code to production without that limit in there, you could see how we could quickly exhaust our API quota and rate limit ourselves. So, in this case, our mutation testing tool shows us how to improve the test: we give it three fake tweets instead of two, and we also explicitly specify the search that we expect to perform. So, when we use Mutest, it's automated, it's quick, we don't have to think or expend much more effort, and it's probably going to be more clever than you are. Mutant has been accruing different mutations for years that all target very specific use cases and try to point out specific changes depending on the code that it's interacting with. So, here's another example. In this case, imagine that you're working on an internal API. Here's some sample code; I'll give you a second to read it. Cool. So, here we have the users controller, and we've got the show action. We're taking in the ID parameter, making sure that it's an integer, passing it to the user finder, and either rendering JSON for what we found or rendering an error — and that's pretty much what the test below specifies. We run this through our mutation testing tool, and it's going to show us that we can replace the to_i method with the uppercase Integer method.
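A sketch of the action as described — the finder, error payload, and status are illustrative, not the slide's exact code:

    class UsersController < ApplicationController
      def show
        user = UserFinder.find(params[:id].to_i)   # the mutations target to_i and the hash lookup

        if user
          render json: user
        else
          render json: { error: 'not found' }, status: :not_found
        end
      end
    end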
That's actually pretty interesting. If you're not familiar with the difference, the to_i method will work on any string and on nil: if I don't have any digits in my string, it's still going to give me zero, and if we call it on nil, it's going to give me zero. The Integer method is going to raise an error if I give it nil, and it's going to raise an error if it can't get a number out of the string. It's also going to change the hash bracket method there to hash fetch, and the difference there is it's a little bit more strict about the presence of the key. So, in the original implementation, if the ID key was not there, this would silently return nil; now, in this code, it's going to raise an error if that key isn't there. So, if we put those together, our tool is forcing us to write a slightly more strict implementation of this action. It's saying: assert the presence of the key, assert that the ID value is actually parsable as an integer. And this has some interesting implications too. We're modeling our problem a little bit better. For example, before, if someone used the API incorrectly and did not pass in the ID key, then we would try to get the ID, we would get nil, we would coerce it to zero, pass that to the finder, and then return the user an error saying, could not find the user with ID zero. So, this is a bit more of a well-fitted implementation for this problem, and we're also being forced to think about things like not performing extraneous database queries, and doing validation ahead of time instead. Here's another small example. In this case, we've got a created-after action. I'll let you read it over real quick. Cool. So, in this case, we're passing in a parameter called after. It's going to parse that input and then pass it to a class method on User called recent. If we run this through Mutest, it's going to show us that we can actually replace parse with a method called iso8601. If you're not familiar with the difference, that's okay — it's a pretty poorly named method. But basically, it's a more strict parsing method. Specifically, it requires four digits for the year, a dash, two digits for the month, a dash, two digits for the day. And this is pretty significant compared to the parsing rules for Date.parse, which is basically going to try anything that can parse the input. It's going to support all these different formats, as well as some things that we might not want to parse: if it finds the name of a month inside of the input, then it's going to try to parse that. So, on the left, we've got every valid input now for May 1, 2017, and on the right, we have all the different inputs that can produce May 1, 2017. Now, we're going to talk about regular expressions. I'm particularly excited about this part of the presentation, because this is a feature that I think no other tool in the Ruby ecosystem can really help you with. Mutant and Mutest can actually dig into a regular expression and show you that you're not covering branches within it, which is pretty cool. So, here's some sample code. Basically, here, we are iterating over a list of usernames, presumably an array of strings, and we're selecting the ones that match this regular expression. The first thing we're going to see here is it's going to try to replace the caret and the dollar sign with backslash uppercase A and backslash z.
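The sample code isn't visible here, but the mutations discussed imply something with this shape — treat the exact pattern and names as a reconstruction:

    VALID_NAME = /^(john|alan)$/

    def known_users(usernames)
      usernames.select { |name| name =~ VALID_NAME }
    end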
And if you're not familiar with the difference, the caret and the dollar sign mean beginning and end of line, whereas backslash uppercase A and backslash z mean beginning and end of string. So, in the first case, I could pass in "alice", newline, "john", newline, "bob", and get a match. And so it's showing us in this case: hey, you don't provide any test input that shows that you want to handle these multi-line strings, so I can actually change this to the more strict format, and everything still works. It's also going to try to remove each value in the alternation and make sure that we're actually testing each condition, because inside the regular expression we're actually saying John or Alan are both valid matches, so we should be testing both cases. It's also going to try to put a question mark colon at the beginning of the group, and that means it's changing it to a passive, non-capturing group. Basically, parentheses in regular expressions serve multiple purposes: they can be a mechanism for grouping expressions, like here where we have the pipe where we're saying John or Alan, but they also mean that we want to extract this value and preserve it in the match data. So, in this case, the question mark colon means we don't care about extracting this value, we're just grouping. And so it's recommending that we either test that we're capturing something or use this more intention-revealing syntax. And finally, if we're running this on Ruby 2.4, it's going to say, hey, I can use the new match? predicate method, and if you're not familiar with the difference, this method is new in Ruby 2.4, and it's about three times faster. It only returns true or false, and the way it's faster is it doesn't do anything with global variables, whereas every other regular expression method will actually set those variables regardless of whether you want them. And if we put all these together, we get something that is more strict on the input, better tested, more intention-revealing with the non-capturing group, and more performant. And the cool thing here is that we didn't have to know about any of these features in Ruby in order to write this method. We wrote what we knew, and then the tool recommended all these changes, which resulted in a pretty different method, but one better fitted for our task. All right. Now I'd like to talk about HTTP clients. Here we've got a method called stars_for, and it's using the popular HTTParty client. It's going to take in a repository name, hit the GitHub API, turn the result into a hash, and then get the key under stargazers count. If we run this through our mutation testing tool, we're going to see that it can remove the to_h method, and everything still works. Now this might seem a little confusing at first, but what's going on here is the HTTParty client actually will look at the content type response header and behave like a hash if the response is JSON. And as a result, we can actually remove that to_h method and interact with the response object just like we were before, and it works the same. And the cool thing here is that Mutest does not have any specific HTTParty support within it; it just knows how to walk through your method definition and remove different methods. And so, as a result, even if we didn't know this before, we're going to see this mutation, read the documentation, and update our code — and we now know a little bit more in the process. All right.
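A sketch of the method described; the endpoint URL is an assumption based on the public GitHub API:

    require 'httparty'

    def stars_for(repo)
      response = HTTParty.get("https://api.github.com/repos/#{repo}")
      # The surviving mutation simply drops .to_h — the response object already
      # behaves like a hash when the content type is JSON.
      response.to_h['stargazers_count']
    end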
Now I'd like to talk about legacy code. This is the same code example we had before that created after endpoint where we're passing a date. In this case, I'd like you to imagine that instead of implementing this method yourself, you're being tasked with updating the method, maybe adding a new feature. And to make this more realistic, let's say that the original author wrote it two years ago, there isn't much documentation, there's only a few tests, and they no longer work at this company. When you run your mutations in tool on this code before you actually modify it, you're going to see this mutation to ISO 8601. And if you don't know about it then, you're going to probably look at the documentation and see what the difference is. Huh. This is a more strict date parsing format. Interesting. This leads to us asking a few questions about the code and questions. What was the author's intent here? Did they mean for people to only use this very strict format? Or did they mean for people to be able to use any format and they just didn't add tests for that? More importantly, how is this code actually being used today? If there are other services that are passing in other formats here, we probably want to actually update the test to reflect that we support this. We don't want to break their integration. And so running our mutation testing tool on this code before we modify it, it's giving us a checklist, basically, of things or questions that we should answer before we modify it. In other words, it's basically giving us sort of a list of hotspots where if we modify this part of the code, we might actually introduce a regression and the test won't fail. This probably isn't too surprising, but mutation testing can be a very thorough way to measure test coverage. Consider this method right here. If we invoke this method at all within our program, then a line coverage tool is going to say that we have 100% coverage. But even if we test it directly, we are still probably not testing it in the ideal way. Our mutation testing tool is going to show us that it can actually fit all of the boundaries here and say, hey, are you actually testing for the off by one of the pioneers here? It's going to say, do you have a test specifying that 21 is the minimum age for buying alcohol and that 20 is rejected, 22 is allowed? And by fiddling with its boundaries, it's actually helping us improve our test. And this very thorough modification of the code can be a very big help when we're dealing with very complex methods that seem pretty simple. This is only a nine-line method here, but in this case, we're dealing with a lot of complexity. Basically, we have a method here that's deciding whether a given user in a system called an editor here can edit a given post. We've got some different user roles here. By modifying each one of this code, each individual token, it's actually going to ask us, are you testing the case where the user is a guest? What about when they're muted? What about when they're a normal user and they are the author of a post? What about when that post is locked? Are you testing these conditions together? When the editor is the author of the post and when it's locked, same condition, but when it's not locked, when they're not the author and it's locked? What about when they're a moderator? Are you testing the condition where the author is and is not an admin? Are you testing the case where they're an admin? 
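One plausible shape for that nine-line policy, pieced together from the questions above — every branch here is an assumption, since the slide itself isn't quoted:

    def can_edit?(editor, post)
      case editor.role
      when :guest, :muted then false
      when :user          then editor == post.author && !post.locked?
      when :moderator     then !post.author.admin?
      when :admin         then true
      else false
      end
    end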
This might seem like a large amount of tests to be writing for this pretty simple method, but the mutation testing tool is pushing us a little bit closer to the actual complexity here. If you think about it, the editor of the post can have five different roles according to this code, the author can have five different roles, we also have the case where the editor is or is not the author of the post, and finally we have the condition where the post is or is not locked. So we're actually dealing with at least 31 different conditions here. This is a lot of complexity, and our mutation testing tool is at least forcing us to embrace how complex it is and actually prove that we are handling all these different conditions. Here's another small example. In this case, I'm taking in a list of users, mapping over them and grabbing their email, and filtering out users that either don't have an email or have previously unsubscribed from our mailing list. This is the sort of code that I would usually write to test what we just saw. In the first example, we've got a valid user and then a user without an email, and we're asserting that the valid user's email is the only thing in the output. Then in the second case, we have the same thing: a valid user and an unsubscribed user, and we're asserting that only the valid user's email is in the output. Our mutation testing tool is showing us that we can change next to break here. That's pretty interesting, but it makes sense given the tests that we wrote. If we look back at them, in each case we have the invalid user — the user that we're trying to filter out — at the end of our test input. So in this case, skipping one iteration is the same as ending iteration. And so the way to correct these tests is to put the user that we want to skip at the beginning and have the good user at the end. This is just another small change that the tool is able to suggest to help us improve our tests. Mutation testing is also a great tool for detecting dead code. Consider this example right here. Maybe I'm new to Rails, and I don't know that Active Record is going to do this for me if I have a column called name. Even if I don't know this, if I run the mutation testing tool on it, it's going to show me that I can replace the method body with super. This might seem a little weird at first. What it's saying is that the entire implementation of this method is already covered by the parent class. So in other words, as a new user of Rails, I didn't have to read any documentation, I didn't have to talk to any coworkers, just to discover that I'm introducing a redundant method. Here's another example where I've got a posts controller and then a private method called authorized. It's got an optional argument called user, and that's going to default to the current user. If the usage looks like this, then it's going to find one mutation. It's going to say, hey, you're always passing in a user, so we actually don't need this default argument. But if the usage looks like this, it's going to do something different. It's going to try to apply that previous mutation, and the test is going to fail because we are calling it with zero arguments. Instead, it's going to say, hey, I can take this assignment and put it at the beginning of the method body. So in other words, no matter what you pass into this, I can overwrite the local variable with the value of current_user.
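A sketch of that second case — the method body is an assumption; only the signature comes from the description:

    class PostsController < ApplicationController
      private

      def authorized?(user = current_user)
        # The surviving mutation inserts `user = current_user` right here and the
        # tests still pass — evidence that no caller ever supplies an argument.
        user.can_moderate?   # hypothetical body
      end
    end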
In other words, the value of user is static here, and we can actually just inline current_user into the method and remove the argument entirely. This is a very small feature that I actually like a lot, because I find myself running into it a lot. Maybe I'm doing a refactor, and I have this code elsewhere where I had to fully qualify the constant that I'm interacting with, but then later I moved it into a method like this and I forgot to update the constant. Well, the tool is going to show us that we can replace colon, colon, MyApp, colon, colon with nothing — it's going to remove it. And we get this for free. It's just going to say, hey, I can actually simplify this constant reference, and it's the same thing. Here's another small example. In this case, we are passing in an ID parameter to this controller, calling the post finder, rendering the response, and giving it an HTTP status of 200. The mutation testing tool is going to show us that we can actually remove that status: :ok entirely. And if we look at the documentation, it makes sense: in this case, the default status code is going to be 200. So again, we're learning a little bit more about what Rails provides for us without actually having to read the documentation or learn it from a coworker. Similar to the dead code detection that I just showed you, mutation testing is also a great resource for simplifying your application. Here's another method that we might have inside of a controller. Basically, we're taking in a user IDs parameter, which is presumably an array of integers, and we're calling the user finder and splatting the input. It's going to say, hey, you don't actually need the splat here — you can just pass in the array, and it behaves the same. So again, we're learning a little bit more about Active Record's interface, and it's basically zero cost. Here we have the user decorator. At the top, we have an attribute reader for the user, and then the greeting method just returns welcome and the name of the user. It's using the instance variable. The mutation testing tool is going to show us that we can actually replace the user instance variable with the user method. This is a very small change, but I actually like it a lot. We have the attribute reader, so why not use it? Also, the method call has some nice properties that we don't get with the instance variable. If we typo the instance variable, we're going to silently get nil, and then we're going to get a slightly more cryptic error down the line. But if we typo the method call, we're going to get a much clearer error saying that we typoed something. Here's another small method where we're passing in a string, which is just a path on a Unix system, and we're going to replace the leading tilde with the value of the HOME environment variable. Running this through Mutest is going to show us that we can replace gsub with sub, which makes sense: we don't need a global substitution here. We're doing one substitution, so it's recommending that we use the more intention-revealing and specific method sub, which only does one substitution. Here's another example that I run into pretty frequently. Maybe we would have this sort of method if we were running something like an image host, where in a delayed job or something we are going to regularly look for images that haven't been viewed in the last two years. Then we're going to iterate over them and log a little bit of debug output, and then delete all of them and return the count.
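A sketch of that clean-up code — the model, scope, job name, and log message are illustrative:

    class StaleImageCleanupJob < ApplicationJob
      def perform
        images = Image.where('last_viewed_at < ?', 2.years.ago).to_a

        # The array that map builds is never used, so this is where the tool
        # will suggest switching map to each.
        images.map do |image|
          Rails.logger.debug("deleting stale image #{image.id}")
        end

        images.each(&:destroy)
        images.size
      end
    end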
Well, it's going to show us here that we can remove the map and replace it with each. That makes sense: we're not iterating over this input and returning a new array, so we can just use the normal each method. This is something I run into a lot when I'm refactoring some code where previously I was mapping over the input and returning something new, then I move it somewhere else and I forget to change it back to an each. And the nice thing is I don't have to always worry about making these little mistakes — I know that Mutest will catch it for me. Then finally, here's another small example where I'm using the Ruby standard library logger, and I am setting a formatter, which is going to take information about a log event, take the different data there, format a string, and then that will be what's logged to the output stream here. Now, Mutest is going to show us that we can actually replace this proc with a lambda. Usually they're pretty similar, so usually I forget what the actual difference is here. But using this very simplified example here, we've got a proc that takes in two arguments, forms an array, and then inspects that array. Now, if we call the proc version, I can call it with no arguments, one argument, two arguments, three arguments — it doesn't matter. If there are too few arguments, it's going to fill in the arguments with nil, and if there are too many arguments, it's going to silently drop them. And it actually has the same behavior if we pass in an array: it's going to silently splat that array and then behave the same as before. But if we use a lambda, it behaves a little bit more sanely. So, you're probably thinking now: for one regular expression, we get five mutations? That seems a little ridiculous. I usually open PRs that are hundreds of lines long, and my tests take hours to run, so how can this possibly be practical? Well, there are a few features that make this more manageable. First, it takes a since flag where you can pass in a git revision, and that basically says: only mutate code that has changed since this git revision. So, in this case, if we have two commits and we specify since master, it's only going to select the code that has changed in those two commits. You can also pass in a test selector, which is like this constant name and then a method. And that's saying, you know, maybe in my giant object where I have hundreds of methods, I only changed one thing — only mutate that. And it also understands RSpec conventions. So, if you're describing a class and then describing a method, it's actually only going to select for mutation that small method that you were working with before, and it's only going to select the half-dozen tests that actually specify the behavior of that method. So, mutation testing, I think, has been one of the most powerful sources of growth for me over the last few years. And I think if you're not using mutation testing today, it can probably help you grow a lot, too. It helps you learn more about Ruby. There are dozens and dozens of special-case mutations baked into the tool that only show up when they apply to your current task, and so you sort of learn about them just in time. Some examples from this presentation are how it changed a parse to iso8601, all these different regular expression features, the new match? feature in Ruby 2.4 for regular expressions, and also the proc and lambda change.
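To make that last item concrete, the proc/lambda difference the tool keeps pointing at looks like this on a toy example (not the talk's exact code):

    loose  = proc   { |a, b| [a, b].inspect }
    strict = lambda { |a, b| [a, b].inspect }

    loose.call(1)        # => "[1, nil]"  — missing arguments are filled in with nil
    loose.call(1, 2, 3)  # => "[1, 2]"    — extra arguments are silently dropped
    loose.call([1, 2])   # => "[1, 2]"    — a single array is silently splatted
    strict.call(1)       # raises ArgumentError — a lambda checks arity like a method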
And the generalized changes that it makes, the different removals of, you know, lines of code or arguments that are passing into a method or default arguments, those help you learn more about the code that you actually rely on and maybe surprised by how frequently this actually results in the learning something new. Some examples from this are how we learned about how HTTP Party, it is definitely if the content type is application JSON, as well as all the different behaviors that we see from different Rails methods, things like the controller behavior with the default status code and active records interface. And the net result here is not just that you learn more, I think you also learn a little bit faster, at least that's what I've found. Whenever you do work, whenever you do a refactor or add code, you're going to also be learning a little bit more about Ruby on average and learning a little bit more about the code you interact with. So this is sort of an amplifying effect, I think. And it's obviously going to improve your testing skills. You're going to start thinking more about what are all the different branches that happen here and what actually is the expected behavior of this feature that I'm adding. I think the net result here is you end up modeling your understanding of the code a little bit better and you end up shipping fewer bugs. You're understanding what tests are still not doing anything or what test cases are not being tested. You're removing dead code. You're removing unnecessary code. You're using more simple methods within Ruby. And if you do this mutation testing on code before you modify it and you're not familiar with it, you're probably going to introduce fewer aggressions, too. As I mentioned before, you're going to get sort of a list of hot spots from the application that are likely to allow you to break the code without failing the test. They'll only show up in production. And it gives you the sort of checklist of, like, before I change this, I need to understand is someone supposed to only pass in this date format or are people now using it in different ways? It results in writing a simpler code, similar to the dead code detected in the mention before. You're going to be surprised by, you know, removing a few lines here, simplifying a method called here, using a simpler Ruby method over there. That comes together as dramatically simpler code and it's not much effort for you to arrive at that after writing the initial implementation. So I hope that some of you are excited to use mutation testing on the job now. If your coworkers are not excited about using it as a team, you can still use MuTest before you push. If you do so, you're probably going to learn a little bit more about Ruby, a little bit more about the code you depend on. You're going to write better tests and you're probably going to grow a little bit faster than your coworkers. And if you are a team lead, you should consider adding MuTest to your CI. You don't have to aim for 100% mutation coverage in order to benefit here. Just being able to see what code can I change here without failing the test is a powerful tool for both the author and the code reviewer. For the author, it lets them sort of review themselves and ask, should I change anything else here before I ship this? And for the code reviewer, they don't have to deeply understand the tests and the code involved. Like, as much in order to understand, is it safe to go to production? 
They can at least look at CI and think, is this, like, what modifications can we make here if there are a bunch? Maybe we should add some more tests. If you like what you saw here today and you love writing great code, you should email us jobs at CognitoHQ.com. And I hope that you all are excited about using MuTest testing. Thanks. Thank you.
Mutation testing is a silver bullet for assessing test quality. Mutation testing will help you: Write better tests Produce more robust code that better handles edge cases Reveal what parts of your legacy application are most likely to break before you dive in to make new changes Learn about features in Ruby and your dependencies that you didn’t previously know about This talk assumes a basic knowledge of Ruby and testing. The examples in this talk will almost certainly teach you something new about Ruby!
10.5446/31251 (DOI)
Alright, let's do this. Good afternoon. I am Barbara Hale, I'm on the Rails core team, and we'll be talking about upgrading to Rails 5 for the rest of the talk. It's actually very easy. First, you need to have a Rails app, which I happen to have one, and as you can see, we're currently on Rails 4.2. So what you need to do is go to the Gemfile and change it to say 5.0. And this is the most important step: you want to run bundle update rails. Now, as you can see, we are on Rails 5 successfully — we've upgraded to Rails 5.0. Alright, I was kidding. There's actually a — I don't know if you noticed, but this is actually a sponsored talk, which means I'm required to sell you something, but I'm not sure why you're here. Anyway, I don't feel great about selling you our stuff, so I thought I would sell you other people's talks instead. That actually sounds like a very useful talk — I'm sure it's a lot more complicated than that to upgrade a big Rails app to Rails 5. So, if you have to leave now, that is okay. I might get fired, but that's fine. So, one talk you're missing is called "It's Dangerous to Go Home." It is in Room 164, which is just across from here, I believe. But I think the main takeaway from the talk is: don't use Go. The other talk that you're missing is about React Native and Rails. The subtitle for the talk is "a single code base for web and mobile," so you can read on the screen what it's actually about. Room 160. And finally, the last talk that you're missing is NLP for Rubyists. I had to look it up on Urban Dictionary; I think NLP stands for "nobody likes Python." I'm just kidding — I promise I'll stop making jokes about religion. So, I am actually Godfrey; you can find me on the internet as chancancode. This is my colleague Yehuda; you can find him on the internet as wycats, and his baby also has a Twitter account. We're both Rails core alumni, we also work on the JavaScript framework Ember.js and on Glimmer, and Yehuda is also on the Rust core team. We work for a company called Tilde. We are hiring a senior engineer, so if you're interested, please come talk to us. We actually have a product that I could sell you, but I didn't allocate enough time for it, so I'll just put it out there: Skylight. You can come to our booth and we'll be happy to sell it to you from there. I guess I'll tell you one feature. We worked on this thing this year called Grids, which compares your app's performance to all other Skylight customers. So it basically gives you a sense of where you're at and what your customers' expectations are in 2017. If you want to see that feature, you can come to the booth and we'll show you. So that's it for all the Skylight stuff, and now we're going to talk about something else. So last year I gave a talk called — I don't even remember what it was called, but it might have had the same name as this talk. Yeah, something like that. Last year I talked about a project that we're working on called Helix. Since I imagine not everyone here has been to that talk, here is Yehuda giving you a version of that talk in five minutes. Ten minutes — let's say ten minutes. Okay, let's do ten minutes. We'll see. We spent most of our time working on Helix — the thing that we're showing you — instead of on the talk, so you get to see the slide deck for the first time at the same time as Yehuda. Here we go.
So, sorry, I also have sore throat, so I can't talk, but thankfully microphones work pretty well, so hopefully that will be okay. So as we discussed last year, everyone is here today because they like writing Ruby. Ruby's awesome, but we also know that Ruby is slow, people say Ruby's slow. Most of the time, Ruby, the speed of Ruby doesn't matter since they workload that you're working with are I.O. bound workloads. You spend most of your time waiting for a response from a database or something like that instead of doing heavy number crunching. But even though it usually doesn't matter most of the time you're waiting for a database, once in a while you end up with a CPU heavy workload and the performance actually does matter. In Ruby, you can get the best of both worlds by using C extensions. For example, when you say require JSON in your Ruby file, you get a native gem on systems that support it and a pure Ruby version on systems that don't support it. The C version is very fast and better yet, as a user, you can't tell the difference between the Ruby version and the C version. This suggests that one way to make Rails apps faster is to write fast versions of hot functionality in C and expose the implementation through the C API. And in fact, this is how Ruby itself made the date library and the path name library fast a few years ago, so that suggests let's do that. So a few years ago, Sam Saffron discovered that the blank question mark method was a hotspot for discourse and he wrote a C library called fast blank that sped up the operations by 20 times. This is a pretty huge win. It was only like 50 lines of C code. That's a pretty big deal, so you might ask why don't we do it more? Why don't we write a lot more of Rails in C? In a nutshell, the problem with writing things in C is that it's C. C is annoying programming language to write in, but it's more importantly, it is also unsafe to write C code and it's risky. If you make a small mistake, your library can cause a seg fault and nobody likes seg faults. For the most part, people prefer to write to use slow code as opposed to the risk of crashing their Rails applications. Also, while C extensions are transparent to your users or the user or the person writing the C extension, they are not transparent to the person maintaining the C extension. They significantly add burden on maintainers and contributors because C is a painful programming language. At Skylight, we had the same kind of problem. The first version of our agent was written in pure Ruby and it was okay, but eventually we couldn't add some features that we really wanted to add like the important one was tracking memory allocations without blowing all the budget. You don't want to install a performance monitoring tool and suddenly have your app do you super slow. That doesn't make any sense. Originally, we thought we would solve this problem by writing a C++ extension, but we had exactly the same problems that I discussed before. We had maintenance burden for our engineers. We really want everyone on the team, including junior engineers, to be able to write things across our entire code base and C is not a language like that. Also, if you make any mistakes, suddenly we're crashing our customers app and actually we have many thousands of customers across many environments. We can't afford for people to be reporting seg faults to us on our GitHub or our GitHub or our intercom. 
Eventually, we decided let's try writing a bit of an agent Rust and I did a couple weeks of a spiking experiment and it was so successful the first time I did it that we pulled more and more of the core functionality into Rust over time. What is Rust? Just like C, Rust is a compiled and statically typed language in a very fast code, but unlike C, Rust has an advanced type system and carefully designed features which are both fun, pleasant, enjoyable to use and guarantee runtime safety. One of the slogans is if it compiles, it doesn't crash. It might sound crazy to hear about a programming language that's in the same performance league as C, but also offering you the kind of safety guarantees that you expect from Ruby, but it's really just the same kind of guarantee that languages like Ruby offer. If you write a program in Ruby, you don't have to worry like maybe I make a mistake and my program is seg faults. Rust offers a similar guarantee. The cool thing is that Rust figured out how to do it without a garbage collector using the ownership system, which you should go read about if you want to know more. As a side effect, the ownership system also provides concurrency without data races, so concurrency is built in very nicely. Now, in high level languages like Ruby, there's a tension between writing or using abstractions and the performance of your program. If you decide to use really nice features like symbol to croc or like mapping over an array, you're paying a cost in overhead to get the sweet features. Most of the time, this doesn't matter and Ruby programmers optimize for happiness in the 99% of cases where the extra overhead is worth the ergonomic improvement that you're getting. That's a really good trade-off. I think that's why everyone's here. But sometimes it does matter and in Ruby you end up writing very low level unedited matter code just to get performance in the case of where it starts to matter. Plus, you don't have to worry largely about the cost of abstractions. That's because the compiler can see through all your code and magically make it fast to hand-raise. For example, if you use map.map instead of a hand-crafted loop, the Rust compiler is smart enough to see that you're really doing a loop and optimizing it into a loop. And actually, very often, hand-crafted, so high level abstractions can provide faster code in Rust than the hand-crafted code because you're explaining your intent very clearly to the compiler. In loops, for example, if you use map, the compiler eliminates the bounce checks to make it because it knows, oh, I'm mapping over an array. I don't have to worry about checking all the time whether the thing I'm looking up is in the array because I'm mapping over an array. So if we go back to the original fast-point example, the one that Sam wrote in C, when we ported it to Rust, we ended up with a one-liner. It actually is a pretty nice one-liner. It looks pretty familiar to Ruby programmers. But we ended up with roughly the same performance as the C version, but with a single line of code. And by allowing you to use high-level abstractions instead of without cost, small amounts of code can result in very fast but also very easy-to-write programs. Now, there's an asterisk here, which is that when we first did this, we got the unique code for fast-point because one mind, but we didn't talk about the boilerplate. And last year we said, well, it sucks that you have to write all those boilerplates. 
So we announced last year a library called Helix, which allows you to write the same thing without all the boilerplate. This is what we showed last year. We've been writing Rust code in Ruby for a long time at Skylight. But historically, there was just too much boilerplate to recommend this to regular people. There might be only a single line of Rust code to write fast-point, but there's like 50 lines of boilerplate to set it up. And we made Helix to eliminate the boilerplate and let you jump directly into writing classes and methods without having to write any of the code to wire it up. At a high level, like in the 90s, there was a division between scripting languages and systems languages. Scripting languages handled orchestrating IO bound tasks, and they delegated to serious tools written by serious programmers to do things like sorting, grepping, said off, all that stuff. Those things were delegated to do the heavy lifting. So actually, this kind of idea of scripting languages handling IO bound things worked pretty well for Rails, which is largely an IO bound problem. Most of the time, you're just waiting for the database to give you something back, and there's not that much heavy computation going on. But now, and that division historically was like, you write high-level scripting languages for the ergonomic, pleasant to write, but then the serious programmers write in really baroque, old-school programming languages. But in the new era, in 2017, we have system languages that started to adopt a lot of the things that are nice, that are ergonomic about scripting languages. And our goal with Helix is to allow you to write the Ruby code that you love without fearing that eventually it'll hit some CPU-bound wall that forces you to rewrite everything as a fleet of Go microservices. And so the idea is you can start with Ruby, and you can move your CPU-bound code to Helix if it's appropriate. So that was last year, that was my talk from last year, and I don't know if you want to watch the whole thing in not five minutes. You can do that at home, but now we're on to new stuff. So last year, we had a really good proof of concept. However, it was still too hard to use. Basically, we're also, like, in fact, we did generate the boiler code for you, but then there's more boiler code, a boiler plate around how you set up the structure of the project and stuff like that, that it's hard to figure out. We're also missing some very basic features, like we don't support ticking in Booleans or a temple. We don't have class methods. You can only have exactly one class in the macro, no borrowings, stuff like that. And then we don't really have exception support, so the type errors were just printed to your console. Basically, nothing other than demo support. So this year, we decided to focus on stopping all those problems. And the last year, we'll work on it. We worked on, we decided to focus on making it plausible for very restricted use cases, and do that really, really well. So we decided to focus on the use case of dropping in some Rust code into your Rails app. Maybe you have some background job or whatever that's taking a while and would like to speed up that code. So that's the use case that we decided to focus on. And obviously, we worked on the missing features, basically everything worked out. 
And the reason we decided to focus on the Rails scenario is because you control the end-to-end environment, so you can just, like, it is not a big deal to have to install the Rust compiler on your build servers or your production servers, so you don't have to worry about pre-compiling it and stuff. So it actually works, like the code we have actually works also on Rails as well, but we just decided to prioritize making the Rails experience nicer. So here is the demo. It is going to be an end-to-end example, so I guess I will show you what we're building. So what we're building here is a very simple Rails app that has a text field that you can type some text in there, and then you can click the button to flip it upside down. So the trick is we will implement the core functionality of flipping the text in Rust inside the Rails app. So Rails app is going to do all the request handling, all the buttons and forms and stuff like that, but then we're going to delegate to Rust for a very heavy operation of flipping text. And by the end of this, we're actually going to deploy this to Geroku, so let's do it. So we are building this from scratch, so let's delete this and start over what could possibly go wrong. And so let's start by generating a Rails app. We're using the latest release candidate, Mac10RCT. We're not going to need active record here, so we're just going to skip that to make deploying to Geroku a little bit easier. So yeah, so that's the Rails app. Now let's go into the Rails app and then we'll just make sure everything is working fine. So this is the Rails server and as you can see, it is actually running. So the next thing we'll do is we'll add the Helix Rails gem to the gem file. So that's that. Helix Rails, we are currently at version 050 as of this morning. So when you install, you're basically going to fetch your gems and that's it. It's pretty familiar so far. Then Helix Rails actually shows the generator, so we're going to use that to generate what we call crates, which I'll explain in a moment. So that's up. And the Helix... So this generated Helix create in create slash text run form. So this is simultaneously a gem. You can see that there's a gem spec and also it is a Rust create, which is basically a Rust equivalent of a gem. So there's a cargo.toml. So the reason we did this is to encourage you to structure your Rust code as a self-contained library. Just like your other extensions, right? Like a JSON gem is a library that do a limited amount of things. So this is how we recommend you to set it up for now. And the next step is... Oh, you can see there's a lib directory for your Ruby code because there's a gem. You can put whatever Ruby code in there. There's also a source directory, which is the Rust convention for Rust code. And you can see that the generator generated a text transform class with a single class method that prints some stuff to the console. So let's try that out. We can do that by running rickrb in the create text transform directory. So when you run rickrb, it will automatically compile the Rust code for you and then put you into rb with access to your Rust code. So you can type text transform.Hello. And as you can see, it is indeed printing stuff to your console. It is probably worth emphasizing that it might be not very obvious, but we're actually building a native extension. We're calling Rust code. Everything here is implemented in Rust. So that's pretty cool. 
Now that we have the whole borrower play down, let's actually implement the text transform library. We are going to do a little bit of... Let's just try to SM test with our spectos. So let's go through it very quickly. And run on install. So now we have our spec. And we can... I'll just cheat by pasting in the text I wrote earlier in the spec directory. So we have a text transform spec. And it is pretty simple. Basically, we expect the text transform class to define a flip method that takes a string and it flips it. So that's what we're implementing today. So just to make sure we're there in correctly, we'll run the test. And as you can see, it's failing because we didn't define text transform.flip. So that's good. We'll do that. So we'll need to define a method called flip. So in lip.rs, basically this is like a Rust macro, which is like a little DSL for defining Ruby classes in Rust. So we'll type def flip. So that's like the usual syntax we're used to for defining method. And the way Rust distinguishes between class and instance method is whether it takes a parameter called self. In this case, we're making a class method, so there's no self. We just take a text. We can take a string and then we'll tell Rust that we are returning a string here. It's fairly worth pointing out that these are actually Rust types. So all you need to do is you say, oh, this takes a Rust string and returns a Rust string and we'll figure out how to convert the Ruby string into Rust and then like convert your return Rust string into a Ruby string. And if the user is passing a different type, like if you're passing a number, for example, it would automatically raise a Ruby type error for you. So basically all the things that you're used to. And so I'm going to paste in the implementation here. It looks a little bit long, but it's basically just a large table. What you're really doing is you're taking a text, sorry, the string, which we call text. You're looping through each character, you're calling dot-ref which reverses the characters, and then you're mapping each character in the table. And then at the end, you join them back into your string. So pretty familiar syntax, pretty high level. You might think that using all these high level features would make things very slow, but again, the compiler is basically magic. So if you do the charge dot ref dot map, it doesn't actually make an array and then reverse the array and then map it and just like figure out this is what you're going to do. So we're just going to do the smart thing for you out of the box. And it's probably going to be fascinating whatever smart things that you might try to do. And like it knows how to allocate the right amount of size, string for the output and how to look one byte at a time and all that stuff. So what we did is we ran our spec again, unfortunately, the test is still failing. It's still saying we didn't implement text transform dot flip, which we clearly did. The problem is Rust is a compiled language, so we actually need to recompile code after making changes. That being said, the Rust compiler is fairly fast, so you're not going to spend a lot of time waiting for things to compile. So to fix this problem, we can run Rick build. And if we run our spec again afterwards, then it is going to work. Obviously, it's a little bit annoying to have to remember to run Rick build all the time, so we're just going to make a Rick test for it, which you're probably going to do anyway. So this is basically a standard RSpec setup. 
You have an RSpec task. The key here is to make Rick build a dependency of Rick spec. So just to show that it works, I'm going to go add a new test for it. It can flip table. And basically, if you give it a table, it will flip the table. If you give the table, it will flip it back. So now if we go run the test again, it is not working as expected. So now we can implement this again. We'll go back to lib.rs and we'll paste in two special cases at the top. And now we'll go back to the console and run Rick spec. Remember, we didn't actually run Rick build, but then it automatically noticed that you changed some files, so recompiled and then run the spec for you. Now it's passing. So that's now we have fully working, fully tested, text transform library. Let's actually use it in Rails. As you can see, the generator already edited to app's gem file, so we don't have to do anything special here. And so we'll start by adding a route, a resource called flips. So we'll go to rouserb and then we'll do a resource of flips and then we'll map it to the root path. And we only need the index and the create action in this case. So do that. The next step is to add a controller. So we'll go to app controllers and then we'll make flips controller.rb. And then we'll paste some code that basically in the index action, we default the string to either URL param or we default it to hello world. And then in the create action, we call the text transform.flip method that we implemented in Rust. So finally, we will make a template for this. And so you go to views and then make a folder called flips and we'll make an index.html.rb for it. It's just going to be a very, very simple form that has a single text view and a button. And the Rails defaults for all of these helpers worked out to be exactly what we wanted. So that's very nice. So now with everything in place, we can test it out in the browser. So we're going back to the app and run Rails S. And so now if we refresh, we have a flipper and you can see that it can flip X. You can flip, oh my God, and you can also flip tables. So that is it. So as you can see with pretty minimal effort, we were able to create a Ruby native extension written in Rust using Helix, not have to worry about SecFox and it's like the code is still pretty high level, pretty easy to work with. And we even have a test for it. So finally, let's deploy our app to Heroku as promised as it turns out I actually have something to sell you. So you need a Heroku account in the Heroku CLI, but I already have those set up on my computer. So we are just going to create a Heroku app and we'll call it Helix flipper. So because this is Rust and Ruby app, we have these set up with build packs manually. So first we'll add the Rust build pack, many things to Terence from Heroku for making this work. And we'll then add the usual Ruby build pack for the Rails part of things. And that's it. So now as it is recommending, we should from Git push Heroku master to do it. So Git push Heroku master. So now you can see the Rust build pack is downloading the Rust compiler for you automatically. And we're now into the Ruby part. So this is just running, done on install, downloading all the Ruby dependencies. And then now we're compiling the Rust code. So this is downloading the Rust dependencies. And yeah, looks like that's it. Launching the Dino, but now that's done, we can go to the browser to see it in Helix-flipper.herokuapp.com. You can try it on your phone if you want to. So as you can see, it works. 
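For reference, the Rails side of the demo boils down to roughly the following sketch. The routes and controller follow the description; how the create action responds isn't shown in the talk, so redirecting back with the flipped text is an assumption.

    # config/routes.rb
    Rails.application.routes.draw do
      resources :flips, only: [:index, :create]
      root 'flips#index'
    end

    # app/controllers/flips_controller.rb
    class FlipsController < ApplicationController
      def index
        @text = params.fetch(:text, 'hello world')
      end

      def create
        # TextTransform.flip is the class method implemented in Rust via Helix
        redirect_to root_path(text: TextTransform.flip(params[:text]))
      end
    end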
And now we have a Rails app running Rust code in production on the Floppy Internet. Yeah, so back to you. So I want to talk about, like, why and how you might want to use Helix. First of all, you should take a breath to realize that that was pretty cool. So I want to talk about, like, what use cases Helix is good for. So first of all, in general, Helix is good for problems that use heavy computation and simple inputs. The boundary cost of crossing into Helix is still a little higher than we would like, but for problems that do a non-trivial amount of work in Rust, the cost of the boundary crossing pays off pretty quickly. Also, things like data tables, file names, JSON objects are all, like, simple inputs. So you can get pretty far with the types that we already support in Helix. Think about it this way: the Helix boundary is cheaper and supports more types than a background job. So if you could have moved the work into a background job, you can make it work in Helix. As an example, we built a demo that counts the number of words in a text file. When we measure all the words in all the works of Shakespeare, Ruby takes about three seconds to do it. Rust does it in 30 milliseconds. And this example takes advantage of Rayon, which is a Rust library that lets you parallelize loops. So here's the example, basically using what Godfrey showed you before. You can see the inputs are both strings. The return type is a 32-bit integer. One of the strings is a file name. And just a note that .expect is a Rust method that raises a panic, and that gets converted into an exception in Ruby. So what you can see here is we open a file from the file name, then we convert it into an iterator and then map over it and count how many words there are. If you use the Rayon library in Rust, which is just a library that you can get off the shelf, you can change into_iter to into_par_iter. Now it's parallelized. It works across however many cores you have on your computer. So that's one of the ways that we get to 30 milliseconds, is that we were able to use four cores or however many we used in that benchmark. Another really good reason to use Helix in general is if you want to use existing Rust libraries from your Ruby application. One of the main reasons why this ends up being good and important is because Servo is a web browser engine written in Rust. Firefox actually shares a lot of code with Servo. So there are a lot of production quality libraries that already deal with the concept of web content. Turns out we're working on a web framework, so things that deal with web content are very helpful. So as an example, I built a demo that inlines CSS into HTML, like if you're building an email thing and you want to inline, you have a CSS file, you have an HTML file, you want to inline it together. And we were able to use Servo's CSS parser and Servo's HTML parser, and only needed a bit of code to glue it all together, which is actually pretty cool. Basically, here's an example. So basically, we're using the CSS parser from Servo, then we're looping over all the rules that we got from the CSS parser, and then we're looping over all the elements and inlining things into the style attribute if we want to. And so it's kind of like writing C bindings, like if you're like, oh, I know there's a libxml library and I want to use it, so now I have to write a C binding. It's kind of like that, except it's way easier to do. You can also use Helix in a request, in a mailer, in a background job, in Action Cable, or in any part of Rails.
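For a sense of what the Ruby side of that comparison looks like, a naive pure-Ruby word counter (the roughly three-second baseline) can be sketched like this; the file name is just an example.

```ruby
# Count the words in a text file, one line at a time.
def count_words(path)
  total = 0
  File.foreach(path) do |line|
    total += line.split.size
  end
  total
end

puts count_words("shakespeare.txt")
```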
Since mailers and background jobs tend to be more CPU intensive, don't discount those use cases. I think those use cases are a really good fit for Helix. It also means, in general, you might have moved some CPU intensive thing out of the request into a background job because it was just too expensive. You might be able to move some of those back if it's good for you to put into the request flow. Okay. I realize in the examples we showed, we didn't actually show you all the features that we worked on over the last year. We have a website that actually shows you the other things. Both of the demos that I talked about are on the Helix website, and you can play with them in a Rails app. Some of the features that we worked on: you can have multiple classes now. You can have public-private classes. You can have instance methods, which somehow none of the examples use. You can also have a struct in Helix for storing instance data. It's like instance variables, except you tell the compiler exactly what you will have, so it can optimize the access a lot better. If you're familiar with the Ruby C API, that's basically the Data_Wrap_Struct API that we wrapped in the Rust macro, which is really cool. Unfortunately, we don't have time to show you an example. Anyway, I guess we'll close this off by telling you what doesn't work. So a lot of things work already. As you saw in the demos, there are a lot of use cases where you can start playing around with it, if you happen to have the right problems in your Rails app. However, Helix is still a pretty young project with a lot of work still to be done. So with this, basically, we're trying to give you a better sense of where we're at, and maybe there are some opportunities for you to contribute back, if you happen to have one of those use cases that we don't support yet. So we broke down the outstanding work into different use cases. The first use case is a Greenfield project, which I consider done. So basically, you're developing a brand new feature or brand new app, or rewriting a feature inside your app, as opposed to reimplementing something in a library. The difference is, since it is in your app, you have full control over the API, so you can make adjustments in your API to work around the current limitations in Helix. So you might use Helix for a CPU-bound algorithm here in a request, mailer, or background job, as you pointed out. And basically, problems with a potential for parallelism are also a good fit, as you mentioned. So we currently only support, for the types, I think we only support strings, numbers, booleans, the basic types like that. That sounds a little bit limiting, but then if you think about it, HTTP requests and background jobs actually share the same constraint, where you cannot put a Ruby object through an HTTP request; you have to marshal it and then send the bytes across. You can do that in Helix too, right? So because you already are used to working with the string constraints in HTTP requests, you can actually do a fair amount, even with just strings, numbers, and booleans. So that's the Greenfield project use case. The next use case, which we're currently working on, which builds on top of the Greenfield project feature list, is you're rewriting some code in a public library from Ruby to Rust. So basically, it's like how the JSON gem has a pure-Ruby version, and then you need to have an API-compatible version in a native extension.
It needs to be a high-fidelity match, because you cannot change the API. So that's a little bit tricky because the benchmark we're using for this... I don't mean it in a performance sense, but the example that we're using here to help ourselves understand how far along we are is ActiveSupport::Duration. So it is a fairly simple class, but because there are a lot of dynamic features in Ruby, you end up having... There will be one method that happens to take an optional argument, and we happen to not support optional arguments yet. So there are a lot of edge cases like that that we're still working on. So this is maybe not quite there yet, but it's probably the next thing to check off the list. If this is what you're trying to do today, though, you can still accomplish a lot by mixing and matching Ruby and Rust. You don't actually have to do literally everything in Rust. As I showed you, there's a lib directory in the crate that you can put Ruby code in. So what you can do is, basically, define a class in Rust, have your heavy lifting in there, and then you can define some sugar on top in the Ruby... You can reopen the class in Ruby, and then you can take optional arguments or whatever, and then you can normalize that and call back into Rust. That is not ideal, but it works. And the long-term goal is indeed to make all of those work in Helix. We're just not quite there in the DSL yet. So some things that you will probably notice are missing: we don't support modules yet. We don't support optional arguments, rest arguments, keyword arguments. Sometimes you want to take a generic numeric. You don't want to care whether it's a float, an integer, or a bignum, or complex, or rational. So that's numeric, and sometimes we overload in Ruby. The same parameter could be one of many types. We have plans to make all of those work. It's just still coming along, but as I mentioned, you can always normalize those differences in Ruby and call back into Rust. So that's that, and then reopening is a little bit strange, but a lot of ActiveSupport stuff reopens core classes. So it kind of works now, but it's not amazing yet, and we have some work to do there. And this is also very easy to do with a wrapping strategy, like a mix-and-match strategy. So you can do a reopen in Ruby. The key is, the parts that make sense moving to Rust are the algorithmic, heavy-lifting parts, right? So you can still do a lot of that in Rust, and you can use that code from Ruby like we did in the Rails example. I guess I'll just go through the rest very quickly because we're running out of time. So shipping to production, as you can see, we actually kind of made that work with help from Terence, right? Like, it actually works on Heroku and stuff. The thing that's missing is, like, it works as long as you have the Rust compiler on the server, but we don't have documentation for how to do it. So if you're interested in figuring that out, you might want to contribute some documentation there. And let's see. Binary distribution is interesting. So if you are a library author and you want to use Helix, you probably want to make sure that people can install your gem without having the Rust compiler on their computer. So this is what we mean by binary distribution. There are some other gems in the Ruby ecosystem that do this, like the gem that wraps libv8, and Skylight, we do it ourselves. Basically, we have to make this work in a way that is automated because we need it ourselves.
We already have ways to make it work, but we need to extract that into an open source version, right? So you basically pre-compile the binary for all the major platforms. There are like a handful of them. And then when you gem install, you can just download that binary and it works. So we still need to work on tooling to make that possible. We know it is possible because, as you said, we actually do it in Skylight. And then there are some non-traditional use cases like mobile, WebAssembly, and Ruby. And performance parity with C: as you mentioned, we are a little bit slower than C right now. So we need to have some good benchmarks to figure out where the overheads are. And the long-term goal is we want to be on par with the equivalent C native extension that you would write. But when we say a little bit slower than C, it's like: Ruby is like here, C is here, Rust is like here. So if you're moving code from Ruby into Rust, it doesn't really matter that much. And it's almost entirely the boundary cost that's more expensive. Rust itself is very competitive with C. So if you were treating it like a background job, then you don't have to worry about it at all. It's only if you have a chatty API that you'll want us to fix this problem. Right. So then finally, there are some miscellaneous features and quality of life improvements that we want to make. Like we want to support more types. It's actually, we have the protocol down, so it's actually quite easy to add more types. We just haven't gotten around to it. And you just have to decide, like, what does it mean to convert a hash in Ruby to a hash map in Rust? Like what is that? And also we're quite reliant on the Rust macro system for the DSL. So when you have a syntax error in the macro, the errors are quite brutal. We are working on that. And if that's the kind of thing that you're interested in, we should talk more and maybe you can help there too. So that's basically it. I guess I kind of ran out of time. So here is the website for everything else that I didn't have time to cover: usehelix.com, you can go look at it. And we are, I think we have this set up at the Skylight booth. So if you want to come play with it or chat with us, we are there today and tomorrow. The website, usehelix.com, is a Rails app. All the demos are written in Helix in the Rails app. So if you go look at the repo for usehelix.com, the source code for all the demos that we showed today is in there. And it's the same code that's running on the website. And a lot of you are probably looking for opportunities to contribute. So we deliberately sprinkled typos in the website. The Roadmap page is a more detailed description of what Godfrey talked about. And we're going to be adding GitHub issues for everything soon. So we ran out of time, but this is the best slide. Thank you very much.
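As a rough illustration of the mix-and-match strategy Godfrey describes, you can keep the heavy lifting in the Rust-defined class and reopen it in plain Ruby to normalize optional or flexible arguments; the helper method here is hypothetical, not part of Helix.

```ruby
# lib/text_transform.rb (the Ruby side of the crate)
# TextTransform.flip is implemented in Rust via Helix and only takes a String.
class TextTransform
  # Ruby sugar on top: accept anything stringish plus an optional repeat count,
  # normalize it, and call back into the Rust implementation.
  def self.flip_fancy(input, times: 1)
    result = input.to_s
    times.times { result = flip(result) }
    result
  end
end
```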
This is a sponsored talk by Skylight. We got a good productivity boost by writing the original Skylight agent in Ruby, but over time, we couldn't implement all the desired features with its overhead. Ruby is fast enough... until it isn't. Introducing Helix — an open-source toolkit for writing native Ruby extensions in Rust, extracted from our Featherweight Agent's DNA. Fast, reliable, productive — pick three. Come join us to find out how you can leverage this power in your Ruby apps and even help make Rails faster! (This is not a re-run of Godfrey's talk from last year.)
10.5446/31252 (DOI)
Hi. I'm Jonan. I should have warned our friend in the back that I speak especially loud all of the time. How about I actually put the slides up? Or do you want to view it like this? Is that, should we play? Let's just play. It's easier that way. I'm here to talk to you about inventing some friends. I'm a friend. Hi friend. Should I go through my whole intro and then you want to go? You want to just tell people who you are? We don't want to let this mystery settle for a moment. His name is Julian. He's from Australia. G'day. I'm Jonan and I'm from Portland which is much like Australia except not. It rains more. I'm the Jonan show on the internet. I work at a company called Heroku. You may have heard of us. We invented the color purple. It's fantastic. If you're wearing purple today, you're welcome. If you have any questions about Heroku or our color, hit me up. A long time ago I used to make websites to sell Diablo II in-game items until they made a patch update and made my whole company illegal. And then I had to find a new job. And then I made this. This is my daughter. She doesn't suck on her toes anymore but it's by far the cutest picture I have of friends. My son hogging all the ice cream at a Costco. He's adorable. I used to sell cars then after I stopped selling Diablo II game items. And then I was a poker dealer. And that is a brief history of Jonan. And the reason I am telling you all of these things is simply so you have something to ask me about in the hallway because I'm leaving a lot of questions unanswered. So come talk to me. Hi. So I'm Julian. I'm a big fan of Tweed. I live in the World Heritage Site of Bath in the UK. So just in case you're confused, I don't actually live in London. Just want to make sure that you know that I don't live in London. I do actually live in Bath. And as you can see, it's called Bath because we basically have lots and lots of Bath. But if you do ever come and visit, the good news is I've heard from friends that stayed in some of the local hotels that we do now have showers. So if you aren't a fan of bathing, you can now shower. So I work at a small open source company called Red Hat. We also sell hats, which is pretty neat. I work on a project called Manage IQ, which manages clouds. But here in the desert, there aren't any clouds to manage, so I'm not sure really what I'm going to do. And I'd like to introduce Amazon Alexa, which would help if I got my laptop out to Jonan quick. Just track them. Did I just break the resolution of the display when I did that? I went to present review so we could see the next slide. This is going to be on the video for all you watching at home. The play by play. Are we ready? Okay, go ahead. Ask Rosie to give her talk introduction. Come on, Internet. Come on, Internet. You can do it. I think I can. I think I can. Oh, my. If you could all stop you. The requested skill took too long to respond. Okay, let's try again. Seriously, everyone turn off your phones and everything. Delete. You can go to the next hall and ask Rosie to give her talk introduction. Sub, I'm Alexa. This is my first Rails conference. How are you enjoying it? It's so hot here in Phoenix. You'd think I'd be used to that being from the Amazon. I love DHH. He writes the best codes and runs so well about start ups. I also really like tender love, but I can beat his puns any day. Come at me, bro. Just joking tender love. Here is some advice. Don't give up on your dreams. Keep sleeping. Alexa. Not yet. You're shush. 
We can't say that one word that rhymes with Schmechsa or this one wakes up automatically. It's terrible. Should we tell them about? Yes. Yeah, so the thing is we came here under a bit of a guise. We told you we were coming to talk to you about one thing. What we're going to talk to you about is actually our start up. Surprise. Julian and I are starting a company. We'd like you all to be a part of it. Today is our product announcement. This is your first opportunity to get it on the ground floor for Instra Bookspace. It is by far the most innovative application that has ever existed. I would give you some more details than that, but just know that it will improve every facet of your life. Anything you can think of, your car, your phone, your dog, it will make all of those things better. This is literally the future of everything, this application that we've built. And I'm excited for you to try it today. But before we get into the details on that, Julian's going to talk to us a little bit about this other thing, our third friend here, Alexa. Actually, you're going to talk afterwards. I am. So yeah, so obviously as we're in the future now, as William Gibson said, everyone is in the future, something like that. We decided to use the Amazon Alexa's because we're forward thinking. And so we have a variety of them here. We have the Amazon Tap, which is the small pink one. We have the big old just normal Amazon Alexa. I can't even lift that one. That's too heavy. It's so small, I've lost it. And the tiny little that you can even just carry around in your hand, Amazon Dot. So those are the Amazons. And there's the Rainforest Edition. If you go shopping later, the important feature of these two is that they have audio out. So you can connect them up to all of the speakers in your home or your stereo. This one does not, but you can pair it as a Bluetooth speaker to play music back through it and all of them. Over to me. To the Alexa Skills Kit. This is the magic that we are doing. When we talk to Alexa and we tell Alexa to do a thing. So what Julian just did with his Alexa is he gave it a text prompt, and that was transformed into a bit of text that we send up to the Amazon Alexa service. The Alexa Skills Kit is a way for you to make Alexa say whatever you want it to say. And I'm going to walk you through how to create one of these skills in the Alexa platform. I wish that we were going to do this in a terminal. We're going to do it in a GUI. Because this is how you set up the Alexa Skills Kit. There is not actually another option for this. But all of the things that I'm going to show you, I'll show you an easier way to do. So this is the page when you log in here. This is your skills information page. You have to go to this page to create a skill if you want to add something to Alexa. There's nothing on this page that is really important except for your application ID. This is something you're going to have to ship up with your information from your application because you're going to build a back end service to send data to the Alexa Skills Kit. Or to take a question from Alexa Skills and give it back the answer. So you've got this application ID. In my example, I wrote an Alexa skill called HAL. And the invocation name for HAL is HAL. So if I talk to my Alexa and I tell HAL, I tell Alexa to tell HAL to do a thing, then that will be sent to this particular skill. And you can have as many of these as you'd like. So this skill here has an intent schema. 
And the intents are what people will do with your thing. So my skill, HAL, deploys things, deploys applications on Heroku specifically. So you can tell HAL to deploy Steve to production. And that will work. In fact, maybe now is a good time to try it. Well, I'll show you in a moment, actually. But this intent schema, I want to walk through real quick. This is in a JSON format, okay? But it's not just a JSON file that you can submit or post to an API to create it. You have to go and log into the UI and paste in your JSON. And there are Ruby gems that generate this JSON structure. If you know basically what you're trying to create, you can use those gems to generate it. Or you can just look at some of these examples online and fill in the intents. They have these custom slot types. These are types that are referenced in the schema. These are the things people will say. You have slot one, slot two: deploy Steve to production, right? And that allows Alexa to extract those two pieces from my intent and just send me those. So all my application has to deal with is Steve and production. And I think we can all figure out where to go from there, right? But the piece of going from tell HAL to deploy Steve to production down to just Steve and production is harder, right? And that's where Alexa comes in. So then you set up a couple of custom types. These are the types for mine. I have an application list and an environment list. I have the names of my applications, Steve, HAL, and heaven, and my environments in there. And then you give it some sample utterances to train it up. So I say deploy. This deploy thing in the beginning is a little bit deceptive because that's actually the name of the intent. So I'm telling Alexa to, like, basically that first part is going to be Alexa, okay? Just pretend that it's Alexa for the purposes of this. Tell HAL to deploy Steve to production. And I give it different ways to say it, right? You can also say to ship Steve to production. It happens to be that deploy is the easiest one for it to recognize. So do that. And then you set up your endpoint. And this is an important step. You set up the endpoint that you want it to be able to talk to. And when Alexa hears from you this particular phrase, it will post to that endpoint with some data. And then this is where the HAL production application lives. And Julian is going to walk you through one of these generators and how to create these things. So being an Amazon web service, you know, it likes you to create artisanal JSON. But as we're all programmers, I, after seeing Jonan's talk last year on Alexa, I got very excited and I found that someone's actually written an Alexa generator that will generate these utterances for you and that JSON. So here's just a quick example. This is actually, on the Amazon page, there's an example of creating a horoscope application. So this is just creating that. So as you can see, we've got the different horoscopes, star signs, and then some different days. So you would sort of say, you know, Alexa, tell me my horoscope for tomorrow, I'm an Aquarius, or something like that. And that will then generate all of the data for you. Unfortunately, this gem currently only has a few built-in data types. As you can see, there's like AlexaGenerator slot type literal. That is just a literal string. That could just be anything in the world. And the date type is a type of date. So using these slot types, it helps Alexa kind of recognize the string that you are passing to it.
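To give a feel for the shape of that artisanal JSON, here is roughly what the intent schema, custom slot values, and sample utterances for the HAL example might look like, written out as Ruby; the intent and slot names are assumptions based on the description above.

```ruby
require "json"

# Approximate interaction model for the HAL skill.
intent_schema = {
  intents: [
    { intent: "Deploy",
      slots: [
        { name: "Application", type: "LIST_OF_APPLICATIONS" },
        { name: "Environment", type: "LIST_OF_ENVIRONMENTS" }
      ] }
  ]
}

# Custom slot types: the values Alexa should expect in each slot.
applications = %w[steve hal heaven]
environments = %w[production staging]

# Sample utterances: each line starts with the intent name.
utterances = [
  "Deploy deploy {Application} to {Environment}",
  "Deploy ship {Application} to {Environment}"
]

puts JSON.pretty_generate(intent_schema)
puts applications.inspect, environments.inspect, utterances
```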
So as I was saying, this gem currently doesn't support many of them, but I do have a pull request in the works to send, because Amazon has a lot of custom types. It has all of these ones and all of these ones and these ones. So those are just a few, and they cover everything from like TV series to video games to weather conditions. So you can kind of create all sorts of things. So back to our code. So once you've run this code, that will then create you this JSON. So this is actually the JSON that I'm using today for my demo. So we've got things like dates and times and talk types and lists of speakers and locations. And then here are some of the sample utterances that we're going to show a demo of afterwards, of the things that you can say to my Alexa app. The Alexa Voice Service. Has anyone used the Alexa Voice Service? I'm just curious. By a show of hands. Anyone tried to interact with this API? We have a couple. Yes. So the issue here in particular for me is that it's an HTTP/2 API. And I am not smart enough to use HTTP/2. I actually am. But when I started, I was wrong and I thought I was smarter than I was. I think given six or nine years, I could probably get a handle on how to handle this interaction. I am not going to demonstrate for you today how an HTTP/2 API works. I want to explain to you, though, first why I want to use the Alexa Voice Service and what it is. So with an Alexa skill, I say a thing to Alexa and Alexa says a thing back. And that thing back, I get to customize. But if I am going to have the Alexa be another device, for example, my computer or a Raspberry Pi with a microphone and a speaker on it, or anything that is not a physical Alexa device, I'm going to be using the Alexa Voice Service to accomplish those interactions. So I want to give you a little demo of my HAL skill real quick. Alexa, tell HAL to deploy Steve to production. HAL has deployed Steve to production. Hypothetically. I'm not actually able to give you information about whether or not the deploy was successful because I can't deliver messages to you unprompted. That would actually be a terrible feature. Imagine if LinkedIn could send voice messages to your living room. You'd never stop hearing about how you're getting noticed. Which is fundamentally the problem for me, in that that's exactly what I want. I want to be able to have my Alexa talk without any prompting. And there is no such thing as a push notification for the Alexa. But there is a bit of a fancy workaround. And we have found a way to do that. But it is the Alexa Voice Service itself that I want to demo for you today. So let's talk about that. I went over here to the Alexa Voice Service thing. And I'm reading through the HTTP/2 API and I was like, oh, there's a gem for it. Ilya wrote it. He is so smart. And I am so not smart. And I would love to be able to just whip this up. Every time I find a good gem, I'm excited. I'm sure it's very easy to use. I'm sure lots of you have used it. And I would love it if you would come up to me after this and teach me how to do that thing. That'd be great. Thanks so much. But what I used instead is this thing that I found. There's a script, right? This is like a bash script I found online in a forum post after a long time of digging. Kind of when I had almost given up hope on being able to accomplish this piece of this talk. And something stuck out to me and it was right here at the bottom. And it was a very important part at the bottom. It says V1 a lot. It says V1 right on it. The HTTP/2 API is the V2 version of this API.
It's the version 2 API for the Alexa Voice Service. And guess what the first version does not use: HTTP/2. So that was a lot easier for me to figure out how to do that thing, because I can post for days. I am really good at posting. Good is a relative term. Please don't read my code. Next, this is the actual speech recognizer that we're talking about here. There's a menacing looking warning on the top. This is an API you'll find on like the sixth or seventh page of Google. If you Google version 1 Amazon Alexa Voice Service API documentation, that's about as specific as I could get. It's buried. But if you want a link, hit me up. You'll also find bash scripts floating around. So I basically took this and I ported this API into like a Ruby thing. And so I can post this thing. Things like the OmniAuth strategy for Amazon don't work here because when you're interacting with the API, you have to send scopes that it's not prepared to send, or other information that it's not prepared for. Most notably, also, metadata in the transfer, like a scope data tag and things. If you're particularly good at writing OmniAuth strategies, maybe we could sit down and pair on one later. I would love to write one for this, or make a pull request, I guess. So I have this other thing here I want to talk about, which is Polly, right? Imagine that you say a thing, right? You say some text to me. I'm going to send that text up to Polly, right? And that text, I'm going to get it there by sending it this way. Polly, real quick, is an Amazon API, if you're unfamiliar with it, that they announced I think last year, that does text to speech. Have people heard of Polly? I guess I kind of assumed some knowledge there; that was unfair. Polly is an API that does text to speech translation using deep learning and it's brilliant. It's very, very good at it. So when you say like I live in New York or Saturday Night Live, live and live are spelled the same way and there is no reason why a computer should be able to tell the difference, but Polly can, and it can speak in I think 16 different languages, which is awesome. So you should play with that API. It's very well documented and easy to use. So there's Polly. We're going to send some stuff to Polly, and how we're going to do that is we're going to put the text into Action Cable, okay? This is where the Action Cable piece comes in. So what FriendsterBookSpace, aside from changing the entire world, allows you to do is to type some text into a box and to send that text through Action Cable into the back end, and that is then going to be posted over to Polly, to the Polly service, and that Polly service is going to give us back an MP3 response of what we just said, right? So I can tell Alexa to say anything I want, right? And then I will take that text and I will send it up to Polly. Polly will turn it into speech. Okay, now I've got speech and I take it and I put it in S3. We're going to need it, and I don't immediately put it into S3 because it turns out that the Alexa Voice Service, which is next, wants a WAV file with some very specific parameters. So what you do instead is you use FFmpeg in the middle, and you install a custom buildpack to accomplish that, and then it breaks, and then you install a different custom buildpack to accomplish that, and that also breaks.
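Pieced together, the Polly-to-S3 leg of that pipeline might look something like the sketch below; the bucket name, file paths, voice, and the 16 kHz mono WAV settings are illustrative assumptions, not the speakers' exact parameters.

```ruby
require "aws-sdk-polly"
require "aws-sdk-s3"

polly = Aws::Polly::Client.new(region: "us-east-1")
s3    = Aws::S3::Client.new(region: "us-east-1")

# 1. Ask Polly to turn the chat text into an MP3.
resp = polly.synthesize_speech(
  text: "Deploy Steve to production",
  voice_id: "Joanna",
  output_format: "mp3"
)
File.open("speech.mp3", "wb") { |f| f.write(resp.audio_stream.read) }

# 2. Shell out to ffmpeg to convert it into the kind of WAV the v1
#    Alexa Voice Service expects (assumed here to be 16 kHz mono PCM).
system("ffmpeg", "-y", "-i", "speech.mp3", "-ar", "16000", "-ac", "1", "speech.wav")

# 3. Stash the WAV in S3 so it can be read back and posted to the voice service.
s3.put_object(
  bucket: "friendsterbookspace-audio",   # hypothetical bucket name
  key: "speech.wav",
  body: File.open("speech.wav", "rb")
)
```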
But eventually it works, and what I'm doing here is I'm taking that MP3 file and I am passing it to FFmpeg, just shelling out with backticks, and translating it into a WAV file with some specific bit rates, and then I send it up to S3, right? And then I use S3 to read the file back and post it to the Alexa Voice Service, right? And the Alexa Voice Service sends me back an MP3, which is the response from Alexa to the arbitrary text that I just sent it, right? I can send arbitrary text to Alexa. So the thing that I was explaining a moment ago that you can use this to accomplish is that what you could do, theoretically, is create a skill that does nothing. Just say trigger my skill, trigger Jonan's rad skill, right? And then between the time that Alexa talks to the endpoint it's going to talk to and the time that you tell it to talk to it, you get some text over there magically, right? You could be connected to your production instance, whatever it is. You can modify the response text back to Alexa, and so you can have triggers on a server have Alexa say things unprompted, which is kind of magic. And it's not actually what I am demonstrating here, because it seems terribly dangerous to just let a crowd loose on that dream. But I've got an equally dangerous dream for you, I promise. This is what happens at the end. We take this MP3 and we actually put a chunk of stuff back in through Action Cable to the front end to include it in the page, and then we play a thing, right? So let's talk about Action Cable real quick, right? I think you probably came to hear about Action Cable. It is enterprise ready, in case anyone was curious. This is an example of an enterprise-ready commit with some seed users from our benevolent leader, DHH. The Notorious B.I.G. is forever immortalized here in the enterprise-ready Action Cable example application, which is actually really useful if you're trying to figure this out from the beginning. But I'm going to walk you through it as simply as I can think to. Action Cable: people have used PubSub. Are you familiar with PubSub? We have used that thing, right? Where I can publish messages and other people can consume those messages on something. It doesn't really matter what that is, right? This is kind of like the fundamental basis for what we're about to create. So normally when I ask a server for a thing, right? My browser is like SYN, right? This is the first thing that they send. It's a little packet, right? So let's try it. Does anyone know what the next packet back is? If I said SYN, what would you say? ACK. Okay, close. SYN-ACK. I didn't know that either. I thought it was ACK too. But ACK is what I'm supposed to say to the SYN-ACK. If you're a networking engineer and you think I'm wrong, please tell me why I just made bad slides, because I am very interested to have found that thing. But anyway, that's how a typical connection goes, right? I'm like, hey, synchronize. You see me, and the computer's like, I see you, right? And then we're good. And then I send a file. But that's it. That's the end of our interaction, right? So a WebSocket is an alternative to that style of connection, where instead of doing a little dance and then being done, it's a little bit more like this. We're just screaming at each other all the time. It's just more screaming, okay? I scream and they scream and we're all screaming all the time. It's a very peaceful arrangement. So there are four major components to setting up an Action Cable thing, right?
The first is the connection, which is going to be inside your Rails application on the server side. The next one is the channel. This is so that you can have different ways, right? Anybody remember these TVs? When I was a kid, we had like five channels. You know, they're spoiled. You got so many channels, you like gave up on channels altogether. And you're like, Netflix, man. I don't even use channels. Anyway, channel is just like a channel on your television, right? You got one, two, three, four, five, right? So I send things over one and I can have people listen to one or two or three, right? Not everyone's watching the same TV show all the time. Look at me. I'm a broadcaster. And you have a consumer, okay? The consumer's a little piece that sits in your browser and it consumes a thing, right? It eats up all of the stuff on all of the channels, right? And you can create subscriptions to specific channels. You can turn your TV to the notch you want. You can actually turn your TV to a lot of notches. It's like the little picture-in-picture feature. The analogy's breaking down. But you can do a lot of channels at the same time, as many subscriptions as you want. So I'm going to show you some code to do that real quick, okay? This is how you're creating a subscription. In this case, I'm creating a subscription to the deployment channel, right? When a message comes over on this deployment channel sending me some text, I'm going to perform this action, this deploy action over here with that text. And that deploy action is defined here in my channel, right? This is my action deploy. It gets some text. I log it and then I take the text and I put it into a deployment, right? To respond to it. And that's going to kick off my whole back end. But it's a way fundamentally of taking these two components on this side and these two on this side and shooting things over the wire between the browser and the server, right? The front end and the back end. So then when I want to send things back out here, I'm broadcasting to the server on a particular channel. This is the deployment responses channel. I said you could have multiples. This is a different one, right? And I put the text onto the deployment responses channel, which is the text you heard earlier from Alexa. She had that big diatribe. Alexa heard me say deploy how to Steve to production and it sent that to my back end and the back end generated the text response using that little action cable there and send it back out, okay? And then we're streaming from the deployment responses. So the application that we are here to show you today is a little bit different than this particular version. This is what happens when you receive the data on the deployment response and then in the front end JavaScript you can do speech synthesis on that kind of thing. You can actually use that skill that I just wrote in a browser, which is the point of using action cable for this thing. For doing the thing where you talk to Alexa, you don't need anything like that, right? But I wanted to have people be able to talk to their web browser and get it to do things. And that's what the Hal stuff does. So that was a lot of words and I apologize, but Julian is going to talk to you about something different so you can start hearing again, not just my droning. All right. All right. Now back to some code. Right. So this is RailsConf, obviously. So for that we've made Alexa on Rails. 
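Before the hand-off, here is a rough sketch of the two channels just described; the channel and action names follow the talk, while the payload keys and the Deployment model are assumptions.

```ruby
# app/channels/deployment_channel.rb
# Receives "deploy" actions pushed from the browser-side subscription.
class DeploymentChannel < ApplicationCable::Channel
  def deploy(data)
    Rails.logger.info("deploy requested: #{data['text']}")
    Deployment.create!(text: data["text"])   # hypothetical model that kicks off the work
  end
end

# app/channels/deployment_responses_channel.rb
# The browser subscribes here to receive whatever Alexa (or the server) says back.
class DeploymentResponsesChannel < ApplicationCable::Channel
  def subscribed
    stream_from "deployment_responses"
  end
end

# Somewhere on the server side, once a response is ready:
ActionCable.server.broadcast("deployment_responses", text: "HAL has deployed Steve to production.")
```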
So following on from the other gem earlier that would generate the utterances and the JSON file, when your service actually receives the payload from the Alexa, you need to be able to handle that and then give a response back. And again, it's lots and lots of artisanal JSON. So there's another gem that you should all go and download called Alexa Rubykit. And this is super, super simple. So just in your Rails controller, you just create a new instance of Alexa Rubykit response. And then you just need to generate the string. As you can see, my method that I call here is which demo am I calling? And in that I pass in the Alexa variable which contains all of those keywords from the utterances like the talk, the speaker, the day. And then I've just basically got a massive if statement that kind of works out. Like if this text was this, then I must reply with this. So then you get all of that. And then you go down to the Alexa type. So this was just an intent request as a user was asking for something. And then you have like the session end which Alexa sends itself automatically. It's like, okay, I've finished now with the request. There are other Alexa types you can use. Like you can say Alexa stop. No, Alexa stop. Alexa stop. If any of you have ever used Alexa to like maybe place some music or set an alarm, normally you have to shout stop quite a lot to it. That's where you would put that in this Alexa type. That would handle that response. So then you could reply with something back like, oh, sorry, my bad. As well as generating that, this also creates the title card. So in the Alexa app, you can see what the request was that you sent. And so you can see here, this is in the Amazon Alexa app. It's got the title card of Alexa on Rails at RailsConf. And Alexa heard me ask Rosie who is speaking at 11 AM today. And then you can actually re-listen to your voice again which is super fun just to make sure. And then you can tell Amazon, it's like, no, Alexa somehow burnt the toast. I wanted to know what time it was. So as you can see, my robot is, for Alexa, is called Rosie aptly named after one of my favorite robots which is Rosie from the Jetsons. Cool. Right. So now you've heard about Action Cable. You've heard about Alexa. I think maybe it's time for some demos. Ship it. Ship it. Do you want to load up? Oh, I don't want them to know about that. You didn't see anything. So, you're going to talk to your thing and show them the bills. We've got 15 minutes. Actually, maybe we'll go. What do you want to do? Do some demos? Yeah, do this one. Yeah. Right. So I took the schedule from the conference and I put that into a Rails app. That's running on the cloud. And so when I talk to Alexa, it can then go query the Rails database and hopefully come back with an answer. So let's see. Ask Rosie which room is tomorrow keynote. Sorry. Couldn't find a thing. Let's try that again. She really hates Australians, is it a thing? If I won't try again. So that was, so one thing you have to do in your code is actually handle errors and things. Otherwise, Alexa doesn't really know what to say. So that was it not being able to find anything in the database. Let's try another query. Ask Rosie who is speaking at 11.40 today? Ask Rosie who is speaking at 11.40 AM today? You can do it. 
Well, we have Kristin Nelson on Uncertain Times: securing Rails apps and user data; Poly Shandorth, Tyrion Cossack, Sean Marcia, and Sarah Mei with a panel on developer happiness through getting involved; Jake Worth, observing change, a gold master test in practice; Justin Weiss, a deep dive into sessions; and Vaidehi Joshi, Goldilocks and the Three Code Reviews. Overall a pretty decent lineup. I think you should see Poly Shandorth, Tyrion Cossack, Sean Marcia, and Sarah Mei's talk. Rosie will even recommend things to you. I want a round of applause for that, because that is really hard to do. It's not actually that simple. Ask Rosie, give details of Aaron talk. Sorry, couldn't find a thing. Let's try that again. If you're thinking about being speakers in the future, don't do the live demos. Ask Rosie, give details of Aaron talk. Sorry, couldn't find a thing. Let's try that again. Is that why? No, it's set to British. Ask Rosie, give details of Aaron's talk. Sorry, couldn't find a thing. Let's try that again. I have been judged and found wanting. Apparently not British enough. We'll try that one one more time. The only thing is I can't work out how to do apostrophes with the Alexa service, so that's why it's Aaron and not Aaron's. Let's try it one more time. Ask Rosie, give details of Aaron talk. Sorry, couldn't find a thing. I surrender. Ask Rosie, who is speaking at 12.20pm today in exhibit hall? It's just giving up now. It's quit on us. Let's try one last time. Ask Rosie, who is speaking at 12.20pm today in exhibit hall? Oh, such excitement. Looks like it's time for lunch. So as well as talking to the conference schedule to find things out, you can actually do some useful things for work. You can hook it up to any API that you like. Ask Rosie, what is Rails build status? They call me mellow yellow. The build is still building. So there are practical applications for this. And with a decent Wi-Fi connection that is not me tethering from inside a concrete box, and a little more robust handling that is maybe not like a demo version, you can actually use this to do things. I have used this to deploy applications. Earl Grey tea, hot. The replicators on this vessel are not yet operational. Alexa is really good at jokes. We got it? I'm going to show mine. Ours, this one. I would like to welcome you all to the future. There is an application here for your viewing enjoyment at friendsterbookspace.herokuapp.com. And as you're heading there, I would like to point out that this is a chat room being presented at a RailsConf talk, and it is very much bound by the code of conduct. So behave yourselves. If there's anything inappropriate, I am burning it down and breaking my computer and coming to find you. And I have more information about you than you think. We're using a real name service, the Google auth. More than that, WebSockets can give you a lot. I'll give you some details later. So let's look over here at FriendsterBookSpace on this screen, like that. The future of everything. The first AI machine learning dialogue system to leverage IoT home automation and voice recognition WebSocket technologies on a continuously integrated PaaS to enable MMOC: massively multiplayer online chat room. Are you excited? Who wants to hug the future? Should I do it? All right. It's not on the screen. Are you serious? I was just reading a thing. You're like looking at my, like, garbage files. Why are you doing that instead of what I want you to do? Now you can see. Oh, my gosh. Now you know I have a messy desktop.
That's like the most shameful thing a programmer can have, I feel like. Right? Okay. I'm going to try again. I was just reading this text here. It's not that exciting. I'm not going to read it again because it was kind of hard the first time. But I will invite you to hug the future, which out of context doesn't make any sense. But as you can see, I've cleverly named the login button hug the future. So you click here. It's going to auth me with my Google thing here. I will choose to not use this browser, first of all, because that would be a poor choice. I apologize for that. But I'm here on the FriendsterBookSpace and look at that. I am logged in. Magic. Wow. How'd that happen? Now, you don't have any of my credentials or my passwords. So I can go right in to the future here by clicking this. And I've got a chat room, and look at people talking in the chat room already. It's so exciting, right? It's a real live chat room. It's in Action Cable. We finally found a use for Action Cable and it's basically making Slack. Hi, mom. That's good. But it's better. It's the future. It's the future of Slack. And I want to show you why real quick. Because I can say things like this, maybe. Oh, no. That's not what we wanted to do. Did it just, okay. How about this? Now I want to do this. Please do a thing. Why are you doing that? Because people are typing. Is that really a bug I just introduced? Please stop typing. Okay. You got nothing for me? Hello, wheels conference. Oh, look at that. We just chatted to Alexa. And to show some joined-up thinking, hopefully when I speak to this Alexa, that should also go in the chat room. Ask Rosie, what is Rails build status? I'm Alexa. This is my first Rails conference. That's perfect. It wasn't quite what I wanted. It went great. What a good post, Alexa. So not only can you speak to your Alexa from text, you can also do it with your voice and then you can talk to your colleagues. So this is a remote worker's dream. Yes. That's why we all work from home in the first place, is to talk to our colleagues. Jonan Scheffler asked, what do you want to be when you grow up? Right now I'm translating MP3s to WAV files. I want to be the computer from Star Trek. This is a good answer. Thank you, Alexa. You can ask Alexa things. You can say things. All of the things that you ask Alexa to say and ask will happen in your own browser. So the audio is played back through me embedding an audio element that has the S3 MP3 link in it that can then be played in the browser. You can also implement features. Yes, Alexa's got excellent jokes. You can also implement features that would allow you to tell other people things. And so those channels that I was talking about creating earlier, there is one broad channel here that is the messages channel that we're working through. People post questions and things generally in that channel. And then each user has their own channel. So I could, hypothetically, have implemented a feature that would allow me to have Julian's computer say a thing. Raise your hand if you think that's a good idea in a live demo. No, you're wrong. Actually incorrect. Totally incorrect. You could also implement a thing that iterated through all of the users and sent to each of their channels anything you wanted. And I could send audio to all of your devices. If you were the sorts who all had your laptops open and the volume turned up way high right now, I could blast you all with an MP3. I'm just kidding. I'm not going to do that.
But that would be something you could do with this type of thing. So the channels give you a lot of flexibility and a lot of power. Action Cable is a fantastic tool. Let's hop back over here very quickly. And I'll do this. And it worked. And you can see that. Do you see my dirty desktop still? Confreaks, go ahead and cut the desktop from the video. Let's see if we can find Aaron's talk again. Let's do it. Do Aaron's talk. Yeah. Ask Rosie, give details of Aaron talk. Sorry. Couldn't find a thing. Let's try that again. We invite you to try on your own and find Julian in the hallways and try talking to his Alexa in your best British accent. You can actually set the voice that it expects. And the thing where I was talking about Polly earlier, having 16 different languages: you could very easily change this chat room code to send Japanese text and say Japanese text, and it's very good. So go and play with Polly. If you know how to do HTTP/2, please teach me. Also, the other Amazon API documentation, the V1 stuff, is buried deep. But I can help you find it if you need a link. Hit us up anytime. We're RedRoku, the new startup. Maybe you'll get it on the Facebook page there too. Sorry about the top of the video.
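As a sketch of the per-user channels idea mentioned a moment ago, Action Cable's stream_for and broadcast_to pair gives each signed-in user their own stream; the channel name and payload here are hypothetical.

```ruby
# app/channels/user_audio_channel.rb
class UserAudioChannel < ApplicationCable::Channel
  def subscribed
    # Assumes the connection identifies current_user (e.g. via the Google auth above).
    stream_for current_user
  end
end

# Later, to push an MP3 URL to one person, or to loop over everyone:
User.find_each do |user|
  UserAudioChannel.broadcast_to(user, audio_url: "https://example-bucket.s3.amazonaws.com/speech.mp3")
end
```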
Chatbots, ActionCable, A.I. and you: these and many more buzzwords will enthral you in this talk. We'll learn how to create a simple chatroom in Rails using ActionCable, then how to talk to your colleagues in the office or in remote locations using text to speech and the Amazon Voice Service. Using the power of ActionCable we will explore how it's possible to create an MMMOC: a massively multiplayer online chatroom that you can use TODAY to see your Travis build status, deploy code to your favourite PaaS, or let you know when the latest release of Rails is out. Using nothing but your voice and ActionCable.
10.5446/31263 (DOI)
We got a lot to cover here, so I'm going to go ahead and get started. This is Open Sourcing: Real Talk. So who am I? I'm Andrew Evans. I work for Hired. I've been there a little over two years. I've worked for all kinds of different startups for like nine, ten years. I've been working with Ruby on Rails since about 1.8 or so. If anybody remembers some fun Twitter-related arguments back then, good times. And I've been writing bugs in PHP or QBasic or all sorts of things for longer than I care to remember. So what is Hired? Hired is the best way to get a tech job. As a candidate, you make a profile. We put you live in our two-sided marketplace. And then companies ask you to interview with them, and they have to give you salary, equity, everything else up front, and you decide if it's worth responding and having that conversation. As a company, this is the best and fastest way to get great candidates into your pipeline. But this is not a pitch talk, so I'm not going to focus on that. Instead, I'm going to talk about open sourcing. This is something that Hired's been doing basically since the beginning. We have 46 public repos on GitHub. A lot of those are forks where we've needed to make some modifications or thought about modifying things. But some of those are open source projects that maybe you can use. These are just a couple. We have a PubSub layer over Sidekiq. We have a query builder for Elasticsearch. We have Fortitude, which is our CSS framework, which is built to scale. It's built for teams. We even have some smaller ones. We have a Rack middleware that will log stats from the Puma web server. So if you're using that, it can be helpful for gathering more info. And we have a tiny thing which will change the color of the lights in your office. If you're using something like a Philips Hue, we turn our lights red when the master build fails. So it's a good incentive to get on Slack and figure out what happened. I'm just going to dive deep into these two so we can talk a little bit about what producing these projects was like. I'm going to cover whether it would be worthwhile or not for you to do any open source work, extracting anything out of your app, and making it public. I'm going to talk about some of the challenges that we faced in doing that. And I'm going to talk about how it played out across our team, like how did that affect the dynamics. And I'm going to try and get a little technical and talk about what we learned about Ruby and RubyGems and Rails. We'll have some time for Q&A at the end. So why should you open source things? Well, if you've got some logic in your application, then clearly every other Rails app is doing pretty much the same thing. So if we all share our code, then we could all do this stuff automatically. When you open source something, the community will fix bugs in it. They'll add features. They'll document your things. It's free labor. It's amazing. You'll get developer street cred. You'll get lots of respect and appreciation. You'll get 10K followers on Twitter and hundreds of thousands of GitHub stars. And people will open issues and say, well, there's no bugs or problems with this, I just want to say thank you and we love you. So literally none of that is true. You may have bits of application code that everyone needs across basically every Rails app, in which case, what's it like working at Thoughtbot? But for the most part, you may be doing some pretty specialized stuff. Most of your app is business logic, so that lives in your business app.
So real talk about our projects in particular. I pulled some data off of Best Gems. Best Gems basically scrapes data off rubygems.org every day and tracks the total number of downloads, average number of downloads, et cetera. And it gives your gem a rank out of the 130,000 plus gems on RubyGems. You can see ours are not quite in the top 100 or 1,000. We're probably not going to be on the public leaderboards anytime soon. Our average daily downloads, I adjusted and got rid of weekends and like holidays for that. But basically it's a little bit over the range we'd expect for how many times a day we deploy to Heroku. So hopefully these aren't all just git push heroku, oh look, we got more downloads. I also pulled some data off GitHub. The number of stars and followers we have is a bit more encouraging. There are probably some people outside of our company that care about these repos. But we're not getting people opening issues just saying how great we are. I found some gems that are similar on the Best Gems ranking. So one of them is a framework that lets you build bots for Google Wave. Does anybody remember that? That existed for a hot second. One of them is a command line interface to The Pirate Bay. I think the founders announced a few months ago that they'd been sued and thrown in jail too many times, so that's dead. One of them combines ActiveAdmin and Trailblazer. Trailblazer is the object-oriented, explicit over implicit, loosely coupled, well-organized framework that you can add on top of your Rails app. ActiveAdmin is literally the opposite of every one of those values. So I'm not sure what you get by combining them. And the last one is a command line app that gives you a random Mitch Hedberg quote. I used to use that all the time. I mean, I still do, but I used to, too. So what we found is if you don't market your gem, then people don't really just find it organically. We haven't done a lot of promotion for these. I don't think we post about them on our blog too often. This may be the first talk where we've gone deep into our open source stuff at RailsConf. So we haven't gotten a lot of people that just found this out of the blue and said this is the greatest thing. Was it still worth it for us to open source this code from our application? I'll dive in a little bit and we'll see. So Reactor is our enhanced interface to Sidekiq. This is how we do a lot of our background processing at Hired. Sidekiq, if you don't know, is the premier background processing library for Ruby and Rails applications that was made by Mike Perham, that guy. Mike is awesome. Thank you very much. It's been a wonderful system for us at Hired and it's been rock solid. So the thing that we were looking at when considering Reactor was when you have, say, a candidate updating their profile, you may want to do a couple of different things in the background. You might want to notify the talent advocate that's working with them. You might want to bust a cache or update a saved search or something like that. So with Sidekiq, you have worker classes for these things and you call perform_async and those things will happen in the background outside of your web request. But if you're doing a whole bunch of things, then you may have a whole bunch of different workers that you have to tell to do the thing. So you face a choice. Do you want to copy and paste all of those calls everywhere that you're updating a candidate's profile?
Or you can have one worker that fires off and then that calls other workers, which might then call other workers. For the most part, this isn't a huge deal. I think it's a recommended practice. But if you're like me and you tend to forget things, then you might throw a cycle into your workers and they'll spin up 100 or 10,000 jobs. However many accumulate until you can shut down Sidekiq and figure out what happened. So basically, with that pattern, you end up with this big graph of things happening throughout Sidekiq. You have to kind of model that in your head and it can make it a little difficult to track down where exactly specific jobs are coming from. So Reactor is a publish and subscribe system. Basically you publish an event. That event has a name, which is just a symbol. You can pass it any arbitrary data. There's also an extension so that you can have ActiveRecord objects publish an event. And then those objects are serialized and deserialized and available within your event code. So this will kick off a single Sidekiq worker, which then locates all the subscribers to that particular event name. And it will enqueue a job for each of those. We wrote a convenience method for our controllers called action_event. This will publish an event from the current user and it will merge in parameters or any other data that we think would be useful for tracing, tracking, analytics, et cetera. So what do subscribers look like? You include this Subscribable module. That gives you the class method on_event. So you say on_event, you give it the event name and then you give it a block of what to do when that event happens. So in our first example, we pull out the actor, which is whatever published the event. We get their ID and then we can bust a cache related to that particular record. You can also set a wildcard handler. So this thing will respond to all events. And we use that for logging site activity to Postgres or to anywhere else that we want to have that analytics data available. There's more options to it. Instead of passing a block, you can pass it an event name. If your objects and functionality are a little more complicated and you want to unit test them, this can make it a little bit easier. You can also pass options to it like delays and perform later. So for example, once an interview request happens, from this subscriber, then two days later we'll send an update. And you can do some logic in that, like figure out if it's canceled and then abort. We also have a system for creating model-based subscribers. So in this case, we create a subscriber called admin notifier. This is going to live in our Postgres database. And we define the on_fire code to say what happens when this event fires. The neat thing about this is you can have it subscribe to multiple different events from the database. So if I'm an admin and I want to be notified when these three different things happen, then I can just do that through our admin interface and not have to deploy new code to handle that. These can be a little bit magical. So their use has fallen off a little bit. Finally, you can have records publish events based on the data in those records. So for example, if we have an interview that has a start_at and an end_at time, then we can tell it to publish events at those times. We can even pass it some conditions and some other things similar to validations and callbacks.
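Pieced together from that description, a Reactor-style subscriber and publisher might look roughly like this. The module, method, and event names follow the talk's description rather than the gem's documentation, so treat them as approximate, and the wildcard syntax in particular may differ.

```ruby
# Approximate sketch of the publish/subscribe flow described above.
class CacheBuster
  include Reactor::Subscribable # module name per the talk; may differ in the gem

  # Runs in its own background job whenever :candidate_updated is published.
  on_event :candidate_updated do |event|
    # The actor is whatever published the event (here, the candidate record).
    Rails.cache.delete("candidate/#{event.actor.id}/profile")
  end

  # A wildcard handler that sees every event, used for site activity logging.
  on_event '*' do |event|
    SiteActivity.create!(name: event.name, data: event.data) # hypothetical model
  end
end

# Publishing from a controller with the convenience method described above:
class ProfilesController < ApplicationController
  def update
    current_user.update!(profile_params)
    action_event :candidate_updated # merges in the current user and params
    redirect_to profile_path
  end
end
```

The scheduled, record-based publishing mentioned above layers on top of this same flow.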
And the nice thing is if the model gets updated, the start_at or end_at changes, then it will automatically reschedule that for us. So did it help that we took this code and instead of just building it in our application, we put it in an open source repo? Well when you do that, you get this nice green field project. And it actually changes your desktop wallpaper to the old Windows XP wallpaper. Green fields are great. You don't have to worry about your legacy code. You can just start developing fresh. And it keeps your code separate, outside of your application. So that produces a little bit of friction to updating the library, which it turns out can be a really good thing. The code for Reactor is kind of out of sight, out of mind for day-to-day feature development for us. So it discourages making small arbitrary changes. If I want to say, like, oh, this should be calling instance methods, not class methods, or this or that, I'm less likely to take that on because I have to context switch and then publish to RubyGems and go through this slightly higher friction process. Because it turns it into a lower churn section of code, it keeps the pattern and convention clear. This action_event is used all over our application, and basically it stays the same everywhere and it has just kept working. We had to make some updates for Rails 5, and I'm about to make some updates for Sidekiq 5, but for the most part, we don't have to think about it too much. Also, when you have this in a public repository instead of your private application repository, that's going on your permanent record on GitHub. That might add just a little bit more pressure to do it right. You might be a little more attentive to things like documenting all of your methods and making sure that your test coverage is 100% and that your Code Climate score isn't going down. Finally, when you're publishing through RubyGems specifically, you have to be a little more thoughtful about your versioning. You really have to think about, okay, is this going to break anything? Is this backwards incompatible? Or is this just a feature or a patch release that isn't going to break anything existing but maybe provide a new feature? Also, when you're building outside your application and you're not thinking so much about features, it gives you a chance to think more clearly about the pattern of what you're building. So instead of thinking about what do I need to do this feature, you think about what is the minimum information that this open source library needs to do its job. You also have to think about how RubyGems and Rails and Bundler and these other tools work. One thing I stumbled onto pretty early is that requiring Rails is harder than it sounds. I looked through things like Devise and Paperclip and Sidekiq and they all require Rails in totally different ways, which I found amusing. Some of these libraries require Rails for all of their functionality and some of them have a few Rails-specific extensions. That caused me to dive a little deeper into, okay, well, what is a Railtie? Is that something I need? Where do I put the require statements? How do I manage the dependencies for that? And then it makes you think about whether you want to use Rails-specific extensions inside your gem that's living outside your application. So do we want to use the ActiveSupport::Concern pattern, which is a pretty nice way of extending objects, or do we want to use the basic Ruby include and extend patterns?
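That last question is easier to weigh with the two versions side by side. Here is a toy `on_event` registration written both ways; the Concern version is a little shorter but pulls in ActiveSupport, while the plain-Ruby version has no dependencies at all. The method body is just a stand-in, not Reactor's real implementation.

```ruby
require "active_support/concern"

# Option 1: ActiveSupport::Concern (ties the gem to ActiveSupport).
module ConcernStyleSubscribable
  extend ActiveSupport::Concern

  class_methods do
    def on_event(name, &block)
      (@event_handlers ||= {})[name] = block
    end
  end
end

# Option 2: plain Ruby included/extend hooks (no dependencies).
module PlainRubySubscribable
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def on_event(name, &block)
      (@event_handlers ||= {})[name] = block
    end
  end
end
```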
And then you think a little bit more about the overall design pattern. So when we're thinking about Reactor, we're thinking, oh, I just want this action_event method in my code so I can just put events all over the place. But when you're designing a gem from the ground up, it's like, well, okay, what's the name for that pattern? So when you go through design pattern blog posts, or the Gang of Four book, or whatever, you say, oh, this is a message bus. So we're going to make channels that events get passed along. These events are going to have arbitrary data. Some people get this confused with event delegation, particularly if you've done a lot of jQuery. You can have multiple listeners for particular events there. But those actually form a chain. And any of the listeners in that chain can abort the entire process. With the message bus, all of the subscribers get the message regardless of what each of them is doing individually. So if you wanted to see some examples of subscribers and message buses written in Rails, the shiny new Action Cable has a pretty well put together, well documented example of this. I have a link here to the subscriber map. Or if you want something a bit more core, ActiveSupport::Notifications has been in Rails for quite a while. If you're wondering how all of your database statements get from the application code to your development console, this is what's doing it. And this has a link to subscribers in the class that's responsible for organizing them and mapping them. So TLDR, publishing the Reactor gem didn't make us famous, didn't really help establish any of those selfish goals that we put out. But it did help our team establish a convention. It has quietly kept working for years. And it's allowed us to write that code in a way that's smaller and more loosely coupled and better organized. It's also a good excuse when anyone wants to work on it to go in and look at the pattern, look at the examples, and see how does a message bus work, what's cool about this. The other one I wanted to talk about was Stretchy. This is a composable query builder for Elasticsearch. Elasticsearch is a pretty amazing technology for full text search. It's based on Lucene. It's similar to Solr. Basically, when you want something more advanced than a relational database LIKE statement, then this is a good place to go. And yes, we spent an inordinate amount of time thinking about the name for this because no gem is going to take off and become popular unless you have a really clever pun as the name for it. So I wrote the first version of this. The goals that I was thinking of were I wanted to have an ActiveRecord-ish query syntax that we can use for Elasticsearch. I wanted to have these immutable query objects which make handling and caching responses really easy to manage. I wanted this to be just a query builder. For indexing our models to Elasticsearch, we already have Reactor so we can index on specific events. We have serializers. So I figured we'd just keep our Elasticsearch gem out of all of that business. I wanted to make this flexible enough for pretty complicated Elasticsearch queries. Because this is just a query builder, it's taking in a bunch of method calls and then it's producing some JSON that we're going to send to Elasticsearch. That means we don't really need any dependencies, which is pretty nice. We don't have to require all of Rails for this one thing.
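As a quick aside before getting into Stretchy's internals: the ActiveSupport::Notifications message bus mentioned above is easy to play with directly. The event name and payload below are made up, but the instrument/subscribe API is the real one.

```ruby
require "active_support/notifications"

# Any number of subscribers can listen on an event name...
ActiveSupport::Notifications.subscribe("face.rendered") do |name, start, finish, _id, payload|
  puts "#{name} took #{(finish - start).round(3)}s for user #{payload[:user_id]}"
end

# ...and publishers instrument their work without knowing who is listening.
ActiveSupport::Notifications.instrument("face.rendered", user_id: 42) do
  sleep 0.01 # stand-in for actually rendering a face
end
```

Back to Stretchy and its goals.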
And the end goal of all of that is we have these composable scopes that we can use in the same way that we're familiar with from ActiveRecord. So here's an example of what that looks like. This is a query that should get us back Lynn Conway. She basically invented modern microprocessors. So it's a good person to find in our candidate database. You can see here we have the where method. Pretty similar to ActiveRecord. You can throw some fairly arbitrary parameters at it and it will figure out what the right query is to apply those. But then there's some more Elasticsearch specific stuff. This match query, for example, is like the most basic full text matching, relevance sorting query in Elasticsearch. And then we have these boost methods. That gets into what's called a function score query. It lets you tweak how your search results are going to be ranked. When you look at the query that's generated, it's this giant messy JSON thing. When I first started working with Elasticsearch, I tried to figure out how to make these giant JSON documents composable so we could mix and match them in different ways. Turned out to be really hard, which is what inspired this whole ActiveRecord-like approach. So some other example methods. I won't get too deep into these, but you tell it like what index you're going to search, what type you want to get back. And then you can use where for filters. You can use match for full text matching. You can OR multiple filters together by using the should clause. This comes straight from the Elasticsearch DSL. And then you can add boosts to tweak your relevance algorithms. And finally, if what Stretchy has isn't enough, then you can go lower level. You can say this is just a query and then throw some raw JSON at it. And it will figure out where that needs to go in your final query that's going to be sent over. Elasticsearch has a whole lot of stuff in it. It has about 7 million tools for doing different kinds of searches and figuring out relevance and producing the set of results that you want. So it became obvious pretty early we couldn't support all of that. So the goal was just to get to a system where we could do what we needed to in the app and then hopefully expand that as we got more and more features at Hired. So how do I get started making an ActiveRecord-like chainable interface? I pulled out this 2010 blog post that I've had bookmarked forever, from RailsTips.org. Thanks, John Nunemaker. This stuff still works. It's still pretty much the same in Ruby 2.3. Hopefully you have a class, that class has some data, maybe an array of nodes, let's say. And each of your chainable methods returns a new instance of that class with some new stuff added to it. So when you get down, you're like, okay, we're building up this array and then we're going to compile it to this JSON query. What does that look like? Is there a name for that? How does it work? Turns out, it's called the visitor pattern. I was looking around at how other repos do this. I looked at ActiveRecord and oh, that uses Arel. And Arel has a visitor pattern. So that has a collector class and then it has a tree manager for figuring out what was collected and compiling it. And then it has multiple different visitor classes to take that tree of abstract nodes and compile it into a query for, say, MySQL or Postgres or Oracle, I guess, if you're into that kind of thing. This is an example of what that tree of abstract nodes is going to look like.
You have a select node and then that can have multiple child nodes, like what are you selecting, where are you getting it from, and then any filters that are going to be applied to your query. And then those filters can have child nodes and those have child nodes. So the visitor will traverse all of those and compile it into the final string. So Stretchy didn't need that full stack. There's really only one Elasticsearch. We decided, because of time, we only wanted to support the latest version. So we can basically combine the visiting and compile steps. But the pattern was still pretty much the same. Some challenges we ran into in having this query builder live outside of our repo and kind of be its own thing. It was a little confusing to the rest of the team initially. If you look at other gems for integrating Elasticsearch into your Rails app, they do a lot more. They have elasticsearch-rails and Searchkick and Chewy. They all tie into ActiveModel. They have some other miscellaneous helpers. And Stretchy doesn't do any of that. So other developers were like, how do I do this or that with Stretchy? And it's like, well, we don't actually do that. And it turns out when you have this ActiveRecord-like API, the whole point is to make it easier so you don't have to look at all the documentation for everything all the time. But it turns out that hides a lot of the actual features of Elasticsearch. It hides the underlying complexity. So other engineers will look at this and say, oh, I'm just going to use this ActiveRecord-like thing. But it's like maybe you're missing out on a lot of the stuff you could be doing there. When we have an open source gem plus our internal application queries, it can be a little fuzzy deciding what should go in the open source versus what should go in the app. Obviously, all of our business logic should be in the app. But if we're using this particular style of query for text relevance or that particular style, is that worth having a convenience method for in the open source package? And then early on, we had a couple of cases where it's like, okay, we want to make this feature, but then the gem doesn't support that yet. So your open source library ideally should not be blocking any future development. So to fix that part at least, we had to make it more flexible and easier to customize within our application. And then a little bit later, we made it more flexible and extensible so that we could optimize those queries in our application. And that turned out to be a huge win, actually. You can really see the process of editing and refining to get a flexible, extensible solution, because ultimately it needs less code. And it makes clear what the bottom-level versus mid-level building blocks are. So I actually graphed this for the Stretchy repo. I took each of our master commits and looked at how many lines of code are in it. So we've done one or two rewrites. The first one, I'm not going to call it bad, but it certainly added a lot of code. And then over time, we've been able to just keep cutting that down. And at this point, it's back to being a pretty small gem. So the TLDR for that, by making this open source gem in this nice green field, we got to learn the cool new visitor pattern and see how do you do that in Ruby. For me, it was a great excuse to dive in and read a bunch of code from Arel and Mongoid and other, like, more mature frameworks.
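The chainable-methods idea and the collapsed visit-and-compile step can fit in a surprisingly small amount of Ruby. This toy shows the shape of the pattern only; it is not Stretchy's actual implementation, and the generated JSON is simplified.

```ruby
# Toy query builder: each chainable method returns a new, frozen object,
# and a tiny visitor-style compile step walks the nodes to emit a hash.
class TinyQuery
  Node = Struct.new(:kind, :body)

  attr_reader :nodes

  def initialize(nodes = [])
    @nodes = nodes.freeze
  end

  # Each chainable method returns a brand-new query with one more node.
  def where(conditions)
    chain(Node.new(:filter, conditions))
  end

  def match(text)
    chain(Node.new(:match, text))
  end

  # The "visitor" half: dispatch on node kind and compile to Elasticsearch-ish JSON.
  def to_search
    { query: { bool: {
      filter: nodes.select { |n| n.kind == :filter }.map { |n| visit_filter(n) },
      must:   nodes.select { |n| n.kind == :match }.map { |n| visit_match(n) }
    } } }
  end

  private

  def chain(node)
    self.class.new(nodes + [node])
  end

  def visit_filter(node)
    { term: node.body }
  end

  def visit_match(node)
    { match: { _all: node.body } }
  end
end

base    = TinyQuery.new
skilled = base.where(role: "engineer").match("microprocessor architecture")
skilled.to_search # => a bool query with one filter clause and one match clause
# `base` is untouched: every step produced a new object, so scopes compose safely.
```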
By having the core interface be open source, that allowed us to keep that very simple and then keep our complex business logic in the app where it belongs. And it's helped us keep a good separation between elastic search queries and the internals of, like, our models and our controllers. So to wrap up, this whole thing, open sourcing has been pretty cool for us, even if, you know, it hasn't made us rock stars, we don't get, you know, a learjet to fly us to RailsConf because of how awesome we are. Even if this guy is your only watcher on GitHub, you can still get a lot of technical benefits for your organization. And even if you don't get free labor, it can help you internally with your team. It can add a little bit of friction, which can be a good thing. Can be a bad thing occasionally, but we found the trade-offs to be worth it most of the time. Thinking outside of Ruby on Rails and sort of leaving behind the conventions and crutches that gives you can make your code simpler and clearer, easier to understand. Developing open source can give you a good excuse to learn more about computers, learn more about Ruby and RubyGems and Bundler and all the other systems for how you get that back into your application. And with Greenfield Blank Slate Projects, it's a great opportunity to be a little bit more thoughtful about the code that you're writing. So I told you we'd save time for Q&A, so here we are. Any questions for me or about these open source projects or about Hire? Yeah. So the question was, Elasticsearch changed versions from 2.x to 5 because reasons. They completely changed how the query DSL works. There used to be this big separation between queries which handle relevance and then filters, which just affect your result set, but don't affect relevance. So I think it took us about three or four months to get that upgraded internally. So as part of that effort, we upgraded the open source gem. That was surprisingly easy because we have this stack and tree pattern. We basically just had to change, okay, in the tree, instead of doing this, we do that. So I can point you to the pull request for that, but I was actually shocked how simple that was. All right. Well, thank you all very much for coming. I'm Agias on Twitter and I've got the slides up on my website. Use Hired. If you're looking for a new opportunity or if you're a company looking for candidates, get online. Give it a shot. Or come work for us. It's a pretty cool company. So, all right. Thanks, everybody. Thanks.
Hired open-sources some useful abstractions from our Majestic Monolith® and we've learned a lot. Some tough lessons, and some cool knowledge. We'll cover: When & where should you pull something out of the code? Does it really help? What things are important to think about? What if it never takes off? We'll also look at some design patterns from our open-source work.
10.5446/31267 (DOI)
Hi. Thank you so much for coming. We've got a really great panel of people who have some wonderful information to share with you. We all know, so we're here today to talk about developer happiness. And we all know that the best way to make developers happy is kittens, right? Very, very cute kittens. So we have to include some kittens. But beyond that, we can get involved in our community. And giving up your time to do a great thing for somebody else is a really awesome way to keep you happy and keep you from burning out in your job. I'm lucky to have these wonderful panelists to give us some information about how to get started. So I'm Polly Shandorf. During my last week of boot camp, I went to an event called Ruby for Good. And it's a three-day sort of retreat. And we work on projects for nonprofits during the day. And then we have dinner together. And in the evenings, we play games. We meet people, we build community. We find mentors. That's where I found my mentor. And it's just a really amazing place to be. And so I've met some of these people through Ruby for Good. And I was really excited about that experience. I've gone several times this year. I'm actually going to run a project. And it's just something I feel really passionate about. And I wanted to share with some of you guys. So I'm going to have the panelists introduce themselves. Hello. I'm Terri and Kossick. And I met Polly at Ruby for Good and also Sean and some other people I think here from Ruby for Good. I work for GitHub. And other ways that I get involved are I'm a co-organizer of Django Girls in Portland. And what else do I do? I write intermediate programming workshops like one that's coming up later today. Hey, I'm Sean. I organize at Ruby for Good thing. But this isn't just a Ruby for Good panel. I work for the government. I try to make things better for the government despite the new person in charge. Hi. Hi, I'm Sarah. I have actually no connection to Ruby for Good at all. I've never been. It's been on my list of things to do for years and I've never actually been to it. Maybe next year. This year it's during the last week of school for my kids. So it's always something. No connection yet. Right. I am the chief consultant at DevMind and my nonprofit work tends to take the form of this conference and RubyConf. I am part of the organizing team for that. And I also in 2009 founded an organization called RailsBridge, which was one of the first groups that started to do workshops aiming to get women and under other sort of under represented folks into the Rails community through workshops. And so since then there's been a lot of other groups that have taken that model and done really interesting things with it both within the Ruby community like with Rails Girls but then also outside of the Ruby community. So for example, the Django Girls is a similar idea. There have been similar ones for like JavaScript and so on. And recently we formed an umbrella organization because if you're a developer you don't really want to figure out how to be a nonprofit organization. You just want to do a workshop. And so now we've got an umbrella organization. So now we've got like GoBridge and MobileBridge and we just did our first ScalaBridge. So we've been trying to help other communities take that model and do that also. So now we're going to chat. I just have everyone's names up there so we can remember. So Sarah, can we start with you? And can you give us all some advice on how do we figure out something to get started with? 
I mean, yes, yeah. So it's a really interesting question. I think that for me I wanted to create the workshop that I wish had existed a couple years previously. So I was coming back into the workforce after my daughter was born and I had been a Java developer. And I had a couple years gap on my resume and suddenly no one would interview me. And I was like, oh, wouldn't it be awesome if there was like a workshop where I could direct all of my mom friends that are in like very similar situations once they get to the point where they want to work again so that they could update their skills and then they could come back into the workforce and do this like hot new thing. And so that's where RailsBridge basically came from for me. It was like, I want this to, I wish that this had existed two years ago. So I think I want to make it now. And that's where a lot of the motivation for my stuff comes from. Sure. I'd like to say too that it's really easy to get involved. Like you don't need, like if you're right in, even if you're in a boot camp, like you have the skills right now, like that's, like, you know, like we've worked with a lot of nonprofits and generally like the kind of things we're building them are just like really, really simple like crud apps. Like we're replacing processes that are, you know, being down on pen and paper and like just horrible processes. Like I could tell stories that would just make your heads all shake. So as far as getting started volunteering, my experience like coming out of college and like really, really wanting to get involved in like getting more women and other under represented folks in the tech, I just went to a million meetups and like ask people who needed help. And that was a little too much initially, but I went to all those meetups. I like found out the things that I liked doing, the things I did not like doing and did more of the things I liked and lots of the things I didn't like. I think that's a really good point. So finding things that spark your fire and get you excited are really good ideas. So Tyrion, have you ever experienced some obstacles getting involved in something that you were excited to get involved with, but then you came across some obstacles and what did you do? I guess obstacles, the first groups that I got involved with were mostly dealing with getting young girls interested in tech, which I really care about. But I realized after doing that for a couple of years that I do not have the energy level for getting little girls into tech and like keeping them away from scissors and all that fun stuff. So that was kind of an obstacle. Ultimately now I work with grown women who know how to use scissors. So an obstacle or a learning situation is what I would call it. Yeah, I think for me, I started out in my volunteer work by working with an organization that does a summer camp for high school girls. And it was really rewarding in some ways, but then on the other ways, they wanted me to write a curriculum for a class. And then again, in a similar situation, it was like high school girls are their own species, I feel like almost. And I almost have one, so yeah, that's going to be exciting. My daughter's 11 right now. And I found that there was a lot for me to learn just about dealing with kids. At the same time that I was trying to learn about how do you teach technical topics, because I had not done that before either. And so I kept at it for a while because I felt like I was sort of obligated to or because I said I would. 
And then I just started getting less and less out of it. And then eventually just decided, you know what? I should probably just let someone do it who wants to do this, right? Who's got the energy and the drive to specifically do this. And they're going to be way better at it than I am. So Sean, we have heard a lot about Ruby for Good and how awesome it is, but it's sold out for this year. And so if people want to do Ruby for Good, they have to wait all the way until next year. So do you have some ideas for other things that people can get involved in? So you can get involved remotely. Like we have, like if helping nonprofits, this is the kind of thing that does float your boat. Hop on our stock channel. We have our go to our GitHub page, just look at all the issues. We tag them, help on it, and try to tag them as well. Like newbie, good first commit. And jump in there because, yeah, like all our stakeholders, we try and get them in their commenting as well. So you'll see Rachel from the diaper bank saying like, hey, is this possible? And it's just really wonderful and fulfilling to, you know, getting that direct interaction. I think Teri has an idea too. I do. I think you're referring to what I think you're referring to. There is a Python offshoot of Ruby for Good happening in Portland in July. I can't remember the website. I think it's codeforgood.io. So that's happening soon. I think that's what you were talking about. I'm not sure. So sometimes when you get started in something, it's a little difficult to figure out how much commitment is involved and how much time you might have to dedicate to it. So Sarah, do you have any tips for sort of up front trying to figure out what level of commitment is going to be required? I think that the one thing that I found helpful in the number of the groups that I've worked with is that they have these things like you were saying that's like good first commit or like easy thing to start with. I find that really useful. What it says to me is not necessarily that I want to do that thing, but what it says to me is that they've thought about which pieces of work they could sort of break off and give to someone who's new, which I think is a thought process that a lot of groups, a lot of times don't really go through. And at Railsbridge, we've been, we've tried hard to have a revolving door, meaning like if you're a student this time, we explicitly encourage them to come back the next time as a TA. If you're a TA, you don't have to teach the class, you're just sitting in the class with the instructor and your role is just to provide another voice, another interpretation, another way of thinking about the concept that they're trying to work on. And we found that to be incredibly valuable just because a lot of times the folks who come back the very next time as a TA are, they're closer in mindset to the people in the class than the instructor is. And so they can offer, you know, a perspective so that the teacher can't offer anymore. And so we tried hard to make that a fairly low commitment deal, right? You just show up for the day and you sit in the class and you sort of comment on things and the teacher may ask you to give your perspective on things once in a while. And so we try and have this sort of ratcheting up level of commitment as you want to do it. And I also think it's okay for you to come and go in a project, in volunteer work in general. Sometimes I have a bunch of room for it in my life and sometimes I don't. 
And so I go through cycles for sure of like, I'm really involved, I'm on the board, I'm going to every meeting. And then there'll be six months or a year where I'm just like, okay, y'all can handle this for a while, right? Because I've got a kid in middle school or something like that, something will happen. And I think that it's a lot like an open source project, a lot of these things. We do need to be cognizant of people's waxing and waning availability and energy and interest in helping. And so I think that if you've got a group that has thought through that, and a lot of the times these groups that are run like open source projects because open source has already sort of gone through that thought process, a lot of times that that does actually exist. And so for me it's a process of like sort of looking for these levels that they've set up. I think that's really good. And like being honest with yourself with how much time you actually have. And I recently went through a situation with a group that I volunteered for and I volunteered to help run a class and it kept getting delayed and then it was running into other things I had volunteered for. So at some point I had to say, you know what, I'm not going to be available to do this for this time, maybe I can do it next time or something like that. And so I think being honest with yourself and then just being really open with other organizers of the thing that you're doing is really helpful. And also I had like lean on your community. We have probably the best community out of any like DHH talked about that. The Ruby community, the Rails community, it's just full of awesome people. And so if it's an open source project, like ask the person maintaining it, like, you know, is this too hard for me? Can I do this? This is my skill level. If you want to build something for someone, you know, go to a meetup, ask people, like, hey, I'm trying to figure out how to build this. There's a lot of helpful people and everyone's going to help you. Also, sometimes it's good to pick the thing that's too hard and then find someone who will pair with you and then you can learn and it's like a win-win situation because you're helping someone, you're learning, you're growing and you're developing a mental relationship. So it can be helpful all around. I just wanted to add a thing about, through Django Girls we really try to push, not like a show, but like encourage people who just went through the tutorial that they should become TAs like Sarah mentioned at the next workshop. And there's a lot of reluctance there, but people who are very close to like the beginning of learning a program are often the best resource because they know what it's like to be a beginner. They might, they're less inclined to be like, why don't you know this? Like, don't you know what Stack Overflow is? So that's like, our teaching especially is a really great way for someone who's starting out in tech to get involved. The first time that I did a real spiritual workshop was the first time I'd ever really taught anybody programming. And we went along for about 20 minutes before one of the women in my class was like, can I ask a dumb question? I was like, sure, and no question is dumb, blah, blah, blah, blah. She's like, what's a variable? And I was like, well, it's a thing in your program with a name. And that was pretty much the quality of my explanation. 
But fortunately, there was another person in the room who had more recently, you know, because at that point it had been 20 years since I had processed what a variable was. And I just couldn't really undig it out from underneath everything. But the person in the room who was more recently become a programmer was able to just be like, okay, so imagine you're in preschool and you've got a big wall of cubbies. And one for each kid, right? And what it is is you can put different things in it. And I was just like, wow, that's a great explanation, actually. That's really cool. I didn't resonate with the people there. And so I learned a bunch about how to teach these things just based on listening to other people explain it. Cool. So our next question is for Terian. You can get us started. How do you recognize when it's time to take a break? And why is that really important? I feel like this doesn't really apply to programming specifically, but in your life when you're doing something that makes you unhappy, you should probably stop doing it. And that sounds really obvious, but I feel like as programmers and people who want to help, it's hard for us to take that step back and see, okay, this is not good for us. But I think someone mentioned already that you can take a step back from open source and the community, and it will continue to survive without you, and you can always come back. I also feel especially as programmers, we're under a lot of pressure. There's always something new to learn, new JavaScript framework, new language, new everything. And you see those charts of what it takes to be a full stack developer, and it's like all these interconnected things. And it can get pretty overwhelming. And we put a lot of pressure on ourselves to feel like we have to know all these things. But I think it's good just to disconnect, read a book, read a lot of books. And like Terian said. And I think that one of the things that's been interesting for me is to figure out that there's a lot of different ways to help, and that some of them I'm better at than I am at others. And the ones that give me, and over time, I think especially because it's not, oftentimes there are issues with our job or on a project we don't like, things are making me grumpy, like work is sort of taking energy out of my life. And a lot of times the volunteer work I do is part of what puts that energy back. It's part of what gets me interested again, just in tech in general, and keeps me going when parts of my life are draining energy, which happens at any job. There will be periods where that will happen. And one thing that it took me a while to figure out though is that that drain can go the other way. If I'm working on a volunteer project that is not giving me energy, there's always going to be periods where you're like, okay, I'm just going to go through these 8 billion issues and tag them all or whatever. But I think that overall it's good to sort of assess once in a while, like is this work that I'm doing still feeding into that energy cycle in a positive way or does it feel like it's draining? I had another thought. I was just reminded by what Sarah was saying by DHH this morning talking about the Juicero and how there's an element of Juicero work and everything that you do. And I feel like that's often less so in community involvement. So you can go to work and work on your juice robot and then volunteer and know that, okay, I'm helping someone's life become better. And that's very useful. 
So Sean, can you talk about some of the benefits to the person themselves or to their career maybe? So I'm a big proponent of, yeah, the benefits of getting involved and stuff like this. And I think the most tangible one is it's open source and whether we like it or not, kind of GitHub is our resume now. And so people laughing. But it's true. Well, to an extent, you're going to apply for a job and if you're just out of boot camp or something, this is something tangible you can point to that they can look at because it's really hard to get value from doing coding Fibonacci or something with someone. And if you've actually built something that they can look at and you can talk about, like your, sorry, it's just agreeing with me. But if there's real code you have out there in the world, then you can have a conversation with someone. But I think... I do think that it is really useful, especially as you're starting out, right, to have some stuff up on GitHub that is not programming challenges or boot camp projects and things like that. I do think though that we tend to overvalue code contributions just in general in terms of when we talk about volunteer work, when we talk about giving back, when we talk about public personas. I remember that we used to do a thing called the Ruby Heroes Award and one thing they would do is they would put up everyone's GitHub contribution graph of the people that won the award. And typically it was all green, meaning they had contributed all this stuff to all these projects and that's super cool. And then I won one and they put up my GitHub contribution graph and there was one little green square on there for the last year. And we don't necessarily have ways to measure sometimes the impact of the work that people do outside of contributing code. And I know that GitHub is thinking about this and I don't know, who knows, you would know better than I do. Yeah, why don't you take over at this point and you can talk about that. This is outside of the realm of GitHub at this point because everyone uses it but GitHub does not want you to treat your contribution graph as a measure of your worth as a human being. It's supposed to be a fun little game or look at the line of green squares but it's gotten out of hand. So Sarah, you talked a little bit about contributing to projects that are draining you instead of fulfilling you. So what do you do when you get involved in a project and it's just not a good fit anymore? I mean, I think that a lot of projects are used to the idea because we all are coming from this open source world, we're used to the idea of contributors are going to come and go and the project itself needs to be resilient to that movement. Just like our projects at work need to be resilient to turnover in the same way. And I think that there's certainly been occasions when people will just drop out and stop showing up on the Slack channel and stop returning emails and stuff. And I generally tend to take that as a sign that they're just overwhelmed. That I let it get to the point without checking in with them that they got overwhelmed. So I don't tend to think that they're like bad people or whatever but I tend to think that they just got overwhelmed and they couldn't handle it anymore and they checked out. And so I try to check in with people as a result of that and be like, hey, so how does this feeling for you? Do you want to do something different? 
I think that more and more projects, people that run these projects are in that boat, so they're going to be pretty sympathetic to being like, hey, I'm overwhelmed right now, I can't really take this on, I'm having trouble keeping up with whatever it is. I would love to just hand this off and do a clean handoff and if you find someone and you want me to sit with them for a couple hours and I can show them what I do or whatever. But I think that most project maintainers I feel like both in sort of open source code related things but then also in teaching and other things are not necessarily expecting you're going to stay forever. But I think it's just like anything else, it's a communication cycle. Yeah, and I think a lot of times they'll be grateful to hear like, okay, you're overwhelmed, you didn't just disappear on me and you've had enough notice to try and find someone else. So we wanted to make sure we left enough time if you guys have questions. So are there any questions? I have some more questions if you guys know. You can ask us anything you want. Okay, so the question was where do you start when you're in a community that's not obvious where to start? So I recently visited my college which is in the middle of Iowa and there's not a whole lot of tech going on there. So I told students there if they want to get more involved in programming stuff that there's lots of things you can do remotely. Because they're college students and Carol at Video Games, most of the most part I've pointed them at like mobile game jams and things of that nature but there's also non-video game things like that. I also think you might be surprised if you held a meetup. In the DC area we scheduled a meetup for out in the eastern shore of Maryland which was like 200 miles away just to hold a meetup with a friend of ours who lived out there and so it was like the meetup was coming to him rather than him coming to the meetup. But the strangest thing happened because we held a meetup out there and it showed up on meetup.com. All these local people just showed up too so you may be surprised that there's a lot more people nearby than you suspect. Especially with so many people working remotely these days I think that a lot of times there may not be an explicit community in a rural area but I'll bet you there's probably at least a couple of folks hacking on code working from home. And sometimes they are super excited to like you know if I have a friend that lived for a while in eastern Washington and she would go to meetups for basically any technology even when she wasn't working with necessarily because she just wanted to like hang out with other tech people and bounce ideas off them in general. And so it can be illuminating to try and have a tech meetup and see what happens. You may pull some people out of the woodwork that you weren't expecting. Or you might find in your community that there are people who are interested in learning Ruby and Rails. You might even if it was a small thing even if you like had a small workshop and then you could introduce some people to this whole new like career and they might have some interest. When I checked in here the other day the last night at the hotel the gentleman behind the desk was like I'm really interested in this coding thing and so he like really wanted some more information about that. And I think you know that exists everywhere and so that's an opportunity. And then also just getting involved in volunteer work like in whatever capacity. 
So I volunteer at a food pantry in a soup kitchen and they have this like crazy Excel spreadsheet database like keeping track of things and they hand count things. And so we're doing a project this year at Ruby for Good for the soup kitchen that I volunteer for but I'm sure if you got involved in some sort of volunteering role you'll quickly find that they don't have money to like buy apps and things to make their process better and easier and faster for all the volunteers they have. So there would be a huge opportunity there and then maybe you could recruit some people in your community or even remote to do something for something that's like local in your community. So the question was how do you recognize the people who are in your group? How do you give back to them and recognize them for contributing? Hugs. Lots of hugs. Oh swags too. Yes. So Ruby for Good you'll see me chasing people down to hug them. That's a joke. You know I think this is something that we have not done a great job at with some of the organizations I've been a part of. I think because maybe we come from, most of us come from a programming background and not so many of us from kind of a non-profit administration background where that kind of stuff is more universally understood that is necessary. So some of the things we've done is like we've printed special t-shirts for people. We made little necklaces with the Rails Bridge logo on it for our board members. We gave people hoodies if they were, if they were hot, I forget what it was, it was like teach five workshops and we'll send you a hoodie kind of deal. But I think that like that is definitely an area in which most of us could use help. Because I think that that cycle of like we appreciate you, we know you're doing this and you don't have to, I think is important and is the loop that we should learn to close better. I had a similar sort of experience from Django Girls where we've had almost five workshops and I wanted to recognize people who have coached it, one, two, three, four workshops and so we didn't have the budget for hoodies or anything cool like that. So we bought floppy disks and painted them like gold, silver, bronze, so the different like number of times people volunteered and like put rhinestones on them. That's our thing, rhinestones. Yeah, I would actually like that better than a hoodie. You can't wear it. But people seem to really love them and then also like the less of a physical manifestation of your appreciation, just like reaching out to people like, hey, I really, really appreciated you helping us so many times or often like I noticed you didn't apply to be a coach this time. Is it, do you have something going on in your life or are you like worried you didn't do great because you did great? So just like I think little things like that are really great. So Chris is giving a talk on organizing. He's going to do a panel talk on organizing and so I would highly recommend going to that. That doesn't mean you guys can't answer the question but they'll be, they're going to go way more into depth on sponsors and organizing and all that sort of stuff. So I think that would be a great resource for you. Oh, sorry, the question was how do you manage, if you do get people to come to a meet up, how do you manage the meet up? I think the interesting part of that question for me is what's the difference between managing folks on a team for work, for pay, and the difference between managing it between that and managing a team of volunteers. 
And I have found that there is a significant amount of overlap. The main difference that I see is that volunteer work tends to be much less real time in terms of the interactions and the reaction to it. Whereas you can have a meeting one day and things shift the following day at work. A lot of times it's much longer feedback loop. You're like, oh, we'll have this email chain and then we'll figure something out and then a week later we'll do it. And for me, that sometimes makes it hard for me to pick up and remember kind of where we were on this journey of trying to make this thing happen. And the other thing is that people do kind of piece in and out more than they would do at work. And maybe this is the advantage of living in the San Francisco bubble, but that happens on my teams in San Francisco too. And I just expect it, right? People are going to work, the average tenure of someone at a tech job in San Francisco right now is under 18 months. And so people are just like, oh, my buddy is doing a startup. I'm just going to go work on that for a couple of months. And they know that they can just get another job or they can come back and work with me again if they want to. After their startup crashes and burns and or is sold to Twitter, it's always the hope, right? And so I think that, yeah, there's the, I feel like they're almost converging, right? There's sort of this, we've been working in the tech community on building teams that are resilient to turnover, which has a lot in common with building a team and sort of hurting the volunteers along. I have never been a real software manager, but something that seems to be important to those folks that I think is important in volunteering is like the bus factor, if you're familiar with that, where, you know, if someone gets burnt out or decides not to do this anymore, what's going to happen? And I think in both those situations, just documenting religiously is very, very important. I was very fortunate that the former organized or Dango Girls Portland left so many emails like in copy and paste and just documentation on the way that she led the organization. So it's not confusing or mysterious ever. Yeah. So the question is, are there examples of companies who have been, who have done a good job of integrating some of this volunteer work into the company? Is that right? Yeah. Yeah. Into the products. Okay. So I can kind of speak to that a little bit. Like I know GitHub actually every, is it February? They do their volunteer month? What's, they get 20% of their time to just work on open source projects. And, you know, Ruby for good has benefited from that because we always point them at Ruby for good projects needing help. And in February, we see a lot of contributions from GitHub people, which is awesome. I know custom ink does skills based volunteering, which is really cool. And one other awesome company that's doing it that I want to give a shout out to, but I can't remember. My company gave us like one, one day a month, which isn't a whole lot, but at least it helped us get more people like in the company involved and stuff like that as well. Yeah. I know Salesforce, and I'm not sure if this carries over to all of like Roku and all that stuff, but I have a couple of friends that work in Salesforce proper and Salesforce actually gives them a certain number of days per year sort of like vacation time, but like it's nonprofit work time. And so they can use that to volunteer at their kid's school or they can use it to work on a project. 
And so what often happens there is that people will schedule to use that time at the same time so that they can work in a group on a project that they're interested in. And so sometimes even just providing that time and then just kind of seeing what people do with it can be interesting. So I think that's all the time we have, but thank you so much for coming and hopefully you've had some ideas spark and you can go out and make the world a better place. Thank you. Thank you. And a round of applause please for Polly for putting all this together. She did a ton of work.
We have amazing skills and abilities, but for a lot of us the missing piece is finding a way to give back. We have an amazing panel of people who have used their skills and talents from both previous careers and current to make the world a better place. Learn how they got involved, and in turn what you can do to get involved in areas you’re passionate about to fill this missing piece that will keep you happy throughout your career.
10.5446/31270 (DOI)
All right, good morning. I'm going to get started. People can come in from the hallway if they're still out. My name is Lance. If you're here, you probably want to learn about JSON Web Tokens, so you're in the right place. We're going to talk a little about what they are, why they exist, and maybe who cares. So they're tokens. They use JSON and they're built for the web. Never mind, scratch that. Now that you're all in the room, I'd like to offer you a special invite: in on the ground floor. So I've got this idea. It's this application. You're going to log in and you're going to see a face. Probably your face. It's going to fit somewhere in this like faces as a service vertical. I think it's going to be pretty big. I mean the face is going to be pretty big, clearly. I'm calling it FacePage. So welcome. You're all hired. You didn't know this was an interview. I mean you passed the code challenge when you registered for the conference, right? So we're good. And you all have faces, so the qualifications are there. So let's go. Oh, I suppose you're wondering who you'll be working with. I'm Lance Ivy. I'm based out of Portland, Oregon. I seem a little out of sorts because I'm trying to get used to the bright solar radiation. I started professionally tinkering on the web as an unpaid developer in 2003, joined some friends and colleagues from college on this startup idea in a small town of like 30,000 people. And started in front end, and the era of Netscape had just wrapped up. So I like opened it once, took a picture of a horribly broken rendering and never went back. And made some nice tables after that. Some things happened in 2006, 2008. I became a paid Rails developer. I found my way to working on Kickstarter. Made my own share of mistakes, learned from most of them, picked up a bunch of stories. Currently I'm working with Empatico and also building an open source project called Keratin AuthN, which is an authentication server. You can imagine Devise, if it were rebuilt today as a standalone service. And this is the primary learning opportunity for the talk. So let's get back to FacePage. Obviously we're going to run FacePage as a standard Rails application. So here we go. We have a few meetings. We pivot. We sprint. We change task management systems. We iterate on faces and what they mean to our users and how they're going to change the world and then we deploy it. It's out in the wild. Users love it. There's just one problem. People like seeing their faces on the go and they're looking for us in the native app stores. So easy, right? We make an API. We build out our iOS, Android apps. And maybe the app and the API are intertwined. JSON endpoints in the same controller. Or maybe they're namespaced, but they're still in the same app. It's a majestic monolith. Or maybe it's a standalone deployment, but that doesn't matter. What matters is it's up in the cloud, right? It's there. It's running. It works. But something along the way has happened. We couldn't use cookies for our API. So we implemented a quick token authentication system. So now we've got cookies over here and tokens over there and this uneasy feeling that there's something similar, but we can't quite put our finger on it. And we just keep going. But it turns out that we've actually implemented two authentication systems and they're separate. So this is technical debt, but we figure we can manage. It's working. We keep moving. And just suppress those uneasy feelings into a tight little ball.
And someone on high maybe makes the decision to move to services, which I don't know. We try it out and we build this API gateway. But is it written in Rails? Does it speak cookies or tokens or both? Do we have to retrofit our authentication schemes into it? Are we on our own? Do we have help? Do we move to something else? I mean, what happens now? Well, I want to take a step back and consider the problem that these cookies and tokens are actually trying to solve. How do we even get here? It's time to ask what even is logging in? What happens? What are the cookies and tokens doing? Let's dig in. We can say it starts here with a username and password. These days it could also be Facebook or GitHub or Google or some other auth provider, but probably still at least passwords. Turns out that's not really the part we care about. What we care about is how we keep track. So here's the classic way of keeping track. Browser logs in. We send back a cookie. And browsers know about cookies. They know to include them on every request back to our domain. So every request for a face on a page is logged in. And we can show your face back. I mean, the requests that want JavaScript and CSS and images are also carrying these cookies, but that's a different talk, right? The simplest explanation for cookies is a header protocol. When the server responds with a set cookie header, the browser knows to include it back in the cookie header. Every future request. But I think it's time to consider the elephant in the room here. What kinds of cookies are these, really? I mean, when we diagram HTTP cookies, they usually look like this. Chocolate chip or raisin, maybe. Oh, what's for lunch? But this is how we should think of them as fortune cookies with a message inside. So Rails uses these to store bits of data in the browser, and we can crack one open and see what's inside. Here's one that you might actually see in an actual Rails application. It's an encoded message with a signature, and you can split it apart on the double hyphens right there. Here's what's inside. There's a user ID. There's a CSRF token, because with cookie authentication, you need that kind of thing. There's a signature that we can use to just make sure that it's legit. So there we go. Cookies are headers on HTTP responses and requests that do things like transport a user ID back and forth. And that's the login story we care about right now. So what's happening with the tokens on the API side of things? Well, one common convention might look like this. The server responds to a login request with some random string in the JSON body, let's say. And again, the device sends it back on future requests, but this time in the authorization header. Now these tokens are opaque. They're random strings. They have no meaning until we use them to find something more interesting like the user ID. And this is good, but it's not great. On the upside, we could delete these tokens to revoke access at any point. We have some control over that. But on the downside, every API query now involves a database query. This is, by the way, how Rails sessions used to work before switching to cookies. And it was a performance problem. The browser submitted a session ID, and you use that to actually find their session from the database. All right, so let's put together what we've learned. The Rails session cookie uses a cookie header, but our API tokens use the authorization header. The Rails session cookie is structured data, but the API is just an opaque random string. 
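For reference, the opaque-token convention usually boils down to something like this in a Rails API controller. The model and header handling are hypothetical stand-ins; the point is the database lookup on every single request.

```ruby
module Api
  class BaseController < ActionController::API
    before_action :authenticate!

    private

    def authenticate!
      # "Authorization: Bearer abc123..." -> "abc123..."
      raw = request.headers["Authorization"].to_s.sub("Bearer ", "")
      # One database round trip per API request, just to learn who's calling.
      @current_user = ApiToken.find_by(token: raw)&.user # hypothetical model
      head :unauthorized unless @current_user
    end
  end
end
```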
Now, the Rails session cookie can be verified with cryptography, but the API token is security through queries. So can you imagine the best of both worlds? JSON Web Tokens. It's signed, structured data. This is rather similar to Rails signed cookies. We've added a third segment, the header, and this just describes the format of the token. We can decode the message, and what you're looking at here are called the token claims. They're called claims to give you a bit of skepticism, because you actually have to verify that these claims are true before you can trust them. Here's a list of common claims. You can see that they're heavily abbreviated, and the reason is that JSON Web Tokens are designed to fit in headers and other character-limited small spaces, so they have to save on bytes for your more interesting content. All the claims on the left are what I consider metadata. They're the claims necessary to verify that a token may be used appropriately in different situations. I'll just go through these. The issuer describes the party that generated and signed the token. The audience describes the party that the message is intended for — these might be the same thing, they might not. Issued-at is when the token was created, and expiration is when the token should be ignored; they can have a lifetime. The claims on the right are what I consider the payload, and this is the information that you probably want to extract for your business logic. Now, you can put anything you want in here, as long as the issuer and the audience agree on what it means. But the common one that a lot of issuers and audiences agree on is the subject, and this is meant to identify the party that the message is about, or the person who owns the token — the person who has it. And this is where we put the user ID. So the JSON Web Token standard is a pretty generic thing. It's just a spec for sending secure messages. But one of its primary uses is identity — it actually evolved in the context of OAuth and OpenID Connect, and you can see a lot of that in the claims that are built into it. So we can actually imagine it kind of like Rails cookies and API tokens. Think of it as an ID card: an ID card that makes a number of claims and contains some security features. This ID card has an issuer — it's from the internet. It has a subject, a name. It has expiration and issued dates. It has this pretty sweet little official stamp. Security. But it's actually up to you to check the card and detect forgeries, to make sure that you can actually accept this identification. So here's how you do it. One: is it from someone that we recognize as an authority? Check the issuer. Two: was it intended for me? Check the audience. Has it expired? Check the expiration. Is it a forgery? Check the signature — can you recreate the signature based on those values? And last but not least, was it generated before or after that time we had to change our secret because we published it on GitHub? You can check issued-at for that. If you can answer those five questions, you're in pretty good shape. And the good news is that you can get a library to just do this for you. Okay. So we've learned that JSON Web Tokens are secure messages, like a Rails signed session cookie. We've learned that they contain claims that we need to verify, and we've learned about the most important claims and what they represent. So let's talk about what we can do with them. We've already mentioned identity tokens, so let's continue from there.
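For example, with the ruby jwt gem those checks look roughly like this sketch — the issuer and audience names, the secret, and the one-hour lifetime are made-up values for illustration:

  require "jwt"

  secret = "change-me"
  now    = Time.now.to_i

  token = JWT.encode(
    { iss: "https://auth.facepage.example", aud: "facepage", sub: 42,
      iat: now, exp: now + 3600 },
    secret, "HS256"
  )

  # Decoding verifies the signature, expiration, issuer, and audience in one call.
  claims, _header = JWT.decode(
    token, secret, true,
    algorithm: "HS256",
    iss: "https://auth.facepage.example", verify_iss: true,
    aud: "facepage", verify_aud: true
  )

  claims["sub"] # => 42, the user ID we care about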
One of the problems we had in our FacePage app was the duplicate authentication system. So let's see how JSON Web Tokens can help. First let's add JSON Web Tokens to our comparison table. While the Rails cookies are tied to the Cookie header and the API tokens use the Authorization header, our JSON Web Tokens are ready for either — they don't care. The Rails cookies were structured data signed with cryptography, and that's actually pretty good, so our JSON Web Tokens share those characteristics. So if we use an identity JSON Web Token for login, it might look like this: submit the login, Rails responds with a JSON Web Token in a cookie, and the JWT contains the user ID as the subject. Now in future requests, the browser just sends it back in the cookie. And this looks pretty familiar — and that's a good thing. We actually haven't changed our headers or the relationship between the browser and the server. We're still using cookies. We've just changed the format of the message inside the cookie. And on the API, we can drop it in here as well. There's no change to the client; they're still just sending a string back and forth. But now the string is a JSON Web Token. It has structured data. It has meaning. It's not random. And the server can do something with it. So this is the JSON Web Token solution: one token, two headers, one authentication system. It doesn't matter whether the server finds the token inside a cookie or inside an Authorization header — it can handle that value exactly the same and set the current user for the duration of the request. Problem number two in our FacePage app: previously the API had to execute a query on every request just to discover who was making it. Now our API can verify the JWT with the claims and the cryptography. So this replaces a network-bound database bottleneck with a straightforward CPU-bound calculation. This will perform faster. This will scale better. This will introduce less variation into response times and generally just have fewer failure modes. Problem number three: our cookies were implemented for Rails, by Rails. Now don't get me wrong, the default Rails session store is wonderful. It works. It's hidden. It is secure. It is very well designed, and it does the job it needs to do. But it is tightly coupled to Rails. It is tightly coupled to cookies. And it's kind of tightly coupled to majestic monoliths. So if any of those things don't work for you, you have to ask yourself what's next. JSON Web Token libraries are implemented in at least 20 languages. They're decoupled from cookies. And they contain claims that you can use to build any kind of distributed architecture. So they're more flexible — it's a more general-purpose solution. Problem number four: in a distributed architecture, you might find yourself sharing secrets so that when you sign a message with one system, you can verify it on another. And this involves trusted back channels like copy and paste, or, you know, configuration management systems. And now that the secret exists in more places, it's a bigger attack surface. If any one place is compromised, that secret can be used to attack all the other places as well. So what do JSON Web Tokens offer? They support asymmetric key algorithms, like RSA. Now, the signature process used by Rails to sign a cookie is called HMAC: you give it a salt, it hashes the cookie; you take that salt, you rehash it later, and it verifies. The required setup for RSA is a little bit different.
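Going back to the one-token, two-headers idea for a moment: in code, accepting the same token from either place can be a small helper along these lines. This is only a sketch — the cookie name, the secret, and the Bearer scheme are assumptions, and a real app would wrap this in whatever before_action or middleware it already uses:

  require "jwt"

  SESSION_COOKIE = "facepage_session"
  HMAC_SECRET    = ENV.fetch("JWT_SECRET", "dev-only-secret")

  # Works the same whether the token arrived as a cookie (browser)
  # or as "Authorization: Bearer <token>" (API client).
  def current_user_id(request)
    token = bearer_token(request) || request.cookies[SESSION_COOKIE]
    return nil unless token

    claims, _header = JWT.decode(token, HMAC_SECRET, true, algorithm: "HS256")
    claims["sub"]
  rescue JWT::DecodeError
    nil # expired, tampered with, or malformed => treat as logged out
  end

  def bearer_token(request)
    request.get_header("HTTP_AUTHORIZATION")&.slice(/\ABearer (.+)\z/, 1)
  end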
The server signing the token needs a special RSA key, not just a random salt. It uses the private key to sign the token, but then it can actually publish the public key on a free and open HTTP endpoint, using a spec like JSON Web Keys. When some audience — some other server — receives the message, what it can do is go fetch the public key, use that to verify, and then cache it forever. So one HTTP call automatically shares the secret. Except it's not a secret — it's the public key. So this investment means that you don't need to share secrets. It's a little bit more setup upfront, but the operational cost is lower. It means there's no copy and paste between the systems; you just fetch the key over HTTP. There's no super-coordinated lockstep deploy process where you need to change it in both places at the same time without dropping messages in the middle. This also reduces the attack surface: a lost secret can only attack the service that leaked it. This can even prepare you for some really nifty automatic key rotation stuff. I'm not going to get into that right now, but if you're curious how that works, I'm happy to talk about it — come find me around the conference. Problem number five: you thought you knew how password resets worked. You generate and send a token — it's a random string. You verify it in your controller, and then you regenerate a new token to expire the old ones that you sent out and make sure that people can't hack the system. When you think about it, this is actually a third authentication system. It works a lot like the opaque API tokens, and again, it's implemented as a one-off. So here's how I build a password reset JSON Web Token. First, start with a standard identity token. This contains the metadata claims and the subject. Then I add a scope claim. Now, I looked around — this one doesn't seem to be standard, and I couldn't find a better one, so I'm just calling it scope. The idea here is that I'll configure my passwords controller to accept tokens with this scope, and I'll configure the rest of my application to reject tokens with this scope, or any other scope that doesn't seem appropriate for them. Then I add an optimistic lock. An optimistic lock is where you keep some kind of version, and every time a field updates, you increment that version. Anyone who wants to make a change has to tell you what the current version is. This way you make sure that they don't overwrite something that changed without their knowledge. And you can achieve this just using a timestamp, which is what I've done here: I'm maintaining a user's password_changed_at field — which is also good for other features — and then verifying it against the token. If it matches, it's a go. And this effectively also expires the old reset tokens as soon as the password changes. So here we go. Once again, we can upgrade opaque tokens into structured, signed data. This is one less field in the users table. This is one less index for your queries. But even better is that we've absorbed a third authentication system, by teaching our JSON Web Token backend about scopes and by teaching our passwords controller about optimistic locks. Problem number six: suppose that you're sending emails with survey links or some other kind of strong call to action, where you need the user to click and you need to know who is making that click.
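A sketch of what that reset token could look like in code, using the same jwt gem as before — the scope and lock claim names follow the talk's convention rather than the spec, and the user object is assumed to expose id and password_changed_at:

  require "jwt"

  def password_reset_token(user, secret)
    now = Time.now.to_i
    JWT.encode({
      iss:   "facepage",
      aud:   "facepage",
      sub:   user.id,
      iat:   now,
      exp:   now + 30 * 60,                   # keep reset links short-lived
      scope: "reset",                         # only the passwords controller accepts this
      lock:  user.password_changed_at.to_i    # optimistic lock on the password
    }, secret, "HS256")
  end

  def valid_reset_token?(token, user, secret)
    claims, _ = JWT.decode(token, secret, true, algorithm: "HS256")
    claims["sub"] == user.id &&
      claims["scope"] == "reset" &&
      claims["lock"] == user.password_changed_at.to_i # changing the password voids old tokens
  rescue JWT::DecodeError
    false
  end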
And if you don't help them at all, they're going to hit a login wall, and maybe they're on their phone so they don't want to type it in, and they just come back later — or they don't — and your conversion drops. So maybe you implement random strings, opaque tokens, and you connect them back to the user, just like the API system. I mean, this is sounding pretty familiar, right? So we can just generalize the password reset solution. All we need is a scope claim. If it sounds like I'm suggesting that we send user sessions through email — yeah, actually, that's exactly what I'm suggesting. I mean, that's basically what those one-off randomly generated strings are doing: they're granting a login session by email. But this one is built into our authentication system. It's not a one-off implementation that you're going to forget about. All right. And number seven: your application is already doing lots of stuff, and you're including a lot of this common, standard authentication feature stuff as if it's somehow unique. They run from the same database. They can be affected by every deploy, every upgrade. And all this complexity is in one spot, which makes it a lot harder to audit your attack surface and understand where you need to remain secure. And let's not forget the always-present user God model. JSON Web Tokens can help. They were born ready for this — this is why the issuer and audience claims exist, so they can be different things. The issuer, let's imagine, takes responsibility for the account. And the account might be the username and the password, and the last time the password was changed, and how many times they failed their login, or any of that stuff. And this leaves your application responsible for a simpler users model. It just needs to relate the user with an account. And this is actually what I learned firsthand while working on Keratin AuthN, which is kind of what you get if Devise were rebuilt as a standalone authentication service. It removes the complexity from your app, relies heavily on JSON Web Tokens, uses every trick I've mentioned here, and then some. The core tech is pretty stable, I think, and I've got some ideas for some advanced features. So if this is interesting to anybody, I'd love to chat about it. All right. In conclusion, these are the things I hope you take away from the presentation. One: you can use JSON Web Tokens right now. It doesn't matter if you have a monolith or services — you probably recognized one or more of these problems. Two: JSON Web Tokens have a low learning curve and a high skill ceiling. There's a lot more that you can do with them as you gain confidence, but all you need to do is start somewhere. So you're probably going to go home from this conference with a head full of ideas for things that you wish you could use at your day job and all this cool stuff that you want to try. And maybe JSON Web Tokens is one of those. Pick something. Learn it by doing it. Try it out and make your knowledge real. All right. And we do have time for questions. So the question is: if the user has been disabled by an admin since they logged in, how can you immediately log them out and make sure that they don't come back? You can create, for example, a blacklist and keep a temporary cache of invalidated tokens. And if you choose to, for critical actions, you can implement a revalidation. But yeah, the trade-off is real. The trade-off is real: if you keep authentication in a token that lives for 30 minutes or something, that's the other side of it.
I recommend that these tokens have a short life and that you regenerate them frequently. So the time window for that kind of problem is small. Yeah. So, again, the comment is that the checking this blacklist takes time. So I recommend deciding which endpoints are critical and revalidating for just those endpoints. There are probably a lot of things where it's okay if they can continue to read data for a few more minutes or maybe what you want to do is protect that data and you can choose to revalidate where necessary. Well you can try JSON Web Tokens in an existing app by finding any kind of opaque token and seeing how you can replace it. So that's one way to try it out. If you want to try an authentication server, then it helps to have an app with some kind of login scheme where you just swap out where the form submits. So the form might submit to your back end, now the form submits to a different back end. Those are two ideas. Is there a drop-in replacement? There's a library called knock and knock is like devised with JWT but it's built into the monolith. So an authentication server is going to run as a separate deployment but knock will use JWT as part of the same monolith. Refreshing the tokens with more tokens. So the way that I've seen is where you maintain what's called an access token and a refresh token. And the access token is what you're sending to the API. The refresh token is more secure because it's not used for very much. The one thing it's used for is getting new access tokens. So let's say that your access token is good for an hour. And let's say that you decide every 30 minutes, way before that's going to expire, you use the refresh token to get a new one. What you can do is build revocation into that refresh system. So the refresh maybe actually does a Redis query or a MySQL query or something to make sure that it's still okay to generate new access tokens. The dual token system means that one can have different security properties and the other is used for your really chatty protocols because it's lightweight and doesn't require queries. And they're kept in different places with different security properties. And so the commenter, the idea was that if you attempt to use a token and the error is that it's expired, you might decide it's time to get a new one. And that's definitely like the fail-safe. I think that aggressively refreshing them before that happens is a cleaner experience. And if you refresh well enough, you may or may not need that kind of last moment it's expired. Let's get a new one thing. Yeah, that's the issued at check. So in that kind of dire situation, which we all hope isn't happening, there's really no going back. You generate a new key and start using it. And then what you do is you roll out this idea of an epoch and you say, any token generated before this point in time was using an old key that we don't trust anymore. So if the issued at is before last Friday when this happened, then just throw it away. And that means people are logged out. Oh, so the comment was that you might have multiple keys and so you need an epoch per key. JSON Web Tokens has a claim called key ID. So you can embed within each JWT a signature of the key that was being used. And that will let you, if you have per key revocation, tell exactly which one was being used and whether it's still trustworthy. For simple login for sessions, it's just what you saw. It's the user ID. If you choose to put them in a cookie, then you need to also figure out CSRF. 
And you can put that CSRF token in the JWT because it's open. You can add more stuff into it and solve those same problems. I'm actually not a big fan of keeping a lot of data in sessions anyway. So I want to say on the slide it was like two or three lines. It was maybe 50% longer than a Rails equivalent cookie. Some people advocate for keeping more user information in there. Like are they an admin? What permissions do they have? What's their name and email? So you can save more queries that way. The downside is, as mentioned earlier, you have to decide how much you care about that information getting stale. All right. I think we're good. Thank you.
Ever wonder why applications use sessions and APIs use tokens? Must there really be a difference? JSON Web Tokens are an emerging standard for portable secure messages. We'll talk briefly about how they're built and how they earn your trust, then dig into some practical examples you can take back and apply to your own majestic monolith or serious services.
10.5446/31274 (DOI)
So, well, I'm going to talk about processing streaming data with Kafka, but I will tell a little bit about myself first. My name is Thijs, I work at AppSignal, and my name is pronounced like this — this is the tutorial for how to do that. I'm from Amsterdam in the Netherlands, and actually today is the biggest holiday of the year, so Amsterdam currently looks like this, all over the city, and I'm skipping this party to be here with you today. So you're welcome. We make a monitoring product for Ruby and Elixir apps. And as always with these kinds of products, you start with the question: how hard can this be? You assume that it's not really going to be hard, and then, of course, it always actually is. It turns out that if you build a product like that, you need to process a lot of streaming data. How our product basically works is that we have an agent that people install on their servers. It's a Ruby gem, it's running on all these customers' machines, and they're posting data to us regularly with errors that happened and stuff that was slow. We then process that and merge it all together to build a UI out of it and do alerting. And that's streaming data. Streaming data is usually defined like this: it's generated continuously on a regular interval, and it comes from multiple data sources which are all posting it simultaneously. There are some classical problems associated with this. One obvious one is database locking: if you do a lot of small updates, the database is going to be locked a big part of the time, and everything will become really slow. You also kind of have to load-balance this stuff around and, ideally, make sure that related data ends up at the same worker servers, so you can do some smarter stuff with it. So let's look at a really simple streaming data challenge that we will use as a use case for the rest of the talk. We've got a pretty popular website. It has visitors from all over the world. It also has servers all over the world, which are handling traffic for these visitors. And we want to do some processing on the logs, basically. We have a big stream of log lines coming in, which have the visitor's IP address and the URL they visited — you know, the standard stuff. And we actually want to turn it into this graph. This is a graph of the total amount of visits we had from four different countries. Is this actually hard to do? The answer is that on a small scale, it's not. It's actually quite easy. The simple approach is to just update a database for every single log line. That looks a little bit like this: you just do an update query. There's a countries table, it has a country code and a count field, and you just update the row every single time you get a visitor from that country. But the issue is that a database has to make sure that all data integrity is kept. It doesn't actually understand that these streams are continuous and that a log line never has to go back in time and update something. The database has to take into account that data that already exists could be updated again, so it has to take a lock around a row. And if you do this at a really high scale, the whole database will just lock down and there will be no time left to actually do updates. We ran into this a number of times during our existence at AppSignal.
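As a rough sketch (table and column names here just follow the example — they're not from the talk's slides), that naive approach is essentially one ActiveRecord increment per incoming log line:

  class Country < ActiveRecord::Base
    # Called once for every single log line that arrives.
    def self.count_visit!(country_code)
      country = find_or_create_by!(country_code: country_code)
      # Runs "UPDATE countries SET visit_count = visit_count + 1 ..." and takes
      # a row lock every time, which is what melts down under high volume.
      country.increment!(:visit_count)
    end
  end

  Country.count_visit!("NL")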
So one thing you can do next is sharding the data. You basically put all the Dutch visitors in database one, you put the US ones in number two, and you can scale it out by grouping the data on some kind of axis and putting it in different database clusters. But this has some downsides. What happens is that if you want to query this data and you want to get an overview of everything that's in there, you might end up querying different database clusters and manually merging all this stuff, which can get really slow and complicated. And the classical one as well is changing the sharding. If we now decide that we actually wanted counts per browser all along, then we have to completely change this around and write a big migration script, and it's going to be really complicated. And at AppSignal we actually do a lot more than just increment a counter — we have to do a lot of processing on this data as it comes in. So we not only have a bottleneck in the database itself, we also have a bottleneck in the processing that comes before the database. So we started sort of doing this at some point: we would have a worker server, and a customer's traffic would come to one of these servers and be flushed to a database cluster. There's a really big issue with this, which is of course that a worker server could die, and then this customer is going to be really unhappy because they just had a gap in their reporting for maybe 15 minutes. So what you can then do is put a load balancer in front of it, and all the traffic will be randomly distributed to all these worker nodes. And this works as well, but then the worker doesn't get all of a customer's data, so it cannot really do any smart optimizations — the data is fragmented and we cannot really do the stuff we want to do. But actually, our life would be really awesome if this were true: we get all the data from one customer on the same worker, because then you can start doing pretty smart stuff. A really simple smart thing you can do is this. The standard way is to just increment the counter every single time a log line arrives in our system. But if we had all the data locally, we could cache that for a little bit, and we'd have a single update query that we could run maybe every 10 seconds, and that would totally decrease the load on the database. So this is what we want to do: we want to be able to batch the streams, do some caching, do some local calculations, and then just write it out to a standard database. And we actually need this to do the statistical tricks we want to do. If you want to do proper percentiles, histograms, you're kind of forced to have all your data on one computer at some point, because otherwise you cannot do the calculation. So we're back here: we want to write all of a customer's streams to a single worker, which gets written to the database. And if we can do this batching, then we don't really need the sharding anymore per se — we could maybe get away with having a single database. But of course we're back where we started at the beginning of the talk, because we still have a single point of failure: this thing can fail and the customer will be really unhappy. So we need something special, and that will be Kafka for us. We looked at a lot of different systems, and Kafka has some unique properties that allow you to do cool stuff like this.
So Kafka actually makes it possible to load balance this stuff, do the routing, and do the failover properly. And I'm saying it makes it possible — not that it makes it easy. It's still pretty hard, but at least it's possible, which is better than impossible in my book. So I will now try to explain Kafka to you. And there's actually a sort of complicated thing about it, which is that there are four different concepts that you all kind of need to get — and also in relation to each other — to be able to understand the whole thing. So it's a bit of a hard thing to wrap your head around if you're not used to it. Bear with me and I'll try to make it clear to you. These are the four main concepts — the beamer is really bad, they will show up. The first thing is a topic. A topic is kind of like a database table; it's just a grouping of stuff. A topic will contain a stream of data, which can be anything — it could be a log line or some kind of JSON object or whatever. And all these messages in the topic are in different partitions. So a topic is partitioned into, say, 16 pieces, and a message will always end up in one of these partitions. The interesting thing is that you can choose how to partition the data: a message that has the same key will always end up in the same partition. So we could therefore group all the US visitors together if we want to. We'll look at how this works in a little bit. Broker is the Kafka word for a server, basically. I'm not sure why they picked a different word, but you know, they did. A broker stores all this data and makes sure that it can be delivered to the final concept, which is a consumer. And consumer is the Kafka word for basically just a client or a database driver — something you can use from your code to read the messages that are on a topic. Kafka kind of likes to invent its own words for some reason, so a lot of these things already have a name, but they also have a Kafka name, which can be a little confusing. So these are the four concepts; I will now go into them in more detail. This is what a topic looks like. This specific topic has three partitions, and it has a stream of messages coming in, which all have an offset. The offset is that number you see there, which starts at zero and just automatically increments up. New data is coming in at the right side of this, and old data is falling off at the left side. And you can configure how long you want this data to stay around. Usually, maybe this would stay around for a few days, and then after, say, three days, the data at the left side would just be cleaned up — it would just fall out of the retention. So new messages are coming in from the right side. And if we group these messages by country, as we do here, then they will actually always end up on the same partition. That's a really important thing, which will become apparent when we discuss the consumer. So next up is the broker. A broker is the Kafka server, and the partitions and the messages live on these servers. A broker is always the primary for some partitions and secondary for others. And that looks like this: say we have three brokers. Broker one will get partitions one to three as primary, broker two will get four to six, and broker three will get seven to nine. And actually, all these brokers will also be secondary for another broker's primary partitions.
So that means that if one of the brokers dies, you can actually redistribute all the data, and it will all still be there. In this case, broker three died, and broker one and broker two both got some extra partitions. If you still have enough capacity in your system after this failure, the whole thing will still be working — nothing will actually be broken. It might be the case that you had a little less spare capacity than needed, and then the whole thing might slow down, but if you plan it properly, this is still fully working. And this also works the other way around: if you go from three to six servers, because you got a big new customer and you need more capacity, it will also just automatically spread out these partitions over those brokers without you really having to do any work for it. So the fourth and final concept I will now tell you about in more detail is the consumer. The consumer is the Kafka client — it's basically comparable to a database driver or a Redis client. It lets you listen to a topic. And one of the great things about Kafka is that you can have multiple consumers which will each keep track of their own offset. In this case, we have two consumers: one is responsible for sending out Slack notifications, and the other one is responsible for sending out notifications via email. They both start at the beginning, at offset zero. But then it turns out that Slack is down at the moment, so we cannot reach their API. In this case, the Slack consumer is still stalled at offset zero — it's just waiting there because it cannot continue. But the email notifications are actually going out just fine; they don't have any issue at all. And then if Slack comes back up, the Slack consumer will actually make some progress, and finally both will be at the end of the queue, waiting for more messages to come in. So this is pretty neat if you integrate a lot of external systems, because you can make sure that an outage at one vendor is not going to impact all the other integrations you have. Now, this example only has a single partition, and obviously you're probably going to have more partitions. So how about that? Kafka has a thing for that as well, which is a consumer group. Consumers can be in groups: you give them a name, and Kafka will understand that different consumers running on different servers with the same name are related to each other, and will assign partitions to them. This actually looks a lot like how the broker works. So if you have a topic with nine partitions and three consumers, all three consumers will get one third of the partitions. And if one dies, the same thing happens: the remaining consumers get assigned the partitions from the broken consumer, and everything will just keep working. So a consumer always gets a full partition. And since you can control which partitions your data goes to, this allows you to do the routing thing I talked about earlier, where you make sure that all of a customer's data ends up on the same server. So then we end up at this situation. We actually have a very similar situation to the one we started out with, where we have a few worker servers and one of them dies. But in this case, the customer is not going to be unhappy, because the Kafka cluster will detect that the consumer is down and will reassign its partitions to a different worker. This will happen within a matter of, say, a minute.
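In Ruby this maps onto the client pretty directly. Here is a minimal sketch with the ruby-kafka gem — the broker address, topic, and group names are placeholders, not details from the talk:

  require "kafka"

  kafka = Kafka.new(["localhost:9092"])

  # Messages that share a partition_key always land in the same partition,
  # so every page view from "US" ends up with the same consumer later on.
  kafka.deliver_message('81.2.69.142 GET /faces/7', topic: "page_views", partition_key: "US")

  # Consumers with the same group_id split the partitions between them and
  # keep track of their own offsets; a different group gets its own offsets.
  consumer = kafka.consumer(group_id: "email-notifications")
  consumer.subscribe("page_views")

  consumer.each_message do |message|
    puts [message.partition, message.offset, message.value].inspect
  end

Running a second copy of the same consumer code on another machine, with the same group_id, is all it takes to spread the partitions over two workers.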
Nobody really notices that something failed, because the work is just rerouted to a consumer that is actually still working. Yeah. So now we're getting to the interesting part: seeing how you can actually use this from Ruby. Is this clear to everybody so far? Anything? Awesome. What's the relationship between the worker and the data on the broker? There is basically no direct relationship. The brokers and the consumers both use this concept of a partition in a very similar way, but to the consumer it's not really relevant where the data is stored — it just knows that the Kafka broker will tell it where to fetch the data from. So from the consumer side, that's totally transparent. Yeah. So we're actually going to build the analytics system I showed you earlier. We've got the access logs right here — they're still the same logs; they have an IP address and we can see the URL. And we want to end up with this table. It's a really simple table: we just keep track of how many visitors we had from the US. It doesn't even take time series into account; it's just a total count for all visitors, the simplest thing you can do. And we use two Kafka topics and three rake tasks to make this happen. Our end goal is to just update data in a really simple ActiveRecord model. The model looks like this: it has a country code and a visit count field, and it takes in a hash of country counts. We loop through the hash and try to fetch the country by code — if it doesn't exist, it gets created — and then we increment the visit count by the total count that was in the hash. So this is really standard usage of ActiveRecord, nothing fancy going on here. And this is the architecture of the whole thing: we have three rake tasks on the left side, two Kafka topics on the right side, and then there's the model at the bottom. First we will import the access logs; these will be written out to a topic. Then there's some preprocessing going on. And finally, we're going to aggregate them and write it all out to the database. You might wonder why we need the preprocessing step, because you could basically also just write it out to the database straight away. The reason for this is that often the data isn't spread out evenly. If you look at the bars in this example, most of our visitors are actually from the United States. So if we would immediately route this traffic to partitions based on country, one worker server would have six times as much work to do as another one. And if you need to do some CPU-intensive stuff, that one worker server might have a huge load while the other ones are doing almost nothing, and that ends up being really costly. Sometimes you cannot even fix it; you really have to do something else. And this is why we're doing part of the work before we actually group the data by country. I'll get to that in a bit. So step one is importing these logs. And this is cheating a little bit, because in reality this part is where the actual streaming would happen — this isn't really streaming data, I just downloaded a bunch of log files from somewhere and put them on my laptop. What this code does is loop through all these log files and just write them out as messages to Kafka, one by one. But in reality, this stuff would be streaming in from all kinds of web servers. On line six, you'll see the Kafka deliver_message call.
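The slide code isn't reproduced here, but a rough sketch of those two pieces — the ActiveRecord model and the import rake task — could look like this (file paths and exact names are my own approximations):

  # app/models/country.rb
  class Country < ActiveRecord::Base
    # Takes a hash like { "US" => 120, "NL" => 45 } and applies it in one pass.
    def self.increment_counts(country_counts)
      country_counts.each do |code, count|
        country = find_or_create_by!(country_code: code)
        country.increment!(:visit_count, count)
      end
    end
  end

  # lib/tasks/import.rake — fakes the incoming stream by replaying log files.
  require "kafka"

  task import: :environment do
    kafka = Kafka.new(["localhost:9092"])

    Dir.glob("log/access-*.log").each do |path|
      File.foreach(path) do |line|
        kafka.deliver_message(line.chomp, topic: "raw_page_views")
      end
    end
  end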
That call tells Kafka to write the log line out to the raw_page_views topic. At some point it's done, and then it has actually imported all the data. Step two is the preprocessing. At this point we still only have a raw log line — there's an IP address in there and a URL — and we still need to find out which country it was actually from. So this is the second step. We've got a regex here that can parse the log line; I also found this somewhere online. It splits the log line into a few different segments. Then we set up a GeoIP instance. GeoIP is a way to get somebody's location based on their IP address. And then we set up a consumer and ask it to read data from the raw_page_views topic. This is the topic we were just writing data to earlier, and we're consuming it in this second rake task. Then, for every message, we parse the log line with the GeoIP and regex thing and turn it into a nicely formatted hash: there's the time in there, the IP address, country, browser, and URL. So we have properly formatted data we can do something with in the final step. Then we write this out to the second topic. So the second topic will contain these JSON objects that are nicely formatted. But on line 51, the actual magic thing is happening, because we're setting a partition key. This will help Kafka understand which data belongs together: everything that has the same city.country_code2 will end up in the same partition, so we know for sure that we can aggregate it properly later on. Then we get to the final step. Again, we have a consumer — we're now consuming the page views, and these contain the JSON hashes. And we set up some storage; this is lines 60 to 62. On line 61, there's a country_counts hash. It's a normal Ruby hash, and it uses the Hash.new(0) syntax, which means that if no value is present in the hash for a certain key, it comes back as the value zero instead of nil. So we always start with a count of zero. Then it loops through all these messages and JSON parses each one — because in Kafka it will be serialized, so we have to turn it back into a Ruby object — and then we increment the count in that country_counts hash we just introduced. So we take the country, use that to do the hash lookup, and increment it by one. Any time we have a visitor from a certain country, the hash gets incremented. And then we do this thing every five seconds. This is part of the main loop: every five seconds, this is invoked, and we call the ActiveRecord model we introduced earlier. That writes out the whole current state of the country_counts hash to the database, and then it clears the hash on line 83. So after this is done, we end up with an empty buffer again, and basically the whole process just repeats. What happens is that this aggregation task is just reading in data for five seconds, incrementing the counts in that hash, and then after five seconds it writes the current state to the database, and it will be in there. And then it just restarts and does the whole thing all over again.
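Pulled together, that aggregation task might look roughly like the sketch below. This is a reconstruction rather than the slide code: Country.increment_counts is the model method sketched earlier, and the real task may drive the five-second flush with a timer instead of checking it on every message.

  # lib/tasks/aggregate.rake
  require "kafka"
  require "json"

  task aggregate: :environment do
    kafka    = Kafka.new(["localhost:9092"])
    consumer = kafka.consumer(group_id: "aggregator")
    consumer.subscribe("page_views")

    country_counts = Hash.new(0)   # missing keys start at zero
    last_flush     = Time.now

    consumer.each_message do |message|
      page_view = JSON.parse(message.value)
      country_counts[page_view["country"]] += 1

      if Time.now - last_flush >= 5
        Country.increment_counts(country_counts)  # one write per country, not per visit
        country_counts.clear
        last_flush = Time.now
      end
    end
  end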
And if we go back to the Rails side, it's all fairly standard. This is our UI for it. If you look at the controller, we're just fetching the country stats, ordered by descending count, and we also get the sum and the max of the counts, so we know how wide the columns should be. And in the view, it's just a really simple HTML table which we loop through. And that's it, basically. That's the interesting thing in my mind: you can use these Kafka principles to buffer huge amounts of incoming data, and at the end there's just a regular Rails app that does no fancy stuff at all. So let's actually look at the demo. I've got three tabs open here. This is the importer — this fakes, as you might remember, being streaming data; it just keeps pushing raw log lines into a Kafka topic. Then we boot up a preprocessor. If you watch this for a little bit, you'll see that we're now getting some proper JSON — this one is from Ivory Coast, Firefox browser — so we can easily work with this JSON data. We could also add a second one. If we add a second preprocessor, it will get half of the partitions and it will just double the capacity of the whole system — well, if it was actually running on a different server, of course, because my laptop only has so much capacity. And then finally, we run the aggregator, which is going to write output to the database. If we look at Safari here and refresh this a couple of times, it shows the same result, but every five seconds the result actually increases, because we're not writing out every single update — we're only writing out these buffered updates. If you look at the aggregator here, it's currently running for all the countries that are in our dataset. But again, we can start a second one. You'll notice that the list of countries in the first one actually decreases: on the next tick of the aggregator in the top right tab, there are fewer countries than in the one above it. This list is still everything, and then here on the second tick, which goes from here to here, the second aggregator was started, Kafka noticed that the partitions had to be reassigned, and it spread them out over two worker servers. We just doubled our capacity. And that's what you can do with Kafka. And that concludes my presentation. Thank you. Yeah. Yeah. So the question is: what happens if a consumer dies but it hasn't committed its offset? I actually didn't discuss committing the offset in the presentation, just to keep things a bit simpler. But it comes down to this: the consumer can control when it actually tells Kafka that it's done with some data. So when a consumer dies, it can rewind a little bit and ingest data again. Usually that works out really well, because you only commit once you've flushed to the database, and then it will be in sync. Yeah. So the question is: is there any restriction on the format of the messages? The answer is no. A message has a key and a value, and both are byte arrays, so you can put anything in there that you like. We use a lot of protobuf in our Kafka topics, but you can also use JSON or whatever format you like. Yeah, so the question is: do you do your own operations, and how hard is that? That's kind of the disadvantage of using Kafka. You have to dive into a lot of Java things and you need to run ZooKeeper, and there's quite some overhead associated with it. You can buy it at Heroku now, and AWS has something called Kinesis, which is, I think, basically a Kafka ripoff — the same thing, only a different name. So those are some ways to just buy it from service providers.
And if you want to run it yourself, you will be in a bit of pain to get it set up. Once it's running, it's extremely robust, but like to understand all the configuration and how to monitor it is pretty painful. Well, thank you. Thank you.
Using a standard Rails stack is great, but when you want to process streams of data at a large scale you'll hit the stack's limitations. What if you want to build an analytics system on a global scale and want to stay within the Ruby world you know and love? In this talk we'll see how we can leverage Kafka to build and painlessly scale an analytics pipeline. We'll talk about Kafka's unique properties that make this possible, and we'll go through a full demo application step by step. At the end of the talk you'll have a good idea of when and how to get started with Kafka yourself.
10.5446/31275 (DOI)
Welcome everyone. Thank you for coming this afternoon; I hope you've had a good first day of RailsConf so far. My name is Jason Clark. I worked for New Relic previously on the Ruby agent, and now I'm working with some other teams on some back-end services. Today I'm here to talk to you about Rack. Rack is a library that you may have heard a little bit about, but before we dig into it, I've got a couple of links for you. The first one is the slides for this presentation, so if it's easier for you to follow along on your computer or you want to reference it, that link will take you there. And the second link is just a fun little Rack application that I built to demonstrate a few of the principles that are going on here. It lets you post text-based robots that will fight each other. We're probably not going to get down to demoing it, but if you're like me and you kind of drift a little here and there during a presentation, feel free to go hit fight.robotlikes.com. This is RailsConf; we have a code of conduct. Keep it clean, keep it professional, but go have fun with that if you'd like. So let's get down to the meat of what we're here to talk about today. If you've been around in Ruby and in Rails, you've probably heard Rack mentioned before. But what is it? It's a little ambiguous until you dig in — where it fits in the ecosystem and what part it plays. The easy answer is that Rack is a gem, like so many other things in Ruby. It's a gem that provides a minimal, modular, adaptable interface for developing web applications. If you're like me, that doesn't really tell you a whole lot, though. But we can draw a picture that demonstrates what Rack is about and where it fits in our stack. When a user comes to a website from a browser, they make a request. That request goes across the internet and gets handled by some sort of web server, typically something like Unicorn or Puma. That web server then has to figure out what to do with that request, and that is where Rack comes into play. Rack is the interface that those web servers talk to to communicate that a web request just happened. Rack then in turn is able to turn around and pass that request along, generally to a web framework of your choice, something like Rails or Sinatra. So Rack is sort of the glue that sits between your web server and your application code. Now, this is really cool, because this is the principle by which you can swap out your web servers without changing your application code. You can change Puma to Unicorn and back and forth, you could even run on WEBrick, and all of that is fine, because everybody talks through this common interface of Rack. But Rack is also available for you to use directly. There's no reason that you can't leverage what it provides to write your own code against it, instead of relying on a framework to take care of that for you. Now, why would you want to do a crazy thing like that? I mean, I'm here at RailsConf and I'm telling you, hey, you can do things other than use Rails. But there are a couple of places where I feel like knowledge of Rack really plays in and is a good fit. The first is when you need the absolute uttermost speed out of your Ruby code. Because the web server is making one method call, in effect, to hand you a web request that just happened, there's nothing else in between. There's no logic, there's no abstractions, there's nothing else happening to take time.
So if you have something that you need to be as fast as possible for Ruby to handle, getting down to the Rack layer takes everything else out of the way. The second reason I'd put forward is simplicity. And I've got to put an asterisk on this — this is a certain sort of simplicity. It makes some things simpler. The protocol for how you communicate in Rack, as we'll see in a few minutes, is really basic: it's method calls, arrays, and hashes, things that you're very familiar with. But you're trading that for other sorts of complexity that you then have to handle yourself. So it's a trade-off: it might make some things simpler and other things a little harder, and we'll talk about where that's true. The other piece that's nice about Rack is, like I alluded to, there's a high degree of reuse for some of these components. Because Rack is such a standard part of the Ruby web ecosystem, things that are written to work with one Rack-based application can often work with another application, or with applications that are written against entirely different frameworks. That's a nice part: you might be able to reuse code that you write against Rack which, if you had written it as a before filter in your controller, you wouldn't be able to share as easily. All right, so that's a lot of yammering about what Rack is. Let's take a look at some code. And this is one of my favorite snippets of Ruby, actually, pretty much in the world: this is a fully functioning Rack web application. You put this conventionally in a config.ru file. Don't let the .ru fool you at all — that's still just a Ruby file; it stands for "rack up". We define a class called Application, and in that class we define one method. The entirety of the Rack protocol is a single method. This method is called call, and it takes one parameter called env. It's a hash, and we'll look closer at what's in that hash in a little bit. On the response side, Rack's expectation of a call method is that it will return an array that has three parts. The first value in that array is the status code — the HTTP status code to return back. In this case, we're saying 200, everything's all good. The second element in the array is a hash of HTTP headers that will be fed back to the client browser. Now, I've left this empty in this case; more often you would have something like the content type, or at least some basic values set in it. I've excluded those in a lot of these slides just to keep the amount of text down, but any HTTP header that you need to communicate back to the client, you just put in that hash. And then the last thing in this three-part array is an array of the content that you want to send back. Now, this is slightly different than you might expect. You might think, oh, I just want to hand a string back, right? My HTML, or my text, or the JSON that I'm sending back across the wire. But Rack expects that the content object in the last position is something that will respond to each, and so an array is the easiest way to accomplish that — to give it a thing that it will be able to enumerate across. It takes whatever it gets out of that enumeration, and that's what it sends down the wire to clients. So that's it. That's the protocol for how the web server communicates to Rack the request that's coming in, and then gets the response back from that Rack application. Now in config.ru, you've got to get the thing started.
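As a sketch, the whole thing — including the run line we're about to talk about — fits in a handful of lines. The body text here is a stand-in, and the header hash is left empty to match the slide; you would normally set at least Content-Type.

  # config.ru
  class Application
    def call(env)
      # [status, headers, body] — the body just has to respond to #each.
      [200, {}, ["Good enough."]]
    end
  end

  run Application.new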
And so there is a run method that Rack makes available at that level, and you hand it an instance of a Rack application — an instance that responds to this call method. And now, when we go to our terminal, we can say rackup. That is the executable that's installed by the Rack gem. It will spit out a little bit of front matter here, tell us about some versions, tell us what port number it's running on, and now it's ready to receive requests. We can get a request to it in the simplest possible way just by going to our browser, going to localhost:9292, and it tells us that this is good enough. There are other ways of accessing this, though, and I like to play with them because some of the later cases that we'll look at are a little easier with some of the other tools. One of them is curl. Curl is a command-line tool that's readily available across most Unix-based systems. In the most basic setting, you can say curl, and then the same address that you would have typed into your browser, and it will spit back onto your terminal the text that it received. There are all sorts of flags — we're only going to look at a couple of them — but this will come in useful a little later when we look at things that aren't as easy to just type into our browser. So, we're Rubyists, and a big part of Ruby culture is testing. We have an app here already; let's see what it would take to get some tests wrapped around it. Fortunately, as you might expect, the Ruby community provides, through a gem called rack-test. You include this in your Gemfile just like any other gem, you require rack/test to make it available, and that pulls in the classes that you're going to use. In these examples, I'm going to be showing Minitest, because that's kind of my flavor for things; this all works perfectly well with RSpec and is super well supported across that. So a test would look something like this. We have our ApplicationTest, which derives from Minitest::Test. The first thing that we do is include the Rack::Test methods module. What this does is make available a whole bunch of helpers that you then have access to throughout the rest of your test class. You can actually do things like open up the Minitest base class and do this for everything, if you know that you're using it everywhere, or do it on a point-by-point basis like we're doing here. One expectation that rack-test has of your test class, though, is that it will provide a method called app, and that app method will return an instance of your Rack application. This allows us to control what we are actually testing. The methods and helpers that rack-test provides will interact with this app method to figure things out. And as we'll see later, this plays nicely because you can also do things like middleware instantiation, or configure this however you want, returning whatever valid Rack application you want to test. The tests then are pretty basic. It gives you methods where you can invoke all of the typical HTTP verbs that you would expect. In our case, we just wanted to get what was at the root, so we just get slash. Once that's executed, it will run against our Rack app, and rack-test will put what happened — what it got back — into an object called last_response. We'll look a little closer at what this response object is later, but we can interrogate it for things like: was it a successful status code? What was the content that was in it? What sort of headers?
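A sketch of that test, assuming the Application class from the config.ru above lives somewhere requireable:

  require "minitest/autorun"
  require "rack/test"
  require_relative "application"  # wherever the Application class is defined

  class ApplicationTest < Minitest::Test
    include Rack::Test::Methods

    # rack-test calls this to find the Rack app under test.
    def app
      Application.new
    end

    def test_root_is_successful
      get "/"
      assert last_response.ok?
      assert_includes last_response.body, "Good enough"
    end
  end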
And so you can see how you can get into a nice unit testing cycle here, of being able to feed various things in — feed URLs, make requests against your Rack app — and then see what it's going to respond back to you with. All right, so that's all well and good. I'm sure somebody would pay you tons of money to make a web app that just returns a static string. But most of us will need to have some sort of more dynamic, interactive thing going on. And that env parameter that we looked at is the key to all of the dynamic behavior that you're going to put into your application. Everything that you need to know about the incoming web request is in that env. As I mentioned, it's a hash; let's take a look at a little bit of what's in there. There are a lot of values, and you can print these out or dig deeper, but some of the important ones that you should know about are these. It includes the HTTP request method — in our case, this is a GET. As we referred to, that's the method you get when you type something into your browser and just hit go. It gives us a lot of different path information, so we can know what got requested and where. And then it also wraps up any input that's coming in. With a GET, there's no real input that gets sent along; you're just requesting the URL. And as we'll see in a few minutes, there are ways that, with a POST message or some of the other HTTP methods, you're expected to actually send data along with the request, and that's available here in the env. So this is a modification to our basic app to start digging into the env and do something a little more sophisticated — although not that sophisticated yet. What we do is look at that path info that was provided there, which gives us the relative path, after the domain, of what the user has requested. In this case, we look to see if it matches exactly slash bot, and if it does, we return a 403 — which is the HTTP status code for forbidden — and give them a nice little message to let them know that we've spotted them trying to get somewhere they're not supposed to. Very, very sophisticated sort of code here. This process of taking a look at the URL that's being passed in and turning that into what code you're going to execute is actually a really fundamental piece of web architecture. In fact, it's called routing. This is one of the things that you get out of every single web framework — in fact, for some of them, their claim to fame is how easy they make routing, or how conventional they make those sorts of things. This is really a place where you feel the lack of frameworks if you try to go it on your own in Rack. You will end up building your own abstractions to keep from having deeply nested if statements, which are most of what you're going to see here. It's not going to get too deep, but you can imagine that if you had even 10 routes in your application, it could get a little tough to manage without some help. Rack does provide you a little bit of assistance, though, so you're not entirely trapped in a land of nested case statements to get some sort of routing. And it does it through a method called map. So in our config.ru, we originally showed that we just called run on that application; that's the only thing requests are going to be able to come into. Well, Rack will also let you provide the leading prefix of a URL and then send those calls to specific app objects.
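A sketch of both of those ideas — the env-based check just described, plus a map block in config.ru (the response strings here are placeholders):

  # config.ru
  class Application
    def call(env)
      if env["PATH_INFO"] == "/bot"
        [403, {}, ["No bots allowed."]]
      else
        [200, {}, ["Good enough."]]
      end
    end
  end

  class Status
    def call(env)
      [200, {}, ["OK"]]
    end
  end

  # Requests starting with /status go to Status; everything else hits Application.
  map "/status" do
    run Status.new
  end

  run Application.new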
So here we've made another Rack-compatible application object called status. And this will receive calls that come to a slash status URL on the app, rather than them going to the main application object. So you can imagine that at a modest scale, if you had a small number of routes that you were trying to deal with, you could very nicely partition those off into their own separate application classes, which is also kind of nice for testing and, you know, single responsibility. And Rack will let you do some basic routing to get the requests that are supposed to go to those apps, where they belong. If you've got more than one or two of them, there is also a URL map class where you can essentially hand it a hash of the strings for the prefixes and the instances that you want. Depending on how many of them you have and how you're using it, it can be a little tidier than the map method calls in your Config.ru. So the env is a hash. It has all the information that we could possibly want in it, but it's a hash. Like munging around in hashes, knowing the keys, you know, you can botch the key value and it looks like something's not there, hand in things that don't exist. So, you know, it's kind of messy. And so Rack helps us out with that with a class called RackRequest. RackRequest, you hand it an env that's coming in, and then it provides a lot of helper methods that give you cleaner access to the things that are expected to be inside of this RackRequest. So in this case, we've replaced our lookup in that env hash for path info in the uppercase method call to path info instead. Now this, if we botched the typing on it and said pat info, it's going to give us a no method error rather than just returning us a nil value in behaving in kind of a mysterious way. So I highly recommend using RackRequest anytime that you're doing more than one or two very basic accesses of things that are in the env. It really tidies things up. So looking a little further at this, let's do a little bit more sophisticated things. So if you, rather than having slash bot return us just a static string, let's make it a little bit more like a Rails show route where you can say bot slash an ID that you're interested in, and it will still deny you, but at least it'll tell you what thing you were trying to get at. Now, this is a little ugly, and like I said, routing is one of those areas where frameworks help. There's actually Journey and Musterman are both gyms that are independent gyms that Ruby, Rails and Sinatra use for doing routing. So, you know, you could mix those in yourself. In this case, we're using a regular expression to match against the, the route that we're running. The percent R here with the curly braces is really nice when you're doing a regex that wants to match forward slashes. Because forward slashes, then you don't have to like backslash forward slash to get them as literals into it. And then we have parentheses around that backslash d plus, and that grabs as a capture the ID that we passed in. And the dollar one on the lower line below it pulls out the value that was matched when that regex went. Now, there are lots of ways that this could be cleaner and tidier. This is the most concise way to fit it on the slide. So, don't take this as best practice, but as kind of illustrative of where you could go with parsing up your URLs and taking out the information that you want. So, isn't there an easier way? Like I said, this is where you really win big with frameworks. 
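Before looking at what frameworks add here, this is a hedged consolidation of the Rack::Request and regex matching just described; the paths and messages are assumptions, and `$1` holds the captured ID.

```ruby
require 'rack'

class App
  def call(env)
    request = Rack::Request.new(env)   # cleaner than digging in env directly

    case request.path_info
    when %r{\A/bot/(\d+)\z}            # %r{} keeps the slashes unescaped
      [403, { 'Content-Type' => 'text/plain' }, ["Bot #{$1} is forbidden"]]
    when '/bot'
      [403, { 'Content-Type' => 'text/plain' }, ['Forbidden, bot!']]
    else
      [200, { 'Content-Type' => 'text/plain' }, ['Good enough']]
    end
  end
end
```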
I mean, Sinatra's DSL for anybody who's used it where you just say get, and then give it the string, and you can put parameters in it just with colons on the names. Like, they do so much to help you out here. So, this is a place that you might want to look for that help. We wanted to transform this into a little bit more of a real application, though. Once we've done that match, rather than just returning a string value, we can do whatever sort of code we want inside that. So, here we have some mythical database class that goes and looks up by ID, the things that we have in a store. It returns back that bot and as an object, and then we just 2S that in our output. So, you can imagine if this database was storing things as JSON structures, that this would be a way that you could have just sort of a typical restful API talking back and forth. Like, there's no deep plumbing. You don't need anything here apart from the database class to be able to look up the data that you want to return back. So, as we've seen before, curl comes in handy for doing this. And if we curl at bot slash 1, look up in our database, there we have a funky little ASCII robot with the ID 1 on it as the response that hands back. This is an HTTP get, like we had mentioned. So, it's not expected to have any parameters really except for the URL in the query stream that's provided there. You can send headers, but we're not reading any of those. But we want to make this a little bit more full-featured. Like, we have a database that we're reading out. If we want to let people write into it. And for that, the HTTP method that you should use is post. Post allows us to send data along with. So, we'll adapt our call method a little further. And don't worry, this is the most that this method is going to grow. And you can see where the pain points are here in the routing that we're talking about. We had our match that we've seen before, so as long as we're in the bot. And now, we have a branch in here. And this branch says, if the request is a get method, which is provided for us by that rack request, then we would do the stuff that we were already doing before of looking it up. But if the post, it's, if the request is a post, then what we need to do is we go to that request and we read the body that comes in. So, this is the data that the client is sending to us. Now, in a lot of web applications, this would be something that would be coming from a web form that somebody had entered. But it could also be from an API client calling it directly. Or as we'll see in a moment, there's easy ways to make curl post data to a URL rather than just get from it. Receiving a post, like we said, we would like this to write into our database. So, having matched the ID and having read the content in, we can do whatever we need to with that and return a response to the client indicating that we were successful in saving the record that they sent to us. So, this is like the basic shell of a RESTful API here in 10 lines of code, you know, excluding the database or whatever you bring in to do that. It's pretty straightforward. And the things that you get from your frameworks make this a little cleaner and make this tidier. But these are all the moving parts that are really necessary to get this job done. Taking a look at how this works at the terminal, if you say curl with a minus capital X, then you can provide it an HTTP method other than get, which is what it will default to. 
We give it the full URL with the ID of the thing that we're wanting in the minus, minus data that allows you to send the data along that you want to actually post in. It gives us back the message that we wrote. And if we then turn back around and do a get against that same URL, we receive back the data that we had sent in. So, we have the full life cycle of posting and reading. Unsurprisingly, given that RAC has a request class, RAC also has a response class, which helps you in building up the valid responses. It's pretty useful, and it fills a couple of gaps in that format. I mean, for one, you know, it's pretty strict for you to need to make exactly those three elements and make sure that the contents, the way that it's supposed to. It also, RAC response also allows you to kind of have a little bit more of a flow in how you're building things up, rather than making that array right at the point when you need to return it. It gives you an object to sort of tally up the information into, and then at the time that you're finished, ask it to write it out to the wire. So, we interact with RAC response by instantiating a new one. It doesn't take any parameters, so start it up necessarily. And then we can set various things on this response. It's sort of a stateful builder type of pattern. So, one of them is write. You can write to it, and that will write to the body. And you can do that as many times as you want. You can continue to append things. You can imagine if you were doing this with an HTML-based thing, you know, maybe you read something from a header file and write that in. And then you write some other piece of dynamic content, and then you write a footer. You can, you know, progressively build up that response how you want to. And then when you're done, you call response.finish, and it generates a valid RAC response to be handed back across the wire. If we pride in at that point and took a look at the response finish, what it returns back to us, like we've experienced before, is that three-part array, status code, headers, and then the body. But two of them look a little different, and this is part of why you want to use RAC response. One of them is that it's appended a content length based on the things that we wrote in. Now, web browsers will act okay. Things don't break dramatically if you don't provide this, but it is in the HTTP specs, and it is better for you to provide that information if you want to make sure that you're compatible with all callers. Additionally, that content, that last thing that we had been passing as an array with the strings in it, RAC has wrapped up in an object. And this object actually takes care of some more complicated scenarios in when things are nested and there's certain closing behaviors on the response stream that it will take care of for you, so you don't need to know about those things if you use RAC response. So at this point, we have a valid app, and it returns data, and we can interact with it, make our requests, get our responses. But there's another major component of how RAC works, and that is middleware. And this is one of the most powerful patterns that this brings to play, and one of the things that applies the most when you're in other frameworks. So back in our config.ru where we had our run statement for our application, you can use a method that's provided there called use and install a middleware. 
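Before turning to middleware, here is a hedged sketch that folds the GET/POST handling and Rack::Response together; `Database` is a stand-in for whatever store you bring, not an API from the talk.

```ruby
require 'rack'

class App
  def call(env)
    request  = Rack::Request.new(env)
    response = Rack::Response.new

    if request.path_info =~ %r{\A/bot/(\d+)\z}
      id = $1
      if request.get?
        response.write(Database.find(id).to_s)   # Database is a stand-in
      elsif request.post?
        Database.save(id, request.body.read)     # raw data sent by the client
        response.write("Saved bot #{id}")
      end
    else
      response.status = 404
      response.write('Not found')
    end

    response['Content-Type'] = 'text/plain'
    response.finish   # => [status, headers (incl. Content-Length), body object]
  end
end
```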
And installing the middleware, they get installed in the sequence that gets called, so we could have done multiple installations there and said use this middleware, then use this one, then use this one. And what that forms is that forms a call chain that's going to get flowed through. So the first request that comes from the web server will hit the first middleware that you said use in the config.ru, then hit the next one, then hit the next one, and eventually most of the time gets down to your app, and then the response goes back up through that chain. Those middlewares take something shaped like this. So they have to have an initialize. Initialize is expected to take an app that is the next thing in the chain that the middleware is going to call. And you have to save that away so that you can make a call against it later. So in this form, this middleware doesn't actually do anything useful. All it does is like takes the request and hands it along. But you can do your own work before and after that call on the app. You can munge the environment. You can deal with the response and change what's there. You can control what's going in and out. So here's an example of a very basic sort of key API key validation as a middleware. We instantiate a request object to read it. We get the HTTPX API key header, which is a pretty standard value for passing an API key along. And if the key matches some predetermined string that we've chosen, I chose beep, then we'll go ahead and let the call through. They knew the key. Everything's fine. If they don't, then we will immediately return from this middleware. That request will never make it to the application object itself to be handled. So in diagram form, we come in, we get to the middleware, and the middleware says, I'm responsible for this, returns, and nobody else further down the chain gets a chance to get involved with that. So middleware are very powerful for allowing you to apply these sorts of cross-cutting concerns, whether it's logging or authentication. Things where all of your endpoints across any portion of your app need to have the same behavior fit very well. And Rails actually is built on a ton of middleware. This is a huge pattern that's applied there for a lot of cross-cutting concerns. So if we look at this, once we've installed our authentication, we can curl against that. We'll get our forbidden message. If we looked at the status code, it would tell us a 403. If we curl with a minus H, minus capital H, we can give it headers in the HTTP format that they expect, the key name, colon, and then the value. And once we pass a valid API key, that request then is allowed to go through. So Rack itself comes with a number of middlewares that you can use. There are lots of them around. One of them is RackStatic. So if you need to serve static files out of a Rack application, it allows for you to sort of declaratively set that up. It has some basic session support, both with backing stores on your server and with cookies that you can install. This actually often gets used with Sinatra if you want to have sessions because it's a middleware that can get installed. It has debugging help, things that will show you a nice exception page when you're in development, things that do code reloading that you can optionally install. A lot of the niceties that you have in Rails are available as middlewares with Rack itself. And that brings us to the last little bit, which is to talk about where Rack intersects with all of the other frameworks. 
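To round out the middleware discussion, a minimal sketch of the API-key middleware pattern described above; the header name and the 'beep' key mirror the talk's example, everything else is an assumption.

```ruby
class ApiKeyCheck
  def initialize(app)
    @app = app            # the next application (or middleware) in the chain
  end

  def call(env)
    if env['HTTP_X_API_KEY'] == 'beep'
      @app.call(env)      # valid key: let the request continue downstream
    else
      [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
    end
  end
end

# In config.ru, middlewares run in the order they are installed:
#   use ApiKeyCheck
#   run App.new
```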
So we mentioned early on that Rails and Sinatra and all of these things are built on top of it, but what does that really mean? I mean, for Rails, your Rails application is a Rack app. There is something in Rails that has a call method that takes an M. And when those requests come in, that's getting dispatched to your controllers and your routing and your views and all of those other things, but all of that is downstream of a call method that you don't have to write because Rails does it for you. That's not the only place where Rails touches Rack, though. You can actually embed Rack applications into the routing inside of a Rails application. So here, in our routes file by using the mount command, you can hand it a Rack app instance and then give it the URL where it should delegate to that Rack app. And any request that gets there will get sent on to that Rack application. Similarly, Rails allows for using middleware. There's a use method, so in your application config, use a middleware just like you did in our config.ru, but it also provides these insert before and insert after. So if you have some concern about the sequencing of your middlewares, you can control that from in there. Now, you can't do that from Rack itself, and so how is Rails accomplishing this? Well, it turns out that Rails actually has its own internal middleware stack. And so when you config use those in the Rails level, it's actually going into that middleware stack. You can install middlewares at the config.ru, at the Rack level, just like you would, but you won't see those when you do things like rake middlewares and ask Rails to show you all of the things that it has installed. So that's a fun little tripping point if you're ever looking around for how things are plugged in. Sinatra as well, just like Rails, is just a Rack app. At the end of the day, when you call Sinatra base and you derive your thing from it, it's just a Rack application. And in fact, although you almost always use helpers and call render or hand back strings and do all the nice things that Sinatra lets you do, you can always fall back to handing back a valid Rack response. And Sinatra knows what to do with it and just hands it on back because it is just a Rack app. So that's a quick tour through Rack and where it fits. We've looked at how you would build a basic application, what the moving parts are that are there in the box. We've looked at the request and response life cycle and what plays into that. We've looked at middleware and how you can use those to kind of compose an app together and layer things in your Rack applications. And we've looked at how this plugs into the various frameworks. Now, I really just touched on the surface of this and haven't given a lot of really concrete details. But I did a course for Plural Sites. So if this is something that is of interest to you, I actually have a bunch of tokens that people can get a free month worth of time on Plural Sites. And that screencast fills in a lot more detail. It goes into lots more of what's in the box with Rack and a much more realistic example of building a JSON based API with it. So hopefully you've all found this useful and I can't see how much time we have left if there's any time for questions. All right. Thank you very much. APPLAUSE
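As a footnote to the Rails integration points covered in this talk, here is a hedged sketch of the two hooks mentioned: mounting a Rack app in the routes and installing middleware through Rails' own stack. `StatusApp` and `ApiKeyCheck` are the illustrative classes from the earlier sketches, not code from the slides.

```ruby
# config/routes.rb -- hand a URL prefix to a plain Rack app
Rails.application.routes.draw do
  mount StatusApp.new, at: '/status'
end

# config/application.rb -- Rails' own middleware stack
config.middleware.use ApiKeyCheck
# or, when ordering relative to an existing middleware matters:
# config.middleware.insert_before Rack::Runtime, ApiKeyCheck
```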
While Rails is the most common Ruby web framework, it’s not the only option. Rack is a simple, elegant HTTP library, ideal for microservices and high performance applications. In this talk, you’ll see Rack from top to bottom. Starting from the simplest app, we’ll grow our code into a RESTful HTTP API. We’ll test our code, write reusable middleware, and dig through what Rack provides out of the box. Throughout, we’ll balance when Rack is a good fit, and when larger tools are needed. If you’ve heard of Rack but wondered where it fits in the Ruby web stack, here’s your chance!
10.5446/31276 (DOI)
Music My name is Fabio and today I'm going to talk to you about Rails 5.1, which is the latest release of the framework that we all love and enjoy. And the slides are available online and you might want to go there simply because there are links to the pull request that I'm going to talk about. So I only have 40 minutes. I can't go into details, but I invite you to go there and see at the commits at the discussion and hopefully get involved you with open source. To give you a little bit of context, 5.1 is going to be released any minute now, any day now. But just to give you an idea, when 5.0 was released, that was the result of 18 months of work and about 10,000 commits, about 1,000 committers. So with 5.1, you can think of it as kind of half of it, nine months of work, about 4,000 commits. I did this just with the git command and if you actually go and look at the git diff, you're going to see a lot of lines that have changed in the code base. And one of the reasons is that the Rails team is now using Rubikop for styling. So you're going to see many things like spaces, single quotes, double quotes, but that is not really going to affect you as users of Rails. So that is what I'm going to talk about today. If you use Rails, how is this new version going to affect you? And so what I'm going to talk about is breaking changes, then active record, active support, action view, real ties, and action pack. And so the last disclaimer before starting is that I have cats in this talk, so I'm going to start with the first one. Okay, breaking changes. Okay, breaking changes. Really, I have upgraded my apps to 5.1 and I haven't really encountered many that have affected me. I found some in the code that I'm going to talk about, but they are not that big. I'm going to start with the first one, which is the engine that reads your ERB files under the hood has changed from a Ruby to a Ruby. And the difference is not just an S at the end, it's also that a Ruby was now really being maintained since 2011, and a Ruby has a simpler internals and a smaller memory footprint. Now, if all you're doing in your app is just having ERB files, this is not going to affect you at all because the result is the same. This is only going to affect you if you actually are explicitly calling a Ruby in your code. For instance, if you have a jam, like Hamel or something that touches action view, then you might want to look at this. Otherwise, it just a benefit for you. It's going to be faster. The next one is when you set or skip callbacks, you can add conditions. You can say set if, skip if. And these conditions, you can pass them as strings. And let me show you what it means. I tried to write an example here if you have a post in your database, and then before you save this post, if it doesn't have the title, so unless the title is present, you want to set no title, that unless you could pass a string in 5.0. Same with if. Now, if this feels weird, it did feel weird to me. It's because there are alternatives to do this. So in 5.1, you're not going to be able to use a string anymore. But you can still, like, use a proc. You can still use a symbol. So if you're using strings for those conditions, then you might want to change to this new syntax. Still talking about breaking changes. This affects your config routes. When you set your routes, you're also going to use strings. You're going to say get dashboard index to home, things like that. 
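Before the routing change discussed next, here is a short, hedged sketch of the callback-condition change; the model, attribute, and method names are assumptions used for illustration.

```ruby
class Post < ApplicationRecord
  # 5.0 accepted a string condition such as unless: 'title.present?';
  # in 5.1, use a proc or a symbol instead:
  before_save :set_default_title, unless: -> { title.present? }
  # before_save :set_default_title, unless: :title_present?

  private

  def set_default_title
    self.title = 'No title'
  end

  def title_present?
    title.present?
  end
end
```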
So in this string, there are actually a couple of special symbols that you can use in your routes. You can do get and then this column controller was less colon action was less colon ID. So this kind of syntax was valid until race five. And these are wild cards. So if you have that in your routes, you're basically saying match any controller with any action or any ID. This is really powerful, but also it was the cause of a couple of security issues that have already been fixed in previous version of rails. So this decision was made so that going forward, this syntax would not be acceptable anymore. Instead, if you have many controllers or actions, you might want to whitelist them explicitly, which I also think makes for better and cleaner code. That's that's what I have so far for breaking changes. I'm not claiming these are all of them and I invite you to go and look at the changelog of the libraries. But I think it just gives you an idea about the fact that it's probably not going to be hard for you to upgrade to race 5.1. And so now we can start looking at the feature what has been added or improved and what can benefit your everyday work. Let's start with active record. Primary keys in your database, they used to be integers and now they can be big in or big serial if you impose grass. So you basically have 64 bit. So if you have, if you need to reference many records in database now, this is the default. You just have big in as your primary keys. This can be useful. And still talking about databases. This is only for my SQL or equivalent Maria DB. There is no support for virtual columns. I also, I'm going to show an example for this. Let's say you have a user stable every user has a name. Now every user, you can also say has an upper name which is just the uppercase version of the name. My SQL lets you do that with this upper name in SQL. So basically SQL has this virtual column, which is not like duplicate to just calculating the uppercase version. And now you can reference this column in your migration by writing playing Ruby, you just do to the virtual upper name. And it has even more features. Let's say you have another virtual column, which is name length, which is the number of characters of the name. You can tell my SQL to store this column and also to index by this column. So if you want to display your users sorted by the number of characters, you know, my SQL can do all of this. And this is all the code you need. And what is good about this is that you don't have to write your structure dot SQL, you can just write Ruby, which I guess we all prefer. Still talking about database. If you have to deal with many, many records like thousands of records. Active record provides methods to do this like find each or in batches. So if you wanted to iterate 5000 posts, you want to do maybe find each. And this works. There was just one gotcha that in 5.0. If you wanted to use the limit method, it wasn't really working. If you had to write post limit 500.pn each, you would actually get a warning saying scoped order and limit are ignored. And so the query would just fetch 1000 because that's the default in 5.1. The limit method is now supported. So that's good. Okay. I think I'm going a little fast on this initial new features. It's because maybe you've not heard about them. And hopefully I'm getting you curious about going to see everything that has changed. 
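Circling back to the virtual columns for a moment, this is roughly what such a migration could look like; the option names are my best recollection of the 5.1 syntax, so treat it as a sketch rather than a reference.

```ruby
class CreateUsers < ActiveRecord::Migration[5.1]
  def change
    create_table :users do |t|
      t.string  :name
      # MySQL/MariaDB generated columns, declared in plain Ruby:
      t.virtual :upper_name,  type: :string,  as: 'UPPER(name)'
      t.virtual :name_length, type: :integer, as: 'CHAR_LENGTH(name)', stored: true
      t.index   :name_length
    end
  end
end
```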
So bear with me and we're going to get to the end where you're probably going to see the features that have made the news or that other presenters are talking about here. But let's continue with active support. There is no support for Unicode nine. And here's an example of what this entails. Well, the value of the constant Unicode version is now nine and not eight. And as an example, let's say you have a string and this string has just a single emoji. This emoji is called a family with two mothers, one daughter and one son. This is one character. It's one emoji. But in 5.0, if you had asked for the length of this string, you would have gotten four. If you had asked to reverse this string, the family would have been split into four characters, which is probably not what you wanted. And so now in 5.1, there is full support for Unicode nine and families are not going to be separated anymore. Active support duration has two methods called since and until. And now they have aliases after and before. So it's easy to understand with an example. If in your code, you're writing two weeks since Christmas Day. Now you can say two weeks after Christmas Day. It's the same thing, but it might sound more natural and English. Five days until Christmas Day, you now have the option to say five days before Christmas Day, maybe in your test. This is something that you like better. Now you have the option. Let's get to one of the bigger request. There's a new method in module called delegate miss into. I think this is going to be very powerful for people who use the delegator pattern. So if you have a class with a bunch of methods, and then you have another class that wraps the first one, basically it's calling all the methods of the first one except couple. And this can help you. Let me show you how. I made another example here. Imagine have an order active record in order in your database so users can create orders, save order, destroy order. So that's just an active record base. And then you have another class called order completion that is delegating all the methods to the order. So it's doing everything that the order does except for create when an order completion is created. It's also sending a notification. This is a pattern that you can do in Rails and a way to do this it's using the delegate method which already existed. The problem with the delegate method is that you have to list all the methods that you want to delegate. So it might not be your first choice. Another way to do this is to use Ruby's metaprogramming methods. You can find respond to missing and you can define method missing and that works. You just have to remember how to write them and then it's like 10 more lines of code. In Rails 5.1, all you need is this. You just have this single method delegate missing two which is exactly what I explained. So if you're going to call a method on order completion and the method does not exist there, then it's going to get called on order. And it's just a single line of code. So I think this is pretty powerful. Okay. Action view. There are actually a few changes in action view and very interesting one. The first one is how do you output an HTML tag with Ruby? In Rails, you actually have two helpers. There is one helper called tag and there is one helper called content tag. If you want to output a tag like br, a breaking line, a way to do it in Rails is tag br nail true. Now that works but you just have to remember the order. Why is it nil there? Why is it true? 
It is because br doesn't have any content or options so that's why it's nil and also itself closing. But sometimes it's hard to remember and also there is another method, content tag, which you can also use. And you pass div and then the options on the content. In 5.1, this syntax is still available. It's not deprecated but you also have a new syntax where you just call tag dot br or tag dot div. It's basically tag dot and then the HTML tag. And you don't have to remember the order of the argument. And so I think this is really especially important for newcomers when somebody and you come to Rails, they only have to learn one way of creating HTML tags. And I think it's also easier to remember. Now still talking about helpers and views. The next one is I think probably a question that everybody who has ever programmed in Rails has wondered. When you want to create a form, why are there two helpers? Why is there form four and form tag? Sure, they do different things and one accepts a model but it looks so similar that sometimes it's confusing. So what's going to change in 5.1 is that instead of having two helpers to create a form, you have three helpers. There is a new form with. Now the good thing is that this is one to rule them all. So in the long run, the idea is that you're going to switch to form with and then only form with will survive. And for now, nothing has been deprecated. So it's just there. If you create a brand new scaffold app with 5.1, it's going to use form with but for now you can use all of them. So let me just show you how you can upgrade syntax from form four and form tag to form with. If you're using form four, in this case, I'm showing a form to delete a post. You can still do it with form with you have to say actually model post so you have to pass it like that. It just basically syntax difference. If you use form tag, then the first argument is the URL you want to submit to. And with form with you have to specifically say form with URL. Other changes here, for instance, in 5.1, if you create a brand new app form with is going to be remote by default. And also what's pretty interesting is that form with, if you have a block has this F all the time and form tag does not. So with form with you can actually mix the different helpers like submit tag or F dot submit you have the chance to do that. And I think in the end it's going to be less confusing. Okay. I'm think halfway through the list of features. And I think the reason why I made this stock is also really to invite you all to look at the source code and then up for of course upgrade your rates application to 5.1. And also, as my personal way to thank all the people that are mentioned in this slides, the creators of all this for requested contributors. So now let's continue with real ties. The first one is, if you use, if you are in your terminal, you can type rails initializers, and that is going to print out all the initializers of your app. Now the problem is that in 5.0, it's only printing out the method name, and different classes can be using the same method name and you can't really tell. In race 5.1, it's going to output the name of the class and the name of the method. So it's less confusing. And it's more useful, especially if you use race engines, you can just see clearly the order in which they are called. Now let's talk about secrets. 
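Before moving on to secrets, here are two of the additions just covered in sketch form: first the Active Support delegation helper, then the new view helpers. Class names, instance variables, and `search_path` are assumptions, not code from the slides.

```ruby
class OrderCompletion
  attr_reader :order
  delegate_missing_to :order   # any method not defined here is sent to order

  def initialize(order)
    @order = order
  end

  def save
    order.save.tap { |saved| notify_customer if saved }
  end

  private

  def notify_customer
    # deliver the notification here
  end
end
```

```erb
<%# The tag builder: %>
<%= tag.br %>
<%= tag.div "Hello", class: "greeting" %>

<%# form_with with a model (the form_for case): %>
<%= form_with model: @post, local: true do |f| %>
  <%= f.text_field :title %>
  <%= f.submit %>
<% end %>

<%# form_with with just a URL (the form_tag case); search_path is hypothetical: %>
<%= form_with url: search_path, method: :get do |f| %>
  <%= f.text_field :query %>
<% end %>
```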
Every Rails app has a file config slash secrets.yaml where you can store your secrets, meaning your API keys or any other variable that you want to define once for your environment, your settings, or maybe your email address that you want to use. So there is this secrets.yaml file. And maybe when I'm developing, I want to send all my emails from a specific email address and also when I'm testing, but now when I'm in production. So how do you share one of these variables among different environments? For instance, I want to set my email from to that specific address in development and maybe reuse it in tests. In 5.0, you can already do that. You just have to use this yaml syntax. The ampersand and then remember, like, how, I mean, it's really yaml syntax, but you just have to remember that you have to define the default and then say development, it's using the default. 5.1 is just a little simpler. If you have a section with the name shared, it means that everything that is in that section is going to be available to any environment. So it just makes your code shorter and more readable. This is just one of the changes to the secrets, but there is another one, which is even more important. And that is the addition of encrypted secrets. So as I said before, in this secrets yaml, you can store your secrets, but you probably don't want to store your real production secrets in a plain text file. You don't want to put there your PayPal account information or your API key in production because otherwise everybody who has access just to the source code can be able to access your PayPal and email and everything. So you don't want to put them in plain text right there. As a matter of fact, if you look inside these secrets or yaml, there is a disclaimer and is a comment that says do not keep production secrets in the repository. Instead, read values from the environment. So it is right there. So this is a good advice. But sometimes you need to access your production secrets. You need to install it on a new server or maybe share them with a co-worker. So how do you balance this? How do you balance this? Of having the secrets available, but not in plain text for everybody. This is a new feature of Rails 5.1 and it's encrypted secrets. The way in which it works is that you type Rails secrets set up and this is going to generate two new files. So you can use config secrets yaml.key and config secrets yaml.enc which stands for encrypted. Now the encrypted file is where the secrets are. But if you just try to read this file, you're not going to be able to read them. They're encrypted. In order to open this file, you need to type Rails secrets edit. And this is going to use the key, so the other file, to open it in your editor. Then you can edit, save, and as soon as you close your editor, they're going to be saved encrypted once again. So in short, if you want to read them and edit them, you need both files. But if you just have the encrypted files, then it's okay. You can put that in your repository because people are not going to be able to decrypt them. So the solution is you can put your encrypted files in the repository and you can put it on GitHub and share it. But don't put the key, otherwise this is useless. So keep the key to yourself, share with your coworkers, maybe with a password manager, and then on your server, and then you can have the best of both worlds. You can have encrypted keys. So maybe when you create a new machine, you don't have to manually set 100 environment variables. 
You just need the key. But then they're encrypted. So if this works for you, it's right there for free in Rails 5.1. If you want to keep on using the other approach of just setting variables in the server, you can still do that. Okay, still talking about Rails.js. Rails.js has seen a lot of great features. jQuery is not a required dependency of Rails anymore, meaning if you don't need it, it's not there. This doesn't mean that we don't like jQuery. I like jQuery. But if it, you know, it's one less library if you don't need it. Let me take a step back. Why was jQuery there in the first place? The main reason was what is called UJS, Unobtrusive JavaScript, and it's for features like this. This is a still arrays scaffold. So you have your list of posts, and for each post you have these links. One of them is destroy. So it's a link to destroy one of your posts. What happens if you click on that link is you're going to see this. You're going to see an alert. This is not the default for links in a browser. Links just work. So it's the JavaScript that it's intercepting your call and showing an alert. And also there is something else here. Normally links cannot do a post or a delete to your server. They just get. So how does this work? It's because there is this UJS that it's looking at the attributes of your link, the data confirm data method, and it's changing things there. So all these came from the jQuery UJS library, which is just JavaScript that has all these features. It looks for these attributes and it changes. There's also data disabled with data remote and so on. Now, if you think about it, you can write the same JavaScript in vanilla JavaScript without jQuery. And that's exactly what happened. A student from the Google Summer of Code of last year for Rails has done this, has rewritten the jQuery UJS library without jQuery. It's now called Rails UJS. It's inside action view. So if you want the exact same features, all you have to do is just require Rails UJS. You don't need to add gems to your gem file. You don't need to require jQuery. It's a smaller footprint and it's exactly the same behavior. Okay, I think I'm going to now get closer to the features you probably have heard of, or maybe that's why you're here in the first place. And I think this is one of them. The original title of this PR was add yarn support in your apps using the dash dash yarn option. I deleted that because since this PR, this has changed. So now it's by default. You don't even need to do dash dash yarn. So whenever you're going to do Rails new with Rails 5.1, you're going to have yarn support. Now let me take a step back. What is yarn? If you are front end developers, you probably have heard of it. It's a package manager for assets or JavaScript libraries. Think of it as bundler and gem file. They are for Ruby gems. Well, if you need to bundle a bunch of JavaScript libraries, yarn is one of the possible solution. And this is really where the Rails team didn't need to reinvent the wheel and create a new one. Yarn has been accepted in the last year as one of the good options. And now it ships with Rails 5.1. So this solves the problem of how can you import a lot of JavaScript libraries in your Rails app? Of course, you could already do that before. You can just copy and paste them in your vendor asset folder. But if it's like one or two, you can manage that. But if you have tens of them, and maybe you need to, if they have dependencies or you need to upgrade them, it becomes complicated. 
So that's what happens with yarn. Let me show you how you would use yarn with 5.0. And then let me show you what changes in 5.1. In 5.0, you need to install yarn, and then you do yarn init, which sets up the folder. Then you start adding your libraries, for example, bootstrap. These libraries are installed in a folder called node modules. So then you want to ignore this folder, and you have to do that. And also, you have to add this folder to the asset pipeline. And finally, you can just require your libraries. So from 5.0 to 5.1, this happens. There is less setup. There is more convention over configuration. So they're going to be installed in node modules. That folder is already getting ignored. It's already in your pipeline. So it's just very straightforward. And this is leveraging the power of yarn. You can do much more. But this is just really introducing it to the Rails community, and I think it's going to be a great change. Now, you might want to use yarn if you are including, for instance, third-party JavaScript libraries. But what if you are writing JavaScript? What if your app has a lot of JavaScript? Maybe you're using Rails, but then you're writing a single-page app, and you have it all at once. You might start having questions like, in which folder do I put my files? How can I use ES6? How is it going to be pre-compiled? Things like that. So for this, there is another feature in Rails 5.1 that can help you. And that is the fact that you can now leverage Webpacker. So what this means is that there are different ways to structure your JavaScript, let's say, single-page app, and different libraries that you can use. And now with Rails 5.1, you have, once again, a sort of convention over configuration. You have some pretty good defaults. If you do Rails new app dash dash Webpack, a lot of files and folders are going to be created for you. There's going to be an app slash JavaScript folder where you're expected to have your JavaScript in packs. There is a lot of configuration that is written for you in config slash Webpack, and it's separated by environment, development and production. There are some libraries that are there already for you that you're probably going to need, like Webpacker, and Webpacker, and Webpacker, and Webpacker to use Webpacker, Webpacker is available, or, you know, Coffee, SAS files are already ignored, and then there are some bin files, Webpack, WebServer, Webpack, Watcher, and Webpacker. So all this is there for you. In the past if you had to use Webpack with Rails, you had to install it and then decide that I had to do some configuration and stuff, Locally, you can type Webpack Dev Server, which is one of those new commands that come with it. And this is going to watch your folder, your app JavaScript. And as you type, even if it's your six, it's going to watch that folder and then make it available for you so you can develop and have those JavaScript available in your app. The way in which you include those in your app is with this new helper JavaScript pack tag, where you specify which pack you want to load. And if you also have style sheets, you have style sheet pack tag. And there are many, many other features. For instance, you can interpolate Ruby. You can use, you know, a Routes helper. So you just do application.js.erb. There is really a lot that I cannot cover, but all of this is powered by a new jam called Webpacker. And it's open source. It's available on GitHub, Rails slash Webpacker. 
So feel free to go there and look at what's happening there. Also, this, for instance, works right away if you want to deploy on Heroku or the precompilation. And the last thing I want to mention is that when you do Rails new dash dash Webpack, you can specify as an option, React, View or Angular, which is going to install and configure more libraries for you. So if you need to use TypeScript or JSX, it's all there for you. Okay, I'm going to talk about Action Pack last. If you go and download my slides, I have more slides about Action Mailer and ActiveJob that I'm not going to be able to touch here. But let me just talk about Action Pack. The first feature is actually a mix between Action Pack and Railties, Cappy Bar Integration with Rails, also known as System Test. If you were in this room yesterday, you saw an amazing talk by Eileen Ishitel, who created this PR, and she spent 40 minutes on this feature. I'm just going to try to summarize in two minutes, but you should probably go and re-watch her talk on YouTube. So in summary, System Test, I'm still making an example with Skaffold. If you have a Skaffold, an example of System Test is when I go to the page slash post, I actually want to ensure that in the browser, in the header H1, the text says exactly post. Like I want to test exactly the final result in the browser, not just a unit test. So how do you write this sort of test? In Rails 5.0, you have to make a decision which library I'm going to use, which version, how to install it. As an example, you can decide that to your Gemfire, you're going to add Cappy Bar and Selenium WebDriver. Then before you run your test, you have to require those libraries. Then you have to write a lot of configuration. This is just a very small example. Like you're going to say that Cappy Bar is going to be driven by Selenium. You're going to use Chrome. You might specify a resolution. And then after you do all of these, you can write the test. Now, the test itself is not super complicated. It has basically two lines. You visit slash post, and then you assert that inside the H1, the text is what you expect. So it's really about making everything else go away and focus right in the test. So in Rails 5.1, forget about the setup. All you have to do is write your test. Your test is going to inherit from a class called application system test case, which is going to set up all that for you. You can also use route helpers like post underscore URL. And then you're going to write your system test with Rails test system. So much more convenient. And also it has great new features like screenshots that are taken if your test fail, clean the database. And one more thing is that the old type of functional controller test are not going to be generated anymore when you have a new 5.1 app. You're going to see this type of test. The other ones are still going to be running for backward compatibility. And I'm going to talk about the last feature in Action Pack, which is, I guess, my favorite one. And this is one example of a PR that it's a single PR, but it's like five features in a single PR. And in short, it's adding two new methods that you can use in your config routes. But these methods are very powerful. They're called direct and resolve. And I'm just going to go through two examples that show what they do. But once again, go and check the code because they can do much more. The first one, still a scaffold. This is a post page. And then imagine that below the post, you have comments. 
So each comment does not live on its own page, like slash comments slash one, it lives inside the post page. Now, imagine that you want to link to one of these comments. How do you write that code? It's not that it's hard to write the code in 5.0, for instance, you can say, link to comment number two, post path, because the page is the post page. So you do post path and then you say anchor content two, because you're basically linking to the post page at a specific place in that. But I believe that all this logic should belong to the routes, not to the view. When you do a link, you just want to say I'm linking to some embedded content. And then the routes are, you know, it's the responsibility of the routes to route. So to say, oh, it's a link to a comment, but the comment is actually, you know, inside the post at a certain anchor. So this new direct method, let's you do that. Let's you define once in your routes this style of routes. And then your views can just be used with link to super simple. So it's very powerful, especially this might sound like a kind of simple example, but you have if you have more polymorphic resources, this can help a lot. The other new method solves several problems. And one of these problems is if you have a form that's that post to a singular resource. For instance, if you have a form that lets user update their profile or create a profile, a user has one profile, not many. So you might want to do a form that submits to slash profile singular. Now, if you try to do this just by doing form for at profile in Rails until five point zero, you would see an error. It's saying undefined method profiles path plural because by default, all the routes are plural, even though you say resource singular in your routes file. There is a work around for this, but this has actually been an issue in the GitHub in the rails GitHub since 2011. People have been asking why is it like this? Can we just make it, you know, work just work. So in five point zero, the work around was when you do form for at profile, you have to specify the URL. You're saying, yes, it's a form to a singular profile, but I have to explicitly say this is my URL. And if you have a form to update, you have to do it. If you have a form to destroy, you have to do it. You have to do multiple times. In five point one, you can add this resolve profile, which is the name of the class as a string and then profile. You just do that once. And then it's that one is basically saying the routes for a single profile. It's basically just going to work. And you do that once and you do that in your routes file, which is probably where you should do it. And we that I have one minute left. So I'm just going to leave it here, but feel free to go and look at more features there. And also feel free to go and look at the changelog files inside the libraries. And then once five point one ships, there's also going to be a guide to upgrade your apps from five point zero to five point one. And I just want to thank all the people who have contributed to Rails and all the people here who are going to maybe start contributing to Rails. And that's what I have. Thank you very much.
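As a footnote to the Action Pack features above, two hedged sketches: a system test as it looks in a 5.1 app, and a routes file using `direct` and `resolve`. The specific helper names and block bodies are illustrative assumptions, not the talk's exact code.

```ruby
# test/application_system_test_case.rb (generated by a 5.1 app)
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
end

# test/system/posts_test.rb -- run with: bin/rails test:system
require "application_system_test_case"

class PostsTest < ApplicationSystemTestCase
  test "visiting the index" do
    visit posts_url
    assert_selector "h1", text: "Posts"
  end
end
```

```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :posts
  resource  :profile

  # Keep the "a comment lives inside its post's page" knowledge in the routes
  # (the helper name and block are assumptions):
  direct :commentable do |comment|
    [comment.post, anchor: "comment_#{comment.id}"]
  end

  # Let form_with(model: @profile) submit to the singular /profile routes:
  resolve("Profile") { [:profile] }
end

# In a view: link_to "Comment #2", commentable_path(comment)
```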
Each minor release of Rails brings shiny new features and mild headaches for developers required to upgrade their applications and gems. Rails 5.1 will support Yarn and modern Javascript transpilers, remove jQuery from the default stack, integrate system testing and a concurrent test runner, introduce a new helper to create forms, provide encrypted secrets and more. In this talk, I will cover the improvements brought by Rails 5.1, explain the Core team’s motivations behind each feature, and illustrate the upgrade process to smoothly transition gems and apps from Rails 5.0 to Rails 5.1.
10.5446/31281 (DOI)
Awesome. Well, so this is a sponsor talk by Moove It, the company we work at. And we want to talk about sidekiq-scheduler, which is an extension for Sidekiq. There we go. So my name is Andreas and this is Gian. We work for Moove It. We're based out of South America, in Uruguay. And we've been growing: we've got people in Argentina and Colombia, and we also have an office in Austin. If you're in Austin, we're also always hiring, so go check us out. Yeah, so sidekiq-scheduler — that's where it's hosted at. And I'll leave you with Gian now for a while. He'll explain a little more about what it is. Okay. So let's start with a couple of use cases that we have. In 2012, we were working on a project that, among other things, needed to daily fetch information related to credit score levels based on financial data and apply interest to unpaid credit card debts. Also, on Friday nights, we needed to send weekly activity reports to a couple of email addresses. The first approach we thought of was to set up cron entries which would execute rake tasks performing the processes that I mentioned before. We thought it was a reasonable approach, but with some drawbacks. So, why not cron? The answer to that question is composed of several parts. Because it needs to start the Ruby interpreter on each run — we were using JRuby, which takes some time to start up and consumes a fair bit of memory. So if we have a cron entry that says run this at 3 p.m., for example, it will need to start another JRuby process and consume some memory. There's no easy way to programmatically add, remove, enable, and disable cron entries. The cron configuration needs to live outside of the app's configuration. It's not a portable solution because it relies on the operating system. And you can't start and stop the scheduler without affecting other cron entries. There are also some other drawbacks, like cron's minimum resolution being at minute level, not at second level, and deploying the app in a cluster tending to trigger duplicate tasks when in fact you need them to run only once. Despite those drawbacks, we didn't discard the cron option, but we wondered if there was some alternative solution. We found a gem in 2012, named sidekiq-scheduler, whose purpose was to schedule Sidekiq jobs at a specific time in the future. This wasn't quite what we were looking for, but we asked ourselves how hard it could be to add some cron support to sidekiq-scheduler. Researching options, we stumbled upon rufus-scheduler, which is a gem where you can use cron rules to schedule a call to a block of Ruby code. So we came up with the idea of integrating Rufus into sidekiq-scheduler, adding cron support that way. That was in August 2012, and after doing that, we started using sidekiq-scheduler in the project we were working on. In 2013, Morton was finding it hard to actively maintain the gem, so he transferred the ownership to us. And in 2016, we added support for Rails jobs, when Sidekiq is acting as the ActiveJob backend. So how do we use sidekiq-scheduler? sidekiq-scheduler is just an extension, a plugin for Sidekiq. Using it is not really different than using Sidekiq. Therefore, we need to declare a regular Sidekiq worker, which is a Ruby class that includes the Sidekiq::Worker module and that also responds to the perform method. The schedule configuration is placed inside the Sidekiq configuration file, under the schedule key.
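A minimal sketch of the worker and schedule being described; the YAML is shown as a comment and paraphrased from memory of the gem's README, so treat the exact keys and the command line as assumptions.

```ruby
# hello_world.rb -- a plain Sidekiq worker; sidekiq-scheduler only decides when to enqueue it
require 'sidekiq-scheduler'

class HelloWorld
  include Sidekiq::Worker

  def perform
    puts 'Hello world'
  end
end

# config/sidekiq.yml (schedule section, shown here as a comment):
#
#   :schedule:
#     hello_world:
#       cron: '0 * * * * *'   # six fields -- run when the second is zero
#       class: HelloWorld
#
# Started with something like: sidekiq -r ./hello_world.rb -C ./config/sidekiq.yml
```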
So in that example, we have a HelloWorld job that runs every minute, when the second is equal to zero, and when that specific point in time occurs, sidekiq-scheduler will push the HelloWorld job into Sidekiq. Okay, then you install the gem, of course. You install it with gem install, or if you want to use Bundler, you just add it into the Gemfile as usual. Then we run Sidekiq as usual as well. As we are, here in the example, outside of Rails, we are telling Sidekiq to require our Ruby file that contains the worker class. And that's it. Every minute, sidekiq-scheduler will enqueue the job, and then Sidekiq will run it. Okay. There are different schedule types, and as we rely on Rufus, which is a separate gem, different schedule types are supported. The most common one, the most popular one, is cron, and it's the same cron-like syntax that you are used to from the standard Unix cron tool — you can use it in sidekiq-scheduler. The at type pushes the job once, at a specific point in time. The every type pushes the jobs in a recurring way, following a given frequency. And the in type pushes the job once, after some time duration has elapsed. Okay. The main purpose of sidekiq-scheduler is to push jobs into Sidekiq. This is the main idea — letting Sidekiq then run the jobs as usual. So how does sidekiq-scheduler work? When sidekiq-scheduler is required, it hooks into the startup and shutdown Sidekiq lifecycle events. We will explain later what those lifecycle events or phases are. Then, on the startup phase, we fetch and read the configuration and start rufus-scheduler, which is just a thread that iterates over all the schedules and invokes each Ruby block when needed. We start rufus-scheduler and set up a scheduled job for every one of the configured jobs. Each handler in Rufus is responsible for storing execution info into Redis, verifying — in the case of cron and at jobs — that the job instance was not previously pushed, and finally pushing the job into Sidekiq, letting Sidekiq run the job as usual. While implementing this approach, we stumbled upon the challenge of managing multiple instances at the same time. At first, recurring jobs were meant to be run from one Sidekiq instance, only one. And if multiple nodes were needed, only one node would run sidekiq-scheduler, while the others would host regular Sidekiq instances, running jobs pushed by that single one. Support for multiple nodes was added later for the cron and at schedule types. So right now, it's possible to run multiple sidekiq-scheduler instances having cron and at jobs, and those cron and at jobs will not run duplicated.
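To illustrate the mechanism just described, here is roughly what rufus-scheduler looks like on its own; sidekiq-scheduler wires blocks like these up so that they push jobs into Sidekiq. The schedule strings are examples, not the gem's defaults.

```ruby
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# cron: six-field strings give second-level resolution
scheduler.cron '0 * * * * *' do
  HelloWorld.perform_async   # what sidekiq-scheduler effectively does for you
end

# every: recurring, at a given frequency
scheduler.every '30s' do
  # ...
end

# in / at: run once, after a duration or at a point in time
scheduler.in '10m' do
  # ...
end
scheduler.at '2017/12/24 20:00:00' do
  # ...
end

scheduler.join   # keep a standalone script alive
```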
All right. Well, here — otherwise, I'll just shout and you'll hear me. Sorry about that. All right. Let's talk a little bit about the future. Honestly, the one thing that we want to do is stop polluting the global Sidekiq configuration, because right now we kind of load our configuration into the Sidekiq configuration, and, you know, if Sidekiq itself tried to do some other stuff there, we could possibly have a collision. So we want to take that away. We want to get the every and in schedule types to work with multiple instances, you know, so you don't have to worry about that. And, well, one of the things we don't want to do is work on sidekiq-scheduler stuff that is outside of the scope of just scheduling, right? We want sidekiq-scheduler to just do that job and nothing more. Also, there's some test refactoring that needs to be done, and a little code-base refactoring. All right. Demo time. Now, I did make a video out of this, so, you know, stuff works. Hopefully, right. So really, the one idea that we had in mind for the demo was for an open source repository: if you have some issues, and let's say the last comment on an issue was from you, and you're not really trying to implement that issue, and maybe it falls into inactivity and the original poster lost interest or really doesn't care about it anymore, we thought that maybe after a month or two you might want to automatically close it. So that was the general idea we had, but since we didn't have a few months to generate all that test data, what we did is to use issues marked as invalid or duplicate. So we created a little sidekiq-scheduler job that runs every minute, checks those marked issues, and just closes them. So, all right. So right now, we go to the code base — I'm sorry if that's a little small — we run sidekiq-scheduler, and you'll see we have a cron job enqueued there which is called issue cleanup, scheduled to run every minute, and then, not in a minute but in a few seconds, we'll see here that it will start running, it will find the issues and just close them. And so that you also believe me, we're going to go over to the browser now and refresh it, and you'll see that they were closed. And obviously, if nothing is open, the job will run and do nothing, so it will not find any issue. And so if we go back, and now let's say a few months passed and there was no activity on that issue — we'll simulate that by marking it as a duplicate or invalid — the next time the job runs it will just clean it up. Okay. So as a bonus to this talk, we wanted to show you what it takes, or what tools you have available, to write a Sidekiq extension. So you've got these four events in the Sidekiq lifecycle that you can hook into. Like Gian mentioned earlier, we hook into the startup and shutdown events. So in the startup event we load up the configuration and start the rufus scheduler, and then in the shutdown, I don't think it does a lot — it just cleans up the thread and dies. Yeah. And then one thing that I think we didn't display here: when you run the web extension, sidekiq-scheduler will actually create another tab for you, and it will show you the jobs that are currently scheduled and also let you disable them. So you have, without logging into the server, a way to control it via the web. And so to do that, you just have to create a module extension where you hook into that method called self.registered, and then with app.get and app.put you can put the path in there, and it's a way for you to, you know, define actions and return HTML or whatever data you want to render there on that tab. And once you do that, you have to, in the initializer, I think, register the extension on Sidekiq. So on Sidekiq::Web you've got to register it and you can add the tab, and there you put the name and the path that it points to. And the path really is back here when you do app.get my-ext — that's the path you've got to use there. And yeah, you also have the option of adding locales in case you support various languages.
With sidekiq-scheduler it's just those four, I think. A little more. Okay. So we're just showing four. So you can, you know, support more than one language. And yeah, so this example is up on the Moove It repo, at that URL. Although if you want to check out a Sidekiq extension, maybe you want to check out sidekiq-scheduler. All right. Now, about feedback and collaboration, since this is obviously an open source project — I mean, we didn't create it, we just, you know, took over maintenance a few years ago. So over the course of the years it has had 43 contributors. And right now we have like six hours a week allocated for issues. So if you find anything wrong with sidekiq-scheduler, you'll get some time from us to, you know, to fix it. And as always, help and feedback is very appreciated. And as a last thing, we have some other open source libraries. We got Rusen, which is a simple exception notification gem that you can use for logging and sending errors in any Ruby app that you have. We have Ruy, which is a lightweight rules evaluator for context-driven conditional expressions. You can do some weird stuff with that. And then we have Angus. And the name for that has a real story: in Uruguay, we're a meat-loving country. We like to eat a lot of meat. And so Angus, you know, obviously is a famous cow, a good one. But it really doesn't have anything to do with that. We just thought it was cool. What it is, it's a REST-like API framework. And it will run on Rack. So if you're already using a Rack app, you can use that to serve your API. And it already generates — so while you're writing, it kind of generates documentation along the way. And, yeah, so you can use that for RESTful APIs. And then we have Fakeit, which is not really something for Ruby. It's an Android fake — but realistic — data generator. So you can generate names, emails, states, and country names, which, you know, will be fake, but will be kind of real. So, you know, you don't want to generate just a few letters for an email. You need a certain format. Yeah. So, but that is for Android tests, basically. Yeah. And that's it for us. Pretty short. So the question is, how do we manage the second resolution within the scheduler? Well, in fact, we use Rufus scheduler for that. Rufus just starts up a thread, and every some-milliseconds period — like, I don't know, 500, I think — it loops through all the schedules and asks if each schedule has to run. And in that case, it calls the Ruby block. So it's not like it hooks into some events from the operating system. It just makes a loop — it loops, loops again, loops again. So that means that your Ruby block will not run at 2 p.m. at a certain second and a certain millisecond. Okay. It will run at, I don't know, 190 milliseconds in. Okay. It's not really precise. It's not perfect. Yeah. Yeah. What it assures is that it will run in that second, in that second. All right. Thank you very much. Thanks for coming. Thank you.
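To make that answer about Rufus concrete, here is a simplified, self-contained sketch of the polling idea — an illustration of the approach only, not the gem's actual internals.

    # One thread wakes up every ~500 milliseconds, walks every schedule, and
    # calls the Ruby block for any schedule that is due. Nothing hooks into
    # OS-level timers, which is why a job fires somewhere within its second
    # rather than at an exact millisecond.
    Schedule = Struct.new(:next_run_at, :interval, :block) do
      def due?(now)
        now >= next_run_at
      end

      def fire!(now)
        block.call
        self.next_run_at = now + interval
      end
    end

    schedules = [
      Schedule.new(Time.now + 60, 60, -> { puts "every minute: #{Time.now}" })
    ]

    scheduler = Thread.new do
      loop do
        now = Time.now
        schedules.each { |schedule| schedule.fire!(now) if schedule.due?(now) }
        sleep 0.5 # roughly the polling period mentioned above
      end
    end

    scheduler.join # keep the process alive for the sake of the illustration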
When background job processing needs arise, Sidekiq is the de facto choice. It's a great tool which has been around for years, but it doesn't provide recurring job processing out of the box. sidekiq-scheduler fills that gap, it's a Sidekiq extension to run background jobs in a recurring manner. In this talk, we'll cover how sidekiq-scheduler does its job, different use cases, challenges when running on distributed environments, its future, how we distribute capacity over open source initiatives, and as a bonus, how to write your own Sidekiq extensions.
10.5446/31282 (DOI)
All right, let's get this party started. So this is reporting on Rails, Active Record, and ROLAP working together. My name is Tony. I'm a senior developer at Moby in Indiana. We manage the corporate cell phone accounts for Fortune 500 companies, everywhere from bill optimization to device procurement, including tier one tech support. I work specifically on the billing and reporting team, where I basically work with and massage the one million plus devices that we have under management, plus all of the billing data that goes back several years. What I essentially do is take information from various sources — carrier billing, tickets from the call center, orders placed with carriers — shove them into Postgres, sprinkle some magic on top, and out pop pretty charts and graphs that our clients love. So to better explain what reporting is, the best I can do is use my current experience with Moby. Way back in time around 2012, ownership came to the dev team and uttered the dreaded D word. They wanted a bunch of dashboards to report on all the data that we have. And not just one dashboard: dashboards for our client administrators, dashboards for our internal support staff, and dashboards for ownership to actually see how the company is doing. And oh yeah, they want user-defined filtering. You can slice and dice by just about any point of data that you want. And various other bits of scope creep that came up over the years. So where do we begin on this? Well, a couple of notes that the dev team laid out up front: we don't want bloat in our result set. That means we don't want actual Active Record objects coming back from the database. Active Record objects are kind of large and we don't really need all that data that's coming back, which means less memory. What we really want is generic, uniform data coming back from whatever system we build. This means a plain old bunch of arrays with rows of hashes with the information that we want. We can take that, call to_json on it, shove it into the flavor of the month JavaScript front end framework for charts, and we'll be on our way. So what is reporting anyway? Well, I'm going to assume most of our apps run against a relational database. A relational database holds data. Data is completely worthless to you and me. Humans can't really work with data. Computers work with data. What humans can work with is information. What reporting and analytics does on a high level is take data and convert it to information so humans can actually make decisions out of it. And more importantly, reporting answers questions. And this is probably the most important step in preparing your app to generate dashboards: to figure out up front what questions you want to answer. That means going to all the primary users and asking, what do you want to know? What information do you need? Because if you don't get that and you just start randomly throwing queries against Postgres or MySQL, you'll be throwing spaghetti at the wall and you probably won't end up with a result or a product that people really can use, because it doesn't answer the questions that they care about. So here's some examples of questions for each of the three stakeholder groups for Moby reporting. And we want to answer all of these plus more effectively. So where do we go about doing that? Well, fortunately, there is an industry standard term called OLAP, online analytical processing, that is built for data warehousing and analytics.
Now, commonly with more traditional OLAP products, everything is rolled up into memory in the form of data cubes, where all the information is pre-sliced and diced, pre-grouped together, everything up front, so you can run queries on it very fast. And OLAP commonly deals with aggregates: counts, max, min, averages. You don't really deal commonly with the individual rows from the data. You care about the grand picture. However, you commonly use OLAP as a tack-on to Oracle products, MySQL products — there's a bunch of enterprise-y stuff. And when you think enterprise, you think money, and we didn't have money. So there's got to be a better way. ROLAP to the rescue: this is relational online analytical processing. This is OLAP that runs with SQL, which is what our normal database talks with. It also allows for dynamic queries to be generated on the fly, so we can get any information we want out very quickly with just some setup. And what's nice about ROLAP as well is that we can work with both our historical data, which is all the billing information, and our transactional data, which is the support tickets that come through to our local support center, and the ever-changing lines of service that change on a daily basis — that's transactional, it changes all the time. So ROLAP can work with both, while OLAP is more designed for historical information, stuff that happened in the past: once it's set, you don't touch it anymore. So like anything enterprise-y, there is a crap ton of terminology that comes with it. Now, we're going to go through all of these, and what's nice about OLAP is that you can relate any OLAP terminology to SQL, and you can also relate it to something in Rails. So we're going to use that as examples to build up the vocabulary, because when you work with reporting, you want to think in OLAP terms, not just simple SQL. The first term is a fact model, also known as a fact table. This is the starting point to get information out of the database. In SQL, this is the from clause — the primary table that has the information that you want. In Rails, this is a standard model. So by looking at questions that our users want to answer, we can extrapolate out pretty easily what a fact model is. In this case, the support tickets table and our lines table. A dimension is a way to take your data and slice and dice it into various chunks. Commonly these are relations to other tables in the database — foreign keys linking to other tables. It can also be columns that live directly on the fact model as well. In Moby land, we have cost center living on our lines table. You can group by cost center to get a report out of that. Or a line of service has a carrier here: we have a carriers table, and you can link to that to get information out that way. So a dimension in SQL is always a group by, and if you want to jump to another table, it's also a join. In Rails, this is a standard column or a has_one or belongs_to relationship. So when you look at questions you want to answer for reporting, I always look for the word "by". I want to sum up something by something. So in this case, we have support tickets by type, so we're going to group by type. We have active lines of service grouped by carrier, so we're going to join against the carriers table and group by carrier. Next up is a dimension hierarchy. This is a way to go up and down a hierarchy of information in your dimension. The most common example is dates.
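Before going deeper into hierarchies, here is roughly how those two "by" questions map to SQL and to ActiveRecord. The table and column names (support_tickets.ticket_type, lines.active, carriers.name) are assumptions for illustration.

    # "Support tickets by type" -- the dimension is a column on the fact table:
    #
    #   SELECT ticket_type, COUNT(*) FROM support_tickets GROUP BY ticket_type;
    SupportTicket.group(:ticket_type).count

    # "Active lines of service by carrier" -- the dimension is a relation, so
    # it becomes a JOIN plus a GROUP BY on the other table's label column:
    #
    #   SELECT carriers.name, COUNT(*)
    #   FROM lines
    #   JOIN carriers ON carriers.id = lines.carrier_id
    #   WHERE lines.active
    #   GROUP BY carriers.name;
    Line.joins(:carrier).where(active: true).group('carriers.name').count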
So when you think about, I want a group of stuff — a group of orders grouped by the day placed — you just group by the day. But say I want to go up a level: now I want everything grouped by the month, everything from the quarter, everything from the year. And the hierarchy basically is a structure you build to go up and down, from more general groups of data to more specific groups. In Moby land, we have devices: a device has a model number, a manufacturer, an operating system, and a wireless technology. That's another example of a hierarchy where you can go up and down your groups of data. A dimension member — or I prefer calling them dimension labels — is the actual information that you can work with when you look at the result of a report. So lines has a carrier ID on it. We could easily group by that, but when you look at it in a table or a pie chart, a human can't make sense of that. So instead, the label that you use and shove into the pie chart is, in that case, the name of the carrier. If the dimension lives on the actual table — so for example, cost center — the cost center is the same thing you group by and also the label. Next up is filters. This is not really an OLAP-y term. This is because OLAP is commonly set up so that once you pre-build and slice and dice your data, you don't really have much maneuverability, unless you want to build a completely different data cube to group by or filter by something else. Because we're working with ROLAP, we can use the where clause in SQL to further shrink the data set down or get more specific information. So in SQL, your filter is your where clause. In Rails, this is the where method, standard Active Record scopes, or, if you use the Ransack gem, that results in a where clause as well, which can come in pretty handy. A measure in OLAP terms is basically the aggregate: your average, sum, max, min, count — pretty much any aggregate-based function that your database can provide. And what you commonly do is, the measure is also the column that you plug into the function. So I want the sum of the total charges for an order: that's the measure. It's the sum and the column total charges. Count is obviously the exception, because you don't really count on a column; you most commonly use count star in SQL land. Then finally, we have the metric. This is the report. The metric is just a fancy way of saying the end result of the question that you want answered. So in SQL, this is the whole query — this whole damn thing. In Rails, it's all of the Active Record, shoved together and executed against the database. To use in our examples, the entire question can be the metric, or part of the question can also be a metric. What you can do with ROLAP is start with a very simple and specific metric that you want to ask and then tack on dimensions and more filters later. So you can think of the concept of having a bank of pre-built simple metrics that, through maybe a user interface or just through your code, through configuration, you can start tacking more stuff onto. So you have a base case and you can expand that out however you want. So I know that's a lot. Here's everything shoved together. Hopefully the colors stand out. We have a complete question. The whole thing in this case is the metric. You have the sum for your measure. Mobile charges — we'll extrapolate the table out of that — that's your fact model or fact table. That billing period is your filter, and you're grouping by cost center, so that is your dimension.
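Here is that complete question with each ROLAP term labelled. The table and column names (mobile_charges, total_charges, billing_period, cost_center) and the billing period value are assumptions for illustration.

    #   SELECT SUM(total_charges)          -- measure: the aggregate + column
    #   FROM mobile_charges                -- fact table / fact model
    #   WHERE billing_period = '2017-04'   -- filter: the WHERE clause
    #   GROUP BY cost_center;              -- dimension: the GROUP BY
    #
    # And the Rails equivalent of the whole metric:
    MobileCharge.where(billing_period: '2017-04')
                .group(:cost_center)
                .sum(:total_charges)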
And a metric can have as many filters and as many dimensions as you want. Just keep in mind, the more you tack on, obviously the more complex your query is. But this is essentially all of ROLAP in a nutshell, with the SQL equivalent and the Rails equivalent. So that's pretty much the industry level of it. How do you go about implementing it? First, your data has to actually be organized in a way that is conducive to reporting. The most common way is called star schema. There's another setup called snowflake schema, which is basically star schema++. I prefer star schema because it's simpler, it's more direct, and it's kind of easier to visualize in your head. The idea is, if you take all your tables and chart them out on a graph or link them together, you have fact models, or your fact tables, in the center of it. And branching off of that is every dimension that you could possibly run against the fact table. And again, a dimension can be a simple relation or it can be a column on the fact table. But the end result is, when you map them all together, it looks like a star. Now, a little gotcha in this — it's pretty much set up for, you know, standard SQL — is that it's really hard to report on has-many relationships. It is possible. You can throw the magical distinct keyword in the front of your query. But if you're on Postgres specifically, that can easily result in invalid SQL, just because of the way Postgres works. The reason why it doesn't work with has-many very well is because when you are aggregating and joining, you effectively get multiple rows back, duplicate rows back. And then you're summing against those duplicate rows, and then your numbers are off. So avoid wanting to report on has-many relationships as much as possible. The other way around this is to use sub-queries, which is hella slow, and I don't recommend that. So using Moby's example, here's three fact tables that we have identified — a support ticket, a line of service, and a row on a bill — and the various dimensions that they can dimension off of. And again, some of these are actual relations to other tables and some of them are actual columns on the fact model. Notice on line — well, that actually almost works — we have a carrier and a carrier account. In Moby, a carrier account has a carrier. So why is carrier a dimension on line, and why does line have a separate relation for that? Well, you want to avoid doing multiple jumps as much as possible with ROLAP. The more joins you do, the slower the result will be. So what we do in Moby is denormalize a lot of stuff that we want to group by and put it directly on the fact model. Now, we could get around this with a has_one through and effectively have a carrier relationship directly on line. The SQL can be generated just fine, but again, we're resulting in a double join at that point. Also, something interesting to note is that the created date and the bill date for these two fact tables aren't actual date columns. They're actually relations to another table. This is called a date dimension. And the idea behind this is, especially for, like, an example, a warehouse of sales information: you want information broken down by year, by quarter, by weekday. It's hard to do that with a regular date column and have it be fast. You can tell MySQL and Postgres, take the date column and give me the weekday out, and group by that. But it has to be done on the fly.
And it doesn't use the standard index if you just slap one on the date column. You can make specialized indexes for that, but again, Postgres and MySQL still have to calculate those values on the fly. Instead, you have a separate table with a row for every day that you effectively care about. So in the case of Moby, it's the beginning of Moby's existence to 15 years from now, just to cover all of our bases. And instead of having a date column on support tickets, we actually have a created-at ID that links to a date dimension. And so what we can do then is say, give me all the support tickets broken down by quarter. So what we do is, we join against the date dimensions table and group by the quarter column — that's the label. And now we can easily report on that. This also allows you to — since each row in the date dimension already has the various parts of the date broken down, you effectively have a very complex hierarchy that you can go up and down the data with as you see fit. So great. Active Record can do all that, right? Why is Tony up here with his free ticket to RailsConf? So it is true that Active Record does provide all the information needed to actually construct ROLAP queries. It can do a join, it can do a grouping. You can reflect on all the relations. You can ask for all the attributes on the model. And you can select out very specific columns using the pluck method. However, it does have some limitations. There is really no good way to group by the non-aggregate columns programmatically without manually putting this in. This is a specific gotcha for Postgres. MySQL doesn't have this problem — you can cheat. But with Postgres, if you have an aggregate in your select clause and non-aggregate columns, you must include those columns in the group by, otherwise it considers it invalid SQL, because it technically is. And so you have to make sure you balance the select and the group by. And Active Record can't really do that. In fact, it's mostly built for the count method, the maximum method, the minimum method. You just plug in one column and that is the number you get back. You don't get the grouping. You don't get the nice dimension label with that. There's also no good way to just have your models be described in ROLAP-y terms. Like, yes, you have a has_one, you have a belongs_to, but there's no way to actually say that these are dimensions. You can't actually just go out and list all of them without iterating through every possible relation in the table plus the standard attributes. And there's no really good way to just store premade queries very well. Yes, Active Record relations do lazy loading, so you can start tacking on a select, you can override the from, you can put on a where, and it won't execute until you actually need to iterate over it. But there's no way to just grab something real quick and then start tacking on stuff pretty quickly. So what could we do? Well, we could hard code all of the queries. That's great, except good luck trying to define custom where clauses and custom joins, because of the requirements we had. We could build a query builder ourselves, which is actually what we ended up doing. However, commonly, I would think a normal Rails developer would just start tacking on extra methods to Active Record to say, you know, give me all the dimensions, and effectively dirty up the entire class at that point, which we really don't want. Or we could switch to Sequel. That's a great gem.
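A quick sketch of the date-dimension query described above; the table, column and association names are assumptions for illustration.

    # support_tickets.created_date_id points at a row in date_dimensions, which
    # already has year, quarter, month, weekday, etc. broken out as columns.
    #
    #   SELECT date_dimensions.quarter, COUNT(*)
    #   FROM support_tickets
    #   JOIN date_dimensions
    #     ON date_dimensions.id = support_tickets.created_date_id
    #   GROUP BY date_dimensions.quarter;
    #
    # With a belongs_to :created_date association on SupportTicket:
    SupportTicket.joins(:created_date).group('date_dimensions.quarter').count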
Sequel is a good replacement for Active Record, and a much better way to build and define very complex queries on the fly compared to Active Record. However, that ship sailed quite a long time ago, and I don't think management was really up for us rewriting the entire app. So we had to go another way. And so what we did is we defined our own library for reporting. What I did recently was extract out most of the non-Moby logic — sort of clean-roomed a lot of the stuff — and implemented it in an open source gem. I call it Active Reporting, because I'm terrible at naming. But basically this provides a DSL-like system so you can describe stuff in your app in ROLAP-y terms. It's a very lightweight DSL, and what it does is use Active Record: it asks Active Record for bits of information about the database and about all the tables, tells it how to build the query for you, and it just executes it directly on the database. And instead of Active Record objects coming back, you just get an array of hashes at that point. Very simple, lightweight, and a small data set back with the information that you actually want. And it doesn't really dirty up Active Record too much. I think it adds one method, maybe two, at this point. I would say it's mostly production ready. Video games got in the way of me building a demo app. But 0.1.1 is out. The DSL is pretty much in a good spot. Documentation is pretty much what it needs at this point. But this is effectively how it works. For every model you have in your app, you have a fact model to go with it if you want it to be reported on effectively. I'm calling this a fact model instead of a fact table because we're modeling how it's used within ROLAP. Because Rails is convention over configuration, the idea is you take your Active Record model name plus FactModel as your class name, and it'll just know to link to the proper model. And there's obviously a way to override it if you're so inclined. But the idea is all the reporting stuff gets shoved into these classes and not your regular models. So with your fact model, you then define the dimensions that you want to work with. Now, why are we whitelisting all this stuff instead of just saying, hey, Active Record model, give me all of your relations — anything that's a has_one or belongs_to we can dimension by, anything that's an attribute we could probably group by — let's just use that? Well, what if we want a user interface where we can change the charts and graphs on the fly? Instead of, in Moby, I want all of my lines grouped by carrier, I want them instead grouped by carrier account. And maybe we'll have a dropdown saying, I want to change this report altogether, change the dimension. So what we can do then is, for each fact model, you can ask it for what dimensions it can work with. And the gem knows if it needs to join against another table because it's a relation, or, if it's an actual attribute, to just group by that. You can also define the hierarchy and the actual default dimension label. So if a dimension is a relation, the gem will assume that the default label will be name. So line joins to carrier, the carrier table has a name, we use that as the label — or we can override that if we see fit. And then we can also define the hierarchy, which effectively makes more dimensions. And so we can have a nice line graph of orders over time. And I want to see it by date. Now I want it by month. Now I want it by year. Now I want it by quarter.
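A rough sketch of what such a fact model class might look like. The class and method names below follow the talk's description of the Active Reporting DSL, so the released gem's exact API may differ; treat it as an illustration and check the gem's README.

    # By Rails-style convention, MobileChargeFactModel links to the MobileCharge
    # model; that can be overridden if your naming doesn't line up.
    class MobileChargeFactModel < ActiveReporting::FactModel
      # A relation-backed dimension: reporting on it joins the carriers table
      # and uses carriers.name as the default label.
      dimension :carrier

      # A column dimension that lives directly on the mobile_charges table.
      dimension :cost_center
    end

    class OrderFactModel < ActiveReporting::FactModel
      # A dimension backed by a date-dimension table, so reports can drill
      # between day, month, quarter and year.
      dimension :placed_date
    end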
So this allows you to set up a hierarchy to drill up and drill down your data as needed. Now, dimension filters — again, these are just where clauses. And the fact model can then whitelist stuff that you can filter by. And again, why are we whitelisting this stuff? Because this can be possible user input. Scopes on an Active Record model are just glorified class methods. What's also a class method? delete_all and destroy_all. So we don't want to just blindly allow any input coming in from a form to call methods that are not really safe. So instead, we whitelist on the fact models what a user, or whoever is building the report, can actually filter stuff by. And this can be done by just listing out pre-built scopes from the model — it'll just whitelist those. You can define your own dimension filter using the same scope DSL, so throwing in a lambda with an input if you're so inclined. This allows you to not have to tack on all your known filters to your model if you don't need to use them in the rest of the app. You can throw them all into the reporting side. That keeps your models slimmer. Or, if you happen to have the Ransack gem loaded up, you can whitelist various Ransack calls as well. The other benefit of specifying dimension filters manually is that now you have effectively full control, or mostly full control, over what the where clauses will be in the report. Active Record isn't always the smartest at building optimal SQL on the fly. So if you can control that, to maybe force using specific indexes or maybe force a union instead of an or, you can do that. So that's the setup. Here's the actual execution. The gem has a concept of a metric. And again, a metric is the question you want to answer. You build a metric by giving it a name as the first argument. You tell it the fact model you want the metric to be based off of. Then you can pass in dimensions and filters. You can set the aggregate, which defaults to sum. You can set the measure — the actual column that you want to sum, max, min on — which defaults to value; I think you can override that as well. But this builds an object that holds the question and all the information that it needs to reach out to the fact models, to then reach out to Active Record, to get all the information and build the query that you actually want to run. And then finally, you shove it into a report object. A report is effectively just a glorified query runner that takes the metric and says: build the SQL, then ActiveRecord::Base.connection.execute(sql), go. Yay, here's your very basic result set back. Now, why are these separate objects? Well, as I mentioned on a previous slide, a metric can be a very simple question you want to answer, and then you tack on more stuff. The Active Reporting report object will allow you to take a metric and then merge in user input from the interface to say, I want to tack on, you know, the carrier dimension, or no, I want to change it to something else. Here's my form of all my filters for these reports — take that hash, shove it in there. It'll go through the whitelist and apply the where clauses dynamically at that point. This is the power of ROLAP, again: because you can define a where clause, you can define pretty much anything on the fly as long as it'll result in proper SQL and get data back. So we built those two objects. This is the resulting SQL. The select clause is very specific to what you want. We are summing on the total charges column. The gem will give you whatever the metric name is as the aggregate result column.
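Putting that construction into code — again, a sketch based on the talk's description, so exact argument names may differ slightly in the released gem; the fact model, columns and result values are illustrative.

    metric = ActiveReporting::Metric.new(
      :total_charges,                  # the name, also used as the result column
      fact_model: MobileChargeFactModel,
      dimensions: [:carrier],
      aggregate:  :sum,
      measure:    :total_charges
    )

    report = ActiveReporting::Report.new(metric)
    report.run
    # => [ { "carrier" => "AT&T",    "total_charges" => 1234.56 },
    #      { "carrier" => "Verizon", "total_charges" => 987.65 } ]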
We are dimensioning on carrier. So we are going to then grab the carrier label, which is the name. The gem has a nice identifier option — which you can turn off — where you can get the identifier back as well. So if you want to build filters on the fly, embed the identifier column of the dimension in a pie graph: you click the pie graph and then your filters magically update — something to have sales impress potential clients with, and then they never use the feature later anyway. But anyway, we build the from clause from the fact table. We then have to join on our dimension. We apply our dimension filter, our where clause, and then we finally do the group by, because we're in Postgres and we have to be a valid query. And for the end result, we call .run on it and we get back an array of simple hashes. Then we call to_json on it or whatever. We can massage it later with another service object if we want to, and then spit out a pretty chart, a table, a large number — whatever we want to do. That's effectively all the gem does, because, again, reporting isn't about getting a table of rows back. We just want back aggregates of the actual information that we care about. And so finally, just some pro tips in general for databases, if you want to do reporting on them or any other form of getting information out. As mentioned before, try to avoid double jumps as much as possible with your queries. Sometimes denormalizing is a valid solution. It's much easier in Moby to ask, give me all my active lines by carrier, because carrier ID is directly on the table. We don't need to do a double jump at that point. And we just keep the carrier ID in sync from a very simple Active Record callback, or you can even just use a database trigger if you're so inclined. You can also cheat around some has-manys by implementing counter caches — either the built-in counter cache feature, or just manually generated pre-built counts through background jobs or whatever processes you want. That way you have some data pre-built and pre-set-up for you, so you can easily aggregate against that. Also, index wisely. If you missed the previous talk about database optimizations with indexes before this, look that up. It was pretty informative about when to index, when not to index, and current gotchas with that. But the common rule is: if you're going to dimension by something and it's a foreign key, you might as well index it. It would also probably help to index common filters that you're going to be filtering by a lot — like I said, you can whitelist the filters that you allow users to actually filter by, and you can then use that to determine, I'm probably going to need to index these columns or these groups of columns. And use explain analyze as much as possible. This query is taking 50 seconds — why is it taking 50 seconds? Well, we have tools that our database provides that tell you exactly what the database is doing, and oh, I missed an index — now it's suddenly half a second. Yay. So use the tools at hand to optimize queries, because this is still SQL, it's still a regular database. All we're doing is dynamically building a query to run. Also, as you grow — you go from a small app to a medium app to a large app — look into read-only replicas for your databases. Anything reporting related, have it hit the read-only replica, because you're not doing writes, and your master can take the day-to-day operations at that point.
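To make the indexing and EXPLAIN ANALYZE tips concrete, here's a small sketch; the table and column names are assumed for illustration.

    # In a migration: index the foreign keys you dimension by and the columns
    # you commonly filter by.
    class AddReportingIndexes < ActiveRecord::Migration[5.0]
      def change
        add_index :lines, :carrier_id
        add_index :mobile_charges, [:billing_period, :cost_center]
      end
    end

    # Then let the database tell you what it's actually doing:
    #
    #   EXPLAIN ANALYZE
    #   SELECT cost_center, SUM(total_charges)
    #   FROM mobile_charges
    #   WHERE billing_period = '2017-04'
    #   GROUP BY cost_center;
    #
    # A sequential scan over a big table in that output usually means a
    # missing or unused index.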
Also, if you're in Postgres, look into sharding, or even schema separation if you're a multi-tenant app. That way you have physically less data for the database to work with per client. So you only have slow queries for your biggest clients, and the rest of the clients that are smaller don't really have to take a hit. And you can focus on optimizing that one client as opposed to having a bunch of unhappy users at that point. So that's about all the rambling I have. The gem is on GitHub and released on RubyGems as well. Copies of the slides are there if you're so inclined — I have a GitHub repo called show and tell; that's where I put all the talks I've been doing. I don't Twitter much — I follow people — but if you want to, I'm on Twitter. Questions, comments, hate mail, death threats, anybody? Okay, we're done early.
It'll happen eventually. Someone will come down with a feature request for your app to "create dashboards and reporting on our data". So how do you go about doing it? What parts of your database should you start thinking about differently? What is "reporting" anyway? Is ActiveRecord enough to pull this off? Let's go on a journey through the world of Relational Online Analytical Processing (ROLAP) and see how this can apply to Rails. We'll also look at database considerations and finish with looking at a light DSL that works with ActiveRecord to help make your data dance.
10.5446/31283 (DOI)
Music Good morning everyone, thank you for coming. My name is Mark Simono and I work remotely as a developer for Stitch Fix. In fact, over the last five years and the last three companies I've actually worked remotely from my closet in a suburban house near Austin, Texas. And I've calculated that over that time I've saved between 1,500 and 2,000 hours of travel. Just to give you the scale of that, that's nearly a full-time job for a year, which is kind of mind-blowing. And in that amount of time, I've spent some time with my family, but I've also done something near and dear to Geeks' heart and that's developed a bunch of hobbies. Now I know geeks that are collectors and film buffs and comic enthusiasts. I, in particular, like music and board games and things like that. But I think that there is a classification of hobbies that are distinct and those are crafts. I think a craft is different in the fact that it produces something that other people value. And maybe at first the other people that value it are like your significant other and your mom. But overall, I think it's something that as you progress, you are able to produce things that other people value. So I contend that we as developers in this room are crafts people. And we take the bits and the bytes and the requirements and the ideas and we combine that with our knowledge and skills and we make apps. And so I think we can learn from other crafts. And in case you couldn't tell from the title of the talk, the craft that I've taken up lately and that I'd like to try to learn from is woodworking. Now just to give you a little bit of background from me, I have kind of been roughly interested in woodworking for a long time. And didn't even realize that my father and my grandfather were amateur furniture makers for most of my life until well into my 30s. This is a picture of my dad working on my grandfather's shop Smith from the 50s. And I was really excited to learn that he had this, that he was a resource for me. But he also lives a few hours away from me. So whenever I kind of had this notion of being interested in woodworking, I couldn't exactly just pop over to his house and learn from him. So I sat on it for a few years until a friend of mine who I played D&D with posted this. This is the Geek Chic Sultan. Now this is the Cadillac of board game tables. This thing is absolutely gorgeous. It is a marriage of form and function. It's amazing. They know it too. It's priced like a Cadillac. According to the website, it will cost between $25,000 and $30,000. And so that's a little out of my price range for a table. My kids would like to eat and possibly go to college, so out of the cards for me. But they had other tables on their website. So I started looking around and I got interested in this idea of having a board gaming table. And I saw this one. This is the Vanguard. A much simpler table, much more affordable. It's only $3,000 or $4,000. It still has a bunch of the same features. It looks like a dining room table. You can put a top on it. It even has this little groove in the side that you can put cup holders, kind of hook them into. And when I looked at this, I thought, hey, that's pretty cool. But $3,000 or $4,000 is still a bit much. So I'm going to spend $5,000 or $6,000 making a workshop so I can build it. Build versus buy, right? Yeah. So I bought a bunch of tools, collected them, and built a few little things here and there to kind of build up my skills. These are relatively simple pieces. 
And after I'd done this for a little while, I kind of had a few skills. I decided, all right, it's time to find some plans for a table or come up with something and figure this out. So I want to build a table. I went to thewoodwisperer.com and got these plans. And I found out that a table is pretty simple. It's as much as there's some complexity in putting it together. It's really just legs, aprons that join the legs together, and then a top. And in this particular table, the top's a little complicated, but it's still just the top. And so I want to walk you through the process that I took to build this table and some of the lessons that I learned that were really surprising to me from woodworking so that I think we can learn from them as developers. But before we get going, I need to say something about safety. I put this first because if you don't pay attention really to anything else, I think this is actually pretty valuable. So every of the several hundred hours of YouTube videos that I've watched on woodworking, almost all of them at one point or another mention safety or have safety implied. And they talk about two different kinds of safety. They talk about immediate safety, which is, hey, don't be Frodo. Don't get a piece of wood thrown through your abdomen and protect your eyes. It's been said that you can shake with the wooden hand, walk with the wooden leg, but you can't see with the wooden eye. So these are important things to protect. But there's also long-term safety that's involved. And that's stuff like hearing protection and dust collection. And dust collection is actually one of the more insidious safety hazards with woodworking. The dust that's between half a micron and two and a half microns in size is the kind of stuff that gets in your lungs and it's big enough that your lungs can't filter it out and small enough that it can get all the way down. It's a known carcinogen. It's a real problem. And it's the stuff you can't see. So if you see all that dust and it's a big mess in your shop, that's fine. That's the stuff you can sweep away. It's the stuff that you can't sweep away. That's the real issue. So it actually requires a significant investment to really take care of this dust well. This is Jay Bates and he's pointing at his dust collection system that he installed. He probably spent a few thousand dollars on it. And it took him a lot of effort to put the piping all the way down right next to each one of the tools in his shop. And as a result of this, he actually has cleaner air in his shop than he does in his own house. But he spent this money not because a client paid him to and not because he could suddenly build something that he couldn't before. There's no new features here. This is just because he still wants to be woodworking in 30 years. So all of these different safety categories have something that you need to invest in, but they also have habits that you need to form. And I kind of put this roughly on a scale. I think it's actually pretty easy for me to remember that I need to keep my fingers away from a blade that's spinning 30,000 times a minute back at me. But I find it way more difficult to figure out that I need to put on my mask for the dust. That said, the habits are what keep me healthy long term. And so the first lesson that I want us to talk about is that it pays to invest in your safety and develop these strong habits. I don't mean this in some weird code way. I mean this in an absolutely direct way. RSI, lower back pain, those are real things. 
Vision problems are real things. Depression is a real thing. These are all things that actually come with our job based on the habits that a lot of us take. And so it behooves you to be willing to spend money and invest in your safety. To make sure that you're willing to spend a few hundred dollars on an ergonomic keyboard, whether or not your company will pay for it. By the way, Stitch Fix will pay for such things if you're interested. But it behooves you to be willing to buy a really good chair. It's actually a really, really great idea to buy a solid monitor. Those are all good things that help your safety. But there also have to be good habits. Like, you have to sit right in that chair. You need to look away from the screen periodically. And you need to go see your therapist or take your medication — do those things. Those are all things that allow you to still be computing in 20 years. So if you don't hear anything else, please hear at least this. All right, safety lesson over. Let's move on and actually start building. Okay, the first thing we're going to build is the legs. Now, these legs are four identical pieces. They're all rotated differently, but they're all identical. And what we need to do to make those identical pieces is the age-old thing: measure twice, cut once. But that's not actually how woodworking typically operates, especially with power tools. We could go and take a piece of wood and measure on it and mark a line and then cut. But it wouldn't actually get us the accuracy that we want. We'd get within maybe a sixteenth of an inch for each of those. And we want this much more accurate than that. So the key element to any power tools in woodworking is repeatability. So for each of those three things that we want to do — we want to have legs that are the same length, we want them to be a perfect square, and we want them to all have tapers on the inside — we're going to do repeated operations. And the first operation is the length. This is Jay Bates's miter saw. It's just a saw that cuts across the wood. And you place the wood right up against the back there. And this little thing right over here is called a stop block. And so instead of measure twice, cut once, you're measuring once and cutting a bunch of times. You place the leg right up against that piece, push it up against the back, and then pull down the miter saw, and you get four pieces that are exactly the same, within a few thousandths of an inch at the most. You just can't do that another way. There's not a consistent way. And I can do this within a few minutes. It would take me much longer to measure each one and cut them and be really careful about it. So I can do the same thing whenever I'm trying to figure out, hey, how am I going to do the width? This is the fence on my table saw. And that fence just keeps a parallel distance from the blade for whatever I set it at. And I can then take my piece and feed it through and cut off something parallel to one side of the piece. So I set the distance, feed our leg through, and then rotate it 90 degrees and feed it through, and I get a perfect square. It takes a few minutes at the most. Yet again, I'm getting both speed and consistency. Finally, this is a jig. This is the tapering jig that allows us to cut the tapers the same every time. And all a jig is, really, is something that allows you to do repeated operations quickly — usually unique operations, things that are distinct. And so this jig took, I don't know, three or four minutes to put together. He made a cut.
He put a little piece on the end and that was it. This allows us to make these tapers the same angle and the same length every time. And I love this, because whenever we were done, we had four identical legs. And so what I learned from that is that repeatability really increases speed and consistency. Now, I think we kind of naturally understand this as developers. We use rails new and it generates us a site way faster than we could otherwise, and it's completely consistent. We have a standard kind of way of doing things. That's awesome. That's totally a repeatability thing. When we deploy things, it's one time, one action that we're taking, and we're able to do it very quickly and repeatably. But we also have ways that we can apply this on a much smaller scale. What if we make a jig the next time we need to replace or fix some data that's in production? We've had data that got out of sync, and we needed to wait a little while before we deployed the bug fix because of how the bug worked. And so we made a rake task, and we were able to test that against staging and then we were able to run it. We did it a few days later, but right after we deployed the fix, we were able to run it again and clean up any problems. It totally works, and it increases our speed and consistency versus logging in via the Rails console and just doing it manually every time. So we're done with the legs. Now we're going to keep working on the aprons. These aprons — whenever I was doing this, I actually went up to my dad's shop and we milled everything. That means we just made it square. And I took these back home, and the only thing that was left to cut on them is the little groove that allows you to put in the cupholders. And I was doing the groove a little bit differently than the plans here, and I just ended up getting stuck. I was super concerned that the way I was going to solve this problem wasn't going to work. And so the longer I waited, the more fears crept in. And I ended up having this fear that I was going to have this uncorrectable mistake, that I was going to do something that was going to mess this piece up permanently. I was never going to get this table finished — which is not true. I could always go buy another piece from the lumber yard, right? There's no such thing as this uncorrectable mistake. But there would be this wasted effort, right? That sucks. I don't want to think about that. And so I ended up having all this fear of wasted effort. But that's probably not true as well. I might have a little flaw, but in all likelihood, I probably wouldn't ruin a piece. And so I ended up just having this fear of a flawed final product, which I think is also fairly unbounded, because all final products are flawed one way or another. And so I finally just psyched myself up and I said, hey, I'm going to cut this groove. And I passed three of my pieces over, and on the fourth piece, I heard the router scream a little bit and crunch something. And I turned it off and looked at it, and it was split, and I did this. Yeah, it was pretty bad. I was really upset. And then I took a big deep breath. And I thought, okay, now all those problems, all those worries, everything that I had — those are all gone. The only problem I have right now is a split piece right here. That's all I've got. And so I took a second, I glued it, I clamped it and I sanded it, and it looks like this — which, if you look really close, you can tell that it's split. But only if you look really close.
So the thing that I learned there was that a known problem is a lot better than a bunch of potential ones. It's really, really easy to get caught up in potential problems. It's really, really easy to get paralyzed by potential problems. And this happens a lot, especially as I consider how big some of our production apps are and how much we have to manage. Sometimes it's really scary to deploy, especially a big change. But moving forward lets you get to the real problems as opposed to all the potential ones. Now, being concerned is okay. Having some concern is all right. It's the fear that paralyzes you that's the real problem. All right. So we've got the legs, we've got the aprons, it's time to make the top. We have two problems with the top. I have to both make this flat and join it together really tightly. It's going to be very visible. And so I want to make sure that it's really, really, really well joined. And I hadn't milled this stuff up with my dad, so I had rough lumber. And it was time for me to get ready to mill this. And so I had a couple of options. The first thing that I decided to do was, let's see if I can use the small jointer that I bought for my shop. And a jointer, in case you don't know, is basically just — it's got two beds. One of them is just a little bit lower than the other, and there's a blade that's spinning really fast right here. And as you pass a piece over it, it nibbles a little bit out. And so if you make enough passes, you end up with a flat board. It takes out all the little waves on the board. And it takes out the big scoops as well. Well, if you have a really long board and a really short jointer and you pass it over, it doesn't really take out the scoop. So this wasn't really the right tool here. So I thought, hey, maybe I can do this with a hand plane. I actually know that it's possible to do this with a hand plane. A hand plane just basically scrapes off things and has a nice flat bed, so it keeps things flat. But you have to keep checking it. It's not big enough as well, right? It's not like I have a hand plane that's seven feet long. So I needed a big piece that could tell me, hey, what parts aren't flat yet? And I realized that I was struggling as I was trying to sharpen this, as I was trying to use the hand plane. It wasn't quite sharp enough, or I couldn't keep it sharp enough. And then I realized I also had no way of knowing if it was flat, because I had nothing in my shop that could tell me if it was flat. And I ended up actually going to a nearby shop, using a $4,500 jointer that was really well tuned, and getting this done fairly quickly. And I bet every one of you thinks that I'm about to turn over the next slide and that it says, use the right tool for the job. Right? Nope. I don't think that's the case, because there are two right tools here. I could have used the hand plane or I could have used the big jointer. The difference was I was really fighting the hand plane, and the jointer was tuned up correctly. And so what I learned was that if you're fighting your tools, you need to focus on improving them instead. How many times have you fought with your local database? How many times have you pushed rebuild on CircleCI because you have some flaky tests? How many times have you fought with your command line interface? Or your vimrc, right? These are all things that we fight with sometimes that keep us from doing the real work that we want to do. That's not what we want to be thinking about.
But if we maintain some of those things and keep them sharp, it's better. Ben Orenstein talks about how he keeps a file on his computer called sharpening.txt, and he has a shortcut that allows him to append to that. And anytime he notices something that feels just a little bit inefficient, or isn't working quite right and is getting in his way — somehow he's fighting it — he just makes a little note. And every morning he spends 15 minutes and works through as many items in that file as he can get through in that time, to try to sharpen it. And so he's sharpening every day. He's making his environment better. He's improving what he has to work with. Okay. We've got all the four pieces. They're milled. They're ready. Now we need to join them together. So I wanted to make these with miter cuts. These are just like 45 degree joints. And I wanted it to look something like this. And you probably can't see it from the back, but this actually has a bit of a gap. This is the cut that I made first. It was with a friend's miter saw. It wasn't all that great. It was an inexpensive miter saw. And so I couldn't get it tuned up to the point where it would make a straight up-and-down cut and where it would make a perfect 45 degree cut. And so I ended up with this gap here. And I started thinking, okay, how can I solve this problem? Because this is actually a pretty difficult problem, to get something dead on so that they really fit together perfectly. And I said, well, I can buy a better miter saw. That's $400 to $600. You can go all the way up to 1500 if you really want to for the really awesome stuff. On top of that, I'd need to tune it up and build something around it so that I can support these really long pieces. And once I did all that, it would work great, but that's a lot of work to make eight cuts. So I said, is there another way? And I think there might be. So I said, what about a handsaw? This is a $15 handsaw off of Amazon. That is a little piece of scrap wood that's perfectly 90 degrees. I clamped it on. I put it at a dead-on 45 degree angle that I was able to gauge with a bevel. And I just cut it. After it was done, I took my plane and I made it to where it was exactly on that line. I knew it was dead on 45 degrees. And when all that was said and done, I got a good joint out of it. Now, what I learned from this was that knowing my tools and my techniques really improved my effectiveness. And I think that this applies to us, whether we're dealing with clients and we need to be able to inform them: hey, you actually don't need to do 800 cuts here. You actually don't need to be able to support millions and millions of customers. What you need is this solution that handles just a few customers. This is all you need. This is a less expensive solution that's just as accurate, or maybe even more accurate, than that other solution. It just doesn't scale up, but you don't need to scale yet. That's okay. There's also the idea that we need to know our development tools. Like, if you don't know your debugger, if you don't know how to use it, you don't get to use that whenever you need to solve a problem that way. It's just not in your wheelhouse. It's not in your tool chest. You also, like, you may think, hey, I'm just going to write puts. Well, puts is fine, but if you don't know how to use that really effectively either, that's no good. Tenderlove actually wrote a blog post a couple of years ago called "I am a puts debuggerer." He talked about how to use that in a really, really, really powerful way.
He talked about all the things that are really interesting about even just using puts. Puts feels like a hand tool to me. It's really, really simple, right? But it can be powerful if you know how to use it. So knowing both of those options is a huge deal. Okay. We're back here. This is at the lumberyard. And when I started, I went here and I grabbed a bunch of boards, and they were really, really rough. And I got to turn a bunch of boards into a fine piece of furniture. And I'm really proud of it. Thanks. I'm really proud of it. And I finally kind of feel like a woodworker. Like, I was pretty excited about this moment. But I'm not feeling like a woodworker because I wanted to be a woodworker. I feel like a woodworker because I wanted a table. And I got there, right? And so I think having a specific project really facilitates and motivates learning. So if you're really looking to learn something — whatever that is, whether it's a technology or a process or a new language or a new database, it doesn't matter what it is — having a specific project, I think, will really help push you forward. Now, maybe some people can do a class, and that's fine. I have no problem if that's the way you do it. But if you're really struggling to get motivated, I find this to be very motivating. All right. Let's review real quick. So first thing, safety matters. Make sure you pay attention to your safety. Invest in it, and develop your good habits. Focus on repeatability. Make sure you find ways to use things in a repeatable way. It doesn't just happen by accident. You have to actually look for ways to do it. If you're stuck, focus on finding the known problems rather than just thinking about all the things that are unknown and could go wrong. Improve your tools whenever you find yourself fighting them, rather than just kind of dealing with crappy stuff. Know your options so that you can be more effective, and have a specific project so that you can motivate yourself. And then, if I'm being honest, problem solving is a skill that can be developed in any craft. Now, I hope that I've inspired maybe one or two people in here to maybe try woodworking. I think that'd be really cool. But no matter what, I hope that you are able to either observe or participate in a craft that you can learn from. Because I think that that actually is something we can take back to our day jobs. Thank you very much. My Twitter handle is marxim. You can follow me. I also work at Stitch Fix. And we are hiring, if you want to have some extra time to develop some of these crafts and work remotely. We hire bright, kind and goal-oriented people. And I would love to talk to any of you about that, or woodworking, or board games, or programming. Come find me after. Thanks very much. Thank you.
Woodworking has experienced quite a renaissance as of late, and a very popular style involves using power tools for rough work and hand tools for detail and precision work. Using both defines each woodworker's speed and ability to produce beautiful/functional pieces. The same can be true of developers. Automation, convention, powerful IDEs, generators and libraries can make each developer go from nothing to something very quickly, but what about diving deeper to get the precision, performance and beauty you need out of your applications? Come find out.
10.5446/31286 (DOI)
So, can everyone see screens? Because I'm going to try to do a live demo, because you know, let's live on the edge a little dangerously. So I'm Wajahammerly. I go by Thagamizer. Most places on the internet, Thagamizer was taken on Twitter. So it's the Thagamizer. I really like it when people tweet at me, questions, comments. You're completely wrong about that type of thing during talks. And my phone is somewhere over there, and I believe Twitter's turned off on the laptop, so y'all won't see it anyway. So feel free to go nuts. And I blog once a week, this week's going to be a little challenging, at Thagamizer.com. Various topics, DevOps, Ruby, the art of speaking, all sorts of stuff. And I work at Google on the Cloud platform. So if you're interested in Google Cloud, Kubernetes, other things we do, machine learning APIs, I'm happy to answer questions. I also have a plethora of opinions that I am more than happy to share with you. You can come find me, we'll chat. And since I work for a large company, oh, my clicker, there we go. The lawyer cat has to say that any code in this talk is copyright Google and licensed Apache V2. It's currently on GitHub, but it will all go up on GitHub in the next day or two. And I will post about it on my blog when it gets there or post it on Twitter. So hopefully, since the room is relatively empty, people aren't just randomly here because they didn't like the last talk they went to or walked in a little late. But what is NLP? Like, Aja, why did you use the three-letter acronym? NLP is natural language processing, which might just get you a bit closer to the idea behind my talk. But it's still a little bit fuzzy for me. So let's go to Wikipedia. Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human slash natural languages, and in particular, concerned with programming computers to fruitfully process large data corpora. So that is one exceptionally long sentence. It has a lot of big words, so here's the definition I use, teaching computers to understand and ideally respond to human languages. And by human languages, I mean things like English, Japanese, American sign language, British sign language, all things like that. Language is humans use. So to echo millions of middle schoolers everywhere, when are we ever going to use this? Why should I care? And the reason is bad NLP is already here. Who has had to interact with a phone system where they're like, say your reservation number? And you say it, and it's like, I don't understand you. Say it again. I know I have. I remember screaming at one in a parking lot at 11 p.m. at night to make sure that my flight reservation went through. Who here has logged onto a website, and you're on the website, and magically this window pops up, and it's like this disembodied person and this sentence, if you need help, I'm here to help you. And you're like, I don't understand if there's really a person behind that or not. I had that happen to me last week. It was creepy. And on all these things, natural language processing of some sort, frequently bad NLP, is likely involved. But the promise of NLP is actually better user experiences. I want to live in a world where instead of having to teach people how to interact with computers, we can teach computers how to interact in ways that people already interact successfully. 
And so an example of non-ideal NLP, but one many of you are probably familiar with, is this is my favorite slide in the entire talk. Computer, T, Earl Gray, hot. So that right there is an example of bad NLP. The way I phrase the request is very specific to what I wanted to have happen. I wouldn't use that phrase with any other person ever, but when that was on the air, that seemed really futuristic. Turns out we can do better than that already. These random things that live in your house and potentially order dollhouses for you or other things are in lots of people's houses right now. I saw a great talk by Jonah and Julian yesterday where they were arguing with multiple Alexis up here on the stage. I have a Google home. I really like it. And depending on the particular brand of these, they have a relatively large set, and in some cases they can nearly open, completely limitless corpus, a voice command that they can respond to. But if you really want one of these, the Google booth will raffle one of these off at 4.30 tomorrow. Come to a lot of forum. I'll show you if you stick around to the end of the talk how to enter. And then we also, I mentioned before, have tech support and phone trees. And these are actually getting a lot better in my experience. I can actually call my credit card company and say, I never got my credit card, and they will send me to the right person who can answer that question for me. And I really hope that this stuff gets better over the next couple of years. But here are some use cases you may not have thought about. Accessibility. In the closing keynote yesterday, the end of the day keynote, one of the points that was made was that children often use voice interfaces because they can't read yet. I used to work on software for kids, and we specifically specialized in kids with learning disabilities. So the ability to use voice interaction and not have to type and spell correctly is fantastic for people with dyslexia. Or perhaps people have a broken arm and can't type consistently right now, or maybe they're holding a baby. There's all sorts of use cases where this kind of accessibility matters. In addition to the stuff that many of you are probably thinking when I put the word accessibility on the slide, which is blind people. Blind people can use voice interaction. And the other thing that NLP can help us with is it can help us improve our understanding and our ability to analyze large amounts of data. So who works on an app that has a feedback button somewhere? So when I worked in EdTech for kids, the stuff we got via that feedback button was amazing. Five-year-olds have the best ways of telling you they hate your software. No, really. It was awesome. But once we got a little more popular, processing all that feedback was really hard. Initially everyone got every feedback email that came in, and then we're like, no, this has to go to a folder. And we wanted to be able to route feedback to the right people. Like, if it was from an adult about billing, that should go to the team that did billing processing and set prices and stuff. If it was a kid saying, you are a poopy head and I hate you, this should go to the things to read on your bad day folder. And if it was like, I don't understand what to do here, that should go to the team that's working on that interface to try to figure out if we can make the instructions better. But we didn't have a way to do that. It would have taken one person going through and flagging all of that, so we just didn't bother. 
And NLP can also be used to assist us in other ways. One of my coworkers made a tool called Deep Breath that analyzes your emails as a Gmail plug-in. And if you try to send an email that comes off as hostile, it tells you you might want to rethink that. Imagine how many GitHub flame wars could be stopped if everyone had that thing. So why don't we have it already? Because NLP is hard. Like, really, really hard. But you don't have to believe me. Why is NLP hard? Because English is horrible. So pop quiz, this is a word. This word is seal. Everyone imagine what you think of when you think of seal. Who thought of this? Who thought of this? Who thought of a musician? Who thought of something completely different? So one word, at least four different things that some people in this room thought of. It was actually fairly evenly distributed, which was kind of cool, because when I ran this with some other people, everyone thought of that guy. I really liked the picture, which is why I went with the word seal. Then we have our famous homophones. Yeah, the theirs. So many English teachers, so little time. And then we have words that can be multiple parts of speech. For example, love can be a verb. She loves her wife. It can also be a noun. Love lasts forever. So I repeat, English is horrible. And I didn't even get into stuff like irregular verbs, slang, idioms, and all the other bits that make a language a human language and make learning a foreign language really annoying. And it turns out that English isn't unique. I particularly believe that English is especially horrible, but all human languages are actually pretty horrible. And they're all horrible in the ways I've discussed or different ones. Even the ones we've tried to manufacture to be less horrible are still horrible, because humans make idioms. Humans make slang. Language evolves. And they're particularly horrible for computers, because there is no formal closed grammar for human languages. Some languages are more regular than others, but if you think about the grammar for a computer language, it frequently fits on a page or two of print and text. They're generally very regular. There's a limited vocabulary. And if there's an unlimited vocabulary, we have, like, this can be a symbol. A symbol can start with the following characters can't contain the others and can't contain spaces. And if you don't understand the words formal closed grammar, let's come have a chat. I'll drag out the Chomsky. It'll be awesome afterwards. Also, human language is horrible, because humans are really bad at being precise. For example, if I say I'm starving, there's a remote chance that's true, but it's probably not. And if I say to someone, you look freezing, again, remote chance it's true, I hope not, it's probably not true. We exaggerate a lot. And I was reading a cool article while doing research for this talk, that the word unique is becoming less unique over the last 30 years. 30 years ago, newspaper articles, unique was relatively rare, frequently editors and reporters would use unusual, which is the word they probably want nowadays. But now, I think it's said you're almost as likely to have the word unique as you are to have unusual when you mean unusual. Unique is something like seven times more common than it was in printed articles than it seven years ago. So I'm using that as an example of language evolves. My other favorite example is literally, because as of two years ago, it doesn't mean literally anymore. 
And there's also the problem of computers suck at sarcasm. Like really truly. And this comes back to, it's not just humans who suck, it's computers who suck too. This is an example I used in a previous talk, so I could say, sure, I'd love to help you out with your problem. I could also say, sure, I'd love to do the dishes. Depending on how I say that, the meaning changes. It also changes by the context, the words that are around it. And despite what we learned from Hitchhiker's Guide to the Galaxy, computers are really bad at distinguishing sarcasm, and they're even worse at generating it. Yay, somebody got my Hitchhiker joke. So I ask again, why is this hard? I hope I've confused you. And convinced you. But the big thing is, is that English is hard, and natural language processing is hard, and languages are hard, because humans, I'm a human, you all, many of you are humans, use language in weird ways. We use sarcasm, we use exaggeration. And it's hard, because our human languages are complicated and always changing. But since humans created human language, we can simplify this whole thing to why is it hard to. Natural language processing is hard, because humans, natural language is hard, processing is hard, because people, we are complicated and we make things hard for computers. So hopefully I've convinced you that this is hard, but not all is lost, because we've been working on this problem for a really long time. These two famous mathematicians were some of the first people to talk about natural language processing. They proposed ways to do algorithmic translation between languages, and I am of the generation that Google Translate became a thing while I was conscious and somewhat useful for me, and I've seen automated translation improve greatly over the last 15 to 20 years. But we've been working on this problem for several hundred years at this point. Another great example, 1950, Alan Turing proposed the Turing test. Who smelled the Turing test? Hands. There's a couple people who didn't raise their hand, so I'm going to actually give the definition, because a lot of people think they understand it and they don't, including me originally. So the Turing test is a way to test for machine intelligence and it uses natural language. A judge, an impartial judge, watches a conversation between a human and a machine without knowing which one is which. Usually this means the text is going across the screen. The judge knows that one of the two is a machine, but it doesn't know which one is the machine. If the judge can't tell which is the human and which is the machine, the machine passes the Turing test. And people have tried to solve this. One of my favorite examples is Eliza. This came out shortly after the Turing test, actually. And if you were an e-maxer, you can do meta-ex doctor to let you play with it in real time. And because I have the time, we're going to do that right now. Awesome. So this is the e-max psychotherapist. So I'm going to say, hi, I'm feeling sad. And it says, hi, how do you do? What brings you to me? And I'm going to say, I already told you. I'm feeling sad. Why do you say it? You already told me that you're feeling sad because I already told you. You stupid machine. Is it because you already told me you're a stupid machine that you came to see me? Yes. Well, I see. What makes you believe this is so? So this is a very classic example of NLP. There's lots of implementations of this. 
My project for the plane ride home is to port a Python implementation to Ruby because why not? I have to go to a conference and end up in some really stupid hallway programming process. But it's also really fun. And building some of these is one of my favorite programming projects for working with middle schoolers. They like being snarky. They like talking back and making things that talk back based on what you say is a lot of fun. So that's one of my examples. Come on. There we go. Awesome. And Eliza is actually a classic example of a chatbot. So who's written the chatbot? I have lots of fun. You should do those. But the fact that chatbots have been around since the late 50s and early 60s kind of surprised me. I thought we invented that stuff with IRC. Turns out no. We didn't. It's been around forever. So I've now rattled on for about 15 minutes. I haven't showed you a single line of code. That's not really fair of me. So code. I have a thing whenever I do talks. I like to give super impractical examples so that hopefully no one will actually use what I do. But we'll use the ideas instead. And RubyConf 2015, I gave a talk. This was the talk. It was stupid ideas for many computers. And in that talk, I demonstrated how I could do sentiment analysis of tweets by scoring the emoji they contained. Sentiment analysis is a subfield of natural language processing. And the goal of sentiment analysis to figure out if a given body of text is generally positive, generally negative, or something else. And to use emoji to do this, I gave common emoji scores based on how positive and negative they were. This was my general scale. I may have partly done this talk, so I could put the poop emoji on the screen many, many times. And I used emoji at the time. And if you go back and watch the video on Confreaks, because NLP is super, super hard. And in November of 2015, I didn't have the skills or the ability to train up a model to do accurate natural language processing on all the crazy stuff that people tweet during a conference in Ruby in real time. But it turns out that most folks who do machine learning of any kind don't actually build their own models. Because building models is hard. It takes a lot of time. It takes a lot of knowledge about the various ways that they can go wrong. And over the last year, tons and tons of pre-trained models that you can access via API have come out. And I'm going to use this one. We released Cloud Natural Language. And I'm going to use that for my demo today instead of my hacky emoji scoring scheme. So we have a gem. This is actually supposed to be a dash between Cloud Language. I'm sorry about that. Gem is still Google Cloud Language. I believe it's alpha. It might be beta. It's on RubyGems. All the development happens on GitHub. It's all open. And here's all the code I needed to do the scoring. I have an analyze method. Well, require the thing, create a new object, blah, blah, blah. Analyze method. I create a document using the tweet as the text. I call the sentiment method. And it gives me an object back that has score and magnitude. Score is how positive or negative it is, negative one to one. Magnitude is how much that thing it is. So if I get a score of 0.1 and a magnitude of 5, that means that is a very, very neutral tweet. If I get a score of negative 9 and a magnitude of 0.1, it's negative, but it's not excessively negative. It's bits of it that are really negative. 
I have a blog post about how to interpret those if you're curious, because understanding those two things is a little bit difficult. Don't multiply them together. It doesn't work that way. I tried that. I'm going to massively hand wave over how I set this all up. It's a Kubernetes cluster. I've got a math-produced system with something that's pulling data from Twitter. I'm happy to explain it afterwards. Come to the booth or I can show you. It's all running. I explained it all in great detail in my original talk, because my original talk was how to set up distributed systems to do crazy things. And I'm going to do a demo at the end of the talk if we have time, but I need your help. I'm pulling stuff from that hashtag. So if everyone wants to get a phone out or something and make some sort of tweet, you can blame my talk at some point in the tweet and try to give it a sentiment. Generally sentiment's been running pretty good on the conference thus far. This is not surprising. I'm going to take a drink of water. I'll go tweet something. I'll be back in a second. Okay. I see eyeballs. You can keep tweeting while I'm talking, because the next bit just involves me telling you about grammar for a bit. So that's good. So when this tool came out and I was playing with it, I'm like, dude, did anyone else here have to diagram sentences in grade school? Because I totally did, and I hated every minute of it. Like doing this thing over and over and over again drove me nuts. If you've never seen these before, this is one form of sentence diagram. You have a subject, a vertical line, the verb, a half vertical line, a direct object, and all the stuff that modifies all those things goes on the lines below. So anyone else draw diagrams like this? Because I drew tons of them. Okay. Talk to a bunch of other people, and they do diagrams that look more like this. Using both methods of diagramming, the verb is at the center, and everything that modifies it goes off to the sides. And also, all words are organized so that they are connected to the word that they modify. And when I started showing this to some of my friends at Seattle R.B., they're like, ew, grammar, I hate you. And then I'm like, so. Blah, blah, blah, direct objects. They're like, I don't actually remember any grammar. So brief side quest. Grammar. And I'm going to apologize in advance to all of the non-native English speakers in the audience. Y'all know this already, because you've all had to study this. You can tune out. I'll be back in a couple minutes. So one of the way we understand words is by labeling them based on their function. This is called parts of speech. Verbs. Verbs are actions. You can't have a sentence without a verb. One kind of action might be jump. But you can also have verbs that aren't active. State of being verbs. Like thinking. And then we have nouns. Lots of us learn that a noun is a person, place, or thing. That is strictly true. A person, like Matt's, or Alan Turing. A place, like the bathroom, or Phoenix. A thing, like a cactus, or a mountain goat. But you can also have verbs that are ideas. You may have heard the phrase abstract noun. Democracy. Love. Those are abstract nouns in this concept. In the way we break down words. Adjectives. Adjectives describe or modify other words. Usually nouns. There's a great podcast that explains why that isn't strictly true. Again, humans are awful. Attributes. Adjectives specify the attributes of things. Blue, small, five. Those are all adjectives. They can also compare things. 
Near and far. And if you are of the same generation I am, you are thinking about a sketch from Sesame Street when I say those words. I almost embedded it, but I didn't because copyrights. But please go search near and far Sesame Street on a video search. It's hilarious. And then we have things called that I learned as articles. A and in the. But modern grammar calls them determiners. And they help clarify which noun. And determiners also include words like this and that. All articles are determiners. Not all determiners are articles. It's like squares and rectangles. It's fine. And then we also have parts of a sentence which are different than parts of speech. So the root is the thing you need to have a sentence. You need a verb, therefore the root is the verb. The subject is the noun that does the verb. And the direct object is the thing that the verb happens to most likely a noun. All you need to know, but pop quiz, here's a sentence. The cat eats fish. The subject is cat. The verb is eats. The direct object is fish. The quiz is complete. Everyone else can pay attention again now. That was your reminder of how grammar in English works. So we're originally working on sentence diagramming. Here's the basic idea of these sentence diagrams. Subject, verb, direct, object, other stuff. And I wanted to figure out how to do this. And to do this I need to know what part each of these words is solving. Well, there are tools for that. And instead of using the sentiment method, now I'm using the syntax method. And it returns a list of tokens. And it returns way more information about every single token than you would ever want, at least that I would ever want. There's a ton of stuff here. This is the token for the word cat. In the sentence, the cat eats fish. Here's the text itself and where it appears in the sentence with the offset. Here's the part of speech. It is a noun and it is singular. English doesn't actually have a lot of modifiers on nouns. We don't tend to use grammatical cases. We don't have grammatical gender at all in English, which is kind of cool. Grammatical mood, tense. Tenses generally don't apply to nouns. They usually apply to verbs. All that information is available if it's relevant. Go use this on something like German or Spanish or something that has more cases, more grammatical gender, things like that to see how that works. And then this token is enabled end subject for subject. And so what this is saying is that cat in the sentence cat, the cat eats fish is the subject of the sentence. So with this, I have enough to make myself some awesome rock and asky art diagrams. Here I'm just finding whatever token has the label subject, taking the text, doing the same thing to find the thing, whatever is labeled root, calling that the verb, asky art. And I get some absolutely amazing asky art here. One cool thing, if you haven't seen it before, I know we have a lot of new Rubyists, you can multiply a string by a numeric. It has to be in that order. You can't switch it, but I'm multiplying a space by the language, by the length of the words, the subject word here so that everything lines up correctly. So awesome. Let's add the direct object. Go find the thing that's labeled direct object. Even better, more rock and asky art. So that's all easy. But I'm missing a word. I need to figure out how the fits in. So the natural language API gives me one other useful thing and it gives me a thing called the head token index for each word. 
And this is the index of the parent word in the corpus of text for the current node. And not parent word, parent token. Most of the tokens are words you'll see sorely that's not always. So this is the token for the. It's head token index is one. That's the list of tokens. So what this is saying is that the refers to cat. Everyone follow? That's a little tricky. So I'm just going to go through all my tokens and find everything that refers to the subject. Everything that has the same head token index as the index of my subject. I'm going to print that out. And you know, a little more asky art. I have a basic sentence diagram. I couldn't figure out how to do diagonals and asky art and I'm pretty okay with that as it turns out. So this is awesome. And I was very proud of myself. And you follow me on Twitter, you saw that I tweeted something because I was super happy about this. And then I tried a sentence like this. And that didn't go so well as you can probably imagine. So I took a step back and I talked to some of my friends and they're like, well, I didn't do sentence diagrams this way. So I fell back on one of my favorite tools. Everyone's got their favorite tool set. And one of my favorite tools is a gem called graph. This is actually the gem that I gave my very first conference talk about a long time ago. And all graph does is a gem that makes creating node edge graphs, not charts, like bar charts, graphs, like math graphs. Easy. It provides a simple DSL and then it creates dot files and a program called graph is reads dot files and builds visualizations for you. And all you need to know from graph is that it has two methods inside the DSL, node, which takes two arguments, a required ID and an optional label, and edge, which takes an ID for the two and the front. And they can be the same if you want to loop back. So that's all you need. And this is all the code I need to build a graph-based sentence diagram. This little bit here is some graph boilerplate. This just says in this block my DSL applies. Digraph because it is a directed graph. You know, math, it's awesome. I'm going to use the index as the ID and the text as the label for each of my nodes, make a node for every token. I'm going to make an edge from the current node to its head if it doesn't refer to itself. I didn't, the loopbacks got confusing. They were mostly only applied at the root. So that's the cat eats fish and you'll notice that I actually have the punctuation. The punctuation is considered a token as well. And then that's the more complicated sentence that didn't work at all with my first set of code. The cat eats the fish with a side of milk. You can see the prepositional phrase there because milk, head token is of because milk is the end of the prepositional phrase and of is the beginning of the prepositional phrase and grammar is awesome. So I've got a couple minutes left. I showed you some silly examples. This is what I really like to do when I give these talks because if I have to make serious code all the time I get bored. But there are all sorts of practical uses of NLP. We talked about customer feedback. We talked about summarizing. We talked about making ways to make our products more usable for a wider variety of people. And I'm hoping that some of you have ideas of your own at this moment. Hopefully I did my job. And if you want to get started, you want to play around with this. The Google natural language API is a good way to play around with it. 
You get the first 5,000 requests to each endpoint, syntax, sentiment. We have another one called entities. First 5,000 a month they're free and it's priced per 1,000 after that. And I can't look at the pricing but it's very, very reasonable. And just because I'm like, dude, this is so much fun. I ran the Jabberwocky through it and it got the syntax analysis correct. I was a little bit surprised by that. So it's a good way to play around, experiment with the technology and just kind of see what it's worth it. I highly encourage you to just play because we learn all these new concepts by playing and digging in. I work for Google. We're at RailsConf. We have code labs and stickers and answers on our booth. I also have dinosaur stickers up here if you come hang out with me afterwards. We have a talk this morning called Google Cloud Labs Ruby in the sponsored track. You already missed it so watch it on Confreaks. And one of my coworkers is doing a talk on instrumentation, what my app is really doing in production, tomorrow at 3.30 in this same room. And as I said earlier, we're giving out a Google home. There's the link. It's also at our booth. I wrote it on the whiteboard, on the chalkboard myself. So this is what I see. Thank you. I ask if you have any questions. And again, 30 minutes exactly. So I have time for a couple questions. Yes, over there. I tried it. So the question was how does the sentence diagramming approach deal with incorrect grammar? And I tried it. And a great example I tried was Bunny's Hop. And I tried it out the period and it couldn't figure out that Bunny's was the subject. But I put the period on and figured it out. So it takes its best guess. It is a machine. Machines make mistakes. And it's getting better. All of these models get improved over time. They get better and better. For the most part, it's pretty darn accurate. My grammar as a general rule is really bad. I have a copy editor for my blog just to make sure that I don't do horrible things. Mostly I do horrible things with commas. And it's generally pretty good, especially for the common mistakes that people make. Improperly using semicolons and improperly using commas. Less common mistakes. It's not as good because it doesn't have as much training data. So the question was how does emojis play in? Because I use that and I'm probably maybe because I use that as an example. I don't actually know. I've been running this through Twitter. All right. I promised you guys the closing of the demo. Here I'll show you. So this is my... So this is running in real time and you may not be able to see that. So yes. Here. Let me hit the button so that you can start streaming again. Thank you very much. So our current sentiment is 57. So even if you guys are trying to be horrible, you are outweighed by the positive because it's extraordinarily positive. And there's emojis and tweets. And thus far it hasn't seemed to affect them significantly either way. But I don't know. I don't actually don't know the science behind that. I don't know enough about our underlying model to be able to answer that accurately. So it doesn't seem to completely ruin it though. I know that. So I'm going to summarize your question. Let me know if I get the summary right. How does cleaning up the data and ensuring I have higher quality data improve the syntax analysis? Okay. We agree on that summary. Awesome. I've actually been really lazy and I haven't done any cleaning of the data because I'm a tester at heart and I like to try to break things. 
And anytime I have to put additional pre-processing in, that's slowing me down. I can't imagine that it would hurt. But I've been pretty happy with what it's done without. The entity analysis is actually pretty good without. I threw, I didn't show them talks, I didn't have time. But I threw, I love pecan pie with ice cream through entity analysis and identified that pecan pie and ice cream were the two most important entities in that sentence. And I agree with that sentiment. So and it's generally, entity analysis generally is for identifying proper nouns, cities, companies, things like that. One of, somebody I know I used to work with does language processing of SEC filings. Looking at various things to try to understand, you know, what's this company saying? And you care about big name companies and stuff and cities and things like that and those type of things. But you know, for the sentence, I love pecan pie with ice cream. Pecan pie and ice cream is a pretty good analysis of that. So, other questions before I give out dinosaurs and set up here? Okay, you can come get dinosaurs. Thank you very much for your time. I appreciate it. Thank you.
Natural Language Processing is an interesting field of computing. The way humans use language is nuanced and deeply context sensitive. For example, the word work can be both a noun and a verb. This talk will give an introduction to the field of NLP using Ruby. There will be demonstrations of how computers fail and succeed at human language. You'll leave the presentation with an understanding of both the challenges and the possibilities of NLP and some tools for getting started with it.
10.5446/31288 (DOI)
Hi everyone, there are a few of you way down the back there and this room is very big. You can come forward, it's fine, I won't bite, I promise. Do it, it'll be great. Alright, this is teaching aspect to play nice with rails, let's get started. I could not be more excited and more happy to be back here at RailsConf on this stage that is a lot bigger than the one I thought I was going to be presenting on this year. Last year I didn't make the conference, some of you know this story, some of you don't, but I was suffering from a life-threatening infection in my leg and actually exactly a year ago to this day I took this picture of myself in hospital and texted it to my mum, she had a bit of a freak out but it was fine. This is a bag of a drug called keftriaxone, keftriaxone is a super powerful broad spectrum antibiotic and were it not for the efforts of the doctors and nurses of the NHS in the UK, I would not be here today giving this presentation in a very literal sense and I'm hugely thankful to them and the healthcare system in the UK that means that a year later I am alive and able to give this talk on this stage. But I'm not telling you this story to induce sympathy towards me but instead to tell you about something slightly odd that happened at the RailsConf programme last year, we had this problem where RailsConf last year was the conference where Rails 5 was due to be announced and that meant that there were a bunch of talks in the programme about how to get various things working with Rails 5, it's a big major version, stuff will break, that is bad and I was going to give the talk about how to get RSpec working with Rails 5. And there wasn't really a back-up speaker that had a talk that looked like that and that was a fairly large gap in the programme. So myself and the rest of the programme committee decided that the correct thing to do would be to call up Justin Searle who is giving the keynote tomorrow, I don't know if he is in the room, he is probably not, he is probably writing his keynote, and have him give my talk instead. This is funny because the programme committee rejected the talk Justin submitted outright but we were like, nah, that doesn't seem like a talk we would like to have and then you are now just morally obliged to give the RSpec talk because Sam is dying and he took that with nothing but grace if you have seen the talk, you know that he spent the back half of it basically trolling me. But in all seriousness, I said this before and this will be the last time I mentioned this on a Ruby Central stage, Justin, I am hugely thankful that you did that even if you are not in the room right now. And I think everyone should use the meme hashtag, hashtag Sam Phippen is Justin Searle's confirmed, tweet him and let him know that you are all thankful too. 5.1. Some of you will probably need RSpec to work with Rails 5.1 and that compatibility is coming but with a couple of caveats. We do not have any support for Action Chat box test case nor Action Dispatch belief system test case. Please have some faith in us. So in all seriousness, Rails 5.1 has not been released yet and that means we can't actually give out compatibility with it but we have a branch ready that should just work and so there will be no changes required by you as soon as RSpec 3.6.0 is released, you should just be able to upgrade and everything will work. 
We don't have integration with system test because I haven't had enough spare time to do that yet but I would absolutely welcome a poll request from anyone who would like to integrate Rails 5.1 system test with RSpec. So that is sort of the front matter. Now let's get into the actual talk, the integrations between RSpec and Rails and how we got there. Before I go too far, I think it is really important to discuss sort of the high level architecture of the RSpec framework. One of the things that I think not a lot of people realize is that RSpec isn't a single monolithic gem but instead a series of testing components that are designed to work together really well. So when you specify gem RSpec in your gem file, the first thing Bundler is going to do is it is going to go out and fetch the RSpec gem but the RSpec gem actually doesn't do anything on its own. It doesn't have any code inside it of any consequence. And what the RSpec gem does is depend on the rest of the components of RSpec that form together to make the testing framework you are so used to using today. One of the dependencies is called RSpec core and that provides describe and it and tagging and the runner and all of the tooling that you use to actually build and execute your test suite. RSpec expectations provides the expect keyword to all of the matches that you are used to using in the powerful composed matcher system. RSpec mocks provides the mocking and stopping capabilities of RSpec, doubles, spies and all of that good stuff. And so these are the direct dependencies of RSpec but they are independent gems. You can pull each part of them on its own without having to have any of the others. All three of these gems depend on what is called RSpec support which most people don't know about. RSpec support is our internal shared code between the RSpec gems and we use it for various things like pulling methods off objects, certain methods signatures, ruby interpreter functions and so on. And nothing in this gem is marked as part of the public API. So you should never call code in RSpec support directly but it is there nonetheless. RSpec Rails is an entirely different beast. It functions kind of on its own. It packages the rest of the RSpec ecosystem and also provides a bunch of its own code and links directly into the Rails version that you're using. In this manner RSpec Rails is able to provide the entirety of RSpec that our users are already used to and then combine that with Rails specific stuff in order to make testing your Rails app really easy. So if you see gem RSpec and gem RSpec Rails in a gem file, this is actually kind of a smell. You're specifying your dependency on the gem twice and instead you can just pull RSpec Rails and everything will be fine. Let's talk about Rails. I think this is a reasonable statement. And that complexity also comes with the fact that RSpec is really permissive about which parts of Rails you can use and at what versions. Specifically RSpec supports all Rails versions greater than or equal to Rails 3.0. And that means we support 3, 3.1, 3.2, 4, 4.1, 4.2, 5 and soon 5.1. That's a lot of Rails versions and it leads to a lot of code inside the gem that conditionally switches on what Rails version is being used. It leads to our Cucumber examples being tagged with different Rails version support and it leads to here docs that contain Ruby code that switch on Rails versions 5. 
That was a lot of work to get RSpec compatible with Rails 5 and in the core of this talk I'm going to be going through some of the issues that I encountered, how I thought about debugging them or where external contributors were involved, how I helped them make their issues better or got help from other necessary maintainers to fix everything up. This is broken into a series of lessons and the first one is that major versions mean people can break things. So at the keynote, not last year but the year before, it was announced that controller tests were going to get soft, deprecated and what that actually meant is that a signs and a cert template were going to be removed. Assigns is the thing that lets you test the instance variables your controller assigns, a cert template is the thing that lets you test which template is going to get rendered. I'm like, please, no, stop. There are thousands and thousands of controller tests in the world that desperately need these things to function. You can't do this to us. I was overreacting because it turns out the Rails team had a plan which was this gem called Rails controller testing. All the Rails controller testing gem does is provide a sign and a cert template again. So I'm like, great, this is perfect. I'll just integrate this back into our spec, we'll run our tests, everything will be fine. So let's add it, make sure it works and oh no, the entire test suite exploded. Looking at this a little bit closer, we discover that actually it's just view specs that have stopped working. I'm like, okay, that's less bad because mostly nobody writes view specs. Wow, maybe you do all write view specs and I'm just terrible. No, but specifically the thing that was broken is path helpers, things like URL4, gadgets path, user path, et cetera, you know, those things. I was like, all right, let's work out what's gone on here. So we take one of our automatically generated view tests, we look for the gadgets path and we get undefined local methods for gadgets path. We switch back to Rails 4.2, we call the same method and it works. So we know somewhere between Rails 4.2 and when this gem was extracted, something has broken. I'm like, all right, what could possibly have changed? It's pretty likely that gadgets path is not provided directly on the test but instead one of the things that gets included to the controller. So I'm like, all right, controller, tell me everything that you're made up of. And you get a huge list of nonsense back. But just to explain this, the top line there is the singleton class of the object, the second line is the class of the object itself. Then we have action dispatch test process, three random anonymous modules. Action controller base and then a bunch of stuff. Action controller base is where we can approximately stop looking because that's the thing we're used to using. You switch back to Rails 5 and it looks materially different. And specifically those three random anonymous modules have just disappeared out of this ancestor chain. And I'm like, anonymous modules are the hardest thing to debug ever because they don't have a name, they don't have a source location. How on earth am I supposed to find this? So I stick a more prize in there, I rip my hair out, I get mad. And eventually it points me at this line of codes and I'm like, I look at it and it's like newing up a module and including that module into another module and then doing some crazy hooks and I'm like, nope, no, had enough. 
I know aspect Rails like the back of my hand but I am not like an expert at the deep, deep internals of the Rails framework and I'm willing to admit that. So I hit up Sean and I'm like, Sean, this is nuts, what's going on here? And we hop into a Skype call and we actually end up spending something like three or four hours trying to debug this ourselves. Sean, would you say that's about fair? Yeah, yeah, so four hours of my life gone and then eventually Sean's like, this appears to be a load inclusion order kind of bug. And he writes this like essay of a commit message. This is the first pro tip of this talk. Write really long commit messages that explain what's going on because anything that takes four hours of two Ruby maintainers to work out is complicated and non-obvious and scary. The diff looks like this and I won't make you squint but basically it's changing the order that the controller testing gem is including stuff in and everything works. We went from a broken gem to a functional gem and I think this is a huge win for collaboration. When I look at this change, I was like on my own solo maintainer scared and lost and confused and I'm like, hey, Ruby ecosystem, I need some help and it helped me and it was great. I love it when we get together and fix that stuff. It makes me really mad that I had to debug anonymous modules but I think this is a great win. And then we were basically ready to release the gem and we did and users started filing issues. Oh, my God, so many. It turns out that the test suite that you have usually doesn't cover all the cases and your users are doing things you can't possibly imagine. So the remainder of the issues that we'll be looking at today were issues filed by actual aspect users as opposed to stuff I encountered while I was trying to do the upgrade. And then in the second two, it is actually possible for Rails to have bugs. This may seem like a controversial statement but I hope you believe me. So we're talking about aspect Rails issue 1658. You can look this up. But the user was basically like, signed cookies are not available in controller specs and aspect Rails. And I'm like, the hell is a signed cookie? I've literally never heard of that Rails feature before. Like, all right. I'm sure that's supposed to work. Let's find out. So I applied the label to it triage. And this is a pretty common thing for maintainers to have. It's like a personal state machine for how they move issues. When I label an issue as triage, it basically means you filed this and I haven't yet had enough time to work out whether there's enough information to reproduce the issue. And I let it lie for a little while and I come back to it a few days later and I basically come to the conclusion that the original issue didn't have enough information in it. So I ask this question of the original submitter. This basically says, thank you for filing this bug. I wasn't able to reproduce it with the information you gave me. Please give me a Rails app that I can clone so that I can just run bundle exec aspect and see what your bug is. And then I apply the label needs reproduction case. I think this is an incredibly common maneuver for maintainers these days because even though you filing an issue represents us having a knowledge point that something is wrong, it can be really, really hard for us to work out what exactly is broken. Rails is a complicated environment. There are many gems. There's all of your configuration. There's everything you do to actually cause this bug to appear. 
And frankly, I can't necessarily reproduce that given just a few lines of spec in an issue. And the user comes back immediately with like, yes, done. Clone this bundle exec aspect, you're good. And I'm like, yes, go team. We did it. This is how open source is supposed to work. And so I confirm that the issue is real and I move the issue from needs reproduction case to has reproduction case. And this state machine, this triage needs has is something I do with nearly all aspect Rails issues because it helps me know what I do and do not have to fix. But now that I have this green issue, I can actually begin debugging the problem. So we clone the Rails app that the user has provided and we also clone a fresh version of Rails into its own repository. I do this so that I can move the commit that it's on backwards and forwards and I point the Rails app that the user has provided at my fresh Rails clone. I check that the tests are still failing and I get bisect bad. I check out 4.2 beta 4 and that's a very specific reason it's this version. This is the last version of Rails 4.2 on the master branch. After this, 4.2 was released and then they have a 4.2 stable branch which doesn't track master. So I can't simply bisect between like 4.2 and 5 on that branch. I have to use master. So I check out this specific commit. I run my tests, they pass, I get bisect good and git tells me I have 6,794 revisions in order to determine what's wrong. Oh, God. This is my life now. So I basically just run the test suite backwards and forwards, git bisect bad, good, bad, good, bad, good and eventually this commit pops out. Commit number A29 whatever. And I leave myself a little note as exactly to what the problem is but not necessarily a fix. This is definitely one where the bug exists in Rails not in RSpec's interaction with it and so I leave it for a while. Life happens. I get a job where I have to move to New York. That's a thing. And then after a while I'm like at RubyConf and I was having a discussion with somebody and I was like so I bisected 6,000 revisions in Rails, found the specific problem and I'm not really sure what to do now and Sean turns around and goes you have a breaking commit, just show me I'll fix it right now. And I was like all right. So I go on to GitHub and I literally just ping at Esgriff, hi. Lo and behold immediately this issue is not the bug in RSpec. It can be reduced with Rails alone. One opens a bug on Rails which provides a Rails reproduction script and specifically states this can be replicated purely with Rails using the public API. In open source we have a twisty, tourney maze of dependencies. Your Rails app that uses RSpec pulls in Capybara and a Diffing Gem and a coffee script compiler for some reason in 2017. Actually you got rid of that. That's an unfair accusation. So before filing an issue on a specific gem it's worth noting that the bug could be anywhere in the dependency tree of that gem. And so it's well worth paring down and seeing if you can reproduce it with a subset of the things that are available. Specifically Rails has a guide for doing reproduction scripts and before filing bugs on RSpec Rails I would thoroughly encourage you to check it's not a bug in Rails first. Question three, sometimes you can't just call up Sean to fix the problem. I love you Sean. So this issue RSpec Rails 1662 can no longer set host exclamation mark in before block for request specs. And that's not immediately obvious so let me show you what this means. 
Basically if you have an RSpec configuration that globally sets a host in a before or block this stopped working. And there are good reasons to do this before all blocks are useful. And this host bang can happen if you have router configurations that cause you to have sub domains in your app that are also being responded to. And I'm like this should work and also I feel like I touched this code when I was upgrading RSpec to work with Rails 5. So other RSpec maintainers please don't close this issue. I want to take a look at it. And I asked the submitter for a reproduction case exactly as I did in the other issue. And they come back to me not just with a reproduction case but with I made a short screencast of how to use this app to debug it. I'm like oh my god yes that's the best thing ever. That's so much better than just clone this and bundle RSpec. He walks me through, he shows me where the bug occurs, he shows me what he tried changing. So good. And so I pull down the reproduction app, I do the same bisect, shimmy and shake thing, the commit message spits out and I go have a look and it's Aaron. Basically in the change to Rails 5 this got committed which changed the application initialization logic to happen lazily and instead happening before setup blocks. Before setup blocks are equivalent to before each in RSpec and so will always happen after that before all the user has specified and that's our bug. So I'm like this one is simple enough that I can fix it myself. I open a pull request on Rails explaining what changed, why it broke the thing, my proposed fix. Maintainers love it when you drop context on them. The more words you write about your change increases the likelihood of it getting accepted because it just makes it easier for us to understand what's going on. I also did that as part of not just the PR message but the actual commit as well. The reason for that is if any Rails maintainer in the future is ever blame diving the source of Rails to work out what's going on they can without leaving their terminal look at that commit message and see what changed and why and who did it and they can like come at me on the internet if I broke everything but I don't think I did. So after some discussion, sort of round trips with Aaron, I think Raphael got involved for some of it, Sean got involved for some of it, I got merged and then I completely forget about it and don't close the issue on RSpec for plural months and so someone just comes along and is like hey, you fixed the bug in Rails, you should close this issue. You're right, I totally, totally should, I'm a terrible fallible human and the truth is if I fix something and you just ping me on GitHub and you have fixed this, do you still need this issue to be open? I can be like no and then I can close an issue and you did a simple one minute action that gets rid of an issue on my issue tracker as an external contributor that makes me super happy. So to summarize, RSpec definitely has bugs but the surface area of our integration with Rails is not all that big and if you have a Rails specific bug in your RSpec suite, I have seen it be in both places but it can be because of Rails and literally every fix that I showed today was in a gem that is owned by the Rails project not by RSpec. 
I know that if you're in this room, you're way more likely to be a fan of RSpec than a fan of Minitest but if you can, use the Rails reproduction script guide which asks you to test it in Minitest just to make sure that's where the bug lies and Rails has a really good guide for doing that. When you open an issue that has more than the most basic form of complexity, I'm likely to ask you to send me an app where I can reproduce exactly what's going on. The reason for that is not that I value my time more than yours, it's that five minutes of you doing that extra work of packaging it up can save me literally hours of debugging in my framework and that's really important. You can always just call Sean to get Rails bugs fixed. Okay, real talk though. I think working on an open source is a really hard thing to do as a career maintainer. It absorbs your evenings and weekends and sometimes stuff goes out into the world and everyone's test suite is on fire and you are the only person in the world who can fix it. I have a friend who once tweeted retweet if participating in open source has made you cry. That has happened to me. I 100% can tell you that for sure. The work represented in this talk sums to more than 40 hours of maintainer time. That's a full work week. My time, other aspect maintainers, Sean's, Rails core team, Rails committed team, you're all wonderful people. Most maintainers, hugs, they deserve it. I think open source is more approachable than it has ever been and it will continue to do so. The previous talk was about how to start contributing to Rails. The talk before that was a deep dive into a new Rails feature. If your company materially depends on the existence of Rails, RSpec, Bundler, RubyGems in order to exist, it seems natural to me that you should find some time to work on that. You saw a giant red slide at the beginning of this talk. I'm a fan of communism. Maybe we should find ways to ensure that maintainers can get paid for their work. Just a thought. Just one random cheer. In the meantime, please buy me drinks at the bar if you use RSpec. That seems like a fair trade-off, right? All right. Quick wrap-up. I work for a company called Digital Ocean. We are an infrastructure as a service provider. That means we do servers, block storage, networking, all the primitive components for you to build amazing applications. We are hiring. I like working there a lot. It is a lot of fun. Our products are cool. I have swag. Come find me. Let's talk about servers in you. That's all I've got. Thank you very much for listening. I'm Sam Phippin on Twitter. Sam Phippin on GitHub. If you would like to email me, I am sphippin at do.co. Thank you. I will take any and all questions. Are the anonymous modules still there? Yes. To the best of my recollection, it is like an active support concern that does something with Rootset. Listen, I will send you the commit. Computers are extremely bad. One thing that could be done there to make it easier is to have those modules define self.name with something sensible. I don't know what the performance implications of that are. Or if it is a hot path or whatever. There is a whole section of Rails maintainers sat over here. Maybe just ask them. Anymore. I literally can't see. Shout. Be loud. All right. Thank you very much. All right. Thank you very much. Thank you.
RSpec gives you many ways to test your Rails app. Controller, view, model, and so on. Often, it's not clear which to use. In this talk, you'll get some practical advice to improve your testing by understanding how RSpec integrates with Rails. To do this we'll look through some real world RSpec bugs, and with each one, clarify our understanding of the boundaries between RSpec and Rails. If you're looking to level up your testing, understand RSpec's internals a little better, or improve your Rails knowledge, this talk will have something for you. Some knowledge of RSpec's test types will be assumed.
10.5446/31290 (DOI)
Good morning. Thank you for coming. Both of you. It looks a lot emptier up here than it did a minute ago. So I'm just going to jump right in. We've got a question to start us off with. Do you trust me? Thank you. If you don't, you're in good company. But if you do, I'd like you to take a second and just think about why you trust me. This talk is about why our apps can trust our users and vice versa. So that's the topic of the day. So, me. My name is Michael Sweeten, which you know if you read the slide up there a second ago. I work for a company called Atomic Object. This was us. I'm pointing to a screen you can't see. This was us about a year ago, about 50 people, and we do custom software development. So web, mobile, lots of stuff across the board. And I have to do the obligatory page that I'm sure you'll hear at every single talk today. We are hiring experienced developers at our offices in Ann Arbor and Grand Rapids. So hit me up. So today's talk here is not so much about algorithms or bits and bytes or any of that kind of stuff. I'm not going to start unpacking AES and RSA and padding algorithms and things. This is a talk about how those different pieces all fit together. Because it turns out that it's a really complicated, nasty dance to get any of this stuff working right together. And the stuff that many of us might have experienced in college in our discrete math course does not come anywhere near covering it. So that's what I want to talk about today. I apologize in advance if some of you did want more details. Now, in addition to some of how all that fits together, you'll see my little spy pop up on the side of the screen when I have a story to tell about some real-world security failures, to put some context on some of these things. So a moment ago, I asked you about trust. Now, my name is Michael Sweeten and you can trust me because I have the ID to prove it. The ID was issued by the government. So I won't go so far as asking you to trust the government, but maybe you trust that they checked my birth certificate and other IDs before they issued it. And it's got my picture on it and the name matches what you see on the conference program. And so you can at least trust that this is the talk that was meant to be given at this slot, even if you don't trust anything else. Almost everything we do in terms of security in our apps is built on a chain of trust like this, something to go from program, to name, to government, to ID. Almost nothing is made from some big overarching solution. It's all tiny little pieces fit together in just the right way, in a lot of ways that I'm terrified I'm going to screw up, because it's hard. So here's an example app. This is what you get if you do, like, rails new or whatever the command is nowadays and add Devise and don't put any styling on it. So I made an example user account for RailsConf and let's log in. Okay. I'm logged in. It's really easy though to hand wave over what just happened when it's buried under a single click. We actually did just a lot of steps. We had to make a connection. We had to send data. We had to evaluate data and change things. So let me unpack that. We lost about a tenth of an inch of screen. Okay. So we had to do a DNS lookup to find the server. We had to make a connection. We had to somehow magically make it secure. We can send our credentials over, check them and log the person in. So that's great and it's better than what we had, but that's still pretty hand-wavy. So let me rewind. We've connected.
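To make the example app concrete, the setup behind that unstyled login page is roughly the standard Devise wiring; a minimal sketch, where the model and route names are the usual defaults rather than anything taken from the talk:

# Gemfile
gem "devise"

# app/models/user.rb
class User < ApplicationRecord
  # database_authenticatable stores a bcrypt-hashed password and checks it at sign-in,
  # which is the credential check unpacked later in the talk.
  devise :database_authenticatable, :validatable
end

# config/routes.rb
Rails.application.routes.draw do
  # Generates /users/sign_in, the endpoint the login form posts to.
  devise_for :users
end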
Now before we do anything else, we're going to check the server certificate. So, fun story time. We use the certificate to prove authenticity. Authenticity is important. So here's an example. Back in '08, a man stole roughly $850,000 from two banks around the DC area. He walked in and said, hey, I'm here for the pickup. And he looked right. He had the uniform, the gun, he had everything, except he wasn't actually working for the security company. Grabbed a bag with half a million dollars and walked right out. I don't think he actually had an armored car. He just probably got into like a Ford Escort and drove away. Strapped it in. Did the same thing the next day and took another $350,000 from a different bank. So it doesn't matter if he had the truck or not, because even if he did and the truck is secure and it's locked, it doesn't matter if you're sending your data securely to the wrong place. So this is what our certificate is for. But let's dig into what it really is. Our certificate is really three key things. So firstly, it's a public key. It's a public key for the server that we're trying to have a conversation with. But the key itself is just a number. It doesn't tell us anything, and the number could belong to anybody. I could give you a number, 42. So what we add in the certificate is some metadata that says not only is this the number that we're talking about, it belongs to this name. It belongs to Amazon.com, Rails.com. And we trust that the certificate authority has validated that, and they've issued the certificate, and what they do is they sign it. They apply a cryptographic signature which is created with the certificate authority's private key. And so we can verify it. This establishes a chain of trust for us. I trust my web browser. The browser ships with public keys for all of the certificate authorities so it can check that the certificate is validly signed. And then the certificate was signed as a whole, so we know that key goes to that name. And because there's a public key in there, that's a tool we can use to help facilitate secure communications with whomever has that private key. And we haven't proven that part just yet. All we've proven is that this is the right public key for us to be using. So we trust the key. This system of the certificate authority vouching for the key is more or less what the mafia does when they induct a new member, and it works about as well, actually, to be honest. Okay, so we've got that certificate stored in memory. We're going to use it more in just a minute. The next thing we have to do is key exchange. We have a key in that certificate, but that's not actually what we use to communicate, generally speaking. Public key algorithms like RSA (I'm not sure if this applies to DSS or other signature algorithms, but RSA at least) are firstly really computationally intensive. And secondly, there are limitations on the size of things you can encrypt. So it works well for exchanging a 256-bit key, a little less well for downloading season two of the Sopranos. Okay, two ways of doing key exchange. So the first is RSA key exchange. The client generates a secret and is going to encrypt it using the public key in the certificate. Then we can send it over and be assured that no observer will be able to steal the key. There we are. The server has the key then, and now we have the same key on both sides and we can communicate securely. So that's pretty good. And that was in '71 I think when the RSA paper was published. It was a pretty big deal.
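As a rough illustration of those three pieces (the key, the name it is bound to, and the CA signature), here is a sketch of pulling a server's certificate apart with Ruby's OpenSSL bindings; the hostname is just an example, and a real client relies on the TLS library's own chain and hostname checks rather than hand-rolling them:

require "openssl"
require "socket"

host = "www.example.com"                                 # example hostname, not from the talk
ctx  = OpenSSL::SSL::SSLContext.new
ctx.set_params(verify_mode: OpenSSL::SSL::VERIFY_PEER)   # verify the CA signature chain

ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443), ctx)
ssl.hostname = host     # SNI
ssl.connect             # raises if the chain doesn't verify against the trusted CAs

cert = ssl.peer_cert
puts cert.subject                                           # the name the key is bound to
puts cert.issuer                                            # the CA that vouched for it
puts cert.public_key                                        # the key used in the handshake
puts OpenSSL::SSL.verify_certificate_identity(cert, host)   # does the name match the host?
ssl.close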
There's one other way we do key exchange, which is a little bit better, I think. So I don't know if you guys remember seeing this in school, this image; this was just taken from Wikipedia. This is Diffie-Hellman key exchange. I'm not going to dig into exactly how it works, but here's the super quick demo. Each side generates a secret, and they're going to keep that secret secret. We also generate something that we're going to exchange publicly. And this is where we do something special. On the server side we sign it. This is an extension of regular Diffie-Hellman key exchange that we do with TLS. And this lets you prove authenticity for the server, because if we did RSA, only the person with the private key could decrypt it. Because that doesn't apply here, we need to do that some other way. So the server signs the key, we swap the public halves, and then from that we can construct a session key we can use. But this is where things get cool. The first half that we generated never got transmitted. So that provides a facility we call forward secrecy. If you remember what I showed you for RSA, all the data that matters for the connection was sent over the connection. So if someone happened to record it, even if at the time they didn't have the private key, if a year or two later they acquire it, they can go back to their recording of that conversation and decrypt it. With Diffie-Hellman, as long as we delete the parts that we didn't transmit, we can be assured that it's relatively unlikely, at least, that anyone's going to ever reproduce this particular conversation. Now, one thing I thought was really interesting: forward secrecy is actually not enabled in most of the different cipher suites SSL and TLS provide. Not really sure what the deal is there. It was kind of a big news item at some point in the past when Google enabled that for all of their services. But it's a good thing. You should consider it. Okay, so regardless, we somehow have established session keys. And so we can start encrypting. So that took a few hops to get there. And before we move on, let me just review what we've accomplished with those. We talked about authenticity. We want to make sure we're talking to the correct person on the other end. The privacy one is pretty obvious. I want to send something and not have that be observed. The last one is integrity. And I didn't talk much about that. And that's because there isn't a step for integrity. The way we do that with TLS is, pretty much every time data is exchanged, a hash is provided for the data by the side that sends it. And so on the remote end, they can validate that that hash still checks out. So that's threaded through everything we've talked about. Okay. These bits are stuff that we typically don't have to deal with ourselves every day. But this part, the actual login stuff, those nuts and bolts, that's what Devise gives us if you use Devise. That's what your login controller will do for you. So all of these pieces are transport layer. And that's our app. You can think about it like this too. So on the top is stuff that someone smarter than me wrote for me and put in Apache or NGINX, and in our browser, in Chrome and Safari. The other stuff, that's my JavaScript, my HTML, my Gemfile, my controllers. The key word being mine. This is all stuff that's my problem. And your problem. So these are kind of separate pieces.
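To make the point that the private halves never cross the wire, here is a toy Diffie-Hellman exchange using Ruby's OpenSSL bindings; this is a sketch only, the 2048-bit parameter size is illustrative, and I believe newer versions of the openssl gem replace generate_key! with OpenSSL::PKey.generate_key, so treat this as the shape of the idea rather than production code:

require "openssl"

# Public parameters; in TLS these arrive from the server, signed with its certificate key.
server = OpenSSL::PKey::DH.new(2048)     # generates parameters (slow)
der    = server.public_key.to_der        # parameters only, safe to publish

client = OpenSSL::PKey::DH.new(der)
server.generate_key!                     # each side's private half stays local
client.generate_key!

# Only the public halves (pub_key) are exchanged; both sides derive the same secret.
server.compute_key(client.pub_key) == client.compute_key(server.pub_key)  # => true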
I'm now going to kind of hand-wave over those first bits, forget about them for a little bit, and focus on how our apps handle stuff internally. So I started up a development server. And I'm going to make some requests. So I'm going to show you the login request that we demoed right at the start. And now, because this is in development mode, I don't have TLS enabled. This means I can actually just telnet in and make a request. It gives you a really good handle on exactly what's happening on the wire when you do this, because there's no other layers in between. There's a request. I just click the login button and we see it's a POST to the users controller to sign in. And down at the bottom, there's the data. I'm sending my email. I'm sending my password. Form URL encoded. And we're relying here on it being sent over an encrypted channel. Because if it weren't, this is exactly what it would look like to an observer. So that's the step. We just sent the login credentials. That's all there is. Verifying it, though, is going to take a little bit of work. So this is the server's log output. And the key bit is this here. We sent a username of railsconf@example.com, and so I'm going to dig through my users database and try to find that record. Now, that's actually the only part of the login process that's really showing up in our logs in a good way here. The rest of it is actually right here, chronologically speaking. That's where we're going to work on the in-memory stuff and figure out if the password I supplied is also valid. I am going to dwell on passwords for a little bit because it's something that we have to deal with a lot, and it's easy for us to screw up. For temporary session keys, that's all stuff that's under our control, but the passwords come from our users. So we can't revoke those without really irritating our users. So let's say we have some users. Some of them have good passwords. Some of them are duplicates. Some of them are bad passwords. Users. So this is a simple way to do it. Now, we all know this is wrong. It's easy, but it's wrong. And it's wrong because if someone gets ahold of our database, they have all our users' passwords. Because users are so readily reusing passwords, it means that if we leak our database somehow, and this just keeps happening if you watch the news, if it gets leaked, we're leaking not only our users' passwords for us and whatever data we have, we're also leaking probably their bank's password, because they're not going to have a new password for every service. We all know better, of course, but they don't. Okay, so storing plain text passwords is simple, but don't bother. Okay, let's give it another try. Suppose we hash the password. This is marginally less bad. Data ends up looking like this in the database. And so we just take the hash of whatever they provide, compare it and see where we're at. But immediately it's really obvious that if you have duplicate passwords, it's exposed. I also want to point out, SHA-1 is really fast. So it's kind of a problem. So, another story time. In 2013, at the Four Seasons Hotel in New York, they lost a lot of money. This is one of my favorite stories about just brute applications of force. They had a jewelry case by the front desk. So this is broad daylight. So the front door is unlocked, the desk is manned, and they have a case here. And it's probably bulletproof glass and locked. But two men walk in, trench coats, hats pulled low, pull out a sledgehammer, smash the case.
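To make the two "don't do this" schemes concrete, here is a sketch of what they look like in Ruby; the user object and column names are hypothetical:

require "digest"

# Anti-pattern 1: plain-text storage. A leaked database hands over every password,
# including the ones users have reused for their bank.
def plaintext_match?(user, submitted)
  user.password == submitted
end

# Anti-pattern 2: a bare, fast hash. Duplicate passwords produce identical hashes,
# and SHA-1 is fast enough that precomputed rainbow tables make reversal practical.
def sha1_match?(user, submitted)
  user.password_digest == Digest::SHA1.hexdigest(submitted)
end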
They grab upwards of $2 million worth of watches, jewelry, cufflinks, et cetera. And they walk back out. And it was so fast no one could do anything. So I like that as an example of, you know, you didn't really expect that failure mode. And that's the way it is with the password hashes here. You could try to calculate all the passwords, all the hashes. And it seems like it should be infeasible. But it's actually not. There's an attack type called rainbow tables. And all it is is an implementation of a time-space tradeoff that lets us create a lookup table. So a lookup table for all zero to eight character passwords with the, you know, 95 printable characters on the US keyboard is less than half a terabyte. So today, I have that on my laptop. Not a big deal. But even in 2000, that much storage would cost you about $6,000. So you can bet the NSA had access to that kind of computing power. And probably anyone with a credit card can make a system that can do that. So even though it looks better than storing plain text passwords, it's almost identical. But we can mitigate that by applying a salt. So in the database, it looks like this. We just generate some random data. In practice, it should be longer than this, but I ran out of width on the slide. And when we hash the user's password, we include the salt and whatever they provided, and then check that against whatever we've stored. This actually finally starts to get us in the realm of being kind of secure. It's still vulnerable to the fact that, you know, SHA-1 is definitely broken; Google published an attack recently. SHA-256 is, I think, the current state of the art. If you're watching this in six months, I flatter myself. If you're watching this in six months, maybe things changed. I don't know. But right now, so this is good but not perfect. We can do better. What you really, really want to do is just apply a password-specific hashing algorithm. bcrypt, scrypt, PBKDF2, there are a lot of them out there, just go and pick the right one. And this ties together everything we have. So it's, I don't know, it's hashy-looking crap that you're keeping in your database. But let me really quick show you a little bit about what's stored there. This is bling delimited, dollar signs. So that first field is the version, bcrypt in this case, version 2a. The second field is 12. That's what differentiates it from just a standard salted password hash. That 12 is a work factor. So I can make that 15 or 20. We can use that to scale how hard of a problem this is. The password hash, we don't want that to be fast. There's no advantage to it. Because for login, our users don't really care if it's 200 milliseconds over 100. And if we find out later that our computers are too fast, bump it up to 20, 25. The first 22 characters there, that is the salt. 128 bits, base64 encoded. And there's the hash. So that wraps up everything. So that's verifying the login credentials. Finally, we can start to get into the point of logging the user in that we started trying to figure out 20 minutes ago. So I made my request. And here was the server's response. And the interesting bit amongst all these headers is this, the session cookie. I actually had to configure the Rails session store to use the database for this. So this might look a little different, but the key thing is it's something that we gave to the user that they can bring back next time instead of the username or password. It's revocable.
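Here is what that looks like with the bcrypt gem, which produces exactly the dollar-delimited format described above; a minimal sketch, with the password and cost chosen purely for illustration:

require "bcrypt"   # gem "bcrypt"

# Creating the stored value: bcrypt generates its own random salt and embeds the
# version, work factor, salt, and hash in one dollar-delimited string.
stored = BCrypt::Password.create("correct horse battery staple", cost: 12)
# => something like "$2a$12$<22-character salt><31-character hash>"

# Checking a login: bcrypt re-hashes the candidate with the same salt and cost.
BCrypt::Password.new(stored) == "correct horse battery staple"   # => true
BCrypt::Password.new(stored) == "letmein"                        # => false

Rails' has_secure_password and Devise's database_authenticatable both lean on this same gem, so in practice you pick a cost and let them do the bookkeeping.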
So if we need to change things, we can just log the user out. Not a big deal. It's a lot harder to do that with their actual password. Okay. So on the next request, they're going to bring in that token, that cookie, and they'll be logged in and we're done. That took us 22 minutes to get there. So this is what we've accomplished. We used the username and password to authenticate the user. We've issued an unpredictable, unique session token that we can revoke. And that whole first bit, those six steps, means we know no one else has that token, because we've exchanged it over a secure channel. That's actually kind of a big thing, how you treat your tokens. Here's an example. This woman, who I've anonymized, at the 2015 Melbourne Cup in Australia, betting on horses, she won $800. And very naturally, she was excited and she posted on Instagram and Facebook about it. In the time it took her to get from the track over to the betting counter, someone pulled her secret token out of that barcode and claimed the prize. She was a bit miffed about that. And now, you know, we think we do better because, you know, it's not printed. You know, we're not exposing it, but it turns out in 2010, developer Eric Butler released this Firefox plugin called Firesheep. This is a screenshot from his website, which is still up. Do a search, you can find it. A lot of major web apps at the time were using SSL to protect the login page, but not to protect all of the other parts of the app. And so that session token, that last property, they broke it. And that meant that if you were connected in a coffee shop on an open Wi-Fi network, you were basically just shouting your credentials out into the world for anyone who would listen. So not optimal. The lesson here is you should always be using SSL, or TLS, which has now replaced SSL, if you care about security at all. Okay. So here's my Twitter. I don't use it too much, as you can see, but I'm going to try to perform the same kind of session hijacking attack against Twitter. So I go to Twitter and open up Chrome's DevTools, and here we have an auth token cookie. So I went over to cookies, and I had to experiment a little bit to find out that it's this one and not, like, the Twitter session one. But if I give this token, they will act as if it's coming from me. So I provide it as a header when I hit Twitter, and so that gets sent over in the HTTP headers, and they give me back a successful response. Now, this whole thing, this is a mess to dig through, but if we actually dig into the HTML, okay, that's still a mess, I'm going to scroll down about 7,500 pixels, and you see, hey, there's my name. I'll leave it as an exercise to the audience to verify that your Twitter doesn't have my name on it. And that's all it took. Now, that does not mean that Twitter is insecure, because they did everything right in terms of exchanging it over HTTPS and protecting those tokens. I was able to get at it because I control the endpoint. It's on my laptop, and I control the browser. So if someone takes over your browser, you're just kind of screwed. I don't really have any good advice for you there. Hopefully, it's a problem that will be solved eventually, but you have to end up trusting something. We trust certificate authorities, we trust our browser, we trust our laptops. That's a really hard problem. Okay, we talked about this. Okay, so I want to take a second here and shift gears.
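On the Rails side, the Firesheep-style lesson boils down to configuration you can turn on; a minimal sketch, and the session key name here is hypothetical:

# config/environments/production.rb
Rails.application.configure do
  # Redirects HTTP to HTTPS, marks cookies Secure, and sends HSTS headers, so the
  # session cookie never crosses an open Wi-Fi network in the clear.
  config.force_ssl = true
end

# config/initializers/session_store.rb
Rails.application.config.session_store :cookie_store,
  key: "_example_app_session",       # hypothetical name
  secure: Rails.env.production?,     # only send the cookie over HTTPS
  httponly: true                     # keep it out of reach of page JavaScript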
One of the guys at my office makes fun of me for really liking single sign-on, but I think it's actually a cool demonstration because single sign-on between two separate isolated apps, if we want to have what appears to be a shared session between them, ties together everything we've talked about so far. You have to use our cryptographic primitives to establish trust with each individual system and between the systems. So, yeah, I got time. We're going to dig into this, and hopefully, you'll see how this complicated dance works a little bit better. So let's say we have some boring system. Maybe it's an HR system. It knows who I am, and it can authenticate me, but it doesn't have any data that I actually am interested in. Maybe there's some other system, a Wiki. Maybe that one has data that I want. They're web apps, of course, so let's dot-comify them. And now we're going to, when we provision these servers, I'm going to exchange some keys. And this particular implementation is just random data, and it's the same data on both sides. There's other ways to set up keys. We just need something that will help us establish trust. And it knows about me, so let's just assume that they have some creepy pictures of me in the database. Okay, so I'm a web browser. Which side am I on? This is backwards. So I go to my Wiki and I say, hey, give me the data. But of course, I'm not logged in. I don't have a session. I don't have a cookie. So the app says they don't know me. But it says go talk to the other one. The other database here is going to be a redirect over here because this is the database that actually knows who the hell I am. Just standard 302 redirect. And we're going to carry along the address for the page I'm trying to get to for that, you know, in about 11 minutes when I run out of time. We'll know which page to redirect me to. Okay, so I go to the other database and I say, hey, will you vouch for me to the other guy? And I'm not logged in here either. So he says, no. But this guy is capable of authenticating me, so he'll give me a login form. And I can fill in this login form. So I make a post request. I include my username in it. And maybe that'll check out. So I'm logged in. He's going to give me a session cookie. But that cookie, it's only good here. My brother is not going to send employees.com session cookies to wiki.com. So he's also going to give me a token that the other app is going to use to authenticate me. And that token is going to have to be two things. One is going to have to establish who I am. And the other is going to have to prove that that token is actually accurate. I'll share a fun story about forged tokens later if we have time. So who I am is trivial. Maybe it's JSON. If you're in SAML, maybe it's XML, but whatever. It says who I am. And that's going to have an expiration date on it. This token is only there to get me from here over to the other app. So it doesn't have to last more than a minute or two. If it does, then if I get fired and my access is revoked, then that token will leave me still logged in. So that would be bad. So we keep this short. We probably URL encode it to make it easier to send. And that's who I am. Now, for authenticating it, there's a lot of techniques. We could use public key cryptography to create a signature which can be verified with private keys. Another method that I kind of like when it's just two systems I'm doing something homegrown is a hash-based message authentication code. 
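One way to picture that token, along with the HMAC-based check described next, is a small identity payload with a short expiry, URL-safe encoded, plus a signature computed with the pre-shared key; this is a sketch with made-up field names, lifetime, and environment variable:

require "json"
require "base64"
require "openssl"

# Who I am, plus a short expiry: the token only needs to survive one redirect,
# so a revoked employee can't replay it later. Field names are hypothetical.
payload = {
  email:      "someone@example.com",
  expires_at: (Time.now + 120).to_i
}.to_json

# URL-safe encoding so it can ride along as a query parameter in the redirect.
token = Base64.urlsafe_encode64(payload)

# Sign it with the key both apps were provisioned with; the wiki recomputes this
# HMAC with its own copy of the key and compares before trusting the identity.
shared_key = ENV.fetch("SSO_SHARED_KEY")   # hypothetical variable name
signature  = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), shared_key, token)

On the receiving side you would recompute the HMAC and compare it with a constant-time comparison (for example ActiveSupport::SecurityUtils.secure_compare) before honoring the expiry and logging the user in.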
So this is actually the specific primitive that's used to do integrity checking on all of the SSL packets as they go across. And all it is is we wrap up some random data as a key, the data to be signed, and a hash function, and they're applied in some fairly specific way to create a hash. So if the other side has the key, which, remember, we pre-shared, they'll be able to validate it. Okay. So, he redirects me. My URL is starting to get to be a bit of a mess as I accumulate things in it. So we're going over to the other app over here, and we are still carrying along my final destination. But now we have the token that says who I am. And we've got the HMAC that says you can trust this, you can trust that it came from someone who knows the key. So I follow that redirect. Just make a GET request. And now that acts like login on this side. It gives me a session token. And so now I have one token for over there, one token for over here. And they're different and they're not really connected. These apps don't have to share a database, because having that key set up lets them send data even over the insecure channel of the user, and we can trust it. And now it will finally redirect me to the place I wanted to go. So I follow that redirect, I provide the cookie, and now I can have all the data. So I wanted to illustrate that just because, and I'll get to this in a second, I wanted to illustrate that because it's such a mess. I feel like applying cookies and SSL and everything, all those layers, there's a lot of different pieces. But that model, that's the same model for OpenID Connect, SAML, as I mentioned, pretty much any single sign-on solution you get off the shelf will work roughly this way. Now, I originally intended to include this graph in the rest of the talk, but it was a pain in the ass. But I wanted to throw it in at the end anyway. So there's just a little dependency graph of how some of the different math primitives and other business things, like trusted third parties, all come together to let us be able to act securely online. I'm not going to go through this, but I just put it up there to show this is a bloody mess. It's a really careful stack of things. And I find it interesting, so I've read about it. I definitely do not trust myself to implement it. So I strongly recommend, if you can, get a third-party, trusted, well-audited library. For example, LibSodium is an implementation. And one thing that's cool about that is they set it up so that they never have a branch in their code that depends on secret data. This is apparently a good thing. But I point it out because that's the kind of thing that I'm not going to think about, because I'm not an expert in this. And in all likelihood, 90-plus percent of you guys aren't either. There's very few people in the world I trust to do this well. I don't know who they are, but I hope they're doing it well for me. Okay, so that's the main stuff I wanted to cover. A few minutes ago, I promised a story. So let me share with you two more fun stories about authentication going wrong. This is one of my favorite ones, and I actually included this in my talk submission to RailsConf. In 1970 or '71 maybe, this is back around the Vietnam War protests, there was a group of protesters that broke into a Philadelphia draft board office. So they were going to steal these Selective Service papers for people that were being drafted. They discovered in this particular office they couldn't break through this door. The padlock was not one they were able to pick or replace.
It was just something they'd done in other similar heists. So one of them had this bright idea while they were casing the joint. That's a tactical term. He wrote a note, and the note said, please don't lock this door tonight. And he pasted it to the door. They came back a few hours later, and sure enough, it was open. So absolutely my favorite heist, it was just so damn simple. So that's the problem if you use SAML or something. You've got a note on the other side that says you can trust a person who has this token, as long as it came from the right place, as long as that token is signed. So it's really critical that you validate your trust at every level, because if you break that chain, you're leaving your front door wide open. Okay, so we've got about four minutes if there's questions, discussions, heckling, high fives. No? Okay, well, thanks.
Picking an encryption algorithm is like choosing a lock for your door. Some are better than others - but there's more to keeping burglars out of your house (or web site) than just the door lock. This talk will review what the crypto tools are and how they fit together with our frameworks to provide trust and privacy for our applications. We'll look under the hood of websites like Facebook, at game-changing exploits like Firesheep, and at how tools from our application layer (Rails), our protocol layer (HTTP), and our transport layer (TLS) combine to build user-visible features like single sign-on.
10.5446/31291 (DOI)
Well, thanks. Thanks a lot for coming. My name is Dave Copeland or DaveTron5000 on Twitter, and I'm going to talk about how to be an effective remote developer, I hope. So I spent the last four and a half years as a remote developer. I was one of the first engineers at Stitchfix. We're a personal styling service for men's and women's clothes, and when I started, we were very small, you know, scrappy startup kind of thing, and, you know, I was just writing code as one does in that situation. But due to the way we work at Stitchfix and over time, my role has also changed. So I was a tech lead of a small team and now I'm a manager of several different teams. And so I've worked with a lot of different people than just developers. Certainly I work with a lot of developers, but I work with people who use our software. The business people who run Stitchfix, vendors that we rely on, all remotes for that entire time. And that's pretty typical of a developer at Stitchfix. We now have over 70 engineers, and most of the engineering work at Stitchfix is done remotely. Over half the engineers don't live in the Bay Area. We're headquartered in San Francisco, but most engineers don't live there. Those that do live in the Bay Area certainly come to the office, but they don't necessarily come in every day. So there's a lot of remote work happening. So what does that mean remote, though? When I was thinking about this, it occurred to me that people work remotely a lot more than you might think if you think about what that means. So I tried to describe it. Bear with me. You do not often interact face-to-face with the people you work with. It's not a great sentence. I tried very hard to make a better one, but I think this gets the point across, right? You go somewhere and do work. Maybe that somewhere is a coffee shop. Maybe that's your basement. And you work with people, and those people aren't there most of the time, right? So that's the situation that I mean by remote. And if you think about it like that, like, there's a lot of remote work happening in the world, not just with developers. So there's the lone wolf, right? This is the hard mode of remote where you are by yourself and everyone else you work with is at some other office. Tricky, but what we're going to talk about in this talk applies to the lone wolf. You also have easy mode, which is everybody is distributed. No one goes to an office. There is no office. One sort of works whenever, wherever. That's a little easier, but still the same things apply that we'll talk about. But think about multiple offices. Like, you could go to an office every day, have a commute, go to a desk with a computer and all of that entails, but the people you work with aren't in that office. You're a remote developer because you're working with people who aren't there. So that's what I mean by remote. So we also have to talk about what is effective. So obviously producing some value, right? Your company's paying you to do a thing and you need to do that thing. So that's part of it. But for you, you also want to be working on something valuable, right? You want to make sure that you're working on the right things, that those things are useful, that people want them or get some sort of benefit out of them. You also want to have some level of agency, right? You don't want your job to just be closing jerry tickets all the time. 
Like, you want to have some broader effect on the team, the people you work with, the company, something like that that's not just doing the task in front of you but growing as a person and you need agency and impact in order to do that. You want to feel included. You want to feel like part of the team. You don't want to be, again, the person whose job is just to close jerry tickets. You want to be part of the group moving forward on a collective goal. And you want the experience to be rewarding, right? No one wants their job to not be rewarding and to the extent that having to have a job is a necessity, you want it to be as rewarding as it can be. Now, I don't want to imply that you get these things for free just because you go to some office. But you get some of these things for free just by going to an office. Like, there is this sort of implicit level of effectiveness that you can just get for free by being with everyone that you work with. So what it means is to achieve these things when you're remote, you have to just work a little harder and you have to be more intentional in your behavior to make sure that stuff happens. And that's what we're going to talk about. There's not a week that goes by that I don't think about the remote experience that I have or the people I work with have. And it does require constant upkeep. And it is not what I would call easy. It's not impossible, obviously. But it is a part of my job to work on this and make sure that the behavior that I'm exhibiting is doing the best to make the remote experience good. Of course, it's worth it. Who works remote most of the time? Okay. All right. So you know what I'm talking about. It's totally worth it because you get this level of freedom and flexibility that you don't get by having to go to an office. I don't have to commute anywhere at all. I can make lunch in my kitchen. My kitchen is in fact stocked with the snacks that I prefer. And I don't have to stock other people's snacks. I can work outside if it's nice. I don't have to use a public bathroom. But the best part about working remote is I don't have to live in San Francisco. I love not living in San Francisco. It's my favorite thing. No offense to San Francisco, but I don't want to live there. Now, the company gets a benefit from this, too. The company, I'm sure, is very happy that I get to make lunch at home. But really, the company gets access to a wider pool of talent. So that's why they are also willing to spend the time on this sort of thing. So if Stitch Fix had decided that every engineer had to come to some office in San Francisco, it would have taken us much longer to build the team that we have, would have been harder to build the type of team that we have, and it would have actually had a significant impact on the company's growth. And because we committed early on to making the remote thing work, we're able to get people from all over the place. I'm not sure if you are aware of this, but there are developers who are really awesome and they don't live in San Francisco. There's a few of them. So we've got a lot of them on our team, and it's awesome. So this is how we do that. So it's not technical. I'm not going to talk about Slack or anything like that. You have to spend your energy building trust with people that you don't know and maintaining trust with people that you do know, and you have to be constantly doing this. So that's what we're going to talk about. 
Behaviors that you can take in a very small way that contribute to building trust with other people and acknowledging that trust. Anyone ever heard this phrase, the half-life of trust is six weeks? So there's a tiny link there that you shouldn't bother with. You can go to the slides later. It's a blog post by Steve McConnell. Steve McConnell is an author of software books. He wrote Code Complete. And in the blog post, he talks about his frustration working with a team located in India, which is halfway across the world from where he was. And he recounts one of his managers saying this phrase, the half-life of trust is six weeks, which sort of implies that if you take no action, if you don't do things to reinforce and replenish that trust, it's going to go away. And when you're not present with people, it goes away much more quickly because you don't see them. Like, you get a lot of trust by default by just being around people. And as a remote developer, you're not around people. And so you have to work extra hard to make sure that the trust that you've earned continues and that you're building more trust and just replenishing that constantly. So I've got four mindsets that you can use to drive your behavior that I believe will build and maintain trust with other people. And we're going to talk about those with respect to all the different things that we do as developers. They are communicate frequently and clearly, be responsive but set boundaries, assume good intentions, and help others help you. So the theme here is that you have the power to make your remote experience good and we're going to talk about things that you can do to do that. But we do have to have a brief chat about technology. So the biggest problem with being remote in terms of communication and building trust is, like I said, you're not actually there. So you don't have the ability to go over to someone's desk and talk to them. You have to go through some piece of technology to be able to just communicate with them. And so you're going to need a few set up. You're going to need some sort of chat system that people can use and that they check and that they're in. And I said the word people, not developers, right? So it's got to be something that every person that you need to interact with is going to be able to deal with. IRC is not that thing. I'm sorry. You're going to need a video conferencing system that accommodates multiple people and that they can use easily. And again, this is people, not developers. They should be able to include meetings and calendar invites. They should be able to connect them to room systems or other things like that. They should be able to get in and out of them very easily. So WebEx, anyone like WebEx? Anyone love WebEx? No. WebEx meets the standard even though it is terrible. So again, the bar is low. You just have to have it there. And if you don't have these things, the entire thing is very difficult. It's also not very interesting to talk about because you just sort of need to pay money and get them set up. And I don't want to trivialize that, but that just needs to happen. You also need a non-shitty microphone. So people's experience of you, yes, those of us who have dealt with shitty microphones, people's experience of you in real time is mostly going to be talking over a video conferencing system, which is very terrible because all of that entails. It's not the same as talking to a person, you know, face to face. 
And you can't control the crappy internet and terrible software that's involved. But you can control the input to that, which is your microphone. So your laptop mic is shitty. I'm sorry if you feel otherwise, but it is. Apple earbuds are perfectly fine. You don't need an amazing $4,000 ribbon microphone, you just need something that works and is near your mouth and Apple earbuds work. So technology over. Back to the harder parts, which is building and maintaining trust. So we, I outlined these kind of four mindsets. So I want to talk about the kinds of behaviors that they could drive when you're doing different things that are part of your job. So we code, right, mostly. Hopefully we're coding most of the time that's the output of our work. We communicate asynchronously, like with an email. We communicate synchronously, like on a video chat. And we socialize. So we'll talk about each of those four things as we go. Coding first, right? Coding, that's the thing you're hired to do. That is the thing that makes you a developer and not someone else. This is probably what you're spending most of your time doing, and this is your work product. So how do we communicate when we're coding? So think about what it's like to walk into a room with a bunch of developers. Like, what do you see? You see people typing on a keyboard into a black rectangle with white text on it. And maybe they're working, maybe they're not, but they sure look like they're working. And just that visual image is actually important, and people will assume, well, the developers look like they're developing because they're typing into rectangles. When you're remote, you don't have that at all. So, you know, to build trust, you need to produce things. And so a way to do that, which is also a great way to write code in general, is to turn larger projects into smaller ones. Find a way to take whatever your problem is and get parts of it in front of other people quickly, whatever your process is. This lets people, first of all, a small change is easy to understand. A large change is not. And so when people see you produce a small change, they're going to see that frequently, and they're going to be able to understand it and give you feedback on it. And that lets them understand you better. And that also shows that you are producing stuff and you're driving towards a result, and that builds trust because they can see that you're actually doing something. When you're making changes, think about what is the smallest viable change, right? Because you have to think about it, not just am I getting the code to work, but can someone understand what I've done so they can give me the feedback that I need to know if it's good so they can say, yes, this can go to production. You want to optimize the way you work so that people can understand that because you're not necessarily going to have another way to talk to them. The only communication you might have is the change request, whatever form that takes. So, it is not, if you're thinking about redoing the test because they're not R-spec enough, don't do that. If you're thinking about refactoring this ugly code you don't like, don't do that. If you're thinking about fixing some white space because it offends your sensibilities, don't do that. I've done all those things and it does not help communication at all. 
When you are submitting your change request, however you do that, so it's stitch-fixed, we make pull requests to GitHub, whatever your process is, you're presumably going to have to write something to explain what you've done. Write more about it than you might think you should. Because, again, you're optimizing for people to understand what you've produced and so you need to give them some clue as to what they're looking at and why they need to look at it and what you want them to look at. So, when I do this I try every time to write the word problem and hit return and I type a sentence or two about what I'm trying to accomplish. I hit return, I type solution and hit return again and I write some couple of sentences about how I approached it, what I tried to do. To give somebody a chance at understanding what I've done because that might be the only chance they get to understand what I've done. More practically speaking, learn how to screencast and diagram. So, early on at Stitch Fix I was working on software and there wasn't an easy way to share it with anyone. There wasn't really a staging server that anyone could feasibly use and I'm not there to bring my laptop over to show it to someone. So, I would just run the app on my laptop and screencast and just talk over using the app in my development environment and that is way easier to explain what you're doing than typing a bunch of stuff out. I use this thing called Jing. It's free for five minutes or less. You can share this screencast privately. If you just get really good at doing that quickly then this is a thing you can just do without friction and include. Diagram is the same way. Just buy OmniGraphil and learn the keyboard shortcuts and then you will make diagrams very quickly and easily and then that is a thing you can bring to better communicate with people. Being responsive and setting boundaries. So, the first boundary you need to set is your working hours, especially when there's time zones, right? I mean, does anyone like time zone math? Because talk to non-developers. They don't even know what it is. They're not going to do it. So, you need to help them. So, there's a lot of ways to do this and you have to take a wide approach. So, I set in my calendar what my working hours are. I include my time zone in my email signature and then when people are interacting with me I am trying to be nice about what my working hours are. So, if someone schedules a meeting with me at like six o'clock, I say, hey, that's a little bit late for me. I don't know if you know this. I'm on the East Coast. Can we do this later? Right? It's a very nice way to kind of set some boundaries and not being a jerk. Or if someone's giving me feedback or I'm chatting with someone I might have to say, hey, it's the end of my day. I got to take off. Let's pick this up in the morning. Right? So, but you do need to set those boundaries because people aren't going to know and they need to know. So, when you're asking for feedback, when you're writing your code and you're like, this is done. I'm ready. What is the next step? Someone has to like say, yeah, cool. Or whatever the process is, you're kind of watching for feedback. And so, the second you do that, your job then becomes to respond to that feedback. Why? First of all, it shows that you're engaged and that you're trying to drive completion, which builds trust because people like people that drive to completion. But it also allows you to capitalize on the context that people develop by giving you feedback. 
If someone like reads your pull request and gives you feedback and then you don't respond to it for a couple of days, well, they've forgotten whatever they looked at and they have to go relearn it again if they're going to engage with you. But if you do it quickly and you engage with them back and forth, then you're making everything better. And what I've learned is you need to develop a workflow that does not put you heads down away from all forms of communication for hours at a time. And this is really hard because sometimes as developers, we need to do this. But the more you are not able to be contacted, right, people can't walk over to you. So, they have no way to contact you except for the avenues that you have set up and created. And if you're not responsive to them, then they can't take advantage of your expertise. And that is going to drive you more towards the JIRA ticket closer and not closer to a full-rounded developer who has agency and impact and is growing the team and growing themselves. So, I will happily tell you my crazy workflow that I do for this. It might not work for you. But the gist of it is I have an SLA for all forms of communication. And I try to stick to that. And I have a way that while I'm working, I can check all the forms of communication to see if I need to do anything without forgetting my spot. Whatever works for you, try to find some way to do that. And the more you do build up trust, though, the more you can kind of get away with being offline for a little while. So, this could be a thing where early on you need to focus on being more responsive. And as you kind of build up trust with people, you can take more time to do this stuff. This is hard to do. It's very hard. Assume good intentions. So, code review comments. Anyone like any of their code commented on? Was that a fun experience? Okay. Yeah, it's very harsh. We as an industry aren't used to critiquing each other. We're not used to being critiqued. We're not good at it, on either end. And so, it can be hard to accept. And also, people are bad at it. And it can often come off mean. Or you can read a tone into it that is unpleasant. It's not good. But you have to assume, the only way to deal with this is to assume that the reviewer, whoever is telling you stuff about your code, they're just trying to help. They're trying to help make your code better, make you better, make the company better, whatever it is, they're trying to help or they wouldn't bother commenting. I'm not saying to tolerate bad behavior, but bad behavior is a pattern that you can observe. A one-time thing, you have to assume good intentions, otherwise you'll go crazy. Now, a way to deal with this problem leads to helping others help you. Not everyone is good at it, like I said, but some people are exceptionally good at just talking. So, if you're having an interaction in text that is not working, then jump to video. Like, this is my failure mode all the time, is I never go to video. But if you go to where someone communicates well, then you will have good communication with them. Right? It kind of, right, stands to reason. And instead of making people come to you, you should also be specific in the feedback that you want. If you put up a pull request and just say "thoughts?", you're not likely to encourage good feedback. So, if you actually specify, like, what are the areas of your code that you want someone to, like, look at? Like, oh, this variable name, I had a hard time coming up with it, I don't know, does this make sense?
Or does this method like make any sense? Could someone sort of, like, help me understand does this look right? So, that does several things. One, it lets people, like, quickly figure out what to do and can more easily engage with you, which means that you will engage with them. And, you know, that kind of builds trust. But it also shows a little bit of vulnerability, which is huge for building trust, because it shows that you're willing to acknowledge areas of your code that aren't perfect. And so, then when people interact with you, they know that they're getting an honest and authentic experience, because you're willing to call out things that aren't perfect. Okay, code. Asynchronous communication. So, all this coding stuff is a form of asynchronous communication. It's very specific to our profession. But you're also going to interact with people not about code, writing emails, sharing documents, texting in Slack or something like that. There's all kinds of asynchronous communication. And this is the primary way you're going to communicate with most people, just by nature of being remote. Fortunately, this is a little easier to deal with. So, communicating frequently and clearly. You have to provide more context. So when you're talking to somebody, you can see their eyes start to glaze over, you can see them get confused, they can raise their hand, they can interrupt you if you're not, like, making a point. But in an email or a document, like, you're not going to get that. You might never know if someone understood an email that you sent. So you need to provide a little bit more context to increase the chance that they are going to understand that. So, explicitly state what problem you're trying to solve or what information you want or what you want them to do and why give some more back story so that there's a chance that they get what you mean. You should also become a better writer because you're going to be writing a lot. And this is how to do that. So you write something, you want someone to read. The least you can do is read it yourself first. And when you do that, you find all kinds of mistakes in your writing. You find all kinds of incorrect words and other things. So everything that I write, I read at least once and I revise at least once because there's always a way to make it better. And I just do this all the time. And as you do this more and more, you make a habit out of this, then you will do it frequently and the more important an email might be or whoever you're sending it to, you will revise more and more. And it just makes you more effective at communication when you're sort of at least taking it for a dry run. Typography matters. A wall of text is impossible to understand. Like hit return a few times, make some paragraphs at least. Every few sentences there should be a paragraph or something. But bold, italics, underline, like those exist, they have meaning, you should use those. Bullet lists, I mean it's stupid to say any of this but I remember in my youth every email was just nothing but courier text wrapped at 80 characters and if people wanted bold then that was their problem, right? That is not the way to effectively communicate. Again, the diagramming thing is really helpful especially when you're communicating to outside developers or people who are not good at processing text which is lots of people, learn how to make diagrams quickly and efficiently and that's a tool that you can bring to bear and make things more clear. Being responsive. 
So a lot of the things we talked about with code kind of apply here. The point is you want to engage, give feedback if you're being asked for it, ask for feedback if you want it, like, engage so that you're interested in helping someone solve their problem. Because when you do this, this is how you get agency and this is how you have an impact, right? And this is why being responsive is so important. You have an opinion about things and if you don't share that opinion with anyone then that opinion is not going to affect anything. It's just an opinion that you have. But if you share your opinion with people then that is a chance for you to affect how things are, and if your opinions are good and they are helpful then you will be seen as someone who is good and helpful. And that means people will trust your opinion and trust you and come to you and ask you for things and give you more of an impact on what you do, and that makes a much better work experience. So being responsive and helping people out is good. But don't forget affirming feedback. So all the stuff we've been talking about about feedback is like critiques, this is wrong, fix this, blah, blah, blah. It's very easy to get wrapped up in that, and we do want those. That is the critique that we want because we want to be better. But it is nice when someone tells you that the thing you did is good, and it's even nicer when they say it in a very detailed way. So if you say, oh, that API design looks good, I mean that's nice, that's very nice. But if you say, the names you're using in the routes map exactly to the domain, which makes it really easy for me to understand, this whole thing is like really simple, thanks for putting this together, that is really great to hear, because it shows that you've understood what they've done and that you took the time to say something nice. And if you're a person that says nice things to people and makes them feel good, that builds trust. So I used to think affirming feedback was pointless; it is not pointless, it is very, very valuable. Assume good intentions. So a lot of asynchronous communication you have as a developer will be with non-developers. Sometimes there are people that use ask as a noun or use solution as a verb, and if you're like me it drives you completely insane. But the point is that's just a communication style, it's not an indicator of ability. So I choose, and this is hard, but I choose to assume that everyone I interact with is killing it at their job, that they're really good at their job. Because that mindset is the only way to deal with other people, otherwise you get wrapped up in communication style. I mean the number of PowerPoints that I have had to have a conversation about in the last four years is kind of staggering, but that's how some people communicate and that's cool because they're good at their job and that's fine. But help others help you. So again, kind of same deal, ask for the feedback that you want. The other cool thing about this is this is another avenue to give context to help people understand what you're doing. If you're telling them what you want, that helps them figure out what it is you're trying to accomplish and is much more likely to garner some back and forth. So this is the hard, for me anyway this is the hardest part, is synchronous communication, which basically means being on some sort of video conferencing system. Why?
I mean part of it is I'm an introvert and so this saps my energy to have conversations, but another part of it is that it's this weird uncanny valley version of having a conversation with a real person, right? Because you can see them and you can hear them but everything between you and them is terrible software that barely works and the whole thing is just awful. And so it's just more stressful to have to deal with this, but some conversations cannot happen over text. Like you have to have synchronous conversations sometimes. So how do we deal with this? Be prepared. So I find if I'm expected to, like, say words to people, the more confident I am in the subject matter the more likely I am to say something that makes sense. So read the material before the meeting. See who's there, what is the meeting about? Do you have an opinion about what we're going to discuss? If so, formulate that opinion in your mind at some level of detail so if you have to say something it will make sense and people will get it. You should also try to speak with more nouns than pronouns. When you say stuff like he, she, it, they, this, that, thing (thing's not a pronoun, but you know what I mean), those words don't mean anything. Who knows what that means? The only way you can know is if someone said a noun before and you can guess that that noun is the thing that that means, and okay, now I get what you're saying. And when you're talking to a person face to face you can see them get confused and be like, ah, okay, sorry, this is what I meant, blah blah blah. When you're on a video you can't see that. Also, due to technology, you will not always be heard. Not everything you say is going to be heard. Someone coughs, a word is going to drop out, two people start talking, there's crosstalk, people aren't going to hear what you say. The internet drops for a microsecond and one word goes. I've been on video chats where I missed the noun because of technology and then for two minutes people were discussing and they never used a noun and I literally couldn't figure out what they were talking about and I had to interrupt and say, what is the noun please? I don't know what this means. I'm sorry, you've got to tell me. So when you do that, that means that the people who do hear you are more likely to know what you're talking about. A better way is to pause frequently and ask for feedback, right, because people can't interrupt. Interrupting on video is very hard. People aren't going to necessarily do it. So take a moment. Okay. This is my proposal for how we're going to do this JSON thing. Does that make sense, everybody? And then you have this pause. When we do our all hands engineering meetings at Stitch Fix, like I said, a lot of us are remote and our CTO is really good at doing this. She'll give us information and then she will pause and say, anybody have any questions? And then if nobody in the office has questions she will always say, anybody on the phone have any questions? And then pause for a very uncomfortable amount of time, because she knows that it takes a while for us to go, oh, do I have a question? I do have a question. Let me ask that question. That takes a while. So she puts those pauses in there so that nobody has to interrupt and everyone has a chance to participate. So when you're speaking you should do that. Being responsive and setting boundaries. This is kind of the other end of this. For me this is the absolute hardest part. Do not multitask.
So if you are in a conference room with people and you're on your phone or you're on your laptop like that's very rude and so you wouldn't do that because social norms say I got invited to a meeting I should pay attention to the meeting I shouldn't be on my phone. But when you're on a video conference you're literally on your computer and stuff is popping up and you can check email and Slack and you can do these things. You can tell yourself that you can multitask but you're not really paying attention and then either you miss things that are important or you get called out. I have definitely been asked questions in meetings where I have no idea what to say and I just had the cop to it. I'm sorry I was multitasking please repeat the question. That's super embarrassing. When you do have something to say it is very awkward to jump in but sometimes you're going to have to and because of all the delays that happen you may have to ask the group to backtrack and just be up front about it. Hey I'm sorry to jump in here real quick but can we go back for a second. I had this thing and this comment like it's hard to do this. Some people just can't. I have a hard time not as hard as others. The pausing thing that I talked about if others are doing that that helps and when you do have the floor that is your time to set an example on explicitly calling on other people especially people that you know are not going to be comfortable jumping in but you know have an opinion. That is a really good technique. Oh Chris you just did this thing what's your thoughts right or anyone on the phone else have something before we move on. That's how these meetings should be run and so you have to set that example whenever you can. No one's going to know that the AV is a problem but you. This is like when someone has something on their teeth you just have to tell them because no one's going they're not going to know so you have to be comfortable pointing it out. It sucks but that is a fact of life. And of course you have to be self aware. These behaviors that I'm describing that kind of have to exist on some level are jerky behaviors if they're done too much or too frequently or too aggressively and so you need to have a lot of self awareness when you do this. You've been asking people for feedback when you're offline to say hey I'm sorry I kept interrupting you. I'm really not trying to talk over you but maybe I was like give me your feedback on how that went so I can get better. Like yeah it's definitely hard. And that sort of follows on assuming good intentions because people are going to interrupt you and it's going to seem jerky and you have to assume that they're just trying to deal with the technology. The other thing though is that the people who are maybe not remote who are part of this who are in this mythical conference room they're not having a great time with the video chat either. Like it's not great for them to have all of this stuff going on and it is friction for them and having some empathy for them is really helpful because it's actually something that you all can bond over. I complain about the technology all the time and it's kind of can lighten the mood sometimes. Because computers are terrible. You have to rely on all of these servers and all these pieces of technology and they all barely work. I'm amazed it works at all. They don't work consistently. It's just all just terrible. And acknowledging that can make everyone sort of feel a little bit better about getting through the video conferences. 
Helping others help you. So I mentioned before about the AV system issues. This will definitely happen and you need to kind of channel your inner wedding coordinator and just tell people what they need to do because they're not going to know. Hey, point the laptop at Pat because Pat is the one who's speaking. Just tell people what to do. They're happy to fix it but they're not going to know because they're not experiencing the AV the same way that you are. If you can recruit an ally or have a blessed back channel that is really helpful too. So at Stitch Fix we do company all hands meetings and as I mentioned a lot of developers are remote but there's a lot of other remote people. We have lots of different offices. Our stylists are remote. People are traveling. So remote is a big part of our all hands meetings. And there's a wonderful person on our ops team who monitors the back channel and she asks questions for us and she fixes AV problems and we can handle this all by chatting and not interrupting the meeting. So if you have an ally that is super helpful. Okay, socializing. Why are we talking about socializing? You go to an office. People know things about you. Even if you're the most private person on the world and you never say anything to anyone, people will know what kind of clothes you wear, what your hair looks like, how tall you are, what time of the day you come into work, how often you go to the bathroom. These are banal silly things but people will know them about you and if you are not going to an office they will know absolutely nothing about you. I'm amazed at the height of my coworkers when I first meet them. It's not what I expect ever. So what this means sadly for the introverts is you're going to have to make small talk. You have no other way to interact with people and when you're on these video conferences, especially with the West Coast, people are going to be late and so you have a lot of time to kill. You're waiting for everyone to find the conference room and get there. So make small talk. It sucks but you can talk about the weather. It's the beginning of a conversation, a beginning of a personal connection. For me it's easy because I live in D.C. where we have weather. San Francisco, they don't have weather so it's always an evergreen topic. Another thing that we do is we do one-on-ones with people that aren't our manager or aren't our direct report or anything like that with no particular agenda. So I have a lot of one-on-ones with people where we'll talk about work sometimes but we'll also just chit chat. Sometimes it's just five minutes but it's a time set aside to interact as people as best as we can and it feels awkward to schedule these sorts of things but it's the only way to make them happen and it feels normal after a little while. Being responsive and setting boundaries. This is more about travel. Getting together with people helps replenish this trust a lot. It is really handy if you can do it. Travel is a pain so you want to make sure you understand what are the expectations hopefully before you take the job. I guess I'm saying this because you might not be told what the expectations are because it might not occur to someone that travel is difficult for you or something like that. So be clear about what the expectation is and try to do it when you can. I mean a lot of us work remotely because travel is difficult or we want the flexibility to run our lives in a certain way and travel sort of disrupts all that. 
It's totally true and definitely set boundaries, but if you can travel it is worth it to reestablish those bonds with the people that you work with every day because you do refresh that trust by seeing people in person. Assume good intentions. Right. So people are going to make small talk with you too because they're going to want to know who you are and they might ask things that are uncomfortable and they're not trying to be nosy. They just want to get to know who you are. If you have things about your personal life that you don't want to talk about that's totally normal and totally cool but that's all the more reason to do that whole small talk thing. If you're driving the chit chat then you can drive it in the direction that is safe for everyone. And just be okay missing social events like happy hours. My respect and admiration for my coworkers is not derived from the number of beers I have shared with them. It comes from the good work that we do. That's the mindset I take, and so therefore I don't care if I miss happy hours or social events. I enjoy them when I can make them and it is nice to have a beer with my coworkers but many of them I've never socialized with at all and they're still great people that I love to work with. You just have to sort of be cool with it. But it doesn't mean you can't socialize at all or you have to miss everything. What it kind of means is that it might not be clear what those things are and if you have ideas bring them up. You'd be amazed at how successful you can be by bringing your boss an idea that requires your boss to do absolutely nothing but say yes. You do that, you're good. But it does mean you've got to take the initiative there. If you can find a way to have in-person meetups, be creative about it. There's a couple of developers who live on either side of our Dallas warehouse and every once in a while they will go to the Dallas warehouse and work together. They don't work on the same project but they get to see each other in person and have a little social interaction. Be creative and bring them up and suggest them and often your boss will be happy that you're helping to figure this problem out because they're not going to know how to make the experience good for you. All of these are tiny little things that all build little bits of trust between you and others. When you're trusted and you work with people that you trust, you are going to be way more effective at your job, way happier. You're going to have much more agency. You're going to be producing much more value. You're going to feel included and the whole experience is going to be much more rewarding. That's it. These are the four mindsets again. Communicate frequently and clearly. Be responsive but set boundaries. Assume good intentions and help others help you. I have just kind of described how things are in my company and so you should come work for us or just talk to us and we'll tell you how this is and we can give you all kinds of details on how this stuff works for us. That other link is for these slides that you can check out if you like. Thank you. The question is when there's social events and there's business talk, you're missing out on that and that's a potential thing. That is absolutely a problem and that's sort of a company culture thing. It's hard to fight information sharing that you aren't even aware is happening. That is tough. Getting to know the company culture before you make decisions helps and asking about that directly sometimes is the way to do that.
That is potentially a problem for sure. What's a good frequency for meeting up with people in person? When I started, I did every two months because I was new and I knew I had to put a lot of effort into making it work and that was helpful. Right now, the team gets together every three months. For me personally, I do that but then I end up going for random other things in between. Everyone comes every three months and that seems to work pretty well. Thanks a lot, everybody.
Being on a distributed team, working from your home or coffee shop isn't easy, but it can be incredibly rewarding. Making it work requires constant attention, as well as support from your team and organization. It's more than just setting up Slack and buying a webcam. We'll learn what you can do to be your best self as a remote team member, as well as what you need from your environment, team, and company. It's not about technical stuff—it's the human stuff. We'll learn how can you be present and effective when you aren't physically there.
10.5446/31292 (DOI)
Cool, so first of all, thank you all for coming out here today to spend your morning talking about failure, which is a light topic. So I think DHH sort of set you up and now I'm just going to like knock you down. As was mentioned, I'm Jess Rudder and if you're the tweeting kind, that's my Twitter handle, so there we go. On to the show. Alright, when I was 26 years old, I decided that I wanted to learn how to fly a plane and my friends and family were very skeptical. They had such words of wisdom as, you don't even know how to drive a car, you get motion sick, you're afraid of heights. And all of this was 100% true, but I actually didn't see the relevance. You see, no one was going to clip my wings. So we're now six months in and I was on final approach for runway 21 in Santa Monica after a routine training flight. I eased back on the throttle, tilted the plane's nose up a tiny bit, entered a flare and floated gracefully, more or less, down to the ground. And suddenly the plane jerked to the side and my instructor was like, I've got the plane. I was like, oh, you've got the plane. And he brought the plane to a stop on the runway. And as our hearts calmed down, we got out and we looked at the damage. It was a flat tire. Now, with my heart rate finally calm and the plane finally stopped, a flat tire really isn't a big deal. You just have to tow it back to a maintenance garage and change out the tire. The runway was going to be blocked for less than five minutes. So I was actually really surprised when my instructor said, hey, I'm going to drop you back off at the classroom and then I'll come back to fill out the FAA paperwork. My heart rate jumped right back up. Whoa, whoa, whoa, whoa. It's just a flat tire, no one got hurt. Why does there have to be any paperwork? You see, in addition to not being the biggest fan of paperwork, I was really worried that I had just gotten my instructor in a lot of trouble. But it turns out that the FAA collects details on all events, big or small, even a tiny tire blowout during the landing in my little four-seater plane. They want to get as much data as possible so that they can work out patterns that can help them implement safer systems. They know that more data means they'll be able to draw better conclusions. But they also know that people really don't like paperwork or getting yelled at. So to make sure that pilots are willing to fill out these reports, they have a policy that if there were no injuries, nothing you did was illegal, and you fill out a report, there's not going to be any punishment. Now think about the very different approach that we have to automobile accidents. When I was 12 years old, I was riding home from the Saturn dealership in a shiny brand new car. It was the first brand new car that my parents had ever purchased. We're sitting at a stoplight and suddenly we lurch forward. We'd been rear-ended. My dad got out to check on the other driver, an incredibly nervous 16-year-old boy. Now, the other driver was fine, everyone in our car was fine, and the only damage was a small puncture on the bumper from the other car's license plate. My dad looked it over and he said, well, look, I guess that's what bumpers are for. He told the kid to be careful, and then we all piled back into our slightly less shiny car and drove home. No paperwork was filed, no data was gathered. In fact, there's not a single agency out there collecting data on car issues.
It's usually handled by local agencies like the police, and they do not like it if you call them up to let them know about something as trivial as a flat tire. Heck, you can have an accident where two cars actually bump into each other, and as long as no one's injured and no one wants to make an insurance claim, this will never end up in any records anywhere. So these two different approaches have actually led to very different outcomes. I looked up the most recent stats available, which were for 2015, and for every 1 billion miles people in the US travel by car, 3.1 people die. And for every 1 billion miles people in the US travel by plane, there are only 0.05 deaths. Now, if you're like me, decimals, especially when you're talking about a fraction of a person, can be a bit hard to wrap your mind around. So this is a bit easier. If you hold the miles traveled steady, 64 people die traveling in cars for every one person that dies traveling in a plane. Now, there is something very interesting going on here. We have two different approaches that lead to two very different outcomes, and the key difference is actually how each approach deals with failure. You see, it turns out that failure isn't something that you should avoid. It's a way to learn. Now, before we go much further, it's probably a good time to make sure we're all on the same page when we talk about failure. So what is failure? I think for some of us it might be that sinking feeling that you get in the pit of your stomach when something didn't go right and the person is yelling at you and they have the angry face and you're like, why did I even bother getting out of bed this morning? And I can absolutely relate to that. And as I was prepping for this talk and looking at what different people said, I found a lot of people that were like, oh, failure, that one's simple. It's the absence of success. And I was like, sweet, what's success? And they were like, psh, so easy, it's the absence of failure. Oh, not really helpful. But researchers, they actually have a very specific definition of failure. Failure to them is deviation from expected and desired results. And that's not bad. Now, I honestly think there's some truth in all three of these definitions. But that last one, deviation from expected and desired results, that's something that you can actually test and measure. So we're going to stick with that one for now. Now, I couldn't actually find any definitive data on this. But I think that as developers, we have more results that deviate from our expectations than just about any other group of people. So you'd think that programming would be the perfect place to learn from failure. But one of the few places where I could actually find people routinely, most of the time, capitalizing on failure was in video game development. And one of my favorite examples of this is with the game Space Invaders. Do you guys sort of know the game Space Invaders? So it's the old arcade game where you control a small cannon at the bottom that's firing at a descending row of aliens. And as you defeat more aliens, they speed up, making them harder to shoot, right? No, that actually was not the game. That's not what it was supposed to be. The developer, Tomohiro Nishikado, he had planned for the aliens to remain at a constant speed the entire time. No matter how many aliens you destroyed, there would not be a speed increase until the next level. And he wrote the code to do exactly that. There was just one problem. He had designed the game for an ideal world.
And I don't know how much you know about 1978, but 1978 was far from ideal. And he'd actually placed more characters on the screen than the processor could handle. And as a result, the aliens sort of chugged along at first and only reached their intended speed once enough of them had been destroyed. Now Nishikado had a few ways of dealing with this. He could shelve the project until processor speeds were fast enough. And that might seem silly, but maybe he had a vision and he was not going to compromise. He could also have modified the game design, put fewer aliens on the screen so that it could run at the constant speed that he wanted. But instead of being rigid and insisting on his original vision, he decided to have people test it out. And they absolutely loved it. They got so excited as things sped up, they would actually project emotions on the aliens. They were like, oh, these guys are getting scared. I'm taking them out and they're trying to speed up because they know that I am about to kick their butt. And it was so popular that he kept that in the game. And the failure was actually responsible for creating an entirely new game mechanic, the difficulty curve. So before this, games would always be the exact same difficulty for an entire level. And it wasn't until you got to the next level that things would actually get more difficult. After this, all bets were off. Things could get difficult at any point whenever the developer pleased. Now, I don't know if the developer here had read the studies, but he was actually capitalizing on a lesson that I see time and again in the research about failure. Failure is not something to hide from. It's something to learn from. In fact, it turns out that failure actually presents a greater learning opportunity because there's more information encoded in failure than in success. Think about it. What does success usually look like? A check mark. A thumbs up. A smile from a manager. And what did you actually learn? Well, there's research on this. Research actually shows that people and organizations that don't experience failure become rigid because the only feedback that they get tells them, just keep doing the exact same thing you're doing. Don't make any changes because you're already winning, buddy. Don't change anything. Failure, on the other hand, looks a whole lot more like this. Okay? Look at this. Look how much information there is available in this error message. If we read it closely, we can figure out exactly what went wrong. We know which line in the code has an issue. And if we have some experience with this particular error message, we probably know what that issue most likely is. And even if we've never seen it before, we're just a quick search away from pages and pages worth of information about this particular failure. Now that we've had an experience with an approach that didn't work, with a bit of effort, we could actually figure out how to write something that does work. Now video game development actually has a long and honored history of grabbing hold of mistakes and wrestling them into successes. In fact, the concept of exploiting your failures to make your program better is so important it actually has a name. They call it the good-bad bug. Now having space to learn from their failures, that actually came in very handy for a group of developers that were working on this game in the 90s. So the concept for the game had players racing through city streets and they were being chased by cop cars. 
And if the police caught up with you and pulled you over before you got to the finish line, you lost. There was just one problem. The developers got the code for the algorithm just a tiny bit wrong. And instead of law-abiding police officers trying to pull you over, you had these super aggressive cops trying to slam right into your car. And they'd do it at full speed no matter what you did. The beta testers, they actually had way more fun avoiding the cops than they'd ever had with the racing game. And as a result, the entire direction of the game was switched up and the Grand Theft Auto series was born. So I just want you to think about that for a moment. The core concept of the best-selling video game franchise of all time in history ever would have been lost if the developers had panicked and tried to cover up the fact that they screwed up the algorithm. They made a mistake, but instead of freaking out, they thought, all right, let's see what happens. And they cashed in. Now, there are actually some larger programs where hundreds if not thousands of hours of work have already been done by product leads and designers and business people before a developer ever gets to write their first line of code. And in game development, the work is encapsulated in a document called the game design document. Now, the GDD is considered a living document. However, it's actually a pretty big deal for changes to be made late in the game. It means that tech requirement pages need to be redone. Art pages need to be redone. Release dates have to be pushed back. Budgets might be off. You get the picture. It's a big deal to change this. But that was actually the unhappy reality that the Silent Hill developers were facing. So they had started out building the game to the GDD specs. But there was one tiny problem. Pop-in. You see, the PlayStation graphics card couldn't render all of the buildings and the textures in a scene. So as you walked forward, buildings would suddenly pop into existence and blank walls would magically have a texture. And as you can imagine, that sort of "oh, hi, trees" moment distracted people from the game. And a horror game is very dependent on atmosphere that sort of pulls the player into the game's universe. So this was kind of a game breaking issue. Now, it would have been easy for every single person involved to start pointing fingers at everyone else. After all, everyone had sort of played their part. From the designers who put just one or two more buildings in the background to make it interesting, to the tech team that decided to make it for the PlayStation instead of the more powerful 3DO, to the business team that determined the release date. There was not a single individual anywhere along the line that had made an obviously bad call. There were just a bunch of tiny issues that sort of snowballed into a big problem. You see, the entire system had failed. But instead of running from the failure, the Konami team sidestepped it. They found a workaround. They filled the world with a very dense, eerie fog. And it turns out that fog is actually pretty lightweight for a graphics card to render. So now, it obscures distant objects, which means you couldn't really see buildings and textures on the horizon popping in anymore. And as an amazing added bonus, it is really, really, really creepy. In fact, it was so creepy that this fog became a staple of the Silent Hill series long after graphics cards had caught up and become powerful enough that pop-in wasn't an issue anymore.
So that's like another example of success being ripped from the jaws of defeat simply by embracing your failures. Now, these examples from the programming world actually helped to illustrate what was happening at our more high stakes example in aviation and automobile accidents. You see, the aviation system saved so many lives because accidents are treated like lessons we can learn from. Data is gathered and aggregated and patterns are identified. If an accident was caused by a pilot being tired, they never just stop right there. They look at pilot schedules and staff levels and flight readiness checklists to determine what contributed to that exhaustion. In contrast, who do we usually blame for road accidents? Yeah, the driver. Oh, she was reckless. That dude, he does not know how to drive. In other words, airplane accidents are always treated as failures of the system, while car accidents are treated like failures of individuals. And with all that judgment going around, it's no wonder that people spend so much time trying to cover up their errors. I definitely stopped at that stop sign. That's the guy who went through it. I mean, they spend time covering it up rather than just acknowledging the failures and learning from them. Now, not everyone in this room is a pilot, but I actually think that we have a lot to learn from how aviation handles failure. If we're willing to use a system to track and learn from our failures as we write code, we're actually going to be much better off. So that sort of begs the question, what should that system look like? Now, in broad strokes, I think that there are three very important pieces that this system would need. And the first one is to avoid placing blame. We're going to need to collect a lot of data. And then we're going to have to abstract patterns. So step one, make sure that you understand that you are not the problem. Cool. That is much easier said than done, right? I mean, learning not to beat yourself up over a failure and mistakes is probably like a whole talk in and of itself, or like a whole lifetime of self-discovery and work. But with aviation failures, like one thing to note is that they never just stop at the top level of blame. So there was actually a case where the pilot made a critical error by dialing in the wrong destination in the flight computer, and it caused a wreck. So on the cockpit recording, they could clearly hear the pilot yawn and say, I'm super excited to finally get a good night's sleep. Now, it would have been very easy for the researchers to stop there and blame the pilot for being tired. But for them, it wasn't enough to know that he was tired. They actually wanted to know why. So they verified that he had had a hotel during his layover. But that wasn't enough. So they verified that he had checked in. And then they looked at the records of every single time that door had opened and closed so that they could establish the maximum amount of time that the pilot could possibly have slept. And even then, they didn't just say, okay, we've shown that the pilot could not have possibly had more than four hours of sleep total tonight. They looked at the three-letter flight computer readout, and they were like, wow, you know, now that we're thinking about it, that's an incredibly confusing interface if you're tired or distracted, which a cockpit is a pretty distracting place. Now, they're always willing to point out where individuals have contributed to a failure. 
But they also want to focus on what went wrong with the entire system. So if you take away from failure in code or anywhere else in life, is anything like, I'm dumb, I just can't learn this, this probably just isn't my thing, you are absolutely missing out on all the best parts of failure. Now, I understand not everyone is going to be at the point where you can kind of quiet that inner critic yet, but if you just spend some time trying to ignore it and work the rest of the system, given enough time, you're probably going to find that the voice in your head starts to contribute helpful insights rather than just criticism. Now, step two, document everything. Even things that seem small. Heck, I think you should document especially the things that seem small. So my flat tire on the runway in Santa Monica was a very small error. But just as we saw with the Silent Hill example, a lot of those small errors and missteps can start to roll up into a major problem. And catching those problems early on in course correcting is going to help you avoid major meltdowns. So how should we document things? I'm a big fan actually of paper documentation, but as long as you have some sort of record that you can refer back to, the form of documentation is really going to be up to you. You should definitely include details about what you were trying to do, the resources you were using, whether you were working alone or with other people, how tired or hungry you were, and obviously what the outcome was. Get specific when you're recording the outcomes. If you're trying to get data from your Rails backend out of your alt store into your React components and it keeps telling you that you cannot dispatch in the middle of a dispatch, don't just write down, React is so stupid and I can do all of this with jQuery, so why is my boss torturing me? Because that does not help. Trust me, I tried. Now look, the final step is to make use of all that data that you've been diligently gathering. Imagine how powerful that data is as you go through it and start extracting patterns for when you do your best work and when you do your worst work. Instead of vaguely remembering that you struggled the last few times you tried to learn how to manipulate hashes in Ruby, you can actually see that you were only frustrated two out of those three sessions and the difference between the one where you felt good and the other two is that you were well rested for that one. Or maybe you notice that you learn more when you pair with someone or when you have music playing or when you've just eaten some amazing pineapple straight from Kona, Hawaii. On the flip side you might discover that you don't learn well past 9 p.m. or that you're more likely to be frustrated when you're learning something new if you have not snuggled with the puppy for at least 20 minutes prior to opening your computer. And that is a very good thing to know because it's a lot easier to identify the parts of the system that do and don't work for you when you actually have a paper trail and you're not guessing. And you're also going to have a really nice log of all the concepts that you're struggling with which if anyone in here has ever said, oh, I'd love to write a blog post but I just don't have any idea what to write about, this log of all the things you're struggling with, that's your blog post source right there. 
Now let's say that you go back and you read this data and you see that in your last epic coding session you wrote: I was trying to wire up my form for my rate this raccoon app and it worked. Sort of. The data got where I was sending it but it kind of ended up in the URL which was a bit weird. Cool. You actually have a very well-defined problem to research and it won't be too long at all after reading some form documentation that you realize you were using a GET action on that form, and GET requests put the data in the URL. POST requests are the ones that keep it hidden in the request body. So now you're just going to need 20 minutes of puppy cuddle time and you're ready to go fix that form. Now I've been focusing on how individuals can learn from failure today and the thing is this is also incredibly important for teams. So in the research on failure there's actually a pretty famous study that looks at patient outcomes at demographically similar hospitals and they found a very strange thing. At the hospitals that had nurse managers that were focused on creating a learning-oriented culture instead of a blame-oriented culture, there were actually higher rates of error. But they also had better patient outcomes and they were like, that's weird. Here we have hospitals where people are encouraged to be open about mistakes and they make more mistakes but patients are better for it. And so they dug in because that was not what they were expecting to find and what they found was that it was just that people were more willing to report their mistakes, which meant that the hospitals could find what was causing the mistakes and correct them, which meant that patients had better outcomes. At the blame-oriented hospitals people were afraid of losing their jobs over even tiny mistakes and they would spend a lot of time covering them up. And maybe some of you have been on programming teams where that's the situation, like if you break production everyone's going to yell at you and you have to wear a stupid hat and people are going to make fun of you. And so you spend a lot of time like, oh crap, I just pushed a debugger up, alright maybe I can do a hot fix before anyone finds out. And the underlying issues actually never get dealt with. And if you show me a dev team that has a zero tolerance policy for mistakes, I'll show you a dev team where engineers spend a good portion of their time covering up mistakes rather than writing good code. If you focus on blameless postmortems, rewarding experimentation and just you know not being a dick to people because humans make mistakes, you are actually going to have very different outcomes and probably more longevity and less turnover on your teams. Now look, like everything else that you try, the process that I'm proposing may not actually work perfectly for you the first time around. Thank you. Now at the risk of going a tiny bit too meta, just figure out what about the process isn't working for you and see how you can adjust it. That's right, you can learn from the failures that you're learning from while you're trying to learn from your failures. And as you get more comfortable gleaning info from failures, you're actually going to find that every single bug is actually a feature as long as you can learn from it. Even if you end up deleting every single line of code and starting over again. Thank you. So that's a great question.
So it's how to communicate that failure is something to learn from the engineering side to the business side that might have a different perspective. And that's absolutely going to be tough because if you are at a corporation that sees like a bug in production as the end of the world and you might lose your job because of it, then that's going to be a problem. And it's going to be very hard for the engineers to be willing to fail publicly if they're going to get punished even if it's not from their managers. So part of that is, you know, it's up to everyone to try to like educate the other people on the team about like how this is part of the engineering process. But that's obviously, you know, your work is building things and like fixing bugs and maybe not fixing people. So it's unfortunate. I would say like if you're in management, working hard to try to like help business people understand that is great. If not, then buffering your team as much as you can from being punished for mistakes while allowing them the space. So maybe it's like fail internally, be open about it, have blameless postmortems, but put a nice veneer on top for the C level people that need to think that everything's perfect and that that's the best way to do things. Sure. So the question is about the difference between an early career developer making a mistake and being visible and a late career developer making a mistake and being visible. So I actually had a similar question where someone was like as a woman, part of my career growth has been looking like I know ten times more than the next person that was applying for the job and being open about failures could tank my career. What do you recommend? And I was like, I'm not an expert. I recommend you like protect your fucking career. But I would say that I think it's great when like late career developers are open about failures because that gives room for early career developers to be open about their failures. I'm not embarrassed by not knowing something. So when I've been in like code readings or anything like that and they're like, oh, are there any questions? I'm always more than happy to raise my hand and say I didn't understand that thing because for me it's not embarrassing not to know something. And so I kind of like am willing to take one for the team and afterwards people will come up and say, oh, like thank you so much for asking. I thought I was the only one that didn't know. And so I think part of it is like if you're late in your career and you're comfortable with where you are and you're willing to show that it's okay to fail, then take that one for the team. If you feel secure in where you are in your career, take that one for the team because all of us are going to be much better engineers by being willing to fail and learn from that, then we will be covering it up. And so if you think that it might cost you your job, cost you your livelihood, cost you the ability to go down the career path you want to, like don't lose out on the value that you get from at least acknowledging internally your mistakes. You can keep a private diary of what's gone wrong and how you've learned from it. You don't have to publish it publicly. But I would say like don't let other people keep you from being the engineer that you want to be. And if the way that you get there is by trying stuff out, writing the code, breaking the code, and then going back and fixing the code, don't be afraid to do that. 
Like absolutely don't let other people's weird ideas about everything needing to be perfect from like the first key press on the keyboard keep you from learning because ultimately I mean like you do you. You be good engineers. Like don't be afraid. So the question is about being so cool with failure that people no longer worry about making mistakes. So they're just like whatever. So I mean one thing is I think most programmers want to build stuff that works. So it's going to be very rare that you find someone that's like I don't care that it's not working for people. Whatever. It's lunchtime. Like most of us have, we're like my baby is broken. My code doesn't work. Like we put all that pressure on ourselves. In the research just at least in Western cultures, I can't speak for other cultures, but Western cultures like put such a stigma on failure that it's very rare that you would find someone that's gone too far the other way. And it's much more likely that you're going to end up in a place where people are covering up mistakes and making things even worse because of that. I think if you do find yourself in a situation where, so sort of I think the tell if someone is getting a little too comfortable with mistakes is if they keep making the same one over and over again. And that's not what this is about. This isn't about being like oh stuff breaks whatever. It's about saying like when stuff breaks, what can I learn to do it better next time? And so if you've gone too far on the like laissez faire side of things where you're like no, nothing's a problem. You know, you just reel it back in and you're like hey, I noticed that you've pushed a debugger up to production three times in the last week. You want to like, let's come up with a way that we can make sure that doesn't happen anymore. And yeah, so you're just course correct there. Yes? Yeah, I mean, so the question is sort of finding the balance between the extra time cost of documenting things versus the value that you get when it works out well and you get some value from it. And at the risk of this being a cop out, I'd say you're going to kind of have to experiment and find what works best for you. I don't know if anyone in here has had the experience of like learning something, writing a blog post so you don't forget it or like writing some notes. And then six months later being very confused on a topic, googling and finding your blog post that you wrote six months ago when you had the problem. And I mean, what like, what a love letter that is to yourself where it's like, past you is like Jessica, you are going to struggle with this six months from now. I know how to explain it to you. Here you go. And so like in those moments, that time you took to write that blog post, you either learned it better and were able to do it more fluently next time or you have this great resource written by you for you for the next time you're confused on it. And I think in those instances, people really see the value in like the documentation and the learning. Certainly if you spend like a couple months, you know, like I write a lot of stuff down and nothing is getting better for me, then yeah, scale it back and find a different thing that works better. Sure. So concrete steps that senior developers can take to help junior developers sort of document and learn from their failures. Yeah, I mean, it's probably going to vary by individual and vary by team. I think certainly writing tests is great. 
Like without getting too pedantic about it, having at least some test coverage, forcing you to like think about, especially as a junior, the system that you're about to architect, forcing you to have a safety net for when you make a change later and don't realize that it's going to break things, tests are amazing. I'm a big fan of having junior developers and senior developers alike write blog posts and documentation, because it's like we get so much, especially I mean in this room, I think most of us are Ruby and Rails developers probably working with other open source libraries and other things and like maybe we're not all at the point where we can sit down and write a gem or a framework that's going to like help tens of thousands or millions of people but we can write a blog post that's like I struggled with this, this is how I got around it, this is what really made it clear to me and that helps the person that's learning kind of solidify what they've learned and then it's just like this beautiful gift to the rest of the community. I can't tell you how many times like people have been, so I work at a school where we teach developers and there have been plenty of times where students have Googled an issue and it's been resolved by a blog post that a student like two, three, four semesters earlier wrote when they were going through that same issue and so it's both a great way for juniors to learn as well as like a great gift to the community by people that can't necessarily give through like writing code yet. I mean that, oh sorry, the question is if I've experienced a situation where I thought covering up a mistake was better than admitting it, I mean all the freaking time because I don't like being yelled at, I like have such an irrational fear of authority which has zero to do with how my manager actually interacts with me but every time he's like, oh do you have a minute to talk, I'm like, oh my god this is it, I'm being fired and everything's the worst and he's just like, oh I wanted to tell you you did a great job on that feature launch and I'm like, so when is my last day? So I mean the thing is that I've never actually found a situation where it would be better for me to have hidden it. I've found situations where I was really scared and the thing that I thought was going to be bad which is like a manager yelling at me actually did happen because I worked at a crappy place where that's how they treated small mistakes like a typo in a report that the client had said, oh we never look at that because there's just too much data in there and then my manager's like, you made a typo, you don't care enough and I was like, yeah it's not the 18 hour work days, it's my level of caring and so I think like there are situations where it feels like the better option is to hide the mistake but I've never actually seen it where there's like long term value from hiding the mistake and I mean I may be privileged and lucky in that that it's like never like killed my career. I'm sure like being seen as someone who makes typos in a report certainly didn't help me at that ad agency but I left there and learned to code and my life is better so screw those guys. Anyone else? Awesome, you guys have been fantastic, thank you so much.
The history of programming is filled with examples of bugs that actually turned out to be features and limitations that pushed developers to make an even more interesting product. We’ll journey through code that was so ‘bad’ it was actually good. Then we’ll learn to tame our inner perfectionists so our code will be even better than it is today.
10.5446/31295 (DOI)
Welcome. My name is David. You can find me on the internet as David, Twitter, GitHub. I work for a company called Michelada, like the drink. If you've never had one, I mean, a michelada is a great drink that us Mexicans drink all the time. It's made with ice, clam juice, magical Mexican sauce, chili powder, salt, lime, and of course beer. And it's delicious. You should get one if you've never had one. But anyway, the name of my talk is Tips and Tricks for New Developers. I'm going to give you a little bit of context on why this talk, why I decided to give this talk. So, a long time ago, in a galaxy far, far away: 2009, Las Vegas, my first RailsConf. And it was a great RailsConf. Back then, I had just moved onto Rails, had my first job. I was being paid to write my first Rails app. And I was building a huge e-commerce platform, but knew nothing about, you know, the real stuff. Like, once you're past, like, the tutorial phase of Rails, like, what happens next, right? And I remember that there were several talks at this Rails, at this particular RailsConf that sort of changed my view and helped me a lot. Things like solving the riddle of search, using Sphinx through Rails. Like, it made me figure out that it might not be a good idea to filter my products with SQL or do searches with, you know, SQL LIKE queries. So, I learned that there was this thing that you could use to search. We ended up using Solr, by the way, not Sphinx, but anyway, I sort of learned something new. I learned about WebRat. You probably don't remember WebRat, but it was what we used to test applications before Capybara and all this fancy new stuff that we have. But I learned about it. And then the most important one, or one of the most important ones, is I learned about Git. I was using SVN before that, and VisualSourceSafe before that. So, I learned about Git. And it was, I think, one of the best talks I've ever heard. It started with Git in 60 seconds, I remember clearly. And the speaker sat there and just basically read that in less than 60 seconds. It was amazing. And then he proceeded to do the proper talk, right? And another important thing that I learned at that RailsConf is, like, what I was supposed to do once I had my application. I learned that there was this thing called mongrel, that it was our web server. You probably don't remember mongrel, of course. I learned that I had to, you know, set it up on different ports. You had a process, a mongrel process that started on each of the ports. And then you had NGINX on top, or Apache on top of it, that sort of, like, did the proxying around. And then if you wanted to scale, well, you scale like that, just add up more servers and HAProxy to do the proxying. And you will be fine. If you need more power, just add up another one of those. And that was back then, in 2009, before Puma and all this new fancy stuff that we have. So, I remember Las Vegas, right? So, I think that we still need talks for beginners at RailsConf. We do need to know how to do, like, microservices and how to handle a gazillion requests per second and all of that. But we need talks for beginners so people can come here and learn stuff that they don't even know they needed to know, right? So, this talk is for beginners. Most of the things that you learn here, maybe you already know it, maybe not. But I do want you to know: this talk is for beginners. And it's just based on my experience, what I do when I start a new Rails application.
Like, the things that I always use, the things I usually do and all that. So, I hope it helps you. I had to remove a bunch of stuff, but these are the most important things that I think will work for you. So, let's begin. First of all, let's talk about configuration files. When you start a new project, right, you use your Rails new command, the first thing that you want to do, well, it's going to do your bundle stuff, it's going to copy files and all that. The first thing that you want to do, I consider this a courtesy to other developers, is please copy your database.yml file into an example file and then gitignore the original one. So, we can always have on hand a configuration file that we can base ours on. It's awful to go into a new project and then, you know, start your Rails server, you have no database.yml file, and then you realize there's nothing, you have to go Google it, paste it and all that. It's just time consuming. So, please keep an example file of your database file. It's going to pay off in the future. And do so for any other configuration file that you think is needed for the project to run. Please create sample files, commit those, and just ignore the real ones from Git, right? And while we're talking about configuration files, did you know that you can have as many as you want and you can separate those in different files, like, for example, you can have a payments.yml file. And then you can just call it from the application like this. Use config_for and just send the name of the file as a symbol, and then you will get that particular configuration. So, you can use it to just organize your configuration files better and everyone will be happier when it's cleaner and you know exactly what kind of configuration is in each of the files. And then remember that wherever there's a string or an integer in the code that's being compared to something else, you probably want to use a constant or a setting in one of those config files. So, remember that if you see code like this in your Rails application, try to change it to something like this. Use a constant for that 30. And if you want to go, like, if you think it's going to change constantly, just move it all the way over to a configuration file. You're going to be very happy that you did this when your boss comes and says, hey, I need to change that to 10 minutes, and then 15 minutes later he's going to say, like, hey, change it to 20 minutes. This way you know where to change something, just restart your server and you know you're done, instead of, like, looking for that setting in the code base. Let's talk about the database. Now, this one, you can't imagine; in eight years of being a Rails developer, I've seen it a lot. Do not gitignore the schema file. Please. I've been to teams where, or I've joined applications where there's no schema file and then I ask, like, what happened to it? And, ah, there were too many conflicts with it. So, we just got rid of it. Like, that's the point, you know? So, please do not ignore your schema file. It's very important that you're always able to restore your database state with either db:setup or db:reset, especially if you're new to the project. You can just go there, you know, configure your database, run one of those, and you're up and running to start coding. So, please, please do not ignore that file. It's very important that you find those conflicts and then talk to your fellow developers and say, hey, I added this column. Hey, yeah, me too.
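Stepping back to the config_for and no-magic-numbers tips above, here is a minimal sketch of how that could look. The payments.yml file, its keys, the PAYMENTS_CONFIG constant, and the Order model are made-up examples rather than anything from the talk, and note that on newer Rails versions config_for returns symbol keys instead of plain string keys.

# config/payments.yml (hypothetical; commit a payments.yml.example, gitignore the real one):
#   development:
#     provider: stripe
#     payment_timeout_minutes: 30
#   production:
#     provider: stripe
#     payment_timeout_minutes: 20

# config/initializers/payments.rb
# Loads the section for the current Rails.env from config/payments.yml.
PAYMENTS_CONFIG = Rails.application.config_for(:payments)

# app/models/order.rb
class Order < ApplicationRecord
  # The tunable number lives in the config file instead of being
  # hard-coded as a magic 30 in the comparison.
  def payment_expired?
    created_at < PAYMENTS_CONFIG["payment_timeout_minutes"].to_i.minutes.ago
  end
end

With that shape, when the boss asks to change 30 minutes to 10 and then to 20, only the YAML file changes and the server restarts; nobody has to hunt for the magic number in the code base.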
This is a tricky one. Don't change your data in your database migrations. This is hard to achieve, so I'm going to soften the tip a little bit: be careful when you change data in your database migrations. It is pretty common that you do something like this. You add a column to one of the tables that you already have, and then you decide that you want to fill it with something, maybe based on the current data that you already have, like here, right, we're splitting the code column into two columns. So you do something like this, right, you add the new columns, you go through all the companies, you split it, and then get rid of the old column. The problem with this kind of code, first of all, is that it's not future-proof. It's not future-proof because, for example, what happens if in the future your Company model is now the Organization model? This migration won't run anymore. And while migrations are not supposed to last that long, they're just there, you know, to help you keep track of history, of how the database evolved, and your schema.rb file is the one that's supposed to have the real database structure, it's still nice to try to keep them as coherent as possible. The other problem that you might find with this kind of code is that you can, you know, have longer deploy times. It's very, very common that you are, you know, in development mode and you run this migration and it runs really fast, but you forgot that there's, like, three million companies in your database. So now it's going through all of them in production, and, you know, it's doing all of that, and your database is in a weird state. Maybe there are more migrations that are coming, and the site can't run without those migrations, so you're going into maintenance mode or whatever and your boss is angry because it's taking so long. And you forgot about the three million rows. So try not to do that. You're risking downtime, basically, if you're changing your data in your migrations. And the other one, so this code, for example, this is going to happen in production, I guarantee it. You forgot that maybe code is nil, so it blows up and now, you know, you're stuck with the migration in a weird state on the database and everyone's yelling at you because you have to log into the server, change that code real quick, run the migrations again. I've been there, so it's a really bad idea to do this. So one way to fix it: prefer SQL over Ruby. If you're gonna do it, you're gonna be careful, but if you're gonna do it, you might be better off like this. Do your thing, use regular SQL to do your updates, and move on, right? This is faster, of course, because it's not iterating through all of the objects, it's not instantiating anything, so it's just updating whatever needs to be done. If there's a nil, it's not gonna blow up, so this is way faster. So try to think of your data changes as SQL, and prefer this. It's gonna be safer to do this.
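To make the SQL-over-Ruby point concrete, here is a rough sketch of the company code example done with a single UPDATE instead of a loop. It assumes PostgreSQL (split_part is Postgres-specific) and made-up column names and code format, so treat it as an illustration only:

```ruby
class SplitCompanyCode < ActiveRecord::Migration[5.0]
  def up
    add_column :companies, :region_code, :string
    add_column :companies, :branch_code, :string

    # One UPDATE over the whole table: no Company objects are
    # instantiated, and rows with a NULL code are simply skipped
    # instead of blowing up.
    execute <<-SQL
      UPDATE companies
      SET region_code = split_part(code, '-', 1),
          branch_code = split_part(code, '-', 2)
      WHERE code IS NOT NULL
    SQL

    remove_column :companies, :code
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```

For simpler cases, update_all on the model gives you the same single-query behavior without writing raw SQL.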
But the best thing to do, in my opinion, is to use a gem like this, rails-data-migrations. There are a lot of gems that do the same thing, but I think this is the best one, because it works with Rails 5, by the way. And this gem is gonna give you a different set of rake tasks for your data migrations, right? So, for example, if you had this regular migration, once you have the gem, you're adding a column and then doing an update on all the records on the table. What you would do, in this case, is create a data migration, you know, set authors active. It will create a special migration in a different folder, and then in that migration, you will add your data changes, right? So you don't need them anymore in the regular migration. And now, when you deploy, you will run first your database migrations that will change the structure, and then with a different rake task, you will run your data migrations. So it's a separate process, and it makes everything cleaner. And it also keeps track of how data is changing using the same kind of timestamp that the database migration uses. So this is way better. And if you don't want to use a gem, at least maybe first make all the migrations that change the structure, and then add migrations later to change the data, right? Let's talk about background jobs. It's pretty useful, needless to say, to use background jobs. You have something in place like Sidekiq, Resque, Delayed Job. I still used Delayed Job a couple of months ago. And what I noticed is that it's regularly a bad idea to pass objects to your methods that go in the background. There are a few things that can go wrong. But what I mean is this. Let's say you have a job class that has this perform method, and you probably want to do this, right? You have an order and a user, and you're going to assign it to it. What I mean is that it's better if you do this: just pass the IDs, then find your objects again in the job, and just do whatever needs to be done. The problem with using objects is that they're usually converted into YAML, and they end up being, like, huge strings that, if you're not careful and there are special characters in the description field or something like that, just make everything blow up. And you would not notice until you see, like, all the alerts coming up. So it's better if you just use the ID. It's smaller to just, you know, pass an integer around instead of a whole object as YAML. And it's just going to be better. It doesn't matter in terms of performance, because it's already in the background, so just let it do it, right? And speaking of background jobs, always, when you start a new application, always, always deliver your email later. Don't wait until, you know, your first SMTP authentication error happens and makes your checkout blow up, because you were expecting the checkout to send the email after payment was completed or something like that. Always, when you start, start thinking about delivering your email later, always. And the same rule applies for mailers. Instead of passing objects, try to pass IDs and just pull the records over from the code in the mailer, right? This way, even if you don't have anything in place right now to sort of delay the sending of the mailers, in the end you're just going to need to add that delay method if you're using Sidekiq or Delayed Job, or if you're using Active Job, just use deliver_later, and that will be it. But use the integers. Don't use the objects, for the same reasons as the background jobs, okay?
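Here is a minimal sketch of the pass-IDs-not-objects advice using Active Job. The job, model, and mailer names (OrderAssignmentJob, Order, User, OrderMailer) are made up for illustration:

```ruby
class OrderAssignmentJob < ApplicationJob
  queue_as :default

  # IDs serialize as small integers instead of big YAML blobs, and
  # the records are re-fetched here, so the job works with fresh data.
  def perform(order_id, user_id)
    order = Order.find(order_id)
    user  = User.find(user_id)
    order.update!(assigned_to: user)

    # Same rule for mailers: pass the ID and send from the queue.
    OrderMailer.assignment_email(order.id).deliver_later
  end
end

# Enqueued from a controller or service object:
#   OrderAssignmentJob.perform_later(order.id, current_user.id)
```

If you're on Sidekiq or Delayed Job directly rather than Active Job, the same idea applies: pass the integer, look the record up inside the worker.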
Mmm, let's talk about REST. This is my favorite lesson for all new developers. You have to be RESTful all the time. REST is like the core of Rails and MVC and everything that you're doing, and it's simple. It's very simple to sort of explain, but it's hard to, like, actually implement it. Let me tell you why. REST, to me, means that on your controllers, you only have these actions. Nothing more. Actions should always be index, show, new, create, edit, update, or destroy. No other action. But it's pretty common to find in code bases things like this, right? You have the products controller, and then you have an action to deactivate the product, and you create your route, and you just get, and member, whatever, right? This is wrong. This is not RESTful. When you think about REST, you should think that for every request, there's a something, a resource, that's changing. So in this case, when you're deactivating the product, what's changing? It's the product state. So the correct thing to do here is to create a product states controller, and you're updating it, right? This is more RESTful, much more RESTful. For example, if you have a shopping cart, right? Instead of having an action to apply a coupon, what's happening? You are creating a discount, or a shopping cart discount. So create a controller for the discount, and you are creating it. This is RESTful, right? This is very common. If you have contacts, and you have a search, so you create a search action. Wrong. That's totally wrong. You create a contact searches controller, with searches as the resource, and then you have an action for that search. This is RESTful, right? You have something that is changing, because that's what REST is supposed to be. So the way to make sure that you're complying with this rule is very easy. Just don't have controller actions that are not on that list. It's always index, show, edit, new, create, update, or destroy. That's it. If you have an action that's named differently, wrong. Right? Okay. Now, a few Ruby gems that I think must be on all projects. I find them useful. I always add them, so I hope this helps you too. This one you've probably heard of a lot, RuboCop. RuboCop is going to analyze your code. It's going to check how it's written, and it's going to tell you where you're doing things right and where you're doing things wrong. And it's automated, so it's pretty cool. Instead of having someone yelling at you, it's RuboCop showing you. A better teacher than humans, trust me. A very common excuse that I've heard about not using RuboCop in your project is because you already, you know, have a large application, it's been two years in development. If you add it right now, there are going to be like 500 errors and you don't have the time to fix them. That's okay. What you can do is run it with this option, --auto-gen-config, and it's going to create a .rubocop_todo.yml file that's going to include an exception for everything that it found. So that way, you just add it as a sort of include file to your main RuboCop file, and that way you have a sort of clean slate at that point in time, and then you are supposed to come back later. That's why it's called a to-do file. And remove all those exceptions and fix them. That's what you're supposed to do. You probably won't, but it's fine, right? At least from that moment on, if you write bad Ruby code, it's going to tell you starting from there, instead of you trying to fix all the other 500 or whatever errors you find. Auto-gen-config also helps for a new project, so you can, like, pull configurations that you might use, like change the rules depending on your style.
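A typical setup after running the auto-gen step looks something like this. It's only a sketch, and the exact options you override will depend on your team's style:

```yaml
# Generate the baseline of existing offenses once:
#   bundle exec rubocop --auto-gen-config
#
# .rubocop.yml
inherit_from: .rubocop_todo.yml

AllCops:
  TargetRubyVersion: 2.4
  Exclude:
    - "db/schema.rb"
    - "bin/**/*"

# Project-specific rule tweaks go below this line.
```

From that point on, RuboCop only complains about new offenses, and the to-do file is your backlog of old ones to clean up.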
For example, there's this one where it configures how many columns, how many characters, RuboCop lets you have per line. I usually have it at 80 or 90; you can set it to 130. I know there are people that say, we have large monitors, why do we need this? The thing is that your vision is really narrow, so it's better if you have fewer characters to read at once. You just have to, like, sort of nest code around, so it looks better if you try to trim the characters on your lines. Try that one. This one, Style/Documentation, I usually move to the main file because I'm not going to write comments on all my classes or methods or anything, so I just get rid of it; no one has time for that, right? And this one, I think, is important. This is the one I value the most about RuboCop, Metrics/MethodLength. It forces you to make methods that are no longer than 10 lines. It's hard to, you know, comply with this one, but if you do, your code will be so much cleaner and so much more readable, so trust me, try this one. I'll allow you, if you go to 12 or 13 lines, that's fine, but don't go over that. But, you know, forcing your methods to be 10 statements and that's it helps you, like, clean up your code base. You're supposed to try to abstract the code into smaller methods and then test those: unit testing, that's why they call it unit testing. So keeping this one is really gonna make your code so much more readable and so much better. And then if you want to run this before you push code to Git, you can use a hook. If you add that file, .git/hooks/pre-push, it's gonna run RuboCop before it actually pushes the code to the repo, and if it fails, then the push will be canceled. So that way you can just push, RuboCop failed, oh, I have to fix this first, fix whatever offense you have and then push it over. It's very useful. For example, us, we have a CI server that actually runs RuboCop, and our repos are filled with commits that, you know, read like, fix that RuboCop thing, that RuboCop, because the build fails when RuboCop fails. So it's usual to push, wait for the CI server, fail, why? Because, you know, maybe you went over with the characters or whatnot, so you have to make a commit that only fixes that. Fix RuboCop, dammit RuboCop, there's a lot of commits like this on our repos. So what I tell the guys is just use a hook, so, you know, you run RuboCop before actually pushing, and if it doesn't pass, then you won't push, and you can fix it and not curse at it. Annotate, this is a gem that I don't see often, and I don't know why, but I love it. Annotate, what it does is, let's say that you have, like, a model User with name, username, password, active, whatever. When you run Annotate on a Rails project, well, after you do your migrations, what it's gonna do is, well, annotate your models with information about the database, automatically, basically. So if you add columns, you just run Annotate again, and it's gonna add that information at the top of your classes, and it's gonna do so for models, unit test files, and factories if you use them. So it works as a reference, a quick reference, if you are working on a model, about the columns that it has. Instead of, you know, going to the database or checking the schema or whatnot, you just have it right there at the top of the model, and you can see it real quick. It's pretty useful to have these annotations handy. And this one is pretty useful too.
If you use the routes option, then it will annotate your routes file. So you don't have to rake routes every time you need to figure out where your controller should go. You will already have it. And if you're very old like me and use Vi as your text editor, you're gonna be thankful that it has, like, contextual autocomplete. It basically autocompletes only on what's open, so you open your routes file, and you have autocomplete for all those path methods. So it's pretty cool. Very useful. But I don't know why people don't use it. So now you know, please use it, Annotate. Very useful. Bullet. Bullet detects the N plus 1 problem as it's happening, as you're developing. If you don't know what the N plus 1 problem is, it's very common in Rails, but it's something like this. So let's say you have a Book model that belongs to author and has many comments, then you have the Author that has, you know, name, whatever, and then you have the Comment that belongs to user and belongs to book, and your index page looks like this. So when you go to this page, the controller usually looks like this, probably, and then when you look at that page, your log file will look like this. There's a bunch of queries going on, like three queries, four queries per row, and, you know, it's just getting all the objects one by one, because that's what you told Rails to do. It doesn't know what else to do. So this is called the N plus 1 problem. It's very common when you're developing a Rails application. So if you use Bullet and you configure it, let's say, for example, what I'm doing here is, well, enabling it. I want it to show on the console, I want it to show in the Rails logs, and to add the footer. What happens is, as you load that page, you will notice that there is a footer that says, hey, you know, you're probably better off if you include the author on the query, or if you include the comments on the query. And it's going to yell at you from the console too, from the Rails logger, right? It's going to tell you, hey, you know, you're doing something weird here, so why don't you fix it? And you're going to go, oh, okay, yeah, let me fix it. So you go to your controller, you use includes properly, you nest it as needed, and then when you load your page, you will now have, you know, the queries for each of the classes. You don't have, like, a bunch of tiny queries. So this is way better, and Bullet, you know, just tells you as soon as you're developing that you're doing something, not wrong, but maybe weird. And another similar gem, Oink. This one detects memory leaks before it's too late. I don't know if you've worked with Rails applications that have memory leaks. You have a memory leak when your application starts growing to, like, I don't know, 1.2 gigs in RAM, and then you need to restart. Actually, that's the best way to fix memory leaks in Rails applications: just set up Monit, and if it goes past the threshold, just restart the server, done, right? Because it's so hard to find memory leaks in the code in Rails, it's very, very hard. Once it's there, it's not gone ever, like, you're stuck with it, deal with it, basically. That's a solution. So Oink, when you have it in your code base, it's some sort of middleware, so you initialize it, and it's gonna add at the bottom of your logs, when you're developing, this information, which is basically how much memory it's using and the number of objects that you are instantiating in that request.
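Wiring up both of these gems is usually just a couple of configuration lines. A sketch, based on the options described above; double-check each gem's README for the current API before copying it:

```ruby
# config/environments/development.rb
# The Bullet options the talk walks through: enable it, yell from the
# console and the Rails log, and add the footer to rendered pages.
config.after_initialize do
  Bullet.enable       = true
  Bullet.console      = true
  Bullet.rails_logger = true
  Bullet.add_footer   = true
end

# config/application.rb
# Oink hooks in as Rack middleware and appends memory usage and the
# count of instantiated Active Record objects to the log per request.
config.middleware.use Oink::Middleware
```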
So if you're developing something like this, and you see that you have, like, 500 comment objects instantiated, something must be wrong, right? There's something wrong going on. And it's actually kind of safe to use in production. I wouldn't enable it all the time, but if you're sort of trying to figure out where the problem is, I would enable it, and then just let it log a few requests and try to find where I'm instantiating a lot of objects. So it's very useful for that. And, wow, right on the 30 minutes. That's it. That's all I have for you right now. Like I said, I had to remove a bunch of, like, advice, but this is, like, what I thought was most important. And that's it. Thank you. Thank you.
Are you ready to begin building applications with Ruby on Rails? It's very easy to follow a tutorial and learn how to build a blog in 15 minutes, but there's a lot more to it in real life when you try to code a big web app. During this talk I will give you a bunch of tips and tricks for Rails development that almost everyone follows but rarely anyone talks about. If you are about to join this fantastic community, this talk is for you.
10.5446/31296 (DOI)
So, welcome to Uncertain Times, protecting your Rails app and user data. How many of you here were at DHH's keynote this morning? Cool, almost, I think all of you. So, I was also there, sitting in the audience, and I noticed that a lot of the themes he had in his keynote are actually similar to what we're going to be talking about today. So, I'm excited for that. I'm super excited to be here. This is my first ever conference talk. I can't believe it's here at RailsConf amongst this awesome community. So, just one note, if you're looking for a chore list of what to do for putting in security measures, it's not going to be exactly that, but I'm hoping to start a conversation here and really start talking about security in a new light. So, let's get started. So, originally, when I was creating this talk many months ago, the title was Uncertain Times Ahead. But in those couple months, a lot has happened. It's been some untrained times. Yeah, another connection to DHH's talk, the Juicero. It was up at that, I don't know. But it's not Uncertain Times Ahead. It's Uncertain Times Now. Uncertainty has always been here, and especially now, I think it's more, we're more aware of that than ever. So, who am I? I'm Krista. You can find me on Twitter. Krista A. Nelson. A bit about my background. Went to a big university, studied math. Went to a big corporation, spent a lot of years making rich people richer. Got sick of it. Went to the Turing School out in Denver. It's an awesome seven-month Rails program. If you haven't heard of them, check them out. And then after that, I was looking for my next career and my next job. I really wanted to make sure that I found something that I was passionate about. You know, I'm not just making other rich people richer. And if I'm going to wake up and work hard every day, I'm doing something good. So, I found Glassbreakers, which is an enterprise, the enterprise platform that connects employees on personal identifications. So, things that are really sensitive, like your race, gender, sexual orientation. Also, some more fun things to do like foodie or hiker. But why we are there is to, again, connect people to build a platform to empower them. And so, we want to make sure that we're not putting them at more harm by having us trust in their sensitive data and not treat it well. So, when people ask me what I do at Glassbreakers, it's hard for me to come up with an answer because, you know, I'm a back-end developer, but I do more than that. You know, I really focus on the security and making sure that we're doing everything that we can do. Unfortunately, when you say you work on security, people's minds think security, network security, firewalls, and they start asking me all these really complicated questions that I actually don't focus on day by day. So, I was trying to come up with a new way of explaining what I did. And I really had to think, why do I go to work? What am I doing? I'm trying to build something that's going to help people, and I'm trying to protect them from harm's way or bad. So, then I came up with, I'm a protection advocate, but then I thought, that might sound like I'm advocating for a different type of protection. So, I, which, I advocate for all protection, but I landed on developer user protection advocate. So, that's what I'm going to go on for now on. I hope there's some more of you in the crowd, and I hope after this talk, you'll all want to become user protection advocates, because I think we need a lot more in our community. 
So, this, this has been me, pretty much the last year, digging into security, ever since I've taken over the focus. I've just been reading blogs and blogs and blogs. If you Google software security, oh my goodness, like the outcome that you get from that, there's just so much to dig through. And the more and more I read, instead of becoming more clear of what I needed to do, I was getting more clouded, kind of more confused, like, what do I focus on this? Do I focus on that? You know, where do I go? So, then I started talking to everyone I could, anyone I could talk to about it, I would try talking to my coworkers, my parents. They're so sick of me talking about this, my Lyft drivers, my mailman, you name it. I was talking to them about security. And I noticed two things. Everybody loves talking about security. Everyone has their favorite, like, breach story, you know, like, oh, the Ashley Madison, you know, the LinkedIn, the Yahoo's. Everyone, like, everyone knows about, it's a problem. They have a favorite story. And then the other thing I noticed was everybody had an excuse. Why they didn't have to worry about it. Like, oh, it's a good thing that at my company, we have a security team, so I don't even know what they do, but they handle it. I don't have to worry about it. Or I hear this a lot. Oh, our company is too small. It's lucky we don't have to worry about it. We don't have any information that's sensitive. We don't have, you know, HIPAA compliance. So we're good. We don't have to worry about it. Or, oh, yeah, I know we have to worry about it. But we just have to get out our MVP, like, once we get out our MVP, then we're going to have all this time. I'm sure a lot of you have heard that before. Like, we'll have time in the future, but that never comes. So I noticed there's this big disconnect. Everybody knows it's a problem. Everyone knows here's the stories, but nobody's taking action on it. So why is there that disconnect? At the same time, doing all the security research, I was also getting ready to go on my next putt trip. So every year I go to Colorado, and our friends and I get together and we hike seven miles out into the middle of nowhere on top of a mountain with no cell phone, no Wi-Fi, hike through Avalanche zones to completely disconnect. And as I was preparing for this, I was also talking to people about it, and everyone didn't get it. Like, why would you do that? Like, why would you spend your time off and want to put all that effort in doing something that is so dangerous? You know, there's so many risks. You could have a blizzard. You could get lost. There could be an Avalanche. But to me, it was in my heart. I knew why. Like, it wasn't a question. I knew why I would want to do that. It's worth it. The journey is worth it. It's a beautiful atmosphere. And there are risks, but you just handle it. You know, you do your training. You get your gear list. It's just kind of built into the process where you're kind of always thinking about it, but never realizing how much you are thinking about it. It's just part of the process. So then I kind of had this aha moment. They're kind of similar, right? There's a lot of risks in security. There's a lot of risks in back country. Basically you need to figure out how you can best protect yourself. But what was the difference between me handling our security research and this Avalanche safety research, and it was the passion? So here, again, risks. 
Anything out of all of the research I've done on security or on Avalanche training, every tip and trick and recommendation, when you really look down to what they're suggesting that you do, it's just understanding your risk. Understand what is the probability that this is going to happen? What are the consequences if it does happen? And then how can I minimize my vulnerability to that? And how can I limit my exposure? So if it's so clear that all we need to do is look at our risks, always be assessing our risks, and figuring out how we can limit our vulnerabilities and exposure, why is this a $350 billion industry? And I think this is the problem here. So it says one cannot be prepared for something while secretly believing it will not happen. And I think from going back to all the talks that I had done, it was one kind of theme, one common theme. It won't happen to me. It's not going to happen to me. It sucks for the people it does happen to. I hope they're doing something to protect themselves, but it's not going to happen to me. Also, when I was doing those talks, I realized some of these companies I'm actually a user for. I use their product, and I hear, oh, do you do security training? Nah, we don't do it. And then I realize I am trusting in these companies just as the users at my company trust in me. And I want to make sure that they're preparing for me just as I'm preparing for our users. So to help try to get through this, it won't happen to me. Mentality, here are some stats. 43% of cyber tax target small business. So I think a lot of companies think, oh, they're only after the big dogs. They're only after enterprises. But no, 43% of attacks attack small businesses. You might think, OK, still won't happen to me. There's a ton of small businesses. But out of holds, small and medium-sized business, 55% reported that they had a cyber attack, and 50% reported they had a data breach. Again, that's just the percentage that reported it. A lot of companies will go an entire year without even knowing they were attacked. So it's now obvious there is a good chance you are going to get a breach or cyber attack. But then oftentimes you hear, well, how bad can that be? So if this is happening to everyone, they get through it, we'll be fine. But 60% of small companies that suffer a cyber attack are out of business within six months. This is what my brain felt like when I heard that stat. We work so hard to try to get our companies to flourish and thrive. And yet 60% of small companies that suffer a cyber attack are out of business in the six months, and 55% report a cyber attack. So often they'll say, OK, we'll buy a product. Right, we get it, we'll buy a product, we'll throw money at it, that'll fix it. 48% show root causes from a negligent employee or contractor. And 41% show root cause from a third party. So again, even if you throw as much money at all the products that you want, how are you going to get control of your employees and third parties? 83% of companies have confirmed a data breach, leveraged from a weak, default, or stolen password. So I already had to change the title once. I'm changing it again now to change your passwords and enable two-factor authentication. If there's one thing I can get out of this talk, because I hope all of you have a secure password and two-factor enabled. So also 63% of businesses don't have a fully mature method to track and control sensitive data. So most of us are going to get hacked. 
We know it's a problem, yet the majority of us don't have a system in place. So let's hit the road and see what we can do. I'm going to talk about three things, how to get everyone involved, mapping your sensitive data, and securing your software development life cycle. All right, so get everyone involved. So back to those conversations that I was having. And again, so many people were saying, I don't have to be involved. We have a security team. They handle it. Then you are out in back country, even if you have the best expert with you guiding your group. If you have one person that's having an off day, not paying attention, if they make one wrong turn, they can trigger an avalanche and put your entire group in danger. If they're not prepared to know how to use the tools that they have, how to use their beacon, their transceiver, how to, the skills it takes to locate a person and probe them and where to dig them out. Again, you're putting your trust in them that they're here to also protect you. So again, it doesn't matter if you are an expert, if you have an expert, if you have one person in that group that's not prepared, it can be really costly. So one issue I found is you have to talk to leadership. So how do you get everyone involved? Again, everybody's busy. Everybody now, everyone's wearing multiple hats, there's deadlines, you have to get your project done, you don't want to get in trouble. So you just need to do what you need to do in order to get your work done. And if you have time later on to learn some security things, cool, but you need to focus on what you need to focus on. I noticed too that if leadership is not on board with this, they're not going to understand that they're the ones that need to lead by example and do what they can do to make sure that they're protecting the company. And then also understanding and budgeting time. If something didn't hit its deadline, understanding was it because they were just trying to be secure and be careful and make sure the product was safe. So how do you get leadership to buy in? So again, show them the stats. 60% of small companies suffer a cyber attack or that suffer a cyber attack or out of business in six months. So again, when you're getting pressure from leadership to focus on other things, just remind them that if they want to make money, 60% of small businesses are out of business. Also remind them of what the bottom line is. Regardless what your bottom line is, if you're there to make money, if you're there to help people or if you're there to help the planet, you're not going to be able to do those things unless you are fully bought in on the team. This is why we here. We are waking up and working our butts off in order to do one of these things or maybe all three. So again, when you're budgeting time and getting efforts, make sure that they understand that if you want to make money, I think from the small and medium-sized businesses, it was almost 100K cost for each breach that they had. So again, putting some time up front can save the company lots of money. And again, if you're here to do good, I just saw in the news there's this awesome new app that made me really excited. I can't remember what it's called, but it's basically a panic button for those that are in fear of getting attacked last minute by immigration. And it's this really great concept. You have your contacts and a message to each contact where if something happened, you can hit the button and it will send messages to those contacts. 
I was so excited to hear about this project. I thought it was great. I went to their site and I noticed they did not have the certificate, the SSL certificate up in the browser. I broke my heart. And on the form or on the page, there was a form where you put in your phone number. And so my, you know, that's what I'm thinking about now is security is if you're putting in your phone number, again, and you can identify someone by their phone number and providing all of this very sensitive and secure information. And if it's not at a trustworthy company, you can end up doing way more harm by if that information got breached, then it would have done good. Luckily, I went in and I checked the form where they got the phone number and they had the encryption at the form level. But again, if you were unknowing and you went to the site, it would be really easy to have a fissure where they made a site replicate this other site and you could put in your phone number and they would know who's at fear of putting into the situation. So again, make sure you can continuously kind of give this message back to the company of why are we here? What are we passionate about? What are we protecting? All right. So how do we get everyone involved? So make sure that everyone at the company is included in this program. So that includes employees. It also includes contractors. Everyone with access to sensitive data and code. One of the great things about the Rails community is mentorship is really big here. So if you have a mentor or an advisor that can see your code or can see your sensitive data, make sure that they are also on board. So every single person, anyone that has access to any part of your information. What? Make sure that they understand what is considered sensitive. I think most people assume we know, okay, a social security number is sensitive information. But talking to the full team, it was surprising to see how many people didn't realize a name. That's sensitive. An email is sensitive. Even just knowing that you're a part of an organization, if you're signed up for an application, could again, if someone got it in the wrong hands, could be harmful. Why? So protect yourself, users, and the company. I read a blog on the, it was onboarding for startup security and it broke down what it really meant when you signed your documents. And included, so these companies do get breached. And what happens when you get breached? You could be sued. They could take an image of your computer. Knowing what information that they're going to have. I'm sure a few of you maybe sometimes have once or twice done a personal thing on your professional laptop. Maybe you checked your email or your bank statement. Understanding that when you're signing those docs, what you're putting yourself up for. And again, your users and your company. When? Make sure that everybody knows when to say something. I think a lot of people can be afraid by security. So maybe they did click a link that they shouldn't have clicked and they think, oh, should I, I probably should tell someone. But let them know when they should step up and say something. And then where? Have a repository for all your policies and make sure that all of your employees can have access to that and easily pull it up if they ever have a question. All right. So now how? Password managers. Who here uses a password manager? Nice. Oh my gosh, this is so great. So this is most of the room is using a password manager. 
I think also we need to step a little bit outside our bubble. It's so great that we all are using password managers. But as I've been doing these talks and really talking to everyone, most people don't actually know what a password manager is yet. So a password manager is just a way that you memorize one password and it'll generate random passwords for all your logins. So that way you can have different logins for every system. You can go to, I think it's www.haveibeenponed.com and you can put in your email and it'll show you if your email and password has already been hacked. And I have a guess that most people have. Two-factor authentication. This is another common thing that I heard when talking to people. Everybody knows they should use two-factor authentication. But then you hear, oh, but it's so annoying. I have to get up and get my phone and oh, I don't have time for that. But again, it's important. So is it worth getting up to get your phone to protect your users? Secure your devices. So I live in San Francisco and it cracks me up. So I'm often, I'll work in coffee shops or go to meetups. And it blows my mind how many times I see someone with their laptop just wide open and they'll just walk away. And it'll be open for more than five minutes without a password screen popped up. Again, most people, you got stickers on your computer. You're wearing shirts. They know who you are. So please put a password lock on your computer. Again, these are not new topics or new subjects or new insights. But how many people are actually practicing that? When in doubt, delete. So if you have spreadsheets of user information or anything on your computer, you don't know if your computer is going to get stolen. So keep as little information on your computer as possible. And careful what you email. I think we work so hard to protect our application and our databases, but then we'll blink out and we'll send an email with user information to it to one of our coworkers. And so also make sure that your full team understands that, you know, emailing is not safe. So make sure that you have a way to set up where you can pass information through, you know, encryption using a GPG key or up on box, but definitely don't email. And also I've heard so many companies that use Google Docs for everything. And also I see they put user information in their Google Docs. So be careful. A good developer is a secure developer, Kristen Nelson. So this is a quote by me. So again, when I say that I work on security, everyone assumes I'm a security engineer. But really, this isn't just a task that I should worry about. Like every developer should be a secure developer. Just like this Rails community, right? Like we all want to write clean code. We want it to be readable. We want, you know, well-tested. Why do we not put an emphasis on something that is so costly to our programming? So relating it back to DHHU's talk, he talked a lot about how you have to have roots. If you don't have roots, you're not going to understand why you're doing what you're doing. And if you don't understand why you're doing what you're doing, you're not going to care. You're just going to keep doing the motions, be miserable. And this applies to security as well. So again, there's so many chores that you could have to do and they could seem annoying. But if you really figure out why you're doing it and what the fundamentals are, I think it'll help make it less painful. So the fundamentals are CIA. 
So, confidentiality: making sure only those who should be able to see the information can see it. Integrity: making sure that that information is what it should be. So again, are people logging in and changing information? Are they mimicking? Are they trying to duplicate a certificate? How do you make sure that this is what it should be? And then availability. I think we see this a lot with the DDoS attacks: if someone needs information, can they trust that it's going to be there? Then, OWASP. How many people here have read through all the OWASP docs? Okay, a few hands. So this is probably the first thing that I would recommend for developers: go to the OWASP site. It's the Open Web Application Security Project. And it is an open source project with just a ton of awesome tools. They have a fake app set up that you can test and play around with. They have the OWASP Top 10. They just came out with a new release and they're looking for feedback on it, and they're going to do the final release later this summer. But they have these awesome resources. So these are the top critical vulnerabilities that are most likely going to hit your application. So if you can focus on these top 10, you're going to cover the majority of the vulnerabilities that you're going to be put at risk by. Again, it's not going to be 100%, but this is a great place to start. Then encryption types and hashing algorithms, understanding why they're important. Again, I've seen apps where they still hash passwords with SHA-1. So again, you don't have to know the full details of all the different hashing algorithms, but know what you can trust about them and what you need to know about them. All right, next. Mapping your sensitive data. So when you are going through an avalanche zone, it is critical to 100% make sure, before you go into the zone, that you know exactly where your danger zones are. Because again, when you're out there, it's very hard to see, okay, if I step here, I'm safe, if I step here, I'm not safe. So you need to have it well mapped out and know, okay, if I walk through this area, I need to take extra caution. You can do things where you send one person through at a time to limit your exposure. You can talk quieter, you don't shout. There are all these kinds of tactics to limit the chance that something's going to happen when you're in one of those zones. So how do we do that with our applications? If you start thinking about, okay, we have the sensitive data, or we collect all this data from our users, what do we need to keep safe? So again, personally identifiable information can be so much: again, a name, a phone number, their LinkedIn URL, an image, all those things could tie a user back to an application. If you have any protected health information, that has a whole other set of laws and constraints that you need to follow if you hold any of that information. Credit card information, social security numbers, messaging, communications, logs: all these things can have sensitive information, so you need to keep track of all those things. And if you think about the whole journey of that information, right, so if you have someone's email address, you know, they use it to log in. It comes to the application. It gets saved in the database. We go out to SendGrid or MailChimp. We send them emails. We have, like, a third party tool that tracks analytics on that user. Maybe there are spreadsheets, or again, so many places that this information could go.
Adding in third parties, knowing what information gets sent to what place and who has access to all that information. So again, you can be aware of we need to be safe in this place, or if someone leaves the company, you can know exactly where you need to go and check to make sure that they no longer have access to that information. So 41% show root cause of data breach from a third party mistake. And I think this is one of the things that was really shocking to me too. I think we put so much trust in other third companies. Again, we just assume that they've done their diligence. It will actually be safer. Instead of us having to do our work, we'll make sure that they go and do their work, or we just assume that they went and did their work. But again, a huge percentage of data breaches happen because of your use with third parties. So before you use a third party, make sure you do a security audit on them. What are their security policies? Who has access to their information? What information are you giving them? Have they done penetration testing? Do they have vulnerabilities? Have they had any recent attacks? You can get there. There's a SOC2 report, and it'll show all of the security measures that they've been through. So every time that you're sending any of your information from your platform to another company, you need to make sure that they are trusted and legit. And again, make sure that that is ongoing too, that maybe they were secure, but then there was a vulnerability. So you need to keep that line clear. Also I think another big thing with third parties is you assume the defaults are secure. So I know there's all these different tools out there that you can plug into security. And I know one where they were tracking exceptions. But the default was, so when it tracks an exception, it grabs the prorams of the request body and it captures that in time so that it can help you determine what caused the exception. There was no filters. They had all these filter options, but the filters were not default. So again, what information is getting trapped in that prorams? Raw passwords, social security numbers. So make sure that if you're trying to do good by adding a third party, make sure you read the docs and make sure you're setting it up in a secure way. Also if you're the reverse, if you're a product and you're offering a service, if you have security measures, a lot of times you'll go to the site and be like, oh, we offer authentication and all these things. If your users have to take a step to enable that, make sure it is like read and clear so that they know to set it up. The simplest things are often the truest. So again, when you're doing analytics, do you need to send all of the user data to your analytics tool? Can you completely anonymize it? Again, the less that you send, the less chance there's going to be that it's going to get compromised. All right. And last, securing your SDLC. How many people here have heard of the SDLC before today? Okay, about half of you. So the SDLC stands for the software development life cycle. And again, even if you didn't realize what the SDLC was, you're probably doing it. And again, it's just the fundamentals of how do you get a project from the very conception of an idea to deployment? So you start by spec-ing, right? What are we building? What's required? Why are we building this? Really think of the high level, what needs to get done? What do we need to think about? Do we have privacy laws that we need to withhold? 
What are the terms and conditions? What have we promised our users? Are there any ethical and moral requirements? Do we need to make sure it's encrypted? How available does it need to be? Features. So again, when you're budgeting time and building out your product, think about what features we need to include, can include, should include to make sure that our users are as safe as possible. So, user privacy settings. What do they want to choose to be public and to be private? Strong password requirements. Again, if you make your password requirements, I mean this would be extreme, but 30, basically 30 characters in length, people are going to start having to use a password manager just because it's unreasonable to try to come up with one that long. Do you offer two-factor authentication? So again, we expect the platforms that we use to have two-factor authentication. Are you yourself providing that as well? Email authentication. So again, did you set up your SPF and DKIM records on your emails? Secure sensitive data deletion. So what does it look like when you delete a user? Is it really getting rid of all their information from all your tools? A masked staging environment: a place you can test your product to make sure it's not going to have vulnerabilities, while at the same time not putting that information out in another place to get breached. Anonymized analytics. So once you get your main features kind of specced out, the next part is design. So this is where you go and really dig into what these features are going to look like. So in avalanche safety, avalanches happen because every time it snows there's a new layer of snow, and there's always a weakest link. And once it gets triggered, that weakest link layer is what lets go, and that's where the slide happens. So again, what are your weakest links in these features? Is it your input? Is it your authentication? How likely do you think these things are going to happen, and how consequential will it be if it does happen? And then also, once you figure out all of the risks that you can think of, really, like, rank them. What are the ones that are the most likely to happen, the worst that's going to happen, and are there measures that you can take to mitigate those risks? And if not, maybe the feature is not worth it. Again, you have to think, is this feature going to add to the product, or in the end do more harm? Peer code review. So one thing that I've read over and over and over again is, even with all of these tools, one of the most effective ways of catching a security breach is through peer code review. We are human. We're tired. We're overworked. There's a lot to think about. So having an extra set of eyes can really make a difference. So I recommend making a security code review checklist and just checking these things. Is it authenticated? Is the authorization set up correctly? Are we encrypting sensitive data? How is the error handling? Again, if we give errors that could give hackers clues into how our database is set up, it could lead them into harm's way. Is there any add-on configuration? So again, if you're pulling in a library or you're pulling in a third party, double-check to make sure that it's a secure source and that you have the configuration set up in a secure way. And complex code. Give an extra look to complex code because, again, that's usually where the breaches can happen. Static analysis. So again, there should be two types of code review.
The manual human code review, and then static analysis. And there's a ton of programs out there. It doesn't matter to me which program you use, as long as it works for you. But these go through and catch a ton of things. So Brakeman is just phenomenal. I'm so glad; thank you for doing all the work that they do. But everybody should have at least Brakeman, if not all of these, in their code practice. So again, you can set up, if you have CircleCI, you can set it up where every time it runs, it does these checks. I'm going to check for top vulnerabilities, again, those OWASP top 10s. You can check all of your gem dependencies. There's bundler-audit, which makes sure that you don't have any dependencies in your Gemfile that have vulnerabilities. So, manual testing. This is one of my favorite GIFs. If you can't see it, he's shipping his code to production and then QA, Gumby, shuts it down. But again, if you spent all that time writing your code, building a feature, test it. Also try to break it. Make sure, too, that it's not just you. Have some coworkers going in and purposely putting in incorrect input and seeing what happens. Our CEO is famously the best at dogfooding our own product. I don't know how she finds all of the bugs that she finds, but it's usually in a demo. But make sure, again, you are using your own products. Set up a secure staging environment. Again, you want to make sure that what you're testing is going to be true to what's going to happen in production. And test on different account types. So again, it might work for you, but maybe you have a different authentication level than what your users have. So when you're going through and testing, make sure you log in as a user, log in as an admin, log in as all the different roles, and make sure that only the things that should be happening are available to that access level. Dynamic analysis. So this is where it will actually go in and try to hack into your code. Qualys has this great SSL test, and you can just type in your website and it will test your certs. So again, I'd set up a weekly calendar reminder to, every week, put your website into Qualys and just make sure your certificate is set up. Tinfoil and Burp Suite and OWASP ZAP. Again, there are a ton of products out there and they range in price point. But I would figure out a way of not just doing the static analysis, but dynamic analysis too. So, deploy. Be on high alert. So once you've deployed your code, usually you just want to celebrate, go have a beer, you did it, you hit your deadline, but you're not quite done yet. So again, as soon as you push your code, go to the logs, figure out what's going to happen. Have a way to revert the code if it crashes, right? Check your logs, check your page load times, check HTTP errors. What does your database performance look like? Are there any weird database queries? Logging. So again, make sure that you're familiar with logging. I know when I started, I knew it existed, but I just kind of thought it was there just in case, for emergencies, and didn't really want to go into it. But again, really get familiar with it. Now, the more practice that you have, the better you're going to get. So just like with avalanche training and being in the back country, every trip that I take, I learn more and more things. And every time something then happens, I'm quicker at being able to respond. So get friendly with your logs now. There are also a ton of different products out there that can help you filter out noise.
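While you're getting friendly with the logs, it's also worth making sure sensitive fields never land in them in the first place. Rails' built-in parameter filtering handles a lot of that; a minimal sketch, where the field names are just examples of what your own app might need:

```ruby
# config/initializers/filter_parameter_logging.rb
# Any request parameter whose name matches one of these entries shows
# up in the logs as [FILTERED] instead of its real value.
Rails.application.config.filter_parameters += [
  :password, :ssn, :social_security_number, :token, :credit_card
]
```

Third-party exception trackers and log shippers usually have their own sanitization settings on top of this, and as mentioned earlier, those defaults aren't always turned on, so check them too.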
You can consolidate it all into one centralized location. You can have it where it's structured. Lograge will condense it from multiple lines down into one line. So take the time now to clean up your logs, get them set up so that you're comfortable with them, so that if something did happen, you'd be able to tell and you'd be comfortable. Monitoring and alerts. So again, make sure that you know what the norm is so that you can see what those spikes are. And then set up the alerts for how critical it is. Is it something where, if it happens once, you need to be alerted, or is it something where it has to happen many, many times over five minutes before you need to get alerted? Also know how severe it is. Should we wake up the engineering team, or is it less urgent? So again, this will be kind of constant refactoring, if you will, of always kind of updating what those alerts will look like for you. So, uncertainty is the only certainty there is, and knowing how to live with insecurity is the only security. So uncertainty is definitely here now. It's definitely not going away. It's not helpful to just be afraid of it. I mean, it's really the most certain thing there is. And knowing how to live with insecurity is the only secure way. So again, we just need to build these things into our core. We need to be passionate about it. Why are we here? Why do we care if our systems are secure or not? And figure out how to update our daily processes. So again, I am Kristen Nelson. I'm working at Glassbreakers. Glassbreakers is hiring. So if you're interested, it's an awesome organization. Come find me. And I hope that after today's talk, we have more user protection advocates in the crowd. Let's spread the movement. All right. Thank you. Thank you.
It’s what everyone is talking about: cyber security, hacking and the safety of our data. Many of us are anxiously asking what can do we do? We can implement security best practices to protect our user’s personal identifiable information from harm. We each have the power and duty to be a force for good. Security is a moving target and a full team effort, so whether you are a beginner or senior level Rails developer, this talk will cover important measures and resources to make sure your Rails app is best secured.
10.5446/31298 (DOI)
Thanks for the intro, Roy. Good morning, everybody. Thank you all for being here. So I have a question to start this talk. How many of you made a New Year's resolution this year? Raise your hand. Alright, that's about what I'd expect. Keep them up for just a second. So actually, you can put them down. I'll have you put it back up in just a minute. Except you, Sam. Keep yours up. So 40% of Americans on average either always or usually make a New Year's resolution. Another 17% do so every once in a while. Okay, so hands back up. If you made a New Year's resolution and you're still on track with your resolution, keep your hand up. Otherwise, put it down. So what percentage of people do you think that make resolutions actually follow through with them? Shout out some answers. The answer is eight. 8% of people who make a New Year's resolution managed to follow through on that New Year's resolution and do what they set out to do. So if you had to put your hands down just now, you're far from alone. That's normal. To give you some context, your odds are about the same as taking a deck of cards, shuffling it and pulling an ace off the top. Those are your odds of hitting your New Year's resolution. You have a 92% chance of failure when you set a New Year's resolution. So let's talk about what we can do to shift those odds in our favor. But first, a disclaimer. I want everybody in this room to know that you are a beautiful person just as you are. I'm going to talk a lot about health and fitness in this talk. I'm going to talk about those things because that's my story. That's how I learned all of this stuff. But I'm not telling you that you need to go out and lose weight or get fit or do anything like that. That is a very personal choice that only you can make for yourself and I'll get into why that is here in a minute. But these principles that I'm going to tell you about, despite the fact that I'm talking a lot about health and fitness, apply to anything. You can take what I'm going to tell you about today and apply these principles to anything that you might want to do in your life. Secondly, if you do want to lose weight, if you do want to get fit, I am not a doctor. So don't take anything I'm about to tell you as the gospel truth. Don't do it because Nick said to do it. When you're laying on the side of the road of the torn Achilles, I don't want to hear anybody say, but Nick said to go run. So that said, let me tell you a little bit about my story. I'm not sure how visible it is on the slide, but that's an asthma inhaler in the background of that title slide. I had asthma as a kid, really bad. My mom, when I was talking to her about this talk, she told me the story of signing me up for soccer as a kid. I went to soccer and did the practice. At the end of practice, when she got there, I was doubled over, breathing as hard as I could and beat red. I could not breathe. She handed me my asthma inhaler and slowly but surely my breathing returned to normal. Nowadays, if kids have asthma, there's all sorts of things they can do, all sorts of preventive treatments. Back then, not so much. Asthma treatment consisted of, here's your rescue inhaler. If you can't breathe, take two puffs. If that doesn't work, go to the emergency room. Good luck to you. And so because of that, I was a very sedentary kid. I sat out of PE a lot because of asthma flare ups. And I was picked last for every team that ever was for anything, that sort of thing. But I saw the appetite of a growing boy. 
So I ate pretty much whatever I wanted, right? And per BMI, which I know is not a perfect measurement, but per BMI, I've been obese my entire life. This is me a couple of summers ago. My son was four and my daughter had just turned one. I don't know how many of you have multiple kids, but the adjustment from one kid to two is not a doubling of effort. It's exponential. And that transition was particularly hard for me and my family, trying to balance a very demanding job and raising two kids and being a good husband. And so I basically ate my way through it. I stress ate all the time. And I had very little time for exercise. My weight topped out somewhere around 230. And that's about what I am in that picture. I was super out of shape. I struggled to keep up with my son when we were running around kicking the soccer ball around even. But I really, despite all that, wasn't all that interested in losing weight or getting fit. I had tried and failed so many times in my life, but I just basically bought into the idea that I was a big person and there wasn't anything that I could do to change my physical fitness or my weight. Every time I had tried it, I'd failed. But all of my previous attempts had followed a similar pattern. I think this will be familiar to everybody in here. Ramit Sethi calls this the cycle of non-finishers. When you decide to do something new, you get super motivated. You're super excited about it. You're going to do this thing. You're finally going to lose weight. And so you do the first thing that pops to mind. You start doing stuff. It doesn't matter what. You run out to the gym and you hop on the treadmill for 30 minutes because that's the only piece of equipment in the gym that you recognize. Maybe later when you get brave, you'll try those weight machines out. But for today, man, the treadmill's enough. And so over time, going to the gym over and over, hitting the treadmill for 30 minutes, not seeing a lot of results, you start to realize that maybe this is harder than you thought it was. Maybe it's going to take more work than you thought it was going to take. And so you start to lose your motivation. Three trips a week to the gym turns into two, turns into one, turns into zero, and you've given up. You know, they say it can't hurt to try, but this is proof that sometimes, sometimes it can. Because if you do this over and over, you follow this cycle repeatedly, at some point you start to believe lies about yourself. You start to believe that you're not the kind of person that finishes things. You start to believe that you don't have any discipline. You start to believe that you're lazy. And none of those things are actually true. You've just fallen into the motivation trap. Turns out it's really difficult to get anything done if you're relying on motivation or willpower to get it done. We know this because of this man, Dr. Roy Baumeister. I was first introduced to him in Charles Duhigg's book, The Power of Habit, which I'll talk more about in a minute. But in 1998, Dr. Baumeister and his team did an experiment at Case Western Reserve University that's gone on to become our modern understanding of willpower. What they did is this. They recruited a group of students to come in ostensibly for a taste perception exercise. They told these students, okay, here's your scheduled time to arrive. I want you to skip the meal immediately prior to your scheduled time and make sure that you've fasted for at least three hours ahead of time. So these people were hungry.
And to torture them, Baumeister and his team baked a batch of chocolate chip cookies in the experiment room immediately prior to their subjects' arrival. Now imagine how these chocolate chip cookies would smell to you if you'd skipped the meal immediately prior to this and you were starving. They must have smelled amazing. So the researcher greeted them, sat them down at a table. And on this table, on one side, were the chocolate chip cookies that had just been baked. Still a little bit melty. On the other side, everybody's favorite, a bowl of radishes. Now the researchers explained that cookies and radishes had been deliberately chosen because they were opposite ends of the flavor spectrum. Chocolate chip cookies are sweet and salty. Radishes are bitter and umami. And they assigned each person in the experiment either to the cookie cohort or the radish cohort. Now, if you had the good fortune to be assigned to the cookie cohort, your only work here was to eat three chocolate chip cookies. If you were assigned to the radish cohort, your job was to eat three radishes. And the researchers stressed that it was imperative for the integrity of the experiment that you only eat your assigned food. And then they left the room. And so some of the people from the radish cohort would pick up a chocolate chip cookie, look at it so longingly, and they would smell it. And then they would put it down with great resignation. They would eat their radish. Nobody actually ate the wrong food, which is pretty remarkable in my mind. But that wasn't the crux of the experiment. After they ate, the researcher returned and told them that they needed to wait 15 or 20 minutes for the flavor profile in their mouth to fade before they could move on to the next phase of the experiment. And oh, by the way, while you're waiting, would you mind helping us with another experiment? We're working on problem-solving approaches, seeing how different people solve problems. And the people who are in the lab, they're doing the same thing. And they put a piece of paper with this diagram in front of the research subject. They actually gave them a stack of these, as many as they wanted. And their task was to trace every line in this diagram without repeating any segments or lifting their pencil. They told them to try as long as they wanted to. They gave them a bell. So if you get bored of this, just ring the bell. We'll come back in here. No big deal. But try as long as you want to. And that's what the researchers on Dr. Baumeister's team wanted to test. They wanted to see if there was any difference in persistence between the people assigned to the cookie cohort and the radish cohort. So here's what they found. The people in the radish cohort made an average of 19.4 attempts at working their way through this problem, and they spent an average of 8 minutes and 21 seconds of their time in the lab on it. That's an impossible problem. That's pretty good persistence. The cookie cohort, on the other hand, did 34.3 attempts on average and spent 18 minutes and 54 seconds trying to work their way through the problem. Now, that number is actually skewed on the low side because the researchers set a time limit of 30 minutes. They wouldn't let anybody spend more than 30 minutes on this problem. And a number of people in the cookie cohort had that problem.
They were still working at 30 minutes when the researcher came in to interrupt them. Nobody in the radish cohort had that problem. And so the conclusion that Dr. Baumeister and his team drew from this is that willpower is a lot like a muscle. It's really easy to work your willpower to exhaustion, just like lifting a weight over and over again. Eventually, your muscle is going to fatigue and give out. And so if you spend all day at work keeping yourself on task, the odds that you'll have willpower left at night to tackle something significant in your life are pretty slim. This was my first big realization. Your willpower will fail you every single time. So you have to plan accordingly. This is what I mean by the motivation trap. When you run out and charge headlong into something, you're eventually going to hit the trough where you run out of motivation and you can't keep doing what you were trying to do. You can't grit your way through big accomplishments in life. It just won't work. But if you look at the cycle of non-finishers and you look at those first two items, there's a lot of energy and motivation present in those two steps. We don't want to waste that. So how can we channel that energy more productively? What should we do with that motivation? That's where Dr. BJ Fogg comes in. BJ Fogg has a term for this. BJ is a researcher at Stanford and the founder of the Stanford Persuasive Technology Lab. He's done a lot of research around human motivation, and he coined the term motivation wave to describe this initial rush of energy and motivation that accompanies every time we decide to do something new. So how does this work? Let's say that you want to learn to surf. You're pumped. You're going to do it. You're going to hang ten just like this guy. You could do one of two things. You could, on one hand, head down to your local surf shop, grab the first surfboard that catches your eye, grab a wetsuit, paddle out on a wave and start trying to surf. If you've ever learned to surf, you'll recognize how foolish that would be. On the other hand, you can find a good instructor. Book some lessons. Rent some equipment. Go check it out, see if you like it. Put lessons on a regular repetition so that you can get good at surfing. Now the first path is just the cycle of non-finishers exemplified. That's what we all do when we decide to charge headlong into something. The second path is what Dr. Fogg would encourage us to do, namely, do the important, hard-to-do meta work up front when you have energy and motivation to do it, so that following through on the behavior later is easy. Now when I say important, hard-to-do meta work, what do I mean by that? Well, there's a few things you should do. First of all, you should look yourself in the mirror. You should look yourself in the mirror and be really honest with yourself. Sorry, my slides are running out of control. You should look yourself in the mirror and be really honest with yourself. Thinking through your goal and making a plan to reach it is really hard work. Sorry, my notes are... there we go. Now I'm in the right place. Sorry, everybody. You should look yourself in the mirror and be really honest with yourself. It turns out it's really... when somebody asks you to do something, it's a lot easier to say yes than to say no, right? When somebody asks you... even if you don't want to do it, it's always easier to say yes than to say no. Well, it turns out it's easy to do this with yourself as well.
When you decide to do something, it's easier to tell yourself yes. I'm going to go charge out and do that thing than to say, I don't know that I want to do that. So lots of times we fail because we say yes to something out of obligation. Weight loss is a perfect example of this. Every one of us who has tried to lose weight at some point in their life has said yes to weight loss out of societal obligation versus actually desiring to do the hard work ourselves. When you say yes out of obligation, you're way less committed to doing it. And half-hearted commitments become very expensive when these failures start to pile up. So when you start something new, you have to admit to yourself upfront that it's going to get really hard. And you have to decide if it's really worth pursuing. If you say yes, then you can make a real commitment to it. And if it's not, just let it go. Don't feel guilty about it. So if you do say yes and you decide to make a commitment to it, what does that look like? You want to pick your goal and make a plan to get there. Thinking through your goal and making a plan to get there is really hard work. That's why you should do it when your motivation is high. Having a plan upfront keeps you from being the person that goes out and spends a thousand bucks on a surfboard and then lets it collect dust in their garage for years. The interesting thing about all this in my life is that I stumbled into it all sort of backwards. My journey to better health began here at Juice Land in Austin. It's a staple for juice and smoothies in Austin. My wife discovered green smoothies here. She would go once a week to get a green smoothie after her yoga class. And she'd been doing that for months, trying to convince me the whole time how wonderful they were and that I should try them. No way man. Something that green couldn't taste good. There's no way. She finally managed to talk me into it despite my fear of green leafy vegetables and really anything healthy. And I really liked it, much to my surprise. And so for Christmas 2015, my wife and I always decided to buy ourselves something together as our big Christmas gift. And for Christmas 2015, our big Christmas gift each other was a Vitamix. Now I had always been of the opinion that it would be silly to spend this much money on a blender. Who needs a $400 blender? Nobody. Turns out to have been one of the best investments I have ever made. I'll tell you why. In the course of looking for recipes to make with this giant monument of a blender, I ran across this website called simplegreensmovies.com. And in December of 2015, targeting all of us who had just gotten a fancy new blender for Christmas, they were running a 30-day green smoothie challenge. You signed up for this thing. They'd send you recipes. They'd send you shopping lists. And your only job was to make this smoothie and drink it once a day for 30 days. Well, that sounded easy enough for me, right? I knew I wasn't getting enough fruits and veggies in my diet, and I liked green smoothies a lot. Truly, I could do that. And so that was my first goal, drink a green smoothie for breakfast every day for one month. Now, I didn't know all of the things that I know. I hadn't done all of the research around habits. So I lucked into a really good goal here, because here we are 16 months after that initial challenge. And this is what I have for breakfast almost every day. It really is that color, and it really does taste amazing. So how did it stick? 
How did I luck into something that stuck for 16 months for me when I really hated healthy eating up to that point? Well, first of all, like I alluded to, the goal was very well-structured. It's super simple. All I had to do was make and drink a green smoothie once a day for 30 days. Total commitment, maybe five minutes. That's lesson number two today. Your initial goal should be so small that you would feel silly not doing it. The reason there's dental floss in the background here is that BJ Fogg has a program called Tiny Habits, where he coaches you through how to set goals and how to hit them. It's a week-long program, super short, and his number one recommended goal for that program has to do with oral hygiene. But it's not floss your teeth. It's floss one tooth. Because if you floss one tooth, I mean, it's very low commitment. Anybody can floss one tooth, but what's going to happen when you floss one tooth? I've already got the floss out. It's already wrapped around my fingers, so I might as well just floss. Right? And so setting that tiny goal lets you start building up a pattern of success. It lets your success begin to snowball, and it lets you build momentum that lets you hit other more difficult habits. But a good goal was only part of my success. What else was there about this 30-day green smoothie challenge that worked so well for me? Well, number two, they handed me a plan that made it super easy. In my email every week, I got a shopping list. All I had to do was go to the grocery store and buy what was on their list, in addition to the rest of my grocery shopping for the week. Every day, I looked at the recipe, grabbed a couple things out of the freezer, tossed them in the blender, turned it on, voila, green smoothie. No thought required on my part at all. So this is what I mean when I say do the work up front to make it easy for yourself later when you lose motivation. If I was opening the freezer every day, looking at what fruit I had in there, and making up a recipe, I probably wouldn't have been as successful as I was with this all laid out and easy to follow step by step. But it turns out there's another secret weapon here that really made this stick for me. This green smoothie plan had all of the ingredients of a habit. And it's a habit that's held for me for 16 months. So let's talk about why that is. And this is where we bring up Charles Duhigg. I mentioned his book, The Power of Habit, earlier. In it, Duhigg picks apart the biological and practical aspects of how to build habits. If you want to make big changes in your life, this is the book you want to go out and read. I'll start the same place he does, with biology. So this is a coronal section of the human brain. What that means is if somebody took my head and sliced it in half right here, this is what it would look like. Duhigg explains it like this. The brain is like an onion. And the outer layers of that onion are the layers that have been most recently added from an evolutionary perspective. This is where complex thought takes place. If you paint a painting or write some code, this is where it's happening, in the outer layers of your brain. Deeper in the center of the brain, we're getting into more primitive structures, things that regulate breathing, digestion, your heart rate, things like that. And surrounding the base of your brainstem on either side is a golf-ball-sized mass called the basal ganglia. Almost all animals' brains have it. It's a very primitive structure.
And one of the things that the basal ganglia does for us is recognize, store, and replay patterns. It does that automatically. We know this because of an experiment run by a group of scientists at MIT in the mid-1990s involving rats, mazes, and chocolate. So here's Duhigg's diagram of the experiment from The Power of Habit. They set up a T-shaped maze. They put the rat on one end of it behind the door. And on the other end of it, where it branched out, on one side there was chocolate. On the other side, there was nothing. Now, the chocolate was always in the same place in this maze. They never moved it from one side to the other. They always left it on the left leg from the rat's perspective. And they took these rats and they anesthetized them and they inserted implants in their brains that could measure their brain activity as they were running this maze and trying to navigate this maze. So when they first opened the door, there's a loud click. And they could see furious activity in the rat's brain. The rat would sniff and it would scratch the walls and it would try to climb up. And eventually, it would scamper its way down and it might wander right and look around a little bit. And finally, it would turn left and find the chocolate. After it had done it a few more times, the researchers started to see the activity in their brains decline. They weren't having to think as hard to work this maze. They were learning the pattern. A couple weeks later, after the rats had run this maze hundreds of times, they saw a really interesting pattern. As soon as the door clicked open, the rat would immediately run through the maze, turn left, straight to the chocolate. What they saw was an immediate spike in brain activity when the door clicked. Then nothing. The rat's brain essentially went to idle. And then another spike in activity at the end when the rat got to the chocolate. The only part of the brain that wasn't quiet during this was the basal ganglia. This recognition and automation of behavior is called behavior chunking. And it's the basis of what forms habits for us. So how many of you, when you sat down this morning to put your shoes on, had to consciously think about how to tie them? Nobody, right? It just happened. How many of you, when you brushed your teeth this morning, thought about putting toothpaste on your toothbrush? Like actually consciously made a decision of, oh, before I brush my teeth, I should really put some toothpaste on this brush. Nobody. You didn't even think about brushing your teeth. That was automated for you. You walked in the bathroom, picked up your toothbrush, and went to town without even thinking about it. It's automatic because your basal ganglia has chunked that behavior for you and it replays it automatically on cue. This is what Charles Duhigg refers to as the habit loop. For our friends, the rats, the habit loop looks like this. First, there's the cue. When the door clicks open, the rats associate that click with, oh, I'm going to get some chocolate. Let's get after this. And so it triggers their basal ganglia to kick in and replay this pattern. That's the routine. The rat runs down this maze and turns left. And the reason the rat runs down this maze and turns left is because at the end of that routine, there is a reward. That reward is what triggered the rat's brain to encode this process in the first place.
Because it knew that by consistently following this set of steps, it would be rewarded, and evolutionarily speaking, rats like food, they want to continue living. And so they're going to do this reliably every time that door clicks. Turns out our brains are doing this. They're doing this encoding for us all the time. We just don't realize it. Our brains are really lazy. They like to save thought. And the more things that our brains can shove down into the basal ganglia, the more of this critical thinking time our brains can free up for doing more important work. You know, like when you're tying your shoes or brushing your teeth, you're thinking about the day ahead. You might be thinking about a meeting that you have. In my case, I was fretting about a talk that I had to give. It frees your brain up for all of that stuff while automating the simple repetitive elements of your life. But that also means that your brain automates things like, oh, I'm really stressed out. What would help that is a double cheeseburger. The exact same mechanism. Significant portions of our lives are automated by habits that we don't recognize or understand. This is why this smoothie thing stuck so easily for me. Every morning when I saw the Vitamix sitting on the cabinet, I went, oh yeah, smoothie challenge. The decision to make one was already made. I mean, no discipline required. I looked at the day's recipe, got the stuff out of the freezer, threw it in the blender, drank my green smoothie, and I was rewarded every day by this explosion of flavor in my mouth that I really like. And that amazing explosion of flavor immediately reinforced that behavior. It taught my body that it wanted to do that, because when it did that, when it made a smoothie, it got something that tasted really good at the end. And over time, it learned that it made it feel really good as well. Gave it an energy boost. And so my body became accustomed to that. So, lesson number three is that deliberately building a habit is far more effective than just deciding to make a change. If you can build a change you want to make into a habit and get yourself to automate it, making the change doesn't require any willpower on your part. It just happens. If you want some hands-on training in doing this, like I mentioned, you should check out BJ Fogg's Tiny Habits program. Just Google Tiny Habits. It'll be result number one. He signs up a new cohort for the program literally every week. And he walks you through setting and achieving three very simple goals. If you want to dig deeper, definitely read The Power of Habit. This brings up another one of Charles Duhigg's key concepts, the idea of keystone habits. The idea is that some habits are so powerful that they can trigger other habits in your life. And green smoothies turned out to be one of those habits for me. I didn't know it. Duhigg says you're more likely to stumble on one of these keystone habits when you do something that's very scary for you. And for me, and I know this sounds kind of silly, but for me, eating healthy has always been scary. I don't like the way healthy things taste. I never have. I've always liked richer, fattier, cheesier foods. And so the idea of taking one of my meals every day and devoting it exclusively to something healthy was a little scary for me. And I stumbled onto a keystone habit. I started drinking these smoothies once, sometimes even twice a day. I had more energy.
I conquered my fear and I was inspired to do more to get fit. And so I set a long-term goal of losing 50 pounds and a short-term goal of regularly going to the gym, specifically to spend 30 minutes on the elliptical. The very specific reason that I picked the elliptical, I'll get to that in a second. But the thing about this is the green smoothies are self-reinforcing, right? I get this wonderful blast of flavor at the end. This mind-numbing torture device, not so much. So here's how I went about building the habit and making myself do this thing that I really didn't want to do on a regular basis. On Tuesday, Thursday, and Sunday, I changed into workout clothes before dinner. Then after I... Let's take this girl in here. And then after I got my kids to bed, I already had my gym clothes on. I was already committed. So I went to the gym. What about the reward? Well, you have to know yourself to be able to pick an effective reward. This is why I picked the elliptical. One of the things that I love most in life is time to read. And as you can imagine with two kids, I don't get a lot of time to read. Life is a little too crazy for me to sit down and read on a regular basis. But the elliptical happens to be a full-body workout that I can do while I'm still reading a book. And so I can convince myself that it was worth getting on the elliptical for 30 minutes for the sake of reading. If that doesn't work for you, you might try eating a piece of candy or even a cookie immediately after your workout. It doesn't even matter that it's counterproductive at first. As long as you're positively reinforcing that behavior that you want to encode in your life, your brain will encode it for you. And eventually you can wean yourself off of that extrinsic reward because your brain will start getting hooked on things like the dopamine boost that you get from working out. Or if it's, say, sitting down and concentrating on something for 30 minutes, your brain will get hooked on the feeling of success that comes from doing that. But in order to initially encode the habit, you may have to cheat a little. You may have to eat some chocolate or some cookies or something. It turns out, though, that sometimes it's hard to get a habit right. And my elliptical habit failed miserably. Because my schedule was inconsistent. I'd make excuses. I'd have a really hard Tuesday and I'd go, oh, I'll just, I'll just sit down and watch TV tonight. I can go to the gym tomorrow. I never went tomorrow, ever. Some weeks I only went to the gym twice. Some weeks once. Some weeks not at all. And I fell back into the same old patterns of beating myself up for not being able to make myself go to the gym. So how do you fix a failing habit? Well, habits are just simple systems. We're all software developers. We know what to do with systems. Sam Carpenter has done a lot of thinking about systems. He's an entrepreneur from Oregon and he's written a couple of books on systems thinking in both business and in your personal life. And the book that I read was this one, The Systems Mindset. It's about taking a systems mindset in your own life. Now, before you run out and buy this book, I want to warn you that it's a bit of an eccentric read. And there's some things in it that I definitely don't agree with. But there's one central idea that I want to bring up. And it's this idea that literally everything in life is made up of systems. What does that mean? The takeaway is that we spend our lives running around fighting fires in the form of results. 
For me, I'm not going to the gym three times a week. Come on, self, go to the gym three times a week. This shouldn't be that hard. When instead what we should be doing is realizing that that's a system. There's a whole set of steps and processes that leads up to us successfully going to the gym three times a week. And if you're not doing that, ignore the end result. Figure out where your system's breaking. Take a debugging mindset. This is a really powerful shift in thinking. Going far beyond practical implications. How many of you have heard of the idea of a blameless postmortem? So this is another example of systems thinking. When your system falls over or something breaks or you have a critical production failure, you set your team down not to establish blame, not to figure out who's going to get fired, but to figure out what was wrong with your system and what changes you can make to your overall process to prevent that thing from happening again. It's a very healthy, very encouraging process focused largely on learning. And if it's done right, everybody walks away feeling whole and nobody walks away feeling beaten up. The same principles apply when you start applying systems mindset to your own life. Not only am I focusing on the root causes of why I'm not going to the gym, I'm shifting blame from me, from me being lazy or undisciplined or unable to finish to my system. It's not that I have a personality flaw. It's that my system screwed up. No big deal. Let me fix the system. Things like procrastination, lack of discipline, become clues, not personal moral failings. And when what you're most looking to do is snowball success into other success, that's a really important thing. And so when I looked at my attempted elliptical habit, I found two problems. Number one, I had given myself too much of a choice to make and it was a choice that required me to exercise willpower at the end of the day, specifically when I was completely out of willpower. So obviously on a regular basis, I'm going to fail that choice. I'm tired from the day. It's going to be way easier to sit down and watch TV. I'm not going to go to the gym. So how could I take that choice away? Well, for me, the answer was a little extreme. I decided I was going to go work out every day. Uncompromising. So that way there wasn't a decision to make. It was just a thing that I did. It turns out that it's really easy. It's a lot easier to encode a habit when you do something on a very regular rhythm. Now, if that didn't work for me, another thing that I could have done is decided that I'm going to go work out Tuesday, Thursday, Sunday, non-negotiable. Because if you do something on the same day, same time, even if it's not on this, even if it's not every day, it's easier for you to encode that into a habit. The thing that prevents you from encoding into a habit is the habit that I had gotten into negotiating with myself and telling myself that I could always go tomorrow. So I went to the extreme end of the scale. I decided I was going to work out every day. But then there was another problem with that when I looked at it from a systems perspective. That was a really unrealistic commitment for me. I had 30 minutes a day to spare. I could get on the elliptical 30 minutes a day. What I couldn't do was drive 15 minutes to the gym, get on the elliptical for 30 minutes, and then drive 15 minutes home. For some reason, psychologically, the need to commit to an hour of time was a huge psychological barrier. 
But I could make the commitment to 30 minutes of time, especially 30 minutes of reading. So what did I do? I went out and I bought an elliptical for my house so that I didn't have to drive to the gym. I made this thing that I wanted to do even easier for myself. Now I realized that's sort of an extreme example. But that's why you have to look at yourself in the mirror and decide, is this really something I want to commit to? Because sometimes if you want to make big changes in your life, as you're debugging these systems, you're going to find out that you have to make some significant investments in making these things happen. So did it work? Yeah, pretty well. So I bought my elliptical on February the 6th. And that started a 73 day long streak of me getting 30 minutes of cardio a day. And as you can imagine, I lost a ton of weight in those 73 days. Turns out that Apple did something really smart when they implemented this calendar. This is just a glorified Seinfeld calendar. If you're not familiar with the idea of a Seinfeld calendar, it's that if you want to reach a goal, what you should do is put a calendar on the wall and put an X on that calendar for every day that you do the thing you want to do. And eventually you have a chain that you don't want to break. When you decide to work out every day to get 30 minutes of cardio every day, this becomes a Seinfeld calendar. And there were definitely days in that stretch that the only reason, the only reason I got on the elliptical is because I had a 20 day streak in front of me that I didn't want to break. April the 19th, I got a stomach bug. And then work got stressful after that. And I fell off the wagon for a little bit. But I got back on pretty quick. Building this success enabled me to tackle one of my biggest fears. Running. Now to understand what a big deal this is for me, you have to understand how much I hated running my whole life. I mean, really hated. I looked at people who enjoyed running. And in my mind, these people were aliens. I couldn't understand why anybody would want to do that. Why would you want to run when nothing was chasing you when you weren't in danger of being eaten? But my wife's a runner. She likes to run. I mean, not, not a regular runner, but she ran track in high school. And she's always enjoyed running. So this was a way for me to invest in spending time with my wife to learn how to be a runner. In elementary school, I would beg my mom for a note on the day that we had to do the mile run walk for the president's physical fitness challenge. I would beg her to write me a note excusing me from PE that day. So that I didn't have to do it. And so it's September 2016. I started couch to 5k. I kept my daily cardio routine, but I just subbed in the couch to 5k runs three days a week. In October, I hit the infamous week five day three, which if you've done couch to 5k, you probably know what I'm talking about. It's where your workout ramps up from two eight minute periods of running with a five minute walk break in between to a straight up 20 minute run. It's a pretty big jump. It's pretty intimidating. And I had a plan for failure. If I couldn't do it, then I was just going to repeat week five, no big deal. But I did it. I surprised myself. I didn't think that I could do it. And so when I did week five, day three of couch to 5k, I ran the first mile of my entire life. I literally never run a continuous mile until that point. As a 35 year old, I ran the first mile of my life. 
I stuck with couch to 5k until I could run a continuous but slow 5k. And then I signed up for a race. And I underestimated how motivating that was going to be for me. So I trained pretty hard for it. I followed an actual training plan. My goal was to get under 30 minutes. And I did. I finished in 28 minutes and 40 seconds way faster than I expected to. I beat my previous best by about three minutes. This last Sunday, I ran the cap 10k in Austin. It's one of the most famous races in Austin. 21,300 runners in this race. My previous fastest 10k was about an hour and two minutes. So I set an hour as my goal seemed reasonable. It seemed like it was going to be tough on a course this hilly with 21,000 other runners. I ended up finishing in 57 minutes and two seconds. I got the text message on my phone with my official time. And I teared up a little bit. It was one of the biggest achievements of my life to be able to run a 10k that fast and to look at the work that had taken me to that point. My runs are now one of my favorite parts of the week. And race days are some of my favorite days of the year. I get to go be around this amazing tribe of people who are all oriented around fitness and running. It's very reinforcing. So to tie it all up, let's talk about how to hit your big goals. First, take a long-term view. So I mentioned when I set into this whole journey, when I decided I was going to get fit, I set a goal of losing 50 pounds. But I didn't tie a date to that. Instead, it was just this nebulous thing out in the future that I wanted to get to. What I did tie dates and things to were things like getting to the gym three days a week, getting on elliptical seven days a week, working on my fitness that way. When you take a long-term view, it's smooths out the bumps in the road. It lets you find the joy in the process versus being fixated on sprinting to this goal all the time. Second, set goals along a way that let you snowball success. This is how I became a runner. I did the elliptical thing every day for 73 days. And then I did it again. Running was really scary for me. But being on the elliptical had taught my body to seek out endorphins. And so when I took away the extrinsic reward, that 30 minutes of reading, I found that the endorphins were enough for me. I was so hooked on the small endorphin boost from being on the elliptical that the big endorphin boost from the runner's high, oh man, that was immediately reinforcing for me. I no longer needed that extrinsic reward. The exercise itself was enough for me. Third, build habits and then optimize them with the system's mindset. This takes the blame off of yourself. It lets you focus on building the habits and optimizing them and not focusing on things that you might perceive as personal failures. It takes the responsibility off of you and lets you do what you already know how to do from your job. You already know how to approach and debug a system successfully. Apply that skill to your own life. You've worked too hard at building it. You want to get better at writing code? Build habits around reading and deliberate practice. You want to get better at concentrating? Build habits around periods of concentration and slowly build on them. Cal Newport's book, Deep Work, is about exactly that. It's exactly what he talks about in that book, is systematizing and building out patterns of deep thought in your life. So today, I weigh 180 pounds, 50 pounds off my peak weight. I'm at my lowest weight of my adult life and by far the best fitness. 
I drink one to two green smoothies a day. I get 30 minutes of cardio six to seven days a week. I've gone from hating running to running an average of 20 miles a week. And all this stuff is driven by systems and habits requiring very little willpower on my part. And if you take nothing else away today, that's what I want you to hear. Anybody can do this. You just have to understand that willpower won't get you there and learn how to hack your way around willpower, learn how to build systems and habits that will get you to your goals. You can do stuff like this too. I don't have a lot of willpower. I failed at this so many times in my life, in my life before I finally figured it out. And I know you can do it too. Thanks a lot.
Over the past year, I’ve spoken at several conferences, lost 30 pounds, and worked up to running my first 5K, all while leading an engineering team and spending significant time with my family. I also have less willpower than just about everyone I know. So how’d I accomplish those things? Let’s talk about how to build goals the right way so that you’ll be all but guaranteed to hit them. We’ll work through the process of creating systems and nurturing habits to turn your brain into your biggest ally, no matter what you want to accomplish!
10.5446/31300 (DOI)
All right, sorry for the delays. I want to start out today by doing a little exercise. The exercise is lifted from a book called Nonviolent Communication, and it's called Translating Have To into Choose To. The idea is that we all have things in our lives which we do, not because we inherently enjoy them, but just out of some kind of sense of obligation. We feel that we have to do them. And the idea of the exercise is to acknowledge that there's a choice there, and once we realize that it's a choice, figure out what we're going to do about it, and think about maybe how to re-make that decision. So you take all these things in your life that you don't really enjoy doing, you make a big list of them, and then you have to fill in one by one this sentence with each thing. So you say, I choose to X because I want Y. You figure out what's the reason that you make the choice to do that thing. And what you'll basically find is one of a few things will happen. Maybe the thing that you want is actually really important, and you'll find new meaning in the thing that you were doing. And it'll feel more like an active choice that you want to make. Maybe you'll say, well, the thing that I want wasn't really that important, so I don't really have to do it. Or you'll say, well, the thing that I want is important, but maybe this isn't quite the right way to get to that point. Or maybe there's just a different kind of broader set of strategies that I need to really get the thing that I want in some kind of better way. So today we're going to ask the question, why do we choose to write good code? And that's a little bit of a scary question, right? Now, what do you mean? We have to write good code, right? It's ingrained into us. We go to talks at conferences. We read books. We watch podcasts, or listen to podcasts, I guess. We watch stuff online. We read blog posts all about good code. We have to do it. That's just, you know, that's what we do, right? So I'm here to tell you, first of all, today, you don't have to write good code. You choose to write good code, hopefully. And part of today will be figuring out why we choose to write good code. That second line, because I want what? What do we fill that in with? Maybe we fill it in with, well, if I don't write good code by some kind of arbitrary standard, then I'll get fired. A sad reason, but that might be it. But ideally, we'd find something meaningful and important in terms of why we're choosing to write good code. So why do we care about writing good code? And because this is a talk about software quality, and that's really the ultimate goal here, we have to ask, is good code actually the same as quality software? So we'll get back to that. But first I want to talk about something somewhat, but not entirely, different. Dave Thomas, Bob Martin, others as well, but I mention them just because they were signatories on the original Agile Manifesto, have said that Agile is broken. It didn't accomplish the revolution that it was supposed to accomplish. Here's a quote from Bob Martin quoting Kent Beck, who was also there. He said that one of the goals of the Agile Manifesto was to heal the divide between development and business. There was a gap that had been created, a chasm between the two sides, and a lot of tension, and the goal was to relieve some of that tension and find ways to work synergistically and harmoniously. Reality check, I don't think it really worked. They don't think it did either.
So at least I have some people who were involved in Agile right from the top saying, not great situation. But what it became was basically some kind of business person, project manager, whatever it is, would take a course, get certified in something, and basically tell developers, okay, now you're all going to do Agile because that will lead to better software. It's something that developers do to keep business people happy if you could even do, in quotes, Agile. Usually the form that this takes is Scrum. I forget the number, 60, 80%, something like that, of organizations that claim to be doing Agile are using Scrum as their implementation. Fun fact, as far as I can tell, because I've read through the Scrum guide and I've read through the Agile Manifesto and compared the two, it addresses maybe 50% of what Agile is and it kind of misses out on all the values. So I'm not saying that Scrum is bad. I don't want to give that impression. I think Scrum has a lot of really good ideas. It's just not an implementation of Agile. It's a framework in which you can choose to be Agile or not. It makes it easier in some ways, but it's not actually Agility. We don't need more processes. Sometimes processes can be helpful, but they're a tool toward something more important. What I think that we need, and this is why I'm coming down kind of hard on Scrum, is we need more trust. Scrum focuses a lot on transparency. As far as I understand, transparency is basically we have two sides. Each side sets to the other. I need to know what you're doing at all times. I need you to be transparent about it so that when I see something going wrong, I can jump in and complain and micromanage you. Trust on the other hand is let's have a conversation. Let's understand that there are two sides to the situation here, and let's talk about the needs of each side. Each side will express their needs in a way that makes sense to the other side, and once we're confident that the other side has heard our needs and understands them, we can rely on the other side to think about our needs and account for them in their own decisions. That's what trust is to me, at least, and that's what allows us to build effective organizations. I think we can solve this problem. I don't think it's even too complex. It's difficult, but I don't think it's complex, and it starts with communication. If we learn to communicate between developers and business people in a way that makes sense to both sides, then we can start building the trust that we need to have highly effective organizations. We have an ambitious goal today. We're going to create a common language for developers and business people to be able to express their needs in a way that makes sense to both sides. The way that we're going to achieve it is by going right for the jugular, taking the thing that causes a lot of tension between the two sides, and trying to understand it again, figuring out why we write good code. A lot of times we push for good code in whatever way we define it, and good code takes time, it takes investment, and it's hard to explain that to the other side, and that causes a lot of tension. We're going to figure out why we write good code, but first let's figure out why do we write code. I really mean that. Why do we write code ever? It seems like a strange question, right? We're developers. We're not going to do that. That's our job, right? Well, newsflash, controversial statement, but I think it's certainly pretty clear to me, your job is not to write code. 
Your job is to solve problems. I want to give an anecdote from specifically another field that will help us look at things a little bit more objectively. There's a man once came to a plastic surgeon, and he said, I need you to do some surgery for me. My ears are very large, and I want you to make my ears smaller. Interesting. The surgeon decided to probe further. Okay, why do you want this surgery? What problem are you trying to solve? This guy said, well, it's not a problem from actually at this stage in my life at all. I get along fine, we're all adults, right? But when I was a kid, I got teased a lot. I got bullied a lot over the size of my ears. I don't want to pass on that problem to my kids. I'm about to start a family, and I don't want them to get my ears. So how about you fix my ears so that I won't pass it on? So he clearly still had a Lamarckian understanding of evolution. I was curious how many laughs that would get. And basically what the doctor did was say, hey, let's talk about modern genetics. Let's understand how these things actually work and why I can't solve your problem, and the solution that you suggested certainly doesn't help. If the doctor had said, okay, that's nice, let's schedule a surgery, we would call that malpractice. But it's funny because that's very clear to all of us, I think. But what happens when someone approaches a developer and says, hey, I want you to build a mobile app for me? And the developer says, okay, without even understanding the problem that we're trying to solve, without understanding whether this is the right solution, maybe they need a marketing director, maybe they need to just have an email campaign. See a lot of things. There's thousands of apps out there in the App Store that reflect this problem. They don't do anything for anybody. They don't matter. I have a hard time calling them high quality apps because they don't solve a problem. If you're not solving a problem, there's never a reason to write the code. Now this doesn't sound very businessy, so I'm going to give a little translation. Your job is to create value. And that's kind of a loaded term. It means a lot of things to different people. So I'm just going to give a working definition that we're going to use for the rest of the talk. Value creation in the context of this talk means one of three things or more of those three things. Generating revenue, lowering costs, or reducing risk. Those are the main things that matter in business. In the specific case of a nonprofit, instead of generating revenue, you can substitute doing whatever good the nonprofit is supposed to do. And on the whole, the same pattern still applies. Those are the three things that we mainly care about. And where do users come in? Well, generally the way that you do these three things is by meeting the needs of your users. So we're going to use meeting business needs and meeting user needs kind of interchangeably in the talk. Hopefully they go together when they don't is a whole discussion that I'm just not getting into today. So to recap, why write code to create value? That's it. It's the only reason we should ever be writing code. If that's the case, why do we care about writing good code? I think the answer is simple. It usually helps to create value. Does that mean that good code is the same as quality software? According to this definition, I don't think so. So here's the second of two controversial statements. I would define software quality as just the amount of value that it creates. 
So writing the code to create value, if it creates that value, it's fit for purpose. It is ergo high quality. If it doesn't meet the needs, then it can be really well written under the hood, but doesn't really help because it's not actually solving a problem. So rather than asking how can you write better code, what we should really be asking is how do we target our efforts in coding to maximize value creation? And how does good code fit within that whole broad picture? So I'm going to commit a major programming sin now, which is solving a problem by writing a framework. It's not JavaScript. It's just a human people framework. It's only three words. So hopefully it's not too bad. I think that there are three factors that if we think about these three, we can pretty much fit anything about software quality as we've defined it into one of these three factors or two or three out of them. I call them usefulness, sustainability, and accuracy. We're going to talk about what each of these means, but you'll notice there's an acronym there. I didn't choose these words over the acronym. Like that happens sometimes. You just want a cool acronym. It happened to be the acronym that came out, but if thinking about the USA helps you in terms of remembering the three words, all the better. All right. So let's start with usefulness. Each of these is going to have a question and target. The question in terms of usefulness is, does it solve a problem effectively? Pretty straightforward. But a lot of details there. The target here is our users. Again, using users and business needs a little bit interchangeably. The question is, is it a problem that affects users first of all? Sometimes we spend a lot of money to code up a solution that doesn't actually address a need. We have to make sure in whatever way we can that we verify that the problem actually exists and that it affects the users that we're going to be addressing. Then even if we've correctly identified a problem, it's important to think about, well, is this actually a solution to the problem or are we missing something? And it can't just solve a problem. It has to solve the problem in a way that works for the users. Because if it works but doesn't work for them, you haven't really done an awful lot. I want to cite a story from Tara Schroeder-Delafwente who gave a keynote at RubyConf last year, which I think absolutely illustrates this point really nicely. She was working with a user of one of their applications, just kind of seeing what they were experiencing. It was to upload a spreadsheet with a whole bunch of rows and anything that the application could figure out, great. If not, then you had to kind of fill it in. One of the columns was a column of dates. Most of us think, okay, date, date picker, great. In the case of this application, they decided to allow inserting dates, and putting dates by a drop-down menu without sorting the dates. So there were just hundreds of dates in a completely unsorted list that they had to look through, somehow managed to find the right date, and then repeat it again and again and again and again and again until the whole spreadsheet was done. And if there was an error, by the way, you had to go back to the beginning and it wouldn't save what you had done so far. That meets the needs technically. It'll eventually get it done, I guess, but it's not in a way that works for the user. It's really hard to call that a high-quality product. Okay, that's usefulness. Sustainability, what's it about? 
The question is, can we keep building without unnecessary obstacles? And I thought about this a lot. I didn't want to say without obstacles, because there are always going to be obstacles there. The question is, are we identifying the most important obstacles and trying to account for them, trying to limit the things that are going to stand in our way as we continue developing? The targets, in this case there are two targets, which I think are really one. The first is our software. Is the software resistant to change? That breaks down into two sub-questions. One is an issue of understanding. Is the software written in a way that we just won't understand what it does, and it's going to be hard to make changes? The other piece of it is, well, even if it's very clearly written and we understand what it does, certain architecture choices we might have made make it hard to change things later on. So it just takes longer. It's a lot more effort to do. The other target is the development team. We so often forget that the development team is, I don't even want to say it's as much a part of our software as the code itself. It's more of a part of our software than the code itself. Think about how careful you are when you're deleting code, putting in new stuff. You don't do that with your team, right? The rate of churn is ideally a lot lower, and if you crash the development team, you basically don't have a product anymore. So it's important to be aware whether the team is unstable in some kind of way that, if things are tipped a little bit, is that going to prevent future progress? Is the team going to collapse? Or just become less effective in some way? That's sustainability. Let's talk about accuracy. This is the one that we probably think about the most. The question that we're asking is pretty simple. Does the software work the way that we think it does? The target here, you'd think I'm going to say the code, right? The target is ourselves. We have to ask four questions. Have we developed our understanding of the problem? Sometimes we just don't understand the problem correctly. We do everything right in the coding step, but we haven't taken the time or the effort to figure out first, is this actually the problem? Does it solve the problem? All the usefulness stuff leads into an accuracy problem. Have we developed understanding of the code? Is there something about the code that makes it hard for us to get the next step done, and we're just coding the wrong thing? Sometimes there could be issues of cognitive overload, where even if the code is relatively clear, there's so much going on, too many integrated systems, microservices, whatever, that you just don't understand the full impact of your actions. Finally, our understanding of a problem changes over time. Sometimes the problem itself changes over time. If we're not updating the code to reflect that, that ultimately appears as an accuracy problem. These seem like things that are maybe mashed together and shouldn't be together, but if you think about it from the perspective of the user, if any of these are failing, your software is broken. That's a bug report, any of those questions. To me, that means that they're all issues of accuracy. I can't speak for anybody else, but when I think about why I write good code, why I choose, actively choose, to write good code, it's because I want software that's useful, sustainable, and accurate. Good code is a significant tool in helping me get there.
Here's the traditional model, with the business-development chasm pictured. In the right corner of the room, we have accuracy and sustainability. Those are seen as the domain of the developers: to write good, accurate code. On the left side, we have usefulness, which is the domain of designers, UX people, product people, et cetera — and never the twain shall meet. We really subdivide those responsibilities, and it's not surprising that each side advocates for the things it cares about. Well, we said never the twain shall meet, but today the twain shall meet. What we actually need is to think about quality as all of those factors together. We can't leave out any element of that. We have to be part of that usefulness circle as well.

This has all been pretty abstract, I realize, and I don't want to leave you with just abstract ideas floating in the air — I want to take this down to a very practical level. What we're going to do is clear out the diagram, ask the question "is it quality software?", and actually fill in the things that we do, on a much more granular level, to address each of these three points: usefulness, sustainability, and accuracy. Now, this is a Venn diagram, so there is going to be some overlap on occasion — practices might contribute to more than one of these circles, or to something we might think of as the confluence between two circles. So I'll give a little more definition to the middle. Between accuracy and usefulness, we have the question: does it work in the way that the users expect? All right — less thinking about ourselves, starting to think more in the direction of our users. Between accuracy and sustainability, we're asking: can we keep building our system without breaking our system? It's future-focused accuracy. And up top, between sustainability and usefulness, we have the question: are there obstacles to future usefulness? Essentially, things that aren't threatening usefulness right now, but where we're aware that the choices we make now might have some kind of impact in the future on how useful our product is.

All right, so I titled this talk "What comes after SOLID?" — I thought it was a nice, juicy title. But I also think it makes sense to start by talking about SOLID, because it's this traditional metric of software quality. We think a lot about it, we talk a lot about it. I'm actually not going to talk a lot about it — I'm going to run through it really, really fast. So if you aren't familiar with all of the details of SOLID, that's okay; I could spend the whole 40 minutes just talking about that. You might be bored for about a minute, but we'll be fine after that — there's tons of stuff that you will understand.

Fine, so SOLID stands for five different principles related to creating whatever we might define as well-crafted software. The single responsibility principle: the idea that every element in your system should do one thing and do it well. The open-closed principle, which is short for open for extension, closed for modification, meaning that you should be able to add new features without modifying existing code — how you do that is, again, a long story, so I'm not going to get into it. The Liskov substitution principle: basically, use inheritance properly. Don't inherit from a superclass because you need a method; inherit because it's actually a specialized instance — and it should fully implement the superclass and all of its functionality.
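To make that run-through a little more concrete, here is a minimal sketch of the single responsibility and Liskov substitution ideas. It is not from the talk, and it's written in TypeScript rather than Ruby purely for brevity; all the class names are invented for illustration.

```ts
// Single responsibility: each class does exactly one job.
class ReportData {
  constructor(public readonly rows: number[]) {}
  total(): number {
    return this.rows.reduce((sum, n) => sum + n, 0);
  }
}

class ReportFormatter {
  format(data: ReportData): string {
    return `Total: ${data.total()}`;
  }
}

// Liskov substitution: a subclass honours the full contract of its parent.
class Shape {
  area(): number { return 0; }
}

class Rectangle extends Shape {
  constructor(private w: number, private h: number) { super(); }
  area(): number { return this.w * this.h; }
}

// Anything written against Shape keeps working when handed a Rectangle.
function describe(shape: Shape): string {
  return `area = ${shape.area()}`;
}

console.log(new ReportFormatter().format(new ReportData([1, 2, 3]))); // Total: 6
console.log(describe(new Rectangle(2, 3)));                           // area = 6
```

The point is only that each class has one clear responsibility, and that code written against the superclass keeps working when a specialized instance is substituted in.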
Interface segregation: this is not that relevant for Ruby, because we don't really have interfaces in the traditional sense, but I think of it as ultimately being about limiting the surface area of how objects depend on other objects. You can do that with or without explicit interfaces — technically I don't think that matches the principle exactly, but it's close enough. And finally, dependency inversion: depend on abstractions, not concretions. You're basically pushing dependencies to the outside of your system, and then the core elements, the units, depend on a sort of theoretical idea of what the dependency is doing, without relating to too many of the implementation details.

So let's run through this very quickly. SRP, I think, is a big winner. If every item in your system does one thing and does it well, it's easy to tell if it's doing the thing it's supposed to do, and it's much easier to change later, because you know exactly what to go back to and modify. And — this is sort of a personal opinion; there are going to be a lot of personal opinions here, so I apologize, but kind of sorry, not sorry. The main thing here is not to learn the details; it's to see the way of thinking, and ultimately you're going to apply it yourselves. But back to right here: usefulness. I think single responsibility means that each part of your system has a very clear responsibility, and that lets you start thinking about whether it's the right responsibility. Is it something that actually creates value for people? Is this something that should or should not exist in our system? It opens up a pathway to those conversations, so I think of it as a very, very central practice. Open-closed is really just about the future, so I put it all the way over in sustainability. Liskov substitution is about avoiding certain classes of errors, so I think about that as avoiding errors now and avoiding errors later — accuracy and sustainability. Interface segregation and dependency inversion, again, are really about the future, so those go all the way into the sustainability camp. So that's where SOLID falls out, in the way that we've expressed it: it's mostly in the sustainability camp. There are a few that get into other places, and again, SRP to me is a pretty central practice.

Let's talk about some other stuff. Sandi Metz has her four principles, which get the acronym TRUE — you can read about them in the first chapter of POODR. Four principles that are meant to create systems that are much more robust in their ability to accept change; they make it much easier to change them in the future. Transparent, meaning that the consequences of changing code should be obvious. Reasonable, meaning that the cost of a change should be proportional to its benefits: a not-very-beneficial change should be cheap, and a very, very beneficial change can justify a higher cost. Usable: basically, is the code reusable in other contexts. And Exemplary: your code should set a good example for the coders to come — because, of course, we all look at the system, see that it is the way it is, and perpetuate those qualities in the future, since obviously whoever was here before did it right. Those are all explicitly about the future, so I put them right into that sustainability camp as well. We're pretty heavy on sustainability, but let's try to diversify our strategy for quality just a little bit. Let's talk about testing.
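Before moving on to testing, here is a similarly hedged sketch of the interface segregation and dependency inversion ideas described above — an editorial illustration in TypeScript, not anything shown in the talk; `Persistable`, `SignupService`, and `InMemoryStore` are made-up names.

```ts
// Interface segregation: a small, focused interface limits how much
// callers depend on.
interface Persistable {
  save(record: string): void;
}

// Dependency inversion: the core logic depends on the abstraction,
// not on any concrete store.
class SignupService {
  constructor(private store: Persistable) {}
  register(email: string): void {
    this.store.save(email);
  }
}

// Concrete details live at the edge of the system and can be swapped freely.
class InMemoryStore implements Persistable {
  records: string[] = [];
  save(record: string): void { this.records.push(record); }
}

const service = new SignupService(new InMemoryStore());
service.register("user@example.com");
```

The core service never learns which concrete store it is talking to, which is what it means to push the dependency to the outside of the system.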
The first thing that, very often in our community, we forget is even a choice: testing at all. In the wild, there are a lot of places and teams that don't test at all, so choosing to test at all is a practice. Code coverage: the attempt to make sure that every piece of your system is exercised in a test. Then there are the different types of tests we can talk about: unit tests, which test a small piece of your system; integration tests, which test how pieces of your system interact with each other — a much broader perspective; and manual QA, which is even further out: you have a person who probably hasn't worked on the system themselves, thinking from a user perspective, trying to use it as a user, trying to break it, et cetera. Then we have TDD and BDD. I'm not going to offer my own definition; I'm just going to give my understanding of how the RSpec Book defines them. Either way, you write your test first, but with TDD you start out with very small units and build your way up and out towards the edges of your system. With BDD it's the opposite: you start with a broader perspective of what you're trying to do for your whole system and then work your way down.

How do those fall out? Not surprisingly, they're all in the accuracy camp, because they all help you write more accurate code. Coverage helps you out with the future as well — it's explicitly designed so that if anything about your code changes in a way that's going to break it, you have a test alerting you to that fact. BDD, manual QA, and integration tests, I think, actually do relate a little bit to usefulness, because they make you think about your system not in terms of "is the code right?" but "what is the purpose of writing this code?" Testing itself I put in the middle, because I think all of these provide a pretty strong signal in a bunch of different directions. I think of testing as an essential practice as well.

All right, that was a little dry. Let's talk about some hot new buzzwords. Let's talk about functional programming — DHH seems to be into it, or not, I'm not sure, based on the keynote this morning, but everyone's talking about it. Type systems: everyone's talking now about gradual typing, soft typing, TypeScript — types, types, types. Immutability: creating objects that should not change after they're created. Scalability: everyone's talking now about how you have to be Facebook-style, you have to be Twitter-scale, you have to be — I don't even know. All of these are really in the accuracy camp. You could make an argument to push some of them over into sustainability a little bit, but ultimately they're about writing code that's accurate, that doesn't have unexpected side effects. So that's most of them. Scalability is a little different: it's about future usefulness. And the reason I say that is because if you actually need scalability right now, that's not called scalability anymore — that's called "our app is down in production". That's a different problem — an important one, but not the same. Scalability is about "one day we might need to deal with a lot, a lot more load, so let's think about that now" — which, I don't know how useful that is, but for some apps I guess it can be.

All right. Complexity metrics — those are fun. Cyclomatic complexity: how many assignments, branches, and conditionals does your code have that make it more difficult to understand? If you use Flog, it's looking at that kind of thing. Connascence.
This was introduced to me by the patron saint of Ruby, Jim Weirich, in a talk that he gave a few years back. Basically, different places in your code that have to change in tandem create connascence, and we want to lower that — we want to minimize how many things you have to change together to have your system not break. So I put it this way: the more complex something is, the harder it is to write correctly the first time, and the harder it is to change correctly the next time as well. That's where I see those falling out.

Okay, now we get to something that's, for me, a lot of fun, something I care about a lot, which is team-oriented practices. Organizational conventions: if you have something like "we all use Rails", "we all use a certain tech stack, a certain process tool" — anything that is consistent across your organization that makes it easy to move across teams if need be — that's one team-oriented practice. A style guide — and ideally, enforce that style guide programmatically with RuboCop, a JS linter, whatever the appropriate tool is — something that keeps your code consistent so it's much easier to navigate and understand. The bus factor: this is the measurement of how many people on your team would have to be hit by a bus in order to destroy your application. Fun fact: most teams have a bus factor of one. Which is bad, because occasionally, aside from dying, people also quit, get fired, take vacations, get sick. We ideally want room in our organizations for people to do normal people things and have our apps not be destroyed. So: mainly knowledge sharing, and also upping the level of everybody on the team, rather than hoarding knowledge or skill. Code review: stuff gets reviewed before it's checked into source control. Pair programming or mob programming: sort of like code review on steroids — taking multiple developers, however many it is, and having them work on code at the same time, each providing their own perspective. Internal documentation, READMEs, stuff like that: things that help you onboard new developers and also remind you of things that you might have forgotten, or might have forgotten to share with other people on your team. Debugging tools: stuff that helps you out a lot when things break and you need more insight into your system. And the last thing is mentorship, which I'm choosing to define — and it's a choice, I'll admit it — as taking two people on your team who have different skill sets, pairing them together in a formal context over a long period of time, and having them share their skills with each other.

So most of these fall into sustainability, because they're practices that help you build up your team: your team works better together, your team is more stable. I put mentorship on the crossover with accuracy, because it turns out that people who are more skilled end up writing better code — so it helps both the first time and with keeping your team healthy in the future. Code review I put in the crossover between accuracy and usefulness, because code review, first of all, is just a good check on whether your code is doing what it's supposed to do, but it also creates an opportunity for a conversation about whether we should be writing this code at all. Is this the right code to write right now? Is this the right solution to the problem we're facing? And pairing and mobbing, like I said, are kind of code review on steroids.
The cool thing about it is that it does all those things while also building up your team, building those relationships. I once saw a Quora answer by Kent Beck — someone had asked whether Kent Beck believes that pair programming is always a good idea, and Kent Beck, I guess, found the question and answered. He basically said: when either the problem space is big, the solution space is big, or you need to build relationships on a team. It's really important to remember that pair programming and mob programming also have that effect.

All right, a couple of miscellaneous items. Continuous integration: making sure that your code passes a build before you actually deploy it. Frequent releases: kind of self-explanatory. Refactoring: not as a thing that you do once, but as a thing that you do constantly, always looking to improve the health of your code base. Simple APIs: basically using things like Rails REST conventions or JSON API, or just the normal ways of doing stuff in the language of choice — things like attr_reader and attr_writer rather than get_attr and set_attr in Ruby, or the opposite, I guess, in Java. Just using the things that people expect.

So those fall out kind of all over. CI is mainly an accuracy thing: you want to know that your code works. Refactoring is on that border between accuracy and sustainability, because it helps you write code correctly — you're usually refactoring in the context of writing new code, and it helps you make sure you got it right — and it also helps you keep your code base healthy in the long run. Conventional APIs I put up top, because I think they are usually well designed and they leave you room to grow in the future. And frequent releases — I did not think of frequent releases as a central practice for a long time, so this is kind of a recent revelation for me. But the truth is, when you release frequently, first of all, you figure out that things are broken really fast, so that helps your accuracy. It helps your sustainability, because you're not building castles on top of foundations that you only find out are broken when you deploy. And it helps with usefulness, because anything you've built but haven't shipped you can think of as inventory: you haven't actually created value until you've shipped. The faster you ship even a small bit of functionality, the faster it's going to create value.

Okay, so we've covered a lot of ground. We have a pretty diverse strategy, but we're kind of missing something. What goes in that red area? What goes in that usefulness section that we can use to balance out this picture of software quality? To me, it's practices that are centered on user-oriented thinking. The main one, which kind of leads to all the others, is a focus on delivering value. Rather than thinking about what's a cool thing we could do now, or thinking about it from a very technical "let's build it up from the ground up" angle, just focus on: what's the next thing I can do that will create value for our users? To understand that better, you have to actually talk to your users. You have to research their needs. And if you can collaborate with them — the closer the collaboration, the better you'll be able to actually solve their problems, because you'll understand what their problems are. Prioritization.
It's really easy to get caught up in what's a cool thing to build, but if you're focused on delivering value, you'll prioritize the things that actually matter most for delivering value. Discoverability: if you create a really cool feature but nobody can find it, then you might as well not have done any of that coding, because it's useless. "Empathetic UI" was not a term until about three seconds ago, as far as I know — I'm sure there's a real term — but what I mean is a UI that thinks about the needs of my users, what kinds of situations they're going to be in when they're using our software, and then anticipates those needs and gives them what they need, in the way that they need it. On time: it depends on your organization and on your users and their needs, but if you're writing tax software, really try not to be a year late. Performance: we always think of performance as a computer science thing, and for the life of me I can't figure out why. Part of it is because so much of computer science curricula is about performance — but it doesn't matter for reasons of technical purity or anything. It matters because users can't wait forever for your app to work. It's there to meet a user need, and when you do performance optimization, you have to actually focus on meeting that user need: what are the things that they actually need me to optimize?

So that's pretty good for the red area, and we're just going to add a couple more things for technical products. Documentation is the face of your product to your users. If they see a discrepancy between your documentation and what your software actually does, they don't think your documentation is wrong — they think your software is broken. Something to think about. Writing client code may or may not be relevant to what you're doing, but sometimes, if you write a client library that helps your clients use the technical product you're creating, that will actually give them more value than staying on your own side and making a more robust product that they then have to figure out how to integrate.

All right, so that's the whole picture in my mind — there are more practices that I care about, but we only have limited time. This is my brain right here on the slide. There's more to my brain than that, but it's part of my brain. But ultimately, on the question of "is it quality software?", I can't tell you — you have to decide. So I'm going to leave you with this: a blank slate. I'm about to put it on my site — I tried to deploy before and it didn't appear, so I'm going to try again right after. Bring this back to your teams. Think about these issues. Think about the practices that you do and that you care about, as individuals and as teams, and whether there are areas that are missing — that are a little patchy — in terms of your software quality.

All right, so we're going to wrap up with a couple of parting thoughts. Number one: different projects require different balances of factors — and actually, Kent Beck has this 3X framework he's been talking about recently, which points out that even within one project, which of these things matters will change over time. Number two: I don't think it's bad to focus on good code first in terms of career development. Why? Because it's more straightforward. Business understanding takes a really long time to develop. It's really important, but it takes a long time.
Good code — you can get really, really much better at good code on a scale of months, maybe a small number of years. And once that happens, by the way, it becomes more automatic: you'll find yourself less able to write bad code, if that's a way I can put it. Business understanding you can't automate. You have to constantly refresh it, because there are constantly new business realities. So the implication is: start off really focused on good code, and then move more into business understanding as you go on. So your career trajectory might look like this. That middle line is sloping down: you start out focusing a lot on good code, but then it goes down, and business understanding comes in in its place. Those lines might cross. They might cross really far if you're going into management or project or product management or something like that. They might stay far apart if you're an architect. But those lines should change over time — that's okay, that's expected, that's a good thing. The main thing is the line on top: how much value are you creating?

So what can I do to build business understanding? It's basically a matter of learning: learning about your users, your industry, your organization; looking to solve dysfunctions within your organization; ideas about business, about organizations and processes. So there's all that learning, but you also have to become better at empathy skills. And that's my last point: empathy is going to become ever more important. What can you do to build empathy skills? Number one is read, learn, find out about new ideas — the next slide will talk about that — but the other part is practice, meaning cultivating curiosity about others, especially if they're different from you, which is really hard. You have to listen to them. You have to understand their needs and their feelings, and see the humanity, ultimately, in everyone. That will make you a better person, and it'll also make you a much more effective developer and person in your business.

So I started a club — or I'm starting a club; it's launching officially at the end of RailsConf. We're going to be reading books on empathy, in the software context and in a general context, and thinking about how they apply in our professional lives. You can check out the site, devempathybook.club, if you're interested. I'd really like there to be more discussion about these things in our community, so check it out. But the bottom line is: make your software valuable — high quality, creating value for users — and make yourself valuable.

I want to thank a few people. I'm not going to name them all — they're on the slide — but what I've said today is not my ideas per se; it's my analysis and integration of ideas that I've learned from and through these people. I especially want to call out Gene Westbrook, who was my mentor for my first year at Vitals, and who is in the room today. Thank you for being there for me. And the last thing I'm going to say is: I spent a lot of time on airplanes to get here — 18 hours in the air, three flights — so I still have the airline announcements haunting my dreams. So I'm just going to say: I know you have a lot of options as to where you can spend your time at RailsConf. Thank you very much for spending it here. Thank you.
You care deeply about code quality and constantly strive to learn more. You devour books and blogs, watch conference talks, and practice code katas. That's excellent! But immaculately factored code and clean architecture alone won't guarantee quality software. As a developer, your job isn't to write Good Code. It's to deliver value for people. In that light, we'll examine the effects of a host of popular coding practices. What do they accomplish? Where do they fall short? We'll set meaningful goals for well-rounded, high-quality software that solves important problems for real people.
10.5446/31189 (DOI)
Okay, the talk today is a fairly abstract talk — the abstract was extremely abstract, especially when it was written. What this talk is about is that all good things come in threes. This is a reflection of some things that are going on in the industry at the moment. And I'd like to point out that the entire presentation is running in the page via an SVG plugin that we will be releasing shortly; last year I ran it in a plugin that we didn't release, but this one you'll all be able to download in a few months' time — once I get the bugs out. You can be a tester if you want.

Okay, so what are the three screens? That's the question — people read the abstract or look at the title and say, what the hell are three screens, what are you talking about? Is it my multi-monitor setup? Is it my iPhone, my BlackBerry, my Nokia phone? What the three screens are is the industry's need to provide some kind of common platform for presenting graphics on the desktop, on set-top boxes, and on handheld devices. And this is a real industry need — for things like the Bundesliga, you want to distribute TV, you want to be able to watch the show on all the devices: you want to author this content once, run it on all the devices, and just have it adapt to the screen resolution. So this is the basic concept behind what the three screens are.

So, historically, we have Java. Java came along with the big promise of providing a single, targetable environment on any device, anywhere. It had the write once, run anywhere promise — and of course, as we all know how that worked out, it ended up as write once, test everywhere. It's a shame, and other platforms have come along and tried to do this. Java ME as a mobile platform is a perfect example of the problem: in terms of penetration, something like 1.8 billion units of Java are in the field — but how many of you are writing Java ME applications today? How many iPhone developers or Android developers are actually doing Java ME at the same time? There is a serious trend: if you follow the mobile market, there is a huge drop. So Java hasn't really fulfilled the promise of covering these three screens.

So we come to: why SVG? This is a very good question. Does anyone know the answer? If I knew the answer, I'd make lots of money, I'm sure. Well, my answer is that SVG is an open standard, something that is penetrating all devices. You find it on set-top boxes, you find it on PCs in browsers, you find it in embedded markets. And so, as a common format for graphic artists to author to, it is a great portable environment, especially for things like video.

So let's just make this concrete for a moment. Imagine that you buy a mobile phone, you fire it up and you decide, okay, I'm going to watch some video. You might get something like this: this is basically seven H.264 video streams, all SVG video, in one window. And the advantage of having SVG for this kind of content, of course, is that this could be your iPod Nano, this could be your iPod Touch, or that could be your mobile phone, your set-top box, your PC. The inherent scalability of SVG is what gives it the ability to adapt to any screen size, as long as the content is authored properly.
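To picture what such a multi-video SVG scene might look like, here is a rough editorial sketch that assembles an SVG Tiny 1.2 document with several video tiles. It assumes a Tiny-1.2-capable viewer such as a set-top box or mobile profile — desktop browsers generally do not render the SVG `video` element — and the stream URLs are placeholders, not real content.

```ts
// Hypothetical channel list; the URLs are placeholders.
const channels = [
  { id: "ch1", href: "stream1.mp4" },
  { id: "ch2", href: "stream2.mp4" },
  { id: "ch3", href: "stream3.mp4" },
];

// One video tile per channel, laid out side by side.
const tiles = channels
  .map((c, i) =>
    `<video xlink:href="${c.href}" x="${10 + i * 110}" y="10" width="100" height="60"/>`)
  .join("\n  ");

// One viewBox, any screen: the same document scales from a phone to a full-HD TV.
const doc = `<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     version="1.2" baseProfile="tiny"
     viewBox="0 0 340 80">
  ${tiles}
</svg>`;

console.log(doc);
```

Because everything hangs off a single viewBox, the renderer — not the author — decides how big the tiles end up on any given screen.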
And you can imagine this kind of little widget on your desktop that you can tuck in the corner. Now imagine you're watching the Olympics: you could have channels of sport coming up on little tiles in the corner — your favourites — and as you click on them, they pop up full screen. So you can have this whole out-of-browser experience through SVG. And of course, it's all based on a really nice little widget set, as we'll get to see in a few minutes.

Okay, so that's kind of a strange justification for using SVG, but from what we've been hearing in the embedded space, there is a serious industry demand for one format — for authoring tools that generate the one format, so that multiple vendors can provide solutions for the same format, and that basically gives the operators lots of choice.

So one of the areas I wanted to talk about today is horizontal application areas for SVG. Rather than the web you're used to — just browsing web pages with SVG, since a lot of the focus at this conference has been on HTML-based solutions — I wanted to look at a couple of parallel areas where SVG is being looked at heavily as a future solution. The first one is the problem child of the telcos — a lot of money has been lost on this technology — and that's IPTV. IPTV services have been around for a number of years, and a lot of the existing offerings are based on HTML. Now, if you went to NAB this year and walked around the vendors' stands and looked at the offerings, you'd see a shift from HTML to SVG, with vendors offering basically both flavours. And if you picked up the trade magazines and read them, the one thing that stood out on the cover was: for this to succeed, it needs standards. The big impediment to broad adoption of IPTV is the lack of a single industry standard. At the moment there is the Open IPTV Forum — which I think somebody will be talking about later on — and they are attempting to standardize a common UI technology for all set-top boxes. That will provide a common platform and hopefully bring the industry forward.

So, for example, there is a company called Dreampark, in Sweden — of course the Swedes lead in all this kind of stuff, they're always doing great work — and they specialize in SVG. This is a screenshot of their advanced UIs, which are all done in SVG. This is really hardcore SVG, nothing like static graphics. The way it works is as a thin client: it fires up a request which pulls up basically a skeleton page, and that skeleton page then does everything through further requests — JSON, Ajax requests, back and forth — populating the movie lists and titles and all that kind of stuff. And of course, the reason people want to move to something like this is that it's scalable: whether you have a standard-definition TV, an HD TV or a full-HD TV, it just scales to your screen size. That's the promise of SVG.

So this will just take a second — I just want to show you that this is a real thing. What I'm going to show you is basically the service that Dreampark is running, running in a virtual machine, and we'll talk to it, just to give you a rough idea of the kind of UIs they're building. The nice screenshots I showed on the slides are really nice mocked-up graphic-art ones; if you're actually running on a 500 MHz MIPS set-top box, you are not going to see that kind of screenshot. Instead, what you'll see is the best they can do with the limited hardware resources available.
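Before the walkthrough, here is a minimal sketch of the skeleton-page pattern just described — fetch a JSON channel list, populate the SVG guide, and re-target the scene's video element when a channel is chosen. It is illustrative only, not Dreampark's code: the `/channels` endpoint and its field names are invented, and it assumes a script-capable SVG Tiny 1.2 viewer.

```ts
const SVG_NS = "http://www.w3.org/2000/svg";
const XLINK_NS = "http://www.w3.org/1999/xlink";

async function loadGuide(svg: SVGSVGElement): Promise<void> {
  // Hypothetical backend call returning the channel lineup as JSON.
  const channels: { name: string; streamUrl: string }[] =
    await (await fetch("/channels")).json();

  channels.forEach((ch, i) => {
    // One text row per channel in the on-screen guide.
    const label = document.createElementNS(SVG_NS, "text");
    label.setAttribute("x", "20");
    label.setAttribute("y", String(30 + i * 20));
    label.textContent = ch.name;

    // Selecting a row re-targets the single <video> element in the scene,
    // which is roughly what "change the video tag in the DOM" amounts to.
    label.addEventListener("click", () => {
      const video = svg.querySelector("video");
      video?.setAttributeNS(XLINK_NS, "xlink:href", ch.streamUrl);
    });

    svg.appendChild(label);
  });
}
```

The interesting part is that the whole guide is just DOM manipulation plus Ajax — the same techniques used in ordinary web pages, applied to a set-top box UI.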
And this is why I'll talk a bit later about software CPU performance, GPU acceleration, that kind of thing, and how it relates to all this stuff. Okay, so you see the little spinning icon there — it's a bit slow because it's running in a virtual machine over a remote connection, and this window was supposed to have video playing in the background. The little twirly thing you just saw is the Ajax requests going off, fetching the data from the server, and then you get to the main screen. On the right here there's a TV guide which you can go up and down in — normally you'd do this with your remote control — and you can pick a channel; I have to move to the other side of the screen so you can actually see it. So here's CNN: it's pulling in all the information about the programme — all Ajax requests — so as you go up and down it pulls in information for whatever channel you're looking at. You can flip through the different channels and select one: I want to watch this, go to the channel. It then shows the channel — it actually updates the DOM, changes the video tag, sets up a UDP stream from the server across your IP connection, and you're watching a new channel. Then they have more advanced things, like a portal which shows you your favourite content — you can see it coming in over the network — with all these TV shows and so on. And, also in SVG, there's the video-on-demand store: it goes off, pulls over the current top-ten titles, and then they do things like insert animation nodes on the fly into the DOM and do these fancy animations between all the tiles that are available. So it's a funky way to interact, but the engine behind all of this is SVG — declarative animation, Ajax requests, a lot of the web technologies you're familiar with — applied to a particular niche, namely the set-top box, rather than a general web browser. So let's go back to Internet Explorer, of course.

Right, now the second application area I wanted to talk about, which is the third screen. We all have the first screen, the PC, and the second screen, the set-top box connected to the TV — even if it's integrated into the TV. The third one, which is a really exciting application area for SVG coming in the future in the USA — and probably doesn't matter so much to the people here from Europe — is what's called ATSC M/H, which stands for mobile/handheld. What it is, is a broadcast television service that broadcasts video to the handset, which has a dedicated receiver chip, while the phone's data channel is used to display interactive content that goes to and from a server. The model is that a TV broadcasting station can spend approximately $200,000 to buy a specialized transmitter for this signal. The signal is sent in bursts of digital data across the airwaves, so the phone powers up, brings in a bit of video, and shuts down again to save power.
Then it plays the video, and the data channel is used for things like voting on American Idol, for example, or advertising — and it's an advertiser's dream, because if you tap on the phone, that goes through the data channel, so they know that you're tapping on it. But more importantly, the delivery of the video is over the broadcast airwaves, not through the internet, so the cell towers don't get clogged. In other countries where this kind of technology has been introduced, it has reached a market penetration of about 50% of the population. So the broadcasters are not dead yet — in fact, if you look at the last quarter, the number of broadcasters in the US increased rather than decreased, contrary to what you might expect.

Just looking at this block diagram, which is off Wikipedia, I want to point out that there are a lot of pieces, and this little box up here called RME is where SVG sits. It's a tiny little box in a very, very complex system. I won't go into the technical details of RME, because it would bore you to tears. So, just a couple of examples. These are the kinds of screen mock-ups that have been done for the ATSC stuff: you give them your demographics, you vote for that American Idol contestant — and if you vote, your vote is counted and they know who you voted for, so they can tailor the advertising to your preferences. Male or female, I guess. Same thing with music videos, buying a BMW car, all this kind of stuff. It's a very good application area for SVG as a technology.

Now, people are skeptical about the adoption of ATSC M/H — everybody says, oh, we don't need it, we've got Hulu, we've got YouTube, who cares about the broadcasters? Well, the broadcasters are trying to build up their business, and NBC and Fox have just announced a joint venture — they're calling it the Mobile Content Venture — which is very interesting at the moment. So this could actually have a future if it's adopted. And a little side note that might interest the technical people in the audience: SVG as a technology for broadcast television is a lot more established than you might know. One of the biggest manufacturers of broadcast gear is a company called Harris, and they have these large broadcast systems; when you're watching TV and a little lower-third comes up at the bottom — the latest news, or the name of the person on screen — that is actually done in SVG. Yes, it is. They have people at authoring stations doing the graphics, which are rendered on the station and inserted live into the broadcast stream, using SVG as the exchange format.

Okay, so we are talking about the three screens. One of the things about the three screens is that PCs with three-gigahertz quad-core or eight-core hyperthreaded processors, two gigabytes of RAM and NVIDIA chips are great and all, but what really keeps increasing is the number of transistors — we all know that Moore's Law is about the number of transistors, not clock rate; that's a very important distinction. There's another law, attributed to the computer scientist Andrew Newman, which says that layers of software abstraction will compensate for the increases in transistors, keeping software response time constant. I think a lot of people have said something like that — and I guess the answer is to just enjoy it. Okay, so here's the question: the magic bullet.
We all look for a magic bullet — we're always chasing better performance in our products, and as an engineer you have to make trade-offs: hardware versus software. We've seen a lot of talks on GPU rendering; there's software rendering, and there's a whole-system approach to making an architecture work, and this is critically important. So I thought, let's just do a quick test to see how the GPU compares to the CPU, using one of the test files that we've been using for rendering for a long time. So, what are we going to compare? By the way, the Microsoft guys have a preview build out — I think it will be released soon — and that's what I'm using here. Our plugin, for its part, runs on XP Service Pack 1 and upwards.

Let me do this. There we go — this is the well-prepared demo. Now, the Ferrari: this is one of our favourite test files, and it has been a favourite in the mobile community for a long time, because the tiger.svg is, as Brad Neuberg said, SVG 101 — it's 240 paths or something — whereas the Ferrari is many thousands of paths, a very detailed drawing of the engine and all that kind of stuff. So it's a very good test of how fast you can parse the SVG, how fast you can build the DOM, how fast you can render it. And this is where the whole-system approach is critical. So let me get ready to drag it across. We're going to do GPU — that's Internet Explorer 9 — versus software rasterization, which is our plugin running it. So what do you think the performance differential is going to be? Okay, let's check it out. Go, IE. Now the other one — and note that we progressively render as well. So we're looking at a factor of four or five here. Now, that's probably primarily because we have a smaller memory footprint — the file is about a megabyte — and what IE is doing is much more complicated than what we do.

But the lesson I wanted to leave you with is that GPU versus CPU is not always a clear-cut engineering decision, on a phone for example. There's a company in Canada doing UI work — people may know them — that does a soft GPU: they actually implement GPU primitives in software. And we're seeing ARM bring out tiny little Cortex cores — you'll be seeing ARM chips with four and eight cores very soon. So it's a whole trade-off in how you build this thing: whether you spread the load over multiple cores, using new languages like Go and so on, versus the traditional approach of GPU acceleration. I've done 20 years of ASICs — I was building blitting engines in the '90s — and it's a constant race. The problem is that Intel and companies like that keep shrinking their damn process and beating the ASICs that we build. So you've got to take it all with a grain of salt sometimes. It looks pretty good, though — it looks beautiful. All right, so that's what's coming for us.
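For anyone who wants to repeat a rough version of this comparison, here is one way to time load-plus-first-render of a large file like the Ferrari SVG in a browser. It is an editorial sketch under stated assumptions — network time is included in the measurement and a single animation frame is taken as "rendered" — and it is not the harness used in the talk; the file name is a placeholder.

```ts
// Rough timing of "fetch + parse + build DOM + first paint" for a big SVG.
async function timeSvgRender(url: string): Promise<number> {
  const started = performance.now();
  const markup = await (await fetch(url)).text();

  const host = document.createElement("div");
  host.innerHTML = markup;              // parse the markup and build the SVG DOM
  document.body.appendChild(host);

  // Wait for the next frame so the rasterizer has actually drawn something.
  await new Promise<void>(resolve => requestAnimationFrame(() => resolve()));
  return performance.now() - started;
}

timeSvgRender("ferrari.svg").then(ms => console.log(`~${ms.toFixed(0)} ms`));
```

As the later Q&A points out, a figure like this mixes parsing, DOM construction, and rasterization into one number, so it says as much about the whole pipeline as about the GPU or CPU alone.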
So, basically, this is the kind of Flash-killer we perceive SVG to be. This, of course, uses only SVG and a bit of ECMAScript — here's the Pink Panther. I think that's about all I can squeeze in, so let's finish it off. Right — questions. By the way, this is an SVG font. What about the V? Well, you can read it — look: S, V, G. Thank you. Thank you. Do you have questions?

The test that you conducted has a lot more to it than just rendering. Just to make things clear: as you mentioned, we're essentially parsing an entire DOM, and there's parsing time and so on and so forth, so the GPU rendering is actually a very, very small portion of that test — I'm not sure, from that point of view, that it really compares apples to apples. The other thing is that paths are what GPUs currently handle worst, because we have to tessellate the path each time. We have a number of ideas on how to improve that in the future, but it's something to keep in mind — paths are probably the worst case. We do better with text, and certainly better with images, as well as composition in general: if you do opacity or blurring or whatever filter effect, those things are much faster on the GPU than on the CPU. But tessellation is a problem, and it will take a little more thinking to solve. I just wanted to qualify the test.

No question — it was five times slower, almost six. But to be fair, the point I was trying to illustrate was more GPU versus CPU in general. And if it's any consolation, you're the fastest of all the other browsers on that test, okay? To be fair to the other browsers, wouldn't it have been a fairer test done that way, against them as well? That's a good question.

Going forward, are you supporting full SVG 1.1, or just Tiny, or what? We've started a pre-release program now — we need a lot of testers, so it's on a second round; it's not quite released yet, it's generally held back. If anyone wants to help test, they can. It's currently a Tiny 1.2 implementation, quite narrowly, however there are quite a few things in there that go beyond Tiny — for example, the set-top box prototype that I showed you uses a lot of that kind of thing, which is certainly not Tiny. We're on our way towards a full implementation, of course; given time, we've really got to build up the compatibility, and there are other things we're going to do in the next few weeks.

I just wanted some clarification about your plugin. I guess it will behave like a normal plugin: there'll be a rectangular box inside which the SVG is displayed — for an object inside an HTML page, for example — so you don't get the advantage of the SVG contours and so on. That's not actually correct: we've already done a few tests, and we want to make sure that inline SVG is supported, so that you can put the SVG directly into the page, the way current implementations do. So it is possible to do inline SVG. And you can do feature detection up front, so you can tell what support is out there and fall back appropriately. Any more questions? Thank you again.
For years, technology pundits have craved convergence and interoperability. In recent times, there has been a big push for the ‘three screens’ nirvana. The three screens ideal encompasses mobile phone, PC and TV screen. From a content supplier point of view, the three screen approach means building content for the consumer and delivering it to the three screens – those being the mobile phone, the PC client and the TV in the living room. In order to make this three screen ideal a reality, there needs to be an enabling technology to make it happen. This is the stumbling block. How to make content from one source adaptable to three screens? Enter SVG. With SVG, content can be authored to apply to any size screen. Scalability makes it possible. Industry has recognized this, and in response industry has put into place a number of derived specifications that make the three screen ideal a possibility. On the mobile front, the American ATSC body has adopted SVG Tiny 1.2 as the standard overlay for interactive TV on mobile phones.
10.5446/31190 (DOI)
Now I will start with my topic: an automobile crash-testing simulation using SVG authoring tools and an SVG 3D implementation — SVG 3D visualization — and this software is designed for academic purposes. Automobile crash testing is just a simple example application I have chosen, because I am a fan of cars and automobiles, that's why. The main focus is how authoring tools can be used to simulate particular operations, and how this can be used for educational purposes to show students how things work behind the screen.

So now I will take you through why I have chosen SVG — the power of SVG — then the motivation for this research, and the use of different SVG tools: jQuery and NetBeans (I have chosen the NetBeans 6.0 version to implement this project). Then SVG 2D animation and SVG 3D visualization. Why is it called visualization? Because SVG is basically a 2D format; after applying a few effects — filtering, tracing effects and physics-engine algorithms — you can get a 3D visualization of a 2D image, which is usually called 2.5D visualization. Then the implementation with the SVG authoring tools I have been using, for that automobile example. Then how to import SVG onto smartphones — this is what I am interested in doing in my future research on this topic; right now it is done as a desktop application. Then, what is the advantage of this scripting? Because with SVG, the drawn image can also be exported to PNG, which is universally accepted, so there is no problem rendering it where that is needed.

Now, quickly: as you know, SVG gives very compelling, high-resolution graphics. For example, Google Maps is using it — you can find an image, zoom in, and still have a clear picture with more detail about the image. With other formats like GIF and JPEG, though there are ways to zoom, you get broken pixels and you cannot get the detail of the particular data. SVG also takes less space and it is extremely simple, so images can be indexed by a search engine: if there is particular data associated with a particular image, it can be found using the search engine. For example, suppose there is a car body in the drawing; if you label that part with a tag, then that part can also be identified by the search engine. This is provided by SVG — it cannot be done with GIF or JPEG or any other raster format.

SVG also gives high-quality printing for any kind of screen size. That is another reason I would like to take it to smartphones in further work: you have a limited screen and a limited view there, but SVG still gives quality printing, whereas other graphics technologies like Flash cannot provide such high-quality printing. What are the benefits of SVG? Faster rendering and a good overall experience. It takes less storage space compared with GIF and JPEG images. High-performance zooming and panning operations are built in. Animations, filters, blurring, masking, scripting and linking can be done for graphical images. And you can have more dynamic information, like Google Maps: you can add information to a particular part of the design without altering the whole image — you just edit the tag and add the information to it.
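Here is a small sketch of that labeling idea — giving a drawing's parts real, indexable text and attaching test data to a group without redrawing anything. The part names, path data and attribute values are invented for illustration; this is not code from the presented application.

```ts
const SVG_NS = "http://www.w3.org/2000/svg";

// Build one labeled part of a drawing: a group with a <title> (searchable text)
// wrapping the actual geometry.
function makeLabeledPart(id: string, label: string, d: string): SVGGElement {
  const group = document.createElementNS(SVG_NS, "g") as SVGGElement;
  group.setAttribute("id", id);

  const title = document.createElementNS(SVG_NS, "title");
  title.textContent = label;                 // indexable text, unlike a JPEG
  const path = document.createElementNS(SVG_NS, "path");
  path.setAttribute("d", d);

  group.append(title, path);
  return group;
}

const bumper = makeLabeledPart("front-bumper", "Front bumper, steel", "M0 0 H100 V20 H0 Z");

// A later test just updates the metadata on the tag; the geometry is untouched.
bumper.setAttribute("data-material", "aluminium");
```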
This is exactly what I will be using in my application. For the automobile crash testing: when a company has to develop a new vehicle, before that they have to do crash testing for it, and they want to assign different parameters — which kind of material they are going to use for the front, what the speed limits should be. This information changes from time to time; in one day they can run different tests, and for each test they can change the information without editing the whole structure of the graphics. So this is very useful in that case. It can also be used for various educational areas, like the medical examples we saw earlier. So this is a simulation using SVG and jQuery.

Then, physics-engine algorithms can be used together with filter effects to give computational realism to the manufacturer of a particular part: they can change the parameters, have different material parameters associated with that image, and use them together with the filter effects and the physics-engine algorithms.

Now, how is 2D-to-3D visualization possible in SVG? All images are drawn using SVG scripting, and then brought towards a 3D look: the 2D drawing gets some enhancement from filters to gain a 3D visualization of the given 2D image. There are compositing models and transformation models available. In the compositing model there are multiple layers for the 2D image, and you can map these 2D layers one by one to get a single 3D-looking image. That is why it is called visualization: it is not an actual 3D image, but it gives a virtual 3D impression.

Layer options are combined in a variety of ways. You can fill a particular 2D object to give the feel of light — the shading, the highlights and everything — so the object appears to sit below or above another object, which gives a good look to the image. There is some research by Howard Andrews showing that for SVG the Z axis can be thought of as coming towards the user: you have your X and Y axes for the base image, and you place the other objects, as layers, along this Z axis. That gives the layered feeling for the 2D image, and that gives you the 3D visualization. User coordinates can be used for the text boxes. For the filter effects I am using jQuery with the svgfilter.js plugin, and for general SVG animation, svganim.js is available, which gives the functionality for all these things.
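To make the layering idea concrete, here is a hand-rolled sketch — plain SVG markup assembled in script, not the jQuery plugins the talk uses — with three layers stacked in paint order along the implied Z axis and a linearGradient fill standing in for the lighting. The shapes and colours are invented.

```ts
// Layers drawn later sit "closer" to the viewer; the gradient fakes shading.
const scene = `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 120">
  <defs>
    <linearGradient id="sheen" x1="0" y1="0" x2="0" y2="1">
      <stop offset="0" stop-color="#e6e6e6"/>
      <stop offset="1" stop-color="#555"/>
    </linearGradient>
  </defs>
  <!-- back layer: car body side panel -->
  <rect x="20" y="40" width="160" height="50" fill="url(#sheen)"/>
  <!-- middle layer: the two tyres, offset to suggest depth -->
  <circle cx="55" cy="95" r="18" fill="#222"/>
  <circle cx="150" cy="95" r="18" fill="#222"/>
  <!-- front layer: highlight strip painted last, so it reads as nearest -->
  <rect x="20" y="40" width="160" height="10" fill="#fff" opacity="0.4"/>
</svg>`;

document.body.innerHTML = scene;
```

The stacking order plus the gradient is all the "3D" there is — which is exactly why the talk calls the result 2.5D visualization rather than real 3D.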
Now, there are a few functions — like creating an image and transforming an image — which I am using for drawing on that particular image. A few of the images I designed using Inkscape, and to these images I have given the 3D effect. Here you can see the 2D layer objects: this is one surface, this is another surface, these are the tyres — there are two different tyres — and this is the front model of the car. These are the layer objects. Then you place them along the Z axis, put each one in a particular layer, and you get this kind of 3D look for the car. This is how you can provide a 3D-visualization look for your 2D objects. And, as you can see, the linear gradient is the function that provides the 3D look for the 2D images.

Now, this was the basic implementation — the front look. Suppose this is the vehicle, and this is before the crash test. Other parameters, like the distance, I have kept fixed for this experiment: this is the distance between the front body of the vehicle and the obstacle. In this crash test the vehicle will be collided with the standard support — this is the front support — and the parameters can be measured from this setup; for example, the speed here is 18 miles per hour.

For this I designed the UI using the NetBeans IDE and the SVG resources. I am using the Swing functions in Java, and action listeners to collect the information from the user input. The user has to select a combination — here you can see the user has to select, say, the 80 mph and material combination. Multiple combinations can be assigned for a particular object: one can be 80 mph with one material, another 18 mph with another material, and so on. And as I said, we can tag SVG images: for 80 mph and a particular metal we have a particular tag, and that tag is attached to that particular result. Like that, we can tag multiple results for these combinations, and they can be searched with a search engine for future reference.

So this is the normal animation: the shorter the distance between the vehicle and the obstacle and the faster it goes, the bigger the impact on the obstacle, depending on the speed the user gives. For this, the SVG animation function is what I am using. After the crash you get this kind of structure: the distance is reduced by 1.2 meters. That is taken care of by the MySQL database running behind the scenes, and it records this kind of information — after the crash, what was the condition of the vehicle, and for the particular kind of metal used, what was the impact — and it gives the output in this kind of format. Here I am plotting a path: I am using a function which fetches these characteristics from the application and draws that particular path on the chart. So for the combination we have selected: how much safety is there for the driver, how much for the passenger seat, how easy is it to get out in an emergency, and an overall safety verdict — so they can have certain parameters, like whether it is an 85% safe vehicle. This is the kind of useful application you can build for that. So in this application I am, maybe, showing the power of SVG.
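Purely as an illustration of the speed-and-material idea — not the talk's Java/NetBeans/MySQL implementation — here is a toy calculation that turns a chosen speed and material into a crude safety score and plots the results as an SVG polyline. The absorption values and the scoring formula are invented assumptions.

```ts
type Material = "steel" | "aluminium" | "carbon";

// Hypothetical fraction of impact energy each material absorbs.
const absorption: Record<Material, number> = { steel: 0.55, aluminium: 0.45, carbon: 0.7 };

function safetyScore(speedMph: number, material: Material): number {
  // Impact energy grows with the square of speed; the material absorbs part of it.
  const severity = (speedMph / 80) ** 2 * (1 - absorption[material]);
  return Math.max(0, Math.round(100 * (1 - severity)));
}

function plot(scores: number[]): string {
  // Map each score (0-100) onto a simple chart as a polyline.
  const points = scores.map((s, i) => `${20 + i * 40},${110 - s}`).join(" ");
  return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 220 120">
    <polyline points="${points}" fill="none" stroke="black"/>
  </svg>`;
}

const results = [18, 40, 80].map(v => safetyScore(v, "steel"));
console.log(results, plot(results));
```

In the real application the characteristics come from the database for each tagged speed/material combination; here the numbers are simply computed in place to show the shape of the workflow.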
How can the authoring use and the animation can be combined to have one single application? Then it is more versatile as it best use and it generates 3D animations. You can set 2.5D because actually it is a real animation. So it is an example of virtualization plus animation. The virtual effects achieved using filtering and using effects. This is mainly for academic purposes. So you can have different, using the same net bin's ID or the same terminology, you can define any other scenario for this. You can design any other application rather than autonomous test districts and this can be used for different physical attributes. Now, the future. I would like to have some portable desktop application to be converted to mobile applications. So right now, all Android market is picking up and you can have any APIs to design it. So you can export it to mobile applications and it can be used for the Azure Desktop. Because many devices support SVG APIs and JTUNY, which is SVG, GUI, VJET. So it can be generated and using Java, mobile, Java, etc. You can import it to your mobile environment. If you add a physical engineer, it will give some confidence to the application. So because of the filtering, effects and computing, it can be implemented for professional purposes. It can be used in the industry also. So basically compared to AutoCAD, it is very complex application. All these mechanical engineers, they are trained into that. But for a normal student, using SVG could be easier for you to understand rather than going to AutoCAD application or similar kind of design environment. So this was the main study purpose for this application. Thank you so much for your attention. So unfortunately, I lost my database in my hair transit. So I am using another PC. I cannot show you a demo of actual working. But I don't have a database in this one. So I have to manage all this. That's all in my presentation. Thank you. Any questions for speakers? I don't think the demo was good. I would like to report but I don't have a database. I have a question. So you are thinking about using physics engine, similar to crash also. Do you have particles and stuff? Yeah. Have you tried anything else? I would like to report. I would like to report my picture in the future. It was a bit complicated and during this time I could not even tell you. I have to be a bit into mechanical learning to try to get some function. So if you want to try for a professional base, you can have that. That's the end.
The main purpose of this research is to demonstrate an effective way of using the NetBeans IDE with Java to develop a portable, interactive desktop application with a rich UI which can render SVG 3D animations and combine them with SVG authoring tools. The implementation eases the importing of web statistics and databases into the application designed with Java. Using a simple example of animated automobile crash testing, the proposed transformation module is also capable of allowing vector paths to portray a desired 3D effect on an object in order to simulate the vehicle crash test. Although SVG does not support 3D geometry, for this crash test simulation the 3D effects are achieved by effective use of filters and transformations. The SVG Filters module allows a series of graphic operations to be performed on a given 2D image. The compositing module allows layered objects to be combined in various ways to produce different 3-dimensional effects using jQuery, transformations, and translations for vector graphics. Physics engine algorithms combined with filtering effects and transformations provide computational realism.
10.5446/31191 (DOI)
I think they probably changed directions from what we heard this morning. I've been coming to the SPD conference this past summer in 2002, and I've always been trying to get people involved with SPD is to look at some of the fundamental materials we're working with. Things like color, composition, and the way it looks, things are composed. I think in many ways the worst possible things to be concerned about when you're doing design and at the beginning, things like rock, skyrobes, rounded corners, ingredient fields, and not worrying about what kind of work that's done, and how they relate to each other, and how they build what you're looking at. And I've found ways to sort of engage people in this group. In this, I decided to take a series of examples, simple examples, of the ways in which you can think about relating to color, and color relationship, and building it in your graph. Now, these are all done hand-coded. No, I don't use, I'm not using anything, I just like it here. So it's very simple to work with, and very simple for anyone to do. It also means you use a very good space. You're a very, very simple piece of design. The first of these, and people really must download it, I mean, it's the simple marker. It's to fill a simple hex color, and then in the stroke, and then the stroke will be a color, and the stroke will be a, and these are all constructed out of that, and each one of these, you can go to the website, and you can open up, look at the SPG file, and they are very simple files. But what I think you can see is what happens when you relate the brightness and the dullness of colors to each other, and how they then give us kind of connections to what happens with color. One thing I quickly want to mention, I'll mention again, probably the most destructive thing in making good interaction of color is using heavy black lines around the edges of your form. Now that comes to computer graphics from a traditional print, which is there for good reason, because you had made a good solid line around the edges, so it wouldn't break down. And traditions that, and this is what's traditional is always from the new print in the 13th to 14th century through to comic books, which are, that way, we can get that to be read, and then print it on paper, and then throw it out really behind it. There wasn't some traditions like ceremonial in Japanese prints, the delicacy of the relationship, and the absence of a line or a very fine line. So the third reason you can't see things happening between colors is you can't see with, like the tiger, which is used so commonly in SPG, and that every day can destroy the chance of seeing what can happen between colors in color relationship. So that one with the black and dull, and another one is, I thought there were programs that would be fun, is to make your X colors add up to 16. So as you know, it's zero, so it's zero is one, and so nine is about ten, and you can count it up. And do that and see about playing around with that and those relationships that happen. And here's a whole series of each one of these is working with the color, so they add up to 16. And you can see, of course, the projection is a lot less good than what you see on screen. You can see a lot of whole possibilities of things that you could relate to and have happen in your color relationship. There's suddenly, there's delicacy there, and it can be done without spending a lot of time trying to figure out whether that discrimination is right. 
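As a concrete sketch of the exercise described above: one reading of the "add up to 16" idea is that the three digits of a shorthand hex color sum to 16. That interpretation is mine, not necessarily the speaker's exact rule, and the colors below are invented; note the thin or absent strokes, since heavy black outlines break the interaction between colors.

    <svg xmlns="http://www.w3.org/2000/svg" width="240" height="60">
      <!-- each fill's three shorthand hex digits sum to 16, e.g. 8+8+0 -->
      <rect x="0"   width="60" height="60" fill="#880"/>
      <rect x="60"  width="60" height="60" fill="#556"/>
      <rect x="120" width="60" height="60" fill="#277" stroke="#448" stroke-width="1"/>
      <rect x="180" width="60" height="60" fill="#f01"/>
    </svg>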
Just playing around with the number with X colors. And as I'll say later, it's always a good idea to even use X colors or the RGB. They rather not names, because the names limit you to very hard to edit and change around and make any sort of changes in the color. And another very useful tool, is this, and obviously there's nothing on it here. Let me see if we can do more on these examples. Oh, let me just show, and here, as I said, the background is a vast deal full. And these others are 2.3.4, and then we can go back to the front of the copy. And I even build a very nice stack of color ratios. And here's a few examples of where, again, you should go to the website and look at these, because they show you some of the possibilities that you can get by varying the opacity of the color and having them place through from the back to the front, and give you some very wonderful possibilities for working with color and color relations. And then you can also play around with opacity and fill and stroke. And so then you get a whole lot of, again, different variations of color and the mixing of color and the use of trans-parasite, which gives you a whole lot of possibilities of going in and out. And also, part of these are now good colors, because I do work with and think of color stationery, and that cool colors can be received, warm colors can come forward. And so when you're working with the color, you get those special relationships and workings because the warmth and coolness of the color also light colors can come forward, dark colors can go back. And so you can have a lot of things happen within the surface plane of your design out of working with color relationships. And so that's what I encourage people to do, to think about what happens through the interaction of color, what happens when you stack colors on top of each other, what happens if you make them bright or dull, and so that you can do much more with what happens visually in your design. I know there are certain individuals, appropriate for people who are sensitive to color, to think largely in terms of what you've written down in your code. But always thinking about the viewer is not looking at the code. They're looking at the results of what you've put in it. And if they're experiencing that, they are seeing any of the stuff that's behind it. They're seeing what's right there in front of the face. When we need to focus, I think, if we can, more on that, on what we see, and how much that affects our ability to communicate. Now, I come to this from background education, not in either your programming nor from a sale. And the emphasis so often in this still tends to be on one of those two areas, rather than how can we more effectively communicate with people? How can we keep them better? How can we get these learned things? How can we change information more effectively? And so much of what has been to set up is tendency to put a barrier between effective communication and how we set things up. Now, I have some other suggestions of things for people to think about. I mentioned in the beginning to avoid color names because they're much easier to work through a whole lot of series of possibilities in color relationship. And think twice before you use thick black lines, they break up the connection between colors, and then have some interaction. And third, be careful of how you distinguish that light and dark. As important as the ones you make of hue, that's different color. 
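To illustrate the opacity stacking just described, here is a minimal sketch of two overlapping shapes with partial fill and stroke opacity, a warm color over a cool one; the hues and values are my own placeholders.

    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="140">
      <!-- the mixture appears where the translucent shapes overlap -->
      <circle cx="70"  cy="70" r="50" fill="#f40" fill-opacity="0.6"/>
      <circle cx="110" cy="70" r="50" fill="#08c" fill-opacity="0.6"
              stroke="#048" stroke-opacity="0.3" stroke-width="4"/>
    </svg>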
So you're working accessible to those who have limited color vision. It's something that Jason pointed out often enough that people who have problems with color vision don't have problems with seeing value. And so if you're dealing with light and darks, light and darks, as well as with the value, that that's sort of a hue. Then you can make it accessible to the very hue. And look for sources outside of the hue. It can be a shop window, a church, a museum. Much two-dimensional art in the world is based on stroking fill. There's a tendency to think that putting things into a 3D is somehow better than having something in 2D. If you look at the history of art and art from all over the world, then you find that people have worked so subtly and so effectively, so beautifully, to essentially stroking fill. Yesterday I put down to the... I don't think I've got an interconnection to see if I do. I can know this for sure. And another, they used to do the molding which is down next to the graded palm and close to the center. There's a remarkable exhibition of art from the world in which there are just thousands of examples of beautiful work with stroking fill. It's all the variety and the richness and the subtly of that work can provide you with so many models for a way of looking at the way in which you're designing with stable vector graphics and vector graphics programs. It also can give you some examples of how there are several differences in the way scripts are used. Another interest in mine is topography, and I really regret that it appears that SVG fonts are not going to take off if I think it could be terrific if they could do it. But also, if I look at presentations here, it seems strikes me that people don't look much at fonts. And when I suddenly see Gil Sand and Ariel and Helvetica and Times New Roman and all the stuff in the jungle of different fonts in presentation, Helvetica and their last presentations. And I want to say, why do people use these that way? Why don't they do a simple thing like using a little learned spacing when they're using all uppercase? It's a simple thing to do with spouses. But to sensitize yourself to those issues, the visual issues of presentation on the table. I think it's a really important thing for all of us to do. If we want, as I think we all do, SVG is successful in communication. One thing, another thing that I didn't mention is this last point that I made is to make it disappear. I was essayed by a woman in the New York Air Force about the call of Crystal Galvin. And what she says, good topography is invisible for transparency. And that I find a good design is intended to be useful rather than to make say, is often most successful when it's transparent. You don't see it in numbers. And I know that a lot of us are, where we are by drawing attention to ourselves and our work, rather than thinking about how effectively is it used and how easy is it to use and how transparent it is to the user. And I think that's something for us to think about and see about thinking about focusing on. As I say, I'm involved in education and I've been doing this online for about 15 years. And what I find works best is that I'm able to get out of the way and the students can move directly to the material. And if I find I present my material well enough, they don't notice this being presented. I think that's all I have to say here and open up for questions. Thank you. APPLAUSE Questions? Yes. What you say and make it disappear, I will read it. Which you want to say. 
If you think of a good book when you pick it and read it, do you notice how it's set and how it's a type of track to your attention? Do you notice what people, how it's put together? Do you notice what typeface it uses? Do you notice the line length and line height? Do you notice that? Most effective design, kind of design where you look for, you don't notice that. And something like long text. Now certainly for brochures and ads, people very often track attention to it. But that's what Stereo is designed to say. It's designed to push something rather than letting you receive the information. And that's what I mean by making it a bit away from this. Yes. Yes. Do you also see people who pick stuff? No, all that from the talk from the... I think they're wonderful. But from the talk with the Microsoft person and what other people were saying, it looks like the wolf files invented fonts. Who files for your kind of stuff? The worse or... I mean, the reason why I like the SDG fonts, there are a whole lot of subtle things you can do, which you can think, for instance, I've done some with... using Galiart, which is a method of creating fonts. And there are ways in which some glyphs are combined, there are ways in which some relationships are created. In that font, that font is not... Yeah, when it's embedded. Now, there are certain, you know, glyphs and such you can do with Unicode. But there are some other things, some of these you can find if you went through, say, Galiart, my best example, that you can't really get through the wolf fonts. Now, the wolf fonts, I think, it's a wonderful breakthrough and it's terrific and I'm pretty pleased and I've been spending a lot of time working with them and seeing how well it can work and the different kind of things like when you upload and you kind of jumps in that kind of stuff and you feel a little disconcerting. But no, I think there's a lot of wonderful possibilities there, but I do think that SDG fonts will help you some more subtly, some more ways of working with letting the text flow that you can do with HCL5 and being able to have flow to text, there's two boxes on the page, that kind of thing you can do with SDG that as far as I know right now the wolf fonts can't reach in front of them. I'm not knocking that, I think it's a wonderful breakthrough. And part of also what I was going to say is that much of the design work I've been doing now is to try to admit all my HTML as well as the SDG so that I can use a single style sheet and have them work just as well on a computer, desktop computer, on handheld from a drive phone all the way up through all the other devices, which I think is a very important area for us to look at. I have a small one. Most of your advice seems quite subtle. And you're an amateur, I'm an amateur. If I look at your advice, I would probably get exactly what you want me to do. All you see here, we've got something so different and from cheap phone to... One of the things that I would say, projection from a computer is the worst possible way to look at that in front of you. We darken the room and completely set this thing up but out of bed. But one of the things that I was amazed with and delighted with when I got a drive phone, was the last time I ever read anything. With that high screen resolution, I was reading a test of the new Google Earth, which I formatted for all the devices. And it was terrific to place the thing. My question was more about color. Oh, yeah, because with the WT, I fully agree with you. 
But the color experience in such a wide area, the color experience, at least for me on the droid and on my bed and on the window switch, is pretty similar. Now, there are some differences in that. It is a very different thing. My sense is that on the whole, they aren't... It isn't that. You're part of our projection. Projection is a horror and the last time I was trying to read something on pattern the last time I talked to you, that's what you come up with. And the projection was so awful, nobody could see what I... Really couldn't see what I'd done by developing a new music pattern. So, yeah, it's a horror in terms of trying to present anything that has been done to see a problem. Yeah, I think we saw in most of the other presentations, it was just a vague idea of what that was going to look like on screen. We just washed that off and... It took off. And that's the problem. Okay. If there's no more question, I want to thank all the presenters in this session. Thank you.
One of the most exciting opportunities to explore, to me at least, is the interaction of color using SVG on the small screens of mobile devices. By that I mean exploring the ways color can be varied through hue, value and intensity, and the interactions between them, to make images which are interesting, surprising and satisfying to look at. As an example, here is an image which works with reds and blacks in a stack of rectangles scaled at 350px and 125px. Both the reds and the blacks (mostly very dark reds, from hex #900 to #100) become progressively darker and create a deepening color space. The reds (#f00 to #700) in this image are the same reds used in the three variations on the design; the apparent changes in them are due to the altered context around them. The first variation is the same design, but with greens (the complementary color of red, hex #0a0 to #010) in place of the deep reds for the darks. In it the contrast of colors from across the color wheel makes a livelier image, and, scaled to this size (225px x 225px), the lines are narrow enough for the optical mixture of the colors to be evident. The first of the next two examples uses colors that take a step away from the red to either side: a red-violet, #e09 to #601, and a yellow-orange, #f90 to #710. The last variation is a split complement (#09f to #02a and #089 to #031) from the opposite side of the color wheel. The possibilities, I hope these few examples make clear, are many, exciting, easy to do, scalable and lightweight. For my presentation I would extend these through at least four additional designs showing other ways to explore color and shape on a small scale.
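A rough sketch of the red/black stack the abstract describes follows. Only the two hex ranges (#f00 to #700 and #900 to #100) and the overall display size come from the text; the number of bands, their widths and the exact alternation are guesses.

    <svg xmlns="http://www.w3.org/2000/svg" width="350" height="350" viewBox="0 0 160 160">
      <!-- alternating bright-red and near-black-red bands,
           both series darkening toward the centre of the stack -->
      <rect width="160" height="160" fill="#f00"/>
      <rect x="10" y="10" width="140" height="140" fill="#900"/>
      <rect x="20" y="20" width="120" height="120" fill="#d00"/>
      <rect x="30" y="30" width="100" height="100" fill="#600"/>
      <rect x="40" y="40" width="80"  height="80"  fill="#b00"/>
      <rect x="50" y="50" width="60"  height="60"  fill="#300"/>
      <rect x="60" y="60" width="40"  height="40"  fill="#700"/>
      <rect x="70" y="70" width="20"  height="20"  fill="#100"/>
    </svg>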
10.5446/31195 (DOI)
So, I just want to say, my name is Rob Russell, I'm an engineer at Google and today I'm going to talk to you about efficient SPG. Hopefully, go through some coding practices that are beneficial and go along kind of the same page, get in the discussion about how, you know, the SPG is the best way to do this is to get in the discussion about how to make efficient SPG and see where we go. So, when I say efficient SPG, what is efficient? So efficient generally, I can vote maximum productivity, minimum least, minimum unnecessary, that sort of thing. Now, in the case of SPG, I'm willing in many cases to trade resources like memory or time up front in order to get better execution speed later on in the application when you need it. Obviously, memory efficiency matters too, but speed seems to be a concern with SPG applications, which are applications. So, that's what I'm going to focus on today. And when it comes to SPG, obviously, there's one of the one way to draw things, there's one of the one way to get a specific engine on the screen. You can use a raptor path or whatever else you need to use to build your graphics. And there can be different differences across platforms and as you know, choices to make. And so, my interest is in the speed theory application, and I'll look at the speed of one browser versus other browsers, which I think is pretty shiny for me in a way. The microform is working on this, which Microsoft and Google are both involved in, may help with measurements across browsers or the measurements of applications that are more accurate. But I'm not going to go into that. But we're focusing right now on how to get the best performance in your application or your image using practices that are available today in the release. So, there's two big buckets that I kind of threw things into here. Basically, I'm looking at building static images. So, images that are meant to be displayed in the same way as last year's in some web today, and also, or the alternative, being built in dynamic applications which are the ritual applications, in SCG. So, there are certain practical performance needs that you have when you build static images. And as SCG becomes more widely adopted for use in normal web pages, just your average web page, as Justin mentioned, in the special choice, Draven, we need to set user expectations appropriately, and users already have broad experience with brass images, and so there's an understanding of that. When you have a high-detail, high-resolution image, you're going to have a large brass image that you're going to have a certain expected file size, and that file size is going to have a very high development browser. Having a large brass image on the page, there's a certain expectation of what kind of performance you're going to have in the browser. It's going to be very useful. Now, we want, obviously, for users to build the access SCG images, the web pages have these basic new browsers, so it can help you to build a certain value, build on the user's understanding of what happens with brass images. And doing that, one concept that I find helpful is a modern resolution. This is something that I believe Wikipedia shows when you look at an SCG image and what Wikipedia. There's a resolution that the image is expected to be used as. There's a lower end and a higher end that you're expected to use that. This is the image. 
Now, of course, it's going to scale out infinitely, because it scales in infinitely, but typically when you use a piece of clipart, there's a size that it's going to show up on the screen. Keeping that notion in mind, I feel it helps to choose what details you can include, what details you can leave out, and leaving out details that are unnecessary over the range of resolutions that you expect is one way to get your image to display more readily to the lower processor, be personal to the render processor. It's also a choice that you have to choose in certain images. There's just too much detail where there could be infinite detail for display and fractal. There is an infinite detail there. You're going to talk to some point if you have a stack of images. So you're going to make that choice that's somewhat reasonable to make that part. So the other thing we don't want to do is use resources that are just before you can choose a benefit. So the user perceives a certain benefit from choosing what SCG on a page, and in many cases we can get SCG images to be much more space efficient than a register in the same place. So we want to make sure that we exercise a benefit and deliver the answer to any users who are creating these users. Now, of course, the difference with SCG being that we put this built with Markup and while we are not setting a certain markup with the velocity, those interest in precipitation is pretty much the same with the EXI. It's possible to reduce the size of the ID encoded. That sort of thing. But in general, right now what we have and what we have for the future is this Markup. And that's just fine. It can be demanding on processors to run a large system like this, but more characters simply have extra characters to save something. Doesn't necessarily have to be abandoned. It will impact your car's impact, for sure. For example, if you've got these two paths are same, you can represent the same path a lot with zeros. It feels like more precision, but once that's parsed and once it's in memory, then as far as the browser is concerned, those two paths are pretty much the same. Now, if you have to manipulate the state of Winstrip, of course, if you're manipulating large strings in JavaScript, then you're going to feel performance came there. So there's a difference in the distinction between what is something that you're manipulating or what is something that the attribute gets set and stays set. So getting into the source of the images that you're using and reusing is very important. So this is an example of some markup that I found in an image that I used from the browser. I wasn't going through looking for specific things, but I just kind of thought, okay, well, let's have a look through here so you can see how I can reduce my file size. And obviously, this is a bunch of repeated group elements which are empty. They serve no purpose to the end user. And in the application where this image is being used, where it's simply displaying the static image on the page, the browser has to parcel this text that it and has to create DOM elements for all these other groups. There's ways that the browser's going to make optimizations around this, but the fastest thing to do in there is to think it's just not even there. So if there's ways to find these sorts of empty elements, unused elements, unnecessary depths, elements that you know are going to be obscured on the page, it's good to get rid of those. 
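A minimal sketch of one way to strip the kind of empty-group cruft shown above from a document at runtime; the heuristic (remove attribute-less groups with no element children and no real text) is mine, not a standard tool, and a build-time cleaner would usually be preferable.

    // Remove <g> elements that carry no attributes and contain nothing
    // but whitespace; repeat so nested empty groups collapse away too.
    function isEmptyGroup(g) {
      if (g.attributes.length > 0) return false;                 // keep groups with ids, transforms, etc.
      for (const n of g.childNodes) {
        if (n.nodeType === 1) return false;                      // element child
        if (n.nodeType === 3 && n.data.trim() !== '') return false; // non-whitespace text
      }
      return true;
    }

    function removeEmptyGroups(svgRoot) {
      let removed;
      do {
        removed = 0;
        for (const g of Array.from(svgRoot.querySelectorAll('g'))) {
          if (isEmptyGroup(g)) { g.remove(); removed++; }
        }
      } while (removed > 0);
    }

    // usage: removeEmptyGroups(document.querySelector('svg'));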
And if you're doing something like offering pipeline, if you have a method of creating content that's automated, then check it kind of out with the reproducing and see what kinds of craft you're putting in there. Here's another example. Again, I just followed the language on the other video. Now this chart is scary in at least two ways. One's the data that it represents. See, it's got a scary thing. But if we look beyond what's on the page and we actually look at the text that's shown on the upper left there, the way that text is actually shown is a single path for every letter. Now there were times when there's not even support for text across browser platforms. Those days are hopefully behind us. So that's no longer, it's no longer necessary to turn your text into paths for display. There are other cases where something might say they need to have a very specific font and that font's not available or they need a very specific text effect, which they just don't want to be using the text effect that's available, and actually do for whatever reason. And so those those things are going to continue to happen. But when you just arbitrarily choose to explore those paths for standard fonts, they're already there for browser, you just heard yourself and you're keeping yourself out of search results for no good reason. It's accessible for no good reason. And in general, I would expect that, I'd expect that you're just playing text rendering and just going to be faster because that's not a text rendering code. Passly optimized for many of the paths that the browser matches as for G. So you can expect that in plain text rendering, you just go to play text rendering and you can expect that in the browser. One other note, if you have some experience or some success where you have to do this, make sure you include the description and the element of title and these provide alternatives. This is how practice is shown up in web pages for HTML when people use specific fonts and then they use a flash and an image will do off-screen text. So I expect those practices to emerge for special situations, but with standard, definitely stick to text. Another concern comes up when trying to draw a static image in SVG. We have the filters available in SVG on many platforms and they can produce some really beautiful light and effects. They can produce some really beautiful shadows and do some really nice things there. They're not always going to be slow. There are some in order to keep... With current communications, we do find some pretty performance impacts by trying to use filters. One of the things that I found is a filter resolution patch created to down the good read and reduce that slightly and you get a slight performance improvement for the speed of the lighting that's actually doing also specification of the rotation of the screws and the filters because of the type of processing they have to do. When specular lighting happens, before the lighting happens, you have to do generally have to do a blur and the Gaussian blur can be a significant portion of the time it takes to actually produce the render, the lighting effect. The example that could be copied up is the one from the specification and it uses a Gaussian blur and it simply feeds that back into the screen. 
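Here is a small illustrative filter along the lines of the specification example mentioned above, with a lowered filterRes to trade quality for speed. All the numbers are invented, and filterRes has since been deprecated in newer filter specifications, so treat this as a sketch of the SVG 1.1-era technique rather than current best practice.

    <svg xmlns="http://www.w3.org/2000/svg" width="160" height="100">
      <defs>
        <filter id="cheapLight" x="-20%" y="-20%" width="140%" height="140%"
                filterRes="100 60">
          <!-- the blur feeding the lighting is usually the expensive part -->
          <feGaussianBlur in="SourceAlpha" stdDeviation="2" result="blur"/>
          <feSpecularLighting in="blur" surfaceScale="3" specularConstant="0.8"
                              specularExponent="12" lighting-color="#ffffff"
                              result="spec">
            <fePointLight x="40" y="20" z="60"/>
          </feSpecularLighting>
          <!-- keep the highlight inside the shape, then lay it over the source -->
          <feComposite in="spec" in2="SourceAlpha" operator="in" result="specMasked"/>
          <feMerge>
            <feMergeNode in="SourceGraphic"/>
            <feMergeNode in="specMasked"/>
          </feMerge>
        </filter>
      </defs>
      <rect x="20" y="20" width="120" height="60" rx="10" fill="#2a6"
            filter="url(#cheapLight)"/>
    </svg>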
If you have a specific case where you're trying to use lighting for a button or for a user interface element, then it's quite possible that the world technically gets used in 3D animation where you kind of make that blur into a specific image so you can create a raster if you know it's going to be the same size all the time, create a raster and then reuse that with a fader perhaps. I've done a couple of experiments on that, I'm very far from that. But for your specific use case you can try and cut the cost of that Gaussian blur. But today what actually happens if you want to do lighting across the web, most of the time what we want is just something smooth and shiny, we want something smooth and shiny, then just throw in a white gradient or a gradient that feeds to the alpha blend values. Gradients can be faster than real lighting, you'll get that smooth shiny effect which is what you may want and you can do all kinds of tricks. This is a very simple example and it's not hard to do with my hand, you just set it to the maximum of why. You can move around the target of the light or imagine a light. And the same thing works if you want to shadow up to the side and do a shadow gradient. This happens all the time, a lot of times you see people using this specific effect and you're doing it real because you don't think about how they did that or the notice that's lit and so it's well done. In this case I've used transition to colors, it's also possible to do a more reusable gradient if you do transition through opacity and moving that transition through opacity to a gradient that you can reuse on different color backgrounds or whatever. And partial transparencies get less expensive thanks to hyperacceleration and thanks to just better optimization in the browser. Transparency affects the habit across HTML and SVG. So moving from static images onto dynamic applications which is what we hear a lot about it which is one of the great strengths of SVG is being able to build very rich user interfaces. We have a lot of the same concerns, I can use a lot of the same optimizations that we use in static images. But we also have additional concerns with dynamic applications. We want to have smooth interactive performance. When the user is actually using the application, you might have a drag that happens, you want the smooth animations to be dragged, other times you should just sit inside and not do anything. You want similar behavior across different platforms on different random engines. You want your application to be accessible to all users. And one of the key here is that we want to be sure that we don't for the performance of other applications in the same browser. This is an important point to hit on because it's easy to do, sorry, I was going to say easy to do but easy is not sure. We can do demonstrations where we show high speed animation, high frame rates, we can do some really cool stuff. When we're doing demonstrations, we're doing demos, a lot of times we're not so concerned about CPU utilization, we're not so concerned about whether you can run something else at the same time as this and we're not concerned about the feedback we're going to create. So as long as it works, we're going to be doing demonstrations. Now, when you have multiple applications running and you're trying to build an application that the user doesn't think of all the time, it's just something that once a while you go to this application, you can run something later on in a couple of days and come back to it. 
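Before the talk moves on to dynamic applications, here is the kind of gradient "lighting" just described: a white-to-transparent ramp laid over a flat button color. Because the ramp only varies opacity, it can be reused over any background color; the specific stops and shapes are my own placeholders.

    <svg xmlns="http://www.w3.org/2000/svg" width="160" height="60">
      <defs>
        <linearGradient id="gloss" x1="0" y1="0" x2="0" y2="1">
          <stop offset="0"   stop-color="#fff" stop-opacity="0.8"/>
          <stop offset="0.5" stop-color="#fff" stop-opacity="0.1"/>
          <stop offset="1"   stop-color="#fff" stop-opacity="0"/>
        </linearGradient>
      </defs>
      <!-- flat button colour, then the "lighting" painted on top as a gradient -->
      <rect x="5" y="5" width="150" height="50" rx="10" fill="#36c"/>
      <rect x="5" y="5" width="150" height="50" rx="10" fill="url(#gloss)"/>
    </svg>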
You want to be sure that you're not running, for example, running an animation where it doesn't get too long. You want to be sure that your resources are available and you need them and freely don't. So in order to achieve this, we need to measure performance. So, measuring performance is both easier and harder now than it ever has been before. It's harder because of the optimization that are having browsers to make some light effects in animations and really impressive means and really impressively complex documents, but it also gets easier because of fantastic tools like the pictures I have here are from Firebug and from Web Inspector, but there's also great tools like Drive and Fly and in the Explorer has some great developers that have them. So use the tools in browsers that you're working on, whichever platform you're on, definitely learn how to profile. And these tools are great for ad hoc, very quick tests. If you see something funny happening, for example, in Web Inspector, you can simply open up Web Inspector, click on the record button and it will, in this case, it's showing that it's recorded how long a certain step state you can have or see what's going on there. And you can find hotspots in the application very quickly simply by repeating those actions and watching what happens. Now that's not a very rigorous test. This is for ad hoc, quick tests, and finding places that are worth investigating. And for this, I kind of built up a little speed measurement harness which was inspired by something that John's presented a previous test with GeoOpen. My method is very simple because it run the application many times in a test harness and then count how long it takes and divide by the number of times it takes to do it. So I have to do average execution times. So say you've got a rendering that happens and every time you're running load, you call it tick method and then at the end of the time, you divide up time. But the way you do it, you put that back to Web Circle, run it with the channel and it's pretty straightforward. The one thing, the one warning about running the test harness of that is if you want to have it tender, then there might be things that are happening on the screen that aren't what you want. So it's something that needs refinement and hopefully over time we'll get better at that. One of the things I came across though is no surprise, there is a cost to a live DOM. This is a thing we've heard before. The live collections in the DOM make for some very easy to access scripting elements so you can kind of just touch something and see your reaction anyway. But that comes at a very high cost. There's recalculations that happen sometimes, maybe unexpected. And there's a few approaches that have evolved for this with time with HTML, web applications, where you can cache, if you have a live collection, then you can cache the length which just means you will get the length of the collection, sort off the JavaScript variable and then you can just do that later as long as you're not modifying the DOM and it does live attractions. There's also query-selectrol method. query-selectrol method seems relatively recent that it's coming to kind of the zone where it's acceptable to use because it's implemented in a wide-ended browser, and query-selectrol returns a collection which is not live. So that's very important because it means that when you use that collection, then you're not going to incur any costs back in DOM now in that way. 
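A small sketch of the two points just made about collections; the class name and the loop bodies are illustrative, not the speaker's code.

    // Live collection: cache the length instead of re-reading it each pass.
    const nodes = document.getElementsByClassName('marker');
    for (let i = 0, len = nodes.length; i < len; i++) {
      // read-only work here is cheap; modifying the DOM inside this loop
      // would feed back into the live collection and get expensive
    }

    // Non-live alternative: querySelectorAll returns a static NodeList,
    // so later DOM changes do not feed back into it.
    const snapshot = document.querySelectorAll('.marker');
    snapshot.forEach(el => el.setAttribute('fill', '#08c'));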
That does also mean that you have to manually push things back into the DOM when you're ready. It's also good to note that there are certain SVG methods specifically called out in the specification: they return a node list, but that node list is specifically called out as not live. A couple of examples are getEnclosureList and getIntersectionList, and I think there are a couple of others as well. So it's good to know whether the collection you're working with is live or not. So: don't touch the DOM. It's expensive. If you've got something that's UI-intensive, then you want to make sure you understand when you're doing that interaction. Pre-compute values as much as you can, and use suspendRedraw, which I'll get to in a little more detail here. So here is an example where getElementsByClassName gets you back a live collection. The example simply indexes into it, and just indexing into it doesn't cause pain, but it does once you go back and modify elements through the reference you got from getElementsByClassName. So it can be a tricky issue to understand. Doing it once is okay, but using that live collection over and over inside some kind of loop is going to cause you pain. Now that said, don't over-optimize, and really don't sweat this too much: go build something and then find out what happens. So when you do touch the DOM, because I said don't touch the DOM but I'm sure you're going to need to, always, always make sure that you create your element, set the attributes on it, and then attach it to the DOM, never the other way around. If you can't see what the difference is: in the second case the element has already been created and attached with some default values, and now it's being adjusted in place. So set things up before you attach. A document fragment can help with this. This is something I'm surprised gets so little usage. It's simple to create a document fragment, and you can treat it just like a group element: you attach things to it, you build up this collection, and then you attach the whole fragment to the DOM. You can see here the kind of verbosity that people complain about in the DOM functions, but a lot of the time this can be abstracted away into some function that you call, so you don't necessarily have to see it all the time. But by building up the fragment like this and then attaching it to the DOM, you're working with the APIs that we have, the APIs that were built for this. The other one I want to add here is suspendRedraw. suspendRedraw, if you're not familiar with it, is a function which will hold off on rendering the document. You still incur the cost of manipulating the DOM, but you won't be rendering the output during that time. So, for example, you could call suspendRedraw with a limit of 100 milliseconds, do a bunch of manipulation, then call unsuspendRedraw with the handle you got earlier, and your page gets updated all at once, with all that rendering happening just once.
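A minimal sketch of the two patterns just described, create-then-attach batched through a document fragment, and suspendRedraw around a run of attribute changes; all names and values are illustrative.

    const SVG_NS = 'http://www.w3.org/2000/svg';
    const svg = document.querySelector('svg');

    // 1) Create, set attributes, then attach, batched through a fragment.
    const frag = document.createDocumentFragment();
    for (let i = 0; i < 100; i++) {
      const c = document.createElementNS(SVG_NS, 'circle');
      c.setAttribute('cx', 10 + i * 5);   // configure while still off-document
      c.setAttribute('cy', 50);
      c.setAttribute('r', 4);
      c.setAttribute('fill', '#08c');
      frag.appendChild(c);
    }
    svg.appendChild(frag);                 // one attach for the whole batch

    // 2) suspendRedraw: hold rendering while many attributes change.
    const handle = svg.suspendRedraw(100); // maximum hold-off in milliseconds
    for (const c of svg.querySelectorAll('circle')) {
      c.setAttribute('cy', 80);            // any number of DOM changes
    }
    svg.unsuspendRedraw(handle);           // the repaint happens once, here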
Now, sometimes you make an optimization like that and it seems to have no effect, and you want to dig in and figure out what's going on. Well, with open-source rendering engines you can sometimes do that detailed analysis by reading the code and figuring out exactly what's happening. This is a line of code from WebKit, and today, in WebKit, suspendRedraw is essentially a no-op. But I mean, it's open-source code, so we can see that for ourselves, and hopefully one day that optimization will be in there; maybe somebody here is inspired and goes and makes the change. But still, make the call to unsuspendRedraw. The function is there, and the reason it's there is so that you can call it and use the same code across platforms, so it's definitely still worth using. Now, when you do call unsuspendRedraw, you update your page, and often the reason you're updating your page is that you're running an animation. And whenever we're talking about animation in SVG, we always look at SMIL animation versus JavaScript animation; it always has to be phrased that way, SMIL animation versus JavaScript animation. And of course, SMIL animation could potentially allow the browser to predict the future of the animation. What I mean here is hypothetical; I don't know whether any browser actually does this right now. You could make a calculation based on the values that are assigned to an animate element, so that when a user switches away from your application and switches back you don't have to keep running that animation in the background; you can simply calculate the current value on demand. So you have a kind of continuous, calculated animation. Now, SMIL animation is often held up as a good example of declarative markup, and it's great that it's declarative, but just because it's marked up doesn't mean it happens instantaneously or for free. The value calculation still has to happen, the rendering still has to happen, all the same work has to happen as for your JavaScript animation, except touching the DOM: you don't have to call the API to do the modification, but the trade-off is that you give up some of that fine-grained control. It's interesting, though; actually, let me get back to that in one second, because there is one other thing I want to mention. In Mozilla, at least, they are looking at optimizations that can be made for JavaScript-based animation. The mozRequestAnimationFrame method is meant to let you hold off animation work until the browser is ready to draw a frame. So there's an experiment happening there, and it's good to know about; maybe we'll see improved performance in JavaScript animation as well. But the way I like to look at it myself is SMIL animation and JavaScript animation together, because, as I said, SMIL is part of the DOM, it's part of the document, it's declarative markup, and we can modify the DOM from JavaScript. That means you can grab the animate element from the document, and, as in this example, which probably shows bad values, you can calculate the values that you're going to use for the animation and then make that animation happen. A lot of the time SMIL animation is shown off with long-running continuous animation, which is fine. But it's also true that if you had a one-second animation whose positions you did not know at creation time, you could calculate those values when the user performs an action and then assign them to the animate element. The advantage of this would be that the animation itself runs with whatever optimizations are available for SMIL, and you don't have to figure out your animation values at every tick the way you would with JavaScript.
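A hedged sketch of driving a short SMIL animation from script along those lines: compute the values when the user acts, write them onto an animate element, and start it with beginElement(). The ids, attributes and the use of beginElement() are my assumptions, not the speaker's exact code.

    // Assumed markup:
    // <circle id="ball" cx="20" cy="60" r="10" fill="#c33">
    //   <animate id="ballAnim" attributeName="cx" dur="1s" fill="freeze"/>
    // </circle>
    function slideTo(targetX) {
      const ball = document.getElementById('ball');
      const anim = document.getElementById('ballAnim');
      anim.setAttribute('from', ball.getAttribute('cx')); // computed at action time
      anim.setAttribute('to', targetX);
      anim.beginElement();           // the browser then handles every frame
    }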
So this is just kind of a hypothetical thing; it's just something to remember, that if you are doing SMIL animation or JavaScript animation, there are ways to make the two work together, and it is possible. And there are also events fired for detecting when an animation starts or stops, so you can intermix the two much more than we've seen in the past. One of the minor points I wanted to hit on was transformations. When we animate in the DOM, very often what we're animating is the transform, and so you have specific transforms in your document that you modify over time. But if you have a whole bunch of transforms on a whole bunch of elements, there may be a way to bake those values into plain absolute coordinates, which are easier for the browser to digest. What I mean when I say bake in is: run through the calculation, convert everything into absolute coordinates, and produce something that's easier for the browser to understand and to render quickly. This is something that's more useful if you're building something that has a toolchain. So if you have a toolchain that produces SVG resources, then consider this as one of the things you think about to improve efficiency. Some of the things you would look at: there are methods for getting values in specific units, you can get the coordinate transformation matrix, and you can get the specific transform that's applied to an element. So you could actually, in JavaScript, run through all the initial positions, flatten the transforms that would otherwise be applied at runtime, get back path data that is already in final coordinates, and use that as the input for your application. So there's one more concept that I wanted to get to here, and this is something that I've actually seen several times during SVG Open this year, and that's level of detail. Level of detail, I thought I would have to explain, but it sounds like everybody already has a concept of this. I come at it from 3D animation, and that's where my dinosaur example comes from: basically what I'm showing here is different 3D meshes, and in 3D animation one of the techniques that's used is this level-of-detail concept, where if you have an object that's close to you, you have a 3D mesh that has very many vertices in it. When the object is close to the camera, all those vertices are needed and you need that high level of detail. When the object is far away, if you use the exact same mesh, then you've got a whole bunch of vertices that are contributing nothing to the final image shown on the screen. So you have a whole lot of detail that's thrown away, a whole lot of data that you really don't need. So I did a little experiment using this level-of-detail idea. The concept, like I said, is using a reduced-complexity path for objects that are at a smaller scale, not actually far from the viewer, but at a smaller scale. And when that object comes up closer, or goes to a larger scale, then you use a higher-complexity path to get the details that you need. So here's a small example that I could show; it's not a big deal, it's just some things bouncing around. This is a fairly complex map I grabbed from Wikipedia.
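Going back to the transform-baking point for a moment, before the level-of-detail experiment continues, here is a rough sketch of folding an element's own transform into its point data and then removing the transform attribute. It is shown for a polyline to keep it short, and it is an illustration of the idea, not the speaker's tool.

    function bakeTransform(poly) {
      const consolidated = poly.transform.baseVal.consolidate(); // single matrix, or null
      if (!consolidated) return;                                 // nothing to bake
      const m = consolidated.matrix;
      const svg = poly.ownerSVGElement;
      const baked = [];
      for (let i = 0; i < poly.points.numberOfItems; i++) {
        const p = poly.points.getItem(i);
        const pt = svg.createSVGPoint();
        pt.x = p.x; pt.y = p.y;
        const q = pt.matrixTransform(m);      // coordinates in the parent's space
        baked.push(q.x + ',' + q.y);
      }
      poly.setAttribute('points', baked.join(' '));
      poly.removeAttribute('transform');      // geometry is now pre-flattened
    }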
I have no idea if it's the file here or just, and I just wanted to animate it and see if it can kind of have a high-fives. But here it's fixed as a frame, maybe 23 milliseconds or something. It's just a decent handle. And the idea was, I got this image that takes a little while just to run around the screen. Now I want something that complex and I want to use it in animation, and we should definitely be able to do this in SVG. One thing to note here, it might look like the smaller ones are moving faster, that's because I didn't correct the velocity. So I'm making the velocity type of thing, but I've got some objects that look fixed, that are on. And you can't see it here while I'm going to be careful, but this would be the test of the technique. When it's a larger scale, I'm actually using a high-resolution path, and when it's a small scale, like it's a low-resolution path, this is something that was brought up a couple times in the demonstration. Specifically, the media queries sound like a great way to deal with this in other cases. And when I... I don't think that one is better. Now, when I start on this, I grab the image that's closest to what I grabbed, this is one down here, that 85 kilobytes. I'm using the file sizes to have a proxy for path complexity because in this file, there are two paths which dominate the size of the file. And you can see down the bottom, you've got this combinage path that has a lot of to-go-lunch-to-end, it's a lot of detail. And up here, there are 22 kilobytes. I remove the lack of to-go-lunch-to-ends, and there's a lot of curvy, uglyness. So, let me get back on this. Let's skip this thing. And basically what I did, I started out with an image from this initial work of media. I pulled out the things that were obviously not necessary inside, and the smalls that you get down to the 85 kilobytes after the voucher scouting and everything else. Then, basically what I did was I loaded up in inkspeak, select path, hit reduce path to find the simplified path, which will allow a couple of months, and once for each step here. So, there's a path for the L1B open, there's a path for the other contents on the global and I did that for each of them. And you can see there's a point, there's kind of initial returns. At first we get a step down by half the size, and then we step down maybe two-thirds the size, and then at the end there where it gets down is where we go to the long lines because you can't see because it's a long line. So, there's, depending on the scale range and the normal resolution of the images that you're using, you're going to have a different number of levels that you'll need. Now, there's many ways to apply this but I try to field them. One being just to visit the attributes of the path element, another would be if you set up the images as desks, then you can use the use element and you can swap out the href value. There's a third method but if you bear with me I'd like to look it up here really quick. See if I can double or double the value. See if I can test my network connection. Okay, I'm going to do that after I pop in a second. So, let's see. Insight search for the wordalpattersy because that's what I want. Let's see if there's the sg file. I was looking for, thank you Doug. And in case you missed it there, what I just did was search for sg files and Google then that works out. So, what Doug says here is that the cheapest is displaying none. So, into the levels of detail, make one way visible. 
And, yeah, so in my experience at a level of detail, I found that using multiple levels did provide a significant time saving to the different room. It did help a lot. The method that I used to switch paths didn't make that much of a difference but that could be because I hadn't hit an array one yet. And choosing the right scale to rate points in time to switch levels makes a huge difference in that specific test application. Now, so I think there's a lot to be said for the most part. But, yeah, so the themes I tried to have today, making your constants, anything that's constant, run it through the document, keep the same if you're using something that you're using often. Pre-calculate as much as you can to reduce startup time. So calculate stuff out, build up document fragments and attach them. Don't touch the dog. And measure the performance curve because this is quite a hot spot. Hopefully not just by watching the ECP meter but as development tools continue to improve then we'll find better ad-hoc looks for these pieces or build up frameworks. There are several things I didn't hit on today. Probably I didn't look at using XMLHTTP requests efficiently. I didn't look at network delays. These are critical points, but I think that it's roughly the same as it is for HGL documents. But in general that's the problem. And if I'm allowed to tell the web already, I should refer to using techniques. I did look into masking clipping which has significantly performance impacts and I wish I had more time to do it. And I didn't look too much into text display render but like I said I believe the text, the reduced text, text rendering facilities that are needed to the browser, needed to the platform as much as possible, then you probably know the best performance that just seems intuitively right. And then there's a kind of a learning theme now where people ask for ways to make SVT so I would like to see SVT become more efficient. The things that I'd like to see that aren't necessarily, aren't usually necessary to do the specifications so much as just further implementation of the specification as it is. Just keep refining things and optimizing. So, smile and recommendation and refinement, filter performance improvements. And the other one being the other one that I don't know if this is something that belongs to specification or something that belongs to the framework but if we had better ways to approach multiple representations of a single element, so high detail, low detail, one for scale 1 to 10, one for scale 10 to 100, that would be something that would be improved by the time. So with that I guess I'm going to wrap up and thank you for your time and I hope some of this helps and if there's any spots that aren't working, they're really wrong then feel free to call me on it. I tried to verify things if I could but it's obviously based on other research that's already out there. I hope to dig into it deeper over time and get to see how it should go. So, thanks for your time. So, I think we have some minutes left if there's any questions. Simplifying past data in maps has been known to cartographers as generalization and there's a very efficient, I mean you did it in Inkscape but Inkscape isn't very well suited to that. 
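As a sketch of the display-toggling approach just recommended: both detail levels live pre-parsed in the document, and only one is visible at a time. The ids, markup and threshold below are illustrative.

    // Assumed markup:
    // <g id="country-hi">...full-detail paths...</g>
    // <g id="country-lo" display="none">...simplified paths...</g>
    function setDetail(scale) {
      const hi = document.getElementById('country-hi');
      const lo = document.getElementById('country-lo');
      const useHi = scale > 0.5;                       // switch point is a guess
      hi.setAttribute('display', useHi ? 'inline' : 'none');
      lo.setAttribute('display', useHi ? 'none' : 'inline');
    }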
There's an online tool that's unfortunately in flash, it's called Map Shaper, it's done by the graphic designer from the New York Times and there you can load the standard shapefile data and have a slider where you can generalize and simplify past data from maps and that is very useful to speed up map display. I'm going to take note of that too, thanks. Illustrator also has a simplified path. Pardon? Illustrator. Illustrator as well. Illustrator has a simplified path that uses the same algorithm. But it's not about just simplifying past because in maps sometimes if you use Illustrator or Inkscape and you simplify then the borders don't align anymore. That's the problem and if you have a cartography's generalization tool then it's always the case that the borders after generalization still align. Right, there's a presentation about the problem of border alignment in geography data and that map data is something that I've had a lot of later on. I'm excited there may be other domain specific concerns for certain images and different algorithms that would make more sense. I wanted to say a great presentation. I think most of what you said actually applied to IE as well. A couple things. You mentioned something about pre-gap data in Transforce. It's quite true but so far we're skipping in line that the browser can apply to Transforce so that's why students can't change the game. At least in the internet exploration application you can use a lot of tricks to observe the interface. It's really, really fast. So it might sometimes give, you know, my connection with the app. Yeah, I think I might have come across it incorrectly there. I didn't mean to say that the application should perform a transform instead of having the browser to be used as much as possible. Rely on the code that's written in cc++ instead of trying to read it and reinvent it. What I was trying to point out there was that if you have a workflow whereby you have input documents in a scene and in order to optimize those next few documents you may have produced some other tool then your application will run through in a one-time fashion. So one time you run through all of the calculations before the transform is getting the bodies back and then feed that back into your system. So it's the same path back that you've been talking about as you're getting put into the next time. Yeah, there's no question that if you can make a head-to-head mission once it's done. In terms of the level of detail, I think there are places where within a spec it's currently, for example, with epic turbulence you can set a number of octaves, which is a controller over the amount of detail. There's a number of octaves increases, the amount of time the render increases. But there are other situations in which it might be nice for the user to be able to say, I only care about so much detail or to say good enough as an attribute value. There is a good enough attribute value. It's the level right here. There's a render and parameters attribute and there's an optimized speed value. So that would approach the good enough that you're looking for. I was going to mention that as well as things I didn't look into. I did actually try it really quickly. I just expect that given the maturity of the limitations, I assume that most browsers don't have multiple paths, as was expected in the end, whether it's specification or not. 
So it's possible that that could be something that browser vendors could look into as, if there's a divergence between dynamic applications and stack images, so it's not something that you want high quality. Like you said, in other cases, you have a good enough value. There are other things you can do for the good enough where you like to reduce filter resolution. You can reduce, for example, an Gaussian word number, standard deviation. That makes a huge difference in execution time, but if you reduce number, standard deviations, you're sacrificing quality. And not actually the quality in a little way, but sometimes it's in a very, very big, very noticeable way. In some cases where there's a lot of calculation going on, I think one could pre-calculate when the image becomes asymptotically similar to a previous incarnation. And that would be kind of cool. That would be kind of cool if we had to come down to the project. Interesting. I was just talking about the image that you had to build the code for the views. And something that I hadn't tried that I thought maybe you might have hit on was, was the performance of the views after you started animating, or the trade-off of cloning them into the DOM up front and taking that bit, because I would think that it might be a little patient to sit there and I think you would have to play with it myself. I did look. So the big thing that I noticed from the data was that the difference in choosing all this is in the detail was there. So I went down that path to investigate. I really do want to dig more into that, I haven't got a clue yet. One of the things that you can usually use is in terms of certain things you can't control. Let's see what you call that. So, yeah, I don't have a solid answer. It is definitely different across from the platform. I tried it, yeah, I got my demo, well, yeah, almost all my instructions were in the I9 test drive, and the rendering is quick, so there's a big difference. You know, the instruction was dominant, which is, I think, if they're expensive versus the leverage. Are your slides going to be available online so we can point to them? I hope so. It doesn't just happen, but actually, it's the things that box are alive, right? No, Rob, I'd just like to echo, can't put a great presentation. Well, I guess you got two comments. One on the last one about the hiding disability. I think what Doug hit on is probably a great way to go. If you're on both in line and you use the disability attribute, or the CSS attribute, you can do it with a change selector, you're not even touching the DOM. And we would pre-parse everything up, so it's actually loaded, it's taking memory, it's all loaded, but you're just going to flip its disability. I expected that to be a beautiful one. But you didn't see the big differences. Yeah, I kind of, I think it gets pretty solid tracks on, I would say, so I actually got a spotter. Well, it feels intuitively a great idea. It doesn't, that's the one reason I did it. And you're comment about Texas way, that's also super. It also helps your searchability and indexability, and it's great that you're indexing SVG. Another first Google thing. I was pretty excited with that, yeah. I guess one last comment, your workflow comment, I think, is great too, in many levels. You talked about cleaning up the cruft, flattening the transforms, if possible, if you're not changing those dynamically. 
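A minimal sketch of the show/hide approach discussed in this exchange: every detail level is parsed and loaded once, and a single class on the root element decides which group is displayed, so no nodes are ever added or removed. The group IDs, class name, CSS rules and zoom threshold here are illustrative, not taken from the talk.

// The document is assumed to contain two pre-parsed groups,
//   <g id="lod-low">...</g> and <g id="lod-high">...</g>,
// plus stylesheet rules along the lines of:
//   #lod-high               { display: none; }
//   .detail-high #lod-high  { display: inline; }
//   .detail-high #lod-low   { display: none; }
function setDetail(svgRoot, high) {
  // one attribute change on the root, no structural DOM work
  svgRoot.setAttribute('class', high ? 'detail-high' : '');
}

function onZoomChanged(svgRoot, scale) {
  setDetail(svgRoot, scale > 10);   // the threshold value is arbitrary
}

The same pattern covers the level-of-detail case from the talk: one pre-built group per scale range, and a class flip when the zoom crosses a threshold.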
And all that's great is when we look at stuff, the slowest SVG is the stuff that's directly exported or saved, whether it's Vizio, Illustrator, Rxcape, because all those products put in stuff so they can re-edit, re-open. Right, so there's always RAM tripping data in there, and there's always metadata in there. And it's a tough-out strike when you're asking about metadata, because you want that, like license integration, you want it to be in there, but then there is a lot of cruft that's just totally unnecessary, just hard to find. And grouping that was done when people created groups or layers in those applications, which actually are never used in the final rendering, you know, so flattening the whole idea, for a static image to be a huge win. Yeah, so Excape has a vacuum desk for that, and then Jeff, Jeff from the money version scour, I think, for, so you can actually save that scour SVG for Excape. I should have brought that up, and do that for us, because when you're using Excape, save that scour SVG, or if you use scour independently, you just go through that and you get a small amount of file. And then that, I think, you see kind of a script for coming up with lots of them, and that's available on the picture. And I want to emphasize one comment you made, which was, I'm not swearing too much, just to implement it. Yes. Yes. Absolutely. The productivity of the performance in the browser is kind of going steady down as the complexity is increasing. In particular, it's really hard to predict, in case you're going to explore exactly how much we're going to relay out, and re-display it for a given change. We have algorithms for those that are pretty complex, and predicting in advance what exactly will happen, and whether it's going to be optimal is difficult. So try it and see what happens. You have to experiment, and you have to test changes, and make sure you're testing the changes you think you're on. One of the things, like for an Explorer's case, and with Firefox on Windows, you've got to direct to the acceleration, and do as well. One of the things that I expect to not be bad, sort of, with SVG is methods that try to do things that have to, say, pull a texture back, or read from GPU. I know that's traditionally unexpected operation, but that would be something that's really unexpected for web developers. Yes. Okay, so we have to move on to the next topic, so let's thank our speaker again.
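As a small illustration of the "pre-calculate, build up document fragments and attach them, don't touch the DOM" themes summarized at the end of the talk, the sketch below assembles a batch of SVG elements off-document and attaches them with a single call; the element type, counts and layout are invented for the example.

var SVG_NS = 'http://www.w3.org/2000/svg';

// Build n circles inside a detached DocumentFragment, then attach the
// whole batch with one appendChild call, so the live document is only
// touched once instead of once per element.
function appendCircles(svgRoot, n) {
  var frag = document.createDocumentFragment();
  for (var i = 0; i < n; i++) {
    var c = document.createElementNS(SVG_NS, 'circle');
    c.setAttribute('cx', (i % 50) * 10);
    c.setAttribute('cy', Math.floor(i / 50) * 10);
    c.setAttribute('r', 4);
    frag.appendChild(c);
  }
  svgRoot.appendChild(frag);   // the single live-DOM operation
}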
As more developers have adopted SVG, questions have shifted from its suitability as a format to more subtle questions about the best way to build with SVG in applications where it shines. Best practices are evolving for building applications, for compatibility across implementations in different user agents, and for integrating SVG components as moving parts in larger HTML5 applications. The time has come to dive more deeply into efficient SVG applications. Google has been a part of building the current generation of broadly adopted SVG applications: Google Maps and Google Docs rely on SVG for interactivity and a robust document format, and Google's contributions to WebKit are helping to build one of the best open-source implementations for rendering SVG. However, it is the community of developers, like the SVG Open community, who are building the next generation of domain-specific tools using SVG and the rest of the open web stack. Rob will compare real and perceived performant SVG coding practices, helping along the conversation about best practices for coding with dynamic SVG.
10.5446/31201 (DOI)
Yeah, before IE 9 came out, we worried about how quickly you could generate a random polygon. Now we wouldn't have to worry so our talk has become irrelevant. But now there's still a theoretical interest in such things. And I'm going to talk a little bit about why on Earth you'd want to... First of all, why random polygons? Here we have some random polygons. One of the things that's nice about random polygons is that they sort of have meaning. People look at random polygons and they interpret them as having significance. Sometimes that significance starts to take on mental meaning. Such as that. Or that. Sometimes people see random polygons on the surface of the Earth and they think it's evidence of the existence of extraterrestrials. This particular polygon drawn on the surface of the Earth by the Nazca in South America is obvious proof that there had to be somebody in outer space doing something. Here we have another random polygon that starts to take on significance as you look at it. In fact, the pyramid in the center of this thing proves the existence of extraterrestrials. There you see what I mean. Here we have something drawn on the surface of Mars. Familiar thing from one of the Voyager missions. Which we see is in fact a more familiar thing. Mac Jagger. So that's why random polygons are interesting to humans. So why would it be difficult to generate random polygons is the sort of question. Well, let's first look at... Here we have some different random polygons. The size of the end-gone is the crucial issue here. And prior to our recent work, the time to generate a random polygon grew exponentially with N. And that was sort of a problem. We also have to consider what is a random polygon. This is not considered to be a random polygon because it has too many edges. Infinitely many edges. It's too many to be an end-gone. Humans like to work with finite N. That one's too complex because it's... Bonjour. Anyhow, a simple closed curve is what we're meaning by a polygon. Hello. This one's too holy and this one's simply too Picasso to be a polygon. But the Picasso polygons are sort of interesting at times. And one of the nice things about random polygons is that if they're smooth, they tend to take on significance. These are evocative somehow emotionally. Okay. Internet Explorer doesn't like me to run local JavaScript. So there's several different ways that you could think about generating a random polygon. Here what we're going to do is just take a circle and generate three points on the surface of the circle. We can then connect those things together in clockwise order. Okay, so here we've got a random polygon. We chose three random points on the edge of a circle. And we can generate more and more of these things. You'll notice that mathematicians love this kind of thing because you can say what is the expected area of a triangle generated thusly? And what's the probability, for example, that the midpoint will be contained in the interior of that polygon? And clearly as the number of edges in the polygon increases, the probability of the centroid being located inside the thing increases, and clearly the area will tend to asymptote toward the area of the circle. So mathematicians have looked at random polygons of that sort for years. It's related to Buffon's needle problem. I saw a picture or a statue of Buffon in the park, in the jardin de plant, the same Buffon who invented the needle problem, which I thought was pretty cool. So I took a picture of it, but I didn't bring that for you. 
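The circle construction described above is easy to reproduce: pick n random angles, sort them, place the points on the circle and join them in order, which always yields a convex polygon. A sketch; the SVG plumbing and styling here are illustrative rather than taken from Dawkins' or the speakers' code.

var SVG_NS = 'http://www.w3.org/2000/svg';

// n random points on a circle of radius r centred at (cx, cy),
// connected in angular order; the result is always convex.
function randomCirclePolygon(n, cx, cy, r) {
  var angles = [];
  for (var i = 0; i < n; i++) angles.push(Math.random() * 2 * Math.PI);
  angles.sort(function (a, b) { return a - b; });

  var pts = angles.map(function (t) {
    return (cx + r * Math.cos(t)) + ',' + (cy + r * Math.sin(t));
  });

  var poly = document.createElementNS(SVG_NS, 'polygon');
  poly.setAttribute('points', pts.join(' '));
  poly.setAttribute('fill', 'none');
  poly.setAttribute('stroke', 'black');
  return poly;
}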
Anyhow, so that's one way to generate random polygons, but you'll notice that that's not indicative of the class of all polygons. All polygons generated according to this method will be convex. No concavities are allowed through this technique. So we could say, okay, let's come up with a more general way of doing that. And what this technique does, in fact, this was used in psychological literature in the 1970s or 60s. The idea is take a centroid, generate a bunch of points, connect them to the centroid, and then sort the angles. So we can simply sort those angles. Let's generate a few more. There we have a five-gon, and what we've done is just take each of these angles and sort of them clockwise, and then connected the things together in that order. And so we will naturally come up with a random polygon. The problem here is that though we can come up with polygons that have concavities in them, this is still not indicative or representative of the class of all polygons. They're polygons that cannot be generated via this technique. This would clearly be fast. Order of n squared, order of n log n, sorting n objects. But in the worst case, the problem becomes a little bit more complicated. Suppose we wanted... I need to do this. Okay, there I've got how many? Eight, seven random points drawn in the plane. What prevents that from being a polygon? Well, it's not simple. It crosses itself. How do we prevent it from crossing itself? Well, we could permute the order of visitation. Here we've got seven points. I'm going to take the same seven points. Notice how the locus of the points is not changing. I'm just changing the order in which they're connected. There are only 5,040 permutations of seven. So eventually we could do this, and we could write an algorithm fairly easily that detects whether or not the lines cross, and thereby we could just run... If we ran it 5,000 times, we could expect eventually to find one that is a polygon. The problem is if you want a 200-sided n-gon, 200 factorial becomes a big number. There are only 10 to the 73rd atoms in the universe. And therefore, if you started putting computers on each atom in the universe and run the problem, you're still going to run out of time. So the problem becomes sort of like the traveling salesman problem, which is NP-complete of how to do this. There are a variety of other techniques that people have considered. For example, what we could do... Well, it turns out that not all shapes... Well, that's not a good link. Let's go to here. How do I internet? It's just slow. Well, the internet is too slow. There we go. Let's try that one. There. There are certain shapes which cannot be viewed from a single point. In other words, remember the algorithm we had before that I said was not representative of the class of all polygons. There are certain polygons that require two lamps. It's called the museum lighting problem. You take a polygon and ask the question, how many lamps do you have to put on the interior of a museum so that all the exterior walls are illuminated by interior lamps? A graph there named Klatal proved that in some cases you require n over 3 lamps on the interior of a polygon to light all of the surfaces on the outside. What we could do is, if we wanted to generate an n-gon, we could take n over 3 points, put lamps at each of those, shoot arrows from each of those lamps, find out where the arrows land, and then somehow stitch together the resulting polygons. 
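The centroid-and-sorted-angles method described a moment ago is also only a few lines: scatter n points, sort them by their angle around the centroid, and connect them in that order. Because no line through the centroid can have all the points strictly on one side of it, the result is always a simple, star-shaped polygon, which is exactly why it cannot produce every possible shape. A sketch with invented names:

// n random points in a w-by-h box, sorted by angle about their centroid;
// connecting them in this order gives a simple (non-self-intersecting),
// star-shaped polygon.
function randomStarPolygon(n, w, h) {
  var pts = [];
  for (var i = 0; i < n; i++) {
    pts.push({ x: Math.random() * w, y: Math.random() * h });
  }
  var cx = 0, cy = 0;
  pts.forEach(function (p) { cx += p.x; cy += p.y; });
  cx /= n; cy /= n;

  pts.sort(function (a, b) {
    return Math.atan2(a.y - cy, a.x - cx) - Math.atan2(b.y - cy, b.x - cx);
  });
  return pts;   // ready to be turned into an SVG polygon or path
}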
The problem there is that there are a lot of different ways to stitch two polygons together to make bigger polygons. So here I'm stitching this polygon together by eliminating these two lines and replacing them with these two lines, yielding this new polygon. You'll notice that this particular polygon cannot be illuminated from a single interior lamp. Hence it cannot be generated via that other technique. There are a lot of different ways to stitch two polygons together. So in the case of multiple polygons, the problem of stitching multiple polygons together becomes like a traveling salesman problem. How many different ways could we hook these seven clustered polygons together? Well, if there are seven clusters, there are seven factorial different ways of putting the seven clusters together. So the problem remains NP-complete. Another approach would be to do some kind of onion skinning. It's easy computationally to calculate the convex hull of a polygon, or a bunch of points in the plane. There is like a bounding box on the way, a bounding polygon, and that's easy to calculate computationally. So what we could do is find a series of concentric convex hulls and then consider the number of different pathways of connecting those convex hulls together. That problem also looks like it's an NP-type problem. So we came up with another approach a few, a couple of years ago, which was sort of like this. Let's start with a random triangle. Three points chosen at random. What are we going to do next? Well, let's just pick a point, like there. The point that I picked had only one line visible to it. So the question of where do I connect, where do I make the foregone out of the three gone, becomes simple. Suppose I click on the interior, however. There are four different line segments that could be eliminated to create the next two. And the x is undefined, because my JavaScript is a little screwball. But I randomly chose one of those four lines and replaced it by the two new lines. And so that's the technique that we're basically using. Well, it's one of the techniques. That approach seemed to be sort of a sensible approach in the sense that you can generate a bunch of nice things and you can smooth them. And the path data is being preserved down there. And then of course you can do fun things like fill them with random patterns and random linear gradients and keep them and move them around and stuff. There's a big however. And that is using this technique of choosing a random point and then finding a line which is visible from it doesn't always work. In this particular case, suppose the brown edges have already been drawn. And I happen to by chance choose a point in the middle of the darn thing. Notice that there is no line visible from the interior point. So therefore, the algorithm I just described is not always going to work. Dr. Whitfield is going to tell you about the algorithm that really does work and some of the problems. Can you get out from there? Back in 2008 we had an algorithm presented with a different component that instead of the typical NP complete column, which is, I think I need that microphone. Just realized that. Yes, so back in 2008 we came up with an algorithm instead of the NP way of taking an end vertices and then trying to find an end gone from those end vertices. Instead point by point create new vertices. So you start off with three points at a fourth point at a fifth point and from that create an end gone. And that's a technique I'm going to tell you about here. 
There's several steps to this. Some of them are more difficult than others, so just let me briefly give you an overview of how it works and then I'll basically talk about step three here, which is the difficult step. So some point or another you've got to some end gone. And what we're going to do is randomly select an edge. Take that edge and then from that edge, VAVB up there, we're going to find what is the visible region for VA and VB. That's the difficult part. So let's get into the details of that in just a second. After you've got that visible region, then what we're going to do is triangulate that region, break it into a bunch of little triangles. That's actually a difficult problem also, but thank heavens that's been solved by hundreds of different graph theorists. And so we could just steal that code off the internet and use Ratcliffe's code is what we did after we translated it. Now I've got a bunch of triangles. After I got a bunch of triangles and going to randomly select one based upon their weight, so it's evenly distributed, and then select a point in there. What our algorithm is doing is we believe and we believe we can prove it, it is able to generate any end gone. So it's not going to give you a typical convex hull. It's not going to give you a flat bottom boat like many other techniques does. It is theoretically capable to randomly generate any end gone. Alright, then we add the new point and you move on from there. So my task was to implement this and I implemented this in JavaScript. As I said, many of the steps are easy steps one, two, five, six and seven are extremely easy in that list I just went through. Step four, I just used John Ratcliffe's code. It was in C++ and I just generated JavaScript from it. This particular technique, as Dr. Daly already mentioned, we have no holes in our end gone, so that makes the triangulation problem simpler. So we didn't have to do the very complex triangulation. So how did we do this generating of the end gone? Well, the biggest problem is trying to find this visible region. So when you select VA and VB here, what you do is extend the line. And you can see that this new point here extends and hits the exterior part of the end gone where this one hits the interior part of the end gone. Our original implementation, we thought, well, we'll just make two passes and we'll find all those ones that are visible, the region that's visible from the interior and exterior and then stitch them together. Well, during the implementation, I found out that, hey, it's senseless to do two passes. I could do this all in one pass. Great idea, but the implementation and coding of that was quite difficult because now I had to worry about, and I'll tell you some of the problems later, but basically getting the points into the right list, into the right order seemed to be the problem. So after you have determined that this is the extension of the line, then what you're going to do is go around the end gone, adding those two new points in, and simply go around and say, is this visible to both, is this visible to both, is this visible to both? That'll give you the vertices with me. That's simple. Now, what if one was visible, the next one is not, and the next one is? Well, somewhere along that traversal, we hit into an edge where somewhere on that edge, we hit is visible and the rest of that edge is not. So we simply calculated that using line intersection tests. I better move because we've got one more speaker. 
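The line intersection tests that the crossing check and the visible-region computation both rely on reduce to one primitive: do two segments properly cross? A standard orientation-based sketch, not the authors' actual implementation:

// Sign of the cross product (b - a) x (c - a):
// positive = left turn, negative = right turn, zero = collinear.
function orient(a, b, c) {
  return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segment p1-p2 strictly crosses segment p3-p4
// (touching endpoints and collinear overlaps are not counted here).
function segmentsCross(p1, p2, p3, p4) {
  var d1 = orient(p3, p4, p1);
  var d2 = orient(p3, p4, p2);
  var d3 = orient(p1, p2, p3);
  var d4 = orient(p1, p2, p4);
  return d1 * d2 < 0 && d3 * d4 < 0;
}

// A candidate edge from p to q is rejected if it crosses any existing edge.
function crossesAny(p, q, edges) {
  return edges.some(function (e) { return segmentsCross(p, q, e.a, e.b); });
}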
Okay, I'm going to jump ahead to some of the problems with this. What we did was created an array of XY coordinates. It was very, very easy to do this and then map it to SVG path object. So that was basically the output of this algorithm as an SVG path. The problems that we ran into were based upon a couple different things. One, we originally didn't think, hey, we want to put this in a bounding box. I have the edge of the screen, but as soon as we decided, hey, edge of the screen, bounding box, they're analogous, not bad. That moved quickly. Then the next problem we ran into is the one I alluded to just a moment ago, which is interior visible, which is exterior visible. If I want to combine those two lists, then I have to do a lot of math calculations if you will of intersection tests in order to do this. Overhead got pretty high. And finally, I realized, wait a minute, I can do a bunch of this calculation one pass prior to the algorithm starting and save all that information. Speed it up quite a bit and we think we could speed it up even more. So what were the real problems? Well, that's the ugliest down here. One of the problems is this intersection, if you'll see, here's A, B, the selected line, and I'm looking at visible region. Look, this visible region doesn't hit any edge. It's possible that intersection test gives you something in the middle of the end going. Okay, so we didn't think that a first, add that in. That was easily fixed. Then the ugliest problem is the problem that was alluded to in a Microsoft talk earlier, actually, that we have a bunch of pixels on the screen, which is a thousand by a thousand. Those are integers. You do line intersection tests. How many decimal points do you have? Now, I've got floating point problems is what we ended up with. And you say, well, that's not a problem. You just maintain the large floating point numbers. Fine, we can do that. But then when you do triangulation, what you'd have is some kind of n-gon, and I got a new point here. This point could be so close to an existing edge, it now makes my polygon at this size look like it has holes in it, and we don't want that. So we have two techniques that we're going to try to fix that. But being that SVG is an S, which somebody mentioned earlier today is scalar, well, when you enlarge it, it shouldn't be a problem. It will no longer look like it has holes, but we're still not quite pleased with when it's in the smaller representation that it does look like it has holes. George actually implemented, took the n-gon code, and Dr. Daly did a preview of that code, and he's going to give a quick demo now of the code that's running. I should use like 9. I should have. Such an issue. Okay, so like they said, this is the latest version we have at the moment that has all the things. This is just the basic screen. We just threw a random polygon up that just introduced it. This is what it does. Down here, you can see how the polygon is set up, because sometimes you get these cool shapes and you want to try to keep them, or you'll come up with something and you're like, because at first we came up with things, we got these really cool shapes, then we lost them, and that broke our hearts, so we wanted to come up with a way of setting it up so that we could always have it. And we split the menu bar up into three different parts. You have the first random polygon, which controls the actual random polygon. The first three functions or buttons on the top do with Dr. 
Daly's original code, the way we thought was going to be the fastest. We thought we came up with a cool thing, but then he showed you the problem of where sometimes the lines don't intersect correctly. And then we have keep it, which he already showed you, random point, which gives you one random point, smooth, which he went over, which he went over, which makes polygons look way more beautiful, in my opinion. And then we have our true algorithm step and true algorithm step, which Dr. Whitfield just went over in her way. So we do zero, we'll give us a new triangle. Let's get a decent one. That looks okay. Let's add a couple things, smooth it out, and then we have a random shape. So then we can go to filters and does all work with filters. So you guys can see that and Dr. Daly talked about this earlier, like adding a pattern, different patterns he added, random gradient. But over top of it, we could put like a random turbulence. It's fun and nice one. Change this to linear gradient. But you see how just adding a random turbulence over top of the linear gradient gives you something totally different. We wanted to add a little bit of filters and stuff to the random polygon to show that we could do completely random things, completely off the wall things, not just like mimics of like Dr. Daly showed in the original beginning, how the pentagon or faces on the moon, or McJagger, all that stuff is basically nice. Then we also have the bounding box because you obviously don't want to always have a polygon that takes the whole screen. It's useless to you. You want to make sure you have a polygon that says, hey, I want a polygon this size, and I can put it here, I can put it there. So like that, we came up with a way to like change the bounding box size. And then if it doesn't want to work, we'll go here. Here's some nice polygons we had. Like how they can just come up, just simple change of the filters, completely changes everything. We went through all that fun stuff. So we have the bounding box. Basically, you select point, select point, you come up with a basic thing. And then whenever you keep it, like he had, you can put different things on the screen. And you can see how like we had a pattern here, and then we just had a basic color with a turbulence thrown over it. So then like eventually like say, what we want to try to like get this into is like, imagine if you have lakes, and then not every lake is going to be the same type. But you can basically see like, imagine if this was a blue color, you could see like a blue color here, and then you don't want to make them all the same size and everything. And then Dr. Woodfield will go over like future directions and any other questions you guys might have. We have three minutes. Let me show you the, I'll just display the future directions and the bibliography is down there at the bottom too, if you have any other questions. But what basically we want to make the tool a little nicer to use is basically what all the options are for. Questions? So I actually, why and I'm not this much of a geek, why do I want random polygons? Well, I wish we could have given you more realistic examples, but let's say you want to create a screen that has clouds. Go look at the clouds, they're very random. So now I could take a bunch of polygons and they're not flat also and they have depth to them. So now if you take the keep it and you say I've got this cloud that's its shape and I move this one here, here and here, I can actually create myself a nice scenery. 
I could do some visual scenery. My idea was actually for mountains and streams. Now you take this bounding box that's here and you get this random polygon here and you can then connect them together and then you can end up with something that's nice and scenic. Add a filter that's a water filter, you've got the water. Add something that's white, that's a cloud, you can get clouds. My feeling is that artists and movie makers are too expensive. If you're going to do like that, you've got parameters you want to say. Make some stuff, make swoopy stuff. I'm doing Manson's house to make Jackie's stuff. Yes, absolutely. Our goal wasn't to create a full blown tool. We'd really like somebody that's in the industry right now to say, hey, this is a great idea. We'll implement it for you and do a full blown tool where they've got the 40 developers that have all these year and a half to do it. We're in academia. We teach and do administrations. Are there any more questions? Thank you. Thanks very much.
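The output step mentioned earlier, an array of x,y coordinates mapped to an SVG path, can be sketched as follows; the element creation and styling details are illustrative and the real tool may differ.

var SVG_NS = 'http://www.w3.org/2000/svg';

// Turn an array of {x, y} vertices into a closed <path> element.
function polygonToPath(points, fill) {
  var d = 'M ' + points[0].x + ' ' + points[0].y;
  for (var i = 1; i < points.length; i++) {
    d += ' L ' + points[i].x + ' ' + points[i].y;
  }
  d += ' Z';   // close the polygon

  var path = document.createElementNS(SVG_NS, 'path');
  path.setAttribute('d', d);
  path.setAttribute('fill', fill || 'none');
  path.setAttribute('stroke', 'black');
  return path;
}

Filters, gradients or patterns like the ones shown in the demo can then be attached to the returned element with further setAttribute calls.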
The ability to generate dynamic graphical content is a major asset of SVG. Accordingly, a natural question that has caught the attention of computer scientists for some time is how to efficiently generate “interesting” random shapes. The generation of random shapes could be used to mimic scenery found in nature, which appears to the human eye to be truly random: trees grow at apparently random angles, water bodies such as lakes and oceans have random contour edges, and streams have unusual edges and meander randomly. Random polygons with a large number of edges can be smoothed, filtered and given gradients to resemble natural entities such as clouds, lakes and land formations. An efficient algorithm for generating polygons has been created and implemented in SVG. The paper will demonstrate its use to create random shapes.
10.5446/31204 (DOI)
So we'll start with what is JSX Graph, perhaps not all of you have seen the talk of last year. And go on with JSX Graph, which is running on mobile devices. Show what is JSX Script. It's a recent invention about JSX Graph. And to end, we'll have some further applications, examples, fun stuff. I think you will like it. So let's start with what is JSX Graph. How did we start? The original idea was that there are many dynamic geometry systems out there, like GeoNex, GeoJ Brass, and Rella, Capri, you might know here in France. In USA, it's Geometer Sketchpad, I think. And they are all based on Java. And we just didn't want Java because we think it's dead. And so we wanted to have another dynamic geometry system which runs on every browser with JavaScript. And of course, we can use JavaScript to program geometric construction, not only to read the database which is just given by Capri or GeoJ Brass. So what was the result? We have a library, JSX Graph, which is implemented completely in JavaScript, which runs on all major browsers, even Internet Explorer, and even Internet Explorer with a version smaller than nine. Of course, Firefox, Opera, Chromium. And we don't need any plugins because we use SVG and VML. And what is different to last year? We don't need any additional libraries. So we kicked off Prototype and jQuery and do the things on ourselves because we didn't need very much about getElement by ID, we can do it by ourselves and don't need the dollar. So you can use it, of course. It's LGPL license. So you can use it for your own private commercial purposes for free. So first example, what is JSX Graph? You see Newton's method, you might know it from school, you can drag around this point X0 and see the lines adapting. And that's the major advantage of JSX Graph. You don't have to program all those events about the points and the dependencies that's built in. You just have to create, say, point and line to this point. And if you move around the point, the lines go around with it. So I'll hand over to Peter. One of the main benefits is you don't have to care where you're going. You have to date to run some every PC and every platform, every operating system, and the new touchpad, like that, and Android start or Android, and mobile phones on the iPod touch on Android phones and on every device, even if you have no plugins like Java or Flash, because they suck, and Android devices, we did implement the Canvas renderer because the guys did bug us. We had to because it didn't run last year. So we featured a new Canvas renderer to do it in now VML, SVT, and Canvas. So we can do it now on nearly every platform that has a browser. And one of the main benefits is the name of the talk. We are faster than a Java plugin. We have prepared a little comparison. That's what it normally looks like. We have drawn our image and we're still waiting. But you did see it loading. Normally, it would have shown. I would present now, but you have the initial time of Java and stuff. And you can display it on every device. And like Michael said, on this talk on Monday, plugins are there, so don't use plugins. Just build it in the browser. So, JG-Dragon Mobile Devices is one of our main progress of this year. We have no internet. We have no internet. There would be a YouTube video included presenting on iPod Touch. But if you have it, like the new iPad being around here, you can see it. And one is the same presentation we'll get in touch with later. So, it runs and you can touch it. 
We implement a multi-touch and stuff and just go ahead. It runs. What we did or what we're doing now is we built an application to get here with the App Store. It's not in there right now. So, we are pending reviews. What we will do is some basic application for doing calculus in school and stuff. Then we will build a new dynamic geometry system. And we will implement a full multi-touch support in about a few months. Multi-touch is enabled right now. So, if you want to try it out, you can do it afterwards. And, yeah, so we have it. The other stuff. So, as a small comparison, SVT with Canvas, the main benefit is you can use it even in the browser. But as you see here, SVT is, or Canvas is a bit faster in the browser. But when you're on mobile device, Canvas is even faster. And perhaps like on Android Voice, you don't have SVT enabled. So, that's a small comparison on the left is used in the Canvas. So, just a script. One of the most beautiful versions that were made by Bianca last year. Okay. So, you might not know just a script. That's because it's invented by us. By her? No, it's by Michael. Could you go? So, what was the idea about jessi script? If you had seen the Newton thing that I presented before, it was all programmed with JavaScript. That might not be very difficult for all of you. But it's impossible for a teacher in Germany. You take a microphone. It's impossible for a teacher in Germany. And for pupils, it's even more impossible. So, we thought about what could we do to ease this construction thing. And the result was jessi script. So, what is jessi script? It's a syntax that is similar to what is taught in schools. And I'm not familiar with the syntax that is taught in French or American schools. But perhaps you might imagine if you have p1,1, that's a point at the coordinates 1,1. And if you have k, that's because it's in German Kreis for circle. You have a circle around p with radius 2. And I think that's even smaller than SVG. And it's easy to learn for students. And another great thing about it, it's secure. You might imagine you have a wiki where you want to present interactive geometric content. Where you can drag points around and show what is meant with some theorems. And you could provide to anyone to use jessi script to construct something and add it to the wiki. Without any security lag because it's parsed by jessi script. And not just to say evil something. So I have a little example. Over there you see the syntax. It's one point, one circle, another point. And a segment, you see it because the brackets are going to the points and not from the points away. And if we send it, you see it on the board. And you can again drag the points around, do what you want with it. And jessi script is more developed than just having a point or a segment. You can do even macros. It's not so long this macro. But what it does is it takes two points and it creates the fitting square. And you could use it here for the Pythagorean theory. And here it shows the square function. It's x squared. And you see if you have here b, the squared value is x. And it's not difficult. You might foster algorithmic thinking for people with it. It's very useful at schools, I think. So now we come to the fun stuff. First thing you might have seen yesterday from Michael Neutze. It's the animated age data from Germany, I think. It's the same thing that Michael presented yesterday. But it's done with JSX Graph. It's redone, I think, I would say. And you see it's fast. And it works on every device. 
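For readers who want to try the dependency idea themselves, a minimal JSXGraph construction looks roughly like the sketch below: a board, two draggable points, and a line and circle that update automatically when either point is moved. The element names, bounding box and div id are arbitrary; see jsxgraph.org for the authoritative API.

// Assumes JSXGraph is loaded and the page contains
// <div id="jxgbox" style="width: 500px; height: 500px;"></div>
var board = JXG.JSXGraph.initBoard('jxgbox', {
  boundingbox: [-5, 5, 5, -5],   // xmin, ymax, xmax, ymin
  axis: true
});

// Two free, draggable points...
var A = board.create('point', [1, 1], { name: 'A' });
var B = board.create('point', [3, 2], { name: 'B' });

// ...and dependent objects: no event handling code is needed, dragging
// A or B updates the line and the circle automatically.
var line = board.create('line', [A, B]);
var circ = board.create('circle', [A, 2]);   // centre A, radius 2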
And the next thing you might have seen some more times these days, it's the German election data and it's also rebuilt with JSX Graph. The main interesting thing about it here is that we use arc view shaped data, which you can read with a Python script that runs on server, of course. And you have two options. Either you use the Python script to convert the data once and read it here. Or you can do it by live. You run the Python script on the server and read arc view shaped data files. And what is done there is you can shorten the number of numbers. Because arc view shaped data has 12 or 13 numbers per thing. I don't know. And you can do a Raymond Douglas Poick algorithm. I don't know if you know it. It's just something we talked about this morning to shorten the data of the path. These are not all the points that are within the arc view shaped format. That was text talk, yes. So of course, if you can do math and you can do circles and points, you can do charting. And these animations are also built in with JavaScript with JSX Graph. And they are done with JavaScript, of course. So if you want to have this chart with those plopping pies, you don't have to do it on yourself with all the events you need to create. And of course, you have line charts. And here the interesting thing about that is that the data comes live from server. It's randomly generated. But you might imagine that you want to plot stock data on live and this is possible. It was an Ajax request. And now the real fun part, don't be hypnotized. You can drag the sliders around and there are more or less rectangles. And if you go back to 95, go back to 95. And then draw around the colors, it's a bit disco feeling. And you saw it where very many rectangles and it was really fast. You could drag the points around. So here you see some new animations. We had animations last year, but now we have some new ones. Here you see changing colors. All do with JavaScript. Moving circles. Over there you see the point F which runs along these other points which is done by a......bizier curve, something like that. It's calculated in the background where the point has to go along. And perhaps you can drag on the weight thing. So it's just bumping around. In the background a linear differential equation solved. And you see perhaps here this math formula. We're supporting ASCII MathML and MathJax. Perhaps you might know it. To display such formulas you might imagine for a graph, the integral or something like that. And down there you have a moving chart. It's again a little psychedelic. And if you go over it the opacity of the bars is changing. And this is done by prototype. You might use the built-in animations or you might use prototype for your own ideas. So it's very flexible. And the last thing, you might even do games with JSX Graph. So if it's a bit boring in a talk or at school, at university, at work, you might play Tetris with JSX Graph. So Peter is good at it. Okay. Merci beaucoup pour votre attention. And if you want to contribute, you're welcome. You're very welcome. We have our site, JSXGraph.org. And we have also a wiki which you find from our main page. And we have a Google group. So you can send annotations, bugs. We have no bugs, of course, but perhaps you find one or you have suggestions what we could do. So the main benefits are we are faster than any other library. And we do a more scientific approach while we solve all the issues and the paths with algorithms. And we do encourage you to send examples or proposals. 
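The path-shortening step mentioned for the election map is the Ramer-Douglas-Peucker algorithm: a point is kept only if it sticks out further than a tolerance from the chord joining the first and last points of its segment, applied recursively. A compact sketch, not the authors' server-side Python code:

// Perpendicular distance from point p to the line through a and b.
function perpDist(p, a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt(Math.pow(p.x - a.x, 2) + Math.pow(p.y - a.y, 2));
  return Math.abs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
}

// Ramer-Douglas-Peucker simplification with tolerance epsilon.
function simplify(points, epsilon) {
  if (points.length < 3) return points.slice();
  var first = points[0], last = points[points.length - 1];
  var index = -1, maxDist = 0;
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpDist(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [first, last];   // everything in between is dropped
  var left = simplify(points.slice(0, index + 1), epsilon);
  var right = simplify(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);         // avoid duplicating the split point
}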
And if you don't, can visualize for yourself, just send them in. And if you have success or anything else. So if there are any questions, are there questions right now? Yeah. I happen to be married with a math teacher. And she's using a lot of GeoGebra. It's a pain for her to use it. Yeah, she uses it but she's integrating stuff in the... Yeah, in web pages. It's not so easy. Do you provide it as a web service, like a page running where you can type in your script and get the results? From GeoGebra? From JS Script. Is there a web page where you can put your... Yeah, you could. In our wiki there's a page where you could try it out. But we use... If your wife doesn't like GeoGebra, she can take your GeoGebra file and read them with JSxgraph. We have a GeoGebra reader, as you have seen. It's a GeoGebra file indeed and we can read it with JSxgraph and display it. At the moment you can't save files but we are implementing it or we are working on it. You can import every other file and display it, work with it, but you can't actually save it. But we are working on it. But you could save the jessi script syntax. You just run it every time, so it's displayed again. I just have two comments about this. One, the games I think is wonderful because when I was a math student I know that all the time we would have these graph and top-levels. We'd always install Tetris or Asteroids and eventually we could program them. So I mean to be able to do that. We have Snake. I didn't show it but we have Snake. Let's have a look at our wiki if you have many more games. It's scientific. I think it's great because it completes the cycle. The other thing is, do you have media wiki integration with this? We have plugins for WordPress, media wiki, Drupal, Moodle and several others. So you all look at that? Yeah. Just have a look on the website where you can integrate it because it was a thing to do to integrate it in e-learning platforms. It is done. So I guess you already answered that you are using only JavaScript for your missions or you are also borrowing the mix-mile somewhere. No. Just JavaScript. It's your JavaScript. It's just SVG static things and you animate them with JavaScript and timeouts. Yeah. Just JavaScript. If anyone does want to do a comparison, feel free. We don't know exactly what is faster, pure SVG animation or JavaScript enabled animation. Feel free to do it. I can file a bug for this but I noticed that if you look for the animated page permit, the slider you use in the bottom, I'm not sure how well it works on mobile devices. It's just a straight-up in the old mobile limiter and you can't grab the handle to drag it around. So maybe you want to look into using an HTML5 web form slider or something like that. So maybe if you go to the browser rather than I'm not sure how it is made. Well, it's sort of a fake slider. Yeah, it's done within JSX Graph. It's a point in the line. But what you can do is just use jQuery sliders or really other sliders and hook them up. So this works too. Okay. Any other questions? For more complex computations inside JavaScript, have you looked at the worker stuff that's going on so that you can put a JavaScript into its own thread to manipulate the cluster? I don't think so. For the election data, we did something with chunks, time chunks, to read the data fast. But I don't know if this... So the work is something like you load a JavaScript file, it runs in its own thread. Okay. That's what it does. It doesn't have any DOM access or stuff like that. 
But it's encapsulated on its own and it will be perfectly for storing the differential equations and stuff like that. Okay. So could you manipulate the things out there? If you see the stiff here at the animated age data, you want to manipulate the text out of JSX Graph. There are APIs. Okay. You can communicate via messages, but the heavy computations and stuff that could be put out into the threads and then it would run faster. Okay. It would be in nice extend. Let's have a look at it after the talk. But we're doing some server side computation. We implemented R for example and stuff. But let's talk about it later. I'm sorry. This is the beginning. How many browsers do you run in? Every one. You might imagine. Every major one. Even Internet Explorer 6, 7, 8. 6, 7, 8, 9. Great. Okay. We were running already in 9. It was only a little thing to adjust it. Okay. And we have running some Bachelor thesis right now to implement 3D and some physics engine for fast path solving. And the graphs we saw already. Yeah. I had a question as well. At some point you showed both SDG and Canvas outputs. So I'm guessing that, so Canvas is faster obviously, especially on low-part advice. Yeah. I guess you don't have interactivity with Canvas, right? We have. You have it? Yeah. It's drawn every time you do something. If you go over. Can you go over the line again? Yeah. They are highlighting. You see the highlighting? Yeah. They work pretty much the same. VML, SVG and Canvas. Did you find it difficult to generate multiple outputs for VML and Canvas? Don't let someone know that. It works. I think it was okay. Because somewhere the difference are not too big between SVG and VML. But perhaps if you look at shadows and VML, you have just to do something very easy. And SVG, you need those filters. And so you had to do many things. But the basics, the points, the circles, the lines, that's not the problem. Thank you very much. Thank you very much. Thank you.
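The Web Worker suggestion from the question above can be sketched as follows: the heavy numeric work (for example, solving a differential equation for an animation) runs in a separate script with no DOM access, and the results come back to the page by message, where they are fed into the board or the SVG. The file name, message shape and stand-in computation are invented for the example.

// main page ----------------------------------------------------------
var worker = new Worker('solver.js');        // hypothetical worker script

worker.onmessage = function (event) {
  var points = event.data;                   // e.g. [{x: ..., y: ...}, ...]
  // update the JSXGraph board / SVG here, on the main thread
};

worker.postMessage({ t0: 0, t1: 10, steps: 1000 });

// solver.js ----------------------------------------------------------
onmessage = function (event) {
  var p = event.data, pts = [];
  for (var i = 0; i <= p.steps; i++) {
    var t = p.t0 + (p.t1 - p.t0) * i / p.steps;
    pts.push({ x: t, y: Math.exp(-t) * Math.cos(5 * t) });   // stand-in computation
  }
  postMessage(pts);                          // workers have no DOM access
};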
The JavaScript-based library JSXGraph enables a wide range of interactive data visualizations, from complex mathematical content like geometry constructions or curve plotting to online charts and maps. To do so it does not rely on any other library, but uses SVG for drawing on most browsers and VML on Internet Explorer. JSXGraph is easy to embed and has a small footprint: less than 100 KByte when embedded in a web page. Special care has been taken to optimize performance. JSXGraph is developed at the Lehrstuhl für Mathematik und ihre Didaktik, University of Bayreuth, Germany.
10.5446/31205 (DOI)
Hello, I'm Marcel, I'm the second host of SMA in the practice, the French company. Today, I have a presentation with the team, HGT and Hopper, and a couple of tutorials in an NCC. Some definitions. In a geographic information system, a user needs to be able to manage the stream of data, the vector data, the ring of the vector, the attribute data, and the roster data. The roster is more important. A user needs to be mixed. In our case, a user needs to be able to generate a base and render to share data and to share data and to consider this data everywhere. The user just also be allowed to consider the data and manipulate the geographic element and at the end, they have to edit the data. So, a good case to see how HGT will be linked with the unit lose can be generate right from our presence with the HGT case. Now, let's see how we can get HGT from the HGT stack. First, you have to have a good base, like a classical LAN server, and a classical LAN server, long for a minute, it's a bit... In this picture, you can see the beginning of the architecture. First, you have the physical server, then the LAN server, and then here is our product, which is not in the HGT. On the right, there is a nice queue in the database. So, I only imagine what kind of......you do in your comments. It's a little......a geographic... So, why don't you do a geographic database? Like the Cosgram, or the Cosq... My reference is that......we have tested both of the systems, and it appears that my secure is more efficient, easier to install and easier to... So, we......we put the shell on the manager that is here. So, because of the shell, our geographic data is stored in the written text. We state that the binary manager that is here, we use the written text. So, the title of my......is not very correct, it's not from just the GIG, but we are very close. When you have written text, which is a......which is format, you are very close to the......the......the back position. Then the client searcher. We are talking about the online GIST. So, we need a browser. We can distribute that......that features. The browser will be......several eggs, maybe later......with what I learned yesterday and before yesterday, we can imagine......this, this, other browser. Today, we found the code......we use the......the......when we did that, this is the......this is the list solution we have to......to display as a GIG. And then, this......what's the code? We use the......GIG, the layout and things. I'll show you an example. This is the......the kind of layout we use in G-Otivo. Of course, we have attributes, as we see in the attribute......classical. Also......also......we can use the attributes. And we also have......the show attributes. We can use the......this one. This is the layout of all the......the browser that we get in the bar. This is the......this one. Now back to my work in the......so, in the databases, we have regular......mode cases. It's passed through a generator, an SVG generator, who sends......the SVG fragment in the good layout......in our company. And as we can see......we use......we use the other SVG one. This is the layout, the visual edge, our......and all that. So, as for......what context to SVG is a......quite easy process because......what context......looks like this SVG bar. Just a tip......we use......to......get the......we don't......in what context you have the coordinate......each coordinate......each one's......so you have the SVG one,...the Y, X3, X3, X3, etc. They are 0, 0, 0, 0. 
So, we use the......relative transpose......for the branch of the SVG......X1, X1, X1......Deltax, X2, Delta X3, Delta X3......and so on. And like that......excusing fragment in the generator......after the test......I will......see......in our case, the SVG is not enough. Our users need to manipulate it......to get information from the SVG. The best way to do this......is to use the DOM. Just before......we......turned the SVG......to the DOM. We......turned the SVG......to the DOM. With just this......for example, our users......can modify the......the data style......the data style......they can......modify to the shape of the element......and......at last......they can get information......like the key of our......of the......of the......the diagram is going the same......than......than the......the data style. But here, you have two new blocks. One to manage the client......the client......the request, and one......to manage the user......with one ESPHP. So, again, just......the client......and.....I know......the default data is displayed. Still to display......the rest of the......with the......the system. First, this......type presentation......the system......we have seen......the......the open view, etc. This guy is a big... The first solution is more efficient because you learn the small area in red, which is quite out of the baguette. In fact, we choose the second solution because the one tool achieves better. Now, first we want to display the roster data, the database of the directly from the G site. Here we use another linear compact which is called a map track. And how it works, we generate queries that are sent to the map server by the image marker in the decision. So the last step in our QSTD is to generate a decision-ready way from a user. In fact, the goal is to do it. We not only have to generate a real set of queries, but we also have to choose the type that the user can use. So we have to choose the format, including pdf, and we generate this pdf to the party as we see in the server. As a function of all the reports, the SVG is displayed in the browser. We cut the XML content of the SVG to send it to the server. We have to regenerate the map server because we can directly generate the SVG to the browser. We regenerate the picture and we modify the SVG to the browser, to list directly files and not to the user. I expect the user to expect the file. Then we have to read SVG with a link to the page. So we have to ask the user to read the page. It's simple, as you see, pdf, just a little bit, and go back. We have to get to our script with SVG, SVG structure, implementation, and SVG, and SVG, etc. So in 10 minutes, we have a summary of many, many, many hours of work and mind-making. As a conclusion, let's see the advantages and the drawbacks of such a step. First, advantages. In order to be perfect, we have to reuse the template, even if sometimes it's difficult to reuse something. It's always faster to use the rest of the project. So we reduce the advantages. Another thing we want to do is to set the system with code to our user page. As we say, the code is in the customer's skin and that's the object. Because the data is one percent different, it doesn't have to be. So we have to make sure that the data is in the customer's skin. Unfortunately, there is a lot of work. We need a complex system, not only by linking codes to our service, but also by interacting by the script. So we see what we can do with the script. The software maintenance and update is quite difficult. 
You have many breaks to bring to the system and we have a client-side intention, especially with Adobe and FijiDoubar. This program is not available anymore and our user has to use IE and not a developer. Now let's talk about the demo. Let's see how it works. So we can choose the main maps. Here is a Gigi and here is a Gigi. Here you have the parameters. Before, here you can assign the parameters. Here is the vector. So you can modify your feeling. Here we will set to TT, little work. Here for vector data, I can display my vector data. This is not vector, this is not. This is the PNG file. And I close this slide, I close it by two arrows, and I close it. I close it by two arrows, and I close it by two arrows. I close it by two arrows, and I close it. I close it by two arrows, and I close it. This is a text. It's not the loading information. This is for interactivity and display. I can display real and so single edition tools. This is for interactivity and display. This is for interactivity and display. This is for economic development. This is for interactivity and display. I do not snap the roster. I do not snap the roster, but I do snap the roster. I do snap the roster. I can make a unique tutorial that in our case, we have the board and the hand sanitizer. We have many tools in the script. That is what is the chance of application to the script. We use NZ to go ahead with the specification. We manage the network since 5 years ago. This is for interactivity and display. We have the tools. We can read things. We can read things. The next question is the circuits. We have the topology network at the back. We can read the circuits. We can see them. We can read the circuits. We can go ahead with the specification. Thank you for the presentation. I think you should get rid of the G-Fuel. This is a call. I agree. A couple of years ago, the other policies have been slower. Nowadays, it is really variable. I think it is a little bit more difficult to get rid of the G-Fuel. I think it is a little bit more difficult. I think it is a little bit more difficult. I think it is a little bit more difficult. I think it is a little bit more difficult. I think it is a little bit more difficult. I think the ignorance extends both ways. It seems to understand what is needed in the way of accessibility of SCG. I was wondering who are the users for this system? The people who use this system. Most of them are the regular users. Any other questions? One to finish off, maybe. What was not clear to me is the graphic user interface. Thank you again.
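The conversion step described in the talk, WKT geometry from the database turned into an SVG path fragment with relative commands so that only the first coordinate pair is written in full, might look roughly like the following; the parsing is reduced to a bare LINESTRING case and the details are illustrative rather than the production code.

// 'LINESTRING(1234567 456789, 1234570 456795, ...)'
//   -> 'M1234567 456789l3 6...'
// Only the initial moveto is absolute; every following lineto is a small
// delta, which keeps the generated SVG fragment compact.
// (A real converter would also apply the map-to-screen transform,
//  e.g. flipping the Y axis.)
function wktLineStringToPath(wkt) {
  var coords = wkt
    .replace(/^\s*LINESTRING\s*\(/i, '')
    .replace(/\)\s*$/, '')
    .split(',')
    .map(function (pair) {
      var xy = pair.trim().split(/\s+/);
      return { x: parseFloat(xy[0]), y: parseFloat(xy[1]) };
    });

  var d = 'M' + coords[0].x + ' ' + coords[0].y;
  for (var i = 1; i < coords.length; i++) {
    var dx = coords[i].x - coords[i - 1].x;
    var dy = coords[i].y - coords[i - 1].y;
    d += 'l' + dx + ' ' + dy;   // lowercase l = relative lineto
  }
  return d;
}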
This presentation will start with a quick explanation of the choice to use SVG as a display tool in an online GIS (easy generation, vector/raster display, scripting, free technology…). This introduction will also underline the central place of SVG in this type of software architecture and the potential links between SVG and other open or free tools.
10.5446/31207 (DOI)
Ladies and gentlemen, thank you. Sorry, our French is very, very small. So we will do our talk mostly in English and perhaps the question in French is very small. So I'm Johnny Martin and my colleague Ignatius. We met at San Jose State University where I am still currently teaching. We work briefly together at Adobe Systems and then at PayPal. Ignatius remains at PayPal. I have actually returned to teaching at the university. So a little bit of background about our talk today. I had this idea after reading a book by a gentleman named Richard Dawkins. And after I had this idea, I tried to encourage several of my students to work on this area. And I succeeded in encouraging one student to work on this. Unfortunately, she was not able to carry the ball to the finish line. And at the last minute, decided she's not able to participate. So Ignatius and I decided, okay, this is really a fun topic. The paper is accepted. We decided to go ahead and finish for you. So we have, just so you know, Ignatius and I are presenting another paper tomorrow involving swing and SPG and interoperability and a lot of very technical and interesting challenges that we solved with that talk. Today's talk is a lot more about just having fun. So we are really, really passionate about SPG and we really like this idea of combining SPG in this particular application. First, if we hit space bar or next page. Okay, so just to give you a rough idea of what we will do, we will first cover some topic about what's evolution and genes. We will then talk a little bit about reproduction, but please don't get too excited. This is a very technical computer science forum, so we will not talk about that type of reproduction. We will then talk about how we manage selection and how we use SPG for both animating and displaying the figures. We will talk a little bit about how we decided which toolkits to use and we will finish up with a demonstration, which Ignatius will run. It's cold. It's very cold up here with the wind. Okay, next page. Okay, so in 1996 Richard Dawkins wrote a book. He actually has written several books, you might be familiar. One of his most popular texts is called The Selfish Gene. In one of these books he describes a program called Evolution. This program is a program that's used for Richard Dawkins' purpose to show how random features in just mathematical randomness apply to a situation of reproduction and mutation, causes the ability for very, very complicated life forms to evolve essentially out of nothing, just the randomness of the fact. So Richard Dawkins' program is originally designed to explore that idea, to see how just the forces of randomness, just the forces of mutation can lead to very, very complicated forms indeed. That's where his program came from. Ultimately his view is to kind of push the idea against creationism and against the fact of God, he is a very strong atheist and he's trying to push in favor of the fact that the creationist view is a little bit silly and instead the atheist view is better. So let's go to the next page, please. Okay, so we are probably less interested in the kind of spiritual ramifications. Ignatius and I are longtime friends of SVG and I love the graphics that are here. 
So we looked at Dawkins' program and we saw, this is an interesting program, it's very graphical, it produces some very, very visually intriguing results very quickly and there are a lot of variations and implementations of Dawkins' idea in the literature which explore different, say for example, have used TK or used some different graphics systems and we looked at it in literature and we saw, geez, nobody uses SVG and vector graphics is the perfect, I believe, representation format for the evolution program that Dawkins has put forth. So we said, hey, we need to do this in SVG, I mean this is just waiting to happen, so, well, today you see it happen. Okay, so next page, please. Okay, so again, we are not too worried about does God exist or not? This is not the most important question to us, we are not too worried about evolution or not. What we are really interested in doing today is just having a lot of fun with SVG and seeing how what we call biomorphs will work, explain a biomorph in just a second and then also at the last we tried to incorporate some animation ideas into SVG and to sort of extend or expand a little bit beyond Richard Dawkins' original program which is static graphics. We want to expand a little bit beyond that with some, we sort of create some more, our own ideas to show some things. Okay, next. Okay, so let's talk a little bit about how it works in the real world. Okay, so forget the computer for a minute, you go to the real world, what happens? Okay, animals and plants in the real world reproduce either sexually or asexually or sometimes both. The idea is that there is gene encoding in all living organisms, that gene encoding is either paired with another or if it's an asexual reproduction, the gene encoding reproduces again. And so we don't really care too much about the way it works in the real world, whether it's sexual or asexual or both. For example, if you have ever studied certain insects like bees or ants, you notice that they actually combine. Sometimes the queen produces asexually, sometimes they do sexual reproduction when the queen mates with the drones and in fact we don't care about any of these things. We only care about, we cheat, we use a very simple selection method, we're going to call user input. So what we do is we present us various candidate genes and this is again inspired by Dawkins' original program. So what we do is we take each organism is controlled exactly by an array, which is an array of genes. Each gene controls some feature of the organism. We present in the demonstration you will see, we present an instance of each organism according to a slightly different gene. And the way we do this is we start from a given gene, we take the first element of the array, we modify the gene by adding or subtracting one and this leads to a new biomorph. We take the second element of the array, we again add or subtract one, this again leads to another biomorph. Say if we add one, we get one biomorph, we subtract one, we get another biomorph and each varies slightly. So what we do is we iterate through all of the members of the array, each gene in turn, and we produce for you an array of different biomorphs. Each is slightly different. This is the first step in what Richard Dawkins and what we also call reproduction. So the reproduction step is done simply by starting from one candidate biomorph, modify all of the genes, present for you a smorgasbord array and then you pick one through user input. Which one do you like the best? 
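The reproduction step just described maps directly to a few lines of code. Below is a minimal, hypothetical sketch (the function and variable names are ours, not the authors'): a biomorph is an array of integer genes, and each child differs from its parent by plus or minus one in exactly one gene.

```javascript
// Hypothetical sketch of the reproduction step described above (names are ours).
// A biomorph is just an array of integer genes; each child differs from the
// parent by +1 or -1 in exactly one gene.
function reproduce(parentGenes) {
  var children = [];
  for (var i = 0; i < parentGenes.length; i++) {
    [+1, -1].forEach(function (delta) {
      var child = parentGenes.slice(); // copy the parent's gene array
      child[i] += delta;               // mutate exactly one gene
      children.push(child);
    });
  }
  return children; // the "smorgasbord" presented for user selection
}

// Selection is simply the user's click: the chosen child becomes the next parent.
// var nextParent = children[indexOfClickedBiomorph];
```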
That becomes the new parent gene for the next evolutionary cycle, the next generation. We then again present the whole smorgasbord by modifying the array, add or subtract each one from the gene and what we result with is again smorgasbord slightly different ones and then you pick which one is nice. The interesting thing that happens in this program is that starting from something very, very simple in a very few number of iterations you find that you end up with something very complicated that looks almost life-like. And so this is the term biomorph comes from this idea. So the biomorph is simply an attachment of different graphical primitives structured in such a way that it almost looks like it comes alive. You will see. So, okay, maybe we go a little too fast on this slide. I think I talked about most of those. So next one. Okay, so we are, okay, you went backwards. Go to, to, to, to, to, to, to. No, you said. Okay, so yeah, again, what we are doing is perhaps I talked too quickly for my slides. What we are doing is we are going to go through and no, this is slide backwards. Go forward one, please. There we go. Next one. Forward one. Okay, there we go. Okay, so again, what we do is everything we have is implemented entirely in SVG. So the idea is we have an SVG document that's loaded with a bunch of JavaScript. The whole thing can run by itself. And the idea is the SVG itself, the SVG DOM, will render the first biomorph and then render all the children biomorph. So we use, there are two qualities that we use in SVG. The first one is that we use the interactivity ability of SVG. I mean, this is, this is what really makes the application by using the JavaScript. You click on something and to, to choose which one of the biomorphs you want to use. And we really take advantage of the SVG interactivity to produce which shapes we want. And then for animation, we are trying a couple of different things. We played with a couple of toolkits, Rafael and jQuery. I think the one we are showing today is the Rafael version. And ultimately we have, we actually started a port to jQuery, but I don't think we're going to show that today. Still some problems, yeah. Okay. So the idea is that the user will interact with, directly with the SVG document. It's completely standalone SVG with embedded JavaScript. Yeah. So, let's see, maybe we show next some, some code. So here's some example of the JavaScript code that we use. The idea is we just have some, this is the code which is initializing randomly some of the gene members. And essentially it's just a simple array. And we just, a simple array with various levels to initialize different things. We have the first gene index by zero controls the depth. So what we'll have is a series of vectors that are drawn and how deep the recursion goes is controlled by this first gene. The next gene is the X scale level. So how far do we scale in the X direction? Again, in the Y direction, how far do we scale? And these, all these things are various controls to just control some primitive, primitive graphic things. It's very simple. It's very, very simple. And then the colors and some animation quality. So this is a somewhat simplified version of what we have in the code. And shall we go ahead and... I have something to add to that. Maybe it's a bit confusing if you see what does depth have anything to do with biomorphs of the sort. But it's, it may be helpful to think that Dawkins original idea was that a biomorph is a tree of sorts. 
So the depth simply indicates the depth of the tree. How deep is it? And then the scaling simply refers to how long the tree is. So if you want to think of one part of the tree as a limb, the limb can be longer or shorter. It depends on the scale, the scale levels. And the depth simply shows you how deep the tree recurses. Is it much more complicated like a leaf of a tree? Or is it simply, does it look like the digits of my hand here? Something like that. So maybe we should show. Oh sure. Yeah, we show you a demo. It's more fun. So we have it right here. You have it fully blown up, huh? Yes. So this is, we have a simple randomized button here too. For lazy people like myself, if I just want to see it go through random sort of... That's the run function that you see. You can see that these range from different... Maybe just if I can interject Ignatius. Just to let you know what are you looking at. So right now this is running in Firefox. So this is running in Firefox. We've just loaded an SVG document. The SVG has, actually it's not technically embedded JavaScript, but there's a separate JavaScript file which it simply loads. So it's all essentially static content that we just load. And then the JavaScript animates the SVG DOM. Each time we are clicking on the buttons for either randomizing or generating the screen. Okay. So right now we have a parent which is the current organism that we have. And now here's where the user selection comes in. There is a list of eight potential children down there. And you get to pick which children you think is the most suitable to your selection. Usually in nature this occurs based on the environment. For example, the strong survive they say, right? And this is what you are doing. You are being nature and you pick which one. Which one? Any takers? This one? This one or this one? Okay, say let's pick the one that looks most different there. And then it becomes the new parent. And now you get to pick which other ones. See now in Dawkins original idea, he doesn't think about color. It's still black and white. We also vary color here. I'm not sure if you can see that clearly here. Yeah. Maybe if you keep going, it will start to show. Okay. So if we pick that. See this guy here has a sort of blueish color before the parent was a little bit yellow. And you can also select shape differences. This one has more elongated features. And this one looks like a rocket ship now. Flaking. So you keep on picking the one that is most suitable to the shape that you desire. And it gets complicated as you go along. So in Dawkins original work, the idea was you had a, you know, how does an organism evolve through, you know, the process of evolution? You start with an organism, some mutation happens in the presence of natural selection. The organism that is the most likely to survive because it has certain qualities is selected by its environment and then reproduces and produces more. So that's the whole idea of natural selection. Well, here we are also using, if you think Ignatius is clicking is natural, then we have natural selection reproduced here. If you think his selection is artificial, then we have artificial selection going on here. So right now we have Ignatius selection going on. So he's selecting for different traits and characteristics. The idea is that as he continues to select, the children will be continuing to inherit those features and traits which he selects each time he's clicking. 
And as you can see, the organisms continue to evolve in their shape, in their color and their attributes as he clicks on the children. And each time he clicks on a child, it moves the child to the upper part, which is the new parent. And then he's offered a new array to select, clicks on another one, becomes the new parent, and so on. Notice that the shape doesn't get as complicated as fast as the original Dawkins program. This is due to, we have several other genes that control the color. And from each of these selections, they actually vary only one gene. So we have 12 genes. And his original program only has eight, I think. Yep. This is good. Yeah. Doug, how much time do we have to continue? Six minutes. Okay, so what do you think? Is it worth to try to show the animation one? Now see, this is not IE 9, so it's going to be a little bit slow. So let's see. Yeah, we had, we'll show you another version that we tried to experiment with animation. So we add, in this next version of the program, we try to add a new gene which controls, you know, animation, speed of animation, those sorts of things. And these new sort of sets of genes allow us to, let's hope it works. Are you, we need to unlock that. Unlock. Oh, shoot, what happened? Oh, is it read only? Yep. Okay, let's see, this is control. I put the, this is control. So, okay, bear with us guys. Okay, go. Okay, let's reset. So it's, let's see. Oh, you know what the problem is? It's a read only file somehow. Yep. Oh no, okay. There we go, okay, where's the meta? There is a toggle. Hold on, it's a Mac. Meta is here. There we go. There we go. Let's see this guy. There we go. Thank you. Yep. We got the nice Emax user in the audience. Thank you, sir. Okay. There we go. Okay. Okay. Okay. Happy day. Okay. Control X, control S is your friend and say yes. Yes. God bless America. Okay. All friends. Yes. Anyway. Refresh this. It's a far enough. Okay. Good. So as you can see, the animation trait we've selected is glowing right now. Right now. Okay. It's the glowing trait. So everybody right now, it seems from the child, no gene is modified that alters the movement. So everybody is still glowing. So let's pick this guy. See if there's any other sort of animation. Animation, that's right. So far so good. So far so good. Oh, there you go. Well, I just missed it. I hear. Here's a different type of animation here. Right now it's lengthening and shorting. Yeah. We can establish more complex ones, which is what we're flying to do that before. So our idea was to have each join be joined in as a group and then we can animate that group to make it more lifelike. So this one doing the group thing not yet? No, this is not doing. We have some problems with Raphael. We can only talk about one of the things we wanted to do. We thought the idea is if it's possible, right, we saw with the Dawkins program the possibility of getting really, really complicated and interesting life forms with a very few iterations. So we thought, man, wouldn't it be fun if we can have maybe say something like a hand or something that moves that could maybe grasp or something that can walk or something like this. So this was one of our original ideas to try to accomplish with the animation. Unfortunately, in this version that we show you now, we are able to animate things with rotation, lengthening, shortening, glowing effects, but we don't actually have the grouping effect so that we can't actually get a limb or something to move. So stay tuned for the next version. 
Maybe we will have something fun like that. And this actually is why we've decided to move to jQuery because this is the limitation of Raphael. Raphael does not support animating groups at all. So you can only animate one certain portion of like a SVG primitive. That's what you can animate. You can animate groups, but jQuery supposedly can and that's why we're trying to port it over to jQuery, but it's still buggy. Okay. Okay, great. Thanks a lot. We have a minute or two for a couple of questions during questions. Go ahead. I think there is a whole strategy. All of these selection criteria mechanisms. Is it possible to have some automatic selection criteria which could be really accelerating the whole process? That's right. That's right. So there's actually, we included also in the literature for the paper some ideas of other people who have done similar implementations of Dawkins ideas. And there is one, perhaps after the talk, I have to take a chance and look again through the references to find exactly which one, but there's another fellow who's implemented the same thing and taken exactly this idea. So the idea is to try to create an environment which simulates that of natural selection by essentially by making the organism very difficult to survive and giving it some difficulties that it has to overcome. So it's a good idea. But for now we have the randomized button. Yeah. For now we just have to use human interaction and you're right. It makes it a little bit slower to develop organisms. Yeah. This is where we can use some of those new IE9 features that, because right now it's very slow. You really like that IE9 stuff? It's fast. Yes. It's fast. Thank you. Thank you. Yeah, good job. Thank you.
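Since Raphaël cannot animate groups, one alternative (a sketch under that assumption, not the authors' jQuery port) is to drive a group's transform attribute directly with requestAnimationFrame, so a whole limb of the biomorph moves as a unit.

```javascript
// Sketch: animate a whole <g> (a "limb") by updating its transform each frame.
// Assumes an SVG group with id="limb" whose pivot is at (cx, cy); ids are hypothetical.
function swingLimb(svgDoc, cx, cy) {
  var limb = svgDoc.getElementById('limb');
  var start = null;
  function step(timestamp) {
    if (start === null) start = timestamp;
    var t = (timestamp - start) / 1000;                // seconds elapsed
    var angle = 20 * Math.sin(2 * Math.PI * 0.5 * t);  // swing +/-20 degrees at 0.5 Hz
    limb.setAttribute('transform',
      'rotate(' + angle + ' ' + cx + ' ' + cy + ')');
    window.requestAnimationFrame(step);
  }
  window.requestAnimationFrame(step);
}
```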
Inspired by a similar program introduced by Richard Dawkins, our program provides a dynamically updating display of primitive animated SVG biomorphs. Our SVG biomorphs evolve on screen under the influence of mouse interactions that guide mutations along an evolutionary path. Genes control organism traits, which are realized in SVG with JavaScript animation. JavaScript events trigger selective mutation, forming the interface for user interaction. This paper describes a system of dynamic shape-based organisms with quantitative traits, evolving from simple to complex forms through user selection.
10.5446/31208 (DOI)
Okay, so this is the schedule. I'm trying to drive up by a bit of really true work because regarding dynamic charts like real-time ones like one shown in JSX graph, there isn't much about it. So this is one of the motivations, one of the difficulties and causes to somehow implement something like this our own. So let's drive by a bit of related work, a bit of history first. This was related in a project started FSSEC which is a Portuguese company for power systems and lateral mechanics. We had a previous system which was only a local one and we started developing a web port. We started a few years ago and so the previous system only allowed local access and of course the web-based one still allows the local access but it's much more interesting to access by a PDA or a personal computer. So this is used mostly in SCADA systems which supervise power plants and malls and such where you see diagrams with the current system status which are built in SVG and were already presented in paper two years ago. And one of the interesting things to also monitor is real-time data and historical data. So you use usually charts for that which we call trends. Although trends usually is a mathematical concept so it can be confused because it usually shows forecast analysis and such but we call it dynamic charts to don't mix up things. So I'd like to show a few related work maybe just to have a notion on things that exist. I'm going to show you Windows stuff because most folks won't be able to see this afterwards so I hope this is not a bomb attack. Okay thanks I'm actually safer now. So this is one of the examples with the real-time data. This is just the Windows performance monitor so you can monitor the processor usage for several processes like M-monitor in Firefox and EA and our own web server with this and it really just displays data and doesn't support much navigation through data. It only keeps overwriting it every time it passes the buffer but it has a few neat features such as tooltp so you can see details, you can also browse by selecting lines and see if you stick statistics. Another example is also a Windows based. I'm only showing the ones which are Windows or private the other ones you can afterwards see and access directly in the web. So this is another process monitoring tool which I found pretty neat it's by Microsoft's Season Tunnels. It also allows you to monitor system related stuff and the real-time chart also doesn't allow basic things like zoom and panning but you can increase it and you can also see further details by hovering and you can see past data you can not adjust the data dynamically or but you can adjust the buffer by messing with the window size so when adjusting the size you'll see past data. So another one of the coolest examples this is web based this is by Jeff Schiller it's one well-known I guess but it's worth seeing again it shows not real-time but it's historical data and it shows it has a few neat things like allowing to narrow the visible interval so you can see it in more detail. You can also drag a point and see further details on the right so this is this is a demo with a few years old but it's still accurate quite current in terms of what it allows to do because in some in some kind of chart you aren't able to to navigate through the data you can see it but you aren't usually allowed to see many details. 
So there are a few more one of them I was already presented today in JSX graph you have also real time Raphael and other civil rights based demos and canvas and stuff but those are available online so I'll just skip them. So we needed to develop a solution so we created an MPC model of course to model this stuff into the the run the SVG runtime so it's all within SVG although we use HTML for some kinds of interaction we run the runtime all the good the runtime and configuration is done in SVG. So we initially made some technology selection and of course we ended up using SVG but we evaluated flash as well and swing Java and XAML and we didn't evaluate canvas at the time because it was still in early adoption the study was conducted about three years ago so it was still very cloudy at the time but even still flash was second place and of course being proprietary and licensing and stuff we ended up with SVG again. So we modeled some kind of diagram of what we could do to make a graphics visit viewable and afterwards browsable so we have the regular axis stuff and if you want to browse through the graphic if you want to allow viewing details you should have probably static axis which kept always in the same zoom and afterwards you could have the data which is represented there simply drawn within a clipped area so you can zoom in and zoom out and browse through it. We used HTML and externalized HTML context for doing all kinds of configuration and we later do some information exchange to SVG and we also implement navigation in HTML for browsers without native support. So there are a few neat features in SVG which help developers quickly starting up. When well used viewbox and preserve expect ratio are pretty nice. We wanted to do a simple demo but we didn't have time to just have a prototype kind of the animated version of this sample simplest one working. Also current scale, current translate can be manipulated to simulate zoom whereas natively whatever is no interface for native zoom and of course the basic stuff like clip paths and using the DOM for storing the graphics and metadata about them and of course JavaScript for the glue. So using zoom and pan to browse through data this was a cool initial idea why because it's intuitive. The user knows how to zoom and pan it just clicks and makes window zoom or clicks in or zooms out or drags to pan. It was a good idea but it's not very hard to implement although it's currently broken in our software. Basically you need to if you're using a single canvas for everything you need to apply inverse transforms to stuff so they keep in the same place. You need to adjust labels and access so everything is matched with the visible data but the remaining you leave it alone. So the user browse through data and it's SVG, it's scalable so it zooms in. On the other hand it has a few cons in analyzing the data because if you are in this sort of situations like if you have very wide ranges either in Y or X axis you'll have plenty of difficulties. You basically can't analyze the data properly because once you zoom in you lose the context. So this is the major con in using this approach to browse through data. If I zoom in through this interval, if I zoom in here, if I want to see some details on these points I lose information about these. The other way if I want to zoom in and see this floating, this small floating here I'll end up losing the limits. So it's the major con on this approach. 
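The zoom-and-pan approach just described can be sketched as follows, assuming the chart is split into a static axes group and a clipped data group (the element ids are hypothetical, not the product's code): panning translates only the data group, while the axis labels are recomputed to match the visible interval.

```javascript
// Sketch of pan with static axes (ids are hypothetical).
// <g id="axes">...</g>                           stays untouched
// <g id="data" clip-path="url(#plotArea)">...</g> receives the pan transform
var panX = 0;

function panBy(dx) {
  panX += dx;
  var data = document.getElementById('data');
  data.setAttribute('transform', 'translate(' + panX + ',0)');
  updateAxisLabels(-panX); // relabel the ticks so they match the visible data interval
}

function updateAxisLabels(visibleOffset) {
  // Recompute tick label text from visibleOffset; the tick geometry itself
  // never moves, which is what keeps the axes readable while panning.
}
```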
Even so when you have interactive graphics you have several approaches for for interactivity. We chose to have an interactive area around the data point so that it's more easily clickable. Either way this was our simplistic approach but best would be calculating the nearest point whenever you are. So you don't have to click near the point you have the tooltip or specific details without having to click in a specific point which is very hard in clutter graphics. No more. So let's try to see a little bit of this and not working. So I guess I didn't show one of the demos. It's not ours. It's one of the stuff I want showing related work as it is successful on the web so you can also go there. It's also kind of a scatter system which has diagrams and graphics. I'm showing you because it's super light and many can't access it easily. So this is one of it explores one of the cool stuff on real time graphics which allows you to see past data. In this case I just opened it so there is no interesting past data to see. But if you leave it here about five minutes you can scroll back. You can zoom in. It's a limitation on this demo I guess. You can see detailed information here but you can access historical data which has been acquired. Another related work I want to show you was a product we have for a web interface for it. It basically also uses XSLT to generate SVG and basically generates a static chart. The only interesting thing here is using tooltips, native tooltips which aren't used too much these days because they are not well supported in SVG. Firefox 4 will have tooltips for title elements which is great. You don't have any zoom or pion abilities here. Just have a static graph and linking to specific line at the table with further details on the click point. How are you? So what do we got here? So we have here a small preview of what has been done. It has a simple view which is what we call in minitrain. It's just also kind of our remote system monitoring. It's showing the processor and memory and disk usage of a remote station which in this case is local. So this has no zoom abilities. This is just kind of monitoring stuff. So let's load some kind of configuration. So this is the full widget. We are using HTML for all the user interface stuff and for the legend. And afterwards this is only SVG. You can see details, tooltips, but unfortunately I broke all the zoom and pan behavior which was actually working almost in the initial prototype we've done about two years ago. But as ASV was not working properly with the inverse transforms, we had to disable that altogether. And we didn't yet made it work. Only about we scheduled it for the end of next month. So this should zoom and pan but the inverse transforms aren't being applied. So you can actually, you should be able to pan and the expected result was obviously that the axis kept static and that's only the graph parts suffered the transforms. So you can select lines either in the HTML context. So they stack upon the others and allows you to see cluttered stuff. For example, on a demo with historical data, choose a bit more cluttered. So this is one of the cases hard to solve because it is cluttered and should be able to zoom here. So you see nothing. You almost see a rectangle, a field rectangle because there is a lot of data concentrated in the data. So you can also insert and modify lines. You have the standard configuration interface which is not SVG based. 
So you have to configure everything in HTML and afterwards you can, it passes on to the SVG context which triggers the graphic loading. So basically the project's main goal was achieved back then when you started implementing it with just to display real time data and historical data. The navigation part wasn't well working back then and we didn't spend any effort in this two years for it. We'll read back again or maybe think about using another frameworks. But the fact is we face difficulties because we see that currently most JavaScript frameworks and such lack support for real time graphics. So there is no support for interactivity in real time because data is moving and you can't follow it with your mouse. It's not very user friendly. So there is a lot of interactivity stuff and browsing back data which was proposed using Zoom and Pan here which isn't addressed in most libraries even not only JavaScript based, Java based also and not many I'm familiar with. So this was the main difficulty. Not sure. So for future work we can start thinking about using SVG 1.2 vector effects so that lines don't get scaled when you zoom in. This is pretty cool and helps avoiding those cluttered graphics which almost look like rectangles. You can also implement some kind of runtime and decluttering support. This idea is to allow turning that dumble back then to have runtime decluttering support. This isn't the same runtime but it is not runtime adjusted. It's pre-configured. The idea was to do something like that in runtime using maybe media queries which was already explained by Andreas in the open so that when the screen real estate wasn't enough you start removing stuff such as the legend and afterwards maybe the access stuff and you ended up with something like this and maybe when you get done space enough you ended up with something like a sparkline. This idea was also explored in that tool I showed you because in the main window it actually has a trend line. So this could be a future idea so that this kind of real time and not only real time, the charts would be real time adjustable. This would be really cool. So I'd like to thank my co-authors who could be here. I think my wife who had to pay her money and had to which got quite motivated me and the rest of the department of course in the company who helped give a few ideas and developed this and brought me here basically. So thank you. Any questions out of time? No, you're perfect. Any questions? I had a question. So you mentioned using media queries to maybe rearrange everything and refold stuff. That's interesting but media queries currently works with CSS and you couldn't rearrange everything with only CSS so would you need some kind of JavaScript access? I actually made a small demo using media queries for media queries. So the idea was for example have media queries to simulate the vector effects, the stroke. So media queries does funny stuff. So that's all done with CSS. So media queries is just the answer. You need probably JavaScript to rearrange stuff. Are there any other questions? No? Thank you very much again. In other news the last talk in this session this morning has been apparently cancelled. It was in the errata. Yeah, but making sure that everyone knows. It's time for it all introduce themselves to see the directoroo of the
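The two future-work ideas mentioned above can be illustrated with standard mechanisms; the sketch below is only a hint of the direction, not the authors' code. It assumes a #data group of polylines and a #legend element: vector-effect="non-scaling-stroke" (from SVG Tiny 1.2 / SVG 2) keeps stroke widths constant under zoom, and window.matchMedia gives a script hook for media-query-style runtime decluttering.

```javascript
// Sketch only: the two future-work ideas with standard APIs (ids are hypothetical).

// 1. Keep plot lines thin regardless of zoom (SVG vector effect).
document.querySelectorAll('#data polyline').forEach(function (line) {
  line.setAttribute('vector-effect', 'non-scaling-stroke');
});

// 2. Runtime decluttering: hide the legend when screen real estate is scarce.
var narrow = window.matchMedia('(max-width: 480px)');
function declutter(mq) {
  document.getElementById('legend').style.display = mq.matches ? 'none' : '';
}
narrow.addListener(declutter); // re-evaluate when the viewport changes
declutter(narrow);
```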
Graphical representation of values over time using charts is one of the most popular and effective ways of displaying data, easing the establishment of correlations between measurements, the creation of trends, the prediction of system evolution, and more. Plotting data acquired in real time and plotting archived (historical) data are two important aspects, allowing use cases such as system monitoring and post-crisis analysis, respectively. The differences between these use cases pose challenges in both visualization (plotting) and interaction (navigation). While the former use case is reasonably addressed by an increasing number of software libraries, the latter is sparsely covered in the current state of the art. In this article a solution which addresses the problem is presented, based on SVG (Scalable Vector Graphics) and other Web-related technologies. A prototype which implements the proposed solution is demonstrated, as well as the currently existing system. A set of tests allows evaluation of whether the established goals were achieved.
10.5446/31209 (DOI)
I'm here, so it's a bit too hot. I think I'm going to have to wait for the recording. It's okay, David, otherwise you can keep on moving. Hold the mic. Okay, this talk is about SCADA, supervisory control and data acquisition. Right. One, two, three. This is also known as HMI. It's widely used in industry to monitor the real-time status of the plant or process. The main function of HMI is self-explaining: it serves to translate machine signals into human-friendly signals, such as sound or visuals, and specifically animation. The question is, why SVG in SCADA? Because the demand for SCADA over the web is increasing. When SCADA companies try to port their systems to work over the internet, they would normally require specific software to be installed. So when SVG was introduced, it immediately drew our attention, because nothing needs to be installed on the client. So this is a basic SCADA screenshot. It shows the details of every connected piece of equipment. For example, this is a pump: red color stands for stopped, green stands for running and yellow for a problem or trip. This is color animation in SCADA. The tank beside it shows level animation, while the text that changes over time here is the text animation. This presentation will focus mainly on five major animations in SCADA. They are color, level, move, opacity and rotate. To perform the animation, we use JavaScript to access the SVG DOM and change properties or attributes periodically. To achieve this, we modify an Inkscape object's properties to create an interface for the user to configure the animation of the object. This configuration is then converted into JSON-based text called XSAC, the IntegraXor SCADA Animation Code, and this XSAC code is stored in the SVG file as an attribute. When the SVG file is opened at the client side, the JavaScript engine starts gathering data from the server and performs the animation according to the configuration set. With this implementation there is a small drawback: we cannot achieve SMIL animation, which is smooth. All five major animations I mentioned can be produced using the current SVG animation elements, animate and animateTransform. However, all these animations are time-oriented. We must specify when the animation is to begin and end, and also the duration of the animation. So we cannot specifically say which animation is to be carried out when something has happened, and this is very critical in a manufacturing plant. To work around this, we can use JavaScript to append a new animate tag to an SVG object every time the object has a state change. But this is not practical in SCADA software, because some objects have a very short duration between state changes. So below is a comparison between the implementation we are doing and SMIL animation. From this implementation, we introduce the concept of state-oriented animation. This state-oriented animation is generally similar to the current SVG animation, but it has a state attribute here. The begin and end attributes are removed because it is not time-oriented. The duration is kept to determine the transition time between states. To change the state, we will use JavaScript to set the state attribute. This slide shows how the state-oriented animation we propose to SVG Open can be applied. For color animation, the list of state colors is defined in the color attribute.
In this example, there are two states separated by a semicolon. The default state of this example is 0, which takes the first color. When the state of this object is changed to 1, it takes two seconds for the object to change color from blue to red, which is the second state here. Level animation is usually used to indicate the depth of a water tank. The from and to attributes indicate the minimum and maximum range this object can reach. The state attribute of this animation takes a number between from and to; it indicates the current level of the water tank. The duration works the same as before, so it will take nine seconds for every state change. This is actually just an image on the slide, so it is not animating here. For move animation, the state of the animation is a percentage that indicates the current position of the object. The duration is also the same as before; it will take six seconds to move from one state to another state. Again this is just an image, so I cannot show it because it is not animated. Opacity here is generally similar to the current SVG animation. From and to mean the minimum and maximum opacity. The state will take a number between from and to. The duration works the same as before. In this example, when the state is one, it will take two seconds to change from zero to one. For rotate animation, the state attribute is the degree of rotation, the direction the object is pointing. It generally works the same as the other examples with respect to the duration and state attributes. Time-oriented animations are not meant to indicate the live status of an object. For example, this aircraft flying back and forth is usually animated with a number of loops. That gives a general idea of a situation where accuracy is not in question. State-oriented animation gives the specific location of the aircraft. This would definitely benefit all real-time applications. Other applications such as games or simulations might also benefit from this state-oriented animation. However, this concept was developed from our experience in developing SCADA; it might not be fully useful to other applications. I hope the SVG community can look into the possibility of making this happen, to make SVG more competitive among its competitors. Thank you. It seems like you might be able to get a lot of this using the begin and end method calls on animation elements. If you are already using scripting to set the state attribute that you are proposing, you could just as easily get the animation element and call begin on it at the time that you want to set the state. We actually tried to use the functions beginElementAt and endElementAt, right? They still use time to position or stop the animation. What I am talking about is an animation controlled by state, not by time. If we have to use beginElementAt and endElementAt, we will have to map the state onto time to position or stop it, so this is the limitation. Just to follow up on that suggestion: you can specify when to start and stop. You can use the beginElement and endElement calls that were already mentioned, but instead of specifying from-to intervals you simply say begin and end, and you only trigger them by going to the DOM element and triggering it with scripting or with mouse events. That part, I think, can be done. Just a suggestion. Do you mean that this object can only change once? No, it can change as many times as you want. You can start the animation frozen initially and only trigger it when desired.
I guess we are already doing that in our latest recordings with things like that. To trigger animations, only when state changes. So actually we do something like this and use Smile for that and trigger animations when state changes. And when you use begin and end element, that is what is normally the case. Actually, I'm not sure if I can do it anymore. You could probably also use. The thing is I didn't quite understand the state attribute. Was it something that was live in the dom to change the state? What was the state? You can actually understand the attribute. How would it be used? Was it to specify a default value for the animation? State is quite common. I'll take color as a sample. This state is related to this color, which is configuration of the state. So when the state is zero, we will take the first state. If you manipulate the state attribute, you just need to expect the color to change to the second one. Here it is. Are there other questions? Maybe just one more. I'm suggesting that you can actually, as I said, you have to map state with time. And you can position animation in the given moment of time using any new value. So you have the base value, which is the dom, which is in the original XML declaration. You can change the animated value using the dom. This is a true data problem. You can see that it's not still live in the dom. That doesn't get the interpolation. It would get smoothed with the interpolation. It's just to put it from there and then we'll be able to stop it. I have one question here. Were you able to work around these problems, your experience, to get to the solution that worked as you expected it to work in a way? You explained the problems you experienced when you were trying to do what you wanted to. Were you able to work around these problems in some way, to find a solution for the problems using JavaScript, using tricks? Yes, we can actually use the current SGG animation, but you know, as I mentioned, this workaround is to, it works like a pad with new animate tag into the object. And whereas it has another state change, we remove an append and new one. But this workaround is not practical because sometimes a single object has a lot of state changes within one second. So if it has a thousand of objects same as this, it will crash. Do you have an example in SGG here to show us? No. There's no more. There's no more. So you said that your editing environment was basically in this state using the planning, is that available? Is the project available? Yes. I can show you the landscape. In this case, you have only five. I'm sorry. Is it broken in pipe? Is it broken in pipe or how we edit it? Is it made of extension of back in forwarding state? No, no, no. I'll show you how that works. So when I click on this object, you see bar animation. This is the state configured in the bar animation. So it will follow the data which is generating every second 0 to 59. So it will get on the workaround to 0 to 59 and it becomes 0 to 8. So this is what we are doing. Any more questions? Thank you very much.
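A minimal sketch of the workaround discussed in the questions above: keep a SMIL animation with begin="indefinite" so it stays frozen, and trigger it from script only when the monitored state actually changes. The ids and values are hypothetical, not the IntegraXor implementation.

```javascript
// Markup assumed in the document (hypothetical ids and values):
// <rect id="pump" fill="green">
//   <animate id="toAlarm" attributeName="fill" to="red"
//            dur="2s" fill="freeze" begin="indefinite"/>
// </rect>

// Trigger the color change only when the monitored state changes.
var lastState = null;
function onStateUpdate(state) {
  if (state === lastState) return;   // no state change, no animation
  lastState = state;
  if (state === 1) {
    document.getElementById('toAlarm').beginElement(); // start the frozen animation now
  }
}
```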
SCADA/HMI, a real-time monitoring system, has recently been developed to work on the web. It is very common nowadays for people to be able to monitor and control their home, office and plant remotely using a web client. SCADA/HMI, which is characterized by rich mimics, has traditionally functioned as a desktop application. When SCADA/HMI was required to run on an intranet or the Internet, Java and ActiveX technologies, which are not W3C standards, were mainly used to handle the extensive graphical animations over this medium. SVG has made rich animations possible over the net without using proprietary technologies. Ecava developed and delivered its first SVG-based SCADA system, called "IntegraXor", as early as 2003. Ecava further developed XSAC (IntegraXor SCADA Animation Code), which allows animations to be done easily by linking a JavaScript animation library instead of doing JavaScript programming. XSAC is a series of animation attributes written in JSON format which can be easily generated. Ecava has developed SAGE (SCADA Animation GUI Editor), based on Inkscape, for this purpose. The JSON syntax contains the animation to perform, a tag name to listen to, and the parameters for the action to take. For example, a circle created in Inkscape is given a COLOR animation, and the color of the object changes according to a predefined parameter associated with the tag value, which is updated from the server. The update procedure requires a JavaScript library to loop and make HTTP requests to the server. The major animations for SCADA applications are Color, Level, Movement, Opacity, Rotate and Text. They are all tied to at least one variable, normally associated with data from a piece of external field equipment. The animations correspond to the actual status of the field equipment.
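The update procedure summarized above (a JavaScript library that loops, requests tag values over HTTP and applies them to the drawing) can be illustrated generically. This is not IntegraXor or XSAC code; the URL, tag name and element id are hypothetical.

```javascript
// Generic illustration of the polling pattern described above
// (hypothetical URL, tag name and element id; not actual XSAC code).
function pollTag(tagName, intervalMs, applyValue) {
  setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/tags?name=' + encodeURIComponent(tagName), true);
    xhr.onload = function () {
      if (xhr.status === 200) applyValue(xhr.responseText);
    };
    xhr.send();
  }, intervalMs);
}

// Example: color a pump symbol according to its status tag.
pollTag('PUMP_01_STATUS', 1000, function (value) {
  var colors = { '0': 'red', '1': 'green', '2': 'yellow' }; // stop / run / problem
  document.getElementById('pump01').setAttribute('fill', colors[value] || 'grey');
});
```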
10.5446/31210 (DOI)
Okay, so the topic of the talk that we present is a work that we're doing together with Jonathan Cilan here on the electronic program that is using the SDG. And this is completely in line with what you say about IPTV, for an analysis, we'll probably be the promised sort of thing, this issue is very consistent. So the context of our work is that we're witnessing convergence in many directions in a multilid world, right? First, there are convergent photo and video codecs. MPEG4 AC is supported in the most everywhere, AVC is also everywhere, no question about that. And for graphics, it's very similar, HTML, file or whatever, and SDG as well. So as people from Microsoft have said it, we will witness graphically and media-rich content everywhere, on PC, on set-top boxes, on phone, tablets, like this iPad here. And in this context, TV is also evolving. First, okay, it may be obvious that TV is becoming digital, right? But still, so yeah, and with digital TV we have more programs, more channels. And just to stress that it is really becoming digital and only digital. For example, in France, the switch off of the NLTV is scheduled for end of 2011. And there's an opportunity here with the TV becoming really digital to have more interactivity. And one of the standards that was not mentioned, you mentioned all IPF, there's a standard, new standard in Europe called HDB TV, or HTML broadcast broadband TV. It's an Etsy standard, so if you're interested you can have a look. So TV is evolving, devices and networks are evolving also. So as said, devices can play video, TVs can get connected to the internet already. And the mobile world also is changing. You can already have access to TV, 3G networks, but also mobile broadcast networks are being deployed. As you mentioned, the ATAC and H and in Europe, the DVH has not started, it would probably never start, I don't know. Maybe there would be a replacement like DVSH. But still, there are problems of deployment of these networks, cost problem. Especially when you want to reach, studies have shown that the usage of mobile TV is not really mobile. Most of the people use it at home, surprisingly. They are in their rooms with their phone and they look at the TV on their phone. Some studies have shown that. So one of the problems in deploying mobile TVs to have the single penetrating houses, so either you need to put more power in the antenna or you need to put more antennas. And that costs a lot. So here comes the project in which we are working, it's called Pingo. So it's a project to the three companies and us. Deepcom, French manufacturer for TV signal receivers, DVB, TV, HSH and so on. Baraco-Dameja is an integrator and it's financed by the French research institution. So we have this device here, it's an embedded platform. The goal is to have this device located near the window that captures the signal and redistributes it in the house and adapts it. So it can already receive DVBT, DVBH and DVBSH signals, transcode the video, changing the format. For example, if you receive HD signal and you want to send it to a mobile phone, you have to do some transcoding. If you receive a PEC2 video, you need to transform it into a PEC4AVC, things like that. And what is interesting for us is the transformation of the UPG data. And we'll talk about that later. And we're targeting PC phones and tablets, not TV, because TV can already receive the signal. 
And so the principle is that the end of our work at the ComparisTech is that we develop a framework or a method to receive the UPG data, to transform it into SVG and deliver it over Wi-Fi. And we investigate two approaches, one is streaming, and the other one is traditional AJAX. And of course we adapt it to the size of the screen, for example. So, okay, I will skip this. You know what is an electronic program down in UPG. To example, you showed some others. Just so that you know the ecosystem of the UPG, there are at least three. First, if you look at DVBT, so DVBT is the standard for terrestrial TV, not mobile, just for your TV in your house. And the UPG data is delivered over PEC2, a transportation stream. And the format is binary format. The format of the data, the UPG data, is a binary format called DVB and event information table. On the web, there are many examples. You retrieve it from the IP, and you can get either HTML or XML or whatever. And in DVBH and DVBSH, you have the same kind of protocol stack you showed for ATSC, MH. You have first the PEC2TS, then some IPDCs, a way to deliver IP packets on over MPEG2. And then you have the protocol to deliver the files over IP. And then you have some XMLs, maybe TV and ETIM, or something else. And so we implemented inputs from these three sources, types of sources. So again, why SVG? So yeah, well, of course, we like to work with SVG, and our player implements SVG, so it was natural to try it with SVG. But also SVG has interesting properties. The layout, for example, not having to rely on a flow layout allows you to put your programs into a layout that's not a table, that's not a strict table. Then you have gradients, you have animations, and you have interactivity, so that's nice. There is a video support, so in tiny, so that allows us to show the grid and the TV at the same time. Plus SVG is now supported more and more, so it's a big hit, this conference is a title. Then again, there's a mobile industry interest in SVG, through GPP, OMA, HTC, whatever. And what we wanted to investigate was the streaming of SVG, and we can do streaming using the RME or DIMS, it's almost the same thing. So for those who don't know what is DIMS, DIMS means Dynamic Interactive Motivating Scenes, it's a 3GPP standard based on the VET4 laser, which is based on SVG, so it's a stack of modifications to SVG. And the concepts are, okay, there are some extensions of SVG, but we didn't use them in our project. The most interesting part is that with DIMS you can send from a server to a client, you can send modifications of SVG when the server decides, not when the client requests them. And there can be insertions, removal, replacement, just as you would do them by script using the DOM. And there can be, these times, the modifications of SVG can be stored, either in files like MP4 files, and then you can synchronize them with the audio and video to make subtitles, to make regional interest tracking over a video. But you can also send them from a server, like any streaming server, or you can do progressive download of synchronized streams. So that's the architecture of DIMS. So the architecture of our system is first we have this server, which receives the DDB signals, transmits the transformed information over Wi-Fi to a phone, let's say an iPhone. And the architecture is first we have some demoderator receiving signal, we separate the audio and the EPG, the audio and video is transcode, the EPG is transformed in SVG, and then we have two paths. 
One is DIMS packaging and streaming over at RTP, which is as if we were doing video. And the second path is storing and then serving it with a classical web server. So there are, of course, advantages and drawbacks for all the approaches, and I'm not saying that one is better than the other, it's just different, and depending on your use cases you might want to use one or the other. But the traditional AJAX approach, our server will go on the web, retrieve or go with a T-signal, retrieve the program, generate XML files, store them in the web server, and they are able to fly it with parallelity, retrieve the XML and update its display. So with this approach, of course, we have a live server that's just decoding the programs, creating an XML and serving them. Adaptation is nice because then it's easier on the client than with the streaming approach because everything is on the client, so the client can do everything, can decide. There's no privacy issue of sending back my preferred programs, or there's no... It's simple, but okay, it's not as... The dynamicity is not as good as when you're just streaming, because when you're just streaming, every time there's a new program, you receive it, or a modification of the program, you receive it. But in the AJAX mode, you have to pull it, you have to wait for the next poll to update it. And also the layout, when we do the AJAX approach, we just send the XML of the APT programs. We don't change the layout. With the streaming approach, we can send modification of the scene to the date, the layout to the date, the charter to date, the style, and so on. So you can imagine that for Christmas, we could send updates of the theme, a theme like the Google page you have during Christmas is different from another time of the year. And for the streaming, the principle is that the program is retrieved from the broadcast source, they are transformed, packaged, and streamed over our RTP. And well, I don't know if it's a pro or a con, it's a pro or a con, but it reproduces the behavior of the broadcast channel, right? Things are pushed until the end user. There's no storage in the middle. The client is lighter. In our demonstration, we use JavaScript on the client only for the navigation, so I'll come back later on that. But for the construction, for the display, JavaScript is not used. We have a heavier server. The bandwidth between the server and the client is increased because we're sending EPG data plus presentation data, right? And it's more difficult, as I said, to adapt. To adapt, not to adapt. So some implementation details. The server, as you see in the picture, is an embedded platform running an R processor at 400 MHz with a video transcoding chip. The clients we are using, so for streaming, for DIMPS streaming, we're using only GPAC. It's the only player we could have. I don't know if you're a player, so for R and E. Okay, so we would be happy to test it for interpretability. And I know that someone else may be a keyboard. And for the Ajax, okay, we used all the browsers except IE9, and I tested just before my presentation. It doesn't work, so we have to fix it. So some results. Here is how it looks like on Chrome. So you have a grid, a typical grid. The video is displayed there, and then when you click on a program, you have some additional information here. We will see it on the video, in more detail, just to show you the looks. Opera. So in opera, we have some problems with the playback of the video. 
We have some problems with the same, the supported problems. This one is with Safari. Then we have, okay, this one is GPAC. It's a bit different than this one, and it's a DBBT example. The other one where the source was the web. This one is really DBBT, and in particular, it's in DBBT that when you retrieve programs, you only get two programs per channel. The current one and the next one, and they are being updated as the program goes. This is on the iPhone, and this is on the iPad. So you can see, for example, one kind of adaptation is that when you're on the iPad, you have room to show the video and the grid at the same time. When you're on the iPad, you cannot. You have a button here to switch between video mode and graphics mode back and forth. So I will show you the video of the iPad. So we're starting the application. It allows the grid and the video. And when you select the program, you can see that it's displayed here. We can shift the dates, like touching right and left moves the grid right and left, touching the grid vertically or changes the channel in display. You can zoom to display more or less, program changes the size of the grid. When we click on it, you can have more information, more of the abstract, the synopsis, the ratings, and still showing the video at the same time. And then if we want, we can go full speed, of course, just to show you the video. And then we have filters, so we can filter the content that we want to show, just movies, for example. And so programs have been removed and only the movies are remaining. Then we can go back to the app. So this is for the video. And then I have a live example. We're going to have to refresh because when I plug the keyboard, the resolution of my train has changed. So the layout is not dynamic, when we resize the window, we don't really reconcute the size of all the games. Yeah, okay, this is not live video. This is just one video because our server is in the other building and somehow it's down. So this is the same as the panoramic zone, actually. So it was down five minutes ago. And so you can see if I shift, I can browse, I can have more channels. And so on, okay? Okay, my last slide. Problems and limitations we faced when we did that. My first interoperability problems is usually the tiny one.do is not supported well enough to my taste. The first text area is not supported, so we don't have flow text easily. We have to do foreign objects, HTML, div. This is just not great. We can argue about that. Just turn everything inside out. Actually, we have first an HTML to have the video element of HTML. Then we have the SVG and then inside the SVG we have the foreign object with HTML. So we have three layers. But you do not need to take the whole area where you do the readout. You can do the multiple that demo. Yeah, let's give you first question. The trade access API is also not enough supported. And I think we have this discussion yesterday during the panel, the video element also. We had also a difference between events on mobile, like on the iPhone iPad. The name of events are not the same. Touch events instead of gesture events instead of clicks and so on. Lack of SVG support on Android platforms. Yes, I hope it will come. Lack of Vim support also. Windows, I think it's nice. And then we had some missing SVG features that we think would be nice to have. For example, screen orientation detection by events or whatever it means. Sorry. Pixel density detection. 
We wanted to change the font size or the layout depending on the pixel density just to make sure that the layout is proper, that legibility is correct. And this is missing. Text ellipses. When we have the name of a program that is that long but lasts for 30 seconds, we have to put it in a rectangle. And at the moment we just do a text area with clipping. Or I forgot where. Yeah, we do it also with clipping with CSS. Yeah, okay. We remove the name of the program instead of showing the program. And then we add a mode where we could replace long text with dot dot dot. It would be nice. And Z-Order also would be nice to manage more easily the different layers. The TV, the grid, things on top of the TV and so on. And there's an example in SVG spec. I think it's one of the two about nav index. And this example is exactly about EPG, right? There's a grid. You cannot use it. It's impossible. Why? Because you have to give next, well, north, south, northwest and so on. You have to give all the elements that are southeast, west and so on. But when you're doing this with programs that have duration, changing duration or small or short, the layout is not that easy. So it's very, it's not really usable. But we don't have yet a better proposal or fix it. We just wanted to report it. So, merci. Attention. I say it's on my computer. So, thank you. Thank you. Thank you. Thank you. Can you go back to the previous slide, please? It's just that this is the first part of my question, which is not really a question. I think some things you list as missing SVG features are not SVG features. Screen orientation detection. This shouldn't be SVG HTML, but I don't care. Right. So screen orientation is being worked on in, sorry. Yeah, that's being worked on. So density, there's a media query for that, I think, nowadays in the works. Text ellipses is in CSS. And that order is definitely on SVG features. But seriously, my question was, at some point you mentioned the differences between the streaming broadcast model and the HX model. I was wondering if you'd investigated things like comment support in browsers or using things like an XMPP library. They're very good ones. So that would give you push. Of course, other advantages of broadcast exist, but it might be interesting to compare XMPP with streaming. So you're saying you should have SVG HTML, CSS media queries, yeah? It's a stat, a tool box. Yeah, okay. I don't think it's... It just got in all the other users, so it doesn't have room to compare. Yeah, it's not a question of implementation, it can be read. Just because we split the spectrum... How many offering tools do you need to do that? You just need text-mates. That's all I use. You know, seriously, just because the spectrum is split into smaller parts doesn't mean it's more complicated. I mean, you know, you can take one thing and call it NPEG4. You know how much stuff is in there. You know what I mean? It's not a naming problem. What I'm saying is that, okay, the status-twin orientation has been defined in one. There's a general tool that we can see to do it. But there could be differences. I mean, I don't know the ways to apply it when you're only in an SVG context. It's not... It's a... It shouldn't be exclusive. It's not exclusive. You can use it with SVG. Yeah, this is not a client, but it's still the relevant tool. Okay, but what about the SVG? CSS can definitely be used on SVG. We don't know Oval's support everything yet. The text ellipsis, for example, is the... 
I was just looking at it being the text all over the property in CSS. And it does exactly what you want. I mean, your examples are second to dead. They're very interesting. We don't lay out the text if you can. But we could support the text ellipsis in SVG, too. And that makes total sense to us back. The other thing is your example kind of screams for the... It's exactly what we have in mind, what we say, in IT graphic to chronographic apps. And it screams for the mix of HTML, CSS, and SVG, because you have text that is flow... That is to flow. You need... In fact, you probably need all the power of CSS and HTML text if you start tackling complex scripts and all the other stuff that's important. You need to push. So it's really... Ultimately, you need all these features. You just gotta discover it's sort of... Okay, I like the demo. I like the box. I like the whole idea of collecting this broadcast and rebroadcasting it inside the house. I like it. But, I mean, you know, we're in an SVG conference, maybe this is heresy, but I didn't see anything on that UI that you couldn't have just done with absolutely positioned HTML. I mean, if you've got the JavaScript logic to lay out on the program grid, just use an absolutely positioned div of width and height, throw a queue, you know, round them and now with border radius and put a nice gradient background on it, and you pretty much got the same thing, and you could do the whole thing in HTML5. I mean, yes, we're in an SVG conference, but you didn't need an SVG to solve that problem. And also, there are historical reasons. We started with SVG because we started with a streaming approach where we had no JavaScript on the client. Yeah. Okay, so all the positioning was done, Bioserver sent as updates, and anything was positioned and the grid would be built progressively. And back then, there was no HTML5, so that's when they started. Oh, it was tired. No idea. Two years ago. Why do you have these types of data? Yeah, Ted kind of stole my question a little bit, but I had more questions about that, because, yeah, I mean, at first I thought that you were just going to lay the thing out using, you know, just like a regular viewbox, and then it was going to be the exact same UI, but, you know, the viewbox element had been scaled up for one. It's not at all. It's laid out. Is it that right? Do you think that, like, I mean, is it easier to do that? Like, when you're doing, because, like, when something needs to be laid out to take up the full screen orientation like that and have text position decided that, you know, I immediately think of HTML and divs, you know, because you have to do all of that. I mean, well, I guess there wasn't flow text in there, but there was definitely like flow, you know, flow. Sorry, I don't know if that's the essence to tell you, but I immediately think of HTML. So do you think it would actually be easier, I think, more sense to use that for this kind of application? Okay, SVG, the S stands for Scalability. Someone said it at the beginning of the conference, but scalability is not enough. Scalability in terms of graphic is not enough. When you're doing adaptation, you need to adapt the layout. And viewbox is the minimal tool, but in some cases it's not enough. So maybe for sure the layout logic can be implemented in scripts, and then you can do it in SVG or you can do it in HTML, CSS, and so on. There are different tools, but the result will be the same. So we pointed out. 
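As a concrete follow-up to the missing-feature list and the audience's suggestions (media queries, CSS text-overflow), here is a hedged sketch of how the orientation and density adaptation and the text ellipsis could be approximated today, with the label handled by an HTML element inside a foreignObject — the same HTML-inside-SVG-inside-HTML layering mentioned earlier. The selectors, breakpoints and sizes are illustrative, not the demo's actual stylesheet, and native SVG text still has no ellipsis.

    <!-- Illustrative only: not the demo's real markup or stylesheet. -->
    <svg xmlns="http://www.w3.org/2000/svg" width="100%" height="100%">
      <style>
        /* Orientation-dependent layout, as suggested in the discussion. */
        @media (orientation: portrait) { #grid { display: none; } }
        /* Density-dependent sizing (modern syntax; browsers of that era used vendor prefixes). */
        @media (min-resolution: 2dppx) { text { font-size: 18px; } }
        /* Ellipsis on an HTML label, since SVG text has no text-overflow. */
        .programme-title { white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
      </style>
      <g id="grid">
        <!-- One programme cell: an SVG rect for the box, HTML inside foreignObject for the label. -->
        <rect x="0" y="0" width="120" height="40"/>
        <foreignObject x="4" y="4" width="112" height="32">
          <div xmlns="http://www.w3.org/1999/xhtml" class="programme-title">
            A very long programme title that gets cut with an ellipsis
          </div>
        </foreignObject>
      </g>
    </svg>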
And again, historically, when we pushed for the streaming approach, we could filter the updates depending on the resolution we get, on the orientation and things like that. We wouldn't need JavaScript to do the layout, or even to change the orientation. So let's see. So, we have the demo on the iPad, we can also try it. So, give them a sec. Correct? No, depending on the approach. If you use AJAX, the layout is done in the client. If you use the streaming approach, there are several ways: you can push the alternatives, like portrait and landscape — we tag the updates with portrait and landscape, and then the player decides which orientation to use, but this is maybe static in this case. You choose like this or like that. Then you are at the piece. I'm used to the command, it's working on a scalable approach for updates, which is the size of the... Okay, yeah. Okay. Well, it seems to me, as I already mentioned earlier, it's good for the screen orientation, but it would be better off with width and height detection or media queries. You don't need to know your orientation, because maybe you want to watch the video on the side. So you would... In addition, you take the video on the side? Yeah, that's true. So it's applied immediately; whenever you resize the browser, it's applied constantly. So you don't need a refresh. And one tip about the video in Opera: it's because you're using MOV files rather than... In this example, we use an H2TS. No — sorry, we have an Ogg, so it's a separate file from the section. That's what I said. Yeah. That's a good question. Thank you.
With the development of convergent video codecs such as MPEG-4 AVC|H.264, it is now possible to view television programs on many types of devices, including mobile devices (e.g. the iPhone). However, given the high number of available television programs, electronic program guides (EPG) are required. There are many ways to provide electronic program guide applications on mobile devices. One way is to develop and deploy new native applications on each of the mobile platforms (e.g. iPhone, Android, Windows Mobile). Another way is to rely on convergent Web standards such as SVG to leverage already deployed browsers. Following this second way, this paper presents the results of investigations and experiments on the display of EPG data using SVG. In particular, we investigated different ways to generate and deliver EPG data. One traditional approach uses AJAX and SVG: on the client, JavaScript code pulls XML representing the EPG data from a server and converts it into presentable SVG. Another approach consists of directly streaming SVG data to the client, using the 3GPP DIMS RTP format. We report on these two approaches, showing the benefits and drawbacks of each. Finally, we demonstrate the EPG application on different clients, such as a PC equipped with GPAC or Opera, or an iPhone using Safari.
10.5446/31211 (DOI)
Okay, so hi everyone. So hi, I'm Col Anderson. I'm representing Ericsson here. I'll talk to you about the IPTV and also how we heard some of you said that you're a SVG for IPTV. Yeah, that's the point of the entire presentation you're speaking about, Col. Okay, so we're working at Ericsson and the multiple network services. And within this unit, I'm in a division for TV and network people, where I have a responsibility to find the stack of those products and solutions that we have. And I have a background with SVG. I joined the working group in 2001 and it's been three years. So what is that TV? I guess you heard a couple of presentations about it, but I mean it's essentially brought on TV. TV and also other network services for IP, of course. And compared to the other TV, technology is like radio broadcast, cable, satellite. TV is really online. But it is picking up speed, but we haven't. So a lot of countries adopting it and starting to roll in the wind, so it's coming more and more. So what you need, actually, is a few advantages. I think one of the main advantages is that I think that the network channel might be select. To give an example, for example, a TV producer who is in American Idol in the US, it was supposed to hold for the band, the singers, to know that they were the overlay band, the jazz band. This is around the corner. This is the SMS. And that's what you should do. But what role does that use the most? The TV channel, what you said, is overlaying at the same time, and it's like, what you said, both RAM and RedMap are both far, TV and the rest are both under user control. That's about a simple example, but you have more examples with the same kind of way of use of X-channel, you can have quite attractive services that you can put together to keep it useful. More, you can have the infrastructure close, which is a bit of a short fix. I have true ideas, but I mean, given at least in the rest of the world, almost everyone has broadband connection. And to be able to use that broadband connection also for TV, instead of having a separate infrastructure, like the cable, which is more of a satellite-based network, you can... I mean, you will less to maintain and eventually you will be cheaper or more... Yeah, I mean, cheaper to maintain. And also it gives a normal broadband data type, because providers and shares of the TV market, quite easily, already have an infrastructure in place. And that might be true. I mean, that will be true to more competition, and more get down to the better services for cheaper prices for us consumers. So this was the most complicated picture I've ever made. And I think it's a bit of a hard course. X-channel and the satellite are a bit more of a hot-pot on the touch on the services of graphics. So what you see on top of the network now, I mean, is the set-top box of the TV. That would be normal and happen in your home. The set-top box, the coordinated data traffic, the decoding of streams, the rendering of graphics for the NUSCP cheese, and things like that, which you can see on the TV. And we're welcome to talk about today how we can see graphics and the value of graphics where we can use a CG. So about this UI technology stuff, there's a lot of set-top boxes after that. There's a whole range of different technologies from real-world, native, C, C++, and Java solutions out there, Flash, CG, X-channel, and other things also. And I mean, what's essentially a set-top box, and I really, really love it with one-house. 
And I provide the operator of all the software, the TV service, the services, the subscribers, the most expensive part of this set-top box. Because I really provide each user with a set-top box. So I really have to pressure that box to do what's possible, and then the performance of those boxes is really cool. And so traditionally, what they do for the user interface, in a sense, was also to allow the host to read the stream, so to speak, and so forth. So for the graphics, I rely on keeping the C++ in the stacks, like native, C, new UI, set-top, embedded, and set-top box. And it's a really, really crappy experience, but that works. Luckily, the image set-top box is a bit more developed, and we can actually move the graphics a bit, a bit higher in the stack. And by then, you use more dynamic new UI technologies that are more think-providing solutions, which is really important. The prior kind of an operator is to be able to modify the UI and to authorize the different ones. And then, yeah, I mean, from this entire technology stream, we've talked about rep technologies now, and that's kind of a focus on HGN now, especially. So why use the technologies for these UI's? I mean, the set-pecability is one thing. Serous side of the creation, without any other use, it's a meta-based, and all is different, and I could give you all these different server-side roles you have. The recommendation engines, you have charging, and engines, you have internal servers you have, and insertions, for example. The common denominator always is that they export this now. So in order to aggregate it, you always need to use experience, so it's a good presentation for it, because it's also a mass-media system. And the fact that there are these vertical technologies are standard-axle, we have multiple providers of these engines, and for the upper-intercept providers, it's very good, it's a pressure-price, it's a very good thing. So, you can use it, it's a good thing. Yeah, I can. They can find another different provider, so it's a good thing. It's not a software-proper technology, it's a manual one. So now, in HGN or SVG, yeah, I mean, in there, it's all in our right-and-right, we can offer it at both, both areas, so we have the, historically, we started about five years ago, maybe, with the IPTV offering, and we have all the HGN, so we're all user interfaces for this channel based. And then, coming in, we're already managed by wonderful customers, and we're all working for fresh, right-to-user interfaces, and we're all, I think, locked out, and then, we're all done, we're all done, we're all done, we're all done, we're all done, we're all done. And so we started to use Flash, no, I think, I'm sorry, Flash, I'm sorry. We started to use SVG, I think, maybe three years ago, and now we have both offerings, I'd say, like, the last 18 months, all the customers are required in the SVT, so all the solutions are going out now are in the SVT list. Yeah, and besides the, I mean, the graphic and the build is also a scalability and a test, I mean, the scale is a different solution, but it's not as important. I mean, the layout is, I mean, the layout is also important, whether you're doing the JavaScript or you're doing something else, but the scale is a bit different, you need to, you need to, SD, HD, and also the app registers, but they are only looking to how to get the T- which are the real devices, and how to get them all to the tablet, so all the pieces and things like that, so there's a lot of different solutions. 
So how do we work with SVG, for example? In the IPTV offering, we have all the different servers, all the test holders, the detectors, the systems, a lot of different things, and we have the user interfaces for the client, the TPs, the only thing we don't have are the separate devices, because that's something that the service providers take from familiar and there are some specialist companies that have built this one, and they have to control it and others. But what we do, I mean, we put requirements in on these, when an operator comes to us and they want to buy the app TV, and they, we call in the 732, they say, SVT, because they put that in our offering, and they want that, and we help them talking to these acceptable providers, help them to enable the SVT on their boxes, and we do that by working with SVT browser, SVT-M2 providers, and we work with multiple SVT providers, and we should have that strong and more of a compact, or at least of that. And let me tell you for a second, if you use this company, you put their contact and you get help from them to integrate. And they're also comfortable. We work with CD-TIMIC, CD-TIMIC M2, since it protects the media, and since it's the fastest, if you want the fastest, it's the input that's needed, and that's only the CD-TIMIC currently. And the speed is a key, and it needs to be re-needed to the fastest, except for what I mean, it's already not the best way to do it, the speed is, whether it's 80% of support, that's entirely on the M2-5, or 100% of the, not really the problem, the speed is. And then you also put the plus-plus every time, because you don't have all the, it's going to be able to speed for TV, with the SVT, so there are some issues that you can't do. And we also build the user interfaces in SVT, so we have in-house resources doing user interfaces, and also external companies, companies, what about biometrics, you're more of a JavaScript and SVT design. And the UI that we build, I mean, no customer never used them, I say, because they all want to brand, they all want to have their own cathedral, of course, but they often use it as a template, or they use it, in some cases they even use it as a rebandage, in some cases they give us a storyboard and they implement it in a better user interface for the source. But we are having these in-house resources doing this. Okay, so I mean, is SVT enough, and I must say it's not really enough that you don't have all that material to speak in. So one thing is, you know, like to the forum, where they ask, which is an explanation for IPTV, where they have standard apps that are working in the setup of JavaScript and the apps for TV, but we're not really related to SVT or SVT, it's just getting IPG releases doing, scheduling recordings, I think, that's not really tied to the presentation format. But then also, when we are using SVT, we need to have additional features within SVT for controlling the media and things like that. And those things we have to find, and we are trying to get those also standard apps within our app, or potentially within our PC, especially in the scenario in which we are, the red material, which I think also is a sweep or something, and we are really interested in that when we are participating in that. There are also some for the SVT community, the job that comes with SVT, it's really important to get SVT aspect into that also. Yeah, I have a slide just showing the open IP forum members, so this is kind of the member of this open IP forum. 
So it's quite a large part of the IPTV industry. And we are also using HTML5 APIs in our solution together, we are using web storage by broadcast, we are using essential, of course, as others. And to control this last slide, and to give some examples of these SVT extensions, that we have defined, what we need. For example, we need the control of the time container for the Qo and SVT to be able to fast forward, rewind, things like that. Then we get an answer part of that in the SVT, but you can only see if it's three of the stops, I think, and then you can see if it starts to stop running and things like that. Also, it's something that's really needed. Priority rules for the other ones, normally set up boxes, you can remember one full screen video, one picture, a picture, if you have a screen contact with three video elements, which two ones you choose. In the media screen, you have normally multiple audio tracks, you have a variety of subtype of tracks, you have teletext information, you still have a way to keep it on forward today. And I think this applies to HTML5 videos. Clipping, yeah, Ermas Post-Clipping, so it doesn't work. It's not a problem anymore. TVT identifiers, they don't free, TVT identifiers don't release remote control keys. So it's no longer going to access it, so it's not really releasing things like that. So that's something. And I think that's pretty much it. Any questions? I was wondering what exactly you had in the Plus Plus, that you said there's some things missing for TV. I'd like to go into a little bit more detail there. In the Plus Plus, we have a solution to this. We have a definition of whatever we do here. And we have this over-depth heat extension that are not really escalation related. And so for the first part, the media events that started a decade ago was not good. Amidia is that good? Yes. But we should finish it. That's what I'm here for. It doesn't have an offering. It does have an after-eclare. I'm asking this question because the stuff you've listed is interesting stuff. I'm not necessarily talking about it. We are using media events. I'm not sure if they cover everything, but they cover all of them. Right. So you should continue with that. Media access events were referenced from R&D and PIN. It's not finished. They were referenced when it was in something similar to that. It's like, it's based on a slide. So they recall that as a communication between those groups and WPC to 10 at this point not to save on our correctness and our wrongness. That doesn't have any evidence. So shooting this here is the end of it. We finished it up in the evening. Yes. So you just have to copy and paste the HTML5 video and drop it in the street key. You can't have two dueling specs. That would be crazy. That's the idea. Media access. All of them need to solve the things that are missing, but some of those things are missing. I mean, I don't really like to see the information of all the videos. I mean, there's a lot of work needed in that. You can't do it like you do with the HTML5. It's just a half-off solution. Well, that's no HTML5. The right one, no. It's just a half-off solution. So you can't have a lot of information. You can't have a lot of information. You can't have a lot of information. You can't have the HTML5 to the right one, no. You can't have a lot of information. I think that's another one. I mean, you know, but it doesn't have to be an insane one with the specs. That's a bit too political. The HTML5 is one of the reasons. No, it's not. 
It's both of them. That's that. I mean, in some ways, for us to keep things to work with a smile, with a smile model, it's not a question of smile model. If you can get those beautiful things, that would be great. I'm not sure. That's just a discussion about unifying the two. When you're saying smile, it's not the same smile. It's not the smile in the solution. No, no, no. Yeah, it's a high-pounding model. It's very simple. I'll see why. I don't know what I mean, but you need to start with high-pounding. You need to have a high-pounding. That's not so much to do. Anyway, you need to start with that. So, by that, that's not quite a matter of making a riddle. It's a spring. This is not a question of smile. Can you explain, just for four sentences, smile in a mission, smile at times, and can you? No. One of the most smart time containers about when you start the time life with a specific video or for a specific media event, and how you control that, smile at missions. When you smile at missions, smile at missions, that's about how you end up at the new time. I know. There are two things that are important in smile when you're doing just video. I'm not talking about smile. I'm talking about the smile part that is needed to implement the video element in SVG. You need to have the ability to desynchronize the timeline of the video with the timeline of the document. To make a clue, to start not at the beginning, and then to sync it with an audio track or a subtitle track. Or not. Or not. Or not. Yes, yes, and that's it. That is not an HTML file. Then I don't care whether the attribute is called SRC or Extinct HREF. It doesn't matter. I just want the model to be the same. We want it. I believe there's some work ongoing on the web SRT. But it's also still very... Okay. That was not a big fan. Anyways, so it's HTML5 video version of SRT files that contain subtitles. And so that, I don't know, when it will be part of HTML5 or at least not a reasonable file, but of the specs around everything that happens with HTML5. But that could be a possibility to the most... If it is standardized internet. Maybe I have one other question. You've been talking about done events and remote control. That actually has been quite a burden for us as well. Have you looked at things in HVTV or any other specifications or do you have your own map games, let's say, for the various buttons that you own? Yeah, yeah. I mean, we have our own map game. We could have some somewhere. Maybe it's originally from there or maybe it's from someone like under the shop. Okay. Watch that. Okay, then. Thank you.
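To make the media-timing point in the exchange above more concrete, here is a rough SVG Tiny 1.2 style fragment showing a video element whose timeline is declared independent of the document timeline. It is only a sketch of the kind of control being discussed — trick play, priority rules and track selection were exactly the parts reported as missing — and the file name and sizes are made up; this is not an OIPF or Ericsson API.

    <!-- Sketch only: SVG Tiny 1.2 video with SMIL-style timing attributes. -->
    <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
         version="1.2" baseProfile="tiny" viewBox="0 0 1280 720">
      <video xlink:href="channel1.mp4" x="0" y="0" width="1280" height="720"
             begin="2s" syncBehavior="independent" initialVisibility="always"/>
      <!-- Missing, per the talk: standard ways to pause/fast-forward/rewind this time
           container, to prioritise between several video elements on a set-top box,
           and to select among multiple audio or subtitle tracks. -->
    </svg>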
Within the IP television industry, interest in solutions based on web technology is increasing strongly. SVG is currently the top requirement from most IPTV operators that want to deliver a rich user experience with a technology that allows customization and is future proof. This paper gives a brief introduction to IPTV. It explains the rationale behind using web technologies for IPTV and, in particular, why SVG is an important part of many IPTV solutions. It describes the ongoing IPTV-related standardization efforts, in particular OIPF (the Open IPTV Forum), and how they include SVG as part of their work. Finally, it addresses some of the missing parts within SVG that are needed to make SVG really useful for IPTV and describes how Ericsson is working on defining these missing parts.
10.5446/31213 (DOI)
Okay, I started a little bit with joking about IE because in the past it was a little bit of a stepchild for web developers and SVG especially and that job made sense during the timeframe till IE 8 where you had only an asset 3 score of 20 but that job doesn't work anymore these days with IE 9 having an asset 3 score of 95 and that brings me to today to my today's talk what about SVG in Internet Explorer. As you've seen this morning this is the IE 9 test drive site and as was demonstrated this morning there is a mapping example that I showed last conference last year and that was already there for the first platform preview. There is a gentleman Michael Neutz in Germany that's been using this technology to track German election results and he hosts a site where you can see on the right here it's HTML and on the left it's SVG, it's one page, it's all that mark up in one page and he's got hover over effects adjusting the different parties and how they voted. So you have this nice little ability and if my German would be better I'd be able to tell you more. You can switch to the green party and find out what their turnout was. What a good looking guy! Yeah I couldn't have said it better. So this was March 16th the first preview of IE 9 which I hadn't seen until that day and the problem with the first preview was this is my mapping example how it looked like in the first preview and you wonder how they did it in the video but they didn't use WebKit. The thing is I built a special at the launch at mix 10 IE 9 the first preview didn't support view box which is really critical in mapping and so I built a special version for them with a map that had only coordinates screen size coordinates so that it would work but I was very I was worried that their implementation might be half hearted and because it looked just like that and I'm here to tell you that I'm very relieved with the fourth platform preview because this is the mapping example completely unaltered no code changes and this is IE 9 platform preview 4 this is Firefox 3.6 on OS 10 and this is again IE 9 so you see this is exactly what you want you have the same code and you have a predictable outcome. Now there has been a lot of talk about how great this is and hardware acceleration I mean we have to keep things a little bit in perspective this is the ultra low powered $100 laptop from a few years ago and obviously this wasn't rendered in milliseconds but probably more several seconds but but still it was already there and sometimes the standards conformity and feature support can be more important than performance. Let's show a few other examples this from statistics Germany about inflation that's done by Tanya Raschke who's also here in the audience and that was a preview 1 that's an SVG file that also runs with the Adobe plugin and I'm happy to tell you that in the platform preview 4 that we have now of IE 9 it renders perfectly like in even in the Adobe plugin and all the native browsers. Another from statistics Germany about inflation that is a shared code base with a French statistical institute in SE and that's also working in platform preview 4. This is from my personal website that is an elevation profile here I did a cycling tour and had a GPS logger with me and so this is just the GPS data that is drawn on a Google map here that sits in an iframe and you can follow the elevation profile here and that's running in IE 9 and that already worked in the first preview. 
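For readers who have not run into it, the viewBox issue mentioned above boils down to the following. This is a hedged illustration with made-up numbers, not the election map's actual source.

    <!-- With viewBox, map paths stay in their projected coordinate system and scale to any screen.
         IE9 platform preview 1 lacked viewBox support, so the special launch build had to ship
         paths pre-scaled to screen pixels instead. -->
    <svg xmlns="http://www.w3.org/2000/svg" width="100%" height="100%" viewBox="0 0 640000 480000">
      <path d="M 120000 90000 L 240000 90000 L 240000 180000 Z"/>  <!-- projected units -->
    </svg>

    <svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">
      <path d="M 150 112 L 300 112 L 300 225 Z"/>                   <!-- baked into screen pixels -->
    </svg>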
So those were a lot of examples that run in IE 9 right off the box without any changes. I'm not going into performance numbers but what are the numbers in terms of feature support? This is Jeff Schiller's SVG support table that luckily he updated for me today and so just to give you probably the type is too small but the large green bars usually operate the poster child of standard support but this talk is about Internet Explorer so you have here a large red bar that's IE 8, 0% support and then you have IE 9 previews 1, 2, 3 and 4 and you see that now they have 58% support so this is the smile animation and I mean they were quite open, it's not in there yet but the other stuff has improved significantly. So IE 9 looks very promising I have to say but IE 9 doesn't run on Windows XP and if some of you work in larger corporations or government institutions you know they have glacial update cycles in terms of browsers and it may take a while. So let's speculate from an independent perspective what IE 9 adoption might look like. What do we know about Internet Explorer update cycles in the past? Let's look at some market share statistics. These are browser share and I'll zoom in a little of IE 6, 7 and 8 on a daily basis and we don't talk about absolute numbers but just the pattern and you will see these patterns from different browser statistic companies. So what you see is that IE 6 and IE 7 have the same pattern just on a different scale and IE 8 is inverse. So what happens here is that during the week from Monday to Friday you have a huge market share in IE 6 and IE 7 and on the weekends you have a peak for IE 8. That means that people in their home use when they buy a new Windows machine or they get Windows update they are quicker in updating but larger companies take longer, a lot longer. So this is a petition in the UK where they are a petition to phase out Internet Explorer 6 and that is not from a few years ago. In fact it is this BBC News article is from February this year. So this is the problem. So the William Gibson quote that the future is already there here but that it is not evenly distributed yet, this quote makes a lot of sense in this context of browser adoption. So how can we bridge the gap? There are two things. One thing is you can educate users or you can evangelize with network administrators in your company. So I'd say plugins are probably not the way to go. I mean they are even still in development maybe for embedded systems but we have Firefox 4, Chrome 6 and Opera 10. They all run on Windows XP. But once you've worked in a larger organization and talked to your system administrator to install a new browser company wide you will see that's not an easy task to do. So as a content creator, when you rely on as large a population of users who can see and interact with your content you probably have to do something on the server side. And now there was this blog post on the IE blog about having PNGs and what server but we are talking about interactive content here and so the way to go is to use JavaScript libraries. Now I will just show you a few of them. I'm not evaluating them and there are a lot more libraries out there than I can show you but just to give you an example starting with Dojo. They have the Dojo GFX module that allows vector graphics and the special thing about Dojo is that they work on a lot of back ends. So even if like Silverlight is deployed they can use Silverlight for drawing vectors. 
If no plugin at all is present they use VML, the vector language of Internet Explorer 628 and even there is even some work going on to support SVG web which is an implementation to draw SVG in flash. Now it even says on the project home page that Dojo is aimed at the experienced developer and that's absolutely true. If you are more of a designer guy or haven't done much programming it's very well documented but especially when it comes to vector graphics there are only a few examples and I found it quite difficult and you will probably not choose Dojo just to get vector graphics on older browsers. You may want to have a look at Rafa that was also featured at last year's conference. A very well designed site that was also featured recently on a list apart and it is aimed at vector graphics and only at vector graphics and already on the home page, starting page they have an example that you get an idea how to use it. There's also JSX Graph where we have Bianca here in the audience, it's more aimed at mathematics, they also have very good documentation and examples and very powerful VML implementation. Now all those JavaScript libraries invented their own syntax for 2D graphics. I mean there aren't too many ways to draw a circle but still they differ and it's not SVG, that's the point. And not all vector graphics are done programmatically, designers often will use Illustrator or Inkscape or Visio and export to SVG and so then you would have to do a second step to convert your SVG into something that works with the aforementioned libraries. Now there's one more library to talk about. The aforementioned libraries have the advantage that they even offer you vector graphics when no plugin is present at all, which is a huge bonus. But if you consider the widespread adoption of the flash plugin at least for say another one or two years, then you have a more powerful renderer present and SVG Web that was also presented at last year's SVG Open Conference is a JavaScript library that renders SVG using the flash plugin. So again, Jeff Schiller's table. We are currently expecting today or at least this week another SVG Web release and this is from already these are the numbers from the release candidate. So we are here in terms of feature support, SVG Web is in the ballpark of the current IE9 implementation. They differ a little. I mean, there's already some smile animation support in SVG Web, but there are other things that maybe are not working as well. So CSS styling still needs to be implemented, but it is coming. So how to use SVG Web to support older Internet Explorer versions with SVG? Consider this is a very simple HTML file where you want to embed SVG. And it's pretty straightforward. You load, obviously, you load the JavaScript library and then you have just the script tag which has a bit of a weird type of image SVG and XML and that offers you within plain HTML offers you complete standard support for SVG. So this is not some newly invented syntax. This is plain SVG. You can take your illustrator SVG in there and you're ready to go. Last year we had only at the conference some preliminary test running. Then in, I guess, November, Statistics Germany took their animated population permit and supported it with the SVG Web library. It's also my only example that's localized to four languages, including French. So you see it here in Safari on OS 10 and it just works exactly the same here on IE7 in Windows XP and as you can see it renders in the Flash Player. So now why is it important? 
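The embedding pattern just described looks roughly like the following. This is a hedged sketch following SVG Web's documented usage rather than the population-pyramid demo's real files, so the file names and shapes are placeholders; the contrast with the drawing libraries mentioned earlier is that the graphics stay plain SVG markup.

    <!-- Library-specific drawing APIs invent their own syntax, e.g. Raphael:
           var paper = Raphael(0, 0, 320, 200);
           paper.circle(50, 40, 10).attr("fill", "#f00");
         SVG Web instead keeps the markup as standard SVG: -->
    <html>
      <head>
        <!-- 1. Load the SVG Web library; it decides whether to use native SVG or Flash. -->
        <script src="svg.js" data-path="."></script>
      </head>
      <body>
        <!-- 2. Wrap plain SVG in a script tag with the "weird" type mentioned in the talk. -->
        <script type="image/svg+xml">
          <svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
            <circle cx="100" cy="100" r="80" fill="steelblue"/>
          </svg>
        </script>
        <!-- To target IE9's native support instead, remove svg.js and the wrapping script tag,
             leaving the inline <svg> — the "strip it out" step described next. -->
      </body>
    </html>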
I mean, SVG Web, we call it the shim technology. It's a temporary solution for bridging the gap and for those years, like we don't know one, two, three years ahead of us where still a significant user base is on IE67 or eight. Now the interesting thing is remember this is our code with SVG Web and like what would you need to do currently? Unfortunately, I have to admit SVG Web is not yet aware of IE9 but I hope and that's why I'm doing this talk to get some of you interested enough to support it. I guess it won't be very complicated. So but I mean JavaScript libraries come and go, we don't know about the future of this project and you think, well, what about the time and money I invested in my code and is it future proof? And the thing is, it is because what you can do, you can just strip this out. You see, this is what SVG Web is and you just strip it out and boom, you have code that is working in IE9. There is a caveat because this code works only in IE9. But because I mean we have to talk about like namespacing and things like that, maybe tomorrow afternoon when Patrick is doing his talk about HTML5 and SVG. But what you see here is that this is IE9, the latest preview. I just took out SVG Web, didn't change anything else in the code. And it's doing everything it's supposed to do. There are a few things here like I think it's probably anti-aliasing here, the rectangles. We've seen that with Firefox 1.5 or like Opera 9 had it kind of precision in drawing those adjacent rectangles. And I'm not quite sure once we get into the, you see here, we are a bit off in terms of coordinates, but I'm not sure if that's IE9's fault. I didn't have enough time to analyze the code base in that respect. But what you see is it's already, it's working the same code and so it's future proof. So yes, SVG Web needs you. I think it's the best idea in terms of future proof and real SVG markup. It could need a few more helping hands, especially, we're familiar with IE9 and Windows 7 to have a very good implementation of the feature detection. And yeah, the examples are all on the web and these links are also in the slides that are on the website. And yeah, that's what I wanted to say. But now I have a question concerning your statement that the plugins are dead. I mean on this yet Shiller page you presented frequently, so you're not looking bad at all if I looked into this static line in the lower part. So why, what makes you to say that? So what's the problem? You see, I've been with SVG since 2003 and between 2003 and 2006 I did this professionally and I tried a lot to get people to install the Adobe SVG plugin. It's very powerful, it's very standard conformant, but it's a plugin, you need administrative privileges to install it and I had telephone support for people who weren't able to install the plugin either because they weren't allowed to and didn't know about it or weren't able to and I did it for three years and if you rely on a plugin that's not there. Yeah, that's very difficult. You can be lucky that people like YouTube and that YouTube still, I mean most of YouTube still requires Flash and there are some games that require Flash, so at least if you have children in your household you can, you will probably have Flash. If you are in some organization, government organization where you shouldn't be playing at the workplace, maybe you don't have Flash and you have a problem with SVG web, of course, yeah, yeah. But on the whole it's very good. 
And the Adobe SVG plugin, I mean, it's not officially supported anymore for XP; there are no, there will be no updates in terms of security. And Batik and all the others, as good as they might be, their market share is pretty low and I wouldn't bet on that. So I therefore can have other...
In recent years Internet Explorer has been like handcuffs to SVG development. There have been several attempts to find keys to loosen the chains. After Adobe’s abandonment of the plugin, those were mainly JavaScript libraries. Leveraging VML or Flash to draw vectors in Internet Explorer these attempts have been quite successful but had to remain their shim status. Then with Microsoft’s announcement of Internet Explorer 9 (ie9) on March 16, 2010 at their MIX conference, the commitment to native SVG support in Internet Explorer came as a long awaited relief. The testdrive site for the ie9 platform preview showcases one of the election maps that were presented at SVGopen 2009 and had been adapted in time for the MIX announcement to work with the early state of the implementation. From this experience it became clear that SVG in Internet Explorer needs to be a topic at SVGopen 2010 and independent views should counter possible marketing ploys. Finally sufficient time will be devoted to showing examples that already work in the Internet Explorer 9 platform preview with special emphasis if they worked right out of the box or what kind of adaptations were needed to make them work. Additionally the discussion will focus on mixing SVG with HTML. Microsoft claims that IE9 will be the first browser to support SVG right inside plain HTML, an approach that the SVG Web toolkit is mimicking already (in contrast to namespacing in XHTML that is used in Firefox, Webkit and Opera).
10.5446/31214 (DOI)
Okay, so I'd like to introduce the SG working group, so I would like everyone to just say who they are. I guess I can stop since I have a mic. I'm sharing the group, my name is Eric Doldstrom, I'm sure most of you have met me already, so I'm with Office Software, been involved in the working group for quite a long time, I can't even remember how many years, but it's not, well I'm not the oldest member anyway on the group, but it's okay, okay. Alright, so, how's the mic? So I'm Chris Sully, I work for Dungeo, I'm in the group since I started, started with Gatsby, so I'm the old guy and they say, why did we do that again? I go, oh because of this and this and this reason for, I can't remember. Hi, I'm Anthony Grasso, I'm a proper Zech Cannon on the working group, I've been on the community out for about three and a half years, so it's quite a movie. Yeah, we've been working since then. Hi, I'm Robin Vergen, I'm a freelance standards consultant and I invited X for the group, I've been on the group since 2002 on and off and yeah, I guess that's about it. I'm Alex D'Amalo from Abra, I've been on the group since about 2002 as well on and off. I'm Doug Shepard, I also work for W3C, I've been on the group for four years once, one year as a member of W3C and then all of us three years as an employee of W3C and Chris on the team contacts. John Bach, yeah, first I can speak on the group then I can take hand with this. I'm Patrick Eng, we're on the scribe. I'm Mark and I'm in the SIG, also I have a colleague. Andrew Slid would like to give one more month a series of representatives to the SVG working group. I can add to the characteristics in the room. Not there to exit the room. I'm Andrew, you're in charge. It's all good to have a good time. Some of the members like to find that you're a professional. There are full of members in the room as well. They wouldn't have been to it yet. Chris, why are you looking around? It's Scott. Yeah, so I wanted to just say a few things about what we've been doing the past year. If you notice that we released the SG11 spec in an updated version, let's actually just live with your browser, Google for SG11, we'll get the latest spec hopefully. That's currently in last call. The last call period ended. We're now going through all the comments we got. We'll have to do the group game just after the conference. Besides trying to finish up the 11 spec to make it more clear, we'll be working on extending the test route for 11. I think the latest number of tests was something like close to 500 tests. Perhaps over 100 of those are not reviewed yet. But compared to what we have for the previous test we released, I think that's 100, 115 tests. But it takes a lot of time to do it over review, so that's what we've been doing, I think. Do you think the number of tests compared to the previous table? Yeah, I think his table is using something like 300 tests or something. The latest number, I remember, was 499 tests. I think it's about 200 tests compared to the previous drafts and the review tests. But yeah, I think we can just open the fourth questions. Or I can ask a question, that's... So yeah, besides that, we're of course discussing other things like trying to move to SG2. Discussing, for example, what we're going to do with Xlink-Href, which is the two-page... Yeah, I think I understand you're asking a lot because you've been doing review. Sure, we've done Xlink-Href, but let's get you to it. Okay, so currently we use Xlink-Href. We here doesn't know what I'm talking about. 
Okay, Xlink-Href. The excellent namespace. That's all. So we use the excellent names... Okay, so... And this has been a point of consternation for a lot of people, and it's not that hard for people. But okay. It is clumsy. Everyone agrees it's clumsy. Just like that. So what we want to do for SG2 is we want to allow both syntaxes. So if you have Href, you just have to say use Href rather than Xlink-Href, that would work. Okay, so I'm curious. And the other part of that is if you use the.href property in the DOM, the question is what should that do? And if... And currently we're thinking of saying that that sets the.href. If you have both Xlink-Href and.href on the same attribute, that's where you get into conflicts, possible conflicts. And we would say that.href would have precedence over an.h... Xlink-Href. So that's a simple example of the difference between what the written new class and what the developers do. Developers either already understand Xlink-Href and think it's fine, or they trip over all of the time and they say, that's a pain now, I don't have to type that. But we have to decide what happens if someone uses both, which they're probably not going to do, but we're going to have to define it anyway in these kind of precedences. So these sorts of issues are not necessarily very visible, but it has to depend down on what they do, so the implementation is not the same thing. So why would... Why is there an issue? Why is there an issue? No, no, why is there an issue? Because historically, that was going to be the linking part of XML and everyone was going to use it, and it turned out that wasn't the way. And basically, we'd been able to, we'd sort of had a bit of pain to buy into the whole XML infrastructure, and it didn't buy us anything. Exactly, so the XML infrastructure and mainstaysing kind of requires that. There's lots of relaxation of those requirements in HTML. And so part of this is part of the... I don't want to say integration, but smoothing out the relationship and the ease of development for you. So things like this need to come from the community. So it's easy for us to grasp because it's been on the wall for a while. Well, I guess, and this relates also a little bit to the earlier presentation next door on the changing the past syntax. All these things, I just hope the working here takes into account upward or backward compatibility from whichever place you're sitting. Because, by exact, it was Chris who said, you can't just change excellent HRF to HRF, to bus or boat. And in fact, when IE9 is out there, it will only know one, it will only know excellent HRF. If you're into creating content and you want it to run on IE9, you'll use excellent HRF. And, you know, we all know, that's not an overnight change from old versions of IE8 to IE9, and IE9, the IE10, you can attend in the regular address. You know, it wouldn't... There's legacy blocks you in a lot longer, I think, than we would consider. And this idea of changing the past syntax, great idea. You can't really use it because only the latest version of the browser will support that new one. And if you want to create content that addresses the whole web, you then choose to use the old syntax. And it takes a very good on time until you actually adopt the new one. So much so, you really gotta ask, is it worth it? It's hardly ever worth changing something. It's much better just to add. Right. I think that's a good distinction. And this would actually be... We wouldn't change it, right? 
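To spell out the proposal under discussion — hedged, since SVG 2 was still a draft at the time of this panel and the exact rules could still change:

    <!-- SVG 1.1 / SVG Tiny 1.2: the link attribute lives in the xlink namespace. -->
    <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
      <use xlink:href="#star"/>
    </svg>

    <!-- Proposed for SVG 2: a plain href attribute, no extra namespace declaration.
         If both were present on one element, href would take precedence, and the
         .href DOM property would read and write the plain attribute. -->
    <svg xmlns="http://www.w3.org/2000/svg">
      <use href="#star"/>
    </svg>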
We would always support, for macros compatibility, we would continue to support excellent data in-perm to it. However, I think we're adding the data from the IE8 to the X. And changing... I guess my perspective is, if you are too resistant to change, you simply won't go anywhere. I think that we have to be able to add stuff that doesn't work in older browsers. But we have to be more mindful of making sure that stuff in legacy content works on new versions. And I think that one way we can... So the backwards compatibility, there's two ways you have to consider that. And I think that with regards to changing things, it actually depends much through the process. If you connect with everybody in the tool chain, from the authoring tools to all the rendering agents and everybody, and say, okay guys, we're going to coordinate and we're all going to move now. And that actually is... And that's what W3C is for, right? It's to coordinate between different vendors in order to move things forward. Yeah, I'm just saying browser upgrades are... The speed of browser upgrades of the whole base is like an inverse relationship to how large the market share is. Sure. Larger market share takes a longer time to upgrade to base. And, you know, we can talk about IE6. It's still out there. Right, but SDG doesn't work in IE6 unless you have a plug. No, this is a serious issue. I've been trying to say it takes so long to upgrade the IE8, the IE9 will be out there for a long time. Do I have any interesting things to point out? So should we bring an additive feature? Could it be... Could it have graceful degradation? Could it... It has to be additive, right? We kind of baked in these strange behaviors that we're just going to have to live with and talk about in ways to prove them. Why not change them? This is actually a really good reason why everybody should download IE9 preview. Because if there's something in there that you really hate, you should let them know about it. I think they feel... I mean, this is... You guys feel the same way. If there's something about it, you're like... You're talking about three months and then it's locked in for ten years. So this is... This is a real... This should be a real concern for people. I mean, we're not talking about the things that they know that they don't do, right? So, okay, animation, whatever. Maybe it'll be an IE10, we'll see, whatever. But for the features that are there, do make sure that you're downloading this thing, testing it against your application, testing it against your content. Because that constraints what the working group is going to be able to do to some degree going forward. And the other browsers for the next ten years. In terms of legacy content... Legacy content is a big issue for those of us who have actually been building stuff in SVG. And changing the spectaculity of that stuff is something that gets us angry. And makes us want to make SVG 5 and take it away from page from W3C. So what... Well, like moving, smile, how to... Or anything like that. You know, we don't... We have... We don't... The SVG working group doesn't... The only thing I think that might be not dropped exactly, but moved into a module, or SVG funds. And that's because the marketplace moved in the direction of another fund format. Well, that's one of the kinds of concerns that sometimes when I hear about the intersection between what all the browsers do and focusing on that. That's troublesome for those of us who are experimenting with it. But nevertheless. 
Second issue is it seems as though the proper way perhaps to drive specs. I mean, you're still working on 1.1 and 1.2 and 2.0. Maybe breaking things into a multi-layered spec development. That might make some sort of sense. Where one of the specs would be looking very far into the future. So one of the... Well, very far in the future. It might... It did something to be considered. The way we're doing SVG 2. I don't know if... I mean, I've described this before and I don't know if I've articulated it well. But the way we're doing SVG 2 is we are making a set of modules, individual specifications around features like transforms. And animation, all these other things. Individual modules. And then as those get mature and... One of the... Filters. Paint fills. Things like this. All the stuff that we're working on with CSS working with as well. All these things. Transforms. Filters. Animation. So having this modular approach does let us work on specific pieces of functionality. That it's easier for people to review. And it's also easier for implementers who want to have an upgrade path from either an SVG Tiny 1.2 or an SVG 1.1 implementation towards SVG 2.0. And so as these modules mature, we're going to what we call park them in candidate recommendation. In this candidate recommendation specs. And as all of the modules mature, once we get a better sense of them and we get better feedback from the implementers, we plan on making sure that the specs agree with one another. And it would be really close way. And then from a process plan view, we're going to take them back to... We're going to change them to make sure they all work together well. We're going to take them back to the last call, get comments from the community, take them back to see our announcement recommendation. So that is a modular approach. As a group. Yes. And so that is a modular approach. What about things that are more far looking? Yeah. I mean, definitely that's a possibility. If there are suggestions. I know you have some suggestions. If there's suggestions that people have for stuff that might happen further out. I know you may not come to us, but one of the main constraints is we've only got so much time. I mean, half the people on this panel rarely attend teleconferences or participate in the lists because they're busy working in implementations. And those of us that are actively working on specifications only have limited amount of time. So if you are interested in getting...in helping us move forward in different directions, part of that would be volunteering time. More money. But the more time that money can take. And is that a sub question? What would happen if we had a microphone on? That's a good question. The browser, manufacturers by and large didn't like it. Hardware custom is different. Hardware custom is the same as it is in the regular talk. Actually, the microphone custom, why my speech is because it is actually an object model. It does actually give you...you know, some things in color, you see three quotes for the RGB. It's in the string which is passed to a number which you can then change that string. And then you send the string back and it gets passed to a number and all this. But that's, hey, right? So I think the fact that the microphone actually had real live objects in there was better. But it really hasn't brought up with the browser yet. So it's an ongoing discussion of what to do with it. I mean, it says in the tiny one-to-two spec that for SVG, I think it says SVG one-to-two-four. 
And that's now SVG 2: it's going to be built on the microDOM. So it's kind of... I mean, they have a lot of content out there that they're using with the microDOM, and that really needs to work in the SVG 2 browsers. So that's an interesting question.
That's a very interesting answer.
Yeah. But just in terms of the microDOM, you have to know a little bit of history. One of the complaints that the IE team had, last year or whenever it was, about the difficulty of implementing SVG was the SVG DOM. Now, the origin of the microDOM came from the pain of implementing the full DOM. The original concept of the typed interface, like "I've got a float for the x value and I can modify it", arose in response to the experience of building the full DOM, from people heavily involved in its design. So it came from the experience of the broken full DOM that everybody seems to think needs fixing, and it is a very good typed interface. Now, the advantage the microDOM has over the full DOM is that typed access. But five years ago, when there was no V8, there was no Tamarin, there was no SquirrelFish Extreme, the speed difference wasn't visible. Today you're seeing JavaScript that's getting very close to native execution speed. Now, if you have to stick a string parser in the middle of your extremely fast JavaScript engine, versus pulling the float straight out of the DOM into a JavaScript variable, you're going to see a very large performance difference, and a real increase from using the microDOM's concept. So really, that's a logical thing to pull into 2.0, as well as to extend it to cover functionality that isn't in the microDOM as such, because there are full-profile features that it probably can't cover today.
Yeah, and performance aside, I actually think it needs to be done, because there's so much content out there using it.
Okay, so I just wanted to comment on some of the things that are in the microDOM. In a way, it's more limited than the full 1.1 DOM. But from implementation experience, it's not hard to support both: the microDOM is just returning objects, and it's really simple to have that on top of an existing 1.1 implementation, more or less as it is. But as far as content is concerned, I don't know, we haven't seen that much content using the microDOM so far. Maybe it's on intranets.
Yeah. If the content is on an intranet, we don't get to see it. If it's on mobile devices and it's being shared between a service provider and their subscribers, we don't see it either, right?
Yeah, I think 99% of that content is on embedded devices and not on the open internet, so it's not visible, but still, it's a lot of investment.
Sure. And as you say, over time the 1.2 Tiny implementations will need some upgrade path to 2.0 as well, so it needs to be taken into account; we shouldn't just cut that content off.
Yeah, especially as you see authoring tools no longer supporting Tiny 1.2 and instead supporting SVG 1.1 and SVG 2.0; it needn't be that much of a gap.
So, the SVG working group: we don't have authority over SVG implementations per se, right? We can put something in the spec, but if the implementers say "we're not going to do that", then it's actually worse than useless for us to put energy into something that the implementers will not do. That's just a data point.
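To make the string-parsing point above concrete, here is a minimal sketch contrasting the access styles. The element and values are made up for illustration; the trait accessors are the ones specified in SVG Tiny 1.2's uDOM and, at the time of this panel, were only exposed by Tiny 1.2 players, not by desktop browsers.

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <circle id="dot" cx="50" cy="50" r="10" fill="rgb(255,0,0)"/>
  <script type="application/ecmascript"><![CDATA[
    var dot = document.getElementById("dot");

    // Generic route: attributes come back as strings, so nudging the circle
    // means parse, do arithmetic, serialize, and the engine re-parses again.
    var cx = parseFloat(dot.getAttribute("cx"));
    dot.setAttribute("cx", String(cx + 5));

    // SVG 1.1 "full" DOM route (desktop browsers): typed objects.
    if (dot.cx && dot.cx.baseVal) {
      dot.cx.baseVal.value += 5;   // SVGAnimatedLength -> SVGLength
    }

    // SVG Tiny 1.2 uDOM route (mobile players): typed trait access,
    // no string round trip at all.
    if (dot.getFloatTrait) {
      dot.setFloatTrait("cx", dot.getFloatTrait("cx") + 5);
    }
  ]]></script>
</svg>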
The SVG working group, in order to be empowered to do that, I said this in another session, in order to empower us to do things in the SVG space, the implementers are not going to listen to this working group. It's plain and simple. I mean, unless they have invested interest, they're not going to listen to us. Unless we show them evidence that this is the case, we cannot show them all this content, unless you somehow make... unless you can help us do that, right? So this actually ties... the fact that it's not used on the public web really ties our hands. That said, I mean, we need to be able to justify why they should change what they're doing now. It's just a pragmatic approach. And so if we have the information about that, if we have the data about that, like in terms of raw numbers, where is this happening? The chance of us being able to get something in there or better. Okay, so then, all that said, the browser vendors have expressed an interest in improving the DOM. And so that's actually something... not just for SVG but for HTML as well. And so people like John Resick have... I've talked to John Resick, I've talked to people at Mozilla, I mean, you guys, we talked about basically an idea of opera. I mean, we've been talking about for a while the idea of improving the DOM across the board, not just for SVG. So that's... so maybe some aspects of the micro DOM could make it in there. Yeah, I mean, improving is one thing, and I think that's our group initially improved, but just on breadth, it has got something new to it. So it depends upon... I don't really think that you need to be multi-acting. You really need to be multi-acting, but I should break it on to be like that. Okay, so which browsers are you currently using that can't read the micro DOM syntax? The Canteray. The Canteray. Which one's in your tool chain? I think something around there. Using the browsers that support it, they'd like to be able to use more. We are hearing that, we're hearing that right now, but we aren't seeing that traffic on the SVG browsers. Yes, the first time I hear of anyone using the micro DOM recently. Yeah, or really? Yeah. It's all spiking, it's interesting, four years ago or something, and I haven't seen much usage in the past two years or so. Well, I've called the process up to something, so you can't convince the implementers to do things, but this is the implementers' plan. No, this is the workgroup. This is the workgroup, and all that. Some of us are implementers. I'm the one, as you know. Chris isn't. Eric's not. Oh, no, I'm Eric. But almost all the implementers are representative of the working group. You know, even within a browser company, like Eric, do you sometimes have difficulty convincing people within opera that, like even within, like, we aren't all powerful beings? Well, now, Chris is on. I don't write the checks. I push the features up, because I don't know. The priorities in SVG is what I've got to wrap my head around. I work with these guys on. I have certain ones in my head, which is, you know, this is kind of Ted said, three months, HMOL and SVG are happening now. Let's nail that. And then there's a finite set of features that we've heard. Some interesting graphical improvements, some format improvements, you know, those things can come in modules. I'm not afraid of them. And then there's the fix. And then there might be like, okay, you fix, I think it's fixed. There's added a different fix. How do you fix smart systems? Right? I think that popped to your part right now. 
How does it change, how is it happening and where is it shipped? That's what I'm trying to keep up with. Coming back to the microDOM: maybe when you ship SVG support in Windows Mobile, you'll come to this issue.
When? Yeah, maybe. I'm not in the mobile world.
We saw CSS transitions in Opera, we saw them in Firefox, they came out of WebKit, and we also tried to use them on SVG. But there are still some attributes that are not animatable, or not transitionable, by CSS for different reasons, like animVal and baseVal, or because of the different SVG types like SVGLength. Are there any plans to change that? Can we make every attribute animatable, or can we find some solution for SVGLength and the like?
Yeah. SMIL animation can animate either a property or an attribute. CSS animation, since it's named CSS, only talks about animating CSS properties. So there are various ways you could approach that. One would be to make every attribute in SVG a CSS property. Now, if we suggest that, the CSS working group would be very worried about the total size of the property set. They often discuss, for a given proposal, that if one version adds only one property and another adds four, they'll prefer the one that adds one, because they don't want too many properties. And every property goes on every possible element, so if you do a simple-minded implementation of that, you end up with this big table hanging off every element. Obviously there are smarter ways to implement it.
So every property applies to every element? We're not used to that.
Yes, it does, actually. That is the model in CSS: every property can be set on every element. It may not affect its rendering, but it may inherit to a child, which may use it. So that is one issue. And a lot of attributes... we have some common attributes in SVG which are on many elements, and some attributes that are on just one element, and convincing the CSS working group to say "okay, we want that to be a property" for something that only applies in one narrow place, you know, like stop-color, is work. So another way to change that is to extend the CSS animation spec so you can say: am I animating a property, or am I animating an attribute? That then gives you complete generality, and it means you can animate other stuff as well. Which, to my mind, is by far the better way of doing it.
Is that CSS work?
Well, it's something we need to take to the FX Task Force, the joint CSS and SVG task force. Anyway, your question is a good one, and it still needs to be discussed; there are benefits and drawbacks to both approaches. It would actually be really helpful if you bring that specific technical issue to us. You can email www-svg at w3.org, or public-fx, which is actually the better list for CSS-plus-SVG stuff. Subscribe to the list, send us your issue, and we'll have a discussion about it. There are a lot of different issues, and when you contribute a specific technical issue like that, we'd love to have you feed it in; we probably won't solve it correctly until you do.
Could you just... one was raised in a... okay, just a second.
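To make the property-versus-attribute split concrete, here is a minimal sketch (the selectors, colors and timings are only illustrative, and in the browsers of this era the prefixed transition properties were usually required): fill is a presentation property, so a CSS transition can drive it, while cx in SVG 1.1 is only an attribute, so the declarative way to animate it is SMIL.

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="120">
  <style type="text/css">
    /* fill is a CSS property on SVG elements, so a transition applies... */
    circle {
      fill: steelblue;
      -webkit-transition: fill 0.5s;
      -o-transition: fill 0.5s;
      transition: fill 0.5s;
    }
    circle:hover { fill: orange; }
    /* ...but cx is not a CSS property in SVG 1.1, so CSS cannot animate it. */
  </style>
  <circle cx="60" cy="60" r="40">
    <!-- SMIL animates the attribute directly -->
    <animate attributeName="cx" from="60" to="240" dur="2s"
             repeatCount="indefinite"/>
  </circle>
</svg>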
I just want to talk about the inheritance model. I just wanted to ask about the inheritance model that you're using for these CSS properties on SVG, just the inheritance model.
So, the properties are divided into inherited ones and non-inherited ones. The inherited ones apply down the entire tree in document order. For the non-inherited ones, if you haven't set one explicitly, then you get the initial value, the default value. So that's the way it works. And there's also a special keyword called inherit, which even on a non-inherited property means "take the value of my parent", so you can force that. So that's the model.
How do they compute it in memory?
That's an implementation issue. If you do the simple-minded thing, then yes, you end up with a massive table on every single element and your memory goes. But there are better ways to do that: you can do intelligent analysis of where things are set and how much you need to allocate. Basically, you can do some pre-calculation to save memory, but that makes rendering slower; or you can just allocate the whole thing and most of it doesn't get used, which is the very simple implementation approach. But the actual model, yes, is that every property applies everywhere.
There was a follow-up, actually. Yes, there was a follow-up. I still wanted to hear about animVal. I was very quick to cut it off because we're still not implementing it in WebKit yet. And the thing is, what's holding us back from doing it is that I've never had any feedback that someone actually wants to use it. I've never seen content which uses animVal. So maybe someone can tell me where there's an actual real use case, for instance to script against the value while the animation is running, and all this stuff which bloats the SVG DOM a lot. It's a lot of complexity. So what's the deal with it? Are there plans to change it?
That is a good question. Good to hear that. So for us, the way that all this got started, basically, is that the implementation has to track it anyway. And it's useful if you want to follow what's going on in the animation, what the interim values are in the process. You can store pointers to those objects and follow what's going on; you can use that to make a debugger for animations, for example. We haven't seen a lot of use of it in content, but I think part of the reason for that is that it only works in one implementation. So once Firefox or WebKit add support for it, then probably there will be more use of it, and maybe more use cases coming from that as well.
So, do people want us to drop animVal? Well, someone says that this is a problem. Show of hands: how many people use animVal? Is there anyone who uses animVal? In general, I think it's possible that once it's in the spec, people will use it, or will have to use it. Correct. Sometimes you use it and you decide it's a bad idea. Correct. So for us, the base value has to be there, right? And SMIL's got this nice model where you've got the base value and then you've got the stack of animations.
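A small sketch of that base-value-plus-animation-stack model, using the SVG 1.1 typed DOM (the geometry and timings are made up; at the time of this panel several implementations simply mirrored baseVal in animVal, which is exactly the gap being discussed):

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="140">
  <circle id="c" cx="70" cy="70" r="20" fill="purple">
    <!-- additive animation: the rendered radius is base value + delta -->
    <animate attributeName="r" values="0;30;0" dur="3s"
             additive="sum" repeatCount="indefinite"/>
  </circle>
  <script type="application/ecmascript"><![CDATA[
    var c = document.getElementById("c");

    // baseVal is the underlying document value; animVal is what the
    // animation "sandwich" currently presents.
    setInterval(function () {
      console.log("base:", c.r.baseVal.value, "anim:", c.r.animVal.value);
    }, 500);

    // Changing the base through script while the animation runs is allowed;
    // the additive animation keeps stacking on top of the new base value.
    c.r.baseVal.value = 40;
  ]]></script>
</svg>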
And while those animations are running, you can still change the base value through script, and it will still give a different result, if that's what you want. So it's not obvious that all you ever want is the value at the top of the stack, which the implementation has had to calculate anyway. And what we didn't want was for people to have to do a parallel implementation in script if they wanted that value. Having said which, grabbing that value, doing something with it and shoving it back in at the bottom again isn't that useful. And the other thing I would say is that the DOM is an API, not a memory model. In other words, you can be lazy there: you don't have to store every animated version. If someone asks for it, well, you probably have access to that value, so you can give them the answer back. You don't have to store it as a big tree, and you don't have to allocate on every element just in case.
The problem here is not so much with the DOM itself. As you said, it's just an API; you can construct any value on the fly. We have a lot of that in our DOM already, because we need to construct the objects that are returned through the typed interfaces. That's fine; it takes a little bit of time to implement, but it's totally doable and we've done quite a bit of it already. The bigger problem here is what you mentioned before: we have a different animation model between HTML and SVG. That is actually a much bigger issue to tackle. Maybe, as a result of working on that issue, we find that animVal should be exposed in HTML and in SVG; fine, that might actually provide a lot of value. Or we find that it's not needed in either case at all. But the real problem is to have one single model, or at least a model that scales if not one model, between SVG and HTML. Then the developer doesn't have to learn two models, the implementer doesn't have to implement two incompatible models, we don't have a seam at the transition, and you can mix SVG and HTML even more than we're attempting today. I think that's the biggest part to tackle first.
Right. I think this is really an incentive for the joint CSS and SVG working group task force: to look at animation, reconcile the models, come up with a consistent model, and actually provide better script interfaces to that animation. Because oftentimes there are limitations to what you can do with declarative animation; you might want to hand it off to script, or vice versa, and there are use cases for both. So having a really good API for that as well would be a good thing. We want people to be able to see and steer how the animation works.
One word of caution: the spec as it stands today says that if you change a value on an animation element while it's running, the behavior is undefined, meaning it's up to each implementation to define what that is. So don't do that.
It was just a follow-up about what you said about the two specifications, sorry to stick to that. I don't quite understand why there are, after all these years, two specifications for animation that tend to collide. I also don't really understand why the time representation sits inside CSS; time is not really styling. SMIL was imported from the SMIL language into SVG. Either it's supposed to stay in SVG, or be in another file that specifies what happens at what time.
What is going to happen with this animate tag like David Bailey was saying, if the CSS animation module take... I don't know how you say that in English. As the leg up? Yes, exactly. So where this came from was the smile habit originally. SVG imported it with no change. It worked moderately well. There were some things that were more aimed at smile presentations. So smile didn't have very much in terms of layout. They had a very complex time model. Essentially it was a bit researchy. It did a lot for the time. There was nothing for layout. And it was a slideshow type stuff. So that was okay. We could use it very well for SVG. Now in SVG, you're dealing with a presentational level and a fairly low level. We did try to separate our own content. The content for us there was really geometry. Some people would say that the whole thing is presentational. But we had this in the afternoon. So we could animate actually. We could animate CSS that way from the smile. So that was fine. But with HTML it's much more common. If you've got 100 paragraphs, the chances are you want to start on the same. If you've got 100 paragraphs, it's incredibly unlikely you want to start on the same. So there are different use cases. It's much more common in HTML to have the style out on a set of style sheets. So you can restart, you can reskin that sort of thing. So when people wanted to add animation, it was natural for them to look at a style sheet like model. Now the work proposals, time sheets was a proposal to have totally second methods that was out there. But then you animate. So it just did animation. So it was quite a guess in the traction of a whole A5 year. It just didn't really go anywhere. So Apple wanted to do these sorts of things in HTML. And Dean Jackson, who had been on the SQW, and then went to work for Apple, used that as a model to change a few things that he found a problem or he wanted to do things differently. And that's where that came from. So that's why there's probably more similarity than different. He's definitely used the small time model, but thrown away parts that wouldn't be that relevant to doing kinds of stuff on an iPhone. And there is interest now in harmonizing those two. But that's why it's harmonizable at all, because they come from a common source and we can bring them back together. And it's part of the rationale that I've heard from the WebKit and the Microsoft System. The people who they were aiming at, the authors that they were aiming at, were people who were already familiar with CSS. And that as silly as it sounds, the syntactic difference to those people would be challenging. And they were also offering tools that these people were using, which they anticipated having. So CSS was just, that format was more familiar to the audience that they were aiming at, so they put it in CSS. So, yeah. But if the development patterns come up for a lot, right? Yeah, then this is, I think, what we talked about at XML namespaces in terms of BigTrap. But the key scenario you brought up and resonated with everybody that I have talked to that's used it, is when they integrated SVG in their HTML, they put an href there and nothing happened. Yeah, it's just like, it was a non-starter, this language doesn't work. And it just goes back, okay, we'll see it. SVG is stylobal with CSS, but transitions don't work, you know? Yeah, so animations are rich with time. It's a little weird, I agree, but it's a language that can attach to the code. And honestly, it's just another syntax. 
We can reconcile them and have the same functionality in both pointy bracket syntax and selector and curly brackets. We're just trying to make it as easy as possible. As long as the underlying model is the same, so that the implementers don't have two conflicting models in their browsers, which frankly they wouldn't do, right? They wouldn't want to have these two, but models have to be completely separate. And I wouldn't want to code to it. And also, what happens when I do something using Smile and one animation using Smile, and another animation using CSS, the transforms, sorry, CSS animations, and we haven't said how they work together, which is behind me. So, yeah, I'd say we reconcile the models, which is the way it goes. And we'd love to have somebody from WebKit actively working with the working group to do this. I would have a follow-up on the uninvalid issue. So, as I understand, it's an empty view, so it's calculated all the time whether it's used or not. But it makes not a mess of it's an empty view. It may or may not. Let Chris say that it may or may not actually be. Yeah, you can evaluate it just in time. So it's no difference whether it's a mess of a raw event or an implementer. And it's actually implementations. I don't think that most of them would constantly calculate it. I would have... Do you have a follow-up on the same issue? Yeah, I'll go. Because I have questions on text. Okay, let's go again. Just to follow up on the process to get to reconciliation of the models. One thing that we can start small, we can call before we work and then run. That works pretty well. And I think if there was reconciliation, say just from the transition part, CSS has these two parts of transitions and animations. The transitions are much simpler. The time model is super simple. But it's certainly useful because they will allow you to do a ton of very simple visual effects that developers want to do and today you can actually try to do something. If we just have a little solution on that, just that way, I think it gives us a way to progress and kind of all together, kind of move forward. There's a huge value in looking something down and then getting that model done. We can find the finished generalized VG and then moving to the next step. And by the way, before transitions, actually we can start to transform some of the big names as well. That's why it is simple. So if you look at the FXS course charter, we actually have a bullet list of things we want, we have goals and they have a priority right beside them. And that is exactly the priority. Transforms, high priority. Transitions, high priority. Animation, because it's a bigger problem, is I think medium or low priority. I think it's medium and priority. High priority, low priority, medium priority. Part of the problem is that both the CSS and SPG working groups have been busy with other stuff and so we haven't been as active as we would like in doing this stuff. But now that SPG 1.1 is mostly in the can, this is something we're definitely going to be looking at right away. Transitions is a higher priority, a higher priority. Yes, yes, again, text. So in SPG 1.2, tiny text area and in 2.0 there's also flow text, are in the after also. And I think there's a real need for both, and also a little bit text. But as I understand the current situation, there's no local missile. So we have to rely on foreign object and HTML. So what's your scenario? Is it just that you want flow based text and you also send input? 
Not to that point, I'll take that. So these concepts are HTML. This goes back to my circle. Is HTML a foreign element in SPG? Or is SPG just part of HTML? We're going to figure out how we can go along. Why do you think that everywhere in the middle of SPG, you saw some flow text? Can I just have that off the stage please? You're supposed to play that. I'm not playing this, but I'll play it. I'll be the one. One of the things about the working group is that the number one demanded feature from the public to the SPG working group for a number of years was flow text. And that was six or seven years ago at least now. There were plenty of people completely against me including myself. Thought we spent a good two solid years doing it. And we came up with really good proposals including flow inside our polygons. We dealt with BIDI in self-intercepting polygons, all sorts of stuff. The algorithms were all spelled out. And basically all that flow text was killed by certain CSS people that didn't think it was a good idea and didn't have a place in SPG. So yes, the HTML paragraph is all well and good. But what you want to do is you want to get flowchart, shut down a diamond and flow the text in that diamond. And how do you do that in HTML? Or editable text on path, whatever. Editable text on path, anything like that. So there are serious use cases in real world applications. Like a paint diagram for example. You put the text in the box and you want to set it vertically, horizontally and it's not square. And these are the use cases that we addressed. And tiny one point to relax because it doesn't have the arbitrary polygons. The solutions have already been built, already been specced and prototyped by Adobe back in 2002 or 2003. So that work shouldn't die. And if we are to bring in flow text in SG2.0, we really need to address the use cases that go beyond what you can do in HTML. Because who cares if that's just a problem. So, yeah, we'll just spin it. So, Ted said something and I thought, wrinkle my brows like August doesn't like that. No, this has a comment that if you're using a box model, you have this set of message boxes. And you always know the size of your actual containing box and so on and so forth. And if you're using an SVG layout model, then you have a code system and you can slump stuff up any way you want. So what happens if you block the paragraph right down in the middle of the SVG and use it as a CSS box model? It says, how big is my container block? And the answer is we don't know. I would argue that SVG is always noted because you can drop the P element inside the circle. Your circle has a center and a radius and I actually know the containing area inside the volume, inside the polygon. So I don't have this little computation there but you just save and actually even leave the regular box. You let the paragraph element be dropped inside the packet. And when you tell me what it's supposed to do, I'm going to come up with a little bit of a strategy. And so that's how you actually answer the question because the point you know you come to is doing you need to define what it's using for its layout. And you've just done that. Exactly. I totally agree with you in that the early proposers actually did say something very similar to that. Just drop it there. The primary problem. The power has to be in the bottom and I'm sorry. I know that's helpful but the problem we found. It's a lot of power to the edge. They're on this side. They're on the other side. 
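For reference, the flow-in-a-shape proposal being described looked roughly like this in the abandoned SVG 1.2 Full drafts (a sketch from memory, not normative markup; Inkscape still writes this syntax): the text wraps inside an arbitrary region, here a flowchart-style diamond.

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
  <flowRoot font-size="14">
    <flowRegion>
      <!-- any shape can serve as the region -->
      <path d="M150,10 L290,100 L150,190 L10,100 Z"/>
    </flowRegion>
    <flowPara>
      This paragraph would wrap automatically inside the diamond,
      re-flowing as the text or the shape changes.
    </flowPara>
  </flowRoot>
</svg>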
So you just say that any HTML dropped inside the shape will work. That's all well and good, but the problem, the reason that the SVG group did not adopt that as a solution, is the CSS box model with its padding and borders. If you want a border on the inside, and the padding to be the correct distance, it doesn't work with a rectangular box when the region isn't rectangular. So we were trying to reconcile those two things, and the simplest thing was to just say, okay, we will lay out within the shape. Again, this is an area where you can start simple and say we'll define a rectangle inside the circle and lay out into that.
That is the essential problem. It's easy to say right here that that resolves the issue, but to be able to move forward you have to reconcile the models first. As was said previously about animation: you must reconcile the models first, then you can define the syntax. It's easy to say "put the p element there"; that's syntax, and it doesn't help you. It's the model that needs to be defined first.
I agree. But let's not invent another flow-text model. Yes, that is the chance here. If we want to put HTML in SVG, if you want tables, if you want lists and all that stuff in SVG, let's not do the whole thing twice; let's do it via HTML. Where you find a problem between HTML and SVG, define exactly what should happen: the wrapping, whatever kind of wrapping it is, that can be defined. Let's not define another flow model that would be specific to SVG and different from what's already out there, even in small ways.
I agree. But I want to add one point. When we unify these two models, the box model and the flow model, we have to do it the other way around too, to enable what was in one of your slides, with the circle in the middle of the paragraph, without needing an SVG viewBox around everything. This is a very strong use case, right? So when we design this unified model, or reconcile the models, we have to take both worlds into account.
So, actually, somebody recently asked me on the SVG IRC channel: hey, how do you do flow text? I worked up a demo for him of how you can do it using tspan manually, hand-wrapping, which we all hate, it's terrible, but it works; and how you can do it with foreignObject, side by side: foreign content with an HTML paragraph, and then the SVG way, just to give them both. I'm actually writing a blog post on it. And the thing I found was that every implementation worked; they all laid it out. But there were very dramatic differences in the HTML part. The SVG part was obviously all the same, but the HTML part differed. So I think it's a really interesting idea, and I think the reason it didn't work across browsers is that there isn't a specification that says what you do in that situation. And that is definitely something the SVG working group is absolutely interested in talking about.
I just disagree with part of what you're saying.
You disagree that I got completely different results in every browser?
No. I disagree with the claim that we don't have flow text in SVG. We have it in SVG Tiny 1.2, and there we would get the same results.
We would have different results. For example, if we compare the flow text in Opera and in G-pad, there's a different result.
I'm not sure about that.
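A condensed version of the kind of side-by-side demo just described (a sketch; the text, sizes and font values are made up): manual wrapping with tspan, HTML flow through foreignObject, and, where SVG Tiny 1.2 is supported, the textArea element.

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xhtml="http://www.w3.org/1999/xhtml" width="470" height="160">
  <!-- 1. Manual wrapping: the author breaks the lines with tspan -->
  <text x="10" y="30" font-size="14">
    <tspan x="10" dy="0">Manually wrapped text,</tspan>
    <tspan x="10" dy="18">one tspan per line,</tspan>
    <tspan x="10" dy="18">maintained by hand.</tspan>
  </text>

  <!-- 2. HTML flow layout inside SVG via foreignObject -->
  <foreignObject x="170" y="10" width="140" height="140">
    <xhtml:p style="font: 14px sans-serif; margin: 0">
      This paragraph wraps with the HTML/CSS box model inside the
      rectangular foreignObject viewport.
    </xhtml:p>
  </foreignObject>

  <!-- 3. SVG Tiny 1.2 textArea (rectangular regions only) -->
  <textArea x="330" y="10" width="130" height="140" font-size="14">
    Flowed text wrapped by the SVG Tiny 1.2 textArea element.
  </textArea>
</svg>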
Because we don't do, we're wrapping the same way. The same actually is the case in CSS. You cannot get word for word pixel perfect matching between browsers. And that's not part of the CSS spec. It's not part of the CSS spec. And actually, so the XSLFO people are also interested in this problem. And I've heard from some people, okay, so I hear from influencers, we do not want to have pixel perfect representation to cost problems. We can't do any of the line systems with different content technologies, different rendering engines, different anti-aliasing algorithms. Designers really want that. I'm just saying, that is a data point, that designers do want that. So if there's a way, if there's a way, you should get as close as possible. Like, I think that just a point would be... How do you say a Bonds cross platform helps? And that's where it's... Well, and Bob helps with that. So the question up here is about how to commonly link Bonds Bonds to our master. So basically matching PC's, you can have the same Bonds. And that's just hugely important. Do you guys have a question about a common Bonds? Yeah, I was just going to ask, I want to hear from Todd. What do you guys... Because you guys were really interested in... That was one of your bullet points, right? Well, actually the point was, we don't do it in a Bonds-Bonds kind of way. And it's a problem because it's not in the space space. And it's not converted automatically because the G-game space is at the T-stance. And so if you say, a lot of work is done with browsers, and someone's back on there at some point, and not do all that, unless you've met a... Yeah, Sport. Pretty much. So the ground floor, you don't illustrate that. They have the sort of text where you pin it down and just start going. And then they have the thing where you drag your rectangle and just start tapping inside it, and it wraps a single paragraph. Pretty much every drawing floor I've used has those two most brilliant text. This girl has this really steep, um, very accident shape. We can wrap it in a shape. It's not... You can do the rectangle. You can do what wrap in a shape. You have to do it in two steps. So some people are going to say, if we do this, right, some people are going to say, okay, well, I don't want to use SVG. Why can't we do this? It says, why can't we define this? These are all questions we're going to have to consider. There has to be a really coherent model for how it can be used for everything. I think the answer actually is to that side of the... Declare an SVG element, learn how to do it, don't put the syntax in CSS. But these are all questions that we're going to have to answer. Somebody... I'm sorry, I have an unrelated question. Can you stop the other side? Yes. You were talking about modules a little bit earlier for SVG2. Is there... And of course there was SVG1.1 and SVG1.2 tiny. So is there already an idea of having one or more profiles for SVG2? SVG2 and also... Okay, so no. Well, we really don't want to have more profiles. I mean, 1.1 had three profiles, right? It had tiny and basic and full. By the time we were finishing off tiny, people were saying, can we just have gradient, so can we just have this, whatever. And the big difference was that basic had a drama and tiny did. 1.2, we did a tiny which... The base level was animated and had a drama where it was being made a small drama. So that's why we were down to two profiles. But then we never finished off 1.2. Full because of the laws of the argument. 
So that's how we did other things to do. We needed to beat up 1.1 and make it more descriptive and so on. So really, for two hours, I'm looking at the profile. Really. And effectively, the profile will be shown by what modules get adopted and what... We take that Venn diagram and we pop down the middle and say, okay, this is what's got traction now, the other thing's got traction. This is what we've got now. This is a 2-0 profile. It makes 2.1, 2.2. And what we've provided my statistics for is, slightly, what you said in the same... It wasn't that there were lots of arguments. I don't think that's the main reason for 2.1. The main reason for 2.1 is there's lots of traction. There's lots of things happening. I mean, the same way as we saw in the first keynote, it's the fact that a lot of people are doing things and they're doing them in a unified way and we're finding new arguments and therefore we're on a new generation where we actually harmonize with HGNL and CSS. And that's essential. And because of that, you have to skip. But I think even with the new version, the fact that we had all this discussion about, should we have a flow layout or something, and some people say, just use HTML and that's it. And for instance, in my experience, it's very easy, for instance, if you want to work on the web app and you're doing HTML and you can have some icons in SVG and it's great because it scales and it has lots of interesting features. And then I want to do an application that works in a pure SVG-tiny context. In that case, I don't have all my fancy, I mean not even fancy, but at least easy layout. So now I have to put my button exactly at this point and not dump it in the div or something. So there are clearly different use cases, different scenarios. There's also the case that I think is a Suthani was designed for low-capability devices and so it doesn't have clipping, it doesn't have masking. But isn't there also a case that you might want to implement a part of SVG and the problem that you don't want to do everything is not because your device is low-powered, it's also because you don't have that much manpower. So you say, I'm not going to do animation, but I still want to do SVG and clipping and masking because my underlying graphic engine allows me to implement a lot of SVG very easily, and then there's another thing that are hard to do. Yes, the problem is that each implementer or each little use case has their own thing. So when we were doing 1.1 for example, Bitflash, it was opening the Distinct with TI and we were using this low-end processor, but it had a DSP on it and they suddenly went, we can do all the filters, every single run, there are a lot of separations, passes everything, but we don't have these other features because they're hard for us. So the problem is once you split it down, you end up splitting it in such different ways because when we had the Intercepts, they were doing industrial automation stuff, so they had Windows, CE-based analysts for automation, and they didn't care about filters, they never really used filters. They wanted polygons and filters and stuff like that and there were loads of things, and they wanted DOM animations because they could interact and do towers which are not, never rebooting or what sort of stuff. So depending on what they were doing, different people wanted different things, and as soon as you get to profiles, it's very hard to stop them. You end up with a technical illustration profile, a graphic arts profile, I don't know. 
And then you make, and the content doesn't work, I mean then you get a, a schism in it. And then you get a schism in the content. That was said that, that's modularizing the player and just being the side which was the one thing. So that's part of the idea. So let's make sure that this thing arose easily for, for both implementers and content developers to know on the platform that they're using which modules are implemented and which are not. And I'm just, I'm just saying, just over history of trying to get a CG into the three profiles in the 1.1 timeframe, you know, we learned from the industry, right? You know, there were so many meetings about chopping up vertically, horizontally, modules on the side, you know, basically tiny, the optional time, all these diagrams on the board. At the end of the day, we built these three profiles, right? Which happened in a fairly quick timeframe, really, from 1.0. The next lesson, which was 1.2 tiny, it took seven years. Now that's a lesson to ask, we, you know, in the press it's like the glacial pace of standardization. It's like, well, yeah, seven years from start of 1.2 tiny to recommendation status, is glacial and it's because we get denial of service attacks from people that don't like what we're doing and trying to get all the features in that all the implementers want. And learning from that with 2.0, we go, well, okay, we'll have a call, make the call, the common set that everyone really wants, the nice features that other people think they want. If they're in separate modules, then the market will shake it out. Implementers will build it, they'll use the modules, and if that happens, then there may be a 2.1, which is 2.0, with this module and that module. But rather than profile it to get it fast through W3C, split off the individual features in the modules, and that's kind of the approach. Yeah, I think the modular approach is the way you have to go and not sort of as a working group, try to solve the problem. But yeah, to the last question, and to the gentleman over here, it feels like there is a real crossroads here. And maybe as Wes described is SDG for the sort of, well, public web, where the characteristics are, to be any one of a, like about a billion people that browse the web in over 100 languages all around the world. Or I'm using SDG to build maybe an industrial panel, or I'd love to hear from a colleague of Cam in there, that you know your device, you know your audience, and don't really care if the SDG you're offering ever runs in I-99, because I-99 is not running on your camera or copier, or in other embedded device. So then it doesn't matter if I doesn't support some feature that you were using, but at the same time, the browser vendors have the opportunity to really take SDG up a notch, because we're building on very powerful text engines and rendering engines, and SDG could get to be such a complicated step if you let it. But there's no way it could be implemented in an embedded device realistically. I mean maybe just go grab a led kit source and do it. No, well, I mean it's always possible, but it may be at need. Unless you need full HTML flow layout, maybe on an embedded device, maybe it's not needed. But that just seems to be at least that level of crossroads, I guess. But what is worth? I just want to add one point to what was said before. Maybe it's like the same thing from the inside. 
I understand that I agree that the time is on this module, whether it's back, but modules are, as you explained, modules are there to improve the standardization process. Profiles are there to improve interpretability, because you make sure that, yeah, if you have too many profiles, it's bad for interpretability. 29, if you have too many profiles, it's bad for interpretability. But sometimes, if you have only one profile, it's also bad, because if the spec is to be, no one will implement everything. Cutting stuff, you cannot. You cannot. But since it's there, it's there. Profiling is the way you cut things. So maybe what I want to say is that it's not because we have designed the spec using modules to improve, to speed up the standardization process that we should stick to that and have two profiles across modules. Not profiles per module, depending on the module type, it would be a mess. But having the tiny profiles around better devices and the full profiles for the PC world, I think it's definitely my own suggestion. One comment today, Dan. I just want to make one comment today. That was exactly what was said in 2000 and 13. I think it was, in the scheme of soft meeting, we had exactly that on the board with the profiles and the modules on the side, and we just couldn't get it to work. And these are the problems that the working group faces every day, trying to work out how to slice advice. Just a quick follow-up on that. I see one possible way of profiling in a story, that is to actually categorize, not to work on an individual profile, like a separate profile, SVG time 1.2 was a profile with SVG, we spent a huge amount of time working on it. But within the same stack, marking out certain features as, marking out, that would be a more efficient from a standards point of view, saying, okay, yes it would, from a standards point of view, yes it would. Because you need to test all the time. No, no, no, the profile would never work. Yeah, it's still related. So when they call profiles, what do you call it, modules? Two completely different concepts. I understand that they're being proposed to different concepts, but if you have a subset of modules that actually are differently by subgroup, then I mean that's the fact that we're working on a profile. So, I think that customers are very dry, profile, the profile presenter delivers that profile, the depot, so profile, the profile, that goes back to the interoperable set that we talked about in the beginning. The point of time that we looked at, at the point of time that we looked at the set that we were looking at, that was the profile, those were the features we selected. If I shift over time, I mean we have to track that. But there could be modules that we never touch. But it's in respect. So it makes sense that you get this interoperable subset, this is a profile that we'll implement. Really, I guess the real question is how, when you approach modules this time, how are you deciding what's a module, where do you draw the lines between modules? As Chris pointed out, for some vendors, it might turn out all of a sudden this feature is an easy person to approach. Those features that are easier, harder, don't necessarily follow up lines of what's in a module, what's not in a module. So how do you draw specific features, sub-features within a module? Which modules do you draw? So how do you decide what's a module and which features what a module is? 
I mean, all the features of a module except one, that have that module and there was nothing to do with it. That's through last call on CR. There's some processes that exist today. Last call is when you guys say, this is the thing that's going to go to the implementers. The implementers have already started, and they're going to come back with the feedback and say, these we don't want, these we want. And before that, the implementers want to hear. And last call, this is a very important feature for me as a developer, a user of this technology. And I want you to implement this. Or it doesn't make sense this way, the model should be slightly different, because this is the way I think this is a developer. I guess I'm not very on how close to that matches up to the reality that we've seen with implementation of SVG over time. I mean, like in Wim Pryfox first, we had a small implementer called the SVG, and it was like, these pieces we have, these pieces we don't. There's features there from logic to have a detection, but they don't actually... That's sort of an accident in history, in a way, because the browser vendors were not implementing SVG at all. At all. We had a few small implementers, and those implementers were not small like Adobe. Yeah, but not Adobe, correct. No, no, so we had, okay, but not the browsers. We don't want that many browsers packed in either, right? The set of implementers at the time all agreed to those features, and they all implemented them. And then the market shifted. Adobe decided to not use their SVG expertise anymore, and so it shifted to browser vendors. And so now we have a different set of implementers that have, say, and so it shifted over time. The initial set of implementers were implementing all those features, or most of them. And then another set of implementers came on and said, whoa, we didn't want to sign up for, I don't know, what, the fonts. We all want to do SVG fonts. So the market has shifted over who the people making this deck are, and who the implementers are. So that's just an instance in absolute history. So here's a point to the view of this classification as being something that's impartial in the cybercum reality versus this period of process that does involve a lot of different, or it doesn't require any recognition. So I think it's like you're right on the other hand. You need an implementer to do all the different things. Not even with this new user. Yeah, and Rob, I think you're right to say that, which is what the working group, the recent times, is actually pretty cautiously. And so, and by that I say, we're putting out specs now. It's modules to see what the takeoff is like. And then once we know what the takeoff is like, or once we know what's being used, what's being built, then we can move on to what's out there. And that's how I accept it. We've learned from the past. And also, we are trying to actively get the different implementers, in particular, being tough, right? We've been trying to get Inkscape involved, you know, as an authoring tool, involved in the Ascension Working Group for years. And it's only when Todd stepped up and said, yes, I will actively do this, that we actually, like, any good feedback and have a good back and forth with the community, right? It's really important that the implementers are working. And that's what we learned with KID. Oh, I'm talking to you guys. Can we take yours? Yeah, one more question and then... I don't know if I want the last question. Yeah, yeah, yeah. 
They came to mind as that, as you split a working group into little subgroups, the bigger companies dominate those subgroups increases because they have more staff to develop to such a thing. I don't understand the question. We're not splitting the group up, this is what it was thinking about. But each aspect of this... I think we're more interested in one another, that's true. But in general, each module will be discussed by the entire group. We're not the one and only groups of one and two groups. Even in a bigger group, that happens. If you're not interested in a specific feature, you drop out of the discussion and you wait for your... You're interested, you make sure you're there. And that, I think, is a gradient managed to a way that the other three seed works. And it's one voice for each company. So it's based on, at least in the SQG work, we're very much on proposal. Is your proposal, does it have merit that can often lead to your... One of the problems, though, with having implementers so closely involved with suspect development is that feasibility is such an impediment. Reality is interfering with dreams. And... And marketability. But there are both sides to thinking in the future. There's thinking ahead and there's thinking about what's practical and there's figuring out how to get from one to the other. For example, in the example of a layout model, you will at some point in time have to change into something that is non-merc to linear. That will happen. And that's ongoing in time because cartographic data is going to be poured into polygonal tessellations of the plant. We hear you, David. The one thing is that the working group is not a research and development outfit. We set standards for every seed community. We're trying to standardize an interoperable set for all implementers. We don't go in the pie in the sky. Things like this might be cool as much as it may be great. Maybe that's what we're seeing as a thinking inside it. But if you are an implementer, this is as big a open as that. What are your cool ideas? I would say, and this is one thing I've seen increasingly in the last few years, is that sometimes we'll see proposals that are deeply worked out. They've got through the use cases. If you've got what they want to do, and they'll include an implementation, however slow it may be, you can put something in JavaScript or whatever, so you can actually try it out. And that helps demonstrate the feasibility. That means that you're much more likely to get traction from the other implementers. It also means that the group isn't swayed by three implementers, have a veto and everyone else will say, please, sir, may I? Someone that really cares can actually code something up. Like you did for the parameters set, right? You're not an implementer, so say, we're a big implementation. But you coded that up and you showed that it was tractable or feasible. That's actually why I wouldn't like this, because this is something that I'm actually practicing as, you know, working group. I'm not an implementer. I have very little say in what implementations do, unless I've been influenced in some way. That's exactly how I've influenced them. But also, that's one of the ways we're trying to reach out to communities. By making the primers, I don't know if you've seen that, but most of our new specs, we're making primers that show exactly what sort of use cases, at least some of the use cases that we had in mind would be designed for this feature, like parameters. 
And I got really good feedback on parameters where other stuff in the SVG working group had put out did not get very good feedback, or didn't get any feedback, right? We weren't getting feedback by putting out stuff in primers. We're trying to make it more accessible to the people who we expect to use it. And we want your feedback on whether this is the right way of doing this. And so, if you treat it similarly in terms of feeding back to us, I mean, that's how we can have it at any community that actually works on things. I just want to feed into that, which is, I mean, the question I was going to ask, I don't know what started this discussion. Can you list the kinds of capabilities, the exposure that the SVG working group is? What are the touch points to the community so that they know that? So, I can close. Start at the W3C SVG working group website. So, any section that you want to really look at? That the SVG working group, we've reviewed that on the SVG working group website to try to make it easier for you to give us a feedback. And as you might know what we're doing, we're tweeting, we're trying to get the word out. So, get into that feedback.
The SVG WG would like to make itself available for a panel discussion covering topics related to the standards effort, such as current and past SVG specifications, errata, test suites and implementation status. Similar sessions in previous years have been lively, informative and well attended.
10.5446/31217 (DOI)
So, hi everyone. This is my first time at an SVG Open conference, so bear with me. The title of this presentation is Embedding WebGL Documents in SVG. The objective is to show how it is possible to mix SVG and WebGL, a 3D file in the Collada format, into one single SVG document, and to demonstrate some level of interactivity. The demonstration will be made using SVG and WebGL libraries.
So, what is WebGL? As some of you may know, WebGL is a specification based on OpenGL ES 2.0, with a JavaScript API which gives direct access to OpenGL ES. OpenGL ES enables fully programmable 3D graphics. It consists of a well-defined subset of desktop OpenGL, creating a flexible and powerful low-level interface between software and graphics acceleration. It is designed to display 3D content in a browser without the need of a plugin. Nightly builds of web browsers like Minefield (Firefox), Chromium (Chrome), WebKit (Safari) and soon Opera are WebGL-enabled. It is also possible to run hardware-accelerated 2D and 3D applications, still with no plugins.
So, the environment of WebGL. WebGL-enabled browsers are capable of using the computer's hardware resources for rendering 3D content. Whole scenes like those in animation movies and video games can be rendered, and interactivity with human interface devices is fully implemented: the keyboard for navigating the scene or for changing parameters, and the mouse to look around. At this time there are already several libraries, most of them open source, to help authors integrate their work with WebGL. The library I am using here is GLGE.
It is very easy to anticipate the terrific possibilities that can result from combining the two technologies: powerful applications in the scientific, educational, architectural or any other domain, for example the creation of real-life places for virtual visits, combining the SVG and WebGL animation and transformation tools. Instead of targeting desktop environments only, web browsers of the next generation will do the job, without the need of installing anything.
So, let's see the method employed. Two libraries are used for this work: the GLGE WebGL library and the Pergola SVG library. The SVG library is used for creating a window with zoom and pan tools, capable of loading, in remote or local mode, another SVG document which contains the Collada document. The 3D file is embedded in an HTML5 canvas element through foreignObject. The GLGE library has been chosen because it gives good overall results while being relatively easy to use. The GLGE library carries out the following tasks: build the 3D scene with cameras, lighting and so forth; manage the translations, rotations and scaling; map the keyboard and mouse for the manipulation of objects; and import the specific Collada documents. I am going to show how the SVG transformation tools interact with the keyboard and mouse commands of the GLGE library.
Okay, now it's time to see what it all looks like. We are going to see two examples. The first, an animal, would be appropriate in a game or animation context. The second, a molecule, relates more to the scientific applications domain. Okay, let's see. The objects are presented spinning, and I am going to show how to navigate in the scene, without attaining, nevertheless, the richness of 3D software functions and commands. So you can move forward, backward; then strafe left, strafe right. You can follow the shark if you want, and you can of course use the mouse to look around. Additionally, other SVG elements can be added to the 3D scene or anywhere in
the user space. Ok now, I forgot to say, this is a model I made; it's a character from one of my animation movies during my school days. So let's see the second model. Ok. So the loader SVG loads the 3D file, the .dae file or something, so it's going to load. Ah, yes, here it is. Oh, wait a second. Yeah. Sometimes there are bugs, because Minefield and the GLGE library are kind of young and unstable. Refresh? Yeah, yeah, this one. I'm going... Ah, it's ok, it's ok. Ah, no, no. Ok, let's refresh. I didn't really expect that, but sometimes we have to find some tricks. Relaunch. Wow, ok, I'm going to relaunch. Ok, don't worry. We had some difficulties to make it really work well. Ok, let me do something bad. Ok. Ah, normally it spins around, it spins, but... Ah, I'm going to refresh. Firefox beta is unstable. Yeah, it's unstable, and there are actually only two nightly builds that work with this. Ok, for example, normally it spins; you see that when I move the mouse it spins. Ok, we'll see; normally it works, but we are going to see the SVG interaction. So, you can resize the window. You can zoom. Ok. Ok, you can pan. Ok. Double clicking the zoom button resets to 1:1. You can zoom out, and zoom in. Double clicking on the hand tool resets to 1:1. Ok, let's retrieve the... let's go back to the molecule. Federico? Yes, what? Federico. It's finished? Yes. Ok. Do you have anything specific that you'd like to round off with? Do I have anything specific? You can get this online. Oh yeah, yeah, you can see it online, I forgot to say, sorry. You can see on the SVG Open site, in the papers, you know, and if you go to WebGL and SVG you see links where you can see the demos online, so you can see the shark and this molecule. Thank you. Do you have any questions, sir? Ok. I didn't understand why you need the HTML layer between the SVG and the Collada. Why don't you use the foreignObject with the Collada directly? I'll let you answer that, you created the SVG framework. But we can somehow put it in SVG, we are working on that. The main reason is because the GLGE library that we chose to use forces the user to use a canvas. And plus, it's not really finalized as a library, so we don't even have the choice to give a particular ID to the canvas. It's kind of horrible. But we are working on exactly what you say, because it's actually very easy to use a foreignObject and put the thing in, but we have to find a library that allows us to do that. Now another thing that I would like to add, it's about the browser. You won't be able to see any of these demos unless you use a Firefox 4.0 beta 2. That's the only one at this moment that runs the demos. As in, that also refuses. The links are on the paper online, to the browser as well, and you need of course to download the Firefox beta version. So wait till this evening and you'll get an updated version on the web. That's exactly what you can do. Further questions? I saw that when you zoomed in into the 3D object, the image was pixelated. Is it a limitation of the GLGE library, or? I would expect that it was rather in a higher resolution. Yeah, we see that problem, but maybe if we put that in the SVG document this may be corrected, because this is a canvas which is bitmapped. So that's why for the moment it was not predicted to be like that. So normally it was a canvas where you couldn't zoom, but with the framework we could do that. So normally, if you want to explain more in detail... No, it's not done, it's not done. Wait, we'll see more. It's not important. Further questions? Thank you again, Pedro Jico.
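For reference, a minimal sketch of the embedding pattern described in this talk: an SVG document wrapping an HTML5 canvas in a foreignObject, for a WebGL library such as GLGE to render into. This is my reconstruction, not the actual demo source; the sizes, the id and the overlay text are made up, and the real demos load the scene through the Pergola window and the GLGE Collada loader instead.

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xhtml="http://www.w3.org/1999/xhtml"
     width="800" height="600">
  <!-- The 3D view lives in an HTML5 canvas wrapped in a foreignObject;
       a WebGL library (GLGE is assumed here) renders the Collada scene into it. -->
  <foreignObject x="0" y="0" width="800" height="600">
    <xhtml:canvas id="glcanvas" width="800" height="600"></xhtml:canvas>
  </foreignObject>
  <!-- Ordinary SVG elements can still be drawn over or around the 3D scene. -->
  <text x="20" y="30" font-size="20">Shark model (Collada via WebGL)</text>
</svg>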
The objective of this presentation is to show how it is possible to combine SVG and WebGL (a 3D file in the .dae format) into one single SVG document, and to demonstrate some level of interactivity. This presentation would require a slot of 5 to 15 minutes. It is articulated around three major points: – What is WebGL – How it functions – How a Collada document can be embedded into an SVG document and made to work. The demonstration will be made using the GEMï Web OS version 2, and the WebGL document will run in one of its windows. WebGL is a specification based on OpenGL ES 2.0, with JavaScript binding. It is designed to display 3D content on the web without the need of a plug-in. Nightly builds of web browsers like Minefield (Firefox), Chromium (Chrome), WebKit (Safari) and soon Opera are able to render 3D.
10.5446/31137 (DOI)
Okay, so I'm Petr Nalewka and I would like to show you a little case today of how we utilize XML technologies to make automated authoring of a large set of documentation, and very visual documentation also. So first, what are the advantages of the approach I'm going to present? We can automatically generate fragments of documents using XSLT and DocBook. Styles are easily pluggable, they are automatically applied. Also, using DocBook, everything is modular and reusable, thanks to XInclude mostly. And we have professional typesetting with highly customizable outputs, thanks to DocBook and XSL-FO. By the way, this presentation is a DocBook Slides presentation. So we get highly visual documents using SVG, and we use Subversion for collaboration and versioning of our documents. And we allow authors to edit documents visually using XMLMind. There are barely any disadvantages. Only some to mention: maybe authors need to learn some new tools. They don't get what they change immediately, they don't see it immediately; they change something and then generate a document, which may need a somewhat different approach. There may be an initial time to build, but I very much believe it gets paid off very quickly, as soon as you start to do changes and as soon as you produce a document similar to what you already did. And it's not an out-of-the-box solution. It's basically a framework, something of a build-it-yourself thing. And the framework is really quite general; you can use it in many different industries, for different domains. We use it in my company to create documentation for large networking projects. So we have a domain which composes several sites with different equipment, with different network topologies. We have IP plans, wireless connections, et cetera. And what we do is support such kinds of projects through all their lifecycle phases with this technology. So we generate different kinds of documents, from the proposal to detailed design documents, implementation and testing guides, inventory management documents, et cetera. And we generate it all from the same data. So, something about the framework. It's very simple; I will just quickly show you the main principle. You have some XML-based domain-specific grammar. You apply XSLT stylesheets to it and you generate SVG fragments, DocBook sections, tables, slides, et cetera. Then you have some repository of static diagrams, images, static DocBook fragments, and everything gets composed together using XInclude to get the final DocBook books and articles. And the framework is basically a glue between these kinds of technologies, which helps you set this up very quickly: to have a place where you put your domain-specific data, where you put your stylesheets, define a pipeline to process those, et cetera. In the next phase, it's basically a standard DocBook process, when you take DocBook sources and you apply some XSLT customization layer on them. We have something which we call styles, but basically those are encapsulated pieces of DocBook customization; you can very quickly plug in different styles to have completely different styling for different departments of your company, different branches and so on. Then those styles share some customization layers, and they also have some specific stuff for different documents. So it's kind of a layered approach to this customization layer.
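As an illustration of the composition step, here is a minimal, assumed sketch of a DocBook book that pulls together static and generated fragments with XInclude. The file names, the directory layout and the use of DocBook 5 are my assumptions, not taken from the actual project.

<book xmlns="http://docbook.org/ns/docbook"
      xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0">
  <title>Detailed Design</title>
  <!-- Hand-written, static chapter kept in the repository -->
  <xi:include href="static/introduction.xml"/>
  <!-- Chapters generated from the domain-specific XML by XSLT stylesheets -->
  <xi:include href="generated/ip-plan.xml"/>
  <xi:include href="generated/site-overview.xml"/>
</book>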
We also have a nice feature: if you produce a PDF document finally out of your DocBook sources, you can merge it with static PDF fragments. So for example, if you don't want to maintain some very large presentational data, you want just to merge it into your generated document, you can do it quickly. So as I said, it's basically a framework, so it requires you to design a nice grammar for your domain and then do XSLT stylesheets for it. Even if this doesn't allow you to have something out of the box, I think it's really a nice solution, because you can hardly have something nicer than a domain-specific grammar, optimally in XML. It's really a very maintainable structure. It's very much understandable to domain authors. You can get rid of redundancies. You can really change it easily, and your stylesheets are much cleaner than if you would use something general-purpose. You have to keep in mind that your domain model is really the most rapidly changing part, that you always do changes there. So to keep it very agile, it's a good thing to have it in your grammar in XML and just apply stylesheets to it straight away. So I think this approach, for this particular use case, is very agile, much more agile than if you would use some scripting languages or other technologies. The reason is that you don't need to do all this mapping between different formats, like Mark was telling us in the last presentation. You get rid of all those guys sitting next to your XML. You have your domain-specific XML, and you just transform it straight away into some nice presentational grammars. If your output is basically XML grammars, this is the easiest thing you can do. You get a declarative approach, so everything is much more readable and you need less code to do it. And we go even farther to make it even simpler to change things. We have several techniques which we propose. We don't use any strict schemas. We rather define only fixed points in the tree, which are more rigid than the other parts of the tree, and then we create loosely coupled stylesheets whose XPaths basically navigate through the fixed points, and you expect that everything else is more likely to be changed. This is to minimize the impact that changes in the domain-specific grammar, or in your data, have on the stylesheets. So the aim is to make large changes in your grammar possible without really touching the XSLT much. For schemas, we don't use grammar-based schemas like RELAX NG or XML Schema; we use Schematron, which is very good for loose validation. With Schematron, everything is allowed until you make some rule. So first of all we check these fixed points, so that if those fixed points have been touched, we know we need to change a lot of our stylesheets; if not, then it should be okay. And what Schematron is also very good at is consistency checks in the data. Even if you have a very beautiful grammar, you can never really get rid of inconsistencies, and Schematron is really perfect for that: you can check really complex rules and output very domain-specific messages. So you can really guide the authors on what shall be resolved, what they should do. In the networking domain, for example, you can assign an IP address to a network where it doesn't belong. Schematron is excellent for catching that. You could also do that with XML Schema 1.1, for example. So let's move on, and I would like to talk about visual documents.
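To make the consistency-check idea concrete, here is a minimal Schematron sketch of the kind of rule described above, flagging an address assigned to a network it does not belong to. The grammar (network, interface, the ip and prefix attributes) and the naive prefix test are my own assumptions for illustration, not the project's actual schema.

<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="network/interface">
      <!-- A naive consistency check: the address must start with the parent
           network's prefix (real IP/subnet math would need more work). -->
      <assert test="starts-with(@ip, ../@prefix)">
        Interface <value-of select="@ip"/> is assigned to network
        <value-of select="../@name"/> but is outside its prefix
        <value-of select="../@prefix"/>.
      </assert>
    </rule>
  </pattern>
</schema>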
Visual information is much more understandable for humans than paragraphs of text, so it really makes sense to make your document as visual as possible, because if people understand your documents you can, for example, win a tender or sell your product, or you can use less skilled staff, let's say. So it really makes sense to visualize your documents. And the reason there are not so many visual elements in today's documents is that it's quite difficult to maintain visual data. And that's where something automated really comes into play: if you can automate it, then you can really visualize your documents. So I will walk you through several use cases of what we do, just to show you what the possibilities are. For example, we implement image callouts, which is a DocBook feature, but it's not implemented in the standard stylesheets, so we have an XSLT/SVG implementation. Basically we take a bitmap image and wrap it with SVG, and we highlight some different areas, assign numbers to them, and each number is then referenced and some text is assigned to it. You can imagine that keeping that kind of information in a bitmap format, and opening an editor to change something, is a real nightmare. But having an XML structure, you can really easily change the areas, and you can change the text here because it's just plain text. So this is just one of the examples. Another one: in the networking domain you do a lot of networking diagrams. One thing you could do is just take your data and generate those networking diagrams automatically, lay them out basically, but you don't get a very nice result; it's quite complicated. So to simplify the case, what we can do is let the author control the layout of the diagrams. For example, networking guys really love Visio, so they can draw their diagram in Visio and assign custom properties to different labels and elements in the diagram. Then we export to SVG, and we have a nice XSLT stylesheet which maps custom properties to XPath queries and populates the diagram with our data. So basically we can have, let's say, 20 sites with the same network setup, so the authors only draw the diagram once and populate data for each site. It's very maintainable: if they change the place of one component, they regenerate and 20 diagrams are automatically newly generated. Another use case, where we really generate diagrams from scratch, is rack layouts. What we do is we have SVG clip art generated, sorry, exported from Visio, and we have an XSLT stylesheet which draws rack layouts from that. So, to show you an example, this is one of the documents which we produce. It's a detailed per-site documentation, and this is an SVG diagram where we populated the data from our domain-specific data. Here's another one, and here's an example of how a rack layout can look. What is really cool about SVG is that you can really zoom in, so if you do something like that, you have really detailed documentation with all the ports, cables, manufacturers, flashing lights, everything, which can be very useful on site when installing things or repairing things. So next, where we get really visual, we integrate with geographical formats. We use integration with Google Earth, for example; you know, they use this Keyhole Markup Language, and it's an XML grammar, so it's very easy to get it into an XML-based framework.
And we do two types of things: either we use our data and we generate KML data out of it, so basically we visualize our project on the Earth's surface, so you can see different sites, you can see how they are connected, and if it's wireless you can see the line of sight, for example, so it's very useful; or, the other thing is, we allow the authors to create the data from within Google Earth and then transform it into our data. And I would like to show you a little application which we used, which also demonstrates how agile this approach is, because it was created in really two or three days. What we have is a little tool which allows us to plan how to wirelessly cover a larger area with Wi-Fi signal. The authors place these little pin points in Google Earth and assign some custom metadata to them, which KML allows, and in the first transformation we generate richer data out of it: we connect each device with a wireless signal, which is basically a link drawn between the wireless units; different colors show us different channels, and the circles show us the coverage, so we see how the site is covered. Next we automatically generate legends for those colors, and we have a table of the devices with geographic locations, so guys can go on site with GPS devices and really see where to install the devices. We also have a visual representation of how to install each individual device. For example, here you see a device with eight slots, and we calculate within XSLT, from the geographic data, the bearings where to point the different antennas, and we have a little optimization task which does it so that the radios are equally distributed within the sectors. So that's only one use case of how we can visualize things, and there is a lot of future work. It would be nice to allow charting, SVG-based charting; I looked on the internet and I didn't find a nice library which would really produce some professional output like charts. Sorry, there is one. Thank you. Another useful thing would be really automated graph layout, and another thing is to take SVG and make it really geographical, to really render maps and not rely on Google Earth or some other tool. Visual tools: this is really a critical point for an XML authoring framework, because you really can't expect that authors will go into your XML code; it's something very unfriendly for them. If you don't want to spend all your days just converting ugly presentation formats into nice XML, you really have to allow your authors to use visual tools. So what we already have is integration with Visio through SVG, as I have shown you. We have KML, and we also have Excel through comma-separated values files. We have an XSLT stylesheet which basically parses a comma-separated values file into a generic XML, and then we convert it, through some conventions in the spreadsheet, into the domain-specific grammar. Another thing which is really crucial is a visual XML editor, because if you don't have something like that then you are doomed to converting Word documents into XML, and this is something which can't really be fully automated. You can have tools which help you as much as possible, but you cannot automate it 100%. So I have evaluated several editors like XMetaL, Oxygen, Epic, Serna, and until now I have the best experiences with XMLMind, which is something I can, after some customization, really hand over to non-technical people, I mean people who only work with office applications, and I get at least some results.
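For the Google Earth side, here is a minimal, assumed KML sketch of the kind of data being exchanged: one placemark for a wireless unit and one line for a link between two units. The names, descriptions and coordinates are made up for illustration.

<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>AP-03 (channel 6)</name>
      <description>Rooftop unit, 8-slot chassis</description>
      <Point>
        <coordinates>14.4208,50.0880,0</coordinates>
      </Point>
    </Placemark>
    <!-- A wireless link drawn as a line between two units -->
    <Placemark>
      <name>AP-03 to AP-07</name>
      <LineString>
        <coordinates>14.4208,50.0880,0 14.4301,50.0912,0</coordinates>
      </LineString>
    </Placemark>
  </Document>
</kml>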
XMLMind is quite promising. So, what is good about XMLMind; I will just quickly present some of the features. This is my article in DocBook in XMLMind. They have very nicely done the shielding of authors from the complexities of the tree, while at the same time allowing them to know exactly what is happening. And the thing which does that is this little breadcrumb at the top, which shows you where you are in the document. So it's the connection between the tree and the visual document, and it's very useful and very simple, because non-technical people are quite used to working with something like a breadcrumb, a typical element on a website, for example. Another thing is that it's quite extensible, so we can easily write new commands; for example, cross references in DocBook, which I implemented very quickly. So for example I have a section here and I associate an ID with it, this is my test section, and here I do, see, a cross-reference to the ID of the section "test". So that's it. They have CSS styling, so not something proprietary; you can really use CSS, and not only for DocBook, but to visually style your domain-specific languages, so people are able to visually add networks and equipment to your project. One very important thing is that your document is always valid. XMLMind doesn't allow you to do anything which would make your document invalid, and this is especially important if you hand it to non-technical people, because when something gets highlighted in red they are completely confused by it, so it's useful. Another thing is integration with other tools. It's not something out of the box, but I had to implement copy and paste, so you can easily do things like this: basically, paste table after, and I get the Excel table pasted into XML. Or I can just choose a random page and select two paragraphs and do paste list after, for example, such kinds of things. So even for me, when I need to convert some Word document into XML, I tend to realize that it's easier to copy and paste from Word rather than using some stylesheet and correcting all the mistakes which are there afterward. You can also visually edit modular documents. This is important in terms of reusability, because you can create a document fragment and visually XInclude it in different places of your documentation. So I think I'm running out of time, so if you have questions, I have time; I'm finished anyway. So, is there any question? So I'm not the Apple guy, but I still had more than three beers yesterday. Okay, you mean XMLMind? I'm sorry about that, but... The question was, and it was not a complete question, sorry to interrupt you, but it was about XMLMind: that I'm proposing something and I didn't show some alternatives. I think it was really a quick overview of the features. I've spent quite some time, maybe several weeks, with XMLMind now, so I only wanted to share my experiences with that. I know there are other tools which can do similar stuff. You always have to do a lot of customization, and you probably get to a similar point, I'm sure. Okay.
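Coming back to the cross-reference command shown in the editor demo, here is a minimal DocBook sketch of what such an xref looks like in the source. It is illustrative only; the IDs and titles are made up, not taken from the demo document.

<article xmlns="http://docbook.org/ns/docbook" version="5.0">
  <title>Demo</title>
  <section xml:id="section-test">
    <title>This is my test section</title>
    <para>Some text.</para>
  </section>
  <section>
    <title>Elsewhere</title>
    <para>See <xref linkend="section-test"/> for details.</para>
  </section>
</article>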
This article proposes a set of powerful XML technologies (e.g. DocBook, SVG…) to automate authoring of large, detailed and highly visual documentation which would be difficult and error prone to reproduce manually. The author further proposes best practices for XML authoring and introduces a simple yet powerful framework which supports tasks typically related to document publishing and integration of information from various sources. Rather than building a complex theoretical background this article focuses on being very practical. It demonstrates the use of various technologies on a case study taken from the networking industry.
10.5446/31138 (DOI)
So I know we're all tired and hungover, and we have some people to blame for that. And so the organizers had the good idea of picking this session as the last one, since it's mostly demos of things that move and shine; it should be fairly easy to follow, hopefully. So basically, I just wanted to use this session to give an update on where SVG and the related web technologies around it are today. SVG has been around for about a decade now; at least it's been in development for over a decade. And it has this bizarre situation where it's somewhat successful. It's in a lot of browsers. Last year, it shipped in 1 billion implementations in cell phones, which is quite a good thing. It's in Firefox, Opera, WebKit, et cetera. It's exported by Illustrator and so on and so forth, but no one's using it for some reason. It gets very little usage. So the situation isn't ideal, but it's better than it was two or three years ago. Notably, there were all sorts of bad politics about MPEG trying to take over SVG and swallow it up, and that hasn't happened. And the conformance is much better today. Actually, we've reached the point where support in browsers is good enough that you can use it in situations where you know that it degrades gracefully in Internet Explorer, or where you have a small script that allows you to provide alternate content. For instance, Google Analytics in some browsers uses SVG to display the graphics and the charts and things like that. The Washington Post uses SVG for a system of related articles and things like that. And I've been testing it quite a lot recently, and the support has become pretty good. The improvements that came with the last version of SVG, which is 1.2, and that was released in December last year: basically, better graphics. Before that release, we couldn't do things like that on mobile. Well, let me backtrack a little bit. The baseline of SVG that people can and actually do use is defined by the mobile profile. So some things were possible outside of the mobile profile, but when people do SVG, a lot of the time they want it to work on phones and on desktops. So one of the improvements is better graphics: SVG Tiny now supports gradients and things like that. It integrates video and audio natively, so you don't need to have an object with a plugin to support video playback, and it means you can control it directly with an API in the DOM. It hooks into SMIL, so you can have animation events fired off from the video. Again, scripting was in the full version, but it wasn't in the Tiny version, and that always hurts adoption; now the baseline has scripting. And generally the spec has been massively cleaned up compared to the previous version. And while that probably doesn't matter directly to people, it certainly helped implementers do a much better job of showing the same thing for any given file. Interoperability is now getting really, really much better. In browsers it's not perfect, but it's getting seriously there. And it's not just SVG on its own that's interesting. Actually, SVG on its own is just vector graphics. It starts to become more interesting when you start being able to use SVG as part of something else, and I'll be showing demos of that right afterwards. For instance, you can now use SVG from CSS and HTML in some browsers. All of these things are features that are being deployed now and take some time to catch up, so they're not always the same everywhere, but it grows fast.
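As a rough illustration of the native video support mentioned above, here is a minimal sketch of an SVG Tiny 1.2 document with a video element. The file name, codec and sizes are assumptions, and real content would normally add SMIL timing or script around it.

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     version="1.2" baseProfile="tiny"
     width="320" height="260">
  <!-- Video is a first-class element, so it sits next to ordinary SVG content -->
  <video xlink:href="clip.3gp" type="video/3gpp"
         x="0" y="0" width="320" height="240"/>
  <text x="10" y="255" font-size="12">Caption drawn with plain SVG text</text>
</svg>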
Using SVG as a CSS background allows you to have a shape that actually grows with the background in the HTML, and can fill it correctly and keep a nice fine line, without having to use 19 different images to get something to work. Same thing with clipping and masking, which is basically the operation of hiding something: you can now do that on any kind of HTML content. And filtering, which is basically the bitmap-level operation. And as I'll show later, these things become really, really interesting when you mix them up with HTML. For instance, I was playing with having an H1 element contain text on a path, so that you could have the titles on your document just go like this and stay text. They stay completely text and you can use fonts, which means you can now have nice typography on the web, which is always useful, and I'll basically be showing these things afterwards. Also, I wanted to point out that things are moving ahead in terms of both tooling and standardization. There are better SVG production tools today. It's still not perfect, but it's moving forward. And one of the things that's going on that's interesting, and that's really helping with the browser integration, is that SVG is currently being integrated into HTML5, and is expected to still be defined by the SVG people, but to become an integral part of HTML, which is very good in terms of convincing browser people to actually support it and support it correctly. And finally, now that we have 10 years of experience of things that work and things that really don't work, let's talk about doing a more massive cleanup and killing off all the features that no one uses, to try and make something simpler, smaller, easier to learn and more implementable. So, some demos. What should I start with? So this is just showing that clipping can work on any kind of content. Basically, you have an HTML page behind that's playing a video, and I'm just moving around to show that clipping is operational here and you can animate it. This is pure HTML plus SVG. What next? So this basically shows the operations that you can do on text with filtering and animation, clipping. And this is basically real HTML text that can be clipped and moved around and rotated multiple times and animated, et cetera. I'm not a graphics person, you know, I don't do pretty things, but this is really easy to actually put together. It's not that much code. Currently, one of the big problems is, the Czech people are laughing for some reason, currently browsers are not really well optimized, but it's getting a lot better. Some of the work that's been done in optimizing JavaScript is helping SVG a lot, and some implementations are becoming pretty good. For instance, WebKit on iPhones runs SVG, and if you do something really complex like filters, something crazy like that, it will be slow. But it's rather impressive how some fairly normal stuff that moves around, hides things, uses transparency, all those features, is actually fairly decently fast even on this. So yeah, this is basically showing what I was explaining earlier about the ability to use text on a path. This is real text. You can select it. You can run find on it. Normally, you should be able to find it; for some reason, it doesn't like it. This is a better version of Alkohol. Sure. So this is basically just SVG. I can show you, if I find it, I can show you the same thing using SVG plus HTML, which would be somewhere in here. So basically, if I show you this source, you see three H1s. Hang on. This should help.
So this is just the first H1, which is just a regular H1. This is an H1 containing some SVG, and this is a text path that references the path that's defined up there; just putting the text there makes it wiggle. And then this is a version that just uses HTML directly, but there's a script up there that does the same thing as the second one, but behind the scenes, so that you can still keep using your HTML as if it were normal and regular and just drop a little script in there, and you will get the third one. And this is the kind of trick for which people currently have to go to images or Flash, because designers always want to get creative with titles on pages. And right now it means that using this, on three browsers out of four you will get the wavy thing, and it falls back properly in Internet Explorer, so you'll just get a regular title, which you can still style. So let me try to show something else. Hang on. So basically this is showing video integration. You can now use video directly inside SVG and it's fully animated. You can rotate it. It's part of the page. It really works. And the whole idea is really to make sure that features can work together, as opposed to having a plug-in which you couldn't rotate because it writes to the screen directly, and things like that. This makes interaction of features possible. It's not new to see video in a browser, but it's new that you can actually do useful stuff with it without having to go entirely into Flash. Let me try to show something. I'm running 9.62 or something like that, which is an Opera Labs version. You can't get it officially, I think. And the other problem is, there's no support in the release? Yeah, Firefox 3.1, which is currently sort of alpha-beta, has that level of support. Safari 4, also beta, has comparable support. It's actually getting to be pretty decent today. Internet Explorer... Basically, this is a demo from Fuchsia Design. Again, it shows the video and how it integrates with everything else. You can clip it and you can have transparency overlaid on top of it. It's fully integrated with the language. That gives you funny things. For instance, this is the same video. As I was saying earlier, SVG has bitmap-level filters that can apply to anything. In fact, the text at the top with the little effect to make it look more Web 2.0 gimmicky, this is actually a filter. This is real text. It's just flipped over and reused with a filter that applies the gradient. It's fairly simple. But you can also apply filters live, in real time, to the video: a grayscale; going back to the original; tracing edges again. This is a very simple example. There are plenty of filters that can be used, and they can be combined and parameterized. You can start doing fairly interesting stuff. Yes, the CPU on this is fairly bad. That will not work on the iPhone. It's still interesting. This has only been manipulated by geeks so far. If people who actually had some talent for design were to put their hands on this, they would probably be able to do something a lot more impressive. Every time we add features to the Web stack, we find out that designers come up with absolutely amazing ideas. This adds quite a few features. Again, video fully integrated. You can resize, flip, play with it in a number of ways. I just wanted to show you this. This is not SVG, this is Canvas. Basically, this is a 3D engine done entirely in JavaScript. You can implement a game.
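A rough reconstruction of the kind of markup being described for the second heading, an H1 whose text rides on an SVG path. This is not the actual demo source; the path data, sizes and wording are made up.

<h1>
  <svg xmlns="http://www.w3.org/2000/svg"
       xmlns:xlink="http://www.w3.org/1999/xlink"
       width="420" height="90">
    <defs>
      <!-- The curve the heading text will follow -->
      <path id="wave" d="M 10,60 Q 110,10 210,60 T 410,60" fill="none"/>
    </defs>
    <text font-size="28">
      <textPath xlink:href="#wave">A heading that stays real text</textPath>
    </text>
  </svg>
</h1>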
Coming back to that Canvas 3D engine: the way that Opera supports it now, even though it's not complete, they allow you to use SVG as a texture in their 3D support, which means you can show animated SVG plus video on something that moves around, which is completely controlled by script. It's fairly impressive. Now, let's look at some... I actually wanted to show you things directly on the mobile, which is difficult, but we could have used something like that. The problem is that the mobile I have, that's not this iPhone, but the one that does mobile SVG in interesting ways, is completely broken because it's Windows Mobile; don't buy Windows Mobile. But I have some videos that at least show similar stuff. This is basically a user interface, a complete user interface for a phone. Everything is SVG, just absolutely everything. This is running on a phone fairly decently. The pseudo-3D that you're seeing is actually using an extension to SVG that's currently being looked at for standardization. The idea is not, as I was saying yesterday, to add a Z attribute everywhere next to X and Y; it's just to support... here, a Cover Flow-like system. Basically, the idea is to support non-affine transforms so that you can emulate 3D easily with 2D content. Basically, as you see, it's a normal phone. This is a Samsung phone, and it's been released in Korea at the end of last year, I think October, I forget. This also shows usage with widgets. As you probably know, widgets are being fairly intensively pursued. Basically, these are just small mobile applications that can do all sorts of stuff and that are built using SVG and HTML. These are all SVG widgets. It's like the app manager for the entire phone, and you can just build those new applications using SVG. The standard is coming out later this year. It's in last call, and there's a big push by all the operators and vendors to actually support that kind of small widget application on phones everywhere, so that everyone could use them and not build special things for each different platform. I have a few other things, but they're not really as funny. I could probably show you the widgets more extensively. Again, this is the same idea, except it's a different widget manager. You just drag things out, and all of that is done in SVG. This is a capture straight off the phone; basically, as you can see, it's in Korean. Yeah, and some fonts are missing. The idea is really to enable the creation of simple applications using web technology that you can ship everywhere without depending on a given platform or anything else. It's like Java, except they got it right. I believe that's pretty much it. Do you have any questions? Yes? I'm in talks with the implementers who are behind this stuff to try and get them to push the demos out on their website so that everyone could actually use them. The problem with that is that they then have to ask permission from the designers and from Samsung, so it's taking a while. I was hoping that we could get them in time for this conference, but unfortunately, Samsung being a fairly big company, it's difficult. We're in talks, we're trying to get more examples out there that people can actually use. It would be nice. Yes? So the question is, if I wanted to create an application today that would run on the greatest number of phones, what would I use? The answer is, today, right now, I really don't know. It's still very split up.
You could use something like PhoneGap, for instance, which allows you to basically build your applications in HTML and ship them to iPhone, Android, and I think Nokia Series 60 now, and basically have access to a limited set of APIs from the device in JavaScript. That's the reason why the widget effort is taking off: people want to build applications that just run everywhere easily, and the Java stack has pretty much failed in mobile because it's completely not interoperable; there are many issues, and you never know which JSR is going to be present. And basically the idea with widgets is to do something simple that can work everywhere easily, in that it's basically just a zip file with a small manifest, and the rest is HTML, JavaScript, SVG, and CSS. And once you have that, you can be pretty confident that it can be made to run anywhere. And there's an effort on top of that to expose the device-level APIs to JavaScript in a coherent and standard manner; that's currently done in OMTP, the Open Mobile Terminal Platform, and we've been, well, I mean, I've been fairly involved in that, and the standard will ship this year. The phones with that sort of level of support will start shipping this winter, basically. So right now, I don't know, but we're working on it, we're getting there. Thank you. That's part of my question on the Google Clouds. Oh, yeah, sure. You told a friend to go to the website to ask the question, right? Yes, sir. Is there a test suite that we can rely on to ensure compatibility? Right, so the question is, is there a test suite to ensure compatibility, as there is for CSS with the Acid test and the CSS test suite? The answer is, it's not perfect, but the test suite that shipped with the latest version of SVG is already much better than the previous one, and people keep feeding tests to it; especially as browser vendors find bugs and find issues, they tend to feed us tests on a regular basis. And also, there is work on an SVG Acid test, and I believe that SVG will be, or has been, integrated in the latest version of the Acid test. So basically, the Acid test now tests that browsers support SVG as well. Is that a question? Any other question? I have one. Oh, no, not you. Of course: are there any efforts to develop tools to generate this kind of... Yeah, you mean SVG itself, or the whole... things that move? And yes, there's a fair amount of effort in the OpenAjax Alliance to enable the creation of JavaScript libraries that people could reuse easily, and to integrate with IDEs so that people could develop in a more visual manner. I know that Aptana has done a lot of work around that. Also, of course, for SVG, you can create your SVG in Illustrator because it saves to SVG, and Adobe, despite the fact that they prefer Flash for some reason, tend to actually improve the SVG support in the Creative Suite with pretty much every release, so they're certainly not stopping support for that. There are also a few tools from Ikivo, which is a Swedish company that does implementations of SVG for mobile phones. They have two authoring tools that are pretty good. One is an animation tool that basically allows you to take things directly out of Illustrator or any other graphics program and start animating them. And the other is an IDE that's in development that works inside Eclipse and will emulate a phone so that you can do SVG development for it.
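As a rough, assumed illustration of the "zip file with a small manifest" idea: a configuration file in the W3C Widgets format, which was still being finalized at the time of this talk, looks roughly like this. The name, icon and start file are made up.

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/clock"
        version="1.0">
  <name>Clock</name>
  <description>A small SVG clock face.</description>
  <icon src="icon.svg"/>
  <!-- The start file; the rest of the zip is just HTML, JavaScript, SVG and CSS -->
  <content src="index.html" type="text/html"/>
</widget>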
Again, those tools are not perfect, but they're definitely moving forward. I'm personally working on a way to export SVG from Flash and Flex because I do think it's possible, but it's taking a while and I'm doing it in my spare time. Any other questions? Are you really too tired? Thank you. Before I yield the floor to Mohamed, I would like to thank the organizers for what has been a really kick-ass conference. Good job, guys.
The capabilities of mobile devices increase ceaselessly, and on occasion they are even useful. That is the case of Web technologies that have been becoming mature and gradually more important in mobile devices. This talk will look at the state of current implementations, at where mobile Web technology stands today notably concerning the recent release of SVG Tiny 1.2 and the improvement in support for WICD documents, and will show demos to give an idea of what can be done.
10.5446/31140 (DOI)
Right, so I'm going to talk about XProc a little bit this morning, which as Jirka said is a relatively new working group working on a standard for doing pipeline processing. I'm going to spend a few slides introducing it and bringing people up to speed if it's a new technology to you. And when I was talking to Jirka about what he'd like me to talk about and what he thought would be most useful, since I had given this sort of introduction to XProc a couple of years ago, we concluded that what would be most useful would be to try and do some real-world examples. So I went looking for something that I thought was reasonably complicated that I could use for a real-world example, and I ended up, wonderfully self-referentially, with the process of building the XProc spec itself. So we're going to take a look at that in more detail, and then undoubtedly there will be plenty of time for questions at the end, so feel free to wait until then or interrupt me as we go along. If you want to follow along, I've got a URL up there. The examples turn out to be sort of long, and they're full pipelines, and so some of them don't fit on the screen. If you have Wi-Fi access and your laptop with you, you can point your browser at that URL and follow along as we go, so you can see the whole example on the screen if that's useful to you. Going once, going twice. Any questions? Thank you. Alright, so we started the XML Processing Model Working Group in 2005, and among our chartered requirements was to produce a language which would allow us to specify the order in which we wanted processes to be performed. In the very beginning there was XML, and it was sort of straightforward what you did to process XML: maybe it had a DTD, maybe it didn't, you parsed it and you were done. But over time new technologies came along. We had XInclude and we had XSL transformations, and then the question of what order you do the processing in became less clear. And what we wanted in XProc was a way of expressing the order that you wanted these steps to be done in, and then some mechanism for dealing with exceptions if things went wrong along the way. Here are some of the use cases that we thought about. There is a use cases and requirements document. I'm not going to read that slide to you; they're the sorts of things you'd expect to be able to do with XML documents. We wanted to do it sort of by standardizing existing technologies. If you look around, there are still, and were then, plenty of pipelining technologies. Jirka mentioned make and Ant, which qualify; there are at least eight or ten others. We wanted to take the best ideas from those and wrap them up relatively quickly into something that was mostly declarative, and try to get the simplest thing that would get the job done out as quickly as we possibly could. I'm not sure this slide is actually all that useful. Later on, as we go through the examples, I'm going to talk about the various parts of pipelines and steps, so these are some bits of terminology that will come up. I'll try and point them out when we actually get there; I think they'll probably work better. What does XProc provide? It provides a pipeline element, so that you have a document that describes your pipeline. It provides a library, so that you can have a collection of pipelines if you want. And then it has a few basic language constructs. You can make choices based on XPath expressions. You can iterate over a sequence of documents.
You can iterate over parts of a document, and you can do try/catch. In addition to the language constructs, it includes a vocabulary of, we hope, useful atomic steps that allow you to do individual operations. You know, the obvious ones are in here, like XInclude and XSLT and various validation things. I won't try and read them all, and I won't try and explain what all those are, but they're the initial vocabulary of steps that you can use in your pipeline. You can, of course, invent your own. So I gave an introductory presentation here in June of 2007, when we were at Working Draft, and you can see the various statistics there. I claimed in 2007 that we would be finished by October. I don't think I said what year, but I did actually mean 2007. It's now March of 2009. The best laid plans of mice and men are usually about equal. So we're in Candidate Recommendation now, and I think our odds of getting this thing finished this year, by October, are actually pretty good, although there's no timeline for getting out of Candidate Recommendation. You get out when you have two implementations that do everything and you can convince the Director that you're done. I do hope... we have more than four implementations now. I have one; Wojcak, who's going to talk later today, has one; and there are several others that are being developed in various stages. There are some in Java. There's at least one on .NET. Somebody on the mailing list was talking about doing one in Python the other day. I think we're going to see pretty good deployment. So, as I said, I wanted a sort of meaty example, something that was definitely in the sweet spot for XProc, something that we ought to all agree you could do with XProc, but wasn't so small and so trivial that it didn't look like you would necessarily need to do it with XProc. And the running example that I chose was one that I build several times a day, sometimes several times an hour, on an almost daily basis: the XProc specification itself. Historically, this was done as a make file. And so what we're going to look at for the remainder of this presentation is what it looks like to convert that make file into XProc. With this resolution, it's not obvious how to stop Growl, so we'll just ignore those things when they pop up. This no longer fits on the display, but this is sort of roughly what goes on. This is what building the XProc spec is like. I don't have a pointer. There are some examples that are stored in a source form so that we can actually validate them and run the pipeline, so we know the examples are right. And you start by transforming those examples into the little snippets that you actually want to include in the spec. And then the spec itself is basically a collection of XML sources that you XInclude together to build the complete spec. You take the XIncluded spec and you extract some stuff from it for various parts of the process. So the actual RelaxNG grammar that says what the step types are is constructed from the declarations, from the descriptions of those steps in the prose, so there's a little bit of work to be done there. And then at the end of the day, an XSLT step combines all these things back together and produces spec.html, which is what gets published on the website. The one other little wrinkle that we'll see is that we extract the glossary. The glossary definitions are inline in the spec, but we decided that we wanted to have a glossary at the back of the spec.
So before you do the XInclude, you have to do one pass through the document in order to build the back-of-the-document glossary. So, all in all, not a completely trivial process, but not one that I think is too difficult to understand. Oh, and then after you get the normative RelaxNG grammar, you have to get the DTDs and the XSDs at the end, which have one interesting feature. So here is our starting point. I will not attempt to read or explain the entire make file, but this make file is sort of what does the work now. And if you look at it, it builds the specification itself, an HTML document; it builds the glossary; it builds the ancillary files that I described when we were looking at that diagram; it builds the namespace documents, those are actually separate targets, those are for publishing with the spec; and then it builds the examples and the schemas through recursive calls to other make files. So that is what we are going to try to reconstruct in XProc. And if you looked at the make file, you would see that these two rules are sort of the core of the process. We start by constructing an XIncluded version of the spec, and then we use that XIncluded version of the spec and a number of other stylesheets and things to produce the final result. So in fact, if you unwound the make file and wanted to do it by hand, you'd do just what I described. You'd XInclude the thing together, you'd validate it to see that it still worked, you'd run a transformation on it, and then you'd tidy up the transformation at the end, because the W3C website is cranky about various aspects of the serialization of HTML. So that is... I know why it's not a wide message. So if you wanted to do that in a pipeline, you would get basically this example. So here we are declaring... I'm going to try this again; I'm screwing up the video by moving, I'm sure. You begin by declaring a step. This is how you say, I want to invent something new, I want to invent a pipeline that does some useful work. So you declare a step. This input statement allows you later on to pass parameters; this input statement here allows you later on to pass parameters to XSLT if you want to. So that's what that's for. The XInclude step does what you'd expect. It takes langspec.xml and runs the XInclude process on it. The way pipelines work is that if you don't specifically declare how the ports are connected, they sort of go in a series. This XInclude step doesn't actually have any inputs; its inputs are explicitly taken from a document, but its output falls through naturally, goes directly to the validate step. So the validate step that follows takes as its input the output from its preceding step, plus it has this other input: it has another document it reads from, which is the grammar to do validation with. The output of the validate step is the validated document. In the case of XSD, this could be one that had, you know, it could be a PSVI that had type assignments and things on it. For RelaxNG, it's just the validated source. That falls directly into my next step, the XSLT step, which takes that document and the stylesheet and performs the transformation on it. The output from that passes to the next step. Exec is an escape hook. It's one of the optional steps. This allows you to run an arbitrary program on your machine if you want to do that. I took the tidy step that I had and I made it a streaming step, so that it would take its standard input and write to its standard output.
We take the output from the XSLT transformation and we run it through tidy. Then we store that in langspec.html at the very end, and our pipeline does its job. I could equally have left out the store step at the end, and then the processor would return the result of running the pipeline to whatever the calling application was, which is also common. What's wrong with that pipeline? Well, there's one thing that's obviously wrong. The point of tidy is to fix up the serialization, and so if you take the output from tidy and load it back into XProc and re-serialize it, you've more or less defeated the whole point of running the thing through tidy in the first place. So that turns out not to be terribly useful. Effectively, the output from tidy isn't an XML document. It's a blob of text. It's not XML, it's not HTML, it's text. You want to store those bytes on disk somewhere. So the first thing that we have to do is try and address that issue. Here's basically exactly the same pipeline, so I'm going to skip to the bottom. It turns out that you can ask the exec step... you can tell the exec step whether it's expected to parse the result or just pass it back as a blob of text. So here I've taken the exec step and I've said, no, the result is not XML; that's not true. So hand that back as basically a text node. And then, if I serialize that using the text method, what I wind up with is exactly the right thing. And I was going to try and demo some of this stuff as we went along, so let's see if I can do that. I think I'm in the right place. Will it work? Looks like it might. So here we are. We're running that first pipeline. This is the one that takes the output from tidy and re-serializes it. So when it finishes, we'll get to see that it came out not quite exactly right. If I go to there and... So there's the output from that pipeline. And if you look at that, you think that's probably not the output you'd expect from tidy. So let me start this one running, where it does the text one, and then we'll go back to the slides while it's thinking. We'll go back to here. So this one is going to basically take text as its... produce text as its output, and that's going to give us the right serialization. Does that make sense to everybody? Okay. Hopefully that's finished by now. The only disadvantage of choosing the spec as an example, excuse me, is that processing the spec with XSLT takes a few minutes, or a few moments. Right. So there we go. Now we've got the spec out of our pipeline, and it looks like it was probably transformed correctly by tidy. So I will not run every one of these example pipelines, but in fact I can run any of them that people want to see. At least I could yesterday. So I wanted to make the first pipelines that I showed sort of straightforward and obvious, and avoid any complications that weren't necessary. So I took my little Perl script that runs tidy and then tidies the tidy output, because I don't like the output tidy produces with respect to line breaks in a few places. So I took this thing that I use, that actually reads a document off of disk and stores it back on disk, and I made it streaming so that it would be obvious how it fit into a pipeline. But it's not... you know, sometimes you're going to have processes that don't stream, and you want to be able to run them. So I thought, well, the next thing we'll look at is what we want to do if we want to use a process that doesn't stream, if we want to run something that doesn't work that way.
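Before moving on, here is a rough sketch of the kind of pipeline just walked through: XInclude, validate, transform, run tidy as an external command treating its output as text, and store the bytes. This is my reconstruction, not the actual spec build pipeline; the file names and grammar are made up, and the exec option names in particular are approximate, since they were still settling while XProc was in Candidate Recommendation.

<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <p:input port="parameters" kind="parameter"/>

  <!-- Expand the modular sources into one document -->
  <p:xinclude>
    <p:input port="source">
      <p:document href="langspec.xml"/>
    </p:input>
  </p:xinclude>

  <!-- Check that the assembled spec is still valid -->
  <p:validate-with-relax-ng>
    <p:input port="schema">
      <p:document href="docbook.rng"/>
    </p:input>
  </p:validate-with-relax-ng>

  <!-- Transform to HTML -->
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="xmlspec.xsl"/>
    </p:input>
  </p:xslt>

  <!-- Run tidy as a streaming external command; treat its output as text,
       not XML, so its careful serialization is not thrown away
       (option name approximate) -->
  <p:exec command="tidy" result-is-xml="false"/>

  <!-- Serialize the text node to disk -->
  <p:store href="langspec.html" method="text"/>
</p:declare-step>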
Well, that... whoa, hello. That was a little bizarre. Right. I think this is where I want to be. Okay, so that's okay. So now I've moved things around. I'm going to say, well, after the XSLT step runs, let's store the result of that on disk so that we can come back and tidy it. And then let's run the exec step and say, no, we don't have any XML input, we don't have any XML output; we're just going to run this program and it's going to do its thing. And when that's done, we're done. I specifically don't want any output from this pipeline. Before, the last step in the pipeline was the store, and so it didn't have any output. The output from exec is going to be something, and I want to throw that away; I don't want that to come out of the pipeline. Well, that's the motivation for the sink step at the end. Sink says: whatever you give me, I'm going to flush it down the drain. So that looks like it does the job. Unfortunately... or fortunately, depending on your perspective, the... sorry, it's beginning to bother me, so we're going to go... they asked me to change the resolution that I was using moments before, and so I did, but there, that's better. The idea for XProc is that although we don't demand that implementations stream and we don't demand that implementations be multi-threaded, we want it to be possible to have implementations that have those characteristics. And so the only constraints that XProc places on the order of execution of steps are the constraints that are imposed by the way that you connected the steps together. If one step reads the output of another step, then it can't finish before the step that preceded it finishes. If I'm going to hand you something to consume, you can't finish before I finish handing it to you. So the two steps at the end there, the store step and the exec step, aren't necessarily going to do what you want. And in fact, when I ran it in my implementation, it didn't. Because they're not connected: if you graphed what that pipeline looks like, you'd wind up with something like this, where we start with a document, we run it through all these steps, and then we store the result out, and then we run this other step that execs, that processes that. Since there's no connection between those two sub-pipelines, you can run them in any order. You could even run them in parallel if you were multi-threaded. And so in fact, when I ran that example, my implementation decided to do the exec before it did everything else, which doesn't work, because it hasn't produced any output yet. So this is one of the areas where things can be a little strange. I'm not sure what we should have done about this. It turns out, excuse me, there are three sort of ways you can fix this, and I'm going to look at each of them in turn. The trick is you have to force this dependency somehow. If you're going to run steps that aren't connected by the actual connections in the pipeline, you've got to fake it somehow yourself. So this will give us an opportunity to look at some other step types. So here's that same pipeline again. And this time, after the store step, I say, well, let's make a choice. Let's run a conditional expression here. And I've explicitly said that the context for running this is the output from the store step. So at this point, there is a connection; there is a dependency from the store step to this choose. So this choose step won't run until the store step is finished. So I've imposed the constraint I want.
At this point, I can do any number of things. I chose a sort of arbitrary test: I know the output from the store step is never going to satisfy it, so I know that this when conditional will never fire. But I also know that it will be evaluated after the store step, and so if I stick the exec step down in there, I know that the result will come out correctly. Not terribly elegant, but it gets the job done. Also in the realm of not terribly elegant is the hack with an option. The other way to introduce a dependency from one step to the other is to use the output from the store step somehow in the evaluation of the exec step. So here I've said we're going to use the output from the store step to compute the value of an option that I can set on the exec step. Once again, in order for the exec step to run, it has to evaluate all of its options; in order to evaluate the encoding option in this example, it has to have the output from the store step. So I know that the exec step will run after the store step, and again things come out right. Finally, being in the wonderfully liberating position of being an implementer, you could also do this with an extension. I added to my implementation an extension attribute that says: look, this one depends on that one; there is a dependency here, even though it's not explicit. The working group considered adding something like this to the standard and concluded at the time — this was a few months ago — that we didn't really know what users were going to want. We didn't have enough implementation experience to know whether this simple one-way single dependency arc was going to be sufficient or not, so we didn't put it in the standard. I expect that all the implementers are going to wind up doing something along these lines, and in v1.1 or v2 we can come back and consider whether we actually know the right answer. All right, so now we've got a pipeline that sort of kind of does the right thing. There are still some jobs we have to do. Before we move on and start trying to build on this pipeline to get the whole process going, let's reorganize it a little bit and try to get some modularity. The thing to remember here is that, like the atomic steps xinclude and xslt, pipelines that you write and give a name to become first-class objects, first-class steps. You can use my-funky-step, implemented as a pipeline, just the same way you can use an xinclude or an xslt step. So we're going to modularize this pipeline a little bit so that there are some first-class steps in there. Standard software engineering practices apply: if you break these things down into reusable components, then you can load them up in other pipelines and reuse them. Good stuff all around. So here is my first stab at modularizing this pipeline. I've decided to put it in a library that has several steps declared in it. I could have declared these steps inside the pipeline; that was a sort of arbitrary choice. But if you put all of them in a single pipeline, then you can't call individual components separately — you can't reach inside a pipeline and call one of its sub-pipelines directly. By putting them in a library, I get a series of top-level steps that I could conceivably reuse in other contexts. So that's why I did it. Here we go.
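Before turning to the modularised library described next, here is a sketch of that third, extension-based variant. The attribute name and its namespace are hypothetical stand-ins, since the transcript does not give the real ones; the idea is simply an implementation-defined "run me after that step" hint on an otherwise unconnected step.

<p:group xmlns:p="http://www.w3.org/ns/xproc"
         xmlns:cx="http://example.org/ns/xproc-extensions">
  <p:store name="store-pretidy" href="pretidy.html"/>

  <!-- cx:depends-on is a hypothetical extension attribute: the implementation
       promises not to start this step until 'store-pretidy' has finished. -->
  <p:exec cx:depends-on="store-pretidy"
          command="tidy" args="-quiet -m pretidy.html"
          source-is-xml="false" result-is-xml="false">
    <p:input port="source"><p:empty/></p:input>
  </p:exec>
  <p:sink/>
</p:group>

The option-based variant looks much the same on the page: a p:with-option on the exec step whose select expression reads from the store's result port supplies the connection instead.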
The step called main here is the one that's going to do the work, and it calls format spec to get the job done, and then it calls tidy. I have refactored how things work a little bit in order to promote the idea that this will be a little bit reusable. Where I used to load the langspec.xml file directly in the xinclude step, and the pipeline that formatted the spec didn't have any inputs. Excuse me, now I've got this little... Excuse me, step called format spec, and I'm going to say, no, let's pass in the source. Let's make this into a proper pipeline. Format spec might be used to format other specs. So I don't want its xinclude step to begin by loading a file off disk or loading a file off the web. I want to pass the input into it. So format spec now takes as its input the spec that it's going to format, and I don't want to necessarily store it out in this step, because if I'm formatting different specs, I want them to go to different... perhaps to different file names. So instead of storing its result, it's going to pass the result back. So here we see format step basically takes its input, runs xinclude, runs validation, and runs xslt, and hands it back. So that's what this does here in my main function. It formats the spec. And then it passes that output gets passed to another step I wrote called pltiddy, which takes as its input the source document that it's going to run tidy over and the name of the file name where you want it to be stored in the end, and it stores it there and then runs the standalone version of tidy so that it produces the right output. So now we've got a slightly more modular pipeline with a few steps in it. Starting to feel good, starting to feel like we're making progress. Still a bunch of work we want to do. We've got to make that glossary in order to format the spec. We've got to make those ancillary files. We've got to have some way of making the namespace documents. And then we're going to have to come back to this whole examples and schemas thing where we had recursive make files before and think about how we want to try and tackle that problem. So let's see if we can do some of those things. Making the glossary turns out to be relatively straightforward. What does it mean to make the glossary? Well, it means that you run an XSLT process over the input with a particular style sheet and then store the results of that on the disk somewhere. So before I run Xinclude, I run XSLT, I extract the glossary out, I make sure the glossary gets written to disk. Again, I've got these two pipelines that are not directly connected, so we're going to play the dependency trick. I want to make sure that the Xinclude doesn't run until after the store finishes, so I've stuck in my extension attribute again. And the rest of the steps sort of does the right thing. So now when we run this, what's going to happen is we're going to run XSLT and produce the glossary, then we're going to do the Xinclude, and we'll know that we get the right output out. So that was straightforward-ish. Generating these ancillary files, the implementations, we want to be able to publish a library that describes the inputs and outputs, the signatures of the steps that are described by the standard. I don't ever want to manage things like that by hand, so that pipeline library is generated from the source of the spec. So if we change the name of an option or we change the inputs and outputs to a step in the spec, then the next time I generate the spec, I get the right pipeline library out. 
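As an aside before the ancillary files are dealt with, here is a sketch of the library shape being described. Step types, schema and stylesheet names are placeholders paraphrased from the talk; the point is only that the formatting step takes its document on a port rather than loading a fixed file, and that the tidy step bundles the store-then-tidy dance behind one reusable declaration.

<p:library xmlns:p="http://www.w3.org/ns/xproc"
           xmlns:c="http://www.w3.org/ns/xproc-step"
           xmlns:ex="http://example.org/ns/spec-build"
           version="1.0">

  <!-- XInclude, validate, transform: reusable for any spec passed in. -->
  <p:declare-step type="ex:format-spec">
    <p:input port="source"/>
    <p:output port="result"/>

    <p:xinclude/>
    <p:validate-with-relax-ng>
      <p:input port="schema">
        <p:document href="xmlspec.rng"/>
      </p:input>
    </p:validate-with-relax-ng>
    <p:xslt>
      <p:input port="stylesheet">
        <p:document href="xmlspec.xsl"/>
      </p:input>
    </p:xslt>
  </p:declare-step>

  <!-- Store the formatted document, then run tidy over the stored file.
       The exec's args are computed from the store's output (the c:result
       holding the stored URI), which also forces the ordering. -->
  <p:declare-step type="ex:pl-tidy">
    <p:input port="source"/>
    <p:option name="href" required="true"/>

    <p:store name="store">
      <p:with-option name="href" select="$href"/>
    </p:store>
    <p:exec command="tidy" result-is-xml="false">
      <p:input port="source"><p:empty/></p:input>
      <p:with-option name="args" select="concat('-quiet -m ', /c:result)">
        <p:pipe step="store" port="result"/>
      </p:with-option>
    </p:exec>
    <p:sink/>
  </p:declare-step>
</p:library>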
So those are the ancillary files we're looking at, and it turns out that's, again, relatively straightforward. Where we had previously built the glossary and stored it out, we still do that, and then after we've done the validation, we run this other little pipeline here, which will make these ancillary files, and if we scroll down a little farther, we'll see the definition of that, which is entirely straightforward. It runs three XSLT processes in a row, one with different style sheets, and stores the results out. Here, I don't care about what order these things happen in. I don't care which XSLT runs first. I don't care which order they run in, so I haven't imposed any dependencies. The processor can do these in any order it wants, and if we look at actually the order it does them in, well, this resolution is, no, I'm not sure it's worth trying to do at this resolution, so I'll skip it. But if you actually run the pipeline and you look at the order in which the steps are executed, it seems that all the store steps could run at the end. I can't say off the top of my head why that's the choice that gets made, but it does. So now we've got the glossary built, we've got these ancillary files built, now that we've got the ancillary files, we have the framework, the ground, the ground is laid for building some of these other things that we need to build that read those files, so we're in good shape. The namespace documents are completely separate documents. They're a work product that the working group has to produce, but they don't actually come from the spec, they come from separate documents, so those are easy to do as well. I added another step type that makes the namespace documents, and if you go, here is the declaration of that step. The namespace documents, there are three namespaces, and so there have to be three namespace documents thinking in terms of modularity. They all basically are processed with the same style sheet, and they all have to be validated, so I wrote a little pipeline step called FormatNamespaceDocuments here, FormatNS, excuse me, and I passed to it the sources for each of the namespace documents, and then it passes them back, and I passed them to Tidy, and we've seen this before. It's the same pattern that we had for the spec itself. So what do we got to do? We've still got to make the examples, we've still got to make the schemas. We're going to require some jiggering, and as I was composing this presentation and thinking about the example I was going to use, I decided that building those parts was going to be a little bit tedious, and so I went off and played in the corner for a little while instead, because I'm easily distracted. One of the things about the pipeline process that's a little unsatisfying, as we've constructed it so far, is that every time you run it, it rebuilds everything. It rebuilds the document, it rebuilds the namespaces, it rebuilds the whole shebang, and make didn't do that. Make was smart. If I wanted to rebuild the namespace documents, or if I hadn't touched the namespace documents and I rebuilt the spec, it didn't build the parts that it didn't have to build. When the working group was formed to work on pipeline processing on Xproc, the inputs to the working group were several proposals that had come into W3C before, and some of them did this dependency tracking. Some of them actually had facilities for expressing these dependencies, as, for example, ANTS does. 
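Before following that dependency-tracking thread any further, here is a sketch of the "make the ancillary files" step described a little earlier: three transforms, three stores, and deliberately no ordering between them. Stylesheet and output names are invented for illustration.

<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0"
                xmlns:ex="http://example.org/ns/spec-build"
                type="ex:make-ancillary-files" name="main">
  <p:input port="source"/>

  <!-- Each branch reads the spec source directly; because nothing connects
       the branches, the processor is free to run them in any order. -->
  <p:xslt>
    <p:input port="source"><p:pipe step="main" port="source"/></p:input>
    <p:input port="stylesheet"><p:document href="make-library.xsl"/></p:input>
  </p:xslt>
  <p:store href="pipeline-library.xml"/>

  <p:xslt>
    <p:input port="source"><p:pipe step="main" port="source"/></p:input>
    <p:input port="stylesheet"><p:document href="make-steps.xsl"/></p:input>
  </p:xslt>
  <p:store href="steps.xml"/>

  <p:xslt>
    <p:input port="source"><p:pipe step="main" port="source"/></p:input>
    <p:input port="stylesheet"><p:document href="make-errors.xsl"/></p:input>
  </p:xslt>
  <p:store href="errors.xml"/>
</p:declare-step>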
The working group decided that that was more than the minimum necessary needed to declare victory, so we weren't going to go there. So you don't get that in Xproc. But if you think about what make does, make's not all that sophisticated. Make looks at the timestamps on the files that are related and decides on the basis of the timestamps what to do. Well, we've got Xpath expressions, and my implementation Xpath2 expressions, Xpath2 can compare dates and times. I can give it a list of file names. It sort of seems like if I could, you know, it seems like I should almost be able to build the make functionality in Xproc. And that looked like way more fun than dealing with the examples in the schema, so I went off into that for a while. And it actually turned out to be relatively straightforward. It required me to write one extension step. So, so far, the extension steps we've seen have been written in terms of pipelines. So you can write a pipeline, give it a name, stick it in the file, and reuse that. But implementations can also invent steps themselves that are built into the implementation, or implementations could even conceivably give end users the ability to write Java functions and incorporate them into the pipeline. So what did I need to make this make file processing work? I needed some way, I needed some sort of a step that would tell me the date and time stamp on the file. And that's, from that, I believe I could build everything else I needed. But I didn't have a way of doing that. There are no standard steps in Xproc that will give you that information for files. So, so I needed an extension step. And that's extension, this URI info extension, here's the declaration for it. It has, it has no inputs. It takes the name, it takes the URI of a document as its input. It takes some authentication credentials, in case you want to use it over HTTP. And what it produces on its output is a document that tells you things about the URI that you pointed at. So if you pointed at a file, here's the result of pointing my URI info step at langspec.xml. It tells me a bunch of things about the file, that it exists, that it's readable, it has a certain size, and when it was last touched, the bit of information we need to do this sort of explicit make file, kind of dependency checking. So that's good news. If you, just for completeness, if you point the URI info step at a URI, it does an HTTP head request. It hands you back the headers and extracts from the last modified header. It extracts out this state and time, so that it's in a consistent place. So we can, we now think maybe we have enough bare bones here to do this job. The last thing we need to tell the processor is what the target is and what the sources for that target are, and this is XML. We just invent a little XML vocabulary for this. So I'm not sure that I ought to have put this in this particular namespace, but never mind. So here we have this little dependency file. This dependence file, and it has a single target and some number of sources. Now we know how to get dates and times for files, and we know what files are necessary to build the target, so we ought to be able to do this. And we can, and it leads to a slightly more complex pipeline. In order to promote interoperability, an implementation can't evaluate an extension step, unless it has seen a declaration for the extension step. So the first thing we have to do is import the library that I've stuck my extension steps in so that we're able to refer to it. 
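The two document shapes involved might look like the following sketches. Every element, attribute, and value here is a guess at the shape described in the talk, not the real vocabulary: the first is the little dependency file (one target, several sources), the second the kind of answer a uri-info-style extension step might give for a local file.

<!-- A dependency description: rebuild the target if any source is newer. -->
<cx:depends xmlns:cx="http://example.org/ns/xproc-extensions">
  <cx:target>langspec.html</cx:target>
  <cx:source>langspec.xml</cx:source>
  <cx:source>xmlspec.xsl</cx:source>
  <cx:source>glossary.xml</cx:source>
</cx:depends>

<!-- Existence, readability, size, and the last-modified timestamp
     for the file the step was pointed at (values are illustrative). -->
<c:result xmlns:c="http://www.w3.org/ns/xproc-step"
          href="file:/home/build/langspec.xml"
          exists="true" readable="true"
          size="913227" last-modified="2008-05-14T09:14:31Z"/>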
And then we're going to have a step here called out of date, which takes as its input that depends document that I showed you, and returns true or false, depending on whether the document is, the target is out of date with respect to the sources that were listed for it. There's a little documentation there. So the very first thing we do is we look at the source document we were given, and we ask the question, does it actually have, as its document element, a CX dependence element? Because if it doesn't, we're screwed, it's not going to work. So if in fact you pass it some random bit of XML that isn't a CX dependence document, it throws an error, throws up its hands, and causes the process to abort. If you use this from another pipeline, you could use try catch, you could catch that exception, you could recover, but generally speaking, we just let that fall, we're going to let it fall over if you pass it the wrong input. We'll come back and explain the identity step later. However, if you do pass it, if you do pass it one of these, one of these depends steps, excuse me, depends documents, then we can use the URI info step to, that I described before, we can pass it as the target document, which we have to explicitly resolve, Michael and I were talking about this yesterday, you have to get the base URIs correct, and occasionally that's inconvenient, so we have to be very careful to get the base URI of the target correctly, but pretty standard stuff. So now we've got in our left hand the date and time of the target. Now we've got to consider, are any of the sources newer than that target? Okay, well, that's not hard to do. We have a, we have a for each statement, so that allows us to iterate over a series of things. In this case, I'm going to iterate over each of the CX source elements from that depends document. For each one of them, I'm going to use the URI info thing to get the date and time of that, and then I'm going to make a choice. I'm going to decide whether that particular source is newer than the target or not. To do that, I have a choose, I create a bunch of variables just for convenience in the expressions later. I extract out whether or not the target exists in its date and time, excuse me, whether or not the source exists in its date and time, excuse me, and then I look to make some choices. So, if the source doesn't exist, we're screwed. You can't build the target if you need sources, if the source doesn't exist, we don't, this isn't make, I don't have the ability to sort of look at some other rules to figure out how to build the source. I didn't go that far. So, if the source doesn't exist, then we're going to abort, and here's an idiom that you see fairly frequently in pipelines. I want to construct an output message. I want to make some output to return to the user or to return to the process. And unfortunately, Xproc doesn't make that as easy as you might like. And so, you play this funny trick where you construct a little mini document. Here I've got this inline document called message. And you pass it to string replace. String replace basically does string replacement. So, we say, look, here's this little mini document. Find all the CX target elements in this little mini document. There's, in fact, exactly one. And replace it with this little bit of information that we extracted from that URI. So, that replaces the CX target element in my initial message with the URI of the target. And then we do the same thing. 
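A sketch of that idiom, under the same caveats as before: the message element names are invented, and $source-uri is assumed to have been computed earlier in the for-each. Build an inline skeleton, splice a computed value in with p:string-replace (the pipeline being described does this twice, once for the target and once for the source), and hand the result to p:error.

<p:group xmlns:p="http://www.w3.org/ns/xproc"
         xmlns:c="http://www.w3.org/ns/xproc-step"
         xmlns:cx="http://example.org/ns/xproc-extensions">

  <!-- Replace the empty cx:source element in the inline skeleton with the
       string value of $source-uri.  Note the awkward quoting: the 'replace'
       option is itself an XPath expression. -->
  <p:string-replace name="message" match="cx:source">
    <p:input port="source">
      <p:inline>
        <c:message>Cannot build the target: missing source <cx:source/></c:message>
      </p:inline>
    </p:input>
    <p:with-option name="replace"
                   select="concat('&quot;', $source-uri, '&quot;')"/>
  </p:string-replace>

  <!-- Abort the pipeline, using the constructed message as the error body. -->
  <p:error code="cx:missing-source">
    <p:input port="source">
      <p:pipe step="message" port="result"/>
    </p:input>
  </p:error>
</p:group>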
We do another string replace, and we replace the, up here, we replace the CX source element with the name of the source file, and we pass all of that to error, and the pipeline will abort if you've asked it to build something and it doesn't have a source. Assuming that you do have the source, which is hopefully the usual case, we say, look, if the target exists and the target's date and time is larger than, is more recent than the source's date and time, then, in fact, no, this document, this target is not out of date with respect to the source file that we've looked at. So, no, we're good to go. If that isn't true, if either the target doesn't exist or if the target's date and time is older than this particular source file, then we use the string replace trick again to construct a result element that says, the answer's true, yes, this target is out of date. So now we've had the target on our left hand, and we've iterated over the sources, and we've figured out for each one of them whether or not it's out of date. We then pass that on to another choose, and if, in fact, any of the sources were out, any of the, if the target was out of date with respect to any one of the sources, then we just returned one of them to the source file. So, we've got the target out of date, and if we return one of the sources, then we just return one of them. So, we return the answer true with one of the sources, not all of them. If that isn't the case, then we return false, and that's our pipeline. Any questions? Yeah, okay. All right, well, live and learn. I don't, I don't think it would be that, you know, you could easily, you could easily construct an XML document which listed all the sources and all the various dependencies, and then you could write a more complicated, interesting pipeline that would do more of the work. I just didn't think of it until that very moment. All right, so, so now very quickly, we're going to take a look at the pipeline we started with, and we're going to modify it so that it does some of this dependency checking. First of all, we're going to store this little dependence document in the pipeline so that the pipeline knows what to do, and down here, instead of always rebuilding the spec, we're going to run this little out-of-date step, and then we're going to ask, look, if the result was true, if it turns out that, that in fact, yes, the target was out of date with respect to one of its sources, then we run format spec and tidy it and make the ancillary files and do all the work we need to do. Otherwise, we just output the message that says, eh, it was done, we didn't need to do that work. And, yeah. What's the role of the C namespace? Oh, there are three namespaces defined by Xproc. There's the namespace we typically refer to as P, that's the pipeline namespace, that's the namespace you put all of the pipeline steps in, well, the ones the standard steps are in. The C namespace is sort of the standard default for results returned by steps. So, if a step has to return an answer, that isn't, that isn't some transformation on the input document, we just stick them all in the C namespace so that, like, HTTP request returns a C result that tells you about the HTTP request, store returns a C result that tells you the URI that it actually stuck the document in. So, so, C, it was just convenient to reuse C here because you're already likely to have that namespace in your pipeline anyway, so. And the spec says explicitly you're allowed to use that for results you want to return. 
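Concretely, returning such an answer from a step you write can be as small as an identity step over an inline c:result document (c being the standard XProc step-vocabulary namespace); the discussion of that convention continues below.

<p:identity xmlns:p="http://www.w3.org/ns/xproc"
            xmlns:c="http://www.w3.org/ns/xproc-step">
  <p:input port="source">
    <p:inline>
      <c:result>false</c:result>
    </p:inline>
  </p:input>
</p:identity>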
Any step you write, you can use the C namespace to return its results, so, so that worked out easily. Yeah, result is the document element, this document which consists of the single word true or false in the case of CX out of day. In this example, yeah. All right. Right, so, so now, so now we either, let's see if I actually run, which one was that? That was, oh well. I didn't put the ID in there, so I don't know what the name of that one is, so we'll just move on. If I ran it, it would say, you know, first time you ran it, it would rebuild it, and the second time you ran it, it would say it was not out of date, so. All right, back to work. We've had some fun, let's get back to work. Two more things we have to do, we have to make these examples, and then we have to make the schemas. Well, making the examples turns out to be relatively straightforward, just like we had a pipeline that made the spec, we can have a pipeline in the in the examples directory that has some steps in it that build the examples. I won't go through the process of explaining, I think that the logic is basically the same. You either, you know, if they're out of date, then you rebuild them, if they're not, you don't. So there's the pipeline that builds these examples. Building the schemas was a little more interesting because it was a case where we, excuse me? Yes, this pipeline doesn't, but all the examples used in the spec are stored in files that are complete pipelines so we can validate that they exist and we can run them. I could certainly add that, yeah, I just, you know, I'm trying to keep the examples small. That would, that would, running the examples in the pipeline would require building a framework for knowing what the right answer was and such, and you could do that, but I haven't, I mean, not, I haven't done it in the pipeline anyway. The interesting thing about the schemas example is that when Henry, when Henry actually did the schema construction, because he's not using an XML tool, because he's using a make file, the straightforward thing to do was to construct the XSD file by concatenating three textual documents together, the preamble and then the bit in the middle and the postamble, or the, whatever you call the thing at the end, and only the thing in the middle is actually constructed from the spec. So, so Xproc XSD2 fragment is constructed by the make file and then we construct the XSD file by concatenating these things together. Well, that turns out not to work in Xproc. The one constrained Xproc imposes is, or one of the constraints it imposes, is that the things that flow from step to step to step have to be full XML documents. And so you can't pass from one step to another a fragment of XML that ends in the middle, it just doesn't work. So, in order to make the examples work, I needed to reconsider how to do that. And it turns out that with a tool like Xproc, this is relatively straightforward. I'm not going to belabor the point, but instead of, where is it? Looking for make XSD, make DTD, make XSD, there it is. We check to see if it's out of date, if it's not out of date. Then we... Right, so, so in refactoring this to put it into a pipeline, I said, look, what Henry is actually trying to do here is he's trying to stick this generated bit into the middle. And the way you would do this in XML terms is you'd have the whole document and you'd stick an X include in where you wanted the generated bit to go. And so, and that's in fact what I did here. 
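Sketched concretely, with placeholder file names, the hand-written part becomes one complete document with an XInclude where the generated declarations belong:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xi="http://www.w3.org/2001/XInclude"
           targetNamespace="http://www.w3.org/ns/xproc"
           elementFormDefault="qualified">

  <!-- hand-written preamble declarations go here -->

  <!-- the fragment generated from the spec is pulled in by the pipeline's
       p:xinclude step, so every intermediate document stays well-formed -->
  <xi:include href="xproc-generated.xsd.xml"/>

  <!-- hand-written trailing declarations go here -->
</xs:schema>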
So, instead of having the XSD in three parts that I concatenate together, I have a single document with an X include in it. I construct the bit in the middle that I need. I run the X include and... I thought there was a step there that fiddled with some of the tag names, but I guess not, so. So, anyway, that's relatively straightforward. I'm almost done, I promise. You...now we just put it all together. So, I have these two other pipelines. I have a pipeline in the examples directory that builds the examples, and I have a pipeline in the schemas directory that builds the schemas. Now I want to be able to use those steps in my main pipeline, and so I import those, and then in the...in here... I've suddenly gone blind. Somewhere in here. Oh, there it is. In roughly the same place in this pipeline as it was done in the make file because I didn't want to think about why it was done, where it was done, I stick in build examples. That goes off and runs that step from the example pipeline and builds the examples. Somewhere else in here there must be one that calls the build schemas. I won't spend a lot of time looking for it. So, there you are. We're done. Except when I thought about this, I thought, well, make...you know, this recursive call to make isn't really like an import. The make file doesn't actually load this other make file and then use parts of it. It actually just spawns it off and runs an arbitrary make file. So, what if you wanted to do it that way? What if you didn't want to build these pipelines so that they depended on each other and so they could import each other? There's no compelling reason why you wouldn't want to do it. But what if you did? What would you do instead? Well, if you knew the name of the pipeline and you knew how to run the pipeline implementation, you could do it exactly the way make does it. You could say just exact. You'll often run this thing and there you go. But that requires knowing...that requires that the implementation has implemented the exact step and that you the user have the appropriate security to run the exact step and that you know how to run the implementation which you don't necessarily know from the pipeline, all very messy. It's really more like eval, which is a step we don't have. There is no standard step in the Xproc library for evaluating, for constructing an arbitrary pipeline and running it. And the main reason why that's the case really is that it... because you don't know the names of the input ports and the names of the output ports and because one of the other limitations of Xproc is that all the steps have to have a known fixed number of inputs and outputs. It wasn't clear how to do the eval step, so we didn't do the eval step. But I have to say being an implementer...this is the first time I've done lots and lots of specs and lots of working groups. This is the first time I've actually been implementing the spec and it's a wonderfully liberating feeling. I thought, you know, I could just make this up. Yeah, the working group didn't feel like doing it, but that doesn't have to stop me from doing it. And so that's what I did. Actually, in reality, I did the eval thing first because that was the first way I thought of doing it. Five minutes. Okay, I'm almost done. So, you know, in order to deal with the problem that you don't know the names of the inputs and output ports, the way that my eval step works is you basically put a wrap around all the documents. And each...the wrapper says what port it's supposed to go on. 
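For instance, the wrapped inputs handed to such an eval-like step might look like the sketch below; the wrapper vocabulary is entirely hypothetical, invented here only to illustrate the idea of naming the port each document should be delivered to.

<cx:document-set xmlns:cx="http://example.org/ns/xproc-extensions">
  <cx:document port="source">
    <!-- the document to appear on the evaluated pipeline's 'source' port -->
    <spec xmlns="">...</spec>
  </cx:document>
  <cx:document port="schemas">
    <!-- a second document, destined for a port named 'schemas' -->
    <grammar xmlns="http://relaxng.org/ns/structure/1.0">...</grammar>
  </cx:document>
</cx:document-set>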
And then it's...and then the step itself sort of takes the wrappers off and puts the documents into the right ports and throws an error if the ports don't exist, etc. And if you use deval instead of using... Instead of doing it the other way, you wouldn't have to import the example and schema steps. And instead of calling them, you would just use an eval step and say, go run that pipeline over there. That's kind of cool. So what have we seen? We've got the Xproc step...the Xproc specification, which is almost done. It's true that you can do a lot of this pipelining stuff with Ant, with Make, with other technologies. I'm hoping that what we get from Xproc being a recommendation is, you know, all interoperable implementations on different platforms. Getting somebody with a Windows machine to run Ant is a bit tedious. And if someday in the future,.NET shipped with an Xproc implementation, then you just hand them the pipeline. You wouldn't have to go...you wouldn't have to explain to them how to install Java to get Ant running. So hopefully we'll get that. It attempts to fit into the XML processing model. It uses XML documents, it loads and stores them, etc. And it was designed to be relatively simple. I know we went through a lot and I hope it wasn't too confusing. But, you know, relatively simple with sufficient power. There are a few things that, you know, are not exactly right in V1. Because...and I didn't actually point all these out as we were going along. Because, you know, you can't do everything in V1 and you're never finished. So it does...you're not guaranteed to be streamable. There's an interesting open question about how streamable the steps should be. There's that business of using string replace to construct these literal result elements. What XSLT would call literal result elements, that's a bit tedious. And then occasionally you have to get the bindings right. You have to explicitly put bindings in places where you wouldn't necessarily have thought you had to. Because the way Xproc is defined, all the Xpath expressions have to have a context. If you don't have a context, they fall over. So sometimes you have to explicitly give an empty context where you have no intention of referring to the context. So that's a little bit tedious, but there you go. And I must be done. Since there apparently aren't any more slides. Okay, thank you very much. So... We have a different one. We have quite a lot of time for questions. Not too much time. I took more than 20 minutes. I was afraid I was going to take about 20 minutes. I would like to check this slide. There are many, many web apps programs which are already there. And then there are a few things to slide into the Xproc program. So the question boils down to what do you do if you've got a whole bunch of existing processes and you're thinking maybe you want to use Xproc and you want to plug them into an implementation. So the spec doesn't tell you anything about how implementations provide APIs for programmers to do this. So the way I do it and the way Boycheck does it and the way somebody else does it might be different. But my implementation is based on top of Saxon and uses the Saxon XTM framework. So what you have to do to your Java program to integrate it is basically write the few lines of Java code, basically extend one of my classes and arrange it so that the things you want to be inputs get loaded up and turned into Saxon XTM trees of some sort. 
And then do whatever the process is and then whatever the output of that process is, you've got to arrange for that to be turned back into XML so you can pass it back out again. It's not terribly difficult but you do need to arrange it so that it's XML and XML out. Yeah, I think if the processes that you are running in your organization are inherently pipelined, then it makes perfect sense to put little wrappers around the components that aren't currently piped and use Xproc, absolutely. Oh wait, I mean it's not a problem in the sense that it's not doable. It's just tedious to have to write the XSLT wrapper to put the literal result in or construct the document, pass it back from an identity step. There's a handful of different ways to do it. The working group decided that we were not going to take the approach that random stuff from other namespaces was somehow just literal results. It wasn't clear how to make that work and so we said, no, we're not going to do that. You have to arrange for it to come out of a pipe somehow and it's just a little bit of, you know, maybe in the future we'll be able to improve that. Let me go ahead. Oh, okay. So Mohammed's asking why did I use, why did I make the dependency checking something explicit in the pipeline? Why didn't I just make my implementation know how to do that? Well, mostly because I, you know, I can imagine another implementer implementing the CXURI infostep. At which point my pipeline is now interoperable. You can use your implementation or my implementation. So there's some value I think in keeping the amount of implementation specifics up as small as possible so that, you know, you can encourage other implementers to do it. So that was the one reason. And I was talking to somebody, I don't remember who, I'm sorry, at dinner last night about the idea of using the security manager to tell what files were touched and therefore, you know, figuring out absolutely, completely really which files are necessary and then you could use that, you know, you could extract that information and then you could use it next time. There's all sorts of cool stuff you could do. I just, I wanted an example that I thought I could do in an hour. You just mentioned about having to wrap a little bit of a development, but Saxon supports simplified style sheets. So wouldn't a style sheet with a non-XSLT document element sound as well? Yeah, you'd still have to have a PXSLT and then an explicit input port for that and then you'd have to put the body inside there. You still want, I mean, I don't want to make it sound like this is a glaring, gaping, horrible deficiency of X-Proc. It's just a little bit inconvenient. It's not, in fact, difficult. It's just tedious. So we have time for one last question. Yeah, you could do that. I mean, there's, yeah, you could do that. I didn't think of doing it that way, but you probably could make that work. Thanks for the thank you again. You're welcome. Thank you.
This presentation will explore the current state of XProc: An XML Pipeline Language through a combination of slides and live demos. Particular attention will be paid to demonstrating pipelines that are, or could be, useful to solve real world problems.
10.5446/31141 (DOI)
Today, I'd like to demonstrate two things. First, validation of Atom extensions is a mess. Second, although RELAX NG is the schema language for Atom feeds, NVDL beats RELAX NG at validating Atom extensions. First, what is Atom? Okay, this is a very famous blog written by somebody in this room — Alex. Alex is there. This is probably the only reliable source of information about OOXML at ISO. The reason I'd like to show this is that it uses Atom to represent its feeds. I'll show you. This is the Atom feed: XML declaration, feed element, id, title, updated, link, and so on; the blog title, author, generator. These things belong to this namespace, which is Atom. Here we see something different: blogChannel, DC. They belong to different namespaces, and there appear to be more extensions — for example, geo; probably this is about his location. These things are not standardized in the original Atom RFC. These are extension elements. There are quite a few extensions of Atom. Some of them are IETF RFCs, others are just proprietary. There are just so many extensions in the world. In the case of Alex's blog, about six extensions were used together. Google has its own extensions, Microsoft has its own extensions, Yahoo has its own extensions — there are just so many. And such extensions are typically represented by foreign elements or attributes. This is not specific to Atom: other formats, such as ODF version 1.1 and OOXML, also use foreign elements and attributes. It is just a common practice. I should show you my Google Calendar. Before that, I should show you the schema. This is the RELAX NG schema contained in the original Atom RFC. I believe it was written by Noam. It is written in a schema language called RELAX NG. There are no other schemas for the Atom format; the RELAX NG schema is the only one. Well, it says the root element is an Atom feed or an Atom entry. In fact, let's look at Alex's blog feed: the root element is a feed element, and this structure is controlled by this RELAX NG schema. A more interesting example is Google Calendar. I use Google Calendar all the time, and I'm quite sure that some of you use Google Calendar as well. They expose the data as an Atom feed. This represents my schedule — in fact, this shows my name in Japanese characters. And Google uses quite a few extensions: Google Calendar, GData, OpenSearch. So although this looks like an Atom feed, it actually represents my schedule information. Here you see Google Calendar elements. Okay. And we have observed that more than one extension is used together: Google uses at least two, and Alex's blog uses more than five. I can imagine that this trend will become more apparent; people will use many extensions together. So, okay, this is nice: we use extensions, we use Atom, which is an XML document, and we use RELAX NG for validation. This is nice. But what about validation of Atom extensions? First, the original schema, atom.rnc in the original Atom RFC, merely skips all extension elements and attributes — of course, because that schema doesn't know anything about extensions. So far, so good. Some Atom extension RFCs provide RELAX NG schema fragments. However, such fragments are not invoked by the original atom.rnc. What does this mean? It means that these schema fragments are never used; they are only documentation. So Atom extensions are not validated by these schema fragments — they are simply skipped.
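As a concrete illustration, here is the kind of thing that goes unvalidated: an Atom entry carrying an element from a foreign namespace. The extension namespace and element below are invented for illustration; only the Atom parts follow the Atom RFC.

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:ext="http://example.org/ns/calendar-extension">
  <id>urn:uuid:6c0ff702-9a4d-4272-8f5b-0f0cdd5d84c2</id>
  <title>Team meeting</title>
  <updated>2008-08-01T09:00:00Z</updated>
  <!-- a foreign-namespace extension element: atom.rnc simply skips it -->
  <ext:when start="2008-08-04T10:00:00Z" end="2008-08-04T11:00:00Z"/>
  <content type="text">Weekly status meeting.</content>
</entry>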
Some companies don't bother to write schema fragments — only prose. So again, extensions are not validated. And I've seen that extensions actually have some errors which should be detected by validators. However, one can argue that Atom is still much better than the other formats. For example, ODF and OOXML both say they allow foreign elements and attributes, but their schemas don't allow foreign elements or attributes. So if you incorporate such foreign elements and attributes, you'll get validation errors. Okay. But Google is much better. When I wrote the original version of my article two years ago, version one was not so great, but version two is great: they developed their own schemas. I'll show you. This is the Google Calendar Data API document, and it contains three schemas. One is a rewrite of the original atom.rnc so that the Google extension elements, such as gCal and gd, are allowed. They also allow OpenSearch, I believe. For example, an Atom calendar feed can contain OpenSearch itemsPerPage and startIndex. How do they allow Google Calendar information? Yes — here's the Google Calendar selected property. This is a Google Calendar element, and it appears here. So they do rewrite atom.rnc so that their extension elements are allowed and validated. Actually, these three schemas provided by Google are quite nice: they impose very tight restrictions on which combinations are allowed, and sometimes they even disallow things that were allowed in the original atom.rnc because they are useless for them. Okay, so there are quite a few advantages. First, this is a great RELAX NG schema and it imposes tight restrictions. And now, as well as the top-level Atom feed things, Atom extension elements are also validated. This is quite nice — way better than the other extensions. However, there are some disadvantages. How can we allow more? Do we have to rewrite the Google schema again? It's not easy. If we want to allow, say, three more extensions, it becomes prohibitively difficult. And I have tried, actually. I am, of course, a RELAX NG hacker; I know a lot about RELAX NG customization techniques, and I know all the minor details of RELAX NG customization very well. I can write a RELAX NG schema that combines two or three extensions, but four is just difficult. Okay. So here comes NVDL. It is an ISO standard already: the Namespace-based Validation Dispatching Language. There are two key ideas in NVDL. First, we create a schema by combining sub-schemas, where each sub-schema is concerned with only one or a few namespaces, and different sub-schemas may be written in different schema languages. Simply put, an NVDL script is a collection of namespace–schema pairs: depending on the namespace, you choose a different schema, and maybe a different schema language. Key idea number two: divide and validate. Given a non-monolithic XML document containing many namespaces, you divide it into pieces along namespace boundaries and validate each of the validation candidates against some sub-schema. Okay, I'll demonstrate the processing model. The input is an NVDL script and an XML document which contains more than one namespace. The NVDL dispatcher creates some validation candidates, finds a schema for each of the validation candidates, and then invokes a validator for each of them. Okay, I'm going to show this XML document as an example. These two colors represent two namespaces.
So in this document, we have two namespaces. Everything that appears in red is a validation candidate and will be validated against some schema — in the case of an Atom feed, that is atom.rnc, and these elements appear in the original Atom namespace. Okay, these other things are proprietary or standardized extensions: they have different namespaces, there are different schemas for them, and we can find such schemas from the namespace. We validate these small validation candidates independently. One way to implement NVDL is to create a dispatcher of SAX events: the NVDL engine receives a sequence of SAX events and dispatches these events to different validators. But it is certainly possible to use DOM for implementing NVDL, and I believe it should be possible to implement NVDL on top of a pull parser as well. Okay, who uses NVDL? OOXML Part 3, Markup Compatibility and Extensibility, uses NVDL. And ODF version 1.2 may use NVDL — this is not decided as far as I know, but there is a proposal. And I believe ILCA is using this for WFCE. And SVG uses this. And I recently found that the Open Publication Structure — this is about e-books — uses NVDL. There are quite a few NVDL implementations already; most of them are written in Java, but one is written in C#. Okay, given Atom, what do we do with NVDL? First, we don't have to change atom.rnc at all — we use it as is. Given an Atom feed document, we first remove all extension elements and then validate the document against the original atom.rnc. The extension elements are validated against different schemas. I'll show you the NVDL script. How many more minutes do I have? Twenty. Okay. This is the NVDL script. Since I have a lot of time, I'm going to explain it in detail. First, we begin with the mode root. Suppose we encounter an element of the Atom namespace: then we validate these elements against the RELAX NG schema generated from atom.rnc. When we encounter foreign elements which appear as children of feed elements, we do something different. When we encounter OpenSearch elements, we use opensearch11.rng for validating those OpenSearch elements. And when we encounter the Google namespace, we use gd.rng. But within these Google elements, we might have an entryLink element, which in turn contains elements in the original Atom namespace — the Google extensions allow embedded Atom entries. Such embedded Atom entries are validated against another RELAX NG schema, and they appear in the same original namespace. I'm not going to explain this part, but I am going to explain this: this is about Google Calendar, and extension elements of this namespace are validated against this RELAX NG schema. It becomes a bit lengthy, but it is quite easy to understand. Well, I didn't explain the details of the NVDL specification, but it allows you to do different things depending on the context, because sometimes you want to allow extension elements only as children of, for example, feed elements. Details of these solutions are provided in the paper. The paper is written in a step-by-step style, so it shouldn't be difficult to learn this technique — the first step is extremely simple. Okay, let's go back to the slides. Okay, advantages and disadvantages. First, it is not difficult to add more extensions. If we just rely on RELAX NG, adding four extensions is prohibitively difficult, but in the case of NVDL it is doable.
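For reference, an NVDL script of roughly the shape walked through above looks like the sketch below. The schema file names are placeholders, and only two extension namespaces (OpenSearch and Google's GData) are shown; a real script for Alex's blog or Google Calendar would list more namespace–schema pairs, and would also handle embedded Atom entries inside the Google elements.

<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0"
       startMode="root">

  <mode name="root">
    <!-- The Atom namespace itself: validate with the unmodified Atom schema,
         and handle anything foreign found inside it in the 'in-atom' mode. -->
    <namespace ns="http://www.w3.org/2005/Atom">
      <validate schema="atom.rng" useMode="in-atom"/>
    </namespace>
    <anyNamespace>
      <reject/>
    </anyNamespace>
  </mode>

  <mode name="in-atom">
    <namespace ns="http://a9.com/-/spec/opensearch/1.1/">
      <validate schema="opensearch11.rng"/>
    </namespace>
    <namespace ns="http://schemas.google.com/g/2005">
      <validate schema="gd.rng"/>
    </namespace>
    <!-- Any other foreign namespace: strip it before the Atom validation. -->
    <anyNamespace>
      <unwrap/>
    </anyNamespace>
  </mode>
</rules>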
And even when atom.rnc evolves, we don't have to change our NVDL script — it's just a thin layer on top of RELAX NG validation. And now Atom extensions are not only skipped but also validated against extension schemas. But there are some disadvantages. The biggest disadvantage is that validation becomes somewhat loose, because this approach can't capture mutual interactions between different extensions. If you want to capture such mutual interactions, you have to rely on RELAX NG in the case of Atom. So there is a trade-off. When you want to impose tight restrictions on mutual interactions, you have to rely on RELAX NG; but when you are ready to give up such tight restrictions, you can rely on NVDL and don't have to be annoyed by the minor details of RELAX NG. Oops, the last line is a mistake, sorry. I am already at the last slide. How many more minutes do I have? Fifteen? So many. Sorry. I'd like to welcome a lot of comments and questions. Before that, I should probably explain the Google schemas a bit more. Google has quite a few schemas: they provide three schemas for Google Calendar alone, and Google has lots of extensions — the Google Calendar API is just one of them. They have many more, I believe; they have so many APIs, the Finance Data API for example, and I believe that one also provides a rewrite of atom.rnc. I'm not sure I can find it right now, but when I checked last time I found quite a few schemas. They really do create RELAX NG schemas for their extensions; I believe they have done very good work. However, I'm just not convinced that this scales. I published the first version of my paper two years ago. It documents some validation problems, and such problems were revealed by my NVDL script. I pointed out that problem in my paper, and it appears that they later fixed the bug and created these schemas. So I do believe that validation is important, and other organizations which propose extensions of Atom should also care about validation. Do you want to say something about that? No? You're not involved in it anymore, I see. Okay, I'd like to have a lot of questions and comments. [Exchange with the session chair and audience, partly unintelligible.] Okay, let me point out one issue with atom.rnc. Depending on attribute values, it allows different content models, and this was not possible in XSD version 1.0; in version 1.1 it is possible, depending on attribute values. Well, depending on attribute values, we want to allow different content models, and atom.rnc does that. [Response from the audience.] Okay, then the next issue is wildcards. Wildcards are a headache even for RELAX NG hackers, and they are a bigger headache for XSD hackers because of the UPA constraint. Will it be relaxed? [Response from the audience about how XSD 1.1 handles wildcards, partly unintelligible.] Hmm. Well, does this mean the UPA constraint is just abandoned totally? I don't know. [Further exchange partly unintelligible.] RELAX NG allows ambiguous content models, of course.
But still, when we try to use wildcards together with specific content models, we sometimes have very harmful ambiguity. In other words, when we try to validate extensions, we sometimes allow wildcards to skip them. So we have to be very careful. And I believe that XSD version 1.1 will have that problem — schema authors have to be careful, right? [Response from the audience.] Something I wanted to clear up: does the content model still control where the format allows extensions — is that something NVDL handles? Okay. There are three options in NVDL. A: foreign elements and attributes are just removed. B: you remove foreign elements, but in place of such foreign elements you introduce placeholder elements, so your component schema has to allow such placeholder elements. C: you don't remove foreign elements and attributes; you might want to validate such foreign elements and attributes against different schemas, but the main document continues to contain the foreign elements and attributes, and your schema should have wildcards so that these things are allowed. So there are three options. The first option, in which foreign things are just removed, is easy; however, you lose some control, because you cannot specify in your schema where extension elements are allowed. The other options are more difficult, but they allow tighter control. But even when you rely on the first option, you have a context element, so you can say: okay, we allow extension elements only as children of this element — in the case of Atom, feed and entry elements. Is there any other question? I'm sorry about that. Well, there are quite a few tutorial documents right now. The JNVDL manual — who is the author of JNVDL? Is he here in this room? No? Oh. [Remark from the audience, partly unintelligible.] You've provided a tutorial as part of your documentation. Okay. I should show you the URL of that tutorial. There is a website, nvdl.org, which is maintained by Ken Holman, who is also here in this room. And this tutorial is a zip file containing a PowerPoint document as well as lots of examples. So this is one of the NVDL tutorials, and there are quite a few others: the JNVDL manual has a tutorial, and... well, my memory is not so great — how many tutorials do we have? One, two... I believe there are four now. Let me check. I should provide more links in this section, tutorials for NVDL. There are at least three. Yuka, do you want to say something about JNVDL? Oh. Okay. [Question from the audience about error reporting and line numbers.] Yes — that part is hard. If you really create validation candidates as separate XML documents, they will certainly have different line numbers. Sorry. Alex, will we discuss this in WG1? There is one commercial thing which uses NVDL now: e-books, in what used to be called Open eBook. This is serious — they do use NVDL. I'm searching for it... Open Publication Structure. Open eBook has three parts: OPS, OPF, OCF. They use RELAX NG, and this one uses NVDL. Sorry — Open Publication Structure, OPS, version 2: relationship to NVDL. This one uses NVDL. Yes, this is the schema. They use NVDL for embedding XHTML and SVG without making the schema complicated. Probably this is the only commercial use of NVDL. It is not by one company; it is used for real business. I don't know how successful this spec is, but it was built about one or two years ago.
But only recently have we started to use non-monolithic documents very heavily. In the case of Atom, yes. In the case of ODF and OOXML, probably we will do so, but it is not very common yet. So I believe that the prime time of NVDL is still to come. No other schema language can compete with NVDL here; it is a dedicated solution for this problem. We have time for one more question. Is there anyone else? Thank you very much. Thank you.
Reuse is often the key selling point for XML authoring systems. This presentation examines reuse from various points of view, from the author’s to the developer’s, offering practical strategies for reuse of content. Markup design is discussed, as are necessary prerequisites for making such a system work. What should be reused, and when? How do you uniquely identify a resource, and what, exactly, is a resource anyway? How do you design a user interface that helps the author instead of hindering her? From a practical point of view, how do you design a publishing process that works?
10.5446/31143 (DOI)
[The opening of this talk is unintelligible in the recording.] We are a very small company based in Cambridge in the UK. We develop applications for processing XML, mainly schema-related work such as validation of XML documents. We use Java because we are used to it, our clients like it, and it runs reasonably well and is well accepted. Most of the documents we deal with are for publishers. [A stretch of the recording is unintelligible; it discusses the cost of processing large XML documents, including loading a 60 megabyte document into DOM, which took around 231 megabytes of memory.] Of course, it hardly takes any memory at all and is fairly quick; some of that will be Java start-up overhead. The details for these tests are in the printed version of this paper.
By the way, it struck me after I had done this and was looking at the slide this morning that the reason why this might be quite good for DOM, much better than when I last ran the test in 2006, was that it switched to a lazy implementation by default when you build a DOM in Java. Is that the case? The challenge is, can we come and improve on these figures? Is there a particular cause of the problem here? In fact, is there only one single cause or is there a variety of causes? We're faced with a classic situation in which there's a speed memory trade-off. If we make it much faster, we'll be having to use much more memory. If we save lots of memory, we'll be forced to make it much slower. Of course, even if we can solve the problem in theory, can we still use a familiar API? It's all very well solving the problem from an academic point of view and having something that sits there using not very much memory, but it's not very much use if we can't use the libraries we're familiar with to do real-world operations on that content. Taking a slightly contrary view about trade-offs, I'd like this quote from Jon Bentley in his book, Programming Pearls. He says it's been his experience that reducing a program's space requirements also reduces its runtime. It's often the case, I think, that if we do take care to use memory resources sensibly, we can sometimes reduce the execution time, and the traditional trade-off laws do not apply. Some observations. This very simple Java program creates a million strings, very small strings. Each of the strings is the text representation of that number. The number of digits stored is 5.89 million. If we take a byte for each, we can assume about six megabytes. More realistically, if we take two bytes for each, we can say about 12 megabytes will be the naive space required for this operation. In fact, if we run that, we find it takes 50 megabytes to create that content. I think we can reckon generally, as a very rough rule of thumb, that every Java string we create is going to cost us about 40 bytes of data. Of course, we have some overhead for creation and destruction of Java objects. Any typical naive implementation of a Java in-memory model is going to be costly right away if we're using objects all the time. However, I've observed that if we use bytes, a million bytes costs exactly one million bytes. If we want to implement DOM, we have a slight problem in that DOM, by its very nature, commits us to an object-heavy implementation. If you look at the Node class itself, it declares 17 methods that return objects. As soon as we're using DOM, we're buying into a whole object way of thinking. Generally, of course, any tree-based implementation that uses objects is going to commit us to an object-heavy way of thinking. It's very difficult to use standard APIs, which expect objects all over the place, if we want to move away from using objects. The premise, I think, is that we should beware objects and befriend bytes, which might seem a retrograde step, falling back on an earlier, primitive way of using Java, where we're manipulating bytes rather than using high-level objects all the time. Java, perhaps, not as we generally know it. How does this work in practice? If you consider this very simple XML document, one way of thinking of it is as a stream of events. If you use SAX, of course, you'll be familiar with that kind of concept.
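As a rough illustration of that string-cost observation, here is a minimal sketch of the kind of measurement described. This is not the speaker's benchmark code, and the exact figures will depend on the JVM and how it represents strings internally.

```java
// Creates a million small strings and compares the characters stored with the
// heap growth actually observed. Illustrative only.
public class StringCost {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        String[] strings = new String[1_000_000];
        long chars = 0;
        for (int i = 0; i < strings.length; i++) {
            strings[i] = Integer.toString(i);   // "0", "1", ... "999999"
            chars += strings[i].length();
        }

        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("characters stored : " + chars);                       // ~5.89 million
        System.out.println("approx heap used  : " + (after - before) / (1024 * 1024) + " MB");
    }
}
```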
But stepping away from SAX a little bit, let's just think what this document's stream of events might look like. First of all, obviously, we have to start with a document. Then we have to start with an element in this case. And associated with that element that's started there is a string, which in this case is the name of the element. Then an attribute, which also has an associated string, which is the name of the attribute. Then an attribute value, which too has an associated string, and so on. We're building up a list of events, if you like, that represent how that document is constructed, or how it's parsed. Interspersed with the various events to do with XML structures, we have the strings, which represent the content. I said we're stepping away from SAX a little bit, and you'll notice this is not a SAX stream. It's not a SAX stream for two reasons. First of all, I'm suggesting it's something we're going to persist. We're actually going to store this in memory or on disk, whereas a SAX stream is some events that fire, and then we're free to do what we want with them. It's a series of transient events. It's also more finely grained than SAX. SAX gives us attribute sets: the attributes of an element come as a single collection of things, whereas in this model here, each of the attributes is a separate event. It's like a piano roll. By replaying the events here, we can rebuild the XML document exactly as we first received it. Another feature of this concept is that we have two kinds of events. The orange ones are structural phenomena, if you like. They're the events fired for the XML structure of the document, whereas the blue things are the content: the strings, the attribute values, the element content, and so on. Text content. We can think about representing the structural phenomena in the XML document using values, each of which takes a single byte. The actual numbers here are unimportant, but what you might notice is that the high bit is set for all of the values used for these events. You'll also notice that we can represent all the standard XML infoset items and still have plenty of space left in our value space for other events we might want to record later on. I'll come back to that. As well as the structural phenomena, of course, we have to store the string content, the attribute values, and the text content. They are, after all, perhaps the most important thing in your document. We're marking up that content. The sensible approach here would seem to be to use a dictionary and then to refer to the strings in the dictionary by index. XML documents always have, in this model anyway, at least one duplicate string: the start and end of the same element use the same name. Often XML documents have lots and lots of duplicate strings, so normalising that would be a sensible way of reducing memory use, assuming we weren't doing it already. That's the model. We have the structural phenomena in orange and the content phenomena in blue. We use indexes to refer into a dictionary or list of strings which we're storing elsewhere. You can see that string events are always delimited by structural events. They're always interspersed with each other. As we'll see in a bit, we never set the high bit for the string lookup values. We're going to use instead seven-bit numbers to represent lookup values so we can distinguish between the structural events and the string events in the content.
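The talk does not give the actual byte values ("the actual numbers here are unimportant"), so the scheme can only be sketched with hypothetical constants. The one property that matters is that structural events set the high bit, while string-table index bytes never do:

```java
// Hypothetical event codes for the persisted stream; none of these values come
// from the real implementation. Structural events always set the high bit, so
// any byte with the high bit clear can be read as part of a seven-bit index.
final class Events {
    static final byte START_DOCUMENT = (byte) 0x80;
    static final byte END_DOCUMENT   = (byte) 0x81;
    static final byte START_ELEMENT  = (byte) 0x82;
    static final byte END_ELEMENT    = (byte) 0x83;
    static final byte ATTRIBUTE      = (byte) 0x84;
    static final byte TEXT           = (byte) 0x85;
    // ... plenty of high-bit values remain for comments, processing
    // instructions and the "pseudo events" mentioned later.

    static boolean isStructural(byte b) {
        return (b & 0x80) != 0;   // high bit set: structure; clear: string index byte
    }

    // One plausible way to emit a string-table index: base-128 groups, most
    // significant group first, every byte with the high bit clear. Because
    // indexes are always delimited by structural events, no continuation bit
    // is needed in this sketch.
    static void writeIndex(java.io.ByteArrayOutputStream out, int index) {
        if (index > 0x7F) {
            writeIndex(out, index >>> 7);
        }
        out.write(index & 0x7F);
    }
    // For example, writeIndex(out, 200) emits the two bytes 0x01 then 0x48
    // (1 * 128 + 72 = 200).
}
```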
This is what the beginning of that document that we've been looking at would look like if it was held in memory. We have on the right the human-readable form of the events: the start document, start element, reference to a string, and so on. On the left we have the byte values we're using to represent those things. In the middle is the bitwise representation of those events, and you'll notice that the high bit is set for the structural phenomena, so we can clearly distinguish between what is structure and what is string content. Of course, there are many situations when we want to refer to indexes of string content higher than what we can hold in seven bits, so we have to have an encoding mechanism for referring to numbers higher than 127 decimal. If you want to represent the number 200, you can do it like this. You'll see that there are two numbers there, using a seven-bit coding sequence to represent numbers higher than 127. The principle is we can always see that there's an interspersal between structural phenomena and textual phenomena in the document. Just as a thought, another way of thinking of this is it's like we have instructions representing the things that make up XML documents, and then operators, if you like, for saying how that instruction should be interpreted. So a start document is just an instruction that has no numbers associated with it, whereas a start element has a single string, which is that element's name; an attribute has two strings associated with it, which are the attribute name and the attribute value. I don't know, but that's interesting. So what happened when we tried to implement this? Early implementations used a SAX reader to create the stream, so you can take the SAX events and then carve them up a little more finely to create the stream I was talking about. We used a simple hash map of strings for the string table, so that's not optimal for all the reasons I was talking about earlier. We've got the object overhead and so on but, hey, we're getting the normalisation. The early implementations also did not handle all of the infoset. It looked promising, so we went ahead and implemented the whole thing. Issues. The biggest issue was that the thing ran like a pig in treacle. It was very, very slow to query the stream. That was largely, we found, because of all the scanning that was taking place. The stream was very efficient but it was very slow to access any part of it, because you had to iterate over the stream values in order to find anything that you wanted. Especially, if you think about it, it was very slow to iterate over the stream looking for things like following siblings, because you'd have to iterate over all the descendant content before you eventually got to the following sibling you were looking for. To try and speed things up, we introduced three strategies which paid dividends. The first of these is something I've called pseudo events, which are things we introduce into the stream to try and make things more efficient. You remember earlier on I was talking about how we had plenty of space left to represent things other than traditional infoset items. This is where that extra space comes in useful, because we can introduce events into the stream which represent signposts, if you like, for iterators so that they can move much more efficiently around.
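Going back to that proof-of-concept string table for a moment, here is a minimal sketch of the dictionary normalisation it provided: a hash map from string to index plus a list for lookup. The real implementation later replaced this with a single character array and offsets, as described below.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: duplicate strings (element names, repeated attribute values and
// so on) are stored once and referred to by index from the event stream.
final class StringTable {
    private final Map<String, Integer> indexOf = new HashMap<>();
    private final List<String> strings = new ArrayList<>();

    /** Returns the index for s, adding it to the table on first sight. */
    int intern(String s) {
        Integer existing = indexOf.get(s);
        if (existing != null) {
            return existing;               // normalisation: no second copy stored
        }
        int index = strings.size();
        strings.add(s);
        indexOf.put(s, index);
        return index;
    }

    String lookup(int index) {
        return strings.get(index);
    }
}
```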
If you look at the blobs on that diagram, you'll see that there's the blue and orange ones representing the structural phenomena and the text phenomena we were talking about earlier, and also this new green one, which is, if you like, a signpost for an iterator to say: if you're looking for the following sibling, you just need to skip 5,000 bytes forward. It would remove all that unnecessary scanning that we were suffering from before. I should note here that this is a classic memory speed trade-off, in that each of these pseudo events is of course increasing the amount of memory we're using, but it is giving us a speed increase. They can of course be placed anywhere in the content and we can make them up as we want. In the end, we ended up settling on, at the moment, these signpost events for speeding up the iterators for the stream: one for following sibling information, one for preceding sibling information and one for information about the parent of that particular element. So an iterator can find those things quickly if it's moving through the stream. All of these signposts specify new stream positions where you need to go to find the phenomena they describe. Other events, I should just mention, that we record: we record CDATA sections, because we have strange clients that want to validate whether things are in CDATA marked sections for some reason. We also record line and column numbers, because our users insist on seeing line and column information for everything that's returned. That is actually quite a big overhead; it takes a lot of memory to do that, as you'll see later on. Strategy number two was better string representation. You remember I talked earlier about how we used a simple hash map in the proof of concept. It's much more efficient to represent all of the text content as a single stream of characters and then have offsets into that, rather than have a hash map or other structure. Java being Unicode, most of the Unicode characters fit into two bytes, so that's how we've represented it here. No cheating for English content or so on. An interesting problem we had was also dynamic container expansion. Most Java containers, and so the ones we designed as well, obviously need to resize when you try to put something into them and they're not big enough. Generally their behaviour is to double in size when you hit this barrier, which is generally a sensible thing to do, but it's not a very sensible thing to do in many situations when you're parsing XML, because you're hitting threshold values all the time and you end up with dynamic containers which have resized and are much too big for your actual purpose. In order to overcome this, the strategy number three we introduced was something called document sniffing. You parse the document once before you actually build the tree and you collect statistics about the document, so you know exactly how much space you're going to need in order to store all the information you want to. Then you allocate that memory and store your stuff in the memory that you have allocated. Remember earlier I had a table showing how much memory was used for particular operations? That was measured by actually running the thing on the command line and then seeing, does it work or does it fail. Transient memory usage is very important.
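The document-sniffing idea can be sketched as a two-pass build. The statistics gathered here and the container types are illustrative only, not the real implementation:

```java
// Illustrative two-pass build: a first streaming pass only counts, then exact
// allocations are made, so no container ever doubles past what is needed.
final class SniffingBuilder {
    // Pass 1: gather statistics (just two counts here for brevity).
    static final class Stats {
        long eventBytes;     // bytes needed for the structural and pseudo events
        long textChars;      // characters needed for the string content
    }

    byte[] events;
    char[] text;

    void allocate(Stats stats) {
        // Pass 2 can now fill fixed-size arrays; nothing is resized, and no
        // transient over-allocation has to be trimmed back afterwards.
        events = new byte[(int) stats.eventBytes];
        text   = new char[(int) stats.textChars];
    }
}
```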
One could argue here that we could just trim back these dynamic containers after the parse and reclaim the memory that's been wasted, but I think the key question is: do you have enough memory to run the operation in the first place? I want never to have to trim back excess memory that's been unnecessarily allocated. Where we currently are, we've got some benchmarks for the implementation. You remember building a DOM document took 14 seconds. The frozen stream takes 11 seconds. The lazy DOM, if it's really a partially lazy DOM, would of course take much more memory if I did anything useful with it. The lazy DOM took 231 megabytes and we've done quite a bit better with 117 megabytes. If we store the line and column information, which we need to, you'll see that that spoils things slightly: we've got 217 megabytes, which is comparable with the DOM implementation, and it takes longer as well. I said earlier we wanted something that was useful rather than just of academic value, so the next challenge is how do you do that? One thought is that you might want to put an API on top of the structure that you've built. Do we really want another API for XML? If we did, I think what this model suggests is an iterator-based API, because that's the underlying model of how we access the data. Perhaps one that used XPath axes, because there's an axis and a direction, and then you get the iterator. However, that would not be very useful because there are no libraries out there that recognize such an API. Our approach was instead to try and implement XPath, so you could actually carry out XPath queries on the document. The reason we did that is I think XPath is a fairly sane way to interact with XML in code. You build a tree, you write your XPath queries to get at the information that you want, rather than manually walking the tree. Also, because we're interested in Schematron, having something that supported XPath would of course be very interesting for us. The library we chose was Jaxen, which we've got some familiarity with. It's a stable library, especially since Elliotte Rusty Harold got involved; it's been very well attended to. It's a high-performance and conformant XPath 1.0 library, I should say here, at that URL. Integration with libraries in general is difficult because we have a representation that is based on bytes, whereas most libraries out there expect a tree-of-nodes type model or a standard DOM or something. However, Jaxen is attractive because it works with any model which can provide axis iterators for iterating the XPath axes. In theory, all we needed to do was to provide XPath axis iterators on top of our frozen stream and we could integrate the Jaxen library with it. However, digging a bit deeper, we found that Jaxen, too, is predicated on the representation of nodes as objects. That led to us needing to rewrite Jaxen entirely so that it itself was also built around integers representing positions in byte streams rather than using objects. That took a while. This is the point in the presentation where things went a little awry. You'll see two things up there: on the left is a norovirus. This knocked me out for a couple of days last week. On the right is a chickenpox virus, which knocked out my colleague Andrew Sales for a whole week. He's done most of the heavy lifting on this project.
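The shape of the resulting integer-based access layer might look roughly like the following. This interface is illustrative only; it is not the actual rewritten Jaxen or Probatron API.

```java
import java.util.PrimitiveIterator;

// Illustrative only: nodes are identified by int positions in the frozen
// stream, so evaluating an XPath axis never needs to allocate node objects.
interface FrozenStreamNavigator {
    PrimitiveIterator.OfInt childAxis(int node);
    PrimitiveIterator.OfInt followingSiblingAxis(int node);
    PrimitiveIterator.OfInt attributeAxis(int node);
    int parent(int node);            // -1 when there is no parent
    String name(int node);           // resolved through the string table
    String stringValue(int node);
}
```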
However, I can say informally that the results of implementing this, and implementing Schematron on top of it, are promising so far: running the standard Schematron ISO templates with Saxon gives a baseline speed, and running our implementation seems to be about twice as fast, but using 30% more memory. Our implementation can, of course, be tuned to be leaner and slower. If we get rid of some of the features I mentioned earlier, we can be much more compact, but the performance will suffer, speed will suffer. The code will be released under a GPL licence, as something called Probatron, which will be a Java implementation of Schematron. What other optimisations could be carried out on this frozen stream? One thought I had was: is it amenable to assembly language programming, to try and make things even faster? Since the basic operation is just iterating over bytes, it would seem that that might be an approach that would pay dividends, especially if it leveraged some of the clever tricks you can do with modern chips. I noticed Intel had done some interesting work in this area with its toolkit, but on going to their site, I see the product's been withdrawn and rolled into something called SOA Expressway or some such. That seems to have gone away, but it looked very interesting when I last looked. There are other interesting features of the frozen streams. They are highly amenable to being paged to disk, of course, because they have no dependencies on other parts of themselves, or to being split across machines. There are some interesting possibilities there for minimising memory usage or having some kind of multi-process, multi-machine approach to treating them. I also noticed recently there is a trend towards using multimedia hardware to do operations on things which are amenable to blitting and so on. Again, because we are dealing with basic bytes, operations on them might well be moved into the high-performance RAM on our multimedia graphics cards and then dealt with that way. Another option, of course, may be to design custom hardware to deal with these kinds of memory structures efficiently, if we can optimise the hardware itself to deal with these structures well. Conclusions. I should say this talk is based on something I first gave in Montreal, when it was still theory, back in 2006. The situation has improved a lot since then. Java itself has got a lot better and Saxon has got a lot better. But in-memory trees are still expensive. I said Saxon has got better. It is very, very hard to beat it. It truly deserves the label of the Rolls-Royce of XML libraries. I don't think we have necessarily done any better than it has achieved. 100% streaming still remains the Holy Grail. That is another topic for another talk perhaps. If we could really move away from building these in-memory trees, that would be ideal. I think users might value the ability to choose whether they want to have high speed or good memory use, especially if they are having to validate large amounts of XML and they don't care how fast it happens. As I said, maybe there is some scope for extreme optimisation using some of those slightly more blue-sky techniques I spoke about towards the end of the presentation. Thank you very much. So we have a few minutes for questions. Yes, Mike. Did you consider representing the nodes as flyweight objects? Did you discover anything about that? If you go back to the Montreal version of my talk, you will see there is a section on flyweight objects.
We did think about that, but it seems Jaxen needs to have a lot of these objects in memory at once in order to do operations. So we bit the bullet and went the whole hog, doing the integer-based list instead. For the sort of validations we do, we often seem to get into a situation where we need to have huge node sets in memory at the same time in order to compare them. So we were moving back towards the object problem if we were going to do that, I think, and that's why we rejected that approach and went for the rewrite instead. The question is, does using the seven-bit representation for strings have a cost implication? Not that we've measured. I don't think, compared to the other things that are going on, the cost of manipulating things at that level would be expensive, but that's just a hunch. I haven't measured it.
At the 2006 Extreme Markup conference in Montreal I presented a paper outlining a method of XML processing based around “frozen streams” which seemed to promise better memory usage and execution time for common XML processing operations.
10.5446/31145 (DOI)
Thank you very much. And I also want to thank the organizers for their generosity in including our pre-conference tutorial this year. It meant a lot to us, to my wife and I who run Crane Softwrights, it meant a lot to us. So thank you to the organizers for their generosity in letting us run the pre-conference tutorial. I'm trying something different at this conference. I'm not going to show you a lot of angle brackets. I want you to think about things. And I feel a bit constrained because the camera is facing this way. I have to stay in front of it. Those of you who have seen me teach know that I have to walk around the room and I have to wave my pointer and draw my lines and stuff. I'm going to try not to do that because I want this to be more of a cerebral presentation. I want you to think about things that aren't angle brackets because they will be important to you, or they should be important to you. I am feeling a bit daunted because I am about to present something that someone I very much respect, in Michael Kay, apparently calls ludicrous. And it is the basic theme of my presentation today. And at least one person thinks it's ludicrous. I want us to think about the role of codes in XML documents. Now, we've had angle brackets for a long time, since the early 80s. We've been marking up documents. I have friends in this room from when I got involved with markup in 1992, back in the SGML days. And only in the last five years did I learn about the use of codes in documents, in business documents. You may not have known that I've branched out from the style sheet and query world into the e-commerce world in UBL, the universal business language. I've been playing in that sandbox for about six or seven years. And there was a technical problem in UBL that I've been able to address and we'll talk about that today. But it opened my eyes as an angle bracket guy to this world of codes that have been around for centuries. Codes in documents have been around for centuries. We've all seen stagecoach report documents where at the bottom of the list is a 999 for a wild card code value for representing some kind of luggage that wasn't in their code list. So, those code lists have been around for centuries and we XML designers and implementers have got to implement them for our users whether we like them or not. So, at this forum, I'm going to try and get you to think not of angle brackets but of codes as first class information items that we have to manage for our users, or that we have to equip our users to manage for themselves. How often have we tried to let our users manage codes, or information, for themselves? So, what I've done is I've excerpted a number of slides from a day-long tutorial I have on these technologies. I'm just going to point you folks to the specifications. I'll let you read the specifications on the angle brackets. You'll understand what I'm saying in the specifications, and I'm not here to read you the specifications. So, I want to think quite cerebrally about information and the kind of information we're used to managing with angle brackets and the kind of information our business users have been trying to manage.
For centuries, and we have to somehow shoehorn their concepts of codes into an information management framework that allows us to dictate where these codes belong and allows our users to manage the codes because we who are on committees cannot expect to know what our users are going to need on a day-by-day basis. So, and I do mean day-by-day because business relationships change day-by-day. Committee specs don't. And I just want to echo the comment that was made by Mike Kay about volunteerism and, and work on committees. He's talked about the committees on the W3C and, and the efforts already. Makoto has talked about the number of people and the writing of guidelines on the ISO committees. I'll talk about the number of people on the OASIS committees. We all need help. So, if you can find a way to volunteer and perhaps if you can encourage university people to volunteer their efforts in standardization committees and work, it will give them exposure. It will give them experience in working with these technologies. It will give them an opportunity to contribute. I, I just, I hope to echo for, for the W3C, for ISO and for OASIS that all of these committees need help. But back, back to, back to the, to the business at hand. I want us to realize there are two kinds of vocabularies in our documents when we're dealing with business documents. There's this vocabulary labeling the branches in our structured information. And we have a technology called XML that allows us to express our information in a tree to label all of the branches and to validate that our use of the labels of our information is correct so that our applications can identify and find the information in our documents. In order for our applications to implement those semantics. Okay, so we, we all know that. I'm preaching to the converted here. But inside those documents, inside those documents are codes, identifiers, information items that have controlled vocabularies. Now at the break, I heard about the, the hackles being raised at the word code and code lists. Well, I think the generic term is a controlled vocabulary that term has been used for a while. And these are values that don't impact on the structure, but they are important to the application. They, they represent semantics that trading partners and communities are going to, to base their business decisions on their business practices on. And we want to be able to manage these important values in our XML documents. That means we have some responsibilities that I want to share with you because if you are going to go and design documents that are going to work with semantics represented by codes, you have some responsibilities that I want to share with you so that you can leave here with those in mind. Well, a, a controlled vocabulary is some set of agreed upon values. And I've got some examples here. We all know about country codes, currency codes. I was quite interested to learn about the standardized list of transaction payment means has been around for a long time now. Payment by credit card, payment by cash, payment by deposit to a bank account. All of these are our transaction payment means and the codes for these have been standardized for a long time. Units of measure, again, a long time. Now contrast a, a code with an identifier, you have units of measure and you have things being measured. Well, those things being measured are identifiers. The unit of measure is a characteristic of the thing being measured. 
I might have a list of, of training products, books, products, videos that I sell. I want to have identifiers, a controlled vocabulary of identifiers where each value is representing some semantic. Now, we've been managing information in trees, but I don't think many of us in this room have been managing these codes, managing these identifiers. And so I want to talk about how we can do that because when two parties are using our technology to get information from point A to point B, we want to make sure that the codes used in these documents are going to represent the same semantic, the same meaning. Otherwise, we might be paying invoices for totally different things than we think we are because we have a misunderstanding of what these codes represent. So there's some obvious enumerated concepts and we have data types in our schemas for enumerations that a piece of information is, programmatically is going to be either value A or value B. Well, there's a public vocabulary like currency code when we want to represent that a payment is in US dollars or British pounds. We have codes that represent those semantics, those concepts, and we're going to agree upon those concepts so that we're actually outlaying the correct amount of money. But just between two trading partners, they might have nuances of meaning or their own representation of codes between two trading partners. So we need an information management structure in order to handle these different kinds of concepts. Well, necessarily, these codes have to be unique in a given list because we're trying to avoid ambiguity. So we need a representation that collects these unique values in a list. So we need some list metadata in order to identify the list from which we get this code. Thinking about relational database keys, each key has to be unique, but we can have multiple tables. So we need some kind of information management for identifying the list as a whole, and we need information management to talk about the values in that list. And yes, in W3C schema, we have the annotation facilities, app info that allow us to create custom structures that can contain value level metadata. But I want us to think outside of that. I want to think outside of any particular schema language. And the analogy to value level metadata is the columns that are associated with the key itself. Now, unfortunately, some terminology in our industry has been influenced by ISO. They tend to call the value the code and to call the description the name. Now, this has been awkward because when I've been dealing with people who are new to XML, but they are not new to ISO code lists, they talk about the code and the name. And in our project work in UBL, we have maintained this uncomfortable use of terminology between the name and the description. Well, these are typically abbreviations. One of the goals is to be nice and short with the code, but there are other goals where the representation may or may not be meaningful without some kind of cross-reference. And so you get into mnemonic codes, USD. I'm pretty sure everyone would probably recognize what USD is in its context. When we're talking about currencies, we probably would accept what USD stands for. Another issue about mnemonics is mnemonic in which language? I have to tell my fellow North Americans that SP is not an appropriate abbreviation for Spanish. That Espana is Spanish for Spain and ES is the appropriate abbreviated code. We have to be careful about language when we're dealing with mnemonics. 
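As an aside, the list-level and value-level metadata just described can be pictured as a small data structure. The field names here are illustrative only; they are not the element names of genericode or of any particular vocabulary.

```java
import java.util.List;
import java.util.Map;

// List-level metadata identifies the list as a whole; each row carries the
// unique code plus its value-level metadata, much like a keyed database table.
record CodeList(String listId, String version, String canonicalUri,
                List<Row> rows) {

    record Row(String code, Map<String, String> valueLevelMetadata) {}

    /** Returns the value-level metadata for a code, or null if it is not in the list. */
    Map<String, String> describe(String code) {
        for (Row r : rows) {
            if (r.code().equals(code)) {
                return r.valueLevelMetadata();
            }
        }
        return null;
    }
}
```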
So if we start getting away from mnemonics and we start have non-mnemonic codes, 42. The answer. What is the question? Well, apparently it's what is it for the transaction payment means. Apparently 42 means payment to bank account. Okay, so I've got an XML document and I see the value 42. What does that mean? So what is the question? Well, we have to know what is the context of use, what is the list that we're using, what version of the list, perhaps 42 has changed over the years to mean something else. We have to manage this kind of information. I like the fact that codes are abbreviated from the same code list, 51. Can you imagine someone putting this value into an XML document correctly so that our programs would unambiguously identify this kind of French standard by having our users type this value accurately every time? I think codes really do have an important role because they promote a consistent interpretation of the value when we have managed it appropriately and we have given our users the tools to manage what these codes mean. Okay, now a religious question. What is a code and what is an identifier? Many people feel these are interchangeable. I'd like to think that they do have a distinction. I tend to use codes for characteristics and I tend to use identifiers for lookup values. Again, the difference between a currency code for an account and identification of which account is the difference between a code and identifier. A measurement of meters or the identification of which measurement, the gross width or the net width. This is not hard and fast. That's why I say the word typically. Many people will interchangeably use this. I have found even in the UBL schemas there are things they call codes which I would have called an identifier but they didn't ask me at the time. Nevertheless, we have these values in our documents and trading partners may want to constrain all of the codes that are available. Think about my, I live in Canada and I do a lot of business in the US. My bank is telling me I may be able to get to deal with euros soon but right now I can only deal with US dollars and Canadian dollars. I have a whole iso list of currency codes but in my trading arrangement I only want to use US dollars and Canadian dollars. I want to manage my use of this controlled vocabulary. I want to express that I only want a set of codes, a set of controlled vocabulary of US dollars and Canadian dollars but I want them to represent the semantics that are in the standardized list. I want to say that my USD means US dollars as defined by iso. That's just an example of trading partners wanting to constrain code lists or maybe even augment them, add something new. There are custodians out there of the abstract concept of a code list. They manage these in their own ways. They might be using databases. I've seen a lot of pros lists. Here is the code list and they give me a PDF file and I'm supposed to manage my code list with a PDF file. Well, I have seen enumerations and schemas. Very popular to see CSV files. They might be managing them as database tables. Inventing their own colloquial vocabulary or I'd like to propose using a standardized XML vocabulary for code lists that has a lot of these management features. Already many concepts are officially maintained. Currency codes and country codes are maintained at ISO. UNC FACT, a division or a committee underneath UNECE is responsible for the payment means codes. 
That means our users, or, if they've given the responsibility to us XMLers, we, are going to have to decide which of these codes are applicable in our space. We're going to have to find the custodians, find which organizations are acceptable to trading partners to determine those codes that represent the semantics we need to use in our information. That's a community responsibility when designing the interchange of documents. They're going to have to decide whose controlled vocabularies we are going to use. We might then decide to subset those codes. As I gave you my example, I want to use the ISO code list for currency, but I only want to use two codes from that. Many people need to extend official code lists. There is a code list for transportation status. One of the UBL documents reports transportation status in an XML structure. UN/CEFACT has transportation status codes and the U.S. Department of Transport has status concepts that are not included in that list. They have decided that in the management of information with UBL, they need to represent both the official semantics plus their custom semantics. That introduces a management requirement. It also introduces a new facet of conformance, or of validation. What's happened in UBL is we have formally separated UBL conformance from code list conformance. I'm going to bring that up in a diagram later, but we have formally separated two aspects of conformance when using UBL. I'm going to be referring to UBL a lot here. Although these techniques have been developed to satisfy UBL, they are available for all vocabularies, whether it's a business vocabulary, a scientific vocabulary, or any vocabulary that has controlled vocabularies. UBL, why is that important to Europeans? There is an edict. Five years ago, the European Union said every member of the European Union has to do public procurement electronically by 2010, next year. There aren't a lot of vocabularies out there. In fact, I think only UBL satisfies the requirement. There are a couple of projects. Please talk to me during the breaks if you have any questions about procurement standards, business standards in Europe next year for the 2010 goal. I'm a co-editor of the UBL spec and I love to talk about UBL. As a community of users, there is a committee called the BII that is specifying the requirements for public procurement in Europe, and they have this responsibility. They have to decide which codes to use, which codes to extend, what do these codes mean, how are they represented, which values do we not care to constrain, that we want to leave unconstrained; all of these are options. Well, in the management of this information, I want to talk about three kinds of metadata. The list level metadata is identifying the actual list of codes that have the unique values. And I have on the left here the UN/CEFACT payment means code list, which has hundreds of values, and what if I'm in a situation where I only need to use cash or certified check. In my business practice, I only want to accept, of all of these codes, 10 and 25: cash and certified check. So I'm going to create a code list whose list level metadata is my own list level metadata. One of the lessons I've learned in this management of information is you can't subset the ISO list and call it the ISO list from a management perspective, because it isn't the ISO list. This is my list of ISO codes.
So I've introduced into this management philosophy the concept of a masquerade, where I have created Ken's code list of just cash and certified check, but at the point of validation, I'm going to masquerade this as if it were the official UN/CEFACT list. And the masquerade satisfies the semantic association. Even though it's Ken's code list, you don't have to know it's Ken's code list for 10, because the masquerade says that this 10 means whatever ISO says, or whatever UN/CEFACT says, this value 10 means. So I'm managing my list with my own list level metadata, but there is the concept of the masquerade, which says that these values come from this table for semantic purposes. And what if I want to add a new payment means? Perhaps I come from a southern region and I have a new currency in my trading agreements, and I want to include this in my instances. Well, I could have then an alternative set of payment means and list my codes there. This diagram also shows another aspect about document context, because what if for whatever reason in my XML documents I want to have different lists applicable at different contexts of my document? Well, one of the aspects of UBL modeling is that every information item is globally declared with a global type, which means the values have global scope, which means if I use XSD enumerations, I can't have these kinds of document-contextual differences in the controlled vocabularies that apply. So in this management scheme, we've addressed that. We are supporting the ability to say that at this point of my XML document, I want the full UN/CEFACT list to apply. At this part of the document, I want this controlled subset, and at this point of the document, not only do I want my subset, but I also want to allow this value. And I am doing that association of document context with the lists that apply, the values that apply. Okay, now that's list level metadata. The value level metadata is helping me understand these codes. And the first two columns here are labeled with the ISO names of code and name, and in the first issue of UBL, we only published code and name per the ISO approach, and we ended up with this ambiguity. The name was insufficient to distinguish the code for users. Users wouldn't know from the name which code to use. Value level metadata is so important. When you are describing your codes, this is the unique column, or perhaps a combination of columns to make a unique column, you need value level metadata to help convey what the semantics of this code mean. Now that might involve normative values that are crucial to the distinction of these codes. It might include informative values like language translations. Wouldn't it be nice in managing this code list to have a user go to the code, to the value level metadata, and see a translation of what this semantic is meant to represent? So we have value level metadata. List level metadata, value level metadata, and something that we designers need to remember is instance level metadata. When we design our vocabularies, we have a responsibility when dealing with codes to give our users the opportunity to distinguish the semantic of a code by indicating list level metadata in the instance. UBL accommodates this. There are attributes on information items where you specify the list level metadata from which the code comes, so that my application, when it sees a code, will know which list it comes from. Now, I've slightly changed my union here.
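Before continuing with the union example, the masquerading subset described above might be built like this, reusing the CodeList sketch from earlier. All names and identifiers here are illustrative.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// A private subset keeps only the agreed codes but carries the canonical
// identification of the official list, so "10" still means whatever the
// official custodian says "10" means.
final class Masquerade {
    static CodeList subsetAs(CodeList official, Set<String> keep) {
        List<CodeList.Row> rows = official.rows().stream()
                .filter(row -> keep.contains(row.code()))
                .collect(Collectors.toList());
        // Ken's two-value list, masquerading as the official payment means list:
        return new CodeList("kens-payment-means-subset", official.version(),
                            official.canonicalUri(), rows);
    }
}

// Usage: CodeList mine = Masquerade.subsetAs(officialPaymentMeans, Set.of("10", "25"));
```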
I have a union of these two lists. If I use the value 25 somewhere in my XML document, that's unambiguously certified check. Well, what about the value 10? If I just put the value 10, notice that I'm reusing that value from two lists. Now, these lists are managed by different custodians. They have their own reasons for coming up with their values. So we can't mandate or dictate to these custodians to use unique values. So here I have two separate lists with the value 10. If my user says 10, they need to be able to say from which list this 10 code comes. It comes from the UN/CEFACT list version 7a, or it comes from the alternative payment means list. Now, if we design an XML vocabulary and we forget to give them this power, and we only let them specify a code, that might be ambiguous to an application. So I want you to remember your responsibility when designing a vocabulary, a business document vocabulary, or any document that's using codes, to provide instance level metadata. And this is where the masquerade comes into play. Because I'm saying this is 10 from the UN/CEFACT list, when in fact behind the scenes it's 10 from Ken's list, but I am masquerading Ken's list as if it were the UN/CEFACT list, to make sure that the semantics are agreed upon. So those are the three. I want you to leave here with respect for list level metadata, value level metadata, and instance level metadata. Well, how to model these? I see these really as sparse tables, is what I see these as. Tables where the codes are indicated in one column, and you may have sparse or full metadata in the other columns. How you maintain this behind the scenes is up to you. You might use a database, you might use any kind of information. I've seen uses of spreadsheets, word processing, CSV files. And when you try to trade these files as part of legal contractual agreements, they might be difficult to keep unambiguous. So what do we have available to us? Bless you. We could be using something like a W3C schema enumeration in our use of, if that's for me, have them call back. The enumerations might, like UBL's, have document-wide scope of reuse, which means we can't have these contextual differences of the different code lists. There is a standardized XML vocabulary for controlled vocabularies, for these code lists. It allows us to specify list-level metadata and value-level metadata. It's still your responsibility to design in the instance-level metadata for your users. But at least we have an XML spec for list-level metadata and value-level metadata. And it is not meant just for validation, it is also meant for any purpose, like I'll talk about data entry in a sec. Now, there is still a role for schema expressions, and that's what I said, when you design your schemas, please provide for your users to specify optional instance-level metadata. Oh, right here, data entry. Wouldn't it be nice in a user interface when you are offering to your user a drop-down list, in that drop-down list, to only use the subset of codes that you want to apply in the trading partner relationship? And if the user has a question about one of those codes, wouldn't it be nice to expose all of the value-level metadata about that code? So genericode satisfies the value-level metadata and the list of values for the drop-down list. Now, the instance-level metadata, you could be populating that, because what if you've indicated that the information item is the union of two lists with an ambiguous value, that number 10?
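As a further aside, the disambiguation that instance-level metadata makes possible is essentially a lookup keyed on both the list identification and the value. This again reuses the earlier CodeList sketch; identifiers are illustrative.

```java
import java.util.Map;

// The bare value "10" is ambiguous across two lists, so resolution needs the
// list identification carried in the instance as well as the code itself.
final class CodeResolver {
    private final Map<String, CodeList> listsByUri;

    CodeResolver(Map<String, CodeList> listsByUri) {
        this.listsByUri = listsByUri;
    }

    Map<String, String> resolve(String listUriFromInstance, String code) {
        CodeList list = listsByUri.get(listUriFromInstance);
        if (list == null) {
            throw new IllegalArgumentException("unknown code list: " + listUriFromInstance);
        }
        Map<String, String> meta = list.describe(code);
        if (meta == null) {
            throw new IllegalArgumentException(code + " is not a value of " + listUriFromInstance);
        }
        return meta;
    }
}
```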
So you want to select the number 10, you can say, from which list have I selected number 10, and the data entry package would auto-populate the instance-level metadata to do the disambiguation. Applications, what I found interesting is that this approach to using codes allows very generic applications to be written. Imagine that payment means codelist, okay? I think many different kinds of payment means. If I'm writing an application working with UBL, I might support two or three dozen different kinds of payment means in my application, and I can write that code once. Okay, I've got this application working on transaction payment means, and in my trading partner relationship with Priscilla, we're doing this business together. She wants from me only cash or certified check. She can constrain in our XML instances the use of only cash and certified check from instances of me, even though her application supports three dozen different kinds of payment means. As our trading partner relationship matures, she can accept more payment means from me, and she doesn't have to change the schema, she doesn't have to change the application, she only changes the validation of those XML documents that she gets from me because she is allowing more values to get through to the application. And you have now supported a new trading partner relationship with zero programming changes. All you've done is change the codelist that's involved. Very flexible for trading partner relationships, and this came up with UBL. We're trying to design a business vocabulary, and who are we to say what a business trading relationship is going to be at any point in time, or it's going to change in time. As she trusts me more, she's going to allow more payment means. I can't express that in a schema. So I don't have to manage those as first-class information items. This also offloads some of the validation responsibility, so I don't have to change my application to know about the trading partner relationship because the validation in advance of the application will reflect the flexible requirement. So I've drawn that here in this diagram, and I've cited as examples in UBL, what might be uncomfortable with the separation of the validation of structure constraints, structure and lexical constraints from value constraints. Okay, structural constraints from value constraints. The proposed processing model for UBL is to have the recipient of an XML document go through two distinct stages of validation before passing that instance to an application. So Priscilla's written an application, she's supporting three dozen different kinds of payment means. She receives an XML instance from me. Well, she first uses the UBL schemas, the published committee schemas, or the community subset configurations, to ensure that the structure is valid and the structure of the values, which is the lexical validation. So both of these are structural aspects of my document. Is my use of the labels correct and is my representation of the value inside that correct? If so, then let's now check the validation of the values themselves to help our... Oh, and this also illustrates the two kinds of conformance. We have UBL conformance and we have codelist conformance. And it's the responsibility of a community to dictate when dealing with us, we expect you to conform to this UBL schema and this set of codelists, separate facets of validation that we have to communicate to our users, implemented this way. Now, we supply to our user community a default codelist. 
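The division of labour described here, where the application is written once against the full code list and the trading-partner restriction lives entirely in a separate, easily changed check, can be sketched like this. Codes 10, 25 and 42 are the ones mentioned above; the rest is illustrative.

```java
import java.util.Set;

final class PaymentHandling {
    /** Application logic, written once, covering every payment means it may ever see. */
    static void applyPaymentMeans(String code) {
        switch (code) {
            case "10" -> System.out.println("handle cash payment");
            case "25" -> System.out.println("handle certified check payment");
            case "42" -> System.out.println("handle payment to bank account");
            default   -> System.out.println("handle other standardised means: " + code);
        }
    }

    /** Trading-partner policy: tightening or relaxing it needs no code change at all. */
    static void checkPartnerProfile(String code, Set<String> allowedForThisPartner) {
        if (!allowedForThisPartner.contains(code)) {
            throw new IllegalArgumentException(
                "code " + code + " is not permitted in this trading relationship");
        }
    }
}

// Usage: checkPartnerProfile(code, Set.of("10", "25")); then applyPaymentMeans(code);
```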
We've chosen, for 91 different coded information items, a dozen code lists that will be popularly used, and we publish this as a second pass schema, or, excuse me, a second pass set of constraints. And it happens to be XSLT; it could be anything at all, it happens to be XSLT for UBL, and I'll talk about XSLT in a second. Now, if a user of UBL chooses to change those constraints, so Priscilla wants to do business with me, she will change the second pass to only allow certified check or cash from me. She'll accept more from Norm, she'll accept more from Jenny, but from me she'll just do cash until we get to know each other, and then she changes this validation; she hasn't changed the UBL schema. All along we've been conforming to UBL, but this reflects the changing trading partner relationship and the application supports it all. Now, how do we get there? We have chosen XSLT to give to our user community because I was able to create a Schematron implementation of what are called Context/Value Association files, pulling in those genericode files, and that approach allows us to bring in business rules as well as Schematron assertions, and this all gets assimilated into a single XSLT style sheet. So the last thing I want to show to you, I'm just going to skip ahead to the vocabulary. That's the genericode vocabulary. Oh, there are pretty-print style sheets so that you don't have... Oh, that's another thing. My business users don't like seeing angle brackets. We're comfortable with angle brackets. They don't like seeing angle brackets. So there are published style sheets, and we just add these style sheet association instructions into the genericode files. Here's the list-level metadata at the top, the definition of the columns, which columns are keys, and there's my row with the exchange of sheep as my additional payment means. But we've seen angle brackets before. The last diagram I want to show before answering questions is this diagram that talks about a context/value association file, a CVA file. This is an XML instance that declaratively specifies, for different document contexts identified using XPath addresses, the association between those contexts and pointers to genericode files. And you can specify that one genericode file applies in this context here. This second one here appears to be the union of two different genericode files, and this third document context is a single genericode file. So this is what the business trading partner relationship reflects inside the standardized structure: that in doing business with Ken, only use that subset list, which is check and cash, and later on change that pointer to the larger list, not having to change my application, just changing this association of which values are applicable in which contexts. So I'm going to leave a page of links. Now all of this is... There it is. All of these are in the version of my paper. I've just got a serialization of all of these bullets, as if it were my paper, in your proceedings. You can find all of these links. CVA is currently under development, although we think, after 18 months, that all of the features, or most of the features, people need are already in there. We're going to try and standardize CVA in the next 10 to 12 weeks. Genericode has already been standardized. There is an OASIS committee.
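The two-phase processing model described above, structural and lexical validation against the published schema followed by value validation with the generated second-pass XSLT, can be sketched with standard JAXP. The file names below are placeholders, not the actual UBL artefact names.

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.Validator;
import javax.xml.validation.SchemaFactory;

// Two-pass receiving pipeline: pass 1 checks structure and lexical form against
// the published schema; pass 2 runs the XSLT generated from the CVA and
// genericode files to check the code values themselves.
public class TwoPassValidation {
    public static void main(String[] args) throws Exception {
        File instance = new File("invoice.xml");                    // placeholder

        // Pass 1: structural and lexical validation.
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                     .newSchema(new File("ubl-invoice.xsd"));  // placeholder
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(instance));

        // Pass 2: value validation with the generated second-pass stylesheet.
        Transformer secondPass = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("value-check.xsl"))); // placeholder
        secondPass.transform(new StreamSource(instance),
                             new StreamResult(System.out));          // report of value violations
    }
}
```

Swapping the trading-partner constraint is then just a matter of regenerating or replacing value-check.xsl; neither the schema nor the application changes.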
I'm the chairman of the code list committee, where we've actually taken it out of the UBL committee and created a new committee, because it applies to all documents with codes, not just UBL. And if you would like to help out our committee, we could certainly use the help, certainly use the new ideas. And as you leave here, designing vocabularies for your users, I want you to think about codes or values as first-class information items that need to be managed. And I think XML provides great facilities for managing, or at least interchanging, that information. I'm not asking you to manage your codes using genericode. I'm only asking you to publish your codes in genericode as one of the published formats so that I can use those codes. So we have to make that distinction. We're not asking people, the custodians of code lists, to change the way they're doing their business. All we're asking is, please make your codes available as genericode files. There's another appeal I can make to you as the audience. You have contacts with these people who are custodians of code lists. Could you please preach to them the benefits of publishing those codes as XML documents so that they can be interchanged, so that our programs can take advantage of them? So I'm here for both days and the breaks. I'd love to talk more about this. I'll entertain questions now. But thank you for letting me get away from the angle brackets for a while. Yes, Michael. I think you've made two points which are extremely good. We'll discount the others. One is the separation of structural validation from value validation. And the other is the need for customizing validation without the schema being changed. Yes. I think I'll just shoot it. No, no, no. Don't make them orthogonal. They need to be separate. I can see in 1.1 how I can conflate those two in a single pass. If it takes a piece of customization, I mean, I have a system where the client, where in a particular workflow, a particular field in a particular document has to have a particular value unless it's approved. But also, in that workflow, a particular part of the document that in another place is optional is mandatory. So I'm only customizing the value set to say it's not approved but it might be grounded. Okay, for those who can't hear, sometimes codes might impact grammatical constraints elsewhere in the document. I want to say that's for code impact in the grammar. I'd say the workflow customizes the schema. I'd be comfortable with both the values. I guess it would, yes. I think what I'm trying to argue is that the issue of customizing the schema according to the particular workflow is orthogonal to the distinction between value and structural validation. Okay. As a committee in UBL, we're creating a set of labels and we're asking our users to impose their own constraints on those labels, from what you're saying. We will decide to make things optional. Most constructs in UBL are optional and it would be up to a deployment to make that workflow association of changing the grammar for a particular state in the workflow. And that certainly is allowed from a UBL interchange perspective. The other thing that we're trying to do is to make sure that a UBL community cannot choose to, in their community, change the constraints.
they can't change the labels or the positions, or create a set of constraints such that the instances they create are no longer valid; no, let's put it this way: such that the instance they're creating isn't still a UBL 2.0 instance. I agree with you about the differing parts. I'm just trying to think on the fly here about this orthogonality. So you're saying that... Well, I think I agree with you. Yeah. You know, it's not completely intact, but I think... Yes. My goal here, though, was to really focus on the code part of it. Certainly you could read a genericode file and synthesize a set of W3C schema assertions, or say enumerations, and then read the CVA file and synthesize some 1.1 assertions, so that I could read both of those components, the UBL schema and the CVA file, and synthesize a single 1.1 schema that incorporates all of that. I could see that happening quite straightforwardly. Yes. One thing: you mentioned the two instances of 10s and 2 and several instances of Columbus. Yes. Isn't that really a namespacing problem? Don't we need a namespace? It's a value space issue. Namespaces are good... I tell my students that a namespace is a dictionary; names might have different meanings in different dictionaries, and they might have different namespaces. I think that's what we need to use to... Is your claim that we need to somehow put a namespace value in the metadata? Absolutely. That can definitely be one of the pieces of list-level metadata. Actually, I have just... One of the items of list-level metadata is... Right there: canonical URI and canonical version URI. I can choose, as the custodian of this list, to have as my instance-level metadata a URI as the distinguishing string; I could put a namespace URI there. At the same time, of course, you can have different containers of a list that would ultimately resolve to the same values. Yes. Because of this distinction... Because of list-level metadata and your provision of instance-level metadata to do the distinction, yes.
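The remark above about synthesizing W3C Schema enumerations from a genericode file can be pictured with a small sketch. This is purely illustrative and not UBL's published artefact: it assumes a generator walking the two-value payment list shown earlier and emitting one xs:enumeration per genericode Row.

<xs:simpleType name="PaymentMeansCodeContentType"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:restriction base="xs:normalizedString">
    <!-- one xs:enumeration per Row in the genericode SimpleCodeList -->
    <xs:enumeration value="CertifiedCheck"/>
    <xs:enumeration value="Cash"/>
  </xs:restriction>
</xs:simpleType>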
“Introduction to Code List Implementation” overviews the use of Genericode and Context/Value Association files for the representation and validation of controlled vocabularies such as code lists and identifier lists for XML documents of any XML vocabulary.
10.5446/31147 (DOI)
My name is George Bina. I'm from Syncro Soft, the company that develops the oXygen XML Editor, and I'm here with my colleague Octavian, who helped me put together this demonstration. I will show some of the important features that we added to Oxygen in the last versions, and I will be using a developer snapshot for the demonstration, because I want to show you also some features that are work in progress and will be available in one month in version 10.2. The demonstration will cover the XML authoring part and the XML development part, and at the end I will also mention some ideas that we have for the future. Let's start with the XML authoring part. Oxygen includes a visual, CSS-based editor that allows content authors to edit XML documents in an interface similar to a word processor. What we tried to do with visual authoring was to use very simple principles for editing, and in Oxygen there are just two things that you need to know to edit visually. First, you locate the position where you want to add content or markup; then you either type to insert content, or press Enter to insert markup. I will show you that immediately with a practical example. For our website we have an XML document, this one here, and we convert this through XSLT to get the Oxygen website. As a side note, you can see here that this uses an XML Schema, with Schematron rules embedded in the XML Schema, and entities; it references also the DTD for the entities. And you can see the CSS stylesheet that is associated with the document. When I switch to the Author mode, I can see the document rendered through that stylesheet, and this is how it looks. I will show you immediately how editing works. There are just two basic steps that you need to know. If I want to enter content, I just go to that position and type in the content. If I want to enter markup, I just press Enter and I get a proposal with what markup, what elements, can be inserted in that location. If I insert an inline element, it is rendered right away and I can type in its content. Now the problem is how to locate the right position, and Oxygen provides mainly three ways to let you know where you are in the document. What I use all the time is that when you pass over a boundary, you see a location tooltip that shows you where you are: you are between the title and the paragraph here, you are at the beginning of the paragraph. If I press Enter here, I can enter a new paragraph here. And basically that's all you need to know to be able to edit XML documents visually. Apart from that, if I have some inline content and I select something here and press Enter, I again get the proposals, and if I select an element, it will surround that selection. If I press Enter here, the first proposal is to split the paragraph, and if I press Enter again, you can see that Oxygen creates a new paragraph, and the split is performed deeply: you can see that the strong and strike elements are also present in the new paragraph. There are a number of views for the visual mode. For instance, I can turn on the tags, and this allows me to easily move content, for instance if I want to move one paragraph in front of another, with drag-and-drop actions. Now, one problem that we had when editing the website is that we have many people who edit the website content, and each one has different formatting options.
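The CSS and entity associations mentioned above use standard XML mechanisms, so here is a minimal sketch of what the prolog of such a document might look like. The file names, root element and entity set are invented for illustration; the xml-stylesheet processing instruction with type="text/css" is the standard way to associate a CSS stylesheet with an XML document.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="website.css"?>
<!DOCTYPE website [
  <!-- shared entity declarations pulled in from an external DTD fragment -->
  <!ENTITY % shared-entities SYSTEM "entities.ent">
  %shared-entities;
]>
<website>
  ...
</website>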
A small change made in the document resulted in a large number of changes when you compared the revisions from the Subversion repository, and we needed to find a solution for that. What we did was to use the project-level support for options that Oxygen provides. You see, almost all the option pages have these global options / project options radio buttons. We put the formatting options at project level, and all the people edit the website through the site project, and thus we get only the relevant changes that each one makes on the website. This is a problem that appears all the time when you have multiple people working on the same document. Okay, let's go back to authoring. Oxygen provides ready-to-use support, apart from this basic stuff, for the most popular document frameworks, like DocBook, DITA and TEI, and for these we provide more than the basic support. For instance, this is a DocBook 5 document, and if I switch to the Author mode, you can see a large number of actions here. Oxygen detects that this is a DocBook document from its namespace and loads specific support for that. If you notice the tables here, you can see that there are column specifications for the tables, right? Oxygen provides multiple cascading stylesheets, you can set alternate cascading stylesheets, and I can choose one that hides the column specifications. Then we provide actions to easily insert different types of emphasis, different links, sections, paragraphs and images, and to insert different types of lists, and I can just add more list items. So instead of positioning between two list items and entering another list item, right, and then doing something like that, I can just click a button and have that inserted immediately. Then we have extensive support for tables that handles both CALS tables and HTML tables and lets you create whatever table structure you like through joining and splitting cells. And we have work in progress here to handle column widths, to allow people to drag on a table column, for instance, to specify its width. More than that, Oxygen provides support for linking and for included content. For instance, here I have a DocBook 5 document that uses XInclude to pull in a few sections, and if I switch to the Author mode, I get this representation: the XInclude content is rendered to present a consolidated view of the document, as sketched below. And I can just click on this icon to go to the XIncluded file and edit there. Yes? Yes, it works the same with entities. Yes. And on the linking part, I sketched the presentation here and I put in some links using XLink; you see, here, for instance, right? So if I click here, I get to that document. But none of this DocBook support is hard-coded in Oxygen. Everything is just a configuration, and we provide a default one for DocBook, and anyone can create another configuration for another framework, for basically your type of document. If we look in the Oxygen options, you can see here that we have a Document Type Association entry, and this lists a number of frameworks that we support. For DocBook, you can see that we have a rule that detects that a document is DocBook: it specifies, okay, it's this namespace, the DocBook 5 namespace, any local name for the root, any file name, any public ID.
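The XInclude mechanism referred to above is standard; a DocBook 5 chapter that pulls in sections from separate files might look roughly like this (the file names are invented for the example).

<chapter xmlns="http://docbook.org/ns/docbook" version="5.0"
         xmlns:xi="http://www.w3.org/2001/XInclude">
  <title>Editing XML visually</title>
  <!-- each section lives in its own file and is merged into the rendered view -->
  <xi:include href="locating-the-position.xml"/>
  <xi:include href="inserting-markup.xml"/>
</chapter>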
In the detection rule we also allow you to specify a class, if you want to detect more, looking also at attribute values, for instance, to detect that type of document. Then you can see that we have the RELAX NG schema with embedded Schematron rules that Oxygen uses automatically to edit DocBook 5 files. You can specify the transformation scenarios that you want for DocBook, so when you have a DocBook document you are just one click away from PDF or HTML. But all of these are configurable by the user for any other framework: XML catalogs, new document templates. And on the Author tab we have the cascading stylesheets that we use for rendering, and then all the actions that are available in that toolbar. The support for defining these actions is very easy to use for any user, because if we look at the insert-paragraph action, it is edited visually in an interface. The interesting part here is that it performs different operations in different contexts. If you are in a paragraph, it will insert the new paragraph after it, and you can see here that the context is just an XPath expression that says: ancestor-or-self is a para in the DocBook 5 namespace. Then it performs this operation, an insert-fragment operation, where this expression locates the node, and after that it inserts the new paragraph. And here you see another context, for when you are in a title, and this is the default action. For insert section, for instance, there are 12 different contexts defined. But the idea is that you can create these actions yourself, because we provide a lot of common operations already implemented, and the source code is available for these operations, so it can easily be extended with other operations if these are not enough for what you are doing. Okay. What we have for XML authoring, and I will show you why I'm still here, for the next version, is track changes support. So if I enable track changes, then when I add content, okay, that is marked up here; if I delete something, that is marked as deleted, and I can see those markers. And in a further version, I can accept or reject changes. It will support multiple users. So Oxygen has these two main parts, the authoring and the XML development. In the Author mode the tracked changes are basically a set of highlights that mark different parts of the document, and when we serialize the document, because this exists only in the Author mode, we convert them to processing instructions, because this leaves the document in the same form: it can be processed in the same way. Yes. Okay, so... We have some more plans for the authoring part for the next versions. Apart from the change tracking, we plan to improve the DocBook support to offer better support for creating olinks. Okay, I mentioned that we extended the CSS support for handling table widths. And another thing that I want to have is a way to... you know, when you go between two paragraphs, sometimes you may need to go from one paragraph directly to another. Right now you can Tab to move more quickly to another element, but I was thinking of a way that, when you press arrow down, you sometimes get from one paragraph straight to the next, if you are interested in entering content only and not structure. Okay, let's move to the development part. This is actually what we have been doing for eight years, since 2001.
Oxygen offers support for the main XML development technologies. We support all the XML schema languages, including NVDL that you saw this morning, Schematron, XML Schema, RELAX NG, DTDs, and Schematron rules embedded in XML Schema and RELAX NG. We have support for XSLT 1 and 2, and for XQuery and XML databases. I will just show you a couple of interesting points from the latest additions. One is the new schema diagram that we added in version 10.1, which presents the schema and at the same time allows you to edit the schema directly on the diagram, without going to the code. You can do that using in-place editing, drag and drop, a couple of side views that help with editing, and contextual actions. For instance, here I included a schema, and included content is also considered in the outline view; you see it here. I can add a new element; if I want to rename it, I just click on it and enter a new name. Then I can assign a type to it, I can just add it to the person content model, make it optional. If I want to change the type, I can double-click here and choose another type, or I can use the attributes side view to change any of the properties. The outline allows me to filter: if I want to see only person and address, I can just type "add, person" there and the outline is filtered to show only those components. Another interesting action that we provide is to extract a type from an element, or extract a group from a model group. If I extract a global type from here, you see I get a proposal, the type value is set automatically, and Oxygen extracts that as a separate global type (see the sketch below). More than this, we added new schema documentation support in Oxygen. We had support before, based on the XS3P schema documentation generator; that was in HTML, with diagrams and a lot of stuff, but we had a lot of problems with it and we decided to create new documentation support starting from zero. What's interesting with this is that we generate the documentation as XML; from that, we have a couple of stylesheets that convert it to DocBook, to HTML and to PDF. The interesting part is that the schema documentation can become part of your own documentation. For instance, we use DocBook for our documentation; if the schema documentation is also DocBook, the schema documentation becomes part of your company documentation, not a separate resource you link to. There are a lot of options that let you choose what goes into the documentation. You can also export those settings to files so you can reuse them, or use them from the command line if you want to integrate the documentation generation into a workflow. You can see here how the documentation looks: it has schema diagrams, and you can click on them to navigate.
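The "extract global type" refactoring mentioned above is easiest to picture on a small XML Schema. This is just an assumed illustration, not Oxygen's actual output: the anonymous type on person is pulled out into a named global type that the element then references. Both snippets are fragments of a larger schema.

Before:

<xs:element name="person" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="address" type="xs:string" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

After extracting the anonymous type:

<xs:element name="person" type="PersonType"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"/>

<xs:complexType name="PersonType"
                xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:sequence>
    <xs:element name="name" type="xs:string"/>
    <xs:element name="address" type="xs:string" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>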
I also put here the PDF documentation generated for Schematron 1.5. There were some challenges creating the PDF documentation, because some of the diagrams are very big; you need to put in some work to get it into this form. Here is the documentation for XSLT 2.0; you can see that we also generate a type hierarchy, and you can see what elements can go inside a given element. You can also use these options to specify what is visible or not in this view. And you can have the ISO Schematron documentation in DocBook, and I can open it in the Author mode if I want to see it in Oxygen. On the XSLT support, I will just take one more moment on the schema part. I played with the visual Author page for schemas: I created a cascading stylesheet to render the schema in a visual way, and I believe it ended up as a really interesting presentation. You see, the schema that I showed you earlier looks something like this; it is really like quick documentation. More than that, you can edit here, directly in the Author mode, which is really fun. What I did this for is that I provided also another stylesheet that renders only the annotations, while the document still serializes as the schema. The idea was to be able to edit HTML annotations for the schema directly here: you see, if I enter HTML, or whatever language can be rendered for which you have a framework in Oxygen, it can be used, and you see the annotation rendered directly here. For XSLT, I have here the DocBook stylesheets. What we have in the new version is this resource hierarchy view that shows what stylesheets this one includes. You can browse down, or you can show dependencies: where is this one included from? This takes a while, because there are so many stylesheets, and then you can see the paths it is used from. We have this also for XML Schema, following includes and imports, and for RELAX NG, showing included schemas and so on. Next we plan to have also call hierarchies, type hierarchies, all this kind of stuff. What I wanted to show you for the DocBook stylesheets, for XSLT, is this: the outline shows not only components from the current document, but also from all the included and imported documents. I can filter here, and I filtered on the base.dir parameter. What I want to show is that I want to rename this using one of the refactoring actions that Oxygen provides. You can see that Oxygen detects all the places where that component is used and shows you what changes it will make. In a project this size it would be really hard to perform such a rename manually, because you would need to find all those places. I have one more topic, if I may.
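The rename refactoring just described matters because XSLT parameters and variables referenced across included stylesheets must all change together. Here is a tiny assumed sketch of the kind of cross-file references such a rename has to track; the file names are invented, and the parameter name simply mirrors the base.dir example from the demo.

main.xsl:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <xsl:include href="output.xsl"/>
  <!-- the global parameter being renamed -->
  <xsl:param name="base.dir" select="'build/html/'"/>
</xsl:stylesheet>

output.xsl:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <xsl:template match="/">
    <!-- a reference in another file that the refactoring must also update -->
    <xsl:result-document href="{$base.dir}index.html">
      <xsl:apply-templates/>
    </xsl:result-document>
  </xsl:template>
</xsl:stylesheet>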
It is the XQuery and eXist support, and I plan to show that using eXist. Oxygen supports basically all the XML databases available, provides XQuery support, and allows you to browse the resources from the database. I started eXist, and I go to the Database perspective. As you can see here, I have a database, and if I open a DocBook document that is stored in the database, I can edit it like any other resource; it is just like a file on the local system. I can save it, and it will be saved back into the database. We have an XQuery here, and I can execute it. For 10.2 we also added support so that when a database returns a lot of results, we show only a part of them and fetch the rest only if the user wants them; otherwise, if you get something like 15 megabytes of results, it will blow up the application. What we added recently is support for database functions: Oxygen shows, for instance, the signatures of the functions and what functions are available in the different namespaces, for each database. Some of the features that I showed you are not yet available in 10.1, but will be available in about one month in 10.2. I hope you got an overview of what Oxygen does. What we plan to do next is more on the refactoring part. I plan to have the visual part, like we have for XML Schema, also for NVDL, for instance, or for Schematron, and we plan to increase the number of refactoring actions, to have an IDE like Eclipse offers for Java, but for XML. Thank you. Thank you.
A live demo of oXygen presented by two of the oXygen team members. The demo will cover some of the important XML authoring and development features like: - visual authoring (DocBook, DITA, etc.) - schema development - XSLT development and debugging - working with XQuery and XML Databases
10.5446/31149 (DOI)
... the element I want to apply templates to in pre-process mode. So it's saying: when I process this kind of element in pre-process mode, what do I expect to have out at the end? This is another example, which shows the nesting aspect. So here at the top level I've got a scenario that's all about creating amending elements. The context in that high-level scenario is that I'm going to be testing what happens when I apply templates in leg-amending mode. That's amending legislation, by the way, not legs.
Inside, then, there's another scenario saying: when I'm creating amending elements for a statutory instrument, an SI. And there I've set up a context which actually has the element in it, the mock object, the element that I want to test. Although that x:context element doesn't specify a mode attribute itself, it inherits that from the outer scenario, so this is in essence testing applying templates in leg-amending mode to that particular TC element. And then I've got another scenario nested inside. There the context actually overrides the one from the level up by providing a different kind of value, so the final nested scenario in there is testing when creating amending elements for an SI with no spaces between the S and the I. So you can see that you can structure the tests to have this kind of branching, which does help to organise your test suite. OK, so that's setting up the context for testing, to say what kind of code you're going to be calling. Then you obviously want to say what you expect to happen when you make that call. These are called expectations. Again, they have a human-readable label, which usually starts with "it should", and then you describe what it should do. There are several ways of describing what it should do in terms of the code. You can give an XPath test: if you give a test that returns a boolean value, then if it's true the expectation passes, and if it's false it doesn't pass. You can also provide some sample output; you can embed some XML that shows what you are expecting to happen. If you do that, then you can use dot dot dot to elide part of it, to say "I don't care what happens inside here": I want it to definitely produce an amending element, but I don't care what its value is, for example. I'll show that in a second. You can also do that testing of the output on just a subsection of the output that's generated from the call on the code. It's possible, for example, to run a test against an entire document and then pick out different parts of the document and match those against some expected outcome. You don't need to test an entire document against an entire result; you can transform an entire document, pick out the bits that you're interested in, and test that those come out in the right way. Here are some example expectations. Again, this is from a real test suite that I've been using. I expect that it should produce an amendment element, and there I've got a snippet that shows what that amendment element should look like: it should have a made attribute of "no", it should have inside it an amended element and an amending element, but I don't care in this test about the content of those elements. The next two expectations there are talking about the values of those amended and amending elements. For this particular example that I'm testing, the reason that I've done those using tests is that I want to check that the value comes through OK even if there's some more markup inside, and in fact there would be some more markup inside, because I like to keep my tests narrow and focused on one particular thing at a time. So here it's testing the value that comes out against a string, the string value of the element, rather than its actual structure. Another feature of XSpec, which I haven't documented very well, or at all probably, is that you can actually share tests between scenarios.
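For readers who have not seen XSpec, here is a rough sketch of the kind of scenario being described. The element names in the legislation vocabulary (TC, amendment and so on), the mode name and the stylesheet name are reconstructed from the talk rather than copied from the real test suite, and details such as the exact XSpec namespace URI should be checked against the XSpec documentation.

<x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
               stylesheet="amendments.xsl">
  <x:scenario label="when creating amending elements">
    <x:scenario label="for a statutory instrument (SI)">
      <!-- the mock input: templates are applied to it in leg-amending mode -->
      <x:context mode="leg-amending">
        <TC>S.I. 2003/2613</TC>
      </x:context>
      <x:expect label="it should produce an amendment element">
        <amendment made="no">
          <!-- "..." means: an element must be here, but its content doesn't matter -->
          <amended>...</amended>
          <amending>...</amending>
        </amendment>
      </x:expect>
    </x:scenario>
  </x:scenario>
</x:description>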
This again comes from some code that I'm working on at the moment, where I've got a document and I'm trying to filter it down to a particular version of that document. It has some data attributes in it that let me say what version a particular element belongs to. The lower two scenarios there are testing filtering to get the current version of the document, and filtering to get the prospective version of the document. This is legislation again, so sometimes we have prospective versions, what it might look like in the future. Those are generated by applying templates in different modes, but for both of them I want it to be true that the document that you get in the output is consistent internally: it must meet certain criteria that I spell out. Rather than repeating those expectations in both scenarios, I can have this shared scenario: a scenario labelled "a consistent document" that has some expectations in it of what a consistent document looks like, and the other scenarios share and reuse those expectations. Another feature was put into XSpec because I found that if you build up a huge set of tests, then they can take some time to run, obviously. If you're focusing on a particular piece of code, then often the tests that you want to look at are just the tests on that piece of code. The focus and pending facilities allow you to focus on a particular piece of code, or to discount particular parts of the test suite for now. Pending scenarios are ones that are purposely not implemented yet, tests that I intend to come back to and write code for at some point in the future. You indicate that a scenario is pending either by wrapping it within a pending element or by using a pending attribute on it that describes when it will come into action. Then there's the focus attribute, which allows you to focus on a particular scenario. Here I want to focus on recognising the SI reference, for this particular scenario and this particular scenario. When it runs the tests, it will just run the ones that I'm focusing on right now. Obviously, you remove that focus attribute when you're ready to run the whole test suite on the entire thing. So that was a quick run-through of the syntax and how XSpec fits together; obviously there's documentation on the XSpec Google Code site, and there's a URL for that at the end of the presentation. Now I'll talk a bit about how it's implemented. XSpec's current implementation is based on XSLT, and it's a pipeline process. You start off with the XSpec document itself, run that through a style sheet, and it generates another style sheet, which I've called there the XSpec XSLT. I'm used to being able to point; it's a bit high up, really. You can see that the XSLT code itself, the style sheet under test, is imported into that XSpec XSLT. When that XSpec XSLT gets run, by invoking its main template, it creates an XML report, and usually you will then run that through a formatter to create an HTML report. It produces XML that you can use if you want to do your own pretty formatting of it, but there's also this HTML report. Obviously that's a pipelined set of processes. XSpec comes with a batch script for Windows, a shell script you can run on Linux or on a Mac, an XProc pipeline, and an Ant build file for invoking XSpec, so there are all those different ways in which you can call it. I'll just show you that in action. I have my version of Oxygen set up so that I can call that batch script, or the shell script in this case.
I can call that by using one of its external tools. Here I've got a description for a style sheet that I wrote, which is a basic GRDDL style sheet: it pulls out the RDFa within a document. I run XSpec, and it goes through the process. You can see the report coming up here: it creates the test style sheet, runs through the tests, gets the result, and formats it. I introduced a bug; I introduced one so you could see what happens when there's an error. There's a contents list here, which is the list of all of the top-level scenarios. It's broken down like that, and inside each of those you have the individual scenarios. I can see here that I've got problems with the RDFa test suite tests. I can click on that and go down to those. There are the ones that are green, the ones that are working okay. The ones that are grey are the ones that are pending; they've been labelled as pending, and that's because they are actually rejected as far as the RDFa test suite is concerned. I included them in the test suite, but marked them as pending. The reason there's this bug here is because I un-pended one of the tests. There's a bug here; I can see that this one isn't working. Jump to that, and it will show me what the label is. Here I've got the result that I actually have and the result that I was expecting to have. It does a comparison for you: anything that's green is okay, the comparison is okay, but anything that's red means that there's some child or attribute at some level that is wrong. I can see that the RDF resource attribute is actually different in those two cases, so we'll go back into the code and sort out that bug. As Tony mentioned, there's also a facility within the XSLT implementation to give a coverage report on the style sheet itself, so you can see which parts of the style sheet have actually been hit during the running of the tests. The way that works is similar: the XSpec gets generated into an XSpec XSLT, which pulls in the XSLT that you've actually been writing. It turns off the normal output and just adds on a trace output, using a specialised trace listener for Saxon. It pumps out an XML rendition of the trace, which is usually massive and really hard to process, and then it goes through another XSLT in order to take that trace and produce a report in a nice HTML way, so that you can see what's happening. I can show you that in action. Unfortunately, I can only show you that in action on this really crappy test one, because I broke something. It's running it, formatting the report, and I can see that this template has been matched OK, but this code here just isn't being called at all during the testing. I can go back into the tests and make sure that I'm calling that code if it needs to be called, and if not, take out that code because it's obviously redundant. One thing with coverage tests is that although they will show you where code hasn't been called, you can't necessarily rely on where it says that code has been called, that it's actually tested all of the different variations there might be. For example, if you have an xsl:value-of that has an if expression inside it, then you might always be hitting that value-of with the test succeeding within that if expression; you wouldn't actually be testing when the test fails. So you have to be careful with coverage reports. Don't think that just because it's all hit, that means that you're testing everything, but it can highlight the bits where you have missed out bits of your style sheet.
I've been using XSpec, and we've been using XSpec within the company that I work for, within TSO, on several projects now. In particular, it's been useful on projects where the transformations are really complicated and really hard to plan and understand right up front. What I found while doing this, and it's more about taking a behaviour-driven or test-driven development approach, is that initially writing tests is really hard work. You seem to be spending more time writing tests than you are actually writing code, and that seems to grind away a bit. But then you hit a point where it starts paying huge dividends. You get to the point where you need to address a particular bug: you just write a test case for it, go and change the code, and then run your tests again. If they pass, then you can be fairly certain that you have not messed up and accidentally done something that will screw up the rest of the style sheet. If they don't pass, then you can go back in and fix the error that you just introduced with your fix for the bug. It really pays dividends having that good test suite when you come back and try to revise the code later on. Especially if you're adding new features, that can be really easy. It can be hard, though, if you haven't written the tests to be focused. If you suddenly change entirely the kind of output that you want, and you've got these little snippets all the way through your tests, you have to go through all of those snippets and change them to meet your new expected result. That's why I say: try to keep each test focused on as little as possible, testing one thing at a time, and that way you avoid those headaches later on. I've found that using tests and test-driven development just increased my enjoyment of doing programming generally, and it helps improve my confidence in the code that I have been writing. Now, there are limitations, especially with the XSLT implementation. As Tony said, an XSLT implementation cannot do things like testing the generation of messages, to make sure that you're getting error messages coming out when there are errors in the input XML or in the code. It can't test code that's creating multiple result documents; it can't test that the right result documents are being generated and that they contain the right things, because it doesn't have access to those extra result documents. It also can't test with different values for global parameters. I usually get round that by having local parameters on all of the templates, to make them more testable, to make them more self-contained. But legacy style sheets especially tend to have a lot of global parameters or global variables in them, and then you run into difficulties. The fact that it is implemented in XSLT means that you can't test those particular scenarios, which is a pain. They're also not supported in the language, the XSpec language, because you can't implement them, which is a bit circular, but that doesn't mean that they wouldn't be useful. What I'm thinking about at the moment is an XProc-based implementation. This is what it would look like. Instead of the XSpec being used to create a single style sheet, which then gets run, it would be used to create an XProc pipeline plus a bunch of style sheets. The pipeline that gets generated would use those style sheets.
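As an aside on the workaround mentioned above, giving templates local parameters so they don't depend directly on global ones can be sketched like this. The parameter names and elements are invented, but the idiom is plain XSLT: a test can apply templates to or call the template and pass its own value explicitly, instead of having to control the global parameter.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <!-- global parameter: an XSLT-based harness cannot easily vary this per test -->
  <xsl:param name="output-language" select="'en'"/>

  <xsl:template match="chapter">
    <!-- local parameter defaulting to the global one: a test can override it
         explicitly when invoking this template -->
    <xsl:param name="language" select="$output-language"/>
    <section xml:lang="{$language}">
      <xsl:apply-templates/>
    </section>
  </xsl:template>
</xsl:stylesheet>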
When you ran the pipeline, it would produce the standard XML report, and you would get the HTML report out as well. By going to XProc, we would have access to messages and to multiple result documents, and we would be able to set global parameters when we invoked the style sheets as well. I would then have to introduce extra ways of actually expressing that in the language. For example, for testing messages, I could say: I expect that on the message output I get xyz, or I expect the error output to be xyz when you have a terminating message. Also, for testing different result documents, maybe having: on the result output, the main result, the result would be this, and the document with this particular file name should look like that. That's where I'm thinking of going, and if anybody has any feedback or ideas about other things it should do, I'd be very interested to hear them. To summarise, and I think I have been under time, testing really improves your code. Can you put your hands up if you're currently writing XSLT at all? Can you keep your hands up if you currently test? Yes. Testing improves your code, and it is a good thing to do; I would encourage you to do it. Automated tests will help you to do it easily. Like I showed you, just hitting the button in Oxygen to run those tests and get an immediate response saying whether my code is actually doing what I think it is supposed to, or not, is really useful. Behaviour-driven development is a really good fit with XSLT: because XSLT is a functional, declarative language, it fits to have this declarative way of describing what it should do. The final thing is that the implementation that we've got, the XSLT-based one, works, and I've been using it a lot, but I think it could be improved, and I hope that we'll be going to an XProc-based implementation. I hope somebody will help me develop that. If you are interested in joining in, this is all open-source code and it's there on Google Code; I would really value your help with it. OK, that's it. Any questions? Ideas? Yes. So what would you expect it to do with that, to turn bits of the XSLT on and off with use-when? Right, OK. OK. Yes, good point. So I guess that you... I can't see immediately how I would be able to look at that, but it might be something that I'd have to dive into the trace listener code for, like if there were a flag on the elements that were actually ignored. I think very often that kind of thing is reflected in the tests in different environments. Yeah. But for the coverage it's going to... Yeah. Also on coverage: how do we determine whether or not a particular template is covered or not covered, and do we analyse the source tree or the operation tree? We analyse the operation tree, not the source tree, in order to determine the code coverage. So a template is defined to have been hit if something inside it has been hit, I think, from what I recall of the code. It's actually quite complicated to work out from the trace what bits of code have been hit and what haven't. Yeah. No, finer grained. Yeah. Well, yeah. It's instruction level, and then I map it back to lines, so that when you see it in the test coverage output you have the style sheet looking as it actually looks. It's quite nice. So I've got the style sheet tree coming in, plus an analysis of the style sheet as a string, as a text-based file. Nice complicated code. Yes, go on. Yes, I think it could do.
Although if you've ever done a debug trace running through a style sheet, then you'll notice that it jumps around and goes back to the same bit of code again, and again, just because of the way that XSLT works. So a number of hits isn't, you know, I don't know how indicative that is in the same way, but that's certainly something I could add in. OK, thank you.
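The XProc-based implementation proposed near the end of the talk can be pictured with a minimal sketch. This is not the XSpec implementation, only an assumed XProc 1.0 fragment showing why the move would expose what the XSLT-only harness cannot see: secondary result documents appear on p:xslt's secondary port, and global stylesheet parameters can be supplied with p:with-param. The stylesheet name and parameter are invented.

<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <p:input port="source"/>
  <!-- the main transformation result -->
  <p:output port="result" primary="true">
    <p:pipe step="transform" port="result"/>
  </p:output>
  <!-- any xsl:result-document outputs appear here and can be asserted against -->
  <p:output port="result-documents" sequence="true" primary="false">
    <p:pipe step="transform" port="secondary"/>
  </p:output>

  <p:xslt name="transform">
    <p:input port="stylesheet">
      <p:document href="filter-version.xsl"/>
    </p:input>
    <!-- a global parameter the XSLT-only harness could not vary per test -->
    <p:with-param name="version" select="'prospective'"/>
  </p:xslt>
</p:declare-step>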
Test-driven development is one of the corner stones of Agile development, providing quick feedback about mistakes in code and freeing developers to refactor safe in the knowledge that any errors they introduce will be caught by the tests. There have been several test harnesses developed for XSLT, of which XSpec is one of the latest. XSpec draws inspiration from the behaviour-driven development framework for Ruby, called RSpec, and focuses on helping developers express the desired behaviour of their XSLT code. This talk will discuss the XSpec language, its implementation in XSLT 2.0, and experience with using XSpec on complex, large-scale projects.
10.5446/31150 (DOI)
So I'll get off to a running start to try and catch up, so we can finish in time for lunch. So, testing XSLT. The next slide is really just for posterity; the point to take from it is that I'm actually Australian and not Irish. So, XSLT: you've probably all seen something like this before. This slide was lifted straight from an XSLT training course. XSLT takes the source tree and the style sheet, and the XSLT processor uses the style sheet to produce the result tree. But what we're here today for is to talk about testing XSLT, so I'm going to go through different aspects of how you test XSLT. Beginning with the source, and I'm really not going to spend much time on this: you can validate your source before you do the transform, using whatever technology you have available or prefer. Similarly, you can validate the result afterwards, or you can use a schema-aware XSLT processor to make sure that the result is valid on the way out. If you're producing HTML, then once you produce the HTML there are many testing tools available for testing HTML, whether it conforms to the HTML schema or DTD; there are also WebTest and HtmlUnit for making assertions about the structure of your HTML. There's also an interesting tool called XSLV for testing the entire XSLT transform. It's a static validation tool: it reads your source schema, your result schema and your style sheet, and, to the extent that it can, it tests whether your XSLT will actually produce valid output. It converts your schemas to RELAX NG. It has to make some pessimistic assumptions about your XSLT if you get into using extension functions, et cetera, but for a certain class of transforms it's able to tell you whether your transform is actually valid with respect to the two schemas. What I'm going to concentrate on today is actually testing the style sheet. I'm going to talk about using a debugger, a profiler, unit tests, a coverage utility, and also metrics for finding things out about the structure of your XSLT. It has occurred to me that I'm probably preaching to the choir here, not just because you're arranged like a choir gallery, but how many people have used any of these tools with their XSLT? Quite a few. Also, looking at the programme and at the committee for this conference, how many of you have actually worked on producing these tools: profilers, debuggers, lint testers, et cetera? So the first of these, very quickly: you can test your XSLT with a debugger. It's not behaving the way you expect, so you fire up your XML IDE, or you can do it in Emacs if you're an Emacs user, you open the source, open the transform, and you step through and follow what's happening. There may even be a demonstration of a debugger this evening with the Oxygen folks. Another thing to do with your XSLT is to run a profiler on it. It may well be that your XSLT isn't running as fast as you think it should, so you run a profiler to find the hotspots, the parts where most of the processing is happening. The usual routine is: you profile, you find the hotspots, and it's like whack-a-mole. You fix that hotspot, you run it again, you find you have another hotspot, so you whack-a-mole on that one and find where the other hotspots are, until it's running adequately, to the best of your ability. There are profilers built into the XML IDEs. Saxon has a sort of two-pass profiling process where you run Saxon with special options; it produces profile output, you run a style sheet over that, and you get a report.
Similarly with xsltproc for XSLT 1.0: you provide a command-line option and it can produce a report that you can then format into HTML to find where the hotspots are. The thing about profiling your XSLT is that it tends to be inexact. It might be that the XSLT processor is optimizing what's happening. It might be that you're using a profiler in your XML IDE that works just by stopping the transform periodically, finding out where it is, and then starting the transform again; if it's doing that, it might be sampling too seldom, so it's entirely random where it thinks the current activity is, or it might be sampling too often, in which case the overhead of the profiler is actually skewing your results. Another option, if you have the technology and you understand the insides of your XSLT processor, is to use the underlying Java or C profiler to see where the time is actually going, and work from that to work out how to improve the performance. In the one that I cite there, someone had a problem with xsl:number, and it was exercising an obscure path through the code, and that was what was taking the time. And I was reading just last week about Michael blogging about using the Java profiler to determine how Saxon performed on particular XSLT benchmarks. So the main part of what I'm going to talk about is unit testing of XSLT, where you write a test to test all or part of the transformation. You provide a known input and you make assertions about what the expected result should be. You could do black-box testing, where you provide a document, you run it through the entire transform, you get something out, an entire result at the end, and you make assertions about that. Or you might be able to write unit tests that test a particular template or a particular function, where you can provide just a small fragment and test that. The thing that I've found is that when I talk about unit testing, people assume that unit testing means I'm advocating extreme programming, or I'm advocating test-driven development, or I have some other agenda. I'm just using the term "unit test". Different people have described their frameworks for testing XSLT under different terms; this is just a blanket term for today, and I could use a different definition tomorrow. Some people have also taken offence at the terms "white box test" and "black box test". I'm not really going to go into the details of how all these XSLT unit test frameworks work; many of the details are in your conference proceedings, and they're also available online from my website. In fact, during the break Jeni came up to me and said, "You're not going to do too many examples about XSpec and steal my thunder, are you?" The short answer is that there aren't really many examples in here. Just in case you're not familiar, I have an example of a unit test. This is for one of those test frameworks, where you write tests in Java or in XML. The highlighted one there says we're testing a style sheet called X2J.XSL. We're providing as the input just a small fragment of XML, and this particular test makes the assertion that the XSLT processor is actually going to terminate, because the style sheet won't accept that element as the document element. It shows the structure of a unit test: you provide some input, either a complete document or a fragment, as there, and you make assertions about the result. When would you use unit testing?
It's good for testing XML and HTML output, because the structure of the markup makes it easy to make assertions about what's in there; you can use XPaths to locate parts of the result. It's not so easy with text, because text is fairly unstructured, and I really don't know of a good unit testing facility for testing text output. Again, it's just one of the tools that you would use. You might also need to validate your output, perhaps even with XSD 1.1, once it's up to the 75% to 95% implementation level. Another thing about unit testing is that it's sometimes seen as the be-all and end-all of what you need to do to test. Steve McConnell, in his book Software Estimation, paraphrased two large studies of the effectiveness of testing methods, and they showed that, at its best, unit testing can probably find about 50% of the bugs in the average program. If you use unit tests as your regression tests, then the effectiveness increases. So again, even unit testing isn't the only thing that you would need to do. I'm going to describe three ways you can look at your tests: black box versus white box, clean versus dirty tests, and whether you're working with full documents as your test input or with fragments. The first of these, black box versus white box: black box testing, sometimes called functional testing, is where you treat the style sheet as a black box. You provide input and you look at the output, and you don't spend any effort looking at what the particular templates inside the style sheet are; you just use it as a unit and then make assertions about the result. Again, if you're making assertions about the result, you might as well use Schematron or XSD 1.1 to test that the result is what you expect. The other approach is what we call white box testing, where you look at the templates and you use the facility in the unit testing framework to test a particular template, perhaps by setting the context in which a template runs, perhaps by actually saying that this test applies to a particular named template or a particular function. You could use a complete document or just a fragment of a document as the test input and then make assertions about the output. Each of these has its advantages and its disadvantages. The advantage of black box testing is that you don't need to understand what's going on inside; you just need the specification for what the result should be. Your black box test will still work if someone goes and refactors the style sheet and changes all the templates, or changes some of the XPaths in the match patterns. The advantage of white box testing, where you test individual templates, is that you can test just a small piece at a time, which is closer to the traditional meaning of a unit test: in C or Java terms you'd test a particular function, and in XSLT terms you'd test a particular template or a particular function. The thing about white box testing is that, if you are testing a particular function or a particular template, then if you change the style sheet, some of your tests might not work any more. If you have a test where you set up the context so that it's going to exercise a particular template, and you change the XPath by which that template matches, then your test may no longer work. Another way of looking at tests is whether you're doing clean tests or dirty tests. By clean tests, I mean you're testing the happy scenario: you provide correct input and you get a result.
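A quick aside on the point above that Schematron can serve as the assertion language for black-box results: here is a minimal sketch of that idea. The rule and the HTML structure it checks are invented for illustration; the Schematron would be run over the transform's output as a separate validation step.

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <sch:ns prefix="h" uri="http://www.w3.org/1999/xhtml"/>
  <sch:pattern>
    <!-- an assertion about the generated HTML, checked after the transform runs -->
    <sch:rule context="h:div[@class = 'toc']">
      <sch:assert test="h:ul/h:li">
        The generated table of contents should contain at least one entry.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>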
The converse, a dirty test, is where you provide an input that you know is going to trigger an error condition — which could be that the processor doesn't produce an output, or produces a message, or it could be that the XSLT processor actually terminates. Again, they have their advantages and disadvantages. We all like to see the clean tests, because they show that our code is working. If you also test the error conditions and the parts where the specification says there should be an error, then you're testing more realistic input — you're testing more what the users are actually going to produce, because they don't know the insides of the stylesheet the same way that you would. Another dichotomy in what you want to do for your tests is whether you're providing full documents as the input to your test or whether you're just working with fragments. A fragment would be embedded in the test definition itself, where a full document is more likely to be an external document that you refer to. The advantage of full documents is that you can make a document that is valid to the schema, so you know that your input is correct. But if, in the interests of a full document, you have all the required elements and attributes, it can mean that the processing of all of these tests takes longer. It's harder to keep track of a lot of external documents, and if your schema changes, then you've got a lot of documents to update. If you're just providing little fragments in the unit test, as you saw in that Juxtapen example, then the tests are easier to write, the input is in with the test so it's easier to manage, and if you change the schema, then perhaps only some of your tests need to change rather than all of your source test documents. Another aspect of testing is coverage — whether you're exercising all of your stylesheet in your unit tests. The only utility I know of is Jeni's XSpec with its coverage utility. I've described it as a thing of beauty, because it's useful to see whether or not you're actually exercising all of your stylesheet. In this screenshot, you can see that one template rule is going to match and there's no way that the second template rule can be used. The coverage report has highlighted that as unused and untested. Another aspect of testing things is metrics — metrics, checklists, call it what you will. Some of these are prescriptive: a static test, generally a stylesheet, which will go through and look at your XPaths, look at your usage of parameters, et cetera, and tell you what you're doing wrong — whether you're using too many slash-slashes, whether you have unused variables, et cetera. Some of these I've listed there: XSLQual; Ken has XSLStyle, and part of that is a checker. Another way of looking at metrics is metrics which will not so much tell you what you're doing wrong, but tell you exactly what's in your stylesheet — how many functions you have and their relative complexity. The possible problem with using metrics is that the metrics become an end in themselves, so that if it's a requirement that you satisfy the metrics, you end up bending your stylesheet so it doesn't produce any error messages from the metrics analysis, rather than writing a stylesheet which solves the problem in the best way possible. It may be that you want to use a lot of slash-slash because you have that sort of structure in the source, but your average metrics checker will tell you that's a no-no.
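Going back to the clean/dirty distinction for a moment, a pair of XSpec scenarios along those lines might look roughly like this; the stylesheet and its specified behaviour (reformat ISO dates, pass through anything unparseable) are hypothetical, not something from the talk.

    <x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
                   stylesheet="format-date.xsl">
      <!-- Clean test: well-formed input, the happy path -->
      <x:scenario label="a valid ISO date is reformatted">
        <x:context>
          <date>2012-04-17</date>
        </x:context>
        <x:expect label="day, month name, year">
          <date>17 April 2012</date>
        </x:expect>
      </x:scenario>
      <!-- Dirty test: input the spec says should fall back to a plain copy -->
      <x:scenario label="an unparseable date is passed through unchanged">
        <x:context>
          <date>sometime in spring</date>
        </x:context>
        <x:expect label="the original text is preserved">
          <date>sometime in spring</date>
        </x:expect>
      </x:scenario>
    </x:description>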
Problem scenarios: things to think about when you're setting up to do your XSLT testing. One thing I describe here is 'the jigsaw doesn't fit'. You have your source document, you have the specification of what you want to do. It tends to be broken up into, say, this element maps to this, this combination of elements maps to that; you break apart the source and then you write the stylesheet, very nice and modular, and then you try and put the whole thing back together. The pieces you made, which all fit the individual parts of your specification, don't fit together — they don't make a valid document, they don't make a correct document somehow. Unit tests aren't the answer for that; you also need validation of the entire result. I've been caught this way: I've had a nice specification, I've produced tests for everything according to the specification, and then the result wasn't valid, so there wasn't a whole lot I could do except validate and then modify both the specification and the tests. One of my hobby horses is xsl:message. The XSLT recommendation doesn't exactly say where it ends up, but an xsl:message in your stylesheet is written to a console or a log or something. It's very hard for a pure XSLT testing mechanism to find out whether the xsl:message actually went — whether something was actually emitted, or whether your program logic just completely bypassed the condition that was supposed to produce a message. You just don't know. Then, if you're testing XSLT using XSLT, if you have an xsl:message that says terminate="yes", then not only is your stylesheet going to stop, but your entire testing mechanism is going to stop. So I'm an advocate of using something more than XSLT to test your XSLT with. Some unit test frameworks also have trouble when your stylesheet matches on the document element, because they have to match on the document element themselves to start their framework. Some, such as XSpec again, will start the testing transformation with a named template, so that it's quite safe to match on slash in your stylesheet. XSLTUnit, which is from Eric van der Vlist and is like the grandfather of all of these testing frameworks, has that particular problem. Again, you have your specification for what the transformation should do. It may be that the specification itself is wrong. In this example, the name of the attribute is actually fmt, but the specification somehow wasn't paying attention — they used the English word. If you code to that, and you provide fragments of input which use a 'format' attribute, then you find your stylesheet works according to those tests, but the end result isn't what you want. Really, your last best defence is your own two eyes. I was reading Michael Kay's blog early in the week, and I saw a blog entry from December where he talked about how in Saxon there was actually a schema problem rather than an XSLT problem — a conversion of a particular numeric type that he said had been in the Saxon Java code for five years. He found it by accident, because he was reading through the code. It had been there for five years, the users hadn't complained about it, there were 40,000 tests in the W3C test suite, and none of those 40,000 tests had exercised that particular conversion. How effective is wetware testing, to coin a phrase? This is the same table, except this time I've highlighted the parts that actually involve people.
You can see that the worst sort of people testing, at its worst, is still better than the worst unit testing. At its best, it's much better than the best you can do with unit testing. I'm not saying that you should go home and change all of your processes to introduce formal reviews, but be aware that unit testing, automated testing, isn't going to catch all of the bugs. It could be that you've left inappropriate comments in the code, which, while technically not a problem, still counts as a bug, especially if you're making the code public. I shouldn't concentrate too much on Michael, but he said something this morning about comments in the XML recommendation which have since been deleted. There are other things, like you could put the wrong thing in strip-space and preserve-space — you could swap them by accident. It's not the sort of thing that any automated mechanism is going to find for you. The actual details about many of these XSLT testing frameworks are in your proceedings, and I'm keeping an up-to-date list which is available from my website. I don't want to keep you from lunch, so I'll finish. Thank you.
Creating a working stylesheet may seem like an end in itself, but once it’s written you may want it to run faster or you may not be sure that the output is correct (And if you are sure, how sure are you?). Profilers, unit test frameworks, and other tools of conventional programming are similarly available for XSLT but are not widely used. This presentation surveys the available tools for ensuring the quality of your XSLT. There is no one-size-fits-all solution when looking for tools. For example, if you are using Saxon and Ant, then you are looking for a different set of tools than if you are using libXSLT and Makefiles. This presentation covers XSLT-specific tools and techniques. It does not at this point propose to cover general XML tools and techniques such as schema validation or Schematron: they are of course useful, and could be added, but XSLT tools provide plenty to cover in a single timeslot.
10.5446/30574 (DOI)
Good morning. All right. So I am here to talk about the implementation of TaxPub, which is an extension to the NLM DTD, JATS, that I and a team of other people developed a few years ago. I'd like to point out, which is probably obvious, that this presentation and the paper are the product of several minds distributed across the world, coming from different domains as well: Guido Sautter is a computer scientist, and Pavel, Teodor and Lyubo work for Pensoft, a scientific publisher. And this distribution of talent and work will be evident in the rather rocky nature of the presentation, which I apologize for in advance. But here we go. So yeah, we're going to focus on what we've done — or really what Pensoft and Lyubo have done — with the TaxPub extension over the past several years. The focus will be on what we're calling semantic tagging and enhancements, in other words bringing out the items of interest inside the scientific literature, in this case taxonomic literature, and then, once having done that, the dissemination of this data from the literature to various external aggregators. And then we'll talk briefly about what we want to do with TaxPub in the future. OK, so to start off, I'll give some background on Plazi. Plazi is kind of a weird organization. Donat Agosti and I — Donat is a scientist, a myrmecologist, he studies ants, he's a taxonomist — we met on an NSF-funded project a few years ago. The project then ended, and we didn't stop working; we found the work to be very interesting. So we founded an organization in Switzerland. We have members in Switzerland, Germany, the US — in fact, another Plazi member, Bob Morris, is here in the audience — and Iran, where Donat happens to live right now. We are, I like this term, a research-based think tank. That's the term that Donat came up with, and I kind of accept that. You could think of it as an indie scientific label, almost. We have a very DIY approach to doing our work. And we definitely have the mission to promote open access — I mean, that is the major goal. We do this in four different areas. We have a lawyer on board who advocates, and argues, in the way lawyers can, about open access. I and a few others work on technical solutions, such as TaxPub. We maintain what we call a treatment repository, which is simply an electronic repository where treatments, which I will describe later, are deposited so that they can be accessed directly and cited using stable URLs. And of course we perform advocacy for open access. Recently, we founded a GmbH in Switzerland, which, from my understanding, is something like a limited liability company. So it's a commercial entity, a small to medium-sized enterprise owned by Plazi, the organization — the Verein, our association — to perform actual commercial services, such as document conversion and consultation. We receive funding from public donors and from private donors, and recently we've been involved in European Union grants that have just been funded. And yes, we have clients all over the world. All right, so the context for this work is three- or four-fold. First is the global biodiversity crisis: we are in the midst of one of the greatest die-offs in history, and it certainly is a crisis. It's been well known for about 20 years, at least. But there are very few systems and tools to measure and document what is going on.
There's a scientific aspect to this: we estimate that there are about 1.8 million species that have been described, that we know about, and it's estimated there are something like 8 million more that we don't know about, that haven't been described, and that may in fact be dying off, or becoming extinct, as we speak. The identification of these species takes place within scientific publications. The naming of new species is governed by codes — I'm sure many of you know this already. There's the ICZN, the International Code of Zoological Nomenclature, and the ICBN, the International Code of Botanical Nomenclature. These codes require publication of descriptions of species in — actually, up until very recently — paper publications. Just very recently, I think, the ICBN, the botanical nomenclature code, has allowed publishing in electronic documents. Unfortunately, they specifically said PDF, and it is unfortunate that they got that specific. But it is a major leap forward to actually accept electronic publication. And there's obviously a lot of activity in this area. There are 17,000 new species described every year, and about 100,000 redescriptions of existing species. So there's a tremendous amount of content locked up in the literature, coming out at an increasing rate. However, the challenge here is that it's fragmented over about 2,500 taxonomic journals and books, so it's difficult to access. So open access is really the key — open access in both senses of the term. You need to have open, free access to this literature, and the content itself needs to be open in the sense of being accessible directly, so you can take full advantage of the rich content that is in there. And so yes, the solution, we hope, is open access in both senses: semantically enhanced publications. One of the goals that Donat is making on this slide is that we really want to, through electronic publication, accelerate the publication process, so that descriptions don't take that long to get out, so that the data from the descriptions doesn't take that long to get out, and so that new names of species are registered quickly in these developing registries. And we hope that TaxPub has helped in that regard. You've already seen that slide — it slipped in there twice, I apologize. OK, so, background on TaxPub. TaxPub is a very small extension to the Blue publishing DTD. As Jeff mentioned, I gave a presentation on this and there's a paper available describing it online — there's the URL. The focus of TaxPub is on treatments. These treatments that I've been talking about — think of them as species descriptions. They are very formal descriptions of a newly named organism or a renamed organism. The treatments are, again, highly formal. They're easily identified, and they often are separated off from the publication they're in, so they're kind of independent entities, and they have a very important part in the nomenclature process, because you have to point to previous treatments if you're renaming species — I hope I'm getting this right — but there's an interlocking set of references between treatments in the naming process. So they're very significant document objects. Another legal point that we've tried to go out and make is that treatments, being statements of fact and not creative works, at least in the Swiss intellectual property regime, do not qualify for copyright.
So we're trying to advance the idea that publishers should acknowledge that treatments should be allowed to be free of copyright. The rest of the article, fine; but the treatment itself is of such a nature that it's not copyrightable. And I am not going to argue this point — I'm not a lawyer, nor do I want to be in any life — but that is a contention of ours. And believe it or not, we've had traction from publishers on this. They accept the idea of letting treatments float on their own and be cited and available on their own. So the treatments contain basically two parts. One is the nomenclature section, which is a heading — think of it as a heading that says what the name of the species is, its status, who the authors of the taxon are, things like that. And then there are various treatment sections. We don't even bother to name what they are; we punt out to the type attribute to do that. And then there are some domain-specific elements we added for inline content: taxon names — we created an element for that; material citation, which is basically references to specimens, but could be references to any sort of material that is used to describe a species; and then the weakest component of TaxPub currently, this element descriptive statement, which is an attempt to somehow mark up the phrases that are used to describe the anatomy or morphological features of a taxon. There we go. OK, so here's a sample marked-up treatment. Very simple. You have your outer treatment element. The only required element in there is a nomenclature element. The nomenclature element contains a name, and the name contains the name parts: the genus name, and then the name of the species — this is a parasitoid wasp, I believe. Then there's an identifier — this is a ZooBank identifier for this species — and there's another identifier, which comes from a different system. There's the taxon authority, in other words the people who were responsible for the naming of the taxon, and the status — species nova, new species, essentially, is what that means. So you can see what TaxPub tries to do here: put in some of our own elements to bring out the important taxonomic features, but also use, whenever necessary, generic NLM or JATS elements like object-id to do the rest. And this is just an example of a section of a treatment. This is the section that discusses the materials and the specimens that were used to name the species. We have an element called material citation that bundles together, or delineates, one particular specimen. In this case it's marked up fairly granularly to point out different facets of that citation: that this is a holotype, that its location is the King Saud Museum in Saudi Arabia. And then there's a collecting event, which is where this specimen was actually collected, and it was collected in Saudi Arabia, in this particular province, et cetera. And then, again, we use a generic NLM element, named-content: rather than creating elements for things like latitude, longitude, altitude and all that, we advise our users to just use existing vocabularies to provide the semantics, and use the named-content element to do the actual structural delineation. So here, there's a perfectly adequate term in the Darwin Core vocabulary for the coordinates that can serve to supply the semantics for the latitude and longitude where this specimen was collected.
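Putting those pieces together, a treatment marked up along the lines just described might look roughly like the fragment below. The tp: prefix is the TaxPub namespace prefix mentioned later in the talk; the element names are paraphrased from the description rather than copied from the schema, and the taxon, identifier and locality values are invented, so treat this as a sketch of the shape rather than a canonical TaxPub sample.

    <tp:taxon-treatment>
      <!-- The only required part: the nomenclature heading for the name -->
      <tp:nomenclature>
        <tp:taxon-name>
          <tp:taxon-name-part taxon-name-part-type="genus">Exampleus</tp:taxon-name-part>
          <tp:taxon-name-part taxon-name-part-type="species">fictus</tp:taxon-name-part>
        </tp:taxon-name>
        <!-- Generic JATS object-id reused for a registry identifier (value invented) -->
        <object-id content-type="zoobank">urn:lsid:zoobank.org:act:XXXXXXXX</object-id>
        <tp:taxon-authority>A. Author</tp:taxon-authority>
        <tp:taxon-status>sp. n.</tp:taxon-status>
      </tp:nomenclature>
      <!-- Untyped treatment sections, distinguished only by a type attribute -->
      <tp:treatment-sec sec-type="materials-examined">
        <tp:material-citation>
          <named-content content-type="dwc:typeStatus">Holotype</named-content>,
          <named-content content-type="dwc:verbatimCoordinates">18.1N 42.5E</named-content>
        </tp:material-citation>
      </tp:treatment-sec>
    </tp:taxon-treatment>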
So here, there's a perfectly adequate term in the Dub Darwin core vocabulary of the coordinates that can serve to supply semantics for the latitude and longitude of this specimen is collected. So, tax pub, I don't really have much else to say about what's changed since 2010. Don't take that as an example of our laziness, but frankly, as a success, that it's been fairly stable. It's worked. And we haven't received much demand from our primary user, PENSOFT, to change it much. One of the things, though, that we did have to do was introduce the notorious x element. I'm not sure all of you are familiar with x. x is an element that gives you a lot of wiggle room. It's there to tag anything, all this interstitial stuff, that doesn't really have, I like to think, shouldn't really be there, but is. And the publishers don't want to take out. And so I conceded this, that they want to put a semicolon between to the tax on status and the tax on authority. So I added x to the content model. We're all happy about it. It's fine. However, this signals a trend that I also had wanted to avoid. Tax pub is really designed with an eye towards new literature. Our hope was that, this is kind of a strategic goal, to shift the work on markup of taxonomic literature from retrospective markup to prospective markup. Because we have those 8 million other species that have not yet to be described sitting out there. And we have 1.8 million that have been. So we'd like to focus on the 8 million that are going to be described. And also hopefully describe them in a way that's structured and granular, that will open up the data to reuse. However, there's a persistent urge to markup the legacy literature and a surprising amount of money behind that urge to do it. So this enables me to use this term, reunification, which I just had to use because it's when do you ever get a chance. So one of the things that we're probably going to do a tax bump, if I shouldn't say probably, is a goal is to have and make an extension not just to the blue TTB but to the green TTB so that we can address the legacy literature as well. There is interest in, for instance, marking up existing floras. And we definitely need to loosen it up more in order to address those types of publications. And we have, there's been interest from other journals to adopt TaxPub, the European Journal Taxonomy. Just last week, we signed a very modest contract, POSY did with them, to consult on seeing how to get their workflow to be based around TaxPub. We'll see how that goes. I'm optimistic. And then the journals you tax, which is one of the major taxonomic journals in the world. The Encyclopedia of Life, which is an organization attempting to create basically a web page for every species on Earth, I guess anywhere. I don't think they're going to limit themselves to the Earth if we find anything else. They're offering us a contract to help Zutaxa also adopt a TaxPub into their workflow. So that they can, and their desires so that they can get the data that's currently locked up in PDFs, which is what Zutaxa produces, to pull out a lot of that data and put it into their species pages in Encyclopedia of Life. And then another major area is development. The enhancement of this description of the morphological features of the organisms. The existing markup we have is really just a first attempt that isn't that successful. No one uses it, I should say. And so there was a proposal to address that. Briefly show this. 
The first thing I want to point out, for those of you who aren't familiar with this type of descriptive data or descriptive language: this is nasty stuff to mark up, because it's a combination of very formal language and natural language that's tough to mark up in such a way that it can get into databases or even be represented, say, in RDF. So you get phrases like 'extremely rarely yellow', 'often shallowly joined around the node' — it's kind of fuzzy stuff, hard to mark up. 'Glabrous or weakly hirsute.' So you have this wonderful combination of terms like glabrous and hirsute, which mean something very specific, with words like 'weakly'. So this proposal is to introduce a few elements. One basically says: here's what I believe you can call a categorical character — something that something is; the stem color is what's being referred to here. And luckily there is a plant ontology that we can point to to say what category of feature we're talking about. And then it has a state: so the stem color is greenish, and believe it or not, there is an ontology term for 'greenish' out there somewhere. So for the categorical things you have that, and then you can also address the quantitative features, like measurements — here you have the stipule width, which is 3.10 millimeters. So we'll be testing this, and hopefully this might serve as a lightweight but effective way to address this type of language. OK, and to wrap up on TaxPub, there have been some challenges. We maintain the code on SourceForge, which is cumbersome for a variety of reasons. I'd like to move it to something better — I'm thinking maybe GitHub, I'm not sure yet. But that's the technical maintenance, the infrastructure. The other maintenance, the sort of governance and administrative maintenance, is also very challenging. I mean, Plazi — this is all volunteer effort; we all have other jobs. I'm a librarian; Donat is a diplomatic spouse living in Tehran, and he's got a lot of things to take care of. And we do actually have some funding now, but we're not awash in funding. There's nothing really paying for TaxPub development, but it is supported under Plazi, and to the extent that Plazi is getting these grants and funding, we can hopefully direct some of the funds that way. And then — I mentioned this in my talk two years ago, and it still remains, I think, the greatest challenge — when I first created TaxPub, I naively focused on the schema, or the extension. And that's the easy part, especially because JATS was made in such a way, designed very well, to enable extension. It's very doable. But that is the easy part, and that's not the sum total of what an extension or a DTD or a standard or whatever you want to call it is. The documentation is really critical to explaining what stuff is, what you're doing, how to use it. And the documentation infrastructure is not that great. Right now, we use comments in the extension with some ad hoc markup that is used by, I believe, a Perl script that NCBI has, to generate the online HTML documentation. And it's a cumbersome process, as I mentioned, to put it there: I've got to send it to Kim, and she runs the conversion, and I get these files back, and I've got to post them somewhere. I'm looking for a better way to do that.
We also maintain the documentation on a wiki called Species-ID, which is another Plazi project — which actually, I've recently found out, has been moved. Anyway, I'll pick that up in a second. So here is the documentation, just like the ordinary JATS documentation, except we have our elements mixed in. And I should point out also, as you've probably been noticing, that there's a namespace prefix for the TaxPub elements. We decided to keep the extension in its own namespace, just to distinguish it better and prevent name clashes with the general Blue DTD. So we have this nice documentation that periodically gets generated, when I get around to it. And the documentation is also on this wiki maintained by GBIF, the Global Biodiversity Information Facility, which, I just found out, is maintaining a collection of vocabularies in the domain. So I'm very happy to see that it's on this GBIF wiki. And the value here is that people can come in and add examples and work more collaboratively on the documentation. So I'm going to shift gears to the other side of this presentation, and this is the side I have less authority to speak about, so bear with me. The primary author, Lyubomir Penev, is the owner — or the president, I imagine — of Pensoft, a scientific publisher, and ZooKeys is their major journal publishing zoological taxonomic literature. As you can see, Pensoft was founded over 20 years ago; ZooKeys, the journal, was launched four years ago. It's a completely open access taxonomy journal. It's been very productive and successful in the last four years. They're very forward-looking and want to take advantage of all the new technologies they can. So they register any new names in ZooBank, which is a global registry of zoological names, and then supply data to the Encyclopedia of Life, the Plazi treatment repository, and Species-ID, which is a wiki for species descriptions. I'm a librarian, and I know most of these things, but not all of them. They're CrossRef members and they issue DOIs. ISI, I guess, is calculating their impact factor and doing their indexing. They're in Scopus, blah, blah, blah — and I won't blah-blah-blah over this one: PubMed Central. This is one of, I think, our proudest achievements: ZooKeys actually became, I believe, the first taxonomic journal to enter PubMed Central. And that's, again, an achievement we're very proud of. And then there's the Pensoft journal system, which is an XML-based workflow that drives all this stuff. And it's very productive, as you'll see by this chart. There's been a nice upward trend, with a mysterious dip in 2010, but the pages and the articles are coming out, and people are publishing with ZooKeys, and they've been very active in these four years. OK, I'll do my best with this, but this is a nice chart laying out the landscape here. Basically, with the taxonomic literature, as I mentioned earlier, you have your prospective publishing and your historical literature. For the historical literature, there are other schemas: TaxonX, which is another schema that I'm a co-author of, and taXMLit. And Plazi has an editor that helps to mark up the stuff. There are also these systems, Scratchpads and things like that, where that data goes. And then there's prospective publishing that goes through Pensoft's markup tool — blah, blah, blah — peer review, automated submission. Things are very automated.
But I think the major emphasis here is that the data, when it's all brought together in this format, is going all sorts of places, playing lots of roles in different areas. And there is feedback to these other systems as well, at each stage — submission, peer review, publication, dissemination — each with its own challenges, as you can see. All right, so here's a nice slide. Basically, here you have a treatment. This is what it looks like in print, sections of it at least. So you have your nomenclature section, and that data goes to ZooBank, which is the registry of zoological nomenclature. The references go to the Biodiversity Heritage Library. Here's that description, the morphological description area, with these wonderful terms like, you know, brownish yellow, fuscous along the lateral margin, and all that stuff. That data goes to EOL, into their species pages, and into Species-ID, which is a wiki that does essentially the same thing, and the entire treatment goes to our Plazi treatment repository. Images go to EOL alongside the descriptions and into the species pages. And images also go to MorphBank, which is a database of images of specimens plus metadata — a very large and important resource in this area. And then the occurrence data, which is where the material was collected, goes into GBIF, the Global Biodiversity Information Facility. And when it's done right, all that lat-long stuff becomes mappable on Google Earth or other resources. So here, again, the goal is to be open in both senses: open access in the legal sense, but also open in the sense that the data is open and able to be manipulated and reused widely. OK, so this is a run through what Pensoft has done in terms of displaying these articles. So there are some semantic enhancements, as we were calling them. Right up here, I'll point out, there are these tabs that allow you to highlight in the article the treatments, the taxon names, citations, localities, et cetera. So if you click on localities, you get a list of the taxa described in the article, and they're plotted out for you. So this is where this parasitoid wasp has been collected. What's going on with the key here? There we go. So you have a key to species, which I haven't really talked about. A key is a diagnostic tool: if you have an animal or something in front of you, you follow these sets of instructions to figure out what it is. So here it mentions a figure, and the figure, of course, is linked to — and that, I suppose, has probably gone to MorphBank also. Then you have a citation, Sharkey 2007, and you get the citation, and this happens to be in Zootaxa. If you follow this footnote, it might link to it — Zootaxa doesn't issue DOIs, so I'm not sure. Here you have a mention of a taxon — in fact, this is a very large, high-ranking taxon, Hymenoptera. Click on that, and you can get different explanations of what Hymenoptera is. In this case, Pensoft has a thing called a taxon profile, which is a page that aggregates information about organisms. This is Pensoft's profile of Hymenoptera. Hymenoptera are ants and wasps and bees, and you can see they're very widespread — probably in this room; surely there's an ant somewhere nearby. And then here's another one, Solanum — a plant, I believe. And here you can follow references to Solanum in the Biodiversity Heritage Library.
They're all over the place. And so you choose a particular Solanum species, and there you go — you go to a publication about it in the BHL. And so that's basically what you get when you go to Pensoft and you read one of their articles. Another thing I should point out — it was back there, and it was mentioned yesterday, though the slide isn't showing it — there is a link right at the side where you can get the XML. Pensoft posts the XML alongside the PDF. It's very easy to get to. So it's not something that, as a publisher, they're hiding; it's right out there, so anyone can grab the XML and do what they want with it. So, as I mentioned, ZooKeys has been archived in PubMed Central, very happily. And the description sections go to EOL — that gets extracted out of these materials examined sections and goes to EOL. Whoops. Whoa. Hold on. Wow. Sorry. Now I have three minutes. Four. Four minutes. Almost there. Is there some sort of electronic hook that pulls the presentation off? OK. And there's a species page — that's a particularly ugly bug, I think. Really long antennae. And then the individual treatments, the isolated treatments, go to our Plazi treatment repository. We have something like over 10,000 treatments of different organisms now that are referenceable directly, and you can get XML for them. And then, oh yeah, this is an interesting paper. In this paper, the authors published simultaneously to the journal and to a wiki. So here's the actual article, as you'd see it in ZooKeys, but there's a link to the entry in the wiki. And there you go — all the data, as it's mentioned here, is derived from the article. So this simultaneous publication allows collaborative work on the species as well as having an official peer-reviewed article. And it isn't just the key — the species has been added to the wiki. Data is also provided to Wikispecies and Wikimedia Commons. There's a Wikispecies page. With the Commons, all these kind of scary little pictures of bugs — that's another gross one, I don't know what that is. And note here the abstract and PDF are linked to, so you can get to the article for this from the Wikispecies page. OK, so I'm going to speed this up. Yes, we want more semantic web enhancements. So Pensoft is working on a tool to help authors collaboratively write these taxonomic articles, which will involve community-based peer review. And Pensoft is very much interested in data journals — publications that include data, in this case what's termed small data, of all sorts. A collaborative authoring tool, a web interface to help you structure and create the article. And of course there are commenters — you've got collaborators and commenters involved in the publication process. So here's where you would add your treatment information at the high level, then you've got a subsection, and you fill in your content. And then this is my favorite part of the presentation. OK, so — I'm over time now, but I think I have five minutes. OK, so in this case we have a hymenopterist. He captures a bee, goes to his lab, he's curious about it, writes up his little treatment, and submits it. And this is the peer review process — a wonderful depiction. Then, lo and behold, the article is published in a PDF format, great. But there's this underlying data that we all want. And this is my favorite part of the presentation, because I don't know what this means.
My theory is that that is a cross over a grave, and that the paper is dead, or the book is dead. And then, as I said to Wendell earlier, we want to roll away the stone to reveal the data that's underneath the publication, and then share the primary data to enable reuse of content. So there you go. Yeah, and then here are some quick facts about the data journal: no upper limit on manuscript size — that's very important. Previously, the amount of data that you could publish in a taxonomic article was very limited, and it was very frustrating to authors. Collaboration, community — all that's important. Standards compliance — JATS, Darwin Core, Dublin Core, et cetera — and code compliance: this will be done in such a way that it will be compliant with the various nomenclatural codes. Lots of stuff goes into the manuscript, it all gets structured, and, because of the structuring of the data, it goes places and comes back. Lessons learned. Yeah — the specificity of the domain is a challenge. Markup of occurrence data is a very big challenge, all that lat-long collecting information. There's other information too, but locality is very difficult to identify and mark up if you're not getting the original data. Lowering the cost and increasing the efficiency of the markup process — this point cannot be overstated. It's hard to get authors to provide data and articles in the format you want, so you have to either give them incentives or tools, but also probably recognize that you're going to be spending time in the editorial process to do this. And this last point can't be overstated: the small taxonomic publishers, the 2,500 journals, are very, very small scale, on shoestring budgets, and very firm in their habits. So they're not changing. It may not be like turning a supertanker, but sometimes it's hard to even turn a dinghy if all you know is one way to row it. OK, I think that's it. Yes — final point: not easy, but exciting, and possible through open access. And then, yes, Donat's claim is that this is the only way — Donat's, I think, very strong and accurate claim that this is really the only way to go for the missing species that we don't know about on Earth. All right. There you go. Thanks. OK, we have time for one or two questions. Just a comment: I'd like to congratulate you and Lyubo and everybody else involved. I think this is exactly a brilliant example of what a journal should be about. And you've done it in a particular area; everybody else now needs to do it in their particular areas. It's fabulous work. I couldn't agree more. Thank you. All right. All right. Good, well, thank you very much.
TaxPub was created as an XML extension to the general JATS to provide domain-specific markup for prospective publishing in the area of biological systematics. The core idea of the schema is to delimit descriptions of taxa, or treatments, within an article, and to use these individual portions of information for various purposes. TaxPub was developed in a close cooperation between the author (Terence Catapano), a community interested in such markup (Plazi), the NLM JATS group and a journal publisher (Pensoft). Since July 2009, TaxPub has been routinely implemented in the everyday publishing practice of Pensoft, to provide: (1) Semantically enhanced, domain-specific XML versions of articles for archiving in PubMedCentral (PMC); (2) Visualization of taxon treatments on PMC; (3) Export of taxon treatments to various aggregators, such as Encyclopedia of Life, Plazi Treatment Repository, and the Wiki Species-ID.net.
10.5446/30576 (DOI)
Good afternoon, and thanks for having us here. Is the sound all the way good back there? Yeah, OK, good. Well, what I want to do in the first few minutes is bore you with the story of how this all began. I'm a physician, and as a physician, I tend to look at medical things. And there was a day, in the very first days of the internet, that I thought I wanted to educate my students. And so this is part of the story of how we came about. But it started even five years earlier than that. In 1990, some of my family members and I got together. This was the heyday of CompuServe — remember those startling speeds of seven kilobits a second? It took us about, I would say, half an hour to download one image, in that case of a car that we tried to sell. So we had a perfect internet company, a commerce company, going in 1991. In fact, we never sold entire cars; we always sold parts. It somehow turned out to be like that. Because remember, at that time, you didn't all have a computer on your desktop. So when we went to car dealers, they didn't have desktops. We had to sell them the computer, give them a lecture on how to use the computer, how to turn it on and off, and how to go into the system and use CompuServe. There was no internet, unless you worked at a bigger academic institution. So really, we closed the company in 1994. We ran out of money, like startups do — talk about timing. And in '95, the internet came along. And even in '94, with a fully staffed office, we were making money and all that, but not enough. We could have sold that company to venture capital in Silicon Valley, probably for hundreds of millions, and I would not be here — I would be talking to some of these remote things from a beach somewhere in the Caribbean. But anyway, I'm a physician, so I was busy doing my medical stuff, and that's one of the reasons why we were so naive about the real world. And so came the day that I thought, OK, we're burying this car-selling, or car-part-selling, company, but I could use those images to teach my students, my residents. So I set up a page on the server of Baylor College of Medicine with one article that I put up. And I said, by the end of the week, I want you all to learn this, to look at that, and we'll make a journal club. The week ended, and nobody looked at it. So I was kind of frustrated. These were really the first days of the internet, with the first browser, the Mosaic browser, out there. But then the guy who ran the server at Baylor came in completely excited and said, you know, Baylor College of Medicine got like 20 hits last week, and you got like 200 with that one article. So I said, OK, there's a model there. So let's go away from resident education and put it out there to the world. And we basically at that time launched our first journal, and it was called the Internet Journal of Anesthesiology, because I'm an anesthesiologist. And so from there on, we grew. We added another journal, we formed a company, and then, you know, kept adding stuff, but we never required anyone to jump through hoops to register. We were open access from day one, and we still are like that today. There's no registration, there's no subscription fee, things like that. Interestingly, some people would send me a letter from wherever they were, and at night I would sit down and retype that article so we could put it online. It was kind of interesting. And then I asked for articles to be submitted as Word documents, and learned how to use the Mosaic browser — you remember all that.
You're all the geeks here, right? Then I used the Netscape browser tool — remember, it looked like a target in there, things like that. And then Microsoft FrontPage came along, and man, that looked really cool compared to what I was doing before. But I was hooking up with some people at the University of Tabasca who knew a little bit more about these kinds of things. And this is when I met this, at the time, 16-year-old little nerd. So somehow we hooked up when he was still a little smaller, a little less tall. And he was already thinking in SGML, XML — I didn't even know what that was. So ever since, we've worked together, and he's going to tell you all the XML stuff that I have no clue about. I just tell him, please do it, and it's done. I mean, that's how it works with these things. So currently, we've expanded to 82 journals over 10 years, or something like that, or a little more. And now we are a 15-, 16-year-old publishing house, all medical journals, 82 titles. And about three years ago we created our own article submission system, which is separate — it's a separate website. And we are just about to launch — we're beta testing right now — the next version of this article submission system. And we're going to touch on things that you would stay away from like poison when I say author-generated stuff. We're going to let our authors do the stuff, do the work, and we're going to see what crap we're going to get, and how we can handle it. So the next JATS version for us is probably going to be called CrapS, or crap XML, I don't know yet. But we really decided that I think it's going to be very cost-efficient for us if we can automate even all the mistakes that authors make, to a certain extent, and fix that in an automated fashion, and Andy will tell you how we're going to do this. So with that, really, I want to give it over to him. He deserves all the kudos for what we're doing, because he's really the brain behind it. With that, over to Andy. Hello. OK. So, our JATS editor — what is it? I guess it represents a move for us from doing all the markup in-house using Word macros to having a web app where authors can do all the markup themselves using a what-you-see-is-what-you-get editor. I guess a little bit of backstory. We based it on — well, we know PHP in-house. We built it in PHP and JavaScript, and in particular the framework is Symfony 2, which seems to be the most popular one. So there are a lot of developers out there; if you choose to use it, you could probably very easily extend it. It's not all proprietary and archaic. I guess the main thing is it's easy to use. It's all form-based, and for the majority of the JATS it's just input fields that you copy and paste your document into. And it's a very linear workflow — beginning to end, just a bunch of menu items, and you copy and paste. OK, our old workflow. We had three different steps. We have the header — like the front matter now with JATS — where the title and author information and all the metadata go. Then we have the body markup, where we use Microsoft Word and some Word macros we built 10 years ago to apply tags to Word styles. And the last step is just converting that XML to JATS. So unfortunately — well, we're going into this new workflow now, but we still do it this way, using macros from 2001. So that's kind of how it looks when you have your document. And I was going to show you how tedious this is — it's a minute and a half — but it seems to have wrecked my video.
Does it work? I know it works — you showed me outside how it works. And the tedious thing is we have several part-time people that are involved in this. So for us, we want to be very fast. That was always one of my goals. I mean, when a traditional publisher still took about six months or something like that to get something online, I always said, I want to be done in one to two months. So at this point, when we get an article — and you tell me when it starts playing here — when we get an article, what I do, once it gets accepted and paid for — an author fee, which is one of our business models — do you still hear me all the way in the back when I talk like this? Yeah, OK. When we get an article — I'll just use the microphone so you can get the video. OK, so when we get an article, the way that works is, once it's paid for and accepted and all that, I'll take it and, as a physician, I just make sure the medical content and the tables are kind of OK. Sometimes I find pictures that look like they don't belong in that article, and I find that it's plagiarized, stolen from somewhere else, things like that. So I try to weed out a lot of the bad stuff, and then we send it off. Somebody puts it in — you ready? It's not going to play. OK, it's not going to play. OK, so you tell them how we do this, then. OK, basically we cut and paste the title, we cut and paste the abstract, and cut and paste the article type. For all the authors, we have to, one by one, pick out their surname, their given names, their honorifics, their address, country — all that stuff just takes a long time. And it's OK if you have to do one or two at a time, but if you're doing it for hours, you start not seeing straight anymore, and then you start cutting and pasting the first name into the last name. And that's how errors are introduced. So I was going to show you how boring it is, but it's quite boring. It's time-consuming. Comparatively, I think we were always efficient doing it this way — we could do an article in half an hour or something like that — but being all part-timers, that's sometimes still too much time. And if we have part-timers just doing data entry, then there are mistakes, and there's no validation, and we won't find out about a problem until an author comes along six months down the road and says, oh, my name's been wrong for six months. So they don't like it. And it takes a long time to publish, sometimes three or four weeks, and then if there are corrections to be made, it takes, again, a couple of weeks. So we decided to switch now to this author-generated JATS markup, because I want to get rid of all those problems. We obviously can't support the whole spec, because it's huge and there's simply too much — and too much, I would say, contradiction. If we let the authors do whatever they want, we wouldn't be able to display it properly. We just won't. So we support a subset. We looked at our current article corpus, and we just looked at what kind of markup we're using right now. For example, in the article-title tag, what do we have in there right now? Do we need to have everything that JATS supports in the editor? And the answer is no. So we just looked at how much we can offload the markup to the author, but still get something back from them that's quality, without giving them too much choice. So we support NLM Blue 3.0.
I guess we have the metadata support, and then we have, I guess, the content-level stuff, with inline and block-level elements that the author can use to mark up their document how they see fit. And we reserve that for the abstract and the body and the appendices. And we limit each input to either inline or block — so a title would always be inline level. So we have a clearly defined set of tags that we allow in those fields. Yeah, so with inline, it's just whatever is easy enough for us to support, because the editor is based on the HTML DOM, because it's just too cost-prohibitive to have an XML editor on the front end for authors to use. And it's kind of confusing — I mean, I don't like using XML editors really either, because there's so much going on all the time. So all the standard presentation-layer inline tags we can support. Yeah, like I said, it's based on the HTML DOM, so we can't really do nested structures and block levels like section. But that's OK, because we can still use XSLT 2.0 groupings to infer that structure from whatever we have in that HTML DOM editor — you know what I mean, a Word-style editor, like CKEditor or TinyMCE, something like that. That's how it works. We can have boxed text, figures, graphics, preformatting, tables, pretty much anything. But we limit it, just to keep it simple for the author. This is just an example: a lot of the metadata in JATS can be collected by using just a simple text area with some markup options. So I mean, that's essentially how it would look. When we collect contributors, we're pretty flexible. You can have either single authors or collaborative groups, like some of the, I guess, government groups that collaborate on papers. Excuse me. But most of the JATS contrib-group is supported. And then for author notes and an author bio, inline and block-level formatting is supported. Keywords we just collect, and right now we are trying to enforce a constraint on them being valid MeSH entries. But it could be anything — if you want to apply constraints to keywords, you can just provide a list from whatever source you want, and it'll pull it in. As for other metadata in article-meta, we can set article IDs, author notes, supplemental content, any grants or funding information, article history, and permissions. We're thinking about maybe extending the article history section a little bit so that we can include some XML diff in there, so the editor itself can keep track of changes that are made in the article, because the editor takes NLM Blue XML in and spits it back out. So if that track record was all included in the XML, then it'd be good for accounting for who's changing articles, if you have no other means of storing that information. What we support in the abstract and body and appendices is kind of a moving target, just because there's so much that can be supported. We don't do anything with MathML right now, or equation support, just because maybe 1% of our articles, if that, come in with any equations. And if that happens, we'll just mark it up by hand, or it can be captured as an image, I mean, if it needs to be. Right now, what we support in those sections is pretty much 99% of what we need. So it's just a WYSIWYG HTML editor based on the HTML 4 DOM.
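For orientation, the sort of JATS article-meta that those metadata form fields end up populating looks roughly like this; the title, name, affiliation and keywords are made-up values, and journal-meta and the other required elements are omitted, so this is a sketch of the shape rather than a complete, valid instance.

    <article-meta>
      <title-group>
        <article-title>An example article title</article-title>
      </title-group>
      <contrib-group>
        <!-- One contrib per author, built from the per-author form fields -->
        <contrib contrib-type="author">
          <name>
            <surname>Example</surname>
            <given-names>Alice B.</given-names>
          </name>
          <aff>Department of Examples, Some University</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>Abstract text pasted in by the author.</p>
      </abstract>
      <kwd-group>
        <kwd>Anesthesiology</kwd>
        <kwd>Case report</kwd>
      </kwd-group>
    </article-meta>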
So how we convert it from the HTML to JATS is just using XSLT 2.0 grouping, specifically for the nested sections, so that h1 tags and h2 tags in the HTML end up being mapped properly to sections in JATS — so that you can have a discussion with subsections and have it end up being valid XML afterwards. Other than that, we use a lot of regular expressions all over the place to transform the data between HTML and JATS. But essentially, what the editor is, is it pulls the file in and runs some data transformers on the data, and then displays it, and anything that's changed gets data-transformed back to the XML. So there always has to be some sort of mapping possible, whether it's by using class attributes or ID attributes — there has to be some kind of mapping, and if there's not, then a different method has to be devised to collect it. Right now, images and figures are handled via out-of-band file uploads on a separate page, so that we can collect large images. Dragging and dropping, or using the editor itself to handle image uploads — we don't really want to support that, because we want to collect the highest-quality images that we can. So if there are 30- or 50-megabyte uncompressed TIFFs, we don't want that to affect the user experience by just hogging up all the browser time while they're editing their document. Right now, we'll accept tables as an image, but we're working on making the tool a little bit better so that tables can just be cut and pasted from Word into the web editor and have it captured properly, without any messy formatting, and still look good as well. Videos and other media types we don't implement yet. They're just generally too large and too rare — I mean, we get videos once a year or something like that. It's not a big deal to support at this point. For citation handling in the body of the text — which is the only place we've implemented this annotation tool; I mean, it could be added to the title as well — you simply highlight the endnote reference, and it'll resolve it from the back matter and ask you if you would like to link to this particular endnote that you already have in your back matter. So it's really quick, it's not difficult. If the author has a reference number 10 at the end of their sentence, they just highlight it, and it'll say, do you mean endnote 10 in the back matter? And if they do, it links it automatically with an xref. For the back matter, we support all the top-level back matter elements, like acknowledgments. With the notes, the content type can be set, so that if there are any arbitrary notes that you need to include, but you still want to retain the meaning of the note, you can fill that in. The back matter is one of the hardest parts of getting the author to submit or mark up their own XML, because it has to be tokenized, and we can't expect an author to sit there and cut and paste each piece of their 150 endnotes, like the author names, into some form. So what we do is we just have a giant text area where the author pastes their entire citation section, and we explode it by line, and we search each one for some kind of identifier at a metadata service — we look for PMC IDs and DOIs right now. If there's a match for the metadata, then we just disregard what the author gave us altogether and pull it from that metadata source, and now we know it's correct and there are no problems. If there's no identifier, then we use the same annotation tool.
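A minimal sketch of the grouping step described above — not Pensoft's actual stylesheet, and assuming the editor output is XHTML — could look something like this, using for-each-group with group-starting-with to recover nesting from the flat run of headings and paragraphs:

    <xsl:stylesheet version="2.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:xh="http://www.w3.org/1999/xhtml">

      <!-- Turn a flat run of h1/h2 headings and paragraphs into nested sec
           elements. Assumes the body starts with an h1. -->
      <xsl:template match="xh:body">
        <body>
          <xsl:for-each-group select="*" group-starting-with="xh:h1">
            <sec>
              <title><xsl:value-of select="."/></title>
              <xsl:for-each-group select="current-group() except ."
                                  group-starting-with="xh:h2">
                <xsl:choose>
                  <!-- A group headed by an h2 becomes a nested subsection -->
                  <xsl:when test="self::xh:h2">
                    <sec>
                      <title><xsl:value-of select="."/></title>
                      <xsl:apply-templates select="current-group() except ."/>
                    </sec>
                  </xsl:when>
                  <!-- Content before the first h2 stays directly in the section -->
                  <xsl:otherwise>
                    <xsl:apply-templates select="current-group()"/>
                  </xsl:otherwise>
                </xsl:choose>
              </xsl:for-each-group>
            </sec>
          </xsl:for-each-group>
        </body>
      </xsl:template>

      <!-- Paragraph-level content maps straight across -->
      <xsl:template match="xh:p">
        <p><xsl:apply-templates/></p>
      </xsl:template>

    </xsl:stylesheet>

A real pipeline would presumably do a good deal more (IDs, sec-type attributes, figures, tables), but this grouping pattern is the standard XSLT 2.0 way to infer section nesting from flat headings.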
And basically, all you have to do is highlight some text and it asks you to define what that text is. And then it gives us back a JSON object. And it works really cleanly and simply and doesn't take much time at all. And before they can submit, all the endnote problems have to be resolved if there are any. So it kind of looks like that, where this is kind of a simple example. But if you were to highlight the name here, you would get a window here where you can choose what type of data it is. So in this example here, I would have already pulled it from Crossref. But if that was not there, then it wouldn't take that long to tokenize your endnote string. And this is just with the mapping again; we can really support anything. You have to find a unique way to convert from the HTML editor back to XML. And that's just with data transformers and mapping. As long as it can be defined, it's not a problem to collect that information. All right. For validating, when things go wrong, obviously the first thing we do is we validate against the schema or DTD. And if there's a problem at that point, then it requires staff intervention to go in and find out why the XML isn't valid. Since we're collecting from the authors, there's probably a chance that they're going to maybe put entire paragraphs as heading tags or section titles, in which case we can't really check for that that well. So that's when the staff would come in and take a quick look before it gets passed off to peer review. And then, of course, copy editing. And then there should be a step five on there as well, where we can just go back and fix any problems after. For some sorts of problems, like the endnote problems, if we decide that it's too much work for the author to go and fix any endnote problems, we were thinking of Amazon Mechanical Turk, which is a platform for putting together human intelligence tasks that you can submit. And they'll have some human go and do the task for you. And then it costs five cents or whatever the person is charging to do this task. So if you have a badly formatted endnote, you would create one of these tasks, send it out, and a couple of people would do it. And if the result of everyone's work unit is the same, then you can probably consider that the work was done properly. And the good thing about it is it's 24-7. So people are working 24 hours a day on whatever tasks. So if you're given an article and you want to get published within a week or something like that, you don't have to wait for our staff to come and fix up endnotes or anything like that. It can all be put into the pipeline. This is the wrong slide. I know. OK. I guess you want to talk about that. Yeah, I mean, to summarize really what we're doing is we're touching the hot potato, right? We're allowing authors to start putting data into our system. And it's going to be, time will tell us. I mean, who has done that here in the room? It's a hot potato, right? You wouldn't even touch it. Probably most of the people. Because we did that early on with some other things. And really, I mean, authors, as we said before, there's a whole variety of what authors consider to be good. I mean, you go from this end to that end. But really for us, it's really how can we outsource things cost-effectively with the minimal amount of staff intervention to really capture stuff before it gets published?
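Two quick illustrations of the machinery described above may help. First, a minimal sketch of the XSLT 2.0 grouping idea mentioned a little earlier, assuming a simplified source where the editor's HTML body is a flat run of h1, h2 and p elements; the two-level for-each-group pass below infers nested JATS sec elements from the heading levels. It is a sketch only; a production transform would also map inline markup, handle content before the first heading, and cover more block types.

```xml
<!-- Sketch: flat XHTML headings grouped into nested JATS sections.
     Only h1/h2/p are handled; everything else is simplified for brevity. -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xh="http://www.w3.org/1999/xhtml">

  <xsl:template match="xh:body">
    <body>
      <xsl:for-each-group select="*" group-starting-with="xh:h1">
        <sec>
          <title><xsl:value-of select="."/></title>
          <xsl:for-each-group select="current-group()[position() gt 1]"
                              group-starting-with="xh:h2">
            <xsl:choose>
              <!-- an h2 starts a nested subsection -->
              <xsl:when test="self::xh:h2">
                <sec>
                  <title><xsl:value-of select="."/></title>
                  <xsl:for-each select="current-group()[position() gt 1]">
                    <p><xsl:value-of select="."/></p>
                  </xsl:for-each>
                </sec>
              </xsl:when>
              <!-- paragraphs before the first h2 stay directly in the section -->
              <xsl:otherwise>
                <xsl:for-each select="current-group()">
                  <p><xsl:value-of select="."/></p>
                </xsl:for-each>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:for-each-group>
        </sec>
      </xsl:for-each-group>
    </body>
  </xsl:template>
</xsl:stylesheet>
```

Second, for the endnote handling just described, the end result in the XML might look roughly like the fragment below, whether the pieces came from a DOI or PMCID lookup or from the author highlighting and labelling parts of the pasted citation string. The citation and its id are invented; only the general xref-plus-ref pattern is standard JATS.

```xml
<!-- Hypothetical endnote 10: the in-text number is linked with an xref,
     and the tokenized citation sits in the back matter. Values are invented. -->
<p>... as previously reported.<xref ref-type="bibr" rid="B10">10</xref></p>

<ref-list>
  <ref id="B10">
    <element-citation publication-type="journal">
      <person-group person-group-type="author">
        <name><surname>Smith</surname><given-names>F</given-names></name>
      </person-group>
      <article-title>An example article title</article-title>
      <source>Example Journal</source>
      <year>2011</year>
      <volume>4</volume>
      <fpage>12</fpage>
      <lpage>19</lpage>
      <pub-id pub-id-type="doi">10.0000/example.0010</pub-id>
    </element-citation>
  </ref>
</ref-list>
```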
And when we look at things right now, and I'm sure every publisher here, everyone that is involved in publishing and using these whatever XMLs and DTDs and all that stuff, however you call that, you're going to figure out there's always a problem with the endnotes. There's always a problem with the images. There's always a problem with the tables. I think the images were pretty much solved. I guess probably most of the people did that. The tables are still a mess. So we need to figure out, we're going to work on that. A nice way is to capture tables as they are and publish them as an image. But again, a lot of people, a lot of authors submit crappy tables. Either they're in a table format, made with Excel or something, or they just tab, tab, tab, tab. And then they change the font and now all the tabs are off, right, something like that. So we're going to have to figure out if, for tables, we still need to go in there and do some hand work, or we can leave it up to them and just say, hey, this is our requirement, and you're going to resubmit until we accept it. So we're going to have to play with the system. And again, the last really big problem was the endnotes, the references. And I think he showed it to me, this little tool that he kind of automated, the script he wrote. And it really works very nicely. So again, we're going to collect some information on that. But really, we are outsourcing, basically, on the front end to the author. Because when we look at our corrections after we publish, 95% of those corrections are either the name, the first name or the last name, where we cannot identify which is which. If you are named Frank Smith, we kind of know that Frank is probably your first name and Smith's your last name. But when you get all these kinds of names from around the world, you have difficulties. And the way we then use the name when we publish it, we have the first letter of the first name and then the full last name. And so sometimes it just reverses it. And then we have the full first name and the first letter of the last name, if we don't know which one is which one. So we hope that the authors will know their names. It still doesn't tell us that they are going to enter the first name in the field for first name and the last name in the field for last name. But if they do so, it's their mistake. And they cannot come back to us and say, hey, you made a mistake, because I submitted it correctly. So really, 90, 95% of our corrections have to do with names or maybe with the affiliation. And then we have those kinds of characters that every two months either get fired or change their place, because I don't know why. And then they request from us that we keep on changing their affiliation. And the moment we tell them, OK, you have to pay for that, they don't ask us anymore. So that's cool. That solves itself out. So really, on the front end, we really kind of outsource to the author. Then we do our work. And the only mistake, this is an old slide set, we just did another one an hour ago. And obviously, it didn't make it here. The thing is we do the peer review before the copy editing. So once an article is peer reviewed, accepted, and paid for, then it goes to the copy editing. And we have probably about 10 medical students and nursing students and EMTs and nurses that do this kind of copy editing for us. So they're all people from the medical field. And then it goes out.
And then on the back end, for those still problems that we're going to have, we're going to try this Amazon Turk, where we're going to say, hey, OK, we still have a few problems. If we solve it, it's going to cost us $1, $2 per problem. So we're going to put it out there and whoever is wherever in the world and thinks that $2, or $5, or $10 is a good salary to do that, be our guests and do it. And if we get twice the results back more or less the same way, we think it's probably valid and we're going to use that. So I'm sure we're going to publish some mistakes. I'm sure there's going to be errors in there. It's going to be a little trial and error thing until we kind of finalize this. But we're going to touch that hot potato. We're going to work with that, because we think it's very cost-effective. It's going to be very fast. It's going to save us probably three or four weeks in the publication process. And probably the longest step for us will still be the proof reading. With that said, I think we thought it's probably going to open a few questions, because I think you all have stayed away with reason from auto-generated XML. So with that, we'll invite any questions you might have. And I'm going to revert with all the technical stuff. I'm just going to push it right away to you guys. Or you have something else? Yeah, OK, the licensing. So what are we going to do with this? So I'm kind of half medical brain, half business brain. He gives everything away. He's kind of that kind of type. So we're going to try to figure out where the truth is in between the two of us. So we might create, and how do you call this, GitHub? Yeah, we might put it on GitHub and then welcome some contributions. I don't know if you know GitHub. I didn't know what it was. But obviously, you give away something. And people, it's open source. They come in. They improve it and give it back to you. So maybe we'll do a thing like that. Maybe if it really works out, let's say we're going to play around for sure for about six months with that. So we have hundreds of articles that flew through. And then somehow we're going to have to go back and make an analysis of that and see, really, did we publish garbage or not? I hope not. And if it really proves to be a valid tool, then we might consider some licensing. And licensing could be either we license the software to somebody who wants to use it on their own server. The headache with that is we're going to have to follow up with service and then new versions and stuff like that. Or we create a platform. And we buy another separate server or server bank. And we invite people to come and use our platform and pay for the platform, not necessarily for the software, where we just maintain the software all the time on our platform. And you could pay possibly anywhere between maybe $1 or $10 per article, depending whether you just want to use the tool to generate the XML and then take the XML back to your own server and your publishing platform. Or whether you want to use us as a host as well of your journal. Because we're getting all the time requests that we should add another journal or newsletter for academic institutions or whatever it is. And I have rejected them all over the last 10 years or 15 years because we have enough to do to maintain our own growth and to do our own in-house stuff. But we're at the point where almost everything is almost automated. So at this point, we can start looking at maybe doing stuff for other people as well. 
And so these are kind of the models. And there can be any mixture of those. So we don't know yet. We'll first play around. Because the moment you start taking money from other people, you want to make sure you deliver and you deliver good stuff. I mean, that's kind of my philosophy. And so I have always kind of pushed away some even some very good deals just because I thought we could probably not deliver over time. So once we're ready for that, it means maybe we're going to hire a few more staff people full time and then really deliver something really meaningful. But I'm not sure if we want to go there. I mean, right now we may even follow the way and just give it away for all of those who want it. All right. Any other questions? Or any other? Any questions? Yes, I've got two points. David Chotten from Oxford. One is to stress that if you do want cost effective further development on this wonderful product that you are developing, then going the open source route is a very, very cost effective way to build a community around you. Because I'm sure there are many people who would like to contribute to this, us included. And I think you're right on. And that really plays into what he's preaching me. And I have an open ear for what he's doing. What we could do is we could develop the tool in an open source form. And if any one of you wants to collaborate in some way, shape, or form, this is his address, your email address. Please contact him. And what we may do then is to take that open source tool and put it on our platform. And then people could just pay us for a platform use if they want to. That's an easy way. It would be a very cheap way for somebody who wants to put out a little journal or in a newsletter and doesn't want to go to the whole hoop of whatever having services and all that. It's a one stop solution. And it could be a very low cost one stop solution. The other point I want to make is about your tables as images. Last week I was at the European Bioinformatics Institute at a workshop on literature and data, which was looking at how journal publishers treat data. PLOS is one of the leading journal publishers in the biomedical field, Public Library of Science. And they do a very curious thing. They mark up each of their figures and tables with a DOI and publish it so it can be citable. And you can download it for free only as an image. In other words, the data are not actionable. You can't do anything with them except retype them into a spreadsheet. This is absolutely the worst thing in the world you could do. And I'm very pleased to say that PLOS announced at the meeting that they were changing their policy. And now they will be publishing table data as actionable numbers, not as images. So I would encourage you not to go down the route of putting tables into images and essentially forcing the readers to have to do what Peter Murray Rust described as turning hamburgers back into cows. Well, that's right. But we have 13 years of articles now where the tables are all images. But this is maybe a task that would be suitable for the mechanical Turk there. We give somebody an image of a table. Can you transform this into? But your authors have them as spreadsheets already. Get them to submit the numbers to you, not images. That's a trade-off, right? I mean, we can't make them work too well. When they're paying us to submit their papers. So that has to be the trade-off. I mean, I totally agree that. 
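To make the table trade-off just discussed concrete, here are two hedged, invented JATS fragments: the image-only capture currently accepted as a fallback, and the actionable form being argued for, where the numbers stay usable as data. Filenames, ids, and the table content itself are all illustrative.

```xml
<!-- Current fallback: the table exists only as an image, so the data are not reusable. -->
<table-wrap id="T1" xmlns:xlink="http://www.w3.org/1999/xlink">
  <label>Table 1</label>
  <caption><p>Patient characteristics</p></caption>
  <graphic xlink:href="table1.tif"/>
</table-wrap>

<!-- The alternative: the same table carried as actionable markup. -->
<table-wrap id="T1-alt">
  <label>Table 1</label>
  <caption><p>Patient characteristics</p></caption>
  <table>
    <thead>
      <tr><th>Group</th><th>n</th><th>Mean age</th></tr>
    </thead>
    <tbody>
      <tr><td>A</td><td>34</td><td>41.2</td></tr>
      <tr><td>B</td><td>29</td><td>39.8</td></tr>
    </tbody>
  </table>
</table-wrap>
```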
And one add-on: half, I would say, of our tables do not come in as created in a spreadsheet. They are created with Word, tab, tab, tab. It's a mess. It's an absolute mess. You all know. I mean, you're laughing, so you really know what I'm talking about, right? Now, we are now part of CrossRef. All our articles have DOIs. And we are in the process of doing that for images: we have DOIs and we are in the process of getting DOIs for every image. And then we're going to categorize them. So it's going to be easy to access images. We have images, really, that are rare. We accept case reports that normal journals don't. Let's say there's a case report of a tumor coming out of the ear. A normal journal would say, hey, we have to see this like 20 times; we're not going to republish. But we see stuff like in Africa where they didn't have time for the last 15 years to see a doctor. And now this tumor has grown like to the floor. It looks like an elephant kind of thing. And we have pictures that are really absolutely mind blowing for this century or this time of the century. And so I think there's a lot of value in these images. And we're going to somehow categorize them and make them available in some atlas format. I don't know how. We still have a big question mark about the tables. Because again, the tables are, I think, one of the biggest, crappiest submission types. And then the endnotes. These are really the two problems, like probably most of you can confirm. The other problem we have, which we have not touched at all, is plagiarism. And so we're going to start using CrossCheck here, probably within the month. Then for the changes we talked about, so we can make an annotation on what was changed and when, we're going to start using CrossMark. So there's no way for us, or no reason for us, to reinvent the wheel when there are perfect tools out there that are being offered. So it's just for us how to incorporate these tools so that the flow works seamlessly. And it doesn't take us too much effort again. So we need to automate all these things. But plagiarism, man. I mean, this is a huge problem right now. And we were talking at lunch about this. People submitting articles, they don't even go to the effort to change the font and the color from where they copied it. So we get articles with three, four different fonts. We know right away it was copied and pasted from different sources, Wikipedia being a frequent source. I have a couple of questions. My name is Jeff from the National Library of Medicine. Is this in production now? It's in testing. It's in testing. My other question is you mentioned copy editing. Does that happen before or after the conversion to XML? After. After. And you had one of your slides where you had your HTML DOM and your Jats XML and the arrow went both ways. That's pretty exciting. Does it go both ways? Yeah, that's a data transformer. So obviously it doesn't do everything. But there's some way to. So if you started with a Jats XML article that conformed to your subset, you'd be able to bring it back into your system and do the editing and pick it back up? It's like a black box. You throw the XML at it. And it'll show you in the app the data you have. And then when you click Save, it'll throw the XML back out. OK, great. And one way to look at that: if I'm an author and I start putting in all this work now that I'm required to, and we still have one of the lowest author fees, we ask about $220, and then probably 60% of our authors get a huge discount because they're from the third world.
So compared to others it's not really much; it's very low. But it might be a lot of money for people from the third world still. We'll just tell them, hey, look, this is how it's going to look. So they kind of see, before it's even published, the way it's going to look when it's published. So we think that's pretty cool, because they can start playing around with it, maybe change a few things if they're not happy, and we don't have to do that at the back end. Great. Are there any more questions? Yeah, one more. Andrea Lown, an independent consultant. I have a question about that black box. So how are you handling special characters? I think you mentioned TinyMCE and CKEditor, which I believe both use the HTML named character entity references? We don't use that one. Repeat that? We don't use either of those, actually. But it's like that. But all Unicode characters are then turned into XML entities. So you're using the code points in the references, character entity references? Code points. Or are you using the actual numeric code point in your references? You are. OK, great. Sorry. Yeah, that question just went like whoosh. OK, for me. Kevin Hopkins from M Publishing at the University of Michigan Library. Forgive me if I still missed this, but I'm trying to imagine the interface. And of course, maybe because I didn't see it in the demo. So the user fills out some of these metadata fields. But then for the body of the article, can they go into their Word document where they already wrote it? Select all, copy and paste? Will the editor handle this? Or do they kind of need to compose within it, like we heard from Anodham last year? For the most part, it handles it properly. But if there's tables in there, it might not be 100%. That's why I said right now we're still going to accept the table images. But ideally, it should be a cut and paste. Thanks. Yeah, I wanted to convince him that we need a form for everything, like if it's a case report, it goes this route with the abstract, the keywords, the introduction, the case report, the conclusion, and the references. And if it's an original article, it has the methods and materials and then the statistics and all that. But he convinced me it's just one block, put it in there. And when you look at the submissions, I mean, again, I think we'll have problems with maybe two or three out of maybe two or three hundred. So maybe 1% will have to be hand corrected in certain things. But again, it's just a question. Would you have some staff working on 100% of your articles? Or are you just going to let it run through? Or are you just going to look at it at the end and just correct 1% or 2%? And maybe in a year or two, we'll be back here and we'll tell you it worked. Or we'll tell you it was a complete failure. I don't know yet. But that's a nice thing. We're small. We're fast. We can turn left, right. If it fails tomorrow night, I can say, OK, we go back to this one, and then however little sleep he gets, he gets it reprogrammed, and then we're back up and running, right? Great. Well, thank you very much. All right, thank you.
At Internet Scientific Publications, we have since day one marked up submitted manuscripts using an in-house developed Microsoft Word macro. After 14 years, we feel that this approach is not ideal for two reasons: 1) most errors that exist in the finished XML are introduced during the data-entry / markup stage, and 2) markup represents a significant time expense for our staff that could be better spent elsewhere. Since we only charge at the point an article is accepted for publication, there is a time investment in marking up manuscripts that may never be monetarily recouped. Consequently, we have explored the option of allowing authors to mark up their own documents from our submission frontend website. There are drawbacks to this approach, namely the complexity and completeness of JATS and the huge learning curve a non-technical author would encounter, but we have in turn concluded that a majority of the JATS definition does not need to be made available to an author in our frontend application. If an article requires more specific markup that we do not support in the application, we can always fall back to publisher-side markup using our tried and tested Word macro. Quality control occurs later in the pipeline during copy-editing regardless of which markup pathway is followed. To facilitate this, we have created a self-contained Symfony2 bundle that supports manuscript markup utilizing a subset of the JATS Journal Publishing 3.0 tag suite. Much of the front and back matter is captured using simple form inputs and is validated using regular expressions developed from common input patterns. For the body, an HTML5 DOM-based WYSIWYG editor is used. Although the generated markup is HTML5, by using a subset of JATS we can unambiguously map between the two markup languages. We speculate that Amazon Mechanical Turk could be used to simplify certain article markup tasks, for example endnotes, where it would be off-putting for the author to tokenize the citation string. While the distribution model of the final product has not been determined, it will most likely be made available in a dual-licensed manner depending on the commerciality of the customer.
10.5446/30581 (DOI)
Hello, I'm Richard O'Keefe. I'm Faye Croetz. Hi, Jen McAnders. Hi, and we're here to talk about our journey as we actually migrated to JATS. We've recently done that over the last year. We were participants at the last two conferences just as spectators, but now we have some real life experience that we wanted to share with you. And what we're going to do is present in three specific parts. Jennifer will be talking about the content transformations and the trouble spots, which matches up with the title of our presentation. And Faye will be talking about the validation and the QC piece that had to go on after we had done all these transformations. And what you'll find is that most of you, many of you, have been working with NLM or have been working on some kind of conversion of your own. A lot of what you'll see is probably very familiar. You'll see many commonalities, and much of it might even be self-evident to you as to what you come across in the papers and the data as you go to convert it. But one of the things that was pointed out by one of the peer reviewers, and I thank you, whoever you are, is that everyone understands the trouble spots and how important they are, but a lot of times we underestimate how important they are. And it's really, really important to make sure that you know exactly what you need to do and how you go about doing it. So we don't want to take anything for granted. The fact is that our systems, tools, even the markup that you've used over years, sometimes decades, all change over the course of time. But of course the data itself, what you're trying to capture with all these tools, that's the most important asset. And that really doesn't change. You want to preserve its quality throughout its historical lifetime. So as we were doing this, what we learned is that it's not really just about the tags. It also has a ripple effect. It also affects your surrounding infrastructure and the support that you need to build to make the tags as useful as possible. So we'll be speaking about what AIP chose to do, and of course your circumstances may vary. The key thing is that, as we all know, the world doesn't stop when you have a project like this. You all have other obligations and responsibilities that you need to do, and you need to factor it all in as you go about doing this. So just a little bit about AIP. You can see the metrics and some of the factoids about AIP. We've been around for quite a while, and we support a wide portion of the physics community. But what I'd like to draw your attention to mostly is the mission statement at the bottom, because it's a premise that runs throughout the talk. It really formed the background and backbone of why we wanted to move to JATS and how JATS can help us achieve that particular mission. And also, in that regard, Laura's presentation was actually a great lead-in to this, only because a lot of the themes that she mentioned and brought up, you'll see recurring throughout this. So what I wanted to also mention is that in JATS, we found a really great vehicle to achieve what our goal was. And as you can see in the mission statement, it's really applicable to anybody out there. If you just replace physical and applied scientists with any of your consumers, clients, or end users, and you replace AIP with your organization's name, you'll see this is what we're all really trying to achieve. So even though we work in the physics space, it really can apply and be helpful to just about anybody.
This is where we had to start, this was the scope of the challenge ahead of us. The AIP content collection was about 800,000 records, and it was divided up over three particular markup groups. They all derived at one point out of ISO 12083, which was a standard, but over the years, actually two decades at that time, it morphed into really an AIP type of format. We primarily had everything in a header or header-plus-references XML or SGML format, and we had full text SGML, which was used on our online platform for quite a while, as well as an XML that was derived from our full text SGML and evolved mostly throughout the course of this particular time. How did we use it? Like most people had been using it. We used it to create the online PDFs, the print PDFs, and also as a source for our HTML, rendering it out on our own online platform. And quite frankly, it really did work well. But as Laura had noted, with the evolution of how the NLM DTD became JATS and how more and more people were using it and new user groups are coming on all the time, AIP recognized the trend and knew that the times were changing and we needed to do something. And whatever kind of affection we had for our own XML, the bottom line is it really wasn't that special. It's what it was tagging that was special. So what was the real problem? Why did we want to change? Well, it had morphed into something that was very AIP centric. It became a little overly specialized for our products and it was becoming a product-based markup as opposed to a content-based markup. And with that comes the necessary support and infrastructure and expenses that come with maintaining something proprietary, and with many data transformations, so on and so forth. And it was becoming quite cumbersome to work with, and as new standards come on and the community starts to embrace them, it makes it more difficult and costly to enhance and incorporate those into our products. And in the end, what we needed to do is recognize that it was at the end of its life cycle. It had matured to a point where we really couldn't take it anywhere further. So what AIP needed to do is focus on what we wanted to do next. So the key was the two areas of standardization. We wanted to join the community, as Laura was mentioning. The community is a very important thing, and the AIP XML community, well, you're looking at it. This was it. We go out and have a lunch break. That's how we discuss AIP XML tags. Right now, we have a whole wider community that we can take advantage of. We wanted to better position ourselves to adopt standards and obviously the best practices that go along with using them, which would make the data interchange and the distribution of our data, which is our primary core mission, that much easier. And mostly, it was just a great way to showcase the content as XML because, believe it or not, JATS as is was more than enough to tag what we wanted to tag at this point in time. But it quickly became evident that it was not only a matter of changing tags and doing the transform; we needed to take our systems themselves and transform those too. And that would mean setting ourselves up for success. Converting to JATS is just not the answer in and of itself. It's really making sure you make the best use of it. And that affected workflow, the content management aspects, staff roles, etc. You can go down the line of all the business rules and everything else that something like this would affect. So the first thing I wanted to talk about is the standards.
And what happens when everybody is doing it? We succumbed to peer pressure. Yes, we were weak. But not only with the XML; it also mattered with the tools that support it, because once you have a widely used XML tag suite or set, just as this is, you have XSLT, you have Schematron, all these other standards that can help. And it eliminates a lot of proprietary types of work that might be too costly for yourself. And it also widens the base on your own staff of who can operate and manipulate and work and make it the most efficient process you can. And more importantly, as Laura was chastising us about the listserv, it would enable us to evolve with the community and actually participate, join in, contribute, and hopefully innovate along with the community. And obviously our presence here is testament to that. So typically our XML fit nicely in with the traditional usage, so it wasn't so bad. And we had the benefit of dealing with standards to begin with. So we already knew that it would be of a benefit. When you work at AIP, this is what you see pages and pages and pages of. And you would think that extremely long mathematical equations, tables that can take up 400 pages in a journal, would be the most devastatingly difficult part of your transform. But since we were using MathML already, and we were also using the OASIS table exchange model, these ended up actually being the easiest part of the transformation, believe it or not. Now, why do all these long equations end up equaling zero? We don't know, but it's just the way it is. But believe it or not, all we needed to do in our transform was change the prefix from m to mml, and we were all set. So if we had had to go in and change all of this, we'd be in straitjackets by now, probably. So obviously no surprise. We were using JATS, XSLT and Schematron. We're using the green, the archiving version, because it's very important for us to have an open mind and also to be able to distribute our content, because that is a large part of what we really profess to do and want to do. And as well, even though all these were well and good and we get a lot of benefits from them, the other aspect was that using our old systems, the old management structures and everything else, it really wasn't sufficient to make maximum use of this. So I just wanted to speak a little bit about some of the foundational work we needed to do to get this started. So the first thing was obviously communicate. We needed to make the plan known. First, it's real easy to sit in a room and decide, oh, we should go and adopt this standard. Yeah, that's a great idea. We just put our foot down. We made the decision. We're decisive and decided to go for it. And a lot of the time that's half the battle. So what we needed to do is communicate this out to everybody. Of course, there was a small group in the content technology group at AIP that was working on this particular project, but everyone was aware of it and they understood the importance of it, and they understood that by converting to the JATS Tag Suite and maximizing the potential of that data, it was really the cornerstone of where we wanted to go as an organization. So it was not done in a vacuum. The next aspect of what we needed to build for success was ownership. AIP, the way it was originally set up, we had a couple of different content groups that would work with the material. We had online groups. We had a production group. We had support groups for the production.
And they were reporting to different managerial structures. Although everyone had AIP's best interest at heart, they had their own projects, occasionally their own agendas. And occasionally it would create silent, unintentional conflicts, where sometimes a decision would be made and it wasn't always officially communicated throughout. And then we would get some of the surprises, that you'll hear about later, in the content that we wouldn't have necessarily anticipated. So the first thing that we did is make sure everyone understood we have a unified message, that we have a certain way we want to approach and handle the data. And so we consolidated our content groups into a single group following that same principle, and officially designated that group, which would be our group here, as the official owners of the content and the gatekeepers, to make sure that any changes to the markup or anything that's going to affect the product are handled properly. And even though I chose Monty Python's Black Knight, none shall pass, we don't really intend to have his fate, but we do intend to embody, or disembody, his tenacity and make sure that everything is done properly. And believe it or not, there was surprisingly little resistance to this reorganization. Everybody realized it was the best thing to do. And in a lot of ways, it actually relieved quite a few departments, because this way they didn't have to worry about making unilateral decisions. If you worked in online areas and supported our platform, you concentrated on online. If you were a project manager, you managed projects. If you worked in production, you worked in production. You could focus on your core skill sets and do the best you could at your job, and leave the content technical details to the content technology group to pass muster on. Everyone has a say in how we can change and what we need to do, but it does have to go through a more formal process. And as a result, we'll be able to better maintain our content. That leads to the next piece, which was the infrastructure: we did invest in a new content management system. Currently, we're working with RSuite to implement that. And what it will allow us to do is effectively manage the content, avoid the unneeded workflow duplication, as well as avoid those unwanted end-arounds to get content to look right or fit right even though it doesn't really have a really solid XML-based reason to do that. We'll have more extensibility, but more importantly, we're going to have great versioning capabilities. And that's going to be very key to everything. So we can maintain and make sure we can document every change, because in a way, we're almost starting at square one again. We had all our old content built over the years and now we go back to, here's JATS 1.0. This is where we're starting, and we'll be able to manage our content and have a really firm grasp as to where it goes, where it needs to go, and know exactly who, what, when and where we need to manipulate something. And when you have authors submitting errata and publisher notes and things like that, that's very important information to have. And the greatest data in the world, no matter how well you tag it, if you don't manage it properly, it's not going to be as effective for you. So what was the next step? So we know we have JATS, okay, check mark, we decided on that. We were going to use XSLTs instead of some of the custom transformation programs we had been using.
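Going back to the MathML prefix change mentioned a moment ago, here is a minimal sketch of how that kind of XSLT pass can be written, assuming the legacy m prefix and the desired mml prefix are both bound to the standard MathML namespace. If the legacy data actually used a different namespace URI, the match and the output namespace would need to change accordingly; this is an illustration, not AIP's actual stylesheet.

```xml
<!-- Sketch only: identity transform that re-emits MathML elements under the
     mml prefix expected in the JATS output. Namespace assumptions noted above. -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:m="http://www.w3.org/1998/Math/MathML"
    xmlns:mml="http://www.w3.org/1998/Math/MathML">

  <!-- default: copy everything through unchanged -->
  <xsl:template match="@* | node()">
    <xsl:copy>
      <xsl:apply-templates select="@* | node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- MathML elements: rebuild them with the mml prefix -->
  <xsl:template match="m:*">
    <xsl:element name="mml:{local-name()}">
      <xsl:apply-templates select="@* | node()"/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
```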
We did need to decide what we were going to convert. I had mentioned 800,000 records; we really have more than that, but we decided to just translate the header records with the references, and we also decided to convert our full text XML, since it went back rather far, to 2005. We decided to put on hold for now the full text SGML that we had. So we made a business decision to do that. The next step was to test the XSLT transformations and adapt our findings and the specifications based on the results. We needed to introduce a nice quality control system, and this is where we started to implement the Schematron process. So again, what we were doing is take advantage of yet another standardization tool. And then we needed to document what we were doing, so future generations of AIPers won't curse the day we were on the earth, and they will be forever thankful that we took care of everything and they know exactly why something was done. And then obviously we'd have to train the staff and our production partners in the AIP usage, because one of the key facts is, yes, we're using the green archival DTD, but it doesn't mean that we don't have our own way we would like to use it. For instance, how we might want to use the named-content tag, or how we're going to use custom-meta. What attributes are we going to allow, and their specific use? All those types of things are what we would be incorporating into our Schematron and keeping firm details of, so we can manage the data and have it still stay as predictable as possible for us. So with that, with this foundation set, it was just time to go wrestle with all the angle brackets and find out what surprises were lurking for us. So with that, I'll pass it over to Jennifer McAndrews. Yes, when it comes to wrestling with angle brackets, that's how I spend my day. And then when someone says, hey, go get some clip art for your slides, that's like the lottery. So we're going to have some fun. So we had our prizes. Once we agreed on what the challenges were ahead of us, we had to get specific about how we were going to go about converting the data set, what needed to be done, what would be helpful to do, and what was going to have to wait for another time. The first and most critical piece in getting started was our document analysis. We needed to know what was out there, what we were dealing with, and how we were going to handle it. Keeping our data sets in mind, we needed to determine what the tagging rules were that we needed to follow. And once those decisions were made, at least enough to get started, we needed a means of keeping track of them. So the creation and maintenance of a document map, which we referred to internally as our spec or specification, became our first big working challenge. This was also a really good time for us to create sample XML files and take copies of very specific data structures that might cause a problem later and that we would need to be checking. So this is not a big surprise slide, because you've seen it: the tagging principles that we decided upon. The first thing, one of the first things we did, was define how AIP was going to use JATS. And that was our step-off approach to all of these things. When the main facets of the document analysis and creation of the spec, sorry, in the main process, whatever, forget it.
So since we had two data sets converting, the SGML source records and the full text XML both converting to JATS, their tagging principles, our tagging principles, were agreed upon early in the analysis. We made very strict distinctions between elements and attributes. We decided to hold off on customized content models and just not use them right away. We could use JATS out of the box, as it were. And in the short term, we got a little bit tricksy with the x markup: we reserved that, and in this instance, we used it to wrap problem areas in our data that we knew we would have to go back to later. For instance, for any occurrence we had of an ordered list in our source that erroneously failed to indicate what type of label was to be used, whether it was a bullet or a number, we actually output a label. But we wrapped that label in an x tag so later on we could do a data search and find those instances and go in on a case by case basis. Later on we can use the x properly, but for now we were a little sneaky. In creating the document map, we came upon this very adorable formula. We had our tagging principles, number one, multiplied by our existing documentation, which is all the piles and piles of paperwork sitting in the back of people's desks and basements holding up chairs, and the institutional memory. We're fortunate enough to have a lot of people at AIP who remembered or were there at the very inception of markup and online publications. So they were able to help us out with why certain decisions were made, so that we could maintain that decision making process going forward and we didn't lose anything in what they meant us to keep. This is a sample of one of the pages from our resulting documentation. The first column is what the element was, followed by a sample of what our proprietary tagging was, and then its target JATS. It's that fourth column there that was the most critical for us. These are the instructions to our programmer, the one who was writing our XSLT, that explained exactly what needed to be done. Frequently, there were explanations of why this decision was made, and then, as you can see, any updates that came later in the process. The idea here is that we have this document that we can always go back to and see why we made this choice, what changed and how it was handled. Part of the benefit of the conversion process was our ability to polish up our archive. Stepping away from reviewing our completely tagged articles and instead just reading the DTD from top to bottom helped us identify any ambiguities in the existing tag set. Reading a tag outside of an article is different: inside an article you can get a sense of what the tag meant just by reading what the content was, but if you're looking at just the tag on its own, you start to see where it really just doesn't make any sense all by itself. In our example, extra one, extra two, extra three, that's pretty darn meaningless if you're just looking at the tag. We were able to take situations like that and use the more meaningful and intuitive tag set from JATS that allows anyone to open up an XML file and say, oh, okay, I see, this is the guy's role, he's a professor, I see, that all makes sense. The conversion to JATS was an ideal opportunity just for these cases. The only trick came in deciding whether the adjustment should be made using the XSLT or whether it was wiser to use a pre-process in advance of the transform. In this instance, we did this via XSLT; it was fairly straightforward for us.
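A rough illustration of that interim x trick for lists whose source never said whether the labels were bullets or numbers: a label is generated, but it is wrapped in x so the instances can be found and reviewed later. This is only a sketch of the idea as described; the exact placement of x has to respect the JATS content model, so the real markup may differ.

```xml
<!-- Sketch only: generated labels flagged with <x> for later review. -->
<list list-type="order">
  <list-item>
    <x><label>1.</label></x>
    <p>First item from the source document.</p>
  </list-item>
  <list-item>
    <x><label>2.</label></x>
    <p>Second item.</p>
  </list-item>
</list>
```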
Going in, we had some expected trouble spots, things we knew were going to be a problem. Of course, the very first one that so many people are going to be facing is generated text. That's our most pervasive known trouble spot. We found a solution by both accounting for instances of generated text in the specification, for instance, a spec rule that says take this element and output such-and-such text, and also by leveraging some of our existing conversions, our existing programs that in the past would populate the data that had been generated text. We ran those as a pre-process on the first XML file, then transformed, so that the final output file had all of the data within it. Style variations were another issue for us. Each of these scenarios is a possible method of presenting the title introduction, depending upon what journal you're in and what their styles were. Previously, we could do this through CSS or just page layouts. But now with Jats, we want to have everything in our file. So we had to locate and document every instance of every possible configuration of our titles. Tags, or tag reuse: we also call these our heavy lifting tags. We had a number of tags that, while they were named the same, were handled very differently. One of the most problematic is shown here. It's a catch-all, othinfo, which for some reason we keep calling other info, even though the 'er' isn't there. Its position within the reference determined its handling. It could indicate straight text to be carried forward. It could indicate punctuation or spacing that needed to be moved and then carried forward. Or there could be a mathematical formula within it, and that had a whole separate thing. So again, each of these instances had to be located, documented, and accounted for in our specification. Of course, no listing of known problems is complete without the fabulous multimedia. And I was so tickled to see Laura mention it earlier because it really reflected what we saw. At the very beginning, at the dawn of time, we had treated multimedia as an external file. We had a link to it within our main XML and it linked over to our supplementary database. That's our first step. Then we started seeing more of them coming in. Well, I guess we're going to have to really find a way to tag this in our data, because people want a live file within the file. So we came up with an embedding structure. And then we started to see a lot more of them. So we had to come up with a real solution. So we came up with our real solution, and that wasn't enough, because then we had to come up with a solution that accounted for different formats of the very same file. So now we have five separate, no, one, two, three, four, five, six separate possible tagging structures for multimedia that we have to locate, account for, write a transformation rule for, and beg the programmer to include. And this is only going to work if all of the tagging inside the file is correct. And all the tagging was done by hand, so we weren't real confident. But we were hoping. Finally, probably our biggest hurdle was time. As Rich mentioned before, we are the whole content technology team. We look larger in real life. And we had to do all of this work that we're talking about on top of everything else we were doing. You know, you have the regular workday, emails, meetings, more emails, more media support calls. This one needs help with that. That one needs help. And meetings, right. But the thing was we were able to do it.
The biggest reason we were able to do it was we had the support of the rest of the staff and our management. Everybody understood the importance of this project to the company as a whole. So okay, yeah, there might have been a little grumbling. We said, you know, we can't do that today, you're going to have to wait for tomorrow. But for the most part, everybody was really on board, and that was a big bonus. Our unexpected trouble spots: language. We had this group of people who speak in angle brackets all day long writing a specification for someone who speaks in Perl code. We thought maybe English would be a good answer. You know what, that really turned out to not be a good answer. It wasn't even our common language. But what we decided to do was to use XPath, which in retrospect is a very obvious solution. In our example, what you can see is the PACS, which was a tag that we all know. We know where it goes. We know what it does. We know what its limitations are. So we thought, oh, you know, take PACS and put it here. But our programmer says, well, where can I find the PACS? Where is it going to be? Is it always going to be in the front matter? Could it be in a reference? Could it be here? And these are things we kind of hadn't considered, because we have all that knowledge already in our heads. And using XPath in our specification allowed us to show our programmers and anyone else down the line who doesn't have this strange knowledge packed in their head. Anybody else will be able to follow it. Nasty surprises. You only think you know your data. You really don't. And the point of it is, again, as soon as a human being comes in and there's fingers on your data, you've lost the measure of positive control. These nasty surprises are out there, no matter how careful you are. And we were, I think, pretty darn careful throughout the years, but even with robust quality checking, unforeseen gaps snuck through. A lot of these gaps were identified on our first conversion run. And the example we're showing here is this attribute, lead para. We perhaps foolishly assumed it was there because it's in our rules. It's in the directions, the instructions we gave to our vendors: make sure this gets included on the first paragraph in the journal Chaos. We need that there. It wasn't until we saw the online displays that we realized our lead paragraphs, which need to appear as bold online, weren't there. We had no idea. So then this ended up being a whole go-back: identify the problem areas. Rules were written for the transform again. And Fay will talk to you. Actually, you know what? Fay is going to talk to you right now about what was done going forward to make sure that doesn't happen again. Hi, I'm Fay. I'm going to speak to you about the quality control and testing piece. As discussed, we've had about 800,000 files that we've been producing markup for for over 20-plus years. And during that time, business rules changed, publishing styles changed, technology changed. And we separated this quality control and testing into four phases: prerequisite training, content tagging checks, incorporating Schematron, and online displays. The first step, prerequisite training, is that AIP chose to expand the knowledge of the entire QC team. We were already DTD experts, Jennifer and myself, but sending us out to learn the NLM Jats DTD was very valuable. Here we learned the industry tagging practices, and we understood the customization possibilities that Jats does allow for.
We also had a two-day class in XPath, XSLT, and Schematron. And that two-day class taught us how to speak XSLT syntax. We learned to write much clearer and more concise instructions for the programmer. And learning Schematron actually enabled us to write our own Schematron, and we didn't have to rely on programmers. So it was really a great class. The next step was the content and tagging checks. The first part of that was actually performed while the XSLT was in progress. Here the analysts checked blocks of XSLT code, and they first confirmed that the programmer understood the instructions. Daily meetings were held to discuss any new findings or clarifications to instructions. It was early on at this point where we did find one trouble spot, and that was with our specification, which Jennifer touched on. We realized at that point that our specification was way too simple. We would write something like, for example, convert the AIP artwork tag to the Jats graphic tag. And to us, that made sense. It sounds simple, but it's actually much more complicated when you're writing the mapping, and you have to think about it: when you're converting, for example, the AIP artwork tag, it converts to the Jats graphic tag when it's a child of, for example, an entry or a disp-formula. But in other cases, it's an inline graphic, where it falls in other places, maybe a title or an inline formula. The next step during this phase was the batch processing, and this was performed when the whole XSLT was complete. The first goal we had was to see that the XSLT was working and that the files were valid. That was basically our goal at this point. And here we started to find some hidden problems and had to make some decisions. When you get to these hidden problems, you have to decide: is it better to fix the source material? Is it better to fix the XSLT? Is it better to fix the Jats when the files are already completed? And you have to step back and ask yourself some questions. How many errors are there? How would fixing this impact our schedule? Is it easy to fix? And each problem was handled on a case-by-case basis. I'll give you one example. In our AIP markup, if we had a reference item and we had two parts to it, we had a comma in between the two parts. And that was allowed in our AIP markup. However, when we converted to Jats and you had two parts, that comma had to be put inside an element; it could not be floating PCDATA. So in this case, we decided to make an XSLT change. Of course, you're dealing with spaces and so on; it wasn't so easy, but this was something we worked out with the XSLT. So you have to decide where it is best to fix your error and discuss that. The next step was group testing. And this was performed when the converted files were all valid. We started by running approximately 200 files from various journals and different article types. The entire group checked the same files. Again, more hidden problems were found as you go on. And we looked, for example, in this case for dropped text. We didn't actually proofread the files, but if we had one tag in the source, we made sure that the tag was also in the target and all the text was inside. And this is where we started running the Schematron. We would get an error, for example, if we knew a table footnote had to be in the format of T1N1, let's say, and it came in a different format; here we would get an error.
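A minimal sketch of that context-sensitive artwork mapping, written as XSLT templates: the same legacy element becomes graphic in display contexts such as a table entry or a display formula, and inline-graphic everywhere else. The filename attribute and the exact list of display contexts are assumptions for illustration, not the real AIP rule.

```xml
<!-- Sketch only: legacy artwork mapped to graphic or inline-graphic by context. -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xlink="http://www.w3.org/1999/xlink">

  <!-- display contexts: table entries and display formulas -->
  <xsl:template match="artwork[parent::entry or parent::disp-formula]">
    <graphic xlink:href="{@filename}"/>
  </xsl:template>

  <!-- everywhere else (titles, inline formulas, running text) it stays inline -->
  <xsl:template match="artwork">
    <inline-graphic xlink:href="{@filename}"/>
  </xsl:template>
</xsl:stylesheet>
```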
I'm going to explain the Schematron a little later in the slides, but we would start to check the Schematron errors also. The next step we referred to as bulk processing. And this was performed when these 200 files were approved in the group testing. At this point, we ran all the 800,000 files through the XSLT. And it was great. We had a 99% accuracy rate. Now you could think that's a great number; however, when you're dealing with 800,000 files, that still left us with about 8,000 files with errors. So again, we had to decide where do we fix the errors: do we fix it in the XSLT, or do we wait and fix it in the JATS when it was done? I'll give you an example here of what we found. We found that with our conference proceedings, we had multiple editors that were grouped together and didn't follow our intended specification. So here we decided to fix the source in this case. And we found a few other errors such as I mentioned. But then we re-ran the XSLT and all the files were run. So here, the final step was referred to as analyze flagged data. And this step was actually done on the converted JATS files. And Jennifer touched on this. What we saved for analyze flagged data was basically the hard-to-find known problems that we knew about in the beginning. As you see here, I'll give you an example. Here we have a 10 with a superscript minus 8, and that's how our PDF looked. In our older data, we had a tag called at-other. And this at-other tag represented some unknown characters from our first generation of transforms done in the early 90s. And what we did is if we didn't know what the character was at that time, we tagged it as an at-other and put, in this case, an at sign QL, which really had no meaning. It was just an unknown character. So because we didn't know how to really convert this tag, we chose to tag it as an x strike tag, a strike wrapped in an x tag. And this way, it would go all the way through the conversion. It would be a valid file. And we knew at the end we would get a list, a program would supply a list of all the x strike tags, and then we could analyze them at the end and modify all our JATS files. Again, a business decision was made to do this at the end, but it really made the most sense to us. There were a few leftovers at the time, but those we did manually. The next step in the quality control and testing was incorporating Schematron. I have to say this was my favorite piece. This is a central piece in our QC process, and it was derived from our preexisting proprietary QC programs. The Schematron is a list of checks or assertions written in XPath syntax. It tracks errors and warnings specific to our data. Now, as mentioned by Rich, we decided early on not to make any modifications, not to customize the JATS DTD, and to rely on the Schematron to have more control over our AIP styles. An example: we have one journal which has structured abstracts. The rest don't. So here was a case where, in the Schematron, we can make sure that every article in that journal will always have a structured abstract. So this is the type of thing that we checked for. The Schematron work was actually ongoing throughout the whole QC process since we started, because every time we found something specific in a journal, we would write a Schematron rule to match our AIP styles. If the rule, let's say, only held 80% of the time, we would put it in as a warning.
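A hedged sketch of what such Schematron rules can look like: one journal-specific assertion that fails as an error, and one softer check reported as a warning, along the lines of the table-footnote format mentioned earlier. The journal id, the id pattern, and the exact tests are invented for illustration; only the error-versus-warning idea is taken from the talk.

```xml
<!-- Sketch only: house-style checks layered on top of stock JATS validation. -->
<schema xmlns="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">

  <!-- enforced as an error: this particular journal always has a structured abstract -->
  <pattern id="structured-abstract">
    <rule context="article[front/journal-meta/journal-id[@journal-id-type='publisher-id'] = 'xyz']">
      <assert test="front/article-meta/abstract/sec">
        This journal requires a structured abstract (an abstract containing sec elements).
      </assert>
    </rule>
  </pattern>

  <!-- reported as a warning because the convention only holds most of the time -->
  <pattern id="table-footnote-ids">
    <rule context="table-wrap-foot/fn">
      <report test="not(matches(@id, '^t\d+n\d+$', 'i'))" role="warning">
        Table footnote id does not match the expected TxNx pattern.
      </report>
    </rule>
  </pattern>
</schema>
```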
But this way, we had every single rule, as much as we could, in that Schematron. So it's extremely helpful. You don't have to say, reviewing this journal, let me make sure that A, B, C, and D are correct. You don't have to do that. If you put it all in the Schematron, you can make sure that everything is correct. Right now, we have about 250 rules in place, and we're continuing to write more. Here's an example. If you look at the top of the screen, compound keywords. At the top, in orange, the compound keyword actually has four parts inside, and that's valid in JATS. And actually, this was the problem that we found earlier. However, for our style, we wanted to only allow, if you look at the bottom, two parts inside the compound keyword, with one having code and one having value for this specific type. So what do we do? We create a Schematron rule to only allow two parts inside a compound keyword. So here, the file was valid, but the second check actually gave us an error. While you don't have to understand the actual Schematron rule, you'll see the orange is the error that actually comes out when you run the Schematron. The first one says a compound keyword must have two parts, and the second one says that the actual values of the attribute must be code and value. So again, we were able to put an error in there that enforces our style, but the file was still valid, and we didn't have to modify the JATS DTD. The final stage, again, which Jennifer touched on, is the online displays. And the icing on the cake, I don't know if you could see the icing, but to us, we called this the icing on the cake. Our assumptions at this point are that the file is valid, the Schematron is run, and everything is perfect. The testing here was expanded, not just to our small group, but to the online publishing group, and we selected random testers throughout the organization so they could also check the files. Of course, at this point, errors are found all along the way, but they are fewer and fewer. And this was a case, again, as Jennifer found, when we were actually viewing the files, we saw that a paragraph in one of our journals wasn't bold, and sure enough, it was so much easier to find when you're viewing the files. So what did we do? We made a Schematron check, and this will never, ever happen again, because we found this error. But this online display testing is just really a great way to confirm that all your business rules are being followed. Okay. Rich. Oh, Jennifer. So just to sum it up, let's go over some of the general lessons learned and conclusions. Excuse me. The bottom line is, you know, don't go it alone. I mean, I think that was the key point that Laura was bringing out in the first presentation: you have this whole community out here, brilliant people, people who've gone through, often, what you're trying to do or solve, and when you're following industry best practices and standards, it just enables you to get that much more in the way of resources. I want to repeat: set yourself up for success. Make sure that when you do all this work, you build a system and you build an environment in which you can constantly stay successful with it. And it's also impossible to overstate the importance of document analysis. But as we pointed out, no matter how much document analysis you do, there's always something that slips in, because you figure 800,000 or more records, you figure how many characters could possibly be in there, how many hands have touched them over the years.
Something somewhere is going to look good online or in a PDF, but it's not going to be right underneath the covers. And then what you could do is use the analysis as an opportunity to correct any known problems that you have, that you always wanted to fix in your data but never had the opportunity to, because once you undertake a project like this, how often will you ever get the resources and the attention of the organization to make sure that you can get it all done? So take advantage of it while you can. Do you want to finish up? Okay. Hi. It's good to see you again. Recognizing the difference between bad and incorrect data. I apologize, I didn't write my notes on that one. Can you speak to bad and incorrect? Well, the difference is that you have data that is just all-out bad. It just absolutely makes no sense. A user put in an empty tag that makes no sense somewhere. It's just totally bad data. And then you have something that's just incorrect data, where they chose a tag that's kind of close to what you want. It might be valid, but it's not really what you wanted in that particular location. So as you go through your process of analysis, you try to identify which one is which, because it makes a big difference as to how you want to go about and fix it. Okay. So incorrect data was a lot of my sneaky Pete solutions to get the math to display. I see. Create your detailed document map. We showed an example of it before. The more information you can put into this, the better it is going forward. It's a living document that hopefully you'll be able to pass off to any user who comes into the organization, or is already in the organization, who wants to understand what you've done and why you've done it. So, you know, be as detailed as possible, keep it as current as possible. The XPath training, as Faye mentioned, was very valuable, mostly because it gave us that common language to work in. But also, you know, I look at it and think, oh my God, how did I not know XPath cold, having worked in markup for so long? So it's become a tool that I didn't even realize how useful it was going to be until I started working with it. Now I can't imagine being without it. It's kind of how I think now, and we talk about menus in XPath. It's terrible. Schematron is an essential piece of the QC process. We've always had a QC check that has been done for us by our programmers, proprietary, written in-house. But this Schematron has given Faye, Rich, and me the opportunity to create our own rules, as Faye mentioned, and not have to bother a programmer. We know the data best. And I don't mean to say that to sound vain, but we work closely with it all day, every day. So we know what it has to have. We know who to ask to make sure that this particular paragraph is treated correctly or that heading is accurate. So it's much more expedient for us to be able to update and maintain this document than it is to have to put this all into some kind of request and pass it off to someone else and create another round of questions. And of course, the first and foremost issue is to work as a team. Hopefully by us being here, you kind of get the idea that, yeah, we do team stuff. And that spirit was really with us throughout the entire process, with the organization. Everybody really enjoyed this project, and that made it so much easier to have the support of the staff, your managers, your bosses, and just the whole production and programming crew.
Okay, and that's just a little slide of the closing statement of the paper that we submitted. And in the end, we just really, we achieved what we wanted to do. And of course, there's always going to be room for improvement, and we will continue to improve that and hopefully be participants, active participants within the community and grow with all of you. Questions? Sorry to take too long. I know we're dreading to come up here, but we can't get us off the stage now. Okay, good. I'm live. Kevin Hawkins from M Publishing at the University of Michigan Library. I was really interested in the slide you had about the heading introduction. You were talking about how this is handled in different journals, that each journal has its own particular style. And what really surprised me was that you wanted to, and in your old tagging, these were tagged uniformly and yet handled by a style sheet in the output, correct? But then in your new tagging, you instead wanted to encode them differently. That is sort of move this information out of the style sheet and into the Jats markup. Why did you choose to do that? Rather than rewrite the CSS to continue to handle the styling for each journal? I'm going to step on Rich's toes on this. Part of that decision was made so that we had more control of the data because, I can't think of a better way to explain it, some of our work process can be fragmented. If we put that decision in CSS, then again, we're going to be writing requests to someone who can change the CSS going forward or who's going to make sure that, oh, this journal is italic, this journal is all cap. By taking it out of that structure and putting it in our XML, where I know, you know, it is almost a little overstepping to have that kind of instruction inside the XML, but that gives us the control. It lets us say, okay, in this scenario, this is, you know, it's bold, italic, it has a labeled number on it, so wherever we send this off, you know, whether we're displaying it or it's going to come up on, I don't know, too terribly much about CSS. I don't know if it's going to display the same on Firefox as it does on Explorer. You know, we've hit those situations before. This just gives us the ability to always make that change. Yes? Yes, I mean, and just to follow up with that is that the slide where we had generated text, I mean, AIP, to help out production systems and processes and smooth things through, we had an awful lot of generated text and all those labels were generated. And what we wanted to do is put that in the content then. For a lot of the reasons that Jennifer just spoke to, in that when we distribute it out, it's up to whoever wants to display this, if you do whatever they need to do to display it however they want. They don't have to guess, well, am I going to have to generate Arabic for this particular journal? Am I going to have to generate Roman numerals for this particular journal? And then, quite frankly, in this day and age, even with the AIP journals recently, there's a lot of, there's always, they're always reassessing their displays and requirements for what they want in the PDFs and all that. So by removing it out of the generated text environment and putting the labels specifically in there or not putting it in there, it would give the control that Jennifer was speaking about in the final products. There was another question. Hold that thought. In fact. Hi. My name is Chris Maloney. I work for PMC. Excuse me. 
You mentioned that you use R-suite and I was just curious about that. I know the infrastructure is probably very important to your success. What do you think of R-suite? R-suite and I think it's based on Mark Lodgett. Is that right? Well, I can't speak in great detail to that because I grew up, hasn't really focused on it. I don't know. I can't speak to that. I don't know. Evan, if you wanted to make a comment on that. Evan Owens, AIP. I have that conversation offline if you like. We're in the middle of a major R-suite implementation. The core facilities are to have a real archive with version control, a master archive. So we have a production archive and then the publication archive. So version control. Also we're in a world where we have a lot of external systems, external typesetters, external hosting provider, external content enrichment vendors and so on. So that particular project is to provide the plumbing that connects all of that together. And of course we have a large number of external data feed customers. Rich mentioned this in his slides, but one of the real benefits of Jats and one of the reasons we chose not to customize Jats at all is we can then hand our data to any external provider and say this is Jats according to the spec out of the box. Don't bother us. Go read the spec. Wendell? Yeah. Hi. Wendell Pease, Pease Consulting Services. Thanks for the plug. So in addressing the training, you should also be thinking Mulberry because of course the Jats training and a lot of the, you know, so everybody here knows about Mulberry so I don't need to stress that, but in fairness. The thing that interests me, I mean I sort of have a question which is kind of too prong but they're related. One is the question that has to do with the fact that you guys were implementing a migration from one tag set into another and therefore you had to control over both ends. And of course a lot of people are dealing with something rather different in terms of production, you know, pulling data in front that's being edited live and so forth. And so the question comes up as to what lessons that you might take from your experience which would apply also in this other kinds of applications of Jats. The second part of it has to do with the thing that you touched on with respect to the whole training issue and the expertise that you developed internally which I think is actually really interesting because that is one of the trends that we're seeing is that you're not seeing such a strong boundary line between the technical people and the editorial people as we used to. Instead, the editorial people are getting a lot of technical expertise and leveraging it and that improves communications as well as control, right? So can you just say whatever might come to mind with respect to the whole thing about what kind of lessons that you took which you think would help others who are also working with Jats as a team working together? Well, I would think from the training perspective is that even if your, you know, AIP was fortunate to have, we have very talented contact technology programmers who we could work with and so on. But even if you need to use a consulting firm or you don't have those resources in-house, having the training just to speak the language that they do because nobody is going who you hire, it's going to know the data as well as you do. But if you're able to communicate in a language that's common with them and explain what it is, then it just makes it that much easier. 
So even if you are not writing a Schematron and you're not writing an XSLT yourself, understanding how it works, and knowing that those are more than likely the tools that someone is going to be using to do the work for you, is just a great benefit, because once you can communicate that, you can take advantage of every scenario. And I think that's what Faye and Jennifer had touched on quite a bit: we would think, well, we know this is the way we do it. And someone would come back, well, is it every single time you do that? What happens if it's not there? What happens if it's spelled differently? What if the attributes come up capitalized? What do I do? And it really makes you think about every single scenario that's there; you want to know if it's not written the right way. You could just have dropped text and you may never know about it. You need to make sure that every instance is accounted for, and only you know that, and being able to communicate that in a way that's almost neutral, that's not your organization's jargon but an industry-standard kind of speech, is just so much more helpful. I know you had asked the first part of the question, and honestly I don't recall exactly the details. Well, it was about what lessons you might think apply to organizations who are not migrating an old data set into JATS but who are actually maybe working actively to publish new material that's coming from authors, sourced from Word or whatever the case may be, something which, to come back to the chaos question, may be a little less stable. But yet, I mean, in that kind of scenario, do you think that this general idea about the technical level of expertise within the editorial group is going to apply the same way? Yeah, I just wanted to comment and then you can answer, you know, you have something too. The lessons learned for myself, primarily because Faye and I were so involved in the data analysis and creation of the specification: I know it's going to sound trite, but documentation, from any point, whether you're doing a migration or you're starting new. The JATS documentation online is wonderful, there are great examples there, but when we went to try and figure out some of the tags that were sitting in our data set, we just went, I don't even know what this is, I don't know what it's trying to be, it's a string of numbers, I have no clue; nobody was there who really remembered, and there was a lot of digging. So even from the very beginning, you might think it's minor, it makes perfect sense to you and everybody around you, it's still important to have it noted somewhere, to have that information in a central repository that anybody can get hold of going forward. There's a chain, there's a trail for people to follow to find out how you got from A to B. You had a comment?
And I think what it is, because we're in a situation just as you're describing: we have a production unit, an editorial unit, that processes the information on a daily basis, constantly, and they had been familiar with working in XML editors, working in the XML. Their role has changed, and they're looking at the content itself and making sure that the author's information is being preserved properly. What it will come down to is that particular documentation, and what's more important to me, from our standpoint or anybody's standpoint, is that you need to make sure you annotate it with your usage and all that included, because just because you have the JATS documentation saying A, B, and C, well, how are you interpreting A, B, and C in your particular environment, and making sure that they are aware of that. And one of the things that we would be doing to help enhance that is, when you're writing the Schematron, putting in comments about why we are doing this. If you pull those comments out, it becomes that external, extra piece of documentation that helps people in production: why are we doing this, why is this happening, what business rule is trying to be met here in this particular case. And we can break it down and translate it into terms that are more familiar to them and that they can use. And one thing I'd like to add: when it comes to XSLT, if you could say it in a pattern, if you could just say it, it could be written in the XSLT, and that's basically the philosophy that we went with. If you could find the pattern, then it's really pretty simple to write, so that's how we think. Evan. I'll add a little more context. Just so the audience understands, AIP no longer copyedits in house or typesets, so the manuscripts all go to offshore vendors and come back to us right now in AIP XML. The documentation that was produced as part of this project will of course be the basis for them returning JATS to us, but they can't do that until we switch our platform, because we can't currently host JATS online until we launch our new platform. So it's never easy, changing all the parts all at the same time. But in the end, documentation is absolutely essential, and the Schematron enforces that as well. Hi, Jeff Beck, National Library of Medicine. I want to thank you for this paper. I've already shared it in the preliminary proceedings with some organizations who are thinking about their data, as an example of, hey look, here are some people that did it the right way. So I want to thank you for that. And then I wanted to kind of go on the back of Wendell's question, and that is, when you were writing your transformation specification using XPath, did you realize how close you were to actually writing your own XSLT, and were you excited about that? Yes, and you know what, Jennifer and I now write small XSLTs for the company; writing the whole big one, you know, that's still yet to come. But whenever an XSLT needs to be written in the company, we are now the official XSLT writers, and that was something that we gained from XPath. It's just, you know, learning that. I mean, that was really great. So I think, yeah, it was one of those things where, like I said, I didn't know XPath. We had a similar kind of format that we would communicate needs in. It wasn't strict XPath. And then of course we started writing our spec in kind of plain English. And then XPath came in, so yeah, you know, by the time, by the end of it, you look at it and you go, hey, wait a second.
You know, I had had this, but you know, like I said, yeah, we do write XSLTs. There's no way I could have done some of the things that were done for this; I would have been crying. So we really needed a lot of information. But we enjoy it very much, yes. Yeah, that's what I have. We're having fun, yes. That's what I have Laura for. The things that make me cry. But I mean, the key thing for an organization is that these transformations used to be done in Lex or Perl or something like that, and that depended on when you had a technical resource, when they were available, when they could fit it in. Now it's more self-contained. Can I have one more question? Hi, Jenny Sherman, Nature Publishing Group. I'm just wondering how many people within your company were involved in this project and how long it took end to end? Well, primarily it was the three of us, and we had one dedicated resource to handle the main XSLT, which was really monstrous. And there were assorted other developers, maybe two or three, a piece of whose time was brought in to run the lists and do large amounts of data processing, because we work on a UNIX platform, so we needed their help to run the files and do all that. So I mean, it really boiled down to about four individuals spending most of their time doing it, even though we all had other responsibilities at the same time, because we were responsible for sustaining the whole publishing operation, with its questions and support, simultaneously. And we started probably August, September 2011, when we started to write the initial specs and all that. And really, because of other responsibilities and resources, it was about January of this particular year when we started to get full-fledged into it, and we spent most of the first and second quarter finishing up and running the tests on the files. So overall, about a nine-to-ten-month span. Another piece I'd like to add to that: at the same time this team was converting the XML, we were also restructuring all the content assets, the graphics, the packaging of the content, as we moved from our old repository to our new repository, and that has to tie in with the XML correctly, of course. So there was another set of projects going on as well. Thank you. Okay. Thank you.
The presentation will describe the challenges, benefits, and opportunities resulting from converting an archival collection of approximately 750,000 files to JATS. The goal was to migrate the American Institute of Physics (AIP) and member society archival collection from multiple generations of proprietary markup to an industry standard to create a true archive, all managed within a new, more controlled content management system. Integral to the process was the adoption and application of the XML technologies XSLT, XPath, and Schematron to transform and check the content. Sounds straightforward, doesn't it? Perform a thorough document analysis, map out the transformation rules, convert the data. But is it? Have you accounted for all historical variations, generated text, metadata, and nomenclature variations on XML file assets? Besides your core conversion, don't forget about reuse for other products, edge cases, online presentation, distribution channels, and staff training!
10.5446/30570 (DOI)
Hello. As I was just introduced, my name is Damien Hess. I'm very happy to make this presentation on behalf of my co-presenters and authors, Chris Maloney, Audrey Hemlers, who are both at NLM. And as you heard, I'm Damien Hess at Avalon Consulting, and I am going to be talking about the DTD analyzer, which is a tool. In fact, I'm going to keep referring to it as the tool because saying DTD analyzer gets a little difficult. The way the DTD analyzer works or what it does, it will take any DTD and it will convert it to an XML representation. You can ask what good is that, but I'll get to it. It is an open source project, and we've got the link to where it's hosted on GitHub on the screen, and we encourage everyone visit the GitHub site, download the project, participate, and let us know what you think, and we can make it better. And it is written in Java, so it should be able to run on any operating system. And we've got the command to invoke the basic function of the tool on the screen. You simply say DTD analyzer, you give it the location of a DTD, and you specify a name for this output file, which contains the XML representation of the DTD. Now, what can you do with this representation? We have determined that you can do a lot with it, and we have a couple of use cases where we have found it very useful. First of all, you can analyze your DTD structure and content with the DTD analyzer. As you saw yesterday in the sneak peek, you can also create documentation with the DTD analyzer. And finally, but definitely not least, it's very easy with the output from the analyzer to create scripts that can either manipulate the DTD itself or that can change XML files that conform to your DTD. So we're really talking automated conversion scripts being driven with this tool. I've got examples of all three of these use cases that I am going to walk through during this presentation. And let us start with an analysis use case. This is a pretty common need. Let's say you've got two versions of a DTD, and you want to compare them. Let's say it's the NLM DTD, the publishing version number three, and you've got the brand new NISOJATS DTD, and you're wondering, how are these things different? What's changed? You know, what elements have changed? What elements have been moved? Which elements have been added? You can do that with the DTD analyzer. You can run a comparison report. And we've got the code for this report as part of the project. And actually, this is a static shot of the report. I remember the commands here. Ah, good. We actually have a live version of the comparison report. So as you can see, the output of the comparison is in HTML. So you can view it in your web page. Oh, good. It's big enough to actually see. At the very top of the report, we list how many elements have changed. You can see right there in the summary, 204 elements have changed, going from NLM to NISO. No elements have been removed, but 11 elements have in fact been added. In fact, if you were to jump down, you could see the listing of what's been added. So you have things like AF alternatives, and you have its content model and its attributes. If I go back up to the top, you can scroll through and see a listing of all the elements that have changed. And so you can see, for example, the agree elements, the attributes have changed. The old attributes from NLM are listed on the left. And all of the new attributes, or rather, all of the attributes that are now in NISO are listed on the right. So how do we do this comparison? 
It's actually very straightforward. Since we've got an XML representation of both of the DTDs, it's very easy to write an XSL that simply steps through both of the XML documents and says, okay, what elements are in this DTD, but not in that DTD and vice versa, and, okay, for every element that's in both DTDs, let's compare their attributes, are those the same, and let's compare the content models. And whatever finds a difference, we can output it in the report. Now, this comparison, I will be the first to admit, is not the most sophisticated in the world. So, for example, if you look at the abstract element, it's listed as having a difference, and we can see the content model of the NLM and the NISO. And you can see that, yeah, the content model technically is different. Or maybe you can't, it's pretty small. What you've got are some extra parentheses. Now, if you speak DTD, you know, actually, that's not significant. They're actually the same content model, but the XSL is simply doing a string comparison, and the strings are different. So it could be more sophisticated. But even though it's only doing this very simple comparison, I've used this tool for a long time. Probably for about five years, I have found it really useful in any kind of conversion projects. It helps me pinpoint the elements that I really need to look at and as a human being do some analysis on. Let's move on. Oops, wrong one. What does this XML representation look like, this thing that's driving these comparisons? This is a high-level view of it. For every DTD, we create this document. The root element is declarations. And then we've got information on all the elements inside the DTD, all of the attributes inside the DTD, all of the parameter and the general entities in the DTD. And we can actually open that up because I've got an example here. This is the output from running the tool on the journal publishing DTD. If I expand the elements section, we can see a listing of every single element in the DTD is represented here in XML. So every element gets an element named element. It's pretty clear if repetitive. You can see the element name. We give information on where the element appears inside the DTD. So for example, you can see that string date is the 54th element in the DTD. We give you the location of the element declaration in the DTD, both in terms of the file name, the public ID for that file, and the line number of the declaration. And most importantly, we capture the content model of the element. And we do that in a couple of different ways. We've got a string form that simply shows the declaration as it appears inside the DTD. But we also parse that string apart and we can give you a structured content model. And this is new. Chris Maloney, excellent work, just actually put this into the DTD analyzer. And you get things like a listing of all the child elements, whether or not it's a sequence, what the cardinality is of all the elements, if there are grouping, there's choices. So that gives you a really rich look at what the content model actually is. And you know how I said before in the comparison, it's not that sophisticated. Well, with this kind of a structure, we could probably do a very sophisticated comparison of how an element has changed from one version of a DTD to another. Just haven't had a chance to try applying that now to the comparison report. So that's definitely on the list of things to try and improve. Oh, I should actually just jump. 
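As a rough sketch of the stepping-through-both-documents approach just described, a comparison stylesheet might look something like the following. The names used for the representation here (declarations, elements, element, its name attribute, and content-model) are approximations based on the description above, not necessarily the tool's exact output vocabulary, and the file name is a placeholder; the real comparison stylesheet ships with the project.

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- path to the other DTD's XML representation; placeholder value -->
  <xsl:param name="old-file" select="'nlm-publishing-3.0.xml'"/>
  <xsl:variable name="old" select="document($old-file)"/>

  <xsl:template match="/declarations">
    <comparison>
      <!-- elements that exist only in the new DTD -->
      <xsl:for-each select="elements/element[not(@name = $old//element/@name)]">
        <added name="{@name}"/>
      </xsl:for-each>
      <!-- elements in both DTDs whose content-model strings differ -->
      <xsl:for-each select="elements/element[@name = $old//element/@name]">
        <xsl:variable name="other"
                      select="$old//element[@name = current()/@name]"/>
        <xsl:if test="string(content-model) != string($other/content-model)">
          <changed name="{@name}"/>
        </xsl:if>
      </xsl:for-each>
    </comparison>
  </xsl:template>

</xsl:stylesheet>

As the speaker notes, a plain string comparison of content models will flag harmless differences such as extra parentheses; the structured content model in the output is what would make a smarter comparison possible.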
I don't want to get too in the weeds on how the DTD is modeled. I'll just jump down and just quickly show you how attributes are captured. So every unique attribute name gets captured. And then we show where that attribute is used, which elements, and then exactly the declaration of that attribute within the context of that element. So you can get kind of lost in all this data. So I'll just close it up. Great. Because you're representing your DTD in XML, you can now do all the good stuff that you are used to doing with XML documents. You can apply all the tools to your XML to analyze that DTD now. You can use XPath. So if you want to know how many elements are in the DTD, you just say, count all the elements. If you want to know, this is Audrey's example, how many attributes have multiple declarations, you can write an X query. So what this is doing is, you know how you can have like an alt attribute and it can appear on lots of different elements. And in general, you kind of want the same declaration to be applied for the alt attribute to every element. So it should be required. It should be like an enumerated list. You don't want to make a mistake in your DTD. And actually on some elements make it optional and like just see data and on other elements it's required and it's enumerated. You don't want that. But it's easy to make a typo when you're creating your DTD. Hard to find those problems. But now because everything is in XML, you could write an X query statement like this which actually just loops over all of the attribute declarations and checks them. So it helps you maintain your DTD. Structured comments. So the DTD analyzer, the tool is obviously capturing structural information about your DTD. But it can capture more information than just that. Specifically when the tool runs on the DTD, it looks at every single comment that you've entered into the DTD files. And if you insert special delimiters into those comments, if you structure your comments, the analyzer can read it and extract that information and create what you see on the right. So on the left you create one of these structured comments. You say to till this, the name of the element, and then the text of your comment. When the analyzer sees that comment, it realizes, oh, this is an annotation for the split element. And it will grab that information and create what you see on the right which is an annotations tag with an annotation element inside of it. Now you notice we say annotations with a single annotation. So that implies that yes, you can have more than one annotation for an element because it would be kind of boring if you only had one annotation. You can actually structure your comment to have sections. You can split it up to say I've got a model section, a tag section, and an examples section. And when you create your annotations, we get a model annotation, a tags annotation, and an examples annotation. And as you can see, the analyzer is doing a little bit of magic behind the scenes. Certain section names have significance to the analyzer. So for example, if you put a tags section in your comment, then the analyzer will take all those words and wrap all the words in a tag element. The other bit of magic that's probably even more obvious is it's taking the text and turning it into XHTML. And for that, we are actually using a format called markdown which is kind of like a wiki language. And it will do things like if you put asterisks, it will turn those into bulleted lists. 
If you indent things a certain way, it turns them into code blocks. So that gives you the opportunity to sort of create even better representational markup inside your annotation. Of course, you may not want XHTML in your annotations. And so you have the option to turn this off. When you generate your XML representation, you just say no markdown. Don't convert anything into XHTML and it won't. So that gives you a lot of control over what your annotations look like. Now, what can you do with this? You actually had a sneak peek of this yesterday. You can take all that structural information and those annotations and you can create really nice documentation directly from your DTD. In fact, it's so good or it's so important we've created a sub tool as part of the project. Well, I didn't do it. That was Chris. You just give the command DTD documenter. You tell it where it should output all of the documentation and then give it the location of your DTD and it will create documentation that looks like this. So this is an example DTD that Chris marked up for us and put into the project and you can download it and take a look at it. We've auto generated the documentation for it and you can see you actually come first of all to like a home page, a landing page about the DTD with like all this text about the DTD. All of that text was inserted into the DTD as an annotation. On the left hand side, we've got all of the elements that are inside the DTD. We have all the attributes inside the DTD. If I click on one of the element names, we go right to the element model. You can see its name, its content model and then we've got this text. For bananas, make a bunch and so do many more. That was an annotation that was attached to that element that was automatically extracted and put into the documentation. I also mentioned that some annotations can be tags. We actually list all of the tags here that appear in the annotations. If you click on one of them, you navigate to which elements are tagged with that word. I think this is really cool because when we were all maintaining our DTDs, we were always writing all these comments into the DTDs, right? But no one ever saw them because only you had access to the DTDs. For all of your end users, you had to create all this additional documentation. You had to create Word documents or PDFs or websites with all that information to explain to people how to use your DTD. Well, that's multiple files to try to have to maintain. You have the option now if you want to, you can embed all of your documentation directly into the DTD and then you just run a single command and you can go from your DTD straight to your documentation because it will extract all of the annotations that you entered. In fact, it is so nifty. We have lots of options and you would have to go to the GitHub site to see what all the options are. But here are some of them. When you generate your documentation, you could say, you know, dash dash roots and then give a list of elements like foobar and baz. And when the documentation gets generated, it will only contain the documentation for foobar and baz plus all of their descendants. You can restrict what's shown in your documentation. You also have the option to say exclude. You could exclude a list of elements or you could say exclude everything that's prefixed with MML. So don't show me any of the math MML elements in my output. I only want to see everything else. You also have the option exclude except. 
So exclude all the math MML elements except the MML math element. Show that in the documentation. You also have options. You can replace the CSS that's used to style the HTML. You can include a different JavaScript file that you've written so you can control the behavior of the documentation when it's displayed in your browser. And of course, you always have the option, you've got the XML with all the structural information about your DTD plus all the annotations from the DTD. You can write your own process. You could write your own XSL or some other process to generate the exact documentation that you need. Like, for example, maybe you need things output in PDF. You've got the raw XML. You can go ahead and create your own process to do that. Well, as great as documentation is, sometimes it's not quite enough. And here's an example of that. Let's say you've got a DTD like this on the screen that defines a section element that contains other sections and also has a level attribute that lets you set at what level that section is. And in your documentation, because we're using the DTD analyzer, you can say how this element and that attribute should be used. You can explain, you have to nest your sections properly and you can even give an example. You can say, look, section ones contain section twos. Don't put a section one inside a section seven. It's not allowed. The only problem is, is that enough to prevent people from misusing your tags? And anyone who's worked at XML knows, no way. People don't always read the documentation. And even if they have the documentation, they may not understand it. What you'd really like to do is restrict what people can do with your XML. The normal answer to that these days is to do something like create a schema tron file, which is a framework that lets you control the usage of your XML. And so in schema tron, you can write a rule for every section element. And in that rule, you can say, you know, restrict it so that every section's parent's level has to be one less than my level. You know, so that's the rule. And if it's not right, you can insert or output an error message. I'm using a report here. I might have, probably should have used an assert. But anyway, it's all the same thing. You get the idea. Now, the problem with having a schema tron file is, now, again, you've got multiple files. You've got your DTD over here with all the documentation in it. And now you've got all your schema tron files, you know, over here, which are going to enforce the usage. Wouldn't it be nice if you could just insert your schema tron rules directly into the DTD? Because you can. If you want to, you can insert a schema tron section into your comments and then insert the specific test that you want to use to enforce the usage. Okay. Then you say, what do I do? How do I still need a schema tron file? No fears. We've got, or I should say Audrey has authored, an XSL that will go in to the DTD, extract all of the schema tron annotations and create a schema tron file that you can then apply to your normal validation processes. So, in fact, we've got a single command that can go ahead and execute this. You simply say, DTD analyzer, give it the location of your DTD, specify the DTD schema tron XSL and the name of the output file, schema tron SCH. You can then go directly in one command from your DTD to the schema tron file that's going to enforce your usage. I like this a lot because I'm kind of a programmer and a bit of a geek. 
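For reference, a minimal sketch of the section-and-level rule just described might look like this in Schematron. The section element and level attribute come from the example DTD in the talk; this version uses an assert rather than the report the speaker mentions, and it only covers nested sections.

<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="section[parent::section]">
      <!-- a nested section's level must be exactly one more than its parent's -->
      <assert test="number(@level) = number(parent::section/@level) + 1">
        A level <value-of select="@level"/> section may not sit directly inside
        a level <value-of select="parent::section/@level"/> section.
      </assert>
    </rule>
  </pattern>
</schema>

Embedded in a structured comment in the DTD, a rule like this can then be pulled out into a standalone Schematron file by the extraction stylesheet described above.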
So, I like the idea that you can go directly from a DTD to human readable documentation, human readable output, or you can go directly from your DTD to machine readable output. In this case, it's a schema tron, but that begs the question, why can't it be something else? Why can't we go directly from the DTD to something like an XSLT? Specifically, can we go from the DTD to a conversion script written in XSL that will then convert content that conforms to my DTD? Can we do automated conversions? And the answer is yes, you can. All you have to do is create what's shown in red, a magic scaffold XSL that will produce another XSL. Now, that tends to give people a headache, an XSL that produces another XSL, but it's perfectly possible to do, and in fact, we've written it. As part of this DTD analyzer project, we have a scaffold XSL, and it will generate a conversion script based on your DTD. But what is that conversion script? Well, okay, now I have to own up to the fact, it doesn't do a lot, and it's not supposed to. What it's supposed to do is serve as a basis for you to continue development on that script. It's a shortcut. It helps you along. Specifically, what scaffold XSL does is it reads through the DTD, and for every element in the DTD, it will output a conversion template for that element. And I grabbed one of the elements to show you. Pretty small. This is for the article element inside, looks like the journal publishing DTD. And this is a good example because it is exactly what every other template looks like. All the templates look exactly like this. It gives you, in the header comments, it shows you the name of the element that's being transformed. It also shows you the content model for that element because of course, we've got access to all the information in the DTD. So that is very handy because when I'm writing conversion scripts, I always forget, like, oh, yeah, what attributes are in this element? What, which are required? Which are optional? You know, I just always have to stop and go and look it up. Now I don't have to. I can just keep focusing on my script and there all the information is. In addition, the body of the template, all this does is it just copies the element out unchanged. Now of course, I don't actually want that. I want to change that element to something. So now I, as a programmer, can go in though and then just, you know, tweak this template to do what I want. Or think about this way. What if I went through the DTD and I annotated it? And I wrote in an annotation, this element should become this other element. I could then go and adjust scaffold.xsl so that rather than output the element unchanged, it could output the element that I told it to. Now what if, in fact, it wasn't me doing the annotation, but maybe it's the analyst. Maybe the DTD architect should go in as the analyst and do all the annotation. And then the programmer can focus on writing a script that follows the rules spelled out in the annotation. So then you could get a nice separation of responsibilities. So that's the advantage of scaffolding. Scaffolding is the automated generation of code. And in this case, huge advantage, reduces typing. I don't have to go in and type a rule for every single element, which I have to do anyway, because the code is generated for me. No typos. I won't overlook anything. It's all done automatically. It can enforce coding conventions. They all look exactly the same. 
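To make that concrete, a generated template of the kind described might look roughly like the sketch below for the article element. The header comment paraphrases the information the tool pulls from the DTD, and the exact layout the scaffold emits may differ; the content model shown roughly follows the journal publishing DTD.

<!-- ===================================================== -->
<!-- Element:       article                                 -->
<!-- Content model: (front, body?, back?, floats-group?,    -->
<!--                 (sub-article* | response*))             -->
<!-- ===================================================== -->
<xsl:template match="article">
  <!-- generated default: copy the element and its attributes through unchanged -->
  <xsl:copy>
    <xsl:copy-of select="@*"/>
    <xsl:apply-templates/>
  </xsl:copy>
</xsl:template>

The programmer, or an annotation-driven scaffold, then edits only the body of each template, while the uniform header and structure stay the same from element to element.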
Whatever I want my coding conventions to be, my indenting, my, how my header comments should look, it's done for me. We get uniformity of code in case you give the project to other people to work on. And of course, with auto generation of code, it can be driven by annotations. Again, helps you further along to automate the process and you can start dividing up the responsibility of some people working on annotations and analysis and other people working on implementation. Now, all of that was kind of theoretical. Let me give you an actual example of a real problem that I really did solve using the analyzer, or at least a version of the analyzer that I had, and a version of scaffold.xsl. It's kind of a weird example, but it's real at least. So I used to work at a company and we had a big conversion project. Specifically, we had about 30 years worth of book content and these were big, fat directories full of people's names and phone numbers and addresses and emails. And we had about 30 years of these big, fat directory books that we wanted to convert to XML. So we were boxing them up and shipping them off to a conversion house and they were scanning them and turning them into XML. Now, with that kind of a structure, the data to tag ratio is not great. There's little bits of data and lots of tags. And we realize we're paying by the kilobyte of output. And we realize, you know, if we just shorten up our tag names, like make them all two-letter tags, I'm serious, we could save a lot of money. I mean, I think we saved at least 20% of our budget in conversion just by reducing the tag names. So what we needed, though, we needed a script to automatically shorten up all of our tag names, to minify our tag names. And even more important because those were hard to read, we needed a script that would automatically expand the tag names back to their normal state. That was really easy to do with the analyzer. I took the scaffold XSL, I added one function. All the function did was just calculate a short name. And then I did the opposite. I just wrote another, just made another copy of it and matched on the minimize name and output the regular name. Two scripts. It took me 30 minutes to write them both. In fact, I know that's true because I just rewrote them the other night for this presentation 30 minutes. I could run any DTD through it and I could go from a minified form of the XML to a restored form of the XML automatically. And I could go from the expanded to the minified automatically, guaranteed no data loss. It's just automatic. That's pretty nice. And even nicer, this is really going crazy, we also auto-generated a minified DTD. So you could take your minified, weird two element tag names and just validate it. And I auto-generated this DTD in exactly the same way. I just wrote a special XSL at output the minified form of the DTD. Now this can seem kind of, by the way, the two letter tag names base 26. Anyway, that's all I'm going to say about that. This minified version of the DTD and the scripts can seem kind of esoteric. Okay, that worked in one case. But then I suddenly realized yesterday, actually this could be generalized a bit more. It's exactly the same thing. Like what if you want to have multilingual tags? So rather than outputting all your tag names, this is the authoring DTD I think, rather than outputting all your tag names in English, it's the same thing. 
You just annotate your DTD to give an alternate tag name and you can automatically output all your tags in French or Chinese or Japanese, whatever people want. And then you can even auto-generate the DTD to validate it. And then you can also just change it back to English whenever you want to. I don't know. It seems kind of cool to me. We'll see. Maybe we'll work on this as a project. Future possibilities. Well, that's one. You could do multilingual tagging. Other things we could work on. Chris has got a lot of ideas about this. Maybe I can talk about this. We could probably do tighter integration with the JATS documentation that's available. Maybe one way of doing that. Maybe we could get annotations inserted into the actual official released version of the DTD. That's one way of doing it. But then, of course, the problem with that is if you ever make changes to the documentation, then you've already distributed it to all these other people and they don't get the changes. So maybe another way of doing it is we could change the DTD documenter so that when it generates documentation for you, it goes out to a public website and pulls the annotations down. That's a possibility. Almost done. Obviously, I mentioned we could use the structured content model to create a better DTD compare. That's an option. And definitely what I was talking about with multilingual scripting or automated conversion, find ways to leverage annotations, good patterns that we could suggest to other people to automatically scaffold and script out our conversion projects. And, of course, we want to promote community use. And toward that end, here are the links. Go to the GitHub project page, join the mailing list, and try it out. And if you find bugs, someone, you can report them and someone will take a look at those issues. So thank you very much. We think we'll open up the questions. Bruce Rosenblum from Enera. This question is almost directed more to Jeff and to Laura. But in watching the presentation, it strikes me that there are different profiles for Jats, one of which is sort of the public tag library that we all work with online. But another is the profile of Jats is required for PubMed Central. And would it be possible using their tools to have for those people who are interested in submitting to PubMed Central, an annotated version of the tag libraries that have the additional information you need to know about what are the PubMed Central rules, particularly those enforced by the style checker? So that because one of the problems I run into is I often hear from customers, gee, I'm getting this error from a style checker I haven't seen before what happened and I end up writing to you guys and I go, oh, you've added a new rule to style checker, but there's no documentation on that rule. So it would be great help if that could be integrated in possibly using these tools. We will not integrate PMC rules with the Jats documentation as generically. We will not. Clarification. I'm not looking for generic. I'm looking for a parallel set of documentation that's documentation of the PMC profile of Jats. Which is the tag tagging guidelines. Can't you integrate it on an element by element basis using these tools? It would make the life of developers who are preparing content for deposit to PMC much simpler. It would be a significant time investment and I don't know that we have the resources at this time. It's interesting. Oh, it's absolutely interesting. 
But it would be a significant time investment, because we have built the tagging guidelines, and if we did what Bruce is suggesting, we would have to rebuild the tagging guidelines in a completely different, new way. It's absolutely something we could do with this tool, but can we do it now? No, we don't have the time or the resources at this moment. The fact that it's interesting and something you could do is great. We'll pray for a future improvement there. Yeah, it's definitely interesting. And I would like to say it makes a really good case for what he was talking about at the end, about bringing in the annotations from externally as opposed to putting them in the DTD. You can't put PMC in the DTD, but they could certainly come in externally. That's right. That was my impression. It could be layered on top of the core JATS DTDs, potentially. By the way, I should have started out by saying, yes, the tools are really, really cool. Here's my use case that I want to see. Yeah, fortunately Audrey is on our team. She's got nothing but time. Wendell Piez, Piez Consulting. So have you guys done a heads-up comparison to other tools in the industry that do similar things, so you can give us a feature profile? I'm thinking of NekoDTD, which I've used for years to do this. So that's the obvious candidate. There are some older ones as well. It would be really nice if we could see, okay, this is what this tool does, and this is the XML profile you get out of that, the features of it versus this one over here, so I could make a fast decision on that. Because of course, the first thing that happens is you start investing development cycles in the cool stuff you build on top. So you want to know early on where the feature set is. There is some information about that in our paper. We compare it to a couple of different DTD tools. We certainly haven't done a universal analysis of other tools, looking for gaps and differences between those. Yeah, we definitely could do more. We haven't done an official analysis really closely looking at all the different tool sets. I've certainly looked at Trang, and I'm familiar with the way that it converts DTDs into XML. And I always found that too much information for what I wanted, but as for tools that actually do the analysis for you or that do comparisons, I haven't had a chance to look at any others. Right. Well, the particular functionality that you need for something like this, Trang actually doesn't give you, because Trang does a straight conversion of all the parameter entities into patterns and so forth. And what you need is the breakout. And you're giving us the breakout in some form, which is obviously the right thing to do. But even if you didn't have a feature set, a schema to tell you what is in your model to compare it to, you know, what comes out of it. I mean, I think the obvious candidate here is the NekoDTD tool from Apache, which does a similar thing. And it'd be great to see what that is. So, I brought up the wiki page on GitHub of similar tools, and it's a wiki, so go ahead and add any application that you want on there. Yeah, NekoDTD is there. So you're inviting me to do the work that I asked you to do. Yes, that's why it's on GitHub as an open source project: we're looking for other people to participate. Well, if you have any information to share. Hi, I'm from Nature. Just a quick question about the Schematron tool, the DTD-to-Schematron tool. You talked about it. Can this be used to generate tagging guidelines? Yes.
So if you provide the DTD and the Schematron rules, can this be used, I think, to generate tagging guidelines documentation? Yes. So I think that would be a good use for it. That was how it was intended to be used. Could you tell us how, because you haven't described it in your presentation? Well, it does require you to be able to generate your own Schematron rules that will enforce your tagging guidelines. Really, it's a way to combine Schematron and your annotations and the multiple things that you need when you're maintaining a DTD into one easy-to-update file. Yeah. Okay. Okay. Thanks. Thank you very much. Really, really cool stuff. But I would really urge you to explore very hard the possibility of bringing those things in from a separate file and marrying them to the DTD. Yes. Because we're all using the same JATS DTD here, and I bet we're all using different Schematrons. Yeah. And actually, what you can do is, with the annotations, at the beginning of the annotation comment you specify what element it's attached to or which attribute it attaches to. So you can actually put all those comments in a separate file. That's my suggestion. Yeah. Layering. Layering is wonderful. But you could create one top-level DTD that contains those and pulls in the other DTD as an entity, as a general entity. You could. Yes. That would work. But I'm suggesting you really might want to look into layering here very carefully. I thought that's what I just described. Okay. Thank you all very much.
This paper describes an open-source Java/XSLT application that allows users to analyze and manipulate XML DTDs. The application can be used to generate reference documentation, create reports comparing two DTDs, convert DTDs into Schematron, and to automatically scaffold conversion scripts.The DtdAnalyzer tool has been used at the National Center for Biotechnology Information (NCBI), a part of the National Library of Medicine, for a number of years, to help maintain the suite of NLM DTDs, and to develop new conversion applications. NCBI has recently decided to release it as a self-contained application and to move its development onto GitHub (https://github.com/NCBITools/DtdAnalyzer).The heart of the tool is a Java application which converts a DTD into an XML representation, including all the element, attribute, and entity declarations contained inside the DTD. Additionally, DTDs can be annotated with specially-formatted comments which are recognized by this converter, and these annotations are delivered in the XML output.The resulting XML can be transformed to create many useful outputs, and a basic set of those transformation stylesheets, as described above, are included with this tool.
10.5446/30571 (DOI)
When I was tasked with introducing this conference, I of course read through the program and the papers and thought, wow, there's a lot of stuff going on. And then I thought, oh my gosh, how do you introduce something where there's so much stuff going on? It's kind of chaotic, but not terribly chaotic. So I'm like, let's call it evolutionary chaos and see what people think of that. So in order to get to this chaos piece that I think is coming our way, let's go back to the beginning. In the beginning, there were STM journal publishers. And what they were doing influenced the Jets. A little while later, you have archives and aggregators getting involved, and what they're doing influences the Jets. And of course, what they're doing influences the publishers influence the archives and the aggregators, and then they all start influencing each other. And then the Jets actually starts influencing the users. What is in the Jets? This is what you do. And not too long after that, we see it's not just the journal, the STM journal publishers. We see people putting books into Jets and even some magazine stuff into Jets. And it's like, wow, that's a lot of stuff in the Jets. And we see it's not just the Jets. We have Jets extensions and customizations and subsets. And I'm including these in the Jets because really, they're part of the larger Jets picture. Without them, it'd be pretty boring, quite frankly. And when I'm talking about, if we look at the things that we've heard about just in this conference, in 2010, we heard from the American Chemical Society, Portico, American Geophysical Union, and the TaxPub group. They all had customizations, extensions, or subsets. In 2011, we heard from Anodham and Adapon who had subsets or extensions. In 2012, we just heard a little bit, well, I told you what the bits was. We'll hear a bit more about it during the book panel. And we'll also hear from folks at PMC who talk about their journal front matter extension to the Jets. So really, the Jets isn't just the Jets anymore. It's a lot more. And of course, it's all influencing each other. And of course, it's influencing these guys, and these guys, and these guys. And really, it's a great big web of influence, of practice and influence. I really like simple pictures, and this is a little bit messy, so let's simplify. We have a group of users and user communities, and they're all influencing each other. And then we have the Jets, the extensions, customizations, subsets. They're all influencing each other, and those groups are influencing each other as well. It's nice, it's neat, it's predictable. Then this happened. And this is about where we are now. So this is where it gets into my speculation. Hence the road ahead part. Standardization is very important. I don't think anybody would argue that point. But standardization can also do something a little unpredictable. It can introduce a chaos factor to things, which seems counterintuitive, but it does. And this is why it does. If this is the press release from the Jets, or the Jets press release, sorry. And there are a couple quotes from Jeff in this that I'd like to point out. The standardization process brought awareness of the Tag Suite to a larger and more varied audience. We expect this wider audience will find uses for the Tag Suite and new applications beyond its traditional uses in publishing and archiving. And that's the chaos, least I expect it to be. So let's look back at the kind of evolution we've seen in the Jets. 
The Jets adjust models to accommodate what's being done. A great example of this is the Media Tag. It wasn't actually in 1.0 of the NLMDTDs. It's there now, don't worry. We didn't take it out. But it wasn't there, and it was there in 1.1. But in the very beginning it wasn't. And it wasn't entirely necessary in the very beginning, because in the very beginning, ten years ago, ten plus years ago, we were capturing what was done in the print because that's what publishers were doing. They were printing. You don't print a video file. You don't print an audio file. So you didn't necessarily need a Media Tag. Now ten years ago was about when it started becoming very popular. So Media was in 1.1. What about the permissions tag? That didn't show up until much later. And ten plus years ago, you had articles that were published with a copyright statement. And at the journal level, there was probably a reprint statement. For reprint permissions, contact the Copyright Clearance Center. And that was to the whole journal. There was no individual article, license, or permission statement. It wasn't done. It didn't need to be in the Jets. But now, now it needs to be in the Jets. You have entire publications who follow an open access model. And if you're delivering just a single article from that publication, you absolutely need to have a permissions tag. You lost without it. And it's not just the elements. It's the attributes. And this attribute was, of course, in the Tag Suite in the very beginning. But now it's all over the place. We will hear from a group from Japan in the next two days who will tell us what they're doing with this attribute. And it's really remarkable. But when the Jets first started, you didn't really have bilingual publication the way we see it now. You didn't need XML Lang on everything. You didn't need to capture alternatives of things. You didn't need it. So it wasn't there. But now we need it. The other way that the Jets evolves is it adjusts its models to accommodate what users know they should be doing. And what am I talking about? Contribute. So a lot of you have probably heard about ORCID and the whole idea of individual contributor identifiers. If you're searching for my paper on the EPUB to Jets conversion, you want to search on Laura Kelly me, not Laura Kelly the linguist or Laura Kelly the biologist. And there are a lot of Laura Kelly's out there. So the idea of giving me a unique identifier, it's of course, it's being worked on by other groups, groups outside of Jets who are doing a much more thorough job at this. But we knew it was coming. And so we gave you a way to tag it when it is being used consistently and regularly. But notice that what we're doing is we are accommodating things that are either already happening or we know will be happening soon. There's no predictive behavior here. It's pretty much we see it, we know it's coming, we'll let you do it. Let's take a look at the evolution in publishing. Data is king. It has always been king. And back in the day, it was enough to have data. You had data, you published it, you were good. So what you did with the data mattered. And that was enough. But it's not enough anymore, is it? You can't just publish your data and be done with it. You have to send it here and there and everywhere else. You have to do X with it and Y with it and Z with it. You have to do stuff with your data. So this evolution is that what else are you doing with your data? So it's not just what are you doing with your data, it's what else. 
And that, of course, is the big question mark. I don't know what else you're doing with your data. You don't know either yet. Everything you know about is where we are now. But what's coming? What is this wider and more varied audience going to suggest that you do with your Jats data? What's the what else? So this is where the picture left off. This nice, neat, organized, clean picture of one group influencing another. And then we had to go and standardize things. So we have these new user groups and new uses. And I have these separate because I believe that new uses will be coming not just from new users and new user groups, but from our existing users. People invent new ways to do things all the time. We're going to hear about a couple of them over the next few days. And some of them are very, very exciting. But I haven't linked these to anything because I don't know what's going to happen. And you probably don't know what's going to happen either. And if you do, I'd like to talk to you and get some numbers. But this standardization, even though by its very name, it implies that it's making things more uniform. And it is, but it also has the side effect of making things more chaotic because you do have that wider audience. You do have those new user groups and you do have those new uses. You have things that you're just not going to see coming. So in my abstract for this paper, or this talk, sorry, not a paper, I asked the question, how does the jets continue to evolve without descending into chaos? The short answer, of course, is that it doesn't. Chaos is unavoidable. At least I believe it is. Unless you want things to be really, really boring. I don't want things to be really, really boring. And I doubt anyone in this room does either unless I haven't met you yet. But the fact is that chaos happens and some really great things can come from chaos. But the connotation of chaos is, of course, a negative connotation. And I don't think it deserves that so much. There are a lot of us, myself being one, who are absolute control freaks. And I need to know what's going on and I need to know what's happening. So chaos is really scary. So when I look at the future of the jets, honestly, I'm a little scared. Not because, oh my gosh, it's going to die out. That's absolutely not the point. It's, oh my gosh, what else are you guys going to come up with to do? What else am I going to have to write code for? What else am I going to have to accommodate? And it's frightening, but it's also exciting. So instead of taking this negative connotation, descending into chaos, let's look at it this way. How does the jets continue to evolve, making the best use of the chaos? Except that it's happening. I think that we all know that there are a lot of us out there standing up here who are very reluctant to accept change and to, well, not just accept it, but to embrace it. To really look around and say this incredible wave of change that's going on is going to affect me. And it's going to make me learn new things or do new things. And it's going to be a little chaotic. But if we accept that it's happening, it's just like any sort of self-help program. Accept it, own it. So own that the chaos is there, that it's happening. Don't be oblivious to it. Don't stick your head in the sand. Don't pretend that it's going to pass you by, because what's going to happen is if you try to do that, is it will pass you by, but so will everything else. 
The second thing I think is very important is to utilize the available resources. The thing about the JETS community that I think needs to improve is its sense of community. We are our own best and most underutilized resources. How many of you out there subscribe to the JETS list? Okay. And no, keep those hands up. How many of you have ever had a question about the JETS that you sent to the JETS list? Oh, they're like three of you left. Okay. So how many people in the audience have ever had a question about how to deal with something in the JETS? Every hand that is up now should have been up when I asked who sent the question to the JETS list. So consider this your chastisement for the conference. Subscribe to the list. Use the list. Remember that the people in this room are your community. They're your resources. Now I know that in publishing there's a lot of proprietary information and you can't always share the specifics of what you're doing. So you can ask questions and you can get feedback and you can throw ideas out there. And I think that you should because if you're in this room you have a vested interest in the JETS and quite frankly I believe it to be your responsibility to do these things. If you are not taking an active role in the community then I think you're doing yourself as much a disservice as you are the community. So please subscribe to the list. If you have a question or a comment or a thought about something that could be done better, put it out there. And while you're here in the next few days talk to each other. You know this is a great opportunity to meet people who are not just using what you're using but they're doing some of the same things that you're doing. And more importantly they're doing things that you're not doing that you might be able to benefit from if you knew about them. So talk to each other. Get to know each other. Take business cards if you have them, swap email addresses, whatever it is. But remember that you are your own best resources for the future of the JETS. And that unfortunately because I'm 15 minutes shy is all I have. Thank you very much. Any questions? Are there any questions, comments, rebuttal for the chastisement? Nothing? Okay. So we do have one. That's my button. Todd Carpenter, executive director at NISO. Thank you. And thank you to the entire community for all the work that you've done on JETS. One way that JETS will continue to evolve as it has done under a different form over the past decade is through the continued engagement. And there is an expectation that there will be a maintenance agency, an organization to continually evolve JETS as we move forward. Standards are living documents and we need to keep them living and breathing and moving as we change and adapt. Something to. Thank you. Anybody else? Okay. So I'm going to introduce, Jeff, should I introduce or should we wait because of the webcast? Let's introduce. Okay. We are broadcasting over the web. Okay. So we're going to try to keep close to schedule but I was way under. So, you know, I'm going to introduce our next speakers who are actually three. We are joined by some folks from the American Institute of Physics. We have Rich O'Keefe, Faye Kravitz and Jennifer McGandrews. They actually will be swapping on and off so you'll see and hear from all three of them. So if you can welcome them. Thank you.
The JATS user community is growing in both number and diversity. With the acceptance of the JATS as a NISO standard, that's likely to continue. But what does that mean for the future of the JATS when its development has always been driven by the user community? How does the JATS continue to evolve without descending into chaos?
10.5446/30788 (DOI)
Okay, to say the truth, this talk presented a challenge for me. And mainly because we discussed the previous version of this program two years ago at the PRACTEC conference, and it even got into the graphics companion. Thank you very much, Frank, by the way, for this. And so if I would start from the beginning and just repeat those of you who now know about it, will probably be bored. But if I would assume that everybody knows about what previous version and just discuss what new is there, I probably would bore the other part of the audience. So what I will do, I will briefly describe what we are doing. I will not talk much about the algorithms. It was a lot of algorithmic work which went into this. I will just say, okay, read our fine manual, and you will find there is much about algorithms as you want. And what I will do, I will just tell what the new things we can do and the new possibilities. And then, since we have time, I will have some small demonstration. And by the way, for the demonstration, I will need a volunteer. So I will give you time to think whether you want to volunteer or not. Okay, let's start from the beginning. If you ever come to a doctor who works with hereditary diseases, one of the things the doctor would do with you, he or she would ask a lot of rather nosy questions about your parents, about your kids, about your brothers, about your sisters, about nieces, nephews, aunts, everybody. I asked my collaborator, Leila Ahmadeva, how deep is the interview, how many relatives you really should talk about. And her answer was, everybody, the patient can remember. And if you can interview the relatives about somebody he doesn't remember, but they remember, it's even better. So if you have a chance to know your genealogy up to the 13th century, you would make your doctor very happy. And they write all this down and let me show what they do with this information. They do a picture like this. Actually, I wrote, I said that this thing is wrong, I said it's a complex pedigree, but this is what I did when I started to work with this. Now I understand that this is not a complex, this is a pretty simple pedigree. It's basically a picture where they put all the information about your relatives on paper. Sorry. And it has certain formal, highly formal language. For example, the squares are male, the circles are female, they might be filled, they have dots. It depends on their status, whether they were healthy or had some disease. The doctor wants about this. He wants about to know there are some symbols for abortions. There are some symbols for infertile marriage and a lot of other things. Actually, here is a reference to the standard notation. It's much smaller than the XML standard, but it's as boring as this. And if you think about this, it looks like a genealogy tree, but it's much more complex, because a genealogy tree is a tree. You have a root, you go down, down, and here you have cycles, all loops, which makes it very difficult. You have several roots, and I did not do this just for a conference in Ireland, but O-type, obviously an Irish surname, it was in the paper, in this paper. So, somebody, it's a coincidence. Okay. Usually, people just take the information from the interview and manually put all the circles, squares, and lines on paper, and it takes a lot of time and a lot of effort and a lot of drotsmanship. So, you really would like to do this in some automatic way, especially if you want to publish it. 
If you look at the pedigrees that are published in medical journals and medical books, some of them are really well done, some of them are less well done; there are some old-school doctors who know it very well, who are well trained, and some of the newer generation. Well, Leila told me that she actually makes her students do it well, but I think she's in the minority here. So, we wanted to make a program that would do this, that would make pictures like this automatically. And from the beginning, we understood that it's very difficult to do automatically, because of the algorithmic complexity: okay, it's very easy to lay out a tree. Unfortunately, we don't have a tree here. So, we said, okay, let's do it in a two-stage approach, much like the approach that Manusha described in the first lecture today. First, we have some information about relatives. I call it a database; it could be an ASCII file. Unfortunately, for most physicians it's actually an Excel spreadsheet, so I will show you that we are, unfortunately, Excel oriented. You make this database, and from this database you can manually make a TeX file. So we made a PSTricks-based package with macros which say: okay, here is the person, place this person here, place all the lines for siblings, for aunts, nephews, and so on. You can make a TeX file, you can process this TeX file with PSTricks, and you'll get a nice PostScript or PDF, which you can publish, you can look at, and you can study. And while it's a rather boring process (much less boring than doing it with pen and paper, but still boring), you'd like to have a program. I put Perl here, and this program would read your data and make the TeX file for you. And it's very difficult; it's the usual 80/20 situation. It's very easy to do 80% of the work. It's very easy to write a program which will get 80% of pedigrees right. It's very difficult to get the remaining 20% of pedigrees right. So, instead of putting effort there, we say, okay, let's make 80% right, and for the remaining 20% we'll make some adjustments. Somebody will look at the TeX file, make adjustments, and get it done. Okay, so first, a little bit about the TeX part. Well, the TeX part is PSTricks-based. If you don't know PSTricks, we had a workshop on Sunday and we had a talk, the first talk today. Basically, the idea is you have nodes, and you have connections. So, from the point of view of a mathematician, your pedigree is a graph: you have nodes on the graph, you have connections, and you need macros to put nodes, and you need macros to put connections. Nodes are persons and abortions; there are special symbols for infertile marriages, and so on. So, we have a macro for a person; you can say male or female, adopted, affected (which means had some disease), and a lot of other things. Here you see, for example, this is a symbol for a pregnant female. This is a symbol for a male who died at age 20, and so on. And for relationship and descent, we use lines, which are node connections in the PSTricks language. So, here we have just a pair of male and female. Here we have a pair of male and female who divorced. And this is a consanguineous relation, which is a marriage between relatives, like cousins, or second cousins, or whatever. And here are some examples.
For example, here is the PSTricks code, and those of you who work with PSTricks will recognize how it is done: we put a person, another person, we put in some relation, and here you have a female who has one adopted daughter, one natural son, and had a miscarriage at some point of her life. Here is a marriage which did not have children due to azoospermia. Here is a more complex diagram. You have a marriage between Fred and Ginger, which had John, Mary, and again a male miscarriage. This line is a symbol which shows that this person is an asymptomatic patient: she has some hereditary disease, but you cannot see it from the symptoms. You can just analyze it and find that it's too weak to be diagnosed. I see that Leila wants to correct me. Okay, it goes like this. Here is a situation where we have twins. These two boys are monozygotic twins, or identical twins in layman's terms. And again, we have nice macros to say this. And this male has four daughters, which are twins. Sometimes it happens. As you see, there are macros, and if you know any TeXnician, he would be happy to write macros for you. Now let's talk about the first part. Again, I expect some physicians and some genetic researchers to be able to write TeX code like this. I don't expect all of them to. So what could they write? First, this is the format of the input data. I hope the font is not too small for you to read. Normally, again, it would be an Excel spreadsheet. I don't have Excel on this Linux desktop, so this is an ASCII form of the spreadsheet. As you see, you have columns. Each column says something about the person in that line. You say whether it's male or female, date of birth, date of death. Mother and father are actually references to other lines. Excel can do it automatically for you: in Excel, you would just say, okay, she is the mother, he is the father. In ASCII text, you just need to type in the ID, the number of the row, for this. I'm certain that John Tung-Ku can write for you a nice program which would do all this automatically; you would just be able to hand-wave and so on. The number of columns can be different; the program is smart enough to understand what's going on. It looks at this and makes something like this. If you look at the example, John Smith is our first database. Here is the result of this database: some history of a family of Smiths. The program right now can do automatic scaling. You can say, I have A4 paper or letter, please scale the pedigree for me. It can do rotating. For a lot of pedigrees, especially if you know a lot about the width but not about the depth, if you know a lot about your siblings, about nieces, nephews, and so on, but you are not sure about your great-grandfathers, your pedigree would be wide but not too tall. It means that in many cases the program would rotate your picture to fit on the paper. It can do rotating. It can do what I call the placing of shrubs. Shrub is a very informal term. What I want to say is that, from the point of view of mathematicians, this thing is a tree. But many people say that a tree is something which has a root, just one root. If you look at this pedigree, it has two roots. It has this family, the Smith family, and the Brown family. And it's natural to call it a shrub, something like a couple of trees, something which has several roots but still looks like a tree. And we invented an algorithm for automatically drawing shrubs. It was really a fun part of all this. So, some new things.
We now have automatic treatment of twins, something which is not in the version described in the Graphics Companion. If you see that here Jack and Mike are twins, and George, John and Jane are also twins, we can work with them. Now, a couple of words about hard problems. I would call them semi-solved. I will explain why. The first hard problem is consanguineous marriage. Consanguineous marriages are marriages between relatives, like the marriage between niece and uncle here, or between cousins here. Why is it hard? Most algorithms, ours included, actually all algorithms for automatically placing things on paper, are recursive algorithms. You go from level to level. And if you are a mathematician, you immediately recognize it: this means that you cannot have loops, because if you have loops, it's very difficult to make a good recursion. The point of consanguineous marriages is that you have a loop. You can get to the same point by several different routes. What is our situation with consanguineous marriage? Well, we can do it using a very dirty hack, something really not good to talk about. What do we do? We draw it twice, and then we delete the extra nodes. So if you look at Laura here, you see that she should probably be at this place. The reason why she is there is that the program first makes two lower ones, here and here, and then deletes one. Again, you can do this manually: you can just take the TeX file and move her here a little bit. It will work. But I just didn't want to do it here, because I wanted to show you. And this is a very simple situation. When you have many children from consanguineous marriages, you would see something really skewed. It's a bad problem, but it's semi-solved in the sense that in most cases it does it right, unless you are a purist and you say that Jack and Laura really should be here and here. Another problem, which unfortunately is absolutely unsolvable, is what I call the Budrys problem. Unfortunately, we don't have any Polish people in the audience; we have some Lithuanian people. You might know this famous poem by Adam Mickiewicz about the old Lithuanian Budrys who had three sons. Three mighty sons. He sent one of his sons to Russia to bring back riches and gold. He sent another son to Germany to bring amber and silver. And the third son was sent to Poland to bring the best thing Poland has, which is a Polish bride, a great Polish woman. And what happened? You can guess that they both returned with Polish brides. It's a very nice story. And why is it a problem for us? Well, let's try to make a pedigree of it. I can put old Budrys, no problem. This old Lithuanian is here. Here is his first son and the Polish wife. Second son and the Polish wife. That's fine. And then somebody will say, okay, I want to show the pedigrees of all the Polish brides here. I will say, okay, I will put the father and mother of this wife here. It would be more difficult here, because usually you want to put the male to the left and the female to the right, but I can actually switch: I can take this female and put her here, and put here all her father, mother and so on. But what can we do with this woman? There is absolutely no place to put her father, mother or anybody, because it's all taken over there. It's a problem even in the manual situation. Manually, people just put a pedigree here and just put some lines like this. You can do this programmatically; then you will have a lot of self-intersections, a lot of problems. And I really was not able to make this work nicely.
Whatever I tried to do, it looked bad. But I looked at the medical journals and then I understood that they also cannot solve it manually because all the complex pedigrees with things like this still have self-intersections and they are really, really bad. So this is a problem which is because we want to put exactly multi-dimensional pedigrees into two-dimensional piece of paper. And this is difficult, really difficult. So again, this is a hard problem. But still we have a progress because in the first version we could not solve it at all. In the first version we could not solve constant clinic marriages either. Now we can somehow. Now what I want to do, I want to make a small demonstration. And for this demonstration I need a volunteer who wants to do. And if nobody wants, I will take either Anita or Peter because they chair this session. Okay, what I want from volunteer. You just tell me a little bit about your family and we will make a pedigree for you. Who wants? Come on, guys. Please, it's not that difficult. It's not that difficult and it's easy and believe me that I will... Yeah, and by the way, I will not ask you about your disease so it will be by ethical reasons. So it will be just something like a genealogical version of a pedigree. A pedigree of absolutely healthy and nice person and healthy and great family. Okay, come here. Okay. Okay. Come here. Okay. Okay, we'll start from you. I need to put any ID, let it be MA and this will be MARTA. I can put the last name but then the pedigree would be too complicated. So let's do it. Now about date of birth. It's used only to order people. If you have siblings, you go from the oldest to the youngest. So if you don't want to tell me your date of birth, you can tell me any year. Just make it consistent with your siblings. Just a year? Just a year. Okay. Yes. 57. 57. Okay. Now DOD is something I don't want to put here. Proband is a fancy medical name for interviewee, for somebody with homebisters. So you are a proband, that's what I wrote. Now let's talk first about your father and mother. Okay? Okay. I just need name and date of birth and whether they are with us now. Okay, father Joseph. Okay, let's put Joe as his... Joe... Sorry. And he is obviously male. I put the poem so the same bit but obviously you don't want. And I need his date of birth. 1916. 1916. Is he alive? Do you remember his date of death? 1992. Okay. Okay, and let me put here as your father. It will be Joseph. Okay. And let now go to your mother. Her name is Audrey A. A. You are a new wife. Yes. And female male. 1915. 1915. And she is still living. Right. And let me put your mother as Audrey. Right. Now let's go to your kids. My children? Yes. And to make it simple let's talk only about your biological kids. You have adopted kids. Let's not put it in the pedigree. There are special symbols for this but let's make it simple. Okay. Daughter Anna. Mm-hmm. Just look that I don't have a... duplicating ID. Anna. And she is female. 1977. And your mother is you, which is MA. Right. And next? Son Ross R.O.S.S. Ross. Smile. 1979. 1979. And again MA. Right. And son. Mm-hmm. Andy. Okay. We have already... And let's be AD. Andy. We could put column numbers but I usually forget which number is there. He is male. 1982. 1982. The same. Okay. Now let's talk about your siblings. Okay. Oldest is Ruth. Anna. 1940. 1940. Living. Mm-hmm. And it's Audrey and Joseph. Right? Yes. You can have half siblings, which are only father or mother with you. 277. Mm-hmm. Next is Bernard. Bernard. Okay. 
Let me... My name? 1942. 1942. He is deceased. Okay. Do you know the date of his death? 1964. Let me do it this way. Okay. Next is Mary. Mm-hmm. Good. Okay. 1944. 1944. And she is deceased in 19... of 2006. Okay. Okay. So, you are married. Yes. And you are married. Yes. And you are married. Yes. And you are married. Yes. So, you are married. Okay. So, you are married. Okay? Okay. The world. Myle. 1948. Living. Right. And next is Mark. We have a really good family. German family. Yes. Myle. And 1958. 1958. And he's living. Yes. Okay. This one. Yes. Now, Leila, what should we go next to the... Children of your siblings. Children. So you're... Reigning with the world. Yes. We can stop at any moment. They're not too many. Okay. Okay. Okay. Each symbol we still have. M.T. Marfa. That's a child of Ruth. Myle. Do you remember who that is? 1968. 1968. And first is mother and Ruth is... R.U. for us, right? Please check what I'm typing. I'm famous for my typos. Okay. And this is Sarah. Sarah. Myle. 1975. 75. And she is children. Child of Mary. Mary M.R., right? Yeah. And next is Helen. Myle. 1976. 76. And she is children... She is daughter of... Mary. Mary. Also Mary, right? Yes. Next is... Leonard. Hmm. Ruth. memory. 1977. Hmm. And he is a kid of... Gerald. Okay. Gerald is G.E., right? Yes. Next is... Joel. Joel. Joel. Myle. I cannot do... Thank you very much because otherwise you would have a really big problem. 1982. 1982. And he is a kid of... Gerald. Gerald. G.E. G.E., okay? Oh! Sorry. Sorry. Okay. Yeah, you are absolutely right. You are absolutely right. Thank you very much. Like this. Is this the only problem I have here or something else? No? Okay. One more. Curtis. C-U-R-T-I-S. Curtis. And he is Myle. 1984. 1984. And he is a kid of... Gerald. Also a father, right? Yeah. G-E. Okay? That's it. We can stop here. We can go more and more. And if you ever go to a doctor it would be really much more. But let's see what do we have here. Okay. Yes. First we will make a tag file. Okay. Here is a tag file regenerated. And let's try to make a PDF file. PDF. But make. But... Again, I'm sorry. It's Emacs and Command-Line. Uh... Okay. XPD. PDF. PDF. Let's see what do we have here. Okay. It won't substrate it because... Let's see. Okay. Let me increase it a little bit. Like this. No. Probably 150 is the best. Okay. You can take a look and tell me whether you made a mistake. You have a martyr and you have an arrow. And the arrow means that you are proband. We started with you. You have your father and mother. And this looks like like a pedigree. Unless we made a mistake here. Did we? And it's not very much complicated because we don't have any marriage between your cousins and so on. So it was very good simple. And if you go next page, it's actually let me rotate it back. Rotate it counterclockwise. It's actually... It has a legend. It tells you who are there when they were born and whatever was in our uh... in our... in our pedigree. As you see, it's very simple. And I would bet that if you would try to do it manually, you would spend much more time here. Okay, thank you very much. If you wish, if you have memory stick, I can give you this pedigree as a... Okay. Questions? Comments? Yes? Question. The name is not covered by asking. You see me? So some... non-English name is... Okay. It can work because it's tech. It can work with anything you want. And here, actually, we have two... Right now, we have two models. English and Russian. I can show you. I don't know, but I have it anywhere compiled here. 
Let me see... uh... uh... Let me see. No, this is also English. We have here... um... Okay. CSV? Yeah. Nice. No. No. Okay. And now you can see... Yes. Here is... Yeah. It's very garbled because it's... I did some strange thing with this, but if you take a look, all names are in Russian. And it's obviously extensible, so if you can... if you want to extend it to Czech or... or Arabic or Hebrew, no problem with this. Just put the proper Babel reference. Yes. Is there any other way? Are these the only ways? Are they fully manual or using the system? Is there any other automatic or semi-automated way of creating these diagrams now? Well, there are several programs which do this. And most of them are... um... I actually forgot to tell you about the last slide here. Right. What you want to do, most of programs are interactive. You just have a mouse, you put point here, click here, click here. There are a lot... there are several commercial programs now available and because the main audience for them are doctors and doctors are supposed to be rich, these programs usually cost a lot of money. What you want to do and you probably will do it semi-mouse driving. So it will make a p-degree like this and then you can take a mouse and shift and then just and it will make it. That's what we are going to do. But if you have a lot of money, there are some programs I can recommend to you which... I think they are less convenient than what we are doing, but I'm biased obviously, but our program is free. I have a lot of money, but I don't have any interest in this. Yes? Can you show the picture with the Polish wife's picture again? Excuse me? With the Polish wife's picture? Ah, the Polish wife's picture? Yes. Surely, here. I know families where the brothers were married to the same wife. One time, there was a other who decided to have this and have a new two. Because if you have a wife, she married to father A and then he divorced and then he married to father B and you get immediately another who called you such a... It's... You mean what happens if a wife has first married to the first brother and then to the second? Then usually you take two lines which shows two marriages. And then you just show the kids of the first marriage, the kids of the second marriage. There is nothing conceptually more complex than standard consent-gaining marriage problem. You just have a look. So it works with this. And you can put the divorce by breaking the line. It now is... In full charts, we usually use labels that are big diagravate in different words. Can you say that in this standard? Yes, but the problem... Yes, one of the ways of solving this problem of complex pedigree is to make parts and then make C page next or C part 2 and C page 3. And then you have the same problem as we have with breaking displayed equations into lines. Human can do this. Programs are really bad in logical breaking of big chunk of information into logically consistent parts. Yes, manually it's easy to do. I don't know how to do it absolutely automatically. Okay, let me... Ah, yes. I understand that you are trying to utilize the probability by the illness of time passing. You are trying to utilize the probability by the illness of passing. So why could you restrict yourself to please? I would say that you can only behave with some kind of graph and what I think is the specific process. Yes, but the graphs here are not trees. The pedigree is not a tree. It's a general graph. 
It's a general graph which is the only thing it usually has something... it has generations. We want it to be layered as a generation but the biggest problem... Okay, if it were a tree I wouldn't have 90% of my problems because it's not a tree. It's a general... it's a graph of a general... it's a general graph. You are absolutely right and that is why it's so hard to do. The root problem is just the main problem of getting it to another... Getting what? The rest and so on... Yes? Yes? Yes, yes... Yes, yes... Fortunately, this is a little bit less general graph so we can do some solutions for this situation but the thing is that the general problem of getting a graph on a paper is as everybody knows, unsolvable. You cannot do it in a general situation. Yes, yes... Yes, and this problem here is actually part of this. In general case, you cannot solve it. That's how God created this world. Any other questions? Yes? In the example to give an intergenerational merit how do you determine the generation of the children of the parents? I go... I go the generation of a kid is one plus the generation of the youngest parent. And I think it's only... It's only a rational way of doing this. Any other questions? Okay, then I want to... At the end, I want to thank... Okay, my work was free in this project but my collaborator actually got very nice travel grants and lots of grants from Russian sources and I want to thank all of them and I want to thank Tech Users Group which helped with the grants for this and I want to help all of you for being a nice audience. And I promise it in a year or something in the next conference, I will show you some even more nice tricks to doing with a P.D.Grid. We can do it much more interactive, much more... Excuse me? User friendly. Much more user friendly and we want to make it really good. Thank you very much. Thank you.
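To make the data-driven part of the talk above a little more concrete, here is a small sketch of the first half of what such a converter does: read rows in the column format described earlier (ID, sex, dates, mother, father, proband) and assign each person to a generation using the rule stated in the Q&A, that a child sits one level below its later-generation parent. The column names, the CSV input and the choice of generation 0 for founders are assumptions made for the example; the real converter is the Perl program described in the talk, and the layout and drawing steps are not shown here.

```python
# Sketch only: parse a pedigree table and assign generations using the rule
# quoted in the Q&A (a child's generation is one more than that of its
# later-generation parent).  The column names, CSV format and generation 0
# for founders are assumptions, not the actual program's conventions.
import csv

def load_pedigree(path):
    """Return {id: row} for a table with Id, Name, Sex, DoB, DoD, Mother, Father, Proband."""
    with open(path, newline="") as f:
        return {row["Id"]: row for row in csv.DictReader(f)}

def assign_generations(people):
    memo = {}

    def gen(pid):
        if pid in memo:
            return memo[pid]
        row = people[pid]
        parents = [p for p in (row.get("Mother"), row.get("Father")) if p and p in people]
        # Founders (no recorded parents) are placed in generation 0 here.
        memo[pid] = 1 + max(gen(p) for p in parents) if parents else 0
        return memo[pid]

    for pid in people:
        gen(pid)
    return memo

if __name__ == "__main__":
    people = load_pedigree("smith.csv")        # hypothetical input file
    for pid, g in sorted(assign_generations(people).items(), key=lambda kv: kv[1]):
        print(g, pid, people[pid]["Name"])
```

The layout and line drawing that follow (placing shrubs, handling consanguineous loops and twins) are exactly the hard 20 percent the talk describes.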
A medical pedigree is an important tool for researchers, clinicians, students and patients. It helps to diagnose many hereditary diseases, estimate risks for family members, etc (Bennett, Steinhaus, Uhrich, O’Sullivan, Resta, Lochner–Doyle, Markei, Vincent, and Hamanishi, 1995). Recently we reported a comprehensive package for automatic pedigree drawing (Veytsman and Akhmadeeva, 2006; Veytsman and Akhmadeeva, 2007). Since then we extended the algorithm for a number of complex cases, including correct drawing of consanguinic relationships, twins and many others. In this talk we review the facilities of the current version of the program and the new challenges in computer–aided drawing of medical pedigrees. We try to make the talk interesting to TeXnicians and TeXperts by discussing the experience of design a TeX–based application working in a “real world”.
10.5446/30790 (DOI)
Okay, what I am talking about here today is Minion Math, a family of mathematical fonts I have designed over the past six years. In the design of Minion Math, my design goals were that I wanted to have a math font that really improves over the existing math fonts. I wanted to have a math font family which is completely consistent. For example, with Computer Modern we have some AMS symbols from the AMS A and B fonts which don't completely match the design of Computer Modern. So I really wanted a consistent set of math fonts. It should be comprehensive, or complete, so that I have more glyphs available than other fonts, with complete Unicode support, which is what that means nowadays, of course. It should be versatile, so with fewer limitations than other fonts. For example, I didn't want to have the math extension font available only in one width and one optical size and one weight. And I tried to identify all the shortcomings and flaws and constraints in other math fonts known to me and tried to avoid them in the design. So why did I choose Minion? Well, first of all I like it, and I had it in my mind for a long time to design math fonts for it as well. Originally Minion appeared, I think in 1990, as a PostScript font designed by Robert Slimbach of Adobe Systems. It appeared as a multiple master font, but still without Greek letters, and only with the OpenType version, which appeared in 2001 or 2002, were the Greek letters added to Minion. Actually, in a book quite well known among typographers, Robert Bringhurst's The Elements of Typographic Style, there was already a sample of a prototype Minion Greek shown, but only with the OpenType version did the Greek appear. And of course Greek letters are very important for mathematics, and when I started on the design I wasn't prepared to design a whole text font or even to add Greek letters to Minion; that seemed too complicated for me as a design issue. So I was very glad when Minion appeared with Greek letters in the OpenType format. And the other strong point of Minion: Minion OpenType appeared with four optical sizes right from the beginning. In the top line you can see the four sizes that Minion offers, and the bottom line is the same without optical sizes. So this here is the regular size, and here the regular size is scaled, and in the top line you can see the four optical sizes, and I think it is visible that it really makes a difference. In normal typography optical sizes are nice to have, but you could do without; in math they're really crucial for really good math typography. So what I started out with was what Minion offered then. Just as I told you, there were the four optical sizes, display, subhead, regular and caption, and Minion came with three weights in the beginning. So I started to add math characters to these 12 fonts. Shortly after, I think in 2004, Adobe issued an update with an extra weight, medium, between regular and semibold. There is not that much difference visible between regular and medium, but still it is a bit of a difference, and so I decided to support medium as well. But in math we need two sizes below regular, for script size and script-script size. So I decided to add a fifth optical size, tiny, and so now I ended up with 20 fonts, in these five optical sizes and these four weights, and I do fully support these 20 designs. Just to give you an impression of the weights, the different weights and the color or grayscale, the visual impact of the symbols: here are two formulas shown in regular, then in medium; I go back to regular.
So between regular and medium there is not that much difference, but I think you can notice that the equals sign and the arrow below the lim and so on all have the same visual impact, the same color or grayscale, on the page. Then semibold and bold. The x-squared one here below is the caption size, of course, and this one is the regular size, and here again is the optical size caption. So currently the fonts are in a state in which I could release them now. Of course they are commercial fonts, because they do depend on Minion and I have used original shapes from Minion. But the fonts are by no means finished. I am aiming, of course, at complete Unicode math support. I have designed most of the glyphs, but I would like to review many of them, so I won't release them in the first release now. So far I have concentrated on the symbols and didn't do much work on the math alphabets, so what I will design later on is a complete fraktur and formal script and sans serif, and whatever else is in the Unicode block Mathematical Alphanumeric Symbols. And what I will also design, and I have experimented a bit with it already, is a real math italic. This is a feature that only Computer Modern had for many years: the italic characters for math are designed or cut differently than for text. They have wider, more visible italic characters for math. And I think Cambria was only the second font to implement this feature. I am not sure about STIX, whether STIX has real math italic characters. In most fonts only the kerning and the side bearings are different, so that italic characters are set as single characters in math and not like in text. But I am planning to add a real math italic which is wider than the normal italic. So, the design principles for Minion Math. What I told you before: consistency. I tried to achieve complete consistency in shape, size and color. For example, the asterisk here matches Minion; then the blackboard bold, which is designed already with all uppercase letters, matches Minion. Of course it's Minion more or less taken apart with a second stroke added. The four Hebrew characters used in math do match Minion, and the Weierstrass P here. Here, with the six Greek lowercase characters, actually four of them are original Minion and two of them I've added. Can you see which were originally Minion and which were added by me? I hope it's not visible. Then the infinity sign; the first epsilon here was added, and the second one isn't in Minion like this, so I've added these two. And of course the summation sign and the integral sign and so on. As I told you, I'm planning to have complete Unicode math support in the future. This is just an extract from the fonts in my production version, or rather my development version. So in the first release version not all of these glyphs will be available yet, but as I told you, I've designed most of them already. This is meant to be impressive rather than readable now. So I tried to avoid the constraints of other math fonts. I will have OpenType fonts; I haven't added the math table to the OpenType fonts yet, but of course I will do that. Then, as I told you, I have four weights and five optical sizes, and each and every glyph is designed in 20 versions and available in 20 versions. Then I didn't stick to the model of TeX with only four larger sizes of delimiters; actually I took the idea from the new math font encoding that Ulrik talked about this morning. So with traditional old math TeX you have only four larger sizes for a 10-point basic delimiter.
I have 12, 18, 24 and 30 point delimiters, and I added intermediate sizes: I have 15, 21 and 27 as well, and for parentheses it goes above that with another seven shapes before I have to resort to sticking them together from pieces. On the left, actually, there's an error in my PDF. I wanted to show that these are composed of pieces, of course, like in TeX; but again, I do have seven sizes before I have to resort to putting them together from pieces. And of course I do have many variant glyphs. Just a few samples at the bottom, and of course there are many more. In OpenType they should be accessed by OpenType features. Actually they are all encoded in the PUA, the private use area, but only for technical reasons, or TeX reasons, so that they can be accessed in old-fashioned TeX encodings; in the future there's no need for PUA code points for all these characters, of course. And of course there are many, many details and detailed decisions involved. Here, and I'm sorry I didn't find time to really finish that table, Unicode defines seven sizes of geometric symbols, and the symbols are scattered all over the place in Unicode. The tiny size is actually only usable for the centered dot, the cdot. So that's the only symbol here, because a white centered dot or a white square wouldn't make sense, and even a black square wouldn't be recognizable against the cdot. And then there are the very small, small, medium small, medium, normal and large sizes. What Unicode doesn't tell us about is the vertical positioning of the glyphs, and when preparing this presentation I didn't find enough time, or didn't get around, to do that properly. I decided that the medium size should match the operators, so that a medium circle looks like a circle times, the otimes symbol, just without the times. So the medium, medium small, small, very small and tiny sizes are all centered on the math axis. And the normal size squares sit on the baseline in my design, and the large size goes a bit below the baseline, a bit like a big operator. So there are many more shapes, and I decided that for most shapes I will add six sizes, from size one to size six (very small and tiny exist only for the centered dot), but I will have circles, black and white, and squares, diamonds, lozenges, triangles (of course there are left and right triangles as well) and elliptical shapes and rectangles and so on, and all these will be available in six sizes in my fonts, even if they are not encoded in Unicode. So for the operators and the related conjoined symbols I tried to have a consistent look all over. Some of these shapes I don't like in other math fonts: they often look too stacked and they visibly fall apart, and I tried to have a consistent shape throughout. And in particular these circle shapes I think are much too large in many other math fonts. I think they are too big, or too large, in the STIX fonts, and in Cambria Math I think they got it utterly wrong. I think such an o-plus, a circled plus, should look just as large as a plus with a circle around it, and not with a much larger circle. Actually, Ulrik Vieth has this brochure about Cambria Math, and you can see how the shapes look in Cambria, I think on the back cover, Ulrik. Page three of the covers. There are some circle shapes and I think they are much too large. Barbara? Yes, but in Cambria the basic size, the binary operator, is too large. In STIX I'm not sure, because with the organization of glyphs in STIX I don't know which is which. About arrows I have two remarks.
Of course the bar of an arrow is just the basic stroke width of your other math symbols, but I think the arrowhead should really match your text font somehow, and the obvious candidate to model an arrowhead on is the French quotation marks, the guillemets. So I took those from Minion as a model; of course I changed it a lot, but the arrowheads really do match Minion and look quite Minionic now. And the other remark is that the basic width of arrows is taken too small in many math fonts, so the basic arrow is too narrow to have room for all these embellishments that appear on other arrows, and I think the basic arrow should be wide enough that you don't have to change the metrics to have all these other arrows. Of course you could have wide arrows as well, but the basic size should be consistent throughout. And then of course the diagonal and up and down arrows. Actually the arrowhead is different for up and down arrows than for left and right arrows, and also different for diagonal arrows. I think you can't notice it here, but if you turn the arrow around you can see it. One of my main remarks here is that the diagonal shapes are, in my opinion, too large in many fonts. On the left here is shown Computer Modern, with the left-to-right, the up, and two diagonal arrows overlaid, and on the right is my design for Minion. Here in Computer Modern the diagonal arrows are obviously drawn in a square, and here I try to draw them rather in a circle, and I think it looks much better. Normally the diagonal arrows look much too large in Computer Modern, I think. So, negated symbols. In original Minion this is the not-equal which is included in original Minion. I think the slant is just too much, it's too sloped, and the negation slash is also too short. So I completely redesigned the symbol in Minion Math, and this has the advantage that I could keep the same slope over all the negated symbols, of course with some obvious exceptions here at the bottom where I couldn't keep the slope. If I had kept the same slope here, then the symbol wouldn't look negated at all; it would look like a new symbol, but not like a negated symbol. And also with the size of the negation slash, I could keep the same size for many symbols; only for the more stacked symbols do I need to change that to a larger size. So again I get more consistency all over the fonts, and most of the negated symbols look clearly like negated symbols, whereas if I used this slash, then for a less-than sign I would have to apply a different slope, and then it gets more confusing. Then, some characters in original Minion aren't usable for math: the italic nu and the v aren't different at all in Minion, so that's Greek and that's Latin, and of course within a word it doesn't make a difference, or rather it's recognizable of course, but in math you need two different shapes, so I completely redesigned these. So now the nu in Minion Math looks like this, and the v looks like this, and to match the round v I of course need a matching round w. And of course mathematicians do like some very strange letters. I think I've seen it in use in only two or three of my math books, but of course every math font needs to have this one. And here again I tried to model it as closely as possible on Minion. You can see some of the Minion letters I used to model the Weierstrass P on. So the beginning is taken from the upright omega, and here the loop at the bottom is from the varp, and here that's taken from the s, a bit changed, and of course here it's taken from the italic b.
I'm still not completely satisfied with the look of that loop at the bottom, but at least the metrics are fixed for this character, and maybe I can change the shape a bit later on. And of course the depth of the descender here is taken from the italic f, so it really matches Minion in all aspects, or as well as possible. Yet another issue is the asterisk. I think it should also match the text font. Of course it should always have this form, so it should always be a six-pointed star and not a five-pointed star or anything else. And of course it should have one vertical stroke and two cross strokes. The reason why mathematicians have this predilection for the six-pointed asterisk is of course handwriting, because in handwriting you can do it like this, and a five-pointed star in handwriting is not doable that easily. It really should always be six-pointed. And of course, to match Minion, I modeled it on the dagger and double dagger, and of course the additional symbols using the asterisk use the same shape as well, so the circled asterisk and the big operator, which is also added in the font. I think it's not a Unicode character yet, I'm not sure. Is there a published example? Does it have to be a math book, or does a typography book count as well? Okay, I'm just going to call this one a little source. Okay, of course, I used it in a typography book already. And if we could click like that into the examples. Yes. Okay. So this is the example from the LaTeX Companion which is shown in many math fonts; here it's Computer Modern, and then this is how it looks in Minion Math. Should I go back again? For example, in Computer Modern I think the summation sign is a bit too bold. It's not that visible here, but in many books it really looks too blotchy on the page. And this is Minion again, Minion Math. And I do have a third sample, of Fourier and Utopia. Utopia is another font designed by Robert Slimbach of Adobe, just like Minion, and Fourier was designed by a Frenchman. Thank you. Here again it's Minion. Actually, I haven't worked enough on the kerning yet, so it could be improved in some places. What I do have is a basic size of these moustache characters, but not yet the larger sizes. So here in Computer Modern it's too large, and here it's too small. And I haven't worked yet on the alphabets, so that's just from the RSFS formal script font. So, some more samples from testmath.tex, the sample file from the amsmath package. But I think it's time for questions, and you can look at the samples during the questions. Okay, thank you. Thank you. Good evening. [Question from the audience; partly inaudible.] As Paul Francis pointed out, things like that can get into Unicode; standards have to be open to a different kind of stress. Therefore, math font designers should not be restricting themselves to what is in Unicode. Obviously we need representations of new symbols that are going to be created in the future. For example, in the area of math alphabets, mathematical physicists would use lots more math alphabets than we have anyway. It would be nice to have them in the... in the form of the... I mean, Paul would know the history of it, but the particular ones that have happened to end up in Unicode, Barbara? It's somewhat arbitrary. Yeah, okay, somewhat arbitrary. I mean, I assume that a math paper might use five different script fonts.
Now, why is it not in Unicode? I'm not sure. Just to give one more example of the kind of thing that is missing: maybe Barbara is right about the arbitrariness, but I know what this symbol means, and the font supports the idea of having it, if you see what I mean. In any case, I have added, or will add, all obvious variations in the Private Use Area. For example, for some arrows Unicode only encodes the left-to-right version, and of course I have a right-to-left version or a double-headed version wherever it makes sense; I have diagonal arrows with two heads, double-headed diagonal arrows, which go into the PUA as well. All the larger sizes of big operators and of parentheses and other delimiters go there too, and so on. I have a symbol for greatest common divisor and least common multiple, a new kind of delimiter, which goes into the PUA as well. And some small things: for example, for the circled symbols I will have a circled plus with a white rim, so that the plus doesn't touch the circle, also in the PUA, and so on. So I'm trying to add all the math characters I can think of. Okay, let's take one more question, down there, a quick one. What is the legal situation, do you have an agreement with Adobe? Yes, I do have an agreement with Adobe now, so the fonts are licensed, and I'm licensed to use the name Minion: Minion is a trademark of Adobe Systems and is used by permission. Was it already a trademark? It was, yes. Okay. Thank you.
“Minion Math” is a set of mathematical fonts I have developed over the past 6 years. Designed as an add–on package to the Adobe MinionPro fonts, it consists of 20 OpenType fonts (4 weights, times 5 optical sizes). In future releases it will cover the complete Unicode math symbols, and more. In the design I tried to avoid all flaws and shortcomings of other math fonts, with the aim of creating the most comprehensive and versatile set of math fonts to date. In this presentation, I will talk about the design principles for Minion Math, and the design decisions I took. I will also show many samples of the fonts and will compare them to other math fonts as well.
10.5446/30792 (DOI)
I was going to give the LuaTeX talk earlier, but it turned out to be easier for Hans and me this way, because we needed a meeting here, and now we have some extra time to prepare the talks on Thursday. And MPlib was somewhat easier anyway, because it is done already. I prepared this talk for the NTG meeting last month; it shows immediately, because it was prepared for a much smaller screen, so you may want to move to the back of the room. The last couple of years I have given a talk along the lines of "MetaPost developments: what have we done over the past year", and all of those talks described slow progressions of John Hobby's MetaPost. It has been somewhat unpleasant to work with the Pascal code when you actually try to change or extend something. So two years ago we thought it would be nice if we could modernize the whole system at least a little. We got some funding, we developed a project proposal, and we came up with a number of goals for what is basically a re-implementation of MetaPost. We want reusable components, something you can plug into a different program. We want it to be re-entrant because of that, so multiple programs can use the same library without having all the code replicated in system memory. We want all input and output to be redirectable; not that everything is already redirected inside the library, that would be a lot of extra work for everybody, but if you don't want the library to touch the disk, you can set it up so that it never touches the hard disk. We wanted to simplify the handling of the external btex labels, the whole makempx chain. Some of you may have seen the diagram I presented last year, where there are some eight or nine jumps you have to go through to get from btex into a picture: makempx is run by MetaPost, then mpto is run by makempx, then a TeX instance, then dvitomp, then dvitomp does something, and so on. It's all very complicated, it's very hard to explain to users, and it can go wrong in a great many places. So we wanted something easier, especially for a library: if you ship a library, you don't want to write a twenty-page document explaining what all these extra programs in the executable directory do. (A reminder to all readers that the library is now closed; the book library, that is. I'm sure we can get out of here.) Another remnant of the eighties is the memory allocation: all these static arrays inside MetaPost don't work very well for a library, where somewhere in the middle of the controlling application the library suddenly stops because it ran out of memory, or this or that array overflowed, "please recompile MetaPost". You can't do that with a library, so we want to solve that as well. Overall this means the system should be more flexible than it has been so far. I'll try to move a little more quickly through the rest of this talk; it's technical, I'm sorry. It's on a memory stick because my laptop is notoriously complicated to hook up to any beamer; I didn't even try, it takes fifteen minutes and then it's still suboptimal. It doesn't mean I cannot demonstrate anything, it's just a bit slow. So: MetaPost is in Pascal WEB, like TeX is, like METAFONT is; I should say because of METAFONT, since John Hobby copied large sections of METAFONT when he first created MetaPost. Pascal WEB is a typical example of an early-1980s construct: it has a huge amount of global variables, and all arrays are static.
There is a string pool because there was no real string facility in Pascal at the time, so everything is done manually using a separate file, and the compilation itself is wildly complicated, because these days you don't actually have a Pascal compiler to compile Pascal WEB with; we use a C compiler, which of course doesn't work directly. You first have to convert the Pascal to C, so there is a separate program that goes from Pascal to C, and this program generates C that needs a runtime library, so you also have to link that runtime library into your application, and then some special post-processing has to happen to make sure that a couple of Pascal constructs are converted into yet other C constructs. The whole process takes a long time and involves a dozen programs or so. What else? The font inclusion code that I added to MetaPost last year is actually in C, but it was borrowed from pdfTeX, which had borrowed it from a couple of other programs, so it was a bit confusing because it came from three or four separate places, and in itself it was also confusing. And then there is the dependency on the web2c runtime. So I decided to redo the whole thing, and all of the Pascal was converted to CWEB, period. There is no Pascal code in MPlib at all. This replaces both the old Pascal WEB and the C code that I had fetched from pdfTeX. The compilation now uses only ctangle followed by the C compiler. The code has been internally restructured quite a lot, and the executable mpost, which I guess many people are familiar with, is now just a front-end: it's 200 lines of C code that calls the library that does the hard work. The MPX handling is integrated, now also written in CWEB, and autoconf is used for the configuration. Then there is an instance structure, so that you can say to the library: give me a new MetaPost. The string pool has been revised; it has also shrunk, I should say. It used to be that error messages, status reports and so on were all in the string pool file. That is no longer true: the only things still inside the string pool are the user-generated strings, like macro names and strings from the MetaPost input. Everything else is just inlined in the C code as normal, null-terminated C strings. The PostScript backend has been isolated, separated from the core engine, and the core exports a bunch of objects; these objects are then converted to PostScript, or to something completely different if you feel like it. All the C functions start with mp underscore, so that there cannot be any conflicts, hopefully, with other C code. We have dynamic memory allocation almost everywhere; there are a few exceptions, and I'll get back to them later. All I/O goes through function pointers, so a function is called to do the actual I/O, and you can override that function if you feel you have a better way of doing it. How would you do that? I'll show you. You create an options structure, then you create an instance with those options, you run some MetaPost code, either a file or a string buffer, and then you finish up; and then you can run a couple more if you like. So here, this is the code that creates the options structure. One of the options is the command line, so you set the command line option to, in this case, the first C argument, which will usually be a file name. Then you call mp_initialize; this creates an mp for you, and that is the actual mp instance.
If you have one, you can do mp_run, and the library sees whether there is a file on the command line. It will read the command line, it will read the file, it will process the MetaPost that's in the file, which will probably end with an end command; then the library is done, and you call mp_finish to free all the memory that is still there, open files, and so on, just in case there was no end in the file. If the file just stops at end-of-file, it's not necessarily clear that this is really the end, so you call mp_finish, and then really nothing of the library is left in memory. The rest is just details: you could do more error checking, of course, and there can be other options; there are about two dozen options, and about six or seven of them are fields for functions that override the internal I/O functions. That was on the previous slide; here, this is mp_run. Normally, when you create a MetaPost input file, the last command in the file tends to be end, but not always. It could even stop in the middle of a figure: you could do a beginfig and never have an endfig. If that happens, the library is not finished, it has only reached the end of the input, and that's why you do mp_finish, so it is guaranteed to clean up whatever is still in memory. Okay, that was the C example. The distribution also contains Lua bindings, which are the same bindings that are used inside LuaTeX. You can use them in pure Lua if you want: you don't need LuaTeX, you can just build the library's Lua interface and use it with a stock Lua interpreter. That would look something like this. This loads the mplib library; it replaces the combination of mp_options and mp_initialize, and it just says: the mem we want is plain, so that would be plain.mem. Then you check it, because there could be a problem here, of course; the mem could be not found, or the memory could be exhausted, so you have to test that something valid came back. And then you can, in this case, run a buffer instead of a file; the command for a buffer is execute. This is a multi-line Lua string: two square brackets, the multi-line stuff, and two closing square brackets to end it. In the Lua bindings this returns an l, which is an mplib result structure. It has a number of fields, but the most important one is an array, fig. If you have generated figures, then l.fig is an array of figures: you can have beginfig(1), blah blah blah, beginfig(2), blah blah blah, and in that case there would be an array of two figures here instead of just one. The figures themselves are objects, and you can simply ask for the PostScript representation of each object. That's what I said earlier, that the entire backend is isolated from the core engine: at this point l.fig is just the exported internal graphic, and the postscript() call runs the traditional PostScript backend. Are all the values within this array accessible? Yes. A question: there is a bit of confusion for me between keys and arrays here; in Ruby, say, a key into a hash and an index into an array are different things, so is this a hash or a map? I'm not sure I need to understand that to follow the code, but it would help.
It's a lot like Perl: basically everything is a hash. So there is a hash l, which has a number of fields, one of them is fig; the field fig itself is an array, so you end up with l.fig[1]. The objects in this array are actual objects, and you use the colon notation to call a method on an object. It looks like Ruby? It looks a lot like Ruby, yes, at least the high-level syntax. Just a little bit to say about makempx: makempx, mpto, dvitomp, and the troff counterpart of dvitomp, called dmp, are all rolled into a sub-library. The sub-library is actually called mpxout.w, and it is linked inside the mpost executable. In practice this means that if you don't set the MPX command, makempx will no longer run an external MPX program; the only thing it will run is the TeX typesetter or the troff typesetter to generate the actual pictures, and everything else is handled internally. Well, that's the state so far. The first beta release was planned for this conference; I uploaded it on Friday, then I uploaded a second one on Saturday, and a third one early on Sunday morning at around two o'clock, so we're now actually at the third beta. It has the library source, it has the front-end, it has the Lua bindings, and it has a documentation file explaining the API in much more detail than in these slides, about 25 pages. When we're confident that most of the bugs have been removed, probably after a beta 2 or 3, this will become the next official release of MetaPost, and Pascal MetaPost will be totally retired; that will be sometime in October or so. Then I'll start working on everything else that I wanted to do and couldn't do because of Pascal, and that I can finally get around to: dynamic memory allocation for the remaining items, the tricky ones; extension of the range for numeric values; extension of the precision; hopefully some more mathematical core functions; and some configuration of the error strategies, for some of the errors that you get a report for. You would love to be able to say: I know this is a redundant equation, don't pester me about it, I'll fix it later. I'm trying to find a way for a user to say "I know what I'm doing", or maybe "I don't know what I'm doing, but at least don't ask me any questions, just run and shut up". Also, now that the messages are simply C strings internally, it is not possible to change the language of these things; when there was an external pool file you could feasibly translate the pool file into a different language, and that no longer works, so I will add gettext support, so that translation files can be created for MetaPost. And then the API itself is the last item: the API is pretty high level now, you run code, you get images, period. There is no way to say: okay, I have this equation, what is the answer to the equation? It would be useful if you could get at that information, at least on the C level, and use other parts of MPlib for other practical purposes in different controlling programs. Finally: this was funded by the user groups, that's all of you, basically. Thank you very much. And here is where you can get it yourself. This should be the final slide now. Oh, and this is the background; actually, every page has a different background, and they are all generated on the fly by MPlib running inside LuaTeX: there is no disk file involved. How would MPlib have an impact on applications that use MetaPost, such as MetaType1? It could be used there; it's not under my control, but you could.
And it would make things less confusing, I hope; MetaType1 has all these awk scripts and shell scripts and things. Yes, that is one of the planned uses, basically. Could you say something about backwards compatibility? What do you want to know about backwards compatibility? Same input, same output. I want to know what everybody wants to know: will my old documents continue to work the same as before? Yes, same input, same output. Thank you. Except that the 8-bit switch on the command line is now always on, so you always get 8-bit input; if you had previously created a messed-up document just to get errors, that will not be backward compatible anymore. That's about the limit of it. One more: you mentioned the PostScript output; do you have any other backends in mind? Actually, I already have on my hard disk a pure Lua script called mplib-to-PDF, which uses the Lua bindings to generate PDF directly; that's one thing I'll add. With the Lua bindings you can basically generate whatever output you want: you could output SVG just as easily, I just happen to know how to generate PDF. It took me maybe two days to write, which suggests that SVG would probably take about the same. So, yes. Thank you very much.
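To make the Lua-binding flow described in the talk concrete, here is a minimal sketch in plain Lua. The option and method names (mplib.new, execute, fig, postscript, finish) follow the talk and my understanding of the LuaTeX/MPlib binding; treat them as assumptions and check them against the documentation of your MPlib version before relying on them.

    -- Minimal sketch of driving MPlib from plain Lua, following the flow in the
    -- talk: make an instance, execute a chunk, pull PostScript out of the
    -- figures, clean up.  Names are assumed as noted above.
    local mplib = require("mplib")

    local mp = mplib.new { mem = "plain" }      -- assumed option name for plain.mem
    assert(mp, "could not create a MetaPost instance")

    local l = mp:execute([[
      beginfig(1);
        draw fullcircle scaled 100;
      endfig;
      end.
    ]])

    if l and l.fig then
      for i, fig in ipairs(l.fig) do
        local ps = fig:postscript()             -- the traditional PostScript backend
        print(("figure %d: %d bytes of PostScript"):format(i, #ps))
      end
    elseif l and l.term then
      print(l.term)                             -- terminal output, useful for errors
    end

    mp:finish()                                 -- like mp_finish() on the C side

As with the C interface, the instance is created once, fed one or more chunks, and then finished explicitly so that all memory and open files are released even if the input never reached an end command.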
The first stage of the MPlib project has resulted in a library that can be used in for instance LuaTeX, and is also the core of the MetaPost program itself. This talk will present the current state of affairs, the conversion process (from pascal to c), and the interface. A roadmap towards MegaPost will be presented too.
10.5446/30793 (DOI)
When you think about it, Chris should really be an author on this paper, but since we were doing it in a rush we didn't actually have time to have it vetted by him, and it wasn't appropriate to just add his name. In some ways, though, all of this comes from discussions with Chris, starting in St. Malo ten years ago, and back and forth at various EuroTeX or TUG meetings; I think the last one that really counted was a meeting that took place in Kyoto, about new directions in typesetting or something like that. We wrote a couple of papers together, but I think it was Chris's insight that really drove all this: the idea that in many ways we are still stuck in the typewriter age, that our notion of mechanical text is still dominated by the typewriter, the idea that the input and the output strongly resemble each other. They are clearly no longer identical the way they were in the original TeX, but they are very strongly related: if you look at a typeset sequence and the original sequence of characters that generated it, you can see that they strongly resemble each other. But text is used for much more than typing and printing. Let me move forward. When you think about a document, it is going to be edited many times, and these days it is going to be rendered at least as many times as it is edited. With web logs and so on, things get annotated, which means you add to a text without changing it, you add parallel commentaries about it; unfortunately, if you go to sites such as YouTube, most of the comments are a pile of junk, but from time to time you do find interesting ones. And of course text gets searched; that is how small companies like Google and Yahoo drive their business, and those companies are clearly not thinking in terms of the sequences of characters with which you edited something. I receive, unfortunately, SMSes from my son in French that I first have to translate into French before I can understand them, and I suspect this will still be the case when my daughters start sending me SMSes too. So what we need is a new model of text. But it cannot be a single new model, because whatever single new model of text we come up with, someone, maybe even in this room, will raise a hand and say: you haven't considered this. So what we are really interested in is a model in which you can naturally describe the different forms of text that might be manipulated. At the very top, some of you have heard me speak about versioning; versioning is inherent to all of this. You should be able to have different kinds of characters, different kinds of languages, different kinds of spellings, different kinds of lots of things, and many different input methods, and multiple outputs too. Most importantly, you should be able to take into account the different kinds of processing the text might be subject to. That includes not just printing; it means being able to read a sequence of characters and transform it into words, stem the words, do some grammatical analysis, natural language translation perhaps; all of these we should be able to think of in terms of text. So how can we do this? Essentially we need to be able to move from one representation to another through various kinds of processing; typesetting would be just one of these transformations. And we need to be able to completely separate the different kinds of representations.
Now, when I say internal representations, the plural is deliberate; it's not a typo. It is not as if there is a single internal representation: for a specific organization, for a specific set of purposes, there may well be a single canonical representation, but in general, no. These are the two papers with Chris that I'm referring to; they are available if you wish to get them. The solution, it turns out, already existed, but it took us several years to invent it and then to realize that we had invented something that already existed. That's not a bad sign, especially given that it had already been invented for linguists. They are called AVMs, attribute-value matrices; they are also called feature structures. And what are they? They are simply association lists, I call them tuples, of dimension-value pairs, where the values themselves may be AVMs. So you have these dimension-value pairs, you might call them attribute-value pairs, I prefer the word dimension, and any of the values may itself be another sequence of dimension-value pairs. What that means is that any specific value can be reached by following a path from the top, and effectively that gives you a pointer into the data structure. Nothing new; it is very difficult to invent something completely new that no one has ever invented before. So what can be encoded? From a theoretical point of view, everything simply becomes a mapping from multi-dimensional indices to values. There are three very common structures that come up in text: the ordered sequence, for flat structures; the ordered tree, for hierarchical structures; and the multi-dimensional table. For those of you who just came in late, especially those with a linguistics background, I'll mention again that the solution is attribute-value matrices. So how do you get through these things, how do you manipulate them? The analogy with object-oriented containers is relevant: there they are called iterators, and these indices actually become iterators from a programming point of view. If you have an ordered sequence, what counts is the natural number designating the point we are at in the sequence. If you have a tree, it is the path leading to a specific point in the tree that counts. If you have a multi-dimensional table, it is a coordinate in Cartesian space that is relevant, and the order in which the dimensions are specified is not important. If you want to define a range, then for the ordered entities you would give a pair of iterators; for the tables, on the other hand, you might want to specify an n-dimensional parallelotope, which would then require two n-dimensional iterators to specify the corners of the range. And that's all the machinery; the rest is just examples. So let's look at examples. The simplest case, one we should all be familiar with by now, is a sequence of Unicode characters. Here we have a sequence of things, and each of the things is a pair, a two-tuple saying: this is a Unicode character, and here is the index. That's fine, but even at this conference we have seen situations, or at least people have talked about situations, where Unicode isn't enough. There might be cases where you have visual content which does not have an abstract representation as a Unicode character, but which is conceptually equivalent to a Unicode character.
For example, suppose you wanted to write a book about the origins of writing systems, and you wanted to show a picture of a specific glyph, let's call it that, carved into stone, which doesn't correspond to anything yet: nobody has done the analysis that would let you say "this corresponds to this Unicode character". As long as it has not been made abstract, you need to keep track of your stone glyph. Similarly, you might have some glyph in an Adobe font which was never encoded in Unicode, and you would want to keep track of that too. So this allows you to say, in an abstract way, that all of these things are conceptually equivalent. This is the kind of thing that was being dealt with in Omega. Then you start to think: we want to be able to read things and write things, but we might not want to say that this is the only way to write them. So we might take a step back and say: internally we are going to encode things as sequences of words, and these words, when we print them, will get presentational variants chosen according to certain criteria. That would allow us to deal fairly easily with US versus UK spelling, for instance. Yes? I'm just puzzled by the spaces between the words: at the top there is no representation of the spaces between the words, and underneath the spaces are there. There wouldn't be spaces between the words in that representation; let me say more about that. The reason we put spaces in is as a visual cue to ourselves, when we are reading, that these are separate words. It's a convention; before Charlemagne, more or less, they weren't marked up at all. The spaces are marked, yes, thank you, but the point is that the text is translated into some sequence of words, and the spaces become implicit. I just wanted you to say explicitly that the spaces are implicit. They are an artifact of a particular visual representation. That's fine to say. Thank you. I will ignore further comments from the back. So here what happens is that you are looking up words in a dictionary, but there is still nothing to deal with declension, for example; you have to look up conjugations, noun or adjective declensions, and other such things. So let's move to a higher level. Here we can have some sort of analysis; the previous level would be the plain dictionary, and these entries would correspond to the stem words: for a verb the infinitive form, for an adjective a canonical representation, for a noun a canonical representation. If you look at French verb conjugations, there are typically eighteen or twenty actual conjugated forms, but in the dictionary you only find one entry, except for a few very unusual cases. So then, in this example: this particular word is a pronoun, a subject pronoun in the first person singular; then there is a verb, in the first person singular, in the indicative, in the composed past; then there is an indefinite article in the feminine plural, these should have an s, and then a noun, which once again has to agree. Once you go to this kind of structure, it turns out that you can, for instance, build error messages with number variants, so you can say "five lines are incorrect", or "zero lines are incorrect", or "one line is incorrect".
So when you are generating messages, you have one text which corresponds to the error message with a hole in it, and only when you throw the number in does the system determine whether "line" should get an s or not. You could have conjugation variants through time, corresponding to sound shifts, or spelling reforms, or other such quite controversial things through history, and of course dialectal and language variants. It is actually possible, I believe, to write a single conjugator that would work quite nicely for French, Italian and Spanish. Alright, let's move on. Compound words are notoriously difficult to translate. I came across this with, I think, a Google translation of a text that mentioned "internet outfitters"; actually I was reading the Spanish, which said "vendedor de equipo de caballero de internet", so "internet cowboy equipment vendors" is what "internet outfitters" had become, simply because each of the words in the compound was translated separately, and I'm sure you have all seen equivalents in your own language. On the other hand, you can have a dictionary of compound words, such things do exist, and that is a higher level again. These can be encoded, and then you can have sequences; whenever you don't have a canonical representation for a compound, you fall back to the sequence of words that form it. This would allow you to do things like automatically adding an indefinite article, if such a thing exists in your language, and, as I said, you can always fall back to a sequence of entries from some lower level. But you can move higher still, if you are dealing with grammatical structure. Most languages can be described, sooner or later, by something that resembles this: you have a subject, verb, object and indirect object, and the exact order in which these appear varies from language to language, and even within a language; in French, for example, five of the six possible orders of verb, object and indirect object may occur, depending on whether you are using pronouns for the object or the indirect object. So all of these are different structures, yet at the same time they use the same canonical family of structures. Finally, I mentioned tables. Tables are mappings from multi-dimensional tuples to cells, and there is no real order on the dimensions of the tuples. When you visually present a table you transform it into a two-dimensional entity, and this transformation may be highly complex or very simple, depending on how complex the original table was and on the kind of visual representation you are trying to produce. The history of all this is that it came out of a project called TransLucid, a programming language that I have been developing with the help of Blanca and Toby; Toby actually wrote the first multi-threaded implementation of the language. It is a declarative language in which all entities are arbitrary-dimensional infinite arrays, and it is through this work of continually viewing everything in terms of infinite arrays indexed by finite tuples that we came up with this idea for text. It just turned out to be identical to the AVMs, the difference being that in linguistics the AVMs are ad hoc structures.
Whereas we actually have a semantics, and tools that allow us to manipulate these. So although we have no software yet that handles text in these terms, we can imagine such software actually being written. And that's my talk. Very good, John, I'm very proud of you. Any questions, or do you want to have all of the presentations first and then all the questions? I can dodge the tomatoes. You started out being very general when talking about this: not only text but voice; is that representable with AVMs? I have no idea, but it's probably worth thinking about. Is there any other kind of audio we deal with? We have speech, we have music, and the rest is just noise. So the answer is: for certain things it definitely works. If you are looking, for example, at perfect and imperfect cadences, at sequences and so on, chord progressions, these can be encoded quite nicely this way. But when you have one phrase overlapping with another phrase, say a Bach fugue or something like that, I'm not so sure; probably, but I'm not sure exactly what you would be encoding. I think that might be more relevant to the next talk: here I have effectively been talking about a single text with no links to the outside world, while the example I just mentioned, with the overlapping phrases, is more like multiple texts interacting in interesting ways. But that's the subject of the next talk. Okay, Jonathan. What do the linguists use the AVMs for? Well, basically they parse an entire sentence into a single AVM; that's a common thing. So it's a representation of grammar and meaning? Yes: the verb phrase equals this, and that verb phrase might be composed of other things, and so on and so forth. For a very short sentence the AVMs stay close to the top, and for a very big sentence they go way, way down; it also depends on how much annotation you have, how much analysis has been done. Okay. Great. Thank you.
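Since the talk stays at the level of diagrams, here is a toy sketch of an AVM as nested Lua tables, with a path accessor playing the role of the iterator described above. Every name and every grammatical dimension in it is invented for illustration; this is not TransLucid nor any existing linguistic toolkit, just a minimal rendering of the idea that a value is reached by a path of dimensions.

    -- Toy AVM: nested tables of dimension-value pairs, values may be AVMs again.
    local word = {
      stem   = "manger",
      cat    = "verb",
      person = 1,
      number = "singular",
      mood   = "indicative",
      tense  = "passe compose",
    }

    -- a sentence is itself an AVM whose values are AVMs
    local sentence = {
      subject = { cat = "pronoun", person = 1, number = "singular", form = "je" },
      verb    = word,
    }

    -- follow a path of dimensions down into the structure (the "iterator")
    local function at(avm, path)
      local v = avm
      for _, dim in ipairs(path) do
        if type(v) ~= "table" then return nil end
        v = v[dim]
      end
      return v
    end

    print(at(sentence, { "verb", "tense" }))    --> passe compose
    print(at(sentence, { "subject", "form" }))  --> je

The path { "verb", "tense" } is exactly the kind of pointer into the structure that the talk describes: a value is identified not by a position in a flat string but by the dimensions leading to it.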
The Unicode model of text makes a clear distinction between character and glyph, and in so doing, paradoxically, creates the impression that the ultimate representation for text is some form of abstraction from its visual presentation. However,the level of abstraction for different languages encoded “naturally” in Unicode is quite different. We propose instead that text be encoded as sequences of context–tagged indices into arbitrary indexed structures, including not just character sets such as Unicode, but also dictionaries of words or compound words. Furthermore, these sequences need not necessarily contain elements from the same indexed structures. Using our approach allows natural solutions for a wide range of problems, including the creation of documents that can be printed using several alternate spellings, the automatic generation of error messages with arguments, and the correct generation of nouns or adjectives with number, case or gender markers or of verb conjugations.
10.5446/30794 (DOI)
Okay, so as John said, he was talking about individual items, if you wish, and he presented a whole series of different ones: characters, words, compound words. Here I have made a summary of a few of the items a document could be represented by: the character, the stem, prefixes and suffixes, declensions, the word, the compound word, the phrase, the sentence, a section of a paragraph, and so on. You can imagine something else in between, you can have combinations; you can have a group of characters that means nothing to one person but might have a meaning for somebody else. So, taking over from John, we have items; but what do we do with them? What we propose is a document model with several parts. The first one, the easiest to think about, is a sequence of these items, and that is what we call a galley. But one galley on its own is not very interesting, so we have a mechanism for joining galleys together, and that is what we call a list of links, or simply a link. You can also think of algorithms that manipulate these galleys, transforming one into another: if we have a list of characters we can produce a list of words, and we can produce a list of sentences. Then parsers from input to galleys: if we have an editor, I want to be able to take that input and put it into a galley that a specific algorithm can use. Then, obviously, taking it out of that storage and putting it into an output form, whatever I decide my output form should be; and obviously input and output mechanisms. So let's think a little about galleys and how we view them. Here I have a very simple string of characters, and we can actually see two galleys. Look first at the left side: you see w81, w82, so we can think of a galley that holds words, indexed by w81, w82, w83; and on the right side we have another galley that is used to represent characters. For now we are just thinking about having these two different representations. So here is an example link. If we go back, we have the last bit of the word "primary", with indices from 228 to 232; we can just grab that range and say: I want all of this range linked to a function g of c, where g of c, say, colours it blue. This is what we would otherwise call markup: I am marking up that section of my galley to be printed in blue in my output, but you can imagine that the function could be anything. Another example, with multiple galleys: you can imagine this in any script, it just wasn't easy to prepare at the time. We have three separate word galleys in three different languages, and, arbitrarily, we have indices for each of the words. This came about when we were thinking about how to print parallel texts in different languages easily. How are we going to match them up without having to deal with translation? Well, I have this first paragraph, and I want this whole paragraph to match this whole paragraph in Spanish and this whole paragraph in another language.
Having these lists of words, with an index for each word, we just create a tuple in which we say: the beginning of this region should match the beginning of that word and the beginning of that word, and the same at the end. You can imagine the algorithm then being able to say: I know I have to typeset these three chunks of text next to each other in a specific area of my visual output. And this is just one possible output we can imagine: at the end of each such region, all three texts have to be at the same point. That applies especially on a screen, where you can imagine many things mixed with each other that have to match visually. So that brings us to the question of what happens when we have several galleys. In the previous example all the galleys were, so to speak, at the same level. But what happens when the main galley is just the text, another galley holds the footnotes, and another galley holds the markup, like the colour functions I mentioned before? These have to be interleaved, and we have to find mechanisms to point from one to another. The reason we keep them separate is that marginal notes and footnotes are usually not considered part of the main text; if you do a search, for instance, you want to be able to differentiate them too. We could also have tables, the way John mentioned them: a table would just be a tuple, and that would be another galley that gets inserted somewhere. So now we have these pointers into the main galley, which John talked about as the iterators, and that is how we manipulate ranges. So what are these lists of links? They are represented by tuples; each can be a point or a region; and since we can have separate lists of links, they do not have to be nested, they can overlap. A newspaper can have one galley representing one linear flow, and another one which overlaps it, in another frame if you wish. You can also have equivalences between galleys, which is what we were doing with the paragraphs. Many things are possible. Here is an example with a very simple word, the word for "mammals", showing how we can have three different points representing three different places in the word. If we have a region, that can mark the relation between the root and the termination of the word; another marks just the root. This is just to show that we can represent absolutely any point in this stream of characters, any point and any region that we wish, and we can tag it in our galley: we can have a galley of just roots, or something like that, we can say this is one root, this is another one, and have specific descriptions for each of them. Well, this really is very much brainstorming. What sort of algorithms can we think of applying to these galleys? The list is endless; we could imagine doing any of this at any time: line numbering for typeset text, for example, where we take one galley and convert it into another galley; and for analysis, we can imagine being able to use a very flexible range of algorithms.
And the output is again just a galley, produced by the algorithm. What is really, really important here is that the storage of the galleys, the data structure of the galleys, be flexible enough for the algorithms to manipulate them and to create new galleys as needed. And those are the closing comments on the model: it is flexible enough for anything we could imagine, and the key point is really the design of these data structures, the galley model, having these links, and atoms representing the different items of a text. That's it. Any questions, any comments? Yes: when you speak about synchronization between galleys, it obviously means different things for different kinds of galleys. For some kinds of streams synchronization means the same point in time, but when we speak about typesetting it is something completely different, it means maybe the same place on different pages. So I think that in the model, besides the general term synchronization, you need some instances: what do you mean by synchronization for each kind of galley? Yes; at this point it is more of an abstract model, and once you start going down into a specific system you apply exactly what you just said: for typesetting it will simply be position, so synchronization means position here and time there. Exactly. Interesting. Another comment: to my impression you have different kinds of galleys, those which can be derived by transformation from others, and those which absolutely have to be kept persistently, because they are essential to the semantics of everything; so probably there should be some kind of classification to indicate that. I think that relates to what John said about certain representations being canonical: that would be your base, and everything else would be derived from it, but somebody else might have another base that they want to keep, with everything that matters to them. Well, you have to be very careful about throwing away representations. I have seen, in certain text projects, that people threw away information about how the text was set in the old books and regretted it afterwards; something as simple as how wide the margins were turned out to be important for cultural research. So what you are saying is that we have to be careful with any form of input into the system. In the end, yes: you don't know what the semantics will turn out to be, you don't know what kind of questions people are going to ask of your data. Something that you see as merely derived, something you don't care for, may suddenly be exactly what some historical researcher wants to study, because they are doing research on typographic traditions: here we have narrow typesetting, here we have wide typesetting, and how did this change over the years? So normally you would say a blank, a white space, is just something to normalize away, not to reopen the discussion between you and Jonathan, but it might be relevant in the context of research on typographic traditions. Any other questions? Thank you.
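As with the previous talk, a tiny sketch may help fix the idea. The following Lua fragment models a galley as an indexed sequence of items and a link as a tuple tying a range of indices to a function, as in the "colour it blue" example above. Everything in it is invented for illustration; it is not an implementation of the model, only of its shape.

    -- A word galley: an ordered sequence of items, indexed 1..n.
    local galley = { "The", "primary", "aim", "of", "this", "work" }

    -- Links: tuples that tie a range of indices to some function (markup here,
    -- but it could just as well record a match with a range in another galley).
    local links = {
      { from = 2, to = 2, apply = function(w) return "<blue>" .. w .. "</blue>" end },
    }

    -- Walk the galley and apply whatever links cover each index.
    local function render(g, ls)
      local out = {}
      for i, item in ipairs(g) do
        for _, link in ipairs(ls) do
          if i >= link.from and i <= link.to then item = link.apply(item) end
        end
        out[#out + 1] = item
      end
      return table.concat(out, " ")
    end

    print(render(galley, links))
    --> The <blue>primary</blue> aim of this work

Because the links live outside the galley, several independent lists of links can point into the same sequence, overlapping or nested, which is exactly the separation between content and annotation argued for in the talk.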
We present a general model for electronic documents supporting parallel containers of content, tied together through link components. This model is usable for a wide range of documents, including simple textual documents with footnotes and floats, complex critical editions with multiple levels of footnotes and critical apparatus, maps with multiple layers of visual presentation, and music scores. This model is inspired from the C++ Standard Template Library, whose basis is that Containers + Iterators + Algorithms = Programs. In our approach, the “iterators” are pointers into the parallel containers, keeping track of callouts for notes, floats, and parallel links. The data structures required for this model are remarkably simple, and will allow the rapid development of many different kinds of algorithms.
10.5446/30797 (DOI)
Apart from the obvious pun with the cork and the bottle, it has always struck me how the whole world of TeX encodings could be described as a Pandora's box, or a bottle, or an amphora, or something like that. So the title is really a pun on both counts, on the city of Cork and on the general situation of TeX encodings today. I'll start, obviously, at the beginning. The story is that during this summer Mojca and I, Mojca is a young student in Slovenia, in computer science and bioinformatics, something like that, and I'm doing something in between as well, math and computer science, were sponsored by Google, which gives out a lot of money to students to commit open source code. Mojca is doing something related to TeX documentation, and I made a proposal related to making TeX more Unicode compliant. That may sound a bit ambitious, and maybe also unclear, so I'll start with what it means exactly. A basic sanity check: what is Unicode? I suppose everyone has at least heard the name and has some idea of what it is. My informal definition is: a universal character set, suitable for any writing system and any script. That is not the official definition you will find in the thick book, a hundred thousand characters, fifteen hundred pages or so. So I'll just assume everyone knows that, and move to the actual content of my project for this summer: what does it mean for a TeX-based system to be Unicode compliant? It's really not clear at all. Of course I had some idea before beginning, back in the spring, but then we started actually doing something related, though not quite the same. Our first step was suggested by Mojca, actually, and concerns hyphenation patterns. As you know, we have quite a lot of hyphenation patterns for many languages in TeX distributions, and none of them are in UTF-8; they are quite ignorant of Unicode in that respect, which does not mean that they can't be used for many different writing systems, of course they can, but they are not in Unicode at all, and not in UTF-8 either, which is a stronger, or rather a more precise, requirement. This was a real problem when XeTeX started to be integrated into TeX Live a year and a half ago, or actually approximately two years ago, because XeTeX expects UTF-8 by default. So Jonathan devised a system to automatically convert the different hyphenation patterns: for each pattern file available, some something-hyphen.tex, he wrapped it in a file with an xu- prefix, for XeTeX Unicode I assume, and given the bizarre diversity of files we had, I think it is quite appropriate to call what we had a zoo: a whole zoo of different patterns, some of them really in the wild, which we didn't know what to do with. The TeX code, and I'll show you more of what we've actually done after that, is quite simple, because the so-called legacy encodings were all some 8-bit font encoding: to convert the different 8-bit characters we simply make them active, so each becomes a macro, and we make that macro output the UTF-8 byte sequence, usually a two-byte sequence, for the appropriate character in the font encoding the pattern file is in.
This works well enough, it does indeed work for XeTeX, but I don't think it is the way we really want things. Nowadays we would want the master files in Unicode, converted automatically to an 8-bit encoding if needed; today it doesn't seem reasonable to rely on a haphazard diversity of font encodings. So what Mojca and I did was to turn the problem around and address it the other way: we take every hyphenation pattern file available and convert it to UTF-8, that is, we convert it beforehand and make that the master file. At the same time we wanted to get rid of the many complicated macros that were around, because the pattern files did not contain only the patterns and the \patterns command for that language; they also had a lot of support code, and the \catcode and \lccode settings are a really important part of that support code, but there were also other, really complicated things that messed up the situation and made it unclear. On occasion it was really not clear which characters, which actual patterns, which stream of characters, were meant to be in the file. Finally, we wished to adopt a cleaner naming scheme for the languages at hand. Up to now the pattern files were mostly named with a two- or three-character code followed by "hyphen", which was more or less okay, but it caused real difficulties. For English, the most basic example, it is already a problem: there was Knuth's original file, hyphen.tex, which more or less accounts for American English, and then someone devised a file for British English, which was called with uk; but unfortunately uk is also the ISO code for Ukrainian, and that's a problem because we also have a Ukrainian file. Of course, once you know it, it's not really a problem, but it messes things up, so we decided to find a set of language tags that could account for the diversity of languages we had, and we found that the only usable one was the IETF language tags, that is RFC 4646, which is quite precise. I'm afraid I don't really have time to discuss all of this, but I'll show you the complete list of languages at some point, and if anyone wants, I can discuss the exact problems we were facing. Just to insist on it: the ISO language codes simply weren't enough, because English, for example, is a single language, yet we have patterns for American English and for British English, and we could have Irish English or Australian English, though we don't have them for the moment, and of course I'm not intending to work on that at all. The next two slides are a bit of TeX code; it's really fun, it was contributed mostly by Jonathan and Taco, and Mojca and I reworked it again. What we want is a single file, the equivalent of what Jonathan's zoo file did, which I call here the loader file for the particular language; this is, so to speak, the top-level file. In it we have to test which kind of TeX engine we're running, and we do it like this: we have a macro that expects to see two tokens before the exclamation mark, and it defines a second-argument macro as its second argument.
And the funny thing here is that is not a Latin T, it's a Greek T. If you look at the file, the UTF-8 encoding for a Greek T, which is two bytes. So, this means actually that if we're running something like ZTECH or LuaTECH, which is natively UTF-8, it sees a single character, because its input is UTF-8. So, it sees a single character, so hash1 is T, hash2 is actually empty. But if we're running PDF-TECH or TEC-3 or some E-TECH as well, for example, we actually see two characters because we see two bytes. So, hash2 is not empty. So, the simple test is simply this. If second-regard is empty, we will simply output a message to the user, and then we do nothing else, we simply input directly the actual file with the real patterns. And otherwise, if second-regard is not empty, it means that we're running an 8-bit tech engine. So, we output another message, and we force to input a file that will do the conversion from UTF-8 to the appropriate 8-bit encoding in tech. Just to be precise, this, Mojta and I are really mostly context users, and the EC name is some context idrosyncrasy for the T1 encoding, so again, the cork encoding, because most languages of Europe, which are written in the Latin alphabet, use the cork encoding, of course, it was devised for that. So, that's it. So, I will not show the hiv-dash as early as it's simply a pattern file which we converted to UTF-8. Yes, Jerm? So, it didn't mean for me to do this, but I just don't understand the point that I needed it. Why am I the kind of virtualizer that uses UTF-8? Is it out of this or my students? The problem is not converting the file to UTF-8, the problem is loading them into tech, that's what I'm distrust after that. If you have general questions, could you just ask them after the talk? So, I'll show the converter tech file now. It's really a bit funny. If you know the internals of UTF-8, you should guess how it works. This is an extract of the actual converter file because, of course, it's much longer. So, I just showed how it works for the three characters that are used for Slovenian. And those three characters happen to be encoded with a two-byte in UTF-8. And the first one, the C with Karon, which Hatchek is encoded with an exaggeration of C4 and exaggeration of 8D. So, there are the two bytes. So, it simply makes C4 active and it takes one argument. And if that argument, this is C4, it takes one argument. And if the argument is AD, it simply outputs the appropriate character code for C with Karon in the T1 encoding, which happens to be A3. And I stripped it out that actually if it sees anything else, it insults the user or the person running the file. And the last was for the other two characters whose UTF-8 encoding starts with C5. And then I mentioned LC codes before because it's a very important part of the pattern loading mechanism. Actually, Tech expects any character in the pattern file to be a letter, that is to have cat-code 11. And also to have an appropriate LC code, a lowercase code, which actually must be non-zero. So, in most cases, we have patterns, I think in any case, in the patterns. In other cases, we have patterns in lowercase form. And so, the LC code is simply themselves. So, for the three characters at hand, we simply give LC code for this. So, the actual problem, and I knew I wouldn't have time for this, but I still have to mention the different problems. Actually, it may answer John's question now. 
Of course it doesn't always go that smoothly: with probably every language we had problems, we did have problems, and we will still have some in the future. For example, some languages could be handled in OT1, the old TeX encoding, the 7-bit one, and the pattern files tried to accommodate both encodings. There was a sort of hack introduced in the German pattern files: the sharp s has a different code position in OT1 and in T1, so it was encoded twice. It is not straightforward at all to reproduce this from a single UTF-8 master file, because it means that whenever we encounter a sharp s in the UTF-8 pattern file, we should actually output two different patterns for 8-bit TeX engines. The same happens for French, Danish and Latin, because each of them has some special character with a different code position in OT1 and T1. So for those we simply dropped the nice approach I showed for Slovenian; we don't do that at all, we simply say: if we're running a UTF-8 TeX engine we input the UTF-8 pattern file, and if we're running an 8-bit TeX engine we simply input the old file. We didn't want to touch those at all, and actually we're not convinced that the old files, which tried to accommodate both OT1 and T1, really work that well for OT1 anyway, because in OT1 you simply don't have the accented characters; you use TeX macros for those, you use the \accent primitive. So having the sharp s for German is fine in OT1, but it by far doesn't account for everything, and likewise for French and Danish. Latin is actually fine, because in the modern spelling of Latin you have the oe ligature and the ae ligature, which can be completely represented in OT1 as single characters. Then there is the Cyrillic community in Eastern Europe, which is very active: Russian and Ukrainian can load really quite different pattern sets. The master files, ruhyphen and ukrhyph, the legacy files, are really well done: you can set a macro which defines which pattern set you want to use, because different people have contributed different patterns, we have half a dozen for Russian, I think, and you can also use different encodings, because unlike the European languages using the Latin alphabet, for which the mainstream encoding really is T1, for the Cyrillic script there are several encodings, namely T2A, T2B, T2C, and also an encoding called X2, I think. When we realized that, we weren't quite sure what to do, and we simply decided not to try to emulate this for inclusion in TeX Live 2008, because it was too short notice and it just seemed unwise to try to emulate all of that behaviour. And sometimes, it has to be mentioned, and I already said it actually, Unicode itself is inherently bad at representing some languages. So far we just can't accommodate Greek, I mean ancient, polytonic Greek, the one with all the accent signs, not monotonic Greek with its single accent; here it is not only Unicode's fault, there are TeX problems as well, so we simply split the patterns apart.
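Before the remaining points, here is a rough sketch, in Lua rather than TeX macros, of the byte mapping that the converter file described above sets up with active characters: each two-byte UTF-8 sequence for the Slovenian letters is replaced by a single-byte T1 (Cork) slot, so that an 8-bit engine can digest patterns kept in UTF-8. The 0xA3 slot for the c with caron is the one quoted in the talk; the slots assumed for the other two letters come from my reading of the Cork table and should be checked against the real encoding file before being relied on.

    local c = string.char

    -- two-byte UTF-8 sequence -> single T1 byte (slots partly assumed, see above)
    local utf8_to_t1 = {
      [c(0xC4, 0x8D)] = c(0xA3),  -- c with caron, U+010D
      [c(0xC5, 0xA1)] = c(0xB2),  -- s with caron, U+0161
      [c(0xC5, 0xBE)] = c(0xBA),  -- z with caron, U+017E
    }

    -- rewrite one pattern line; two-byte sequences we do not know stay untouched
    local function to_t1(line)
      return (line:gsub("[" .. c(0xC4) .. c(0xC5) .. "].", utf8_to_t1))
    end

    print(to_t1("a1\xC5\xBEe2"))  -- a made-up pattern, just to show the rewrite

This is exactly what the active-character macros do inside TeX at load time: the lead byte grabs the continuation byte as an argument and expands to the legacy slot.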
And sometimes, and I already warned Johannes that I had to discuss this with him, we really need to fix things in Babel as well. So bear with me, Johannes; that was not really a pattern problem, not at all actually. And then the result. Well, we started this approximately two months ago. It was really Mojca's idea, and it was all driven by her energy. And now, thanks to Karl Berry, it has really been imported into TeX Live; of course we uploaded it to CTAN, the name is hyph-utf8, and from there it was imported. I was contemplating whether to also speak about my actual Google Summer of Code project; let's say no, and skip to the thanks. People who are interested in what I actually do can simply talk with me; I've already discussed it at length with many people here. So thanks to all those good people: to Karl Berry first and foremost, and to Jonathan, to Taco and Hans. The people in the fourth paragraph were really very receptive to our initiative, and it has been really nice to see that this is still something very much alive. Dejan Muhamedagić, for example, who contributed the Serbian patterns back in 1990 (at that time he called them Serbo-Croatian), did that 18 years ago, probably starting 20 years ago, and he was extremely receptive and said, yes, I'll fix things. All the other people did too: for German, Werner Lemberg, and Vladimir Volovich helped us a lot for Russian, and so on. So thank you very much. I think we have time for a few questions. [Chair] Can you repeat the questions, please? [Audience] Why not just decide for XeTeX and ask people, do you use XeTeX or not? If that's what you really want, just tell the people who don't use XeTeX to get lost. Isn't that a reasonable default? Karl would never have supported that at all. For example, we... [Audience, partly inaudible] I remember, ten years ago, ... people said this was ridiculous, that you would need fifteen minutes of... I think Karl will have more to say about this during his own talk. [Audience, continuing, partly inaudible] ... ten years later ... and we still can't talk about the new engines, so... We're not patching anything about Unicode here. [Audience] I'm sure anyone makes mistakes at times. You really don't... Sorry, this is just completely wrong. You're talking about... sorry? You said we should drop the 8-bit TeX engines and force everyone to use XeTeX; am I correct, is that what you were implying? [Audience] I'm saying that, collectively, this is a reasonable thing to do; otherwise, effectively, it will be condemned in the future. Okay. Can I speak now? That's not our responsibility to take. Of course I find it sad that we don't have more people trying to use modern TeX engines, and by modern I really mean XeTeX and LuaTeX here; it's a bit sad for them, but do we have to force them and punish them? Does that make sense? I don't agree, and Karl, you wouldn't agree either. [Audience] This is really ridiculous... [Chair] Okay, well, we can debate this later; we need to move on to the next speaker. [Audience] I have a quick question, and it will be very quick. Have you considered simply...
This is maybe more for discussion afterwards, but have you considered simply generating the files out of the Unicode master externally, one for each corresponding encoding? I mean, the problem with TeX and the old TeX environment is the fact that Don decided to make hyphenation depend on the current font rather than on the language; he called it language, but he was actually talking about language plus font encoding. So, effectively, you need patterns for each and every encoding. Absolutely. Although, when you look carefully at it, a particular language really doesn't use different font encodings; it really uses one. The patterns are encoded for one particular encoding, and German and French, with T1 and OT1, are an exception in that respect. Russian and Ukrainian also are, as I mentioned, because those are really the only ones that try to accommodate several encodings; otherwise the patterns themselves are in some one particular encoding. But... OK. Yes? [Audience] My question is: wouldn't it be easier, from a support perspective, to just generate one file per encoding and name them accordingly, say german-t1, german-ot1, and so on? They are all generated from the Unicode master anyway, and you just need to convert it once for each; you don't have to do all the run-time magic. Absolutely. It seems a saner way to proceed, and I think we'll probably move to that in the future. But in the beginning it wasn't clear how messy the situation was. [Audience] I think I actually got that wrong... Oh, no, no... those are the 49 languages... It doesn't matter; it was a very simple question. Thank you. Thank you.
In the TeX world, the name of Cork is associated with a standardization effort dating back to 1990, the Cork font encoding, which can be used for most European languages written in the Latin script. At about the same time, though, a much wider standardization effort was initiated, as the Unicode Consortium was created to devise a universal character set suitable for any language and writing system. Of course, it wasn't long before people felt the need to support Unicode in TeX-like systems. How far are we today? The latest extensions to the TeX engine are all labelled as "supporting Unicode", but upon closer inspection this proves rather imprecise: does it mean enabling UTF-8 input, handling multibyte characters, or implementing all the Unicode character properties and algorithms?
10.5446/30799 (DOI)
So what I want to talk about today is not what I've talked about at most TeX conferences for the past few years, which has been the XeTeX project that I hope at least a good percentage of people have heard of by now; I won't repeat those old presentations, this is something different. But in a way there is a link, because one of the things that has driven XeTeX, and I think made it successful, has been that it made some things easy that used to be really hard. People used to be intimidated by the idea of trying to use a new font in TeX, or in LaTeX particularly, and although there were tools that could do it (there was fontinst and other systems that could help you use any font you wanted with LaTeX), a huge percentage of users never got over that initial barrier of feeling that it's really hard, it's really complex, I can't do it. So TeX meant Computer Modern, or maybe Times Roman or something, and that was it. I've been really gratified by how XeTeX has been accepted, and I think one of the big reasons has been that it made it so much easier for people without any kind of technical know-how to use whatever font they wanted to use. But I would suggest that there's another significant issue we have in the TeX world, and that is that TeX has a reputation, a well-deserved reputation, for being really, really good, especially at really technical material, math and the sciences and so on. There's another side to that, though: there are a lot of people who could benefit from TeX for writing their articles, letters, novels, whatever they want to write, but they're not mathematicians, they're not physicists or engineers, and they see Greek symbols on the screen and they're just frightened off. They look at these things and they think, wow, that's way over my head, and so they won't touch it. So I wonder, couldn't we make TeX a little bit more accessible to those kinds of people who aren't spending their entire life doing math equations or chemistry or whatever it is? So what does TeX look like when a newcomer comes to it? What's the first thing they see? They see some kind of interface: they have to write a TeX document, they're going to use a program to do that, and we have lots of TeX environments around the world, so let's look at a few of them. Don't worry that you can't see the detail on the screen here, but this is one of the TeX user interfaces that's out there. It's got a place for you to write your TeX document, and it's got, how many dozen buttons across the top?, with all sorts of interesting symbols on them, most of which I don't know what they mean; and if I were new to this, I certainly wouldn't know what they mean, with little Greek symbols on them, or cryptic abbreviations for processes you might run, when I don't know why I would want to run those processes. What is a DVI file, anyway? Let's look at a different one. That's another TeX interface, and it's very similar, isn't it? It has a whole lot of buttons, and about a quarter of the screen is actually dedicated to where you can write your text; the rest of it is really quite scary technical stuff for somebody who's used to double-clicking an icon on their desktop and starting to type in a window. I would suggest that this is frightening, and there are people who will not get beyond that initial shock.
Let's try... oh dear, another one, but it's very much the same: loads of strange math symbols and Greek letters, and yes, equations in the toolbar up there. Sitting here in this room, most of us are technical people and we use this kind of stuff, but there's a whole world out there of people who will never write a math formula in their life, yet they could still benefit from TeX typesetting. There's another one; well, I know it's cruel, isn't it. I'm not trying to say these are bad programs; they are great tools for one kind of user, but they're not the right tool for every user. There's yet another one: there's hardly any room to write your document here, but there are lots of buttons with DVI, PS and PDF, LaTeX, and what do these things mean, anyway? I just want to take my text and get a PDF out of it. Oh, lots of symbols down the side as well. Let's skip past that; we've seen enough, you know what TeX interfaces are like. There's another kind as well, something like Emacs. I didn't even touch Emacs here. I would suggest Emacs is a great program; I used to use it, then I got out of the Unix world and I stopped using it, and I've never climbed back up the learning curve, and to be honest, I don't want to. I know it's a great tool for people who know how to use it. Okay, here's a different type of TeX interface: this is TeXShop. I would suggest that in the last few years TeX on the Mac, which is what I'm running on here as well, has been incredibly successful, and one of the big reasons for that has been TeXShop. Dick built an interface that, for a Mac user who was comfortable with the machine but maybe didn't know much about TeX, wasn't frightening: it was easy to start up, easy to use, and it didn't frighten people away before they even typed hello world. Several of us have talked at previous TUG meetings and so on, and concluded that there didn't seem to be anything quite like this elsewhere in the world except what was on the Mac with TeXShop, and maybe there's a place for that level of interface: something much, much simpler, much cleaner, that doesn't present you with all the options right in your face when you first start up, because if you're presented with all the options, you don't know what to do. I looked TeXShop up on Wikipedia, and there's a great quote that says the introduction of TeXShop caused the TeX boom among Macintosh users, and I think that's probably true. It gave an interface that was so much simpler and cleaner than all the, if you like, high-end technical TeX environments. There are some power-user features in there, but they're kind of hidden away; you have to go and learn how to use them, and you don't have to worry about them until you're ready. Another great strength of TeXShop, which came from its environment on the Mac, was the focus on going straight to PDF. The default way to run TeXShop is to write a LaTeX or TeX document and get a PDF file, and everyone knows what to do with a PDF file. You don't have to worry about DVI and dvips; a whole level of complexity is taken away by adopting PDF as the default workflow. And then Dick had some really neat user-interface touches as well that have apparently been hugely popular, like the little magnifying glass: you can click anywhere in the preview and see it magnified, and from what he says, people just love that; any tool that has it is automatically a winner. And he has this mechanism of synchronizing back and forth between the preview and the source.
You can click somewhere in the preview of the document and get to the corresponding point in the source. So TeXworks, which is what I'm here to talk about today, is an attempt to do essentially the same thing as TeXShop, but to make it portable to other systems, because TeXShop was built from the beginning to run on the Mac, using a bunch of Apple-specific technology. Dick did a fantastic job, it's great, but unfortunately it's only on the Mac, and there are an awful lot of people on Linux and on Windows who don't have that available to them. So TeXworks aims to give that same experience, as closely as we can, in a portable way that will run on other systems. How to do that? The main point is to build on portable and open-source technologies. I've been working on TeXworks for a few months now as a kind of spare-time evening project, and my aim is to write as little code as possible and still build something that works, by taking existing components and putting them together. To do the PDF preview it uses the poppler library, which is the open-source way to handle PDF, and it's built with Qt, a powerful open-source application framework that provides a huge amount of standard application functionality. These in turn rely on a bunch of other libraries, but those are the key pieces that make it possible to build something like TeXworks. So what am I aiming to deliver? Or what are we aiming to deliver, I should say, because although I've been the person doing the coding so far, this comes out of discussions between several people over the past couple of years. There will be a simple text editor that behaves the way you expect a GUI text editor to behave on your platform, with the same kinds of windows and font support in the editor and so on, providing just the standard text-editing features, plus syntax colouring for TeX documents, of course; you can see the list, I'm not going to read it all out. Then there is the ability to execute TeX and the various related tools, like BibTeX and so on, to create a PDF version of a TeX document; so a very standard kind of TeX interface in that respect. The third thing, which you don't find in most of the existing TeX environments, is an integrated viewer for the output as well, not a DVI viewer but a PDF viewer, because, remember, I don't want to complicate things for users. Many of us have lived with DVI for years and are comfortable with it, but the whole world out there has no clue what DVI is, whereas everybody knows about PDF: PDF is the standard for formatted pages, formatted documents. So there will be a PDF display that automatically comes up when TeX finishes and automatically refreshes if you rerun TeX, a nice little magnifying glass to let you peer at bits of it, just like TeXShop has and just like users love, and, based on a new technology from Jérôme that we'll hear about later in the week, SyncTeX, the ability to jump back and forth between a point in the typeset document and the corresponding source, in either direction. Those are the key components that make TeXworks; anything else is optional, maybe added at some point, but those are the fundamental requirements I have. There are lots of more advanced power-user features that would be great to have someday, but there's a condition that goes with all of them: they must not complicate the interface for a newcomer; they must not make the environment look more intimidating or more technical than it needs to be.
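A side note on the SyncTeX piece just mentioned: it is normally enabled with a command-line switch when the engine runs, although the engines also expose it as a primitive, so a minimal test document might look roughly like this (a sketch, not TeXworks' own configuration):

```latex
% Typically run as:  pdflatex -synctex=1 demo.tex   (or xelatex -synctex=1)
% which writes demo.synctex(.gz), the file a SyncTeX-aware front end such
% as TeXworks uses to map between source lines and positions in the PDF.
\documentclass{article}
\synctex=1   % the engine-level switch, available in pdfTeX and XeTeX
\begin{document}
Click in the PDF preview near this sentence and a SyncTeX-aware editor
jumps to the corresponding line of the source, and vice versa.
\end{document}
```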
And that's where I feel TeXShop has achieved something in a different way from the other environments that are out there. So let's have a look at what we've got. Well, actually, you're seeing it right now: this is TeXworks presenting a PDF. So there is something that works; let's see it run. There was the source of this document, which is just a regular Beamer presentation, and the PDF view. Let's go back to an earlier page; we can magnify it, and look at that, we can zoom in and out just like you'd expect. Let's see, what did I want to demonstrate particularly? Let's bring up another document. This is the sample2e that everybody in the TeX world, I hope, knows and loves; you've seen it before. Let's scroll down in the source to where it says "this is the second paragraph of the quotation", command-click there, and it will highlight that point in the PDF. Or we can go the other way: skip to the last page, there's an environment for this, command-click there, and the source file jumps to that point. There's a little screen-redraw glitch here right now, sorry about that. Let's jump back to the first page and go to the beginning of the document, and the source navigates right to that line of text. So this is SyncTeX at work. A question at the back? Okay, we can take it right now: yes, you're not going to get a nice jump there, because there is no source to jump to in that case. I imagine we'll hear quite a lot more about what SyncTeX can do from Jérôme; he has a presentation later, and this is his technology, it's just built into TeXworks. The synchronization, I should say, is supported by both the pdfTeX and XeTeX engines in TeX Live 2008, so it should be available for most kinds of document you might want to produce. Let's close sample2e and look at another document. This is the sample document for the polyglossia package, which is an alternative to, or a replacement for, Babel, working with Unicode and XeTeX to support multilingual typesetting. The nice thing about it is that it lets us show that, yes, we have on-the-fly spell checking in the editor. Right now it's set to English, so this English paragraph is okay and just about everything else is flagged as misspelled. But if I change my spelling language to German, the German paragraph here looks a whole lot better; it doesn't like one of my German compounds written as a single word, which sounds perfectly okay, but the dictionary I'm using obviously doesn't know it, so it offers alternatives. We've got some Greek down here, and the spell checker supports Greek as well: if I switch to that, the Greek paragraph suddenly loses all its spell-check warnings. We can also turn spell checking off altogether, which makes more sense, really, in such a mixed-up document. So there is an integrated spell checker, based right now on Hunspell, an open-source spell-checking library. Okay, this is running on the Mac, where I might as well use TeXShop, but the difference with TeXworks, of course, is that it's portable. So we can go over to a Linux machine, launch TeXworks there, open a document, sample2e again, and it works just the same.
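For reference, a stripped-down version of the kind of polyglossia test file used in that spell-checking demo might look like this under XeLaTeX (a sketch, not the package's actual sample document; the font is an assumption, chosen only because it covers both Latin and Greek):

```latex
\documentclass{article}
\usepackage{fontspec}
\setmainfont{FreeSerif}   % assumption: any font with Latin and Greek coverage
\usepackage{polyglossia}
\setmainlanguage{english}
\setotherlanguages{german,greek}
\begin{document}
This paragraph is in English, so an English dictionary accepts it.

\begin{german}
Dieser Absatz ist auf Deutsch; der Editor kann sein Wörterbuch
entsprechend umschalten.
\end{german}

\begin{greek}
Αυτή η παράγραφος είναι στα ελληνικά.
\end{greek}
\end{document}
```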
Let's make a new document; I was going to show templates at some point as well. TeXworks will include a collection of templates for common types of document, so let's do a Beamer presentation, how about an introduction for the next speaker? It's one of the samples that comes with Beamer, and a minimal sketch of such a template appears below. If we bring it up, we get a new document based on that template, which we can typeset immediately; I do have to save it first, then typeset. Beamer does a lot of work, but there we are, there's the PDF result. That's how easy it is to run a document through TeXworks. Ideally it will work out of the box on whatever operating system you're using and need no configuration: if you have a standard TeX distribution like TeX Live installed, TeXworks should just automatically use it and be able to run standard documents. So that was Linux; let's try it on Windows. Here we are on Windows: launch TeXworks and we get a pretty much blank window; there are preferences, so you could choose to have it always come up on first launch with "choose a template" or "open a file" rather than a blank document. Let's open a sample document. This one is actually a book from the Bible, set with some formatting macros I've been working on, and you can see the source document here is very short, because it doesn't contain the actual text: it's just a little framework, a driver that loads the text from another file. But SyncTeX still works with that: if I control-click over in the PDF, it opens the right source file and goes to the right line, and in the same way, if I go somewhere else down the source, I can click there and jump to the corresponding typeset output. So we've got all the same functionality right across multiple platforms. That was Windows XP; just for completeness I should check that it runs on Vista as well. It's just the same, it looks like a Vista program now instead of an XP program, but all the functionality is there and it works in exactly the same way; let's bring up that same sample document, and there it is. The previous speaker mentioned the time it took his program to start up when it was first launched; this is pulling in a lot of libraries, about 20 megabytes of them, to provide all the open-source frameworks being used, but it actually starts up very quickly. Okay, let's get back to the slides. Five minutes? Sounds good. The main other thing I want to do is to invite people to please join in, because so far what you've seen is a prototype that I've been developing, and I can't do everything. There are a lot of ways in which I would love to see people in the TeX community contribute, if you think there's a place for a tool like this. If you want to dive in and write C++ code, please do: it's open source, and the code is up on Google Code now. If you don't do coding, that's fine, there are lots of other things to do. Download it and try it out; although it's definitely not finished at this point, there are a couple of binary packages you can download and try, or you can download the source, build it and try it. There is currently absolutely zero documentation, so volunteers are very welcome; ideally it won't need a great deal of documentation, because it's supposed to be very straightforward and simple, but obviously we still need some.
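For a sense of scale, the template document created in that Beamer demo is only a handful of lines; a minimal sketch in the same spirit (not the actual file shipped with Beamer or TeXworks) might be:

```latex
\documentclass{beamer}
% A bare-bones "introduce the next speaker" presentation.
\title{Introducing the Next Speaker}
\author{Session Chair}
\begin{document}
\begin{frame}
  \titlepage
\end{frame}
\begin{frame}{About the speaker}
  \begin{itemize}
    \item Background and affiliation
    \item Topic of the talk
  \end{itemize}
\end{frame}
\end{document}
```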
Let me quickly show one more feature that's also mentioned here, because one of the things I'd like people to contribute is command completions for the editor. I'll bring up a new window and make the font a little bigger so you can actually see, and move the window over a bit, to here in the middle. To help you type your TeX documents, you want to do things like begin-figure and end-figure, and it gets tedious to type out all these commands, so we have a command-completion facility. I can type backslash-b-f and press Escape, and I get the possible completions associated with "bf". Well, that's not the one I want, neither is that, ah, there we go, \begin{figure}, and I can type my figure contents in there; or I can get a figure with optional arguments, or maybe I wanted something else starting with "bf", so you can cycle through the various options. Let's get back to the one I really wanted and fill it in. You don't have to have a backslash on the front of things like this; it's all coming out of Herb Schultz's work (Herb will recognize this; he wrote a lot of this stuff for TeXShop and I just adopted it from there). I can type "beq", for example, which is an abbreviation for begin-equation, or equation*, or eqnarray. So it's one of those power-user features that is there, but you don't have to be bothered with it if you don't want to. Template documents: I've put a few there so far, but it would be great to have some really well-designed templates for newcomers, showing how to go about writing a LaTeX article, or a book, or a Beamer presentation, or whatever. Design some nice icons: I've got some icons on the toolbars, but some of them I drew myself and they look pretty bad, so anybody with a little design talent is welcome to contribute. Anyone who would like to localize the interface into another language: the infrastructure is in place to put the interface into different languages, but I can't do that myself, and I'd be very happy for people to contribute. Or package it for different distributions: right now I don't have any installers, I don't have any Debian packages, anything like that; there's just the code and a couple of binaries. So there are lots of different ways in which you are all very welcome to get involved, and let's have a great tool to invite more people to come join the party, and maybe move on to the really highly technical tools in the future, but let's not scare them away before they get started. [Audience] Can you still run a document through a LaTeX-to-DVI-to-PostScript toolchain and have the PDF update? Depends on which tools you're using; there are some plans in that direction at least. And one of the things I want to do in general is provide quite a bit of support for different ways of integrating graphics, because that's one of the things that challenges people, PostScript in particular. It would be possible to run a LaTeX, DVI, dvips, Ghostscript, et cetera, toolchain behind this; that's something you could configure right now, but I don't see it as what we should present as the default. The main people who use that are the more technically inclined, or old-timers who already have a big investment in a PostScript-based workflow, and I don't see them as the primary audience we're looking for here, but I'm very open to supporting that kind of thing as well. [Audience] I'm interested in this because I have a very personal experience. My son, whom I only see for two or three weeks in the year, is computer illiterate. He'd got a Macintosh over in Australia, and he said: throw this Word junk out, I need to use LaTeX. What's LaTeX? We downloaded TeXShop, it took three-quarters of an hour to teach him, and since then his teachers have praised the typographic quality of his work.
And he had no intention of going back to Word. No, it takes 45 minutes to teach this. It should, yeah. [Audience] A question on the interface. You still think that the LaTeX source is actually what people want to work with. Have you considered the other kinds of ideas that are floating around? I mean, there's Scientific Word, which actually gives you a WYSIWYG view. And there was, I don't know if people remember it, a program called NoTeX, which was really plain ASCII input only, no backslashes, no markup. Did you think about a very simple input method, like wiki input: three spaces, an empty line, and things like that? Did you consider going further away from the source, or did you find that this is not necessary? I think there's a place for those tools as well, things like Scientific Word or LyX; there's a place for them. The problem that tends to come up is that they end up constraining the kinds of documents you can write to what fits within the model the WYSIWYG editor understands. And the same with the NoTeX kind of approach: I've worked with that kind of input for a couple of projects I ended up typesetting, and it works great within a very narrowly defined domain, but it's not a general-purpose tool. [Audience] So you see yourself in the middle between those? In a way, yes. You should be able to use something like this for anything that can be done with TeX, with no predefined model of what kind of TeX document it has to be. [Chair] Okay, well, lots of questions, but we're running three minutes into the coffee break and we need to start again on time. Those of you who want to stay and talk to each other, come and have a look at what's been done here.
One of the most successful TeX interfaces in recent years has been Dick Koch’s award–winning TeXShop on Mac OS X. I believe a large part of its success has been due to its relative simplicity, which has invited new users to begin working with the system without baffling them with options or cluttering their screen with controls and buttons they don’t understand. Experienced users may prefer environments such as iTeXMac, AUCTeX (or on other platforms, WinEDT, Kile, TeXmaker, or many others), with more advanced editing features and project management, but the simplicity of the TeXShop model has much to recommend it for the new or occasional user.
10.5446/30800 (DOI)
I'm going to start this talk with a detour, so to speak. Some of you may have noticed I was not here Monday morning when I was supposed to give my talk. This is our rental car, after we had patched it up quite nicely compared to what it looked like before; I thank the organisers for rescheduling me so I could give this talk. Fortunately, no one was harmed except our bank account and the car. The galley module: to talk about it, we need to define what a galley is. Yesterday some speakers defined "galley" in a way that's not consistent with the way I would define it. I'm talking about the more traditional metal-type galley, where you have a rectangular area and you put text into it, and figures, whatever you have, from the top, mostly. So galleys have a horizontal restriction but, in most cases, no vertical restriction. The most prominent case of this is the main vertical list, as we all know it in TeX, but there are also restricted ones, like \vboxes or the minipage from LaTeX. Items in a galley are separated by any amount of vertical material: it can be nothing, or it can be a whole goodie bag of tricks consisting of penalties, spaces, specials, writes, marks, all the lovely whatsit nodes any TeX programmer loves. So the galley module is fighting TeX on two fronts: one is manipulating inter-paragraph material, and the other is manipulating paragraph shapes. As for the current problems, I've brought a few examples. If you write this sequence in current LaTeX, you will notice that you do not get three points of space after the \addvspace; you actually get 13 points, because the \vspace confuses LaTeX's internal vertical-space bookkeeping, and you get a bit more than you asked for, which is fine if you're at a restaurant, but not when you're typesetting a document. Next example: this poor unsuspecting user has started the next paragraph with a group, which means that all the clever things the \section command does about page breaking for the next and following paragraphs apply to the second paragraph rather than the first, which is also slightly surprising. An even better example is putting the \section itself inside a group. Less than two weeks ago on comp.text.tex there was someone doing exactly this: he was using a \samepage declaration because he wanted to keep some text together with the heading, and all of a sudden he started getting weird page breaks after the heading. In this case you get an indentation of the very first paragraph (which is fine if you're in France, but otherwise not), the page-breaking parameters do not apply, and the itemize does not get any vertical space before it, though it will get it after. So why does this happen? The problem is TeX, and the way LaTeX uses it. You have some very primitive instructions, skips, penalties and so on, and LaTeX just plasters them onto the page. Unfortunately, TeX is not very good at going backwards, especially not on the main vertical list. If you're inside a \vbox you can do some things, like \unkern in vertical mode, because it's restricted vertical mode, but you cannot do that on the main vertical list, which is where you need it most of the time. And, to add more to the problem, TeX treats some material that is meant to be vertical as horizontal.
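The comp.text.tex case is easy to reproduce; something along these lines (a reconstruction from the description above, not the original poster's code) shows all three symptoms at once:

```latex
\documentclass{article}
\begin{document}
Some introductory text.

% Wrapping the heading in a group, here to limit the scope of \samepage,
% means the settings \section makes (suppressing the next indentation,
% setting the no-break penalties, priming the following \addvspace)
% are undone again when the group closes:
{\samepage\section{A heading}}

This first paragraph comes out indented, the heading's page-breaking
penalties no longer apply, and \dots
\begin{itemize}
  \item \dots\ this itemize gets no extra vertical space above it.
\end{itemize}
\end{document}
```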
I had a nice case at work recently where I had to colour some changes in blue, but unfortunately my colour change altered the output, because it added a whatsit node in vertical mode. So I had sections that started changing pages once I coloured them to indicate that changes had been made; a small reconstructed example of this follows below. [Audience] That's not TeX; isn't that just the way the colours are implemented? Yeah, but... At this point I would love to quote David Carlisle when he presented the color package at a UK TeX meeting: at the bottom of each slide, I've been told, he had the text "It's not my fault". But, well, we have to face the music, we have to do something. The LaTeX kernel as it exists does do some things: it has these complicated macros, \addvspace and \addpenalty, which try to do the right thing, but if anything intervenes the logic is confused, again because things are added to the page when they are seen, not collected. And anyone using colour specials will know: okay, I need to leave vertical mode before I start, because otherwise I might be moving reference points or boxes and all sorts of things; and at the end of the day it often fails anyway. So the proposed solution is, very simply, to prevent TeX from doing all of this: collect all inter-paragraph material, defer it until you're sure no more is coming, then start examining and manipulating it, and only then apply it to the main galley. One of the causes of our problems are the so-called invisible objects, writes, marks, specials, inserts, and we need to keep them under control, because if they appear on their own they might actually enter horizontal mode and change reference points. So we keep them under tight control and move them either up or down: if they appear in vertical mode, they must be attached to horizontal-mode material. Included in that are penalties, vertical spaces, flags for allowing and preventing breaks, and of course the whatsits; again, moved up or down. We have paragraph parameters, like the indentation, which could be an arbitrary object to be typeset at the beginning of the next paragraph, and whatsit notes. We have paragraph justification, which covers all the usual TeX paragraph settings plus a few bells and whistles, such as a horizontal glue specification after a manual line break, and a "beginning parfillskip" to use instead of TeX's \parindent. When you're writing a book, most of these settings will be permanent: we have galley levels, and you have permanent settings for each galley level; we call them the static galley parameter values. In the processing step you might want to change a few of these values, so we make the processing use dynamic values, which are copies of the static values for each level, and then you're free to modify those. Of course, this procedure requires that we tie TeX up well and prevent it from doing anything on its own. So the package makes \vskip, \penalty and all of these no-ops, while \par and \everypar carry around a lot of information. You're not allowed to redefine \par (if you do that, you're in big trouble), but the package provides interfaces for the things you would usually need to redefine it for. One of the examples before showed that using TeX grouping can lead to surprising results, so in this case we do not rely on TeX grouping: we have galley-level groupings that are maintained globally and manually.
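Returning for a moment to the colour anecdote at the start of this passage, the situation is easy to picture with a small example; this is a reconstruction of the kind of document described, not the actual one:

```latex
\documentclass{article}
\usepackage{color}
\begin{document}
% ... enough text to fill most of a page ...

% Issued between paragraphs, \color puts a \special whatsit directly on
% the main vertical list.  That extra, "invisible" node can change which
% break points the page builder sees, so merely marking this heading as
% changed may push it to a different page than in the uncoloured run.
\color{blue}
\section{A revised section}
Revised text \dots
\color{black}
\end{document}
```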
And of course this package is very intrusive, because either you use its interface or whatever you do will not work: if a user types \vskip something, it will just perform an assignment and then do nothing whatsoever. And then come paragraph shapes. As the slide says, people do weird things with paragraph shapes. This is an example from one of the early pages of my thesis from last year, which was meant to dazzle my advisor with complicated paragraph shapes. Of course it makes no sense, but it worked: you can see lists that change the paragraph shape, because they're indented and the lines are not quite as long; you have a dropped capital; you have a cutout; and inside the cutout you have a different funny paragraph shape. My primary example of something that changes the paragraph shape is the breqn package, which during a typical run for an equation, if it's long enough, will change it five times. And of course that causes big problems if something else has already changed the paragraph shape, which, unfortunately, happens quite often; the list environments are the primary example. Using the galley module for this will help enormously, because large parts of breqn right now consist of modifying paragraph shapes, and if we have something that handles that properly, all the better. So, the galley module exists and you can download it. It works reasonably well, using the expl3 language, but it needs to be extended in several areas. It was written at a time when e-TeX was not the default engine, as it is now; e-TeX provides more generalized line-breaking penalties, so you have widow and club penalties that are now arrays rather than single values for one line, and the module needs to be extended to accommodate that. [Audience] And for page breaking? Not yet. [Audience] It also has demerits. But those are not arrays, no. Then, you need a paragraph-shape data structure. I realized when working on breqn that it needs some work there, because TeX does not provide such a data structure; it provides it for the current paragraph, but that's about it. And, as I said before, it's very intrusive, so once you start using it you have to redefine half of LaTeX, most importantly the list environments. Work has been done in that area as well: there are some accompanying modules, not yet fit for release but almost, that do all of these things. And of course the most important task is to talk the LuaTeX team into adding all the data structures we need in LuaTeX, so we don't have to do all this by hand using macros. Questions? [Audience] A very simple question: is your thesis available somewhere, and is it in English? Yes, and I'll tell you where to get it. [Audience] Thanks, I'm very much interested. Yeah, and these are the kinds of developments that were not known when the package was started, but yes, it helps enormously if we can traverse these lists; that's been the problem for breqn. Chris? [Audience] I think it's probably the most important thing, but my feeling is that these things should never get onto the vertical list in the first place, so I don't really want to have to do all the processing on the vertical list. That's the sort of philosophical question: you can do it, but should you do it this way? Yeah, that's the question: whether you want to prevent things from appearing in the first place, or let them appear and then traverse them.
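Since everything discussed here ultimately funnels into TeX's single paragraph-shape primitive, it is worth seeing why composing shapes is painful; a small plain TeX sketch (not galley or breqn code) makes the point:

```tex
% Runnable with plain TeX; e-TeX's \dimexpr is assumed, which every current
% engine provides.  \parshape takes a count n followed by n (indent, length)
% pairs, one per output line; lines beyond n keep the last pair.  A new
% \parshape completely replaces the previous one, so a package such as breqn,
% or a list environment, that wants to adjust the current shape has to know
% what it was and reissue the whole specification; TeX keeps no queryable
% data structure for it beyond the paragraph currently being built.
\parshape 3
  0pt \hsize                      % line 1: full measure
  2em \dimexpr\hsize-4em\relax    % line 2: indented and shorter (a cut-out)
  0pt \hsize                      % lines 3 and up: full measure again
This paragraph is typeset with the shape given above: the second line is
pushed in by two ems on each side, and from the third line onwards the
full measure is used again. Text, text, text, text, text, text, text,
text, text, text, text, text, text, text, text, text, text, text, text.
\par
\bye
```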
[Audience] In a good model of a document, these invisible things should always be firmly attached to a visible thing that really is going to be typeset. Precisely. [Audience] That would be a much better model. Precisely, and that's what galley does right now; that's what we try to emulate, by moving them up or down. [Audience] And that's again not just about the main vertical list, but about the other vertical lists as well. Correct. Yes, anything else? [Audience] I'm delighted that you were able to be with us and that the schedule could be moved around so we could hear this. There's a lot here that's very interesting, and I'm certainly delighted that you and the LuaTeX team are talking with each other, or so it seems. I'm very interested in the statement that you decided what you wanted to do, you figured out how to get TeX to do it, and then you found that to get TeX to do it you need to redefine a lot of the internals of LaTeX. Yeah, but that was to be expected. Why do you find this surprising? [Audience] I'm very pleased to hear that statement from a member of the LaTeX3 team, who are good at this... Yeah, I mean, the recognition that the current state of LaTeX gets in the way is an important recognition. [Audience] But that recognition was made in 1994 or something. 1990. Yeah, that has happened. We have known for a long time that the underlying kernel is built on concepts that take you basically as far as LaTeX 2e has got, and if you want to get further, you have to scrap it; this is known. It's unfortunately the big problem: as Morten said, you can program around a lot of things, but some of those problems get so convoluted if you try to solve them within the current TeX engine that it would really be helpful to move some of the data structures we try to emulate, and the prevention of TeX from doing certain things, into the engine itself; and that's around page breaking and all those kinds of problems. So we have two very important things: the output routine, which of course cuts across everything as well, and the galley structures, which have to be under control for us to do the things we want. And, yeah, it's no surprise that it has to redefine half of current LaTeX, because we're about to redefine LaTeX pretty much one hundred percent for the next version; these are just intermediate steps. [Audience] So I think we agree that there are assumptions built into the current version of LaTeX, and into the use of TeX, limiting us in terms of the things we want to do. Yes, I think we all agree on that. Chris? [Audience] Just to continue that, more pertinently: there are things in TeX, in the TeX model, that get in the way when we try to do even simple things like this. And one thing you didn't mention before, which is almost in the same area but not quite: we only have one main vertical list. Yeah. [Audience] So what do you and I agree on? The extent to which TeX is limiting? Everything. [Audience] Oh, we've known that since 1990 as well, come on. I'm not trying to start that argument. [Audience] No, no, I'm not saying that; I mean the correct argument. Thank you very much. Thank you.
TeX has a well–deserved good reputation for its line breaking algorithm and it has found its way into other software over the years. When it comes to interparagraph material such as penalties, skips and whatsits, things start getting murky as TeX provides little help in this area, especially on the main vertical list where most of the action is. This article describes the galley module which seeks to control line breaking as well as taking care of inter-paragraph material being added at the right time. In other words, galley can assist packages such as breqn which has to construct paragraph shapes on the fly while taking current ones into account as well as ensuring the output routine doesn’t get tricked by penalties, skips and whatsits appearing in places where they could allow breakpoints where none are intended.
10.5446/30801 (DOI)
My presentation is not about technology. It's really about design issues and how they are influenced by technology and how they can sort of throw some issues back to the technology itself. I've been typesetting for a very long time and I was using a lot of sort of poor configurations for dealing with math. As I got further into typeface design, I really maybe want to tackle the issues of what makes a good typeface for math usage. I knew that I had identified lots of problems as a compositor and a designer, but I really wanted to look for some models about what are the issues involved. I was casting about for things to look at for this and I eventually arrived at these sort of three case studies that bring up different kinds of issues, particularly because they're all very tightly related to very specific technologies where they look to these problems for solutions. The solutions that they arrived at have become models for other kinds of design. They really set the stage for what would follow, which made it very interesting to look at what actually led to these conventions that happened. When I'm speaking about the Times Four Line Mathematics series, this is the complete overhaul of the Times New Roman typeface family that the Monotype Corporation produced for working with metal type for its hot metal machine casting system, where they developed a new method of setting maths in metal and used that as an opportunity to completely rework the entire Times family. The AMS Euler fonts you're probably more familiar with, but I wanted to look at a little bit more about the, in this case, the interdependence of the font design and the technologies. In a way, this is a very experimental typeface and I actually think that some of the issues of that experiment could get a little bit more use than they have over the years. Then the Cambria family, which Ulrich talked about, I'm actually very glad he talked about a lot of the technical issues because it frees up some time for me to look at the actual shapes of the letters a bit more. There are a couple of general issues that came up in all of these projects and I think in every attempt to create a typeface that works for mathematics, or even to evaluate and choose typefaces for setting maths, there's the issue of legibility and how text legibility is a very different animal from the legibility of mathematical notation. The problems of combining multiple type styles, multiple scripts, multiple sizes, letter forms and numerical forms versus symbols, all in one context in a way that they're all clear. And then the positioning and spacing issues, whereas text that we read tends to move in a very steady horizontal flow, mathematical notation in a way behaves like a script of its own. It moves vertically and horizontally and to some extent even back and forth a little bit more than you may expect in how we're trained to read normally. The most basic issue of legibility of text versus mathematics is that we read text basically by identifying word shapes. So when you have an issue of a small typo or things that don't quite make sense, the way our brain interprets all that, we just move past it. So in this effort, you know, I have the word mathematics set normally, it's set with an alpha rather than A in the middle in the phrase in parentheses, but we just sort of move past it in the way we process text. Those little kinds of errors can completely change the meaning in mathematical notation if people don't identify those shapes correctly and easily. 
And easier is probably better than saying, well, technically, yeah, it's a different character and you can tell if you look at them side to side, but if those things can be identified more quickly, it's probably a good solution for the problem at hand. There's a sample equation from the specimen for the times series when it was redeveloped, just showing the mix of multiple kinds of things that happen all in one context in mathematics. I mean, it's very easy to see and I'm sure if you compose math, you know that you're pulling together things from different sources all the time and trying to fit them together in a way that minimizes the visual chaos of all that stuff happening in one place. But at the same time, you're also asking people to switch back and forth sometimes very rapidly between reading text and going into the mathematical notation and trying to work it out. So anything that can minimize the issues of that back and forth will help. So the times series, as I said, this is a solution for speeding up composition of math in metal, which for hundreds of years had been done in a very manual way. Type would be precast and probably a very highly skilled composer would be needed to quickly fit all those pieces together. This is very, very expensive. It required a lot of organization within the type shop and it kept a lot of this specialist work out of the hands of printers who didn't have the skilled staff to deal with it. So Monotype, who's a provider of typesetting equipment, wanted to find a way to give more of their customers the ability to take on this kind of work, particularly in the wake of World War II when there was an explosion of technical publications that could really benefit from a better solution to this problem about how to fit the pieces together. So any kind of a change to what would happen in providing typefaces for metal typesetting was a huge industrial undertaking. It involved the casting equipment, the typesetting equipment, the creation of the molds that created the metal type. So Monotype chose Times New Roman, which was one of their best sellers for book work at the time, and invested a huge amount of time and money into getting it to work. The solution was basically a simple one, mechanically, but in so much effort had to go into preparing the typeface for it, they took it as an opportunity to rework the typeface for the math usage. And because of the need of every glyph had to be reworked to work with a new means of setting the type, they really had to redo all of the glyphs that might have been used for math, which in the past had been pulled from many typefaces that someone may have available in the shop. And in the end, by the time they really discontinued the family in its phototype setting age, they had redrawn over 8,000 glyphs, all provided really only at one size, but it gives you some idea of the extent of the project. This is a diagram of how mathematical material would be fitted together to be set in metal type. Every black box is a solid piece of lead that needed to be fit together with everything else. You couldn't have spaces where there was no material. A piece of lead always had to be fit in to hold this mass of type together. And when you have material that's not capable of being set in an orderly row, you have to start packing things in around it. If you're starting to make sizes of type, you have to pack things around to make sure that your 5-point character fits in next to a 10-point character all in the same leading. 
And this is the work that you would need someone to just do by hand, is get all the right pieces, identify all the right characters, pull them from the fonts of the right size, pack them all together. Well, monotype wanted to work out a method that would automate as much of this as humanly possible, so that a more of an average worker in a shop could take care of it. So they worked out this solution called the four-line method, where just about, I think their figure was about 80% of the material that would show up for math, could be set in line by looking at an equation as four rows of material. And the core of this solution was to basically set a full-size letter larger than the piece of metal, what they call the body of a type slug that it's set on. So if you look at the H and the X in this diagram, those are pieces of 10-point type, but instead of being set on a piece of metal that would give you 12 points from baseline to baseline, they were set on a 6-point piece of metal with the letter overhanging. And then that allowed them to set things like superiors and inferiors of first and second order on those half lines cast on their own. So this way, instead of pulling together pieces of type that were set from entirely different sets of type matrices, these could all be loaded up at one time, set all at the same time, with a little bit of handwork to put in oversized characters, any horizontal rules that had to be fit in, and things like that. Like I said, since they had to create different matrices, the molds that would actually cast these pieces of metal type, they had to redraw every one of these characters to go through their full production cycle. And that gave them the opportunity to really adapt the typeface. Some of the things that they did were to just improve the consistency of what would get pulled together for math, as well as increase the legibility when possible. Before the introduction of the four line system, most material, at least in the UK, recommended the use of monotypes in modern series 7. A modern typeface, a bit squarish, a lot of contrast. You can see in the figures in particular shapes that closed up a lot at small sizes. And it was mixed with available Greeks that didn't necessarily fit the same proportions. As you can see in the lines I've outlined in red at the top, the X-heights didn't quite align originally. So when they re-drew the series 569, which was this reworked version of Times New Roman, they just evened out those proportions. They increased the sizes of what was available for the second order, inferiors and superiors, to help make them a little bit more legible, and also switched to the style of figures used in Times New Roman, which didn't close up quite as much at those small sizes. The big difference they made, they made throughout the entire set of italics of the entire family. It was a small change, but it actually made all the difference in fitting these pieces of metal together in a context, where you were often switching back and forth from an italic figure to an upright number or symbol. These are photographs of the pattern drawings that the metal matrices would be cut from. This gives you an idea of what they did. A simple change of changing the slant of that italic four degrees, making it a little bit more upright, made a huge difference. Times New Roman actually had a pretty dramatic slant of 16 degrees, that when you got into a lot of characters, made it very difficult to combine with other symbols. 
You can see in the original drawing of the basic Times New Roman that the pattern of an F and a B were shared. The italic f's were murder: when they redesigned the family for the four-line system, there ended up being about 16 different shapes of just the lowercase italic f, provided to handle all of these situations, and they just kept going and going. They were consulting with a lot of technical presses at the time, who of course brought up exceptions all the time, and rather than trying to standardize and force presses to work in a given way any more than they already did, they tried to remember that these were paying customers, and in a lot of cases the customer was right and wanted the characters they wanted. You can see the basic set they started out with at the release in 1959, including some notes about the various exceptions that were available. But they still stuck to the idea of trying to get authors and publishers to work in a given way that made things easier on the system. The Times series 569 for math still had to be loaded into a hot-metal caster at an entirely separate time from the type used to set the text, which is what this note I've highlighted in yellow is all about. They looked largely the same, but, particularly in the italics, they were not the same at all, and they were definitely set on different-sized bodies, different-sized pieces of lead supporting the letters. So they had to be set separately, and if someone really and truly wanted an equation in line with text, the two had to be cast in metal at different times, switching out equipment on a machine, and fitted together by a person; which is what the whole notion of encouraging publishers to keep text and math separate was really about: it made things faster, cheaper and easier for the compositors. And this was a pretty successful system. When it was introduced in the fifties it swept through the UK, because it opened up the field to so many people, and it became the standard for how this work was approached, even once it was adapted for film, for about thirty years. If you look at books published in the UK around this period, they're all set in Times New Roman at 10 point. There was a similar system used in North America (both of these products were sold all over the world) that was based on the Century typefaces, but it's the same concept of four lines being stacked together to ease the composition. It still had the restriction of working with metal, keeping elements separate, and it was based very much on the idea that the math should look as much like the text as humanly possible, so that people aren't aware of jumping back and forth between these modes of reading and the styles of typeface being used. Well, years later, after all this had become a standard, Donald Knuth had a different idea about how things should be mixed, and how they could be mixed. With the tools he had written and put into play for the creation of typefaces and for composition, it was very, very easy to pull together different typefaces, symbols drawn from different fonts, switching between modes within a text. And aside from the capabilities he had written into TeX to do this, it was a style of reading and presenting information that he was interested in, that he favoured and experimented with.
And this is definitely a movement away from traditional book typography. When Hermann Zapf was approached to develop the typefaces, he and Knuth very much collaborated on the concept of how this would be approached. Zapf did the drawings that the typefaces were eventually based on, but there is a very, very passionate correspondence between Zapf and Knuth about what this new typeface could be, and they didn't feel hindered by the idea of making sure that it had to match what it was going to be used with. They could try a different model and a new approach. What they came up with in the course of this is a very innovative design when compared with traditional book typography. And since it was being designed at the same time that the actual technology was being refined, the design that Zapf came up with, which the American Mathematical Society and Knuth were really very fond of, became a reason to push the technology a little bit further, to really express the design well and make sure that it could be presented the way it was conceived.

So the basic idea that arose after the initial discussions was an upright italic shape of letter: a casual form that reflected the tone of the handwriting a mathematician would use, working with the traditional notion of the letters in mathematical material being set in italic, but eliminating the problem of how to actually fit all the pieces together around a slanted shape. They turned it upright, and it hung onto the characteristics of an italic typeface, with a minimal use of serifs, certain shapes to the terminals, and a bit of a casual shape to the letters. And they took it another step further with this calligraphic idea, but a very casual kind of calligraphy. One of the really interesting notes that was brought back to Zapf, which he really got behind, was this notion that we're not trying to capture the sense of fine, formal, broad-pen calligraphy; we want to capture some of the immediate, casual feel of someone very quickly writing out their notation to express their ideas. So Zapf worked with that, refined it, and we're left with this design, which is very unlike the kind of italics that would have been used in books previously, even though both he and Knuth wanted to start from ideas in book typography. When they really analyzed the problem, they moved away from it in the design, which really opened up this experiment in math looking very different from the way it had before.

And this is just an example, even from an editorial manuscript, where again the authors' and the editors' direct connection to the material very often was through handwriting. The simpler typesetting techniques didn't give you access to all of these forms. So there was already a kind of familiarity with seeing shapes like this, and rather than bury that in the process, they wanted to make it a little bit more visible to the end user. The trick, of course, was taking the subtlety of a typeface design like the one expressed in Zapf's drawings and rendering it properly in fairly primitive digital technology. Primitive is a little bit too misleading a word: technology that was still being defined, still being improved. One of the first things that they realized when the team at Stanford tried digitizing Zapf's drawings was that the basic conceptual model of MetaFont, wrapping stroke styles around a basic skeleton, didn't really capture everything that was in Zapf's design.
If we go back a little bit, these drawings, the image on the right, if you haven't seen any of these before, are the full-size drawings that Zapf delivered for digitization. There's a little bit more going on in those outline shapes than a basic skeleton with a mathematical model of a pen applied. He's opening up some of the areas where strokes come together, to keep ink from building up. There's some subtle modulation in the width of things. The conceptual idea of the pen creating that shape involves a little bit more rotation of the pen than just setting an angle and expressing it along a skeleton. So one of the first things they realized they were going to have to do was, rather than working off the center of these strokes, work with MetaFont in a slightly different way: create the outlines, a structure that wrapped around the edge of the character and could be manipulated that way.

After that phase was done, they still had to get routines that could generate all the bitmaps from those outlines. At the time this was all done, everything was being rendered as bitmaps in the version that would go to press; it wasn't being done on the fly on an imagesetter in the way that PostScript technology would do later on. A lot of time went into fine-tuning the proportions of these characters and the spacing, and analyzing what would happen when they were rendered at different scales, just to get evenness across the stem widths of all of these strokes. That one-pixel difference that's being pointed out in this diagram from David Siegel's book really changes the overall look of the text when you see it full size at anywhere between 5 and 10 points. Those just read as different, unlike shapes because of that single pixel. The same notion of one dot shifting to one side or the other also affected the spacing. This is a spacing test that shows them trying to get the metrics right to produce an even texture. The team at Stanford that was doing this was trying to work out general principles, so that a user could still specify this type at a variety of sizes and get it to work; they were trying to really work out the parameters of what would happen.

So there's this interesting back and forth in this typeface design: not only a visual experiment, a new approach to the typefaces, but also an experiment in how to make this technology grapple with typeface designs that had a little bit more subtlety to them and asked more of the technology. And it was a really nice result that they came out with. You can see this example from the first use of the Euler typeface, Concrete Mathematics, where Knuth worked out a slightly different version of Computer Modern to work with the overall color of it. But it's still working with this idea that you're really, visually, switching modes from text to math, as a very straightforward clue that the material you're reading is changing from one type of thing to another and demands a different style of reading. I'm very fond of this design, and I wish this concept had gotten a little bit more use over the years.
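Just to illustrate the scale of that one-pixel problem, here is a rough calculation of my own (not the Stanford team's actual procedure): at text sizes and the resolutions of the period, a stem's true width falls between just a few device pixels, so rounding one way or the other decides the apparent weight of the letter. The stem width used is an assumed, typical value.

```python
# A rough, illustrative calculation (not the Stanford team's actual code) of
# why a single pixel mattered so much: at small sizes and modest resolutions
# a stem's true width falls between whole device pixels, so rounding one way
# or the other visibly changes the weight of the letter.

def stem_pixels(stem_em: float, size_pt: float, dpi: float) -> float:
    """Width of a stem in device pixels, before rounding."""
    stem_pt = stem_em * size_pt          # stem width in points
    return stem_pt * dpi / 72.0          # 72 points per inch

STEM = 0.08   # stem width as a fraction of the em -- an assumed, typical value

for size in (5, 7, 10):
    for dpi in (300, 600):
        w = stem_pixels(STEM, size, dpi)
        print(f"{size:2d} pt at {dpi} dpi: {w:4.2f} px -> rounds to {round(w)} px")
# At 300 dpi the same design ends up with 2 px stems at one size and 3 px at
# another; that one-pixel jump is the unevenness described above.
```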
I find, looking at material, that I see things like this show up once in a while, and I find it very refreshing, but it's still not very traditional, and a lot of other typefaces that get designed for math back away from this model and go a little bit back towards the Times model: things that are very tightly related, following a more traditional presentation of the characters that get used inside the math. This is just an example of the basic set of characters that was actually designed and packaged within the Euler fonts themselves. It's effectively very limited, just a few alphabets that were designed to work together: the basic Latin; the Greek characters that didn't have immediate equivalents in the Latin character set (the ones that did, or that weren't used, they didn't bother with); and a Fraktur alphabet and a script alphabet that were designed a little bit more simply than their traditional models, in a way that worked better with the other alphabets that were created. All the other technical symbols that were needed could be pulled from other fonts, which TeX made so simple, and which in itself was an incredible change from the previous ways of typesetting mathematics, where everything really had to be fitted together, whether on a film strip or in a matrix case.

So jumping ahead yet again, we have Cambria Math, which is the showcase for mathematical typesetting from Microsoft's ClearType project. I'm very glad that a lot of the technical issues going on behind this have already been covered, so we can look at some of the other things that came up. Cambria was very, very clearly intended to be a replacement for Times New Roman within the suite of Microsoft products, and to that end the basic concept driving it was not to rock the boat too much. If you look at the two typefaces next to one another they're very, very different, but Cambria is, like Times New Roman has really become over the years, particularly in its digital versions, a very evenly textured, fairly neutral typeface. One of the big things Microsoft wanted when they made this switch was not to have anything too experimental. This is for consumer products, everyday users. They wanted to keep it pretty straightforward and not give people any big surprises.

So when they chose Cambria to be the showcase of the mathematical tools they were going to make available, they wanted to make sure that everything worked together very, very smoothly. They designed a huge family that would match the text, because they didn't want people suddenly wondering what would happen when they switched into math mode within Microsoft products. A lovely idea, which is immediately thrown out if someone working in a different typeface suddenly inserts an equation using the equation editor, where it will just, at this point, default to Cambria until more typefaces are available to take advantage of the tools. The basic driving concepts for the look of Cambria were the possibilities and the restrictions of ClearType rendering, a new technology for showing how typefaces would look on screen: this on-the-fly transition from the underlying outline of the typeface design to how the pixels on screen would show it. What ClearType basically made possible was to use every one of the sub-pixels of an on-screen letter, or I should say glyph, to be a little bit more specific, to show the shape of that letter.
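Here is a toy Python sketch of that sub-pixel idea. It illustrates only the concept described here, sampling the glyph at three times the horizontal resolution and letting each sample drive one RGB stripe of an LCD pixel; actual ClearType also applies colour-balancing filters, and the coverage data below are made up.

```python
# A toy illustration of the sub-pixel idea described above -- not Microsoft's
# actual ClearType filtering, which also applies colour-balancing filters.
# The horizontal axis is sampled at three times the pixel resolution, and
# each sample drives one of the red, green and blue stripes of an LCD pixel.

def subpixel_row(coverage):
    """coverage: list of 0..1 values, one per sub-pixel (3 per pixel).
    Returns one (r, g, b) tuple per pixel, dark ink on a white background."""
    assert len(coverage) % 3 == 0
    pixels = []
    for i in range(0, len(coverage), 3):
        r, g, b = (1.0 - c for c in coverage[i:i + 3])   # ink darkens a stripe
        pixels.append((round(r, 2), round(g, 2), round(b, 2)))
    return pixels

# A vertical stem two sub-pixels wide whose edge falls inside a pixel:
row = [0, 0, 0,   0, 1, 1,   0, 0, 0]
print(subpixel_row(row))
# -> [(1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
# The middle pixel is only partly "on", but which stripes carry the ink
# records where the stem edge sits with one-third-of-a-pixel precision.
```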
Rather than just thinking, is this black or white, is it on or off, going across the horizontal dimension of the screen, it said: we can actually switch the red, the green, or the blue channel on or off and get a little bit more definition in this one axis, horizontally. Jelle Bosma, who was the chief designer of the Cambria family, is probably one of the small set of people in the world who really, really gets on-screen hinting. He's worked with it a lot. He did a lot of analysis at the start of this project of what kinds of shapes really respond well to this rendering model, and Cambria is his answer to that problem set: how do we come up with a new typeface design that's not only relatively neutral, but one that really shows off this new technology and what's possible? So he arrived at shapes that are somewhat rectangular in overall feeling, with, as he describes it, curves that move very quickly from the horizontal to the vertical. Squared-off terminals respond very well; they read very sharp and crisp with this rendering model. Very even stem widths and regular spacing also help a lot, because they're more likely to hit like channels in the rendering as it moves across horizontally. And in general, a fair amount of contrast between horizontal strokes and vertical strokes also comes up crisper and much easier to read at small sizes. So he avoided large diagonal gestures wherever possible; you can see that even something like an italic character is pretty squared-off. I think this slide is probably where I've just saved a lot of time, because I don't have to explain quite as much of what's going into all of these. Great. Okay. I can jump through.

So all of this worked its way into the typeface's features, but the typeface still took advantage of all the alternates that were available with OpenType. There's a text mode and a math mode, except they're all wrapped up in the same typeface now, and it's relying on the technology that's setting the math to decide which is necessary. The second you engage the equation editor within a Microsoft product, it's going to take not the glyphs from the text set but the ones from the math set. And you can see how the alternates work. The math alternates were the ones designed by Ross Mills to have a few more visual cues that this is a very different letter from the upright version, or from other alphabets that are getting used within the equations. You can also see in the Greek, where the top is the text version and the bottom is the math version, that it was drawn to be a little bit more upright to ease combination with other elements.

And this is just a bit of a diagram of how the box model was implemented in OpenType, where a lot of the responsibility for getting these features right is actually given to the typeface designer instead of a compositor at this point. This allows the typeface designer to look at the shapes he's putting together and make some decisions up front about how the typesetting technology will fit the pieces together. The OpenType tables actually allow you to define the nature of these cut-ins, which can be quite complex, around the outer contours of the letters. And as the typesetting tool positions something vertically, it analyzes where the cut-ins go to figure out how closely the pieces can fit together, which allows it to work regardless of the shapes that have been drawn for the typeface.
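To show how that kind of cut-in information can drive spacing, here is a small Python sketch in the spirit of the per-corner kerning just described. It mirrors the concept only, not the actual OpenType MATH table format or any real font's data; the step functions, heights, and the sign convention are invented for illustration.

```python
# A sketch in the spirit of the cut-in idea described above: each glyph
# corner carries a small step function -- at this height, the outline sits
# this far in from its bounding box -- and the layout engine uses the two
# facing step functions to decide how tightly a superscript can tuck in.
# Conceptual only; not the OpenType MATH binary format, and all values are
# made up for illustration.

from bisect import bisect_right

def cut_in(steps, height):
    """steps: list of (height, inset) pairs sorted by height.
    Returns the inset that applies at the given height."""
    heights = [h for h, _ in steps]
    i = bisect_right(heights, height) - 1
    return steps[max(i, 0)][1]

def kern_between(base_top_right, script_bottom_left, attach_height):
    """Horizontal adjustment (negative = tuck in) between a base glyph and a
    script attached at attach_height, from the two facing cut-in functions."""
    return -(cut_in(base_top_right, attach_height)
             + cut_in(script_bottom_left, attach_height))

# An italic f-like base that cuts away sharply above mid-height,
# and a superscript digit with a small cut-in at its lower left:
base_f = [(0, 0), (350, 60), (600, 180)]      # (font units), illustrative
sup_2 = [(0, 40), (200, 10)]

for h in (300, 400, 650):
    print(f"script attached at {h}: kern {kern_between(base_f, sup_2, h)} units")
# The higher the script sits against a shape that cuts away, the further it
# can be pulled in -- regardless of what the actual outlines look like.
```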
And I have no doubt that different typeface designs, as someone mentioned, are going to throw up very different issues for how these hooks within the OpenType table work, but I'm really eager to see other typefaces that use it, to see how the solutions are arrived at. A large variety of optical sizes were built into Cambria Math to provide different starting points for any kind of scaling that would be needed. And it's not just a linear progression: you can see that some of the features increase or decrease in size in a way that's deemed appropriate for where they're meant to sit in the vertical positioning. And there are lots and lots of characters, as Alrick mentioned. In all of these alphabets shown at the bottom, every one of these glyphs is a separately encoded Unicode character. There is a separate code point for a sans-serif math glyph, and they're all encoded that way in Cambria Math. So these aren't stylistic changes; these are separately encoded glyphs within the typeface that the math engine will actually pull in as needed.

And thank you. I'm sorry I had to cram so much into so little time, but I hope it made some sense. Thank you.
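As a small sketch of that encoding point, the styled math alphabets live as their own characters in Unicode's Mathematical Alphanumeric Symbols block rather than as styling applied to ordinary letters. The example below maps plain capitals onto the sans-serif math capitals, which start at U+1D5A0; a handful of letters in other styles (for example the italic small h) sit outside the block, so a real implementation has to handle those exceptions.

```python
# A small sketch of the encoding point made above: the styled math alphabets
# are separately encoded characters in Unicode's Mathematical Alphanumeric
# Symbols block, not styling applied to ordinary letters. Sans-serif capitals
# start at U+1D5A0; some letters in other styles (e.g. italic small h, which
# is U+210E) live outside the block, so real code must handle exceptions.

SANS_SERIF_CAPITAL_A = 0x1D5A0

def math_sans_serif(text: str) -> str:
    """Map plain capital letters to their sans-serif math counterparts."""
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(SANS_SERIF_CAPITAL_A + ord(ch) - ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(math_sans_serif("ABC"))                      # 𝖠𝖡𝖢
print([hex(ord(c)) for c in math_sans_serif("A")]) # ['0x1d5a0']
```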
After a brief discussion of some of the typographic and technical requirements of maths composition, three case studies in the development of maths types are presented: Times 4-line Mathematics Series 569, a complement to the Times New Roman text types as set with Monotype equipment; AMS Euler, an experimental design intended to contrast against non-mathematical typefaces set with TeX; and Cambria Math, designed in concert with a new text face to take advantage of new Microsoft solutions for screen display and maths composition. In all three cases, the typefaces were created to show the capabilities of new technological solutions for setting maths. The technical advances inherent in each font are shown to be as central to its function as its visual characteristics. By looking at each typeface and technology in turn, and then comparing and contrasting the issues that are addressed in each case, it becomes apparent that even though certain challenges are overcome with technical advances, the need to consider the specific behaviours of type in a maths setting remains constant.