doi | transcript | abstract
---|---|---
10.5446/31520 (DOI)
|
Alright, so the talk today is on scaling Compose with Fizz. Fizz is a tool that we built internally to kind of help us with communication. So, I'm JP, I'm with Compose. If you don't know who we are, hopefully you will after this talk. We're a fully managed platform for open source databases. We have several offerings today: MongoDB, Elasticsearch, Redis, PostgreSQL, RethinkDB, etcd, RabbitMQ, and Disque. And this is a sponsored talk, obviously, so I did want to throw in a marketing slide for my friend Tom. If you go to compose.io, you can do a 60-day free trial right now. And we also have kind of a campaign going on: if you try out Redis, we'll send you a special edition Redis t-shirt. So, Compose in 2015: we're a company that started in 2010, and by 2015 we had about 20 employees, I think, and we're fully remote. We have a pseudo office in Birmingham where a couple people like to congregate, but for the most part we're a remote team, and we span the United States and several other countries. So around this time last year, we were acquired by IBM, which was my reaction when this happened. But since then, we've continued to grow. The green dots are all of our new employees since the acquisition. And what's cool is we've just continued to spread. We're even more remote now than before, and I think we're actually doing a pretty good job of making it feel like we're one big team all in the same place. Just a quick break: is anyone else in here on a remote team? Or is pretty much everyone in an office? Got one? Two? A couple people remote? All right, cool. That's good enough. So, a breakdown of our organization: it's primarily engineering, then support and ops, and then the other three groups are pretty much all the same size. And that is not a typo: we didn't have salespeople then, and we don't now. So that's always a fun thing. And our organization structure is a little different as well. We're pretty much all empowered to make our own decisions. You sort of choose what you work on, who you work with, and how you go about doing it. And in practice, this is kind of what that looks like; I'll talk about this more as part of the tool. But it's always a fun thing to see. All these lines going between each person represent things they've worked on together. And you don't really have a cool way to see that if you're not all in the same office, where you really get that day-to-day interaction. So we try to do it with the tool. So, a breakdown of the app. Obviously it's a Rails app. The primary store is Postgres, and we use Redis for Slack commands as well as sessions. And we're actually trying out Disque for ActiveJob, so it's sort of the backing store for that. And I kind of have a checklist. If you break it down, one and two are going to be the core of what Fizz is. Then I'll talk a little bit about what's not a project, because we don't do project management, which is always fun. And then four to seven just kind of help you with your day-to-day, giving you insight into what others are seeing and what they're doing. So the first thing is: what's everyone working on? We get this a lot. Even though we have this tool, people still want to know what others are doing. And they want to know what's going on not just within their own group of engineering or marketing; they want to know what's going on around the rest of the company. So the first concept we have is called posts.
And the tool tries to help you remember people's names. Even though there's only 40 of us, it's still not easy to remember all the names associated with each person, so as you're typing out a post, we give you some help there. And here's a couple of example posts; I'll go into more detail about those in a little bit. One I had, and then one Matt had earlier. And they're not all serious, either. The other thing is there are things that happen throughout the day that you would normally share in a work environment, whether you had to deal with something with the wife or something with the house. In this case, Kyle had to deal with IBM, and he felt that was worthy enough of a post. So here are kind of our general rules for posting. Emoji: strongly encouraged. You should mention other people. I shamed Michelle earlier today for not always including other people in her posts. But the idea is you want to be working on stuff with other people, and the only way someone's going to know that is if you mention them in the post. And then we feel like we've made hashtags cool again. Lisa, who's in our marketing and analytics group, loves to use the hashtags. And it's kind of like GitHub-flavored Markdown: most of it's pretty much supported, so you can do lists and code blocks. And we want people posting often. Ideally, you'd post once or twice a day. Maybe when you get going in the morning, talk about who you're working on something with or what you're doing, and then at the end of the day, what you've accomplished. If you go more than like a day or two, except for the weekend, the fizz bot will heckle you in the general channel of Slack. So you will be shamed if you do not post. And the other thing is that we like some interaction in the tool. It's not used that much today, but I think as we continue to grow as a team it's going to be something we use a lot more: threadable posts. So you can comment on the main post, or you can comment on someone's comment. And you can also like a post, which is something we felt was pretty good. And I have an empty slide in here; I don't really know what happened. So that's kind of the general concept of posts. Everything's really built around that. It's simple, it's to the point, and it really highlights what you're doing.
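Fizz isn't open source yet, so as a rough illustration only, here is a minimal sketch of how a post's @mentions and #hashtags might be pulled out of the body text in a Rails model. The Post model, the array columns, and the regexes here are assumptions for the sketch, not Fizz's actual code.

```ruby
# Hypothetical sketch, not Fizz's actual implementation.
# Assumes a posts table with a text `body` column and two Postgres
# array columns, `mentioned_usernames` and `hashtags`.
class Post < ApplicationRecord
  belongs_to :author, class_name: "User"

  MENTION_PATTERN = /@([a-z0-9_]+)/i   # e.g. "@michelle"
  HASHTAG_PATTERN = /#([a-z0-9_]+)/i   # e.g. "#marketing"

  before_save :extract_mentions_and_hashtags

  private

  # Pull @mentions and #hashtags out of the body so they can be
  # aggregated later (profiles, Slack routing, the connections graph).
  def extract_mentions_and_hashtags
    self.mentioned_usernames = body.to_s.scan(MENTION_PATTERN).flatten.uniq
    self.hashtags            = body.to_s.scan(HASHTAG_PATTERN).flatten.map(&:downcase).uniq
  end
end
```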
But the other thing is people often do grunt work, or work behind the scenes that no one ever knows about except for maybe one or two people. So we want the ability to make that known and easily seen. So we have a concept called praises. It's the same post form; you just prepend it with slash praise, and it basically gives credit to that person. A lot of times what I see happen is people are head down working on something and they actually forget to post for a couple of days. But you may notice that, and so a lot of times people take the time to praise that person. We also praise people for going on vacation and actually taking time away, because the heckle bot doesn't really know what vacation is today. That's one of those things we're hoping to get in soon. And you're not supposed to be able to praise yourself, but there's probably a bug in there now; somebody did it earlier today just to test it. So here are just some example praises. These are work related a lot of times, but not always. They have a slightly different styling, and the biggest thing is you want to praise at least one person. And the next thing is a project. So, is anyone a project manager? Thank goodness. Okay. So the idea with project managers is that a lot of times they're kind of the keepers of the communication, right? You're working on a project; they're the ones doing the status updates; they're the ones communicating whatever it is you're working on to other people. We don't have that, because we don't really do typical projects, because we don't have managers. Teams self-organize around a single goal; a lot of times that's a feature release or even a bug fix. And even when you're gathering around a central thing, we don't have leads. There's not someone in charge of the project, because it's not a project. We also don't have due dates, but that doesn't necessarily mean we don't plan for things. What's kind of cool with Fizz is that as you're working on something, a lot of times you can really see when something's almost finished, because people are basically saying, you know, "about to wrap this up." And the biggest reason we don't have roadmaps is that almost every day we just re-evaluate what's the most important thing to be working on right now, whether it's for the customers or for internal support, and we really try to base all of our work off of what's most important right now. But that doesn't mean there's no communication. We have a special tag called noteworthy, and DJ, who's one of our writers, basically takes all the posts that have used that hashtag in the last couple of days and generates an email for everybody. So what we call these things is actually milestones. They're very simple and short and to the point. And you don't really complete a milestone. You can join a milestone at any time, and when you're done doing whatever the work is, you just move on to something else. You don't do more than one thing at a time, so you can't really be part of two milestones. But you can work on another milestone without actually joining it, so it doesn't prevent you from helping others out, and you can reference that in the post with the caret symbol. So if you did a post with caret 32, it would actually be attached to that milestone. And you'll see that the goal is always very simple and straightforward and something that should be accomplished within a week. Very few milestones will span more than a week; I don't know of many that have gone more than two weeks, really. So I've mentioned hashtags a couple of times. What we really use them for is a way to give context to posts. We're able to take a post with one or more hashtags and people, and aggregate that information to give context around a central function or a central group. So we feel like we've made the hashtag a thing that people use in the workplace and don't get shamed for. And we use them a lot. Shocker: marketing loves to use hashtags. No big surprise there. But every group kind of uses them. And what this shows is basically all the users that have used this hashtag within the last several days. And here are some example ones. Lisa is probably the most frequent hashtagger in her posts, so it's always fun to show hers.
And then there are the top hashtags: each person has a profile, and you can go see what they spend the majority of their time on. This is mine, and it's mostly engineering related stuff. We also have it hooked into Slack with certain channels: you can set it up so that if somebody includes a hashtag like #marketing, the post goes to the marketing channel. But every post, no matter what it is, will hit our general channel. And we feel that's important because a lot of times you want to read back through when you're out. We also built some new stuff where you can basically be gone for a week or a day and easily get a feed of everything that has happened. So the last thing, which you kind of saw earlier, is connections. One thing we don't have is really organized groups. We have general engineering, we have general marketing. We don't have product development; even our UI group, we're all just engineering. And without giving context to the work we're all doing, there's no way to see that something I'm doing actually applies to another group. And so that's where connections come in. And I thought it would be kind of fun to show off a little part of the app. We create what we call edges. We use a common table expression in Postgres, so we can do some recursive queries. What we're doing in this one is taking all the mentions and the hashtags on posts and cross-correlating them with one another. Then we group them, and we use that in a select query where we create weights. It's basically saying these two users have done something together with some certain frequency, and it creates a score. And then we throw them in a hash and return the nodes, which are just each individual person, and the edges as well. So here's what an edge would look like: you'd have the ID, the two users, and then the weight. And then the other bit is the nodes, again just each individual person. And it gives us a really fun graph of the company as a whole. To us, this is what a healthy working environment looks like. You don't see a lot of siloed people. You don't see one particular person doing a lot more than others. Basically, you don't want to see any outliers, and we feel like that's what this shows; no matter what the view is, you don't really see any. This one breaks it down so you can see each individual person a lot better, even if the links are hard to see on the slides, I think. But one cool thing is Banks, who's kind of in the middle there. He's been around for, I think, a week, and he's already got connections to six or seven people in the company. So it's a really cool way to see, if you've got someone new, are they getting involved? Are they diving in? Are they working with other people? And we feel like that helps us a lot.
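The query itself isn't shown in the transcript, so here is a hedged sketch of what a connections query along those lines could look like: a CTE over post mentions that pairs up users who appear on the same post, counts how often each pair co-occurs as the edge weight, and returns nodes and edges as a hash. The table and column names (mentions, user_id, post_id, created_at) are assumptions, and the real query reportedly also folds in hashtags and uses recursion; this simplified version only covers the pairing and weighting step.

```ruby
# Hypothetical sketch of a connections query; table and column names
# are assumptions, not Fizz's actual schema.
class ConnectionGraph
  # Pair up users who appear on the same post, then count how often
  # each pair co-occurs over the last 90 days as the edge weight.
  EDGE_SQL = <<~SQL
    WITH pairs AS (
      SELECT a.user_id AS source_id, b.user_id AS target_id
      FROM mentions a
      JOIN mentions b
        ON a.post_id = b.post_id
       AND a.user_id < b.user_id              -- count each pair once
      WHERE a.created_at > now() - interval '90 days'
    )
    SELECT source_id, target_id, COUNT(*) AS weight
    FROM pairs
    GROUP BY source_id, target_id
  SQL

  # Returns { nodes: [...], edges: [...] } for the graph view.
  def self.build
    edges = ActiveRecord::Base.connection.exec_query(EDGE_SQL).map.with_index(1) do |row, id|
      { id: id, source: row["source_id"], target: row["target_id"], weight: row["weight"] }
    end
    nodes = User.select(:id, :name).map { |u| { id: u.id, name: u.name } }
    { nodes: nodes, edges: edges }
  end
end
```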
So the next thing is Slack. It's kind of our lifeline within the company. When you log on in the morning, you hop on Slack and see what's been going on. You may want to read back through, but all of our posts hit general, and these are just some examples of things that can happen throughout the day. But a big thing that people talk about is fear of missing out. If you're on vacation, or if you just want to take the day off and you want to know what's happened and what people have posted about, you can go to the app and scroll back through it. But what I've found is that I spend the majority of my day with Slack right there in front of me; I'm not jumping into other tools a lot. So we have special slash commands: you can get a daily summary or a weekly summary, and it'll just dump in whatever that range is for you, so you can easily read back through it. And since it's a slash command, you can delete it, so it's not going to clutter up your main channel or wherever you ran it. So the last thing is GitHub. This is something I added for engineering, and, almost selfishly, specifically for myself. Because I'll be working on something and I might go a day without posting, and as I'm finally getting down to posting, I'm trying to think back through: what have I done? I know I've worked on more than one repo, or I've spent a lot of time reviewing someone's pull request. These are all activities that we want people to spend time on. Pull requests are vital for us; it's how all code gets deployed. And issues as well. We have a central repository where support will file issues with information from customers about bugs, so that's work we can then correlate from support, and then engineering, or really anyone, will get involved with that GitHub issue. So you want to be able to take all of this activity that's happening in another tool and use it in your posts. And here's an example of what happens in Slack. You can just run slash fizz activity, and it will basically dump out all of your activity for the last week, I believe. And it breaks it down; it groups it within each specific thing with a number. So what we do here is, when you run that command, the trick is you run it and you get some output in Slack, but you need to be able to correlate that back to an actual post. When I was working with one of the designers, originally the numbers were going to correlate with the actual post IDs, and he felt like it made more sense to just give a straight numbered list. So when we generate this, we have a user, and then whatever the number is in the sequence, and we just dump a list into Redis. And when you use it, we can pull it back out and say, all right, they gave us 64 and 65: what post ID does that really correlate to? So when you do a post in Slack, you can just say slash fizz, what you did, and at the end you throw "activity:" and then however many numbers you want on there. And you'll see that those correlate (I don't think I had it in there) to 64 and 65 in the list there. And in the actual post, we tried to mimic what GitHub does: if you go to your user profile on GitHub, it'll show you a little breakdown of everything you've done, and we tried to model it after that and be able to actually link to the activity. So if someone posts something and you see some commits to a project that you're interested in, or one you work on yourself, you can then go and take a look at it.
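As a rough sketch of the mechanism described here (the key names, TTL, and helper class are assumptions, not Fizz's actual code): when /fizz activity runs, the numbered list dumped into Slack can also be written to a Redis list keyed by user, so that when the user later posts with "activity: 64 65", those numbers can be resolved back to the stored activity IDs.

```ruby
# Hypothetical sketch; key names and TTL are assumptions.
require "redis"

class ActivityList
  REDIS = Redis.new            # the app would reuse its shared connection
  TTL   = 7 * 24 * 60 * 60     # keep the numbered list around for a week

  # Called when /fizz activity runs: store the activity IDs in order,
  # so the numbers shown in Slack (1, 2, 3, ...) map back to real records.
  def self.store(user_id, activity_ids)
    key = "fizz:activity:#{user_id}"
    REDIS.multi do |r|
      r.del(key)
      r.rpush(key, activity_ids)
      r.expire(key, TTL)
    end
  end

  # Called when a post arrives with "activity: 64 65": turn the
  # user-facing numbers back into the stored activity IDs.
  def self.resolve(user_id, numbers)
    key = "fizz:activity:#{user_id}"
    numbers.map { |n| REDIS.lindex(key, n.to_i - 1) }.compact
  end
end
```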
So the other things here are really items we're hoping to add soon. We use HelpScout for our ticketing system, and Trello, which our ops team likes to use. And there are lots of other things that we use; we don't really make people use only one tool. So the goal is for us to open source this pretty soon. When we first started working on it, we were pretty lazy and hard-coded things that really should be environment variables, like our Slack token or our Slack channels. So we're trying to clean up the code base a little bit to get it out there, because we think it might be of interest to other people. And it does use more than one database, but we're a company who can help you with that. So that's all I've got. Does anyone have any questions or comments? All right. Well, thanks for coming. I'll be around afterwards, and we've got a booth if you want to come by and talk databases or talk Fizz. Okay. Thank you.
|
Compose is committed to making remote work work. Our biggest hurdle is communication and teamwork. When we joined forces with IBM, we added a new issue - how to scale. So, our devs built an app we’re open-sourcing called Fizz. Built on Rails, Fizz helps us empower our team to do great work, feel like family, and operate happily and efficiently as an international, remote, self-managing organization. We work transparently, commit to open-source, wear sweatpants, and genuinely enjoy each other and we’re committed to keeping it that way. We harnessed the power of Rails to make that happen.
|
10.5446/31522 (DOI)
|
I don't have any time to waste, so let's go ahead and get started. First off, a disclaimer, this talk is not about skyscrapers. So, I know you're probably thinking like, but the very first slide, it said how to build a skyscraper, but I promise you this talk is not about skyscrapers. And it's really important that we remember this as we go through the talk. So for those of you who are on time, you're going to be like really confusing the other people that come in late because we have an exercise to do here. Any time you see this slide, I'd like you to read it out loud. We're going to try that right now. This talk is not about skyscrapers. All right. But when I first started researching for this talk, I did find it really interesting when I started to read the descriptions of the considerations that you have whenever you do skyscraper design and construction. I think it's interesting anyway. So, first skyscraper we're going to talk about doesn't even technically qualify as a skyscraper. It's the Equitable Life Building built in 1870. But to be fair, skyscraper is also a term that we've used for very tall horses, very tall men, and even very tall hats. So I think we can probably give a seven-story, 130-foot tall building a pass. Now, the Equitable Life Building was the tallest in the world from 1870 to 1884. And it was the headquarters of the Equitable Life Assurance Society of the United States, but that's a mouthful, so I'm just going to call them equitable. Now, they were a life assurance society. They were a life insurance company is what that really is. So them being a life insurance company, they were experts at assessing risk. Now, they had determined that their building was fireproof. We'll come back to that a little bit later. So its basement housed safes and vaults that were filled with several billions, and I do mean billions in 1870s-era money of securities, stocks, and bonds. So put simply, this Equitable Building was the center of most of the wealth of New York and in the New York Financial District specifically. And it really showed us this building is gorgeous. And tenants in their building included bankers and lawyers, and it even had an exclusive Lawyers Club, which is what you see here. And really, it only had one problem. Can you spot it? It had stairs. And a lawyer on the seventh floor of the building was not going to have very many clients if they had to climb up six flights to get to him. So thankfully, a solution to this problem did exist. A guy by the name of Elisha Otis was a tinkerer. He and his sons, actually. And at age 40 in 1851, he was managing the process of converting an abandoned sawmill into a bed frame factory. Now, while cleaning up, he had a reason that he needed to get all of his debris up to the upper floors of this factory. And hoists and elevators existed, but they had one really important flaw, which was that if the rope broke, then anything that was on this hoist was likely broken or dead. Kind of an issue. So he and his sons designed what they called a safety hoist, and it wouldn't fall to the ground if the rope broke. And he didn't think too much of it. He didn't patent it. He didn't try to sell it. And he didn't even ask for a bonus for designing it. But three years later, the bed frame business was declining, and he was looking to try something new. So he formed a company to sell his elevators, and he got no business for several months. Now, the neat thing about these elevators is these teeth, right, on the side of the elevators. 
Whenever the rope would break, the spring would release its tension, and these pegs would shoot out into these teeth to stop the elevator from falling. So again, no business for several months. And then came the 1854 New York World's Fair. Now, he had a great opportunity to demonstrate the elevator in a really dramatic way, and he was a bit of a showman. So he gets up on one of these hoists, and he has an assistant cut the rope, and he's fine. Now, it's kind of like NASCAR: everybody's waiting to see the disaster that's going to happen. But everything's fine. And I'd like to point out, too, that this is a charcoal drawing, but there's a photo bomb in it. I'm not sure what that's all about. So these elevators weren't perfect. They ran on steam engines back in that day, and that meant somebody had to keep them constantly fueled. But even though it would be a while before they were updated to run on electricity, it was a big deal. You've got to think about Equitable here. It used to be that when you had an office building, because people didn't want to climb stairs, if you owned the building, you made the most money on that investment by renting out the lower floors. So a company would lease the space in the lower floors, and then make all of its employees go up and climb and end up sweaty and a mess whenever they got up there. Speaking of which, how were those showers this morning, huh? So now there's a safe way to travel easily to and from these higher floors. And the highest floors also happen to have the perks of being the most well-lit, the most well-ventilated, and the furthest away from road noise. So this literally turned the value proposition for buildings upside down on its head. And all of this was the result of something Elisha Otis didn't even think was that big a deal. I'm just glad he shared it. But anyway, we were talking about the Equitable Building, you know, the one that was fireproof and had billions of dollars in its basement. This is the Café Savarin. It was a really fancy café in the Equitable Building. Now picture that it's January 9th, 1912, and it's just after 5 a.m. The wind is howling with gusts over 68 miles per hour, and it's making the below-freezing temperatures even cooler. And Phillip O'Brien, who was the timekeeper at the Café Savarin at the time, had started his day by lighting the gas in his small office. And he distractedly throws the still-lit match into the garbage can. By 5:18 a.m., the office is engulfed in flames. And the flames spread to the elevators and the dumbwaiter system and quickly engulfed the entire building. The fire department arrived, but as you can see here, it was so cold outside that as they're spraying the building down with water, it's freezing on the building. They literally can't put the fire out because it's turning to ice before it gets to the fire. So the building was completely ruined. And so it was that the building built as fireproof was lost in a fire. And history buffs out there might also remember that 1912 was the year that an unsinkable ship struck an iceberg and sank as well. You know, you'd think two disasters in one year would be enough to teach us that maybe we shouldn't be making these grandiose statements anymore. But again... This talk is not about skyscrapers. The next skyscraper we're going to talk about is the Home Insurance Building. It was built in 1885. The architect was William LeBaron Jenney.
And the story goes that Jenney left work unusually early one day, and his wife thought perhaps he was sick. She rushed to meet him at the door, and she took this heavy book that she had and set it on a bird cage. And inspiration struck Jenney. He said, "If so frail a frame of wire would sustain so great a weight without yielding, would not a cage of iron or steel serve as a frame for a building?" I'm not quite sure why it kind of sort of rhymed. That's very poetic if he really did say that. But we're going to go with it. So the Home Insurance Building is considered the father of the skyscraper by most, and it was the tallest in the world until 1889. It was built from cast iron columns and rolled iron beams for the framework up to the sixth floor, and from that floor up, it was steel beams. Now, the majority of the masonry that was used was actually hung from the framework like a curtain. So in construction like this, the masonry was there to look pretty, to keep the weather out, to keep the people in, that sort of thing. But the heavy lifting was done by the framework. And this made the building drastically lighter, to the tune of about one third the weight of a typical load-bearing masonry building. So something as simple as a bird cage led to an idea that was going to revolutionize everything about how we went on to build tall buildings from this point forward. But you may have noticed I said that the majority of the masonry wasn't load-bearing. And since there was still some load-bearing masonry in the building, it left things open to debate. And so the end result was that if you were from New York, you said, well, the Home Insurance Building really isn't the first skyscraper. But if you were from Chicago, you certainly did think that this building was the first skyscraper. But the interesting thing about this is, here are these people in Chicago, and they built this awesome building upon an iron and steel framework. It's clearly a technical accomplishment, and more importantly, it's serving the needs of their occupants. But it was so easy for people to come along after the fact and sort of debate, well, you know, it's not really that impressive. So this is Leroy Buffington. He doesn't look very happy, does he? Maybe that's because he claimed he had the same idea for this framework sort of design in 1881, except he didn't build it. He did, however, apply for a patent for it in November of 1887, and it was granted in May of 1888. Now by this point, the technique is already in wide use. But still, Buffington started a company he called the Iron Building Company for the express purpose of pursuing lawsuits. Now, this is a flax mill that used iron framing. It was built in 1797. That sort of sounds like prior art to me. But that didn't really stop Buffington from trying to extract money from anyone who was going to pay. But again. I like that. You know, for post-lunch, this is impressive. I love you people. So the next building we're going to talk about is the Monadnock Building, built in 1891 in Chicago, Illinois. So there were these two wealthy brothers. I could only find a picture of one of them, Peter Brooks. But Peter and Shepherd Brooks believed Chicago was going to be America's largest city. And you can tell Peter was rich because they don't do oil paintings of people that aren't rich, I find. They hired this guy on the right, Owen F. Aldis, to be their property manager. And Peter only ever visited Chicago one time.
The brothers relied on Aldis to do all of the heavy lifting and figure out what they were going to do. So Aldis recommended that they retain Daniel Burnham and John Root of the very imaginatively named Burnham and Root to design this building. Now, Burnham was a pragmatic businessman, but Root was a bit of an artist. He had a flair for the artistic. This is a sketch from 1885 that was drawn by Root. At this time, the building was planned to be 13 stories, and it had this sort of Egyptian-inspired ornamentation that you can see up here. Now, Peter Brooks was known not only for being very wealthy, but also for being very stingy. And he preferred simplicity. In fact, he insisted that the artists refrain from any kind of elaborate ornamentation. He said he didn't want anything to protrude at all, because it was just going to create a place for pigeons to nest. He really had a problem with pigeons. So when Root goes on vacation, Burnham, the business guy, has just a draftsman create a simpler drawing. You might imagine that when Root came back from vacation he wasn't terribly pleased. Here was this artistic work that he had done being gutted, essentially. He did eventually decide to throw himself into the design, though, and he found a way to get invested. He declared that the heavy lines of the Egyptian pyramids had captured his imagination and that he would throw the thing up without a single ornament. So by embracing this constraint that Brooks had provided instead of fighting it, he was able to find a way to remain invested and passionate about his work. So this is the sketch four years later in 1889. You can see Root really can't quite give up entirely on a little bit of ornamentation, but he has these little protrusions that stick out along the way, these little bumps that you can see. But Aldis was able to sell Brooks on the idea because these protruding windows would increase the square footage they could rent. In fact, the height of the building was calculated by determining how high they could actually get away with building this thing while still having enough room to rent, because this was load-bearing masonry. By the time you got down to the bottom of the walls, these walls were six feet thick. So imagine you keep going higher: you lose rentable space, and you can run it through an equation and figure out how to optimize. Chicago also had soft soils, so they had to devise a special raft system that kind of floats the building on top of the soil. So this is the finished product. It's 215 feet tall, 17 stories. It was the tallest of any commercial structure in the world at the time. Now, they knew that the building was going to settle. They designed it to settle eight inches, but by 1905 it had settled that much and quite a bit more, and they ended up having to reconstruct the first floor. By 1948 it had settled 20 inches, and so they actually had to put in a step down. So to get into the building now, you step down to go in, because it's gradually sinking. And guess what? It was found to be sinking again in 1967. They forgot it already was, I guess. So profitability is a really important factor to consider, but it can't be the only thing that you consider while you're building your building. Again, the next skyscraper we're going to talk about is the Fuller Flatiron Building, built in 1902. So during the construction of the Monadnock, John Root passed away. Daniel Burnham was still in business.
He had D.H. Burnham and Company, and he partnered with a guy by the name of Frederick Dinkelberg to design the Fuller Flatiron Building. Now, the Fuller Flatiron was originally supposed to be called just the Fuller Building, after the recently deceased George A. Fuller. He was kind of a big deal in the architecture community. But locals called it the Flatiron. Now, I assumed, oh, it's because it's made out of iron or something to that effect. But as it turns out, it was much simpler than that: the building looked like a flatiron. And at the tip it was only six and a half feet wide. The shape of the plot of land they had, this triangular plot, necessitated a different kind of shape for the building. And if the Monadnock required walls that were six feet wide at the base, you might imagine that's not going to work when you're six and a half feet wide for the entire building at the tip. And that was only 16 stories tall; this is an even taller building. So at this point, that's obviously not the case. You can see that these are not six-foot-wide walls. Since it was better to have an oddly shaped building than half a building, Burnham and Dinkelberg adapted their approach so that they could fit the space they were given. And this meant choosing some new materials. In this case, it was all steel; it was not masonry at all. So the Flatiron was built on an all-steel frame. Now, if you look at these photos, you might not be terribly surprised to hear that the locals were calling this building Burnham's Folly. In fact, they were actually taking bets on how far the building's debris was going to blow when the building toppled over during the windstorms that would hit. But, you know, there was an engineer. His name was Corydon Purdy. He was involved in this project, and he had designed bracing that had already been tested to withstand four times the wind this building was ever going to encounter. And so after this building went up, during the first windstorm that hit very shortly after, with 60 mile per hour winds, the tenants were saying they couldn't feel the slightest vibration in the building. Not only that, one even said that the filament in his light bulb didn't even quiver during the windstorm. And this didn't surprise the engineers one bit. They had run their tests. They knew what was going on. But it really blew everybody else away. But again. So this is a twofer. We're going to talk about two skyscrapers at once. They both went up in New York, and we're going to talk about 40 Wall Street and the Chrysler Building. Now, H. Craig Severance and William Van Alen were formerly partners at another architecture firm, and they were very different personalities. Van Alen was, again, an artist. He was the type of guy that liked to hang out with other architects and discuss the finer points of design, and Root was very much like him. There's this pattern you see of these kinds of pairings of a business person and an artist. But Severance, on the other hand, spent his time with the business folks, and he was drumming up sales. You might be able to tell here, humility wasn't exactly his strong suit. And he didn't really have a particular passion for architecture as art.
But still, whenever the trade magazines would all refer to Van Alen as this great designer, this very impressive person, and they didn't really mention Severance at all for the buildings they designed together, he took it personally. And their partnership, as you might imagine, ended badly. And then, to make things worse, they found themselves in competition with one another. Severance had been commissioned to design 40 Wall Street, but Van Alen was commissioned to design the Chrysler Building at the same time. Now, you're probably already familiar with the Chrysler Building, but I talk to people regularly that don't know what I'm talking about when I say 40 Wall Street, so maybe this will actually help: we call this the Trump Building today. Back then it was known as the Bank of Manhattan Trust Building. So Severance had assembled a bit of a dream team. It consisted of his associate Yasuo Matsui and consulting architects Shreve and Lamb to design 40 Wall Street. Now, Walter Chrysler had Van Alen design the Chrysler Building for his car company, but he paid for it all himself because he wanted to leave the building to his children one day. And he was obsessed with every single detail of this building; he later referred to it as "a monument to me." So the Chrysler Building was announced a month earlier than the building Severance was working on, so you might not be terribly surprised that 40 Wall was then announced as a bit higher, right? So in October of 1929, Severance is visiting the site of his construction, and his building is just about to catch up with the Chrysler Building, and he's feeling pretty good about things, because the Chrysler Building is slowing down now. They're putting on these domes at the top that you would probably recognize, and they can't go much higher. Now, Chrysler was already in the process of drumming up press for his building, so he was announcing that the steelwork was complete, which would have made the Chrysler Building the tallest one in the world at the time, at a revised height of 850 feet. But Severance wasn't really worried. He had already quietly put in motion plans to build higher than announced. So the month was filled with all sorts of announcements from other builders, and everyone was claiming that they would build something larger. In fact, there were people saying, well, there's nothing really stopping us from building a building that's two miles tall. So Van Alen was silent. He, Chrysler, and very few other people knew that they were going to build a lot higher than anyone was expecting. So in the third week of October, Severance hears about the sighting of a 60-foot flagpole at the top of the Chrysler Building, and so he raises his plans again. And this was enough, when they leaked the information to the press, to declare that the Bank of Manhattan Trust Building was in fact going to be the one that would top out the tallest. And then it just made sense, right, because the Chrysler Building couldn't go much higher. And so they knew that the Bank of Manhattan Trust Building was going to be 925 feet and the Chrysler Building would be 905 feet, and this was all including that flagpole. Only the flagpole wasn't a flagpole. The flagpole was just one part of a five-part, 185-foot, 27-ton steel spire that Van Alen named the vertex. And he had it built offsite in these five pieces, and then he shipped each part separately to the building.
And then they hoisted them into the dome's fire tower on the 65th floor. They partially assembled them, hoisted the base up, and riveted all the rest of the pieces in place in about 90 minutes. So Van Alen and Chrysler go to bed that evening knowing they have the tallest building in the world. But the best part is nobody else actually noticed, because from the ground, this stuff kind of just looked like a really tall crane or something attached to the building. And so they just kept quiet, because, you know, Severance can keep on going if he wants, so let's just keep it quiet for a little while. And so when 40 Wall Street tops out in November, the New York World runs with this headline, and they aren't talking about the Chrysler Building; they're talking about 40 Wall. And four days later, this kind of uninteresting trade magazine called the Daily Building Report from the Dow Service, which normally runs things like the costs of building materials all around the country so you can optimize for that kind of stuff, breaks this dramatic news that the Chrysler Building is over 238 feet taller than anybody really knew they were building. And so after all was said and done, the Chrysler Building was towering over 40 Wall Street by over 100 feet, and it became the tallest man-made structure that was ever built. This beat out even the Eiffel Tower, which had been the tallest man-made structure up to this point. But both of these buildings cost a fortune. I mean, you've got 13 million on one side, 14 million on the other. Think for a moment about how much extra expense was incurred on these buildings just because they were trying to win against a rival. To make things worse for the winner, Chrysler refused to pay Van Alen his 6% design fee after they finished the work. That's $840,000 that he stiffed this architect for, because the architect hadn't quite been bright enough to enter into a legally binding contract when he received the commission to build the Chrysler Building. And Chrysler would have paid anything up until the point this building was completed and he had won the title of reaching this height. But after that, it just didn't really seem like it was worth it. And Van Alen had to sue him to get paid, which ended up making him a bit of a cautionary tale to other architects. In fact, no major studies have really been devoted to Van Alen's work, and he's little known in the history of architecture. On his death, the New York Times didn't even publish his obit. Again. So another neat skyscraper is the Empire State Building. Again, we're talking about New York here. Back in August of 1929, during the construction of the two buildings we just discussed, rumors started circulating that a new developer was going to take ownership of the site of the Waldorf-Astoria Hotel. Now, Al Smith was the former New York governor that had run against Herbert Hoover for the presidency. And he had invited John Raskob to chair the Democratic National Committee after Raskob had been running his campaign. Well, Raskob was VP of finance for GM until 1928, when he got ousted by a guy by the name of Alfred Sloan, who was a supporter of Hoover and claimed there was a conflict of interest here. So that was the end of Raskob at the company. Well, Raskob's like, okay, fine, whatever. He sells his GM stock. He wants to finance a building. He creates the Empire State Company, and he hires Al Smith to be the president of it. So Al Smith was a politician, right?
He has a flair for the dramatic. This is how he announced that he was going to be the president of the company and that he was going to build this building. And of course, he announced that it was going to be an 80-story skyscraper, the tallest in the world. But again, this is around the same time that everybody else is making these grand claims of two-mile-tall buildings, so nobody's paying attention. So, speaking of those men: remember Shreve and Lamb, the consulting architects that were brought on to work on 40 Wall Street? During the same time, they teamed up with another guy by the name of Arthur Loomis Harmon. And by October 2nd, 1929, they were already showing scale models of this new building, the Empire State Building, to Raskob. And I think that's really interesting, because these other buildings hadn't even been finished, so they sort of had some insider information about what was going on. Now Lamb, again, was an artist, very much like Van Alen and John Root before him. And in his partnership with Shreve, Shreve was very much the business guy. The thing about Lamb, though, is that he was also pragmatic enough to know that there were certain concessions he was going to have to make, even though he had a flair for the artistic. And he had a tight deadline, and that was going to be the primary constraint he had to deal with. Now, initial drawings for this building were created within two weeks, and a final design was reached in four. That is really fast. And one of the things they did that was really interesting: instead of designing from the bottom up, they designed from the top down. They set a standard for light in the interiors, and this was the thing that they said they weren't going to compromise on. They wanted to place a standard on how pleasant it would be to work in the spaces they were building. And Lamb had his priorities straight. He understood that certain things had to be constants, and everything else was going to have to shift around those things. He wasn't willing to sacrifice lighting, ventilation, or anything else that was going to make the property valuable and appreciated by those people who mattered. Well, who matters? The present and the future occupants of your building. There are people like her, there are people like him, there are people like this guy, because if the building can't be maintained, it's not going to be very good for very long. Occupants come in all shapes and sizes. But one thing is for sure: just because someone is big, strong, and loud, and they want to use your building to make themselves bigger and stronger and louder, that doesn't mean that you should put their needs above the greater good. So one reason the building's designs took shape so quickly was that they were able to use parts of designs that had been done before. This is the Reynolds Building in Winston-Salem, North Carolina. It was designed by Shreve and Lamb previously. And this is Carew Tower in Cincinnati, Ohio, designed by another firm. If you look at the scale models side by side, you can see that there were aspects of the design that were sort of swiped from both of these pre-existing designs. And it's great to be able to reuse previous work, because it can make your work go much faster. So fast forward to November of 1929. Al Smith has just announced that they've also bought the land adjoining the Waldorf-Astoria, which of course the news people are now recognizing means they're building higher.
Now, Shreve, Lamb, and Harmon all wanted to keep the height down to be practical, because the higher you went, eventually you even had to have people get out and change elevators, right? They can't just ride one elevator up to the top. They have to get out, walk around, and change elevators. That's fine and good, but Raskob's the guy that's paying for this, and he wants to add more height to the Empire State. And so the next day, Al Smith, in his typical fashion, announces that they're going to add five more floors. He announces the new total: it's going to be 85 stories and 1,100 feet. That's an overestimate by about 50 feet, but you know, he's not the tech guy. It's okay. All right? So I love this, though. I love that this is what the actual architects are saying: we want to do sound development of usable space. So John Raskob is sitting in his office, and he's looking at this scale model that they've provided. And like every client ever in the history of ever, he decides he knows exactly how to solve this problem. And what he reportedly said at the time is that this building needs a hat. Now, he didn't mean a literal hat. He meant a mooring post for Zeppelins to be able to dock above the streets of New York City and let passengers off at the top of the Empire State so they could then get down the building. And it was going to be so much better than the Chrysler Building's spire, because this had a good purpose. And it was going to need another 200 feet. And this was going to let them stay true to Shreve's promise that they were going to do the sound design of usable space. But you'll notice how Al Smith just happens to mention the final height of the building in his announcement about this new development. Now, never mind the feasibility of docking a Zeppelin at this height above New York City, or what was going to happen when the Zeppelin got caught by some wind gusts and needed to maintain an even keel by dumping several hundred gallons of water on the people below. I want to remind you, if you don't remember this from physics, water weighs about eight pounds a gallon. So we're talking about well over a ton of water being dumped on citizens below the building. But Raskob had to build the tallest building. None of this mattered. Now, this plan was going to add $750,000 to the cost of the building. But because it had marketing appeal, and because everybody was so enamored with flight in this era, the architects really had no say in it at all. Raskob and Smith were determined it was going to happen. This frustrated Shreve; he wanted things to be practical. But in the end, they still had to go with it. Now, with the designs completed, it was time to start building. And the interesting thing about building is, I don't care if you've designed from the top down, you definitely need to build from the bottom up. When you build, everything that you build has to sit on top of something else. Now, your definition of bottom might change depending on what you're building on top of. But the only way to make sure that the structure is going to be sound is that it's sitting on something else that's already sound. But it's important to be honest with yourself. If you've built an entire ecosystem, and your own building as well, on top of another person's building, while considering that person's building the bottom of yours, you can't really complain when the bottom gets yanked out from under you.
By the way, during the act of building the Empire State Building, the real heroes are the steelworkers that were putting in work. The ways in which they had to do their work were extremely dangerous, extremely stressful. They were always operating in their tight schedules. And they didn't even always have time to put in proper safety nets. And sometimes even the supports that they could build didn't seem terribly fit for purpose. Now sure they got to have lunch, but they sometimes had to have the lunch in the office, as it were. And construction of the Empire State Building, it started on March 17, 1930, and it went for 14 months. This building was rising at a rate of four and a half stories per week, which was a record speed. So 14 months after construction began, building opens. And it was going to have the world record for tallest skyscraper for the next 40 years. So notice how short-lived Van Allen's record was after all the work and the effort that they had put into it. Now they completed this monumental feat with only five deaths. And five deaths on record, when you look at the conditions these people were working in, seems pretty low, like really. But even one life lost is too many. Next one we're going to talk about is the United Nations headquarters. Again, we're in New York. Now the interesting thing about this building is, compare this building to the height of the previous building, and this building is actually like half the height. And yet it took longer to build. It was constructed from 1948 to 1952. That gives you some idea how quickly things moved on the Empire State. Now the big thing about this building is it's all windows. They had decided they wanted lots of light, and so everything had to be sealed windows. But you know what else is built with lots of sealed windows? A greenhouse, right? So the problem is that with light comes heat. And if you want the light but you don't want the heat, you have to figure something out. Because it really doesn't matter if you're building is super pretty if nobody can actually stand to be in it. So the solution to this had started earlier, and it started in response to a problem that was encountered by a printing company in Brooklyn. Now the printing company was actually having a problem with their paper getting wrinkled by humidity. And so then when they would ink the paper, there would be wrinkles and they would cause the ink to come out and misalign. So a fellow had already come up with a solution for this. His name was Willis Carrier. He was an engineer that had worked to basically come up with a way to remove the humidity from the air. It happened to have the side effect of also cooling the air. And it worked by blowing air over a set of coils filled with coolant. So he called it the apparatus for treating air, but we later came to call it air conditioning. So the first space to use a similar kind of technique to cool for human comfort was actually the New York Stock Exchange. And the guy that had designed that system was Alfred Wolf. The thing about the system that was used there was first off it was very expensive and it was also very heavy. This device actually weighed 300 tons. So in 1922 Carrier had improved on his original design. He had added a centrifugal chiller. And what this meant was that it was simpler, it was smaller, and it was most importantly way more cost effective. So without this, a building like the UNHQ wasn't going to be able to exist. 
And this is really important because, yes, the technology existed before, but there's a big difference between it existing and being accessible. But again. Now we're going to talk about the Willis Tower, or as we may have known it at one point, the Sears Tower, in Chicago. Fazlur Rahman Khan was the architect for this building. He was actually a structural engineer tasked with building an office complex for the Sears, Roebuck and Company, and they wanted to house all Chicago employees in one building. So this was going to have to be a very tall building. Now, Chicago is known as the Windy City, and it's not really known as the Windy City because of the gusts off Lake Michigan, but the gusts from Lake Michigan can batter the city with winds of over 55 miles per hour. Now, the taller a steel skeleton building gets, the more susceptible it is to bending in high winds. And this creates a swaying motion that gives you a sensation not unlike seasickness. You can get seasick at the top of a very tall building. So Khan had developed something he called a tube structural system, which doesn't look much like a tube, but in theory it really was. He took the skeleton we were used to, the steel skeleton, and turned it into an exoskeleton, pushing everything out to the edges. And not only did this give you better resistance against wind, but it also reduced the building's weight even further, and it opened up more use of the floor. You could have these large, open floors; open office floor plans, for instance, we all love those. And the thing about this is, unlike this lobster, whose exoskeleton is not winning many beauty contests, Khan's exoskeletons opened up new avenues for design that frankly turned buildings into art. Evolutions of this design became very, very impressive over time. And the important thing here is that it wouldn't have been possible to build this high if we hadn't built up a thick shell to guard against the wind. Now, the Sears Tower was built using Khan's bundled tube structure. It's exactly what it sounds like: the same kind of tube construction, but a big bundle of them. This was nine separate buildings of various heights, using the same construction, bundled together. The end result was that even with wind speeds of over 55 miles per hour, the top of the Sears Tower only sways six inches. So it's interesting how multiple small structures working together can be more resilient than a single large building. But we're almost done. Two more. The first one we're going to talk about is Taipei 101. It was completed in 2004. It's built in Taipei, Taiwan, which you could probably guess from the name. Taipei sits near the Pacific Ring of Fire, which is the most seismically active area on Earth. It gets hit by an earthquake about twice a year. And earthquakes are very different from wind shear, right? Earthquakes have a very strong effect because they affect the foundation of a building. And so an earthquake can literally break a large building. And this means it's pretty important to test against breakage before you erect a large building on a foundation. And it turns out that spaghetti models, like you might have built in science class at one point, model steel very nicely. They bend and break under similar conditions, with similar characteristics. And so this is how they test them. This is so awesome. I wish this was my job. Seriously. You get to play with these models all day?
Now look, the structure seems mostly intact, right? But if this had been a real building, that top floor would have fallen down and killed everybody inside. The structure was too rigid and it transferred too much of that vibration to the top floors. The industry, by the way, has a term for this kind of failure. So this is fun. It turns out that the only way that you can actually assure a lack of failure is to test for all modes. But the only way to know of all modes is to learn from a failure that actually happens. And so it's not possible to be absolutely sure that any given structure is going to resist any loading that could cause a failure. All we're really doing is figuring out that it's acceptably unlikely. Think about that the next time you're on the 30th floor of a building. Someone's in charge of deciding what's acceptable. So it's really important that we test to ensure catastrophic failure is acceptably unlikely. Hopefully set a good bar for that. So the designers of Taipei 101, they made it rigid where it had to be and flexible where they could afford to be. And this is a floor plan, a typical floor plan for 101. You can see these yellow dots on the map here. They represent 36 rigid steel tubes, including eight mega columns in red. These are all pumped full of concrete. And then every eight floors, there are these outrigger trusses that are essentially like big rubber bands around the building that allow the building to shake, essentially. Now on March 31, 2002, a 6.8 magnitude earthquake hit Taipei. And Taipei 101 was still under construction. It destroyed smaller buildings. It toppled two cranes from the tops of this building. But the construction ended up resuming without incident after an inspection said everything was fine. There was no structural damage. In fact, the engineers said during a quake Taipei 101 is the safest place in town. So you'd be surprised how flexible it turns out you can afford to be. Now that flexibility is great for withstanding a quake, but you might imagine that making a building that's kind of strapped together with rubber bands has negative effects in the way that you can resist wind. So if every time the wind gusted, everybody got sick, you probably wouldn't have any tenants. So it actually has three tuned mass dampers. And this is the biggest one. It's suspended from the 92nd to the 87th floor. And it weighs 728 tons. And what happens is, and this is during a typhoon last year, what happens is it swings in the opposite direction to sort of maintain against the wind so that when the building is swaying, it's kind of pulling back in the opposite direction. And you know, it's really good when winds pick up to have something at the top that's pulling for you. This talk? Right. And most importantly, I want to talk about the Borsch-Kalifa. It was built in 2010. And everything that we've learned so far has been refined and improved and applied to make this building possible. But that's not actually what I want to talk about when it comes to the Borsch-Kalifa. So after the attack of September 11th, it was actually discussed that maybe we wouldn't be able to build any super tall buildings anymore. Because the problem becomes one of evacuation. In an evacuation situation, stairs are really the only option. It turns out that walking downstairs is almost as difficult as walking up them. 
And at twice the height of the former One World Trade Center, Borsch-Kalifa, they needed a plan to ensure the safety of people that were going to be inside in the event of any kind of accident, really. And you know, the building had a naturally fire resistant concrete core, which, you know, that helps. But even so, as you build higher and higher, more and more people need to walk further and further to get to safety. So the big question then is how do the people that are in the Borsch-Kalifa get out in an emergency? And the answer that surprised me is they don't. Turns out that it's not just enough to give only one option to people who are in danger to leave the building. So what was done is that refuge rooms were built on the mechanical floors of the Borsch-Kalifa. They were built from layers of reinforced concrete. And they had fireproof sheathing. And the walls of these rooms, they can withstand the heat of a fire for up to two hours. And each room has a dedicated supply of air that's pumped in from fire resistant pipes. And by creating these safe spaces that are, the people in danger are able to go, the architects make it more likely that people are going to survive a catastrophe. And these spaces are every 25 floors or so. And that's important because it doesn't matter if the safe spaces exist, if they're not, if they're too inaccessible or if they're too risky to get to. Now in a fire, you probably have heard that it's not usually the fire that kills you. It's the smoke inhalation. Well, if the route to a refuge room is blocked by smoke, then that room is no good. So the Borsch-Kalifa, if anything, activates a fire detector, a heat sensor, a water sprinkler, a network of high-powered fans kick in. And they force clean, cool air through these ducts into these rooms to push the toxic smoke out of the stairwell so that the route to the safe room is clear. And it's important not to just provide the fresh air and the safe space, but to actively work to push out toxic elements in your building. And of course, none of this is a substitute for rescue workers. The rescue workers still need to come to the aid of those people who are in the refuge rooms. Safe place is just a place for people to go while there are people actively working to resolve the issue and resolve the emergency. Because anything worth building is only worth building because of how it impacts people. This was not a talk about skyscrapers.
|
Since 1884, humans have been building skyscrapers. This means that we had 6 decades of skyscraper-building experience before we started building software (depending on your definition of "software"). Maybe there are some lessons we can learn from past experience? This talk won't make you an expert skyscraper-builder, but you might just come away with a different perspective on how you build software.
|
10.5446/31524 (DOI)
|
Hi. Okay. Hi. My name is Kat. I'll be talking about how we deploy Shopify. Just a little bit about me. I'm a developer at Shopify. And actually today is my one-year anniversary there, which is kind of cool. Oh, thanks. Yeah. So I started one year ago today as an intern and now I'm speaking to all of you at RailsConf. So that seems kind of crazy to me. But I'm really excited. So on with the talk. A little bit about Shopify if you don't know. Our mission is to make commerce better for everyone. And that includes anywhere. So Shopify powers online stores, but it also has a point of sale system. It lets you put buy buttons anywhere on the internet. And basically it is just trying to make commerce better for everyone. Over 275,000 people have Shopify stores. And in total they run over 17 billion dollars worth in sales. So that's a lot of people and a lot of money going through the Shopify platform. As a platform, we're able to do 25,000 requests a second, which is like super important for all these stores and all this money. And we have some pretty famous people that are on Shopify that run a lot of flash sales that create a lot of traffic. Some of those people being like Kanye West, Kylie Jenner, and the like. Shopify is a Rails app; it's one of the largest Rails apps in the world and has been around for over 10 years. So that's a lot of code and a lot of code that we deploy daily. And what we use to deploy Shopify is a little something called Shipit. Shipit is actually open source. So if you go to github.com/Shopify/shipit-engine, you can see it there. And what it allows us to do is, on GitHub when you make your PR and you merge it to master, it shows up here in Shipit. And it has some checks there that the container build is successful and so on. And once all those checks pass, you can then deploy your commit. And when you deploy it, you get a screen that looks a little something like this. So on the side, you have like a visual representation of the deploy and then in the middle, you have the whole record of the deploy and what's happening. So on the side, you'll see there's little boxes labeled SB and a number. And those are our servers and the SB just simply stands for Shopify Borg. There's about 200 of them and the five dots in each server represent a container. So the green dot means the container has deployed the new revision, the blue dot means it's currently switching revisions, and the gray ones we haven't gotten to yet. So you can track your whole deploy here visually and at the bottom, you'll get a nice message when it says that your deploy has succeeded or you'll get another message if it hasn't, and we'll talk about all those messages today. So a couple fun facts. We deploy Shopify on average 30 times each day. The highest we've ever done it was 41 deploys in one day. So that's a lot of deploying. Also, because we use Shipit, basically anybody who makes a change and makes a PR can deploy Shopify, not just developers. So we have lots of people deploying Shopify every day and it takes about four minutes to get through the whole deploy and have your code be out in production, which is fast and exciting but also really terrifying. 
So what I'm going to talk about today is sort of what happens when you press that little blue button and ship it and send your deploy out into the world, but also what happens when people make requests during a deploy and anything like that that can happen. So back to that little blue button. So deploying Shopify, all it really is is pressing that blue button and monitoring how things go, but what's actually happening is that deploy button triggers a Capistrano deploy script which then runs, and the first thing that script does is it takes the new revision SHA and puts it onto that host server. So I think I already explained it, but SB1 is the Shopify Borg and that whole box drawn there is the host and the server. So once the new revision is in that revision file, the supervisor daemons are then started and the first thing that they do is they take the new SHA written in that revision file and start all the containers. Each host has five containers. I'll explain why in a bit. But each container is started one at a time. So the first container is started first and once it completes we go to the second container, and so on until it's done. So the little green outline and the solid arrow is just like my way of noting that a deploy has finished on that container, whereas a dashed arrow is where we're switching revisions on that container. Another responsibility that the supervisor daemons have is to check the exit status of the deploy. So there'll be three different statuses on the deploy and those statuses each mean a thing. So we'll go into them now. So one is you can get a success code, and that's a successful status, and if there are no more remaining containers your entire deploy is successful and things are good to go. You can also get a successful exit code on that container but if there are more containers you just continue through the process of the supervisor daemons restarting the containers that still require that new SHA and repeating until each container has switched to the new revision successfully and there are no more containers. Or unfortunately, what if your deploy fails? That'll be a different exit code and there's two options for a deploy failing: either the revision is flapping or the deploy simply failed. So what does it really mean that a deploy is flapping? It's just kind of a term that, if you don't know it, is hard to figure out the meaning of. So what it really means is that here's an example of a server and it started with revision A, so all five containers had revision A, and then we deployed revision B, so container one successfully got the new revision and everything looks great, but then on container two we tried to switch to the new revision and that deploy failed. So now we have three containers that are running revision A, the old one. One container that's running revision B, the new one. And the second container that we have no idea what it's doing because it simply failed. So that's what it means when a deploy is flapping, that it's running multiple revisions at the same time, and that is not something that's desirable at all. And the solution for this, what we do, is we just restart the application. So similar to a deploy but different, and we can just do it by pressing a button in Shipit, and what it does is it restarts all the containers, but I'll go into more detail in a bit. 
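The slides themselves aren't captured in this transcript, but a rough sketch of the supervisor behaviour described above might look like the following. This is not Shopify's actual code; the Container object, restart_with, and the status values are hypothetical names.

```ruby
SUCCESS = 0

def deploy(containers, revision_file)
  sha      = File.read(revision_file).strip   # the SHA the Capistrano script wrote out
  switched = []

  containers.each do |container|
    status = container.restart_with(sha)      # hypothetical API: switch this container's revision
    if status == SUCCESS
      switched << container
    else
      # some containers run the new SHA, some the old one, and one is unknown:
      # that is the flapping state; a plain failure is when nothing switched at all
      return switched.empty? ? :failed : :flapping
    end
  end

  :success                                    # every container now runs the new SHA
end
```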
So what happens if a deploy failed? So what if we deploy and it just simply fails? It never switches revisions, so now we have one mysterious container that we can't use, and the rest are still on the previous revision. That can happen if the container is just simply down. Like if we can't reach the container, for whatever reason it's down, we can't restart it, it won't start up, then the deploy will fail. You can do the same thing and just restart your application, and this will restart all the containers and try again. So I mentioned a bit earlier that Shopify runs and has a lot of very big and successful merchants, and why that's important to deploys is that these merchants drive a lot of traffic to the platform. So whether it's a flash sale with a new product or just all the shops having Black Friday, Cyber Monday sales, a lot of traffic gets to Shopify. And one of our biggest Shopify Plus merchants is Kylie Jenner. She's the third one in, and she sells lipsticks and she sells them online with Shopify. When she first started, she only had three lip kits in three different colors and that's what she sold, but she drove so much traffic with everybody trying to get them. Now she sells multiple different types of lipstick and lip gloss in different colors, so you can imagine every time she has a sale she just drives so much traffic, and that can affect deploys. So we'll talk about that. So when somebody wants their lipstick from Kylie Jenner they make a request, and you guys all laugh, but a lot of people want these lipsticks. Like, head over to the internet, look it up, you can see how it goes down, and so they all want these lipsticks. They make a request, it goes to the internet, the request gets routed by the load balancers to a server, and then that request, once it gets there, will be sent to a container. So if that container is switching revisions during a deploy, that container will not accept that request, and that request will have one retry and be sent to another container. And if that container is free, so not switching revisions, it doesn't matter if it's running the new or the old revision, that container will serve that request. If a container is serving a request and then the deploy tries to switch its revision, that container will lock: it'll serve all the requests it's currently gotten, and then once it's finished it will switch revisions, and any requests that are sent to that container in the meantime will be sent to another one. So each request only has one retry. So if it gets denied, like if that container says no, then it gets one retry to go to another container, and this is because if you send a malicious request, at worst you've blown up two containers and that's all. So sometimes you wonder, should we lock deploys during a sale? And the reason we wonder this is when one of these five containers is switching revisions we are down 20% in capacity. So one out of five, 20%, and that container can't take requests. So sometimes we think like, oh, if this sale is going to be huge, or if we don't know what the sale is going to be, maybe we can lock deploys so we're at 100% capacity. Most of the time we don't, and we continuously deploy throughout the day and let people have their sales, and everything is usually fine and dandy. And then we think, what if we had more containers? Like if we had more containers could we then deploy with even more certainty that nothing would go down because we're losing capacity while we're switching revisions? We're not. So here we have one server that has five containers and another that has 10. 
So if we were to have 10 containers so double our amount we would then have to switch revisions and deploy in parallel. We'd have to do two containers at a time. And doing two containers at a time so two out of 10. We're still down one out of five which is still 20%. So even though we increased more containers we still have the same downtime in capacity which is why we only have five containers and the reason why we chose five containers is and not less is because we're okay with being down 20% of the time like Shopify can handle it during a deploy but we're not okay with being down like 25% if we had only four containers. And the reason we don't go up to 10 is because it would just add more complexity. Now we're running these containers and switching their revisions in parallel and trying to keep track of everything while still doing a deploy. So then we wonder like how, what if you had to deploy during a sale and you still knew you couldn't be down 20% like you couldn't lose 20% capacity but you still had to deploy. One thing that you could do is only deploy on half of the servers. So this SB1 and SB2 we have 200 of them. Like let's say you only deploy to 100 of them. So you're only deploying on half. So you're down 20% but only on half of your servers. So that means you've only lost 10% capacity. And if you want even less downtime you just deploy onto a quarter and then you're only down 5%. So these are some frequently asked questions we have about deploys. So as I mentioned before sometimes the solution is to restart the application and what does that do? So when you press deploy the Capstranos script runs and it puts the new revision SHA into the revision file on the server that the supervisor daemons then get and then they restart all the containers with that file or with that SHA. So when you do a restart they already have the new SHA so they know what to do. So all they do is they start at the step of restarting the containers. So they just restart all the containers that still require the new SHA and continue on with the deploy from there. So some things can get interesting. So because we ship it and we just have this UI with buttons and we let anyone deploy, Shopify in fact you do it in your first few days of onboarding a few things can happen sometimes. So what if you deploy the current revision again? So this container or this server has all containers that originally started with revision A and now we've deployed revision B and then halfway through the deploy someone decided to restart. So they press restart like what's going to happen. So what will happen is that second container is already switching revisions from the deploy which is all great and fine but now another container in this example the fifth container is also now switching revisions because of the restart. So all that means is that now instead of only losing 20% of capacity we've now lost 40. So if a request came in at this point it can only go to three of the containers as opposed to four or five if there was no deploy or restart going. Usually we don't stop that we let that happen because our platform can handle it but you are down 40% so like maybe if there was a crazy huge flash sale and all of a sudden we got so much traffic maybe Shopify would go down because we're already down 40% capacity. But even worse what can happen is if you deploy a new revision so you've started with revision A you deploy revision B and someone comes along and deploys revision C so that's the third revision. 
So kind of like what happened in the previous example, another container will start switching revisions, but now it'll switch to a completely new third one. So right now you can see in this server we have three containers running revision A, one running revision B and a third running revision C. So not only are you down 40% in capacity because you have two containers switching at a time, but you also have three different revisions going, which is going to result in the deploy being tagged as flapping. Yeah, and at worst you could have five new revisions. Like, what if you just keep deploying before the deploy finishes? So you deploy once, the deploy doesn't finish, you deploy a second time, a third time, and you can get into a situation where your server is running A, B, C, D and E all at the same time. So I talked a little bit about restarting and deploys and I just want to highlight the difference between them because they are very, very similar. So a deploy gets a new revision whereas a restart is using the current revision. They both restart the supervisor daemons, and that's fine, they'll go to restart the containers with the revision that they have, but another difference is that the deploy downloads a new image whereas a restart already has the current image. So usually a deploy takes a little bit longer than a restart does just because of that simple fact. Now this slide requires a little bit more context. So at Shopify we have two data centers, one that we keep passive and one that we keep active. The active data center is the one where we send all our traffic to, and we deploy to both of these at the same time, always, and the reason that we do this is that we want the passive one to always be as up to date as it possibly can be with the active one. So if we have to fail over we know what kind of state that other data center is in and which revision of Shopify it's running. Like, it would suck if we failed over and it was running Shopify from a week ago or even a day ago, because so many changes go into Shopify that one day does make a big difference. So when you deploy you're deploying to both data centers at the same time, and if it fails in the passive one we rarely ever do anything. We just like let it go, because we know there will be a new deploy soon and that deploy will restart all the containers and bring it all up to the same revision. So even if containers were running different revisions in the passive data center, because we're not sending traffic there it's alright. In the active data center though, if a deploy fails or it's flapping or anything like that, we do need to take action on that because that's where we're running all our traffic. And normally what we do is just restart the application and monitor that. Thanks. That's my talk on how we deploy Shopify.
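For reference, the capacity arithmetic from the middle of the talk, written out as a tiny helper (a sketch, not anything from Shopify's codebase):

```ruby
# capacity lost = (containers restarting per server / containers per server)
#                 * (fraction of the fleet being deployed to)
def lost_capacity(restarting:, per_server:, fleet_fraction: 1.0)
  (restarting.to_f / per_server) * fleet_fraction
end

lost_capacity(restarting: 1, per_server: 5)                        # => 0.2  (20%)
lost_capacity(restarting: 2, per_server: 10)                       # => 0.2  (still 20% with 10 containers, 2 at a time)
lost_capacity(restarting: 1, per_server: 5, fleet_fraction: 0.5)   # => 0.1  (deploy to half the servers)
lost_capacity(restarting: 1, per_server: 5, fleet_fraction: 0.25)  # => 0.05 (a quarter of the servers)
```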
|
Shopify is one of the largest Rails apps in the world and yet remains to be massively scalable and reliable. The platform is able to manage large spikes in traffic that accompany events such as new product releases, holiday shopping seasons and flash sales, and has been benchmarked to process over 25,000 requests per second, all while powering more than 243,000 businesses. Even at such a large scale, all our developers still get to push to master and deploy Shopify in 3 minutes. Let's break down everything that can happen when deploying Shopify or any really big Rails app.
|
10.5446/31525 (DOI)
|
When I submitted this talk to RailsConf, it was in the track of "we're living in a distributed world". But I'm surprised to find that I'm the only talk in that track. It seems like there are no other talks that talk about scaling of Rails applications and distributed systems. So I think the reason might be that as Rails developers we are following some best practices, so that making our apps distributed or scaling our apps doesn't seem that hard, doesn't seem that problematic. But this one, this GitLab thing, is a bad boy, I would say. It really has some problems, and I'm mainly going to talk about how we fixed those problems. So thank you very much for coming to my talk. My name is Minky Pan. I come from China. I work for Alibaba Group, and that is my GitHub account and my Twitter handle. You're welcome to follow me. So what is GitLab? Well, GitLab is, well, let me say it secretly, it is just a GitHub clone, an open source clone of GitHub, but nobody likes to say that. So a better way to think of it is as a Git box that you could deploy on your machine. It is installed on-premises. So just a quick survey, how many of you use GitLab in your organization? Two of you. Thanks. So GitLab, if you see it as a black box, actually exposes two ports. One is HTTP, the other is SSH, and HTTP is used for two purposes. You can clone a repository via HTTP and you can push content to a repository via HTTP. And also, more importantly, as a Rails application, it provides rich user interactions with the web page. And on the other hand, SSH only allows you to clone and do Git operations. And in the back end, from a very simplistic point of view, it stores its content in Git, and that is what makes this thing a monster to scale, very problematic on that part. So if you look closer, it also uses some other stores on the back end. One is MySQL. Actually, they also support PostgreSQL, because they use ActiveRecord, which abstracts the actual implementation of the DB so it's changeable. And another is Redis. They use it as a queue for delayed tasks and also as some cache. And the other is the filesystem. They use the filesystem to store the Git repositories. So that's the black box. If we open it up to see what's inside, then you could see it's basically structured like this. It's all open source, so you could also download the source code and see it. When you deploy it, on the front end there are two parts, Nginx and the OpenSSH server. Well, the reason why those components are inside GitLab is because GitLab has an Omnibus package that you can install, and it actually depends on those two other packages. Nginx is for HTTP, and the OpenSSH server, as we mentioned, is for the SSH port that it opens. And when some requests come in, HTTP requests go to the second layer: Unicorn is for the ordinary Rails requests, but requests for Git, like clone and push, go to GitLab Workhorse. It's another service written in Go to make it fast. And if it comes in as an SSH request, it goes to the third part of the second level, namely GitLab Shell. And the third level is called by the second-level components. Mainly, Rails is responsible for operations on the page. And GitLab Git is a wrapper around Rugged, and Rugged is a wrapper around libgit2 on the fourth floor. And Sidekiq, yeah, that is for some task handling. And on the lowest level there are Git and libgit2. They utilize both implementations of Git. 
You know, libgit2, if you don't know about it, is actually a rewrite of Git in a way that is portable, embeddable, works as a library. Ergo the name git2: they see it as the second generation of Git, but with lib as a prefix because it's a library. So this structure works really great for small teams, but the company that I work for has 30,000 employees. This is from the fiscal year report of last year. They just published a new one this year, days ago, the day before yesterday. And the stock price went up. Looks good. It's a public company. So let's scale it. So how do we do this? Well, we first consider the problem on the front end. When a request comes, it's either HTTP or SSH. As Rails developers, we are most familiar with HTTP. And on the server it's actually run as Unicorn instances. And that's something we are very familiar with as well. We just put Nginx in front of them, set upstream in the configuration, let them point to the Unicorn servers in the back, and we are done. But for SSH, how to deal with this is a problem. So I started a project called SSH2 HTTP. It's open source on my GitHub account. It basically eliminates all those SSH requests, because the way Git interacts with the server is very similar between HTTP and SSH, and the requests over SSH can be easily delegated to Git requests over HTTP. And as we could see from the slides later, SSH is actually such a pain in the ass. There are more complications to this. So I guess that is the reason why GitHub now sets HTTP as the default. When you go to a public repo on GitHub, the clone URL, as far as I remember, defaults to an HTTP URL instead of an SSH one. There are actually complications to the architecture that make the SSH access a little bit slower than the HTTP one. But actually at Alibaba we did not use my approach. My approach was this slide, but actually we use this slide. What we did was, we are not using Nginx as the front end. We use something called LVS. And it is a feature from the Linux kernel. And the specific part of it that we are using is called IPVS, which expands to IP Virtual Server. And LVS stands for Linux Virtual Server. It is actually a layer-4 switching service, unlike Nginx, which operates on layer 7 of the TCP/IP stack. It does load balancing on the transport layer. So it supports all communications as long as they are TCP/IP. So the differences between HTTP and SSH are eliminated. But that comes with a cost as well, because when you go down to the fourth layer, you lose the ability to do health checking with the status code returned by the request. Because on the seventh layer, you could actually see what the status codes of your HTTP requests are and mark some server as healthy or not healthy. But on the fourth layer, you cannot see those. You can only see packets. You can only see the data. And URL rewriting, you lose that ability as well, because that is a layer-7 feature as well. And like I said, that comes with complications, because the SSH protocol involves some security mechanisms that check your keys. And if you have more than one machine in the back end, their keys are not the same by default. So when you deploy the application, you first have to copy the host keys across the whole cluster to make the host key the same. Otherwise, when you connect to more than one server, the client will complain, seeing that the SSH host keys are different (is this a security vulnerability? you've got to check it out), and it will not connect. 
And secondly, if you remember, you could add SSH keys from the client via the web pages, so you can clone a repository, like on GitHub. And the same thing happens in GitLab. So when you add your SSH key to the server, it has to dispatch, or copy, all of those keys across the entire cluster to make every machine accept your key. Specifically, they add a line in .ssh/authorized_keys. And they have to do it on every machine. And we did that via... well, you cannot do that via Sidekiq, because with Sidekiq only one machine in the cluster fetches that job, and the others will ignore the job. So you have to do it in a way that broadcasts all the keys across the whole cluster. And we did that via Redis Pub/Sub.
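A minimal sketch of that Redis Pub/Sub broadcast might look like this. The channel name, payload shape, and authorized_keys line format are simplified stand-ins, not GitLab's real ones.

```ruby
require "redis"
require "json"

CHANNEL = "gitlab:authorized_keys"   # made-up channel name

# Publisher: the web node that accepted the key from the user.
def broadcast_key(key_id, public_key)
  Redis.new.publish(CHANNEL, JSON.dump(id: key_id, key: public_key))
end

# Subscriber: a small daemon running on every machine in the cluster.
def listen_and_append(path = File.expand_path("~git/.ssh/authorized_keys"))
  Redis.new.subscribe(CHANNEL) do |on|
    on.message do |_channel, message|
      payload = JSON.parse(message)
      # Simplified line format; the real one carries more SSH options.
      line = %(command="gitlab-shell key-#{payload["id"]}" #{payload["key"]}\n)
      File.open(path, "a") { |f| f.write(line) }
    end
  end
end
```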
And then there's the back end. Well, the real trouble begins with the part where GitLab stores its repositories on the FS. And I want to pause for a moment to remind you of the 12-factor app. The reason why GitLab is such a bad boy, unlike other Rails applications, is because it violates the fourth rule of the 12-factor app. That is a set of principles advocated by Heroku, where the fourth rule says backing services should be treated as attached resources. Like a Twitter service, an Amazon service, a MySQL service, they should all be configured as a URI that could be easily attached and detached. But GitLab stores some content on the file system. That is the source of all evils. The content they store is, firstly, Git repositories, and secondly, user-generated attachments and avatars. Well, we are going to move them to the cloud to make it scale. Well, actually, standing at this point, you have a lot of choices. The choice that I am going to elaborate on might not be the best. I want to analyze the options that we have. So when you run into a Rails application that has a similar problem, you could evaluate those options as well. So the first option is a feature provided by GitLab Enterprise Edition. It is called GitLab Geo. And that doesn't really solve the problem. You see, how GitLab Geo does things is they make full replications of your GitLab instance across servers. It assumes that each machine of your cluster has enough file system storage to hold all the content of your Git repositories, and they make 100% copies across them. It's officially supported, but it really didn't solve our problem at Alibaba, because the overall size of all repositories is big. We don't want to store them on one single machine. There is not enough disk space to hold them. So from a distributed-system point of view, GitLab Geo is a one-master-and-slaves full replication design. And there's the CAP theorem, which says consistency, availability and partition tolerance cannot be achieved at the same time; you can only achieve two of them. So GitLab Geo achieves the A and P of those three parts. There is no disaster recovery supported and absolutely no sharding, because it's fully replicated. And the other option that we could use is seemingly a very perfect way to solve the problem. Well, first of all, we eliminated SSH with that gem written by me, called SSH2 HTTP, so that we could forget about the SSH problem and focus solely on HTTP. And seemingly there is something we could make use of. It is, you know, that every repository stored on GitLab could be routed using namespace/repo name. And that part appears in almost every URI of every request. Like when you see the repository commit history on the page, the route format contains that part. And when you clone it, when you push it, they all contain that part. So why not use that part as a routing key and put some routing logic into Nginx to make a sharded GitLab? And by doing that, every request, after it comes into Nginx, will be sharded. For example, if we are going to have a cluster of size three, we could invent some hash algorithm that distributes, that hashes, the namespace/repo name into the cluster, onto any one of those three machines. So it seemingly is perfect. But can you, you know, spot some problems inside this? Actually, one problem is, Sidekiq does not have sharding. Maybe it does, but you have to dig into it and see how you could do that. You know, each of those three GitLab shards could spawn some Sidekiq tasks, which need to be consumed by corresponding Sidekiq shards as well. So when you start the Sidekiq shards, you have to start them with special queue names as well. That's one complication, and there are others. Changes have to be made on the application level as well, because not every page on GitLab falls into a single shard. Like on the admin page, you could see a list of all the repos with their sizes. Well, if that request falls into only one single shard, you will not get that information, because some repos reside in some other shards. So major changes will be introduced to the application level as well. And also, you need super-user authentication, because the SSH requests are not designed to access all repos. A user authentication layer in front of them is also another application-level logic change that has to be introduced. This is actually not ideal. Every way of solving this comes with a cost.
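Going back to the routing-key idea for a moment, the hashing itself is simple to sketch. The shard list and the CRC32 choice here are made up, and all the application-level problems just listed (Sidekiq queues, admin pages, cross-shard authentication) still apply.

```ruby
require "zlib"

# Hypothetical shard list; in the Nginx variant this mapping would live in the
# routing layer rather than in Ruby.
SHARDS = %w[gitlab-1.internal gitlab-2.internal gitlab-3.internal]

def shard_for(namespace, repo)
  key = "#{namespace}/#{repo}"
  SHARDS[Zlib.crc32(key) % SHARDS.size]   # the same repo always routes to the same shard
end

shard_for("gitlab-org", "gitlab-ce")      # e.g. "gitlab-2.internal"
```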
So let's then think about how to deal with the file system storage. Well, we've got a lot of options. Well, first, we could make it a 12-factor app by making the file system attachable. There are some vendors who provide such solutions, like hardware network-attached storage. They usually call it NAS, and there are software NAS solutions as well. Google has GFS. And also we could use remote procedure calls to only make shards on the FS level instead of on the application level of the entire GitLab. And also we might consider killing it: we could maybe use Amazon S3 to replace the FS as the backend for Git storage. Well, we evaluated all those options. It turned out that NAS is not for Alibaba. Hardware NAS, well, Alibaba does not buy those things because it has non-IOE policies. And soft NAS, Alibaba does not have that yet. Like, Google has GFS but Alibaba does not have AFS. But I have to remind you that those two options might be good options for your organization if you want to scale GitLab. You know, they are really good means to solve the problem, because they introduce very little change to your application level, because all the changes are confined to the lower-level service that gets attached to GitLab. But I did not try them, and they surely come with a cost as well, because software NAS tends to be very complicated. As far as I know there is a good solution called CephFS, which just became stable about a month ago, or days ago. And if something happens on that layer you need to have some very talented operations or DevOps engineers to solve those problems. And also, by attaching a NAS, a soft NAS, you will also lose performance, because each IO to the FS is now networked, and there is added latency on each network IO. And you are replacing the thing at a very low level, so the added cost will be significant. So those two options, if you have a chance, you could dig into them. And RPC, well, that is a good solution. I looked up how GitHub solved their problem. It seems like they are doing RPCs. They are dispatching access via RPC calls into Git shards instead of GitLab shards. It's sharding on a different level. It sure looks like a good solution. And what we did at Alibaba was use the fourth option. We killed the FS and used the cloud. The cloud we use is called Alibaba OSS. Well, it's something that's not that well known, but you could think of it as the same thing as Amazon S3. It's object storage in the cloud. And how did we do that? So the rest of this talk will become a little bit technical. It turned out that GitLab has three ways to access Git repositories, namely libgit2, Git and Grit. Grit is a very old gem. It's written in Ruby. Well, we found that it could be eliminated, making the whole problem easier, because it's only used in the wiki part of GitLab, and it's used via a gem called Gollum. And Gollum was designed to have this Git access part pluggable. So we unplug it and we plug in Rugged, which uses libgit2. So that leaves only Git and libgit2. And we compare those two projects, Git and libgit2. Well, Git is pretty old, it was probably started by Linus Torvalds, and it did not consider the problem of plugging and unplugging backends. So its backend is hard to replace. All of the code is written to access content from the file system. But libgit2 is very modern. I don't know how its creators thought about the problem, but they designed the backend to be replaceable. You could write your own backends. So the basic idea is we write our own backends. We write backends that actually store the content on the cloud storage. And also, well, Grit has been eliminated, we also have to implement Git on top of libgit2, because Git cannot easily replace its backend storage but libgit2 can. So, cloud-based backend: what does that backend look like? Well, that involves some details about Git. Git has two parts to store its content. One is called the ODB and the other is called the refdb. The ODB is for the chunks of data that you put inside the repositories. And the refdb is the branches and tags that you put in the repositories. And for the ODB there are also two parts, two kinds of ODBs. The first is loose ODBs. Those are, you know... Git is fundamentally a content-addressable file system, the content address being the SHA-1 value of the object that you are trying to fetch. So loose storage actually stores each object under its SHA-1 value. I will, you know, open up an example; that's a Git repository. And if you go into the .git directory and run tree, you can see there are some files like those; those are the loosely stored files. And there are also packed files. Those are the pack files. That's what I mean. So we wrote a cloud-based backend to store both types of those files. The basic idea for the loose files is pretty straightforward. When you read, you make an HTTP request to read it from the cloud. Oh, I forgot to explain the refdb. It's very similar to the loose files, where you can see it's under the refs directory. All of your branches are inside, refs/heads/master for example. And master will tell you a SHA-1 value. So it's basically a key-value store. And that translates to HTTP requests pretty straightforwardly. 
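As a rough illustration of that key-value translation (not the actual backend, which is implemented against libgit2's pluggable ODB and refdb interfaces in C), the loose-object and ref reads and writes map onto HTTP roughly like this, with a placeholder endpoint:

```ruby
require "net/http"
require "uri"

BASE = URI("https://objects.example.com/my-repo/")   # placeholder object-storage endpoint

def read_loose_object(sha)
  res = Net::HTTP.get_response(BASE + "objects/#{sha[0, 2]}/#{sha[2..-1]}")
  res.is_a?(Net::HTTPSuccess) ? res.body : nil
end

def write_loose_object(sha, zlib_compressed_bytes)
  uri = BASE + "objects/#{sha[0, 2]}/#{sha[2..-1]}"
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.send_request("PUT", uri.path, zlib_compressed_bytes)
  end
end

def read_ref(name)                                    # e.g. "refs/heads/master"
  res = Net::HTTP.get_response(BASE + name)
  res.is_a?(Net::HTTPSuccess) ? res.body.strip : nil  # body is a SHA-1 hex string
end
```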
You see, for each refdb read, we make an HTTP read, and for each refdb write, we make an HTTP write; for each loose ODB write, we make an HTTP put, and for each loose ODB read, we make an HTTP read. So that's the simple part. The complicated part is the packed content. Because if you only store that loose content, it will be as slow as SVN. The very reason why Git is so fast is because it has a very good design of packs. Pack files are used both as a way to transfer content between the server and the client and as a way to store the content of your repository on disk. It's both a transfer file format and a storage file format. The way we write those packs to OSS is easy. We just translate them to put requests over HTTP. But the way we read them is complicated. You see, every pack comes with an index file. And that index file tells you, if you are looking for some object in the pack, where to start. So each request will be translated into a lot of ranged HTTP requests. First it will read the idx file to find the next range to read in the pack. And then it reads only that small portion of the file, using the Range header, from the object store. So as an example, if Git needs to read this content, then the first bytes will be blah, blah, blah. And it will binary-search in the index file and it will get an offset to begin at in the pack file. And in the pack file it will see if this stored content is a delta or not. If it is a delta, then it has to continue looking for the base of that delta. And the whole chain continues and continues until you find the root. And by combining all the deltas with the base, you get the object that you are reading. And here's an example. It's a real-world example. The chain is as long as this. You have to jump around inside the pack file to actually get the thing that you want to read, because each time you read, it's actually only a delta. So that is a real problem for us, because if the IO pattern inside that pack file is not good enough, then you will end up having a lot of range requests over HTTP. That will make the thing awfully slow. The good news is, the inventors of Git made some very good heuristic algorithms for when the pack files are generated, so that those IO patterns are not that bad. So when we make a range request, we could actually make the range bigger than we need. Therefore, we could fetch bigger content with each range request, and that content would be sufficient to fetch all the way to the root of that object. And thanks to this good characteristic, we eliminate many HTTP requests and make this whole solution not that slow. That's one part of it.
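A sketch of that padded range read over HTTP. The URL and padding size are made up; the only point being illustrated is the over-fetching trick, since the real logic lives inside the custom libgit2 backend.

```ruby
require "net/http"
require "uri"

PACK_URI = URI("https://objects.example.com/my-repo/pack/pack-1234.pack")  # placeholder
PADDING  = 64 * 1024   # over-fetch so most delta bases land in the same response

def fetch_pack_range(offset, length)
  req = Net::HTTP::Get.new(PACK_URI)
  req["Range"] = "bytes=#{offset}-#{offset + length + PADDING - 1}"
  Net::HTTP.start(PACK_URI.host, PACK_URI.port, use_ssl: true) do |http|
    http.request(req).body   # 206 Partial Content payload
  end
end
```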
And the other part of it, as I said, is that you have to make Git talk to libgit2, because Git does not have a replaceable backend. It turned out that this is pretty easy. Actually, the inventors of Git are pretty smart folks. They wrote Git in a very unique way: all of the commands call each other. Like in git fetch and git clone, on the server side, the first call is git upload-pack, and git upload-pack will then call another command called git pack-objects. And the commands that deal with the transmission protocols, we will not touch. That part is complicated, and we do not touch it. We only touch the thing that does IO from the disk. So we only need to replace git pack-objects. And in the git push scenario, we only need to replace or re-implement git unpack-objects. And implementing them on top of libgit2 is very easy. It's no big task. And also there are other scenarios. There are two scenarios when doing git push. Small data gets unpacked right away and written to the loose storage. And big data doesn't get unpacked, because unpacking consumes time; Git directly creates an index for it and writes the pack. So in this case, we need to re-implement git index-pack, which is a pretty easy task. All right, so after all of those changes, let's see what the performance looks like. It's definitely going to be slower, because you're still changing fast file system IO to slow HTTP IO. So let's see how it looks. Well, the test fixture we use is a repository called GitLab CE. It has more than 200,000 objects. And when packed, it weighs more than 100 megabytes. And git push, well, about the same performance, because on the file system we write directly to the FS, and on the cloud we write directly over HTTP, and there are not too many new operations created. It just added a small amount of time to each of those two operations. And for git push, like I said, there are two scenarios. When you push large content, it only stores the pack; so this is the large content scenario. And if you only push a little content, it gets unpacked and stored loosely; this is the delta case. Also not too much time added. And git clone, well, it is actually 100% slower, because when you do a clone, the range operations happen and that's what makes it slow. And also git fetch, it got way slower, because this is the delta fetch. This usually happens when you do git pull, when your coworkers have updated the repo. And it also has to go through the whole process of the range operations that I mentioned. So it's really slower. But the good news is it's not that slow. The user has to wait longer, but it's not something that they cannot wait for. And also on the page, it got way slower. All of the Rails operations were affected, because we are operating on a deeper level, and Rails will call Rugged, Rugged will call libgit2. libgit2 is slow, so Rails is slow. Like on this page, we are listing files, and the show action now takes five seconds to run. Well, let me say, all of those benchmarks are without cache. So the real-world scenario will be better, because we have caches. And like this, this is another Rails operation, and before the change it was 50 milliseconds and after it's about five seconds. So that's the reason why we had to add a lot of caching. We added caches on multiple layers, like those Rails layers. I'm not going to elaborate on all the caches that we added, but here is one interesting aspect of it. Well, libgit2 was designed in a way that it could have more than one ODB backend, and you could even set priorities on them. So we basically made a hamburger structure of backends. We added two new backends to it, which are the cache backends. The servers that we deploy those things to still have a file system to use, and we use that as an on-disk cache. If we read some content once, we'll store it on the file system so that the next request that hits it could just read the content from the file system instead of making remote HTTP calls. And the good news is the ODB of Git never changes. You can only put data into it, but you can never modify data. So we are free from the problem of cache expiry.
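A minimal sketch of that read-through on-disk cache, assuming a hypothetical fetch_from_cloud helper standing in for the HTTP read shown earlier. Because Git objects are immutable, the cached copy never needs to be invalidated.

```ruby
require "fileutils"

CACHE_DIR = "/var/cache/git-odb"   # hypothetical cache location

def read_object(sha)
  path = File.join(CACHE_DIR, sha[0, 2], sha[2..-1])
  return File.binread(path) if File.exist?(path)   # warm cache: no network I/O

  data = fetch_from_cloud(sha)                     # cold cache: one HTTP round trip
  FileUtils.mkdir_p(File.dirname(path))
  File.binwrite(path, data)
  data
end
```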
And also the refdb could be cached as well, but that's way more complicated. It might not be worth the effort. I might remove it in the future, because you have to expire the cache: refdbs get updated all the time. When you push a new commit to master, say, refs/heads/master gets updated and you have to expire the cache. So you have to go into the details of when the cache gets updated. And lastly, I want to say something about future work. For right now, it seems like this idea works more or less acceptably. And if you guys love it, I will try to do an AWS S3 version of it, because it's currently working on OSS, which is not so widely used. And there is some need for this. The reason why there may be some need for this is because GitLab cannot be deployed to Heroku at this moment. And if we could make this backend for AWS S3, then the users of GitLab could have a chance to deploy it to Heroku. And also, GitLab still has many direct calls to Git, for example for the commit history page of a repository. It actually spawns another Git process to fetch the result. So we could eliminate some direct calls to Git. And after we develop that backend for AWS S3, we could add settings for the user to choose which backend they want to use. It could be either the file system or AWS S3. That would be perfect. And Gollum, we could do some work to make it use Rugged as the default. And libgit2 itself, we found it slower in many scenarios compared to Git, so we could improve its performance in the future. And I will be actively doing those jobs on my GitHub account. So if you're interested, you could look at my account and see how it goes. Thank you very much.
|
GitLab, the open source alternative to GitHub written in Rails, does not scale automatically out of the box, as it stores its git repositories on a single filesystem, making storage capabilities hard to expand. Rather than attaching a NAS server, we decided to use a cloud-based object storage (such as S3) to replace the FS. This introduced changes to both the Ruby layer and the deeper C layers. In this talk, we will show the audience how we did the change and overcame the performance loss introduced by network I/O. We will also show how we achieved high-availability after the changes.
|
10.5446/31527 (DOI)
|
All right, I think it's time. Are we ready? I have to tell you something funny before we get into this talk, though. I decided it would be cool to have my five-year-old daughter do the illustrations and the pictures for this talk. So how about halfway through it, I realize that I'm giving her all these weird requirements and making her do all these things to fit my talk. So I think I have actually become that interviewer that we all hate. Might be a problem. So take the rest of this talk with an appropriate grain of salt, I guess. But we're going to talk about coding interviews today and how to pass coding interviews and what they are, that kind of thing. And the reason I wanted to do this talk is basically this tweet right here. This is where I got the name from the talk, by the way. And it's true, right? It's true. It's sadly true. And I feel like that what we do in interviews and what we do in our day jobs, there's a bit of a disconnect there and we don't spend enough time talking about the interviewing side. So let's talk about the interviewing side. These are the coding interviews I'm familiar with that are in wide practice. You can have take-home problems where they give you some challenge that you work on with or without a time limit. Technical interviews where you work with one of their employees often involve some kind of pairing but it's probably not like normal pairing like you do on the job. Mostly it's them watching you struggle to figure out some problem or something like that, maybe providing a little bit of insight or questioning. There's the whiteboard interview where you're programming on a whiteboard or in a Google doc or something like that. And then there's, I've seen auditions where people have you do some real work on their application, either on your own or with help and then they judge you based on that. So I work at No Red Ink and as part of my job, I do grade take-homes, I do technical interviews. So I see a lot of what doesn't work and I think I have some tips that I can give to help you get past those problems. And it's kind of in my interest to do so, right? Like if our director of engineering comes to me and says, let's hire eight new people this year, if I could find a bunch of good programmers say like in May, then I could take a few months off from interviewing, right? So that would be just great. Okay, this talk comes with caveats for sure. First of all, you know, there's no guarantees. I am going to tell you the major mistakes to avoid and I'm going to try and raise your chances in your interviews, but it's well known that the mood of your interviewer, their level of training they've had, these are all things that are out of our control. So it's definitely a numbers game. I'm sorry about that. I wish it wasn't, but it is. And that's something we can't fix from this talk. I'm also not going to judge the various interviewing practices. I think there's things our industry does well. I think there's things, plenty of things our industry does not do well. That's a great conversation to have, but that's not this conversation. There was a really good talk yesterday called hiring people with science. If you didn't catch that talk, you should probably watch the video. It gets into some of this and it was really good. Also I am not familiar with every kind of interview out there. For example, the whiteboard interview, I've never taken or given one, so I won't have much to offer there aside from general tips. 
And finally, if you do everything I tell you to do in this talk, it's going to take a lot of time and your spouse and family are probably not going to be that happy with you. So it's up to you to apply the appropriate filter of how much you should do here and for your needs. Okay, caveat aside, let's talk about what you can do. The worst thing you could do is show up to the interview and then start trying to pass the interview. You need to do some things to get ready to help you swing the odds in your favor. And the first one is to study. Interview questions are pulled from a well-known subset of computer problems. So if you are more familiar with those kinds of things, you're going to do better in the interview. And it may not be stuff that you are using every day in your job. So if that's the case, it's really important that you kind of load this back into your brain. These are some of the concepts I think are the most useful. I've cross-checked this list with a couple of different sources. They all pretty closely agree and I've kind of put my own interpretation on it. I've kind of arranged it so that the most important stuff is at the top and to the left. So if you kind of work your way this way as you have time, I think it's probably the most helpful. But these are things that you need to know. And it's not that I think you will be asked to implement a binary search directly, but sometimes just being familiar with these concepts can help you in some ways in an interview. And we'll talk about that. So I stuck big O notation in the upper left, which might seem a little bit surprising, but I think it's more important than people give it credit for this is a tool for understanding how long your code's going to run or how much memory it's going to eat up in the upper bound and I would say that I think it can actually help you solve problems easier. We'll get into that in just a second. But take a look at these lines. These are different common complexities for algorithms and it's a big difference if you're on that green line that slowly climbs up, whereas if you're on that orange curve, which is almost a straight right to the top, right? That's a big difference. And understanding the difference between these can be very powerful. So I think big O notation kind of gets a bad rap. I think we think of it as a formal tool that only matters when we're documenting something or things like that. But I don't think that's the case. I really do believe it's useful even in programming and maybe even more especially in interviews style programming. So let's take a quick big O notation quiz. Let's see if you remember how much you remember. So I'm going to give you a few seconds. Think in your head of what you think the complexity of this code is and then I'll show you the answer. Okay, got a guess. This one is big O of N. It's linear algorithm, right? The two important bits are right here, we scan over the data the entire set twice, summing once and multiplying the second time. That would be two N, but in big O notation we drop constants and non-dominant terms, so it's just N, right? It's linear. How about this one? This is the classic recursive algorithm of the Fibonacci sequence. Give you a few seconds. This one is big O, two to the N. And I wanted to show this one because if you run into recursive algorithms there's a formula you can use to figure it out. So this line right here is the important one. 
It's however many branches there are and then however raised to the power of the depth you have to go down. So in this case we're making two recursive calls, one with I minus two and one with I minus one. And we have to go down however many steps in the sequence based on the parameter that's passed in. So that's a formula for figuring out recursive algorithms. Okay, this one's more complicated, so super bonus points if you get this one. All right, anybody think they know it? I'm sorry? N log N. N log N. That's kind of part of it. This one is big O, B log B plus A log B. Which seems horribly complicated, right? But it's not quite as challenging as it looks, I think, if you break it down. So this is the sort right here. B uses a hybrid sort and like most good sorts it's B log B or N log N. So that's the first term of the answer. And then the second term of the answer comes from these two lines. We're scanning over A and for each element in A we're doing a binary search on B. So they're dependent work, you have to multiply them together so that's where the A log B comes from. The sort is independent from that so we add it on. It don't feel bad if you don't know these or you didn't get these. I took a big O quiz when I was preparing for this talk. I'm like, I wonder how well I would do. Yeah, I miss plenty of them. So it's not like everybody gets these right all the time. But studying them will give you some advantages. I'm going to try and make the case for that now. So let's do this. This is a classic interviewing problem. I pulled it from a list of interviewing problems. It says, given two sorted arrays, find the elements in common. The arrays are the same length and each has all distinct elements. So the most obvious solution to me is something like this where you would scan over one of the sets and for each item in the set you would see if it's in the other set. This one is, I believe, big O N squared, I guess, because as you're running over the set you can potentially have to run over the other set to the entire set for every item. So this is the brute force approach. But just looking at the problem and knowing what I know, I know that the sets are sorted and I know they're distinct. I know they're the same size. If I use all of that knowledge, then I'm guessing there's a linear solution. I suspect that's the best I can do, that I can walk through the set one time and know what items are shared. I can try to figure that out on my own and that might look something like this. This is kind of the way I just described it, where I can scan over one set and collect the answers. Meanwhile, I can keep a pointer into the other set and I can bump it along whenever I pass a certain size because then I know I won't have to consider those anymore. But this is hard for me to think about and I don't know if I got it right. I think it's linear but I'm not entirely sure. Anyway, it's complicated, right? But if you're familiar with things like bigger notation and how we measure complexity, there's easier ways to get to a similar solution. So we could do something like this. Why don't we just run through this set, one of the sets, and load it into a hash. And then let's run through the other set and check it against that hash. We made two passes but they're both linear, right? So we're still at a linear algorithm and we added a hash insertion check but that's constant time on a hash so it's a non-dominant term and we end up dropping it anyway. 
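Here is one way the two approaches just described might look in Ruby; again a sketch of the idea, not the slide code.

    # Brute force: for each element of a, scan all of b -- roughly O(n^2).
    def common_elements_brute(a, b)
      a.select { |item| b.include?(item) }
    end

    # Hash-based: one pass to load a into a hash, one pass to check b against
    # it. Two linear passes plus constant-time lookups -- O(n).
    def common_elements_linear(a, b)
      seen = {}
      a.each { |item| seen[item] = true }
      b.select { |item| seen[item] }
    end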
So this is a linear solution and it's much easier for me to think through this one and realize it. That's my argument that this can be a tool to find easier solutions to problems. Ruby can help you as you're playing with these things. Mini test includes some assertions that will give you inputs and then measure the time your code takes to run and fit it to a curve to see which category it falls in. This code's a little small and I apologize for that. You're not missing much though. This is the recursive algorithm. I showed you earlier for the Fibonacci sequence. This one is pretty much the same thing except it's inside a default block for Ruby hash which gives us memoization, right? Once a value's been calculated it'll be in the hash and it's just cached. And then this is the dynamic programming version. So if memoization is coming at the problem from the top down with caching, then dynamic programming is coming at it from the bottom up and building up to the correct answer you want, right? Just three different ways to calculate the same thing. So then we can test these with mini test using mini test benchmark. You get these assertions and you can say, I expect this algorithm to be exponential. The problem is this algorithm is so bad it quickly overwhelms Ruby. And so I had to change the inputs down to a set that would keep it something that can execute in a reasonable amount of time. In this case I really had to chop the inputs down so maybe not a lot of data points which is kind of bad. But that's kind of a bit of feedback too. If your algorithm overwhelms Ruby quickly it's probably not a great algorithm, right? So that you can kind of still use that as a guideline. This is the memoized version. It's linear. Still a little bit more than Ruby could handle but I only had to trim the range. It's tiny bit. I knocked off the highest category. That's it. And then the dynamic programming version Ruby could handle just fine. Which kind of surprised me. The hash version is happening mostly in CU I think. I expected that one to do better but it didn't. So these are some tools you can use to play around with algorithms and find what they are. Let's look at a different problem. This is not a coding problem. It's a brain teaser. It says you have nine balls. Eight are the same weight and one is heavier. You're given a balance which tells you only whether the left side or right side is heavier. Find the heavy ball in just two uses of the scale. I'll give you a couple of seconds to think about this and see if you can come up with the solution. Does anybody think they know how to do this one? Ideas. My daughter illustrated this one for us. So if we put three balls on the left side and three balls on the right side and three balls on the table, if the left side is heavier it's in that set. If the right side is heavier it's in that set. If they're even it's on the table. Either way we've narrowed it down to a set. Then we can do the exact same thing with one ball each and find the heavy ball. This is part of what I was saying when being familiar with certain concepts can help you get to answers to interviewing problems. So this problem I think you are more likely to figure out if you've played with things like binary search or merge sort. This is kind of similar to those principles, right? And dividing them out into categories and managing those categories individually. So that's what I meant by just being familiar with the concept even if you don't specifically have to implement that algorithm. 
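Circling back to the Minitest benchmark idea from a moment ago, here is a sketch of the three Fibonacci versions plus one of those performance assertions; minitest/benchmark and its assertions are real, but the exact names and input range here are mine.

    require "minitest/autorun"
    require "minitest/benchmark"

    # Plain recursion: O(2^n), quickly overwhelms Ruby.
    def fib_recursive(i)
      return i if i < 2
      fib_recursive(i - 1) + fib_recursive(i - 2)
    end

    # Memoized: a Hash default block caches each value the first time it's asked for.
    FIBS = Hash.new { |cache, i| cache[i] = i < 2 ? i : cache[i - 1] + cache[i - 2] }

    # Dynamic programming: build up from the bottom instead of recursing from the top.
    def fib_dynamic(i)
      a, b = 0, 1
      i.times { a, b = b, a + b }
      a
    end

    class BenchFib < Minitest::Benchmark
      def self.bench_range
        bench_exp 1, 1_000 # keep the inputs small enough for Ruby to handle
      end

      def bench_fib_dynamic
        assert_performance_linear 0.99 do |n|
          fib_dynamic(n)
        end
      end
    end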
No matter what, you need to practice these kinds of problems. I wish there was a better way, but there's not. My favorite site for doing the practice is exercism.io. The problems are a fairly good size, they're pretty consistent, and while you solve them you can participate in the feedback mechanism on the site, so you help improve your code at the same time. You need to get to where you can solve exercism-style problems in under an hour, right? If you have a technical interview that's an hour and a half, maybe you're not going to be coding the entire time. So really you've got to be solving these problems in under an hour. You can also use my older site, rubyquiz.com. There's more problems there, but they're less consistent, so you've got to kind of take it with a grain of salt. Exercism is probably more useful. If you really want to go all in, this book is over 600 pages of programming interview problems. It's got explanations, general advice. It gives you lead-ins to the problems, discusses the answers. This is the deep dive right here. If you go through this book and get to where you're familiar with the problems in it, you're pretty much ready for anything, I think. So if you want to go that far. Another thing I would recommend that's not about solving problems specifically: you need to have some open source. You want to be able to send it to them or get it to them in your cover letter. It's a great place to put it when you're applying. But it doesn't have to be a big gem. A small gem is fine. In fact, I prefer it when I go to look into your open source if I don't have to read 3,000 lines of code to figure out what's going on. So having even a 100-line gem is great. There are some things like having a great README, clean code with small methods, tests, documentation, the kind of stuff you do in your day-to-day job, hopefully. And that's what's great, right? They get to see you as you are in your day-to-day job, which is something they don't necessarily get out of some of the things in the normal interview process. So this can help. Okay, that's enough for preparing. What about when we actually get to the interview? These are the things you should do while you're there actually taking the challenge. So first of all, let's talk goals. Your main goal is not to be seen as a bad hire. We have lots of information that tells us that good hires are good, but bad hires are really bad, right? It's not just the performance of the bad hire itself. They drag down the performance of other employees. So it's like a multiplier, right? It's bad. So companies are optimizing for avoiding bad hires. Which is good news, actually. It means if you can avoid making major mistakes, you look a lot better than a lot of other people. So really you want to try to just avoid making major mistakes. That should be your primary goal, okay? And I'm going to give you some tips for things you can do to do this. Number one, you have to read carefully. You must understand the problem you are trying to solve or you are not going to solve it well, right? This is pretty straightforward. If you're being given the problem verbally, you have to ask clarifying questions until you really have your head around it. There's two reasons for that. One, you have to understand the problem. We've already talked about that. But two, if you are taking the problem from someone else, then your goal is to get that person on your side, and it begins with this conversation, right?
Specking out the problem, understanding what you're trying to do. That's what you're trying to accomplish here. And a lot of people make this mistake. It's a basic mistake. When I interviewed at No Red Ink, a very nice guy, Michael Glass, who interviewed me, gave me a problem to reimplement set time out in Ruby. And the very first thing I did, I was like, oh, yes, set time out in Ruby. I know what we're talking about. And I just cracked open a text editor and started writing a test about how I was going to test the different threads going on. And Michael's all, no, no. In JavaScript, there's no threads. There's just this one stream of execution. I want you to do it like that. Oh, yeah, that. So everybody makes this mistake. You have to understand the problem. It will save you. And I'm going to give you a pop quiz. This is important. Lots of people don't do this. It's a little bit shocking. Here, I'll give you a pop quiz. Let's play a game. If the problem says, you should use get, should you use get? Yes or no? Yes. This is a good idea, right? If they said that, then they have some reason for it. Something they're trying to get out of it. Maybe they're looking at how you structure your commits, how big they are or not. Perhaps they're checking the amount of time it took you between commits. They can see that. But anyways, they have some reason for telling you that and you should do it. Here's another one. A little bit trickier. If the problem says, no one has ever solved this using Rails, should you use Rails? Yes or no? No. Why did not everybody say no? Were you thinking in your head? Yeah, but I could be the one. Right? Is that where you were thinking? You're not going to be the one. I'm sorry to break it to you. And you don't need to be. Why tie one of your hands behind your back? Why make it harder? Why don't you try to just solve the problem well? That's easier, right? So don't do it. Don't do that. Stop and think. Because programmers, we just, you know, we'll jump right in, we'll start coding. It'll come to us. We can always refactor, right? Yeah, okay. But if you're on a time limit, you've got to use your time well, right? And using five minutes to make a plan so that you use the rest of your time better will pay you back, right? It's okay to build a plan. It doesn't mean you're doing waterfall development if you build a simple plan. Okay? It's okay. And in fact, I feel like this is important enough that we need another hilarious tweet. So here you go. Right? Okay. When you're doing this thinking, you have to think out loud. And this is tough for some people. And I really feel for you. I know that some people struggle with this. Loudmouths like me, we have no problem with this. We can't stop talking. But some people do struggle with this. If you do, you need to practice. In front of a mirror, you can have you and your friends interview each other. You can do it with a spouse, significant other, whatever, until you get the hang of it. You must externalize that internal monologue. I interview multiple people that get to a part and they start thinking and they just stop. They stop talking. And I try to be respectful of it. I give them a little time and then I'll ask the question like, can you tell me what you're considering? I've actually had people say to me, hang on, I just got to figure this out. And I'm like, yeah, I know. And I got to figure out whether or not I want to hire you based on what you're figuring out. Right? So we have to know what that is. 
We have to be able to understand it. When you're doing this, work in magic words. There's some great magic words that help. My first thought is whatever. After that, it almost doesn't matter. I don't care if your first thought's a terrible idea. Right? It was your first thought. And then I get to watch you walk through the process and see if it's good or bad. Right? I'm considering the trade-offs. I love hearing this one. Hopefully you know by now that all of programming is trade-offs and you're always, you know, weighing one thing versus the other thing. If you tell me you're considering the trade-offs and then you make a choice that I wouldn't have made, I won't hold that against you. I know that you know that there were trade-offs involved and for whatever reason your experience told you this path was better, that's fine by me. If you're considering it, I'm good. I might circle back to this. All programmers write code they don't like. Hopefully you know that too. And we write things and we're like, I'm not totally satisfied with that. Maybe I'll come back when I have a better idea. It's great to hear that, that you know that, that you're aware of that, that you'd love to improve it. But right now you're going to get on with solving the problem. These three magic words are the most important. I don't know. Say it. I don't know. Apparently, this is very hard for people to say. Very hard. If I can't make you say I don't know in an interview, it's a huge black mark against you as far as I'm concerned. It's think about the one thing you know really, really well. Maybe better than anybody. Maybe it's Ruby programming. Maybe it's golf. Maybe it's painting your Warhammer 40K managers. I don't care. One thing you know really well, think about how much it annoys you to listen to other people talk about that thing when they clearly don't know what you know. That is you in every other subject. Okay? There's a lot of things we don't know. We need to get comfortable with admitting that. There's no reason not to say that. There are things your interviewer doesn't know. You just say I don't know. It's not that hard to say I don't know. I don't know. Okay. I wish I didn't have to have this slide in there. We interviewed a candidate who was doing a CSS recreation of a page on our site that happened to include a woman model and they were making sexist comments about the woman model as they were. Please don't do that in an interview on the job ever. Just don't. We want to know that you have a personality, a sense of humor. That would be great. But please keep it appropriate. If you are a five-year-old deciding which dessert you're going to have is the hardest decision of all. If you're given an interview problem, there's a hard part in the middle. The core of the problem. It's a search, a sort, an algorithm, a thing that does the work, transformation of data, the interesting bit. That's the part they want to see you do. Start there. Do that first. This is the most common mistake I see people make in solving problems. They read the problem and they think, oh, yeah, I know what they want me to do. I know what they want me to solve. They start building this elaborate object tree, doing all this setup work, data import, beautiful UI. Whoops, time's up. That's what they submit. They didn't do the problem. They didn't get to the thing. If you run out of time, you want to know you did the thing, right? The important bit. 
You can always say, oh, I didn't get to the data import, but that's like two lines of CSV code or whatever. But you want to have worked on the interesting part. So start there. Also, while you're doing that really interesting part, because you're faking out input and hard coding decisions from the UI, go ahead and just make it a test case. It's the same thing. It doesn't cost you any extra time. You can test the success case. You can test a couple of edge cases. And here's the killer secret. No one does this. No one. So this is a massive advantage. And it doesn't cost you any extra time. You don't need to test the whole thing. When you get to the data import in the UI, whatever, drop the testing. That's fine. But when you're doing that core bit at the beginning, test it. You will look amazing. No one does this. It's a great secret, I promise. Avoid dependencies. Right? And unless the problem specifically says you're working with something, the interview problems are of such a scope that they're simple problems. You're not the kind of thing where you need to drag out rails. Don't do that. They don't want to see that. They want to see if you can think through a simple problem. They want to see if when the sales team asks for a certain report, are you going to be able to take some data, munch it around, and come up with it? That's what they're looking at. Also don't use rails means don't use rails. And it also means don't try to recreate rails and then use that. That's wrong scope. Wrong scope. You're off the map. Right? Keep it simple. Along similar lines, write less code. I feel compelled to point out I got mad about this picture because my daughter was not following directions. I told her to draw ones and zeros. So what is this butterfly thing? I don't know. I told you I've become that interviewer, you know, that one we don't like. But in my defense, I used this picture. I had another one without the butterfly and I used the butterfly one. Seriously, don't gold plate. You have to keep your solution small enough that it doesn't overwhelm me to go through it. If I crack it open, it's 500 lines of code. And the first thing it does is fire up a lexer and run some stuff through that and hand the tokens off to a parser. My eyes have already glazed over and I don't care what you did. So you're, again, you're off the map. You're in the wrong scope. Ignore bonus challenges. Don't do it. It's not worth it. It's a trap. Okay? It's a trap. Luke, it's a trap. Yeah, don't do it. Here's the problem with bonus challenges. They often involve extra work. You probably have to do more data important. You got to write some extra UI code. And here's the insidious part. When you read it, you think, oh, I'm going to write my solution so that when I have time and come back, I've already set it up for the bonus challenge. Okay, you're not going to end up having that time. So you're not going to come back. And what you did is you made the main problem harder. Great. Don't do it. Ignore the bonus challenge. It's not worth it. If you do that part of testing the core, it's worth 10 times the point of the bonus challenge. Just test the core bit, forget the bonus challenge. This is good news for you. When you read the words extra credit, you can interpret them as I can completely ignore this. It's great. It's like an advantage. Time limits can exist for several different reasons. One might be that the company wants to know what you can do in a certain amount of time. 
Another might be that they're trying to keep you from sinking massive amounts of your time into something that they glance at for five minutes and say no thanks. So there's lots of different reasons time limits can exist. You don't know what reasoning they have for using it. So if you're working on a problem and you're close and you're not done, go ahead and submit what you have at the time limit. Then if you want to finish it, go for it. Finish it. If you're 30 minutes over the time limit, wrap it up, send it in and say, have the problem in my teeth, wanted to find that last bug or wanted to handle this other case, here's what I got, feel free to ignore. They can do that. You already submitted what you submitted on time. They may ignore it. They may go with that one. But for me, I will always grade the later one. For me, knowing that you wanted to finish the problem means way more to me than what you could do in the specific time limit. So that, you know, it can't hurt. Go ahead and submit. And finally, if you are taking the interview face to face, you need to practice your text editing skills. Get good at it. This is a bias, but when people watch you using your text editor and they see you using shortcuts, they think that you work with code all the time, that you've developed these coping mechanisms, that you've practiced it. And so you come out looking better, more prepared. You don't need to know every whiz bang feature in your editor. If you're like me and you use Emacs, that's literally impossible. I won't live long enough. But you need to just pick five. It doesn't matter. You won't have that much time to show off that many anyway. Make sure one of the ones you pick is multiple cursors or macros, because these are wow features, right? You don't want somebody to use multiple cursors or macros. You're like, whoa, how did they just change those 10 things at once? That's amazing. Right? So it sticks with you. Just make sure you have some editing skills to show off. You probably won't work all of these in every time. That's okay. Interviewing is hard for everybody. That's kind of the point, right? They're watching you struggle with a problem. They want to see you struggle. It's supposed to be hard. But everybody else is doing that too, right? You're not competing against the problem. You're competing against the other candidates, right? So avoid the major mistakes, and you're going to look pretty good compared to those other candidates. So it's okay for it to be tough. But if you work over these things and practice these things, I believe they will increase your chances. All right. Thanks for your time, and good luck with the interviews. Thank you. Thank you.
|
If you apply for a programming job, you may be asked to complete a take home code challenge, "pair program" with another developer, and/or sketch out some code on a whiteboard. A lot has been said of the validity and fairness of these tactics, but, company ethics aside, what if you just need a job? In this talk, I'll show you a series of mistakes I have seen in these interview challenges and give you strategies for avoiding them. I'll give recommendations for how you can impress the programmers grading your work and I'll tell you which rules you should bend in your solutions.
|
10.5446/31528 (DOI)
|
So it is time to get started. This is Inside Active Job, in the Beyond the Magic track. My name is Jerry D'Antonio. I live and work in Akron, Ohio. If you are an NBA fan, you have probably heard of Akron; there is a local kid who has done pretty well in the NBA. I went to school about 10 minutes from where I live. I work at Test Double. You may have heard of Test Double. Justin Searls, one of our founders, was on the program committee for RailsConf. At Test Double, our mission is to improve the way the world builds software. I know that sounds very audacious, but we truly believe that every programmer has it in themselves to do that. I believe every person here has it in themselves to do that, and that is why you are here. It has definitely been a great company to work for, and I am very proud to represent Test Double here. Personally, my biggest claim to fame lately is that I created a Ruby gem called Concurrent Ruby. You may have heard of Concurrent Ruby because it is used in some very well-known projects like Rails. Concurrent Ruby is a dependency of Action Cable, and in Rails 4 and 5 it is used by Sprockets. It is also used by gems like Sidekiq, Sucker Punch, the Elasticsearch and Logstash utilities, and some of Microsoft's Ruby tooling. Much of what I am going to be talking about today draws on my experience from that, but this is not going to be a sales pitch for it. This is going to be about ActiveJob and Rails itself. Because this is the Beyond the Magic track, this is not going to be an introductory topic. This is going to be a deep dive into the internals of ActiveJob. I have had to make a couple of assumptions in doing this. I am assuming that if you are here, you have used ActiveJob, probably in production, you have used one of the supported job processors, and you have some understanding of concurrency and parallelism. If you need a better introduction to ActiveJob itself, I highly recommend the Rails guides. The Rails guides are excellent at this and provide a lot of great information. If you need an introduction to concurrency within Ruby itself, shameless plug: I gave a presentation last fall at RubyConf called Everything You Know About the GIL is Wrong. That video is available on YouTube and could be an introduction to that. So with that, let's jump into what ActiveJob is. In order to get into the internals of this, I need to briefly remind us of what it is and where it came from. According to the Rails guides, the definition is this: ActiveJob is a framework for declaring jobs and making them run on a variety of queuing back ends. These jobs can be everything from regularly scheduled cleanups to billing charges to mailings, anything that can be chopped up into small units of work and run in parallel. There are a couple of key terms there. It's a framework. We're going to talk more about this, but asynchronous job processing pre-existed ActiveJob. There were things like Backburner, Delayed Job, Que, Resque, Sidekiq, Sneakers, and Sucker Punch. Many of these things existed before ActiveJob was created, and ActiveJob came along as a way of unifying them. ActiveJob helps us schedule tasks to be run later. That was mentioned briefly this morning in the keynote: when you don't want to block the currently running web request and you want something to happen later, you use ActiveJob in order to make that happen.
And that can happen through what we call ASAP processing, which is where you get to this as soon as you can or by scheduling it at a later date and time, potentially. And this is also good to support full parallelism. Some of the job processors are multi-threaded. Many of them, however, actually are forked. I'll talk about more and can run multiple processes on machine and scale across multiple processors. And in some cases across multiple machines. So the impetus for ActiveJob is that background job processors exist to solve a problem. We have these long-running tasks that we don't want to block the web request. So we want to be able to send a response back to the user and get the page rendered for them. And some of these tasks then occur after that. So for example, if I'm sending an email, this email takes time. It's a sin to begin with. Why should I block the web request to make sure that email posts when I can send the response back and have that post shortly thereafter? So ActiveJob supports that. And the processors behind that support that. So like I said, ActiveJob, this is going to be important when we get into the internals, ActiveJob came later. And there are all of these job processors. Each one was unique. And each one, they all did virtually the same thing. They had slightly different capabilities. And when it got to a different late, they all saw the same problem. Right. So ActiveJob was created to provide a common abstraction layer over those processors that allow the Rails developer to not worry about the specific implementation. Right. This sounds familiar. This is not dissimilar to what ActiveRecord does. All right. Relational databases existed. ActiveRecord created an abstraction layer over that that allows us to run, use different databases to fairly switch between different databases if necessary. Most importantly, run different databases and test, prod, and dev. Right. ActiveJob does the same thing. It allows the abstraction layer that will allow us to choose different processors, change different processors as our needs occur, and run different processors and test, development, and production. Right. And so ActiveJob had to do that while supporting the existing tools that people were already using. So, according to RailsGuides, is picking your key back end becomes more of an operational concern. Use the developer, don't care which back end you use is being used. You simply write the jobs and let the application and then use whichever back end in whatever environment makes the most sense. So, because we're looking at some code, I want to real briefly remind us what the code looks like for ActiveJob before we jump into the internal. So, this is a simple job class. This should look familiar to everybody. The important things are that this class extends ActiveJob base and that it has a method called perform. Right. Most of what ActiveJob does is encapsulate it in the ActiveJob base class, which goes and will eventually, as we look through the details, call this perform method on your, on object of this class when the job actually runs. And look at those details. Right. And as a reminder, the way we configure our back end is we use this JobQ, ActiveJobQAdapter configuration option within our application RB. Now, InsideJob is what I'm going to call the adapter we're going to build here. We're actually going to build in here a real adapter that is functional. Right. 
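The slides aren't in this transcript, but the job class and call sites being described follow the standard ActiveJob shape, something like the sketch below; the class name, the mailer, and the :inside_job symbol are placeholders for illustration.

    # A job class: inherit from ActiveJob::Base and define #perform.
    class WelcomeEmailJob < ActiveJob::Base
      queue_as :default

      # ActiveJob::Base eventually calls this with the arguments given to perform_later.
      def perform(user_id)
        UserMailer.welcome(user_id).deliver_now
      end
    end

    # config/application.rb -- pick the backend by its symbol.
    config.active_job.queue_adapter = :inside_job

    # Enqueue for ASAP processing...
    WelcomeEmailJob.perform_later(user.id)

    # ...or schedule it for later; Rails' time helpers work with set.
    WelcomeEmailJob.set(wait: 1.week).perform_later(user.id)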
So, the, all of the adapters that are supported by Rails have a symbol that follows normal Rails inflections that maps the adapter name to what you set the convict value for. So, if InsideJob existed as a supported adapter in Rails, this would be how you would set that. Right. Then, that's how you configure which back end you want to use. And then later when you want to actually do something later, you call the perform later method on your class, passing it one or more parameters. Okay. And that should look familiar to everybody. And if you want to schedule the job for a certain time, then you can use the set function to specify when and there's a number of different ways you can do that. So, that's just a reminder of what we see on the front of ActiveJob. All of that should look familiar to everybody. We're going to talk about is what goes on behind that when you make this perform later call. Okay. So, I guess we're going to build a asynchronous back end here, right up here, during this presentation. One that actually works and is functional and will meet a, it's minimal, but it will meet the requirements of ActiveJob and show us how this works. So, a couple of things just to give a sense of where we're coming from. Like I mentioned, there are multi-threaded adapters and there are forked adapters. Multi-threaded adapters run your job in the same process as the Rails app itself. Okay. There are a couple that do that. The advantage of that is those can be very fast and you don't have to spawn separate processes and manage separate processes. All right. We all know that MRI Ruby does have some constraints with concern to concurrency, but it's not as bad as most people think. That's what I talked about at RailsConf last fall. And since most, MRI Ruby is very good at multi-threaded operations when you're doing blocking I.O. And most of the tasks we're going to be making these background jobs for are doing blocking I.O. They're sending emails. They're posting things to other APIs. And so, since they tend to do blocking I.O., they tend to work very well with Ruby's concurrency model. So, a threaded back end is simpler because we don't have to manage separate processes. Many of them, however, do use or they do spawn fork separate processes where you have to run separate worker processes. Those give you full parallelism, but they require active management of those processes. So, for what we're going to build here, we're just going to do a multi-threaded one because I can do that very easily. And it will demonstrate all the things we're going to do. And we're going to use thread pools for that. Most job processors will also persist the job data into some sort of data store. Redis is very popular for this. The reason for doing that is that if your Rails process exits, either on purpose or by crashing, if all of your job data is in memory, you're going to lose it and those jobs will never run. So generally speaking, for production, you want to have a job processor that does store the job data in some sort of external data store to allow it to persist beyond restarts. We're not going to do that here mainly because in simplicity, I want to demonstrate what goes on in active job. We don't have to go to that level of effort. So, our job processor will not persist through a data store. So it makes it good for testing and development. But not necessarily we wouldn't use what I'm going to build here today for production. So in order to do this, we're going to need three pieces. 
The first one is active job core. This is provided by active job itself and it is the job metadata. I want to talk about this more, but it is the thing that defines the job that you need to perform later on. It is probably the, I'd say the most important piece of all of this because it's the glue that binds everything else together. The two pieces we're going to build today are the QAdapter and the Job Runner. Remember, active job came about after the Job Runners. So the Job Runner is independent and it provides the asynchronous behavior. The Job Runner actually exists as a separate thing. Sidekick is a separate thing. Sucker Punch is a separate thing. You install those separately. The QAdapter has the, its only responsibility is to marshal the job data into the asynchronous job processor. So the job processor provides the asynchronous behavior and the QAdapter marshals between your Rails app and that job processor. And those are the two pieces we're going to build here today, the QAdapter and the Job Runner. For all of the Job Runners supported by Rails, the QAdapter is actually in the Rails code base. Okay. If you go into GitHub, go into the Rails code base and look at active job, you'll see that there's a folder of QAdapters and there's one QAdapter in there for each of the processors that Rails supports. There's also a set of unit tests as part of the Rails code base that will run against every one of these job processors on every commit. And they ensure that all of the supported job processors meet the minimum requirements of active job. The one we're going to build today actually will pass that test suite. And once it's in the job, it will pass that test suite. So strictly speaking, the Rails core team has responsibility for the QAdapters and for that test suite. But knowing from experience, the people who create the Job Runners themselves work very closely with Rails to make sure that those adapters are up to date and work well with the processors. So let's stop then and talk about the active job core class. Like you said, this is the glue that ties it all together. It's not obvious. So this is the job metadata. It is an object that represents all of the information about the job you've posed. It carries with it the proc that needs to be run. It carries with it things like the Q and the priority was stopped in a minute. And it carries with it all of that metadata. It provides two very important methods which I'll talk more in a minute. But they are the serialized and deserialized methods. These are very, very critical. I'll talk about them in a minute. The job metadata itself, there are several attributes on this object which we will look at and use internally within active job. These are not things that you as a Rails developer have to know about, but these are things that when turned inside of active job are very important. One of them is the Q name. Most of us should be familiar with that. You can specify when you create a postman these jobs what Q should run against. And if you don't specify, it's the default Q. Priority, some job processes support prioritization where higher-party jobs run first. We're not going to support prioritization in ours. That's optional. But the priority would be attached to this as well. If you schedule a job to run at a specific time, you get an attribute called schedule that which tells you when. We'll look at that because we are going to do scheduled jobs. The job ID is internal to Rails. 
And there's a unique ID within the Rails instance itself that identifies each job. Rails uses that within active job to track each one of these. The provider job ID is one that you can provide within your job processor. So if we want to do it within our job process, if we want to have our own kind of ID system that made sense for us and worked, we can then attach it to the job metadata under the provider job ID. So Rails does not create that. We would create that ourselves. We're not going to use the provider job ID today because it's not essential, but it is available and it's something we would add. So let's actually build a QAdapter. We're going to go outside in. So like I said, the QAdapter is responsible for marshaling data into the job processor. The job processor is the more interesting piece. We'll look at that in a minute. But we're going to start with the QAdapter. And we're sort of going to pseudo-TDD this. The QAdapter, most of the QAdapters were written when active job was created because the job processors already existed. And they had to handle that marshaling. In our case, because we don't have a QAdapter, or excuse me, we don't have a processor yet, we can decide what the API is going to look like. So within our QAdapter, we only need two methods. It's very simple. One is NQ and the other is NQAt. The NQ method takes that job object we looked at a minute ago, and it marshals that into our processor. And the NQAt takes the job and a timestamp and marshals that into our job processor. So notice in this case, I've decided to make the API very simple. We're going to create a thing called inside job. We're going to have class methods NQ and NQAt. We're going to pass the serialized job. We're going to pass the QName. And in the case of the NQAt, we're going to pass the timestamp. So a couple of things to note. One, this is not very OO. These are class level methods that we're calling on this class. And I did that because I want to emphasize the stateless nature of this. This is very critical to understand. Active job is by its nature stateless. The state for your job is encapsulated in that job object. All of the metadata about the job, everything related to that job, all of your state is in that that we're passing through. The actual QAdapter itself is inherently stateless. Its job is just to... In your notice, we even call a class level method when we post the job. Because we're sending this thing to happen later on. It's a fire and forget. We're not creating anything that's going to be persisted. And in fact, any kind of stateful behavior here would be potentially thread unsafe. So we're just going to call these class methods and throw this data at it. And then we'll build those class methods in a minute. And that's all it really takes to build a QAdapter. Now, one thing that's very important here is the serialized method. And I have to go into this in a little detail. The reason why we call this serialized method is twofold. First off, and less importantly, is thread safety. Remember, Ruby is a shared memory language that has object references. So if we have maintained a reference to anything that was passed into that, and we hold onto that reference when this thing finally goes and gets processed later on, if it's processed in the same process on another thread, we run it a potentially not thread safe behavior. Now, the normal usage pattern makes that not really a big deal. 
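Based on that description, a sketch of the adapter might look like this; the InsideJob module it calls into is the processor built next, and job.serialize returns a plain hash of the job's metadata and arguments.

    # A hypothetical queue adapter, kept stateless with class-level methods.
    module ActiveJob
      module QueueAdapters
        class InsideJobAdapter
          class << self
            # perform_later lands here: pass the serialized job straight through.
            def enqueue(job)
              InsideJob.enqueue(job.serialize, queue: job.queue_name)
            end

            # Scheduled jobs also get the target time as an epoch timestamp.
            def enqueue_at(job, timestamp)
              InsideJob.enqueue_at(job.serialize, timestamp, queue: job.queue_name)
            end
          end
        end
      end
    end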
But if we serialize the job into a representation of that, we then let go of those references and make it thread safe. There's one important reason, though. And the most important reason is for consistency. Remember, we want to be able to work with multiple job processors in prod and dev and even test we want to be able to. So when those job processors are going to persist into a data store such as Redis, they must serialize somehow. Maybe this can't take a Ruby object and throw it into Redis. Or throw it into a traditional database, we have to serialize it somehow. That's sounding bad. So if every job processor created its own serialization method, we could potentially run into problems when we switch between these. We don't want to have hidden errors where we run this in tests and we run it in dev, and all the serialization works, and then we run it in production with a different processor and the serialization fails or does something different. So ActiveJob provides one common serialization routine method and one deserialization method so all of the job processors can choose to serialize the same way. And in so doing, that will allow make sure we reduce one potential set of errors when we move between job processors. So we are going to serialize here, even though, you know, this is the simplified version and we're not storing this in a data store. We want to do that serialization to make sure we get that consistency across processors. So internally, like we said, we need to do two things. We need to provide inside the QAdapter, I'm sorry, I moved on to the job processor now. So we have the QAdapter, now we need the job processor. The job processor's responsibility is to provide the asynchronous behavior and that asynchronous behavior is QDependent. So we want to have multiple Qs and have each Q process a different set of jobs. So for this, we're just going to use a, what we need is we need to be able to post jobs into different Qs and have them behave asynchronously. We're going to use a simple thread pool for this, right? Because within the context of this simplified application, a thread pool works fine. A thread pool has its own Q and therefore, by creating a separate thread pool for each Q, we do get a separate Q for these different jobs. We just have to map the thread pool to the Q name, which we'll see in a minute. And then, obviously, a thread pool has one or more threads and therefore provides asynchronous behavior. So we can very simply deal with these needs of the Q and asynchronous behavior by just creating a thread pool. Okay? So what we're going to do here is we're going to create the thread pool, but because this is all very multithreaded and therefore needs to be thread safe, not only are we creating these threads within our job processor itself, but because Rails can be run under multithreaded web servers, we need to go through a couple of hoops in order to get some thread safety here. So we're going to use a concurrent map class. This is similar to a Ruby hash and support similar APIs, but it has some additional behaviors. One, it's thread safe, but it also has some additional behaviors to make that work. Hopefully, most of you know that with Ruby's hash, when you create a new hash, you can pass a block to the initializer and that block will be called if the key does not exist. And that block will then be to initialize that key. 
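To make the default-block point concrete, here is a small sketch comparing a plain Hash with the thread-safe version; the lambda standing in for the pool-creation method is just for illustration.

    require "concurrent"

    create_thread_pool = -> { Concurrent::CachedThreadPool.new }

    # A plain Hash can lazily build a value the first time a key is asked for...
    pools = Hash.new { |hash, queue_name| hash[queue_name] = create_thread_pool.call }

    # ...but that read-then-write is not atomic under multiple threads.
    # Concurrent::Map#compute_if_absent gives the same lazy creation atomically.
    safe_pools = Concurrent::Map.new
    safe_pools.compute_if_absent("default") { create_thread_pool.call }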
So what we're going to do is, whenever we try to retrieve a thread pool from our map of queues, if it doesn't exist, we're going to create a new thread pool at that time. So we'll lazily create our thread pools as new queues are needed. We'll basically put this together and see one way that you might make this work. This compute_if_absent is just a necessary thing in order to provide the atomicity and synchronization that we need to create this new thread pool in a thread-safe manner. So there are some concurrency needs there, but the end result is that it's basically like creating a hash and passing a block in to the constructor. And then we're going to have a create thread pool method. In this case, we're just going to create a cached thread pool. A cached thread pool is the simplest kind of thread pool we can create, rather than getting into the details of all the different configurations we could do. Basically, a cached thread pool has an unlimited queue size. It will grow and add more threads as needed, and when threads become idle, it will shut them down and remove them. So over time we'll get an optimal number of threads, which for our simplified processor is fine. So I mentioned that we need an enqueue method inside our job processor. It's going to look like this. Basically, when we enqueue this job, we're going to simply post the job to the thread pool. And when the thread pool pulls it off, we're going to call ActiveJob::Base.execute. That's the important part right there, ActiveJob::Base.execute. The first line, the queue lookup and the post, that's just getting the thread pool, creating a new one if necessary, then posting the job to be run by the thread pool whenever the thread pool has an available thread. ActiveJob::Base.execute is responsible for actually interrogating the job, looking up our specific class to process that job, and then calling the perform method on an instance of that class and passing in the arguments. So early on we saw that in your class you create that perform method and it takes a set of arguments; ActiveJob::Base handles the interrogation of the job, creating an instance of that class and calling that method. All we need to do is call execute when our thread pool takes this and runs it later on. And that's all it takes. ActiveJob handles, like I said, the internals of that. And that right there is enough for us to actually post asynchronous jobs that are performed in an ASAP way in a real environment. Now, enqueueing for later is a little more complicated, because now we have to get into the time component. Fortunately, we do have at our disposal a high-level abstraction that handles these kinds of scheduled tasks. And, coincidentally, it's called ScheduledTask. The internals of ScheduledTask are sort of beyond the scope of this, but the idea is that a scheduled task will take a number of seconds in the future when something is supposed to occur, it will queue it up, and it will run it at roughly that time by passing it on to a thread pool. So when we actually use that set method with perform_later, Rails provides a lot of convenience things for allowing us to specify when the job happens in the future. Rails gives us all those great time helpers that we like, you know, one day from now, one week from now, at certain times, and so forth.
ActiveJob is responsible for taking all of those convenience things that we use as Rails developers and converting them into a number of seconds in the future when this should run. So by the time we get this, we already have the number of seconds in the future. In our job processor, we don't have to worry about all of those wonderful date utilities that Rails has; Rails does that for us. And in this case it's really convenient, because ScheduledTask, not coincidentally, takes just a delay, a number of seconds in the future when the thing should run. Now, normally within Concurrent Ruby, all of the high-level abstractions run on global thread pools, so you don't have to worry about managing your thread pools. In fact, most developers should never use a thread pool directly. Most libraries that provide thread pools provide them internally and provide high-level abstractions that use those thread pools. So under normal circumstances, a scheduled task or a future or a promise, any of these things, would use the global thread pool. But in this case, we need a specific thread pool, because that thread pool represents our queue. So all of the high-level abstractions in Concurrent Ruby support the dependency injection of a thread pool. In this case, this executor option, which is very common, is a way of saying: when you do run this thing, run it on this specific thread pool. So what we're doing here is saying, look, we know how many seconds in the future this thing needs to run, and we know which thread pool we want it to run on. Just go and handle that. ScheduledTask handles that. And at the time the thing needs to run, it'll grab that job and it will run this block. And we're going to do the same thing we did before: just call ActiveJob::Base.execute. That execute method doesn't know anything about the asynchronous behavior. It just knows now is the time to execute that. Same thing we saw a minute ago. And just in case we somehow get a time value that's not in the future, we're going to check that delay and post the job directly if it's not in the future. And again, that's all it takes to post the task for later on. Rails handles all the time-sensitive stuff. We just need to make sure that we can do it at that time in the future. And believe it or not, that in its entirety is a functional asynchronous job processor. So the next slide doesn't have a bunch more code on it, like you might normally expect. I'm putting this on one slide because I want you to see just how simple this can actually be, that in fact a real functioning asynchronous job processor can fit on one slide. And this is basically it. We have a class called InsideJob. We have our QUEUES constant, where we have this thread-safe map in which we keep track of all of our thread pools. We have that create thread pool method, which will just return the pool we want. Then we have our enqueue behavior, which just throws the job onto the thread pool. And then we have enqueue_at, which actually looks at that delay and that timestamp and gives it to a scheduled task. And that right there is actually a fully functioning asynchronous job processor that plugs in and can work with ActiveJob. And like I said, the other part was the queue adapter. And remember, the queue adapter just looks like this, right? It is simply: when ActiveJob calls enqueue or enqueue_at, post this thing off into my job processor. So that's it.
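The one-slide class itself isn't captured in the transcript; based on the walkthrough, a reconstruction might look roughly like this. The real code shipped as ActiveJob's async adapter and has since been refactored, so treat this as a sketch of the idea rather than the shipped source.

    require "concurrent"

    class InsideJob
      # One thread pool per queue name, created lazily and thread-safely.
      QUEUES = Concurrent::Map.new

      class << self
        # ASAP path: hand the serialized job data to the queue's thread pool.
        def enqueue(job_data, queue: "default")
          pool_for(queue).post { ActiveJob::Base.execute(job_data) }
        end

        # Scheduled path: Rails has already turned "1.week.from_now" into an
        # epoch timestamp, so we only need the delay in seconds.
        def enqueue_at(job_data, timestamp, queue: "default")
          delay = timestamp - Time.current.to_f
          if delay > 0
            Concurrent::ScheduledTask.execute(delay, executor: pool_for(queue)) do
              ActiveJob::Base.execute(job_data)
            end
          else
            enqueue(job_data, queue: queue)
          end
        end

        private

        def pool_for(queue)
          QUEUES.compute_if_absent(queue) { Concurrent::CachedThreadPool.new }
        end
      end
    end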
So believe it or not, that is actually a fully functional asynchronous job processor that will work with active job and could be used in test or development in order to actually get asynchronous behavior without having to install registers under deep dependencies. So the next question you're probably going to ask is, all right, Jerry, are you going to put this code up online so we can look at it like this? And the answer is yes. If you want to see this code, you can find it in a very convenient place and that's Rails. The genesis of this presentation was that last fall I went to the Rails team and said, you know, it would be really useful if we had a simple asynchronous job processor in Rails 5. As you all know, we can in our config specify the inline adapter. The inline adapter will go and it will run the job synchronously so we don't have to deal with those underneath dependencies. But the problem with that is it's not real asynchronous behavior. And if we're using the inline adapter in test or development, we can sometimes mask problems by not having real asynchronous behavior. They said, why don't we just build a simple one and we'll call it async job. We'll make the, instead of inside job, it'll be async job, we'll make the symbol just async. And why don't we allow people in test and dev to run these jobs really asynchronously in order to potentially find bugs in them. And the Rails team said, that's a really good idea. And they worked with me and we got this merged into Rails 5 last fall. So if you use Rails 5 and you use the async processor, this is basically what you're going to have. This code was lifted almost line by line from the original implementation of that. Now, since then, the Rails team has done some refactoring on that. So if you go and look at the implementation now, it'll look a little bit differently. So just to give you some context of what you see different, they decided to collapse things into one file. When I originally wrote this, I had two files, one for the QAdapter and one for the job processor to sort of mirror that normal behavior you would have of the QAdapter and the processor being separate. They collapsed them into one file because they're very short and didn't need to be two. They renamed some stuff to go along with better Rails conventions. They are assigning that provider job ID, again, in this case, we're not really needing it, but having that does, again, provide for greater consistency with the production ones. And they decided to throw one thread pool for everything and it's fenced with having multiple queues. Because, again, in test, all we care that these things happen asynchronously. We don't particularly care about configuring the queues for various different behaviors. So if you go look at it now, you'll still see async job and it will do exactly what we showed before and it's right now available in Rails 5. So if I've piqued your interest and you want to learn more about this and see other things that you can do with this, the two things I would suggest you look at more deeply are Sucker Punch and Sidekick. Sucker Punch is a threaded in-memory async, just a job processor. It does a lot of what this does, but does it way better and more fully. The creator of this tells me that the main use case is if you want to send emails from a hosting provider, a one-click hosting provider like Heroku, you can fire off these emails because there's not a lot of high cost of failure if that thing goes down. 
And so those jobs are just retained in memory, not persisted through Datastore. But Sucker Punch does use thread pools just like this does. It does map those queue names to thread pools and provides some configuration of those. And also does some really cool things where it decorates every job with a job runner class that does certain cool things like track the number of successful jobs, track the number of failed jobs, handle errors, and do things in this nature. So it's a really good example of how you can decorate a job when you push it into a thread pool and do some really cool things. Like I said, for most of us, we shouldn't use thread pools directly. The high level abstractions in the concurrency libraries provide those capabilities. But this is a really good example of how you can do that. And also, Sucker Punch does some really cool shutdown behavior where if the Rails app is shut down for some reason, it will look at the number of the jobs that are still running and try and allow the jobs to execute completely before shutting down and some other things. So there's some cool stuff in there. Sucker Punch uses a lot of the tools that we saw in here. It uses concur Ruby thread pools. It uses concur Ruby's scheduled task. Another great one is, of course, Sidekick. Sidekick is also an in-memory, I'm assuming, it's also a multi-threaded job processor. It does not use, it does persist all your stuff to a data store so that way your job data will persist beyond a restart of the application. It does not use thread pools the way we saw here. Sidekick actually spawns its own threads and manages its own threads, but it still deals with all those same things with the internals of an active job. And of course, Sidekick has a whole bunch of additional features. Like I said, Sidekick doesn't use concur Ruby's thread pools, but it does use concur Ruby for some of the low level synchronization and atomicity stuff, thread safety stuff you saw here. So there's two great examples. If you want to look at this further, then go look at those code bases and see beyond what we've done here. So with that, I just want to say, again, I work for TestDouble. We are hiring, and we are also for hire. So if you, we love talking to people about software development and about software and about how we can all improve software. So if you'd love to chat with us, by all means, reach out to us. You can find us on email, social media. Myself and Justin will be here for the rest of the conference. Like Justin will be speaking on Thursday in the afternoon at 3.30 in, he's going to be talking about RSpec in Rails 5. So I've got stickers up here. I've got stickers in my bag. I hope you had a chance to talk with you sometime before the conference is over. And with that, again, my name is Jerry DeAntonio. Thank you for having me. So I do have five minutes if anybody has questions I see or don't want them to run out. That's cool. I'm hungry too. So the question was resource contention within the job itself. If you have multiple threads running simultaneously and trying to do things, all of the asynchronous behaviors provided by the job processor itself. So all active job does is provide the compatibility layer. It's important that the job processors themselves handle all of the concurrency, any kind of locking or synchronization that is necessary. But generally speaking, if you follow the best practices, a lot of that contention goes away. So for example, you're not passing an active record object. I did it. 
You're passing an ID, which you can then use to pull that up later on. We're serializing the jobs so that we're not storing references. But ultimately it is up to the job processor itself to be thread safe. So the question was, would you be able to use multiple job processors simultaneously? And the answer is yes, but not through Active Job. Active Job only allows you to specify one handler. However — as far as I know all, I'll say most, but as far as I know it's all — of the job processors can be used outside the context of Active Job. So for example, you might specify Sidekiq as your main job processor, but say for certain things you want to use Sucker Punch, you would then just instantiate Sucker Punch directly. And so you can do it that way. And I don't know, I can't imagine why Rails would change that. But again, it's very possible. Yeah. So the question was, could we subclass Active Job and have two different runners? I guess, again, it's Ruby. We could probably do anything we want. But there's that one configuration value within the application config. I guess we could specify — of course, we can create our own configuration values. We could create some new ones, grab those and do something of that nature. I'm sure it would be possible, but it's not something that would be directly or easily supported by Rails in Active Job itself. So multi-threading is just when you have multiple threads, right? A thread pool — so the question was, the difference between multi-threading in general and a thread pool. A thread pool is a managed thing where the queue and all of the threads are managed by the object itself. So one of the things, like I said, I could spawn my own threads just by calling Thread.new, right? What happens if those crash? What happens if I want new threads? What happens if I have idle threads? How do I enqueue things onto them? There's a lot of plumbing involved in that. We can always spawn multiple threads, but in order to manage that, there's a lot of extra stuff. How do we handle exceptions? If you throw an exception on a thread, it will crash the thread. How do you handle that? So a thread pool takes all of that, puts it in one object with some very well-known, very common cross-language algorithms, and manages those things. So you create a thread pool, you give it a set of configuration parameters — things like how many threads to run at a minimum, how many at maximum, how many things you can enqueue, what to do if the queue gets full, what to do if it can't create more threads (the operating system won't give you more threads than it can support) — and it handles all of that for you. So all you do is just create this one abstraction, the thread pool, and you throw stuff at it, and it manages all of that enqueuing and dequeuing. If threads crash, it will handle that, and so forth. So there is some overhead in the thread pool itself because of all of that. But just like anything else, that overhead comes at the value of making you not worry about those things. So generally speaking, you start with high-level abstractions that use the thread pools, so you don't have to worry about that. Then maybe later on, you specify your own thread pools and inject them and take better control. And then maybe if you're this guy over there, you just write your own threading yourself. But that's sort of the progression. And that is a fantastic question — the question is, how does it handle exceptions? What we did here does not handle exceptions very well at all.
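Here is a sketch of the kind of configuration a managed pool takes care of, using concurrent-ruby's ThreadPoolExecutor; the parameter values are arbitrary and do_some_work stands in for your own code:

    require "concurrent"

    pool = Concurrent::ThreadPoolExecutor.new(
      min_threads:     2,             # keep at least this many threads alive
      max_threads:     8,             # never spawn more than this many
      max_queue:       100,           # bounded queue of pending work
      fallback_policy: :caller_runs   # what to do when the queue is full
    )

    # Enqueue work; the pool handles thread creation, idling, and crashes.
    pool.post { do_some_work }

    pool.shutdown                     # stop accepting new work...
    pool.wait_for_termination(10)     # ...and give in-flight work time to finish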
The thread pool itself will protect itself from any kind of exception on the thread. The thread pool will not allow its threads to die because of exceptions, but it doesn't do much with them beyond that. Again, this is one of the reasons to use a high-level abstraction: if you use a future or a promise or an agent, those things have consistent, idiomatic ways of handling things like return values and errors and so forth. So if you look at Sucker Punch, the job decorator class in there actually handles the exception on the thread itself, before it bubbles up, and then does things with it — it's a nice little bit of magic. So again, the high-level abstractions are the ones where people provide you with better error handling and consistent, idiomatic ways of dealing with return values and waiting on things and so forth. And again, that's why you should always start with high-level abstractions and then only later inject the thread pool. The thread pool is meant to be the very lowest level in that and just provides sort of the engine. So like I said, the actual job processor itself will handle things like errors. Whichever one of those supported job processors you use, they are all doing the error handling in this case. Because this is the bare-bones minimum, I'm not handling errors at all. Your job is just going to die and you'll not know about it. But again, this is meant to be minimal and trivial. If you use any one of the full-blown, production-ready job processors, they will handle the decoration of that job and they will handle the errors, and they will have their own way of doing that. One thing that Active Job does not include is a consistent way of handling errors. So of course, you could always put your own error handling in your perform method and handle it there, but Active Job doesn't really do that for you. I wouldn't write one. I mean, in terms of production, the ones that are out there are fantastic. They're very mature. They've been used a lot. The reason for this one was to put it in Rails itself so that for development and testing, you can run your tasks asynchronously and get a better understanding of how they're going to work in production. One thing — I've talked to a lot of people. In fact, if you go back and look at that commit and you look at the actual discussion around that PR, it was actually DHH himself who said, hey, I really like this idea. I generally install Sucker Punch for dev and test. It would be nice if I could just do that within Rails and not have to have that extra dependency. Sucker Punch is great. I've worked with Brandon, the creator of Sucker Punch, and he's fantastic. But it's an extra dependency for just dev and test. Rails already does a good job of providing the inline adapter for dev and test, so it makes sense for Rails to provide that simple async one too. And so we just minimized what we actually need and put it in Rails itself. So now for dev and test, you can do that and get a better sense of what the real behavior in production might be. And how about the... I'm sorry? It's in Rails 5 now. So all you need to do in Rails 5 is just say :async in your config and it's there. Anything else? All right, thank you very much.
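As a footnote to the decoration and error-handling discussion above, here is a rough sketch — not Sucker Punch's actual code — of wrapping work posted to a concurrent-ruby pool so exceptions are caught and counted instead of silently disappearing; the CountingPool class is made up for illustration:

    require "concurrent"

    class CountingPool
      def initialize(threads: 4)
        @pool      = Concurrent::FixedThreadPool.new(threads)
        @successes = Concurrent::AtomicFixnum.new(0)
        @failures  = Concurrent::AtomicFixnum.new(0)
      end

      # Decorate every job with counting and error handling before it
      # reaches the pool.
      def post(&job)
        @pool.post do
          begin
            job.call
            @successes.increment
          rescue StandardError => e
            @failures.increment
            warn "job failed: #{e.class}: #{e.message}"
          end
        end
      end

      def stats
        { success: @successes.value, failure: @failures.value }
      end
    end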
|
ActiveJob made a huge impact when it landed in Rails 4.2. Most job processors support it and many developers use it. But few ever need to dig into the internals. How exactly does ActiveJob allow us to execute performant, thread-safe, asynchronous jobs in a language not known for concurrency? This talk will answer that question. We'll build our own asynchronous job processor from scratch, and along the way we'll take a deep dive into queues, job serialization, scheduled tasks, and Ruby's memory model.
|
10.5446/31533 (DOI)
|
My name is Nate Berkopec. Welcome to the month of May. I'm also known by the name of Grand Maester Nate. I'm an expert in Rails magic. I've forged my links at the great Citadel of Oldtown. Just kidding. Rails is not actually magic. DHH does not actually sacrifice chickens in his backyard to make all of the components work. So we're going to talk a little bit about some of that magic today, being the different parts of the Rails framework and how they all fit together when you type rails new and get this huge magic set of folders and files. What are they all actually doing for you? And can we get rid of all of them and put a Rails application in a tweet? Again, my name is Nate Berkopec. You may have seen my blog online, a blog at nateberkopec.com. I also have a course called The Complete Guide to Rails Performance. Normally, talking about performance and Ruby's speed is kind of my shtick. That is at railsspeed.com, that course. We're not going to really talk about performance today. There are some performance benefits to this talk, which I'll talk about, but they're not really the most interesting part. I think what's interesting here is the underlying modularity of what Rails can do for you. But first, let's do a little word association. Let's do a little psychoanalysis on all our poor Rails developer minds here. When I say Rails, what do you think of? Do you think of bloated or lightweight? When you have a Rails application and it's 50,000 lines, does it feel a little bit like this? When you think of Rails, do you think well-architected or do you think of spaghetti code? I've heard actual controllers described as a crack den of inheritance. Do you think of object-oriented when you think of Rails? There's a Ruby web framework, formerly known as Lotus, now known as Hanami. I don't know if they still say this in their marketing, but they used to say that Lotus aims to bring back object-oriented programming to web development, which left me wondering where did it go? And who is bringing OOP back? When you think of Rails, do you think of modular or monolithic? There are many people on Stack Overflow who complain about CSRF not working, and the common advice is to just turn off CSRF protection. Surely that by itself is proof that people should only turn on this feature when they need it. Presumably whoever wrote this comment also probably thinks that SSL breaks things and so all of us should just not verify our SSL certificates. When you think of Rails, do you think of fast or do you think of slow? Does your Rails application sometimes feel like it's going at a bit of a leisurely pace? Y'all gonna make me lose my mind, up in here, up in here. Y'all gonna make me go... Oh wait, wait, what's that sound? Oh, it's a plug. Oh, I wrote a course about fast Rails applications. Oh, it's at railsspeed.com. All right, that's all, I'm gonna stop talking about it now. I'm gonna stop talking about it now. Railsspeed.com. So my thesis here is that although we have a lot of negative mental associations sometimes, I think, with Rails, I think it's actually a lightweight, well-architected and modular framework for creating speedy web applications, but it just doesn't advertise itself that way. And I think a lot of this comes from the way that our BDFL DHH talks about Rails. In his keynote last year at RailsConf 2015, he described Rails as a kind of prepper backpack for Doomsday. Like he wanted to be able to rebuild Basecamp by himself if all of the internet went down and he only had Rails left.
He wants Rails to be able to do everything and to be able to rebuild Basecamp with just everything that comes in Rails out of the box. And that's a very expansive vision. That's not like, oh, I'm gonna do a one-file Sinatra app and it's gonna be cute in blog posts. Like that sort of vision leads to a very different perception of a web framework than what other frameworks come to advertise as their simplicity or whatever like that. But that doesn't necessarily mean that Rails can't be simple or can't be lightweight. So what we're gonna do in this talk is we're gonna take the file structure and the boilerplate that you get from a Rails new command line and we're gonna basically just hack it all off until we get down to 140 characters. So the first thing we do after we type Rails new is we have to add a controller. Our app has to actually do something. And so I'm gonna add a Hello World controller which is gonna render Hello World in plain text. That's the only thing I'm gonna add to Rails new, right? And I got to add a route for this, obviously, and configure routes.rb, okay? Now, at this point we have 433 lines of code generated by Rails new and that's spread over 61 unique files. That is a lot. And so that includes the YAML configuration files generated, all the.rb files, the rackup file. It does not include blank lines or comments. So this is 433 lines that are actually doing things. So that's a lot to wrap your head around. Step one is to just delete all the empty folders and files. We get a lot of folders that are generated by Rails.new, or Rails new that just are empty and I have this.keep file to force git to put them into your source control. But these folders are otherwise empty and they're just there to serve as a placeholder. Rails doesn't need these things to boot up. It doesn't need empty folders to start an application. Which seems kind of obvious in retrospect, but a lot of people keep these folders around with no intent of ever using them. Some of them, the more interesting ones here that we're gonna delete entirely are the entire lib folder, the entire log and temp folders. Log and temp will just be recreated if Rails needs them. And the entire vendor folder, if we don't, because we're just doing a hello world application so we don't need assets or whether or not they come from our own app or from some vendor. We're also gonna delete all the sort of boilerplate empty files that don't do anything. A lot of these files, especially config initializers, a lot of these are just blank with comments. There's no actual code in these. They're just comments that tell you, hey, you should think about such and such decision. Go do that. And a lot of these like application job or application helper are just blank modules that don't do anything either. They're sign posts that say, hey, you should put such and such code here. They don't actually do anything functional. Same thing with the public directory. We can delete the entire public directory. The files like 500.html and 404.html, when Rails has a 500 response in the production environment and will try to show the user 500.html, if that file doesn't exist and you have an overwritten Rails' behavior, it will just render a blank 500 response, which for our little Hello World application is totally fine. So my point here is that empty does not equal worthless. I think this is actually one of the most important things that Rails' new does is create all these empty comment-filled files. 
Because what it does is it creates a common vocabulary. It says, okay, when I come to a new Rails application, I will go to the app models directory to find the domain model. Like I know that that's where most of the business logic is gonna happen, hopefully. I know that the controllers and all the HTTP related stuff is gonna be in app controllers. If I'm hacking away in the view and I see like a jQuery plugin, I know that that's probably gonna be in vendor assets javascripts. It just gives us this common starting place, which makes it so easy for any company that works on multiple Rails applications, or for consultants like myself, to go from Rails application to Rails application and say, all right, well, I know pretty much how this is organized. Because we have this default organization that we've all agreed upon by using rails new. We could hash out this problem every time we started a new application, but we don't. I didn't really realize how important this was until I started doing more work with JavaScript on the server side. It's just like a free-for-all. Some frameworks don't even tell you how to organize anything. This is especially true, it seems to me, in Electron applications, where everyone just sort of has their own folder structure and each app kind of does things their own different way. In Rails, we don't have that problem. We just all have agreed that, well, we're just gonna not bikeshed on this, and this is how we're gonna organize our applications. Step two is to delete the entire app folder. So some of these are kind of obvious. Our Hello World application doesn't need assets, so I can just delete application.js. Like action cable — I'm a bit of a cord cutter, so I don't use action cable. That was Tenderlove's joke, it wasn't mine. Don't need CSS, and there's more action cable stuff to delete. The controllers, we're gonna inline. So we're gonna move the actual controller-related stuff that matters into config application.rb, and we're gonna delete all the views and all the models, everything else in app. So the only part of app that we actually care about in a Hello World application is this part, right? The controller. So I'm eliminating application controller. That's just a common useful pattern to have. You should usually have an application controller that has common behavior between all of your controller classes. If you don't, that's fine. You don't need an application controller. And in a one controller application, we definitely don't. So I'm just putting it at the bottom of config application.rb. I think Xavier Noria's talk — he just talked before me in here — is gonna be a really good complement to this talk. If any of this stuff, because I'm going too fast here, leaves you going, well, I don't know why it's in config application.rb: he talks about the entire boot process, when this file gets loaded, and why it's important that I moved it here. So if it's really confusing, I would watch his talk when that gets posted. Step three is delete the entire bin folder. So this one's a little more obvious, I think. The bin folder is just full of a couple of useful things that make Rails development easier from a developer's perspective. Two of these files are just really simple, like best practices. They don't really do anything. And that's bin update and bin setup.
These are just example scripts that are just intended to say, like, hey, successful Rails applications generally have a setup script that sets up a development environment — it installs Redis and Postgres and all the weird things that your app needs. And so a new developer should just be able to run bin setup and then rails server and be done. That's just a best practice. Here's an example. So you don't need that file, you can just delete it. Same with bin update, same thing. Then the four other files here, bin spring, bin rake, bin rails, and bin bundle, are bin stubs. What are bin stubs? Bin stubs basically just wrap a gem executable in a little bit of environment setup. So usually it means setting up the load path with bundler or whatever. It might set a constant, like it might say, hey, my app lives in such and such a directory, now load the gem executable. And that's what actually gets run when we type rails server. It should be the bin stub that gets run. And if you actually look at bin rails and bin rake, you'll see they also use spring. So bin rails sets up a new application process with spring, then does what it would normally go and do anyway. We don't need these. What we can do instead, because all we wanna do is set up a little Hello World server, is to use config.ru directly and use a command called rackup, which comes with the rack gem. So rackup looks for a config.ru file in the current directory, and then it treats the contents of config.ru as the body of a block. So it's basically app equals Rack::Builder.new, the block's contents are your config.ru file, and then it calls .to_app on it. The config.ru file in a generated rails new application looks like this. All rackup files — that's .ru, as in rackup — end in run and then some rack-compatible application. If you don't know how rack apps work, I'm gonna get to that in a second. But all you need to know is we have to have it: we're gonna call rackup in the current directory instead of rails server, and it's gonna execute config.ru directly, which is basically what rails server was doing anyway, just with a lot of other fancy add-ins that we don't need for our little Hello World server. Step four is to start using only what we need of the actual functional components that remain at this point. At the top of everyone's config application.rb, you're gonna see require rails/all. And what that does is load up all the different frameworks in Rails. The truth is that I think many applications don't actually use all the components of Rails, especially API-only applications, or very simple ones like the one I'm talking about today, like a Hello World. If you actually look at the rails/all file, it looks like this. I think this is the entirety of it. I didn't cut anything out here. It literally just loops through all the different framework components of Rails, requires each of them, and then we're done. So all I've done here is taken that out of an array and done it line by line and commented out the parts that we're not gonna use. We're literally just gonna use Action Controller for this Hello World app. I don't need a database. I don't need ActionView, although ActionView gets loaded anyway. ActionMailer, I don't need it. Not gonna send any email. Don't need ActiveJob. Definitely don't need ActionCable. And I don't need Sprockets, because I'm an API server, so I don't need to serve any assets, or have anything to do with assets.
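A sketch of that cherry-picking, roughly as it might look at the top of config/application.rb for this Hello World app; the commented-out lines mirror what rails/all would otherwise pull in:

    require "rails"
    require "action_controller/railtie"
    # require "active_record/railtie"    # no database
    # require "action_view/railtie"      # loaded by action_controller anyway
    # require "action_mailer/railtie"    # no email
    # require "active_job/railtie"       # no background jobs
    # require "action_cable/engine"      # no websockets
    # require "sprockets/railtie"        # no asset pipeline
    # require "rails/test_unit/railtie"  # no bundled test framework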
So to give you a little history lesson on even why this is possible: a gentleman named Yehuda Katz, who I think is here — not in this room, I mean, he's around here at RailsConf. He's there — hey, Yehuda! So, wow, okay, so this is the second time I've talked about someone's framework in front of them. So in 2008, Merb merged with Rails. Yehuda sacrificed his web framework at the altar of Rails. And at the time, he had said, Rails will become more modular, starting with a Rails core, and including the ability to opt in or out of specific components. This was sort of like a hallmark of Merb. You couldn't do this stuff in Rails 2. It wasn't this modular structure that we have today, but that was the way Merb worked. So when we merged Merb into Rails to get to the Rails 3 release, that was a big part of the work that was done: extracting these different framework components. Now, some people didn't like this. Jeremy Ashkenas, the CoffeeScript author, had said on Twitter that all forward progress stalled for nearly two years. It's still slower than Rails 2. Bundler is a nightmare, Node.js won. Case closed. Thankfully, that doesn't seem to have been the case in the four years since he said that. But I think it is true that Rails 3 wasn't really like a feature release. It was a release for cleaning the internals, making things easier for plugin authors. And part of why I'm giving this talk is I think a lot of the work that went into Rails 3 and this modularity that's been given to us isn't used enough, or at least we're not aware of it enough as Rails developers. So, know this stuff so that Yehuda's work was not in vain. So, the other thing we're gonna delete here is our Gemfile. The only thing we need in our Hello World application is just gem rails. So we're just gonna get rid of the Gemfile. Eventually, to get down to tweet size, we're actually gonna have to get rid of bundler, which is annoying, but it does technically work. It's important to know that Rails is very conscientious, I think, of what goes in the Gemfile and what goes into the Rails framework itself. And everything in the Gemfile is very much a suggestion. None of it is required to get a Rails server working. None of it is official — I guess Turbolinks is officially sanctioned, right, because it's in the Rails organization or whatever. But none of it is required. You can get rid of all of it if you want. One area that you kinda saw this happen was with action cable. Someone suggested during the development and the merge, like, why don't we just make action cable a gem and then put it in the default Gemfile. But DHH said, I think action cable and WebSockets are so important that they need to be in the main framework. It's too important to be just in the Gemfile. So there's a definite decision being made here about what goes in the framework and what goes in the Gemfile. And so when you look at that Gemfile, don't think that anything in there is something that you have to be using. We're also gonna dump all of the config files that we don't use anymore. I just got rid of action cable by not loading it, instead of loading all of Rails with rails/all. We're not using ActiveRecord anymore, so I can delete config/database.yml. We're not using Puma, so I can get rid of puma.rb, and I'm not using Spring, so I can get rid of spring.rb.
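For reference, the Gemfile at this point is a sketch of just two lines (before it, too, goes away):

    source "https://rubygems.org"
    gem "rails"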
Again, Xavier's talk is really important here. He talks about why there are four different files for all this stuff and why we don't inline them all into one. So we have boot.rb, environment.rb, application.rb, and production.rb, and we're just gonna put all of these in the rackup file. So they're all gonna go into config.ru. And once you do that, it looks like this. So this is what I consider the smallest practical Rails application. And we're gonna get to the stupid tweet-length one in a minute. So let's walk through this one line by line, because I think this is important to understand. We're gonna require the railtie for ActionController. We require the railtie and not ActionController directly, because there's some stuff that happens in the railtie that we wanna make sure actually gets run. We're gonna define an application that inherits from Rails::Application. This part right here is exactly the same as it is in application.rb. We have to define a secret key. I got rid of secrets.yml, right? So I have to define my secret key as a config point. Because this is a toy application, I'm just gonna set it to some meaningless string. You obviously have to do that for real in a real application. And then I've inlined config routes.rb inside of my application definition. So normally at the top of config routes.rb, I think it says Rails.application.routes.draw. We're just doing that here inside of the application itself. Finally, I've got the controller at the bottom of the file here. And I have to call initialize! on the Rails application. This runs all the initializers and all the hooks in the railties. And finally, like I said, every rackup file is gonna end in run and some rack-compatible application. So this is the smallest practical Rails application for all intents and purposes. Yeah, listen to that, Bill. Oh, take a look at that. Oh my God! Woo! Listen to that horn! It's beautiful, I know. It really is a work of art. But there's a lot happening here. Even though this is like a 10 line Rails application, even if you just required rails/all at the top, you're running dozens of initializers. Tons of features are still in this Rails application, even though it's 10 or 12 lines or whatever. Where I think you can see this the most is in the middleware stack. So there is a stack of rack middleware that each request passes through before it even gets to your router. There was a talk this morning about rack middleware and more about how that works. All you gotta know is basically there are things that wrap your application and that are like filters for the request. So first, the request goes to this middleware and then that one, then that one, then that one, so on and so forth. And at the very bottom here is where we call the router. Or it gets passed off to the router. And all these middleware do something. So as some examples here: action dispatch request ID adds a unique ID header to every request. Rack runtime adds a runtime header that says this request took X amount of time. Remote IP protects you from IP spoofing attacks. We have cookies and session store. We have the flash middleware that runs the flash hash. And then we have some middleware at the end for HTTP caching. So all these middleware are all doing things and you didn't have to configure any of them. They're just the default Rails middleware stack. Some of these can be eliminated now in Rails 5. We have this new config point called api_only.
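Reconstructed as a sketch, the single-file application walked through above might look like this; the HelloApp and HelloController names are mine, and the slide may have differed slightly (for instance, the talk drew the routes inside the application class rather than after initialize!):

    # config.ru
    require "action_controller/railtie"

    class HelloApp < Rails::Application
      config.secret_key_base = "some meaningless string"   # toy app only
      config.eager_load      = false                       # explicit, to avoid a boot-time warning
    end

    class HelloController < ActionController::Base
      def index
        render plain: "Hello, world!"
      end
    end

    HelloApp.initialize!          # runs the railties' initializers

    HelloApp.routes.draw do       # the inlined config/routes.rb
      get "/", to: "hello#index"
    end

    run HelloApp                  # every rackup file ends in run <rack app>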
If you set api_only to true, it'll remove the session-based middleware down here. I think it removes some more. It's sort of similar to when you call rails new dash dash api — one of the things that's gonna do is set api_only equal to true. You can also delete middleware on your own. So if all this boilerplate doesn't appeal to you, you can just delete each middleware that your application doesn't use on its own. Now why would you do that? Middleware are not free. Each one of these is gonna cost some time. It's not zero. And some of them are gonna use some slower features of Ruby like block.call. And when there's 20 of them here, that does start to add up. And I'm gonna talk a little bit more about that in a second. Here's another hidden Rails modularity feature as we're trying to strip down our app even further. There's a little class called action controller metal, which is pretty cool. Action controller base inherits from this class, action controller metal. And all action controller base is, is action controller metal plus a ton of modules — like 50 modules, like a ton of modules. These are just some of them, like cookies, strong parameters, force SSL, et cetera, et cetera. If all these things don't really mean anything to you, that's okay. And it's kind of intended. But if you do feel like you wanna dig into how action controller base works a little bit more, if the idea of a thin controller appeals to you, you can inherit directly from action controller metal and build your own controller. Even the render method is actually factored out into its own module. So if you inherit from action controller metal, basically the only thing you can do is work with the rack response directly and say, okay, the response body is this. You can't even call render at this point. So this isn't 100% equivalent to our Hello World application, because this won't set the correct content type. But you can basically start here and then just start including all the different action controller modules in your controller. And here's another part where I blow your mind. All controllers are actually just rack apps. So this is our controller. And we can just run that controller in our rackup file as if it was a complete application. The .action method takes a symbol, which corresponds to an action, and we can run the .call method on the object that's returned by this. And it's just a rack application. Okay, so now we can talk about rack applications. Rack sits between Rails and your web server. Rack is like the common language that web servers and Rails speak. This is what allows you to use Puma, WEBrick, Thin, whatever, without having to change any of your code. Because Rails just speaks rack, and the web server bindings all just speak rack as well, so that they have like a common interface, right? And that interface is a rack application. A rack application is just an object that responds to call and returns an array of three values. The first one is the HTTP status code. The second one is a hash — this slide is wrong, it's not an array — the headers hash. And the third one is the body, which has to respond to the each method.
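Two quick sketches of that idea (toy code, not the talk's slides):

    # The smallest possible rack application: an object -- here a lambda --
    # that responds to call and returns [status, headers, body]:
    hello = ->(env) { [200, { "Content-Type" => "text/plain" }, ["Hello from bare rack"]] }

    # In a config.ru you could run it directly...
    run hello

    # ...or run a controller action, since an action is itself a rack app:
    # run HelloController.action(:index)

    # ...or let the router dispatch to any rack-callable, even a lambda:
    # HelloApp.routes.draw do
    #   get "/ping", to: ->(env) { [200, { "Content-Type" => "text/plain" }, ["pong"]] }
    # end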
So usually the response bodies are gonna be arrays, because they have to respond to .each. So our controllers, just by themselves, are rack applications. And when the router, the Rails router, is basically working with your Rails application, it kind of doesn't know that each endpoint — when you call get hello, to hello#index — a Rails controller is basically functionally the same to it as any other rack application, which allows you to do some kind of cool things. You can send routes to Sinatra applications, because Sinatra applications are rack applications. As far as I know, Hanami slash Lotus is also a rack application, so you could just mount Hanami applications inside of Rails. You can also route directly to procs, which is kind of crazy, because they are rack applications that respond to call, et cetera, et cetera. We could use that in a minute. We could make our Hello World application even simpler by just routing directly to a proc, rather than even bothering with a controller. If that action controller metal and all that stuff sounds interesting to you, but you don't wanna start from zero, there's this cool method — and it's kind of hard to read, I'm sorry, it's kind of small — ActionController::Base.without_modules(...).each, and then you give it a block. You can basically start with action controller, and it'll give you a list of modules, minus the ones you don't want. So we're saying every module in action controller base, minus params wrapper and streaming: include all those. So you can kind of start from all the action controller base modules, instead of starting from zero and including them one at a time. Okay. Finally, when it comes to modularity, I think it's important to know that not all models need to be active record. You can put anything you want in the models folder. It doesn't have to inherit from active record base. Active record base is just a class that, like action controller metal, just includes or extends a bunch of other modules. That's literally it — if you look at base.rb, wherever it is in the Active Record code, it's not any more complicated than that. So whatever is in your models folder, it doesn't have to inherit from active record base. There's tons of cool stuff in active model, which can make your plain Ruby objects sort of walk and quack like an active record base object, but not actually have to do anything with persistence or a database. So check out active model for a way of modularizing your Rails applications even further. So like I said, I said I was gonna talk about the performance story, so here's what it is. If your Rails application is not doing anything meaningful, or it is an API-only application which is not using several of the main components of Rails, if you don't require all the parts of the Rails framework, you can save some memory. So if you don't require, for example, sprockets, I think you can save 10 megabytes of memory per process, which is not small potatoes. I mean, when you multiply that across four processes on a typical server, that adds up a little bit. And if you start getting rid of logging middleware, you can save a couple of milliseconds per request. So when I talked about config.middleware.delete — some of those, if you delete half a dozen of them, you can shave some milliseconds off your request time there. Not really that big of a gain, obviously.
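A sketch of the two trimming knobs just mentioned, assuming Rails 5 (middleware names vary a little between versions):

    class HelloApp < Rails::Application
      # inside the application class body, before initialize!
      config.api_only = true                        # drops cookie/session/flash middleware
      config.middleware.delete Rack::Runtime        # no X-Runtime header
      config.middleware.delete ActionDispatch::RequestId
    end

    # `rake middleware` (or `bin/rails middleware` in a full app) prints the resulting stack.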
I mean, most Rails servers are at 100 millisecond response times, right, so shaving six milliseconds, 10 milliseconds off is probably not a big deal to you — unless you're at GitHub scale and you have an average response time of 50 milliseconds, and so that's like 10%, awesome, great. I think the performance story here is really not as interesting as the modularity story and the code organization story. So in general, I just want you to realize that framework code is nowhere near as important as application code. Rails is not slow. Your application is slow. The way you use Rails is slow, but you're not starting from a five yard — what's the opposite of a head start? You're not starting at a negative here. You're starting at the same level that pretty much everyone else is, and it's the application, it's everything that comes after rails new, that makes an application slow. Okay, now the insanity, that's code golf. So this is the application as we left it last. This is the smallest practical Rails application that makes any sense, and here's the world's smallest Rails application. I'm gonna call it a shell command. Now here's what it actually is. So this is a shell command. We're gonna call rackup, we're gonna pass rackup a dash r option, which is gonna require a library. We're gonna require action_controller/railtie, and then the dash b option is basically saying, okay, this string is config.ru, it's the same thing. I think dash b actually stands for builder, as in rack builder, and then we're gonna give it a string that's like our config.ru. We're going to run an anonymous class which inherits from Rails application, and then this block here is the body of the class. We still have to set a secret key base. I don't know why, because I'm not using any secret key base-related things like sessions. Here's a little code golf trick. This question mark x is the same thing as quote, x, quote — it's just one character less. I think that's only there for historical reasons, because Ruby 1.8 had this weird thing where if you did question mark x, it returned a number, like it returned the ASCII code or whatever, and they couldn't get rid of that, so they made it return a string instead. So anyway, it doesn't matter, because we just want to set config secret key base to something that makes Rails shut up. This is very insecure, don't do this, but this is a toy application, so it's fine. And then we call .initialize! on it to run all the railties and set up the Rails application. And this application doesn't actually do anything. It's 404s as a service, because there's no routes. So all this Rails application can do is serve empty 404 responses, but it is a Rails application. Yeah! Yeah! Yeah! Yeah! And it fits in a tweet. You could make it quote-unquote useful by adding a routes.draw with a route to some proc here, but then you would just have an application that served 200 responses instead of 404s. So is this even practical, Nate? Well, no, not really. There's two practical applications I can think of for Rails applications that fit in a file — though probably not a tweet. Test suites for gems slash engines. I think plenty of gems need a way to test with a live Rails application. Sometimes people just do rails new inside of their test suite directory, which ends up creating this huge, 400 line monstrosity, which is longer than their entire test suite. You don't need to do that. You can just use our one file Rails application and it's gonna be functionally the same as a real Rails application.
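A rough reconstruction of that shell command — the original tweet's exact characters may differ, but the shape is this:

    rackup -r action_controller/railtie \
           -b 'run Class.new(Rails::Application) { config.secret_key_base = ?x }.initialize!'

The -b string is treated as the config.ru, the anonymous Rails::Application subclass gets a throwaway secret key, and initialize! returns a bootable rack app that answers every request with a 404 because no routes are drawn.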
So we use that one-file technique in the Raven Ruby gem, which I maintain. That's the Ruby client for the Sentry error notification service. And I think every Ruby gem that needs to test against a Rails application should be using a similar approach. And also for API-only applications: I'm not the kind of guy that does single-page applications and only uses my Rails server as a back end for my Angular app or whatever. But if you are, and you're not rendering HTML responses, maybe your Rails application doesn't have anything to do with assets and all the assets are handled by Nginx or whatever. There's a lot of the Rails framework that you can just not load. Sprockets and Active Record are the most important ones as far as memory goes. Not loading those parts of the Rails framework will save you a little bit of memory. And it will save you some headspace of not having to think about these components being in your global namespace. But the reality is that most applications need 80% of what Rails provides. When you see Sinatra — Sinatra's like require Sinatra, get, slash, do, and all that — and that's really nice. The reality is that an application that you could get paid to work on every day is gonna need about 80% or more of all these things that get provided for us in the Rails framework. So while it may not win any blog post beauty contests, I think the way that rails new is set up is actually the way that minimizes the work for the majority of applications. And so Rails is modular, you just maybe never needed it. So your homework today. Try not using rails/all in your Rails application. Try loading just the parts of the framework that you need. To actually test to see if this makes any difference to you, I suggest using derailed benchmarks. It has a little tool that will help you see how much memory you're using on startup. So when you're not requiring these parts of the framework, you wanna see that number go down. Its author, Richard Schneeman, is right here in the front row. And also, if you're thinking about deleting middleware to save some response time, you can use Apache bench — that's ab — to pound your application with tons of requests per second and see if it's actually improving those times. Consider action controller metal and active model. Just take a look at those classes in the Rails code base and see if there's any way that you can simplify your models or controllers — maybe you're using all these features that you don't actually need. And then maybe the next time you would reach for Sinatra, or Cuba, or some other simple Ruby framework, try just starting from a one file Rails application and see where that gets you instead. So this talk is available in the form of a GitHub repo. If I went fast through any parts and you wanna know, okay, how do I get from rails new to a tweet? It's available here in the form of a commit log. So if you go to nateberkopec slash tweet length on GitHub, you can follow the commit log line by line. I've written some really long commit messages about why I did certain things. It gives a little bit more of the background of it. This presentation and the slides are gonna be available here at nateberkopec slash rails lightweight stack. Actually, that's rails underscore lightweight underscore stack. Markdown — screwed it up there.
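For the homework above, the commands look roughly like this; the gem and tools are the ones named in the talk, and the numbers are arbitrary:

    # Gemfile
    gem "derailed_benchmarks", group: :development

    # shell: memory used at require time, and memory after booting the app
    bundle exec derailed bundle:mem
    bundle exec derailed exec perf:mem

    # shell: hammer the running app before/after deleting middleware
    ab -n 1000 -c 10 http://127.0.0.1:3000/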
In that GitHub repo, there's also a bunch of different one-file Rails applications and different practical uses, and different API-only uses for this kind of stuff. Lots of other resources there. Like I said, my course is The Complete Guide to Rails Performance. It's available at railsspeed.com. I've also tweeted the one-tweet Rails application from my Twitter, at nateberkopec, if you wanna retweet it there. So thank you very much for your time. Thank you.
|
Ever felt jealous of "other" Ruby web frameworks whose applications can be run from a single file? Did you know Rails can do this too? Starting from a fresh "rails new" install, we'll gradually delete and pare away to find the beautiful, light, modular and object-oriented web framework underneath. Eventually we'll end up with a tweet-size Rails application! We'll learn about Rails' initialization process and default "stack", and investigate the bare-bones required to get a Rails app up and running.
|
10.5446/31535 (DOI)
|
My name is John Arnold. Some of my team is running in late. It's great. Thank you for coming to hear me talk today. I want to thank RailsConf first of all for the opportunity to be here. This is my first RailsConf. It's been awesome. It's great to be back in Kansas City. I was born and raised here. Anybody else? Go Royals, et cetera. I'm from Kansas. I'm not a huge sports guy. I'm from Kansas City. I live in Nashville. It was great to come home for a visit. I got to see my mom because, you know, Mother's Day is this week. So that's important. Go moms. This is a good definition of how my week has been thus far. I'm from the barbecue. What we're here to talk about today is growing pains in a small company. That's what my talk is called. Or thinking big when you're small. Nobody knows growing pains? Growing Pains? The TV show? No. Okay. All right. Fine. Okay. So thinking big is fun. But in a small company, a small team, we need to organize our thoughts, our work, and our team to be effective today. Startup culture has put a really bad habit into a lot of us, especially those of us who aren't technical. We start to frame out this idea of what a great trajectory of a startup will be. We want to hire some code ninjas, some Jedi, some rock stars to come in. Really just some nerds at the end of the day to help us slam out an MVP. We'll just slap a little business on it. And before you know it, we'll be in the Unicorn Club sipping our sidecars and planning our second Mars base. By the way, doesn't Elon Musk just look like a supervillain right here? He's, uh, you're coming to Mars, kid. All right. So in reality, though, the teams that we build don't look like that. And people have talked a lot about hiring already, about how silly it is to call people that. There's great articles out there about how what we're really looking to hire are actually scientists and librarians. Ghostbusters? No? Nobody's seen Ghostbusters in this crowd? Okay, fine. All right. Scientists and librarians, all right, thanks. We don't really have this rocket-like trajectory. We have this startup curve that's been around for a while. You know, companies like ours — we're small. We're like 15 people. We're somewhere in this trough of sorrow. We have some false hope wiggles that, like, give us excitement. We hire people, then we get big. We do all the crazy stuff. And there's these crashes and all these things that start to happen. You know, Jeremy mentioned this in the keynote yesterday. But what we're actually building is not actually startups at the end of the day. We're building small businesses. We're building companies. And the latter half of this curve is not to get sold. Like, I hate whoever wrote that the end of it is a buyer. For me, it's not really to get sold. It's to build something that's going to be a sustainable team and a sustainable company. That does not mean that we hustle. No hustle. No side hustles. No weekend crushing. No blah, blah, blah, blah, blah. No. I don't want you to hero dev, Mike, our CTO. No more hero dev, no overnights, nothing like that. I want you to be a cultivator. I want you to garden. I want you to lay out standards and defend them and be boring with the standards. Lay them out and stick to them. So anyway, about our company a little bit. We're a multi-tenant SaaS company. We're built on Rails. We sell our product to global insurance companies. So big, gigantic multinational companies that are implemented all over the world, have terrible infrastructures, terrible technology, everything else.
Yeah, it's great. I love our clients. Yeah, hi. So we have a behavioral economics model that we use to incentivize insurance policy holders to change their life, hence Life.io. And we've done some things differently. The biggest of which being, like I said, we're about 15 people, but we're implemented with these massive, massive companies all around the world. Our dev team is not that 15, by the way. Our dev team is like five people. Yeah, so we're a really small team pushing out work that touches hundreds of thousands of people, even larger as we're continuing to scale. And we've learned that the product that we build is really just the software part is really just a portion of the overall product. The team and the promises that we make both to each other and to our clients is actually what matters. With limited resources and constraints, companies just like ours, we have to bloom where we're planted and seek opportunities where they come in order to maximize wins. We can't just go out and try to find something, close it. We have to make it work where we're at. So we've been around for about four years and have come from a very small app to a little bit bigger app, a little bit bigger team. Lots of things happening. So I'm going to talk a little bit about the tactics and the philosophies that we've used to get us those four years. The first and most important tactic is to steal. I am a certified stealing shit that works practitioner that badge indicates that. That is a real thing. You can get that on the Internet. That's great. So we use frameworks that help us instill clarity, vision, focus, value. These are all things that Jeremy mentioned in the keynote yesterday. We've stolen a ton of frameworks. We've thrown a bunch out. So these are just some that work well for us today. Jobs to be done. We took this from Thoughtbot and Intercom like this week released a book on this too. Basically the question is what job is the user hiring the product to do? What are we replacing in their life or making better by using software technology instead of something like paper? What motivation is happening for the user today? It's a really simple question to ask. We write about four or five of these for our users and about two or three for our clients. What do they hear for? What do they actually want? They're very simple words. They're very simple sentences. They're not these huge long stories or press releases or whatever. Just like what do they hear for? We also use this great framework called 666 which I call the Devils Framework. Not really. Oh, whoa, whoa. It was a devil. Yeah. Speaking of the devil, hey. All right. So really what this means, this is a roadmap process. This means six weeks, six months, and six years. Sales hates this. Six years. No client will ever buy six years. We got to buy tomorrow. What's it going to be tomorrow? We can't do six years. What does that mean? So six years though is really, man. Six years is really this. Six weeks. What are our current team actions? What are the things we're doing today? Six months. What are the priorities that we have that are directionalizing us? And then six years. What is the worldview like? What is it that we believe unwaveringly that is really defining the product? When you mix this framework with user interviews, feedback, team workshops, other people have talked about that already this week, it defines the opinions and it sharpens that worldview. 
Now what's cool is when you mix that with that jobs framework, oh man, you get something like this. Six weeks. We change with the user's ability. Then we start to change their behavior. And then six years is really a question of how have we changed the user's life as a result of using our products? And that right there, especially those latter two things, are what we can actually sell to a big client. They get excited about this. How do we actually fit into this picture? And what does this look like? How do we come along for the ride? And the other result of this too is every feature that we build has a roadmap like this. Every big piece, we know where it's going. It exists in the platform for a reason. And it has something that is an opinionated worldview at the end of the day that we have to define and defend. Otherwise we won't build it. We don't have time to build it. All right. So another thing we've stolen is a concept called Fun Day. I think this was at RubyConf last year. Basically, it's maintaining a list of nice to have items, technical debt, even though it's not always really that fun, and things that we know we need to do but maybe don't have time to do right now. What happens a lot in our team is we'll start down a new road. We'll start planning some requirements. We'll start building out something and we'll hit a block. We're small. We have to pause. We have to get design, which is contract right now. We don't have full time design, to come in and help us with these things. And so while those pieces are being built, those blocks are being cleared, our team dips back into these items for a day or two. It does something that they've enjoyed, that they want to get back into. And that balances new feature development, that balances client deliverables with technical debt. Everything we've talked about right now is important because fundamentally, I believe that every company needs a secret at its center, as you can see here in this diagram. Uber and Airbnb are canonical examples of this. They have taken a secret that they found in the world, that individuals have unused assets, cars and houses, empty rooms, and found that they can make money on them. They built upon that secret and actually built software that sat upon that and made something great. We have a secret, I don't want to tell you, we have a secret. I think everybody needs a secret like that that informs and defines their world view. All of that fits into a list of the priorities that we keep. So when it comes to actually choosing what to build, what to market, what to deliver, we talk about these five things. What do we believe in unwaveringly that we will never back down from? What are improvements? You can talk about 10X and all that kind of stuff. What are small improvements, big improvements we can make on things we've just put out? Prioritize user research and feedback, find things that are on vision from that feedback to put into the cycle, and then scaling growth, quality and stability. Those five things are how we prioritize our roadmap. Things we believe in sometimes involves things our clients believe in if they're paying us to do that. It depends. But the things that we unwaveringly believe in really, really ultimately guide our priorities. So a small company like ours, we also have to sell a lot. For us, when we're talking about selling to clients, it's really more like consulting someone who wants to buy your products. 
Andreessen Horowitz said it's harder to get a law passed in Congress than to sign a big software client. And I think that's absolutely true. You have corporate layers and approvals and legal, technology, infrastructure, procurement. You have all these people that you need to get on board. You have to pass their bars. You have to go through all the different pieces to actually build something for them. But like I said, it's really, at the end of the day, more about consulting. It's: what are your objectives? Let's define those together. We do client-facing workshops where we meet with prospects and actually plan out what their objectives are for using software like ours. And then after we know those objectives — we've kind of coaxed them out of them; a lot of times they don't know, we coax those out of them — we then show how we can use our software to help them. We use frameworks like strategic alignment and design thinking and all those sorts of things to really understand what their problem is and then show them how they fit into our solution for the future. Back to the whole six-week, six-month, six-year thing: we show them from that how they fit into the six-month and the six-year vision, and how they can have their own version of that too, moving forward. We also have to sell to our team. We need to sell that vision and remind them that what they're working on fits into the product. It's a part of the marketing, the overall experience, the engagement. The vision that we want to talk about, that we keep communicating — it gets to the point of being annoying to our team. We talk about it so much. It's like, I know, I understand, I know. We have to do that because it's very easy to get focused on one thing, get a different vision of it, and start to veer off course. So reminding them of how that one little feature, that one enhancement, that one fix is going to put another brick into the road for the future — that's always how we have to sell new work to our team. Another thing we do all the time is we are wrong. I'm wrong all the time. I have to talk to these guys. I have to talk to all sorts of people, state my opinions, state my research findings, state whatever it may be, and be wrong. So in terms of being wrong, it is incredibly important to choose your losses. There's a very simple framework for this to follow too. It's easy to choose losses with a client when they're going to pay you for the change, even if it's something you disagree with. That's very easy to change your mind on. It's harder when it's dealing with things the team is recommending, when it's talking about directions we should take with partners, integrations, all sorts of things like that. So how do you know what to be okay with losing at? The way I think about it is in terms of micro and macro. I've talked a little bit about vision — or probably a lot about vision so far. The grand story arc of your product, this kind of macro arc, that six-year arc of where we are going to be when this thing is all said and done — that is a place that you don't want to lose at. But in the micro, in the day-to-day transactions, in those smaller pieces, there's things in there that can be undefined. There's things in there that can change and that you can lose at. In the micro, those are places that you can experiment and fail. It's actually really great to make lots of mistakes here. You can fix those mistakes quickly. You can learn from them quickly. They don't actually impact much, even though they might feel huge in that moment.
However, when it comes to that large arc, those unwavering things that you believe in, those are things that you have to be resolute at. You have to be right at those. That comes from, again, user research, site research, all those learning pieces. But your own unwavering belief in what you're doing and where that vision is going to be, you have to be right on that, even if you give up on those small things. Another thing we do a lot is say no. We say no to our sales team a lot. We say no to our executives, to clients, to users, to the team. You have to say no more than you say yes. Jeremy mentioned this in the keynote yesterday. You need to build a mobile app. We have that same thing. Investors want us to build a mobile app. Our clients aren't paying for it yet. Our users, that would be nice, but they're probably not going to need it for a little while. We have to say no to that, even though it's probably going to get us more investment money. It's going to divert so many resources from us and take us off our focus that that short-term money is not worth that diversion. Other reasons to say no, we have so many good ideas. Every startup does. There's a zillion good ideas. There's things to put into the product. The team is really smart. Good ideas come out all the time. We see competitors doing things. Clients are asking for things. But you have to pick and choose. The way you pick and choose at this size is what's being paid for. What are you being asked to do that's actually going to keep the lights on? There's two kinds of payment that I'm talking about here. Yes, clients and users who are paying us to use the software. That's part of it. But we also have to go where the user's time is. That is a form of paying, too, time and attention. What's preventing a user from onboarding? What's preventing them from coming back? What's preventing them from loving the product? Those are the things that you should think of as being paid for because it's the user spending time with you. Those are things you should be focused on. Again, what supports the vision? You have to prioritize ideas and work only on those that fit those grand future visions that you have. Think in terms of systems. In order to get to this big feature, these other pieces have to sit in first. We have to put these next bricks in place first. Each of those steps, X to Y to Z, needs to fit into that macro story arc and needs to have a place when you write out that story. We also say yes. We say yes a little less frequently than we say no. But there are many things inside of those that we can enthusiastically say yes, too. These are the same list we just looked at, just looking at it from a different perspective. First, there's so many good ideas. Our team is so smart and they're right all the time and they're really thoughtful and considered and they come up with really great things to do that will help us achieve the goal. Yes, yes, yes to that. Again, what's being paid for? Even if it's something that we've disagreed with and the client's willing to pay for it, something that we want to add on later, new functionality that we have to change or do something else, if we said no to it a bunch of times and they still want it, make them pay for it, then do it. Then say yes. It's easy. Then again, what supports the vision? Thinking about systems is incredibly important for the progression of features in a system. It's incredibly important for what you're going to be building. 
Sometimes, especially to leadership, people who aren't on the team day in and day out, those little steps seem like detours. They seem like forks in the road that we shouldn't be going that way. We should clearly be going this way. But every one of those features gets to be expressed as because we have this, now we can go build that. That's the path we have to take. Another thing we do a lot is we change often. We change processes all the time. There's a couple of frameworks that we use here for this. A couple of mantras that I have that we really hold on to here. The first of which is happily dissatisfied. This is something I keep in hand when we think about the work. Was I happy with our last release? Yeah, I was pretty happy with it. It went pretty well. Was I satisfied with it? No. In no way was I satisfied. Was I happy with that last client interaction? Yeah, that went pretty well. But was I satisfied? No. That is a great place to keep yourself, to go, yeah, we did well, but we can always do better. Better is really the mantra word underneath everything. There's always something we can improve. There's always a system or a process that we can use to make things better. We'll talk about how to pick and choose those, how to fit those in with everything else we have going on. A couple of easy process things. Style guides, to me, are actually the product that we're building. Our team needs to focus on repeatable styles. That's on the rail side, the front end, the design, the content style guides. These minimize team frustration, cut out guesswork. We learned that the hard way. We're still learning it. The nice thing though is you can pull out sample style guides. On the content side, Slack has a style guide for how they talk to their users in the app. Take that, use it, tweak it to fit your own voice, you can start with that. It's a complete thought. It's done. Other style guides like that exist all over the place that you can take and modify to your own purposes. The style guides, like I said, those really are the product. The software that we build, that users and clients interact with, is really just the expression of that product. Focus on style guides. They're going to be way more repeatable, way more extensible than just one off request. What are ways you can make things repeatable and find ways to build features and build functionality inside of those style guides for the future? Rebecca talked about this before. She did a great job about this. Defining processes when you fail, when you're small, this is the main opportunity that you have to build a process. We're a small team, like I said. We don't do in terms of our product development process, we don't do story points, we don't do estimates, we don't even do structured sprints. We tried because I came in and was like, oh, let's do all this stuff and put on the tie and make it all fancy. It wasn't needed. It took so much time from our team that it was less focused on the actual work. It's what we call the work of the work as opposed to the actual work. Working on the work and doing the work are two different things. At this stage, we have to focus as much on the actual work as we possibly can. Put in place only the processes that you need to prevent failure. Extending on dry, don't repeat your mistakes. When you have a mistake, stop and talk about it and institute a process. There's a zillion processes out there for those things, but you have to step in the trap before you institute the process, otherwise you just waste time. 
Rebecca talked about this, but I want to talk about this too, postmortems. It's a great place to talk with your team when things go wrong or when they go well. This is probably the most structured thing that we do: a retrospective with postmortems. Running a good postmortem takes two things. Objective facts. Rebecca said some great stuff about this, so I won't go too much into that. Objective facts about the failure or the success. Statements that explain the story. It should be a really boring story listed out as a series of sentences. The client asked us to do X. We responded with this document. Then this happened. Then this happened. There shouldn't be adjectives. There shouldn't be lots of names. It should just be the facts and events that happened. Then the team, the people who are involved with the failure or the learning, do what's called a plus delta exercise. All this really says is, for each of those objective statements: plus, what went well about this? What are we going to do again about this? Delta, what needs to change for the future? How should we build something differently for this? This right here is where our processes come to life. We find those things that we need to add so that the mistakes don't happen again. This process takes maybe half an hour. We make few enough big mistakes that we can handle one of these every few weeks and it's no big deal. It sets in motion a lot of good processes and best practices. Things to add to our style guide, to our implementation process, to our deployment process that we make sure are going to happen for the future. Sound good? Feeling good? Okay. Other things, we grow slowly, slowly, grow slowly. It's very easy to get money and go, we need to hire a bunch of people. We're going to be awesome. We want to make a Mars base. Here we go. It's time to make our Mars base. No, you have to grow slowly and you might be able to extend or add to or augment your team and the work that you get done. But you shouldn't think, now we can turn this thing on and start churning. I'm saying this to myself a lot too because I want to grow fast and we have to be reminded to grow slowly. Seriously, more slowly than you want to. There's been some great talks about hiring. Eric and Joe, I think, gave some great talks about this this week. A couple of things that we've stumbled into and found worked really well. Hiring generalists or people who have switched careers is great as you start to grow. People who can do multiple things in the organization. People with history in other areas that are applicable to your team. We have a developer here who is an editor. That's really helpful for our content side of things. A developer who was in the healthcare world. We have kind of a health slant to our staff. Getting people who can help out when there's other things that need to happen. Our team is small. We don't have people who are focused on those things. Sorry, I'm making you nervous. This is another thoughtbot thing that I stole. T-shaped people. T-shaped people are people with a high level of specialty or experience in several areas that would be applicable to the work that we're doing. One deep specialty, obsession, passion, fascination, curiosity that goes deep in one area. I want to come in and I'm really interested in wearable devices. I really want to focus in on that. I did some project management. I did some of this, I did some of that in my past. That is a person that you want to look for who's going to take the team further as you continue to grow. 
More about growth. I always say it's better done okay than done great by somebody you have to lay off later. Grow slowly. Don't over hire. Focus on the roles that your team needs. Not people. People are expensive and you hurt them and they stay with you until they run out of things to do. Roles can be played by multiple people. You can have multiple hats today okay. Yes, it might be a little bumpy but using the failures and shortcomings of those roles will define the job description for the person that you need for the future. Does that make sense? Fail to build the thing for the future. Okay, last big thing. This is a great concept. There's a company called Infusionsoft. They're based out of Phoenix. I think they use PHP which you know whatever. They are awesome. They're a small business CRM. They have a gazillion clients and they started in a couple dudes garage like classic startup story. And they have this great wall. That's actually some doors in their office. And they call this their Everest mission. Basically what the doors were maybe you could see is it went mission, vision, their core values. What their purpose is as a company. And then there were summits each year that showed where they wanted to be as a company. The size, the number of clients, the things they wanted to do, the different milestones that they want to hit along the way. First off, awesome that it's that visible. This is the main doors to their conference room. Like their big all hands conference room. Everybody sees this at least once a week. Everybody is reminded of this. It is physically printed in the room so that people can see it. But second on either side of this thing are huge boring spreadsheets that they printed off that have a list of key objectives, key results, people's names and timelines for every single thing that was on the bottom parts of the door. And what's amazing, they printed this like four years ago and they've hit those numbers like dead on every single year. I can't say on the finance side or the business side about all those things. But in terms of achieving the growth goals and the team goals that they wanted to meet, they use an OKR process. Intel, I think, started this, but Google made this really famous. Objectives, a company provided goals. So those six month future views, the months, like what are our current priorities? And then key results. The way we implement this is we assemble teams within our company. We're small teams so there's not that many to begin with, but a few groups of people to focus in on one of those objectives and that group writes the key results. We don't force it down their throats. We don't say, this is the stuff you got to do or you're fired. They come up with how to achieve the goal. And all we do is check in and make sure that that's working and make sure that we're hitting it. So what that does is it gives the team ownership. It gives them those clear objectives and it shows how they fit into the big picture. It lets them find their place and blossom inside of where we're blossoming. And it gives freedom to fail, too. These are not, like, performance review. We're going to look at these once a quarter or annually or whatever. These are things that, weekly, we are checking in on and measuring against and making sure that they're moving us in the direction that we want to be. So remember, pursue your vision relentlessly. Be annoying in talking about your vision. 
Your clients should know it, your team should know it, your pets should know it, everyone should know it, and everything you do should be part of your talking points. Be systematic in completing it. What are the pieces that lead to the next step? Now that we've done that, we can do this. Once that's done, we'll be here and then you rattle out the vision again, always there. Choose your battles. Don't be afraid to lose. You can lose now. Losing is a learning opportunity. It's a growth opportunity. Losing in the short term might mean that the client sees you as flexible and willing to work well with them and wants to continue the partnership. That's worked with us. We had to concede a lot to grow a lot. Speaking of growth, change often, but grow slowly. Let your team define its own path based upon those key visions and hopefully it will all fall into place. Questions? Thank you.
|
This talk is for anyone who's had to promise new features to a client when they weren't actually sure how they'd deliver on the promise. We all have big visions and big ideas, but our teams and abilities are finite resources. We'll talk about how to manage expectations, growth, technical debt and delivery (and still be able to sleep at night despite that last wonky commit). We'll talk about the never-ending product roadmap, product reality and how what we create falls somewhere in between.
|
10.5446/31536 (DOI)
|
Today we're going to talk about a somewhat non-standard approach to search in your applications. Often search is done in some sort of an external service. Sorry. Better. Better. Better. I'll just stand like this for half an hour. Starting over, because I had that extra time. We're going to talk about a somewhat non-standard approach for searching your applications. Often search is going to be done with some sort of an external service. So, instead of doing that, we're going to use the tool that's already responsible for storing and finding data. Welcome to multi-table full-text search in Postgres. I'm Caleb Thompson. We're not doing a Wednesday hug. You can find me at these places on the internet. Feel free to tweet to me during the talk. I love that shit. But if you're not tweeting, please do close your laptops. I have code here, and it's going to be hard to get anything out of this talk if you're not looking. Now that I've told you that you should close your laptops, I don't like to jump into credentials at the beginning of the talk. You're already here, and that's sort of what the point is. We had the abstract. We had the bio. So you'll hear a little bit more about me at the end, but I do have one claim to fame, and that's that my birthday is in the SimpleDelegator docs. I know I'm an expert. I'm going to talk to you about a real-life feature. We're going to iterate, explore other options, and optimize, just like we would when we're developing a feature in our applications. We're going to talk about full-text search, what it is, how it can help us. Hopefully, you could have guessed that from the title. We're going to talk about views, and no, not the HTML views. We're going to talk about database views. Naturally, we want to explore the performance implications of whatever we're doing. So we're going to look at some of the performance implications and how we can mitigate them. And we're going to talk about materialized views, of course, as one way to do that. We'll look at a couple of gems that can help us out while we're doing this fun stuff. And of course, we're going to look at all of the other options, or some of the other options for what we're doing. Let's look at the classic example. Let's search for articles in our application. The simplest thing that we could possibly do that works is to search for substrings. Here we've got articles where the body includes. How's that looking? Terrible? Can somebody hit the lights over there? Just keep going. It's fine. How's that? Better? That looks better. I can see it. We're going to look at articles where the body includes some substring. Pass in that query. You've probably seen this in your applications. This works. It works if your users know exactly what they're searching for. So if they want to find an article with, say, the word "book" in it, and they know that it's going to be lowercase in the middle of a sentence somewhere, then they can search like this, and you can pass that search right in. Like I said, that's doing exact substrings. That's not the most useful thing. So a tiny little step forward that we could take is to do the case-insensitive LIKE, with ILIKE. I don't like this. But it is slightly better. All right. Well, let's leave that. It sort of does what we need for now. You don't need to know where in the sentence the word that you're searching for is. That's cool. Okay. Well, naturally, features expand. We need to search based on the titles of our articles. 
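A rough sketch of the kind of scope being described here, assuming an Article model with a body column (the names are illustrative, not the exact slide code):

    class Article < ActiveRecord::Base
      # Case-insensitive substring match; the surrounding % wildcards let the
      # query match anywhere inside the body text.
      def self.search(query)
        where("body ILIKE ?", "%#{query}%")
      end
    end

    Article.search("book") # articles whose body contains "book" in any casing
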
We probably should have seen that coming when we were searching. We can handle this too. We'll just sort of extend what we had already done. We're going to pass in that query two times instead of once, and we'll still do that case-insensitive LIKE. And the percent signs in here are so that the match will show up anywhere in the text. Anywhere in a word. So you could have any sort of substring in the text. So now we want to search by the author's name. This is getting a little more complicated, but, you know, again, probably something we should have seen coming. We'll go ahead and join onto our user model. I apologize. We're going to use the users and authors tables interchangeably in this talk. So, users table. So this is basically that same query, but we're letting Rails, Arel, handle the join, and then we're pulling in the user's name alongside the same two fields that we were already pulling out. That query, query, query is starting to stutter, which is something that we don't really want in our code. So one way that we could refactor something like this is to do it in a query object, and this is less performant, but arguably easier to understand when you're looking at it. But when it comes down to it, we still got really poor results. We're only searching for these case-insensitive substrings, and, you know, that's not great. What if the word is going to be plural, and we have a singular query, and we're searching for singular things? Google knows how to do these things, and that's what our users are going to expect from us when we're building this search feature. Enter full text search. Full text search allows us to query for rows based on natural language searching. Hey, Caleb, what is natural language searching? I'm glad you asked. Natural language searching allows us to remove stop words from the query. So these are words that appear in all sorts of sentences. They don't have a lot of semantic meaning to us, and we want to not really include them in our search results. We don't want every article that contains the word "and" to show up in our search results just because the query includes the word "and". Again, just like with the LIKE versus ILIKE, we're going to eliminate casing. Fairly straightforward. We want synonyms to show up. So if our user has sort of a concept in mind that they're searching for but doesn't remember exactly what it is, then both of these should return the same results. And we're going to incorporate stemming, which is another feature of natural language searching, which means that related words like try, trying, and tries, these are all different versions of the same root word, so we record them under the same concept, and when we're doing our searching under the hood, we're actually searching for that root word instead of the specific words that we're passed in. So here's an example of making that same query. We've got, we're going to sort of zoom around in this code a little bit, and we'll highlight the more important pieces. So here we're looking at the text that we want to join. We're saying title, concatenate that with a space, and then the body, we'll call that the text just because we need to give it a name for Postgres to be happy. And so that's what the two pipe operators are, the concatenation. And we're also going to pull in the author's name as the text. And naturally we want the ID when we're pulling out of the article, and the article's ID when we're pulling out of authors. 
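A hedged sketch of the joined version and the query object being described; the author association (assumed to be belongs_to :author pointing at the users table) and the column names are illustrative assumptions:

    class ArticleSearch
      def initialize(query)
        @query = query
      end

      # Matches the query against the article title, the article body,
      # or the author's name (authors live in the users table here).
      def results
        Article
          .joins(:author)
          .where(
            "articles.title ILIKE :q OR articles.body ILIKE :q OR users.name ILIKE :q",
            q: "%#{@query}%"
          )
      end
    end
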
We want unique results because we don't want the same article to show up a bunch of times in our query if it shows up, you know, multiple times in the body, or if it shows up both in the author's table and in the title, or in the body. All right, so that's a lot of SQL. Where do we put all of this? We could throw that back into our query object that we looked at last time we had some code, basically just some inline SQL, pass that through an execute, or where. Same thing with the scope. Throw it in there, just pass in that query so that it's interpolated. But to be honest, SQL doesn't belong in.rb files. We've got an extension for that. And so Postgres actually has our answer in the form of views. View is a partial query stored in the database that can be selected from, and it returns a set of columns that can be searched on later on. The nature of views is that because they're basically just a query, they can have multiple source tables. So we've got, right now, we've got the user's slash author's table, and we've got the article's table. So this view will allow us to sort of abstract that away and just say, this is the text that we care about. And then we can perform a where later on. We can do whatever we need to do to complete that query so that it's meaningful to our users when they're actually performing a search later on down the road. So if we were going to build, here's just sort of an example view. We've got this create view syntax. Just give it a name, just like you would a table. We can select distinct user IDs. So right now, obviously, we're pulling users with recent activity. So we're going to look at a couple of different tables. We want to see all of the information about the user and also the last time they were active. So we only want one instance of each user. Like I said, we want all of those rows from the user's table. And we're going to create this concept of a user's last active time by pulling in the activities created at column. And we're just going to limit that to activity or users who were active in the last seven days. So when we're looking through this view, it looks pretty similar to what searching through a table would look like. You select everything from users with recent activity, where, order, whatever you need to do. And in fact, it's so similar that ActiveRecord can use a view as a back end for a model. So what we can do is create a fairly vanilla model. It looks very familiar. And we can interact with that as if it were a table in our database. So we've got this user with recent activities model. As you can see, it's an ActiveRecord subclass. We're going to give it a table name just because our naming of that table didn't match what Rails would have expected when given the class name. And we're going to tell it that it's read-only. This isn't strictly speaking true, but it's easiest to just assume that a view is going to be read-only. If you need it not to be read-only, then there are some special rules for that. That's an exercise for the audience. But what this says is it tells Rails that nothing can be deleted and nothing can be written into this table. So it's read-only. You can only query against it. Will this work with the full-text search? Yes, we're going to talk now about our first gem, Textacular. Textacular was originally written by Aaron Patterson and lives here on GitHub. Textacular takes care of the full-text search portions of the queries. And it assumes that you want to search over every text field because it's called full-text search. 
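A sketch of the view-backed model idea, with the view SQL shown in a comment; the DISTINCT ON form and the seven-day window are assumptions meant to match the description, not the talk's exact SQL:

    # The view itself would be created elsewhere with something like:
    #
    #   CREATE VIEW users_with_recent_activity AS
    #   SELECT DISTINCT ON (users.id)
    #     users.*,
    #     activities.created_at AS last_active_at
    #   FROM users
    #   JOIN activities ON activities.user_id = users.id
    #   WHERE activities.created_at > now() - interval '7 days'
    #   ORDER BY users.id, activities.created_at DESC;
    #
    # ActiveRecord can then treat that view as if it were a table.
    class UserWithRecentActivity < ActiveRecord::Base
      self.table_name = "users_with_recent_activity"

      # Views are simplest to treat as read-only: no inserts, updates, or deletes.
      def readonly?
        true
      end
    end
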
So it's full-text, full-text search, I guess, on a record, on a table. And it gives you some variant search options like basic search and fuzzy search and advanced search. For our purposes, all we really care about is this basic search. And that's going to be what's most generally useful when you're building sort of a single field input that your users use to get results back. So that looks like this. If you're searching for a game, any sort of game that included Sonic, Sonic the Hedgehog, or Super Sonic, whatever, I don't know. And you can get a little more complicated even with the basic search and say the title needs to include Mario and the system needs to include Nintendo. But I don't want just any Mario title or just any Nintendo game. So this is sort of the next simplest, useful thing that you can do with Textacular's mixin. All right, so let's go back and take a look at that search that we wrote. This is that same SQL from before to get out articles based on either the article's title, body, or the author's name. So our search result is really simple on the Rails side. We're going to create this three-line class. We're going to include Textacular. And we're going to tell it that it belongs to an article because we named that field article ID. And if we want to actually use it, say we want to find an article written by Sandy or one that mentions Sandy, then we just do this basic search for Sandy and map that onto the articles. If you wanted to get a little bit crazy, you could include Enumerable into your record. Enumerable is a super important and very useful feature of Ruby built into the standard library. And if you don't know about it, feel free to come up and ask me afterwards. But basically, it's going to give you all of those cool each and map and everything else. So you'd be able to use this class with SearchResult.new(query) and then .each. So basically, you can treat it as if it were any sort of other collection, array-like collection. So creating this view, I'm sure I've convinced you now that views are great to use and that you want to use them. You want to know how to use them. So creating that is fairly straightforward. You've got ActiveRecord::Base.connection and execute. So this is in a migration, and you could actually just shorten this to execute. And we're going to use that CREATE VIEW SQL that we just had on the screen. And then to drop that, we just say DROP VIEW and then the name of the view. How resistant to change is this? All right, well, let's find out. So we have some feature creep, as we always have in our features. Project Manager comes back and says, articles whose comments match the query should also show up in the results. If somebody has mentioned Sandy in a comment about an article, we want that article to show up in our search results. So to recap, we're now searching on an article's title and body, an author's name, and a comment's body. And any article that any of these things are related to should show up in our search results. So the updated SQL looks like this, the updated query. And the new part is this new union with a fairly straightforward select and join where we're pulling in the body and the article ID from a comment. So let's take a look at that update-view migration. What we can do is throw the new SQL into that up method just like we had done before. 
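A rough reconstruction of those pieces, assuming the view is called searches and the model SearchResult; the SQL and whether Textacular is included or extended depend on your schema and the gem version, so treat this as a sketch rather than the slide code:

    # Migration that creates the updated view (title and body, author name,
    # and comment bodies all unioned into one searchable text column).
    class CreateSearches < ActiveRecord::Migration
      def up
        execute <<-SQL
          CREATE VIEW searches AS
          SELECT articles.id AS article_id,
                 articles.title || ' ' || articles.body AS text
          FROM articles
          UNION
          SELECT articles.id AS article_id, users.name AS text
          FROM articles
          JOIN users ON users.id = articles.author_id
          UNION
          SELECT comments.article_id AS article_id, comments.body AS text
          FROM comments;
        SQL
      end

      def down
        execute "DROP VIEW searches;"
      end
    end

    # View-backed model with Textacular mixed in for the full-text search part.
    class SearchResult < ActiveRecord::Base
      extend Textacular # per the gem's README; older versions may differ
      self.table_name = "searches"
      belongs_to :article

      def readonly?
        true
      end
    end

    # Full-text search over the combined text, then hop back to the articles.
    SearchResult.basic_search("Sandy").map(&:article)
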
And unfortunately, what we need to do for the down migration is put the old text of the view into that down migration so that when we're rolling back, Rails knows what state to put the database into. That's sort of a pain, but we can handle that. That's not too bad. And unfortunately, we can't always update. You actually can't remove any columns from the view, which we happen to be doing when we're rolling back this migration. Because we no longer have, oh, I'm sorry, no, we don't. But if you had added a new column into your view so that you have more information, say that active at column from before, if you say we don't really need that active at column anymore, let's remove that from the user with recent activity, then you couldn't just do that with an update. So what you have to do is just first drop that view and then create the view again. And again, we need to dump that whole SQL into the migration. So that sucks. You also can't dump a view into db/schema.rb. And so the solution to that is to tell Rails that the schema format is going to be SQL and then you're going to dump into structure.sql. It's going to dump out an actual Postgres SQL version of your entire database. Sorry. Luckily, we've got our second gem, scenic, which adds view methods to migrations and allows views to be dumped into db/schema.rb, which is what you expect, and just generally makes views a little bit easier to work with. Of course, I had a little hand in this. I am one of two maintainers of this gem. The other one is in this room. So creating this scenic migration is pretty straightforward. The readme goes over it, but you're going to write into a SQL file basically just the query portion of the view. So you don't need to worry about the CREATE VIEW syntax or the DROP VIEW syntax. It will handle that for you. And because you're writing it into, you actually write it into a .sql file. And so you're getting whatever sort of editor support, your Vim or TextMate or Sublime or whatever people are using these days, Atom. You get whatever benefits that gives you. So mine gives me indentation and some nice syntax highlighting in SQL. And then for the syntax in the migration, you actually just have this create_view, which is reversible, just like create_table. And you can go back to using that change method. Then if you need to change the view, you can actually just do this update_view. You tell it what version numbers and it knows based on a naming convention how to find the new and old versions of the SQL, of the SQL for the view. Even that's a little bit tough to remember. So we did create some generators. We actually have a model generator that gives you that read-only and infers the name of the model based on the name of the view. So that the naming will match up. So you end up with a three-line method. Or a one-line method and like a total of five lines in your file. And then when you're writing into that SQL file, it looks just like this. So for the first version of the searches view, we just write in that same SQL from before. Pretty straightforward. And that lives in a SQL file so that it's a lot easier to read and look at when you're in your editor. We also have a view generator for when you need to update that view, so rather than scenic:model, you can do scenic:view. I don't need notes. And that will give you the next numeric version of the view. And it dumps in the old version of the SQL which then you can update with whatever you needed to add. 
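A sketch of what the scenic version of those migrations might look like; the view name and version numbers are assumptions, and the SQL itself would live in db/views/searches_v01.sql and db/views/searches_v02.sql per scenic's naming convention:

    # Generated with something like `rails generate scenic:model search`.
    class CreateSearches < ActiveRecord::Migration
      def change
        # Reversible: scenic reads db/views/searches_v01.sql for the definition.
        create_view :searches
      end
    end

    # Generated with something like `rails generate scenic:view search`, which
    # copies the old SQL into searches_v02.sql for you to edit (adding comments).
    class UpdateSearchesToVersion2 < ActiveRecord::Migration
      def change
        update_view :searches, version: 2, revert_to_version: 1
      end
    end
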
Now there are definitely some performance implications with this approach. As I mentioned, this query is pretty slow. It has to search across three different tables and a couple of columns to get us all of the results that we need. So what it comes down to is actually an order of magnitude slower to get these results. And unfortunately views can't be indexed. Luckily underlying tables can. So the recommendation here is add indices. There are several types of index. The one that you're most familiar with is B-tree. B-tree is great for exact matches on a column. So either exact text or even the substring matches are okay; with a B-tree you might get an index hit with that. And definitely for primary keys where you're just looking up an ID or a UUID. Those are great. For full text search, the ones that we're interested in are GIN and GiST. GIN stands for generalized inverted index and GiST stands for generalized search tree. There's some information you'll never use again. GIN lookups are generally about three times faster than GiST. They also take about three times longer to build. GIN indexes are moderately slower to update than GiST indexes and they're two to three times larger on disk. What does that mean? Who cares? This is what it means. You should use GIN if you don't have hundreds of thousands of rows in your database. You're not concerned about longer writes blocking the DB. You're adding the index late in the game and you don't care about disk space perhaps because it's 2016. And you want very fast lookups. So we're optimizing for read heavy. If we were building a log manager or something like that, then we would want to optimize for write heavy and maybe GIN isn't the right solution. So on the other hand, you should use GiST if you have very large tables, millions, billions of records. There's an order of magnitude in there. If you're between those two, it's up to you to figure out. I work at a consultancy. You can pay me. You should use it if you have performance concerns right now. And when I say that, I mean that you currently have performance issues, not that you are concerned that in the future you will have performance concerns. You should use it if, for some reason, disk space is important. It's 1994. And you should use it if your table is very write heavy. Like I said, log aggregators are a great example of this. So adding those indexes as GIN is pretty straightforward. These are the four fields that we've been using, and you just say USING gin and Rails knows how to handle that. Materialized views are another way that we can improve this performance. Materialized views are a tool to pre-populate the results of the views. So it's going to run that entire search query that we had. And it's going to store all of those results into a temporary table. We'll pay, say, the 400 milliseconds whenever we're creating that table. But then we can query against the temporary table, which already has the results in it, and that's much, much faster. So we query against that result set rather than performing the full query. And it's another order of magnitude faster even than the ILIKE was. This is without the indexes. And the downside of a materialized view is that it's not always updating because it is storing into that temporary table. You have to tell it when you want to pay that 400 milliseconds to get your update happening or however long your query takes. And you can do that as often as you like. You can do that on every write with a trigger in SQL or with an after_commit hook. 
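A few sketches of what those optimizations might look like, with the table, column, and model names assumed: GIN indexes over the underlying columns, a refresh helper for a materialized version of the view, and an after_commit hook to trigger that refresh.

    # GIN indexes over the tsvector form of the columns the view searches.
    # The same pattern applies to users.name and comments.body.
    class AddSearchIndexes < ActiveRecord::Migration
      def up
        execute "CREATE INDEX articles_title_search_idx ON articles USING gin(to_tsvector('english', title));"
        execute "CREATE INDEX articles_body_search_idx ON articles USING gin(to_tsvector('english', body));"
      end

      def down
        execute "DROP INDEX articles_title_search_idx;"
        execute "DROP INDEX articles_body_search_idx;"
      end
    end

    # If the view is created as a materialized view instead, the model can expose
    # a refresh method that re-runs the stored query and repopulates the results.
    class SearchResult < ActiveRecord::Base
      self.table_name = "searches"

      def self.refresh
        connection.execute("REFRESH MATERIALIZED VIEW searches;")
      end
    end

    # One way to wire up the after_commit hook: refresh whenever a source record
    # changes. In practice this might be pushed into a background job.
    class Article < ActiveRecord::Base
      after_commit :refresh_search_results

      private

      def refresh_search_results
        SearchResult.refresh
      end
    end
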
Looks like this. Or you can do it maybe on a timer. If your search results don't always need to be up to date, you could have the Heroku scheduler do it every 10 minutes or hour or day or whatever. So you can do that either with a Postgres trigger, exercise for the reader, or with a Rails after_commit hook. That looks like this. All right. Well, what about some prebuilt solutions? There's a lot of options out there, and I did say that we would look at them. We've got Elasticsearch with either Tire or Elasticsearch Rails or Chewy. It's including who knows how much into your models. I know it's a lot. You can use Solr via Sunspot, but holy shit. Sphinx with Thinking Sphinx actually does use a separate file, but still, like, I don't know what any of this means. Why do I have to figure this out? I already know things. I know SQL. All right. So what these services are great at is faceted search. If your search doesn't look like a single box, if it looks like Amazon's sidebar, then full text search is going to be a little bit more difficult to work with. I'll admit. Or Postgres' full text search. These other tools do full text search for you as well. All of these things have to run on your development machine. They have to run on your production machine, which means that they have to be running. They're slowing down your machine. You have to deal with all of these dependencies. You also have to deal with them every time you're doing an update to your system. If you ever change that version, then you need to make sure that development and production are all in the same version. If you're ever going to roll back, you need to make sure that's handled. Big pain in the ass. They all need to be faked in tests because you don't want to be actually using these things live in tests. In fact, I had a couple of coworkers who were using Solr, I believe, and a great feature of Solr is that it synchronizes its index across the same subnet. Because they were both sitting on their work computers and had the same external IP addresses, their test indexes were being synced between each other. And that was a lot of fun for a week. All of these have a lot of cruft in the models. I said some, but it's all. And they're removing a data concern from your database. So they have this arcane syntax and ultimately they make me make this face. So by combining materialized views, full text search and some Rails magic, we have a pretty cool search feature that doesn't require any new dependencies. And it makes me smile. Thank you. Thank you.
|
Searching content across multiple database tables and columns doesn't have to suck. Thanks to Postgres, rolling your own search isn't difficult. Following an actual feature evolution I worked on for a client, we will start with a search feature that queries a single column with LIKE and build up to a SQL-heavy solution for finding results across multiple columns and tables using database views. We will look at optimizing the query time and why this could be a better solution over introducing extra dependencies which clutter your code and need to be stubbed in tests.
|
10.5446/31539 (DOI)
|
I am very pleased to introduce our next keynote speaker. She has background in computer science, psychology and design, which I think is a fascinating overlap. And is currently the director of user experience at Blue Wolf, has also written a book, which I think if you ask nicely, she might give you a copy. Can try that out later. I may have over promised. We'll see. I think so. I think so. We can do that. Thanks for backing me up. Okay. Ladies and gentlemen, please help me welcome Chanel Henry. Thank you. Thank you. That was a great intro. It's funny because the past couple of weeks I've been staying with my mother, and I'm 33, so that's usually something not to brag about. She said something that was really hilarious. She said I was too kind, and I said thank you too much. So I'm ignoring her, and I want to say thank you to all the people that made this possible. I want to say thank you to everybody that's here. I wanted to also say just thank you to everybody on the board and everyone that has made this conference possible for the future engineers, the builders, and the connectors of the world. And I really want you to give all yourselves actually a round of applause if you can. Please, thank you. All right. Okay. So let's get into the down and dirty. This talk is about a lot about UX, very little bit about Rails, and a lot about awesomeness. So as you know, my background is very, I like to say colorful, right? There were so many different ways I wanted to start off this introduction, like with some jokes, I was watching Seinfeld, okay, how would he do it? I watched Oprah, and I wanted to be like you guys are amazing, look under your seats, that's where the free books are, there aren't any, don't look. And then I'm like, you know what? I think the best way is to just say hi. So hello. At the bottom of each slide, you'll see my Twitter handle. That's also my handle at Medium, where I write a lot about what's in here. So if you want to go on that journey, you know, I welcome you to in my mind. And then I decided just to use the RailsConf 2016 UX if you want to tag or say anything cool or anything like anything about the talk. You can also download the presentation at bit.ly slash Rails UX 2016. So who am I? I'm a user experience architect and consultant, and it took a very long way to get there. My first computer was at the age of, I want to say five or six, it was a Macintosh Plus. So definitely dating myself there, but I already told you how old I am, so you don't have to do the math. And then through that, I just loved doing, you know, a whole lot of IT, was always taking stuff apart, was always getting grounded for it. I loved being a developer back then when, you know, Cubasic was like, yeah, like you say it now, and people are like, whoa, like, what are you talking about? So that led me on a very long journey because I also loved doing art, and everyone said there's no, there's no money in art. There's no money in making things look beautiful until you're dead. So I thought about that for a while, and then I decided to, okay, maybe I should look at these computer courses and just, you know, continue to go down that path. I also realized that I was a storyteller. I love to tell stories about my life. I love to write about stories about my life, and then I realized I love to speak about it. I was a very, very shy kid, like the kind where, I think one time I was in a group, and the person next to me introduced themselves, and then I introduced myself with their name. 
And they were like, wait, you're the same, I was like, wait, no, no, no, I'm sorry, I'm not Melanie, I'm Chanel, and then I had to really just look at that, you know, for a while, which I'll get into later. And then most importantly, I'm an alchemist, which if no one has read the book, you are not alive. No. You must read the book, the alchemist, of course. It does really help a lot with, I believe, with the journey that we're all on. I believe that we all do this for different purposes and different reasons, and for me, you'll find out what those reasons are throughout this talk. And then the most important part of my life is that I love making beef jerky. It's very random, but I've been making it for about 25 years now. Ever since I was eight, I was the kid that didn't ask for Barbies. I wanted a dehydrator. Because I love beef jerky, and I loved fruit roll-ups. And what better way to get that than by having the source. So I did that, and now I make what has been told to me by a lot of people, including, if you're not, well, I'm sorry, I'm from Philly, Philly, in the house, any Philly in the house, no? Okay, none. Awesome. Yeah, city of brotherly love. One of the top news correspondents there, her name is Renee Shinolfo-Tah. She actually said it was the best beef jerky she's ever had, and she's from Colorado, so I think I take her word. So that's important to me, because I always feel like we need a hobby over the things that we do. Unfortunately, every hobby that I had turned into a job. But that's another story. At the core of it, though, I'm a phony. And right here, I should probably walk off the stage, because what do I have to say, or what's going to be important? But I think what matters most about this statement is that it's true and false at the same time. And I'll get into that. The first thing is, what is user experience? So I felt like when I first got here, I was overwhelmed by the brilliant minds that I was running into. I really couldn't have a conversation with a couple. I thought we were speaking Japanese, or maybe another language that I didn't know about. That's how far removed I was from the back end language. And I kept asking myself, well, what could I possibly have to say that could be useful? But being the director over at Blue Wolf, which was recently acquired by IBM, woo-woo, felt important, because I feel like I can pop my collar a little bit. Sorry, sound. So we were recently acquired by IBM. But coming in there, it was a group of 600 people worldwide. We have about 11 offices, a couple in Australia, a couple, one in France, one in the UK. I think we just got an office in Prague. And then we have a lot over here in the United States, San Francisco, New York City, Atlanta, Boston, just name a city we're probably there. So they bring me in, like, the one that looks like nobody, because I look like a kid. I still get carded at the casinos. Like I can't even sit at the blackjack table and really get into it before they're like, oh, that's cute. No, where's your ID? So I come in there and nobody knows what design is or what UX is. The first question I got was from a developer. And he was just like, what's the quickest way to make an icon? And I said, going to Icon Finder and trying to find one that has an open creative license. And he was like, oh, I'd ever thought about that. He was actually designing each and every icon. And then I realized that this was going to be a challenge. 
Because the internal learning and adoption of this company would be difficult because they had never had a design department. But we have Fortune 50 clients, some of the clients that I've worked with, GlaxoSmithKline, T-Mobile. I don't want to call out another bank that didn't work, but Northern Trust and Western Digital, which was my actual, my first project. And that was where I really learned that there was a very big gap between designers and developers. So then, as always, I felt it was my mission. Like, I couldn't sleep unless the whole entire organization knew exactly what UX was. That happened about maybe four months ago. And I've been there for three years. So, and it happened when I became a rebel. So, you know, we're titanium partners with Salesforce. And that's basically what the company does. We implement Salesforce into the corporations, into these top 50 corporations. So it was really difficult to try to talk in there, to talk to the business people, the sales people, the project managers, the CEOs, C-level people, everybody, to try to get them to understand what exactly UX is. So this is how it's usually seen. I'm really used to having it above. So that's why that weird thing just happened. Because this is how it's normally seen. So UX is typically seen just as interface design or visual design. People would actually email me like, hey, I have a wireframe. Or I have this idea. Can you develop, design a website by tomorrow? And I'm like, no. Do you know what it takes? What's the company? Who's the audience? They're like, oh, that doesn't matter. Just make it look pretty. That's the worst thing you can say to a designer is make it look pretty. Now the cool part about me is that I'm a tri-brid, right? So actually, let me go back to that slide. How UX wants to be seen is that we do everything from field research, face-to-face interviewing, product design, feature writing, requirement writing, technical specification writing. So we have to know at least some development tools to know what we can build. The prototyping, visual design, copywriting when I never wanted to write because I'm such a goofball that when Western Digital wanted me to write their copy, I wanted it to be fun and they're just like, that's too much humor. They're like, imagine us with a straight tie, but we just loosened it a little bit. Don't get too out there. I'm like, OK, OK, you should probably get somebody from your marketing team. And then we also do brainstorm coordination. We really get off with doing those strategy sessions and having a lot of fun with that, really thinking about what's possible. And how that happens is by this UX methodology that I actually put together for the company. This is version two because as you know with a designer, or you might not know, we're never happy with anything. I still already want to change this. But the company went through a rebranding. So then I had to try to make it simpler as to what it is that we do. Now I know a lot of you in the back can't see it, but I'm going to go through it a little bit quickly and then show a couple of examples. But when you really break it down, because I love to look at patterns in the world. Like I'm a pattern seeker, a problem solver, and then I just try to connect all of them to make it make sense. Not only to me, but to everyone else. So the first part is about discovery. And I put together something called an XPR workshop, which stands for experience process review. 
Some companies may call the regular ones like a BPR, business process review, going through requirements, looking over your competitors or people not in your field, but that are doing something close to it. Doing analytics audit, in which cases most companies are like analytics who? And I'm like, oh no, why aren't you measuring? We can't measure success if we don't have a success metrics to measure it against. Value proposition. Then we get into the user analysis, which is my favorite part because I love stalking people. I don't know if a lot of you saw me yesterday, but I was a lurker. I had a red hoodie on. I was in the matrix. Just looking at everybody, just seeing what everybody was talking about, what everybody was doing. Someone sat next to me and he had an advanced swift book and I was like baller. Okay. I just love looking to see what people are doing. This helps me to do it in a way where I can see the users do focus groups, storyboards, interviews, contextual research when you're trying to be the fly on the wall. So it's like you don't really know that I'm there. Then there's information architecture. We bring all that information together, all the content together, all the information about the company together to put together a site map and then wire frames, which puts together a good content strategy. Then that's where the visual design comes in at. This is what everyone thinks I start with. Like, yeah, just top in the Photoshop or sketch and do something cool. But I can't because I need to do the mood boards. I need to go through the pattern library. I need to figure out the style guide. I love style, like both in fashion and in plain eye. I don't like looking at things that are unpleasing. Sometimes I'll run past mirrors because of that. That's another issue. Sometimes this is a therapy session. I just want you guys to know that. Talking for me. Yeah. Then usability testing as well so that we can make sure that all that information that we learned actually makes sense and can be used by the users because at the end of the day I always say that UX is just about users and content. If you don't have either, there's going to be a great disconnect with the product. Because I have Dev up here really small, it does not mean it's not important. Actually it's one of the most important things. Because I'm not a developer, I couldn't put it in that process, but I needed the company to know that actually we need to be involved with development because they actually placed us under marketing, which is a whole other topic and a whole lot of legal issues. After that we go into maintenance, which is going over the UX specification document, which really just talks about everything that I talk about in plain English. It helps to create an improvement plan, figure out the success metrics, and then also get that user feedback. UX, we need to be told how you feel. I love responses, I love criticism, I love feedback. I like to hear good stuff too, but I always love to just know what's going on in a person's mind. Some of the things that I was talking about were the competitive analysis, where we take the context and look at the content and the users and try to figure out the best way that your product can be successful. We do process flows in which it's like, okay, you have a lot of screens. I think the best example was my first project with Western Digital, which is actually up now, it was redesigning their customer service portal. They have three different types of audiences. 
They have the ones for the direct buyer, like Apple, Staples, and then they have the home users and then small business. For small business, just to get to the login screen, it took about seven or eight pages. Then the whole entire flow was about 65. We were able to condense that with me and the technical architect to about 20 pages. Users got scared because they were like, wait, no, wait, hold on. You couldn't possibly have put all the features in the process flows, but we did. We explained it to them and then everybody finally understood, oh, okay, now we get it. I like those aha moments because we started to have more fun with it. It started to become less of a who are you, that early dating phase where you're starting to see, oh, I don't like that about you. I'm like, am I going to like that you're a vegetarian and I really, really love to barbecue on the weekend? Just trying to figure out what that connection is going to be like. It became very fluid after that. Branding elements are also a really big important part because this helps the developers, the front end developers, to realize, okay, every page must be consistent. We're not just going to use aerial on this page and whatever, Georgia on this one and whatever. I try to make sure that they understand how important it is in brand consistency because in doing so, then we can create these beautiful products. The problem with Western Digital, even though it did end up being a success towards the end, but the biggest problem at first was that we had offshore developers and then they had no budget for UX. They thought we could do a whole entire customer service portal for this global conglomerate in 150 hours. In reality, it was more like a seven to 800 hour project. I'm a fast worker, so I did what I had to do, but I started to keep asking myself, okay, how can I change this for the next product, for the next project? There were so many projects that happened after this. Because I was one of the only UX people there, and there was only one other person, so two for a company of 600 where there's 100 sales people going hard every day and selling something that they really don't know anything about. It's like I had to figure out, okay, I used to be a developer. I still am considerably somewhat technically kind of a developer. I can understand the language. I know it's possible. I know what can be built. But how can I marry the two? The semi-conclusion that I got from all of this and that I feel like you have got by now is that UX is important in development. Dev needs UX. But it's weird because as I was reading about this, they kept saying that user experience, you don't necessarily need to know development. I actually disagree with that. I feel like in order to know what you're able, what your capabilities are, you have to be able to know what can be built and what's possible. Then I feel like the place where the two can marry are in the process flows. That's usually when we do come together. Because we're able to see, okay, can you build this? Can you make this quick? I know it's going to cost the client more, but let them know that they're saving money in the end. My grandmom always had the saying, if you buy cheap, you buy twice. I kept saying that and they would always laugh. Haha, your grandmom's funny. But they never did it until the end. Then we had to do a change order and it was another million dollars. Then I wanted to say I told you so, but it's like you can't. You're just like, okay, no, no, you were right all the time. 
You know, Jason Fried, and I liked his quote back in 2013, he said, here's what our product can do and here's what you can do with our product sounds similar, but they are completely different approaches. I truly believe that's why 37 Signals is so successful today and why Basecamp is so widely used because he gets it. Just because you can make something doesn't necessarily mean that you've made it for anybody. This is why I always have to tell a lot of the companies that I go to and it's intimidating because I'm usually the shortest one and usually the only woman as well and some other things. Then I go in and I'm just like, okay, listen to me because I know what I'm talking about and they're like, wait, what? Then finally they're like, oh, wow, you do. Okay, let's have this conversation. I usually let everybody know that everybody is a part of UX. I want the CEO if he can come in here. I want the TA, the IT guy. I want marketing. I want the interns. I want everybody. GSK was really good for doing that where they brought in their pharmaceutical reps, the IT team and just a lot of people in this meeting and we were able to just have such a beautiful marriage between the two. I remember someone coming up to me and saying this was so awesome. I haven't used Post-its and crayons and markers since grade school and I'm like, wow, we can do this in meetings. It's okay to use Crayola. It's okay to talk about feelings. It's okay to talk about these different things because I feel like it gets to the mission and the vision and that's usually what my XBR workshops are. They're very similar but not the same to Google's design workshops. I just try to have that conversation because at the end of the day, and by the way I love quotes so you're going to see a lot of them, design creates stories and stories create memorable experiences and great experiences have this innate ability to change the way in which we view the world. There's a lot of times at my job where I always hear design can't save the world. They always say it's not like we're saving lives and that bothers me because I got into this field. I got into all the fields that I got into and I went to all the schools and got all the degrees that I have to basically, oh and I also dropped out of a lot of schools too so don't think I'm not trying to be arrogant with that because you have to learn. But basically what I learned at the end of the day is I do want to change the world and we do have the power to change the world because of this. So I'm holding this not because I'm texting my friends because I'm bored, I'm holding this because this is the key, my keynote and I'm able to see what's next. We can build these experiences that are either sticky, like in a good example of that is Facebook, you can't get off of it, they want you to continue to use it versus slippy which is like the watch where it's like, you know, I don't know what's there but it's still providing me with the information that I need to do what I need to do throughout the day. One of my biggest projects that I remember working on was with the United Nations World Food Program and we were able to design and I designed with an organization in San Francisco. We were able to design basically a platform for NGOs and non-profits for Africa that was kind of a mixture of our census and our food stamps program to be able to help eradicate like hunger, crisis and sickness within Africa and to me that was changing the world. 
Because something so simple as someone will go into the store and say, we're out of rice so the guy closes up all the rice and keeps it for himself but with this and then there's a hunger crisis because of that but it was never true. But then what we were building was something where they would be able to check to see if that was true or check to see what they could do about it and then they were able to handle the situation in a better way. That couldn't have happened had we not did the user testing, the field testing in Kenya working with all these NGOs making sure that we were able to actually hit our goal. So that's one of the biggest things if there's anything I want to leave you with too is about the fact that you can actually change the world. And I know that sounds very like reading rainbow-ish but it's like in those that remember reading rainbow I could sing the song if you don't, butterflies in the sky, no. But it's true like we have the power like we are the geeks and the nerds and finally you know appreciated for that to be able to create such amazing things that can help save people's lives especially in the medical field, especially in mental health, especially in any education, I was like in the hotel and they were talking about like yeah you know like join our Kansas City Public School K-12 and I was like oh my god that would have been perfect because I hated going to school. Like I liked it but it wasn't fast enough and then I was too social but I'm also extroverted introvert and I know some of you get that. I'm an INFJ if that means anything to anybody but you know like it was difficult for me to learn and to also connect with people so I always suffered in education. The quote up here is from me. UX has come a long way and it's not going anywhere but we still need to bridge the gap between design and development and honestly most of the conversations I've been having have been with all designers or developers and I'm the only designer but I feel like we all need to have this talk. But one of the other biggest things about why we can't have this talk is because there's a lot of barriers into STEM fields, science, technology, engineering and math. There's ageism, racism, sexism, nepotism, oh my, like there's just so many like things that can stop people from getting into these fields or exploring these fields or even exploring the power of these fields, right? One of the biggest things that got to me was actually all of those plus a couple others and I remember writing a really, really, really, really personal but not really rant about like how come Kevin Rose and Tim Ferriss are doing all these awesome things and I'm just sitting at home like still like just designing this or doing that, like why am I not, you know, like riding on the plane next to Bill Gates and doing some like really cool stuff. Why am I still, why am I not awesome? What's like where is that? Is it too late? So I spoke to a coworker and I ended up writing a blog post about it. It was the quickest blog post I had ever wrote and it was the one that most trended ever in life and it scared me because it got about maybe 20,000 views the first day and then became one of the top 10 stories of that month for Medium and that was back in 2011 and that would actually spawn a whole series of events as to why I'm here today. I had to talk about this question right here. Is it too late to be awesome? 
Apparently there's a lot of people that struggle with feeling like it is and I always talk about this during tech talks because I'm very existential and very philosophical and I consider myself like a techno hippie because I really think I belong in the 70s but I probably couldn't do well without an iPhone back then so I try to figure out like how can I mix the two? How can I still be a free spirit, you know, like the Denise from the Cosby show or whatever and just kind of be like, you know, it is what it is but still be taken seriously. I remember having a podcast back in, I think it was December with Saran, kind of Code Newby and I know you guys all know about that, Code Newby. There you go. Our talk was hilarious to me because I'm just like, this is like a developer podcast, are you sure you want me? And she's like, yeah, no, this is good. I think this is what we all need to hear. And she was right because I had to determine what is awesome, why do people struggle with this so much, why do we see so many, like if we really take off the blinders, right, and we all know what's been happening in the news, I'm not going to depress anybody, I try not to watch it. I only watch it at night because Jimmy Fallon makes it funny but you know, like there's a lot of different things that cause us to be afraid but at the end of the day we have to realize life is short but I didn't want to start off with that. I didn't want to be like, well, you know, you have to be awesome because you're going to be dead soon. Like no, don't say that. You know, like there's everything you just don't say so I'm like, okay. But this post that I would write would begin to let me go through a lot of self discovery and a lot of discovery with other people to find out what it was that was getting away. So before I could say what was awesome, I had to define it. The same way that I didn't know I was successful until I defined what success was for me. And it's not the same for you, it's not the same for anyone else. And this definition might not be the same for you but to me and to Brené Brown and to Oprah and to, you know, to a lot of people, you know, it's being your authentic self. Who is that, right? Like sometimes we're so distracted. We're so distracted with the things around us, our insecurities that we're not really sure as to like how to really, as to what that looks like. So Brené said vulnerability is the birthplace of innovation, creativity and change. Vulnerability. How do you be vulnerable? I'm going to be vulnerable right now. I have been scared crapless to talk to you. But this happens every time. Like I've done many, many talks but every time before that I'm like, okay, okay, what am I going to say? I can't rehearse. I can't do this. Like I know I have the comedian gene. I know I like to make people laugh. But it's like do people want to laugh all the time or do they want to be taught? How can I do both? And then I was able to find a connection in there somewhere. But it wasn't until I was able to get to that vulnerable place where most people had told me to be quiet like, no, no, no. Don't talk about the fact that you actually hate that design. Don't talk about the fact that you know that you had like a mental breakdown like in 2013 because I overworked myself. talk about like all of those taboo things, but the thing is we need to talk about them. Because humans are wired for validation. 
That's why it's like it's funny because you know being a UX person and I say you know I value feedback and I can't really see many faces except the ones in the front and I see you smiling which is great so that's me like okay I'm doing something to make one person smile that's awesome but you know we're wired for validation whether it's internal or external. External is the dangerous one because you know we're always going to look for that answer. We're always going to look for that approval. The internal one is difficult because we weren't taught that. Like we were taught in kindergarten to be like be yourself. Do your thing. And then by like fourth grade it was like shut up take that class. What are you doing? You're going to be a doctor not a painter in the woods like you know like so we were always like taught to be something. But and this is this was the top highlighted part of the blog and you know I love Taoism and Buddhism. I like all the spirituality actually it's kind of cool. And my undergraduate degree is also too in youth ministry as well. So it's like it's a nice little mixture in there of a whole lot of different things but when you are content to be simply yourself and you don't compare or compete everyone will respect you. Funny part about that is that last part doesn't even matter. Who cares if they respect you right. I think what like like the subtitle of the is it late to be awesome was the dangers of perfectionism and comparison. And that's usually when people are like oh crap yeah like that's good like okay I get that because I'm always trying to compare myself. So I always I always try to say like okay what's holding us back. And in psychology I actually learned a lot of different things there's a negativity bias. Our brains are more wired to hold on to the negative things than they are to hold on and see the positive things in life. Meditation as much as we drill it into your head actually to me brings your spirit back to your body. It allows you to actually see like and like be present and see like hey listen it's not that serious. Like because once you're authentically yourself and if you can cling on to that one positive thought you feel good throughout the day. An example of that is a smile like when I see a smile and I'm having a bad day I can't mean mug the person back. Like you know by the way Philly is very very mean. So I remember going to it like a bus agent and I'm all skipping and giving him my money and he's just like what the like what the hell are you like smiling about. And I'm like oh okay like he was really serious like why are you smiling. And I was like I thought it was a good day but now it's not you know like so now I'm just going to continue on with my bad attitude thank you very much. But you know like when you know in Atlanta it's a different story it's so nice there I walk down the street angry and I have a you know old man like smile up there like cheer up there and I'm just like oh okay okay like yeah I don't know what I'm smiling about but now I'm really happy like okay. So there's the negativity bias then there's the invisible audience which I'm imagining now. No the invisible audience is an awesome phenomenon because it's actually the thing that developed within us when we were adolescents. 
It's the thing where we feel like everyone is watching us that everyone is paying attention to us and I always say that if that is true and it is true then that means that if I'm thinking about you thinking about me thinking about you thinking about me then literally less than one percent of my thoughts are actually going to be about you. And when you have that freedom knowing that nobody's really thinking about you like I don't know what you guys are thinking about I know you're looking at me but like I don't really like it's not as serious as sometimes we make it out to be. And once you realize that that less than one percent is there then you start to have a little bit more confidence. Now the problem with this invisible audience is that we've been we're manufacturing a society where we need those validations. How many likes am I going to get in the next 10 minutes if I don't get 15 likes for my outfit I'm changing you know like that's how that's how we feel. But at the end of the day once we realize that like no one knows what they're doing everybody's winging it. Like everybody even Kanye is winging it right like like nobody loves Kanye more than Kanye loves Kanye and it's one of the things I always have to remind myself like be Kanye no no be yourself because because that's what it's about everybody is winging it and that's what brings me back to I'm a phony and that's the whole imposter syndrome. We have this whole fear about feeling like that we really really truly don't belong here. Like we really like there must be a joke. It really didn't mean to hire us whatever you know like but at the end of the day we have to really think about this equation focus plus action equals momentum. Focus is a byproduct of choice you have to continually choose to be focused. Now as a person with ADHD it's a hard choice but I choose to I choose to be like focused on whatever that is whether it's speaking to you whether it's writing something whether it's designing something whether it's thinking about a new invention which happens like five times a day and I need a builder to do that so if anybody wants to talk to me afterwards that'd be great. You know it's just focusing on that and action requires commitment because sometimes we get so excited about the idea and then it just dips off and you know you've made a great book the dip and it talks a lot about that and momentum of course as we know is just making sure that we're keeping up with it no matter how slow no matter how fast we're keeping up with that. Now I'm going to do Tony Robbins because of course have to right. Excitement must lead to immediate action or you will lose the power of momentum more dreams die because we fail to seize the moment do it now. 
Carpe diem right like we all learn this we say it but it's like we're so we're still so afraid to do to do that thing to do that one project to learn that particular language to to do that particular you know path that we want to do because it doesn't look like the next person but if you ever notice it's the it's the weirdos and I can definitely been called a weirdo more like I could be a millionaire like if I got a dollar from when they called me you know weird normal or crazy but it's like those are the ones that stand out those are the ones that people want to talk to or want to be a part of like that movement so we have to take action and yet it's time for a small story so and it's going to be great because I'm going to try this in three minutes because I don't want to hold you up but every year gumroad does a really cool thing called small product lab they do it like four times a year and they make it so that you come up with a product that you can like execute within to attend a period so I decided yeah okay I'm going to write a book this is the book it's called mutination I love long subtitles so it's called the skeptics quick guide to tackling depression anxiety and other soul-sucking ailments in a distracting world so this book and this is what I had so I did this talk to it as well and it's been changing so it's not like I'm you know recycling and you're not getting you know old goods but like I did a talk similar to this at South by and I had everybody write down one goal right so my goal for this particular you know program was to write a book then it was like what have you been avoiding I'm like well writing a book so and it's like well what's your superpower and it's like doing things fast and making people laugh but mainly doing things fast because usually making people laugh part is by accident and I just hop in front of it like Ellen and then try to like make fun of myself before anyone else can and it's been working so far but but the big part about this was that I learned a lot through it because I didn't complete it on time so here's where here were the results I ended up becoming depressed crap okay so that would happen right so I'm writing a book about depression I've come to press that's perfect so so it's just like okay like and I remember like being so excited about this book I'm like yes like I figured out my own depression because you know like I said I love mental health advocacy you know I struggled with depression anxiety my whole entire life ever since I was three and I was like what do you have to be depressed about a three a lot okay a lot when you really recognize the universe and your place on it so it's like oh my I was afraid of the sun rays like it was crazy so you know I had to go through this depression so then I decided to write the group because it was a Facebook group and I was like guys I don't I know I promise you this awesome book I did a cool cover design like look at it's great right it's gonna be awesome but it's not written and everyone they did the opposite of what I thought they would do like 40 people were like that's okay like it's okay like that's expected like whatever you're going to write is probably going to be amazing because you're going through this experience and then I was like okay all right all right cool all right I have a little bit of confidence there and it was during my birthday so it was during the time where I was turning 33 so I was like oh wow I'm really a failure it's another milestone 33 is a really important 
number for me I have nothing to show for it mind you like all of my friends are like what are you talking about like you have this job you have this you have this but it's like you know you get in your own way so I ended up writing a post about how I'm going to do this in a week and I said if anybody who's out there is reading this post nobody is I know then like if I don't finish it then please tweet that I'm a phony never listen to her again and just like unsubscribe from my blogs but I said but if I do get it I get to like go to my Amazon wish list and pick something like under $50 because it's like long and I like I like to buy stuff off Amazon so what happened was the stupid post went viral so I was like oh no like like again like why did the post that I don't want go viral and the ones that I do do so the post went viral and then I became depressed again it's like seriously like okay okay that's fine but I kept going Monday Tuesday Wednesday Thursday I thought about it Friday I started and then a book was born by Sunday now it was surprised me that I was able to write it because I had realized I had already wrote it I just I was afraid to just put it out there so I was able to create this book it's a small book and it has a different type of I'm like doing it differently because of course you know I'm a techie so I'm doing it as an application so it's going to be updated this is version one version two will include like my life story version three will include you know more applications of that because of the fact of even how I got here by meeting like a random book publisher is south by Southwest that was like can't wait for that book and I'm like okay and then I just threw my computer like way deep in the closet and then after that I did write three chapters ended up meeting Oprah handed her like the chapters and she's like okay and I'm like yeah definitely not writing that now like and it's just like what are you doing like you're a self-sabotager but I ended up writing it it's on Amazon now if you go there you'll see it and that was like one of the biggest successes I have done to this day because I finished something like and you know as there's anybody out there that knows I feeling about finishing something like it's it's it's real so the funny part was though I forgot to reward myself I was ready for the failure I was so ready for the failure that I forgot to reward myself but what I realized from all this was the greatest gift ever to you is it's about accountability like you can't like it's you have a greater success rate I've noticed when you share it you have to share what you're doing I share everything sometimes some things I shouldn't share like my friend Scott Hanselman he's always talking to me like remember everything is a perm link and I'm like okay okay like but but I don't like because I don't have a filter and I don't really want to have a filter it's like I just want to share it and I want to see how the people feel about that and then we can talk about it like let's just let's just talk about it so my favorite quote that I'm going to leave you with because this is also something that I battled with with even with talking with you like should I be really techie like I'm going to just appear like the you know like like I said like the hippie designer that's like hey guys like cool design stuff like you should work with me like or whatever and even with the book I'm like what do people want to hear but the biggest quote that I have ever ever encountered and I'll 
probably have it tattooed somewhere I have like eight tattoos you can't see well you see a little bit but you can't see most of them but don't ask what the world needs ask what makes you come alive and go do it because what the world needs is more people who have come alive and that's by Howard Thurman and that is really like important quote to really really take in because we're always wondering what people want but what we want is you you know at the end of the day it's not about because everybody wants something different and everybody will be a critic and I can do a whole talk on that but that's not here so but the biggest part about this is to remember that we want you you know like I always I always quote Jay Z and there's two things I'm going to say from him even though he's like at the dog house right now but he always says you know like everybody can tell you how to do it but they never did it and I found to be like the best way that I was able to be my authentic self and to be the self that I wanted to be was to be around those that I wanted where I wanted to be and it wasn't a comparison it was admiration and then another thing too is he says difficult takes a day impossible takes a week and I thought that was cute because I'm like oh yeah that's I mean most rappers always say like a lot of really cool things but it actually happens for them so I'm like this is actually must be true in some way and it is because I didn't think a lot of things were possible but I just needed to wait and I needed to be myself so if there's anything that you get from this it's that we need to talk more bridge the gap but most importantly be our authentic selves so go and be awesome. Thanks. Thank you.
|
With a background in Psychology, Computer Science and Cybersecurity, Art Direction & Design, Chanelle Henry has an intense passion for problem-solving and creating methodologies; helping outline, encourage, and propel the UX Process. Currently serving as a Director of User Experience at Bluewolf, she uses creative and innovative solutions to execute ideas to consult with everyone from startups to Fortune 50 companies to help refine their goals, make progress, spread the gospel of UX.
|
10.5446/31540 (DOI)
|
Good afternoon, everyone. I'll start the presentation. Thanks for coming. Welcome to Engine Yard's sponsored talk. I'm Alan Espinosa and I'm a support engineer at Engine Yard. Today, I'll be talking about the aspects of deploying Rails applications with Docker. My background is that I've been an operations engineer for the past few years. I've used Ruby mostly in my day-to-day work, so I've used Ruby back in the day when it was still popular to build systems with, before Go came along. But it's still my go-to language of choice. I'm not really a Rails developer. The last time I used Rails was six or eight years ago, I think. But I do a lot of Ruby development, not just with the web. I'm the author of the Docker High Performance book from Packt Publishing. Given that Docker changes a lot, the book will probably have obsolete content in the next few months. So when writing the book, I tried to make sure to capture the concepts that can last through the Docker updates. I'll talk a bit about that in this session. I'll be talking about a bit from the second chapter of the book on how to optimize Docker images, and then I'll tailor it so that you can figure out how to do your deployments using Docker with Rails in mind: optimizing the way you roll out Rails in Docker. When we talk about optimization, most of us think about having faster response times and having your controllers respond really fast, so you have asynchronous workers spinning up, those things. But all in all, if you look at the broader picture, performance is all about improving the experience of our customers. So from the experience of our customers, you trace the value stream, and then you can start doing things like refactoring your controllers and your business logic based on the feedback you received from basically production traffic when interacting with your customers. Another way to optimize down the line is to tune the middleware. You can configure your Unicorn workers or Puma threads and then set the memory allocation so that you're utilizing your machine. You can tune the SQL queries so queries are fast. All this tuning doesn't happen in a vacuum. The tuning you make is informed by the instrumentation in your application and your machines. So you put in logging and application metrics that tell you whether the application is okay and how users are interacting. Even something as simple as Google Analytics can give you a lot of insight. And then you can correlate that with system metrics, which you can use to inform your scaling decisions. So you scale up your application to keep up with the demand from your customers: you adjust architecture, you add caching, you add capacity. Like, how many of you know when you need to spin up a new instance of your Rails application? How do you know the limits? Having a good story about how you operationalize your application is important when optimizing. You try to tune your application so that it performs faster, but in the end you need to roll changes out to your environment. So if your deployment process is slow, then the tuning you might do now might be obsolete by the time you get to production. So there's also a need to tune the delivery of your software to production, and here in this talk I'll focus more on that. Even though I focus mostly on Docker, these concepts are fairly broad and general. Most early adopters rode in on the container hype. 
So now we're starting to get past that point. But in the end, even though you can package Docker containers and stuff, it all boils down to focusing on what's the value of our application, and Docker is only a tool to reinforce the way you use it. I was in our Engine Yard booth earlier and I started talking to people about how they use Docker, and most of the time there's a lot of, okay, so now we're just starting at testing. We're trying to convince people to run it in production and there's a lot of resistance to that. And as an operations person, I can understand some of that hesitance. So knowing what it means to change your stack to a container base will help you convince people, if you really think Docker is for you, because it all boils down to delivering your app basically. In delivering the app, normally we just talk about deployment, but there's also the build phase. There's a natural tendency to think of the build phase as compiling code into binaries, like you have your C or Go code turned into binaries. This doesn't seem intuitive at first for Rails and Ruby developers because Ruby is an interpreted language, but if you look at how things are, there's an equivalent of binaries in Rails. When I say binaries, it means anything that needs to be dropped in the environment, like in production, in order to run the application. It's important to know what will get deployed so that in case I get paged at 3 in the morning, I know where to look. So in Rails, you have your gem packages. They're a nice way to do it: you do a gem install in production and you're done. But aside from that, it's not really the final binary itself, because when you do a gem install, sometimes if you depend on native bindings like Nokogiri or the FFI library, you compile stuff and produce the shared object files. The code in your Rails app is part of the binary as well: the controllers, the models, and the routes. So if you do a git ls-files, everything there is part of your binary. And then you have the dependencies, the gem dependencies for that application, so you do a gem install -g or a bundle install. And then finally you have your Rails assets. So there are a lot of binaries when it comes to making a ready-to-run Rails app. The thing with Docker is that it gives a nice, I guess, interface to wrap our brains around, because Docker has the notion of a Docker image, the container which needs to run. All those Rails binaries need to be merged into one binary called the Docker image that needs to be deployed. You build it on your build server like Jenkins and then you push it to what they call a Docker registry. Basically it's an artifact repository like RubyGems where, from a Git commit, you build a Docker image, you push it to the Docker registry and tell your ops teammates that, okay, I have my image ready, you can now pull it and deploy it. So there's just one thing that changes in your application: basically you add a Dockerfile which defines how the Docker image is built. For those of you who are just starting with Docker, this is just a basic Dockerfile to define the image. Here you have the environment you want, so FROM ruby:2.2, and then you ADD your current directory in your build. Basically it's all the files needed by Rails, and then you do a bundle install to pull in the dependencies or compile assets. 
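A minimal sketch of the kind of Dockerfile being described here; the base image tag matches the talk, but the paths and the exact server command at the end (which the talk gets to next) are assumptions:

    # Dockerfile sketch: build everything the Rails app needs into one image.
    # (The /app path and the server command below are assumptions.)
    FROM ruby:2.2

    # Copy the whole application into the image
    ADD . /app
    WORKDIR /app

    # Pull in the gem dependencies (this also compiles native extensions)
    RUN bundle install

    # Define how to run the application (use Unicorn/Puma/Passenger in production)
    CMD ["rails", "server", "-b", "0.0.0.0"]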
And then in the end you also define how to run your application. Here you run rails server, or in production you should be running Unicorn, Passenger or Puma instead of WEBrick. So in the build process we have something like this. You run the docker build command and specify the name; here I'm naming it rails-app. You can see it's starting to compile the Docker image. Here you see it's adding the Rails directory to your application, and then after that it pulls in dependencies, like doing a bundle install. Here you can see it's compiling native bindings for libxml because I have Nokogiri installed. And then here's just a short view of what the build will look like. When you do a docker build it will run for a few minutes because you're downloading gems and you're compiling the gems. Here it took one and a half minutes. A feature in Docker that helps in the build process is its concept of a build cache. If you run the same build again without any changes to the code, the build will finish right away; here it just took one second. Behind the scenes you can see that since there were no changes to your application, since you built it earlier, it will reuse the cache to rebuild the image, same with bundle install. However, if you make a change to your application, even a small change, for example you updated a route or you changed a model, the build will take just as long, because in the Docker build steps you had new content in your application, so it created a new, what they call it, image layer. So the succeeding steps would need to be rebuilt because they depend on a new layer, so it had to run bundle install again. This is not much of a problem if you're starting out, but once you have a lot of teams or you're trying to do a larger refactor, doing a bundle install every time you make a change starts to get painful. Those minutes will start piling up. So what you can do is optimize your Docker builds, wherein you separate your application according to which part doesn't get updated and which part gets updated more often. Here I split my Gemfile and my actual application so that I'll be able to exploit the cache more often. It's the same concept as having a separate rake task for your unit tests, which finish right away, versus your integration tests, where you need to spin up a database or a cache or everything else in your stack. The initial build is the same, around one and a half minutes; it will take just as long. But if you make a change to just your application, it will finish as if nothing was changed. Well, actually something changed, but here the change happened at the later step when you added your application code. So if you didn't change anything in your Gemfile or Gemfile.lock, it will reuse the cache you had earlier, so it will greatly improve the build time. The point of having a build process when making your Rails app is being able to get feedback as fast as you can on whether the artifact, the Rails binary that's ready to deploy, is actually good to deploy. After producing the Docker image you vet it through a series of tests, like your unit tests and your integration tests, in your delivery pipeline and guarantee that it's good to go. And when it's good to go, you're now off to deployment. I found this on the internet where they substituted compiling with deploying, since deployment takes most of the time, I think, especially on a Friday night. 
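A sketch of the cache-friendly split described above: copy only the dependency manifests first so the bundle install layer stays cached until the Gemfile actually changes (the paths and the final command are assumptions, as before):

    # Cache-friendly Dockerfile sketch (paths are assumptions).
    FROM ruby:2.2
    WORKDIR /app

    # The Gemfile changes rarely, so copy it on its own first; the bundle
    # install layer below is only rebuilt when Gemfile/Gemfile.lock change.
    COPY Gemfile Gemfile.lock ./
    RUN bundle install

    # Application code changes often, but it lands after the cached layers.
    COPY . ./

    CMD ["rails", "server", "-b", "0.0.0.0"]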
We've had a lot of customers, especially on a Friday afternoon, doing the deploy while we support them. We've had customers who talked to us for support where their deployment takes 30 minutes to finish, and those kinds of things. Without a rapid deployment process, you don't have the valuable feedback of knowing if what you changed is actually useful for your customers. So I'll show a few items that can delay the deployment process and show how this can be improved. As an operations engineer, I don't really like doing the type of deployment process where you log into the server, do a git pull of the latest deployment and do a bundle install. I guess it's a personal preference, but one, it's slow, because if there's a lot of change you have to pull in a lot of gems and recompile everything in the build process. Sure, you can parallelize it across your fleet, so you have Capistrano or some other thing that SSHes in parallel to do the bundle install. But in terms of being able to roll out the changes safely and having the ability to roll back, you want to do it little by little. So your parallelization is limited by how much you want to update at a time. If you do a canary deploy, you want to deploy to one server first, and then the next two, three, four, until you finish your full fleet of servers. That will slow down the process. Contrast that with deploying Docker images: Docker has a command called docker pull which basically downloads the image from the Docker repository. So the deployment workflow is just download the image and run it. It simplifies your deployment process, and if you do a canary deploy, you can do that as well. You're now bound to how fast you can download your images from your Docker registry, like Docker Hub, and not from other sources like RubyGems. I'll talk more about that in a bit. In the end, even though we rely on a lot of community packages to make our application, it's still us who are ultimately responsible for the availability of our application. This site got popular on Twitter during the npm left-pad thing, but its concept is very powerful for everything. The site is who owns my availability dot com. If you refresh it, it will send out random articles about availability and the concept of reliability, and the notions introduced by human operations. It's a nice site to check out if you want to do reading on operations-related stuff. So this is a typical architecture, well, it's not an architecture, but how the value stream goes. We have our customers relying on our application to be up all the time because it serves their own business usage well. And then conversely, we are dependent ourselves on other services and their availability for us to be able to make our application. We're dependent on RubyGems, and if you're using Debian you're dependent on a mirror of the apt repository. And we introduced Docker into our stack, so now we're dependent on Docker Hub to pull our images. I guess this is where your operations teammates are having their hesitance, because you're adding another dependency that can cause things to break. So if you're relying on these services, it's good to be able to vendor them and not have your deployment process rely on them, so that even though RubyGems goes down, 
or Docker Hub goes down, your application can still be deployed, or you can still roll out changes if you need to update things, so you don't need to declare a snow day or something like that for your team. So what I like to do, even on my dev machine, is to add proxies everywhere. There's a notion in corporate environments where developers hate configuring the proxy settings for their development environment; there's a lot of friction there. But there was a talk yesterday from Jamie about trying to understand ops teams, where they come from and what's the source of the grumpiness. Trying to understand and show empathy goes a long way, and you can also learn a lot of stuff from people in other departments. So here, actually, on my dev machine I have three proxy servers, one for each type: I spin up servers, I have a mirror for apt, then I have a mirror for the Docker registry, and I have a mirror for my RubyGems. You can do these things on your dev machine or your Jenkins server. Bundler has the mirror setting, which basically says that if in my Gemfile I have a source of rubygems.org, it will download from another endpoint instead. It's like man-in-the-middle attacking your dev environment, and it produces this entry in your global, user-level Bundler config. I also do this in my .gemrc, so I actually remove rubygems.org from my gem sources and add my local one. And then I install any of these proxy repositories; you have Artifactory and Nexus, and Docker Registry for Docker images, and out of the box Nexus and Artifactory support different formats. I actually just installed one, just Nexus, because it's free. So I have my own proxy for RubyGems and my own Docker registry, so when I do a bundle install I'm not dependent on downloading things from the internet all the time. Actually, when I do development I can do a git clean -fd, and it will remove all the cached gems, all the compiled gems, and then I do a bundle install again and I can install everything right away. And not depending on your dependencies also works for databases. If your database is down, then your Rails application should be able to degrade gracefully. Your master database may go down, so people cannot post updates to their accounts, but if you have a slave database, then the Rails application can read from there, so you can still serve requests on a read-only basis. Everything doesn't just fall down at once; it falls down bit by bit. It's like if you have a hole in the ship: you can start bucketing out the water while another part of your team does the plugging of the hole. So it's about trying to handle failure gracefully. 
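A sketch of the kind of proxy configuration being described; the hostnames and ports are hypothetical, and setting up Nexus or Artifactory itself is out of scope here:

    # Point Bundler at a local RubyGems mirror; this writes the entry into
    # the user-level Bundler config mentioned above. (Hostname is hypothetical.)
    bundle config mirror.https://rubygems.org http://gems.internal:8081/repository/rubygems/

    # ~/.gemrc: replace rubygems.org with the mirror for plain `gem` commands too.
    #   :sources:
    #   - http://gems.internal:8081/repository/rubygems/

    # /etc/docker/daemon.json: have the Docker daemon pull through a local
    # registry mirror instead of going straight to Docker Hub.
    #   { "registry-mirrors": ["http://docker-mirror.internal:5000"] }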
So knowing all these things will make us have a more operable application environment, so that we can focus on serving the needs of our users. And with that, I'm done with my talk. If you have questions, I can take them, and if you go by our booth, the Engine Yard booth, we have limited copies of my Docker book, so if you pass by you can talk to me and tell me about your story of using Docker, or convincing management to use Docker, and so on. Thank you.
|
You’re very happy as a Rails developer for drinking the Docker kool-aid. You just need to toss a Docker image to your Ops team and you're done! However, like all software projects, your Docker containers start to decay. Deployment takes days to occur as you download your gigantic Docker image to production. Everything’s on fire and you can’t launch the rails console inside your Docker container. Isn’t Docker supposed to take all these things away? In this talk, I will discuss some Docker optimizations and performance tuning techniques to keep your Rails packaging and shipping pipeline in shape.
|
10.5446/31543 (DOI)
|
Good morning everyone. Are you awake? Yes, sure. Good morning everyone. Are you awake? Okay, yeah, it is the first talk in the morning. It is always a little bit sleepy. So, welcome to my talk. Thanks for coming. This talk is going to be about RubyMine, about different RubyMine tips and tricks and some productivity hints, and I hope we will learn something new today. But before we can start, I would ask you to do something for me. So, please stand up. Okay, thank you. So, please raise both your hands if you use RubyMine. Okay, please just stay. Please raise both your hands if you don't use RubyMine. No, no, no, just stay with your hands up. Okay, let's stretch it a bit. Okay, yeah, morning routine. Thanks a lot. So, please sit down. I think that now we are ready to start. So, my name is Tatyana. I'm a product marketing manager for RubyMine. I'm a part of the RubyMine team at JetBrains, but I'm not a developer. I used to be a developer 10 years ago. I used to be a Ruby and Rails developer when Rails was really young. But now I'm not. So, if something is not working today, don't blame me. Okay, I'm joking. I'm a part of the RubyMine team, so, of course, you can blame me, but I will blame the team, the developers. Okay, so, let's start. When talking about an IDE, the first thing you start with is actually making your environment look good for you and making your environment as comfortable as possible for your eyes and fingers. So, we will start with some tips on making your IDE a little bit more suitable for your needs. And we will start with the color scheme. Sorry. It's my RescueTime. So, let's go ahead. I want to use this very quick switcher and change the look and feel. For example, to the dark one. Or you can switch it back to the default one. And I just want to ask you which is better for you for this presentation today. The dark one? Okay. Okay. Let it be dark. So, another thing is that sometimes it's useful when you're working in a pair on something or making presentations. So, it's a very quick way to switch the colors. And the same way you can switch the keymap. I'm using this one. But, of course, you can use another one. We have some predefined ones like TextMate, Emacs, and some others. Again, for example, if you're pair programming with someone who uses another keymap, it is the best way to switch between them. You don't need to go to preferences to search for keymaps there. You can just make it in one click. But still, if you want to go to preferences and, for example, adjust the keymap for your needs, you can do it here. Go to Keymap in preferences. You can see all the commands here. You can change the shortcuts. And you can even search by keystroke, like this, and then change it. So, you can adjust any predefined keymap for your habits and for things that you are used to. Okay. So, let's go back. And, of course, you can also adjust all the colors. I mean that we can switch these predefined color schemes. But, again, you can save all the colors for everything you need to. You can change it. Adjust it. So, just in case you need it. And another thing I want to show you that from my point of view is quite useful. It is the list of plugins. You can see that we have a lot of them. Most of them are bundled with RubyMine. So, they are preinstalled. But if you need something more from RubyMine, you can always go to the list of plugins and install something new. For example, for this presentation, I am using this one, the Presentation Assistant plugin. It shows all the shortcuts I use at the bottom. So, here it is. 
And to install a new one, you just need to go to this Install JetBrains Plugins, or even install plugins not from JetBrains but from the community. Okay. So, you can also adjust the look and feel of the window. You can see that in my view I have a project tree at the left, usually, and an editor at the right. You can also switch on the toolbar, switch on the navigation bar, the status bar, and here the small icon that helps you to navigate among all the tool windows. You can have these tool window icons here, but you can hide them as well. And another interesting tip here is that, as you can see, there are numbers in the title of every tool window. And this number means that you can press Command plus this number to open this tool window. Like Command 1 to open the project view and to hide it. Or, for example, Command 4 to open the run window and to hide it. And if you've opened too many tool windows, a lot of them, and you're a little bit lost and you want to go back to your code, the best way to do it is to use Shift Command F12. This will hide all the tool windows and just go back to the editor for you. So, it's a good way to stay focused on your code without having all these tool window icons actually open. So, I hide them. I don't need them. I navigate with the keyboard and I don't need to see all of them all the time. If you want to be even more focused on your code, you can go to this view mode and enter Distraction Free Mode. And it looks like that. Nothing. Just code. Or you can exit this Distraction Free Mode. And, for example, enter Presentation Mode. And it will look like that. Sometimes I use it when I'm coding a lot, when I have live coding during a presentation. But today I won't use it because I want to show you all the windows and all of the IDE, not only the code. So I will exit it. Okay. So that was something about setting up your environment. And now let's talk a little bit more about navigation through your code. Maybe the most common way to navigate, the most simple one, is to use this project tree, the project structure. And nothing special here. Just, I hope you know that you can search here. Just start typing and you will search through this tree. But the things that I want to show you are in this small icon in the settings, these two options. It is auto scroll to source and auto scroll from source. By the way, who uses that? Whoa. One person. Okay. So, if you switch on auto scroll to source, it means that when just going through your tree, you will also open all the files in the editor. But, frankly speaking, I don't like it. I don't think it's very useful. What I do like is the other one. It is auto scroll from source. And this one means that when switching through your tabs, through your code, you will still be aware of where you are in your project, when the project structure is opened. Sometimes it's good. For example, when you are debugging and digging in your code, sometimes you really need to know where you are when you are a little bit lost. So, from my point of view, it is a good option. Okay. So, that's about the project view. Another thing is that if you are doing Rails, and I think that most of you are doing Rails because we are at RailsConf, you can also switch this project view. Okay. Don't blame me. Please blame the developers. Just a moment. Let's try to fix it on the fly. But I will definitely add a bug report. Sorry for that. Okay. So, you can use this Rails view. 
And this view means that you will see the structure of your code, not as the folders and files structure, but in terms of models, views, and controllers. So, what you can see here is all the controllers in one place. And, for example, under each controller, you can see all the actions. And under each action, you will see all the corresponding views. So, it's kind of a Rails view. And, of course, you can navigate from here as well. Ah, let's switch to views, etc. But if you still need, for example, I prefer the Rails view, but sometimes I still need to get back to the project structure view, to the folders and files structure. The best way to do this is not to switch this project view, actually, but to use the navigation bar. You can see that here at the top I have the navigation bar. And I can really easily navigate through all the folders. And I can, again, just start typing here. And it understands snake_case and CamelCase. So, it's a good way to navigate through your files and folders, if you still need them, without opening the project view at the left, while staying in your Rails view there. Okay, but I have a question, one question. Do you feel comfortable with this view, or is something a little bit weird, a little bit strange, something you don't like? Come on. Any ideas? Tabs, yeah! I think that, yes, I think that tabs are really awkward here. They take so much space from your code editor. Why do you need them? I don't think that we actually do need them. I don't. I hope that you don't as well. So, the best way to deal with them is just to switch them off entirely. And we can, of course, do it with the preferences, go to preferences, find the right line here and switch it. But I have a smarter way. It is by using the find action feature. Do you use it? RubyMine users, do you use it? Okay. So, do you know that, with this shortcut, you can find any action in the IDE by its name, like start searching for copy or something like that. And it is also a good way, if you forgot some shortcuts, to remember them. But it is also a good way to change preferences. For example, if I start typing, not tab, but tab placement here, you can see that all the preferences are listed here as well. So, just start searching and you can just manage them from there. There is no need to go to preferences. There is no need to search through all the preferences lists. So, please use it. But you may now ask me, if I don't have tabs, how am I going to navigate through my recently edited files and so on. And the best way to do it for me is to use Command E to see all the recent files, or Shift Command E, and you will see all the recently edited files. And, frankly speaking, you will see all the files and tool windows. And from my point of view, it is much better than tabs. Okay. So, and of course, you can just start typing here to filter if you have a huge list, if you have edited a lot. Okay. So, another thing, I'm sorry, I just want to click escape. If you still want to go to, let's go back to the controller. And sometimes if I'm in a view, I can do it with a simple icon. If I want to navigate not only through my files, but also to navigate, let's do it here, for example, through all the methods in my class, in my file, the best way for me to do it is to use Command F12. It shows the structure popup and it shows all the methods. 
And the interesting thing here is that if you click it one more time, you will see all the inherited methods as well. So, sometimes it's quite useful. And another thing, a little bit smarter way to navigate, is actually to use go to definition. Because I believe that a lot of the time what you really need is not to navigate to some file or class, but to navigate to the declaration, to the definition of the variable or method you are looking at. And the best way to do it is to use Command B. And you will navigate to the definition, to the declaration. Okay. And you can use it again and again to dig through your code. And you will navigate to libraries, to gems as well, not only through your source, through your project, but through gems as well. And you can also use this one, Command Y. If you don't want to switch to the file where the definition is, you can use this quick definition popup to see the method definition in a popup, without going to the file. Okay. So now I have a question for you. Please raise your hands if you have already learned something new from the last slides. Okay, great. Now you can leave. No, I'm joking. I'm hoping that you will learn something more new as well. Okay, let's talk a little bit more about coding now. So, yeah, starting with creating a new file, I think that it is a very basic action. I hope that you know it. But the interesting thing here, hopefully you noted, is that you can use Rails generators from here as well, to use your scaffolds, controllers, whatever. And one more thing here is that when creating, for example, a new file, a simple file, you can also use directories here. And everything will be created for you. No, I don't want to edit it. And with this auto scroll from source action, you can see that you will be navigated in your project structure very fast as well. So here I am. No, I don't really need it. So I'm going to delete it. Yeah. But sometimes, for example myself, sometimes I want to create temporary files, temporary directories with these .rb files, just to experiment, to play with some piece of code. And for doing that, I don't want these files to be stored in my project structure. I don't actually want them to be stored in my version control system or whatever. I just want to play with them. And the best way to do that is not to create these temporary folders, at least for .rb files, but to create scratch files. And you can do that with Shift Command N and create, for example, some Ruby file. And it is like a file, but it is not a physical file in your project structure. It will be stored inside the RubyMine IDE. But still you can start coding here and everything will be available, like code completion. And the good thing is that you can also use Shift Control R. Yeah. And you can run it from here as well. So for Ruby code, it's pretty useful from my point of view. Okay. And if you want, for example, to see where these files actually are, you can go here, go to scratch files, again, the same bug, sorry for that, but I won't do this now. And you can see there that, for example, I have three scratches. So they are stored for you. You can then go back to them again and do something. Okay. It was strange behavior. Okay. It's just my thoughts about this bug. Let's go back to some controller. For example, yeah, this one. This one. Of course, when coding, you manipulate code a lot. And you use a lot of copy and select code actions and so on and so forth. And in RubyMine, there are all the basic ones. But still, I don't know if you use it. You can use this extend selection. 
Okay, let's go back to some controller — this one, for example. Of course, when coding you manipulate code a lot, and you use a lot of copy and select actions and so on. In RubyMine the basic ones are all there, but one I don't know if you use is Extend Selection. It's quite useful: it selects chunks of code semantically. And then you can, for example, move them — please don't copy and paste, just move the lines if moving is what you need. You can also copy a line without selecting the whole thing; just put the cursor on it. The same goes for duplicating and deleting lines: no selection needed, just the cursor. And when you've copied a lot and need to paste, you can paste with Shift added to see the history of your clipboard, which is sometimes useful as well. We also have multiple selections: you can add multiple cursors by selecting the next occurrence, then just start typing — and you'll see that code completion works here too, for all the places at once, and so on.

Speaking of editing: if you want the full power of, for example, Vim editing — by the way, who here is using Vim? — if you want the best of both worlds inside RubyMine, the way to do it is to install a dedicated plugin. Go to Plugins and find the IdeaVim plugin; you need to install it, it isn't bundled with the general plugins. Once it's installed, go to Tools, Vim Emulator, and just enable the emulation. I won't do that here, but if you want to try it you're more than welcome to come up after my talk, or to the JetBrains booth in the exhibit hall, and try all the actions on my laptop. I'm not a Vim user myself, so I'm not even going to pretend to demonstrate all the smart things from Vim, but you can take a look at how it works in RubyMine.

Okay, one more thing about editing: I want to talk a little bit about code snippets. Do you use code snippets? In RubyMine we have a lot of predefined ones, but you can also create your own, and sometimes that's a great way to handle similar-looking code that you don't want to type out again and again — and I'm not talking about refactoring here, that's another topic. You can just select a code fragment and choose Save as Live Template. You'll need to give it an abbreviation, and you can add variables to it; I also want to add the end variable, which means the cursor will be placed on the right line after the snippet is finished. Okay, let's try it. If I go there and type tcc — you remember, it's our new one — you can see the cursor is placed in the right spot, at the first variable. Now I can just start typing, with completion, and it will be filled in according to our template. Then I press Tab and jump to the end of the snippet. Let's go back to the list, just to show you that of course you can also use a lot of the predefined templates, change them slightly if you want, or set up your own — and we have them not only for Ruby, but for JavaScript, Rails, SQL, and everything else. Okay, I'm just going to delete this one.
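As an illustration of what goes into a template like that — the abbreviation and the variable names here are made up, not the ones from my slide — the template text uses $...$ placeholders for the variables and $END$ for where the cursor ends up:

    # Hypothetical live template body for a Ruby memoized-reader snippet;
    # $NAME$ and $VALUE$ are the template's variables, $END$ is the final caret position.
    def $NAME$
      @$NAME$ ||= $VALUE$
    end
    $END$

When you expand the abbreviation, the IDE stops at each variable in turn, and Tab jumps you to the $END$ marker once you're done.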
Another really quick way to write code — now talking about HTML — is Emmet. Anyone using Emmet here? I really do like Emmet. I'm not hugely experienced with it, but still: you can use it in RubyMine as well. Just start typing an abbreviation and, with Tab, it turns your Emmet into HTML. Another thing: you can select existing code and use — Alt Command G, sorry, not Shift Command G — to surround it with Emmet, like here. And if you're not really sure about your Emmet expression (I use this a lot), you can also choose the Emmet preview. Whoa, where's my link? Ah, I see — it depends on where the cursor is. So you can preview the result before adding it to your code.

Okay. Now, if I change something, sometimes I want it to look a little bit nicer afterwards — I mean I want to reformat it so the code style looks good. Talking about code styles, there are several ways to work with them in RubyMine. First of all, we have a lot of code style settings in the preferences; you can change them if you want, and they're based on community styles. You can use Alt Command L — or with Shift, to define a scope — to reformat a code fragment according to those settings. But you can also use EditorConfig. Anyone using EditorConfig? If not, you're probably not doing a lot of open source, because open source projects really do tend to have an .editorconfig file with the code style settings, just so all the developers on the project share the same style. If you have one, you can just put the .editorconfig file in the project root and the code style settings will be taken from there. And if you're writing JavaScript and using ESLint, you can set RubyMine up to use that as well. So that's code style.

Okay, we don't have a lot of time, so now I want to talk not only about writing your code but about cleaning it up — some inspections and refactoring. I need some broken code... here it is. Hopefully you know that RubyMine highlights all the errors it finds, thanks to the many inspections we have, and it provides quick-fix options. So with Alt Enter you can see that this is an unless used with else, and you can just press Enter to fix it, to change the code. Sometimes intention actions are a good way not only to clean your code but to write it as well: you first just call a method without declaring it, and then you go to the intention action and create it with one click. One thing you may not know is this small icon where you can manage the highlighting level and switch on power save mode if you need to. If you don't want all this highlighting — no problem, don't panic, just switch it off. It's okay. But then you're responsible for your code at that point. You can also run the code inspector across the whole project to see all the errors RubyMine can find. And I want to show you one example that isn't only an inspection: locating duplicates. Here it is — RubyMine finds all the duplicates in your code, and you can see the details here: this one is almost identical, you can see, and this one is simply identical.
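To picture the kind of duplication this inspection flags — and what it looks like once it's pulled out into a method, which is the refactoring coming up next — here's a tiny made-up fragment, not the demo project's code:

    # Before: the same formatting logic is written out in two methods.
    class Report
      def header
        "Report: #{@name.to_s.strip.upcase}"
      end

      def footer
        "End of #{@name.to_s.strip.upcase}"
      end
    end

    # After Extract Method, both call sites use the new method instead.
    class Report
      def header
        "Report: #{formatted_name}"
      end

      def footer
        "End of #{formatted_name}"
      end

      private

      def formatted_name
        @name.to_s.strip.upcase
      end
    end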
And if you have a lot of duplicates in your code, it means you probably want to do something about it — it's definitely not DRY. You can jump from this window straight to the editor; you can see we've navigated there and RubyMine highlights the duplicated fragment. Now I can, for example, just select it and use Refactor — Extract Method — to pull some new method out of this code. And this dialog is very interesting, because remember that we had two places with the same code: RubyMine warns you that if you're extracting a method from this fragment and the same fragment exists elsewhere, you probably want to call the new method there as well. Do you want to replace it? Yes. Do you want to replace it? Yes. So that's one way to inspect and refactor your code.

Okay, now let's talk a little bit about testing and debugging. I'll run all my tests with rake. While they're running — if you use the test runner, you may know that by default it sits at the bottom, but you can always move this tool window with this option: top, bottom, left, or right, it doesn't matter. I like to place it on the left, but maybe you prefer some other option. The interesting thing I want to show you is this small icon that filters the tests: you can see the list of all tests, or click it and see only the failed ones. I think you'll want to stay focused on the failed ones, not the green ones. You can see what went wrong by switching between them, and you can navigate from here straight to the test code just by clicking — quite useful. And if you have a failed test and want to find out what was wrong, the quite natural next step is to debug it. Just put a breakpoint here and start debugging — Control Shift and... I just forgot the exact shortcut. You can see that the test is now running in debug mode and we've stopped at the breakpoint; we can see the list of variables here, and we can go step by step. Let me run it one more time to show you Step Into rather than Step Over. Here it is: you can step over, you can step into, you can go through your code and look at more details. By the way, you can also manage breakpoints: open View Breakpoints to see all the breakpoints in your code, enable or disable them — if you have a lot of them, that's useful too — and add exception breakpoints and so on. And one quite new debugging feature I want to mention is this small preference — where is it... ignore, yes — in the debugger's stepping settings. It lets you decide whether you want to step into your libraries, into your gems, or stay inside your project. If you don't want to dig into all the libraries, just turn this setting on and, when stepping over, you'll stay in your own code — you'll stop only at breakpoints in your own code, even if there are breakpoints in your gems.
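Just so you can picture the workflow — this is an illustrative test, not the one from my project — the kind of thing you run with rake and then debug with a breakpoint looks roughly like this:

    # A made-up Minitest case; the comment marks where you'd set the breakpoint.
    require "minitest/autorun"

    class DiscountTest < Minitest::Test
      def test_ten_percent_discount
        price = 100
        discounted = price - price * 0.1  # breakpoint here to inspect price and discounted
        assert_equal 90, discounted
      end
    end

With the breakpoint on that middle line, the debugger stops before the assertion so you can check the variables and step the rest of the way through.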
Okay, so those were the main tips and tricks. Just one more question: do you use version control systems with RubyMine — those of you who use RubyMine? Okay. We don't have a lot of time, so if you don't mind I'll answer the rest of the questions afterwards or at the booth, because I want to show you one more trick. So, version control systems — quite a useful thing. Let's go to Version Control. You can see the whole history here, across the repository, and you can see your local changes. But the useful thing, the small trick I want to show, is this: when you're going to commit your changes — not push, commit, Command K; that first one was the wrong shortcut — you can see that the commit dialog has a diff view as well. And you can actually edit your code right in the commit dialog. If you need to fix something small that you've just noticed, you don't need to go back to the editor; just check this icon and you can change whatever you need — get the version from the left, for example, or simply start typing something here. It's a small trick that, from my point of view, is quite useful if you're using version control.

And one more thing, about databases. Do you use the database tools in RubyMine? Okay, then you know them quite well already — but if you don't, I just want to let you know that we have this database tool window where you can see all the data you have. You can open your tables and look at the data; you can even change the database from here, although I think it's better to do that with migration files, you know. But when you want to look through your data and run some queries, it's sometimes good to open a console and just start typing, and it will highlight and complete everything you need. You can run the queries and play with the database. And you don't need to set everything up, because Rails projects already have database configuration files: the first time you set up your database, you just go to this database tool window, click here, and import from sources, and everything will be suggested to you according to your database configuration files. Quite useful, from my point of view.

Okay, so that's it — only one minute left. Thank you for your attention. If you have some short questions I'll be glad to answer them, or we can just have a chat after the talk or at the JetBrains booth in the exhibition. If you want more demos, or if you have any questions, we'll be glad to help. And there's one guy there who is a RubyMine developer, so if you want to blame something, if you have some bugs to talk about — come on, you're very welcome. Okay, so thanks. Thank you.
|
There are many development tricks and habits that lie at the root of productive coding. IDEs, like RubyMine, are a big one. Adopting a new tool does require an initial investment of time though, as you customize your environment and learn the shortcuts.
|
10.5446/31546 (DOI)
|
Hello. Welcome, and thank you for coming. Some of you have seen the Indeed Prime booth out in the convention hall, and some of you have stopped by — thanks for coming through. This talk is a bit more about the candidate, the job seeker, and their experience through the Prime process, though from it you can also infer the client side: how those who are hiring use the platform, and how the job seeker is vetted and allowed onto it. But first and foremost, thank you and welcome. As you can tell, we are Indeed Prime, and this is "Priming You for Your Job Search."

We are essentially a startup within Indeed. For those of you who don't know Indeed, we're the largest job search site in the world. This is our mission statement — "I help people get jobs" — which turns into "we help people get jobs," and it carries through everything we do at Indeed. It carries over to Indeed Prime as well: we help people get jobs. The goal of Indeed Prime, as the startup within Indeed, is to focus on a certain vertical — what is most important to the internal recruiter, where they find value, and how we can help that along. We're still helping people get jobs, just in a different way, one that brings a little more value to an internal recruiter.

So what is Indeed Prime? It's a two-way platform where top tech talent and top tech companies can find one another through us. The talent side is well vetted — your work history, your education, a HackerRank test we give — just to make sure we're finding top talent that is active and ready in their search for a new opportunity. On the tech company side, the companies who are hiring also fit the criteria — everything from front end to data science to product management — and they're active in their search as well. So there's motivation on both sides, a meeting of the minds: talent that is well vetted and looking for a new position, and companies that are looking to hire. They meet on Indeed Prime.

Very important to the job seeker: which companies are actually using Indeed Prime? These are just a few that are in the process of making hires, and some who already have. As you can see, it runs the gamut — from a small startup to a large, established e-commerce company or ride share — and every industry in between. People do have preferences, and that's something we like to know. Some people have tried the startup route and are worried about funding — is my job safe? — so they tend to gravitate toward something more established. Or vice versa: I've been at a large established company and I'd like to get on a smaller team, help my team more, and be a more integral part of it. So those are some of the companies making hires.

With all this comes a bit of repetition with things that already exist in the space, so the question becomes: how is Indeed Prime different? Well, first and foremost — sorry to all the third-party recruiters out there — we are looking to make sure you don't exist anymore. My apologies, but there has got to be something better than somebody who has an ulterior motive, whether it's their commission or whatever their need is in that third-party space. That doesn't exist on Indeed Prime.
We say we aren't recruiters, even though we have tech recruiting backgrounds — that just helps us vet a little more, ask the right questions, and figure out where you'd like to be going. We are not commission based; we don't have any commission-based incentives. And lastly, we don't allow third-party agencies to post to or pull from the site. That leaves us with a 100% service-based product. As long as there's open communication between the job seeker and Indeed Prime, we can get behind you in whatever you're excited about. Hopefully that in turn creates a relationship: if and when you do need to move, you use us and refer friends, knowing that our motive is truly to get behind you in whatever you're excited about.

You get contacted directly, and one of the big keys for us is that salaries are always up front. If you go to indeed.com/prime, filling out that profile is essentially also the application to get onto the platform, and one of the requirements is a minimum base salary expectation. When a client comes on, we ask for a minimum salary expectation as well. Yes, it's negotiable — it's not written in stone — but it is up front. We need to make sure we're on the same page and that neither you nor the client wastes time, only to find out two weeks into the process that we're off by a decent amount of money and this would have been a no-go had the information been presented up front.

So, to the job seeker: how much does Prime cost? That was one of the top questions at the booth over the past few days. Nothing. We're not putting the cost on you, because again, that mixes up the motives of what's being exchanged. You're well-vetted talent — what you've done, your work experience, your education — and that should speak for itself; it's what's in demand. We feel you should be on there having companies reach out to you, companies essentially competing for you. So it's free and easy to use. It really is indeed.com/prime, and you can sign up in roughly three minutes.

This is the first screen you're presented with once you've clicked Get Started on the indeed.com/prime website. A few basic things. The one off to the right — the phone number — is actually very important; I know how reserved people want to be with that number, but it comes into play in a big way down the line. Then a few basics: what's your career path, your LinkedIn URL so we can get everything lined up and start to get a sense of your digital footprint, what roles you've done, and what roles you're interested in. That last one is key, because sometimes somebody says, "Well, I'm a full stack and back-end person, I have ten years of experience, and engineering management is on my roadmap." That's something we'd like to know. So there are two ways to read that question: roles I'm interested in because I'm currently doing them, and roles I'm interested in because they're on my roadmap. Then years of experience in that career path, plus drop-downs for as many languages, databases, and frameworks as you can come up with and have experience in. You can more or less cut and paste your job history and education — just make sure it's all correct for us. The next screen asks what you're looking for: full time, contract, internship; your location preferences; and your work authorization — companies are well aware, they understand.
Job search status is another big one for us. Like I said, we want highly motivated people who are looking for jobs, but we completely understand if you want to put your information in, have a profile set up, and indicate that you're passively looking — because at some point you may well be actively looking, and in that case everything is already set up for you. Like I said, fill out the minimum base salary expectation; again, it's negotiable and truly a minimum, and you can work on equity and bonus structure down the road in the interview. Relocation too, if that's applicable. And at the very bottom is the answer to the other big question of the week: how do I search without my current employer knowing? It's a very key question — we understand the need for anonymity in a search — which is why this field was put in: hide my profile from whom? Your current employer, or past employers you'd prefer not to go back to. You can list whatever combination of companies you'd like there.

So where are we located? One of Indeed's main locations is Austin, Texas, and Prime is there as well — it was the first ground-to-ceiling product built there. We have offices in numerous cities, and Prime's expansion covers six key hiring markets: Seattle, San Francisco, Los Angeles, Austin, Boston, and New York. On the roadmap, coming soon in the next couple of weeks, are Denver, Chicago, Atlanta, and DC, and then international shortly after that.

So are you alone in this? Do you just fill out a profile and then move on? No. When you sign up, you actually get your very own personal talent team. They'll be there to help you from the moment you sign up and are looking for a job all the way until the day you're hired. The first person you come across, after you've been reviewed by the review team, is a talent ambassador. They're the first point of contact — and one of the reasons that phone number is really key: we can exchange a lot of information in a short amount of time. We can also do it by email, but a phone call really gives you a sense of the person. Throughout all of this, we're your advocate. We want to know a little more about you and make sure we understand where you'd like to be going and where you're coming from. So that phone call is really important to confirm all your information and basically start to align your digital footprint, if it isn't aligned already. There's a spot to put in your GitHub and LinkedIn; make sure your LinkedIn and your profile definitely match up on dates, job descriptions, summaries, things like that. If you have a personal website with work you've done — this is more important for UI/UX people, who like to see samples of your work — include it. What have you done as far as hackathons? Have you won something? Are you an international chess champion that we just don't happen to know about? Those are little things that can help you stand out. So that conversation is about getting a good sense of you: learning more about your current situation and what you'd like to do next, and of course explaining how this entire process works, since it's mostly new for a lot of people. When we're done, I attach my notes to your profile and it moves on to the next person, our talent writer, who takes your profile and incorporates everything from our notes.
They also go to LinkedIn to see if there's anything we're missing, and they create a nice little tagline — a little something to try to capture the eye of that hiring company. After the writer, it's time to get launched on Indeed Prime. With that complete profile and something to grab a company's attention, you go on the platform for three weeks. In that time we hopefully create a little urgency — that's the point of the three-week window — so the companies are a little more motivated to reach out to you: this person is on now; if the skills match what you need, go ahead and reach out. Also, who knows — somebody at a company could have been fired for something dumb in week one, the company starts looking in week two, and they contact you in week three. After that next three weeks it's case by case: sometimes we have little powwows about what we can do differently, how we can highlight you, just to make sure you're getting the full value out of this platform that you were expecting.

So do we stop there? Are you left alone — I'm on the platform and now Indeed Prime has washed its hands of me? No, you've still got an advocate on your side. To talk about this next is Travis, one of our talent consultants; I'll have him take it from here.

Thank you, Mark, appreciate it. Good afternoon, everybody — how's everybody doing? Good. So yes, I am a talent consultant, slash career coach, at Indeed Prime. Once your profile is live on the platform for that three-week period, everybody is assigned a dedicated career coach to help facilitate the interview process as you go through multiple interviews. Some of our candidates have been contacted between five and ten times while their profile is live, which can be quite unmanageable for some people, as you all have busy lives and daily schedules to attend to just as well. So we are here to facilitate that process for you. Some specific areas we can help you with are salary negotiations, resume help, interview tips, and mock interviews.

Salary negotiations: how many of you in this room have received an offer that was less than what you expected? Anybody? It can be quite nerve-racking not to get the offer you really want. That is where my role comes in: I can provide tips on market salaries, on geographic salary differences depending on the location you're looking to work in, and on how your skill set can be the factor that gets you the extra five or maybe ten thousand dollars that makes the offer reasonable for your next gig and opportunity.

Also, the big one is resume help. I'm sure all of you are quite talented on the technical side, but I have seen some resumes come across my desk that are just not up to par. So I'd love to extend my service and provide resume tips — edits and changes to make your resume as marketable and presentable as possible, in line with the industry standards that hiring managers and recruiters are looking for.

Interview tips: there are many companies currently enrolled on our platform, from startups to Fortune 500s. Many of you know about the Ubers, the RetailMeNots, the Auction.coms, but many of you may not know about the small startups.
So we have a whole section of materials to give you company backgrounds — their financial status, revenue and financial forecasts, their locations, and what opportunities are available across the country at particular companies. And lastly, mock interviews — a lot of people exercise this option, which we have available so that you're comfortable, fully prepared, and ready to go into your interview with whichever company you may be interviewing with at the time. We have situational-based questions, behavioral-based questions, and a slew of technical questions that we gather from previous candidates who have interviewed with these companies, to make sure you are well prepared, your coding abilities are up to par, and you're confident and concise in your answers — ready to interview with the manager so you can win the job you're looking for.

And finally, the ultimate goal of the entire interview process through Indeed Prime is to get the job you want and get hired. So, by a show of hands, does anybody here like free money? Okay, awesome — I'm glad all your hands went up; that's great to see. Once you are hired through the Indeed Prime platform, we give you a $5,000 signing bonus, just for landing your new opportunity through Indeed Prime and to say thank you as well. It's quite the incentive to use our product and our platform. Another opportunity to make money through Indeed Prime: if you're not on the market right now and not an in-demand job seeker yourself, but you have friends who are in the industry or on the market, we offer a $2,000 referral bonus, which is quite nice. It's a very easy way to make money on the side just by referring your friends over to Indeed Prime — hopefully they're able to land a job, and we will send you a check for $2,000. And if you would like more information about Indeed, please visit our website, www.indeed.com. Now I'd like to hand it over to my colleague Mimi, who will be talking about resumes, resume types, and reviews.

Awesome. How's everyone feeling? Good. If you guys want to stretch a little bit, open your chest. So my name is Mimi; I am a career coach with Indeed Prime. One of the really cool things we offer is a resume review. A candidate actually came to me today and said a company is charging 500 bucks to have his resume reviewed, and I said, get your money back — but he couldn't. How many times have you seen a resume, or put something on your own resume, and questioned it — you don't know if it's putting your best foot forward or really showcasing yourself? Here's one I thought was interesting: "an ability to smell fear" is a quality I've never seen listed on a resume before — probably not something you should put on your resume. It takes about three seconds to make a first impression. How many times have you met someone and literally said, "I never want to talk to that person ever again in my life"? It really is important to make sure your overall personal brand is a good representation of what you want to present: I want to be seen as a good person, and I want to make sure my resume shows that. Poor-quality resumes: your possible ESP is maybe not something you should share with employers.
I'm sure they're really happy that you can sexy dance, but they don't care. You want to make sure your skills are listed very clearly and that you're putting your best foot forward — again. Overcomplicating your resume and having too much content can also be an issue. Of course we want employers to know "I'm good at this, I'm good at that" and to show everything we can do, which is great — but you don't want so much information on there that it stops being readable and understandable. Also, internationally it's common to put your photo on your resume. I personally believe it's good to keep that anonymity — even though, of course, someone can go to LinkedIn and find your information and your photo — because with that anonymity you're really just showcasing your skills and your past experience, rather than "hey, here's who I am, and I'm kind of a cute guy." Employers should only see your skills and your past experience.

So let's talk a little bit about resume formatting. The chronological resume is good for candidates who have had career progression. If you started as a junior developer and you're now an engineering lead or engineering manager, this is the perfect resume for you. Why? You're showing your progression, what you've done throughout your career, why you're a good fit for that company, and how you've used your skills to advance. A stable work history is something you want this resume to convey, so it works best when you have few breaks in your overall experience. Listing your dates in chronological order is perfect for this format: it shows you staying in the same field and that your past job titles match your current requirements and what you're looking for. And here's an example of basically what I just said: starting at a junior level and progressing throughout your career.

The functional resume is one I actually like to recommend to interns, because you want to show your past experience and your projects, but you don't necessarily want to spotlight the fact that you were an intern — maybe you want to focus more on your skills. Maybe you're changing careers; this is a perfect resume for that too. I think it's the format I use personally, because again it showcases your skills and puts your best foot forward in that light. And unfortunately, sometimes recruiters only look at keywords, so you have to showcase yourself as much as possible. I think it's also really good for people re-entering the workforce after an absence, because you're not focusing on years of experience and gaps in employment — you're focusing on your skills. And here's an example of that resume: as you can see, skills and development are brought to the forefront, along with the qualifications.

Again, what are employers looking for? It's important to know that no matter how you format your resume, ultimately everyone is looking for the same thing. As long as you're showcasing your skills and your past experience, employers are mostly looking for keyword matches and making sure your resume aligns with the job description. Next up: your career profile.
A lot of people remove the objective from their resume, and again, that's a preference, but your objective gives an overview of your experience and what you want to do. It gives the employer a brief snapshot of your thought process and, again, where you want to go in your career. Your experience: I tell my candidates all the time to take out any irrelevant experience — not to hide anything, but if you want to be an engineer and you list that you went to astronaut camp one time, that's really cool, but it's not pertinent to your resume. Again, skills: I harp a lot on your technical skills, because obviously that's what an employer is looking for, but make sure the skills you showcase are relevant to the job you're seeking. Education is great to include there, along with any licenses, patents, and certifications. And again, your projects — I especially encourage my interns to put projects on their resume. A lot of people don't include that information, but it's pertinent, especially if you're coming from little experience and trying to start and develop your career.

So, Indeed Prime. But let's first back up and say: "I don't know what I want to do; I need someone to help me figure out what I actually want to do with my career." That's another facet of my position. I've sat with candidates and spoken with them for hours on end about where they want to go in their career, what they want to do, and how we can help them develop to get to that point. Knowing yourself is the first step. Knowing where you want to go is the second step. Figuring out your personal brand is the third step. And once you've figured out those three things, you're ready for your job search, because you know where you want to go.

With Indeed Prime, again, the sign-up process takes about three minutes. These are some of our clients — Dropbox, Evernote, Facebook — great clients that work with the company. But above everything, make sure you understand your personal brand. If you have any other questions, feel free to stop by the booth and grab some swag — a lot of awesome stuff. Here's all of our contact information. Thank you again for your time, and have a great rest of your conference. Thank you.
|
Indeed Prime is the job-search industry’s newest disruptive product. Prime takes the best job-seekers, works hard to make sure their profiles are perfectly polished, and puts them on the Prime platform, where our exclusive group of clients come to them. With Indeed Prime, jobs come to the job-seeker. In this session, join Indeed Prime’s expert group of talent specialists as they set time aside to help you practice interview questions, edit your resume, and prep for the next step in your career.
|
10.5446/31547 (DOI)
|
Good afternoon. My name is Mickey Risenes — can everyone hear me okay? Awesome. Welcome to the talk. I work for Spreedly in Durham, North Carolina. I'm a software engineer, and my Twitter handle is @MickeyRez — feel free to reach out. At Spreedly I develop on Active Merchant and other Spreedly things, and I also wrote a library for decrypting Android Pay tokens. All of which is to say I have some software credibility, I guess — it's always up for grabs.

A little background on myself. I married young, had five children, and homeschooled them for sixteen years. From the time I was eighteen, I've been tutoring or teaching math for money, which is always great. I've coached roughly fifteen high school sports teams, two of which have won state championships. All of this is just to say that I'm used to being on top: the one in charge, the one with the answers. It's a really nice, fun place to be. Roughly six years ago I returned to school to finish my degree, because I was going to return to work and my husband was going to come home and stay with the kids. That marked the beginning of my journey back up the totem pole — and it's very enlightening to put yourself at the bottom again. A quick rabbit trail: I suggest that anyone in a position of teaching take the opportunity, sometime, to put yourself in the position of the learner, if only so you can realize all the things you don't want to be as a teacher. In this industry specifically, you have to work through your fears a lot, because there are so many times you have to say you don't know this or that, and it can be scary to admit it. So I recommend something with a little fear factor in it — not necessarily roller derby, but it was fun.

When I first started putting this talk together, I wanted to make it apply to basically any new dev who comes to your company. And I did — except I realized as I went through it that I was really projecting some of my preferences onto "any new dev." I don't think you can get away from that, but what I can do is tell you what my preferences are, so that when you see those heavy threads you know they're mine. One: I have to have regular feedback, clear evaluation criteria, and defined expectations. If I don't have those, I'm unhappy. (I get unhappy a lot, just so you know — you're going to see this.) If I'm not growing, I'm unhappy. I love teams, so when I feel isolated, I'm unhappy. And I believe that training is a two-person activity: if you're in a position to train someone and that person is not willing to put in the work to learn, then you've hired the wrong person and this talk is not for you — go find a hiring talk to figure out how not to hire those people.

There are three basic teaching opportunities you're going to run into when you hire a new dev. The first is onboarding, which lasts about six to eight weeks, depending. The next is continuing education, training, or mentoring, which is unbounded in time — as an engineer, this should span your whole career; you should always be learning, and preferably have someone to help you along the path, even if it's just a peer once you reach the senior level. And then there are routine daily questions.
I'm going to break knowledge down into three categories as well. First, there's tools-of-the-trade knowledge: your languages, design, data structures, databases — the kinds of things you'd learn from a wiki or a book, more academic learning. We'll call those tools of the trade. You'll also have to teach domain- or industry-specific knowledge. For example, I work at Spreedly; we run credit cards and help people process them, so I have to know a lot about PCI compliance. It has nothing to do with computers, but if I don't know it, I can't write the program. And then there are company policies and the development process — basically company information. Those are the three types of information you need to know how to deal with.

And there are two types of new developers — I'm going through all these types; who doesn't like a slide full of text? You have experienced developers who are new to your team, and they clearly have different needs than junior developers who are new to all the things. A lot of the same teaching principles apply, but juniors will need more help in the tools-of-the-trade area.

Now, an overview of teaching and training. (If I keep talking when it seems like I should move to another slide, just flag me down — I'm really cool with that.) Here is everything you need to know about the point of teaching and training: the point is to reduce unknowns. If training isn't reducing unknowns, what's the point? You're doing something wrong. So throughout this talk, and when you go back to wherever you are, you always have to ask yourself when you're training: am I reducing unknowns? People will make training about all kinds of other things, but really it's about conquering the unknown. For example, I'm onboarding someone at Spreedly right now. In the first week you're just talking with them, and I say — and my lead, Ryan, says — "How are you doing?" And he says, "I don't know what you guys want me to tell you when you ask how I'm doing. My stomach's upset? I had a great breakfast?" No. So then I had to really quantify it — it forced me to quantify. He had an unknown: there was an expectation I had and I really hadn't communicated what it was; I'd asked this general question. So I said: when I ask how you're doing, I want to know three things. One, do you have something to work on? Two, are you blocked? Three, is there some question I can answer for you? That was a teaching opportunity, because he had an unknown and I had to fix it. And every time you have a new developer and you fix an unknown, they're more comfortable and they become more productive.

Most of us are familiar with this iceberg thing, and most of you probably know where I'm going with it — totally projecting. We have known unknowns, things we know that we don't know. For example, there's a distance between Earth and Pluto; I don't know what it is. That's a known unknown. I can't give you one of my unknown unknowns, because I don't even know that I don't know it, but I can give you one of my daughter's: I have an eleven-year-old daughter, and she doesn't even know that she doesn't know what a data structure is. So we have known unknowns and unknown unknowns. Basically, the top of the iceberg is what we know we don't know — and you know where this is going.
And the bottom represents all the things we don't even realize we don't know. But the problem is that this picture is wrong: the bottom of the iceberg is woefully too small to represent how much we don't know. And thank God junior devs don't know this, because I think they would all run if they understood how much they didn't know. Here's what happens for experienced devs: you encounter a problem, you solve the problem, and then you realize you were able to solve it even though you had no idea how to solve it when you started. That gives you confidence that even when you have no idea how you'll get to the finish line, you've gotten to the finish line tons of times without knowing what you were doing, so you can just do it again. But junior devs haven't experienced that success yet. You really need to get them to stick around long enough — and expose the fact that you don't know what you're doing sometimes; it's really important for them to see that — to learn that software engineering is about starting with all kinds of things you don't know, whittling those unknowns down, and ending up with a solution.

So when you go back and try to assess how you're doing — am I teaching better? am I doing a good job training? — you have to ask yourself two questions. The first: are you reducing unknowns? It's shocking how many people can teach for a long period of time and never reduce a single unknown. You can fall into this: if you're preaching to the choir, it doesn't help anyone. So make sure you're reducing unknowns. The second: are your devs acquiring more problem-solving skills? The fact of the matter is, junior devs don't have as many problem-solving skills; experienced devs have more. But whenever you change stacks or change companies, there are going to be new problem-solving skills you'll have to learn for that stack or that company. For example, when I joined Spreedly I was setting up my machine and I couldn't get things to run. Was Postgres running? Well, I thought it was. It wasn't. Now, I'm not a junior dev, but I needed help in that area, and that was helping me with a problem-solving skill. And on my very first team I had a wonderful teammate who helped me learn a great problem-solving skill: she would send me "helpful" links. At first you're like, ha ha — and then you're like, damn it, why did I just ask that question? But she really helped me, because junior devs really do need to go to Google first. If you haven't asked Google, don't ask me. That is a job skill, and they should have it.

So now we're going to go into the seven laws of teaching. I was a teacher, as you saw in my bio, and I like teaching — it seems like my favorite thing to do besides skate. The seven laws seem straightforward, and people mess them up all the time. They come from a book written in 1884, and all classically trained teachers should have read it. I'm going to go through the laws and apply them to software. The law of the teacher: the teacher must know that which he would teach; therefore, know thoroughly, clearly, and familiarly the lesson you wish to teach.
Now, because there are three types of teaching opportunities, I'm going to go through them individually for this law. First, onboarding. Many people feel like onboarding is a one-person show — it's not; it should be a team or company effort. If you don't know what you're supposed to be teaching during onboarding, you can't onboard someone, and you can't make that decision alone unless you're the manager or the owner. There should be specific things you're trying to teach people during onboarding, mostly company-specific information — of the three types of knowledge (computer science, domain, and company), company and domain are going to be big during onboarding. Once they finish the six to eight weeks, they should have an idea of what the company is about; they shouldn't keep uncovering new company things as they go. The important thing is that you can't figure this out on your own — it's definitely a team issue.

At Spreedly, my lead Ryan documents anything he has touched, and I love that. (The flip side is that if you touch something, you have to document it too — I hate that.) It was priceless coming in, because there was always documentation on anything I needed. Shortly after I came in, three more developers came on, and what we realized is that, even though there was no onboarding process, all of us did the same things for the first six weeks. So we had a process — we just hadn't written it down. What Ryan did was go and create a new engineer's handbook, and it has been invaluable. It covers everything from setting up your machine, to what kinds of work assignments you'll have, to expectations like office hours — basically all the things that people who have been at the company for six months to a year just kind of know. You want those things in your handbook. You want to answer as many unknowns about the company as you can, to get that off their plate so they can really start coding and talking about the domain.

Some other things you can do during onboarding: make sure they understand your dev process and the working agreements on your team. For example, do you have a definition of done-done? They should know what that is, and pretty quickly. Present your style guide — you may not have one, and that's okay, I'll get on you about it later — but if you have one, it's nice for them to know how you want them to write their code, and if you don't, I suggest you adopt one. Do it not so you can hammer people over the head with it (although that's fun too), but so that everyone coming on knows the expectation. They should know how the team fits into the company as a whole, and how the developer fits into the team — basically, all of this is about managing expectations. And then tell them how the team deals with conflict before the conflict comes up. Coming into a new company is already stressful.
And if you run into conflicts right out of the gate — like that would ever happen — it's nice to know what the process is before you face them.

Then there's continuing education and mentoring. The goal here is unbounded, so it happens over weeks and months — hopefully years, if they like the team. The point of continuing education and mentoring is to take people where they are on your ladder, whatever level that is, have them master the skills at that level, and also push them toward the next one. Engineers want to grow, and if they feel like you're helping them master their skill set and pushing them to the next level, it's reassuring that they're not the only ones driving their career. So make sure they know what the engineering ladder is; go over it, and over what your expectations are for moving to the next level, because everyone wants to move on — and if they don't want to move on, you should probably get rid of them, because people who aren't getting better aren't worth it. I'm sorry, I'm very harsh like that.

Style guide reviews are a great thing to do during this time. Most style guides — for example, Airbnb has a JavaScript style guide that I spent a lot of time in at one point — explain why people made those style decisions, and I enjoyed knowing that. People can be flippant about style, just "you have to do it this way," but a lot of the time there are good reasons behind those decisions. Going over them with inexperienced devs is a good way for them to form their own opinions, because what you really want is for them to acquire strong opinions and hold them loosely: they should have an opinion, be able to discuss other opinions, and hopefully improve their own. And lastly, you need a core book list and a recommended reading list. Every company has things it values. When I was homeschooling, all my kids read the same set of core books, and what that did for us is that when we sat down at the table to discuss something, we could reference the same books. Want to talk about government? They'd already read the Federalist Papers, the Anti-Federalist Papers, John Locke, Rousseau. We could hit any of those and all my kids knew what you were talking about. It's awesome when you can talk in a group and you all share the same reference points. So a core book list can help get everyone on the same page, which is very helpful for having better opinions, holding them loosely, and being able to defend those decisions.

And the last piece of the law of the teacher: routine questions. For these, you need to have the answer or know how to find it. Teach them the process of how they're going to find that answer, because you can't plan these out. The same teaching goals apply: you need to be reducing unknowns and demonstrating problem-solving skills. So if, in the course of a routine question, you realize they're missing something they should already know, fill in that gap right then, because every time you fill in an unknown, developers are happier and more productive.

The second law is the law of the learner: the learner must attend with interest to the material to be learned.
Therefore, gain and keep the attention and interest of the pupils on the lesson; don't try to teach without attention. Part of this comes down to hiring: you need to make sure you're hiring people who want to learn. But you also need to provide an environment that is conducive to learning, which means you need to welcome questions. The reason you need to welcome questions is that questions expose unknowns, and you can't fix an unknown until someone has told you they have it. So you definitely want to foster that environment — and if asking questions is good, everyone should be doing it, not just the less experienced people. New devs should see seniors asking questions; it encourages them to ask theirs. Now, if you have an experienced developer who isn't asking questions, one of two things is going on: either they haven't reached their full potential, or they're just really good at Google. The second one is acceptable; the first one is not.

Another thing we did at Spreedly was create a Slack onboarding channel. It gives people a more private room — just the mentors and the people onboarding — where they can ask questions without having to expose all their unknowns to everyone in the company, which can get rough after a few weeks. Imposter syndrome is very common in our industry, and any level can face it. I've actually scripted — I had some really great actors and actresses play it out for me — this great little talk you can have with all new devs about imposter syndrome: no one knows what they're doing. It's a very helpful conversation, and everyone should hear it. We're all faking it until we figure it out. I realize some of us know things, but there are so many things we don't know, and we really need to make sure people realize how much experienced devs don't know.

Google did a study to figure out what made teams effective, and one of the things they came up with — I'm not going to go through all five — was psychological safety. It's basically defined as: can we take risks on this team without feeling insecure or embarrassed? Questions can be considered a risk. So if people can't ask questions without feeling insecure or embarrassed, your climate is bad and you need to fix it.

I also have this theory — I think it's mine, though someone might have said it before, so I'm not crediting them because I don't know: the number of regular interactions someone has with a team should improve their psychological safety. I'll give you two scenarios. First, a developer has 40 interactions with their team, and all 40 of them are questions. Second, a developer has 200 interactions; they ask more questions — 60 — but they've had 200 interactions. That first developer, with only 40 interactions, is going to become very self-conscious about asking questions, because every time they ask one, it's a reminder that all they ever do is ask questions. That can get heavy when you're new somewhere. You want to have something to offer, something of value, and your questions feel like a drain on those around you. Some people won't feel that, but some people will, and you want to watch out for it.
The person that has 200 interactions and 60 of them are questions. The questions are part of the flow, but they don't define the flow. And this is where team building comes in handy. You need to have some interaction. And it can be meals. Meals is great because everyone has to eat. And you want to make sure that your activity is inclusive. So drinking, I like it, but not everyone does. So you want to do things that everyone on your team will enjoy. Water cooler chats. It's nice to have a kitchen where you can kind of go out and hang out with people and chat with them. You need to have more interactions than just questions all the time for the new people at your company. A foosball ping pong. Where else am I right with that? Company events and parties. And then, you know, team building type events. So basically you want to provide some interaction for your new people at your company that don't involve them, have to ask questions about things they don't know constantly. Pro tip. Don't start explanations with clearly or obviously. It's a bad juju. Just don't do it. The law of the language. The language used in teaching must be common to the teacher and the learner. Therefore, use words understood in the same way by the pupils and yourself. So there are two keys to success for this. Don't use industry specific acronyms or domain specific acronyms that frustrates people. They don't know what you're talking about. And it's disheartening. And we do it a lot. And then you want to define your terms as you go. At Spreely, we developed a repo that had a glossary of terms in it. And that was really helpful. So we would point the new people to the glossary. We would try to make sure we're explaining acronyms as we go. But at the very least they had some recourse to go figure out what acronyms were when they got confused, when they saw things in docs, and they didn't know what it was. So you want to make sure that you're using the same language. And I'm going to get an example from when I was teaching math. Number 12 here. One day, this is like step graphs. That's basically what this chapter is on. And a student raised their hands. And it turns out that they had never been to a post office. So they couldn't understand the problem. The language of the problem was beyond them. They didn't know what was going on. And so in order to... It made me feel old, which was not nice. And I could have just explained what a post office was, but that would have, again, made me feel older. So I changed it into text messages where the first four or like, you know, 120 text messages are this much money and then additional money for every block of 15 text messages after that. At this point in time, phone plans weren't doing the whole unlimited text thing. So they understood the problem and they were able to solve it, but they really needed to understand what the problem was there. The law of the lesson. The truth to be taught must be learned through truth already known. Or begin with what is already well known to the people about the subject and proceed to the new material by single easy natural steps. So this just means you have to teach from the known to the unknown. And we do this all the time. If someone asks what a word means, we don't define the word they don't understand with more words they don't understand. We define that word with words they do understand. 
So when you're working with a new developer, you want to make sure that wherever you're starting your explanation, they actually know where you are in that pipeline. So if they're back here and you start here, your explanation is useless. You have to make sure that you back up to where they are and some of that is going to be you asking them questions. But I have an example of someone chaining things together to help them learn something new. And it's not the greatest, but it shows the chaining part. Michael Scott has an amazing mnemonic device by which he memorizes people's names in the crowd. I don't know if you guys are office fans or not. Right? Your head is bald, it is hairless, it is shiny, it is reflective, like a mirror. Your name is Mark. Yes, got it, it works. So what Michael Scott is attempting to do here is chain things together. He knows to learn something he doesn't know, which is a good idea. I wouldn't do that though. Okay, so the idea is that you want to teach from the known to the unknown. Random factoids don't stick. You have to make sure everything is cohesive. So when I was teaching math, when a student couldn't complete this problem, I would ask if they could complete this problem. More often than not, they couldn't. And you would think that by the time you got sophomores and juniors in high school that they would be able to do that bottom problem. But most of the time, if they had a problem with the top problem with algebra and fractions, they actually couldn't do regular fractions. And if I started the explanation just working on the algebra, they were gone. I had to make sure I backed up and went over fractions again, which seems silly, but it's just reality. So sometimes I think of teaching like throwing toilet paper at a wall, you know, and so you just throw a bunch of information at people and you see what sticks. And that's kind of, it is what it feels like sometimes, but the reality of the situation is that that kind of takes away that you can affect, you can change the effectiveness. And how you change the effectiveness is when you're throwing the information and it actually attaches to something already. If they already have information and you throw more information that attaches, then it will actually stick. If you don't, it's just all wasted words. And it's frustrating too, because what happens is if someone asks me a question and I explain it to them, and at first like let's say Arthur asked me a question and I start explaining out here. Now the first time I do this, he might say, wait, I don't understand how you're out there. Like, and he'll pull me back to where he knows, because, you know, he's communicating. But if I continually end up in front of him all the time, way out here with my explanation, he's going to start feeling the need to save face, because he's like, I never, I never, I don't know what they're talking about all the time. And so then you just going to get this, oh yeah, yeah, yeah. That's not good, okay. So you got to make sure that you're starting where they are, because if once people go into save face mode, then you've kind of lost the battle, because now they're just trying to make sure that you don't think that they're stupid. And then, you know, see past rule on that. You want to make sure you have an environment where they can ask questions, where they feel like they can learn and have some psychological safety. The law of the teaching process. 
Teaching is arousing and using the pupil's mind to grasp the desired thought or to master the desired art. Therefore, stimulate the pupil's own mind to action. Keep his thought as much as possible ahead of your own expression. Placing him in the attitude of the discoverer or anticipator. Most of us know this by like, Socratic method type things. And so, you're trying to make sure your students are making the lessons their own, not just don't tell them. So, I think the best value as teachers that you can provide to students or people that you're training isn't to provide answers, but to teach them the right questions to ask. Just like, if you became an excellent Googler, you're just more effective, because you know how to ask questions. That's basically what makes people good at Google. What question do I ask? So, but it can be time consuming to teach people how to ask questions. So, I want to go over first the benefits to just answering questions. Because there are a few. Just answering questions. It's easy. It's fast. And I think number three is the most important one. It makes us feel smart. And it's even better if we can make them not feel smart. You know? So, like, if you're just answering questions, you have to ask yourself. If you're playing this little passive aggressive thing, where answering questions and not really providing the backdrop for someone not to answer that question again, it's kind of passive aggressive and you need to knock it off, because it makes people not like working with you. And also provides job security. So, that's another big benefit here. So, just answer questions. These are the things you'll get from that short-term gratification, but long-term it's bad for the team. So, in education, we use a thing called question-answer flow, which is basically where I ask a question and you give me the same answer. And we do this over subject matter. And you end up memorizing things. And when someone asks a question, they'll automatically know the answer. So, it's like predictive text, but it actually works. You want to reuse the same words verbatim. It triggers the person to start hearing your words before they ask the question. And it's extremely effective for math education. So, here's some examples. I would ask my class, what does of mean in math? Now, I'm sure many of you probably already had this question, like, means multiply. And you definitely want people to have that, this is, what do you do here? And they know the answer. So, you're basically teaching them how to ask questions. When they read a problem and there's of in it, they're like, oh, of, I have to multiply. What's the first thing we try to do when we see an algebraic fraction? Factor and reduce. Now, this is a harder one. People don't like fractions and people don't like algebra, so put the two together and they just hate that. So, at least if you give them some way of dealing with that fraction, the first thing you want to do, try to factor, try to reduce. At least they have something in their toolbox to address that. So, when I joined my first team that had a working agreement for testing on a PR, I would put the PR out and I'd ask for a review. Someone say, you wrote test for that, right? Like, damn it. You know, nothing else. Put it out there. But that only happened a couple of times before. I'd be like, I'm going to push the button. You wrote test for that, didn't you? No, darn it. So, that question-answer flow, using the same type of flow to get someone on board is very helpful. 
So, new developers need to hear that flow, even experienced new developers need to hear the flow because the stack is different. So, you want to take them when you're working through a problem with them or you're working through any issue. What you want to do is talk out loud and tell them where you'd start looking. You know, like when you call customer support, is the machine plugged in? Yes, the machine is plugged in. You know, they're starting debugging from the beginning. But when you start working on a code problem, we already have all these questions that we're asking ourselves, but we're not telling people that we're asking ourselves and resolving the question. So, when you're teaching someone, you want to ask the question out loud and resolve it so that they can learn your question-answer flow and they may not adopt all of them. But when any questions that they adopt will help them tremendously in their debugging efforts and also to understand that you're basically asking your questions to get to your answer and they should mimic you on that. Alright. So, although repeating answers is really great and helps people memorize the answers and have that available, if they don't understand your answer, repetition won't help at all. So, you need to go back to make sure you're using words that they understand and that you're starting somewhere that they can actually follow you. And then those types of answers, when you repeat them, they actually do help. The law of the learning process. The student must reproduce in his own mind the truth to be learned. Therefore, require the people to reproduce the thought the lesson he is learning. Thinking it out in various phases and applications until he can express it in his own language. I think the takeaway here is that we definitely want people to make this information their own and we want to give them a chance to apply it. And we also have to accept the fact that people have different learning styles. So, you have social learners and you have solitary learners. And although a company might have a certain culture of mostly pairing or mostly working alone, you want to make some accommodation for people, especially in the ramping up on-boarding period to do things that make them comfortable. Because having to learn a bunch of stuff and being comfortable is very difficult. If you have to learn a bunch of stuff, but you can be somewhat comfortable, it's just so much easier. So, if you have people that like pairing, accommodating them a little bit on the pairing, if you're not a pairing shop, it would be good. And also, if you have solitary learners, but you're a pairing shop, you might want to give them some time on their own just so that they can feel comfortable learning the stuff and feeling comfortable in that situation. You can't talk understanding into people. And this is something that I saw when I was, three years ago I started skating so I could play derby. And basically you would go to practice and someone would volunteer to teach. And the best teachers were the teachers that taught you your lesson and then they let you work on it. And it was a mess. I mean, you know, you're all over the place. But the fact of the matter is it's a physical activity and no matter how much you understand the physics, no matter how much someone tells you how you're supposed to plow stop or how you're supposed to hit someone, you just have to hit them a lot to get that down. 
So, the idea is that you can only tell someone so much, and then you've got to give them time to work it out; you can't talk it into them. And people make that mistake. I'm going to give them the benefit of the doubt that they're trying to help, because the flip side of that coin is that they just like to hear themselves talk. So, if you find yourself talking a lot and someone's glazed over, ask yourself, am I helping them or do I just like to hear myself talk? Because you're going to have to adjust that. The law of review and application. The completion, test, and confirmation of the work of teaching must be made by review and application. Therefore, review, review, review, reproducing the old, deepening its impression with new thought, linking it with added meaning, finding new applications, correcting any false views and completing the true. I realize all these laws are like 1880s English. So, I apologize. But the idea here is that you want to review with people. I think PRs are probably one of the best ways to review: to go over the work to make sure that they're doing things according to style guidelines, or maybe just design, the way you want them to design things. But in order for review and feedback to be helpful and effective, the developer needs to understand the expectations and the evaluation standard. So, if you have not made these things clear, you need to. This is probably the number one frustrating thing for me: if I'm not sure what you want from me. Because I really want to do a good job. I don't want to just do a good job. I want to blow it out of the water. But if you don't tell me what that looks like, what it is to just meet the bar and where that is, then I definitely can't jump over the bar. And that's really what I want to do. And of the five keys to a successful Google team, one of the keys was structure and clarity. So: are goals, roles, and execution plans on the team clear? Because if they're not, you should fix that. Your team will be more effective. And then you need to establish a feedback cycle. Make sure that that new developer has a comfortable way to talk to you and for you to check in on them. And the feedback there should go both ways. You should be able to give them feedback about how they're doing. They should give you feedback about how you're doing. I coached a volleyball team from middle school through high school. And at the end of six years, we won the state championship. And we were just a bunch of homeschool girls. Some of them literally couldn't run when I first got them. So, like, to take them from that to this championship team was an amazing journey. And when I look at the other teams we played over those years, I watched a lot of other coaches with much better players, taller players, you know, club players. And we would just knock the snot out of them. We just beat them up. We made them cry. It was awesome. I mean, like, seriously, that is like the gold standard of coaching. Like, can you make the other team cry? I'm serious. So, I would watch these other coaches and I realized that really it wasn't a failure of the players. The players on those teams were fine. The coaches really didn't know how to use them. And the reason why is because the offense (and there is offense in volleyball) that they ran, or the defense, there was some breakdown.
The kids did not have the vision of the coach. And so, for six years, literally six years, I had the same exact huddle talk, so much so that if you're on my team for more than two months and I do this, you know the huddle talk's coming and you know exactly what I'm going to say. I'm going to tell you the three things you have to do to win a volleyball game. And then when we're in practice, I'm serious. And so, when we're in practice, we'll be serving. And I'm like, you go back there and you serve 10 serves and you better get four in because if you don't get four in, you know, like one, two, three, four, don't go in, you're letting your team down. I'm very clear about my expectations on that team because that is just the nature. If you want to win, this is what you have to do. So, my standards are really clear. I'm very good about giving them evaluation criteria. They can take that criteria and when I'm not there and the ball doesn't go to target when they pass and they shake it off somewhere, they know in their head, they already hear me yelling at them. So, they have that criteria and I'm telling you right now, those girls delivered. I mean, we whooped up big time. It's awesome. All right. So, the feedback cycle, I just covered that. High performers love to smash expectations. So, you're going to have like A developers, B developers, C developers. If you want your A developers to really take off, you better tell them what your expectations are because they're going to be aiming above those and they can't aim above what they don't know. And lastly, not only does being a good teacher help people learn, but it will help you learn too because when I'm talking with someone now as a teacher, if someone says a word that I don't know, I'm just like, I don't know that word. You have to explain it to me because I realize this is a breakdown in the teaching process. This isn't a Mickey's ignorance. I mean, I am, but that's not the point. The point is that they're trying to teach me something. They're trying to communicate something and I can't follow them because I don't know the words they're using or I need to say, you need to back up because I don't understand. And as a, once you understand how the teaching process works and the learning process works, you'll stop people and you will be able to be a better learner. Any questions? I know that I'm done and anyone can leave, but I don't want to not answer questions people have. We're good? Awesome. Thank you. You've been great. Thank you.
|
Your team gains a new developer. You are responsible for bringing them up to speed. While not everyone is a natural teacher, everyone can be taught basic teaching fundamentals. We will take a look at principles anyone can use to become a more effective trainer/teacher. Better teaching technique makes the training process more effective and enjoyable. Effective training reduces new developer frustration and increases job satisfaction for everyone.
|
10.5446/31550 (DOI)
|
Thank you everyone for holding till the bitter end, last session before the last keynote on Friday. My name is Jason Clark, so I'm here to talk to you about real world Docker for the Rubyist. And this talks, Genesis really comes out of the fact that the company that I work for, New Relic, deploys a lot of our services using Docker. And I hear a lot of hype about Docker. I hear a lot of people saying you should use it. And then wildly diverging opinions about how to use this tool. Docker turns out to be a toolkit that gives you a lot of options that you can pick from. And so what I wanted to do was I wanted to give you a presentation that tells you a way that you can approach Docker. This is tried and true, tested stuff that we've been doing at New Relic for actually the last couple of years. We got into Docker pretty early. So we've experienced a lot of bleeding edges and we've experienced a lot of things that have made our lives easier. So this talk is going to take the shape of a story. And this story is going to be about two developers. Jane, who is a new New Relic employee and has a great idea for a product, a service that she wants to run. We encourage experimentation so it's a lines of code service that will do metrics for how many lines of code you have. Super useful. Like, you know, we want to let people experiment and see how that goes. And Jill, who is someone who's been at New Relic a little longer and has some experience and can help answer some questions. So as we are a public company, this is our safe harbor slide which says, I'm not promising anything about any features. Please don't sue us. Please don't assume anything based on me making up stories about services that we might develop. Okay. So I'll clear, this is a bit of fiction, but it will help us frame how we use Docker and give you a picture of ways that you might be able to apply it. So you know, one of the first questions that Jane has as she comes in is, why does New Relic use Docker at all? What is the purpose and what's the sort of features that drove us to this technology? And one of the big components of it is the packaging that you get out of Docker. So Docker provides what are called images. And an image is basically a big binary package of files. It's essentially a file system, a snapshot of a piece of code and dependencies that it has that you can then distribute around and run. At this point, Jane's like, okay, I've heard about this. There are images, these images you can build off of them. You can take, so for instance, there's a official Ruby image that's maintained by Docker. You can use that image and then put your Ruby code into an image that you build off of it. And Jane pauses Jill at this point and is like, okay, so this is slightly confusing. I've heard about images and I've heard about containers. And what's the relationship here? And so the relationship is that an image is kind of the base of what you're going to run. It's sort of the deployable unit where a container, which does not really resonate for me in the term, but is the running instance of something like that. Now the way that you can think of this is if you draw an analogy to Ruby, the image would be like a class in Ruby. So this defines what's there, defines what's possible and what's installed. And then the container is like an object instance that you have new. It's a individual running piece of that. 
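To make that analogy concrete, here is a rough sketch; the image name loc-service is just a stand-in for whatever image you have built locally. One image can back any number of running containers, much as one class can have many instances.

    # one image, many containers: roughly "class" versus "instances"
    docker run -d loc-service     # start one container from the loc-service image
    docker run -d loc-service     # start a second, independent container from the same image
    docker ps                     # list running containers; both show up here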
Okay, so we've got Docker images and then those we can deploy as running containers that do our application and run our code in our staging and production environment. But there are lots of ways to package up your stuff. I mean, you could just shuttle files out there. You could make a tar out of it. So that's not enough to tell us why we would want to use Docker. And that brings us to the other major thing that Docker brings, and that's isolation. So for most of us, we don't have our app set up in such a way that one host will be completely maxed out by an app that's on it. We may want to be able to share those resources and run multiple things across multiple machines to increase our resiliency and use our resources well. And traditionally, you might have done it in some fashion like this. You have your server, you've got different directories where you keep the different apps that are there and you deploy and run those things on that given host. Well, the problems here are pretty obvious when you look at it and see these things sitting next to each other. They're all sharing the same neighborhood. They could interfere with each other's files. They could interfere with processes that are running. They're sharing the same memory on the machine. And there are lots of ways that these two applications that are running there might interfere with each other. Docker gives us a way to contain that, to kind of keep those pieces separate. Now they still use the same kernel. This is not like a VM where there's some separate operating system. But Docker provides insulation so that each of those running containers appears to itself as though it is the only thing that is in the universe. It only sees its file system, a subset of that. It has, you can put constraints on how much CPU and memory it uses. And so it minimizes the possibility of those two applications interfering with one another despite the fact that they're running on the same host. So this is a pretty attractive thing for us to be able to have sort of shared hosts that we can deploy a lot of things to very easily without having to worry about who else is in the neighborhood. All right. So clearly, you know, Jane's new developer has shown up. How do we get started? Well, Docker is a very Linux-based technology and it has to be running on a Linux system with a Linux kernel. And you know, a lot of us here don't run Linux systems directly. We run Macs or Windows. And fortunately, Docker Toolkit is available. So this comes from Docker. It's the kind of sanctioned way to set up a development environment and be able to get the Docker tools installed on a non-Linux system. So once we have that, then we can get down to actually writing our own images to construct an image for our app that we want to deploy. So Jane, you know, sits down with Jill. They're pairing. And Jill has her write this in a file called Dockerfile in the root of her application. And you know, Jane recognizes a little of this. She had done some reading about Docker. That from says, what image should I start from as I'm building the image for my app? But that's all that Jill tells her to write. And she's like, well, shouldn't there be some other things? Like, this looks more like a Dockerfile that I've seen, you know, has working directories and copies and runs and a bunch of shell commands and things that are setting things up. So Jane's really confused about what's going on. You know, this is an image that we're using from New Relic that we've got. 
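As a sketch, the entire Dockerfile Jane writes might look something like this; the base image name is a placeholder for the shared internal image discussed later in the talk, not its real registry path.

    # Dockerfile -- yes, this is the whole thing
    # "base_builder" stands in for the shared internal base image
    FROM base_builder:latest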
But where's the rest of the Dockerfile? Jill says, OK, this is a fair question. But you know, running code's awesome. Let's get your thing deployed to staging, and then we'll dig into this later and look at how that very simplified Dockerfile actually provides us a lot of value and shared resources. So having written this basic little Dockerfile, Jane goes to the terminal and writes this command line. So it says docker build; the minus t provides a tag for the image that we're going to construct, and that dot tells it to work in the current directory. And then once we've done that, there will be a whole bunch of output that appears at the command line as Docker goes through, takes that base image, and then runs the various pieces that are baked into that image to build out a package of your app. Now if you have errors in your Dockerfile or you have problems, file permissions, things that go wrong, this would be the point when building that would tell you. You'll see output from those commands there. But once it's successful, if we ask Docker what images it knows about, it will give you a listing, and here we'll see our LOC service image that we built. It gave it a default tag of latest because we didn't tell it to give it a particular tag. And that image is a runnable copy of our application that we can do something with. This is all well and good for Jane on her local machine, but clearly if this thing's going to go into a staging environment, that image needs to get from her computer somewhere else. And to fill this gap, there are a variety of things that you can do called Docker registries. Now by default, Docker runs one called Docker Hub. This is what all of the Docker tools will default to if you don't specify as you push and pull images. It's where it will look for it. There are alternatives though. So at New Relic, we ran into a problem when they deprecated the version of Docker you could use more quickly than we had moved some of our systems off of it. And so we had to go looking for some alternatives as well. One of them that we've had pretty good success with is called Quay. I know it's spelled kind of funny, but that's how the word gets pronounced: "key". And that is very similar to Docker Hub. It provides you a nice web UI. You can push and pull images. They have a paid service, so you can have those be private. And so that's been one of the major alternatives that we've gone to as we've moved off of Docker Hub. Another alternative as well is a piece of software out there called Dogestry. Now Dogestry is a little more bare bones, but what it lets you do is it lets you store images on S3 in your own S3 buckets. And so it sort of takes that third party provider out of the picture, which can be important if you have critical deployments. If our deployment depends on Docker Hub being up and Docker Hub is down, we can't deploy our stuff. That might be a problem for you depending on your organizational structure and scale. All right, so we have an image. We have this picture of what Jane's service looks like that she wants to get running. So she wants to go get this started up out in our staging environment. And so how does she do that? Well, at New Relic, we developed a tool called Centurion. Now typically, if you want to just run a Docker image and create a container off of that that will start your application up, you would say docker run and then the image. And that image has a default command baked into it, which is what will get invoked. And then this starts running.
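Pulling those command-line steps together into one place; the image name and tag are illustrative.

    # build an image from the Dockerfile in the current directory, tagging it loc-service
    docker build -t loc-service .

    # list local images; loc-service shows up with the default "latest" tag
    docker images

    # start a container from that image, running the default command baked into it
    docker run loc-service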
If you run it in this fashion, it will be blocking. You'll see the output that's coming out of the container as the commands run. So you can imagine that this is something you could go out to a machine somewhere in the staging cluster and go tell it to Docker run these containers. And that would work. But unfortunately, if your company gets to any sort of size and scale, you probably want things running on multiple hosts. And you probably have a lot of computers that are out there. And interacting with those individually is problematic. And so that was where Centurion came in. Now, this is certainly not the only way to solve this problem. And I'll briefly refer to some other possibilities later on. But when New Relic started with Docker, these things didn't exist. And so Centurion is a Ruby gem that allows you to work against multiple Docker hosts and easily push and pull and deploy your images and do rolling restarts and things like that. One of the other big powers that Centurion brings is that it is based off of configuration files. And these are things that you can then check into source control, you can version and have a sort of a central point where you know what's deployed in your Docker environment. Rather than individuals going out to boxes or starting containers that you don't know anything about. If you run everything through Centurion, you have a central record of what's actually going on. So Centurion bases these configs off of rake so that they have some amount of dynamic programming that you can do in Ruby. So you define a task for a given environment that you want to deploy to. So in this case, we've made a task for our staging environment. We tell it what Docker image we wanted to pull onto those hosts and that allows us to have it grab the latest. You can also tell it different tags. So if you had different versions of the service and you wanted to deploy a certain one, you could do that. And then to handle that issue of having lots of hosts that we might want to start on, you can specify multiple hosts that Centurion will then go and restart or deploy these services too. So with that, it's pretty easy to get Centurion started. It's just a gem. You install it and it installs an executable for you called Centurion, unsurprisingly. There's a number of flags that it will take, but the basics are you tell it an environment, you tell it the project where it should find the configuration, and then you give it a command. There's a couple of different commands. We'll just give it a deploy and say go out there, start these things. So Jane's a little nervous. She's hardly been here at all, but she asks, does this all look good? Are we ready? Yeah, let's go. She kicks the Centurion command. And what you'll see is a lot of output as it connects to the various hosts. It will go through them. It will pull all of the images down to those so that all of the boxes that you need have the image that you're going to then start with. And then one by one, it's going to stop any container that's running for that particular service on that box and then start a container up for you based off of that config. So after it's connected, there's also options that will let it hit a service status check endpoint so you can do rolling deploys where you make sure that things are actually up and running before you start to the next host and shut things down. All right. So having done all of these things, it's been shipped. Things are in staging. 
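A sketch of the kind of Centurion config and invocation being described; the project, image, and host names here are made up, and the exact options live in Centurion's own README.

    # config/centurion/loc_service.rake (illustrative path and names)
    namespace :environment do
      task :common do
        set :image, 'example/loc-service'       # which image to pull and run
      end

      desc 'Staging environment'
      task :staging => :common do
        set_current_environment(:staging)
        host 'docker-staging-1.example.com'     # each host Centurion should deploy to
        host 'docker-staging-2.example.com'
      end
    end

    # kicked off with roughly: centurion -p loc_service -e staging -a deploy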
Jane is able to test out her code, see that things are working swimmingly, and goes home for the day feeling very accomplished. She goes to bed, comes back the next day, and unfortunately, things were not as great as she thought. The service is not there. Where did her app go? Well, it's time for the tables to turn. Jill's going to ask a few questions of Jane. So Jill asks, well, where were you logging to? So we're going to start trying to figure out what happened here. And Jane looks through the code and she had kind of cribbed a line from somewhere that she wasn't really clear about. But in her production configuration for her Rails app, it looked like it was a standard practice around New Relic to have all of the logging go to standard out rather than going to files that would get written inside of the Docker container. Okay. So that being the case, this actually put Jane in a really good position because New Relic's infrastructure, where we run the Docker hosts, actually takes all of the standard out that comes out of Docker. So like we saw, when you run a container, you see what's going to standard out from that. So we're able to capture it. And we actually forward it to a couple of different places. We forward it into an Elasticsearch instance, which runs Kibana, which is a fairly common sort of logging infrastructure. I've heard it referred to as the ELK stack: Elasticsearch, Logstash, and Kibana. And then also, we actually take that opportunity to send things to our own internal event database called Insights. And this lets us do analytics and querying across these logs. But you could set things up to send these logs that are coming out of your Docker containers anywhere you want. But I highly recommend that if you do use Docker in production in this way, that you do make sure that all of the logging that you can is going out of the containers and not getting written inside of them, because it will give you better visibility to it, for one, by getting it out. And it'll also prevent the file system sizes from getting huge in the Docker containers themselves. All right. So they take a look at the logs. There's not really anything there, unfortunately. You don't always hit a home run on the first go. So it's time to take a little closer look at the containers themselves. Well, that's actually something that you're able to do. And Docker provides the commands for it. So here, we're specifying a minus capital H. That points us to a different host. So by default, Docker is going to be talking to Docker running on your local machine. And so this lets us go point at our staging environment. And the command way off on the end there, ps, lists the running Docker containers that are on that host. And here we see a container ID. It has a nice long hash, which will be fun for us to type. But that's an identifier for the individual running container that we've got going out there. And it looks like it's still there and it's running. So what we can do from there is we can say exec rather than run and give it the container ID. The minus i t flags set things up to be interactive. And so this will actually give us a bash prompt onto that Docker container. Now this depends on bash being installed on the Docker container that we're connecting to, and there's a variety of other things that could interfere with this.
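The debugging commands just described look roughly like this; the host name, port, and container ID are placeholders.

    # point the docker client at a remote staging host and list its running containers
    docker -H tcp://docker-staging-1.example.com:2375 ps

    # get an interactive bash prompt inside one of those running containers
    docker -H tcp://docker-staging-1.example.com:2375 exec -it 3f2a9c41d7b8 bash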
But we have things set up so that we can do this to do any sort of debugging that we need on those containers as they're running in our production and staging environments. All right. They look around. They see that the processes are gone. It's not exactly clear what's going on. But they eventually dig up some stuff that looks like there might have been some things happening with memory. And that tickles something in the back of Jill's brain. She remembers another project that they had that had some similar problems where things just seemed to be disappearing. Like processes would just go away with no trace that they could see. And the problem there was memory. So the lines of code service apparently is clocking in at a good 300 meg. Not totally crazy for a Rails app, but a little big. And that was the key that they needed to figure out this problem with things getting killed. So like we talked about way at the beginning, one of the key things about how Docker provides you isolation is that you can set limits on the containers for how large they can get and what memory they can consume. This prevents the individual containers from interfering with other things that are on the same host. And it turns out that 256 meg was about the limit that was being set by default if you didn't specify anything. So as soon as you got past that, then Docker's infrastructure would kick in and it would just kill processes to free up the memory. Well, this is clearly not a good situation. And so fortunately, we allow for configuring that. So in Centurion config, you can say memory. Tell it to give us two gigabytes. And what this actually correlates to is a command line flag that you can give to Docker to tell it how much memory you want. And basically any of the commands that you can give, any of the flags that you can send to Docker when you're running things to modify that environment and tell it differently how to run stuff, are available through the Centurion configs. So you have a source-controlled place to do all of the changes that you might want to do to how your containers run. All right. This is great. We've got two gigs. Things stay up. They keep running. But we actually asked for a little more memory than we really needed. And Jane's like, well, we should probably, you know, eke a little bit more performance out of this, even though it's in staging. It'd be nice to have a little more room. So I want to increase the number of unicorn workers that I've got. Jill's response is to try the ENV. So Docker provides flags when you're running to let you set environment variables that will be passed along into the container. And this is actually a really fundamental part of how you should structure your Docker systems, so that things get passed in from the outside. So when we say minus e unicorn workers, once we're inside that container, it's just an environment variable like you've probably seen in many other places. For our setup, we have a fairly standardized unicorn config. And what we do is we look for the unicorn workers environment variable. We turn that into an integer and tell it to run the number of workers that we want. And so our Docker image can be used to scale up or down to run larger or smaller numbers of workers without us having to construct a new image that changes that configuration. As you might expect, Centurion supports this.
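The standardized unicorn config being described might read that variable along these lines; the variable name and the default are illustrative.

    # config/unicorn.rb (sketch)
    # scale the worker count from the environment, with a small default for local use
    worker_processes Integer(ENV.fetch('UNICORN_WORKERS', 2))

    # at run time that pairs with something like: docker run -e UNICORN_WORKERS=8 loc-service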
In fact, this is one of the key features of how we use Centurion is that we drive as much of the config out of the code and out of the file system and into the environment as we possibly can. And so you can say, envars, give it a hash to give it the names of that. Now this is not the only thing that you might want to configure. Your database YAML file in a typical Rails app gets parsed through ERB before it's actually run. And so you can do things like this where you parameterize potentially off of the environment. So when we run in our production and staging, we can be explicit about where to go connect to our databases. But one of the niceties is since this is just Ruby code inside of the ERB braces there, we can also give you defaults. So if you're running the Rails app locally, it's going to work. It's going to find the things that it needs. Similarly, application-specific configs are something that we can rely on the environment as well. So in your application.rb, you can set config values and you can set these to arbitrary names, arbitrary things that you want to pass around. And then those will be available throughout your Rails app. So here we take, we're looking for another service that we're going to talk to. We set the URL and we have a default to fall back to. We set timeouts. And what this does is this gives us one central place in our Rails app that we will see all of the things that we can configure through the environment, all of the knobs and switches that you might want to control. Using this from other places in your code is as simple as saying rails.configuration. The accessor that you specified. So here we can get the service URL and timeout that we were talking about and use those throughout our system. Now some of you may have heard of the 12-factor app. This is a thing that Heroku has promoted that's got a lot of principles around how to run applications well in production. This whole environment-driven thing, while it applies very strongly to Docker, it's not limited there. And this is one of the key tenets that they have with it. This is also a really good idea to drive things through the environment as well for security reasons. If you have secrets, you have passwords, you have sensitive pieces of information. If you put those into your source code or put them in files that are in your Docker images, if somebody gets a hold of that Docker image, they will be able to see what that stuff is. So if Docker Hub gets compromised or some other place does, your secrets, you don't want them baked into those images. By putting them in the environment, they're only there at runtime and someone would have to have access to the containers to be able to get at those bits of information that you don't want. So this is all well and good. Jane's feeling awesome about the work that's going on. But she really wants to understand better. So that one line Docker file that we showed at the front. She just wrote one line to say from this image, how does that actually work? Well, it turns out at New Relic, we put a lot of effort internally into building shared Docker images on top of other pieces of the Docker infrastructure that we've gotten from the world at large to make our lives simpler and bake in the things that are shared across our applications. So base builder was the name of the image that we grabbed from to start. And so this encodes a lot of our standard ways of approaching things at New Relic. So for one, for various historical reasons, we run mostly off of CentOS. 
That's what our ops folks are most comfortable with. And so we derive ourselves off of a CentOS image rather than a lot of the base Ruby images on either Alpine or Ubuntu Linux. Well, we know that this is something that people are running Ruby off of. And so one of the first things that we do in this base image is we install Ruby versions for you. Now, we end up using rbenv to do that. That's not strictly necessary because there's not going to be version switching going on. But it just happens to be the tool that is most commonly used at New Relic for switching Rubys. You can get a Ruby installed onto your Docker image however you would choose to. Once we have that Ruby version installed, we can start putting in other things that we assume people are going to use. So for process management, we use an application called supervisord, so we can install that. Most of the time, you're running something that's a web service or a website of some type. So we run that through nginx. So we put that into this base image. In fact, we can go even further. We can gem install Bundler and then rehash so that that executable for bundling is available. And this is great. We're finding all of these things that are shared between these applications and taking that duplication out, making it simpler for people to build their images. So why not just bundle install? Get all the stuff, right? Well, here we hit a roadblock, and it's pretty obvious when we try to build it what's wrong. The base builder, this is a base image. This isn't the application itself. So we don't actually know what your Gemfile is yet. Someone is going to build their app on top of this. And so we can't go and do the bundle install when we're making the base. We don't know what's going to get into that actual app. But fortunately, Docker provides us the tools that we need to do what we really want, which is to say, when somebody uses this image, I have commands that I want you to run. And Docker's parlance for that is ONBUILD. So any Docker command that you put into your image, if you say ONBUILD before it, it will wait to run that command until after somebody has already used your image in their own Dockerfile. And so we can do things like wait until somebody uses this in their app and then go copy their Gemfile in and bundle install. So we get their copy of dependencies, but we don't have to have them write the lines to know to go bundle and do the correct things in their particular Dockerfile. In fact, we've pushed this approach quite a ways and provided not just standard things that everybody does, but options that people may want to choose. So Unicorn is used pretty broadly at New Relic. It tends to be the default web server. But there are people that are using Puma and like to try that out. And so what we've done is we've created scripts that allow for those sorts of configurations to be a one line thing that you can put in your application's Dockerfile. And all that these have to be is a script that modifies whatever configs you need to on disk to get the effect that you want. So in our case, this is just a matter of changing out the supervisor config for which app server to start up and then swapping in a Puma configuration instead of a Unicorn config into the app itself. But this is a one line thing for somebody to do in their app and be able to try out. In fact, we've even gone so far as to provide helpers for installing other things that people might want, like Python.
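A very rough sketch of what a base image like that might look like; the package names, Ruby version, and deferred steps are all invented for illustration and are not New Relic's actual base builder.

    # Dockerfile for a shared base image (illustrative sketch only)
    FROM centos:7

    # a shared Ruby for every app, installed here via rbenv and ruby-build
    ENV RBENV_ROOT=/usr/local/rbenv
    ENV PATH=$RBENV_ROOT/bin:$RBENV_ROOT/shims:$PATH
    RUN yum install -y gcc make git openssl-devel readline-devel zlib-devel && \
        git clone https://github.com/rbenv/rbenv.git $RBENV_ROOT && \
        git clone https://github.com/rbenv/ruby-build.git $RBENV_ROOT/plugins/ruby-build && \
        rbenv install 2.3.1 && rbenv global 2.3.1 && \
        gem install bundler && rbenv rehash

    # shared process management and web serving
    RUN yum install -y epel-release && yum install -y supervisor nginx

    # deferred steps: these only run when an app image says FROM this base image
    ONBUILD WORKDIR /app
    ONBUILD COPY Gemfile Gemfile.lock /app/
    ONBUILD RUN bundle install
    ONBUILD COPY . /app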
We have some teams that have background processing that runs in Ruby and then has some Python that it needs to invoke. And so we can provide simple wrappers baked into these base images to smooth the path for app developers as they do their work. All right. So it's a fun technique. It's fun to see which things you can pull out and make it so that people don't have to think about. But let's get back to some code. So Jane keeps writing her app. She's working on this lines of code service. She wanted to write a file somewhere. She just kind of picked the root directory to go write it. She's getting an error out of it. So she goes, pings Jill. Jill comes over. They take a look at it. And it's, you know, again, a pretty straightforward error message. But it's not totally clear why this is happening. Permission denied. She tried to put this file there. And Jill, you know, being fairly experienced, knows just what the problem is. The problem is with nobody. Nobody who's nobody. Well, nobody is an identity that we have on our Linux machines that has fewer privileges than root. So it's actually a user that we run our things in inside of our containers by default. And so here, this is not super relevant in all of the details, but this is how Supervisor starts an app up, and we say user is nobody. There are things at the Docker level where you can control this as well. But this kind of makes Jane a little confused, because like she's heard from many different people about how Docker runs is root. And isn't that fine because the containers are isolated. And while it is okay to do that, and it's not, it's sensible why Docker has chosen that as the default, it doesn't mean that you can't crank things down further. So if you are writing your own applications, you can be more defensive than Docker is itself. And by running as nobody inside of our containers, we give ourselves extra protection in case there is some exploit or some problem with Docker that would let them elevate root privileges inside of the container to the outside host. So running things in as secure of a mode as you can within the boundaries that are in your control will end up giving you a safer result in the end. All right, so Jane gets that fixed up, starts writing things in a location where she's allowed. And then maybe a little late, but she comes around saying, yeah, I want to write some tests. Like how does Docker fit into this? Well, there are some ways that you can work with Docker to make sure that your tests are running in a realistic environment like where you're going to deploy it. The simplest most straightforward thing that you can do is you can run alternate commands against the images that you've built. So here we say Docker run against an image of our lines of code service. And we just tell it to bundle exec rake. And it goes off and it runs the tests inside of that container. Now this presumes that all of the configuration that's necessary is there. If it needs database connections, you'd have to figure out how to feed those things in. But at base, all it needs to do is run that Ruby code inside of the container instead of running your full web application. But unfortunately, this has a problem. And that's the fact that this relies on the image that you built having your current code. And I don't know about you. I occasionally will edit my code while I'm working on it. 
And if you make a change to your tests or you make a change to your production code, you have to rebuild that image to be able to get those tests to run against the current thing that you're doing. And I don't know about you, but this would make me very sad. Something that gets in that loop of making it so I have to do something extra before I can do my tests is not a really great experience. Fortunately, Docker does have some options that will let you get out of that and do things in a little different way. And that is with mounting volumes. So here we have a docker run command. It's running against our lines of code service image. And that minus v is the important part. So what that is saying is take what's on my local host, the host where I'm running, at the source directory for my app, and make it so that that appears inside my container at /test_app. And so this mounts that in without rebuilding the image. And so what we actually have happening at New Relic with most of our tests is they run against the Docker image, but they just mount the current code into that image rather than rebuilding it from scratch. You have to do a little directory munging to make sure you're in the right place to go run the code. But otherwise, this is a very good approach to keep you from rebuilding images all the time. So life moves on. Jane's got more and more things that she's wanting to do with the service. And as often happens, she maybe is looking to use Sidekiq to do some background processing. And she needs a Redis instance and figures, oh, I need to talk to somebody to provision that or set that up. Well, it turns out that what we built with Docker allows her to kind of self-serve that and have stuff deployed through the same mechanisms. So what we have is we have an image already constructed that we use internally that has Redis installed and that takes all of its configuration through environment variables. So all that Jane needs to do to get a running Redis instance into staging or production is to create this configuration and go deploy it the same way that she's been doing with her app. This is a powerful approach. You can do this with anything where you've got some sort of appliance, some sort of just code that you would like to be able to have people parameterize and run without tampering with it. If you build the images to drive off of the environment, then people can just take that and run with it and just use it kind of out of the box. So there's a lot of talk about Docker. There's a lot of things that are going on. Centurion came out of the need that we had a couple of years ago at New Relic, but there's a lot of other things that are in the ecosystem that might be of interest or something that you might want to pick up today. So one example of that is Docker Swarm. This comes from Docker. It is software that easily allows you to control a cluster of hosts, like the sort of staging environment that we have there. Docker Swarm is a good way to sort of bootstrap yourself into running in that type of environment. Something that we're looking at to potentially either evolve Centurion into or use to replace it is a project called Mesos. And Mesos, in conjunction with a thing called Marathon, allows you to have more dynamic sort of scheduling for your containers. So rather than saying, I want to run this on hosts A, B, and C, you would tell Mesos, I would like to run three instances of my image. Please go find somewhere to put them. And it would put them out there.
And it has some really nice properties around resilience. If it drops one of those instances because something crashes, Mesos can start it back up for you automatically. You can scale things dynamically with it. A similar technology that's for this sort of container orchestration is Kubernetes from Google. And there are a lot of other things that are out there that are happening in this space. There's a lot of people working to make this a better workflow. All right. So we've come to the end of Jane and Jill's story. We've looked at how you can use Centurion to deploy and control multiple Docker hosts. We've looked at how using the environment to drive your configuration allows things to be more dynamic and controlled. We've looked at some tips and tricks around building shared images so that you can spread best practices within your organization and not repeat stuff. We've looked at some security and testing and a little peek at the future of where things might be going. I hope this has been of use to you. And hopefully you'll have good success if you choose to use Docker at your company. Thank you. So the question was where the Dockerfile lives. And yet typical practice for us is that the root of your Rails app is where the Dockerfile would live. It doesn't have to. You can put it in other locations, but that's been the simplest sort of convention that we've followed. Yeah, so the question is between Vagrant for similar sorts of workflows of testing and developing in Docker. From what I've experienced, Docker startup is very fast. So if you have a pre-baked image, the image building takes a while, but actually starting a container is really quick. So it would definitely be worth looking into. I think it provides similar things to Vagrant and is a little lighter weight. That's one of the selling points there. So the question was what concrete usage do we have of this? I think at last count we had a couple hundred services that are running on this internally. It is not everything that we run. There are a number of our bigger, older applications, especially the Data Ingest that aren't converted over to Docker. But pretty much any of our new products that have been developed in the last year or two have been deploying into our Docker cluster. So yeah, so the question was the deployment workflow is building a new image, like run your tests, build a new image, and then deploy that image. And yeah, that's correct. We run things through a CI system. We happen to use Jenkins, but it's fairly up to you exactly how that flow happens. I showed a lot of us using the command line directly to do those deploys. We don't actually do that much in practice. You have a central CI server do it, but all that it's doing is calling Centurion from a little shell script the same way that you could from your local machine. Yeah, so the question is what do we do about things like database migrations and asset compilation? Asset compilation, very often we will do it image build time. I didn't show it here, but that is a common thing for us to do in constructing the image. We have some other techniques that we're playing with for externalizing the images entirely from our Rails app, which takes that out of the picture. Database migrations, the database currently and probably for the foreseeable future does not actually live in Docker itself. 
And so we will tend to have another environment where we would potentially use that Docker image to go run the migrations, like use the image, run that command to go talk to the database and do those migrations. But it's not part of the individual deploys, it's normally scheduled separately as part of things. And the question is, what about the fact that migrations might break currently running instances? That's something that we kind of have to manage ourselves at this point. It's certainly something you could build more infrastructure around. We tend to just have a very conservative cadence for when we do migrations in the apps that have those sorts of setups. So red light is on, so I'm out of time to be on the mic, but I'm happy to talk to anyone who would like to afterwards. Thank you for your attention. Thank you.
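To make a couple of the ideas above concrete, here is a rough sketch of the kinds of commands being described: running tests against a prebuilt image with the working tree mounted in, running migrations through that same image, and launching a Redis "appliance" image that is configured purely through environment variables. The image names, paths, and variable names are illustrative, not the actual New Relic setup.

```sh
# Run the test suite against a prebuilt image, mounting the current checkout
# over the baked-in copy so a code change does not require an image rebuild.
docker run --rm \
  -v "$(pwd)":/test_app \
  -w /test_app \
  lines_of_code_service:latest \
  bundle exec rspec

# Run database migrations through the same image, outside the normal deploy,
# pointing it at the database via an environment variable.
docker run --rm \
  -e DATABASE_URL="postgres://user:pass@db.internal/lines_of_code" \
  lines_of_code_service:latest \
  bundle exec rake db:migrate

# Launch a shared appliance image (here, Redis) that reads all of its
# configuration from the environment, so a team can self-serve an instance.
docker run -d \
  --name jane-redis \
  -e REDIS_MAXMEMORY=256mb \
  -p 6379:6379 \
  internal/redis:latest
```

In practice these would be invoked from a CI server or wrapped in the deploy tooling rather than typed by hand, which matches the workflow described in the Q&A above.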
|
Docker’s gotten a lot of press, but how does it fare in the real world Rubyists inhabit every day? Together we’ll take a deep dive into how a real company transformed itself to run on Docker. We’ll see how to build and maintain Docker images tailored for Ruby. We’ll dig into proper configuration and deployment options for containerized applications. Along the way we’ll highlight the pitfalls, bugs and gotchas that come with such a young, fast moving platform like Docker. Whether you’re in production with Docker or just dabbling, come learn how Docker and Ruby make an awesome combination.
|
10.5446/31553 (DOI)
|
Okay, this is writing the latest Rails for Charity. How many teachers do we have in the room tonight, today? One, how many people pair program? You're all teachers. And this is Teacher Appreciation Week, so thank you very much for being awesome teachers. Today's Wednesday, May 4th. May the 4th be with you. And there's something special that our team does every Wednesday, and we do what's called bad, and I emphasize bad, joke Wednesday. So I'm going to start off the talk in tradition for my company and start off with a bad joke. And since it's May 4th, what do you call... or rather, what do Jedi use to view PDF files? Adobe Wan Kenobi. Yep, bad joke Wednesday. So be the change that you want to see in the world. That is a powerful quote. It's even more powerful when you put this quote into action. So here today I'm going to try to hopefully motivate all of you to be the change. And hopefully, since you're such a small group, that you can be motivated to help others at RailsConf be the change. To change the world that you personally want to see changed in the world today, right now. So that's my goal for this talk. I'm going to start off kind of telling you how I personally am trying to be the change, and then I'm going to go into how my company has enabled its developers to be the change. So how am I trying to be the change that I want to see in the world today? Well, it's skipping a slide, but the way I want to see the world change today is I want to teach kids how to code. And it's not just teaching kids to code. It's giving every child the opportunity to program. And a lot of us think, well, you know, it depends on the kids. Some kids gravitate more towards computer programming. They're the gaming kids, right? And that's what we think of when we think of teaching kids how to code. We think of this specific group of kids. But what I want to do is I want to change the world today so that computer programming is a required class as young as kindergarten. This is how I want to change the world personally. And for me, it's really important because no matter what a child's passion is, understanding how to use the computer as a tool gives them the edge that they need to be successful with their passion, what they want to do, whatever profession they want to get into. The computer and programming a computer will help them be that much more successful. And so this is my passion. This is how I want to change the world. And this is my mission. And so, and I had a whole slide up there that showed me with a bunch of CoderDojo kids and they were all, yay, we finished our code. It was awesome. So that's what we missed out there on. But that's essentially what I do on a regular basis. I'm always looking for an opportunity to teach every kid, give every kid the opportunity to be introduced to programming a computer. And my motivation is what you're looking at right now, is my children. Here we are at Super Pi Day enjoying pie together. This is kind of a benefit of having a nerdy dad as you get all these special little holidays. And they're actually really bummed that I'm not home for May 4th, but oh well, I'm here to celebrate that with all of you. But this is my motivation. And that's a really important thing in identifying how you want to change the world is identify what motivates you. This is my oldest son and he is my inspiration. Back in 2012, this is a kid who's kind of your stereotypical programming kid. He came to me at about 10 years old asking me questions on how to solve algebra problems.
He was reading algebra textbooks for fun because he loves to learn. And he came to me with that. I was pretty shocked. And so I instantly was like, dude, you are going to be an awesome programmer. I can totally see this. I sat down with him and introduced him to computer programming. And I did that with the only thing I knew: I was a Ruby programmer. So I was like, okay, cool, let's look at how am I going to teach my son how to code. And so I went in and found Ruby for Kids. I don't know if you guys are familiar with that, but it's a really awesome platform for teaching kids how to code in Ruby. And he loved it. Just like I thought. He sat down, got into bugs, he became a programmer right before my very eyes. It was such a special moment for me because he experienced what all of us programmers experience and that is like the nasty bug that you can't figure out. He's banging his head on the keyboard. Dad, can you help me? And I'm looking at his code and I can't figure it out. I'm like, wait, let's go through this together and we start analyzing the code. He figured it out on his own and he did like all of us do like when you figure out that bug you've been beating your head on the wall for, is he started jumping up and down. He's like, yeah, dad, we did it. This is awesome. And he's like running around slapping high fives. I'm like, dude, you got the bug. You're a coder. And that's when it really started. I started to think about it because what was happening to me right there is I was bonding with my son. I wasn't teaching him to code. I was bonding with my son. Very special level. And it meant a lot to me. I have two other kids. I'm like, I want the same thing for them. And not only that is he's learning to code. I didn't learn this until, like, well, I did a little coding when I was 11, but I didn't really learn to code to the level he was at until late high school and even college, and actually when I graduated, I wasn't a CS major. I was civil engineering. And so I was really excited about that. So we started doing it more with Hackety Hack. Ruby Warrior, awesome game to play with kids, text-based, or it's now graphical. And you get to code like the class to be the warrior and all the little features you want to add to your warrior to do different things. It's super fun. And so I was having like more and more bonding moments with my son. And then I wanted to have the same experience with my middle son. And he and my youngest were really into Minecraft. And so I introduced them to ComputerCraft, a Minecraft mod. And I don't know if you guys are familiar with that, but that was really exciting for me too. And it was actually a really funny story because I sat down with them, showed them how to craft a computer, and they were like, oh, this is cool. They're already into Minecraft. So we're crafting. This is awesome. And then I click into the computer and we're on a terminal. And my middle son does like an ls command. And he's like, dad, check it out. I'm listing all the directories. This is awesome. He's super excited. And I'm sitting there going, really? Like if I just brought up a terminal, he would have been like, what the heck is this? Dad, this is boring. But because I was in ComputerCraft, he loved it. And with him, he eventually kind of got sick of Minecraft, my middle one, my younger one, still totally into it. But that's when I realized that programming isn't about teaching the kids about the specifics of programming, all the detailed commands.
It's about understanding what interests them and teaching them through their interests. That's what's really important. And that's when I kind of discovered Scratch. This is a graphical programming language that really allows the kids to get to their interests as quick as possible without dealing with text-based things, without dealing with, you know, configuration and any headaches that you might have in setting up coding. And so that's when I really started working with Scratch and my middle son is he wants to be a Broadway star. That's what he wants to do. So you're like, how am I going to teach someone who wants to be a Broadway star and get excited about computer programming? Well, it's actually really easy. You just do everything and you create like Broadway skits in Scratch. And then instantly he's like, dude, this is totally cool. Like, I want to do this and that. And he just starts coding without even realizing he's coding. What is he doing? He's creating. And he's loving it. So I was really excited because I got to teach this to all my kids and I got really, really frustrated one day. And that was when my oldest son came home from school one day. And he said to me, dad, I'm not very good in math. I mean, this is the kid who was studying algebra textbooks and he thinks he's terrible in math. He's like, I suck at math. I literally cried. I couldn't believe it. I said, you are a math genius. What are you talking about? And that was the moment when I got super frustrated with the schools. It's not the teachers. The teachers are awesome. It's not the principals. The principals are awesome. It's the system and the curriculum that they're following right now. It prevents them from really focusing on encouraging the kids and making them better instead of discouraging them and giving that feeling. But without even knowing it, given the kids this discouraged feeling that they can't be good in something like math, even though they're awesome. And it was just memorization of math facts that he didn't spend time doing. And that's what gave him the impression he was bad in math. So unfortunately then I went to a parent teacher conference and man, did I lay into that teacher and my wife, poor wife, had to pick up all the pieces from that and she was super embarrassed. And I learned a valuable lesson that's not the teachers' fault. And so that's when my wife told me, you have to be the change. You have to change things. Your job, don't put it on other people. I'm just a developer. How am I going to change it? I don't know anything about teaching. I don't know anything about the system. I could just go and sit back in my cube or my office with my kids and just pair with them. Like my kids are going to have a great opportunity. Or I could try to change the world. I could make a difference. This is what I decided to do. But I figured I'd do that once I made a lot of money because teaching doesn't pay that well. So I was like, I'll wait. But I ran into a friend of mine and he told me, hey Joe, I'm teaching on Saturdays, computer science, to a bunch of underprivileged kids in downtown San Jose. And I was like, that sounds pretty cool actually. I'll go do that. He was so excited because he was teaching Excel and he didn't know what he was doing. And he's like, dude, you're a programmer. Like you could come and teach these kids. So I went and did that and I spent what, like two weeks preparing for it. 
Got into the classroom at Sacred Heart and started teaching a room of 13 year olds how to code with Scratch. And half, well probably a third of the room was really into it because this is the only chance they ever get to sit in front of a computer because they can't afford a computer and they live in a house with three other families. And the other two thirds of the classroom just didn't want to be there. They didn't want anything to do with it. And I was trying to explain to them how Scratch, the Scratch language, went from Scratch all the way down to binary and was going through all the translations. It was like a major turn-off. And so I went home, went back to the next class and I was trying to rack my brain. I was like, do what I did with my middle son. Find out what it is they care about. So the next time I went in the classroom I said, what do you guys care about? And I went around and each child would tell me what they're excited about. And lo and behold, that's what they're excited about. One Direction. That's easy. All I did was I put One Direction just like that onto the Scratch screen and we were able to create little bubbles on there and they were able to create a conversation with their favorite band members and they were so stoked and they're doing conditionals. If you say this, then do this and they're doing loops and I'm just like, holy crap, you guys are coding and they wouldn't leave. Like the first class, they couldn't wait to get out of there and this class, the parents, it was on the second story. They had to come up to the classroom, okay, it's time to go. They're like, no, we want to stay. Mr. Dean's going to upload the music. And so, you know, I upload the music, they wanted to stay, we played the music, it was really awesome. And then eventually the principal had to say, okay, the computer class is over, you've got it for that now. And so that was like a really valuable lesson that I learned in being the change, in being a teacher, is that all the planning, you really have to change your plans and do what's best for the students. So that's my Sacred Heart story. But there's actually another really cool story that I'll share with you guys about Sacred Heart. And it really moved me. And this is the type of things you experience when you take these small steps to change the world. I had the opportunity on New Year's Day, one of the high schools dumped all their old computers. And a friend of mine called me up and said, dude, there's a bunch of computers outside of the high school and they're just going to go to the garbage. Do you want me to get them? I said, definitely, get them all in my garage. And then people, like actually a friend of mine from work came over and we all got into the garage and we started to build up these machines and fix them up for the kids. And so on the final day of the final exam, I had the kids demonstrate to me their final project and to the principal and to their parents. And at the end of this, it was like the greatest feeling, one of the greatest feelings of my life is at the end I got to be like Oprah and I was like, okay, congratulations. You all get to take those computers home, they're yours. You can go home and code now at any time. I had kids hugging me, parents crying. It was the most moving experience that I've ever experienced. It was awesome. And that's the feeling that you guys get when you're being the change.
So then I got addicted, and I'm one of the founders of CoderDojo in Silicon Valley, and I teach kids every month how to code, and I do coding clubs after my son's school, still fighting to make it part of the curriculum at their school. I teach robotics during the robotics season, during fall season at school. So I teach kids how to code through robotics. I also teach Boy Scouts and I am the chapter lead of littleBits, which is a really cool set of electronic components that you snap together, kind of like Legos. And it has an Arduino bit so you can code these robotic components. And that's super fun. I coded like an Asteroids controller and we played Asteroids. And now, being the littleBits chapter lead, I do meetings with other educators across the world, in different countries. And I'm being the change in the world so my little acts there are having a huge impact across the entire world. I'm just a developer. That's all I am. And I have a passion to change something. And I took that step and it's completely transforming my life and other people's lives. So you have to be the change. It's easy. You just got to take action and do it now. Alright, that's like the halfway point and one more joke for you guys. What do you call a fight between two film actors? Anyone? Star Wars. Bad joke Wednesday! Alright. So I work at this awesome company called on-site.com. Don't worry, I'm not doing like the big recruiting pitch here. But pretty much a lot of my colleagues are here in the front row and I really appreciate that. Shout out to them. They've heard this talk like I think six times and they're still here listening to it again. So thank you guys for that. And it's just an awesome culture. One of the things that's really neat is we started sending folks out to RailsConf about four or five years ago. And it really inspired the company to start up its own internal conference. And the motivation behind starting up this conference was really developer happiness. And this conference is called DashCon and we've done it three years now. And it's called DashCon because our URL is on-site.com, and the worst thing you can do in a URL is put a dash in it. That's terrible. Like Internet 101, don't pick a domain name with a dash in it. But we're On-Site, so we own the mistake. We are DashCon. So it's a really fun time for all of us because we learned from RailsConf that it's a special feeling when we come together and we share our technical skills. We share learning new things. It's just really awesome. So now we do this as a team and the company allows us to take a whole week off of work, the entire engineering group, QA, IT, all the developers, everyone. And we all get in a room and we talk about anything that we want, whatever our passion is. And we share it with each other. It's a phenomenal bonding moment, phenomenal. And the productivity you get out of that is amazing. I don't even think I need to explain it. It's obvious. And this year we did something really special where we decided to do a hackathon for charity. And we were really excited about it because what an awesome way to do team building between a bunch of engineers is to do a hackathon for charity. Now it's not the hackathon in the traditional sense. It's not the hackathon where we're competing against each other to see who can come up with the best product. It's really identifying core charities that we think we can make a difference in.
And we all get in a room and focus on delivering a successful technical solution for this charity to help them with their mission. Now, I'm going to warn everyone right now that these charities are very sensitive topics and their missions are phenomenal in what they're doing. So I'm going to explain what these charities are right now. The first charity we looked at was the Housing Industry Foundation. And what we did for them was provided a way for them to manage their grants. And what they do is they provide grants to people that cannot afford to pay their rent. And so they're going to become homeless. They cannot afford to make that bill so they're going to become homeless. And it's because of an unfortunate circumstance that came up in their life. And life is cruel. There's no second chance. If you can't pay your rent, you get a three-day pay or quit put on your door. And if you can't make up that, I'm sorry, you're out on the street. And you've got a black mark. You can't pay rent. Who's going to want you to live in their complex? It's really sad. And HIF, what they do is they help these people and give them another opportunity. And they just give them a grant. They give them the money so that they can pay the rent. And that's pretty awesome. That's a killer mission. And we, as a development team, were able to help them. And that was a big match for us because they were excited about it. We were excited about it. We saw something that we thought we could definitely deliver to them to help them with their mission. So another charity that we looked at was the City of San Francisco. And they are creating like a database for low-income housing so you can search if you can't afford. The houses in San Francisco or apartments in San Francisco, you can search through this database that they wanted to create. And you can find out what you can afford. And that really wasn't a match for us in terms of what we were trying to accomplish. And it's primarily because they had a lot of help and support. And with Google, they had a lot of support with Facebook. And it was just too complicated for us to kind of inject ourselves into that complexity. So beware when you're searching for these charities that you want to do a hackathon for or that you want to start a meetup on. Make sure it's a match with what you're trying to accomplish. Now this next charity, one of our... So those were the two charities we kind of looked at. We were like, okay, well we got one. And another developer on the team came to us and said, hey, there's another charity that means a lot to me and I want to be the change. And it's pretty awesome because he came to us with this charity called Grateful Garments. And that picture says it all. It helps those that have been sexually assaulted. And what happens today before this charity was they would leave the police station or the hospital in the gown on the right. What this charity does is it gives them clothing so that they can leave with a sense of dignity, whatever dignity is left. And that touched the entire development team when he brought that forward. And we all were like, yes, we want to take that on too. We want to help them. So we started up a product called Stockade as part of our hackathon as well. So now it's time for us to plan this out. How the heck are we going to do this hackathon for these charities? 
And the key thing in that whole process, in the whole process of planning, is meeting with the charities, understanding what their needs are, understanding their existing technology stack, understanding how they do things today, what their processes are, what their mission is. We didn't have the opportunity to actually volunteer and work with them on these charities, like in the field, like actually going through the inventory of clothing, or helping them fill out grants. But if you can do that, that's even better, because you'll understand their business process. And you'll understand how you can help. And let's face it, with our skill set, we're superheroes to them. So we really should share our talents with these charities that are trying to do good in the world, and then we can be the change. You also should set up your environment, your stack, figure out what are you going to use, are you going to use Rails 5, are you going to use Rails 4.2, are you going to use a different framework, like what makes sense, right? And so it really allows you to start analyzing clean slate, no legacy code, kind of our dream, right? We want to be able to build something from the ground up for them. And what we did was we first went in, we're like, awesome, we get to use Rails 5 for sure. So we dove in with Rails 5 and we were checking out, this is before the hackathon started, this is what I mean by planning, and we started exploring it, like how's the Devise gem going to work? You know what, it doesn't. It doesn't. It's broken. There's bugs. So we instantly went back and said, you know what, we have to go back to a stable Rails environment. We have to respect the charities. So that's the key message there when you're planning this, is listen, respect the charity, and put together an environment for them that you have some experience with. It's not a free pass for you to learn every new technology in there, but I guarantee you, you will learn new things. So I'm going to show you a little bit about where we're at with these different technologies that we built for these charities, and they're still under development. So Grantzilla, they started with just like an Access database, a bunch of fields, and the way it worked was they would manually fill out a form, and then they would manually type in, or the person filling out the grant would fill it all out, and then they would give it to the rep, and then the rep would get it, and they would manually fill in all of these fields, and then they would have a record in Access to query against, so it was exhausting, totally exhausting. And we built that. Like, you guys are Rails developers, like, that's like scaffold with Bootstrap. That's easy. And they're like, oh my god, you guys are superheroes. You're the best. Yeah, we're being the change. It's awesome. And this is Grateful Garments. This is how they're doing it today before we do our stuff. This is their Excel spreadsheet, inventory along the left column there, and the orders all the way across. Like, this is going to keep going, you'll see. Like, I didn't even know Excel went out this far. It's ridiculous. And this is how they're managing it today, like, so inefficient. And you think about it, and you're like, oh yeah, that's painful to use, but you know what happens is they get the orders wrong, and when they get the orders wrong, they ship the wrong stuff, and when they ship the wrong stuff, it's costing them money. They're trying to do a good thing in the world, and it's painful that they can't.
It's primarily because of the technology that they're familiar with. They need a superhero to come in and be the change. See? Only out to column LC. Crazy, huh? So this is what we have now that we're in the process of building for them. I mean, it's night and day. Here's the orders, simple lists. There's everything in the order, the inventory. This is responsive design, so they can walk around with their mobile app as they're walking through their warehouse to track the inventory. Like, dude, I don't really even need to say much more. I mean, this is obvious. And you guys all know this isn't hard. This is easy. But they're like, you guys are awesome. And they can't wait for it to be done. But, you know, as always, we're engineers, and we're trying to build this thing up and give them this awesome, shiny car right from the beginning. And we're not going to just give them this small little MVP thing. We want to overbuild this thing. So we in the hackathon, silly us, we're like, we're going to deliver this whole thing to you guys in three days. And they even were like, you guys can do that? Of course we can. We're badass. And yeah, we're still working on them. Because we're fine tuning them, we're engineers, right? Oh, this isn't secure enough. We need to have the best security with SSL certificates. And this is, we can be more efficient here. And we're just like, we can use an awesome mailing tool to send, you know, help with your process and blah, blah, blah. And we just go on and on and on. And we learned a really valuable lesson is don't overpromise. Set your expectations early and start off with the skateboard. They're going to be happy with that. Just something to get them from point A to point B and then iterate on it. But iterate on it as if you're building a minimum viable product. And oftentimes there's a misunderstanding with that. And a lot of people think it's the top way of doing it, but it's really the bottom way. You have to have something that works. You can't just throw a wheel at them and say, like, yeah, good luck. This is cool. These are Git commits from Stockade. And you can see on the top there during the hackathon that everyone is doing all their commits and all this activity is happening. And then when the hackathon ends, it kind of dies off. But the coolest thing about this graph that I love so much is look at Tech. He's not the highest in the beginning, but as the hackathon ends, he's being the change. He found his passion. He found his motivation. He found, like, his kids-to-code, and for him that's Stockade. And now this is a guy, he told me, like, it just gave me chills, chills down my spine when he told me this. He says, I used to go to work, come home and play video games to relax. Now when I go home, I want to be the change and I code for charity. And that's so much more rewarding, so much more fun and makes me so much more happy than relaxing and playing video games. And it's evident. I mean, look at his commits. It's awesome and inspirational. And it's inspired a team to form around him. We finished the hackathon at our work and it was like, okay, that's it. You know, don't do the hackathon during work hours anymore. These folks are doing it. As you can see from the commits that are happening after the big spike in the beginning, they're doing it at night. That's pretty cool. They're being the change. So, yeah, so if you're like, yeah, that's cool, Joe. I don't have time for this.
It's too, I got a lot going on in my life and maybe one day when I have more time, I can do this. Well, you know what? Make the time. You all made time for RailsConf, right? Why? Because you want to be a better developer, right? I mean, we have a lot of stuff going on at work right now, but you're here because you want to be a better developer. Well, guess what? If you code for charity and you do this, it's going to make your path to becoming a better developer that much straighter. Because you're going to learn how to unwrap complex or simple, depending on the charity, business processes and apply a technical solution to that. You're going to learn when Rails 5 comes out and is stable, we're going to learn how to migrate from 4.2.5. We're going to be writing the latest Rails with all these awesome gems and trying it out with these charities. Again, be respectful. You don't want to just say, oh, cool, I want to learn this thing and I'm going to add all this risk to this charity. No, it keeps you honest. And you're doing what you should do to become a better developer and all that is, is practice. All this stuff that Jeremy is talking on the keynote this morning, he was saying all these cool things about a team and you're like, oh, that's not my team. I don't have that. Well, guess what? You can have it right now. If you just be the change and you code for charity, you can have that. And you're going to be an awesome developer. You're going to go in and say, oh, yeah, I already upgraded a whole app from 4.2.5. And this is what you don't want to do. This is how you want to do it. Everyone at your office is going to be like, holy crap. Like this guy is awesome. Like, oh, did you do that on your personal project? Yeah, I did. But it's actually really implemented for real and people are using it. Like customers of mine, this charity, like check it out. You've convinced everyone with that. You're the boss. You're the man. And it gets you closer to whatever your dream job is. That's personal, right? My dream job is very different from what your dream job is. Identify what your dream job is. Identify what charity you want to be the change in. And that is really going to be a way to get there. Some of you may be thinking like, I'm too junior to do this. I'm not good enough. No, you're awesome. You're 10 times, 100 times better than these charitable organizations. And you know what? I'm sure I am positive you'll be able to motivate and encourage a senior engineer to help mentor you along the same mission to help that charity. A phenomenal opportunity for you as a junior engineer to become a better developer and to find your dream job. So let's review. These are the tips that we discovered in being the change from our hackathon. Find your motivation. That's personal. You have to find yours. The way to do it, try one charity, then try another one, try another one, and you'll find yours. I promise. But don't give up. Because once you find your motivation, you're going to change the world. Explore many charities to identify the one that's for you. Just like I just said. Identify a charity advocate. You really have to identify someone in that charitable organization that will have the power to help you set up DNS. They're going to understand the business processes so you know what technology and how to implement the technology. Someone you're going to need to transition the technology over to so that they can use it and they can maintain it. So that's really important. Manage the expectations. 
Like I said before, we did poor job in that, but doing that is really critical to the success. And like I said, just a little skateboard is all they really want and they're going to be more than happy. And as we all know, as engineers, we're going to build this up to be the awesome rocket ship. Get a team around it. Don't do it alone. So be that core committer on that charitable coding for charity and then have others committing with you. And that way, if life gets in the way, you can have someone help you out and you don't leave the charity high and dry. Just really define what completion is for them so they understand that. So what do you want to do to change the world? That is the question I put out to each one of you. And if you don't have the answer to that, that's okay. I didn't at first start. You can start today. You can start right now, right after this talk. Go out to these charitable organizations that we did the hackathon for. Do a pull request. Help us out. Give to these charities. If it's education and teaching kids to code that is your passion, talk to me. I'll talk to you forever about that. This is like my passion. But I've been moved so much by Grant Zilla. I'm also one of the core committers for that. So you can make a difference right now. Try it with these charities. And maybe it will strike a chord with you and that becomes your passion. Maybe not, but at least it's a place for you to start. And if I can ask you all to do a favor and tweet this out to your community. Tweet this out to RailsConf. Get them on these URLs and get people to start helping these charities out because they're doing an awesome thing. There's also Ruby for Good that I don't know folks have heard of. But that's also a really cool organization that you can start learning about other charities. And then just Google what motivates you. Google it. Find out what's out there and start looking into it. Start a meetup. Take action. Do it right now. Be the change. Thank you. I'm Joe Dean from OnSite. We're always hiring as well. So thanks so much for all your time. Appreciate it.
|
As developers we often forget that with our development skills we have the power to change the world. This talk describes how our company organized a hackathon to create three open source projects that helped charitable organizations become more efficient at helping people. Bringing our team together around a shared philanthropic goal created more team unity, improved team communication and most importantly allowed us to apply our development skills to do good in the world.
|
10.5446/31554 (DOI)
|
All right, so when you walked in, you saw this. That's me, that's Justin Searls, but you can see how it's written by hand on an index card. There's a story behind that. This is not my talk, which is part of why I'm so nervous, but really, please don't leave, you're in the right place, at least. So you just stay where you are and I'm going to do my best despite this. This is actually not acceptable. I am being trolled so hard. Okay, so this is not a bait and switch. I've spoken at RailsConf two times before and I intentionally wrote abstracts to get into the CFP and then I talked about what I wanted to talk about. So this is not a bait and switch like those two. It wasn't intentionally that, it is now. This was supposed to be Sam Phippen's talk. Everyone go follow Sam, he's great. Sadly, Sam is in the hospital, so he wasn't able to give this talk and that's why I'm here instead, but it's a British hospital, so he's just in hospital. So send your good wishes to Sam. Why me, why am I here? Well, Sam likes to give conference presentations wearing my company's branded t-shirt, Test Double, and so people are often mistaking him for one of our employees such that he actually now has intro slides like, I do not work for Test Double, but I love them and also Searls, which I heartily appreciate. We love you too, Sam. That's why we're here. You're a great member of the community. So this talk's going to be flipping great. Only problem is I finally understand imposter syndrome. So I've got a little bit of imposter syndrome because I am a literal imposter today. In three main categories, one, I am not British, and as we all know, as Americans, or those of us in the room here who are American, everything out of British people's mouths sounds a lot more intelligent, so I have that shortcoming. And therefore today I resolved to use fewer contractions to speak with authority and to drop the rhotic R. So let's practice the sentence together. Minitest is not better than RSpec. All right. I feel better already. Two, I lack Sam's glorious mane. I don't have a big bushy beard. Sam of course derives his RSpec powers from his beard. This is obvious because why else would he have it? So I have not shaved since I agreed to do this at 7 a.m. Friday morning. Some straggles. So I now know a few things based on the RSpec beard powers. One, beards are itchy. Two, RSpec. And three, what beard oil is. So if anyone, I forgot my razor, true story, if anyone has some beard oil on them, hook a brother up. Third thing, third way in which I am an imposter today, I am not on RSpec core. Here's a little like organizational chart of where I fit into RSpec. That's RSpec core and that's me not being in it. But you know what, apparently it's just not a RailsConf without a talk from an RSpec committer about RSpec. So far to date, the only RSpec thing I've committed to is this talk. So I decided to become an RSpec committer. It sounds like a good idea. So let's get started. I'm going to make my first RSpec commit right here. I am so committed right now to RSpec. All right, so I'm just going to push it up. Access denied. So I tried everything earlier in the hotel. So let's try it one more time. It always works. You know what, you get this error message also when GitHub's down. So it's probably just that GitHub's down. So as this talk's resident RSpec committer, I have some startling announcements to make. I'm here ready to announce the future of RSpec for you today. Current version of RSpec is 3.4.0.
I'm here to announce the next major release of RSpec, RSpec 5. RSpec 5 is going to be revolutionary because we have some really awesome headline features that are very convenient to me and my purposes. The first, Turbo-spec. Let me tell you about Turbo-spec. Yep. Turbo-spec dumps the object space into the cache, into memory, after running every single one of your before hooks. It does this so that it can cache each nested example group's setup code so that you don't have to run it across all your tests. And then if you run the RSpec CLI with --turbo-button, it speeds up your tests. Turbo-spec is going to make all of our slow RSpec suites way faster. Warning, it doesn't work if your application has side effects. But for the rest of us, it's going to be just awesome. I have another feature for RSpec 5 that I think is going to really just make true believers of RSpec happy. Spec specs. You just create a spec of type spec and then you can say things like, hey, this model, Order: I expect it to have five specs. I expect Order to finish within about two hours, to have 95% code coverage, to limit the nesting and indentation to just three contexts, to usually pass, and to be good code. I don't know why they didn't have this in RSpec 3, it's in RSpec 5. Remember, it's important to spec-spec your spec specs, people. Let's not get lazy. Obama's saying things. Audio doesn't work anymore because of their shenanigans. Let's try one more time. All right, what he said was, Justin, just give it a rest. Damn it. I'm going to be, now I'm not going to sleep tonight. So thanks, audio. All right. So I'm still anyway, regardless. I'm not sure if I'm cured or if I'm still impostering. I am not Sam. If you don't know me, this is what I look like on Twitter when I'm getting retweeted for saying terrible things, that's me, Searls. I'd love if you became my Twitter friend and got me some feedback about how things are going. I know it's not great so far. This is the Justin Searls marriage simulator. Basically it's just you sitting across the table with me looking at my phone and making slanted faces. So we can all empathize with Becky Joy a little better. This is me on brand. I help run a software agency called Test Double. Our mission is audacious. We're just trying to make the world's software less awful. And I'd love if you got us any feedback, hello@testdouble.com. All right. So again, talk title, back to basics: RSpec and Rails 5. What's there to know? By the way, sidebar, did you know Sam rejected my Rails talk? I just thought I should mention that because I am supposedly honor-bound to cover all this Rails 5 stuff because it's important to cover for the purpose of the program, which I took with just nothing but grace. So Rails 5 stuff. My first question to Sam via text message on Friday morning was, will RSpec just work with Rails 5? No. And he was saying it as an implementer. He's thinking about all the work they needed to do because obviously if you've ever maintained a gem, newsflash, major Rails releases break gems in surprising and myriad ways. I went and searched for just open GitHub issues that are demanding Rails 5 support. Just search for it and you get a whole lot of salty randos saying, hey, Rails 5 is not supported. No description. Give me Rails 5. You owe me. Come on. Gems. Work. Work. Give me. Rails 5 is not even out yet, people. So if you know a maintainer, go give a maintainer a hug because seriously, Rails major release upgrades are big work. RSpec considers this to be feature work.
They don't want to make any breaking changes. They want you to be able to upgrade very gracefully. That's why they respect SemVer as much as I don't. They're at 3.4.0 now. It's going to be 3.5.0, which means that they have to keep it running for older versions of Rails but also new versions of Rails. So I hope that you take a moment to thank the RSpec team for their thankless work because everything that they're doing here is behind the scenes. But there is one change that we all have to know about, which is, is it true that functional tests and controller specs are really deprecated? Well, yes, it actually is true. They're going away with Rails 5. They're deprecated, at least soft deprecated. To which I say finally. If you don't write controller specs, by the way, feel free to just play with your phone for this portion of the talk. If you do, it all started when DHH opened this issue saying, you know, the mechanisms for verifying that you assigned a particular instance variable in a controller, making sure that a particular template was going to get rendered, those are testing implementation, those aren't really valuable. Let's deprecate functional tests. And I feel like he was absolutely right. That was a really good point. And of course, if you disagree, you might disagree just because you write controller specs, but here's my beef with controller specs. This is the testing pyramid here. At the top of the testing pyramid, it's just a way to illustrate these are full stack tests that call through everything in reality. And stuff at the bottom, these are just unit tests. Stuff in the middle are difficult to explain tests. And that's what controller specs are. So the problem, right? The opportunity. Oh, my gosh. All right. I'm so glad to be one of those like just chill, go with the flow kind of guys. All right. So the problem with controller specs at this level is that above that point in the pyramid, there are untestable things that can break. So they're only of limited value. And everything below it, the messages that you get are going to be unclear reasons why things are going to fail because it might be something way, way deep below you. That is actually the root cause of the failure and the error messages aren't going to be very helpful. So it helps you in that very skinny way, but I don't know how much value that really adds. Another thing about controller specs that sucks is that they were a lie to begin with. Their API implies that a request is being made. So if you've got a controller, you do like get index like you're actually making an HTTP request and then you have these assertions like you render this template or you redirect it or you have this HTTP status. Oh, look, I'm making a request. Wrong. It's just like that's just really silly sugar of a facade and it's just invoking your controller methods, which means all this other stuff is not happening, like middleware is not getting invoked. So your controller specs might be passing when your controller is totally busted, but they're faster and that's why they exist. And they might be faster at runtime, but in my experience they're much slower at fix time. They're just a maintenance nightmare for all that no value that they provide. So but you know, despite the criticism of controller specs, it's SemVer, right? So RSpec is promising not to break our tests with Rails 5.
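For readers who have not written these, here is a minimal sketch of the two styles being contrasted. The controller, routes, and expectations are hypothetical; the controller-spec matchers shown (render_template, and the assigns style mentioned in a comment) are the ones that move out of Rails 5 core and into the rails-controller-testing gem.

```ruby
require "rails_helper"

# Old-style controller spec: `get :index` never goes through routing or the
# middleware stack. It invokes the action directly, so "the request" is a facade.
RSpec.describe WidgetsController, type: :controller do
  describe "GET index" do
    it "renders the index template" do
      get :index
      expect(response).to have_http_status(:ok)
      expect(response).to render_template(:index) # implementation-coupled assertion
      # assigns(:widgets) is the other implementation-coupled assertion style
    end
  end
end

# Request spec: issues a real request through the Rack stack, so routing,
# middleware, and views all run, and you can assert on the actual response body.
RSpec.describe "Widgets", type: :request do
  it "lists widgets" do
    get "/widgets"
    expect(response).to have_http_status(:ok)
    expect(response.body).to include("Widgets")
  end
end
```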
The way that we are doing that, the way that you do that, all that you have to do is add this gem to your Gemfile called rails-controller-testing, which will reintroduce the functional testing bits that RSpec Rails needs. And then meanwhile, the RSpec team is doing the hard work to make it seamless. It's my understanding Sam Phippen's doing a lot of that work. And I hope that's not what put him in hospital. So thanks to Sam and the RSpec Core team. If you already have a lot of controller specs, stop writing those now. There's stuff that you can do instead of controller specs in the future. Here's some alternatives. One, you could write full stack feature tests that test that everything's fully working when everything's really integrated. You could also do nothing. I do nothing. I have not written a controller spec for seven years. And you could also do request specs, which are very similar. We'll talk about those in a second. Because request specs are like honest versions of controller specs. They bind to, they map to, Minitest integration tests in Rails. And the reason that they're honest is that the API looks the same and the assertions look the same except it actually exercises the routing, the middlewares and the views. So if something blows up, you know, it's a good blow up. Another cool thing is because it's using Rack::Test, you have access to the response body and you can make assertions on the actual response that's generated instead of all this weird implementation stuff. When to use request specs instead of controller specs or nothing. Specs that assert like a complete API response, like if you've got like a JSON API and you can assert everything that it does. Cool. Request specs are probably the right layer to test at. Specs that just assert you're assigning certain ivars or rendering certain templates? Just needlessly coupled to the implementation, probably don't need a request spec. Specs that assert HTML that comes out of the response body, probably not a good idea unless your app has absolutely no JavaScript, which is probably unlikely. So that's a bit about request specs and controller specs. Third bit. It was in the abstract, right, that we're going to learn how to test Action Cable. So does RSpec help us test Action Cable? No. It turns out that Action Cable testing isn't built into Rails yet. There's an open pull request and I assume that when that ships, RSpec will have a wrapper for it or something. So just test through the browser for now and make sure your website works. All right. There we go. You're now ready to RSpec with Rails 5. Thanks very much, Sam, for trusting me with your talk. There's nothing more for you to see here. You can close Skype, Sarah. There's nothing, I think he's actually like maybe here. I think I see him waving actually. Hi, Sam. Yeah. He just looks excited. Yep. All right. Bye, Sam. So one time, Aaron Patterson's up in the front row. One time I texted Aaron something and he tweeted it and got a million retweets and I felt really salty about that because I was like, no, that was my random internet meme that I copied and pasted. And he sent me this in response. It's not fundamental attribution error. It's internet attribution error. So this is my talk. RSpec and Rails 5. Why are you here? Really? Like shout it out. Somebody tell me why you're here. No. Next question. Somebody sent. So why'd you come to this talk, especially if you didn't know who I was? Okay. Something RSpec related, anyone? What's that? Okay, thanks. All right.
Thank you. Action cable. Thank you. R spec cable. Well, I had two theories because I couldn't make the slides after asking you. One, how the hell do I test action cable? Sorry for those people because I don't know. Two, I'm not happy with my test suite. And now I have a third theory too. You know, I'm new here and what the hell is all this about because it's just like a lot of forensics and who are these people? I'll focus on the one that I can actually address which is what happens when we're still not happy with our test suites. Well, if you have this motivation and that's part of why you came to this talk, maybe you were thinking like, well, R spec might have a new feature that'll help me hate my test less or maybe Rails has some new thing or removes a new thing that will help make the pain stop, make my test suite more sane. I think that's a natural thing to do, especially when you're in a conference, we're here to learn about technology. We're searching for tools and tools are easy because we can grab them off a shelf and use them but they're way easier than like critical introspection, asking ourselves hard questions like maybe it's our fault that we have terrible tests. There's two keys to happiness with testing or anything in software. One, the tools that we use. Two, how we use those tools. And it's not a two-step recipe. There's, it's like not a false, it is a false dichotomy to like blame one side or the other. Some people will say like, oh well, clearly we just need better tools whenever we have a problem and some people have a disposition that says, well no, we just have to think differently. We have to design harder. Like if the tools failing us, we're not using it hard enough. And that's not a good mental model either. I like to think of it as like first there were people thinking and they were doing stuff and then that they wrote tools to help them do their job and then the tools are actually a usage of them informs how we think about the problem and it's this hopefully virtuous cycle, this feedback loop. So I do believe that tools matter. Tools aren't everything but tools are important and we're going to talk about how tools prompt behavior. Some tools guide us in a healthy direction to build good stuff. Some tools enable our bad habits and some tools just are written to be relatively low opinion, not very opinionated. First, I want to talk about a tool that enables a lot of bad habits. It's a, you might have heard of it, it's called RSpec Rails. And I feel like whoever invented RSpec Rails was like, here's our marching orders. We're going to just do whatever Rails does and then wrap it with RCLI and DSL as uncritically as possible. So you got controllers? Yeah, we can spec them. Great. Without thinking whether that was a good idea. You got a testing pyramid? We got a testing pyramid. You want model specs and controller specs and helper specs and view specs and routing specs and request specs? Sure. And feature tests too. Why not have all these layers? And honestly, as somebody who's, especially when I was a novice coming in, I was like, well, clearly our tools are built for good reason. They have a good reason for having all these different tests. Test all the fucking time. That's great. Okay. So I thought, like, I looked at that and I was like, man, I got my work cut out for me to like live up to this seven layer nacho of testing. And what I came to realize over through a lot of usages is like, well, all those tests are very integrated. 
Every single one of them will call through to the database. And additionally, they're very redundant. When I have a new model that I'm writing here and I make a change there, I have this incidental coverage in all the tests above it. So all those tests now need to be updated as well. That creates a lot of like low value work, just cleaning up all my tests. So here's a pro tip. Here's how I use RSpec Rails. This is a secret. My secret to using RSpec Rails is I have this whole thing and then I blow away all of them except for sometimes feature specs and sometimes model specs. And then if I have any sort of Ruby that's like at all interesting, I'll write it in Ruby code and then I'll test it with plain old RSpec. And that's the only way I've been able to find sanity with RSpec Rails. But it's not the tool's fault per se, but I had to fight that tool to get to this point. I had to fight all the documentation and all the blog posts and all of the arguments with people about why I was having problems. And that was not an example of a great tool experience. Let me tell you about an experience with a tool that I thought was really, really helpful and great. Its name is RSpec. RSpec itself is actually really awesome, but I think that a lot of people have a hard time with RSpec Rails and then they turn around and they blame RSpec too. And I think that's kind of unfair. It's worth it to like look at them separately. So let's talk about what makes RSpec kind of cool. First of all, I don't believe that RSpec is a test framework per se. I think it's better to think of RSpec as a framework for helping you write better tests. RSpec influences our design. It was designed to do that. It was a response to xUnit with lots of repetitive methods that were all setup, action, and assert. But what was cool about nested example groups is we can see the same symmetry and have very terse tests that aren't redundant, but we don't lose any clarity through drying it up. That's one of my favorite things about RSpec style testing. Additionally, I love that the assertions guide the naming for our methods. If I write this test and the thing doesn't exist yet, by using this matcher, be_silent, it's going to assume that there's an instance method called silent? on that class, which is a really handy way to inform that the usage is like sensible. Like, that's a natural name now. Additionally, years ago when I learned about let, I was pairing with Corey Haines. Corey is a really smart developer I looked up to, and he said let is great because it lets you call out your setup stuff, create a new user and assign it to this method, and even better, it's lazily evaluated. I was like, I don't know, Corey, I worship you, so lazily evaluated sounds sweet. That's great. I'm going to use let for everything, so I've used let a lot. Another feature, let! (let-bang), which will eagerly invoke that block, it has this interesting thing because people generally find let! by being like, well, I want this to run in exactly this order. I want to make sure that it invokes. Jim Weirich and I paired and he looked at my code base and he's like, dude, you're doing this totally wrong. Don't just use let! for absolutely everything. It's like there to draw out your attention to side effects in your code. It should be minimal. You should have them very, very sparingly.
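A small sketch of the features just described: nested example groups, a predicate matcher that implies a method name, and let versus let!. The Alarm class and its methods are made up purely for illustration.

```ruby
require "rspec/autorun"

class Alarm
  def initialize(armed:)
    @armed = armed
  end

  # The be_silent matcher below presumes a predicate named silent?
  def silent?
    !@armed
  end
end

RSpec.describe Alarm do
  context "when it is not armed" do
    subject(:alarm) { Alarm.new(armed: false) }

    it { is_expected.to be_silent } # calls alarm.silent? under the hood
  end

  context "when it is armed" do
    # let is lazy: the block only runs the first time `alarm` is referenced.
    let(:alarm) { Alarm.new(armed: true) }

    # let! runs eagerly before each example; the bang flags that you are
    # depending on a side effect, so it should be rare.
    # let!(:armed_event) { alarm }

    it "is not silent" do
      expect(alarm).not_to be_silent
    end
  end
end
```

The nesting keeps each context's setup right next to the behavior it describes, which is the symmetry being praised above.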
If you need to have a side effect in order for your code to work, that means that you have this coupling of state not just to the arguments but to other stuff happening in the system. That's why there's a bang. It means don't do it. That was an interesting conversation that I never would have had if it wasn't for RSpec. Additionally, RSpec reduces friction. The CLI is great because it's really convenient, easy to use, pretty obvious, helps you focus on just what you want to run, has a good output. It's all work that I'd have to do if I was building my own rake tasks and my own testing CLI stuff on every project. I love RSpec's built-in reporters. Oh, my God, we're at 30 minutes because of all the AV stuff. Please don't leave. All the reporters you need. You have all the CI stuff that you need. There's so many RSpec plug-ins. I love that I get to focus on just my tests and not the stuff around my tests. Additionally, RSpec fosters empathy. The API is designed to let you have a place to write in what the heck you're doing, describe the slide and how it complements RSpec. You have this opportunity in there to tell a little bit of your story in a way that's congruent with your tests. Another thing I love is that it shifts your perspective. RSpec has a domain-specific language. It does not look like normal Ruby. That is a level of indirection. However, it forces me to think of my methods not just as methods, but outside in, what's it like to use them. What's it like from the perspective of a stakeholder? What's it like under a different context? I really like the DSL for forcing me out of just thinking of just methods and classes. Another tool, talking about tools prompting behavior. It's possible to write tools that just don't have a whole lot of opinions. Minitest is a good example of one such tool. It has a different priority than RSpec. An analogy I picked up from Aaron this week is you could think of Minitest as a race car. That's why DHH uses Minitest, by the way, if you don't know. It's lean, mean, it's essential. It's only what you need to get your tests written. It's all pure Ruby, except it has these hard bucket seats. Versus RSpec, a luxury sedan with a lot of knobs and dials, but it's mostly full-featured and quite comfortable to ride in. If you want a comfortable seat, RSpec offers you this rich Corinthian leather experience that you can just sit in and feel comfortable. The Zeitgeist right now, and by the way, if you don't know the word Zeitgeist, it's a German word for time snapchat. The Zeitgeist right now is saying that Minitest is really hot. When I talked to all my friends, a lot of them have dropped RSpec and started using Minitest. I think it's just really popular right now. I think that one of the reasons is people generally spread fear and uncertainty and doubt about RSpec: that it's too verbose, is bloated, is slow, is too much indirection, is better to just write pure Ruby. You ain't going to need it. I am here too. I use Minitest on a lot of my projects. I like Minitest just fine. I like that it doesn't have very many opinions and it gets out of my way and I can just write just the tests I want. But of course, I carry with me the fact that I actually have, very finely honed after years and years, my own testing opinions that I know work very well for me and I can write tests without getting myself into too much trouble usually.
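For contrast, a rough sketch of the same checks in Minitest, which stays closer to plain Ruby: a class, some methods, and assertions. This reuses the made-up Alarm class from the previous sketch.

```ruby
require "minitest/autorun"

# Assumes the illustrative Alarm class defined in the earlier RSpec sketch.
class AlarmTest < Minitest::Test
  def test_unarmed_alarm_is_silent
    assert Alarm.new(armed: false).silent?
  end

  def test_armed_alarm_is_not_silent
    refute Alarm.new(armed: true).silent?
  end
end
```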
But if you're not a testing expert and you don't want to be a testing expert, or if you're on a team with novices, what I would suggest is: remember, I learned a lot discussing RSpec and grappling with its API and its features with past teammates. I think that you might benefit from that too if you haven't had that experience yet. So yeah, on one hand, RSpec takes longer to learn, but when you learn how to use RSpec, you're also learning stuff about design and testing. And so maybe that's not so much a bug as a feature in some cases. So if you're still not happy with your test suites, I suspect that you might be looking for a tool to solve your problems when instead we can use our brains, use thinking instead, and change our approach. Oddly enough, at RubyConf last year, I gave a talk on exactly that. You can find it called is.gd slash stop hate. It's called How to Stop Hating Your Tests, and it's not about tools, it's just about thinking. All right, so in the time remaining, I'm going to get a little bit more meta. Why are we here, really? The fact that anyone came to this talk worries me. I would not have come to this talk. Let me explain. Let me back up. First of all, giving somebody else's talk is a lot like testing their code. Because I've had to open up all of Sam's work and his notes and stuff and try to understand what he was going to say here today. So if you see something confusing when you're looking at somebody else's code and you're trying to write tests for it or trying to review it, it's easy to think they're obviously a moron. So it's important to assume that the author is smart and intelligent and had reasons. Meanwhile, if you see something that's obviously awesome, great, it's still your job to put on a critical hat and investigate it anyway and ask the hard questions about why we're here. So let's critique this talk. Not the stuff that I said, just the Sam stuff. The stuff that I said, it's fine. This is the abstract. I assume you've read it. I won't reread it or anything. This is the abstract. This is the first thing I read when he texted me to see if I could give this talk. This is my opinion of the abstract: people like peanut butter, people like chocolate, slam them together, RSpec Rails. I read the abstract and I'm like, this could be a six-paragraph blog post. And so the next thing I did was I googled RSpec Rails 5 and found Sam's six-paragraph blog post. And I was just thinking, I was mad, I was like, why was this talk selected here? How did this talk fly through the CFP process without any criticality whatsoever? Like, that just doesn't seem right. Now granted, my talk was rejected and I'm a little bit biased. I might be a little salty. But when I thought about it, I think that the reason was that this was a safe talk. This is a comfortable talk. This is well within everyone here's comfort zones. I use RSpec, I use Rails, let's find out what's new. Great. But I feel like that comfort should scare us, because when we're in a group like this that's maturing and we're getting up to major version numbers like five, you know, comfort can breed complacency. So RSpec: if we're just content with where things are and we're pretty happy with RSpec and we're just happy to see, you know, little tiny tweaks here and there and make sure it continues to support stuff in the future, you're not writing blog posts about this new RSpec thing, you're not writing new tools, you're talking about RSpec less. Even if RSpec does everything you want it to do.
Minitest, meanwhile, lately — like the zeitgeist — I've seen a lot of people talking up Minitest, writing more plugins, educating people a little bit more with blog posts, and as a result, it's getting a little bit more attention. So as a new person walks into the room, they're going to see people talking about Minitest more than RSpec, and they're going to tend to go towards Minitest, not RSpec. So this reminds me a little bit of a similar dichotomy. Rails. Rails is pretty mature now. It's over 10 years old, it solves the problems that it solves really well, and it's pretty well known what it's good at and what it's not. So people talk about Rails a little bit less. Especially all of us busy getting stuff done and building things. We're not out there advocating Rails anymore because we get to use Rails at work, which is itself fantastic. However, when you look at jobs, Rails jobs are on the decline. They're not just slowing down, it's negative growth. This is another thing, the technology that shall not be named. Everyone's talking about Node.js. Like it or not, 900% year over year growth in jobs on Indeed. There's a lot of activity there, and it's not about — this is not a contest of who's the better technology or who solves stuff better, it's what's on the front page of Hacker News. So my challenge is thinking about this talk and why the hell we're giving this talk and why we're here. That was ironic. Because that's one of our options. The other option, if we're not willing to be uncomfortable, is we're going to see Ruby jobs start to dry up. There might be fewer people at RailsConf 2018 than this year. If you're not familiar with it, Ruby Together is a non-profit that pays people to work on Ruby open source. Another way to think about this is to ask: what were the conditions necessary in order for Ruby Together to seem like a necessary and good idea? Well, when an ecosystem is popular, everything's easy because there's just wave after wave of people on the Internet who are going to write open source for free just for the ego, just for the fame to be attributed to the new popular thing. Also easy: sponsored stuff, like Oracle backing Java. Java's not going to go anywhere because Oracle's incentivized for Java to be successful. Google is not going to drop Go unless they feel like it. I already dropped the mic, but it's done. JavaScript cannot die because multiple vendors have staked their businesses on it. Every single browser — JavaScript is not going to go anywhere, so it's a really safe bet. We're talking about RSpec, and it's mature at this point. I don't mean mature as like a four-letter word. Mature means mostly done. Bundler is mature. Rails is mature. Ruby is mature. They mostly do what they need to do to do their job well. That means, as a result, that when you maintain a popular gem like RSpec, it no longer makes you rich and famous necessarily. The stuff that they had to do just to make RSpec continue working with Rails 5 is almost all stuff that you don't actually see. It's all internals, legacy code refactoring. No one really wants to do that. The reason Ruby Together needs to exist is because the energy and the funding to keep Ruby competitive isn't there otherwise. That is disconcerting, because Ruby Together isn't ever going to be big enough to solve that fundamental systemic problem. Let's talk about my real job, sales. I spend a lot of time talking to business people about software solutions and building software apps and stuff.
Entrepreneurs that I talk to are always talking about certain technologies that they hear about, that get pitched to them. The MEAN stack — Mongo, Express, Angular — and by the way, I've talked to multiple business people this year who are like, yeah, we're going to build a new application, we're going to do it all in Angular 1.x. People are teaching business people, oh, you don't want Angular 2, just stay on one forever. I don't get it. We're just going to wait. Wait it out. And Node.js — the so-called MEAN stack. A lot of entrepreneurs are pushing this kind of stuff. Another one: a lot of people are just assuming, based on trendiness, that Node and React are just the way to go. You know who's talking about Ruby and Rails nowadays out in the marketplace — who has the ear of CTOs and directors of engineering? People spreading fear, uncertainty, and doubt, because they have their preferred upstart technology that's faster or whatever. And what those businesses are hearing is that there aren't enough Rubyists out there, that the Rubyists who do exist cost too much, that Ruby is slow, and that RSpec doesn't scale either at runtime or operationally. Now if you're in the room, you're like, no, no, no, Ruby's fine, this is okay. But I think that this is a really important bit of anecdote from the life of Justin Searls we all need to deal with, to help solve my consulting sales problem. Because I don't like sales. That's why it's so frustrating: Rails is still the best choice for entire classes of applications. But because we stopped saying it a few years ago, businesses stopped hearing it. People only share new stuff that excites them. That's novel. If you were to discover immortality today, it would drop off the front page of Hacker News after a week or two. People wouldn't be talking about it. They'd find some new shiny thing. They'd be talking about React Native 1.0. And not that you just, you know, defeated death. Even though that thing is way more objectively better, it's not novel after a certain bit of time. So the dilemma, right? Ruby is no longer new. Ruby is still good. We've got to do something so Ruby can remain relevant and we can keep working on Ruby at work. What's the "we do something" part? Remember, Ruby's mature. It does its job mostly well, and one thing that I think our community, the technologists, need to get comfortable with is that it is okay for tools to be mostly finished. It is okay for software to just mostly do its job and be good at what it does. In any other industry, it would be ridiculous for us to say otherwise. Like, oh, that's clearly obsolete now because it's not, you know, super active and they're not adding new features. At a certain point, it just does what it needs to do. Remember I said the key to happiness is our tools — we like Ruby, we like Rails, that's why you're here — and how we use them. So maybe it's time for us as a community to de-emphasize the tools and start talking more about how we use those tools to accomplish really cool stuff. Because there are all these evergreen problems in software. There are all these problems we're never going to solve. We're never going to solve testing. We're just going to get asymptotically better each time. We're never going to solve design, because we're always going to find new ways to design code. Human issues are never going to be solved either, right? How our code communicates its intent to its reader is never going to be solved. I swear, I get like five bonus minutes.
Sarah, can I have a minute? She's nodding very tepidly. So we've got to tell stories that help people solve problems in ways that are more than just "look at this new shiny bauble." And if you love Ruby, tell your story in Ruby and associate it back with Ruby, so that Ruby remains known as a community of people who really get object-oriented design right, who get testing right, who get community and inclusiveness right. Being known for those things and having people talk about those things are enough to keep us relevant. And when you think about whose job this is, remember that most of the people who made Ruby so famous in the first place don't write Ruby anymore. Their chapter is complete. Most of them have moved on to other ecosystems. Some of them are no longer even with us. And that means that keeping Ruby relevant is not somebody else's job. I hate to break it to you, but the fact that you show up to a conference called RailsConf in a room that holds just a couple hundred people means that you're one of the top couple hundred people whose job it is now to keep Ruby relevant, if you care. So my message is: make Ruby great again. And tell your story. We don't have the time to talk about it today. Use this hashtag and tell me something that you could do to tell a story that might change something, that might have an impact on others, and convince them that Ruby is a better solution than the technology that shall not be named for whatever it is that you're doing. Again, my name is Searls. I'd love to be your friend. I'm going to be here for the rest of the week. If you want to help us in our mission to fix how the world writes software, consider joining Test Double. We're always hiring great developers. If your company is looking for senior developers and you're struggling to find people to add to your team, our developer consultants are great senior developers who would love to work on your team with you and build stuff alongside you. If you don't want either of those things but you want a sticker, I've got stickers too. And most importantly, thank you all so much for your time. I really, really appreciate it.
|
Something's in the air. It's Rails 5! A lot of Ruby developers are preparing to get their apps upgraded to Rails 5. Vitally important is, of course, your test suite. In this talk, you will learn everything you need to know to get your RSpec suite working on Rails 5. Learn about: The deprecation of controller specs and what to do about them ActionCable! How to test our favourite new feature General tips for upgrading The technical content of this talk is for almost everyone, from bootcamp grad to seasoned veteran. Come along to learn and ask practical questions about RSpec.
|
10.5446/31556 (DOI)
|
Hi everyone. So today we are going to see how we should test Rails 5 apps. My name is Prathamesh, that is my Twitter handle, and I work for BigBinary — I'm a director at BigBinary, and we are a Ruby on Rails consulting company. We also specialize in React and React Native. So if you want to talk with me about Rails or React Native, we can discuss it later. So I like stickers, as everyone does, but I'm a bit different. All the speakers that give talks always have stickers with them, and they give stickers to the attendees. But I like to collect stickers. So this is my laptop. At RailsConf I got a few stickers, but if you have some stickers, then contact me later, because I want to add them to my collection. I also help in maintaining this site, CodeTriage.com. It was started by Richard Schneeman, who got the Ruby Hero Award this morning. And this site helps in getting you started with your open source contributions. So if you subscribe to this site, it will send you an email with some issues on GitHub, and then you can start contributing to open source. So if you are interested in starting to contribute to open source, please subscribe to this site. We can also discuss it after my talk if you want any more details. I'm also part of the Rails issues team. So as part of the Rails issues team, I get to triage issues. So if you have opened any Rails issue, you might find me commenting there, or if that issue is no longer valid, then I might close it also. So everyone is excited about Rails 5, right? There are so many new features, like Action Cable. We just had an awesome talk about Action Cable. But besides that, we have API-only apps, and Rails 5 supports only Ruby 2.2 and above. So all of these new features mean we also have to test those features, right? We need to know how to test all of these new changes. And there are a lot of changes related to testing also. So basically with Rails 5, there are not just many features, but there are also changes related to how we test our code and how we run our tests. There are a lot of significant changes related to running tests, and also to the way we write tests. So today we will see all of those changes, and we will see what things have changed and how we should go about writing tests. So we will start with running tests — how we should run Rails 5 tests. And then in the second part of the talk, we will see how we should write tests, okay? So let's start with running. So before I talk about what is the way to run tests in Rails 5, let's do some recap. So before Rails 5, the only way to run tests was rake test. And I'm only going to talk about the default test stack that comes with Rails. So I'm not going to talk about RSpec or any other tools. There is a talk after my talk, which is going to be about RSpec and Rails 5. So by default, we can run our tests using rake test. It just runs all the tests in our application. And if we want to run some specific tests, like controllers or models, then we have specific rake tasks, okay? And then if we have some other folders, like test/workers or test/services, we can extend those rake tasks — we can write our own rake tasks and basically run those tests. But if we compare it with other testing tools, it has some limitations. Basically, the limitations are around how we run the tests effectively. So if I have a test suite which is always passing, there is no need to basically have anything in the test runner. Basically I just need to do rake test and it will pass. So there is no need to do anything.
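Just to make the custom rake task idea from a moment ago concrete, here's a rough sketch of what one looked like before Rails 5 — the services folder is only an example:

```ruby
# lib/tasks/test_services.rake — pre-Rails 5 style, using rake's own TestTask
require "rake/testtask"

namespace :test do
  Rake::TestTask.new(:services) do |t|
    t.libs << "test"
    t.pattern = "test/services/**/*_test.rb"
  end
end

# run with: rake test:services
```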
But a test suite that always passes is not the typical case. So what happens is I write a test, that test fails, then I write some code, then that test passes, and then I write the next test. That is the normal flow that we follow in our day-to-day work. And this is a typical example of how we run a test before Rails 5. So there is a controller and I am testing that controller and it prints the output. If I am able to use this output to run the test again, obviously that will help me improve my testing workflow. So this prints some output here about which test failed. And I can see that, okay, from the users controller, the test from line number 9 has failed. But if I try to reuse this information to rerun the failed test, it doesn't work, because Rails is not able to understand exactly how to rerun this test again. And this was before Rails 5, obviously. So to sort out these kinds of problems and limitations, Rails 5 has introduced a test runner. This test runner will help us in rerunning tests. It will help us in having a proper workflow for running our tests. And this test runner can be used with the bin/rails test command. So this is a new change in Rails 5. If we do rails --help, then we will see a command for rails test. And obviously this is different from the rake task. It is not actually a rake task; basically there is a proper executable which has proper support for getting arguments. It also has documentation, as we will see later. So this is the output of bin/rails test. On Rails 5 apps, we can see that it has finished the tests and there is nothing significant here. But we will see later how it improves things. So let's see how we can use the rerun snippets. So the same example that we ran earlier, rake test and a controller test. But here in the output, we can see that it has printed which test has failed with the line number. But it also has a command ahead of it. So I can just copy that bin/rails test and whatever the test that has failed. And if I run only that particular line, then it will run only that test. So I am able to basically copy-paste things and run only those tests which have failed. It also has good documentation. So before Rails 5, when we only had the rake task, there was no way to document things like which arguments this rake test command is going to accept. There was no better way to do it. There was also no good way to handle those arguments, because you have to jump through hoops if you want to pass arguments to rake properly. So that was also one of the goals while designing the test runner: it should have proper documentation and it should accept proper arguments. So if you do bin/rails test -h, we will see the documentation of this command. And we can see that it does a lot of things. I can use the rerun snippets. I can fail fast. I can see the backtrace. I can defer the output till the end. So a lot of things are there. And earlier we saw how to run a single test, but we can also run multiple tests. So if you want to run a test from one model, say a user model, and from a post model, that is also possible. You just pass the test file and the line number as arguments, and it will properly run the test from line number 27 from the user test and 42 from the post test. It will also be able to figure out, okay, I want to run tests from particular folders. So you can pass test/controllers, test/integration, and it will just run all the tests from these two folders.
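These are the kinds of invocations I mean — the file names and line numbers here are just examples:

```
# rerun exactly the failed test, straight from the copied snippet
bin/rails test test/controllers/users_controller_test.rb:9

# run specific tests from two different files
bin/rails test test/models/user_test.rb:27 test/models/post_test.rb:42

# run everything under particular folders
bin/rails test test/controllers test/integration

# see the help, stop on the first failure, show full backtraces
bin/rails test -h
bin/rails test -f
bin/rails test -b
```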
So there is no need to basically augment rake with new rake tasks if you want to run tests from specific folders. You just have one command with you which can run all the tests in the way that you want to run them. Another cool thing is you can also run two tests at a time. You can pass the line numbers of tests by separating them with a colon, and it will run those two tests at the same time. Another new feature that has been added is related to the backtrace. Before Rails 5, we had to pass an environment variable to see the backtrace. Basically what Rails does is it doesn't show you the complete backtrace of your failed test. It uses its own backtrace cleaner and only shows you the relevant lines. But sometimes we want to see the actual output — the full stack trace — because if there is any failure related to some gem that we are using, we need to see the exact line where the code failed. So now we can see that by just passing -b. It will show the backtrace if there is any. We can also fail fast. We can just pass -f and it will stop at the first failure. It will not run the whole test suite. It will just stop at the first failure and print the result on the console. So here it just prints "interrupted and exiting" and we can see that it has only run five assertions. It has not run all the tests. Colored output — the most wanted thing, right? We want our test output to be colored. So that is also present here, and it is activated by default. When you run tests in Rails 5 using this new test runner, you will always get colored output. You don't have to pass a flag or write configuration in any file. It will just work out of the box. And we know that the Rails test stack is actually powered by Minitest. So the underlying library that is used for the test framework that comes with Rails is Minitest. And Minitest 5 has this plugin architecture which allows you to create plugins which hook into the Minitest code and customize the output that comes out of Minitest. So Rails 5 actually uses this capability of Minitest to provide a custom reporter which has this colored output and all the other features. And it also has a plugin for providing options for all of these things like fail fast, defer output, backtrace and everything. So using this Minitest 5 plugin architecture, Rails 5 has added this test runner. And you must be wondering, right? All of these features already exist in other tools. If I use RSpec, everything that I showed already works. There is nothing new in this. So obviously, yeah, there are inspirations. This is inspired by RSpec, Minitest, maxitest and other tools. But the point here is that Rails always says that, okay, if something is good, then it becomes part of Rails. You get things as part of Rails as the default omakase stack. And you don't have to configure things which are good, which already work. So basically, following that pattern, starting from Rails 5 you get the test runner as part of the default Rails stack. So if you are starting a new Rails 5 app, you don't need to configure 10 different testing libraries or 10 different things to have all of these features in your app. You will get it out of the box from Rails 5.
And one of the significant changes that has happened is related to the controller test — how we write the controller test. So this is a typical controller test from Rails 4. We just tested whether my article is getting created or not. I post to the create action, then I pass some arguments, I pass the article params, and then I check whether I'm getting redirected to the new article page or not. In Rails 5, if we scaffold-generate this test, then it will look like this. So there are many changes and we will go through them one by one. Instead of hitting the create action, we have articles_url, okay? So we are no longer hitting a particular action. We are hitting the route, okay? This is the route helper that we have in other parts of the code. Just as in our application code, we are using it in the test also. Then, instead of passing the params by themselves, we are passing them as a keyword argument. So if you see the previous example, there the params were passed as-is. There was no keyword argument used to pass those params. But now we have a keyword argument for passing the params. And the third and most significant change: the superclass of the test has changed. It is no longer ActionController::TestCase as in Rails 4 apps. It is now ActionDispatch::IntegrationTest, okay? So this is how a typical controller test will look if we generate it in Rails 5. Now why this change? Was it required? Let's see why this change was made. If we compare the test case from Rails 4 and Rails 5, they look almost the same. Basically what we are testing is just that the article gets created and we get redirected to the new article page. So these tests are almost the same. There is not much difference in what they are trying to achieve, what they are trying to test. The only difference is how we are testing — what mechanism we are using to test it. But there is a sort of significant difference, and that is that integration tests are slow. We already know that, right? We want our tests to be fast and integration tests are very slow. So we write functional controller tests. And that's what we were doing until Rails 5 — that's what we were doing when we were using Rails 4. But integration tests are no longer slow. They are now comparatively as fast as your functional controller tests. And that is due to the work of Eileen — she worked a lot on integration tests over the last year. Now the integration tests are almost as fast as the functional controller tests. So if you consider speed, there is no significant difference between running a functional controller test and an integration test. So the Rails team decided to deprecate the functional controller test in favor of the integration test. And it is obviously closer to the real world. Because when we test the controller using a functional test, we are not actually running it like what happens in the real world. Functional controller tests don't have the full middleware stack. Without going into too much detail, they just do some magic and go directly to the controller. But that's not what happens in the real world. In the real world, a request comes in, then it goes through the Rack middlewares one by one, and then it hits the controller. But that's not what was happening in the case of functional controller tests before Rails 5. So now we are closer to the real world. We are actually mimicking what happens in a typical request-response cycle.
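Here's roughly what that before-and-after looks like, sketched with a hypothetical articles resource (both versions shown together for comparison; in a real app you would only have one):

```ruby
# Rails 4 style: functional test hitting the action directly
class ArticlesControllerTest < ActionController::TestCase
  test "creates an article" do
    post :create, article: { title: "Hello" }
    assert_redirected_to article_path(Article.last)
  end
end

# Rails 5 style: integration test hitting the route, params as a keyword argument
class ArticlesControllerTest < ActionDispatch::IntegrationTest
  test "creates an article" do
    post articles_url, params: { article: { title: "Hello" } }
    assert_redirected_to article_url(Article.last)
  end
end
```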
So Rails 5 generates integration-style controller tests by default. Whenever we do rails g scaffold something, it will have an integration-style controller test generated instead of a functional test. But we also have old apps, right? In our old apps we might have ActionController::TestCase — those functional controller test cases. So what will happen to those old test cases? They are backward compatible. Basically, the change is only for new Rails apps. So if you are upgrading an existing Rails app to Rails 5, you don't have to worry. Your tests will continue to run as they are. But if you generate a new resource in your Rails 4 app which has been upgraded to Rails 5, that test will be an integration test. So all the new tests will be integration tests, but your existing controller tests will continue to work as they are. So this change is backward compatible. And in Rails 5.1, ActionController::TestCase might be moved into a new gem. It might be removed from core Rails and moved to a separate gem, and then we can use that gem to continue using this old behavior. So we don't have to actually change our existing tests; we can use them as they are. But all the new tests will now be integration tests. Though this change looks simple — and I said that functional tests and integration tests are almost the same, that we are doing the same thing — it's not actually doing the same thing. I lied a bit. There are some implications of this change which are important to understand while writing our tests. So let's see what the implications are. Now, in Rails 4, this is the typical controller setup that we do: we fetch a user and then we sign in that user. Generally, if you are using Devise, you have sign_in and then you pass whatever user you want to sign in. But internally, it does this: that test sign-in helper actually sets a variable in the session — session[:user_id] equals something. And then when you run the test, your test has access to that session, and your test interprets that user as the logged-in user. But now that we have moved to integration tests, we no longer have access to that session. We cannot access the session directly. We have to actually log in the user. So before trying to create an article, I have to actually send a post request to log in a user, and then send another post request to create the article. If you use Capybara or other such tools, you might know this pattern: you usually have a before block (not a filter, a block) where you actually sign in the user, and then you do whatever you want to test. So similarly, now we will have to add this setup block. We cannot use the session directly; it is not accessible in the integration test. So this is one change that we have to remember while writing the new tests. Then, we used to be able to access the headers directly. We were able to read, assign, and change headers by just accessing them as request.headers. That is also not possible now, because integration tests don't have direct access to those headers. So we have to pass those headers explicitly as keyword arguments. Along with params, we can also pass whatever headers we want to have on our request. This was about all the implications of writing integration tests. Now let's take a step back and see what we are actually testing when we test a controller action or controller URL.
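Before that, here's a quick sketch of the sign-in setup and the explicit headers just described — the login route, the fixture, and the header value are assumptions, not from a real app:

```ruby
class ArticlesTest < ActionDispatch::IntegrationTest
  setup do
    # No direct session access any more: actually log the user in over HTTP
    @user = users(:one)
    post login_url, params: { email: @user.email, password: "secret" }
  end

  test "creates an article with a custom header" do
    post articles_url,
         params:  { article: { title: "Hello" } },
         headers: { "X-Requested-With" => "XMLHttpRequest" }
    assert_response :redirect
  end
end
```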
So again, let's see a typical example of a Rails 4 test. We are again trying to create an article and see whether it gets created properly or not. Here we are testing these things. We are testing which action gets used to create the article — so we are testing the create action. We are also testing the instance variables. In our controllers, we use instance variables like @article = Article.new and we pass the params. So we are testing whether that instance variable gets assigned or not. We can also test what the status code was, with assert_redirected_to — so we are verifying that we got redirected to the new article page. We can test a status code in old tests. We can also test the template. We can check that my index template got rendered. So we can use assert_template to test which template got rendered. And obviously, we can also test the actual HTML result of this request. I can use assert_select and other helpers to verify that my DOM content was what I was expecting — it had this particular div, it had this particular span, or whatever. So I can also test the actual generated HTML. But in Rails 5, there are some changes. We cannot test all the things that we were testing in Rails 4. This is the example of a Rails 5 test. Here also, we are testing the request, because we are sending the request — ultimately we are testing that request. We can also test the status code. We can check where we are redirected to, or whether the response was successful or a failure. And also, we can test the generated HTML. So we can write assertions about what DOM content was generated. But we cannot test the instance variables. Earlier, we were able to test what instance variables were assigned in that particular controller action, but that is no longer recommended in Rails 5. We also cannot test which template got rendered using that assert_template helper. That is also not recommended by Rails 5. So basically, these two helpers are now deprecated. If you are using assigns or assert_template in your existing tests, you will get a deprecation warning when you upgrade to Rails 5. And when we are using these assigns and assert_template helpers, we are actually testing the controller internals — what the controller was doing internally when we hit that controller action. We were assigning some instance variables, we were rendering some template based on some condition, and these are all controller internals. The result does depend on them, but they are not themselves the result of the controller. They are just internal parts of it. So this is no longer recommended for testing in Rails 5, and you will get a deprecation warning. The recommended way to test controllers in Rails 5 is to test the end result. What was the end result of that request? Something like: some HTML got rendered. Or: what was the response status code? Those are the things that are now recommended for testing. And we can test them using assert_select and the rails-dom-testing gem. These helpers are used for asserting what content your DOM has. So we are no longer just testing a controller action, okay? In Rails 4, we were able to test the controller action and also the internals of the controller. But now we are testing the result of your controller and view combined. Okay?
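In practice, testing that end result looks something like this — a sketch using assert_select, with hypothetical articles and comments models:

```ruby
class ArticlesShowTest < ActionDispatch::IntegrationTest
  test "renders the article page" do
    article = articles(:one)
    get article_url(article)

    assert_response :success
    # Assert on what actually came out: status code and rendered HTML,
    # not instance variables or template names.
    assert_select "h1", text: article.title
    assert_select ".comments .comment", count: article.comments.size
  end
end
```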
So the way your controller works is it passes the instance variables to your views. Then the view gets compiled and you get the HTML. In Rails 5, this operation of passing the instance variables and data to the view and generating the final HTML is considered a black box. So you are not recommended to go inside that black box and do something. You are only recommended to test what comes out of it, okay? So we test what comes out of it: the actual output, which is the compiled HTML, or the response code. There is a space — basically there is a space around how your controller passes your data to views. And before Rails 5, we were allowed to test and go inside that space and see what instance variables or what data I am passing from my controller to my view, what template I am rendering. But that is no longer recommended. Rails 5 tells you: don't go inside that space between your controllers and views. Don't go there. So the controller-view interface is no longer recommended for testing in Rails 5. That's what I was trying to say. Consider it just an implementation detail. Rails is a framework which has its own implementation for passing your instance variables from your controller to your views. There is no need to test that your instance variable gets passed to your views using assigns and other helpers. Assume that they are just passed by Rails. Consider it an implementation detail and test the actual output that comes out at the end. But you might be thinking, okay, you are saying so many things about how to test Rails 5, but I actually need to test my instance variables. And there are some valid reasons why I might want to test my instance variables. So let's see why we were actually testing those controller internals, or why we might have to test those controller internals, okay? Why do we test controller internals? Why do we use the assigns helper or the assert_template helper? We use them to verify that the correct template gets rendered. If I have some conditionals — let's say based on my subdomain I'm deciding which template to render; if my subdomain is this, then I want to render the index template, if my subdomain is that, then I want to render some other template — that's a valid use case. I also want to verify that the correct data gets passed from my controller to the view. So I'm verifying that I'm assigning proper instance variables and that data is getting passed to the views. So I might test controller internals just to verify this. That's why we test the instance variables: because that is the only way to send data from controller to view, and that is the most recommended way to send the data, right? It is everywhere in the Rails guides, it is present in all the documentation. So we know that that's the way to send data from controller to views. It's a sort of contract — nobody has concretely written it down, but it's there. We know it and that's why we test it. We actually test the interface between controller and view, okay? So before Rails 5, we were testing two things when we were writing the controller test. We were testing what was coming out of that controller, like the generated HTML, response code and other stuff. And we were also testing this interface between controllers and views.
We were testing what was happening in that middle space also. Testing controllers in this old way — testing this middle part in isolation from your views — is now no longer recommended. But sometimes we need it. Consider a case where you are writing a Rails engine which is used in your Rails app, and you're creating a controller in that engine, okay? Now your Rails app has the views which will consume that controller — basically those views will get rendered from that controller. And you have some conditional logic based on which a particular view will get rendered. Let's say you have two templates and a conditional based on which the controller will render a particular view. That logic can depend on what data gets passed from controller to view, okay? So you depend on that middle part, on that interface of passing data from controller to view. So this is sort of a valid use case. Consider the example of the Devise gem. We use Devise, right? We subclass from the Devise controller and then all the behavior gets added to our controller, and our views also get rendered. The views are present in our application, but the controller is present in the Devise gem. So if we want to test Devise, we want to have some mechanism to verify that the correct data is getting passed from the Devise controller to our views, okay? We just cannot test it in isolation. So sometimes it is needed to test that middle interface. Yeah. And as I said, the example was providing a controller via an engine. Obviously there are ways to get around this without using assigns and assert_template. If you want to test a controller which lives outside of your Rails app, you can provide some default view. Let's say I depend on an @user instance variable being passed from my controller to the view; then I can add a default view in my engine which renders something based on that @user instance variable, and then I can test it using assert_select or the rails-dom-testing gem. But if you don't want to do that and want to continue testing the old way, you have this gem: the assigns and assert_template helpers have been removed from Rails core and put into the rails-controller-testing gem. So if you include this gem in your tests, you will no longer see the deprecation warnings and you can continue to run and write your tests in the same way as before. But it is not recommended. Okay. So this was about all the changes related to controller tests. Now we will see how to test API apps. Rails 5 has introduced API apps, and we can create them using rails new --api, which will create an API-only app. And now we will see how to test those apps. So typically in an API-only app, we only deal with JSON and XML. We don't deal with what HTML gets rendered because we no longer generate HTML; we only generate JSON and XML data. So our unit tests — whatever model tests we have — will remain the same. There is no change in the way we test those for API apps. But there is a change for testing our controllers. In a typical API app, we will have something like this. We are creating an article again, but this time we want to send JSON data. So this example is actually for sending HTML data, because if you look at the example for JSON, we have to do a lot of things.
We have to pass the headers for the content type. We also have to convert the data to JSON before passing it — we cannot pass a hash, we have to actually convert that data to JSON. We also have to set the format of this particular URL to JSON so that Rails will understand that the incoming request is a JSON request. All of these things we have to do in Rails 4. But in Rails 5, we have a way of encoding that request. If we want to send a JSON request, we can just tell Rails to encode this request in JSON format. And it will take care of all of these things: converting your params to JSON, setting the proper header, setting the format to JSON. It will take care of all of it. We just have to specify the encoder as an argument. So if you see here, I am just passing the encoder that I want to use: as: :json. It will do all of the magic behind the scenes to treat this request as a JSON request, do the request encoding, set the proper headers and everything, and it will send that JSON request properly. Currently, we only have the JSON encoder — there is no other encoder present in the code base right now. But if you want to have your own encoder, let's say XML, then you can use this hook to register it, and then you can just pass it as XML. This register_encoder method expects two things: the way you want to encode the parameters and the way you want to parse the result, the body of the response. If you pass these two things, it will register that encoder with Rails, and then we can use it in our tests. The other thing that we want in our API tests is parsing the incoming response. So we send the request, something happens, and a response comes back, and we check that valid JSON was returned to us. Now, if you used Rails before Rails 5 for API apps, most of you will have this kind of helper, parse_json, in your test helper or spec helper file. It just does JSON.parse(response.body). You might have seen these kinds of helpers in your test code. What it does is just parse the body as JSON. To avoid this parsing in every test, Rails now has a helper called parsed_body, which can be called on the response. Okay. What this helper does is figure out that I sent a JSON request, so it should parse the response as JSON. So if we call this in an API test, it will properly figure out that I'm expecting the response to be parsed as a JSON body. It will just parse it and we can compare it properly with the expected hash. Okay. Obviously, this requires that as: :json be passed. So these kinds of things help in writing API tests effectively, because they reduce the boilerplate code that was required before Rails 5. Now, let's look at some of the other changes. So yesterday, how many of you attended the talk about Active Job? Okay. So Jerry talked about the async adapter that he added in Rails 5. And now that is the default adapter for your development and test environments. Now, what is the advantage of it? Before Rails 5, you might be using Sidekiq for your production environment, but your development and test environments were not using any async kind of adapter. You might be using whatever inline adapter comes with Rails — that was the default for the test environment. And if your code accidentally depends on jobs running synchronously, your tests might not catch it.
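Stepping back to those API-testing helpers for a second, here's a minimal sketch of as: :json and parsed_body together — the route, params, and expected status are illustrative, and the commented XML encoder is just a sketch of the registration hook:

```ruby
class ArticlesApiTest < ActionDispatch::IntegrationTest
  test "creates an article over JSON" do
    # `as: :json` encodes the params, sets the content type header and the format
    post articles_url, params: { article: { title: "Hello" } }, as: :json

    assert_response :created # assuming the controller responds with 201
    # `parsed_body` knows the request was JSON, so it parses the body for you
    assert_equal "Hello", response.parsed_body["title"]
  end
end

# Registering your own encoder (say, XML) might look roughly like this:
# ActionDispatch::IntegrationTest.register_encoder :xml,
#   param_encoder:   ->(params) { params.to_xml },
#   response_parser: ->(body)   { Hash.from_xml(body) }
```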
Coming back to Active Job: basically, the old setup might give you false positives, and your code might behave differently in production than in test. But now that we have the async adapter in Rails 5, that is the default adapter for your test environment. You don't even have to specify it in any configuration file; it will just have the async adapter as the default. So we are reducing the difference between your test environment and production environment for Active Job. We also have random test ordering enabled by default. At last year's RailsConf, Aaron Patterson — Tenderlove — talked about having fast tests. And one of the ways he mentioned was having parallel tests. If there is any way we can run tests in parallel, we can obviously speed things up overall; the tests will finish in less time. Now, if you have tried running tests in parallel using some gem — there are already gems which do this — and if your tests were dependent on the order in which they run, you might have faced issues while running them. Because if tests depend on the order in which they are run, they are not good tests. Basically, if you run them in some other order, they will fail. So they are not good tests. And before Rails 5, the order that Rails used internally to run tests was not random by default. You were allowed to configure it to something else, but now it is set to random by default. So now we are a step closer to having parallel tests. Actually, no — but a small step closer to having parallel tests. How many of you like fixtures? Only two people. Okay. So whenever we talk about testing in Rails, that talk is not complete without saying something about fixtures. In Rails 5, there are some nice additions to help with fixtures also. The first one is file_fixture. How many of you actually have some JSON files in your test support folder? Okay. For you, there is good news. Now you don't even have to write those small helpers in your test helpers for accessing those files, because you get a helper from Rails. You can store those fixture files in the test/fixtures/files folder and then just use this helper and pass the name of the file, and it will give you that file object. Then you can read the contents — whatever you want to do with that object, you can do it. So again, it removes one more small helper that you used to add in your test helper file. And you can configure this path. By default, it looks inside the test/fixtures/files folder, but if you want to set it to something else, you can. Another change is related to using custom Active Record classes. If you're using a legacy Rails app, you might have a model name different from your table name. Let's say I have a Post model, but my table name is something different — records. And I have another model, Comment. So if I want to use fixtures with this kind of setup, I have to use a fixture file named after the table, records.yml. And for the other model, I will use comments.yml. This setup works fine, but I have to tell Active Record that I'm using the Post model for my records table. The way to do that earlier was to set it using the set_fixture_class method, and we could set it in our test helper. So this was possible before. But there was one limitation.
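Here's a rough sketch of both helpers just mentioned — the file name, model names, and expectation are only examples:

```ruby
# test/models/import_test.rb — reading a support file with file_fixture
class ImportTest < ActiveSupport::TestCase
  test "parses the sample payload" do
    payload = JSON.parse(file_fixture("users.json").read)
    assert_equal 3, payload.size
  end
end

# test/test_helper.rb — the old way of mapping a fixture file to a model class
class ActiveSupport::TestCase
  set_fixture_class records: Post
end
```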
If you load fixtures using some of the rake tasks that we have, it didn't work properly, because it was not going through that test helper. The test helper gets involved only when you run the tests, not when you run the rake task. So you had to do some hacking to specify what table you actually want to use for a particular model. Now you can add table-specific or model-specific metadata in your fixture files directly. This is a typical example: you can just specify that the model class will be Post for this particular fixture file, and it will use that class for loading the data properly. And it will ignore this key — the _fixture key is ignored when loading data, so it will not cause any issues with your actual data. There is also a change related to how we test requests, but I think I'm almost out of time, so I'll skip it. So yeah. Testing Rails 5 apps is actually a better experience. We can run tests effectively. We can write integration tests more effectively. The focus is on integration tests — end-to-end testing rather than testing things in isolation. Controller plus view is considered a black box, so you are not recommended to go inside it; just test what comes out of it. And obviously there are gems which you can use if you still want to test in the old way. So happy testing. Write good tests. And how many Star Wars fans do we have here? Okay. So today's not May the 4th, but may the force be with you for writing good tests. And may you use the default Rails stack that comes with Rails. But I'm fine with you as long as you write tests. So stay for the next talk, which is given by Justin, and he will tell you about how to use RSpec with Rails 5. Okay. And if you want to know more about Rails 5, we have a blog series. We have around 32 blog posts right now at this URL about specific things that are coming up in Rails 5. So if you are interested in knowing about things in Rails 5, check out our blog. There is also a newsletter. If you want to know what happened in Rails in a particular week, you can subscribe to this newsletter and you will get a weekly email. And yeah, the RailsConf edition of this newsletter is coming tomorrow. So if you subscribe today, you will get to know what happened this week. That's it. Thank you.
|
Testing Rails 5 apps has become a better experience out of the box. Rails has also become smarter by introducing the test runner. Now we can't complain about not being able to run a single test or not getting coloured output. A lot of effort has gone into making tests - especially integration tests - run faster. Come and join me as we commence the journey to uncover the secrets of testing Rails 5 apps.
|
10.5446/31558 (DOI)
|
Okay, so today we're going to be talking about small details, big impact, and the general topic area here is that it takes a lot of little details to get a product to work well, and it's actually pretty hard to ever give a talk on any of those details, because on their own the details are pretty boring. So we decided to put together a talk that had a few of them as vignettes that will hopefully both motivate you to think about this kind of stuff in your own applications, but also give you a sense of what kind of stuff we think about working on our product. So first of all, I am wycats, and I work on a lot of open source projects. I don't actually know what to say about this slide except I work on way too many open source projects, and I like it a lot, and it's great, and you probably know me from some of them. My name is Liz, I am infinite math on the Twitter. I used to be a cartoonist — I was a cartoonist for about 10 years before I got into engineering. I've had like five books published out there; they're graphic novels. I just started getting into programming about a year and a half ago, and I just started working at Tilde on Skylight about three months ago. So: death by a thousand cuts, or big things come in small packages. Every day you make a choice. Sure, your app works, it does its job, your users are technically getting what they paid for, but you can do better — we can all do better. But how? User experience is a story, like a movie. You know exactly how you want it to go from beginning to end. User signs up, user logs in, user interacts with your landing page, etc. Unintentional nod to Kansas, by the way — I didn't even think we were going to be in Kansas City. But what if they click on something unexpected? What if they take a wrong turn? They can easily end up in a choose-your-own-adventure-style scenario that you never planned for, and suddenly they're on the vampire express to terror island. Weird things happen. Users end up in unexpected places. It's your job to make sure they're guided through seamlessly and they don't even realize it was a weird place to begin with. You don't want your user experience to end up being like that scene from Dune where Patrick Stewart is carrying a pug into a laser battle for some reason. You don't want that. You want to be like Gandalf, riding in on a bunch of eagles to save Frodo and Sam at the peak of Mount Doom. Sure, it doesn't make a whole lot of sense if you think about it too hard, but when you're in the middle of it and you're watching it, you're thinking, yeah, of course this is what happens. This is a good user experience. So I wanted to talk about four different vignettes, and the first one is what seems like a very small problem. But before I do that, I want to just take a second to talk about Skylight, just so you have some context for what it is I'm talking about. This is Skylight a while ago, a few months ago. You log in, you get a list of your endpoints. They're sorted by a thing we call Agony, which basically just means things we think are probably a good idea for you to work on. So if you have an endpoint that doesn't get hit a lot, then we don't care that much that it's slow. But if you have an endpoint that gets hit a lot, maybe a little bit slow matters a lot. So we try to combine those in a way that feels intuitive. We call that Agony.
And then we also have these little heads-up things, those little red icons that mean there's some database problem or some memory problem, and we try to give you some details about that. So that's sort of the experience. You should definitely come by our booth to see a whole demo. But what you can see on this page is that there's a bunch of numbers. And actually the Agony index used to be invisible. It used to just be the way the default list was sorted. But that actually sucked. We would say it was Agony and people would get it, and that was great, but it meant it wasn't a thing you could click on to sort by — the other UI we had for that was annoying. But there's this thing in the second-to-right column. It says RPM. It looks like this column here. And it's the thing you would do if you were designing this page, I think, the first time. And basically what happened is we shipped this feature really early. It was one of the first things we did. And people kept saying to us, there are too many things that say 0.01. I don't actually care about 0.01 or 0.03. Those things are the same thing. What is an RPM? So New Relic has RPM and they tell you it means requests per minute. And that's what it means. And you can learn that fast. But it is also true that the first time you ever see RPM in your life, it's like, what does that mean? Is it like a car term? Are they doing something with cars? So there's this problem. So the first thing that we did is, okay, if people don't care about the difference between 0.01 and 0.03, fine. We'll just say everything is "less than one." And we shipped that. But that didn't help a lot, because now there's just a lot of "less than one." So this is something — if you've used Skylight, you know that this is a thing that we've worked on for a long time. All of our UIs are things that we've worked on for a long time. So this is one that took us a lot of time to figure out what the right answer is. And that's what I want to talk about. So what should we do? What's the solution? So we have a really great designer. So we tossed the problem over to him. We said, OK, what should we do to make this easier? And he said, well, first of all, indeed RPM is not good. You should not say RPM. Let's change that to popularity. And that is great. That's good. And then without looking at any of the actual numbers, he just said, OK, we'll use a filled bar to indicate how popular your endpoint is. So endpoints that are not very popular will have an empty bar. And endpoints that are very popular will have a filled bar. And we looked at this design. We said, ah, that's pretty good. That feels great. So obviously we should go ship it. And so we did. The way Skylight works is we don't ship anything to every customer right away. We always create a feature flag. And because Skylight uses Skylight, we can test it on our own app. So we tested our app with the feature flag. And we saw this. And he said, OK, that was a cool idea. And the word popularity is indeed good. But that isn't what we wanted. So what exactly is going on here? Why did that happen?
And a seven foot tall person is just as uncommon as a three and a half foot tall person. Whatever the exact details are here. They use the metric system here, which I, as an American, do not know. But the point is that this is how we think about the world. And largely because physical things sort of operate this way. Things that are physical sizes, things that are just sort of random in nature have a sort of random result. But a lot of things in the world actually look like this other distribution, a thing that's called the log normal distribution. And it's not surprising that if I show you this, probably like half the room has already left. They're just being polite and waiting for me to get to the next slide. But this distribution doesn't look like a thing you learned in school. And you probably already have enough trouble. I do certainly with the thing you learned in school. So like forming an intuition about what's going on here is pretty hard. But interestingly, this kind of thing actually shows up everywhere. So the top left is income distribution. The top right is the egg-to-smolt survival ratio. Bottom left is population size of cities. Bottom right is commute distance from your house. You can see, wow, these things all look the same. That perhaps is surprising. And if you go look at your Skylight endpoint, you'll see, wow, that actually looks also exactly the same. And in fact, this is so much the case. This distribution, the other one was called the normal distribution, or bell curve. This thing is called the log normal distribution. And it's so much the case that this is common that there was a performance company that got bought by SOASTA whose name was literally Log-Normal. So the intuition that everyone has about bell curves is so wrong that this company called themselves Log-Normal just to make that point. So what's going on here? What's going on here is that you expect, and our designer expected, that we're talking about a bell curve, but we're actually not. So you thought, oh, well, if we just put the amount of popularity on the graph, then you'll have a bunch of things towards the middle, and you'll have some big ones and some little ones, but that's not actually what's happening. So just like anything else, that's not what's happening, so this doesn't end up working out. So the first time somebody encounters this, I think people learn this problem. They say, oh, well, there's an obvious solution to that problem. My PhD friend in statistics has told me this solution, which is that you should just take the numbers and rescale them, so I said it's log normal, so you use a log scale and you can rescale them, and that works great. Now you have popularity in terms of a log scale, but there's a bit of a problem here, which is that this person is going to think reasonably. If the bar is twice as big, that probably means there's twice as much popularity. And the reason that they think this is not an accident, it's not something that they have gone to school to learn, it's a thing called pre-attentive visual processing. Pre-attentive visual processing is how most people process most things the first time they see it. And here are a couple of examples. So the name of the game here is one of these things is not like the other.
And if you look at the left image here, what you'll see is that it's very easy to notice that the red circle is red, and that's because there's a very strong visual variable, and there are certain visual variables that are very powerful that human beings understand instinctively. The right side, also you can eventually discover that the circle is there without engaging the logical part of your brain, but it's a weak visual variable, it's harder to see. Similar story here, one of these things is not like the other. On the left side it's quite easy, there's one variable, is this thing filled, is it empty? So the pre-attentive variable, we get it right away. On the right there's just no distinct feature, so it's actually hard to understand what it is that you're looking at. So again, our friendly emoji user says double the ink means twice as popular, obviously, and he thinks that because physical size, length of things, is a pre-attentive variable, and you're just not going to fight that. There's no point in fighting that, so don't even try to switch to a logarithmic scale, even though your PhD friend, who turns on the logical part of their brain, may be able to fight the pre-attentive system in their PhD work. Human beings normally do not, and nobody signs up for Skylight to get a lesson in statistics, so this just doesn't work. So what did we end up actually doing here? We ended up going with this triangle thingamabob, and we didn't invent the triangle thingamabob, a lot of systems that have similar problems do it. Interestingly, a thing that you'll notice is the antenna meter is also a weirdly shaped thing, and if you try to ask OS X, like I am a power user and I know the magical incantation where I can hold down Alt and click on the thing, they still give you a number that no human can pre-attentively process correctly. You have to Google what that means. There's no way that you could try to figure out what that actually means. And in fact, the whole reason why decibels exist in the first place is because noise is one of these weird distributions, and if you try to tell people, oh, think about noise in terms of how many joules it is or something like that, that equivalently doesn't work. So these problems are all pretty similar, and the canonical solution to this problem is the weirdly shaped shapes. And the reason for that is that people don't do a good job of identifying how much area weirdly shaped things have, so the pre-attentive system simply doesn't kick in. By the time you start thinking about it, it's already too late, the pre-attentive system has no idea what's going on, and now you can make it mean whatever it is that you want. So yeah, people don't know what's up. So we went with this, and the cool thing about this is that in addition to the pre-attentive system not kicking in and doing the wrong thing here, people are used to this antenna meter, meaning if it's twice as big, it means twice as signal-y, which doesn't mean anything. So popularity, similar, actually turned out to be a good word, it means twice as popular, but that doesn't really mean a lot. And so using this icon that has a general purpose meaning of, doesn't mean a lot, hand wave, hard to say, turned out to be the right thing. So being that Skylight is a Rails profiler with a customer base of mainly developers, we wanted to give our users the option of signing in with GitHub.
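Back on the popularity scale for a moment: a rough Ruby sketch of that kind of log-scale bucketing might look like the following. The level count and cutoffs are invented for illustration; they are not Skylight's actual thresholds.

```ruby
# Rough sketch: map requests-per-minute onto a handful of coarse
# "popularity" levels (like bars on a signal meter) instead of a
# continuous, proportional-looking value. Thresholds are invented.
def popularity_level(requests_per_minute, levels: 4)
  return 0 if requests_per_minute <= 0

  # Work on a log scale because traffic is roughly log-normal:
  # 0.01 rpm and 10,000 rpm should land a few levels apart, not
  # a million "units" apart.
  magnitude = Math.log10(requests_per_minute)

  # Clamp roughly 0.01 rpm..10,000 rpm (-2..4 on the log scale) into 1..levels.
  normalized = (magnitude + 2) / 6.0
  (normalized.clamp(0.0, 1.0) * (levels - 1)).round + 1
end

[0.01, 0.5, 30, 10_000].each do |rpm|
  puts "#{rpm} rpm -> level #{popularity_level(rpm)}"
end
```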
This would add convenience for our customers who are probably already signed in to GitHub anyway, but it would also position us to take advantage of the existing concept of GitHub organizations and their related permissions instead of having to come up with something similar on our own later on. Most people are familiar with the experience of signing into an app using authentication from some other application like Facebook, Google, whatever. You just click sign in with GitHub. You, leads you to an interstitial page where you authorize the application, and boom, you're signed in and returned to the app. Fortunately, most users will enjoy a seamless experience. Everything will work as it should, and all will be right with the world. That said, we still wanted to account for the rare, not so happy path. Well, first of all, what is the happy path? Well, the least ambiguous way for a customer to connect their existing Skylight account to GitHub is to first sign in with their email and password, then head over to the account settings page, and just click the connect to GitHub button. The customer will then see the connected account information, their GitHub username, and then they can easily sign in with GitHub going forward. A common issue that we noticed when we were looking at other apps that use OAuth sign in is that it's really easy for a user to not actually remember if they signed up with an email, password combo, or OAuth, or both. Speaking for myself, I usually forget as soon as I sign in. By displaying the information right there on the account settings page, we're sort of aiming to mitigate that confusion. If, for some reason, the GitHub account you authenticate with is already connected to a different Skylight account, we'll just let you know in a message bar at the top of the screen. How about the edge cases? What about people who already have a Skylight account, but it's not connected to GitHub yet, and they click the sign in with GitHub button? In this case, we redirect the customer to an interstitial page with an email form, a field pre-populated with their email address that we get from GitHub. We focus in on the password field to prompt you to sign in. Once they're signed in, we just connect the account to GitHub automatically so they can just go forth and sign in with GitHub to their heart's content. What about people who already have a Skylight account and they click sign up with GitHub? Obviously, we don't want to sign them up for a new account or treat them like they're new here. Even though they've clicked sign up instead of sign in, we treat them as if they're signing in. We just log them right in. Once they did click sign up, after all, we should just check to make sure that they're the right person. After all, it's totally possible that they really do mean to sign up, but their co-worker is signed in to GitHub on their computer, and we're logging them in as their co-worker. In this case, we just casually make them aware of who they're signed in as at the top of the screen with this helpful welcome message. If they're not that user, they can just sign out and sign back in with their own GitHub account. If the customer already has a Skylight account, but it's not yet connected to GitHub, and they've used the same email for both, clicking sign up with GitHub will lead to the same interstitial page, but with a little error message at the top that just says, you know, that email is already taken. 
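The talk doesn't show the controller code behind these flows, but a single OmniAuth callback handling all of the paths described above might be shaped roughly like this. The model and helper names (GithubIdentity, sign_in, the route helpers) are assumptions for illustration, not Skylight's actual implementation.

```ruby
# Hypothetical sketch of one GitHub OAuth callback handling sign-in,
# sign-up, and "connect my existing account" in a single place.
class GithubCallbacksController < ApplicationController
  def create
    auth     = request.env["omniauth.auth"]
    identity = GithubIdentity.find_by(uid: auth.uid)

    if identity
      # Already connected to a Skylight account: just sign them in.
      sign_in(identity.user)
      redirect_to dashboard_path
    elsif current_user
      # Signed in with email/password and clicked "Connect to GitHub".
      current_user.create_github_identity!(uid: auth.uid, nickname: auth.info.nickname)
      redirect_to account_settings_path, notice: "GitHub connected."
    elsif (user = User.find_by(email: auth.info.email))
      # Existing account, not yet connected: confirm it's really them
      # via the interstitial, with the email pre-filled.
      redirect_to connect_github_path(email: user.email)
    else
      # Genuinely new: send them through sign-up with what GitHub gave us.
      redirect_to new_signup_path(email: auth.info.email, nickname: auth.info.nickname)
    end
  end
end
```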
Again, all they need to do is sign out of GitHub, sign in with their own account, and it'll be fine. This meant that we had to implement OAuth. OAuth is easy, right? Sure. Let's say sure. There's even a super simple gem called OmniAuth GitHub we can use. However, there's a problem. There's always a problem, right? GitHub's OAuth configuration allows us to supply one redirect URL. This is how GitHub knows where to send a user once they're authenticated. It doesn't know the difference between a user who's signing up versus signing in versus connecting their existing Skylight account, so suddenly it's not so simple. We had to figure out how to deal with all the different roads a user might go down and account for how the user themselves might expect it to go. As you can see from this chart that I made, that I drew myself. Interstitials are terrible. I think we can all agree on that, right? Just think about all the times that you try to visit a website on your phone and you're forced to click through some nonsense interstitial that's trying to get you to download their app. You don't want to include these things unless you absolutely have to. An interstitial page can easily be that Patrick Stewart in a laser battle holding a pug thing that we're trying to avoid here. So how do we pull this off? How can we make this sort of incongruous thing feel as natural as possible? We thought long and hard about it before making the decision to include these interstitial pages and we only included them once we realized that we were making a choice between a potentially awful experience for a small number of users or a slightly awkward one for maybe a lot of users. We don't want people who mean to sign up for a new account to be led off to someone else's account with no explanation, right? We certainly don't want people accidentally creating duplicate accounts all over the place. We don't want to assume that the person who's logged into GitHub is the same person trying to create a new account, especially when we can just check and let them know. At the same time, we don't want to replicate the typical terrible interstitial experience. So we spent a lot of time thinking about this. Anything we can do to create less work for the user is what we should do. If GitHub is passing us an email address and what we want is for the user to sign into their existing account, just put it in the email field. If we want them to enter their password, focus on the password field. Just make it easy for your users. Be Gandalf. So let's talk about Rust. So we use Rust at Skylight, and it's probably easy, because a lot of people do this, to think, oh, they probably use Rust because it was like a cool technology and they wanted to have an excuse to use it. But actually, when we started using Rust, it was not that cool of a technology and it was pretty scary. It was still pretty new and we had a pretty good reason to use it. So what is the problem that we're trying to solve with the agent? So obviously, in order for us to give you information about what's going on in your application, we need an agent that collects the information and sends it to some server that is processing it and producing those nice reports. So what is the problem here? The general problem for the agent is that we want to instrument your application efficiently. But efficiently really is a pretty important thing here.
If you install an agent that is supposed to detect why your application is slow and it makes your application slow, we probably have automatically failed out of the gate. So it's pretty important. Everybody who builds these kinds of things cares a lot about making sure that the agent itself is not really impacting the performance of your application. So what possible solutions are there to use? The most obvious solution, and this is what basically everyone does, including us out of the gate, is to write it in Ruby. The nice thing about Ruby is that Ruby is a safe language and it's already a language that by definition you have in your app because of your Rails app. And we can be careful and catch exceptions and the likelihood of something going horribly, horribly awry is pretty low if we're careful. So write it in Ruby is one option and it works pretty well to collect the baseline information like how long it took to render a template. Unfortunately Ruby itself is a pretty slow language and that doesn't mean it's intrinsically slow for everything but if we're trying to collect a lot of very fine grained information it may end up being slow. And in particular a thing that we really wanted to do that we do now is collect information about how many allocations happened in a particular area of your application and those are fine grained areas and that really means we need to hook into every single allocation. Ruby has a nice friendly C API for hooking into every allocation but so should we write it in C? Should we write the part of it that hooks into every allocation in C? And unfortunately the thing about writing things in C is that we're asking you to take our program and put it into your application and if we just have to write anything that is performance critical in C there's a pretty good chance that we mess up somewhere and take down your application. So I think we were pretty nervous. We have some C now but we were pretty nervous about like just saying oh from now on everything is written in C I think even the best C programmers like the C programmers that write crypto libraries occasionally mess up and have massive vulnerabilities like heart bleed. So it's hard to get things right in C. Another option would be write it in C++ but actually C++ isn't that much better from the perspective of like am I really sure I haven't messed up and caused the application to crash. So we had a prototype in C++ that I think probably we might have shipped and it worked but I was personally pretty nervous about like how many people on our development team would be able to maintain this thing. So the thing about C and C++ is that it's actually pretty easy to write a program in C or C++ that compiles, runs, you run your tasks everything is great and then boom you have a segmentation fault and now a lot of your users are angry. So if it's your own segmentation fault in your own app you'll probably anger at yourself but if we tell you to please install our agent and all of a sudden your entire Rails app starts crashing you will probably be very upset at us and slow is better than angry users. So and in this case slow doesn't mean the agent was slow it just meant we couldn't add features like the allocation tracing stuff that we really wanted right. So slow is better than angry user but unfortunately slow means we can't ship the features that we want. 
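For a sense of what the pure-Ruby option looks like in practice, here is a generic sketch of counting allocations around a block by diffing GC.stat. This is not Skylight's agent code; it just illustrates the kind of measurement Ruby itself exposes, and hooking something like this around every span of every request is where the overhead concern comes from.

```ruby
# Generic sketch of per-block allocation counting from pure Ruby.
def count_allocations
  GC.disable # avoid a GC run skewing the numbers mid-measurement
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

allocated = count_allocations do
  1_000.times.map { |i| "row-#{i}" }
end

puts "allocated ~#{allocated} objects"
```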
So what sort of happened around the same time that we were exploring the C++ story is Patrick Walton from the Rust team wrote a blog post that said like by the way we previously thought that a garbage collector was a pretty important thing for Rust but what we recently realized is that the ownership system that Rust came up with is actually generally better than the garbage collection story for systems programming, for C programming and we're going to get rid of the garbage collector. He said this in like August or September or something like that and I happened to come across that blog post and like in October I said okay I'm going to see if I can take apart the hotspots of our application, specifically the serialization parts, serializing the trace and sending it to the server. I'm going to see if I can turn that into Rust and I took like a couple weeks and did it. So we ended up going with Rust. I ended up being able to ship a pretty productive little slice of our application. We didn't rewrite the whole agent and we never will. A big chunk of our agent really does want to be something that hooks into Ruby at the Ruby layer. But we basically got to a point where it was clear that we could take hotspots of our application, rewrite them in Rust, compile it, ship it as part of our gem and be very confident that it wouldn't crash. In all the years that we've now been shipping Rust we've never had a crash or seg fault in the agent that was attributable to our code at all. So that's actually pretty great and that's something that maybe you might find surprising and it is actually just a fundamental characteristic of Rust, unless there's a bug in the Rust compiler, and a bug in the Rust compiler is basically like a bug in Ruby. Like a bug in Ruby could mean that your Ruby program seg faults but that's not your fault. And if you install a C extension in Ruby and there's a bug in it and it causes a seg fault that's also not your fault, that's the C extension's fault. And it's a similar story in Rust, if you write Rust code and it compiles it has the same story, it's a safe language just like Ruby. And so the idea is if it compiles it can't crash, and I would definitely recommend that you take a look, this is the Rust 1.0 blog post, and so in the blog post we talked about stability in general, about Rust being a stable language, we talked about community which is a pretty important thing for Rust, but we also talked about these four things, memory safety without garbage collection, concurrency without data races, abstraction without overhead, all of which sound like contradictions in terms and which through very simple primitives that work pretty well, Rust ended up being able to allow you to say if it compiles it won't crash. And really what it comes down to is in Rust you can write low level code that you know is efficient without fearing that you'll crash, and probably for your own application 2013 was not the right year to do it, but for us writing fast code that also we were sure couldn't crash was like a core business value that we needed to figure out how to do. So, you know, rolling up our sleeves and getting into the swamp with whatever Rust looked like back then was well worth it. In fact, it allowed us to ship the allocation trace feature and make our agent small at a time where perhaps people didn't realize that that was possible.
Tomorrow Godfrey Chan (@chancancode) is giving a talk on Helix, which is our open source binding layer between Rust and Ruby, it's pretty cool, he's going to talk about it and you should definitely go check it out. All right, names, let's talk about names. Names are really important, just ask anyone with a name more complicated than Jane Doe. Getting someone's name right is a sign of respect. When someone introduces themselves to you for the first time most people will at least try to make sure they're pronouncing it properly and if you're the one introducing yourself you can pretty well guess that someone who doesn't put in that effort probably is not worth your time. But let's go a little deeper. People change their names or they go by names that aren't the same name on their government issued ID but they still need billing and other official documents to be addressed to their legal name. People change their names for all kinds of reasons, sometimes they're fairly innocuous ones but sometimes they're more serious. There are situations in which calling someone by their legal name instead of their chosen one can present an actual safety risk for them. But keeping that in mind, respecting your customers alone should be enough of a reason to put some effort into calling them the name they want to be called. I've been called Liz my entire life so the minute I pick up the phone and someone on the other end says, oh, is Elizabeth there? I know it's a sales call. I hang up on them. A lot of services will take your full name and nothing else and then every few weeks you get an email that awkwardly addresses you with like, greetings Elizabeth Baillie. I literally received this one while I was putting this talk together. So of course these services mean well but it's awkward. It's the Patrick Stewart holding a pug thing again. It takes you out of the experience. Not only does it take you out of the experience but it also makes your service seem really disconnected and robotic. No one wants to read an email sent by a faceless corporation. Most people will delete that email without even reading it. If your team is anything like ours you're hardly a faceless corporation. You're a bunch of people who really care a lot about your product and its customers. So how do we avoid this problem? How do we handle names? When customers sign up for Skylight we used to ask them for their first and last names and nothing else and then we would address our emails to their first name. This works well enough in most cases but again what about the edge cases? Well as it turns out they're not even really edge cases. More than half of our own company actually goes by a name that's not on their driver's license. So it's not that uncommon for someone to go by something other than their legal name so it really makes sense to design for this. Case in point. We have a large number of customers signed up for our Skylight emails which are sort of quick little messages we like to send out once or twice a week giving updates on our progress on various features, what conferences we'll be at, things like that. We don't want to alienate our customers right away by calling them the wrong name. They might delete the email before they even get to the sweet dog gif. In our case we opted to swap out the first and last name with full name and nickname. So pretty much all correspondence now is addressed to the user's nickname but we still have their full name on file for billing purposes and other official business if we need it.
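A rough sketch of what that schema change and backfill might look like as a Rails migration follows; the column names track the description here, but the real migration isn't shown in the talk, so treat this as an approximation.

```ruby
# Approximate sketch of the full_name/nickname change described here;
# Skylight's actual migration isn't shown in the talk.
class AddFullNameAndNicknameToUsers < ActiveRecord::Migration[5.0]
  def up
    add_column :users, :full_name, :string
    add_column :users, :nickname, :string
    add_column :users, :nickname_confirmed, :boolean, default: false, null: false

    # Backfill existing users from the old first/last name columns:
    # concatenate them for full_name, guess the nickname from the first
    # name, and leave it unconfirmed so the UI knows to ask
    # "can we call you ...?".
    User.reset_column_information
    User.find_each do |user|
      user.update_columns(
        full_name: [user.first_name, user.last_name].compact.join(" "),
        nickname: user.first_name,
        nickname_confirmed: false
      )
    end
  end

  def down
    remove_column :users, :full_name
    remove_column :users, :nickname
    remove_column :users, :nickname_confirmed
  end
end
```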
For our existing customers we just concatenated their existing first and last names for their full name and we used their existing first name as their nickname. But we wanted to make sure that we kept track of who chose and confirmed their own nickname and who was just assigned their existing first name as a nickname. We did that by adding a simple Boolean field called Nickname Confirmed and just marking it false for all our existing users. This is all well and good, but it's how we handled the UI for this problem that really gets to the heart of this. When a user signs up for Skylight with their email address we start by trying to guess what they might want to be called based on what they enter as their full name, and warning, there are actually some like 10 year old Doctor Who spoilers in the following gifs. So we don't just assume that this is the case and move forward. We ask the user, can we call you that? If they say yes we mark their nickname as confirmed and we just call them that name from there on out. If not, all they have to do is click no and they're prompted to enter whatever name they'd prefer to be called. So once they're signed up we mark their nickname as confirmed as well. But I warned you. What about people who don't want to enter a nickname, or for some reason they just don't enter it when they sign up? We'll continue to use whatever we think is their first name and we'll keep their nickname marked as unconfirmed. So what do those people do if they decide that they do want to change their nickname from what we guessed for them? Well all they have to do is go to the settings page and where it says can we call you that just click no. If the user hasn't told us yet that it's okay to call them Jack we just make sure to ask rather than assuming. They'll get the same opportunity to enter their preferred name and save it and now it's marked as confirmed so we know not to ask again. So let's say you've saved and confirmed your nickname. We'll assume this is okay. You can notice the subtle difference in wording: we'll call you versus can we call you. But let's say you need to change it to something else. It's easy. You just go back to the settings page and right where it says we'll call you that you click change. Then you can just go ahead, enter your new nickname, and save it. Done. If they haven't set up their app yet and they get this screen when they sign up we have roughly the same interface set up here. We greet them by the name we have in our database but if they haven't confirmed yet we ask, you know, can we call you that, and they can either accept it or not. This is also where people who sign up with GitHub get directed first. They never get a chance to enter their nickname, so this is actually a great catch all for people who sign up with GitHub as well. So for the technical implementation we had some interesting challenges. Some of the app's interface is in Rails views while most of it is in Ember. The sign up page is a Rails view so we tackled that first. When I first learned to program I learned Ruby and Rails first. I only knew a tiny bit of JavaScript when I started learning Ember and then I found myself working on a number of Ember applications that just used a Rails API so I was almost never in a situation where I needed to write straight up JavaScript for a Rails view.
When we were initially working on this I worked on it largely with my pairing partner Rocky and we had both just started working at Tilda I think that week. So we were both new to working on Skylight. Rocky was new to Rails, I was new to straight up JavaScript and we found that our weaknesses and strengths sort of complemented each other. So yeah, since I had never built an app in plain JavaScript with jQuery before like Rocky had, I had no idea how much easier Ember actually made things. We needed to use debounce in order to give our users a little breathing room while they were typing out their names so we would be sure we wouldn't display anything until the user is done typing. So without debounce the experience would be a lot choppier. They'd see every single character on the screen as they typed it, which is like super not graceful, not elegant at all. It's certainly not something we want. So something so seemingly simple is actually, I found, not native to JavaScript. It turned out we actually had to write our own jQuery debounce plugin after scouring the internet for a solution. We ended up basing our plugin on a blog post by David Walsh that worked really well for us. Something else I was unaware of is that some methods like trim are not accessible in all browsers. So we had to polyfill JavaScript's String trim method in case we have a user who's on one of those browsers. This method is important because we want to make sure we're removing that white space surrounding whatever name the user enters so we can be sure it displays properly. A lot of apps don't do this right or at all, which is really frustrating as a user. I can't even count the number of times I've tried to log into an app and I was told your email is not valid. Your email is wrong. I know this is the right email. Oh, it's a white space at the end of it. So annoying. So yeah, overall when it came down to, I think this is not going to let me do this. Oh, God. It keeps going forward. It worked when I practiced it. This did not happen to me when I practiced it. Overall when it came down to names we really put a lot of thought into making sure our users were being addressed by the name that made them feel comfortable. And we tried to find every opportunity to make sure we're getting it right so they can change it as soon as possible. So in closing, when you're building an application, the little things really matter. This is something that you can lose in the technical work. You can spend a lot of time on like what JavaScript library to use for this or that thing, or what OAuth library, or how you are going to structure your migrations. But at the end of the day you're building a product for users and like Liz said in the beginning, when you're building an app, it's kind of like a movie. You're building an experience for users. That's how you should think about it. You really want your users to have a really good experience. And like a movie, in the same way that making an amazing movie requires fanatical dedication to every detail. I'm often inspired by the fact that if you look at stories about Pixar movies, Pixar animators will say like I spent 30 days on these 30 frames that only lasted like one second, but we really wanted to spend every little second to make it right.
And I think there's a really big difference between the end product of applications that take the time to get these little details right or movies that take the time to make sure that the expression on his face is exactly right at this moment for 30 frames or 40 frames and the opposite. You can make the decision to not care. It's easy to say that whole name thing that we just discussed doesn't matter or that whole all thing like why does it even matter? We'll just do, it's fine. There's one redirect URL. We'll just use that to mean there's just one flow. But in fact, the user is experiencing something, they're experiencing an experience that you have put together for them and it's worth taking the time to think about what they're experiencing at every step. It's worth taking the time to say if the user pressed a sign up with GitHub, there is a difference between whether the account exists with this email or not. The experience that a person should have is different. Or like Liz said before, the subtle difference between can we call you Jack or will call you Jack depending on whether you actually took a step to confirm your email. It's very easy to say who cares. I think as a user, you know that there's a big difference between applications that always say yes to that kind of thing and applications that always say no. Applications that always say no just always suck. It's hard to say yes some of the time. I think you have to have an attention to detail and take the time to get the little details right. So Liz talked about this earlier and I wanted to reiterate. This is just a taste of the kind of stuff that we do at Tilda. We have a let's say twice weekly email but that's a lie. A periodic email that we try to send out twice a week. We used to say daily but that really didn't happen. We have an email that we send out where we talk about stuff like this. Every engineer gets assigned a day every couple of weeks and they write what they worked on, what things were hard, what details they spent time on that day and I think it's pretty great and you should sign up for it. You can sign up for it. You have to make a skylight account to sign up for it but you can sign up for it with a free account and you never have to pay. Pro tip. So just make an account and you can get access to the daily email. We used to call it daily email. We used to do the periodic development journal, frequent emails. A lot of people sign up by the way. When we started I wasn't sure what would happen. Maybe like six people would sign up but we have like hundreds, close to a thousand people who receive it every day or whenever we send it out it turns out now and I think that's pretty awesome. I would also like, I think if you have the ability to do something like this for your own company the world is a better place when people talk about how they work. Thank you. Thank you. Thank you.
|
Most people are on the lookout for the Next Big Thing™, but at Skylight we know it’s #allthelittlethings that make for the best possible user experience. From the many not-so-happy paths of authentication to the challenge of guessing a user’s preferred name, we’ll dig deep into all those tiny details that will surprise and delight your customers. If you were hoping to hear more about how we use Rust, don't worry—we've got you covered! We’ll be sharing many of our finer implementation details as well as the thought processes behind them.
|
10.5446/31561 (DOI)
|
So, my name is Michael Rau and I'm here to talk to you about storytelling with code. And I'm here because it feels like these days storytelling has become the new hot buzzword as a way to sort of solve problems and usually it seems like a lot of marketing people sort of say things like, oh, well, if we could just sort of create a storytelling experience, this thing would be better. And I get the sense that a lot of people don't actually know how to tell a good story or what storytelling even really means. So I'm going to talk about a project that I made last year, premiered last year, that relied on using code to tell a story, to create a digital experience. And if you're looking for a very technical talk on the code behind it, this would be like the time to quietly exit because I'm mostly going to be talking about sort of softer general concept-y type stuff as opposed to a really rigorous code review. So also, I just wanted to say right at the beginning, thank you so much for inviting me. This is my first time at any kind of conference like this before. This is my first time going to like any kind of RailsConf. And I feel so welcome by everyone. And so thank you all for coming here and being here. So my background and really my job, my professional work is as a theater and opera director. I got my MFA from Columbia. I've been working professionally as a director for the past 10 years. I work mainly in New York City and in Europe. And so my job is in telling stories and specifically in finding the most effective way to tell a story. I code mainly as a hobby, as a sort of way to relax myself. I've always enjoyed messing around with computers and sort of learning about how computers work. And only in the past couple of years have I started really trying to combine my skills as a theater and opera director with my skills as a coder. And I should be totally honest with you right up front. My skills as a coder are like not that great. But I still made this thing in Rails that worked. And so I feel very proud of that. And I'm sure that any of you incredibly talented programmers here, as you're listening to me talk, could write something that does the same thing and probably better. So while I don't have a tremendous amount of knowledge in code, what I do have a tremendous amount of knowledge and really experience in storytelling and specifically interpreting stories. I make work for large groups of people to experience as a community. I direct plays and operas. And oftentimes when I direct an opera, it's in a foreign language and it's music that's complicated to understand. And I see my job as the person who has to create images that show relationships or images that create meaning. So it often kind of looks like this. So I'll sort of get a story that's been written by Shakespeare for Verdi and find ways to position the actors, to paint the set a certain color, to have costumes that look a certain way that gives people a feeling, that says something about a relationship, that creates some kind of meaning. And while the normal sort of basic, if you think about like when people talk about theater, they have like the happy face and the sad face. Those are the two basic primary colors that I have as like a director. But really I work harder to evoke other different, more complicated, more interesting feelings just by creating an arrangement of bodies on stage. So really you could say that a lot of my work and a lot of my experience revolves around feelings. And how to evoke feelings through stories. 
But I started wondering in the past couple of years if I could start to challenge myself. What are the other ways that I could tell a story and what are the other formats that are available to us to tell stories? And I was at a bar hanging out with actor friends late one night and one of these actors who is a very lovely person but has kind of a big personality was going on and on and on about how the central truth of theater is the actor and the actor's body and you could never have a performance without that. And I was a couple drinks in, I'll admit, and I said I think you're wrong. And I started to try to think of a project to prove them wrong. And the project that I came up with is what I'm going to talk to you about today. And the project is a piece of like, I call it a piece of theater, other people have called it an installation, other people have called it like a sort of like a show that you read. But I was very interested in office culture and how we communicate with each other with ourselves as a group, as a community now, which seems to be more and more mediated through digital technology. And what I started doing is talking to my friends about like, I think I'm going to make a show where you just do office work. I think that's going to be the show. And everyone told me this was a terrible idea, except for one person who started collaborating with me. But what we eventually came up with was this piece that I called Temping. And so yeah, so I was thinking about how office culture works, how we read way too much into emails, how oftentimes voicemails become weird tools of passive aggression. And I started to kind of make the gears turn. I set a rule that there would be no actors in the piece, that it would be a show that would be entirely, that you would never meet a single living soul, because I wanted to prove this one actor wrong. And that I would instead use sort of the equipment that you would normally find in an office to tell a story. So I gave myself these sort of like four tools that I could communicate with the audience member. And then I started building a, I think what I look at now is like an overly complicated backend that could send emails, send voicemails, send printer, like things to the printer at specific times to sort of create a story. And then we also really kind of looked at the built environment, like could there be a desk, could there be drawers, could there be a bookshelf, to also use as storytelling possibilities. The show was developed over two sort of beta test runs, first at Dixon Place in New York City, the second at the University of Maryland in College Park. And then it premiered at Lincoln Center as part of the 53rd International Film Festival. It was not a film at all, but the curator who found out about the project was doing a thing on virtual reality. And I was like, I don't think it's really virtual reality either. It's an actual reality. But you know, he wanted it and so he got it. So to sort of describe what it really looked like, when you walked into the room, you walked into a windowless room in a basement with like ugly institutional carpeting with a low drop tile ceiling. And at the end of the room was one of those old sort of fabric covered cubicles. And when you sat down in the chair, it looked like this. And that was the kind of total beginning experience that you had. I opened up the door for you. I said, thank you so much for coming to work today. Here's your desk. And then I closed the door. 
And this setup functioned perfectly as an office. The desktop computer ran Windows 8. Your phone and voicemail work, you could like get on the internet and goof around if you wanted to. But most of the show happened through emails and through actual work. So here's the part one of three slides of how the back end worked. And I'm not going to talk about this at all. Other than to say, you can see in the upper corner, the base of it was a Rails app. And then it did a whole bunch of other fancy stuff. It sent emails at certain times. It controlled Hue lights. It controlled speakers that were hidden throughout the room. And that's kind of it. So that concludes the technical portion of my talk. The other thing, so my friend, who I honestly love to death and is an absolute genius, he built the phone out of an Arduino Nano that also mimicked the functions of a corporate phone so you could pick up the receiver. And it was like you were using a normal sort of like boring corporate phone directory. But on the back end, we could control and send like, okay, we need to send them this voicemail now. We need to let them do all of this kind of stuff. So that's the setup for the piece. And we did all of that stuff first and then set it all up and kind of went, oh no. Because we had all of this technology and nothing to do with it. But I knew one thing. I knew at the very beginning that what was interesting to me was office culture. And so I started trying to figure out, okay, well, is there a way that I can make, as opposed to an audience sitting there and just watching these events happen, could I make them do things? So the first idea was, well, let's just treat them like a temp. That way, they don't need to like role play in the office. They can just be treated as themselves. And if any of you have ever worked with temps, I'm sure you kind of know that people treat them as like, hello disposable person. And that's how the characters that we started inventing would treat our single audience member. The second sort of major idea that we had was to use email as a vehicle for character. I commissioned another one of my friends, Michael Yates-Crowley, who's a playwright, to create a cast of characters that all worked for this one company. So we had about, I think, 10 different characters who would interact with you and even more in a larger company directory. And what we really focused on there was to use text, to use emails to tell you about who these people were. And that part of the mechanism of the show is that the temp would get cc'd on the wrong emails or would get forwarded something. And if you scroll down to the beginning, you'd really sort of see the relationships of the back and forth that would give the audience clues to who these people were and how they behaved. I particularly like this one for the whole, I'm going to miss you too. I really mean that. But in terms of the narrative experience, there wasn't too much happening because you were just kind of looking at these emails and listening to some voicemails. And then we kind of figured out, all right, well, if we're really going to do this and we're really going to make this show work, I think the temp has to do actual work. So we started giving them real tasks. And that became really, this was our third sort of major idea in Revelation because that became the core of the narrative and started to sort of derive the experience of the show. Because the work that you did and how well you did that work determined your storyline. 
And at the heart of this experience was Microsoft Excel. I think my friends, when they wanted to make fun of me, they were like, you're making a show that happens in Microsoft Excel. And I was like, yeah, and they're like, so like a first person spreadsheeter. So this is how the show worked. You sat down at that desk, you were sitting down on the real desk. You get some emails from your boss about like, sorry, I'm at an offsite today. Wish I could meet you in person. But I believe Sarah Jane has documented her work for you. So you just need to start doing that kind of work. And because you're sitting at this desk and because it's clearly not your desk, there's pictures of like her nephew up. You get the sense of like, this is someone else. And you really kind of, if you dig around the desk, you really kind of get to know who this woman is who's sitting at this desk. And then there's a couple. A whole bunch of like email jokes of like, oh, the printer on four is broken. Oh, there's stuff in the break room. So people kind of relax a little bit. And then they get introduced to this very simple data entry Microsoft Excel task, which you're working for an actuarial company in the suburbs of Chicago. And all you have to do is update these client lists of who's alive and who's dead. And so it's a very simple task. And we thought this would be a good way to sort of like start people in terms of understanding what their work is. And if they don't know Excel or anything, this would be easy for them to do. But this thing started happening. And I'm going to sort of narrate your experience if you were the audience member. Every time you would change the active status in that yellow column from active to deceased, the lights in the room would slowly change. And this quiet music would start playing. And your printer would turn on. And it would print out a picture of that person's face and a description of text from a moment from that person's life, a really personal moment. And we tried to find some really human moments of like a father watching his daughter like learn to walk. And so you'd be kind of confronted with both the data, like the sense that, oh, it's just a whole bunch of numbers and things in a spreadsheet to then immediately looking right into that person's eyes and knowing, oh, no, they're dead. And then you would be sort of looking at that piece of paper. And then the second that you'd finish it, you'd put it down, the lights in the room would return to normal, everything was fine, the music would go away, and you could continue on your tasks. But it kept happening to you. And then the sort of like next phase of the show was that then your boss emails you and says, oh, OK, we need you to start doing life expectancy calculations. And because of statistics, it's actually frighteningly easy to determine people's life expectancies. And so you would calculate out how long these people in the database would have to live. And then same thing would happen. Each time you'd be like, oh, this person has 10 years left to live, the lights change, the printer turns on, you see their face. And now this time with the knowledge of, oh, no, they only have X more years left to live. And then through like a really sort of sneaky, cruel thing that we do, we trick you into calculating your own life expectancy. So then you kind of have to live with that number. And then the show ends with, because if you've done the show, you'll spend about 45 to 50 minutes working for this company. 
The show ends and really kind of getting to know the person whose desk you're sitting at because you kind of dug around in their desk. You've listened to her voicemails for you. You've read a bunch of her emails that you probably shouldn't have read. You find out at the very end of the show that she's been fired and that you're being asked to take her place. So the whole piece was kind of a meditation on how much time we have left and what is the kind of work that we're doing and an exploration of both the weird ways that people communicate in offices and also the sense of your own increasing mortality that might show up when you're working in a cubicle. So that's the show. I'm going to talk now a little bit about what I figured out in, or how I got to this point. Because it was, it took a long time for us to really kind of work out the kinks of this system and to figure out how to make it an emotional event. And it really was a surprisingly emotional event. I thought, oh, people will laugh at this and they might have one moment of like, oh, God, that's how much time I have left and that was it. But what really happened, because oftentimes I was the person to get people out of the cubicle when the show was over, I would open up the door and they would be like weeping in this office cubicle. And I always felt really guilty and bad about that. Didn't mean to hurt you with my art. So here are the things that I know, or that I can be sure about that make an effective story that I kind of learned from doing this piece. The first was to have a really clear narrative arc. That because our story was based on user actions, we sort of had to figure out here's the event chain that will lead you down this certain path and here's this event chain that will lead you down another path. And to be clear about where those moments would occur and also the ordering of certain emails depending on when they receive them and how quickly they receive them, that would tell them a lot the audience. They would learn a lot about that character. If you got five emails in a row from a really grumpy person, you then like had a whole different relationship to that character than if we sort of spaced them out or spread them throughout the show in a different way. So narrative arc story flow was the sort of like and really structuring that and being clear about how to structure that was really important because we wanted to give both people a sense of freedom and then also bring them back to these moments where they had to do the Excel tasks. Characterization also became a really tricky thing to try and figure out because there was no visual information about who these people were. You really kind of only got a sense of who they were through text and that came in terms of like or well you got a sense of you could hear their voices if they left you a voicemail but most of the show happened through email. So vocal patterns, email punctuation, how you would sort of like put it up on the page, ended up telling you so much more about who these people were, what they cared about and we tried to make them really distinct so that if you because when you got introduced via email to 10 different people you really wanted to be able to let the audience keep these things separate in their heads. 
So finding distinctive traits to denote character became really important, and then also to make sure that each character had a specific point of view or a specific world view, that they had wants and desires that they either wanted from the temp or wants and desires that they wanted from the world, and that you understood, because of their own weird distinctiveness, why they were acting in a certain way. And then along with that, the idea that if we could create a sense of the temp understanding their world view, so that you understood the characters' wants and dreams and how they intersected or conflicted with another character's wants and dreams, or conflicted with your own, like the audience's wants and dreams, that made for a really good story. It made for conflict. It made for a sense of like, oh, I can look at this person and their life choices and that can be a moral for how I behave, or a way that I can either be like, no, I would never behave that way, or yes, that's how I want to be in this world, and that's what kind of made that story much bigger, that if you understood the boss's point of view and also the point of view of the person who got fired, it would become a much more complicated thing as opposed to like, bosses are evil, don't fire people. And then the last sort of like chunk of things that we figured out was that we started off in an early version of the show by explaining too much about who everyone in the office was, by giving the audience too much information, and really what helped was if we started to hold back, to let the audience imagine more about who these people were, so that it came down to like, oh, what if we just punctuated in a weird way, as opposed to having someone casually drop a hint in their email about why they're acting in a certain way. And giving that space, leaving the audience room to interpret something, made the audience engage with it more. It reminded me kind of how in ancient Greek theater every murder always happens off stage and then someone runs on stage and tells you what happened, and they describe it in this really gory way that often is more effective than watching someone like fake stab someone on stage. So leaving that space became a further, more important design choice for us. And then lastly, to leave room for the audience to explore in the way that they wanted to explore, because the story was pretty nonlinear. If you chose to do the tasks in a different order, the show could handle that pretty well, or you could choose to focus on one aspect of the story: some people got really into the whole Excel death thing, other people got really into the sort of moral choices of do I side with the boss, do I side with this person whose desk I'm sitting at. And you never met any of these people, but we wanted to give people, depending on what they were interested in, a lot of leeway in the experience. So giving that sense of agency, and I think it's connected to that sense of imagination, of allowing people to pursue the story that they're interested in, made it much more effective, and meant that we could serve a lot of different people; many people were interested in one part of it and a lot of people were interested in another part of it.
So this is the quick sort of like recap of the thing in terms of things that I learned think about the narrative arc when you're designing a story think about the character of both in terms of how do I make characters distinct and how do I make their motivation clear to the audience and then lastly leave room for agency for choice and for imagination. Cool. So that's my talk. If you have any questions I'd be happy to answer some questions about it and if not I'll see you around and please come say hello to me because I don't know anyone here. Are there any questions right now that we can. OK. Yeah. It's going to be it's going to Harvard University in twenty six. No in twenty seventeen. It'll be installed there as part of the like American Repertory Theater season of solo works. So only in the way that I have like read some interactive fiction and been like this is cool stuff and I played around with twine and that kind of thing. And I think there is something I'm really fascinated in text and how when as opposed to like a film where everything is explained for you visually I find text to be this like much more personal experience. So I like both reading books and like playing around with interactive fiction stuff. But that's as far as the influence is. That's an excellent question. I did it a ton of times and oh sure the question and tell me if I'm wrong is how did you like achieve empathy for the audience and how did you know what they were feeling and when. I'd like to say it's years of professional training as a director that that's really my job is to sort of be like is this good or not. But I think it was a combination of each time trying to to forget everything that I knew and walk in with a totally open mind and if anything even the slightest little twitch of like oh I feel uncomfortable in this way or I don't like this to note that. We also went through like as I said like a really extensive beta testing phase where after each time we would let someone do the show we would sit down with them and be like so tell me everything. And also I forgot to mention this we like watch you on a camera as you're doing the show which I mean makes the show partially like a piece of theater and partially like the world's weirdest psychology experiment. And so you could kind of tell when people were engaged in that looking at them through the camera or when they were like not or yeah spinning around in their chair. Yeah we would so Aisa and I who's the guy who built the phone and he and I would we would be back we would kind of like have this conversation with the person and then decide whether or not we agreed with them. And because it's art like we were just like well if you don't like this thing it's then so it was mostly that kind of process but we were really attentive to making sure that no one got upset or that no one felt really uncomfortable by it because there is something kind of weird about mortality and like I was worried that if like someone calculates their life expectancy and has a panic attack like I would feel terrible about that. So it was more about sort of making sure that if the note was like I really felt uncomfortable during X that's what we would look at. They are still my friends they so here's the terrible thing if you guys ever think about this never do a show that's only for one person at a time. It just does not scale and you will be exhausted by the end of it. So that actor never saw it but I told them about it and they they're still my friend. 
I would say about like 40% is automated and then 60% is us watching on the camera and just being like click to send the email. There's a couple parts like that moment where they put down the piece of paper we really wanted that to be exact and to be really right so that it feels like magic to people. So we wanted to so we needed to keep an operator but we're trying now to refactor the system to make it even more automated so that hopefully someday we could like scale it up a little bit more. Yeah well so we learned that having other people nearby or having people do this show together doesn't work like they end up like critiquing each other's email skills or like arguing about how to do the Excel task right and they never really got into that quiet sort of reflective space that I think is kind of necessary for really feeling the show. But if there was a way that we could get a bunch of offices all sealed off from each other totally. We called it temping. This is a snide answer. No no no it was something that we worried about because like this is not a show that like grandma can come and see and really enjoy. And so because each show is like such a unique experience and so like personal and because we needed to like know your name and to know a couple things about you we in that sort of like signing up for the show process we could kind of be like this is not the thing for you nice old person and gently kind of like send them on their way. There's a movie for you over here. Each show it sort of depended on how good of a temp you were like there were some people who like banged through the show in like 25 minutes flat and then some people who would really take their time and it also sort of depended on the operator. Like when I operate the show I think temping should be a boring experience and so I like deliberately slow things down between emails. When Asa runs the show he's like stressing you out and so it's sort of like you get piled on in a way so it feels it can really vary but we try to keep it so that it goes no longer than 50 minutes because we need about 10 minutes to reset the room and do that whole thing again. I have never done an escape the room and when people when I was describing it to them they were like oh so it's like an escape the room and I was like I think but it's like boring you could just leave. Like there's no you win like you walk out the door. What was interesting to me and what I was sort of fascinated by in the sort of like major impulse for the piece was this idea of like intimacy and the way that digital technology often times and I hope this doesn't sound weird because I don't mean it in like a sex way but often times our experience of intimacy through the internet or through technology is more intense than it sometimes is when we're in person and I wanted to like figure out a way to capture that or to talk about that with people so that they understand that and I think that's really what I tried to go for is like what's the biggest impact what's the biggest emotional impact what's the biggest feeling that I can convey through a computer or through just reading an email from someone. Yes totally there were some people who would sit down in the chair and like you could see that they were like not into that experience and there was like one or two people who would like rummage around in the room and then found the hidden camera and then got like super squicked out by that. Like they would like do their work and then just like turn around and look at it. 
What we technically do as part of your like when you walk into the experience as part of signing up for the show you have to fill out an employee packet where you have to sign a privacy disclosure waiver that says like this company will be monitoring you all time all work that you do will be the property of the company like one of those like really absurd things so we kind of like wanted to play around with that idea as well and yeah a couple lawyer friends got really weirded out by it. I think this is all the time I have I want to thank you all so much for coming. Thank you.
|
How can you tell a story using only email, a laser printer, voicemail? Last year I created an immersive experience for one audience member in a standard office cubicle. The piece used a rails app and some other custom software and no live actors to tell a story about office culture. This talk focuses on the techniques of digital storytelling, my process of developing the story as I wrote the code, and the strategies I used to create an emotional connection with a user. If you are interested in the intersection between stories, software, game design and narrative design, this talk is for you!
|
10.5446/31564 (DOI)
|
Welcome to Style Documentation for the Resource Limited. My name is Betsy Habel and today we are going to be talking about ways to get style guides done in the real world as opposed to the happy fairy world where you actually have three months to take to stop the world and do this. And so whenever you're getting things done under resource constraints, be it making a style guide or anything else, you need to follow some basic steps. First off, you need to have a thing that you're aiming for. If you don't have a vision where you want to go, you're going to track back and forth and you're going to kind of fiddle along. Second, you need to figure out baby steps will get you there. And then you need to scale these baby steps a bit further down. You're always going to really overestimate how much you can get done in any given baby step at first. And so be realistic, take a few times to go, no, really, I'm going to have 20 minutes a day to devote to this. What can I do in 20 minutes? You're also going to have changing circumstances to adapt to. If you're taking 20 minutes a day to do this, then you're going to coding and your teammates are going to coding for a lot more than 20 minutes of those days. You need to hit from a moving target. So nothing is going to really be quite what you expect at the end. The vision that you have at first is great. It keeps you motivated and it keeps you focused. But resign yourself to the fact that that's not what you're going to get. What you're going to get is something that's actually kind of cooler. Then you go do the thing. So this is the vision. What we have here is the 18F style guide. 18F is a division of the US government. I like this as an example style guide because 18F is a collection of small teams that consults for other government agencies. I could use other examples like Google's material design here. But since material design is built for Google scale companies and since most of us do not work at Google scale companies, something like this that's a little tinier is a bit not only more realistic but also more adapted to our specific needs, not Google's needs. We talk a lot about getting everyone on the same page where our business logic is concerned. We talk about developing a shared ubiquitous language that we can share with stakeholders. But the thing is we need a separate shared ubiquitous language for our user interface logic. When we are doing web development, we distribute this user interface logic through the HTML and CSS and JavaScript. Keeping this organized is not actually an easy task because it's so distributed. This coherent model that we're going to build is what lets us do that and not go insane. Goodness, that's ablest language. I'm sorry, guys. And the thing is you already have an accidental style guide. Your accidental style guide is your code base. All of your forms look basically like each other, I think, probably. And this is achieved via liberal use of copy and paste. This is great if you've got a small team or a small UI or anything like that. But as you scale up, it gets a little trickier to deal with. You start onboarding newer developers and they don't know what to copy and paste from. They don't know what the right parts of the accidental style guide are. Rails gives you a lot of really useful structures for organizing your business logic, not as many as some people would like, but enough. If you open up a new Rails app, you know where the controllers are going to be and you know roughly what they're going to do. 
The same is not really true of view code. Every shop has its own funky little way of doing it. And that's not transparent. Also, realistically, stakeholders who are not you, especially designers, are going to have a lot more opinions about the results of the view code than they are about the results of business logic. They can see it. It's easy to talk about and easy to think about. So there is an example app I'm going to be turning to at a few different times in this talk. When I talk about a coherent model of the user interface, I want to again stress this is about looking at repeated user interface concepts. These are not, in the context of the user interface domain, cat photo modules. They're cards, or showcases. The reason I'm making this distinction is because a designer might want to reuse the same concept to display dog photos. Or they might want to reuse, or you're probably going to also be displaying cat photos in another way if you're making a cat photo sharing website. Style guides should also go a little deeper than your branding guidelines. A lot of people go, okay, we use Helvetica, and stop there. But a style guide can give you a lot more. You can talk about the look and feel of individual widgets. This tends to be as much or more about the affordances of the components as it is about their aesthetics. You can talk about how they're arranged. Are the labels next to the form fields or on top of them? Where do error messages go? We can also talk about what UI problems a given widget is useful for solving. You can say, use carousels if you need to display several sets of cards, but otherwise just use a grid. Or you can say, only display notifications if they're relevant to the user's current goal. And this is going to look basically like the Bootstrap documentation when you're done. So you might be thinking, this is overkill, we just use Bootstrap. "Just" is a really funny word in software development. It is usually a marker for someone trying to push their assumptions on you without realizing it. And the thing is, none of the people who have ever pushed back and said that to me were actually just using Bootstrap. If you think your shop just uses Bootstrap, I'd like you to ask yourself the following questions. Are you using all of Bootstrap or just a subset? Are you only ever using Bootstrap, or do you have custom UI elements that aren't part of it? Do you have undocumented bits and pieces of code or rules around how you're using the Bootstrap elements? And again, most importantly, do you often copy and paste markup around to preserve these undocumented rules? That's the most useful real-world heuristic I can give you. And so if any of these statements are true about your site, your style guide is not the same as the Bootstrap documentation. I'd like to encourage you to document the ways in which it's different, and you don't necessarily need to be super fancy about this. It can be annotations with links that say, more docs at this link, and this link happens to be the Bootstrap docs. But still, documenting these deviations is very important. There are a few other barriers to style guide adoption that aren't just, oh, why do we need this? They are: it's a lot of work. They are: it's a lot of work, and my boss will not let me do this because it's a lot of work. There are also a few fuzzier ones. If you build your style guide very inflexibly, it's not going to accommodate a real-world user interface very well. 
Similarly, you might have a designer who believes in a lot of exceptions to the rules rather than just a few. And finally, if you make a style guide once and then assume it's set in stone, it's going to be just another set of documentation. It's going to slowly drift away from reality. We live in the real world. Our software is held together by love and duct tape. And I say "we" for a reason. Like probably most of you, I'm a veteran of the we-can-always-make-it-better-tomorrow school of Agile. And because of that, I can tell you that actually you can do this incrementally. You can get this bright, shiny dream world in the real world. The first thing we need to do is let go of the idea that a style guide is ever going to be done. It's been 15 years since the Agile manifesto. We have all embraced the notion that our software is iteratively built and never quite finished. It follows that our documentation, like the code it mirrors, needs to be responsive to changing needs, needs to be malleable. I've worked with a style guide before that was this pretty thing that the designers went off in a room and built over the course of a few months. It was great for about five months after they finished. But then the lead designer left and we replaced them. And our new designer went into another room for another two months or so and built us a new style guide. And it was beautiful too. But we were suddenly faced with this dilemma. Do we stop the world and migrate to this? Do we leave our user interface in an inconsistent state? What do we do? And none of these options were great for our users. And a lot of them involved us taking time to not ship code. So when we view these things as perfect versus worthless, when we hold up this perfect ideal in our head and expect that it's actually even all that useful in the real world, we suddenly stop doing iterative processes because we think there's no point. The history of industry has proven time and time again that iterative processes work much better than stop-the-world processes or, heaven forfend, never doing anything at all. Who here has written brownfield test suites before? So, taking an uncovered or undercovered code base and writing some tests around it. That's a lot of hands, and I expect, unless you're all really lucky, that when you do this, it kind of sucks. You tend to pick one UI path and write an integration test for it, or you pick one method in one class and you start writing unit tests for all the cases. And it's really, really terrible. You do it for features you're working on anyway at the time, but it's still really, really terrible. You're trying to maintain some kind of velocity in your feature, and you don't have any kind of proper test harness. You're needing to build that simultaneously. And so you kind of fake it and go, eh, good enough, maybe, and move on. But the next test after that, still not fun or anything, but it sucks a lot less. You already have a minimum viable test harness in place, and so you can just write your test, or you can maybe even flesh out your test harness a little and make it a little more full-featured. And eventually it gets easier and easier and easier, and then it gets easy, and that is the greatest feeling. And I'd like you to think about brownfield style guide documentation in the exact same way, because it is the same thing. 
The great news about looking at brownfield style guide documentation, like brownfield testing, as an iterative process is that it lets us sidestep many of the real-world concerns I outlined for you about style guides. It also forces us to figure out a set of baby steps that lets us maintain velocity while we build out our style guide. It forces us to make things a little bit better with every little baby step, so that we can prove to our teammates and our bosses that this actually is worth doing. And it forces us right away to grapple with the dead documentation problem. If someone did a stop-the-world doc project and released it on April 27th, 2014, I know coming into the project that it was relevant then. I can adjust my mental model of how accurate it is based on, like, some mental heuristics around code entropy. So if I'm coming to that project circa May 2016, I can know that probably many of the database tables are about the same. They might have a few more columns, but the general structure is still there. I can throw out anything the documentation says about the controllers as completely irrelevant. However, if I have bits and bobs of documentation that don't have dates on them, I don't really have any way of knowing how useful any given piece of it is. So the second it starts to drift away from reality, and I notice it's inaccurate, I can't make the same value-preserving, code-entropy type guesswork, because I don't know when it was accurate, and so I just kind of need to dismiss the entire thing. So again, we need to solve the documentation problem really very much first. But it also lets us sidestep the flexibility problem. When we document what we've already got and leave the rest of it for later, and leave any kind of organization for later, we know our style guide covers real-world use cases. So I'm going to teach you a three-step process for getting a style guide up and running incrementally. We're going to use three steps over and over and over again, in which we identify a UI component, codify it, and then document it. Step one, identifying, is super easy. We just look at our app. When you look at these screens side by side, they've both got pictures, or rather, they've both got cards on them, and they've also got tab sets. So if we attack tab sets first, when we get to step two, codifying these components in code, what we do is try to refactor what we've already got to be more internally consistent. This sets us up to document it later, and it also helps us sell this process to the rest of our team. We can't always say this new process, this change in documentation, is going to be magic and solve all our problems, because we don't have... Our teammates cannot be expected at that point to have context for why this is magic and why this will solve their problems. But it is very easy to say, hey, I improved these five lines of code, and get people to say, yay! So this is going to be something we use to help build trust, as well as set ourselves up for success with documentation. So as a heads up, we are hitting the code-on-slides part of this talk. It is the last day, it is after lunch, I'm really sorry. Please just focus on the broad outlines. Most of the slides have helpful bright red circles on them to point you to the interesting parts. At the end of the talk, I'm going to be giving you a link to a blog post, and it's going to have all the code samples in it. So you can look up every bit of code on these slides later if you want a reference. 
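To make the codify step concrete, here is a minimal sketch of the kind of view helper being described, with the generated HTML quoted in a comment right above the method. The module name, the tabs_nav method, and the Bootstrap-ish class names are invented for illustration; this is not the actual code from the talk's slides.

# app/helpers/tabs_helper.rb
module TabsHelper
  # Generates markup along the lines of:
  #
  #   <ul class="nav nav-tabs">
  #     <li class="active"><a href="/cats">Cats</a></li>
  #     <li><a href="/dogs">Dogs</a></li>
  #   </ul>
  #
  # `tabs` is an array of [label, path, active?] triples.
  def tabs_nav(tabs)
    content_tag :ul, class: "nav nav-tabs" do
      safe_join(tabs.map do |label, path, active|
        content_tag :li, link_to(label, path), class: ("active" if active)
      end)
    end
  end
end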
Again, think of this as a general overview that happens to include code. Of course, this slide has no helpful red lines on it. But also, I'm mostly just throwing this up to show you unrefactored code that's kind of repetitive. I'm going to skip past it and overlay these two things. Gosh, unrefactored code is really repetitive sometimes. This is an overlay of those two things. Note that you can't really tell, because it just looks like text. Because this is a Rails project, we're going to use a helper to refactor this. Note that I'm putting the output HTML in a comment directly above the method name. This makes reading the method quickly much easier. It gives you a literal representation, which can be a lot easier to read than content_tag. It's a lot easier for your designer to read than content_tag, because your designer knows HTML if you're lucky, but is not going to be learning Ruby, probably. It also helps you search for code more easily in large projects, which is a lifesaver. So we move our tab code to use that helper. It's nice and pretty. Yay. Look how nice. Look how short. This is what your code base actually looks like, right? That's okay. Five years ago, front-end Betsy was a bit of a snob and probably would not have said this is okay. But I was wrong then. Markup frameworks like Bootstrap and Foundation, which are what's usually going to give you these huge sets of classes, are really useful rapid prototyping tools. In the real world, it's fine that you don't always have a chance to go back and make your prototype nice. Also, five-years-ago front-end snob Betsy, when she said, ew, presentational markup, was actually missing the thing that's really wrong with this. When you look at these two examples, the semantic versus presentational nerd fight obscures the actual difference between them. The first one, at the top, is us using the markup to list out a collection of attributes that the tabs have. The second is us condensing that set of attributes into a useful abstraction that builds out the shared ubiquitous language of the user interface. How do we get from one to two quickly? We cheat. This cheat, incidentally, is great even if you're not necessarily that comfortable with CSS. If you're using Rails, you probably already have SCSS installed. It's a CSS preprocessor that gives us a few different options, like mixins, placeholders and class extends, that we can use to include existing CSS classes into new ones, much like Ruby modules. All we really need to do here, at least on the first pass, is make a parent class that encapsulates the concept and then just sling all the CSS from our detail classes into it using class extension. This is not the world's prettiest SCSS, but again, it's better than what we had before, and in a lot of ways that's all that counts. Generally this particular approach is pretty safe. The fact that we're making a new parent class helps us limit the area of effect of our refactors. But sometimes an abstraction requires a few nested tags and you can't collapse them quite this easily, and you maybe can't fix them the way you have them. Or maybe you do try this approach and something weird happens with the cascade, because CSS is a very globally oriented language, and then you go, no, and back up a bit. In that case, the next most heavyweight mitigation technique I have for you is building a nested helper. This is starting to get into the kind of code that's clever for clever's sake. So if your markup is simple, it might not be practical. 
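A rough idea of what that first-pass "cheat" can look like in SCSS, assuming the framework classes are available to the Sass build (for example, Bootstrap imported as Sass rather than plain CSS); the .tab-set name and the extended classes are made up for illustration:

// app/assets/stylesheets/components/_tab_set.scss
// Fold the presentational classes we kept copy-pasting in the markup
// into one named concept via class extension.
.tab-set {
  @extend .nav;
  @extend .nav-tabs;
}

// Markup can now say class="tab-set" instead of class="nav nav-tabs".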
Abstraction is really only useful when the thing you're abstracting is complex. So, yeah, sorry. Helper nesting looks like DSL magic from the outside, but it's the simplest possible form of Ruby DSL magic. What we do is we build the outer tag and then say the inside is going to be the content of a block that's passed to it. Then we yield control to that block. But what if you've got something even more complex? When we look at these screenshots, these are both cards, but they're really different kinds of cards, and so you need a lot of configuration options. When I was a much younger developer, the way I would do this would be the mega helper approach, and I'd be really proud of myself because I was writing fewer lines of code, but it would be a really bad abstraction and I would not be able to read it four months later. Helper nesting can help with this. But when I look at this particular instance of helper nesting, I wonder how to make it clear that the body methods are really only ever intended for use within the parent method. I can hope that people look at this code and get it, but if they guess wrong, then the errors might not give them useful feedback. On the other hand, when I'm trying to solve this problem for my coworkers, I need to make sure I'm not building some huge non-Rails-y thing, because no one wants to learn the hipster Betsy framework that's not documented. So what I do is I take form_for and form builders as an inspiration. I'm not straying super far from the Rails way here. I'm leveraging its metaphors so that you can use them to understand this code. This is great. It's not just me selling this code to my teammates. If code doesn't solve people's -- if code does not improve my teammates' lives immediately, I'm not going to get buy-in for that code, nor should I. We don't need to change the pattern we're using all that much. We're still using the build-a-thing-and-then-yield-control pattern. It's just that instead of yielding to one -- or instead of building one parent tag, we're building an object that knows how to render the entire component. Note that we're passing in the view context. This is the class all helpers are included into when you build a Rails view. We grab it using self and inject it into the component builder. There are a few different schools of thought about how you're supposed to inject the view context into non-helper things that use views. And I like this method. It's the least magic of them. A bit verbose, but no magic, and that's good. Your basic component builder is going to have three major parts: initialization, a rendering shell, and a collection of sub-renders. The only thing you need to do in the initialization step is grab the view context and stick it in an attribute. I'm using h for that here, which is a convention I borrowed from Draper. I usually hate single-letter variable names, but here the convention lets us focus on all the times we call content_tag or link_to or other actual methods. If we're just typing view_context over and over, it obscures what's important about the code. The other thing you can do, if you need to, in the initialization step is configure top-level CSS attributes or repeated text or stuff like that. In general, we want to keep this configuration pretty lightweight. We want to push the bulk of the work to the sub-renders. So next we build a render shell. This looks remarkably like the parent of a nested helper. It's the exact same thing. 
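As a sketch of that build-the-outer-tag-and-yield pattern, under the assumption that card and card_header are hypothetical names rather than the talk's real components:

module CardsHelper
  # <div class="card"> ...whatever the caller's block renders... </div>
  def card(&block)
    content_tag :div, capture(&block), class: "card"
  end

  # <h2 class="card-header">Title</h2>
  def card_header(title)
    content_tag :h2, title, class: "card-header"
  end
end

# In a view:
#   <%= card do %>
#     <%= card_header "Adorable cat" %>
#     <%= image_tag @cat.photo_url %>
#   <% end %>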
And so again, we're just building out the outer tag and then we're yielding control to the block. The sub-renders are going to do the bulk of the work. They're going to output all of the different bits that form the main body of the component. Note that again here, we're still adding comments that illuminate what the generated markup is, so that we can anchor ourselves. I personally find code that's very content_tag-heavy hard to read. And this is a great way of removing that layer of abstraction temporarily so I can focus on what's going on rather than what this code is doing. Again, when we've got this all assembled, the way we implement this in the view looks a bit like this. When you are testing these, please use Nokogiri, or pick your XML parser, I don't care what. You're going to be tempted at first to try to test them using plain strings, or maybe regexes that match against those plain strings. And that's really, really brittle. It will seem simpler at first. It will not be simpler in two months. Learn from my mistakes. And again, all of these techniques I've shown you are in many cases pretty heavyweight. You don't always need to find a nail for any given new hammer. If your gut tells you that these techniques are overkill for your project, trust your gut. Once you finish codifying a view pattern, the next and final step is documenting it. And this is super easy, because we've made these code comments and they start the documentation for us. For bonus points, what you can do is use RDoc's include directive to link to any given partial that you might be using within your code. And those will actually be inlined into RDoc by magic, which is a really great way of making your documentation auto-update. And auto-updating documentation is super important, because then you do not need to actively maintain it. And it can stay reliable without further input from you. But only your developers are ever going to look at your RDoc, probably. We want a communication tool that lets us communicate with designers and maybe even other stakeholders. We want something like this. Probably it is much less hard to build than you might think, looking at this nice, finished-looking thing. Since this is style documentation for the resource constrained, rather than style documentation for happy unicorn fairyland, we need to ruthlessly prioritize what we're going to include on the first pass. The great news is that that's not that much. There are two real-world use cases that this document is actually going to be used for. You just need to make something that encompasses both of them. They are: a developer is given a wireframe and skims through the style guide for a plausible-looking element, which they then copy and paste into their current working code. The other is: the designer and developer are talking and they point to things. And so given these use cases, our minimum viable product just needs pictures of components and example code for those components. Anything else on top of that, like use case descriptions and et cetera, is great, especially if you've got a larger team. But you don't need to do that in the first pass. And I advise you to not do that in the first pass and burn yourself out. There are a few different tools we can use to do this. My favorite tool for doing it in Ruby is a tool called hologram. So just install it like normal. 
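Putting those three parts together, a compressed sketch of the builder approach might look like the following; the names (card_component, CardBuilder, header, body) are invented, and this is an approximation of the pattern rather than the talk's actual code:

module ComponentsHelper
  def card_component(&block)
    # `self` here is the view context; inject it into the builder.
    CardBuilder.new(self).render(&block)
  end
end

class CardBuilder
  def initialize(view_context)
    @h = view_context # single-letter convention borrowed from Draper
  end

  # Rendering shell: build the outer tag, then yield control to the
  # caller's block, passing the builder in, form_for style.
  # <div class="card"> ... </div>
  def render(&block)
    h.content_tag :div, h.capture(self, &block), class: "card"
  end

  # Sub-renderers output the bits that form the body of the component.
  # <h2 class="card-header">...</h2>
  def header(text)
    h.content_tag :h2, text, class: "card-header"
  end

  # <p class="card-body">...</p>
  def body(text)
    h.content_tag :p, text, class: "card-body"
  end

  private

  attr_reader :h
end

# In a view:
#   <%= card_component do |c| %>
#     <%= c.header "Adorable cat" %>
#     <%= c.body "Likes naps and cardboard boxes." %>
#   <% end %>

When testing something shaped like this, parsing the returned HTML with Nokogiri (for example, Nokogiri::HTML.fragment on the builder's output) keeps assertions from turning into brittle string matches.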
The best tool for you if you're using a JavaScript-heavy front end like Angular, Ember, React, whatever, I think that's what the kids these days are using, is going to be a tool called KSS. You want the Node version of KSS. And you want this for two reasons. One of them is that the Node version of KSS is actually the only version of KSS that has a lot of the features I'm going to be talking about as important later. The other is that your style guide generator language always wants to match your view implementation language. So I'm going to be using hologram in all of my code examples. This is RailsConf. It works a little like RDoc, really. You put special documentation comments in your CSS files and the processor turns them into documentation. Putting your docs in your CSS, or someplace else central, is a great little nudge for your teammates that keeps documentation from dying. It's really easy to forget or deprioritize documentation when you keep it in doc/random_stuff.markdown. It's harder when it's right above the code you're working on, yelling, I'm here, fix me. And you feel really guilty if you don't. And so a good style guide generator is also going to give special status to your code examples, like the one I'm showing you here. Hologram's output looks a little like this. You're getting a super happy little picture of a card right above the code that makes the card go. We're fulfilling both of the needs of our style guide MVP right in one tool. It gives us both of these not by magic, but by writing. We could stop here, but we can also get a little better. We've built out all these cool view helpers that codify our abstractions. And hologram processors are just executing Ruby, just like KSS Node processors are just executing JavaScript. And so we can put our helpers in these examples. This takes a little bit more work to set up. It's a little finicky, it's a little Rails-version dependent. It's too much code to be worth putting on a slide, but it's linked in the blog post that I'm going to be showing you later. If you can do this, I advise you to. It is super magic, because then your markup changes when you update your helpers. All of these changes automatically propagate to your style guide. And so again, you don't need to do as much maintenance work. Living style guides are way more important and way more useful than the kind that you set and forget. So the ideal is something that looks a little bit like this. Note that the example outputs both the HTML that the helpers generate and the helpers themselves. This is really important for helping out designers. It lets them prototype rapidly using your actual style guide primitives. And they can do this even if they're not necessarily that comfortable with code, because it's right there for them to copy. And I'm going to describe something for you that you think will never happen, but it has happened to me multiple times. And that is that a designer actually does make one of these prototypes. And then the designer comes to me and gives me this prototype and I go, hey, I'm just going to wire this up really quickly. And it goes really smoothly and it goes really wonderfully. And then I notice something that's not quite right. Like there's a little bit of content that works differently in the real world than it did in the designer's happy, lorem ipsum, I'm-going-to-make-things-up world. 
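For reference, a hologram doc block is roughly shaped like this; the title, category, and card markup below are invented, and the exact front-matter keys you need may differ by version:

// app/assets/stylesheets/components/_card.scss
/*doc
---
title: Card
name: card
category: Components
---

A simple card for showcasing a photo.

```html_example
<div class="card">
  <h2 class="card-header">Adorable cat</h2>
</div>
```
*/
.card {
  border: 1px solid #ddd;
  padding: 1rem;
}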
And so then what we can do, because we have this shared understanding, because we have the shared code that we have built out and documented, is we can pair on how to make things better. We don't need, we get to lose, that process where a designer and a developer start yelling at each other in Jira, or giving passive-aggressive once-every-three-days comments in Jira about impracticality and ugliness, because we can just go and talk to each other. And we've got this mediating document that helps with that. It lets us work in this really fast, happy, user-oriented way. And it's a lot more fun. If you're doing that, if you have a style guide that lets you collaborate really effectively and happily, then it doesn't matter if it is a messy and ever-changing working document. Honestly, I think that kind of helps. It means that it's accessible, and it means that when we need to update it, making those updates is accessible rather than something we need to treat as a big deal. So if you take one and only one thing from this talk, it's that that is the goal, and it's that you can achieve it. The one core truth in Agile is: if it works for your team, roll with it. It's important to understand the building blocks of UI. It's important to use abstraction to effectively manage those building blocks. It's important to constantly communicate about what those building blocks are, to keep everyone on the same page. Whichever methods you use, you can get to the dream. You deserve to get to the dream. Good luck. So here's a link to both the blog post with code samples and a blog post from Pivotal that helps you set hologram up with Rails in some nice ways, using Guard to auto-update things. This is me. My name's Betsy. I'm on the Internet in a few places. My Twitter is mostly feminism and pictures of my cat, and occasionally science fiction, and occasionally also code. I work for an organization called ActBlue. We build fundraising tech for Democrats and we amplify the voices of small-dollar donors. Our average donation size is about $30, and since 2004, when we were founded, we have raised $1.1 billion. About $1.1 billion of that has been in the last one month, two months alone. About that, okay. This helps these donors' voices be heard in a real way by aggregating them. It really helps us keep the party more focused on the needs of people who don't have huge sums to donate. It means a lot to me. We're committed to a modern web development stack and a culturally and technically sustainable approach. We're hiring. Please talk to me, or to my boss, who's going to raise his hand now, if you're interested. I'd like to thank Corline, Ada MK, Chris Hoffman, the people of Arlington Ruby, and Zeal, and ActBlue for all giving me feedback on early versions of this talk and helping me make it hopefully good. Any questions? Yes? Yes? Absolutely. I'm going to be tweeting out this, tweeting out the link, too, right after I step off the stage. You? We are located in Massachusetts, but we're very remote friendly. I live in D.C. We probably have more remote developers than non-remote at this point, and it's a great remote culture. I can say that from experience. Cool. Thank you for coming. Thank you. I'm sure there's someone else who does that post.
|
Application view layers are always hard to manage. Usually we handwave this as the natural consequence of views being where fuzzy user experience and designer brains meet the cleaner, neater logic of computers and developers. But that handwave can be misleading. View layers are hard to manage because they’re the part of a system where gaps in a team’s interdisciplinary collaboration become glaring. A comprehensive, well-documented styleguide and component library is a utopian ideal. Is it possible to actually get there? It is, and we can do it incrementally with minimal refactor hell.
|
10.5446/31568 (DOI)
|
We are going to see a talk about the Rails boot process, and I would like to explain first the goals of this talk. So in this talk we are going to see a few things related to this topic. For instance, if you open the config directory, there are a number of files that we normally never touch, that are generated, like config boot, config environment, and we are going to see what they do. And Rails components in general can work independently of Rails. So Active Support, for instance, is a library that you know you can use in a Ruby script that is not running inside a Rails application. Also, for instance, you can use Active Record outside Rails. You can have a regular Ruby script using Active Record, connecting to everything; you have everything. But somehow, magically, you launch a Rails application and all these independent components are somehow organized for you, and seamlessly you use them, and there is nothing that the programmer has to do to get these things working together. And we are going to see how that works. And, you know, the final goal is to understand more or less what happens when. Okay? I say more or less because there is a lot happening, but we are going to have a good sense of what happens when. And for this talk, we are going to have to take into account a few things. First, the approach of the talk is thought for Rails programmers, all right? So for instance, it's not going to be like, you know, a walkthrough of code and code and code following the run path of these things. So we are going to see code, but it's not a walkthrough. So with this talk, I have tried to explain what I would like to know as a Rails programmer about the boot process, okay? And it is something that I would like a guide to cover, you know, that kind of information. In the boot process, we have railties and we have engines, but this is not a talk about railties and engines. We are going to see what they are, but that's a topic for a whole talk, okay? So we are going to see only what is needed for this talk about railties and engines. Therefore, we are going to see some code snippets, but they are going to be heavily, heavily edited. So I have not even tried to make the slides make clear that this has been edited, you know, with ellipses or something like that, or putting like a big warning. In general, the code we are going to see, if you open the real file, is going to have more stuff, okay? But there's a lot going on and I tried to select and, you know, cut everything out that was not relevant to the topic we are talking about. And finally, we are going to ignore Spring, you know. So this is the boot process, you know, the vanilla thing, okay, without additions. All right. So normally when you think about booting Rails, you have a server in mind, okay? You launch the application, you launch the server, you are able to serve requests and everything. But there are more things that boot Rails. So for instance, if you run the console, you know that somehow you have everything in the application available in the console, right? Also runner: if you pass runner a string or a filename, it gets executed in the context of the application. 
So somehow the application has been booted, you know, in order to run these commands. Runner indeed is a command that I love; I believe I run this command every day. So it allows you to run something quickly instead of launching the console and then Control-D and that kind of thing. You just, you know, runner something. And indeed, to understand the Rails initialization, runner 1, which is: execute the program that is the literal 1, you know, it's something that I do often. So that's the minimum thing, Rails runner 1, the minimum thing that, you know, loads what it has to load, does what it has to do, and then there are no more side effects beyond evaluating that literal 1, which is not a lot. And also in some Rake tasks, you know, you have also the environment loaded. So booting the application means being ready to serve all these kinds of things. All right, so let's open bin rails. Okay. This is a file that is generated in any, at least modern, Rails application. And we have here config boot: we load config boot and then it requires something else. Okay. If you open bin rake, you will see that it loads config boot and then does stuff related to Rake. Okay. So both of these files start by loading config boot. That's the first thing. And if you open config boot, you will see that it is basically doing Bundler setup. Okay. Bundler setup configures the load paths so the application is able to require the gems that are in the Gemfile, with all the constraints and everything, and is not able to load gems that are outside that thing. Okay. So Bundler setup tweaks the load path and, I don't know, all that is needed so that that is going to work. Then if you open config.ru, which is what you execute when you launch a server, you will see that it requires config environment. Okay. And then does something. Config environment is another important file. It loads the application, and this is config/application.rb, which is the first file we normally touch working as a programmer; that's kind of the first thing, you know, config/application.rb is where stuff starts, where you can config, I don't know, the time zone, that kind of stuff. Okay. So that's the file that is being referenced here. And then, very important, it runs Rails.application.initialize!. And this is the magic. So initialize!, this method, is the one that runs all the initializations. Up to now, we are just setting things up, you know, to be able to do this. And this is config/application.rb, which loads rails/all and then executes Bundler require with the groups that are relevant to that execution. Okay. So rails/all we are not going to see now, we are going to see rails/all later. Okay. Forget about it for a moment. But the point here is that we are doing Bundler require. This point is when the gem dependencies are loaded, okay, unless you opt out, okay, at this point. And after that, we evaluate the class that defines the application itself, which is, you know, this thing that is named after what you pass to the new command, that kind of thing. Okay. So the presentation is organized in a few blocks. And we are going to be a bit like a roller coaster. Okay. So we are going to dive a little bit into something and then go back up, doing some summaries. Okay. So this is the summary of what we have seen up to this point. So: define load paths. We have the gem dependencies ready. Then load rails/all, which is something that is going to be seen later. 
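For reference, heavily condensed versions of the generated files just mentioned, roughly as they look in a recent Rails app (exact contents vary a little between Rails versions, and "Blog" is an invented application name):

# config/boot.rb
ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../Gemfile', __dir__)
require 'bundler/setup'          # set up load paths for the gems in the Gemfile

# config.ru
require_relative 'config/environment'
run Rails.application

# config/environment.rb
require_relative 'application'   # evaluates config/application.rb
Rails.application.initialize!    # runs all the initializers

# config/application.rb
require_relative 'boot'
require 'rails/all'              # loads the railtie of every Rails component
Bundler.require(*Rails.groups)   # loads the gem dependencies for these groups

module Blog
  class Application < Rails::Application
    config.time_zone = 'UTC'     # application-wide configuration goes here
  end
end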
Then we actually load the dependencies, define the application class, run initialize. That's like the script, you know, that's the order in which things run at this point. And it's initialize that does, you know, the proper Rails boot process. Okay. All right. Rails::Railtie. So Rails::Railtie is a class that provides a number of things so that extensions, you know, are able to hook into this process. For instance, it provides hooks to run code when you launch a console, when you launch the runner command, you know. So as a gem you can say, hey, if I am loaded in a Rails application and the console is launched, please call this code. Okay. So we may have, you know, a series of blocks like this that are scheduled to run at that point. Then you have the ability to define custom configuration points. Okay. So when you see, for instance, config.active_record.something, you know, a railtie, and we're going to see later that Active Record has a railtie, allows you to define configuration points so that applications are able to express the configuration they want. And also, very important, they have the ability to declare code to be executed during boot. These are called initializers. Okay. So: hooks, configuration points, and the ability to declare initializers. So a railtie can declare, I want to run something when the initialization happens, whenever that is. Okay. Railties are defined by subclassing this class, this very class. And Rails knows which railties are defined because there's an inherited hook, you know, so that when Rails::Railtie is subclassed, there's an inherited hook that says, hey, I have been subclassed, and that by definition is a railtie. Okay. Well, there's a technical thing about some special subclasses that I ignored, but that's the idea. So let's see an example. Okay. This is, for instance, a railtie for FactoryGirl. And with this initializer thing with a block, FactoryGirl is declaring: when the application is initialized, and I don't know when that is, you know, but when it happens, you've got to run this code. So for instance, FactoryGirl at this point knows that Rails root is already defined. Okay. That's a contract. You know you can assume that. Okay. And in this case, FactoryGirl is setting up some factory paths or something like that. Okay. So FactoryGirl needs to define some paths that depend on the Rails application. And the way to do that in an integrated way with Rails is to define a railtie and say, okay, when you are booting, call me and I will configure myself. All right. So Rails components are integrated into the framework using railties. For instance, this is the railtie of Active Record. Active Record defines a railtie to integrate into the framework. In this case, for instance, this is an example. This is the hook that tells Rails to run this code if the console is launched. Okay. So if the console is launched, if it's launched in sandbox mode, which is a mode that starts a transaction and rolls back the transaction when the session is over, okay, load some code, which I have not copied here, okay, load whatever you need to support this thing. And for instance, you know that recently in the console you get the SQL logging there. So there's code that says, okay, log to standard error. All right. So that's the way Active Record integrates into Rails: by defining a railtie that does this and does a lot of other things, all right? Okay. 
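A toy railtie showing those three mechanisms side by side; the gem name and everything it configures here are made up for illustration, but the config, initializer, and console calls are the Rails::Railtie API the talk is describing:

require 'logger'
require 'rails/railtie'
require 'active_support/core_ext/module/attribute_accessors'

module MyGem
  mattr_accessor :definition_file, :logger

  class Railtie < Rails::Railtie
    # A custom configuration point, with a default, that applications
    # can then set as config.my_gem.verbose in config/application.rb.
    config.my_gem = ActiveSupport::OrderedOptions.new
    config.my_gem.verbose = false

    # Code scheduled to run during boot; Rails.root is defined by now.
    initializer 'my_gem.set_paths' do |app|
      MyGem.definition_file = app.root.join('config', 'my_gem.yml')
    end

    # Runs only if `rails console` is launched.
    console do
      MyGem.logger = Logger.new(STDERR)
    end
  end
end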
For instance, this is from Action Dispatch. And we are seeing here an example of configuration. We saw before an example of hooking into the console. Now we see an example with configuration. So for instance, this is the way Action Dispatch defines a configuration point called tld_length, and it gives this configuration point a default, which is one, okay? This is from Active Support. And this is another initializer. We saw an initializer for FactoryGirl. We are now seeing an initializer for Active Support. And we do not need to understand this code, but it's basically taking the time zone that the application has configured and setting whatever Active Support needs to set up to take into account that configuration point. Okay. So that's what rails/all does. rails/all just loads all the railties of the different components in Rails. So application.rb loads this file. This file is just looping and loading everything, all right? And as a side effect of loading these things, we have first that Rails knows that Railtie has been subclassed. So we are able to list the railties that have been loaded. And also as a side effect of loading these, we have the configuration points and the declarations of the initializers. So it's like a setup, okay? And that's the way the components work seamlessly in a Rails application. So, in reality, well, as a rule of thumb, let's say, maybe it's not 100% in all cases, but as a rule of thumb, I like to highlight this design. Rails is not coupled to the components. Rails is kind of agnostic to most of them, all right? So Rails does not have, in the initialization process, anything that hardcodes the stuff to integrate the components. It goes the other way around. The railties are loaded, and the configuration points are the interface between Rails and these components, okay? So Active Record just loads and says, hey, call me in the console, please run this initializer, blah, blah, blah, okay? And the same for the rest of the Rails components. So the railties are the components that know that they are living in a Rails application, okay? That's the way they integrate. So Rails does not hardcode, in general, anything about Active Record, that kind of thing. Rails just exposes a number of configuration points, Active Record loads, takes the configuration points that it needs to set itself up, and, you know, you are set, okay? That's the idea. So a vanilla Rails application has 15 railties, which are these ones, okay? So you see, everyone that needs to integrate with all these processes has to define a railtie. All right. Next block: lazy loading. So, in general, as an example of heavily edited code, this is Active Record, which has a lot of things, okay? All right. In general, if you open the root file of the Rails components, you will see a lot of autoloads like this one, okay? This is the autoload of Ruby. Well, Rails redefines it, because autoload in Ruby needs, you know, the constant and then it needs a path, all right? But when you follow naming conventions for the files and you write this thing three times, the reaction is to write something that automates this using the convention. So, to be clear, this is the Ruby autoload, not the Rails autoloading of constants, okay? So in general, when you boot, Rails tries to be as lazy as possible loading things, so you have, you know, the minimal things to do when you boot. 
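The rails/all file mentioned above is essentially just this loop; the exact list of frameworks varies by Rails version, but this is roughly the Rails 5 shape:

require "rails"

%w(
  active_record/railtie
  action_controller/railtie
  action_view/railtie
  action_mailer/railtie
  active_job/railtie
  action_cable/engine
  rails/test_unit/railtie
  sprockets/railtie
).each do |railtie|
  begin
    require railtie   # loading each railtie is what registers it with Rails
  rescue LoadError
    # that component is not in the bundle; skip it
  end
end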
One of them is setting autoloads. For instance, this is ActiveRecord, the namespace, so ActiveRecord::Base, ActiveRecord whatever, is going to be autoloaded. So ActiveRecord::Base is not going to be required on boot. It's going to be required when it's needed, you know? And that's thanks to autoload. So since things are going to be loaded only when needed, and there's code that needs to know, hey, have you already loaded Active Record, for instance, to include something into ActiveRecord::Base or whatever, there's this utility which is ActiveSupport.on_load. So for instance, in the Active Record railtie, we have to set up the logger of Active Record. And the initializer does not go straight away assigning the logger. Because the way to do this orderly is: you declare that when Active Record is loaded, then please set the logger with, you know, the block that I am passing here. Okay? So that's the way, you know, to defer things as much as possible in order to have a boot process as lightweight as possible. And at the end of active_record/base.rb, which is what Active Record considers to be, you know, loading Active Record, by definition, evaluating this file, it says run_load_hooks for Active Record. Okay? So when this file is evaluated, you know, the class is defined. And at the bottom of the file you have this line. And then everything that was, you know, scheduled to be run when Active Record loads is going to be run at this point. Okay. Next block is Rails Engine. Rails::Engine is a subclass of railties, okay? Of Rails::Railtie. They are defined the same way. So railties are defined by subclassing Rails::Railtie, and engines are defined by subclassing Rails::Engine. So it's the same kind of thing. Okay? And what is an engine? Well, this would be like one, two, three talks. But just to get the idea of what it is: first, it inherits everything from Rails::Railtie. So everything we saw before about console, runner, you know, hooks, initializers, config points, that's all available here. But it's like a superset. You can do more things with an engine. So for instance, you can define controllers, models, that kind of thing. You have initializers as well, assets, you know, a bunch of things. So it's closer to, you know, being able to define a subset of an application; that's the idea of an engine. Okay? So engines have some configuration points predefined and also some initializers. All right? So just by subclassing Engine, you get some configuration and some initializers for starters. These are the ones that are inherited. So when you define an engine subclassing Rails::Engine, you get this kind of thing. You're going to set the load path of the engine. Okay? So adding your own lib directory, or whatever, app/models, whatever, to the load path that is already set. Autoload paths, routing paths, locales, a number of things. Okay? So it is not important to follow every single step of this, you know, it's maybe too detailed, but just, you know, to give an idea of the kind of things that you inherit from an engine. View paths, load_environment_config, these are initializers. This is the initializer that runs whatever you have in config/environments/development.rb, config/environments/production.rb, whatever. And then some other paths. 
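In code, that deferral dance looks roughly like this; the logger assignment is just an illustrative use, and the real Active Record railtie does more than this:

# Somewhere in a railtie's initializer: schedule work for later.
ActiveSupport.on_load(:active_record) do
  # Runs only once Active Record has actually been loaded;
  # inside the block, `self` is ActiveRecord::Base.
  self.logger = Rails.logger
end

# And at the bottom of active_record/base.rb, Active Record fires the
# hook, which runs every block registered above:
ActiveSupport.run_load_hooks(:active_record, ActiveRecord::Base)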
Then you load config initializers, all right, in this order. And then there's a technical hook, engines_blank_point, that says: at this point, you have already run a number of things; if you are interested in hooking into this, you can, all right? So initializers by default go like in a chain, but you can say this has to run before this initializer, or this has to run after that one. And there are a number of technical points in order to be able to do that. And a vanilla Rails application, which is like rails new, you know, what do we get with rails new? There are four engines. These four, okay? Okay. Action Cable has an engine. The rest of the Rails components have railties, because a railtie is enough for what they have to do on boot. Action Cable has some assets, and that's the reason it is an engine. All right. And we are arriving at Rails::Application, which is a subclass of Engine. So look at this hierarchy. The base thing is a railtie. Then we have an engine, and an application is a subclass of Engine. So, you know, the application is like a particular case of all this design. Okay. Beautiful. I think it's beautiful. All right. So they are defined by subclassing. And that's what you do in config/application.rb. If you remember, you have your application subclassing Rails::Application. That's exactly what that file is doing. So the application is indeed a singleton. You know, there's just one instance of that class, and you can access that instance using Rails.application, which is a method. And when the singleton is instantiated, you get a hook called before_configuration. That's also just something that is fired. And if you're in a railtie, for instance, you say config.before_configuration, and what you pass is going to be executed at this point, which is just when the application got instantiated: you get this code called. Okay. So the application executes four groups of initializers. Four groups. It's organized this way. First you have the ones inherited from Rails::Engine. Okay. Rails::Engine has some predefined ones. Those ones you inherit, therefore they are going to be executed. Then there's a bootstrap group that does super basic things like setting up paths and that kind of thing. And we have the initializers of all the railties and engines that the application has loaded. Okay. And the way this is loaded is because we have a Bundler require in config application. So if you have an extension, a gem or something implementing a railtie or an engine, what happens is that when Bundler require loads that gem, that gem, you know, at that point defines the railtie or the engine, so that the application knows about the existence of this thing. And then there's a finisher group that does, you know, some trailing stuff. So the bootstrap group: for instance, the first one is a technical hook as well. Then, for instance, we load Active Support. Active Support is not something optional. You know, the whole of Rails uses Active Support. Okay. So straight away, this one is hardcoded. This one is loaded. Okay. Yeah. So this loads active_support/all, which brings you everything in Active Support, unless there's a configuration point, which is config.active_support.bare, which says instead of loading everything, just load whatever the application, I mean, whatever Rails needs, you know, to run. 
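The hierarchy reads nicely in code; here is a minimal sketch with invented names (MyEngine, Blog), just to make the subclassing relationships and the singleton concrete:

# In a gem or plugin:
module MyEngine
  class Engine < Rails::Engine              # Rails::Engine < Rails::Railtie
    initializer 'my_engine.announce' do
      Rails.logger.info 'my_engine booted'
    end
  end
end

# In config/application.rb:
module Blog
  class Application < Rails::Application   # Application < Engine < Railtie
  end
end

Rails.application          # => the single Blog::Application instance
Rails.application.config   # => its configuration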
And this slide means that we are finishing. No? All right. I saw a change. Okay. Okay. Fine. Yeah. So you are able to load, you know, the minimum of Active Support. Okay. And that said, I have never seen this one used, but we have it there. Then eager load, you initialize the logger; this is the bootstrap group. Okay. So you initialize the logger, the cache, the way constant autoloading loads things, which can be load or require depending on the environment or configuration. And then there's a before_initialize hook. Details, too much detail. Okay. There are a number of things going on, just having a look. And the railties and engines in a vanilla Rails application have 94 initializers declared. Okay. It's too much. So then we have the finisher group. Okay. You can think that things run more or less in order, except for the before and after constraints: when you declare a relation, you can say before this or after that. Unless you have defined something like that, you can more or less think that they go in order. Okay. So yeah, the finisher group does a number of things. Configure lib/templates in case generators need to load things from there. Yeah, this second one is technical. Then this one is defining, you know; you know that if you go in development mode to /rails/info, you get some pages there. And if you go to the home page, you get this new shiny thing, you know, in Rails 5. So how does that work? If you go to config/routes.rb, that's not in config/routes.rb, so how is it served? All right. Those routes are defined here. You build the middleware stack at this point. So we are kind of already late; you have to think most of the things are done at this point. Well, you define main_app, which is something for engines, doesn't matter. to_prepare blocks, important: these are blocks that are run, you know, at certain points of the runtime. And eager load is important: by default in production, you eager load the application. And then there's another technical hook, which is the finisher hook that fires another event, after_initialize, in case you have to do something after all this has run. All right. And a number of things that are maybe too detailed. Well, the second one loads the routes; this one is important. Okay. But there are a number of things going on in this finisher group. Okay. So once everything has been declared and we have everything loaded, you know, there's a topological sort going on here. Initializers can have a before or an after constraint. A topological sort, if someone does not know: if you have things that have a relative order declared, like "I have to run at some point, but make sure it's before that thing" or "I have to run at some point, but make sure it's after that other thing", a topological sort is getting this linear in a way that respects those relative constraints. Okay? So if you said you need to run before that thing, you are guaranteed that you are going to run before that thing. Maybe not immediately before, it doesn't matter; the constraint is relative. Maybe not just before, maybe two before, I don't know; it depends on the other things that have said they need to run before that hook. Okay. So that's the idea. In any case, we order the initializers to be run the way they have declared they need to be run. So at this stage, we have 124 initializers.
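The topological sort itself is plain computer science; here is a tiny self-contained illustration using Ruby's TSort from the standard library. The initializer names in the example are just for flavor, and Rails' real implementation differs:

    require "tsort"

    # Each key lists the initializers it must run after.
    class InitializerGraph
      include TSort

      def initialize(runs_after)
        @runs_after = runs_after
      end

      def tsort_each_node(&block)
        @runs_after.each_key(&block)
      end

      def tsort_each_child(node, &block)
        @runs_after.fetch(node, []).each(&block)
      end
    end

    graph = InitializerGraph.new(
      "set_autoload_paths" => ["set_load_path"],
      "set_load_path"      => [],
      "build_middleware"   => ["set_autoload_paths"]
    )

    p graph.tsort
    # => ["set_load_path", "set_autoload_paths", "build_middleware"]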
There's a lot going on. Okay. Because everything, you know, this is designed so that everything that needs to happen at boot is generally going to happen in an initializer. So there's a lot of things, 124. And I have listed all of them. So if you get the PDF or see the video, you have them at least, you know, as a reference. This is not like public interface; this is not something Rails is telling you is going to exist. You cannot, you know, assume that all of these 124 are going to exist in other releases or things like that. But anyway, just for the sake of the presentation, these are the ones. I am just going to pass through the slides because, you know, there's no point in going one by one. But, you know, you see we have setting load paths here, then autoload paths, you know, and there are, like, all right. So yeah, what I want to say here is just that you are aware that all of these initializers are defined and all of these things are running. Okay? All right. Okay, so we've seen a number of things and there's too much to get, like, the whole picture. So from all this, I've selected what I think I would like to have clear as a Rails programmer. Okay, so you've seen there's a lot going on. So this is the summary of the summaries, like, you know, the essential things that we need to know. All right. Summary of the summary. We go back to the beginning of the presentation, okay, with boot.rb and that kind of thing. So first, we define the load paths, which is Bundler setup. So we have the gems, you know, the ones that we want to be available and not the ones we do not want to have available. Then we load the railties, and as a side effect this defines all those configuration points and initializers, you know, Active Record and all the Rails components. After that, we load the gem dependencies with Bundler.require. Then the application class itself is evaluated; the definition of the application class itself is evaluated. And then there's a bunch of paths, like autoload paths, you know, load paths, stuff. All right. After that, and this is important, at this point we load config/environments/development.rb, production.rb, whatever. And this is why the configuration in these files takes precedence over the one in config/application.rb: simply because it runs after it. Okay? So if you say foo equals one in application.rb, and since after that you run config/environments/development.rb, for instance, and you say foo equals two, that takes precedence just because it's, you know, evaluated later, and that's the value that remains. After that, the initializers in the application, engines, or whatever are run. Okay? So first application.rb, then development.rb, production.rb, whatever. After that, config/initializers; these ones are executed in lexicographic order. And after that, if needed, the application is eager loaded. That happens in production mode by default. Well, another thing, yeah, parentheses. So we've seen that railties integrate into Rails via configuration points, all right? And in general, something curious as well is that in Rails, in general (maybe if you do a grep, maybe you find a counterexample, but in general) the Rails code base is not full of "if development?", "if production?". No, no, no. The interface is: we have parametrized Rails using configuration points. Okay?
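Backing up to the precedence point in that summary for a second: a minimal sketch spread across the two files, using the custom config.x namespace (the setting name payment_retries is invented for illustration):

    # config/application.rb
    module MyApp
      class Application < Rails::Application
        config.x.payment_retries = 1   # set first
      end
    end

    # config/environments/development.rb, evaluated later, so it wins
    Rails.application.configure do
      config.x.payment_retries = 2
    end

    # After boot, in development:
    # Rails.application.config.x.payment_retries  # => 2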
And when you generate an application, development.rb, production.rb, whatever, set, you know, sensible defaults for that environment. And then Rails just checks this configuration, all right? So it's not that in production you do something; it's that in production, by default, the generated file has a value that makes that happen. Okay? Right? So we run these ones, eager load, we load the routes, and if we were running a command, then the hooks of the command are run. Okay? So this is like the most important sequence of things that we have to remember, all right? And that's it. All right. Thank you. Thank you.
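As a concrete footnote to that config-points idea, eager loading is the classic example: the generated environment files set the value and the framework only ever consults it. A sketch along those lines; the internals shown in the comments are only approximate:

    # config/environments/production.rb (generated default)
    Rails.application.configure do
      config.eager_load = true
    end

    # config/environments/development.rb (generated default)
    Rails.application.configure do
      config.eager_load = false
    end

    # Framework code then checks the setting, not the environment name,
    # roughly in the spirit of:
    #   eager_load! if Rails.application.config.eager_load
    # rather than `if Rails.env.production?`.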
|
Rails ships as a number of components, Active Record, Active Support, ..., largely independent of each other, but somehow something orchestrates them and presents a unified view of the system. Then we have config/boot.rb, config/application.rb... what do they do? Application initializers, environment configuration, what runs when? Understanding how that works becomes an inflection point for any Rails programmer who goes through it. You go from a cloudy idea of an initialization that sets things up, for a certain definition of "things", to a well-understood process.
|
10.5446/31571 (DOI)
|
Thank you everyone for lasting so long. I know it's been a taxing day. My name is Ryan Davis. I'm known elsewhere as Zenspider. I'll be talking today about making a test framework from scratch. I'm a founding member of the Seattle Ruby Brigade. It's the first and oldest Ruby Brigade in the world; in fact, we came up with the name Brigade. I'm now an independent consultant in Seattle and I'm available. I'm also the author of minitest, which happens to be, according to the Ruby Toolbox, the most popular test framework in Ruby. I only mention that part because I'm kind of astounded that I'm beating RSpec. So, setting expectations, something that I always do at the beginning of my talks. This is a very code-heavy talk. I'm going to go into detail about the whats, the hows, and the whys of writing a test framework. I've got a lot of slides, coming out to 9.4 slides a minute; even if I ignore the 30 minutes that I'm supposed to talk and go to 35, I'm still hitting 9.4 slides a minute. I've given this talk twice already; it's already been recorded and published, so you can watch it there if you need to. I generally find that adding more slides adds more explanation and comprehension, and generally makes the talk go smoother as long as you don't have AV problems. So the presentation has been published, the slides are up at the URL above, and there is a facsimile of the code that I'll be presenting at GitHub at that URL as well. First, a famous quote not said by a famous person: tell me and I forget; teach me and I may remember; involve me and I learn. Who actually said this is a bit of a mystery, but it's usually attributed to Mr. Franklin. Whether it's a legitimate quote or not, I think it points out an important problem in code walkthrough talks. But don't get me wrong, not all code walkthroughs are bad. Some of them are actually quite good. They're absolutely necessary for work. I'm only talking about code walkthrough talks. Quite simply, I could write this talk in my sleep by simply working through the current implementation of minitest and explaining each and every line. You'd learn nothing from it. Sorry about that, you'd learn almost nothing from it, forgetting it almost as quickly as you read it. Some of the many problems of code walkthroughs: they're boring, they're top down, and as a result you focus on the whats and not the whys. They're all good reasons to tune out and not learn a thing. So these are all good reasons for me not to do a code walkthrough of minitest. That quote before, the one that wasn't actually by Benjamin Franklin; the real quote that it's based on is much better. Not having heard something is not as good as having heard it. Having heard it is not as good as having seen it. Having seen it is not as good as knowing it. Knowing it is not as good as putting it into practice. I'm not going to try to murder that. Here's a more concise version for those who think that was too many words. And here's a version for Tenderlove, who is my most ferrety of friends. So, starting from scratch. That's the point of this talk. Working up from nothing is the closest that I can get to allowing you to join me in building up a test framework from scratch. I will try to describe this in a way that you can literally code it up at the same time that I describe it and understand the steps that you went through to get there. Now would be a good time to open up your laptops if you want to attempt this. Many have tried. Few have succeeded. Further, this will not be minitest by the end of the talk, obviously.
Instead, it's going to be some subset. And I will apply the 80-20 rule to show most of what minitest can do in a minimum amount of code. To emphasize that, I will be referring to this as micro-test from here on out. There are always two people who Google it. I love it. And finally, I encourage you to deviate from the path and experiment, to do things differently, and you might wind up understanding the choices that I made. Finally, this talk is an adaptation of a chapter of a book that I'm writing on minitest. More info on that at the end of the talk. So, where to begin? At the bottom. The atomic unit of any test framework is the assertion. So let's start with plain old assert. In its simplest form, assert is incredibly straightforward. It only takes one thing, the result of an expression, and it fails if that isn't truthy. And that's it. You have everything you need to test. Thank you. Please buy my book when it comes out. Are there any questions? Okay. No. I would like you to have a few more bells and whistles than that. But before I add those, let's figure out what I wrote already and figure out why. In this case, I chose quite arbitrarily to raise an exception if the value of the test isn't truthy. I could have chosen to do something other than raising an exception, like throwing, or pushing some sort of failure instance into a collection of failures and returning. These are trade-offs, and there are trade-offs to all of these choices. It doesn't really matter as long as you wind up reporting the failed assertion at some point. I mostly chose to raise an exception because exceptions work well for my brain. There's an added benefit in that it interrupts the execution of the test; it jumps out of the current level of the code. We're going to see more of this later. So if you ran this expression, 1 double-equals 1 would evaluate to true, which would get sent to the test arg of assert, and that would do nothing in response: it's a pass. However, if you ran this expression, 1 double-equals 2 would evaluate to false. That would get sent to the test arg of assert, and that would wind up raising an exception. At some point, there will be mechanisms to deal with those exceptions and gracefully move on. But for now, the whole test suite will grind to a halt on the first failed test. One problem we currently have is that the raised exception reports the place where the raise was called, inside assert, and not where the assertion was called. I only want to talk about the place where the assertion failed, on line 5 of test.rb. So I'll clean that up. I'll clean up the exception a bit by changing the way raise is called. Raise allows you to specify what exception class to use and the backtrace you'd like to show. I didn't know that until about four weeks ago. I'll use caller for the backtrace, which returns the current runtime stack at the point where caller is called. Now we see where the test actually failed. That's much more useful to a developer dealing with the failure. Second, we're going to add our second assertion, assert_equal. Now that we have plain old assert, I can use that to build up any assertion that anyone would possibly need. About 90% of all of the tests that I write use assert_equal. Luckily, it's incredibly simple to implement. I just pass the result of a double-equals b to assert, and assert does the rest of the work. And here's how you'd use it. Where 2 plus 2 does equal 4, it would pass.
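Since the slides themselves aren't reproduced here, this is a rough sketch of the code described so far; it is not the actual minitest or micro-test source, just the shape of it:

    # Plain old assert: raise if the value isn't truthy, and point the
    # backtrace at the caller instead of at this raise.
    def assert test
      raise RuntimeError, "Failed test", caller unless test
    end

    # Built on top of assert, like almost every other assertion.
    def assert_equal a, b
      assert a == b
    end

    assert_equal 2 + 2, 4   # truthy: nothing happens, it's a pass
    assert_equal 2 + 2, 5   # falsy: raises, backtrace points at this line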
It would pass true to assert, and where you assert that 2 plus 2 equals 5, it would pass false to assert. The rest you already know. This is really all I need to do most of my work quite happily. But the way that it stands right now, it has a pretty unhelpful error message when a failed assertion raises. First, the backtrace is pointing at assert_equal. Didn't I just fix this? Mostly. I'm using caller for the backtrace, and it included the entire call stack, including the other assertions that may be up the stack. So I need to filter those assertions from it. So let's fix that by deleting everything at the start of the backtrace that is in the implementation file itself. This is still ugly. The failure just says "failed test". So let's make it possible for assert_equal to supply more information. Let's pull the error message up into an optional argument and use that argument in raise. Then we change assert_equal to use that with a helpful message. Now we get error messages that look like this. That's much more useful, although it may still not be 100% resilient, but it will do for now. Let's add one more assertion, assert_in_delta. One mistake people make time and again is misunderstanding how computers deal with floats. The rule is really, really simple: never, ever, ever test floats for equality. Yes, there are exceptions to this rule, but if you stick to that rule, you'll be fine 100% of the time. So while we have assert_equal, we should not be using it for floats. We're going to write assert_in_delta instead; that's just for comparing floats. What it should do is see if the difference between two numbers is close enough, where close enough, in our version at least, is simply going to be within one one-thousandth. You can make it fancier later, but for now, done is more important than good. So this is what it looks like, almost exactly the same as assert_equal except using the formula stated previously. And this is how it's used, and it works right out of the gate. So what does that mean? That means for now, assert is solid enough for general purpose use and we can write other assertions. Writing other assertions is fine and good, necessary even, but that could take hours and I'm only 25% through my slides. I will consider this an exercise for you after the conference, but think now about what your favorite thing to test is. How would you write an assertion for it? You should go do that and then have a cookie. So, once you can write one test, you'll want to write many tests. This starts to introduce problems of its own. It'd be nice to be able to keep them separate. There are many reasons why you'd want to break up your tests and keep them separate: organization, refactoring, reuse, safety, prioritization, parallelization. It would be nice to keep them separate, but how do you do that? We can do something really quick and easy like this. We'll call the test method. It'll take some string describing the test itself and a block of code with assertions in it, and it's equally easy to implement: you ignore the argument and you yield. This gives us the benefit that you can name the tests and you can put them in blocks so that you can see that they're separate, but they're leaky, and leaky tests infect results. Now that we can write multiple tests and keep them organized, we need to be able to trust them. The problem is that these tests aren't actually all that separate from each other. Here we can see a equals one at the top. The first test asserts that it is one and it passes.
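Again as a sketch of the additions described in this stretch (not the real implementation): the message argument, a float-friendly assertion, and the throwaway test-with-a-block helper.

    def assert test, msg = "Failed test."
      # In the talk the assertions live in their own file, so frames from
      # that file can be stripped from the front of the backtrace.
      raise RuntimeError, msg, caller unless test
    end

    def assert_equal a, b
      assert a == b, "Failed assert_equal #{a.inspect} vs #{b.inspect}"
    end

    def assert_in_delta a, b
      assert (a - b).abs <= 1.0 / 1000, "Failed assert_in_delta #{a} vs #{b}"
    end

    def test name
      yield   # the name is ignored for now; it just labels the block
    end

    test "never compare floats with ==" do
      assert_in_delta 3.0 * (1.0 / 3.0), 1.0
    end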
The second test mutates that local variable, tests it, and passes. And the third test, which expects to be like the first test, fails because the variable has been mutated. What we really want is for those tests to be completely independent of each other. The fact that one test can mutate a local variable that is used by another test is simply a mistake. This goes against good testing practices, which state that a test should always pass regardless of what was run, what order they were run in, or anything else. Otherwise you don't trust the tests, and trusting the tests is crucial. We're going to fix this using methods. There are a number of ways that we could try to patch this up. The simplest perhaps is just not to do it in the first place. Instead, just use Ruby. Ruby already provides a mechanism for separating blocks of code and making them immune to outer scopes. It's called the method, and it's free. The nicest thing about this approach is that it's absolutely free: there's no cost to using this that you aren't already paying by using Ruby in the first place. It's also important to remember that by using plain Ruby, anyone can understand it. It does have some drawbacks, though. First, you have to run the methods yourself. That's fine for now, and we'll address it later. Another, perhaps more pressing, issue is that there's code duplication in the previous examples. There are simple ways to get around that, too; I'm not going to bother going into that at this time. If you stick to plain Ruby, it should be pretty easy to do, though. So that's an exercise for you again. Now that we have multiple tests separated by methods, how do we get them to run? The same way you run any method: you call them. We could come up with a more complicated way, and we will, but this will do for now. Methods are a good means to separate tests. But more problems arise: unique method names; it's harder to organize, reuse, compose, et cetera. Luckily, Ruby comes with another mechanism to solve this: classes. Didn't I just say to keep them separate, though? Yeah, I did. But it'd be nice to organize them. There's some balance in between. So how do we do that? Well, we take the previous code and we wrap it in a class. Done. But how do we run those? Wrapping the methods in a class breaks the current run. In order to fix it, we need an instance of each class before we call the method. So we add that to each line and we're passing again. Now, granted, this doesn't really do anything for us. It does group the tests in classes, but more importantly, it puts us in an ideal position to make the tests run themselves. Right now, we manually instantiate and call a method for each test. Let's push that responsibility toward the instance and have it run its own test by adding a run instance method that takes a name and invokes it with send. This doesn't look like much either. In fact, by adding the call to run, we've made it more cumbersome. But this will make the next step super easy. It also provides us with a location where we can extend what running a test even means. For example, the run method would be a good place to add setup and teardown features or anything else you might want to do. What would you add? Running tests manually is still pretty cumbersome, so let's address that next. Now that an instance knows how to run itself, let's make the class know how to run all of its instances. We can use public_instance_methods and then filter on all methods that end in underscore-test.
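To keep track of roughly where the code stands at this point, here is a sketch: tests as methods on a class, and an instance that can run one of them by name with send. The class name and the _test suffix convention follow the talk; the rest is an approximation.

    class XTest
      def assert test
        raise RuntimeError, "Failed test", caller unless test
      end

      def addition_test
        assert 2 + 2 == 4
      end

      def subtraction_test
        assert 4 - 2 == 2
      end

      # A single place that knows how to run one test; a natural spot to
      # later add setup/teardown or anything else.
      def run name
        send name
      end
    end

    XTest.new.run :addition_test
    XTest.new.run :subtraction_test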
public_instance_methods returns an array of all the public instance methods on a class or a module, and then we can use Enumerable's grep method to filter on those, and wrap that up in a class method that instantiates and runs each test. This really doesn't do anything different from what we were doing before; it just enumerates over the methods. This allows us to collapse this into this. This would be a good point to pause and apply some refactoring. What we've got is well and good, but it's only on one test class. In order to really benefit from this, we should push it up to a common ancestor. Let's make a parent class and refactor that. We simply reorganize the methods into a new class. We're going to call it Test because we're very creative. Note that we also scooped up all the assertions while we were at it. Now we make all of our test classes subclass that class, and that's all there is to it. Do that to all the test classes and they all benefit from code reuse. This makes it super trivial to have a bunch of classes of tests that can run themselves, so let's push that further. The only thing left to address is where we manually tell each class to run its tests. So let's automate that too. Since we're using subclasses to organize our tests, we can use an underappreciated feature of Ruby, the class inherited hook. Every time a new test subclass is created, it will be automatically recorded, and Test will notify all of them to run when we tell it to. First we need some place to record the things we need to run. Then we use the inherited hook to record all the classes that subclass Test. From there, it is trivial to enumerate that collection and tell each one to run its tests. That allows us to rewrite this to this. It would be ideal to put that in its own file so you can just require it and kick everything off. That's all there is to it. So micro-test is kind of hamstrung at this point. It runs tests and that's great, but now what? Now that testing is supported, it would be nice to know what actually happened. On the one hand, silence is golden. If you don't see an exception raised, then everything worked, right? I think this is one of those situations where the Russian proverb "trust but verify" is a good policy to have. So let's give the framework a way of reporting the results and see if we can enhance things while we're in there. How do we know what the results of a run are? I personally would be pretty happy just seeing that something ran. Let's start with that as a minimal goal. Let's print a dot for every test run. As a side note, this is quite possibly my favorite slide I've ever made. Look at that. Look at that. It's a dot. Tufte would be so proud of me. Something about ink density. So let's add a print and a puts and we're done. This is a stupid simple thing to do. The emphasis perhaps is on stupid. Quite simply, we print a dot for every test and then we add a newline to keep it pretty at the end of the run. Doing so, we'd see this. And now we see that we ran three tests and that they all passed; that's actually really good information to know. I'm much happier. But what about failures? What happens when a test fails? Currently, if a test fails, it immediately quits, since it's raising an unhandled exception. That's not too terrible, but it does imply that you only see the first problem that raises. And that might not be the problem that you actually want to deal with.
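Pulling those pieces together, here is a sketch of the parent-class stage: discovery via public_instance_methods plus grep, and the inherited hook recording subclasses so one call runs everything. Again this is an approximation, not the published code.

    class Test
      def self.inherited klass      # Ruby calls this for every subclass
        classes << klass
      end

      def self.classes
        @classes ||= []
      end

      def self.run_all_tests
        classes.each(&:run)
      end

      def self.run
        public_instance_methods.grep(/_test$/).each do |name|
          new.run name
        end
      end

      def run name
        send name
      end

      def assert test
        raise RuntimeError, "Failed test", caller unless test
      end
    end

    class MathTest < Test
      def addition_test
        assert 2 + 2 == 4
      end
    end

    Test.run_all_tests   # prints nothing yet; a failure would still raise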
That might not provide as much insight, as much of the pattern-matching ability that humans have when seeing all the failures at once. So let's clean this up. We'll rescue exceptions and print out what happened. We go ahead and wrap that up in a begin, we throw a rescue in, we capture the variable, we print out the message and we print out the first line of the backtrace. Now we see all the tests regardless of failures, and we also don't see loads of backtraces; we're only putting that first line of the backtrace. Perhaps this is not the prettiest output, but it is much better than before. But there are several things that I don't like about this code. I'm actually doing okay on time, despite the speakers. I don't like that the logic for running a test and doing I/O is mixed up in the same method. It's just messy. So I want to address that, and in the process, refactor the code to be more maintainable and capable. The problem I have is that the run class method is doing way more than just running. Here we can see that there are about four categories of things that it's doing. The first thing I want to do is separate the exception handling from the test run. I really don't like that the test class run is handling both printing and exception handling, but I especially don't like the exception handling. So let's address that first. Since the test class run calls the test instance run, which calls your actual test, it's two hops up from where any actual exceptions are getting raised. We should refactor this and break up the responsibilities. I want run_all_tests to only deal with running test classes from the top. I want each class to run its individual tests. I want each test instance to run a single test and handle any failures. And I want something else entirely to deal with showing the test results. So let's move forward with that in mind. First, let's push the rescue down one level so that the test instance run returns the raised exception, or false if there was no failure. Now we change the test class run to print appropriately based on the return value. Now we have exception handling pushed down to the thing that's actually running the tests. Having exceptions only raise a single level usually means you're in a better place to deal with them. By doing this, we've also converted some exception handling into a simple conditional. Next, let's look at the I/O. Let's extract the conditional responsible for I/O into its own method, and we'll call that report. By doing this, we've put ourselves into a better position to move it out entirely. So let's do exactly that. We extract the report method into its own class called Reporter. We're going to grab that puts too while we're at it and put that in a method called done. This lets us rewrite run_all_tests into something that is much cleaner. We instantiate a Reporter object, we pass it through, we call done at the end. I think I just jumped over myself. We create a Reporter instance in run_all_tests and we use that throughout. We pass the Reporter instance down to run, and we use that to call report instead. And because name was a block variable before, we need to pass that down to reporter.report as well. Doing this, we removed all I/O from the test class and delegated it elsewhere. Throughout these changes, you should be rerunning the tests to ensure that everything works the same. But in this last case, it doesn't. The class name is wrong now that we've pushed reporting into a separate class. For now, we're going to go the quick-fix route by passing in the class.
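A compressed sketch of where this refactoring lands: the instance rescues its own failure and returns it, the class passes results to a Reporter, and all I/O lives in the Reporter. Details differ from the real code.

    class Reporter
      def report result, name
        if result                      # result is the exception, or false
          puts
          puts "Failure: #{name}: #{result.message}"
          puts result.backtrace.first
        else
          print "."
        end
      end

      def done
        puts
      end
    end

    class Test
      def assert test
        raise RuntimeError, "Failed test", caller unless test
      end

      def run name
        send name
        false                          # no failure
      rescue => e
        e                              # hand the exception back instead of dying
      end

      def self.run reporter
        public_instance_methods.grep(/_test$/).each do |name|
          reporter.report new.run(name), name
        end
      end
    end

    class MathTest < Test
      def good_test; assert 1 == 1; end
      def bad_test;  assert 1 == 2; end
    end

    reporter = Reporter.new
    MathTest.run reporter
    reporter.done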
I'm intentionally focusing on fixing this bug, not using the right abstraction. Sometimes that's the right thing to do, but you pay a price in doing so. So let's add a new argument to the report method and we'll just call it k. We're going to pass in the current class, which is self in any class method, to report. This fixes our output back to what we expect to see. But we had to add a third argument to do it, and that should be a hint that we're doing things wrong. So let's try to address this now. We don't need to pass the actual exception to report. We can pass anything that has all of the information that report needs to report the test result. And what better thing than the test itself? All we need to do is make the test record any failure that it might have, and make that accessible to the reporter. Let's add a failure attribute to test and default it to false. Then we modify the test run to record the exception in failure and return the test instance instead of the exception. Now we can use the accessor in reporter to get the message and the backtrace. Let's clean up the mess we made of the third argument. Now that e is the test instance, we're able to get rid of the k argument. This gets us back to two arguments, which in my opinion is still one too many. So let's try to remove name. With a little tweaking, test instances can know the name of the test they ran: first by adding an accessor and storing it in initialize, then by passing the name to the initializer and not to run, and finally by removing the argument from run and using the accessor instead. We've just shifted things forward a bit. This means that a test instance is storing everything that the reporter needs to do its job and we can get rid of the name argument. One more thing that I do not like is mixed types when you don't need them. Right now e is either false or the instance of a failed test. But tests now know whether they've passed or not, and false isn't helping; it can be the source of pesky bugs. So let's get rid of false. By adding an ensure block with an explicit return, and I always need to make sure that I point that out, we can get rid of the false and make sure that the run method always returns self. Next, let's add a predicate alias for the failure accessor, failure?, and switch the reporter to use this new predicate method. This looks pretty good now. I just don't like the name e anymore, since it's no longer an exception but a test instance. So let's rename it. Let's call it result. This makes the code much more descriptive, albeit a bit longer. Okay. At this point I could call it a night, but the output is still a bit crufty. I want to enhance the output. It would be nicer if we separated the run overview, meaning the dots and any failure indicators, from the failure details. Something like this. So let's change report to store off the failures and print all of them in done. This is pretty easy to do. We need a new failures array. To store them, we make an accessor and initialize it. Then we need to print an F in the case of a failure and store off the result in the case of a failure. Finally, we need to move the printing code down into an enumeration over that array. Now our output looks much, much better. But we're not quite done, because I have five minutes left. There are just a couple things left that are getting on my nerves. We've changed both report and done quite a bit. They're no longer doing what they say they do. So let's rename them.
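Before the renames, here is a sketch of where the test-instance side stands after this round of changes: the instance carries its own name and failure, always returns self, and exposes a failure? predicate. This is an approximation of the code on the slides.

    class Test
      attr_accessor :name
      attr_accessor :failure
      alias failure? failure           # predicate-style reader

      def initialize name
        self.name    = name
        self.failure = false
      end

      def run
        send name
      rescue => e
        self.failure = e               # record the exception on the instance
      ensure
        return self                    # explicit return inside ensure, as noted
      end

      def assert test
        raise RuntimeError, "Failed test", caller unless test
      end
    end

    t = Test.new(:no_such_test).run
    puts t.failure? ? "F" : "."        # => F  (the NoMethodError was recorded)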
Report becomes shovel (<<) and done becomes summary. We'll use those names in test. And at this point I'm actually pretty happy with the code. But we're not done yet. I'd like to add some more enhancements. One common problem that people often have writing tests is that their tests wind up depending on side effects of a previous test in order to pass. In that case, if a test is run by itself or in a different order, it's going to fail. This goes against that previous rule of testing, that tests should pass regardless of their order. And the easy way to enforce this is to run the tests in a random order. And that's easy to do in our current setup. But I'd rather not mix too many things back into this method. So let's start by extracting the code that generates all the tests to run. Now test.run only deals with enumerating the test names and firing them off, so we're in a better place to randomize the tests. We just do that with shuffle. The button's not working. That's really all there is to it. We could get fancier and push it up to run_all_tests and randomize across the classes and the methods and get all fancy and stuff. But again, this is a good compromise, and I'll leave that as an exercise for you. So we're done for now. What have we wound up with? Well, we wound up with about 70 lines of Ruby that does a good portion of what minitest actually does. It's well factored, has zero duplication of any kind, and the complexity score is incredibly low. It flogs at about 70, which is about five per method, which is about half of the industry average outside of Rails. Even without any comments, the code is incredibly readable. The reporter in the first column, the test class methods in the second column, and the test instance methods all fit on one slide. That's not bad. It actually runs about twice as fast as minitest because it does less. The worst thing about this talk is that I spent about nine slides per two lines of code, but that's a price that I'm willing to pay in order to make this as explainable as possible. So how did we get there? We started with the atom. We built that up to molecules. We gathered tests into methods and methods into classes. We taught the class how to run one method. We taught the class how to run all of its methods. We taught it how to run all classes. Then we bothered with adding reporting, error handling, and randomization as a cherry on top. And that's where the pitch for the book comes in again. So I'm hoping to soon publish a small book under Michael Hartl's Learn Enough series. Learn Enough, or learn enough to be dangerous, either way. If that goes well, I'm going to be doing a complete book on minitest and perhaps testing philosophy, I don't know. I will have a sample chapter coming out soon for review. It's not ready yet, so please follow me on Twitter for announcements. Thank you and please hire me. Thank you.
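To round out the walkthrough, the randomization change really is as small as described. Continuing the sketches above (so this assumes the Test class from the earlier sketch, with the reporter's report renamed to <<), it looks roughly like this:

    class Test
      def self.test_names
        public_instance_methods.grep(/_test$/).shuffle   # random order each run
      end

      def self.run reporter
        test_names.each { |name| reporter << new(name).run }
      end
    end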
|
Assertions (or expectations) are the most important part of any test framework. How are they written? What happens when one fails? How does a test communicate its results? Past talks have shown how test frameworks work from the very top: how they find, load, select, and run tests. Instead of reading code from the top, we’ll write code from scratch starting with assertions and building up a full test framework. By the end, you'll know how every square inch of your testing framework works.
|
10.5446/31572 (DOI)
|
Alright, I have 10 seconds of my time here so I think I'm going to get started. So hello everybody, the title of my talk is the one that you can read there: Tweaking Ruby's garbage collector parameters for fun, speed and profit. My part is the fun part; for my boss it's really speed and profit, so that's how I got to convince him to give me time to work on this. So before I start let me just tell a little bit of a story here. So who is here... Alright, is that better? Okay, I'm going to try to stay still here. So who is here for the very first time? Alright, it's a pretty good audience, and that's something that's really amazing to me about RailsConf. Last year was my first RailsConf and I had a really great time. I thought it was an awesome conference, and after it finished I decided that this year I wanted to come here as a speaker. As a matter of fact I'm here, so any of you guys that are here for the very first time, if you like the conference and afterward you think it's really awesome, you're not alone, a lot of people are going to think like that. But if you decide to come next year as a speaker, I can assure you that's totally possible. And if you have any questions about that, if you want to talk, if you really get interested in that, let me know. I can share a lot of the road that took me here. So with that out of the way, my name is Elio Colla. I've been doing software development for about 15 years. I spent about 10 years doing C/C++ in Solaris and Linux environments before I switched to Ruby on Rails. It's been about 5, 6 years that I've been working with Ruby on Rails, and I go by that thing on the Internet if you want to find me. So let's get started and let's talk about the Ruby garbage collector. So throughout the presentation I'm going to go through some of these terms. GC is the garbage collector, RGenGC is the restricted generational garbage collector, RincGC is the restricted incremental garbage collector, COW is the copy-on-write technique, AB is ApacheBench, a benchmarking tool. Just a couple of terms. My talk is going to follow these topics: why I'm here talking about the garbage collector, despite my little story at the beginning; something a little bit about the history of how the Ruby garbage collector algorithms evolved through the Ruby releases; some configuration parameters and how they evolved; and my approach to measuring and tuning these options, which is going to be how I got to this talk, how I put all this together and how I learned about all this. And at the end I hope I have some time for questions and answers. Alright, so why I'm here and why the garbage collector. So once upon a time there was a Rails app. There were one, two, three, four, five Rails apps. And they were all put in production, living, live and prosper. But we didn't have a lot of insight into how they were behaving. So we, the company, decided to install one of these fancy full-of-fancy-charts monitoring tools. And that's how all this got started. We installed this monitoring tool and I got exposed to this information. And I was looking into these things and I was like, alright, on the left side I have everything mostly blue; it only says GC runs. On the right side I have this mostly yellow, a little bit of blue, and it shows major and minor. So what is that? I don't know, but it sounds like something interesting to find out. And I also saw this: on the left side you can see that the average is 80 times. That's how many times the GC is running per hundred transactions.
The right side is running 46. It's like, hmm, alright, the right side seems better, so why is my left-side app not behaving like my right-side app? So I wanted to find that out. And at that point I had enough questions to get me going and get me interested in this. So I did a lot of research. I read a lot. I googled everything I could find about the Ruby garbage collector. And I had a really great time. I really, really had a great time learning all that. It was pretty awesome. I guess I was a little bit bored at work as well. But I decided that I wanted to share this, because despite having found a lot of documentation, I didn't see a lot of people talking about the Ruby garbage collector. And finally, my talk got accepted; if that hadn't happened, I wouldn't be here. So before I dive into how I approached these things, I'm just going to give you guys a little glance at how the algorithms evolved through the Ruby releases. So let me ask a question first. How many of you guys here have ever changed any Ruby garbage collector parameter in production? All right. That is a handful, a dozen people. All right. That's interesting. You know, at one of the meetups that I did in DC, I think one guy in the room raised his hand, but it was a much smaller room anyway. So in this part about the implementation, there's going to be a lot of information. I don't expect you guys to follow everything, to keep everything in your mind. I'm only going to glance through these algorithms; I'm not going to explain them in detail. But if that's something that you get interested in, that you get curious about, just let me know. There is a lot of documentation that I can point you to. You can Google around as well and find it. I have some references that you guys can use and I can point them out to you. So basically in 1.8, the algorithm is simple mark and sweep. 1.9.3 is the lazy sweep, 2.0 adds bitmap marking, 2.1 is the RGenGC, and Ruby 2.2 came with the RincGC and Symbol GC. So during my research, I came across Aman Gupta's blog. And I didn't come across this blog at the beginning of my research; it was close to the end, after I had read a lot about the Ruby garbage collector, mostly about the RGenGC and RincGC, the generational and incremental garbage collectors, which is where you find more documentation. And then towards the end of my research, I came across this blog, and I am a visual person. And when I saw it and I saw how concisely and visually he expressed the difference between these different algorithms, it all clicked in my head. And he also gave me some perspective on how, prior to 2.1 and 2.0.x, those algorithms evolved. So for the next couple of slides, I'm just going to give you a couple of screenshots from his blog of how he showed that. And it's in a visual way, so it should be easy to digest, even though there is a lot going on behind the scenes. So 1.8 is simple mark and sweep. Basically you have a mark phase and a sweep phase, and during that time it has to stop the world. That means that your code is not running because the garbage collector is running. And it marks a bunch of objects, and then the ones that are not being referenced get swept. And then your code resumes. 1.9.3 is the lazy sweep. The main difference here is that the mark phase is still the same, but the sweep is done lazily. So as objects are needed, slots are swept and freed for your code to use. And that's the 1.9.3 lazy sweep.
2.0 bitmap marking: this is just about memory management. It's improved memory management for gems and applications that do fork and do those kinds of things, copy-on-write friendliness. And also the mark phase was rewritten to be non-recursive; that's what got in there. But the main logic of the mark and sweep phases remained the same. And if you plug a 2.0 application into one of these fancy full-of-fancy-charts monitoring tools, that's what you see. You see GC runs. This is from a simple application that I have on my machine. Ruby 2.1: here's where the juice starts. So the idea here is that objects die young; you've probably heard about that. And if an object survives a GC run, it gets promoted to the old generation of objects. So the next time a minor GC runs, it doesn't go through all the objects, only the young objects. So you traverse a smaller list of objects, and so you spend less time. And that's kind of the whole idea of the generational GC. Again, there is a lot more to it; I'm just going to go through this to give you guys some basics. 2.2 has the RincGC and the Symbol GC. Symbol GC: hey, symbols just get collected now. So if you're a symbol lover, you don't run the risk of a DoS attack crashing your app anymore; the symbols are going to get collected. And the incremental GC. So the incremental GC is all about shortening the pause times. As you can see, it's just a replacement of one long mark pause by a few small mark phases. And from the implementer, who I think is here in the room, this is not a silver bullet. It's not guaranteed to increase your throughput and improve your response time. It was all about replacing a long pause with small pauses. And if you plug a 2.1 or 2.2 application, and also 2.3, into one of these full-of-fancy-charts monitoring tools, that's what you see. You see major and minor. The major should run less often than the minor. And that's how it looks. So, a couple of configuration parameters. The same way that the algorithms evolved and got more complex and faster, the configuration parameters also evolved, growing in number and complexity. In 2.0, we have these three configuration parameters: malloc limit, heap min slots, and free min. This is all about how many slots you allocate during your application startup and how many slots need to be free, the minimum number of slots that need to be free after the GC runs. So it's going to release some memory, but if there is not that amount of free slots, it's just going to allocate more for your application to use. And so in Ruby 2.0.x we have these three parameters. As you go to 2.1 and above, we now have 11 configuration parameters. There is some documentation and explanation about all these parameters. I'm not going to go into details here because you probably would leave the room if I did that. But basically, those are the parameters: heap init slots, heap free slots, heap growth factor, heap growth max slots, heap old object limit factor (this is a pretty cool one, I kind of like it). And now you have these other sets of configuration parameters that are related to young and old generation objects: malloc limit, malloc limit max, malloc limit growth factor, and the oldmalloc counterparts. I didn't poke around with all of them. I kind of changed most of them and tested when I was doing all this work, but at the end I didn't end up changing all of them. All of them have default values, so if you don't change them, there's going to be a default experience. But sometimes that is not your best option.
If you look at the Ruby source code, the gc.c file (and I have the URL in the references there), that's what you're going to see. In 2.1.x and above, you have this long commented section that lists all the configuration parameters, with some documentation. And if you can read that, the first one, heap init slots, is "initial allocation slots". The one that I thought was cool was the GC heap oldobject limit factor, which is: do a full GC when the number of old objects is more than R times N, where R is this factor and N is the number of old objects just after the last full GC. First of all, when I read these descriptions I was like, holy shit, that's pretty cool. So as I said, I worked for almost a decade with C and C++. So going back to this, reading this stuff, compiling the Ruby source code and making that compiled version of Ruby be used by my Rails app, it was pretty awesome to me. Before that, I never really went back to C and C++. So that kind of helped me have some fun during this process. And at the end, if you're wondering, that's how you actually test this. So if you want to change some of these configuration parameters and test that with your application on your laptop, that's what you do. Those are all environment variables on your Linux machine. So you export that variable with some value and then do rails s, and then when the Ruby VM starts and the garbage collector gets set up, it's going to pick up those variables from the environment and it's going to set up the memory accordingly. If you want to change them, you stop, you export those variables with different values, and you start it again. So for testing on your laptop, that's pretty much what you do. So with that, I've finished the stuff that I wanted to glance through with you. If the details of those algorithms are something that you get curious about and you want to learn more about, I can point you guys to some documentation. Also, I strongly recommend you guys check out Eric Weisstein's talk at RubyConf last year. He gave a talk on the garbage collector as well, but his approach was explaining in more detail how all these algorithms work. So I was there, I saw his talk, and I watched it after I had been through all this learning. So watching him explain these algorithms was pretty cool. I really liked it. I really felt like I was back in college watching my CS teacher explain algorithms. It was pretty cool. I liked it. And as a matter of fact, I don't like everything, but the things that I'm talking about, probably there's a reason for that. So I think his talk was pretty cool. I really liked it. So now let's go back to where everything started. As I mentioned, I got exposed to these things that you've seen. On the left side you have the GC runs and on the right side you have my major and minor. And those averages on top, 80 versus 46, kind of bugged me. I was like, hmm, that sounds like there is something here for me to improve. But I wasn't sure whether those numbers were normal. Maybe I'm in a normal scenario. During my research, I couldn't find any documentation or anything that gave me a clue that, all right, if you're running on average from 0 to 15, you are awesome; if you're from 15 to 25, you're great; 25 to 75 is okay; and above 75, you're really bad. I couldn't find anything to get any knowledge about that. So I wasn't sure whether my 46 was great or really bad.
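Coming back to that parameter-testing workflow for a moment: one way to sanity-check that the exported values actually took effect, and to watch the counters they influence, is GC.stat from inside the app. A sketch; exact key names vary a bit between Ruby versions, and the env var values below are only examples:

    # Run under, for example:
    #   export RUBY_GC_HEAP_INIT_SLOTS=600000
    #   export RUBY_GC_HEAP_FREE_SLOTS=600000
    #   ruby check_gc.rb   (or from a rails console after rails s picks them up)

    stats = GC.stat
    puts "total GC runs:  #{stats[:count]}"
    puts "minor GC runs:  #{stats[:minor_gc_count]}"
    puts "major GC runs:  #{stats[:major_gc_count]}"
    puts "heap slots:     #{stats[:heap_available_slots] || stats[:heap_length]}"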
Maybe tomorrow I'm going to get a call overnight saying that my API is down, the service is down, and I'm going to have to wake up in the middle of the night to fix this application and try to find out why the machine is dying. I wasn't sure. And that was one of the reasons why we installed this monitoring tool on one application: because we wanted more detailed, more practical measures to see how healthy our applications were. And so I went into research mode. That's what got me into all this. I read, I studied the source code, I compiled the Ruby source code, I did all of that. And as I mentioned, I had a really great time. And after a lot of research, I realized: all right, that Ruby 2.0.x app, I'm not going to worry about it. That's going onto an upgrade path; it's going to go to 2.1, 2.2, and then once it's on 2.2, all right, I'll figure out what to do. But for that application on 2.1.x, as I mentioned, I didn't know whether 46 was normal. Maybe 46 is super great, I mean, a really good spot, and I don't need to worry about this, but I wasn't sure. I needed to find out. So what did I do? All right, I said to myself: let me run all this on my laptop, do some tests, some basic load tests. I'm not going to go too crazy, not spend too much time, and maybe I'm going to find something. Maybe that's going to give me some clue as to whether 46 is good or bad, or maybe I'm going to realize that I'm just crazy and go do something else. So I did that. I did a lot of tests. And at the end, that's what happened: still, on my laptop, despite everything that I did, the GC was running 3.99 on average. I was like, hmm, all right. So it's not like what I'm seeing in my production apps. My production apps are running, on average, 40-something times out of 100 transactions. And then what did I do? A couple more days went by, testing, analyzing data, and in the process what I realized is: all right, the tests that I'm doing on my laptop are not on the same architecture, the same hardware architecture, as my AWS servers, as my production environment. Maybe the tests that I'm doing on my staging environment, which is shared between a bunch of applications, have a similar architecture, so maybe those tests are going to be better than what I do on my laptop. I don't have the same amount of data: in production I have a huge, freaking amount of data, and I had that neither on my laptop nor on my staging environment. And my AB calls, maybe they're not reflecting exactly how my endpoints are being used by my users. So then I kind of got to the point: right, I'm not going to try too hard to simulate my production environment, how much pressure I'm putting on it, in my testing environment, because I may never get there. So what did I do? I slept on it. I said, all right, I'm doing too many tests, I have too much stuff in my head, and I need to go do something else. My boss was bugging me about that other feature that I was working on. And then during those meetings, after those meetings, and a couple of hours hanging out in the meeting room, I tried to talk to some of my teammates about the thing that I was trying to do and tried to get a sense of, right, maybe they are going to say, dude, you're crazy, forget about this. Or maybe they're going to say, oh yeah, that makes sense, all right, what are you saying? Your theory kind of makes sense. So have you thought about this?
Have you thought about that? Have you seen this or that? So I tried to talk to them and explain what I was doing, because maybe that would help clear my mind and give me an idea of what to do next. And what really helped me after that was to plan what I was going to do. Because up to this point, I had a lot of information about how these algorithms work, the parameters that I can change, which of the parameters I changed make a difference and which don't. So I had done a lot of tests, and I had a lot of stuff. And planning what I was going to do next and focusing on that helped me a lot to set myself on a path where I was able to find something that I thought was cool. So after that, either this or that is going to happen. Either you're going to see something that you haven't seen before, and it was there, right in my face, and I wasn't seeing it. So in my dev environment, the same fancy monitoring tool was showing that my object allocation per transaction was in the range of 13,000 objects, while in production it was 94,000. It's like, oh, all right, that might be why the GC is not running as crazy as it is in production: because my backend in my testing environment doesn't have a lot of data. So I'm manipulating fewer objects, and I'm spitting out a JSON response much smaller than what production is spitting out. And maybe my AB calls are not actually trying to get as much data as my real users do. And then when I fixed that, what happened? I got to something that was very similar to what I was seeing in my production environment, and that was still in my testing environment. And then I got to a point where, all right, I have an environment here that simulates the same GC pressure, the same pressure on the garbage collector, that now I can control. I can test. I can replicate. I can change. I can find out whether a change is going to improve things or not, whether it's going to relieve the pressure on the garbage collector or not. So I was pretty happy about that. If that doesn't happen, if you can't find a scenario in your testing environment that replicates, with some level of similarity, what you have in your production environment, then you establish a baseline and you work on that, and then you see whether, if you make a change, it improves or not. But if you can't replicate it, if you don't have an environment similar to what you're seeing in your production environment, then it's harder to make the case that the change you make is going to have the same effect in your production environment. But anyway, I did get to that, actually both on my laptop and in the staging environment, when I added more data into my database and when I changed a little bit how my AB calls work. And after I got to that, then it's about change, measure, and analyze. Change, measure, and analyze. Do that over and over again. Don't do that when you're hungry because it doesn't work. Do that early in the morning only. And analyze what you're reading. Try to really read what you're seeing. Remember, I spent a lot of days looking through those graphs and for a long time I didn't see that my dev environment was allocating 13,000 objects while my production environment was at 94,000. It was there, black and white, maybe kind of grayish and white, but I didn't see it.
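Those allocation counts the monitoring tool surfaces can also be approximated locally with GC.stat deltas around a block of work, which is handy for the change-measure-analyze loop. A sketch; the allocation key is :total_allocated_objects on Ruby 2.2+ and :total_allocated_object on 2.1:

    # Rough per-request-style measurement around any block of work.
    def gc_pressure
      key = GC.stat.key?(:total_allocated_objects) ? :total_allocated_objects
                                                   : :total_allocated_object
      before = GC.stat
      yield
      after = GC.stat
      {
        objects_allocated: after[key] - before[key],
        minor_gc_runs:     after[:minor_gc_count] - before[:minor_gc_count],
        major_gc_runs:     after[:major_gc_count] - before[:major_gc_count]
      }
    end

    p gc_pressure { 10_000.times.map { |i| "object #{i}" } }
    # => something like {:objects_allocated=>20000, :minor_gc_runs=>1, :major_gc_runs=>0}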
So you have to analyze, especially if you generate a lot of data; you have to take your time to let your brain digest all that information. And one thing that really helped me is that some of the numbers I was getting during my tests I literally wrote down on a post-it near my laptop, and I kept looking at that from time to time. This use case, this test scenario, I got these numbers; these other tests, I got these. And literally writing that down in there kind of helped me to try to extract some knowledge from it and get some clue out of that. And also, the other thing — this is something that I always do — is if you're testing these things and trying to extract knowledge from the tests that you're doing, try to do one change at a time, because then you don't have to worry about, oh, which change did affect that. If you do one at a time, it's kind of a little painful and tedious, but it's a good thing. Again, this is not advice to you. This is just something that worked for me in this particular scenario. You probably have all sorts of different ways to test and to approach some of these things, but this is how I felt comfortable and how I was able to validate myself and the things I was doing while I was doing and learning about this. So again, either this or that will happen. That's what I mentioned. And if you find something — in my case, I did find a couple of configuration parameters that improved my response time, and I was really happy about that — if that really happens to you in something that you're doing, investigating, document it nicely, share it with your team, make a small presentation inside your company about that. Make a small talk out of it. Maybe next year you're here at RailsConf speaking about it. And if you do that — especially in the company that I was working at at the time — after all that work that I'd done, it was pretty easy for me to deploy this stuff into production. And if you don't find anything, then you still document something. If you spent a lot of time, if you learned, if you tried a lot of different things and it still didn't work out, you still couldn't find anything, document it the same way. Because maybe there are a lot of people in the world trying to do the same thing alongside you. And if you share that information, maybe you're going to save a lot of people's time. Or maybe you're going to get some feedback from the community on what else you should do to solve the problem you're trying to solve. And so that's how I went about all this stuff. And now in the next slides, there are going to be a couple of charts, very easy ones. And if you guys don't want to do some of these things yourselves, you can piggyback on some of these images and you can send them to your boss. I'm pretty sure your boss is going to like these charts. And maybe that helps you convince them to let you poke around some of this stuff. At the end, these were the parameters that I changed — I'm sorry — at the end, these were the parameters that I changed that made it all the way to production: heap init slots, heap free slots, malloc limit and malloc limit max. I'm not sure if all of them, or actually one or two of these combined, were the ones that resulted in the improvements. But these are the ones that, after a lot of testing, I decided to take into production. So now I'm just going to show you guys a little bit of some comparison and some charts, some nice charts. At least I think they're nice.
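Those four parameters are set as environment variables that the Ruby VM reads at boot, so the change lives in the app server's startup configuration rather than in the code. A hedged sketch of what that might look like — the numeric values below are placeholders, not the values used in the talk:

    # The four knobs named above are read from the environment when Ruby starts,
    # for example in the app server's startup script (placeholder values):
    #
    #   RUBY_GC_HEAP_INIT_SLOTS=800000 \
    #   RUBY_GC_HEAP_FREE_SLOTS=600000 \
    #   RUBY_GC_MALLOC_LIMIT=64000000 \
    #   RUBY_GC_MALLOC_LIMIT_MAX=128000000 \
    #   bundle exec puma
    #
    # From inside the running app you can sanity-check what actually got picked up:
    ENV.each { |k, v| puts "#{k}=#{v}" if k.start_with?("RUBY_GC_") }

    stats = GC.stat
    # Key names differ a bit across Ruby versions; these exist on 2.2+.
    puts "heap slots available: #{stats[:heap_available_slots]}"
    puts "heap slots free:      #{stats[:heap_free_slots]}"
    puts "GC runs so far:       #{stats[:count]}"

The defaults for these variables live in gc.c in the Ruby source the speaker mentions reading, which is also where you can confirm exactly which ones exist for your Ruby version.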
So I'm going to compare Ruby 2.0.x, 2.0.0, with 2.1.7 before any tuning, before any experiments. Imagine that you have your app on Ruby 2.0.0 and you migrate that Ruby version to 2.1.7 — that's what you should see, what we expect to see, in the application. Then I'm going to compare the same app with Ruby 2.1.7 before and after tuning these configuration parameters. And then it's going to be the turn of 2.2.3 before and after, and 2.1.7 versus 2.2.3 after the tuning, just to find out how they compare and how those algorithms play a role in spending less time doing garbage collection. So as you can see here, on the left side is Ruby 2.0.x and the right side here is Ruby 2.1.7. And that's how everything started. I had two apps with two versions of Ruby. And in this case, it's the same application when I run it and do some load testing with Ruby 2.0.0, and the same application after upgrading the Ruby version to 2.1.7. And out of the gate, the average of GC runs dropped from 80 to 40. That's what you get just by upgrading 2.0 to 2.1.x. So here now is a different metric. Again, on top is 2.0.0, on the bottom is 2.1.7. And this metric is the time spent in garbage collection as a percentage of your wall clock time. I'm pretty sure somewhere there's a really nice explanation of what wall clock time really means. But that's the metric that shows up, if that's something you're curious about and want some more documentation on as well. But basically, by changing the Ruby, it drops from an 18% average to a 3% average. Just remember this is an average of a metric that's based on a percentage, so we have to keep our expectations in check regarding this one. But this is an easier metric, web response time. This one is kind of very easy to digest. And if you can see on top, the dark brown one, it's the time doing garbage collection. And you can see that on top you pretty much spend close to 40 milliseconds per transaction doing garbage collection. That's 40 milliseconds of your web response time. And if you change your application from 2.0.0 to 2.1.x, that drops to well below 20 milliseconds. So you're gaining there close to 25, 30 milliseconds. And in this case, you can see that the web response time goes from 156 to 133 milliseconds. So you're totally improving your web response time. So, 2.1.7 before and after. Here you can also see, on the left side is before, on the right side is after. And the average of runs goes down from 40 to 24. This data is from my sample application that I ran to get these charts to put in this presentation. What I really got in my real application when I did all this: we went down from 40-something to 12, 13, 14. So some improvements are a little bit better than what you see here. Again, the same time spent on garbage collection: on top, 2.1.7 before; at the bottom, 2.1.7 after. And you see that the time spent on garbage collection drops from 3.28 to 2.38. This is a small drop, it's a percentage metric, but it's a drop of close to 30%. And in this case, the web response time drops from 133 to 129. That's real gain in your application. Next it will be 2.2.3 before and after. It's very similar to what we see on the 2.1.7. As I was showing when explaining the algorithm for Ruby 2.2.x, that's where the incremental garbage collector got introduced. But that wasn't about improving throughput or web response time, it was all about shortening pause times.
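If you want to reproduce a "time spent in GC" number like that outside of a monitoring product, Ruby's built-in profiler reports it directly. A small sketch — the workload here is made up, and in a real app you would enable the profiler around real traffic instead:

    require "json"

    GC::Profiler.enable

    payload = { "items" => Array.new(10_000) { |i| { "id" => i } } }.to_json
    start = Time.now
    1_000.times { JSON.parse(payload) }   # stand-in for serving requests
    elapsed = Time.now - start

    gc_time = GC::Profiler.total_time     # seconds spent in GC while the profiler was enabled
    puts "total time: #{elapsed.round(2)}s"
    puts "time in GC: #{gc_time.round(2)}s (#{(gc_time / elapsed * 100).round(1)}%)"

    GC::Profiler.report                   # per-run breakdown, if you want the detail
    GC::Profiler.disable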
And it wasn't guaranteed that the incremental GC would improve your throughput and web response time. And then we're going to see that when I compare the 2.1.7 before, 2.1.7 after, and 2.2.3 after. And so, 2.2.3 before and after, very similar to 2.1.7 before and after. The time spent in GC also drops, from 2.38 to 1.4, a smaller drop in this case. Web response time during this test dropped from 137 to 130 milliseconds. And this is 2.1.7 after and 2.2.3 after. Here you see that the 2.2.x is actually still a little bit worse than the 2.1.x. That's probably just a little bit of noise in my test. You see that on the right side, the chart kind of goes up and down. So probably I was doing something else with my laptop while I was trying to test these scenarios. And what I really got in the real app in my testing environment, when I tested 2.1.7 after compared to 2.2.3 after, was consistently an improvement of about four, five, seven milliseconds of web response time. But again, the 2.2.x algorithm, the incremental GC, is not supposed to — it's not guaranteed that it's going to give you better throughput and response time. Not always, it's not guaranteed, but I kind of felt like, because my results on those were consistently a little bit better — I mean five, six milliseconds, maybe it's just noise — but when it's consistently getting the same results, test after test, run after run, I kind of felt that my application was in a sweet spot where the change for the incremental garbage collector did help me, did help the application, right? So this is 2.1.x — in this scenario, I'm just comparing 2.1.7 after and 2.2.3 after. Oh my God, so many numbers. So comparing those two, one is 2.38% and the other one is 1.4. Here the web response time barely changed. This is pretty much noise, most likely. And with that, I kind of conclude what I wanted to talk to you guys about here today. If you have any questions, I'm more than happy to answer them. Thank you. So the question was if this comes with more memory usage — yes. So you're just telling the garbage collector to allocate more memory. In my case, that was happening. I increased the heap and then I have more memory for my transactions to allocate their 9,000-something objects. And so the garbage collector runs less often because there is more memory available for it to work with. Yeah, so the question is, trading memory for time? Yeah, exactly. So since I have more memory available in the machines, that's what I was doing. I was giving the Ruby VM more memory so it has more room to operate, and so it needs to run the garbage collector less often. What tool? So the question is what tool I used to measure the garbage collector. I used New Relic. And so when we installed New Relic in our application, we started seeing all these things. So I didn't actually use New Relic to test. New Relic is how I was getting these results. But for my load tests, I just used the Apache Benchmark tool. I was just putting a little load on my application to get the garbage collector to run with the same pressure that I was getting in my production. So the question was: these charts that I have are indeed from a simple application, it's not my real application, but what did I get on my real application?
So for one of the applications in the staging environment, I got about a 10% web response time improvement. And in production, those numbers are a little smaller. In production, I have way more caching than in the staging environment. That was one of the APIs. Among the other APIs, there was one where I couldn't see any change in production. I saw some change in the staging environment — five, six to 10% improvement in the staging environment; in production it was the same. And after the end of all this, pretty much all APIs are using Ruby 2.2.x with these configuration parameters. There was actually one case where I used pretty much the same parameters for more than one application, because they have the same range of object allocation. It was around 9,000. But there was another application that I had to change because it has a memory footprint a little bit different from the others. So we used a different set of parameters. Okay, so the question was, what were the before and after values? The after values are those that I put in here. The before is the default values. Oh, yeah, okay, so I didn't actually put the numbers in here. I can send you some of those. Yeah, yeah, I can send you some of the samples that I used for these ones. And the before was what is in the default version of Ruby. In the source code, in some of these references here, you can find the defaults. I can point that out to you. Maybe you can make a slide with the overall values? Yeah, I can do that. Yeah, I can do that. I can definitely do that. Put the default values and the ones I used for these. That's a pretty good question. So the question was, how did I change these parameters — did I do this systematically, or how did I decide what to change? At the beginning, I was... So at the beginning, I was learning and testing and changing all of them. After reading about some of these parameters — some of them go along together, so if you change one, you have to change the other. But because the 2.1.x has 11 parameters and not all of them are very easy for you to digest and change and to monitor the change, I didn't really use... I kind of got... After doing a lot of tests, I kind of got an idea of the ones I wanted to change. But the interesting point about the question is that actually, after I went through all this, I found some documentation about how not everywhere is it affordable for you to test. So if you're a physicist working with hadrons at a supercollider, you can't test a lot. You have to be very careful with the tests that you're doing, which parameters you're going to use. And I was actually really curious and tried to investigate some of those algorithms and see, out of all those parameters, if there is a better, sweeter combination that's going to fit better for my application, but I never really got time to do that. So the answer is, at the end, I really used the parameters that I felt helped during my tests. See, I have a 90,000-something object allocation. So if I give memory to the Ruby machine to allocate the memory, if I give enough memory to the Ruby VM to allocate those objects without having to run the GC all the time, then at least for my application, it's going to take care of the GC. It's not going to be necessary to run that often. And when I got to that point, I was satisfied with the values that I was getting for those parameters. I think you're absolutely correct. I got to... I presented this talk at a conference in Europe. I met a guy there who was a physicist and a developer. Oh, my gosh. Oh, my gosh.
So he actually told me about the Gushitables, you know, or something. I don't know what I was talking about. But he was getting at that. He's the one that mentioned these examples where not in all industries can you test — it's not affordable to test as many options. You have to be... You have to study very well which parameters you're going to change, and then you run your test, because running the test is really costly. For me, here, running these test changes to these parameters is not costly. I can afford doing that. But I'm pretty sure there is a better combination of parameters that would make it better. So the question was if I would suggest changing the default values for these parameters. And the answer is no, we shouldn't do that, because like you were saying, these parameters are based on what your memory allocation is. So if your application allocates 10,000 objects per transaction, the default value is all good for you. Maybe you're never... You're actually never going to be at a point where you need these parameters to be tuned. So like you said in one of the other presentations, maybe 99% of the Rails apps in production do not need to change these. But maybe you're in the 1%, and I actually was — one of our apps was, actually a couple were. All right, guys. Thank you so much. All right, guys. See you guys at the next video. Bye.
|
Whether you are building a Robot, controlling a Radar, or creating a Web App, the Ruby Garbage Collector (GC) can help you. The stats exposed by the Garbage Collector since Ruby v2.1 caught my attention and pushed me to dig deeper. Both Ruby 2.1 and 2.2 brought great performance improvements. From a practical point of view, we will discuss how to use the GC to enhance the performance of your software, from configuration parameters to different approaches on how you can change them yourself.
|
10.5446/31573 (DOI)
|
All right. My time has started, so I guess we'll slowly work our way into the talk. Did everyone enjoy lunch? Anyone go out, grab some barbecue? I think you had barbecue here. Anyone go external? No? Anyone grab a coffee? Like this? Coffee? You guys are not giving me anything. Anyone grab a coffee? No? This guy grabbed a coffee. Nice. So many lifetimes ago, I worked as a barista at a popular coffee shop. And it was a mid-sized neighborhood store. There was a mix of traffic coming from the surrounding homes and businesses. And we were actually right across the street from one of the big game development studios. And business would just like skyrocket anytime a new Call of Duty game was supposed to launch. And it was just a nice mix of customers. And most of them were great. But there was one. I mean, there's always one. And her name was Suzie. And this name has been changed to protect her, even though she's totally guilty. Suzie would come in three to four times a week, and she always had the exact same order. A double espresso, please. And I'd make small talk as we finished the transaction. Oh, how's it going, Suzie? Oh, you know, not so well. And she'd launch into some tale of woe, things that just weren't going well. Now, mind you, she was in three to four times a week. And it was always the same story. I swear she never had a good day in her entire life. It was just always the same deal. But then, right as I handed her the change, she'd perk up a bit. And she'd say, but you know, if you made that a triple latte instead of a double espresso, my day would be a whole lot better. Every single time. And the problem was that a double espresso cost about this much. And a triple latte with its extra shot and milk, well, it cost this much. And she would wait until the exact moment that that register closed, signaling that the transaction had officially ended, before she tried to sneak something else in there. And that brings us to the topic, safety and security in coffee shops. Okay. Maybe not. It's not quite right. It's probably something more for a Java conference. Giving an active record talk, have to do a pun in honor of Tenderlove. So actually, we are a room full of mostly web developers. And I'll just say I looked all over the internet to find the least creepy spider I could find, because I was not going to be like surrounded on all sides by creepiness. So that's like a very friendly spider. So like this very friendly spider, we're very friendly web developers. And maybe we want to talk about security in web apps instead. And at the risk of stretching that coffee shop metaphor just a bit too far, I'd like to argue that our web apps have a vulnerability that's actually very similar to the one that Suzie was exploiting. And that's SQL injection. Now, I'll go into more detail in a bit, but in brief, SQL injection is when someone closes out a legitimate transaction with your database, and then immediately tries to sneak a little bit more SQL in there so that they can interact with your database and walk off with something a whole lot more valuable than coffee. And I know what some of you are thinking. We are web developers. There is nothing more valuable than coffee. But in this case, there is. I'd argue that your customer's private data is actually slightly more valuable than coffee.
So when we talk about security vulnerabilities, I think it's more common to think that we're talking about something new, a vulnerability that was maybe just discovered in the past few weeks, something that's like going to be on the nightly news, something like Heartbleed, where it's announced publicly the very same day that a patch is released, because it's just that bad. But SQL injection is old enough to vote, at least in the U.S. I haven't checked everyone else's laws, don't know the ages, but in the United States, SQL injection could vote. It's 18 years old. It was first mentioned by Jeff Forristal in an online hacker magazine called Phrack. And you can tell that's 90s graphics right there. That's actually still the logo on the website, did not update. So Jeff Forristal, who went by the hacker name of Rainforest Puppy, and I'd like to pause here for a moment, because when you start reading about SQL injection and security and all of those things, you come across a lot of hacker names. And I realized I haven't got a hacker name. And I did what any web developer would do. I went to Google and I Googled hacker name generators. And I started trying out a few different options, including one dubious generator that insisted it needed to know my mother's maiden name and where I was born in order to generate my hacker name. And I thought, nice try, and moved on to the next hacker name generator. And after probably a bit too long on this, I was finally the proud owner of this doozy. Fireacid, what? And you know that's a legitimate hacker name because that is a four. That's not an A. So back to my fellow hacker, Rainforest Puppy. He had discovered that SQL databases, which were just at the time starting to replace the more popular Access database, they allowed batch commands. And so what that meant was that you opened up one interaction with the database and you could do one command. But with batch commands, you open up that connection to the database a single time and you can just send more and more and more requests in there. And that's not a big deal because multitasking is a good thing, right? I mean, let's say I want to take a look at all of my employees. This is a fine looking group of employees. But say that I want to select just the ones that are developers. So just these two. And then I want to sum up or find the average of their chips consumed. And because I like to be inclusive, we could also average their crisps consumed if you speak one of those other Englishes, whatever. But whether it's chips consumed or crisps consumed, it's pretty high and we'll leave it at that. And because you can put more than one command in, you can just open up that database connection once and run those extra queries. And in a closed system, that is all well and good because you're not going to attack your own database. But when you have input coming in from outside, when you have outside user data heading straight into your database, things can get a tiny bit dicey. So let's say a shady character, maybe like this guy, wants to use your innocent little web form to gain access to parts of your database that you don't want them to have. And they can do that by piggybacking their own SQL command onto your intended request, sort of like this. You have a SQL command and you give them the chance to... You provide them the ability to search for a restaurant where they can do sort of a match with the name. And so you select star from restaurants where name is like, and then you add in the user input.
And your user, Shady McShadester, searches for the following restaurant, which honestly does not sound like they're going to have the best quality food to me, but you know, whatever, this is what they search for. And this is the query you actually end up with. So you're selecting star from your restaurant table where name is like, and that actually closes out that name, that select statement, and then starts delete from restaurants. That double dash there is a SQL comment, which means everything after that is commented out. And so what you've done here is you've selected nothing from your restaurant table because they didn't give anything, it's just like an empty string, and then they delete every single record in your restaurant table. And that is SQL injection. Your entire restaurant table is gone. And I'm not talking about your table at the restaurant, it's your restaurant table in your database. It's actually still there, it's just completely empty, but you should also probably cancel this table because you're going to spend the rest of the night backing up and restoring your database. And you're all doing database backups, right? Yeah? No? So it's not in the scope of this talk, but if you're not, you should be, because stuff happens. And yeah, it's just a good idea. So let's talk OWASP, the Open Web Application Security Project. This group has, they only get together, and I think every three years they update their top 10 list of web vulnerabilities. And SQL injection, it's been out for 18 years, it's an old vulnerability, and yet it regularly ranks at the top of their 10 most critical web application security risks. And that's not really a top 10 list that you want to be at the top of. And honestly, if you look at like a recent scan of the news, you will find so many examples of companies, big and small, that have, you know, kind of seen SQL injection from the bad side. For instance, Sony Pictures. In 2011, Sony Pictures lost the data of over one million users, including their passwords, which were reportedly stored in plain text. So that really hurts. And you might think, oh, well, whatever, it's like Sony Pictures, big deal, but most people use the same password, and now you've got the password that the people use in their bank and all this stuff, and they can just go from site to site, build a profile on these people, and just like steal even more information because you did something that you should not have and stored passwords in plain text. In December of 2012, hackers used SQL injection to get the personal records of thousands of students from over 53 universities around the world, including major universities like Harvard, Stanford, Princeton. And the reason they like students, like universities are actually a pretty big attack vector, which I didn't realize they were such a target, and that's because students tend to have a wonderful combination of really clean credit records, and they don't pay attention to their credit score. So you just have like quite a few years of being able to like totally trash their credit histories before their student loans do it for them. And that's, I guess, the one thing that we can say is good about crushing student debt is that by the time you graduate, no one wants to steal your identity anymore. All right, we're not done yet. In August of 2014, Russian hackers stole 1.2 billion, that's with a B, username and password combinations, and 500,000 email addresses from 400,000 websites, and these weren't all just like little mom and pop sites.
Most of these were Fortune 500 sites. They're not sites made by your cousin's best friend's uncle's kid down the street who sort of knows how to develop a website. These were made by like professional developers, like the people sitting in this room. And it wasn't pulled off by some like elite gang of movie hackers. It was a group of less than a dozen men living in a small town in Russia that actually got their start doing email spam, and one new guy moved into town and was like, hey, do you know how to do SQL injection? And they became like the kings of SQL injection. And then I've got one more for you. This is actually mentioned in an earlier talk, and it's VTech, which is an electronic toy company, and they lost the data of over 5 million parents. So like email addresses, phone numbers, home addresses, because people had registered their products. They also lost the data of 200,000 children. Now, this was just first names and email addresses. So it doesn't seem as severe until you realize that the data dump included the ability to link the parents with the kids. So now you basically had, for 200,000 children, their first names, their last names, their parents' names, their home addresses. I mean, that's pretty major. Like that goes beyond just like, oh, they're going to have to change a password. Like that actually puts children at risk. But wait, there's more! In the last five years, the New York Times has suffered a SQL injection attack. Target has suffered from a SQL injection attack. Sony, yet again, suffered from SQL injection. The U.S. Army, and believe it or not, the U.S. Department of Homeland Security. All vulnerable to this 18-year-old SQL injection. So what's the deal? Is there like this elite hacker Hogwarts out there that's training up just like code ninjas and teaching them how to sneak unwanted SQL commands into databases like there's no tomorrow? No. No. SQL injection is common for two reasons. The first reason is that it's really easy to automate. There are scripts that you can go out online and you can buy, and they just bounce around the Internet looking for common patterns in websites, looking for things that seem like they might be vulnerable, and they'll like pop up a little GUI display and say, oh, hey, do you want me to start attacking this website? It seems vulnerable. Like, you don't have to do anything. In fact, there's this guy, Troy Hunt, who's a web security expert, and he runs a website called haveibeenpwned.com, and I think that's the best logo ever for a website that focuses on SQL injection. And he actually posted a video where he was teaching his three-year-old child to do SQL injection attacks using the most popular program. And this wasn't to show how brilliant his three-year-old was. The point he was making is that SQL injection is just ridiculously easy. These people do not even have to know how to use the command line. It's a GUI interface. The other reason that they do it is that it keeps working. So they use it because even though we've known about it for 18 years, and even though there are very easy ways to avoid having this happen to your website, hackers year after year get gigs and gigs and gigs worth of valuable user data. So how do we put an end to this? It's kind of hard to make it, like, you can't make it impossible for people to automate. Like, they can crawl the web, they can do what they want. So you can't make it harder to automate. You could possibly give your table names like really weird names that are hard to guess.
Most likely, though, that's just going to drive your dev team crazy and make development harder, and you're still going to suffer from injection attacks. So that's probably not the idea. And the easier route is just, you know, make sure it doesn't work. Awesome. Talk over. Okay. You guys want to know how to make sure it doesn't work. Okay. All right. So my name is Jessica and I work at a place called the Flatiron School. And the platform that I work on is a platform called Learn. And we use this platform to teach people how to code. And so we've got tons of students, tons of, like, brand new junior developers constantly on our platform learning things. And one of the things that we have as part of our teaching philosophy is that we like to have people step through and actually build basic versions of the tools they'll be using later. So it helps ensure that they kind of know what's going on when they get to, like, the bigger magical platforms. And so we make them work with SQL and kind of build their own lightweight ORMs. And after suffering through that for a while, we introduce them to active record. And everything is like magic. Instead of having to do select star from restaurants where type equals barbecue, which is simple enough but, like, still kind of, like, not super intuitive, you can use, hey, I want the restaurant where the type is barbecue. And just, you know, active record does it for you. And it really seems like all this magic makes sure that the only thing that you have to worry about is whether or not the rule is that your model name is supposed to be plural or singular and wait, is it restaurant.where or restaurants.where, and that gets me every time. But it's like, if that's the worst thing you have to worry about thanks to active record, like, game over, SQL injection is gone. But it's actually a bit more complicated than that. So we're at Rails conference and you guys came to a talk called Will It Inject?, a look at SQL injection in active record. And congratulations, because we finally made it to the title screen. Woo! Yeah. Only 70 slides in. And given this title, I think it's safe to assume that what you'd like to hear about is SQL injection and active record. And I kind of think that having someone just, like, yak at you about SQL injection and security, especially right after lunch when you're feeling a bit sleepy and I'm talking to you, Dad, it can be kind of boring. And so instead, we're going to play a little game that I like to call, will it inject? All right. So the rules are simple. I will show you an active record query. You guys are going to tell me whether or not you think it's vulnerable to injection and you can just sort of shout it out. If you guys are watching this later from home, you should feel free to play along and shout at your computer screen. I'm not going to judge. I'm not even going to be there. So here is our first one. And this guy, find, he's sort of the heavyweight of active record. And if any of you use active record, if you've built even a tiny little app, you've probably used this quite a bit. So let's say you wanted to find the barbecue joint that has the record of one. So the record ID in the table of one. Do you guys think that if you left that open to user input, that that's going to be vulnerable? Do you think will it inject, anyone? Who thinks it's going to inject? Just raise your hand. You guys think it's safe? All right. What's that? Let's use edge active record. No. How about whichever one didn't just come out?
Because that's probably the one I'm using. It won't inject. It is safe. So that one actually only works for an integer. It matches. It's like looking for an integer value. If you shove something in there that's not an integer value, it's kind of going to blow up, cause an error. Now, one of the things that someone could do is start messing around with those numbers. And if you're not verifying that they should have access to that particular record, they can use this to see records that they shouldn't be able to see. But that is outside of the scope of this talk, so you'll have to find someone else that can tell you how to avoid that. All right. Find by. It's similar to find, but you pass in both the attribute you're searching for, as well as the value that you want that attribute to have. And as you can see here, it will even let you look for more than one attribute at a time. So for instance, say your user is looking for a barbecue that has the type of burnt ends and a dad's rating of five. And if you know my dad, you know that this could only possibly mean one place. It's Arthur Bryant's. But, yes. You guys, if you haven't tried it already, like before you leave Kansas City, stop, get some burnt ends at Arthur Bryant's. It's great. But whether you know my dad or not, do you know, will this inject? Who thinks it's going to inject? Got a couple hands, couple hands, all right. Who thinks it's safe? Oh, snap. Yep, it's safe. It will not inject. When you pass the attributes in as a hash, ActiveRecord actually escapes any of the special characters and treats the entire thing like a string. So you can have your Shady McShadester users pass in all the raw SQL that they want, and it will not inject. Okay. What about this guy? If you wanted to write a query that searched for records based on a SQL fragment. So in this case, and I mean, honestly, you wouldn't need to use this particular SQL fragment because you could obviously just do BBQ.find_by, you know, name, and then the input. But say you had a complicated query and you had to drop down to raw SQL. So BBQ.where and then name equals and the user input. So you could say BBQ.where name equals Oklahoma Joe's, for instance. And you would find another great barbecue joint, which became even greater when they dropped the Oklahoma and became Joe's Kansas City barbecue. Everything tastes better. But in any case, a great place for barbecue. But how does it fare against SQL injection? Who thinks that this one is going to inject? Oh, yeah. A lot of hands up. Anyone that's like, nah, we got this. This is safe. No? All right. You guys are right. That one does not fare so well because, similar to that earlier conversation when we were just looking at SQL, your shady user could say that they want to find a restaurant called single quote, delete from BBQ dash dash, which, again, doesn't sound like a tasty restaurant. But the end result is that your entire barbecue table is lost. And that's really sad. So how do we protect ourselves against this? Oh, look at this. Anytime you're using raw SQL, when you get to the point in the query where you are going to put the user-supplied data, you can replace it with a question mark. And then you just like, comma, and then put the input that you're going to put in after that. And then ActiveRecord kicks off this thing that we like to call, you know, baby-proofing the query. And the first thing it does is that it sanitizes the user input — the rough sketch just below shows both shapes of that query side by side.
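Since the slides themselves aren't captured in the transcript, here is a hedged sketch of the two shapes being described; the model and column names are stand-ins, not taken from the talk:

    # Vulnerable: the user-supplied string is interpolated straight into the SQL
    # fragment, so a value like "'; DELETE FROM bbqs --" becomes part of the statement.
    Bbq.where("name = '#{params[:name]}'")

    # Safe: the ? placeholder hands the value to ActiveRecord separately, and it is
    # sanitized before it ever reaches the database.
    Bbq.where("name = ?", params[:name])

    # Also safe: hash conditions, which ActiveRecord quotes for you (this is the
    # same behavior as the find_by hash example from a moment ago).
    Bbq.where(name: params[:name])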
And that means that it escapes all the special characters so that the entire thing will be treated just like a string. It can't be executable. And all that magic happens if you're interested in reading any source code in sanitization.rb with a little help from quoting.rb. Me, I just like that it works. And no matter what nefarious thing people pass into my app, like, if I've done it this way, it's sanitized and it's nothing more than a string. And it can't do anything. And the other way that ActiveRecord is protecting you there when you're using the parameterized queries, and that's what they call it when it's a question mark, that's parameterized queries. It's that the SQL statement actually gets sent to the database with the placeholders, and the database then parses the statement and it comes up with a query plan. And it caches that query plan. And sends the statement and a token back to your app. And when the actual values finally do come through, if the statement that is trying to be executed differs from that query plan, so if somehow raw SQL did get in there and someone was trying to change the query that was being, had initially been asked for, it can tell that it doesn't match with that token that was sent through like that initial plan. And it's like, no, we're not doing that. You cannot change the type of query that was already planned once you use the parameterized queries. So one thing you should know, not every database type supports this. The ActiveRecord database adapter actually determines how it'll handle things when you use parameterized queries. So just if you are relying on this, make sure that you know how it works for the particular database you're using. But most of the major SQL ones like Postgres and stuff, they do allow that. All right, moving right along back to the game, we're going to step it up a notch. Will this filter by statement inject? Who thinks it'll inject? Yeah? Who thinks it won't? Who here recognize that this is Python using SQL Alchemy? I'm just trying to make sure you guys are awake. No, that's not even ActiveRecord. Who even knows what all that stuff is? Very confusing. But in case you're wondering, it kind of does the same thing as a find by and it turns everything into a string. So even that weird statement will not inject. Okay. What if you wanted to search for all the barbecue joints that are not expensive, but to make it easier to determine which ones are delicious, you also want a group by dad's rating. Is this vulnerable to SQL injection? Okay, got some people. Yeah, over there. I'm just going to tell you guys, someone from Rails Core just raised their hands, so maybe you don't want to listen like, oh, yeah, it's vulnerable. You guys are right. That group method allows for SQL to be passed in. So if you're putting the user data directly into the query with that group by, you are going to be vulnerable. All right. I think we have time for one more. So let's take a look at having. And this one actually almost always ends up at the end of a chain of like a lot of other queries. So in this case, you're looking for a barbecue restaurant that isn't expensive and you want to group it by location and you want to give the user an option of saying that they want a certain level of dad's rating. So maybe some of them, you know, they're like, I'm not too picky. A dad's rating of two or more. But in this case, this user wanted a dad's rating greater than four. So will that inject? Yeah, anybody? All right. Who thinks it's safe? Oh, dad. No. 
It injects. It's actually at greater risk than a lot of the other methods because it usually ends up at the end of the method chain. So because it's at the end and nothing follows it, it's even less likely with that particular one that just having something else coming after it in the query chain will stop the injection. So you definitely want to be careful with having. It just makes it easier for people to shove their own sequel in there. And the thing is that once again, this is easily fixed with the parameterized queries. You just pop that question mark in, put the statement and active record takes care of it. And you might be looking at this and say, that's very similar to how you fixed the other one that had injection. And that's one of the great things about active record is that it does have like a lot of patterns that are the same. So if you figure out a way to avoid injection in some methods, chances are you're going to be avoiding them in all the rest of them. So it's just easy peasy. Now, some of you might be thinking, look, if people are aware of these vulnerabilities and it's been 18 years and sequel injection seems pretty major, why haven't they fixed this yet? Maybe I'm going to go back to my hotel room tonight and I'm going to work really hard on a patch. And I'm just going to get this pull request in and then there's going to be no more sequel injection in Ruby, in Rails. It's not quite that simple. So the problem is it's another example of that age old tension between freedom and security. Those vulnerabilities are there because they allow us flexibility. That coffee shop I worked at, they could easily have had a rule that said you are never allowed to give anyone anything for free ever. And if that had been the rule, we all would have followed it. Customers would have stopped asking Susie's stick of, oh, but my day would be better if you gave me a lot of free stuff. Wouldn't have worked. But there would have been a lot of magic moments that we would have lost. There were a lot of beautiful individual customer interactions where you're just like, oh, hey, the coffee is on us today. Would you like an extra shot that just wouldn't have been possible? And it's the same thing with Active Record. You don't want to lose the magic of being able to really dig down into the data and drop down to raw SQL and do a really crafty query. Because as beautiful as Active Record is for probably 90%, if not more of anything that you'd need to query it for, there's still the fact that SQL was made to talk to databases. And sometimes if you want to do some crazy complex query, you could either spend weeks figuring out how to do it in Active Record or you could do it in SQL. And that's going to be the only straightforward way of getting it. And if Active Record didn't allow this flexibility, it would start to be less and less useful as our apps got more and more and more complex. And we'd probably spend a lot of time starting to write our own methods to talk to databases and we'd write them from scratch. And if the history of programming is any indication, when we start writing our own things from scratch and it's not the core of what we're trying to do, chances are we're going to actually end up being more at risk than we would have if we had just figured out the Active Record stuff. And then our apps would be in the news but for all the wrong reasons. And we should apparently leave that to Sony because that's Sony's thing. They're the ones that are in the news for SQL injection. 
Don't be Sony. And look, I get it. I know that it's a bit tougher than ye good old days where security was literally like a heavy wooden door with a big guy and a metal slit. And you didn't have to worry about automated scripts being run by precocious little three-year-olds that were looking for the slightest vulnerabilities in your code. And it was just a face-to-face interaction. And you probably got one chance to get it right. So you couldn't even sit there spamming a password like, is it 123456? No. Is it password? Like, you had one chance. And like, you either got it or you got chased away by the big guy. But the reality is that things aren't actually all that much more difficult now. With the built-in security in active record and just a tiny bit more knowledge of what's going on behind the scenes with those helper methods, we can all write code that keeps our customers' data and our app's reputation safe and secure. So I want to give a quick shout out to my colleague, Eric Raffeloff. His Thursday code reading at work was actually the inspiration for this talk. He has since left us to go work in web security, so we're all safer for that, though we do miss him at work. And I want to thank my husband, Josh, for hand drawing anything I asked him for and double-checking my SQL. Coder-animators are the best. If you guys would like to learn more, an amazing resource is rails-sqli.org. There are places that you can, like, test it, see how it works. He goes through all of them in detail. Like, it's a beautiful resource. There's also guides.rubyonrails.org/security.html. That's, like, actually the security guide from Ruby on Rails, and it has a lot of good information you should check out. It covers a lot more than just SQL injection. There's the OWASP Ruby on Rails cheat sheet, and you can also, this is crazy, just look at your own code. Go to your own web forms, try to inject some SQL in there, see if it works. Don't do it in production. Do it in local. But if it will work in local, it will work in production, and you should probably shore that up. So, as I mentioned, I'm Jessica Rudder. I make codes about, or I make videos, rather, about coding at youtube.com. And if you like code and videos, I'd love to have you check them out and let me know what you think. I would also love to continue this conversation on Twitter. And if any of you are, like, new to programming and you want to talk about learning to code or sort of learning Rails, I borrowed the company credit card and we're going to be doing a dinner tonight for beginners. So, if you're just starting out and you want to, like, join us for that, just hit me up after the talk. And I'll give you the details. So, that's the talk. Are there any questions? And if there are mean questions, I'm just going to mad dog you. So, pre-warning, pre-warning. No questions. Good. I answered everything. Yeah. Thank you.
|
If you've struggled through writing complex queries in raw SQL, ActiveRecord methods are a helpful breath of fresh air. If you're not careful though, those methods could potentially leave your site open to a nasty SQL Injection attack. We'll take a look at the most common ActiveRecord methods (and some of the lesser known ones!) with one question in mind....will it inject? If it's vulnerable to a SQL injection attack, we'll cover how to structure your query to keep your data secure.
|
10.5446/31576 (DOI)
|
Okay, so let's go ahead and get this thing started. My name is Michael Kelly. I'm a senior Rails dev, been doing this for, doing Rails for about five years. I've been a dev for on and off maybe 15 total. And as you can tell from the slide in your program, we're going to talk about Rails controllers today. So before we get started, I know this is in the junior track. Let me see a show of hands. Who here is actually a junior developer? Somebody newer to the technologies. Okay, good, good. How many people here are senior developers who are here to judge me on my presentation? Oh, good, okay. So I only have to impress a few of you. All right, good deal, good deal. So I'm going to start off, like I said, we're talking about controllers. You've all seen them. You've all dealt with them in some way or the other. And a lot of times what you see, especially for new developers, coming out of boot camp or maybe some, you know, a run on tutorials online and the slides went away. So that's me. I apologize for that. Okay, so we've all seen them. We've seen index actions, we've seen show actions. You've all seen the standard crud. But what I'm kind of here to talk about is what actually happens out in the wild. Controllers, you're going to see when you step onto a new job or, you know, pick up a new product, something like that. And actually the first slide I have is one such controller. No, you're not supposed to read that. We're not going to go through that line by line, nothing like that. That's atrocious. That's over 350 lines of a single controller with over 18 different actions and 100 lines of just boilerplate code. That's wrong. If anybody's wondering, that's wrong. What I want to talk about a little bit is how we get this nastiness out of what is essentially our standard controller, the one we've all seen. And it actually happens a lot easier than you think it does. So what we have here is the standard. You have your index show, new create, all the standard actions. But a lot of the times what you'll hear people say is that they blame controller bloat on things like rapidly changing requirements, uncontrolled feature growth, maybe changes in your team, or even in the worst case, where factors elsewhere in the app make you add things to your controller. That's all wrong. Every bit of it. Every time you read something like that, it's incorrect. Because that's not what happens. These are all controllable things that exist in every business, every software product, these things happen. There's a way to maintain a good controller. So what actually causes this? How do we end up with 300 lines of garbage? Well, main thing is is it's a misunderstanding about what your actions are actually doing, what resources they're actually working on. So I want to walk us through a small example. It's a little bit closer to real world, but I've glossed over a few things for time here. So we're going to take, for example, and this is actually something I've done a few times. Let's say you're working somewhere and your company is in charge of running ads for its clients on Facebook, Twitter, any of these social platforms. So the fairly standard requirements that you'll see, you need a way to kind of browse, edit, and deal with the ads, create them, edit them, things of that nature. We also have to have a way to control those ads, pause them, activate them, basic on off style features. 
And the last, users have to be able to see the performance of their ads, see how many people are seeing them, see who clicks them, how many times they've been clicked, what the impressions are. So in this example, I'm not going to build a whole app. I'm only going to deal with one controller. Call this the ads controller. So the first thing we do, we come in, okay, we have an ad object somewhere and we want to be able to do the standard CRUD operations — or, you'll see there, I actually prefer the term BREAD because it includes the index: that's browse, read, edit, add, and delete. That's just a nomenclature thing and I'm weird that way. So you saw this before. This is just re-implemented in this context as an ads controller. Pretty straightforward. I don't even have any features in there like paging or search or anything like that. Just straightforward actions. So we've implemented this. We've pushed it to production and everything's fine. But the next thing, well, I'm sorry, so we have a total of seven actions out of the gate. Okay? Every controller has these or most do. So we're already talking about something that has seven different contexts that you have to keep mentally. But like I said before, we have to control these ads and we want to be able to do that in a nifty kind of way, maybe some JavaScript button on the front end. We click it and it makes some Ajax call to our app and pauses an ad. So we come in and this is something you'll see a lot of times. I won't say naive but eager developers will take these actions and go okay, well, they're Ajax actions. Let's just stick them in the ads controller. That's what we're dealing with, right? All right. So this is that same controller with those actions added. Don't worry if you can't read the code. It's on the screen there. I actually want you to keep, just kind of keep a mental model of how that controller is shaped and how much information is there. Okay? We're not looking at specific implementation today. But you see there, our action count has ballooned up to 10. All right? So that's now 10 different contexts and different ways this controller can be used. Now, that's great. You know, we have these three new actions that maybe kick off a background job that talks to Facebook or Twitter, something like that. Then one of your UX folks walks in, and this is a phrase I've literally heard multiple times. So we want more of a responsive UI. We want some way that our page can load and then load individual visuals for ads asynchronously. So we come back over here to our trusty ads controller. And maybe we need a pane to preview the ad so the user can see what it will look like on Facebook. Maybe there's, we want some little widget that will show them generic stats in a, like a list of their ads, something of that nature. So we come in. We add two more actions. And these, these actions, I don't know if you can see it there, but all they do is render out a partial, something for Ajax to process and insert into the page. Again, our action count is back up to 12. So the controller doesn't look so bad. Now, I'll let you know each one of those actions is as minimally implemented as possible. The rendering of partials is a single line. The background jobs are a single line. So any logic added here is only going to complicate the matter. Oh wait. Now, we have to be able to deal with our stats, our statistics about these ads. So once again we come back. Let's say we want a page that will show our, the audience, the people who have clicked our ads.
These are people who have interacted with our ad and we want to maintain a list of those. Maybe we want a dashboard to kind of see how I as a user am performing in general. So we come back and we add a few more actions. And these, these actually deal with maybe some helper objects to do some calculation. Maybe there's some aggregation or processing that goes on in this data. Here we are. Now we're up to 14. Now, I don't know if, I don't remember if I mentioned it, but that original controller I showed you only had 17 actions. So we're not that far off. And you can see how again if any kind of logic gets inserted into these actions, if you add any if else blocks, if you add any case statements, anything, anything directly in these actions, all you're going to do is turn it back into that. Okay. So we're not that far off. It's actually pretty easy and it happens without you thinking about it. You know, you're thinking about that one feature, adding a preview or adding a statistics page. Not your controller as a whole. So the answer to this, to keep yourself from getting out of control like that, is to break it up. Break your controllers up into pieces. Now I've talked to a couple of junior developers before about this topic directly. And a lot of times people will have trouble breaking the context. You know, in every one of those actions we were dealing with ads. So it makes sense mentally to stick them in the ads controller. But what I argue is we're not actually dealing with ads in all those actions. So to, to kind of help you out a little bit, these are some different ways you can think about what kinds of controllers you're going to, to build. So in that, in that example, we had actions that dealt with static or view layer data. This is processing a partial or dealing with maybe a static page. It's a landing page, things of that nature. We had actions that, excuse me, dealt with things that are composite concepts, concepts that aren't mirrored directly by like an active record model or something of that nature. And then finally there we have aggregate actions, things that collect a bunch of data together and pipe it down to the front end. So let's take that example. We're actually going to break it up along these lines. So, excuse me, looking at your static and view style actions, these are those same actions from that previous controller. You can actually read those now. That's, that's a font you can read again. These are the preview and stats actions. I've separated them out and I've stuck them in a different controller because what you're dealing with here is a view of an ad, not an ad itself, okay? That's a different resource, a different "model" that you're dealing with. You see the same thing over here with your composite controllers. Now these are things you'll see a lot of times, Devise is actually a good example of this. If any of you have used Devise for authentication or authorization or anything like that, there's all sorts of session controllers that you can override and add your own functionality. Nobody that I know of has ever built a session model in a Rails app or like, like in this example, jobs. So those, those Ajax actions that would pause and activate your ads, actually what we're doing is we're starting a job on the back end, starting something that's going to call out to Facebook, tell it to pause a specific ad.
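To make that first split concrete, here is a hedged sketch of what the view-layer controllers being described might look like; the class, route, and partial names are illustrative, not taken from the talk's slides:

    # Handles only the view-layer "resources": tiny partials rendered for Ajax.
    class AdPreviewsController < ApplicationController
      # GET /ads/:ad_id/preview -- what the ad will look like on Facebook
      def show
        @ad = Ad.find(params[:ad_id])
        render partial: "ad_previews/preview", locals: { ad: @ad }
      end
    end

    class AdStatsWidgetsController < ApplicationController
      # GET /ads/:ad_id/stats_widget -- the little generic-stats widget
      def show
        @ad = Ad.find(params[:ad_id])
        render partial: "ad_stats_widgets/widget", locals: { ad: @ad }
      end
    end

Each of these stays at a single single-purpose action, which is exactly the point being made.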
So I've broken those up as well into this ads jobs controller or ad jobs controller because that's what we're dealing with is jobs, not ads. And the same thing goes for aggregate data, our audiences, that's something that's pulled from every ad we have, the dashboard, that pulls in information about every ad we have. So it's an aggregate resource, it's not a single model. And then of course, we have our standard CRUD controller with the index show, create, add, delete, blah, blah, blah, blah. So what we've done is separate all these pieces out into different controllers. And it's not immediately obvious what the benefits to doing something like this are, but one, it's easier to debug your controllers. It's easier to navigate them mentally because if you have a problem with, say, the previews that we talked about a minute ago, maybe we need to render them differently or something about that is breaking, well, we know right where to go and there's two actions in that controller. So we can make those edits without having to build the entire mental context of that ads controller that we talked about earlier. The other is, and you'll see this a lot when you step out into the real world, learning and onboarding. This is one of the toughest things the developer does and that's step into a new application. So when you're learning a new app, do you think it's easier to take 350 lines and dig out the bit, the maybe two lines you need to deal with or two lines you need to learn how a specific action works? It's a lot harder when you have to dig through that big giant mess we saw a minute ago. The other is you localize your changes and this is something that a lot of people talk about, encapsulation, separation of concerns and whatnot, but ultimately, like I said before, if you're dealing with, say, stats, you need to calculate something differently, your changes are going to occur in that controller, not in one big giant mess. So if you go and you add, say, a before action, you now want to authenticate this call or do some object setup before the action comes in. If you've got 20 different actions in that controller, you now have to explicitly control which actions that filter is called on. Do a before action except all these others or only these two. So now when you make a change, it only affects the pieces of functionality that you want it to. And the last here, this is one that I haven't seen talked about quite a lot and that's that it's actually easier to coordinate working on a larger team. If you've got multiple people working on this code base, maybe you're working on improving statistics, adding more information and data there and you over here, you're working on making the previews look slicker, look cooler. Maybe you over there, you're working on, you know, improving, maybe there's some business logic we need to add to creating an ad, some kind of check we need to include. We can do that now. You're not all making changes to a single file that then conflicts when you go to merge it in. So I kind of blasted through that a little bit faster than I expected, but the main things I want you to take away from this talk are that you need to look at actually what your actions are dealing with. They're not all dealing with an ad resource or a product resource or a user resource. These are different conceptual ideas that it's dealing with. And your controllers, I want to see a lot of controllers. Okay. I don't want to see big giant ones. I don't want to see 300 lines with 20 different actions. 
In fact, I actually challenge you that controller we saw in the beginning, the standard CRUD controller. Make that the most complicated controller you have. The one you see in the tutorials, the one you see online in the perfect examples. Make that the most complicated controller you have. It's challenging. It can be. But ultimately, that was the most complicated one I showed you outside of the trashy controller. So the main point here, like I said, and you'll see I actually had a couple of people ask me how this relates to restful design, which is a fun buzzword in our world. And I make the argument that this is restful. To create a rest resource, an object that can be created, that can be destroyed, that can be changed, you have to actually define what that object is. And it's not, I promise you it's not the same as all your other actions. Dig in and take a look at that. So I've gone way under time here. But I think I overcaffeinated. But does anybody have any questions? I would love to go back through and talk about it in more detail. The question was if I have any examples of where it's permissible or okay to add some actions on top of those standard CRUD actions. And in some cases it can be. I mean, as developers we kind of play a balancing act between the right way and the achievable way. If you have a feature going out or it needs to be in production in 20 minutes or a fix that needs to be in production very quickly, it can be very hard to do a refactor into three different controllers and test all of those controllers and push them up, get them reviewed, the whole nine yards. And so what you'll see is a lot of times earlier on in a project it's very convenient to add small actions like that, like possibly the Ajax actions or maybe the Vue actions. And it's all right early on. You know, if you have a total of maybe three controllers and you want to attack some small functionality on there, that's fine. Get it in there and get it pushed out. Not a big deal. The main thing though is understanding when you introduce a new concept. So for instance, if I, the previews that I mentioned, if that was a very small feature early on, yeah, we'll stick that in the ads control. Not a big deal. But if this is a concept that you are approaching in the app a lot, you're going to come back to it with previews and maybe some stats widget or, you know, further widgets along those lines. If this is something that's going to keep happening, separate it out. You know, don't just tack it on there extra because it will grow uncontrollably. Actually build like an API namespace. And that's, I'm sorry, the question was, so with the, like the ads previews where you're rendering out partials, after you've created a few of those, it starts to appear like a kind of a private API. You know, your front end app or your front end page is making these private calls to get these partials. And at what point do you look to refactor that to say like a slash API namespace? And again, just like everything else, it's a lot of judgment calls. But it tends to, a decision like that tends to matter more when you foresee, it's a little bit about looking to the future. So if, you know, you're writing your first couple that render a partial and then, you know, a few more are coming along, maybe you're even dealing with some JSON elements, you know, serializing your objects in a certain way. 
Really it comes down to when an abstraction like that, excuse me, when the elements you're adding justify the work necessary for the abstraction. And you'll see a lot of times, it's an interesting balancing act because, like, I'll jump into an app like that and, you know, I've done that so many times creating, you know, this internal API that I can crank that out in a few minutes. So the level of difficulty is different based on the developer who's picking it up. If you've ever dealt with agile development, you've estimated that way. So it kind of depends on the velocity moving forward, you know, where you're headed. And to kind of further your question a little bit, at what point does that stop being an API embedded in your app and a dedicated API launched at a different level. You know, that's, those levels of abstraction in the app itself really kind of depend on where you see the app going from a design perspective, from an architecture perspective. And that's something that I, to this day, still get into arguments with my own team about, you know, I'm like, hey, let's go ahead and abstract this thing up there. And so I'm like, oh, it's not worth it. It's not worth it right now. And, you know, you make the debate back and forth. And ultimately, you know, when you can go to your team and say, hey, I'm going to create this API. And they go, hey, okay, you know, when you all kind of come to that same decision, it's time. It does. It does. And I did, did everybody hear that question? So his question was, in a lot of the designs that he's dealing with are the designers, you know, don't follow REST. Well, they, you know, they view the design of an app and how users are going to interact with it. And that's actually the crux of my point here is that designers don't deal with controllers. Designers will never deal with a controller. The controller is your side of that. So in that situation, I would make the effort to maintain the separation, you know, a customer's controller, a, I'm sorry, a questions controller, a student's controller, and then leverage your front end to combine that data if necessary, whether that's through Ajax or something of that nature. Now, if it's extremely heavy, so you have lots and lots of places that these two objects are rendered or serialized together, then you start to think, start to look at it more like a composite resource, kind of like I mentioned earlier. So if you're always rendering those together, then create a controller that encapsulates that concept. You know, this is a, I'm not sure what your context is, but call this a student questions controller, and it renders them out as a grouping of that data. That, the decision to go one way or the other on that tends to go, tends to matter more how much you're doing it, how closely tied they are consistently throughout the app. But I tend to, designers don't like me because I tend to force my code into a good design, and you know, it'll add a little bit of work to kind of converge it on the front end. But in my mind, the benefits of ease of development, and localized changes really outweigh that. Because a lot of times, if those two things are rendered together at the same time, a lot of the changes you want to, say the logic associated to creating a student or viewing them, will apply to the student itself, regardless of if it's rendered together or separately, in which case you want that separated into its own controller. 
So a lot of times, it can be hard, especially in a meeting room, but your code is your code. The designer's design is their design, and there has to be a line of separation there. What I'm going to implement will achieve that design, but I'm going to do it in my way, so that my code is still understandable, still maintainable. Otherwise, you're going to get into fights with them later on when you can't change something to the way they want it, because you've kind of locked into this marriage of the two resources. Favorite Zen philosopher? Well, considering I stole the title from Robert Pirsig and his Zen and the Art of Motorcycle Maintenance, I'd have to say him. It's a lot of the same concept, though. It's about looking at what's in front of you and kind of seeing it from both the functional and the aesthetic perspective. That's one of the reasons why I said, don't try and read the code, just look at the file as a whole. You'd be surprised how much an eye for aesthetics in your code will actually do for it functionally, and vice versa, as long as you apply a more whole mindset to it. All right. Well, I'm now out of time. I managed to kill the rest of that. Thank you guys very much. I really appreciate it. Thank you.
|
So you’re fresh out of boot camp or just off a month long binge on RoR tutorials/examples and you’re feeling pretty good about MVC and how controllers fit into the whole framework. But projects in the wild are often far more complicated than you’ve been exposed to. In this talk, we’re going to discuss several techniques used by seasoned engineers to build and refactor controllers for features you’ll actually be working on.
|
10.5446/31585 (DOI)
|
Hi there. So we do this little thing every year and we've been doing it for a little while. Indulge me for a little bit while we do this. It's called Ruby Heroes and the purpose of it is to recognize some of the lesser known people or the people that have done such great important work in the Ruby community that we feel like we need to do something a little special for them. We run this on a website called RubyHeroes.com every year. So at the beginning of the year or sometimes a little late, sadly, we tell people to go to that website and we say, hey, so this year, who has impacted your life as a Rubyist as a Rails developer? Who has made it easier? Who has produced stuff that helped you do your work or has been extremely kind to you or has made the community as a whole better? So as I said, we started doing this in 2008. So it's been nine years now. And we've had a lot of really great people over the years. People that are more well known. People that are less well known over the years. So you might recognize some familiar faces that have gone on to do great things. And this year, we've had 454 nominations from the community at large. We've had 133 nominees from these nominations, so people that were thanked by other people. And we have a committee of 20 past heroes among the previous heroes that went through and basically tried to do a step which removes this aspect of popularity contest. So it's not just about how cool and popular people are. We try to find people who are doing great work and that are not necessarily well known. And they've selected nine new heroes for this year. This is actually a record. We've had eight one year, but never that many. So we're going to try to keep it fast. But before we do that, I'd like to thank a few people who helped us put this together because they're amazing. So first, there's Ruby Central, which you know, of course, but they've helped us a ton getting this event together and helping the people that we're going to thank come here, which is sometimes very difficult. You'll see. Also, there's Code School. Morgan and I work for Code School and they allow us to take the time to prepare this to work on the app every year to organize this event and come to you to present it. And lastly, there's you. And I want to kind of like take a moment to talk about you guys. The importance here is when someone does something nice, it's very important for you to remember to say something about it, to go talk to them, especially when you're in a group like this where you can actually go see them. I know sometimes it can be scary, but if you do it or if you do it through Ruby Heroes, these people feel bolstered sometimes when stuff gets hard, so it helps them. All right. So let's get to it. So the instruction is really quick. I'm not sure everybody's here. We've tried really, really hard to bring everybody here. And if they're not, then we'll just, you know, give them a round of applause. Otherwise, they'll come up on stage and see Morgan over there who's going to help them out. And then we'll all look at them for a little bit awkwardly as they just stand there. Because that's easy. So not a lot of time, a lot of heroes. As I said, watch your feet. I mentioned that. There's stairs over there. That's not for you guys. You can watch your feet if you want, but that's more for the people like trying to fumble over here. 
So I'm going to give you a little quote that someone said specifically while the nominations were going on about each person, so that you get a sense of their impact if you're not familiar with them. First up: because of her tireless efforts in making the open source community a better place by creating the Contributor Covenant and trying incredibly hard to get people even in the most hostile environments to adopt it, please welcome Coraline Ada Ehmke. Thank you. He's a long time Ruby and Rails committer. He's the author of many widely used gems like Kaminari, Traceroute, database_rewinder and more. Please welcome Akira Matsuda. Thank you. All right. In the last four years, they have raised almost half a million dollars and recruited hundreds of coaches and mentors to get dozens of women into programming and contributing to open source. We can attest that it changes people's lives. Please welcome on stage the Rails Girls Summer of Code team, Annika Lindner, Laura Gaetano and Sarah Reagan. He has put many years and tremendous effort into continuously and incrementally improving the C Ruby implementation. He wrote YARV, pushed for flonums, implemented restricted generational garbage collection, which is a mouthful, and many, many other improvements. All of these things have been the primary drivers of increased performance and better garbage collection in Ruby. His impact is large, but his presence is not, at least in much of the world outside Japan. Please welcome Koichi Sasada. His articles are always informative and enjoyable. Furthermore, they focus on big language issues, performance, memory, delicious food. And they mention problems and solutions. He represents what is good about the Ruby community. Please welcome on stage Richard Schneeman. He likes to sit on the other side. With Ruby Tapas, he continues his long tradition of leveling up Ruby developers everywhere. In his appearances on the Ruby Rogues podcast, he always asks interesting questions and provides tidbits of wisdom on Ruby and the state of our community. With books like Confident Ruby and Exceptional Ruby, he has given guidance to many new Ruby developers on writing clean and readable Ruby code. He has inspired me — not me, but the person who said that — to not only be a better Ruby developer, but also a better software developer overall. That's why he's my Ruby hero. Please welcome Avdi Grimm. And finally, he has been leading the JRuby project for most of its existence, which has been responsible for many enterprises actually using Ruby. He's constantly working on making JRuby better and faster, and regularly going to conferences all over the world showing you how you can use Ruby and still have higher performance. Please welcome Charles Nutter. So now that they are all awkwardly looking at you and waiting impatiently to get off the stage, can you please give them a really, really big round of applause? Thank you all very much. That's it for us. And remember, this is important: keep being nice to the people who make your life a little better every day. Thank you very much.
|
The Ruby Hero Awards recognize everyday heroes of the Ruby community. The Ruby community is full of good people who help each other in diverse ways to make the community a better place. Once a year at RailsConf, we take a moment to appreciate their contributions and hopefully encourage others to make a difference.
|
10.5446/32160 (DOI)
|
|
OpenLayers 3 is a powerful mapping library that can be used to create interactive mapping applications. Although it has a simple, intuitive and well-documented API, it requires knowledge of JavaScript to use, and no tools exist to leverage its functionality for more general GIS users. This presentation introduces an open-source QGIS plugin that creates web applications based on OL3, without the need of writing code manually. Elements of the web app are defined using a simple GUI, and QGIS GUI elements are used as well to define its characteristics (for instance, for defining the styling of layers or the extent of the view). The plugin can create different types of web apps, from simple maps used to browse data layers, to rich ones with GIS-like functionality, as well as others such as narrative maps. Apart from being an interface for writing OL3 code in a graphical way, it automates data deployment, and can import data into a PostGIS database or upload layers to a GeoServer instance. Altogether, these capabilities, along with QGIS data management functionality, allow to create a web app from QGIS in a very short time, as well as modifying or improving it later.
|
10.5446/32162 (DOI)
|
All right, it's about 11:25 now, so I'll get started. Hi there, everyone. I'm Steve Lander. I'm a software engineer at RGi, and I'm going to talk to you about implementing OGC GeoPackage — OGC GeoPackage being a new standard that OGC just came out with recently, and this is for containing feature data, raster data, metadata, schemas, all kinds of stuff in a SQLite database. So a little bit more about me. I work out of Fairfax, Virginia in the United States. I'm experienced in caching and storing large raster images, so the work I've been doing for, I would say, the past three years has involved using GDAL quite often to take these large raster images, these geospatially referenced images, and then cutting them into many pieces. And there are a lot of benefits to doing that, the main benefit being that it greatly decreases the time it takes to actually render that information. So the work we've been doing for the past three years has been implementing that encoding standard. We initially started off with Python, that progressed into Java, and then from Java we also wanted to put it on Android. There have been some others as well, JavaScript, but it hasn't really gone anywhere. So our actual GitHub projects, if you want to know more: Software to Aggregate Geospatial Data, otherwise known as SWAGD, and GeoPackagePython, which was the first naive implementation we started off with. So for SWAGD, we were starting off with Java 1.8, which was kind of aggressive. If I were to do it again, I'd probably start off maybe with Java 1.7 or Java 1.6, just because not a lot of people are comfortable with going to the latest version of Java. Typically, you'll find a lot of people with 1.6 or 1.7 out in the wild, and they're kind of reluctant to go any further into new stuff. But, you know, there are a lot of benefits to going to Java 1.8 that we saw, so we just decided to dive in and see how it went. So, if you want to see some other presentations we have going on: yesterday — and I'm sure it's going to be on the website — Nathan France talked about GeoPackage and how it has changed the way governments think about standards, and also tomorrow I'll be presenting some of the open source build tools that I have incorporated into my projects in order to help me and help my team, you know, reduce our risk, reduce our errors. So, what is the GeoPackage encoding standard? We're also going to be talking about the actual implementation we've made and our approach to implementing the encoding standard, and what we learned building the reference implementation. And the lessons learned from this experience, they are numerous. So the GeoPackage encoding standard is a set of conventions for storing the following, and this is all available on the OGC website. So: vector features, as I mentioned; a tile matrix set, which is that georeferenced image cut into many pieces, and we do that at many scales — so it's like taking basically a pyramid data set and throwing it into a SQLite database. There's also a lot of schema attached to that: describing your tile set, describing what levels it exists at, how many there are, and a lot of other helpful things that you don't really get with MBTiles. MBTiles just kind of dumps a lot of tiles into a single SQLite database. GeoPackage likes to throw some rigor around that so that you have more information, more metadata. So, yeah, speaking of metadata, you know, you can describe the type of data you have more.
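To make the schema the speaker is describing concrete, here is a minimal, hypothetical sketch of opening a GeoPackage and pulling a single tile out of it. The table and column names (gpkg_contents, gpkg_tile_matrix, and the per-layer pyramid table with zoom_level / tile_column / tile_row / tile_data) come from the GeoPackage spec; everything else — the better-sqlite3 binding, the file name, the example tile address — is just an assumption for illustration, not something from the talk.

import Database from 'better-sqlite3'; // assumed Node SQLite binding, purely illustrative

const db = new Database('example.gpkg', { readonly: true });

// gpkg_contents lists every layer stored in the package and what kind of data it holds
const layer = db.prepare(
  "SELECT table_name, srs_id FROM gpkg_contents WHERE data_type = 'tiles'"
).get() as { table_name: string; srs_id: number };

// gpkg_tile_matrix describes each zoom level of that tile pyramid
const levels = db.prepare(
  'SELECT zoom_level, matrix_width, matrix_height FROM gpkg_tile_matrix WHERE table_name = ?'
).all(layer.table_name);
console.log(levels.length, 'zoom levels described for', layer.table_name);

// The layer's own table holds the actual tile blobs, addressed by zoom/column/row
const tile = db.prepare(
  'SELECT tile_data FROM "' + layer.table_name + '" ' +
  'WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?'
).get(12, 652, 1583) as { tile_data: Buffer } | undefined; // example tile address

if (tile) {
  // tile_data is a PNG or JPEG blob, ready to hand to a renderer
  console.log('got a tile of', tile.tile_data.length, 'bytes');
}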
Maybe you want to talk about a classification or, you know, the sky's the limit. And then, extensions. So the extensions, I think, are one of the more powerful features of it. Extensions will allow you to do things that the current standard, the way it's written now, will not let you do. So things like, perhaps you want to add a network data set into your GeoPackage. Perhaps you want to add time stamping to your images, to your tiles. Because tiles can update, tiles can change, and you want to figure out which ones are the freshest or the most stale, for instance. So, when we started — and this is something that Vladimir actually had too, if you saw Vladimir Agafonkin's presentation — why do we have another standard for doing this? I think it's a little bit better than the situation here suggests, because I thought MBTiles was pretty good. MBTiles was a really good way to take this idea of a cached map and throw it onto a mobile device, or any device, as a single file. And I thought there was a lot of good value there. But also, you don't have to take the entirety of GeoPackage. You don't have to put vectors in there if you don't want to. You don't have to put rasters in there if you don't want to. You can make a blank one if you really desire to. But I think there's nothing wrong with more standards — it's really going to depend on your use case. So, what kind of existing technology is already out there that perhaps could compete with GeoPackage or does something similar to what GeoPackage does? The main one you'll see on the OGC site is the shapefile. So with a shapefile you'll have your vector information, you'll have your metadata, but the shapefile is not going to do your raster tile data sets. KML would kind of fit that need a little bit better than shapefile. You would actually be able to take your tile map data sets in KML and view them, and also vector as well. But you don't get some of the nicer parts of the schema and the metadata and the extensions with KML. GeoJSON: very fast, very good for vector. And then MBTiles: you could probably throw vector information into an MBTiles, but the problem there being, as a spec, MBTiles only supports Web Mercator. Whereas with GeoPackage, you can support any EPSG code or anything specific, even your own custom WKB or WKT if you so want it. So, there are more implementers of GeoPackage than just us, and the number is actually growing. GDAL has had it for quite a while for just features, in 1.11.0. We actually use 1.11 in our software now. And raster tiles have just emerged in 2.0. We haven't looked at that — I haven't looked at that personally — but it's pretty exciting. GeoServer: I talked to some of the GeoServer people that are here. There's a community supported plugin. It's on their radar, but nothing actually official yet, as far as I know. Please correct me if I'm wrong. So, SpatiaLite: they just added it in a recent version. I checked that probably a month ago, and it was pretty recent. Esri has had feature support, I would say, and their support for GeoPackage is actually still ongoing. Raster support is coming; however, you know, the way they deal with tile imagery, tiles and tile caches is actually fundamentally different from your TMS or your WMTS. They actually have a binary-format, tile-cache type situation that doesn't make it a really quick fit. And then DigitalGlobe: they currently have the ability to push out the imagery that you see to a GeoPackage. You can actually export that.
And the asterisk is there because that was code that we actually made. That's the naive Python implementation. So, if you have a DigitalGlobe account, you can go there and, instead of getting a file folder structure full of tiles, you can actually just get a GeoPackage, a SQLite database. And if your viewer supports it, you can see that. So, our approach. What I really wanted to do was create an API for other developers and for other implementers, so that they could just pick and choose what they wanted. I knew that the intended users that we were making the software for weren't going to need everything in a single project, right? You would have certain individuals that maybe would want to deploy this on a server. They would want to perhaps have users come in, and they would have a tile back end on their server that they would be able to draw from — take those tiles or take that imagery, tile it there even, and then push it out to a user so that they could have a downloadable file. Or maybe they want to view it on a mobile and tiling just doesn't make sense. And I wanted to build a UI around that API as just a demonstration, so that you could see the types of things it would allow you to do if you use it on a desktop, if you use it on a server, or if you use it on a mobile device or whatever. And I mentioned we started off with Java 1.8, which is really great. Everybody on the team loves it, but there are some problems. There are some features that, especially for Android — and we'll talk about that later — just aren't going to work, and we had to backport some things. But that's how it goes. So the core API: we have a common library that has your typical geospatial things. You have a data store library which is going to have the classes that allow you to manipulate a store of tiles. A store of tiles could be a file folder structure of TMS-organized images, or it could be a GeoPackage, or it could be an MBTiles. Whatever you have that's storing tile information, cached map information, that library is going to allow you to interact with those. A gdal2tiles project also exists in there — the name taken from the Python script by the same name — and a lot of the code in that project closely follows what they're doing, and that's basically just taking any GDAL-supported image and tiling it for you and giving you output. The GeoPackage class is typically what people are going to want. That's going to allow you to verify a GeoPackage, to open one, to create one, to put imagery into one. And then our network extension is actually an extension, like we talked about, where you're able to add a network dataset. It lets you add, for instance, nodes and edges to a GeoPackage and interact with and query that information on your actual device, like maybe a mobile application or your laptop, using things like A* or Dijkstra routing. We just had a keynote about open software. Interoperability is definitely something that I have to worry about constantly. I wanted to have that thought about before coming in and making everything. We had a verification component there, so that when you open the GeoPackage, it's going to check literally everything. All 200 pages of that GeoPackage encoding document — it's going to go through and see if you have been adhering to that.
And then that was so popular that we actually had to make it a separate application, so that other people in the wild, other implementers, other companies could take that and say, hey, I want to make my GeoPackage interoperable, do you have anything that could help? And we're like, sure, this verification app will help you out, and it will actually give you pointers to the actual specification heading, so that you can say, okay, I'm not following the spec here — what do I do? What does this even mean? Another way we show our support for interoperability was just using the JVM. I probably could have gone Python, I could have gone Haskell, I could have gone crazy, but the reality is the JVM has allowed my development team to use Windows and Linux — I personally use Linux to do my development on — and has allowed us to push to Android. So that was very, very important, that we have something that was easily portable. And the outputs that it supports: if you've ever made a Leaflet-based map, these are the big three that you'll have support for baked in. 3857 Web Mercator, global geodetic equirectangular, and then global Mercator, which you typically don't see a whole lot of people making — global Mercator is like Web Mercator, only not as egregious in the sort of distortion that it introduces. So we made the reference UI that I mentioned. It's Swing-based, which really makes me sad because I don't like Swing at all. I really wish it was kind of an MVC, MVVM situation. But, you know, we're getting there. JavaFX just came out for 1.8, so that's something that's very exciting for the team. JMapViewer is a jar that I happened to find through some research, and JMapViewer actually supported tile map sets out of the box, so I didn't really have to write anything. The problem is it was the only one that was out there. So there are some things that it does poorly that I wish it didn't do. And if we ever moved to JavaFX, as in that last headline, JMapViewer would be going away, because it's not very flexible — 3857 only, you know, some odd behavior. This is our reference UI here that I made. Simplicity was definitely the goal: three workflows — tiling data, packaging that data, and then viewing it. What we're seeing here is the viewing. And this is an Alaska data set pulled from a GeoPackage. You're actually viewing the tile map imagery there. Very, very basic, very plain. But when you have something like that available on, like, a mobile device where you don't have to worry about connectivity, it could be, you know, quite powerful. These are some of the inputs we take when we are doing tiling. For instance, you know, full reference system support incoming, and then outgoing we limit that — you'll see the drop-down box there. You know, we limit that because of what we know, what we assume, a user will want. You know, 3857, typically, in 90% of the cases. And then 3395 or 4326, kind of, you know, tacit support right now; actual rigorous support for that will be coming a little bit later. You can also tile directly to a GeoPackage. If you just want to skip the whole creating-images step, you can do that and just tile directly to a GeoPackage.
Or, if you want to have the flexibility of having all those loose tiles somewhere on your system, you can do that as well. If you want to change what you're making — if you don't want to make a huge PNG collection, you want to just make a JPEG collection with some compression — you can do that as well. And there's the metadata here at the bottom, and where you want to save it. So then, when we actually package, it's pretty similar. You can actually change the imagery that you have as well when you're packaging. You can take a PNG image, because you wanted to save that transparency information, and then convert it to JPEG because maybe space is more important, right? If you're targeting a mobile device, the difference between a GeoPackage that is PNG and one that is JPEG can be, you know, many hundreds of megabytes. I mentioned the verifier. The verifier is JavaFX and a lot better of a UI, I think, than the Swing application you just saw, unfortunately. But, you know, redoing an application, redoing the UI for the application that you just saw, is a large task. So it tests for the following parts of the GeoPackage encoding standard, all the big ones. The one that's noticeably absent there is features, and features is just because we haven't gotten there yet. So, an idea of what it looks like: Alaska2.geopackage, which you just saw, was good. You can, you know, collapse that. You can see the other GeoPackage and what it's passed. You can also see — and I apologize, the Radiant Blue one is the only GeoPackage that I had on my system that would show a warning — but I think this is a good illustration in that you have a warning here, not necessarily a breakage. So, for instance, on this line here, it says — this is pulled directly from the spec — a GeoPackage shall contain, you know, 0x47503130, 'GP10' in ASCII, in the application id field. That really doesn't mean a whole lot to a layman, right? But all that's saying is, this one header field in the binary file should be set; it should be there, but it's not — and you can still open it. It's not fatal. So, what did we learn building this reference implementation? What have we learned about coding? What have we learned about ourselves? So, for Java, I realized really early on that, man, there are just very few choices for viewing tile data sets. I mentioned JMapViewer. That was really it. And finding that was on some arcane website that I found that had a download link, and I wasn't sure if it was malware or not — but it's not malware. So I wish, I really wish we knew more about JavaFX. I actually found out about JavaFX from the last FOSS4G in San Francisco — in Berlin — and once I found out, I was like, wow, that's amazing. I really wish I had heard about that sooner. A lot of tears could have been averted. The Python implementation that we have for tiling is still far and away the fastest way to make tiles. We researched in Java, you know, multi-threading, multi-processing: how can we speed this up? The Python implementation uses the Python multiprocessing library, and it is so fast that it will kill your system. You'll use every single core and it won't be usable — but man, you'll have tiles really quickly. We can't get that with Java right now, unfortunately. That's because of the JVM, the limitations you have with the JVM.
And you'll find this a lot — this last point — you'll find this a lot when you're dealing with database operations. Optimizing the way you interact with the database is typically what gives you your biggest gains in performance. So reduce the number of SQL calls you make, and then reduce the number of times you're going to your SQLite database, or your database in particular, because that's going to take the longest time, right? Worrying about pieces of code in certain areas where you're maybe using a list versus a hash map — that's not going to get you the type of improvements you'll get versus going over a database. What have we learned about Android? Compatibility with Android was an issue for quite a while. See, I was under the impression, like, oh, you have Java, just throw it right on the Android device and it'll just work. But that's not the case. It's still stuck on 1.6 officially. 1.7 support is there — I believe you can make a 1.7 Java project for Android and just bring it over and it should work. But we didn't want to take that chance. So the fact that we had Java 1.8 and we had to backport just a couple of libraries for Android compatibility — that really stunk. So desktop development for Android is really ill-suited. The problem is, when you're developing on a desktop for interacting with a SQLite database, the libraries and the functionality that you'll have available to you are going to differ from what's on an Android device, the Android device having its own SQLite implementation that's kind of baked into Android, whereas if you're on the desktop, you have to bring that in. That's not something that you just get with Java. And then, the big one, you know: a BufferedImage in desktop Java versus a Bitmap image in an Android Java implementation. So you kind of have to have an adapter there to help you bridge that. And they're not necessarily very compatible. And then, the different testing frameworks that exist — JUnit versus the Android one on Android. And then, there was frustratingly no globally available, most-current Android jar available on something like Maven. So we just had to throw that in as a source dependency and reference it. So regarding the spec: you know, I've been working on this for three years, and there are still questions about the spec, still different interpretations of a single word. And that's kind of frustrating. Even though they tried to be as verbose and as precise as possible, you still have differing opinions on the ways that you can do your tileset. So compromises are the key. And, as I already mentioned, the lingo can differ. So in general, think about who you want to use your code, and how, and then constrain that list. You saw that the UI actually gave you very, very few options. So, all right. Thank you for listening. Thank you, Steve. Do you have any questions for Steve? One of the problems with extensions is, as soon as you start using extensions, you reduce who you can share this data with, because extensions are not universally done. So. Sure. So the good part about extensions is, if you're making a GeoPackage implementation, you will just ask the GeoPackage: what extensions do you have? And if it has an extension that you don't have, you can still open it. You can still get the functionality that you want.
You're not losing anything. You just — you know, if you get something with an extension that you don't support, you can still look at the tiles. You can still look at the stuff; you still get all the things you get with a vanilla GeoPackage. True. If you have an extension, that's going to be implementation specific. And that's kind of a shame. But hopefully you're registering your extension with OGC, or registering it with the body, so that — and, to tie it back to the keynote, you've got to be as open as possible — that'll help. So you have to mitigate, unfortunately. Sure. Sir? What was your use case for using GeoPackage? You're only using it in your application, like you showed — just on Android? The use case was, we wanted fast base maps and fast data on a mobile device. So there were, you know — you could have MBTiles. MBTiles was great. MBTiles could work. However, more information, we felt, was better — the more rigor you throw around it. So that's why we went with GeoPackage. And Android. Hmm? And Android. Yeah. Yeah. Sir? Do you think that's the best use case for GeoPackage, or is it better as kind of a universal standard for data interchange rather than for a mobile device? That's a good question. I think that GeoPackage should exist outside of the mobile realm. I think the mobile realm is just one in which it works particularly well. There are plenty where it doesn't work very well. So for instance, storing your tile map data on the server in a GeoPackage — I would not recommend that, because you will have problems with SQLite at a certain point. Eventually you'll get to a size — I mean, the size can be terabytes for SQLite, right? — but eventually you get to a point where inserts no longer make sense, working with that GeoPackage no longer makes sense. You might as well just have a big binary blob of tiles, or a file folder structure, or something similar. Any others? Other question? I think we have one minute left. I have one question for you. Sure. Thanks again for the presentation. It was very interesting. You mentioned image compression; I wanted to know about the overhead of the GeoPackage itself. Is there any associated overhead? And have you made any comparison to MBTiles, for example? There's no difference in overhead that we've seen between using an MBTiles and a GeoPackage. It's still — if you want to be very fast, just go directly to the tiles table listed in the contents and start pulling tiles out and referencing them in TMS as you normally would. There might be some calculus you have to do to renumber your tiles, based on the way the spec's written. However, that's really it. So it's very fast. The image compression options — they're not really there to speed things up, but really to make things smaller. Okay. Thank you again.
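A footnote on that Q&A answer about renumbering tiles: the usual "calculus" is just a row flip, since the GeoPackage tile matrix (like WMTS/XYZ) counts rows from the top-left corner while TMS counts from the bottom-left. Below is a hedged sketch of that conversion, assuming the common case where the matrix height comes from gpkg_tile_matrix (for the usual global schemes it is simply 2 ** zoom).

// matrixHeight: matrix_height from gpkg_tile_matrix for the zoom level in question
function gpkgRowToTms(tileRow: number, matrixHeight: number): number {
  return matrixHeight - 1 - tileRow;
}

function tmsRowToGpkg(tmsRow: number, matrixHeight: number): number {
  return matrixHeight - 1 - tmsRow; // the flip is its own inverse
}

// e.g. at zoom 3 (matrix height 8), GeoPackage row 1 corresponds to TMS row 6
console.log(gpkgRowToTms(1, 8)); // 6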
|
GeoPackage is a new encoding standard created by the Open Geospatial Consortium as a modern alternative to formats like SDTS and Shapefile. Using SQLite, the single-file relational database can hold raster imagery, vector features and metadata. GeoPackage is an ideal data container for mobile devices such as smartphones, IoT devices, wearables, and even automobiles. We have created a few open-source tools to manipulate this exciting technology in a way that is useful to the geospatial community. Our goal with the GeoPackage specification implementations is simple: Create GeoPackages quickly and reliably while maintaining standard conformance. The single biggest issue we have faced is the speed in which large amounts of imagery can be disseminated to the end user. Data standards reliability was also a concern because we found many vendors interpreted the specification differently or to suite their own needs. Finally, the main problem GeoPackage was created was to solve was interoperability. We set out to create an implementation that would guide other parties towards making a data product that would function as well on one platform as it would on a completely different platform. Our initial implementation of the GeoPackage specification was created using Python 2.7.x. The software design was intended for command line use only in a script-friendly environment where tiling speed was paramount. The Gdal2tiles.py script was improved upon by harnessing the Python multiprocessing library so that multiple tile jobs could run simultaneously. The other piece of the workflow, creating GeoPackages, would be a separate development effort from scratch called tiles2gpkg_parallel.py. In tiles2gpkg_parallel.py, we implemented multiprocessing by writing to separate SQLite databases in parallel and then merging the tiled data sets into one compact database. This implementation worked well and increased the performance of producing these data sets; however, the command line design means that all but the most technically adept users would struggle to use the tools. With the initial Python implementation getting early-adopters a preview of GeoPackage in the short term, our team set out to make a production-quality GeoPackage API that could satisfy all user needs. Named Software to Aggregate Geospatial Data or SWAGD, we created a robust library for tiling raster data, packaging raster data stores into GeoPackages, and viewing either the raw tiles OR the finished GeoPackage products within a map viewer. Additionally, a Geopackage verification tool was created to foster community adoption. For more information, see our Github site here: https://github.com/GitHubRGI/swagd. Many open-source tools are being leveraged on the SWAGD project, including many common build and continuous integration tools including Github, TravisCI, WaffleIO, and Coverity. Using proven software development mechanisms like unit testing and code reviews we now have a consistent, reproducible, and inclusive GeoPackage implementation. We have an aggressive list of future capability that we would like to develop including ad-hoc routing on a mobile device, vector tile data sets, and even 3D support.
|
10.5446/32163 (DOI)
|
|
OpenStreetMap is at the center of a data and software revolution that has completely changed what we expect from maps and how we interact with them. The project has defined open map collaboration, it is a cradle of open software innovation, is used by businesses and governments, enables startups against industry giants and has opened the power of GIS to the underprivileged and poor. OpenStreetMap is only one of very few commercially viable global geospatial datasets. Ten years into the project, it is clear that OpenStreetMap is not an impossible quest nor a fluke of history, but it is here to stay and grow. An amazing and growing community, this year, OpenStreetMap crossed the two million users mark. Every month, 30,000 users log into the map and improve it. And OpenStreetMap stands to attract even more attention: Data of large proprietary vendors continues to be effectively not available to a huge part of the market due to rigid licensing; rumors around Nokia's HERE changing owners are at an all time high. This talk sweeps through OpenStreetMap's history and gives a detailed look at the state of the project in statistics and visualizations, including recent map developments in Asia. It reviews OpenStreetMap's strengths and weaknesses and makes predictions for the future of OpenStreetMap. We'll finish up with opportunities and needs for the project to grow as an open data community and a suite of open source software tools.
|
10.5446/32168 (DOI)
|
Hello, everyone. So this is Leaflet versus OpenLayers. I hope I don't have to explain which is which. I guess I'm just going to explain which problems we had in my company, MazeMap. You may remember me from previous FOSS4G talks, such as, and again, of Chess Processor Falcon, Geo Global Domination: the Musical, Apple Might 2.0 mapping, and Point Cloud the Musical. If you're expecting songs and dancing, you are in the wrong room, I'm afraid. So if you're expecting a balanced discussion, you won't have it, because this presentation contains biased opinion. Why? I have to give a bit of history to explain that. I started in 2005 with my own canvas-based map. That was my introduction to the geospatial world, really, with JavaScript. Last year, I worked heavily with OpenLayers 2, until the company just didn't have any money, et cetera. That was bad. January this year, I started working with MazeMap. We do indoor maps. Basically, you can route from a room to another room with pgRouting and some code, and it works. It's cool, really. So, a couple of months into the company, I consider redoing the whole software stack and have to decide between Leaflet or OpenLayers. Hey, it would be a nice idea to go to FOSS4G and explain my findings, my experience, about the two of them. Four days later, Vladimir Agafonkin invites me to be a Leaflet core developer. Nice timing, Vladimir. Seriously. There are a couple of things I will not talk about. One of them is Modest Maps. It's a JavaScript library for doing maps, but it's too minimalistic, so I don't consider it. I'm not going to speak about Tangram, which is an awesome WebGL map rendering engine, but you cannot really add raster data, which we kind of need. I'm not going to talk about Mapbox GL, because we feel that's too tied to Mapbox, and again, it's just focused on vector. I'm not going to talk about the Esri JavaScript API, because Esri, right? They do have good 3D, though. I'm not going to talk about the Google Maps API, because Google, right? But they do 3D once again. I'm not going to talk about CartoDB, because it's not a general-use mapping platform. It's just for data visualization, which is cool — check those guys out. I'm not going to talk about D3, because that's not a mapping library. It's a data visualization library. You can do maps with it, but not raster, not dynamic tiles, et cetera. If you need to do data visualization, it's a great option, though. I'm not going to talk about three.js, which is a 3D library. It's what they use for representing point clouds in WebGL, which you have seen before in some other talks. But you cannot work with raster and tiles in today's technology. I'm not going to talk about IndoorGML, even though we do indoor mapping. I haven't been able to find a good IndoorGML stack with examples and data formats, et cetera. So maybe next year. For this talk, I started by picking the two most basic examples of both Leaflet and OpenLayers, just changing the coordinates to Seoul and seeing if it works. And I started trying to explain how the minimal example works. So the minimal example generates a Leaflet map with no problem. You can do that. And the minimal example in OpenLayers just crashes my browser, with a crazy CanvasRenderingContext2D.putImageData error about a non-finite floating point value. What does that even mean, guys? So, me being a JavaScript guy and not afraid to delve down into the code, I realized where the problem lay... Yeah. So Leaflet just works.
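For reference, here is a rough reconstruction of the two minimal examples being compared — not the speaker's exact slides, just a sketch assuming the stock Leaflet and OpenLayers 3 APIs of the time, an OSM tile URL as a placeholder, and Seoul as the center.

declare const L: any, ol: any; // both libraries assumed loaded from script tags

// Leaflet: map + tile layer, center given as [lat, lng]
const leafletMap = L.map('map').setView([37.5665, 126.9780], 12);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(leafletMap);

// OpenLayers 3: map + tile layer + source + view, center given as [lng, lat] (x, y)
const olMap = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Tile({ source: new ol.source.OSM() })],
  view: new ol.View({
    center: ol.proj.fromLonLat([126.9780, 37.5665]),
    zoom: 12
  })
});

// Swapping the pair to [37.5665, 126.9780] asks for a latitude of ~127 degrees,
// which has no finite Web Mercator y value — hence the non-finite putImageData error.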
The problem lay in the coordinates. If I swapped latitude and longitude with longitude and latitude, everything worked. But if you see a newbie react to that kind of error in the web console, the newbie goes away straight away. I checked if I was doing the right thing. So if you check the Leaflet documentation, it says lat, lng everywhere. If you check the documentation for OpenLayers, it actually says that the center must be in lat-long coordinates — but you don't pass lat, long coordinates. You pass long, lat coordinates. OpenLayers, guys, fix this, please. It will make newbies' lives much easier. The Leaflet example just works. I'm sorry. So OpenLayers will be much nicer for all the GIS professionals, because the coordinate system is already EPSG:4326, and in EPSG:4326 you specify x, y, which is longitude, latitude. For newbies who just know a bit of JavaScript and know how to make simple web pages, Leaflet is much easier to get working with. Let's talk about JavaScript. When you download both libraries, you see they differ in size quite a bit, and also in features. OpenLayers has roughly three times the classes, features, functions, et cetera, and three times the library size when minified, compared to Leaflet. That's because of how the projects work. If you check the web pages for each, Leaflet encourages developers to develop plugins for it, while OpenLayers encourages developers to make custom builds that take away the parts they don't need. In the long run, that makes Leaflet have a lot of plugins and very, very little built-in stuff, whereas OpenLayers has a lot of built-in stuff and very, very few plugins or additional libraries, of which the best known is ol3-cesium, I think. I don't know if there are any more plugins for OpenLayers, actually. If you guys want to make a list, please do that. Some of the built-in functionality from OpenLayers can be added to Leaflet with these plugins. I didn't really have the time to check the library sizes for Leaflet with these libraries and a few others. That would be a pretty nice exercise to do later, but I couldn't do it. Leaflet can be made beefier by adding more and more plugins, which are just JavaScript files. That has a great advantage, which is: with any tool chain, any JavaScript tool chain, you can do that. Right now, at MazeMap, we are using Brunch plus Bower. But really, you can use Browserify, RequireJS, AMD, whatever, and it will work, because it's just JavaScript files. So any integration tools — Angular, React, Polymer — it will work just by adding more JavaScript files. With OpenLayers, unfortunately, you need to use the Google Closure Compiler. That means that if your tool chain can use the Google Closure Compiler, you can make a smaller version of OpenLayers for your website. But most integration tools will just take the whole file, the half-a-megabyte file, and push it to the mobile phone. So for mobile developers, this is a real problem. It's not that easy to convince people to install a new build tool and change your tool chain to build OpenLayers, whereas with Leaflet, you get complete freedom there. Also, speaking of Google Closure: I want to know how things work on the inside. I'm that kind of tinkerer. So I usually use the debug version of Leaflet to trace through function calls and refresh, et cetera. So I also used the OpenLayers 3 debug version, which is 16.3 times bigger than the Leaflet one.
I was just looking into that a little bit, and it's because Google Closure does a few weird things, of which one of the funniest I was able to find was this: Google Closure redefines zero as zero, and makes that a constant. Thank you, Google Closure. You are very helpful. Let's talk about calling patterns. Whenever you instantiate a map with Leaflet, you just instantiate the map, create a tile layer, and add it to the map. When you use OpenLayers, you have to create the tile source and create a view with a center. So Leaflet uses two classes. OpenLayers uses five classes. I wonder why OpenLayers doesn't use another class for the center point, which Leaflet does. I really am puzzled by that. Why are you using an array of two elements if that is being transparently cast into a two-dimensional vector? Why aren't you doing more transparent casting? Right. I don't know. The problem I can see with OpenLayers also is that for this example, I'm already using three levels of indentation. JavaScript is very prone to something called callback hell, which creates seven, ten levels of indentation, and the code becomes unreadable very quickly. So in the long run, this pattern will create unreadable, or more difficult to read, code, whereas Leaflet uses a much flatter structure. For me at least, and for JavaScript developers, it's much easier to work with. When you're doing vector, it's the same, except that Leaflet doesn't have any facility to load data by AJAX or whatever. So you have to go for jQuery or any other library for doing that. But then it's very plain. You just use one class, you decorate the hell out of it, and in OpenLayers, you have to define a style, and then inside a style, you have to define symbolizers, and within each symbolizer, you define more styles, et cetera, et cetera. Whereas in Leaflet, you just push everything together with the options of the vector layer. Is this good? Is this bad? It depends on what you prefer, really. The OpenLayers model follows much more closely the way that traditional GIS or desktop GIS applies several symbolizers to one vector feature, whereas in Leaflet, for everything that you want to be shown on the screen, you have to put one more vector in. When you have a good look at these differences, you start realizing this very important fact: Leaflet is just a wrapper over HTML elements. The only thing it assumes is that you have a web browser. OpenLayers is using a model of classic GIS. It assumes the OGC simple feature standard, for example. The most blatant example of this is that in Leaflet, you have something called a grid layer, which can be divs, can be canvases, can be videos, can be anything, any HTML element. While in OpenLayers, it has an ol.Feature, and within the ol.Feature, you have a geometry, and within that geometry, you can have a line string, a multi-point, et cetera. This follows the GIS formats and the GIS conventions very closely. In Leaflet, you really don't care about having features with geometries and properties at all. This also defines the kind of audiences that each library caters to. If you only know about HTML, Leaflet will be much easier. If you have a big dependency on the classic GIS data model, you probably want OpenLayers. Let's talk about documentation. As I said before, the OpenLayers tutorial doesn't work. It scares newbies. But a good thing about OpenLayers: it has around 125 examples that are absolutely awesome. In Leaflet, we are missing that right now.
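As a rough illustration of the vector calling patterns just described (not taken from the speaker's slides), loading and styling a GeoJSON file might look like this in each library, reusing the lmap and olmap variables from the earlier sketch; the file name and the jQuery usage are assumptions:

```js
// Leaflet (0.7.x): fetch the data yourself, e.g. with jQuery, then decorate
// one class with plain style options
$.getJSON('schools.geojson', function (data) {
  L.geoJson(data, {
    style: function (feature) {
      return {color: 'red', weight: 2};
    }
  }).addTo(lmap);
});

// OpenLayers 3: a vector layer with its own source, format and nested
// style/symbolizer objects
olmap.addLayer(new ol.layer.Vector({
  source: new ol.source.Vector({
    url: 'schools.geojson',
    format: new ol.format.GeoJSON()
  }),
  style: new ol.style.Style({
    stroke: new ol.style.Stroke({color: 'red', width: 2})
  })
}));
```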
Also, the API documentation is very concise in Leaflet. It's a lot more verbose in OpenLayers. That right there is the learning curve. It's insanely easy to get into Leaflet. It's quite difficult to do mid-level stuff with Leaflet. Once you get over that barrier, it's quite easy. In OpenLayers, it can be challenging to start with it. If you want to do any cool stuff with OpenLayers, it's pretty easy because you can follow one of those examples, and then you have to delve into the documentation to make it work. Also, the documentation for OpenLayers is automatically generated with JSDoc through the build system, whereas the Leaflet one is manually maintained. I know what the OpenLayers people are thinking. I can hear your muffled laughs from the room. But don't worry, because I'll fix it. I have been developing something called Leafdoc, which is going to rock, because it solves a few of the problems with the actual documentation. If you look at the documentation for OpenLayers, like the line strings, you will have something like setCoordinates next to set. Why do you have setCoordinates and something called set next to each other? My brain just keeps switching my view between setCoordinates and set, and I don't understand why you have to show that. It doesn't really make sense when you're developing. When I'm developing, I want to know what a line string does differently than the base class. I don't care about the base class. Really, at all. With Leafdoc, I'm going to fix this by keeping the inherited classes to one side. It's going to rock, absolutely. There's the demo. Just check the repo on GitHub and hopefully within a few weeks it will be finished. Let's talk about something else. Let's talk about indoor maps. This is the fire evacuation plan for my office. As you can see, there are two of them and they're different. Do you know why they're different and which one of them is right? I will put another example more locally so maybe you have a better idea. This is the... I took this photo two days ago. This is the underground shopping mall at Gangnam station, and they're different. Why are they different? Anyone knows? Orientation. That's right. Whenever you are indoors, you don't care about the north at all. You care about which side you are facing. You have to rotate your map to fit the orientation of the user when you're using it. This is quite important. Indoor maps, evacuation plans or directories, they will be rotated in most cases. Whenever you're facing that wall, up is front, not north. You need rotation, and unfortunately for me, that works insanely well in OpenLayers. It doesn't work at all in Leaflet. Don't worry. I'll fix it. You can check out that branch and it just works. There are a few things to fix, but hopefully I will be finishing this thing, and then you can zoom it and pan it and whatever. It works. So, something else. Let's do 3D. Nope. The situation is quite wrong on both of them, because even though the data models accept elevation, or the Z coordinate in OpenLayers, you cannot have any perspective. You cannot really have any symbolizer depending on altitude or the Z coordinate by default. You cannot have any kind of 3D view right now. In Leaflet, there is no WebGL support at all, which hinders the situation. In OpenLayers, there is some kind of WebGL support. I haven't been able to have a proper look at it yet. But you will say, hey, there's Cesium for OpenLayers, right? Okay. Let's talk about Cesium in OpenLayers. I did download ol3-cesium.
I noticed, hey, there are the debug versions of the library. So I will just include them and play with it and see what kind of stuff is going on inside, right? You want to know what happened after that? It crashed and burned, with a 'goog.debug already declared' error. This is what I call a Closure clash. And this comes back to the use of Closure to slim down OpenLayers. In OpenLayers, once again, you are trying to make developers use the Closure compiler to customize OpenLayers. And in the one best example of an OpenLayers extension, you are not using Google Closure together. You are using Google Closure independently for OpenLayers and for ol3-cesium. Why not create a compiled version of both of them together? It only makes sense, and it will prevent people from making this mistake that I made. So that's it. Also, I did try ol3-cesium after that, after all. But it has a few missing features that I would need for the indoor mapping. Namely, right now we need to show WMS layers as flat surfaces over the geoid, right? Because you want floor plans at different heights. And there is no easy way to do that right now. It can be faked somehow, but it's a very ugly hack. And I had trouble starting to build it. So finally, I decided to fix it myself with something called Leaflet.GL. So it works. It will work with time. And I am just sorry that ol3-cesium doesn't do exactly what I need. It's cool. It's insanely cool, I admit. It just doesn't work for me. It doesn't work for indoor mapping, unfortunately. Also, all the 3D libraries have a similar problem, which is that when you are dragging the map, you are dragging from the terrain. And when you are looking at a building cut, you will want to drag from the visible object that is under your pointer. So if you have a plane or a building or an object, if you are dragging from the roof of the building, you want the roof of the building to stay under your mouse pointer. So what's best for our indoor maps? OpenLayers has two killer features, which are rotation and 3D. We thought about maybe we should switch to OpenLayers. But then all the integration problems popped up. We will just keep it the way it is right now. Maybe your case will be different. Maybe you will like OpenLayers better. Because a lot of people find Leaflet confusing, because it refers to HTML things in the browser, not to the traditional layer concept in GIS. The geometry model is not there, and lat-long is not the right semantics when you have a different CRS. OpenLayers has a few killer features like the raster reprojection; it's impressive what you guys are doing. Maybe you need one of these features, and then you will go for OpenLayers. What to use? Imagine a line. At one end are the web people, who know what DOM means without looking at any book. And at the other end there are GIS people and Java people, who know what MVC is without looking at any book. And position yourself on the line. If you are at one end, you will probably like Leaflet. If you are at the other end, you will probably like OpenLayers. If you are in the middle, you will have to try and see. One more thing before I finish. You are the worst cause of bugs ever and we all hate you. Thank you. Thank you for your presentation. Do you have any questions? Raise your hands please. You gave us a wonderful comparison of two different tools. I think we have about two minutes. I will ask one question. Actually, I have done some firefighting vulnerability studies. Which is better for this one?
Because I am a GIS person, I think I may prefer OpenLayers. But for visualization, which is the better way? Maybe you are not such a GIS person as you think. Maybe you are more of a designer. Maybe you have a designer side that you have to discover. Question? Yes. Okay. So I am basically the filler. So if there are no questions from the audience, you can count on me. We are also doing indoor maps. And what we found very interesting and challenging is the story with the levels. What is the support for levels, for the floor switching? So right now what you have to do is you make logical groupings of layers, and assign those logical groups some kind of ID which might or might not refer to the floor you are in, and switch them on and off. That's it. That's the only way that you can do it right now. That's why I am doing Leaflet.GL, to be able to have proper 3D. I would like to do it in... I wasn't talking 3D. Yeah. But I want to do 3D because that will fix the problem in the long run. I think we have a comment on that. Yes. Well, just experience. One experience we made in our company: you actually don't need 3D. It's sometimes confusing to some kinds of users. What we are doing, we are... in the cartography, we are shifting the floors. Of course, it doesn't work if you have 1,000 floors, but four, like five of them can be done. And you can provide this shifting either on the server, like on the data, but it is much nicer if you can do it in the client, which is... And of course... Yeah. But you have to keep in mind that Google is doing it. Okay. Google is not just sticking to the imagery layer. It's using a globe, full 3D. Esri is doing full 3D in their story maps. And we will lag behind if we don't implement it. I want to do better than Google or Esri. It's actually also our experience. We are doing plain maps. And Google also does this floor switching. They do it also on the plain maps. So if you go to Dresden railroad station and click on the station, then Google will display a small floor switcher on the right side, and you can see the floors going up and down. We'll have to do that eventually. It just needs time. But great talk anyway. Thank you. That's right.
|
Leaflet and OpenLayers are two well-known JavaScript libraries for embedding interactive maps in a web page, and each of them comes with pros and cons which are not obvious. Having worked with both libraries for indoor applications, we will in this presentation offer insight on which of them is more suited to a variety of situations and requirements, and which challenges they should overcome in the future.
|
10.5446/32093 (DOI)
|
So WPS, the Web Processing Service, of course, is the OGC standard for computation-oriented web services. It has much in common with protocols like the WMS and the WFS, but with support for asynchronous requests suitable for long-running processes, and also support for nested requests. So my presentation is mainly about a particular application where we use WPS to serve the results, and also an example of using nested requests with WPS. The application is a path-planning problem, finding the way to go. In other words, to find the best route between two points, where best may be the quickest, the shortest, the most economical, the one with the least chance of accident, for example, and usually performed on a network like this road network that is pictured here, with nodes, edges that connect them, and weights on each edge. However, we would like to go places like this or this, and basically anywhere you can go on foot or with a ground vehicle. So we want to build a path-planning application for terrain, and initially to cover all of my country, Norway, which is about 2,000 kilometers long. And find the safest route, and not restrict motion to roads or paths, and estimate travel time and perhaps other things like exposure and visibility. And I'm going to use this small area north of Oslo, 5 by 5 kilometers, just to show examples and how we build the graph to run path planning. So our perspective is that we need to do situation-dependent path planning in a large graph. Situation-dependent means that the weights may change according to the time or the situation. It might be weather-dependent, for example, or have a seasonal dependence. Also, we have to relate to a service-oriented information infrastructure, and that's where WPS comes in. We have very detailed land cover data on soil type, vegetation, et cetera. And we're expecting in the coming years to have access to very high resolution elevation data on a national scale. So we're trying to tailor the application to handle the potential that's inherent in that kind of data. And also, we're working on simulations of ground physics so that we can predict things like temperature profiles in the ground and the load carrying capacity of the ground, et cetera. So that's our starting point. So here I'm going to put most weight on the graph generation, show some routing examples, and say a little bit about the WPS implementation. So to create a graph that covers all of your land, there are various possibilities. You can have a regular graph like this, perhaps with diagonals as well. A very coarse grid is shown here, but it tends to produce a great number of nodes. You can have a random graph where the nodes are distributed more irregularly. There's the sight graph or visibility graph, where the nodes are usually placed close to obstacles, and you assume that you can move freely on the edges connecting nodes. There's also the Voronoi graph, which is used in robotics, for example. Typically then the centroid of each cell is at an obstacle, and you want to move along the edges of the cells to be as far as possible away from the obstacles. However, for persons moving in terrain, you often find that you pass obstacles as close as possible. And finally, well, there are more, but navigation meshes, which I believe are popular in the gaming industry, are also made from polygons, where you assume that you can move freely within each polygon, and the edges connect adjacent polygons.
So our approach is to use a random graph, but with somewhat judiciously distributed nodes. In other words, the nodes' positions are sampled from some distribution that reflects the terrain properties. So more nodes where more nodes are needed. So that's the graph types, and then we also have some different data types to produce our grid. Elevation, of course. From elevation data, we also derive attributes that describe the terrain more succinctly, like this one, which is what I call actual variance, which basically tells you how much your surface normal vector jumps around in the neighborhood, and other attributes that you can derive as well. Then there's the land cover data, shown here as polygons to give an impression of the detail. This is a dataset that's used a lot in forestry, so it describes, for example, the potential yield of the forest, but it has data about soil type, vegetation type, whether it's a lake or a road, et cetera. Then there are road networks, of course, and path or trail networks, and for the latter we use, for the time being, OpenStreetMap data. And that's basically what we are using for the moment. We also plan to include the latter categories, especially LiDAR, ground physics, and weather forecasts. This is another view, a photograph of the area with the land cover polygons. So there are three lakes in the middle, and a lot of forest. This is an example of a LiDAR dataset. We haven't got so much of that yet, so we just tested it. It's very exciting to work with. It gives a whole different characterization of ground morphology and vegetation. So our guiding principle is to create a random graph with node density depending on terrain attributes. We use a triangulation to form the edges, but we want to have the ability to have a great number of nodes with arbitrary spatial resolution, and also on an arbitrary geographical scale. So that's why we introduce a hierarchical representation, having our graph in several levels or layers, where a node in one layer is formed from a cluster of nodes in a more detailed layer. So to generate the graph, we first lay out the road data. We keep all vertices, which are used for computing the weights, but we convert some of the vertices to graph nodes as well, so we have extra nodes to connect to the rest of the graph. We do the same with the path network, and then we distribute terrain nodes. This is, in my view, a fairly coarse resolution, but it serves for illustration purposes. So we generate the graph. In this example, we completely avoid lakes and marshes. But in the wintertime, of course, we want to include those and just reflect the surface in the edge weights. So working with graphs is very convenient when they are represented as a sparse matrix, where each node is a row and a column in a large matrix with mainly zeros, and the non-zero elements represent connected nodes, and they're mirrored about the diagonal unless it's a directed graph. So hierarchical representation: the simple idea of merging nearby nodes, nearby in the sense of our distance measure, which is travel time in this application. So then we can start path planning at the coarse level and repeat the shortest path algorithm on the union of the nodes in the shortest path solution at the coarse level, perhaps including neighboring clusters. So it's a way of reducing the number of nodes that enters into the computation.
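As a side note on the sparse representation mentioned above: pgRouting does this inside the database, but the idea is easy to see in a small in-memory sketch, where only the existing edges (the non-zero matrix entries) are stored and a textbook Dijkstra walks over them. The node names and weights here are made up:

```js
// Adjacency "list": only the non-zero entries of the big sparse matrix
var graph = {
  a: {b: 4, c: 2},
  b: {d: 5},
  c: {b: 1, d: 8},
  d: {}
};

// Naive single-source shortest paths (Dijkstra without a priority queue)
function dijkstra(graph, source) {
  var dist = {}, visited = {};
  Object.keys(graph).forEach(function (n) { dist[n] = Infinity; });
  dist[source] = 0;
  for (;;) {
    // pick the unvisited node with the smallest tentative distance
    var u = null;
    Object.keys(dist).forEach(function (n) {
      if (!visited[n] && (u === null || dist[n] < dist[u])) { u = n; }
    });
    if (u === null || dist[u] === Infinity) { break; }
    visited[u] = true;
    Object.keys(graph[u]).forEach(function (v) {
      if (dist[u] + graph[u][v] < dist[v]) { dist[v] = dist[u] + graph[u][v]; }
    });
  }
  return dist;
}

console.log(dijkstra(graph, 'a')); // { a: 0, b: 3, c: 2, d: 8 }
```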
And one of the reasons we do this, if you recall, is that we want to do dynamic computation of the weights. The weights are situation dependent, so we cannot update the whole gigantic graph for each request. So we start at the very finest level of detail, and we want to merge nodes that are close in the sense of travel time. And so one way to view this problem is to say that we have a given cluster size. Say I want to work on the level of a 10-minute walking distance, for example; how can I find the cluster distribution that requires the least number of clusters? And it turns out that this question is computationally complex. It's NP-hard. But there are approximations that are efficient. So that's what we do. So essentially this way of doing it requires one single-source all-shortest-paths computation per cluster, where the number of nodes that enter the computation is determined by the ratio of how many clusters you want per unit area, for example. So this is just an example of clustering the nodes in the previous example, shown in different colors, with a 10-minute walking distance. So, cost functions. First we derive cost functions for what we call standard conditions, differentiating between roads, paths, and terrain. And these standard cost functions are used for generating the grid, which is at the base of the application. And then, for a shortest path request, we compute weights dynamically. So the standard cost functions are based on literature, physiology literature, and more. A little bit of physics and empirical data. And then scale factors can be applied to account for the effect of, say, moist ground or slippery ground or snow. Roughness tends to decrease the speed by a few percent, et cetera. So this kind of has to be tuned or learned, ideally based on actual empirical data. And we have different categories for vehicles, bicycles, hikers. So this is an example of a standard cost function for a hiker. The maximum speed here is at about a 3-degree downward slope, I think. If you have a cost function for energy, the most efficient, I believe, is at about a 10-degree downward slope. So this is speed. What we need is actually the slowness, the inverse of speed. Anyway, it's equivalent. So the system consists of a ZOO WPS server. The graph is stored in Postgres with the PostGIS extension. So we use the pgRouting algorithms, in this case the A-star, the two-way A-star. And as an example of a nested WPS request, we can use a shortest path at the coarse graph level as input to the shortest path request at the next level, et cetera. And this is naturally implemented as a nested WPS request. Of course, you can do it other ways as well. So this is a WPS client that my colleague, Morton Ionson, implemented. We currently have something like 30 to 40, at least, WPS services for various applications. Ship traffic monitoring is one example. Some satellite image analysis, Bayesian analysis. So this is the first routing example. There are two small red flags marking the start and the end. This is an observer placed in the center of the image, with his field of view. And we want to avoid him, so that affects the outcome of the routing on an optimal route. If you take away the observer, this is the shortest path. So this is a topographic map. It shows that the area is quite hilly. So the next example is an example of a hierarchical iteration on a very coarse grid: the first solution, the second, and the final.
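The talk does not spell out the exact hiker cost function, but Tobler's hiking function is a common choice with the same qualitative shape (fastest at a gentle downhill of roughly 3 degrees), so a sketch of a slope-dependent edge weight could look like this; the surface condition factor is an assumption, not from the talk:

```js
// Walking speed in km/h as a function of slope (dh/dx, dimensionless).
// Tobler's hiking function peaks around a -5% grade, i.e. ~3 degrees downhill.
function walkingSpeedKmh(slope) {
  return 6 * Math.exp(-3.5 * Math.abs(slope + 0.05));
}

// Edge weight as slowness (hours per km) times length, optionally scaled
// for conditions like snow, marsh or roughness.
function edgeWeightHours(lengthKm, slope, conditionFactor) {
  var slowness = 1 / walkingSpeedKmh(slope); // the inverse of speed
  return lengthKm * slowness * (conditionFactor || 1);
}

console.log(walkingSpeedKmh(-0.05));          // 6 km/h at the optimal gentle downhill
console.log(edgeWeightHours(1.2, 0.1, 1.15)); // 1.2 km uphill on rough ground
```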
So one of the next steps for us is to elaborate on the ground physics simulations, where it's really the Norwegian Met Office that does the simulations. Such models are used as boundary conditions for weather forecast models, but they can also be run independently, offline as they say, with perhaps more detailed ground physics models. And they're quite useful for predicting snow depth and temperature profiles, which also imply something about the load carrying capacity of the ground. So the summary is that we try to do path planning in terrain. It's still a work in progress, I should say. We have situation-dependent edge weights. We use a hierarchical random graph, and a service-oriented implementation with WPS and ZOO. And PostGIS is an important component for storing the graph, with the pgRouting shortest path algorithms. So we rely heavily on open source software in our work, so this is just to acknowledge that. And on the final slide, my colleague put a joke here. I don't know if anyone can spot it, but it's the ZOO logo. We have a lot of moose in Norway. Thank you. Any questions? One over here. Have you walked any of your routes? Actually, that's why we chose this area. We wanted to test the algorithm in an area we know well. We're both doing the sport of orienteering. I don't know if it's well known in the US, but we run around there. So that's why we chose this. This is the northern side of Oslo. It's very nice for recreation. Any other question? I don't think so. So thank you very much.
|
How to find your way in difficult terrain, with obstacles, hazards, and deep snow? We present a solution for cross-country path planning and mobility, based on OSGeo software and open data. A large graph representing terrain, roads, and paths is stored in PostGIS for use with the pgRouting module of shortest path algorithms. The graph is based on detailed topography, soil type and vegetation data, and edge weights can be adapted for hikers and vehicles. The application is service oriented and held together by the Web Processing Service (WPS), the OGC interface standard for computation-oriented web services. A key component is the ZOO WPS server. The presentation will discuss WPS benefits and describe graph and weight generation, including challenges such as accounting for dynamic data about temporary hazards, weather, etc.
|
10.5446/32094 (DOI)
|
Okay, hi and welcome to the last talk of this session. My name is Marc and this talk is going to be about GeoExt. The most recent version is number three, and I'm going to show you how far we are with this new version and what is still to do for us. So this is the outline of the talk for the upcoming 25 minutes, roughly. I'm going to give a short introduction about myself and my colleague, or friend, who helped me prepare this presentation and is also very active in the GeoExt project. Then we'll have, I think, a short history of the GeoExt project, then I want to show you what version three of this GeoExt library actually is, and of course I will stop with a short outlook. So my name is Marc, I'm working for terrestris, which is also a sponsor of the FOSS4G. I'm a core developer and member of the project steering committee of this GeoExt project, and I'm also a core developer of the OpenLayers JavaScript library, which you should have heard of at this conference. I wrote a book about OpenLayers and all the other parts that float around in this open source GIS cosmos, and I actually love open source and everything that's spatial. So the company that sends me here is terrestris, we're based in Bonn, Germany, and we do all kinds of stuff with open source geospatial, so if you have any questions or you want to work with us, just contact me or my boss Till, he's sitting right next to you. So this is not me, this is my friend Chris, he helped me create this presentation and he also helps a lot with creating GeoExt. He's a software developer and architect and he just recently founded his own company, meggsimum, so it's a very young startup but providing very great work. He's an OSGeo Foundation charter member and he speaks very often at national conferences and international ones, but sadly he couldn't make it to South Korea. So he loves open source and spatial as well as I do. So this is the company, the most important things are the Twitter handle and the GitHub handle most probably, so if you like what you see now, just visit this site. Now this is what it's all about, so what is GeoExt? It is a JavaScript framework for sophisticated web GIS and it's based on OpenLayers and ExtJS. So basically it extends ExtJS with spatial components, and it allows you to, for example, read spatial formats like the answer to a WFS request, which is oftentimes GML, and put it into the very rich data components of ExtJS, and then you can put these data components into other user interface components of ExtJS, and it all works very well. So basically you create rich web mapping interfaces with it. Now the copyright is assigned to OSGeo, it's open source, and this is very interesting: the first commit for this project, for the GeoExt project, happened in March 2009, which means it's six years old and counting, which is, for a JavaScript library, you know, like it's a dinosaur. So that was a lot of text on the previous slide. So GeoExt is basically the marriage of ExtJS and OpenLayers. It is also the child of ExtJS and OpenLayers, because it inherits some of the functionality and the characteristics of its parents. Like ExtJS has a way of doing things and OpenLayers has a way of doing things, and all these characteristics of its parent libraries are inherited by GeoExt. And as I see it, it both enhances ExtJS and OpenLayers. So it's a win-win-win situation, actually. So now a short history of GeoExt. So version one, the one that started in 2009, was based on version three of ExtJS and OpenLayers 2.x.
I think it started with 2.9 and in the end it supported basically everything up to 2.11 or something. This is how one example looks, and you can see all of the stuff that I just mentioned in this picture, and we will see the same picture again for the future versions. So you see an OpenLayers map embedded into a user interface component that we get for free from ExtJS. And also, on the right side, you see a list, a grid, a very rich table of the features that you see on the map. So there's everything in there which is at the core of GeoExt. It has a component, a visual part, like the map is in a container which can then live in the various layouts that ExtJS gives us. And also the data is ready to be used because it's in a store of ExtJS. So as I said, it started in 2009, and back in 2009 ExtJS was hot, very hot. Right now it's not, and you can see this here. This is a tweet that floated around Twitter two weeks ago or something. It's linked in the presentation. So other JavaScript frameworks are currently hotter. Like, for example, AngularJS is hot since 2014, or maybe it only was hot in 2014. I'm sure you can make that up yourself. So the current hotness is ReactJS, obviously. So this is just, back in the day ExtJS was the one framework giving us everything. It was very mature back then even, and it provided us with very rich possibilities of enhancing OpenLayers, or creating a library getting the most out of its parents. So while the hotness has gone away obviously from ExtJS, it's still there and it's still alive, ExtJS, and so is GeoExt. And I'm sure that it can be a very good choice if you want to address some, yeah, specific problem domain. We will see which ones those could possibly be. So now GeoExt 2.0, that one was based on version four of ExtJS and still OpenLayers 2. This one is living on, well, all the code is on GitHub, so you will find the code to all these versions on GitHub. And as you can see, everything got a little bit more modern, but there was no radical change in it, even though it was very hard to implement, because the change between ExtJS 3 and 4 was quite a hassle, and yeah, but we managed to get more or less feature parity with version one with this release. So it provided us with many, many functions. So basically the creation syntax was changed back then, so you shouldn't do "new" with your GeoExt class name any longer, but you should do Ext.create with the class name. And it brought us support for the programming model of MVC, model view controller. It was easier to theme stuff so that it wouldn't always look the same, and it also brought the first compatibility with the Sencha tools that float around ExtJS, so we could use these Sencha tools to detect the dependency graph of one particular component, so that we would be able to build smaller builds of ExtJS which would include just those components that you actually need. Now this one is a very young one. This is GeoExt 2.1.x, it's still in beta. I think my friend Chris released it a week ago or something. You see the layout has changed a little bit, and the most important change here is that this version 2.1 gives you the choice between ExtJS 4.2 or ExtJS 5.1, at your choice. So every one of these major versions of ExtJS comes with new features and enhanced performance, probably, so you can see yourself if you want to upgrade your applications to the newest ExtJS, and then you should simply take this GeoExt version.
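To illustrate the syntax change just mentioned, here is roughly how the same map panel would be created in GeoExt 1 versus GeoExt 2; the class and config names are from memory and may differ in detail between releases:

```js
// GeoExt 1.x on ExtJS 3: plain constructors
var panelV1 = new GeoExt.MapPanel({
  title: 'GeoExt 1',
  map: new OpenLayers.Map(),
  layers: [new OpenLayers.Layer.OSM()],
  renderTo: 'gx-map'
});

// GeoExt 2.x on ExtJS 4/5: Ext.create plus the new class names
var panelV2 = Ext.create('GeoExt.panel.Map', {
  title: 'GeoExt 2',
  map: new OpenLayers.Map(),
  layers: [new OpenLayers.Layer.OSM()],
  renderTo: 'gx-map'
});
```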
So these are the things that are coming with 2.1: we now support two-way binding, which is essential because OpenLayers threw it away, so they decided to no longer have it, which is a good decision, I think, for OpenLayers, but it's a nice thing to have if you're developing applications. And we took first steps towards responsive design for your applications, so you could dynamically reconfigure your components depending on the environment where the code is run. So on your tablet, for example, it would have a different appearance than on a desktop PC. But in the meantime OpenLayers 3 and ExtJS 6 were born, and so we were late again and the run began from the start, so we wanted, of course, to support both of these new versions, which is why we had this GeoExt 3 code sprint that happened in June this year. It was in Bonn, a beautiful city you should definitely visit. It had 10 developers from four countries, and there's a small picture down here of all of us. We had a good time and we also did a lot of work. We basically built the foundation for GeoExt 3 based on those versions that were mentioned a lot of times, like OpenLayers 3 and ExtJS 6. So these are the sponsors of this particular sprint, and I am very grateful that they gave us their money and their support, or their developers. So without these entities we wouldn't be where we are right now, and there are all sorts of different entities on this slide. Some of it is German, but the first one, for example, is part of a church. They are also sponsoring open source. Very nice. So thank you. So now, what is it, GeoExt 3? The objectives when we were starting GeoExt 3 were to start from scratch, and we definitely wanted to benefit from the Sencha tooling, because in version 6 they provide us with a very sophisticated build tool, Sencha Cmd, which has been there for a long time, but they improved it over time. And we also wanted to benefit from the OpenLayers feature galore, which you may or may not have seen in the talk that was yesterday, I guess, from Andreas Hocevar and the other fellows like Tim and all these guys. So OpenLayers is really feature rich and we wanted to have this. We wanted to be unbiased towards the medium, so we wanted to work both on desktop and mobile, because the most recent version, well, 2.1, doesn't actually work very well on mobile. You can open the page, it will load, but it's not optimized at all and it's probably not what you want to do. We wanted to have more examples, improved tests and better documentation, as always. So this is the state. It's all on GitHub. We have a little bit more than 300 commits. There are seven contributors so far from some entities, well, it's like five entities behind these committers, and in their free time. The building and packaging is, I consider it, done, so that's all very well now. So every commit creates a new Sencha package, which is a detail you may or may not know, but this is how you provide additional functionality to ExtJS, by creating a package. It's comparable to an npm package. It's not the same though, and it's all done and it's built automatically. The same is true for the API docs. They are built automatically. Yes, we already have a test coverage of 82%. We are reaching for, well, we want to have more, but I think this is already a good number, and I will not try to reach 100 just for the sake of the 100. I want to have good tests. I want to have unbiased tests, not ones that try to reach lines just for reaching them.
So the API docs are looking nice and we can probably have a look at them. We have some examples, and that's a very important point here on this slide: we switched the open source license from BSD to GPL version 3. So, as I said, we started from scratch with an empty Git repository and we only have seven contributors, so that change was easy to do, but we were forced to do it because in previous versions of ExtJS they would give open source projects an exception. They would allow them to use a different compatible license than the GPL, and this exception has been removed from their website. We have been in contact with them, and for now it's GPL, which is a good license, I think. Okay, so what's missing then? A universal app example. So a universal app is one that runs on mobile with mobile-optimized code and one that runs with desktop-optimized code, but with one code base. We have examples of that in projects of ours that use this GeoExt version 3, but there is no such example on the website, so we need to create one. And we have zero releases, which is actually a pity; we need to release that stuff. So the homepage is already there. All the API docs, and the latest API docs with ExtJS, are linked on the homepage and also in the slides, and this is what I wanted to talk about. So on the command line, once you have the Sencha environment ready to create Sencha applications, you install the Sencha binary, it's available for all operating systems, and then you have it on the command line, and you simply say, hey, sencha package repo add, which adds a new repo where the Sencha tool can check for packages, and this is a link to the GitHub GH pages which are updated on every commit. So we basically tell it in the first line, hey, if someone wants to use GeoExt, go and search for it at this location. And then in your app.js you simply require GeoExt and that's it. So there's nothing to download. It's easy to get started using GeoExt. It's different, though, from all those other JavaScript libraries that you may or may not know, where you have the dependency usually in a file called package.json or bower.json or whatever. But it's very easy as well. Never stop learning different ways of resolving your dependencies. Some examples. So this is the most basic example that we could think of, which is just an OpenLayers map. You might see it. That's San Francisco, inside of a layout in ExtJS 6. This is looking a little bit boring, but this is the first component that we had actually working. And the nice thing is, in GeoExt 2 and 1 there was a component called GeoExt.panel.Map, or MapPanel in version one actually. So that was a special kind of panel that would display an OpenLayers map and allow the user to interact with that map. Now in GeoExt 3 the map panel is gone, which is, as I said, you know, we started from scratch and we rethought this. So now the inheritance of the map component, the thing that actually draws points, polygons and where you have your WMS or something, is now a component. You may or may not be experts in ExtJS, but this small change in the hierarchy of inheritance, the inheritance change where we put our work in, makes it possible to reuse the complete code that's there to resize the map in both mobile environments and also on the desktop, because the panel class, as I said earlier, does not exist for the mobile part. This sounds like a lot of details, but I just want you to get that we rethought everything that GeoExt was, you know, about.
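Here is a minimal sketch of that workflow: register the package once with Sencha Cmd, require GeoExt in the application, and hand a plain OpenLayers 3 map to the new map component. The repo name and URL are placeholders, and the class names are as I recall them from the GeoExt 3 docs; the next passage describes the same component setup in words.

```js
// Registered once on the command line (shown as a comment here):
//   sencha package repo add GeoExt http://geoext.github.io/geoext3/cmd/pkgs
// and the app's app.json lists "GeoExt" in its "requires" array.

Ext.require(['GeoExt.component.Map']);

// An ordinary OpenLayers 3 map, without a target: the component renders it.
var olMap = new ol.Map({
  layers: [new ol.layer.Tile({source: new ol.source.OSM()})],
  view: new ol.View({center: ol.proj.fromLonLat([7.1, 50.7]), zoom: 12})
});

var mapComponent = Ext.create('GeoExt.component.Map', {map: olMap});

Ext.create('Ext.panel.Panel', {
  title: 'GeoExt 3 basic example',
  layout: 'fit',
  height: 400,
  items: [mapComponent],
  renderTo: Ext.getBody()
});
```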
Like starting with the base component. So this is how you would do it. You create an OpenLayers map just like you usually do, give it a view, some layers, some interactions, and then you pass this map component over to, sorry, this OpenLayers map over to this Ext.create call, and then you're done. So, well, as I said, the work on this is sponsored, or was sponsored for a big part, and one of our customers wanted to have an overview map, which back in the day didn't exist in OpenLayers, and I was thinking, as a member of that project, that it probably also doesn't make any sense to have an overview map in the map library itself. Well, now time has passed, there is an overview map component in OpenLayers, but it didn't give the functionality that we wanted. For example, we wanted to show, I can even show this. I think I can do it like this. That's the real one. Give me a second. So while this loads, it's a little bit different from what the slide shows, in that there are these rectangles you see here and here, and these are indicators of the rotation of the main map. I can probably show you now in action. So this is a map, and if I rotate this map, how do we do it? Yeah, I do it like this. You see that on the right side the rotation isn't simply copied over, because then you would have two rotated maps, and, you know, what an overview map wants to give you is an overview, and it doesn't want to confuse you any more. That's what I guess. So what it actually does is it shows the extent of the other map, but rotated, and also you may notice there is this small dot here and here, which is the top left corner of the map. It's just a small detail of how we have tried to solve this problem. Excuse me? No, in this example they are not; you can actually, I can, I think I can click here, I click and then the map is reacting to this, but it's too old to have the modify interaction. Oh no, sorry. There's an interaction now in OpenLayers that you can use to drag, actually, you know, like rectangles, and this will be coming in probably some days. So we have layer trees. So this is one thing that differentiated us, I think, or to my knowledge, from most other applications or libraries around OpenLayers with some other nice JavaScript framework. So we can actually have trees, with drag and drop, to allow the user to interact with a complex hierarchical structure of layers in the map, which is a nice thing. And this is also a complete rewrite of all the things that we had in previous versions. So this is just a plugin and it's like 200 lines of code, where previously we would have had probably 300 lines of code, but it was spread over five classes and I never understood the total potential behind that implementation. So when we started anew, I started with, or we started with, a new thought on it. So I'm very happy with what's there right now. And yeah, we hope that it solves all the problems. Okay. So we have an integration with the MapFish print service. That's a component that's basically part of another open source tool, which is called MapFish. It's basically able to create PDFs on the fly. You pass it a spec over an HTTP request and then it creates, at a very nice speed, well, high performance, it creates a nice PDF of your application. And why is this thing here?
Because it's actually not just, you know, hey, you can print, yeah, cool. This thing is cool because the implementation of the logic behind it is very heavily inspired by the implementation that Camptocamp, another company you have seen here, did for AngularJS, for their project combining AngularJS with OpenLayers, ngeo they call it. You probably have seen the talk about it. So they have a reference implementation of how to serialize OpenLayers state so that the MapFish print service can understand it. And what we took is, we took all the code from them, and the test suite, because that, you know, was already tested. And yeah, we could integrate it without any hassle. So you can now have the same experience, hopefully, of printing maps as with ngeo. So this is the created PDF. You see it's the same. It has schools, I think. Those are vector layers. It's a vector layer and a raster layer. So, as I said, GeoExt tries to also enhance OpenLayers. So one thing that always bugged me personally was, you know, we have a lot of discussions inside the OpenLayers team: how do we name some API? Is this API or is it not? We do it all the time. So inside this smaller team of GeoExt, we have the chance of providing a little bit more user-friendly APIs. For example, something we have wanted in nearly every application that I developed in the past 10 years: we wanted to have something like a pointer rest, which is a pop-up once your mouse has rested on a certain position for a certain amount of time. And you can of course do it all yourself, with setting up timers and waiting and then calculating whether the position changed by an amount that you consider enough to be a new location. Has he actually moved it, or has he just slightly moved the mouse one pixel to the top? But GeoExt comes with some of these nice amenities added on top. So there's just a new event, which is called pointerrest, and pointerrestout, and it is configurable: just by providing numbers, you wait for 500 milliseconds and then look at the deltas, and then the pointerrest event will be fired or not. This just makes it easier to build functionality like the hovering pop-up or something. So that's a small pop-up. We can render all those features that OpenLayers gives us inside of Ext components. So why is this? This is an ugly example, but it's a very technical one, because it shows all the different ways, well, a big part of the different ways, of creating and styling features in OpenLayers and embedding them inside of components of GeoExt. So what you can basically do is create maps like these, or examples like these. It's a little bit light, but this is Germany, part of Europe, and Germany is in the center. And you can see blue and red dots, and you see them twice actually, because you see them on the map, where OpenLayers takes care of rendering the features as you as a developer said, hey, this is a red point. And on the right side, you also see the red points, but this time they are rendered inside of an Ext component, which, you know, in this case makes it easy to differentiate between the red and the blue points, but you may also have a classified vector layer where there are seven classes or something. So you can very easily, well, visually see which row belongs to which feature, and you can also interact in both ways. So when you select a feature on the map, the feature is selected in the grid. When you select a row in the grid, it's also selected on the map.
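As a sketch of the pointerrest convenience described above, reusing the olMap from the earlier example; the config and event names here are as I recall them from the GeoExt 3 documentation and should be treated as assumptions:

```js
// Enable the pointer-rest behaviour on the map component
var restingMap = Ext.create('GeoExt.component.Map', {
  map: olMap,
  pointerRest: true,
  pointerRestInterval: 500,     // ms the pointer has to stand still
  pointerRestPixelTolerance: 3  // jitter, in pixels, that still counts as resting
});

// Fired once the pointer has rested long enough; evt carries the map coordinate
restingMap.on('pointerrest', function (evt) {
  console.log('pointer rested at', evt.coordinate);
  // a hovering popup would be positioned here
});

// Fired when the resting pointer leaves the map
restingMap.on('pointerrestout', function () {
  console.log('pointer left the map');
});
```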
So this one looks a little bit crazy from the colors, but that's what it's all about. This is the map and the form. Oh, sorry, that's the wrong way. Oh, sorry. So I'll do it like this. So this is basically the same example. And what you see here is some map, and it's rendered with the WebGL renderer in this case. And on the right side, you see certain properties of OpenLayers objects represented, this time as members of a form. So on top here, we see the resolution. So the resolution of the map view is 38.22. And the rotation, currently in radians, is 0.4. And we see here the opacity and the brightness and the contrast and the hue and saturation of that one layer. So this example basically shows us this two-way binding. So if one side is changed, the other one is updated automatically. And this works for all OpenLayers objects inheriting from ol.Object, which is a very, very basic class in OpenLayers. So you could have this with a lot of stuff. So if I change the opacity, you see it gets darker. If I change the brightness, it gets darker. Do they have developer tools here? Yeah, they probably have. So it's a bit small. Is it like this? Yes. So there should be a global variable called map. I hope you can read it a little bit. I just type map and then I can say getView. Oh yeah, where are the braces? Yes. Opacity, for example. Oh no, sorry. I think it's on the layer, right? setRotation, that's easier. setRotation. It's currently 0.4 and now let's give it 0.5. Yeah, very impressive. That shouldn't have happened. So, map... there's obviously no variable called map. I could get this. Excuse me? Yes, but whatever it is. So I can rotate the map. That was actually additional sugar on top of it, which right now just isn't working as I wanted it to work. And live coding is actually not my thing. But whatever it is, if I had access right now to the map variable, which is the OpenLayers map, and I could change it, then not only would the map be rotated, but also the values in the form, because we just saw the other way around, where I manipulated the form and then the map was rotated. So for now, just believe me. And you can try it online. This is an online example. So that was the other one. So I'm nearly, nearly done. So these are the things that we need to do, the outlook on the future. We definitely need to release this stuff, to make it easier for other people to actually use it outside of our very small scope, because there are some users already, but I think there could be more. We need to have a roadmap; right now a lot of it is actually guided by needs that my company, or Chris's company, or someone else's company directly gives to us, and then we, you know, start implementing. We need to have a roadmap. We probably need to restructure a little bit in order to share more code. There are probably going to be, probably, this is, you know, an outlook on the future, there are probably going to be three libraries, of which you wouldn't really know, because you would simply use GeoExt, but the inheritance would be figured out for you, called GeoExt base, modern and classic. This is a thought that I currently have, but I didn't talk it over. You are actually the first people to hear this. So it may be a good idea, but let's talk it over. Yes. And as I said, we need to actually release it. So that's it. Thank you. And I hope you have some questions or some remarks. Thank you. We are 10 minutes out of time. Yeah, as I said, I was trying to tell you, but you are okay. Sorry.
So, yes, if you have any questions, just contact Marc directly, or if you want to stay a bit longer, if there is one or another question. No? Okay. Hey, honestly, you should have told me. I can do these talks like one hour or two hours. No, you were lucky that I was stopping now. No problem. We will. The next session is at four. So, yeah. Okay. Sorry for taking so long. Yeah. Okay, get ready.
|
GeoExt (http://geoext.github.io/geoext2/) is Open Source and enables building desktop-like GIS applications through the web. It is a JavaScript framework that combines the GIS functionality of OpenLayers with the user interface savvy, rich data-package and architectural concepts of the ExtJS library provided by Sencha. Version 2.1 of GeoExt (currently in alpha-status) is the successor to the GeoExt 1.x-series and brought support for ExtJS 5 and is built atop the following installments of its base libraries: OpenLayers 2.13.1 and ExtJS 5.1.0 (or ExtJS 4.2.1 at your choice). The next version of GeoExt (v3.0.0?) will support OpenLayers 3 and the new and shiny ExtJS 6 (not finally released at the time of this writing). The talk will focus on the following aspects: * Introduction into GeoExt * New features in OpenLayers 3 and ExtJS 6 and how they can be used in GeoExt * The road towards GeoExt 3 * Results of the planned Code Sprint in June (see https://github.com/geoext/geoext3/wiki/GeoExt-3-Codesprint) * Remaining tasks and outlook The new features of OpenLayers (e.g. WebGL-support, rotated views, smaller build sizes, etc.) and Ext JS 6 (Unified code base for mobile and desktop while providing all functionality of ExtJS 5) and the description of the current state of this next major release will be highlighted in the talk. Online version of the presentation: http://marcjansen.github.io/foss4g-2015/Towards-GeoExt-3-Supporting-both-OpenLayers-3-and-ExtJS-6.html#/
|
10.5446/32096 (DOI)
|
Hello, my name is Yongjae Park. Nice to meet you. I'm working at GIS United in Seoul, Korea, and we provide consulting reports based on GIS analysis. So today I'm going to introduce child safety on the school walkway. Let me introduce Dobong-gu. Dobong-gu is here. This is the Seoul district map, and we are here at the K Hotel. Dobong-gu is at the northern edge of Seoul, and there are about 20 primary schools in Dobong-gu. This is the project concept. From the next slide, I'm going to introduce and explain how participation and big data analysis are joined. We did not focus on administrative public data to analyze children's safety; we mainly used survey data. This is our paper questionnaire. We asked students to draw the line they walk to school, and the spots where they feel it is dangerous. About 4,000 students and their parents participated in this survey at 20 primary schools. This is an example of the results. Some students drew this spot because illegal parking causes dangerous situations. Some students said the street lamp doesn't work here. Some students didn't answer about a dangerous spot and only drew the line they walk to school. This is where he lives, and this is the main entrance of the school. From these paper forms, we started to construct all the data, with four women who run their homes. These are all the papers we gathered. They were processed using QGIS. These narrow lines are all the lines students walk to school. I select one feature, ID 10015, a student's way to school, from here to the main entrance of the school. This is the dangerous spot. We gathered the data not as point data; we gathered polygon data, because they draw where they feel the dangerous spot is, like a road or an area. Sometimes they draw a whole parking lot. So we constructed the data as polygon data. This is the result. Every morning in Dobong district, students walk about 766 km. The area where they feel fear adds up to about 14.3 km. We think this is big data because it's more than a trip from Shanghai to Osaka via Seoul. We always think this is big data. Before the GIS analysis, we tried text analysis using R packages, two kinds of libraries: wordcloud and association rules. Using wordcloud, we extracted keywords from the complex responses. The R package showed us 312 keywords with 12,914 frequencies. Here, the most frequent keyword was many cars, the second one is drives too fast, then dark and traffic light. We divided the data for each of the 20 primary schools and made a word cloud for each. Most schools' top keywords look similar, but some primary schools show a different pattern. Then we tried to analyze the relations between keywords. We couldn't understand the reason why students and parents wrote some keywords. For example, can you imagine why supermarket is dangerous? Through relation analysis and reading the full sentences, we could understand that the keyword supermarket relates to big car. The respondent said, sometimes a full-size car or truck is reversing when children go to school. So we understood this keyword means big cars. Based on the text analysis, we categorized a typology of dangerous situations. Nine types of situation are on the slide. I'm going to explain each type. First, too many cars: it includes keywords like apartment, supermarket, church; the students' parents dislike it when they see cars on the school road. Second type is bad drivers. It includes illegal U-turns, driving too fast, and so on. The keywords in view interrupted mean that something interrupts the view between drivers and children, so illegal parking and standing signboards are selected.
Here, hard to notice means an unfrequented road or a narrow, dark street or alley. We could know what kind of danger students and their parents are concerned about. These five types are about traffic; the sum of these percentages is 75%. So we knew that. It's also different between schools. For example, respondents at C school stressed bad drivers more than other schools or the total average, about 30%. After the text mining analysis, we started to analyze using GIS, especially QGIS. This is the research process. I'm going to introduce each step from the next slide. This is the raw data we constructed, labeling all the responses, which are actually complaints from young citizens. We open the table. Next, in the table with the complaints written, we insert nine columns and write binary values for each type, zero or one. And then these maps are the results after writing the binary values. We can check the respondents of each type as you see on the slide. The first one is all types of dangerous spots. The second one is want-to-avoid situation spots. And then bad drivers are here. Hard-to-notice dangerous spots are here. Nine types of maps. And then we put a 10 meter by 10 meter grid on our raw data. We join it to the vector data set, a spatial join, and find the sum of the values, like this. Through this process, we evaluate each grid cell to know which location is relatively more dangerous than others. This is the result map. You may see the numbers in the cells. That is the sum of the responses. This cell scored the highest value, about 69. And these cells count only one or two. This kind of map is about too many cars. This map is hard to notice. This map is want-to-avoid situations. Very different. And then we recheck the keywords and full sentences for the hotspot of each of the nine types. You may see these bold blocks. This is the hotspot of this type. The hard-to-notice hotspot. And then we check the raw data of keywords. It means lack of CCTV cameras and unfrequented places. And there is also the same kind of map for the transport infrastructure hotspot. These bold areas are hotspots. And we gather keywords again and try to pick the top keywords in each hotspot. So next, after this process, we also use a spatial join of the school walkways and the grid. So these cells are where most students walk through. This is the final map, the summarized safety information map. We designed this map manually, like a tour map, with the hotspots and keywords, only using the top keywords and top hotspots. So we designed this map and provided it to each of the 20 primary schools. After this analysis, Dobong-gu officers carried out spot inspections based on the map. They check CCTVs, narrow roads, street lamps and reflectors. And then we check the safety information and the result of the inspection they describe. So now they plan the budget to solve these problems. Thank you for listening. Thank you very much. Are there any questions or comments? Well, I do have a question, because this was very interesting for me, because only a few weeks ago they opened a new school in my village and everybody is complaining about safety because of the heavy traffic. So I really like your approach of analyzing where the spots are where people find themselves unsafe. My question to you: because you did a nice survey, did you go to the local government and present it, and did they take any measures, or how did they respond? Have you been to the local government with your findings, or is that the next step?
Officers in the local government really like this process — especially the people who manage the budget — but the people who carry out the spot inspections did not like it so much, because it gives them additional workload. And they do not hand the full map to the primary school students and their parents, because the schools would fight over it: "we want CCTV more than you", "no, our area is more dangerous". So this map is only for the officers who manage the budget. And yes, this is actually the second project. The first project was a pilot for a single primary school, and the result was very well received, so the head of the office wanted to extend it to all 20 primary schools. Actually the project officially ends next week, but for this presentation I finished the analysis last night. Thank you.
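The grid scoring described above (10 m cells, binary type columns, a spatial join and per-cell sums) was done interactively in QGIS. Below is a rough GeoPandas sketch of the same idea; the file and column names (for example "too_many_cars") are hypothetical, and a recent GeoPandas version is assumed for the `predicate` keyword.

```python
# Rough equivalent of the QGIS workflow: overlay a 10 m grid on the digitized
# "dangerous spot" polygons and score each cell by the responses touching it.
import geopandas as gpd

grid = gpd.read_file("grid_10m.shp")            # hypothetical 10 m x 10 m grid
spots = gpd.read_file("dangerous_spots.shp")    # polygons drawn by students

# Spatial join: one row per (cell, response) pair that intersects.
joined = gpd.sjoin(grid, spots, how="inner", predicate="intersects")

# Total responses per cell, plus a per-type score using the 0/1 type columns.
scores = joined.groupby(joined.index).agg(
    total=("index_right", "count"),
    too_many_cars=("too_many_cars", "sum"),
)
grid_scored = grid.join(scores).fillna(0)
grid_scored.to_file("grid_scored.shp")
```

The hotspot maps in the talk are then just a choropleth of these per-cell scores, one map per danger type.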
|
Local governments in Korea are trying to solve urban problems using GIS policy maps. Through FOSS4G Seoul, I want to introduce the example of Dobong-gu, Seoul. Topic 1. Spatial Analysis of the Practical Requirements for Parking Lots. The residents of the old residential zones in Dobong-gu suffer from a shortage of parking spaces every morning and night. Most administrators use an indicator called ‘the ratio of cars to parking spaces’ to judge the seriousness of the parking problem, but that indicator cannot reflect reality. We measured the practical requirements for parking spatially, using micro-block data and car registration data with addresses. We tried to look at things from the residents’ perspective, not from that of the administrator or provider. Now Dobong-gu is pushing ahead with a parking-lot-sharing program with houses that have spare parking spaces. Topic 2. A Civic Participation Model for Solving Children’s School Walkway Safety Problems. The Office of Policy Development of Dobong-gu surveyed a thousand residents about safety issues, and many of them answered that they feel fear walking down the alleys. Although the Office drew policy implications from the survey, it could not pin down the definition of ‘alley’ or the exact locations where residents feel fear. The Office and we redesigned the survey paper cooperatively; the improvement was a ‘map-based survey’. Elementary school students and their parents participated, drawing their school walkways and the alleyways where they felt fear on a paper map. We migrated all the lines on paper into shapefiles using QGIS and got a very satisfactory outcome. The Office of Policy Development added LED lights to the dark streets near the elementary schools, and elementary school teachers decided on walkway guidance spots by referring to where students often jaywalk.
|
10.5446/32097 (DOI)
|
So it's Friday afternoon, and it's a little bit quiet, I see. Thank you for being here and for letting me give this presentation. I would like to show you a simple example of how we can make requesting a permit easier by integrating semantics and geodata into an interactive map. For that I would like to give you the example of Bert. Bert wants to open a restaurant at a specific location. He likes cooking, so maybe later he wants to expand his business with a terrace on the dyke, and maybe later on, when business goes well, he would like to add a ballroom as well. But first he wants to know whether he can: is he allowed to do this, what rules must he follow, and if he must submit or request something, does he need a permit or does he just need to inform the local government? On the other hand we have Hans. He is a civil servant and he is responsible for the regional plan. Bert needs to request a permit, and Hans is the one who judges it and decides whether or not it fits the regional plan. In this case it was okay, so after the restaurant opens, Hans checks whether everything was done according to the permit. This sounds very simple, but it is actually a long procedure in which you need a lot of information that you have to get from different places, and the chances are that Hans and Bert do not have access to the same information. That causes the situation where Hans has information Bert doesn't have; the information may also be incomplete or insufficiently reliable, and in the end there is a delay because more research is needed — which of course causes irritation, because Bert cannot start his restaurant. To make this simpler, the Netherlands is now working on a new environmental law, because we have a lot of legislation, a long history of laws and articles regarding spatial planning, and the goal is to bundle all of it. On the left you see 26 laws and 4,700 articles; they want to go to one law with 349 articles, with fewer governmental decrees and fewer ministerial regulations. Then it gets easier for Hans to do his work, because he has to take less legislation into account, and for Bert it is also easier — I don't know what your experience with legislation is, but it is really difficult, and above all there is a lot of it. So it is a complete system reform of environmental law. It should be simpler, more efficient and better, but they kept in mind that law protects citizens while also giving room for initiatives, so you should not make it too strict. It should be more flexible and decentralized, which means giving local governments more responsibility in judging permits, because they know what kind of development is good and valuable for their region, and the whole procedure should be more transparent and efficient. Their slogan is actually 'make it simply better' — that's how they put it. Well, when I think of that, I would also say: make the IT simpler, because we now have so many tools and so much information we can use. So the idea is that if we have optimal digital support — improving the availability, usability and stability of the data — then we get a better process, and I would like to demonstrate that later on.
So what they want is to move towards one digital counter with all the information adapted to the needs of the users, and that digital counter should give Hans and Bert access to the same information. They want to do that by building a good digital infrastructure which connects different registers of geodata with legislation and with the base registrations, so you have one digital counter with access to all the information. Metaphorically they call this 'the avenue of the environment': they identify different information domains — water, nature and so on — which all have data about these topics, and if we could integrate that, it would be valuable, because then you have access to that information. This is the Dutch system of government base registrations. We have, for example, the Handelsregister, the business registration; we have the cadastral registration; we have the land and property taxation registrations; and we have several geodata sets as base registrations which are available as open data: large-scale topography, small-scale topography, and also buildings and addresses. I don't know if you saw the talk by Tom Lee this morning about open addresses, but they can take the exact locations of addresses from the Dutch SDI, because this is the website and the data sets are available through OGC standards. And because OGC standards, like the XML formats, are not that nice for web developers, we are now organizing a testbed to see how we can make the data better available to non-geo experts as well — I'm thinking of other standards like GeoJSON — to make it part of the ecosystem of the web. So if you are interested, please go there and you can find more information; it is an invitation to tender. So they are building a new infrastructure between the digital counter, the public registers, the information from the base registrations and the legislation. They want to connect all of that and put it behind one counter, so that everyone has access to all the information and Hans and Bert get the same information — that's the idea. The data provided to them should be, as we say in Dutch, triple B: beschikbaar, bruikbaar en bestendig. When I put that into Google Translate I get triple A, which is why it's called triple A: the data should be available, applicable and abiding — or if you prefer other terms, accessible, usable and sustainable, stable. That's why it's triple A. So I would like to show you a concept and prototype we have been thinking about for how you could do this: a concept of linking data, definitions and regulations into one website. We did that using an interactive map with a simple form as the user interface — we used Leaflet, which is very simple to use, and a simple web form. If Bert wants a permit, his question is: do I need an environmental permit for...? In this use case you have your restaurant or your farm and you want to apply for a change in the business activity. We worked out one use case, which is the air quality impact assessment and the nitrogen deposition effects on the nearby nature reserves. As you see in the sky picture, the Netherlands is near the sea and we have dunes, so for this example we compute the effect on the dunes.
So what you have to do is point out where your business is — in this case it's a cattle barn which will be expanded — and you fill in the request details, such as how much emission you expect in the new situation. Then a request is sent to a remote service, a register of the National Institute for Public Health and the Environment, and you get back GeoJSON. In it you see what the impact is: red means a high impact on that nature area and green means less impact. So you need a permit, because you have to be judged by, in this case, Hans. That is one example; we took another example from industry, a waste management company. If you expand it, your emissions get higher, for example nitrogen and ammonia. And sometimes, when you fill in a form, you don't know what is meant by a particular concept. The meaning of that concept is then provided by a tooltip — and that's not a tooltip simply hard-coded in the HTML, but a remote request to a different register containing all the legislation and the definitions of concepts from the information model. We call that the avenue catalogue: it stores all the definitions, so the semantics of the legislation and the information models. What we did here is disconnect the semantics from the technical data: on the one hand we have the legislation and the information model, on the other hand we have the registers with the data itself and the information services, and using Linked Data we can connect them, query them and bring the information into that one digital counter. So again a request is sent to a remote web service for this example, and it returns GeoJSON again. In this case you see that the impact is lower with the values we filled in, but it still needs a permit because some values are high. So where are we now with this? We are in the design phase, and we are now building it with standard and new techniques. This is one of the concepts we provide to decision makers, and it is now being implemented. I guess it will take some time before it's really there, but in the end what we want is that IT gets simply better for Bert and Hans in the near future, so they can get their permits and carry out their business activities. Thank you very much. If you're interested: the pictures I used are from a nice video, which is in Dutch with some nice music under it — I had to translate it — so if you're interested, go to YouTube and you can click on it as a link, and if you want other information about what we're doing, you can go to the links in the presentation. So thank you. Any questions? Then I have a question for you, if that's allowed: how is it arranged in your country? Do you know where you need to go if you need a permit? What would be your first guess where to start — is there a digital counter, or do you just go to an office, or do you have to fill in a form which you print and then you wait?
In France I think it's more or less similar to what you described at the beginning: it's quite complex to get the information, even if there are more and more websites where you can access information about risks, protected areas and so on. But you have to go to different places and it's quite hard to combine everything, and at some point you need to send a form. So I think more or less similar initiatives are happening in France as well. Thank you for your answer. So, are there any more questions? Then let's conclude this session. Thank you very much.
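Coming back to the prototype workflow described above (send the user's input to a remote calculation service, get GeoJSON back, colour the impact on the map): here is a rough Python sketch of that client step. The URL, payload fields and threshold are placeholders — the real prototype called the AERIUS service of the National Institute for Public Health and the Environment and rendered the result in Leaflet.

```python
# Sketch of the client side: post the request details to a calculation
# service and summarize the GeoJSON impact result it returns.
import requests

def impact_summary(service_url, emission_kg_per_year, location):
    payload = {"emission": emission_kg_per_year, "location": location}  # made-up fields
    resp = requests.post(service_url, json=payload, timeout=60)
    resp.raise_for_status()
    feature_collection = resp.json()          # expected to be GeoJSON

    high_impact = [
        f for f in feature_collection.get("features", [])
        if f.get("properties", {}).get("deposition", 0) > 1.0  # made-up threshold
    ]
    return len(high_impact), len(feature_collection.get("features", []))

# high, total = impact_summary("https://example.org/calc", 1200, [5.2, 52.4])
```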
|
triple-A for the environment: make IT simply better With the new Dutch Environment Act, the legal framework for the development and maintenance of the physical environment becomes more understandable and manageable for citizens, businesses and governments. A simpler and more coherent environmental law contributes to working actively and efficiently on a dynamic and sustainable environment. This entire exercise of harmonization, reduction and integration is headed by the motto “Simply better”. In addition to the merging of several dozen laws and regulations into one Environment Act (http://www.omgevingswet.nl), the central IT office where citizens can apply for an environmental permit is also being further improved. This should make it easier to obtain a permit, for example for a construction or business activity. The information presented in this central IT office must fulfill the triple-A requirements, i.e. Accessible, Applicable and Abiding. The basis for this is a national system of open (geo)data registers whose data acquisition and management is mandated to (semi-)government organizations. For each area of environmental law, a domain expert is appointed; the stakeholders of each domain are metaphorically organized in an "information house", and all houses are situated metaphorically along "the avenue of the environment". The goal of the improved central IT office is to provide a clear understanding of the relevant legislation and to allow each actor in the process to work with the same data and definitions. Therefore, we developed a prototype which presents a concept of linking data, definitions and regulations stored in one central register, using an online mapping service as the user interface. Using Linked Data as a strategy with persistent URIs, we are able to link the concepts in this register to an end-user prototype application. We implemented a prototype for the question: “Do I need an environmental permit for… applying a change in business activity?“. An air quality impact assessment is computed based on user input and visualized in a map interface, showing the effects of an increase in nitrogen emission on the nearby nature reserves after extending a greenhouse farm. We used the AERIUS calculation tool (http://www.aerius.nl/) of the National Institute for Public Health and the Environment and presented the returned geodata as GeoJSON in the Leaflet Map API (http://www.leaflet.org). With this prototype, we provide a concept which facilitates a clear understanding of the requirements for an environmental permit by making IT simply better.
|
10.5446/32100 (DOI)
|
Hello, good afternoon. My name is Andrea Aime and I'll be presenting Advanced Security with GeoServer. As you can see from the title slide, my name is not included among the presenters; I didn't work on these topics myself, and my colleagues helped me put this presentation together, but I'm the only one here. The good thing is I know the topic fairly well, so I'll be able to present it anyway. I work for GeoSolutions, an Italian company that provides consultancy, support and custom development on top of GeoServer and other open source projects. We are a strong contributor to GeoServer and GeoTools, and a — well, less strong — contributor to GeoNetwork, GeoNode and so on. So, a little overview of the presentation. The presentation will go through the authentication and authorization steps in a standard GeoServer. Here you can see the stack: we have authentication at the top, so the request comes in, hits the authentication layer, and the authentication layer decides whether or not we want this user to have anything to do with GeoServer. If we're okay, the request goes through the dispatcher and the services — all the service code in GeoServer is completely unaware of security — and then there is the catalog, which is the data access part. That part is security aware, so even if you add your own extra service to GeoServer that knows nothing about security, it will be secured anyway, because we secure data access. And while we secure data access we also check which service you came through, so if you want to secure a mix of data access and service access, you can. So, authentication. Authentication is performed through the filter chains. The filters, the security filters, are classes specialized in dealing with authentication. Some recognize that you already authenticated previously via the session, some can recognize you via a cookie, some can extract the username and password from an HTTP header, and so on. At the end of the chain, the chain has decided whether or not you really have to authenticate — maybe you already authenticated previously and in that case you don't have to go through authentication again. In case they decide you have to authenticate, they throw the ball to the authentication providers. If you are pre-authenticated through some previous request, then you go directly to the request handling. GeoServer is a rather complicated application: we have a web front end, we have the OGC services, we have the REST configuration services. So each separate part of the GeoServer services is covered by a different set of chains. For example, the user interface allows form-based authentication — HTML forms — will allow the creation of a session if none is available, will use remember-me cookies and so on, whilst the OGC one wants to be as fast and as light as possible, so it has a lighter set of chains and basically supports basic authentication by default; then you can configure it. What are the bits that you can configure in a filter chain? As I said, all the chains are configurable and all the chains can be attached to different URL patterns. There are filters that gather user credentials and handle basic authentication, form authentication, digest authentication, and mark the user as anonymous as a last resort. We have pre-authentication filters that recognize that you already authenticated in a previous interaction with the service, such as a session, HTTP headers, remember-me cookies, standard J2EE authentication and so on.
And everything is pluggable, so if you need to integrate a different kind of authentication — I don't know, Shibboleth — you can do it by integrating your own filter. Then we have the authentication providers. Say the filters decided that you have to authenticate, and we don't know who you are yet. Then we go through the authentication providers. The authentication providers can be as simple as searching for your username and password, fetched from an HTTP header, in a database — which could be stored as an XML file or an actual database — or something different, such as using the credentials you provided to authenticate against an LDAP as that user, or against a database by opening a connection with that username and password. And again, this is pluggable, so you can extend the ability to authenticate towards other authentication mechanisms. I didn't say it, but all of this is based on Spring Security, the famous Java library for securing J2EE applications. Then we have the role providers. Okay, we've decided you have to authenticate and we have authenticated you. Now, what can you do? Security in GeoServer is role-based. Some plugins can also make it user-based, but normally you categorize your users by roles and then decide what they can do based on the roles they have. So we have role providers. Why do we have role providers? Well, because authentication is sort of a generic thing, while authorization is normally quite specific to a particular application, so you normally have your own roles for that particular application. For example, the roles that you might have for GeoServer are not the roles that you might have for your email system. So we have again a set of pluggable role providers. In the simplest case, like the self-contained GeoServer that you might run on your laptop, everything is based on XML files, but then you can scale up to more production-sized solutions. As an extension, for example, we have integration with CAS, the Central Authentication Service, which is a quite popular single sign-on solution that basically allows you to sign on to one of the many services you have in your network and then be authenticated in all the others automatically. Another example of a plugin we have for the authentication subsystem is known as authkey, which — brace yourself — decides who you are based on a key that you put in the URL. Now, you might say this is crazy, and okay, I sort of agree with you, but if you are using HTTPS as the transport protocol, not even the URL is visible to an attacker. And something like authkey allows applications which are totally security-unaware to play in a secure environment. Just to say, this is used in a military setting, because some of the applications they are using do not know how to do basic HTTP authentication or digest authentication. Okay, so authentication problem solved. Let's move on to authorization. Now we know who you are and we know what your roles are — what can you do? Authorization-wise, as I said, we use role-based authorization, and given your user roles we decide whether or not you can perform a certain action on a certain resource. The action could be something as generic as a read or write on the data, or something more specific like: are you allowed to do GetFeatureInfo via the WMS protocol on that layer or not? And the resource could be a workspace, a layer, a layer group or a style. Authorization, guess what, is pluggable.
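Before diving into the authorization interface, here is a small conceptual sketch — plain Python, not GeoServer's Spring Security code — of the authentication flow just described: filters get a chance to recognize the caller or to extract credentials, and only if nobody pre-authenticated the request are the authentication providers consulted. All names here are illustrative.

```python
# Conceptual sketch of the filter-chain idea: filters first, providers second.
def session_filter(request):
    user = request.get("session_user")           # already logged in via session?
    return {"user": user} if user else None

def basic_auth_filter(request):
    creds = request.get("basic_auth")            # e.g. ("alice", "s3cr3t")
    return {"credentials": creds} if creds else None

def authenticate(request, filters, providers):
    credentials = None
    for f in filters:
        outcome = f(request)
        if outcome and "user" in outcome:        # pre-authenticated (session, cookie, ...)
            return outcome["user"]
        if outcome and "credentials" in outcome:
            credentials = outcome["credentials"]
    for check in providers:                      # e.g. XML user file, database, LDAP
        user = check(credentials)
        if user:
            return user
    return "anonymous"                           # last-resort outcome

# A provider backed by a tiny in-memory "user database".
users = {("alice", "s3cr3t"): "alice"}
providers = [lambda creds: users.get(creds)]
print(authenticate({"basic_auth": ("alice", "s3cr3t")},
                   [session_filter, basic_auth_filter], providers))   # alice
```

With identity established, everything below is about authorization: deciding what that user may do.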
So we have a Java interface that you can implement to roll your own authorization subsystem. You basically have to implement the methods back there that check, against the interface, whether or not you can do something with a workspace, a style, a layer or a layer group. And in the case of mass listings of layers, such as the capabilities document of any OGC request, the provider can specify a filter that decides whether or not a layer can be accessed — just to avoid having to go layer by layer asking "can I do this?" a hundred thousand times. The security subsystem actually allows quite fine-grained security. You can decide whether or not an attribute is visible, and whether an attribute can be read or written. You can apply read filters and write filters, so you can decide, for example, that a certain area — a spatial filter — can be read or written by your user, and they can be asymmetric: maybe the user can read everything but only write in a specific area, which is quite common in data gathering on the ground when you have multiple people going out, each one assigned to a specific area. The filters can of course also be numeric, temporal, spatial, whatever, so you can literally use all the power of OGC filters to limit data access. So, implementations of this interface: we have the default security subsystem that you find in GeoServer out of the box. It's pretty simple — it's probably ten years old now — and it can decide whether or not you can access a certain layer, period. It's, as I said, pretty basic. Then we have GeoFence, which is an external application — with a star, because it's turning into an internal library — which makes full use of the resource access manager interface instead. And then you can have your own custom implementations: since it's an interface, you can literally plug it into your enterprise system and have it run off your authorization databases or procedures if you want to, and that's actually quite common in enterprise setups. So let's talk about GeoFence. The default implementation is rather boring; GeoFence is an extended authorization subsystem for GeoServer. It has optional authentication, but we often don't use it. It's completely open source, it's part of the GeoServer project — you can get the code and fork it there, you can see it's under the GeoServer organization. What's the structure? Well, normally GeoFence runs as a separate server, a separately deployed Tomcat application. GeoServer has a plugin that is instructed to talk to this server, and it will tell the server: this user is trying to do this and that. The server will apply a certain set of rules, decide whether or not those actions are allowed, and respond back to GeoServer. GeoFence has its own administration API and its own administration front end, so you have a user interface to edit the rules. It has a REST API if you want to automate creating and modifying the rules. And it stores all its configuration in a database — it could be Postgres or Oracle, whatever. So the user interface looks more or less like this. It has its own user management, but I'm not going to get too much into it. Just one interesting factoid.
We have the notion of instances, because a single GeoFence installation might be managing security for multiple clusters of GeoServers — for example, security for an internal production cluster and security for an externally facing cluster, with completely different sets of users and rules. How are the rules applied? Well, who here knows about iptables? Raise your hand? iptables is a way to configure network access on a Linux machine, and GeoFence rules are actually modeled loosely on the iptables approach. So we have a list of rules. A rule can match a user, a group, a specific instance, a specific OGC service and request, a workspace, a layer, and then decide whether that combination is allowed or denied. The first matching rule wins: we basically go from top to bottom and the first rule that matches wins. Normally the setup is that you state what you can do first, and then the last rule denies everything, so that if you ever reach the last rule, you don't get access. This is what I just said: the matching of the rule against the various possible configuration bits. And this is already quite a marked improvement over the GeoServer default security, in that in a single rule you can put together the data and the service accessing it. So you can say, for example, that you can access a certain layer via WMS but not via WFS, which is not something you can do with the default built-in security. So this is one example, just two rules. I have a rule saying: for user U1, service WMS and workspace W1, allow. So this user is allowed to access workspace W1. And then for user U1 and everything else — star, star, star — deny, which means that user can access everything in workspace W1 only via WMS requests and cannot do anything else. It's a way to lock down that user to that particular workspace and service. And then we have a third type of rule. So far I've shown you allow and deny, but we also have limit rules. Limit rules are something we go through and collect. Limit rules don't say you cannot access, but they restrict your access: they say, okay, you can access this layer, but we are going to remove some attributes, we are going to filter the data, we are going to force a particular style on you, so that you are constrained. So there will be an allow rule saying that you can access that layer, but the limit rules restrict how you can access it. And as I said, you can restrict on the available area and on alphanumeric conditions. You can also restrict the available attributes and say, for example: this attribute you will not see, that attribute is read-only, and the third attribute you can actually write to using WFS-T. So the normal situation is that you have a standalone GeoFence server running and all the GeoServers talk to it via this little plugin. Since there is network traffic, we have a cache, caching the decisions — not for too long, of course, but enough to avoid asking the same question a hundred times in a second. And we have a REST API to manipulate those caches in case you want immediate purging of all the cached information because you made an important change. GeoFence itself has an extensive REST API that allows you to query and page through all the rules and modify them, so that you can automate any kind of mass change you want to make to the security rules.
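The limit rules just described are easiest to picture as a constraint applied on top of an allow: hide some attributes and keep only features inside the permitted area. The sketch below is purely conceptual — in reality GeoFence pushes these restrictions down into GeoServer as attribute selection and OGC filters, and in a real deployment you would manage the rules through the REST API mentioned above rather than post-filtering results like this.

```python
# Conceptual sketch of what a "limit" rule amounts to for returned features:
# an attribute whitelist plus an allowed bounding box (point features only,
# for brevity). Feature contents here are invented for illustration.
def apply_limits(features, visible_attributes, allowed_bbox):
    minx, miny, maxx, maxy = allowed_bbox
    limited = []
    for f in features:
        x, y = f["x"], f["y"]
        if not (minx <= x <= maxx and miny <= y <= maxy):
            continue                               # outside the allowed area
        limited.append({k: v for k, v in f.items() if k in visible_attributes})
    return limited

features = [{"x": 1, "y": 1, "name": "well A", "owner": "secret"},
            {"x": 9, "y": 9, "name": "well B", "owner": "secret"}]
print(apply_limits(features, {"x", "y", "name"}, (0, 0, 5, 5)))
# -> only well A, without the "owner" attribute
```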
And we also have a backup and restore service attached to those REST services, so you can back up and restore the configuration and bring it, say, from a test environment to a production environment. Now, all of what I said up to now is last year's news, or maybe two-year-old news. This is new: GeoFence direct integration. Just as with GeoWebCache, people started complaining: okay, but it's external, I have to run another server, why do I have to go through such complications, I want just my single GeoServer doing everything. And just as GeoWebCache got sucked into GeoServer — it's still possible to run it externally, but only a few deployments do that — GeoFence is on the road to being sucked into GeoServer as the new default security subsystem. It will be a long road, it's not going to happen tomorrow, but we have the first steps. GeoFence is Java based, so it can run inside GeoServer just as if it were a library. The rules are stored in a database just as before. We are doing it in baby steps, just like GeoWebCache got integrated. So the integrated version does not give you the full power of external GeoFence; it gives you enough convenience and enough extra power compared to the default system to be interesting. Just to scroll through the user interface: we have a configuration page for the internal GeoFence. For example, one of the things that we might want to move to the general GeoServer configuration is whether or not you allow dynamic styling via dynamic SLD — you know, SLD passed through the request. And we have control over the cache — how long-lived the entries in the cache are — and some statistics like cache hits, cache misses and so on. Then we have a user interface to create the rules which is very, very similar to the basic system, except that instead of having just workspace, layer and access, you also have the service and request they use, and then the role and the priority of the rule. It's all pretty simple, as you can see; most of it is drop-downs fetching from the internal GeoServer configuration. After playing a bit with it, you end up with a list of rules that define your overall security behavior. Let me show you an example. Here I have three rules. The first says anyone can access the workspace tiger, so the tiger workspace is wide open. The second says: in the workspace sf — sorry, not surface, it was spearfish — and layer archsites, allow; and everything else, deny. So if you have a bit of familiarity with the GeoServer layer preview, we are going to lose all the other spearfish layers, we are going to lose the Tasmania layers and so on. And the result is this: we have all the tiger layers and just archsites, and everything else is gone, unless I log in as an administrator — an administrator is all powerful, so I would see everything. I have another example here. In this case we allow the workspace tiger, then deny specifically the archsites layer, but then allow anything else in spearfish, and then deny everything. So I'm basically flipping the default policy on the spearfish workspace. And the result is that I see everything from tiger and almost everything from spearfish, but not — what was it — archsites. And of course you can get more sophisticated by adding more rules; we have some deployments that have a few hundred rules because they have many, many roles and many layers. There is quite a bit of work to do on this.
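The tiger/spearfish examples above follow exactly the iptables-style, first-match-wins evaluation described earlier. Here is a minimal sketch of that decision model — an illustration only, not GeoFence's actual implementation; limit rules are omitted.

```python
# Ordered rules; each field matches a specific value or is a wildcard.
# The first rule that matches all fields decides the outcome.
from dataclasses import dataclass

ANY = "*"

@dataclass
class Rule:
    user: str = ANY
    service: str = ANY
    workspace: str = ANY
    layer: str = ANY
    grant: str = "DENY"          # ALLOW or DENY

def matches(value, pattern):
    return pattern == ANY or pattern == value

def decide(rules, user, service, workspace, layer):
    for r in rules:
        if (matches(user, r.user) and matches(service, r.service)
                and matches(workspace, r.workspace) and matches(layer, r.layer)):
            return r.grant
    return "DENY"                # nothing matched: deny by default

# The second example from the talk: tiger open, sf:archsites denied,
# the rest of spearfish allowed, everything else denied.
rules = [Rule(workspace="tiger", grant="ALLOW"),
         Rule(workspace="sf", layer="archsites", grant="DENY"),
         Rule(workspace="sf", grant="ALLOW"),
         Rule(grant="DENY")]
print(decide(rules, "bob", "WMS", "sf", "archsites"))  # DENY
print(decide(rules, "bob", "WMS", "sf", "roads"))      # ALLOW
```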
As I said, we made it just interesting enough to be a step up from the default security subsystem. Of course, we still have to add support for the limit rules: the limit rules are not supported in the integrated version, and that's of course a big limitation, so you don't have a way to force a default style, limit attributes, or filter by content or by area. That is something we still have to do; the day we do, the internal GeoFence will be almost as powerful as the external one. At the moment it's just a step up from the default. We also don't yet have the ability to control read/write rights at the rule level, which is something we have to work on. The rules are order based, so we need a better way in the user interface to order them and change their position — drag and drop, something like that. At the moment we are just using an embedded H2 database in the data directory, so this is meant for a single-server deployment; again, GeoFence can connect to an external database server, it's just that the user interface to configure it is not there yet, so we will have to add that. It would also be nice to be able to migrate the old security subsystem rules to GeoFence as you upgrade your data directory and install this plugin. We don't have that yet, but it's possible. All of this, of course, is pending funding or some project that can sponsor this work. And this is it. Any questions? Do you also support time-of-day and calendar-type filters, so you can restrict access for certain time periods, certain weeks, things like that? No, at the moment we don't do that. You mentioned in the external one, not the integrated one? No, not even in the external one. We have something close in the control-flow module, which is meant to tame traffic: there is a rate controller that you can attach to a specific IP or user so that they never go over a certain request rate, and if they go above it, we slow them down. So that's the closest thing we have, but not what you asked. Then again, the subsystem in control-flow could be used to implement what you said. Hey, thanks again for this great piece of software — that's the first thing. Second, I have a question regarding the compatibility of the GeoFence direct integration with GeoServer: does it work with 2.8 or 2.9? 2.8 onwards. Okay, great, thank you. They moved the presentation — so, one presentation, I can rest, wait. If many people show up, I can start over, no problem. The program was changed from the printed version to the online version, so this is the third, not the fourth, but the printed version says it's the fourth. That's crazy. Anyway, if there are enough people who want to see this presentation again, I can start over — no problem, no issue. This is my fifth presentation; I have another, I can do seven.
|
The presentation will provide an introduction to GeoServer's own authentication and authorization subsystems. We'll cover the supported authentication protocols, from basic/digest authentication to CAS support, check the various identity providers, such as local config files, database tables and LDAP servers, and show how it's possible to combine the various bits into a single comprehensive authentication tool, as well as provide examples of custom authentication plugins for GeoServer that integrate it into a home-grown security architecture. We'll then move on to authorization, describing the GeoServer pluggable authorization mechanism and comparing it with proxy-based solutions, and review the built-in service and data security system, its benefits and limitations. Finally, we'll explore the advanced authorization provider, GeoFence, examine the levels of integration with GeoServer, from the simple and seamless direct integration to the more sophisticated external setup, and see how it can provide GeoServer with complex authorization rules over data and OGC services, taking into account the current user, OGC request and requested layers to enforce spatial and alphanumeric filters and attribute selection, as well as cropping raster data to areas of interest.
|
10.5446/32101 (DOI)
|
So what we're going to have here: in our last talk we got to hear a comparison between OpenLayers and Leaflet. What we have here instead is several Web Processing Service implementations that are here to give you a little bit of a status update. We did explore running some benchmarks, but we didn't actually get a chance to do that this year. But I am really keen to hear what's new and exciting in the world of Web Processing Service. With that in mind, I'm going to turn it over to our first project. Wow, there is more new material — they changed this slide since I last looked at it. Okay, well, my turn. So yeah, apparently WPS is the next big thing in the geo domain, so welcome to our shootout again. Just a brief overview: we are representatives of three projects here — namely PyWPS, ZOO-Project and GeoServer — but there are more implementations, and some of them attended previous editions. The first WPS shootout was in Denver, but not everyone could make it to South Korea, so now we are here and we will do as much as we can. Next in order, I am the representative of PyWPS, with a very brief and short report on what's happening. PyWPS is an implementation of the WPS standard on the server side, like the others, using exclusively the Python programming language. And this is how I see it: it's more a bike than a big, fast car. It's something you can fix very easily, something you can carry, something you can hopefully set up very quickly — and yes, it's portable, and you can really carry a big load on it. Still, it's very simple. So far I have always been talking about the so-called PyWPS 3 branch, which has been here for a long time. But what's really hot now is PyWPS 4, which is a completely rewritten code base, from scratch, because since the beginnings there are a couple of new libraries — there's Python 3, for example. So what might be interesting for you: a new internal data structure, validator functions for inputs and outputs, a new REST API being developed in this year's Google Summer of Code project, development that is completely test driven — we try to pass or follow the OGC tests as much as possible — and all the cool stuff which is now hot in Python, like Flask and things like that. One or two highlights. I was speaking about the new internal data structure: this is the IOHandler. It's basically an object, and when you put data in, you can do it as raw data, as an in-memory object, or as a reference to a file, whatever. Internally it will be processed somehow, and then when you want the data out — when your process needs the data — you can get it as a file object, as an in-memory object, or directly as raw data. You really don't have to care about the transformation. That is one hopefully interesting thing. What about validators? When the client sends data in, you can validate it. It depends on you, as the user who sets up the processes: you can configure a validator function — some are already there — which will evaluate the data based on the level you are interested in, so PyWPS will either not check anything at all or be pretty strict. What's new? We are now in OSGeo incubation. We had one student in a Google Summer of Code project, and we actually managed to have the same guy for six months focused just on PyWPS development. And with PyWPS 4 there is continuous integration running on Travis.
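To make the PyWPS part more concrete, here is a rough sketch of what a PyWPS 4 process definition looks like; the class and argument names follow the PyWPS 4 API as I understand it, but check the current PyWPS documentation for the exact signatures, since the API was still settling around the time of this talk.

```python
# Minimal PyWPS 4 process sketch: echoes a greeting for the supplied name.
from pywps import Process, LiteralInput, LiteralOutput, Service

class SayHello(Process):
    def __init__(self):
        inputs = [LiteralInput("name", "Name of the caller", data_type="string")]
        outputs = [LiteralOutput("response", "Greeting", data_type="string")]
        super(SayHello, self).__init__(
            self._handler,
            identifier="say_hello",
            title="Say hello",
            inputs=inputs,
            outputs=outputs,
        )

    def _handler(self, request, response):
        # Inputs arrive as lists; validators (if configured) have already run.
        response.outputs["response"].data = "Hello " + request.inputs["name"][0].data
        return response

# service = Service(processes=[SayHello()])   # WSGI application serving WPS
```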
Where we are failing is business model sustainability: we lack developers, we lack financial resources, and we lack people in general. And the OSGeo incubation process, which we are in, is now sleeping a little bit, because we are focused on PyWPS 4 development. Thank you. So now I will present briefly what the ZOO-Project is. ZOO-Project is an implementation of WPS, and I'd say it goes from OSGeo to OSGeo, and sometimes more, because at the beginning the idea was that OSGeo provides a lot of amazing software, and we wanted to find a way to communicate with and use this software without having to learn how to use this tool or that tool — just one protocol. That's why we used WPS. In fact, we are an OSGeo project forever in incubation, since 2010 I think, and I wonder how long it will still take to be incubated. Anyway, here are a few pictures. On the left you can see my two children; the older one will be nine years old, so he will be the next developer of the ZOO-Project, and next time, during the next workshop, we can use his own services. Indeed, developing a service is really simple in the ZOO-Project, as you can see with the simple lines of code you have here. It's really easy and straightforward. Unfortunately, we should probably introduce this talk by apologizing for not doing a real benchmark, because last year we did one, I think: we published CP testing, let's say conformance and performance testing, where we tried to really test WPS implementations. We had pretty good results on the ZOO server side; the other servers did not want to spend much time investigating and optimizing their setups, so obviously we got some issues with the other implementations. Anyway, you can still download the CP testing suite and run it on your own infrastructure. So here are the main components of the ZOO-Project. We have the ZOO-Kernel — I won't go too deep into the details of everything — which is a C implementation of WPS and is polyglot, meaning it is able to support many programming languages: you can implement your services in eight programming languages, if I recall correctly, and the newcomer was Ruby, which arrived in version 1.4.0. We also have the ZOO-Services, which I will talk about a bit later. We have the ZOO-API — you can also implement your services using JavaScript on the server side, so we implemented a specific ZOO-API based on OpenLayers 2.7 — and the newcomer in the family is ZOO-Client, which is an API that you can use on the client side to interact easily with your WPS implementation, any kind of WPS implementation, not only ZOO obviously. It supports both WPS 1.0 and WPS 2.0. The MapServer output support is not really new, but it is still so great that I want to present it to you: for every service which outputs GIS data, both vector and raster, the ZOO-Kernel will automatically publish your data through MapServer — depending on the data type of your result — as WMS, WFS or WCS. And I think we have in the picture a perfect example of what it is used for: you have the WMS which displays the shortest path on the right-hand side, you have the details of the path as a WFS request obviously, and then at the bottom you have the profile of your route, which was computed simply by sending the same GetFeature request — not the data itself but the GetFeature request — to the profile tool which was developed in 2009. So, what's new in 1.5.0 — I cannot also cover 1.4.0.
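Before the what's-new list: the "simple lines of code" slide isn't reproduced in the transcript, but a ZOO service written in Python — one of the languages the ZOO-Kernel can run — is roughly a function like the one below, paired with a .zcfg metadata file that declares the inputs and outputs. Names here are illustrative, and the exact conventions should be checked against the ZOO-Project documentation.

```python
# Sketch of a ZOO-Project service implemented in Python.
try:
    import zoo                      # provided by the ZOO-Kernel at runtime
    SUCCEEDED = zoo.SERVICE_SUCCEEDED
except ImportError:                 # allows reading/testing the file outside ZOO
    SUCCEEDED = 3

def Hello(conf, inputs, outputs):
    # conf, inputs and outputs are plain dictionaries handed in by the kernel.
    name = inputs["name"]["value"]
    outputs["Result"]["value"] = "Hello %s from the ZOO-Kernel" % name
    return SUCCEEDED
```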
In fact, one year has passed since the last FOSS4G and a lot of work was done on ZOO, so I cannot show you everything, but still: we have implemented the specification which was published, let's say, under the radar. The WPS 2.0 specification was published officially at the end of June, but it had strangely been online since the end of February, which was good for us, as we needed to implement it before the end of July anyway — it was only announced at the end of June. So we implemented GetStatus, GetResult and the Dismiss extension, which are the latest new requests available. We also implemented the metadata profile registry completely; even the profile browser is available. We modified a lot of the ZOO-Kernel to add the database backend, and hopefully this makes it scale perfectly. We also updated all the ZOO-Kernel documentation, and we will write all the end-user documentation — you can see it on the website; in fact, we have a new website which is almost ready, but we have not published it yet. So, what's new: we now offer a few new ZOO services. At the beginning we had to work — unfortunately we had to implement things, we had to develop services, we had to work to make services available, and that was never our initial goal, as I told you. So we developed some GDAL-based services, we also developed some CGAL-based services, and then suddenly Sören Gebbert arrived and offered the whole WPS community the GRASS bridge, a wrapper to automate the use of GRASS GIS through WPS — any WPS implementation. We thought to ourselves that this is a great idea, because you don't have to code anymore to have a new service available. So we did an implementation, in C this time, to add the Orfeo ToolBox applications; this alone offers you more than 70 services, so let's say that imagery and photogrammetry are now available as WPS. And we also integrated SAGA GIS — SAGA GIS alone offers you more than 300 services. So unfortunately there is no fun anymore, because you don't have to code. Here is a sample of using the ZOO-API to georeference your map online through MapMint, an amazing product which is based 99% on WPS. We also have the newcomer, which is ZOO-Client. You have a preview here of what ZOO-Client can do: it enables you to develop user-friendly interfaces and simplifies the way you communicate with WPS. It supports both the WPS 1.0 and WPS 2.0 versions, and this makes us able to implement automatic HTML form creation. We also developed a CKAN extension for WPS to bring WPS into the open data catalog, the CKAN one, so you no longer have only data or data stores available in your catalog — you also have the services. Obviously, what you can do from a catalog is browse your data, so now you can also browse your WPS services; from the WPS services you have the HTML forms which are automatically generated, you can input anything which is inside your catalog, and the output will obviously be stored within the catalog itself. The initial plan in 2009 was, as I told you, to develop the project mainly to help people use OSGeo software on the web. In 2010 the idea came to build a full-featured platform to publish maps using WPS, and in fact we made it: MapMint, and a new version is just around the corner, so stay tuned — you should see it arriving probably next week.
And we thank you for the award for the developers, thank you. Oh, sorry — last but not least, you are probably aware that there is a WPS plugin in QGIS. If you have tried it already, you probably know that it was not working, but we fixed it, and in fact we got good help from Rémi Cresson from IRSTEA, whom I want to thank here for all the contributions he made: he made it ten times faster. We still have to integrate his work, but you can already download the plugin from the right location — unfortunately, on my company website. And I would like to let you know that we are a welcoming community. There is a code sprint tomorrow; everybody who wants to contribute to the project, or who simply wants to use the software through the web, can just come. For instance, I present to you Knut Landmark, who is a new project developer, and the new project logo I just saw today, with the moose inside the logo. So we are also welcoming new animals, new programs, new anything — because we are not using WPS to do only GIS, we are using WPS for doing everything. And now I give the talk back to Jody Garnett for the GeoServer presentation. Wow, ZOO-Project sure is a hard act to follow. How much time do we have remaining? Three minutes? Well, that's great — no pressure. Well, I'm actually just really impressed by both of these projects: PyWPS doing a complete rewrite and changing license, and ZOO-Project just having an amazingly impressive amount of momentum. Now I just want to ask a quick question: how many people in this room use Web Processing Service? When we asked this question two years ago, we actually had to explain what Web Processing Service is. So you, in this audience, are the really big change this year — thank you so much for taking this OGC standard and helping us take on the world. Now, often I come to these WPS comparisons with poor little GeoServer and I feel a little bit bad, because we haven't spent a lot of time working on it, since there hasn't been a lot of customer interest. And so my big news this year is that, oh my gosh, there's been customer interest. GeoServer first developed WPS support in collaboration with a Korean institution, Sejong University — so a big thanks to Sejong University for starting us off on WPS in 2008, if you can believe it. Now, here's my big news for GeoServer: in the last year — after, what is it, seven years — I can finally recommend that a normal person install and run it. This actually comes down to a developer, Andrea Aime, working at GeoSolutions. He won't tell me who the customer was that paid for this, but we have actually, finally, made GeoServer WPS production ready. We have security controls: you can control who accesses the different processes that are being published. We introduced WPS execution limits so you can throttle how many resources are being consumed on your server — this prevents someone walking up and just knocking your server over for the fun of it. And finally, here's a funny one: you can finally actually list your processes and kill one of them, in case it's taking three days because it's an environmental model or something. The other thing that's really helped us become production ready is the Hazelcast clustering notifications: you can have a cluster of GeoServers all madly working away on whatever the heck it is you want, and they can communicate, keep track of what's going on, and hunt down the occasional process and kill it. So this is the big news for GeoServer.
Not trying to keep up with this lot, but we finally have a WPS that we can be proud of — and thanks to GeoSolutions and thanks to the GeoServer community for putting us on the map. Now, that said, there are a couple of things we do have planned. We would really like to implement WPS 2.0 — I believe both of your projects have started down that path already? You? Okay. So WPS 2.0 did come out recently. The other thing we'd like to focus on is that the OGC has been doing a better job of getting CITE conformance tests available for WPS, and we'd really like to have a crack at passing those. And the final thing that's not on my slides: Boundless is looking at wrapping up the various GRASS facilities and making them available through GeoServer. So, not a lot of news to report, but news I find really important: WPS matters to the people in this room, and GeoServer is finally ready for you to use in production. Thanks. Do we have any questions in our 30 seconds remaining? Two or three questions? So, I just wanted to mention that the PyWPS code sprint will be tomorrow as well, unless it overlaps with the OSGeo board meeting. Do you have any questions? I think you made a perfect presentation, so no more questions. Okay. Thank you for your presentation.
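Whichever of the three servers you try, the WPS 1.0.0 key-value-pair requests are the same. The sketch below just lists the processes a server advertises; the URL is a placeholder for your own GeoServer, PyWPS or ZOO endpoint.

```python
# List the process identifiers advertised by a WPS 1.0.0 server.
import requests
import xml.etree.ElementTree as ET

WPS_URL = "http://localhost:8080/geoserver/ows"   # placeholder endpoint

def list_processes(url):
    params = {"service": "WPS", "version": "1.0.0", "request": "GetCapabilities"}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    ns = {"ows": "http://www.opengis.net/ows/1.1"}
    # Each offered process is announced with an ows:Identifier element.
    return [el.text for el in root.findall(".//ows:Identifier", ns)]

if __name__ == "__main__":
    for identifier in list_processes(WPS_URL):
        print(identifier)
```

A DescribeProcess request (`request=DescribeProcess&identifier=...`) on any of those identifiers then tells you which inputs an Execute call needs.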
|
The yearly Web Processing Service (WPS) benchmark. Various WPS implementations will be tested regarding their capabilities, compliance with the standard, and performance. Traditionally, each participating project designates individuals from its community to take part in this talk, introduce their project and summarize its key features. The focus this year will be on compliance and interoperability. We will present the test set-up, the participating WPS projects and the results of the benchmark.
|
10.5446/32105 (DOI)
|
My name is Hiroo Mishun; please enjoy this presentation. I do research work for Osaka City and the Osaka City waterworks technology organization. Today I will present a use case of a disaster management system built with Geopaparazzi and MapGuide Open Source, and introduce a DMIS for waterworks. We have built a low-cost and effective DMIS; we call it the DMIS of the Osaka City waterworks. Thank you. First I will introduce DMIS: DMIS stands for Disaster Management Information System. Next I will describe conventional DMIS, and finally I will introduce the new system. Now, some background on Japan: after the Great East Japan Earthquake in 2011, local governments found it difficult to carry out restoration work quickly, for example rebuilding facilities and restoring lifelines. A DMIS gathers damage and restoration information about underground infrastructure such as water pipes and supports quick restoration based on that information. In Japan, DMIS have been discussed since the Kobe earthquake of 1995, but conventional DMIS have three problems. First, a DMIS is not used in daily work, so in an emergency the system is not easy to use. Second, it uses very expensive proprietary software and hardware, and construction and maintenance depend on a single supplier — for example, Osaka City has used a Mitsubishi DMIS system — so it is not good value for money. Third, the system is built on a client-server architecture and cannot easily be operated online from the field. For these reasons, many conventional DMIS are not used effectively. So what should a future DMIS look like? The system should have a user-friendly interface, so staff can use it without special training: in a large disaster, staff from other departments — east, west, north and south of the city — and essentially amateurs must be able to operate it. The system should be built from general-purpose open source software such as Geopaparazzi and use standard data formats, which keeps development and maintenance costs low. The system should use MCA (Multi-Channel Access) radio, which is robust in emergency situations, so that emergency information can still be collected and shared. For example, waterworks staff in the field collect damage information, and the emergency information is shared: data is sent from PCs and MCA terminals, the network for the data is MCA radio and the Internet, and emergency information can be shared at the same time. Staff select items from a menu to report emergency information, so the interface requires no special training, and the collected data can be viewed spatially in a GIS. The server runs entirely on published open source software, and of course new functions can be added and synchronized into the system. These are the components of the earlier prototype DMIS: field staff with PCs and MCA stations on the client side, software on the field side that works with the MCA terminals, a network of MCA radio and the Internet, and a server at the disaster management headquarters handling the PCs and the MCA station types. The server runs Apache, MapServer, PostgreSQL and PostGIS, together with the MCA station, GPS and MCA manager components. That earlier prototype used MCA data but did not yet use smartphones. Then, after the Great East Japan Earthquake, we updated the system. Capable smartphones had become easy to obtain, so the updated system uses smartphone clients together with the server. The updated system keeps much the same structure as the original prototype: the disaster management headquarters uses PCs and smartphones, and the smartphones in the field connect over the ordinary mobile data network. The server hardware is now a cloud server — the server role is the same — but this time we use MapGuide Open Source as the map engine. This prototype was built after the 2011 earthquake: we started within about a week and built it over roughly three months. MapGuide Open Source is used to convert AutoCAD DWG files to SDF and to build the database. On the client side we use smartphones; the OS is Android and the application is a customized Geopaparazzi. The posted data is KML and JSON. The server side is Linux, an Apache web server, PostgreSQL, PostGIS and MapGuide Open Source as the GIS engine. The base map from the Internet is from GSI Japan, the Geospatial Information Authority of Japan; Google, Bing and Yahoo base maps can also be switched in.
GIS、マップライドオープンソースのGISを使用します。インターネットのベースマップは、GSI-Japanの国土地理インチズです。Google、BIN、Yahooもベースマップを変えます。スマートフォームの最初のスクリーンです。インターネットのデータを見るために、新しいインスペクションを作ります。インスペクションデータを見るために、マップ、GPS、GPS、アップデータ、インスペクション、アーリープロットタイプの使用は、基本的にジオパパラチを使用します。ジオパパラチは、エンジンのリポータリングを使用します。パイプラインのトラブルを押すと、スクリーンの前に、フォトグロフや映像を押すことができます。このスクリーンは、マップ、GPS、ここにリポートをしています。オフラインマップもこのアプリケーションで使用できます。スマートフォンの上に良いプラットフォンを使用します。日本ではアイフォンを使用しません。なぜなら、バッドウェザーで使用するのは難しいです。スマートフォンを使用すると、空気が開くことができます。空気が開くことは、空気が開くことは、スクリーンの上に無くなっています。ネクサス7や5のスマートフォンは、バッドウェザーで良いレスポンスを使用します。このアプリケーションを使用しています。色が濃くて、大きなボタンを作ります。なぜこのアプリケーションを作るのか?なぜアマチュアアフェスのオペレーションは、このアプリケーションを使用しています。普通のアプリケーションは、国際国家人の人々が、16歳の長さで使用しています。スマートフォンを使用しません。その後、アプリケーションを使用しています。このアプリケーションを使用しています。1.プシ2インプットデータを使用しています。プシ2は、スマートフォンを使用しています。このアプリケーションは、スクリーンを使用しています。このアプリケーションは、スクリーンを使用しています。1.プシ2インプットデータを使用しています。2.プシ3インプットデータを使用しています。このアプリケーションは、スクリーンを使用しています。このアプリケーションを使用しています。アイコンのリポーターのデータポジションを使用しています。このアプリケーションは、スクリーンを使用しています。次は、水溜りボンドのファクトリーを使用しています。ファクトリーの色を使用しています。このアプリケーションは、サファーのデータを使用しています。このアプリケーションは、バルブリックのエマジェシンコールで、アイコンを使用しています。ですから、このアプリケーションを使用しています。水溜りボンドのファクトリーの young people 人が、オーサーシティのパイプラインは非常に困難です。このパイプラインは、スマートフォームで作ることができます。このパイプラインは、スマートフォームで作ることができます。このパイプラインは、リックサーベルで作ることができます。1. リックサーベルで、シートに出すデータをアップすることができます。2. リックサーベルで、シートに出すデータをアップすることができます。スマートフォームのサイトに、データをアップすることができます。3. リックサーベルで、シートに出すデータをアップすることができます。4. スピードリックサーベルで、シートに出すデータをアップすることができます。このプロタイプのシステムを、コンベシナルDMSの問題をつけます。このプロタイプのシステムは、シートに出すデータをアップすることができます。このプロタイプのシステムは、シートに出すデータをアップすることができます。このプロタイプのシステムは、シートに出すデータをアップすることができます。このプロタイプのシステムは、シートに出すデータをアップすることができます。明日、スプリンに行くのができます。私は、シーズンのデモを紹介します。ありがとうございました。ご視聴ありがとうございました。
|
In recent years, large-scale disasters have occurred in Asian countries including Japan, and rapid collection and sharing of disaster information is required in order to provide relief and support speedy restoration of civic services. This presentation discusses the integration and customization of FOSS4G field survey tools and a Web GIS server to facilitate aggregation and rapid sharing of disaster-related field information. Further, the system also provides real-time interaction between the field party and the coordination team. A case study of practical use of the system at the Osaka Water General Service (OWGS) Corporation will be demonstrated to present the salient features of the system. The system's usability in normal as well as disaster situations will be highlighted.
|
10.5446/32107 (DOI)
|
Yes, I'm in charge of this presentation and the title is Go MAP server. Some of you already know Golang by Google and some of you maybe do not — but don't worry, it's just a language. Today I'm going to cover three agenda items. Yes, three agenda items. The first is "Ready, Go", the second is GIS Server with Go, and finally: talk is cheap, let me show you the code. Before we start, I'd like to introduce myself. My name is Do Kyung-tae and I am an engineer at Samsung SDS. My Twitter and GitHub contacts are as you see on the screen. Let's start with "Ready, Go". In this picture you can find the URL where you can learn all about Go — the Golang site. At the bottom of the slide you can find a weird animal. It's the mascot of Go and his name is Gopher. Officially, the Gopher is male; on the blog I found code referring to the Gopher as "he". Anyway, Go is a very good language because of its concurrency and cross compilation. It has various tools, like Python, and Go will make you more productive in the workplace. It is very easy to use, and it is originally a general-purpose programming language — and that is not my opinion; it is in the spec on the Golang page: "Go is a general purpose language designed with system programming in mind", and so on. I will focus on: strongly typed and garbage collected, concurrent programming, packages, no classes but structs, and executable binaries. First of all, strongly typed. In this slide you can see three parts: the code, the snippet and the result. This is black... so it is not... can you see it? Thank you. Yeah. The main function is fmt.Println of a string plus an integer, but that will produce an error, so we should convert the integer to a string. Strong versus weak typing is about type conversion: Go does not allow implicit conversion. Second is garbage collection. When you write code using Java or C#, your code becomes byte code, which a virtual machine can recognize. But Go does not have a virtual machine. That means a Go executable file has its own GC. The most significant good part of this language is concurrent programming, well known as the goroutine. Like Node.js async programming, goroutines give you non-blocking I/O. There is a "go" keyword here. It is like JavaScript async AJAX code — it is non-blocking I/O and the code continues down to the bottom. The Go package system is similar to the Node.js npm module system. You can add packages as you want. A public function of a package starts with a capital letter and a private function starts with a small letter. And there are no classes in Go. Very confusing, but there is another language which has no classes — is there anyone who knows the answer? JavaScript has no classes. I mean JavaScript. But Go has structs. You can make a struct first and extend your struct as you wish. Like JavaScript prototype chaining, Go allows inheritance through structs. Go is basically a compiled language and does not use a virtual machine. By typing "go install" or "go build", you can turn your application into a binary. That is a very good part of Go for performance. In this chapter I covered basic information about Go: strongly typed means Go does not allow implicit type conversion; a Go binary has its own garbage collector; there is non-blocking I/O in Go, a package system, structs and binaries. If you want any further information about Go, you can find more good articles in the GitHub Go wiki. Let's move on to the next chapter. In this chapter I will tell you about the architecture of the Go map.
It's a very simple architecture, but I think it's very powerful. In this slide you can see a picture of the Go map result. Go map has an image service and will have a vector service. This client example is based on Leaflet and WMS. It's just a POC for now, but maybe it will become a better service. Today you can understand the architecture of the image service, so I will tell you about the image service. First of all, I think the database is the part most commonly understood here. Go has a SQL interface, and there are many database drivers for Go. You can find the pq package for Postgres; in my module, in my source, I use the pq package. Like newer languages such as Ruby or Scala, Go has ORM packages. Go map uses GORM. GORM is an ORM package for Go, and it is very simple: you give it your database name and the column names you want to use. There are many web frameworks in Go, but I think the Revel framework is the most powerful framework so far. We'll see how to use the Revel framework. Go has its own web API — we can make a web server very easily — but a framework makes it even easier to use Go. Image processor: in order to make an image service like WMS, image processing is very important. Go has a drawing API itself, but it's very poor. llgcode made it easier: llgcode is a GitHub ID, and he made the draw2d package on Google Code. Nowadays the Google Code repository is out of service, so this major package had a problem, but Ninja Spear solved it, so I can use this image processing package. This slide shows the whole mechanism of the Go map. When a client like Leaflet or OpenLayers makes a WMS request, the Revel web framework receives that call and its controller does the tasks you see on the screen: it takes the request, converts its parameters into a SQL query, queries PostGIS, and PostGIS answers it. After receiving the query result, it converts the result into objects. If you want to respond with a vector service, you convert the objects to GeoJSON. If you want to respond with an image service, you can make a PNG file using draw2d. So it's a very simple architecture. Let's move on to the final chapter. This time I will show the whole process of making Go map. First of all, you should install the Go language. When you install Go, the most important part is setting GOPATH and GOROOT. After setting GOPATH — I will show you the... excuse me. Here you can see the Go command. Maybe you can see here: this is the GOPATH on my computer, and from here is the workspace on my computer. If you use the "go get" command and pull the source from GitHub, you can use it from source. Sorry — this command is "go get", and this is the repository where Revel lives; "go get" pulls the source code from that repository. After installing Revel, you can make a new application with it, and you get a source tree like this. When you first get the code from Revel, there is an app folder, configuration, documentation, messages, a public folder and a tests folder. Here you can also see a shape folder, where I put the shape files. After installing Revel, making a new Revel app looks like that, and your folder hierarchy looks like that. Again, I downloaded public shape files from an open data site, converted the shape files to SQL files, and loaded the SQL into PostGIS. I made this in Go, and, like a Spring initializer, it is in the source.
In the application folder, init.go is the initializer of the application. There is a Revel controller; I import this controller and it enables GORM to be activated. This is the source of the GORM controller: it uses pq as I mentioned, plus Revel and GORM. So when the server starts, it activates the database and makes the ORM tool available for this application. I made the image service as I mentioned. There is a model: as I mentioned, I downloaded the shape file from the government site — it is public data — and I made the toilet struct; as I mentioned, Go has structs. The toilet has a geometry, and the Scan function converts the database binary to an object, so I can use it in the image and draw the points for the service. After setting up the GORM code and the drawing code, I made an image service. And there is a client that makes the WMS request. The Leaflet settings are shown in the slide — yes, it is very simple — and this is the other piece, the HTML template. After setting this up, we can see the result for the toilets. As I mentioned, I set up the PostGIS server and converted the shape file into PostGIS; it is shown just like this in QGIS. Then I run a command called "revel run" against my source and it starts. You can see the same thing in the browser, just like in QGIS. Here are the toilets, so you can use it very easily. That is all — my presentation was poor because of the presentation tool. Thank you for listening. If you have any further questions... Thank you. Are you aware of anyone else working on GIS in Go? I have one friend working on it, and he has looked and never found anybody else working on it. I didn't find anyone — but do you know? Yeah, I have one friend who has a popular application on top of PostGIS, and I will connect you; he is looking for people. Really? Yes. I am curious how you have found the draw2d package. There are a lot of limitations in terms of line caps and line joins and some color blending issues. Have you found limitations? It looks like there are a lot of improvements recently, but have you found the draw2d... Sorry. Do you feel like the draw2d package is working well for rendering lines and polygons, or have you just been using it for points? I just tested points and I tested lines, but you mentioned the caps and other effects — I didn't test that fully, but I will check it. It looks like they are making improvements on it. Basically, the Go draw module is an image package, so it is not a drawing package; draw2d is needed for manipulating the points. Thank you. Thank you.
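The image-service flow the speaker describes (WMS GetMap parameters → bounding box → SQL against PostGIS → PNG drawn from the rows) can be sketched compactly. For consistency with the other examples in this document the sketch below is in Python, not the project's actual Go/Revel/GORM/draw2d code, and the table and column names ("toilets", "geom") are assumptions:

# Rough sketch of the WMS-style image pipeline described in the talk
# (the real project uses Go + Revel + GORM + draw2d; this only shows the flow:
#  params -> bbox query -> PostGIS -> PNG). Table/column names are assumptions.
import io
import psycopg2
from PIL import Image, ImageDraw

def render_points(bbox, width=256, height=256, dsn="dbname=gomap"):
    minx, miny, maxx, maxy = bbox
    img = Image.new("RGBA", (width, height), (255, 255, 255, 0))
    draw = ImageDraw.Draw(img)
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """SELECT ST_X(geom), ST_Y(geom) FROM toilets
               WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)""",
            (minx, miny, maxx, maxy))
        for x, y in cur.fetchall():
            # Map geographic coordinates to pixel coordinates (y axis flipped).
            px = (x - minx) / (maxx - minx) * width
            py = (maxy - y) / (maxy - miny) * height
            draw.ellipse([px - 3, py - 3, px + 3, py + 3], fill="red")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()   # bytes to return as the GetMap response body

A vector response would skip the drawing step and serialize the rows as GeoJSON instead, which is the other branch described in the talk.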
|
GIS server architecture with Golang: finding a better way to build a GIS server with Go.
|
10.5446/32108 (DOI)
|
So, good afternoon everyone. I'll be your last presenter for this session. My name is Engineer Ben Hur Pintor. I'm from the University of the Philippines Department of Geodetic Engineering. On behalf of Mr. Niko Boy Catanyag and Assistant Professor Maria Rosario Concepcion Ang, I'll be presenting CourtVisionPH, a system for the extraction of field goal attempt locations and spatial analysis of shooting using broadcast basketball videos. A short outline of my presentation is as follows: first, an introduction to what the study and the system are about, the problems they want to address, and their objectives; then the methodology we used for the development and application of the system; a discussion of the results; and finally some conclusions and recommendations gathered from this study. I'm a really big basketball fan. A few years ago, in 2012, I encountered and read a paper by Dr. Kirk Goldsberry, which he presented at the 2012 MIT Sloan Sports Analytics Conference in Boston. Its title was CourtVision: New Visual and Spatial Analytics for the NBA. If you've read it, you know that he wanted to address the question of who the best shooter in the NBA was at that time, and the fact that most of the conventional statistics used in the NBA then failed to account for the spatial aspect of shooting. In his study, he used SportVU player tracking system data from 2005-2006 to 2011, and he found that a majority, if not all, of the shots taken during a basketball game were limited to a scoring area of 1,248 square feet. He divided it into one-square-foot cells, resulting in 1,248 shooting cells. Using that scoring area, he created two metrics, spread and range. Spread was namely how many unique cells a player attempts at least one field goal from, and range was the number of shooting cells wherein a player averages at least one point per attempt. As I've said, he used the SportVU player tracking system by STATS LLC for this study. The system uses six cameras above the court to track the players and the ball; it simultaneously observes and records a basketball game, and recently its data have become available online — NBA spatial data is available online. So, what did I get when I read that paper? I realized that basketball is spatial. And because I really, really love basketball — and if you've ever been to the Philippines, you know we're crazy about basketball; it is the number one sport in our country without a doubt, it has a rich history, it's part of our culture, and there's a lot of money involved — I wanted to do this kind of study in our country. The problem was, I encountered a few problems. First of all, there was no system in place to gather spatial information from basketball games. Unlike the NBA, with its SportVU player tracking system, there's no such thing in place in the Philippines, so I had a problem with data. Second, the type of analysis and management in the country is still very traditional. If you look at the premier basketball league in our country, the Philippine Basketball Association, they still limit their analysis and statistics keeping to simple counting and ratio statistics. For shooting, they use field goal percentage and three-point percentage, so it doesn't account, just as Dr.
Kirk Goldsberry's study pointed out, for the spatial aspect of shooting itself. So, what did we want to do? We wanted to develop a system that could extract field goal attempt locations from broadcast basketball videos. We decided to use broadcast basketball videos because they were the most readily available — they are usually uploaded on YouTube, so we could get them publicly without paying the providers for the videos. After extraction, we wanted to perform spatial analysis on the extracted data and present the results using statistics and visualizations. Aside from that, we wanted to show that spatial analysis of shooting has advantages over the conventional non-spatial statistics that were, and still are, used in the Philippines. For the methodology, we divided it into two parts. First is the development. We decided to use Python and some of its libraries: NumPy for computations, SQLite for the database, Tkinter for the GUI, OpenCV for video manipulation and rendering, and Pillow for the images. You'll notice most of these are standard Python modules, so we didn't need to add many external dependencies. We divided the system into three main parts or functionalities: a data management system to store the information we need, an extraction system to extract the field goal locations from the videos, and lastly a system to analyze the data extracted from those videos. For data management, the user can input all of that information into a database; there is a GUI for that, and on the right you can see a simple ER diagram of what the database contains. The extraction is manual: you have a video, the system plays the video, and the user manually selects the shots — the field goal attempts — from the video itself. This is the bottleneck of the system; it's a limitation. We wanted to automate this process, but the time constraints did not allow it, so we settled on manual extraction. What actually happens during extraction is this. Since we're using broadcast basketball videos from just one camera, and that camera is usually positioned obliquely to the court, the coordinate transformation we used was the 2D projective coordinate transformation given by those formulas. On the right is the court model we used, with 23 control points. As we all know, if you have two images — a map and a real-world image — you can get the transformation parameters by solving for them, provided you know the coordinates of points in both coordinate systems. We also defined the scoring area as a 15-meter by 10-meter grid composed of one-meter by one-meter cells, so we had a total of 150 cells for our spatial analysis. So, for extraction, this is what you do: you pause the video, select the control points from the image, and select the shooter.
After that, you let the system compute the transformation parameters. It outputs the computed RMSE, to validate whether it's a good transformation or not, and it back-projects the court model I showed you earlier onto the image, so you have an idea of whether the transformation was successful. In this example, you have the player in blue shooting the ball; then you have one, two, three, four, five control points. The system computes the transformation parameters, and in the second image you see the court back-projected — if you look at the court lines, the back-projection was actually quite good. You accept the computed parameters and coordinates, and the system inputs them into the database. If the transformation is not accurate, you can prevent the system from inputting it into the database, because an inaccurate transformation would just corrupt the cleanliness of the data. Aside from extraction, we wanted to perform the spatial analysis. Once you have gathered enough data, a user can query the system to create statistics or visualizations based on the extracted field goal attempt locations. This is the GUI of it: you put a query in that text box. We decided to use a text-based query so that you could create scenarios — you're not limited to querying teams, players, et cetera. You can query specific scenarios; say you wanted to know how well a team performs during the last three minutes of a game, you query team = Team A, quarter = 4, time left = 3, and the system tells you the results of what you're querying. After developing the system, we used it to study the performance of two teams in the University Athletic Association of the Philippines (UAAP) Season 76. This was in 2013-2014. Those two teams were the UP Fighting Maroons, which is my alma mater, and the De La Salle University Green Archers, who were the champions of that season. The data we used, as I've said, were videos publicly available on YouTube, and to validate the data we gathered, we used box scores and play-by-play data available online. We also excluded several field goal attempts from the database: shots outside the scoring area, and shots with bad RMSE or back-substitution results. When we checked the database, we found about a 20 percent difference between the number of extracted field goal attempts and those in the actual box scores of that league. We attributed this error, first, to the personal limitation of the user: since you're manually extracting field goal attempts, a user who is not that knowledgeable about basketball can miss some attempts.
And at the same time, as I've said, we excluded shots outside the scoring area as well as those with low RMSE, so those shots were not included in the database. From that, we were able to compute certain statistics — spatial statistics, basically: spread percentage, range percentage, what percentage of their shots are taken within a given distance from the basket, how well they shoot within that distance, and how many points they score per attempt. If you look at it, you'll notice that the UP Fighting Maroons and the DLSU Green Archers have almost the same distribution of shots in terms of distance, except that UP has a slight advantage for shots near the basket — less than one meter — and for three-pointers. In all other areas of the court, the DLSU Green Archers had a significantly better performance than the UP Fighting Maroons. What makes the system shine, beyond the statistics, is actually the visualizations. If you can see, and account for, where on the court a team or player performs better, then you can prepare for them better. This is the range percentage visualization of UP and its opponents: on the left is UP and on the right its opponents. The first thing you'll notice is that in areas near the basket UP performs really poorly — a yellow cell, an orange cell, and very small boxes. The sizes of the boxes indicate the number of field goal attempts taken in that cell, and the color indicates how many points are scored per attempt. So in this area UP performs really poorly compared to its opponents. From that alone you can conclude that this team has a problem converting and defending shots near the basket. Aside from that, we can look at this: this is La Salle and then its opponents. The main thing you can see is that La Salle converts at a high rate, especially in this area of the three-point line. So if you are playing them, you could prevent them from taking shots here and instead let them take shots over here or near this baseline, because that's where they perform more poorly. This is a comparison of two players, one from UP, Mr. Marata, and one from La Salle, Mr. Teng. Mr. Marata's range map has a peppered look — he takes a lot of shots everywhere, even though he is only actually effective here and about here. Compare that to Mr. Teng, where most of the shots are concentrated near the paint: he rarely takes mid-range shots or three-pointers, and of the shots he takes within the paint he usually succeeds — he has a very high points-per-attempt average in that area. These are the observations. As I've said, the two teams have a similar distribution when it comes to distance from the basket, with only a slight advantage for UP on shots near the rim and on three-pointers, but a significant difference in favor of La Salle — an advantage for La Salle on close-range to mid-range shots.
Again, UP's difficulties were in converting and defending shots near the basket, as you've seen on the map. Only 36 percent of their shots were taken near the basket, compared to their opponents, who took 52% of their shots within that area and converted them at 1.15 points per attempt. This is what I was saying about La Salle: they allow their opponents to take only 36% of their shots within that area and force them into long three-pointers and mid-range shots, which their opponents converted at a very poor rate of 0.62 points per attempt. On average, that figure should be at least 1 point per attempt. Again, these are Mr. Marata's and Mr. Teng's comparisons. For some conclusions: we were able to develop, or at least show, a proof of concept that a system can be built to extract field goal attempt locations from broadcast basketball videos, and we were able to perform spatial analysis of shooting using freely available resources and data. We were also able to demonstrate that spatial analysis provides better characterization and appreciation of shooting, because if I just give you a set of tables and numbers, you won't really appreciate it — you appreciate data more when it's visual, when you can see it on a map — and the visualizations provided by the system do that. We also found that, aside from the quality of the videos we used, the system was limited by how well the user could extract the field goal attempts from the videos, because completely and correctly extracting them makes a significant difference to the accuracy of the analysis. Recommendations: you could expand or change the database — don't use SQLite; use something more spatial, basically. You could add better video and image processing algorithms; that is actually a very promising area. You could automate the shot and position determination, since the system was built so that each of the three functionalities can be edited without changing how it communicates with the other two. Lastly, you could use your own cameras or video capture systems, coupled with image and video processing algorithms, to detect the shots on your own, much like the SportVU player tracking system does. In terms of application, the data fed into the system should be complete — you can use supplementary sources like the play-by-play data and box scores available online to make sure of that — and if you're using the system as it is right now, you should let a person with intimate knowledge of basketball be the one who extracts the data, because someone who is not a basketball fan will have a very difficult time extracting that information. So, just some references. Thank you.
And you have to manually note which number the player who's shooting has? Yes, yes. How much do you have to do manually? I saw the slide, but you have to point at the person shooting and type in the number of the person shooting? Yes, the number, and then the system checks the database whether that number is actually assigned to a specific player. If it's not, it will tell you: no, that's the wrong person, the wrong team, that's not someone who is playing. Could you say anything about the amount of manual work — how much time does it take for one person to do one match, for instance? For one basketball game, you can do it after the game in about two hours. So the next day you can provide it to the team if they're asking for data; you can finish it within a night. If you give me data today, I could give you the results tomorrow. Have you been trying to track the ball in the video? Not yet. We wanted to automatically track everything, but it was difficult to do given the time we had to finish the research, so we opted for manual extraction first and will look into automated tracking of players and the ball later. One of the big limitations was that the videos used were very low quality, so it's very hard to differentiate the ball from the players — if the ball is moving fast, it's very difficult to find, especially in low-quality videos. I think it was a very interesting presentation, so thank you. I'll echo that — this was very cool. Thank you. I have a couple of quick questions. First, I assume, because of the conference we're at, that this is open source. Could you talk about how hard it is for someone to set up this system? It took me about six — no, two months of thinking about how to create the system and then another two months actually creating it. Because I divided it into three parts, as I've said — data management, extraction, and spatial analysis — I tackled them separately, so that if I found better ways to do something, I could just change it. You'll notice I just used standard Python modules to make it; it's a proof of concept — I wanted to show that it could be done. Okay, but could other people run your code? Is it available somewhere? The problem is that I have an agreement right now with my thesis adviser at the university; I'm still waiting to see whether they will allow me to upload my code. Okay, fair enough. And the last one before I give up the mic: I'm curious how you figure out how high the ball is off the court when it's released, because you need to know how high someone is jumping to intersect with the transformed court. How do you figure that out? We didn't. We assumed that a jumping player jumps vertically — no, actually, we picked the location before he jumped. While his feet are still on the court, that is the position we picked as the spot where he took the shot. Okay, so you clicked with the mouse or something. Okay, thank you.
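The 2D projective coordinate transformation and RMSE check described in the talk amount to fitting a homography to the control points. A generic NumPy version of that fit — not the authors' code, whose release is still pending per the Q&A — looks roughly like this:

# Generic 2D projective (homography) fit from control points, as described:
# solve 8 parameters from >= 4 image/court point pairs, then report RMSE.
import numpy as np

def fit_projective(img_pts, court_pts):
    """img_pts, court_pts: lists of (x, y) pairs, at least 4 control points."""
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, court_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    p, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return p  # parameters a, b, c, d, e, f, g, h

def apply_projective(p, x, y):
    a, b, c, d, e, f, g, h = p
    w = g * x + h * y + 1.0
    return (a * x + b * y + c) / w, (d * x + e * y + f) / w

def rmse(p, img_pts, court_pts):
    errs = [np.hypot(*(np.subtract(apply_projective(p, x, y), (X, Y))))
            for (x, y), (X, Y) in zip(img_pts, court_pts)]
    return float(np.sqrt(np.mean(np.square(errs))))

With five control points, as in the example shown in the talk, the fit is overdetermined, which is what makes the RMSE a meaningful check before accepting a shot location into the database.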
|
The presentation is about the development and application of CourtVisionPH. CourtVisionPH is a system developed for the extraction, storage, and analysis of basketball-related spatial information. It focuses on the extraction of field goal attempt (FGA) locations from broadcast basketball videos and the spatial analysis of shooting by means of statistics and maps/visualizations. The system was developed using the Python programming language. It features a database for storing spatial and non-spatial information and a Graphical User Interface (GUI) to help the user and the system interact. The modules used in the development include Tkinter for the GUI, SQLite for the database, NumPy for the computations, Pillow for image processing, and OpenCV for video rendering. The system has three independent but interconnected functionalities, each with its own specific task: (1) Data Management, which handles database connections; (2) Spatial Data Extraction, for user-assisted extraction of FGA locations from videos using 2D projective coordinate transformation and validation of transformed FGA locations using RMSE and back-transformation; and (3) Spatial Analysis, which computes statistics, generates maps/visualizations, and performs query-based analysis. After the development of the system, it was applied to the UP Fighting Maroons and the DLSU Green Archers during the 2nd Round of University Athletic Association of the Philippines (UAAP) Season 76 (2013-2014). Videos publicly available online through youtube.com were used for extracting field goal attempt locations. Shots taken too far from the basket (half-court heaves, etc.) or those with bad RMSE or back-substitution results were excluded from the extraction. The extracted FGA locations were then validated using box scores. After which, the system was used to analyze and compare the two teams and their players using statistics and visualizations and to show that spatial analysis provides more information and allows for better characterization and appreciation of shooting than conventional, non-spatial techniques.
|
10.5446/32109 (DOI)
|
...this is not today's presentation... the fourth... OK, the fifth is attached... OK. Next, I will introduce geography education, and I hope you will enjoy it. This time I will talk about education in Japan. First, about teaching geography in Japan: the difficulty for teachers is that they are very busy, so they cannot teach geography in this way. In children's education, geography with these tools is not being taught, and as a result students cannot enjoy geography. We want to change this way of teaching. Next, starting from my own concerns, I will introduce our geography education practice and the teaching materials we provide.
|
We present a practical case in which students are able to handle geospatial information and make maps by using FOSS4G. In recent years, the informatization of education has been progressing in Japan. The aim is to distribute one information device per child by 2020 through the informatization of education. However, it is not easy to implement information devices as an educational method, and the situation is the same with respect to geographic information technology in education. Against this background, we founded an NPO in 2011 in order to help schools by using geographic information technology. We have carried out technical workshops for teachers, development of GIS teaching materials, and the provision of curricula. It is especially important to use geographic information technologies in geographical and historical education. In geography and history classrooms, students can gain a realistic understanding by using GIS teaching materials. Therefore, we provide teaching materials created with GIS for teachers and students. GIS can be used to develop teaching materials that maximize the imagination of students. We have mainly been using QGIS in the development of teaching materials, with KML files output from QGIS. The method is to provide a web database system of these KML teaching materials, called OpenTextMap. FOSS4G has been effective in this activity. Our goal in this talk is to share this educational practice with FOSS4G with other people.
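The abstract mentions exporting QGIS teaching materials as KML for the OpenTextMap web database. Outside QGIS itself, the same conversion can be scripted with GDAL/OGR; the file names below are placeholders:

# Converting a prepared layer to KML with GDAL/OGR (file names are placeholders;
# inside QGIS the equivalent is "Export > Save Features As..." with KML output).
from osgeo import gdal

gdal.UseExceptions()
gdal.VectorTranslate("historical_sites.kml", "historical_sites.shp", format="KML")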
|
10.5446/32110 (DOI)
|
This presentation is about a hydrological model built on open source software. Many people here use QGIS. GRM was developed around 2012, with further work in 2013. GRM uses the Green-Ampt and kinematic wave models; the Green-Ampt model is applied for infiltration. GRM produces simulation results such as time series at watch points set by the user. For preparing the data, the distributed model is tailored to grid-based modeling; GRM is used, and it is designed with a console environment, an environment with a GUI, a real-time module, auto-calibration, and so on. On the GIS side, the functions are: selecting layers, reading cell values, map operations, drawing grid lines, and displaying the flow directions; GRM reads the cell values and sets the control volumes and model parameters. GDAL is another piece of software: it lets us process the GIS data with open source tools. There is also interface software between GRM and PEST: the main function of GRM-PEST is to make PEST input files such as PCF, PIF, PTF and RMF. The user can select the GRM parameters to estimate and set the observed data, and if the user wants to reduce the running time, the parallel PEST options can be selected. Next is an application in the Cheongmicheon catchment. The Cheongmicheon catchment is an IHP test basin in Korea, and it has TDR stations measuring soil moisture content. The soil moisture content gives the initial soil saturation condition for the simulation; in the chart, runs driven by different data are compared — the blue line uses one input and the red another — so in this case GRM makes use of the soil saturation. The next application is landslides: the rainfall is reproduced and the soil saturation along the landslide trail is simulated. Major landslides occurred in Korea in 2006 and 2011, with rainfall on the order of 50 mm. The parameter editing windows of GRM are shown. In the discussion there was a question about whether the source code is open and how it is maintained, and about installation.
Work on the analysis is still in progress, but it is free. Thank you. Thank you for listening.
|
This presentation shows the processes and methods for developing distributed rainfall-runoff modeling system using open source softwares. The objective of this study is to develop a MapWindow plug-in for running GRM (Grid based Rainfall-runoff Model) model (MW-GRM) in open source GIS software environment. MW-GRM consists of the GRM model, physically based rainfall-runoff model developed by Korea Institute of Civil Engineering and Building Technology (KICT), for runoff simulation, pre and post processing tools for temporal and spatial data processing, and auto-calibration process. Each component is integrated in the modeling software (MW-GRM), and can be run by selecting the MW-GRM menus. In developing MW-GRM, free software and open source softwares are used. GRM model was developed by using Visual Basic .NET included in Microsoft Visual Studio 2013 express, pre and post processing tools were developed by using MapWindow (Daniel, 2006) and GDAL (Geospatial Data Abstraction Library), and PEST (John, 2010) model was used in the auto-calibration process. The modeling system (MW-GRM) was developed as MapWindow plug-in. System environment was Window 7 64bit. MapWindow GIS ActiveX control and libraries were used to manipulate geographic data and set up GRM input parameters. ESRI ASCII and GeoTIFF raster data formats, supported by MapWindow and GDAL, were applied and shape file (ESRI, 1997) was used in vector data processing. GDAL is a library for translating vector and raster geospatial data. In this study, GDAL execution files were used to develop pre and post processing tools. The tools include data format conversion, spatial interpolation, clipping, and resampling functions for one or more raster layers. PEST is a model-independent parameter estimation software. Parameter estimation and uncertainty analysis can be carried out using PEST for model calibration and sensitive analysis. PEST is developed as an open source software, and single and parallel execution files are provided. This study developed GRM uncertainty analysis GUI as an interface system of GRM and PEST. GRM model had been a DLL type library including APIs to support developing another application. But PEST needs a model execution file, which can run in console execution window without user intervention. This study developed GRM execution file (GRMMP.exe) running in console window. It can simulate runoff using GRM project file, and no user intervention is allowed after the simulation has started. GRM uncertainty analysis GUI makes PEST input files (pcf, pif, ptf, rmf, etc.) by setting GRM parameters, observed data, PEST parameters, and selecting single or parallel PEST and PEST run automatically using GRMMP.exe file. In this study, all the functions necessary to develop GRM modeling system and pre and post processing tools could be implemented by using open source software. And MapWindow plug-in of GRM model can simulate runoff in open GIS environment including automatic model calibration using PEST. The study results can contribute to the wide spread of physically based rainfall-runoff modeling. And this study can present useful information in developing distributed runoff modeling system using open source software.
|
10.5446/32113 (DOI)
|
on a Good morning. I am Glenn Debra of Tine Drabau Philippines. On behalf of my two researchers, Alvaro Sobilo, Miss Aurocell Alejandro. So, we will be sharing to you our research that is a little bit free of source software, which is QGIS and different data and the profitability analysis of wildlife for climate change adaptation. So, just to give you a review of our price consumption in the Philippines. So, 97 million Philippines price, which accounts for 20% of the average household expenditures. But then for the last decade, the country has been able to produce supply the demand of this price consumption. In 2010, it imports 20 million tons making the country the largest price import in 2010. So, the graph shows our price imports from 2003 to 2012. Now, sometimes. But ironically, the country is home to the most under-rised scientists based under the International Rice Research Institute in the West Banyos Laguna. And also, it has a bank of 112,000 types of rice variety making it the world's biggest rice collection in the East Banyos Laguna. So, for the constraints, why we are able to produce or supply demand for our consumption, the 300,000 square kilometers of land area, which are very mountainous and of small islands. We have 7,000 on the island. So, most of these are... there are only 43,000 square kilometers of harvest area used for rice production, comparing to the rest of the countries. So, if you can see that in the graph, the last part of the Philippines, the largest is India, Thailand, and Indonesia, and Thailand. So, other also, the conclusion with the constraints, under-rised farm infrastructure, the conversion of agricultural land to the residential, commercial, and land. And also, the biggest is Typhoons. So, you can see these are the Typhoons that hit the Philippines in billions of pesos. Then also, the red one, this is the year 2011, 2012, 2013. This is Typhoon that hit Mindana, which is a sildam hit by Typhoons. And also, the rapid population drops of 2% per year. So, we are in mind to be tanking that information, that's 97 million. Now, according to our view, we have reached 100 million. So, to address that concern, the Department of Agriculture of the Philippines is launching Food Stakeholds Efficiency Program. So, one of the, I mean, these are, they are looking for crops that would supplement rice. Not necessarily replace rice, but simply to supplement rice. So, one of the crops that we see, is the Aglai or the Colourful Grounds, here. They transplant the family that collect corn and rice, that is a good source of aglai. So, this crop is not known to most of the Filipinos. So, this crop is then cultivated by the Sabanin tribe in San Blanca, New Sur, for centuries. And this is the Sabanin tribe that is being planted by the Sabanin tribe in the now. And then, Philippines, they are region 9. So, because the Philippines is propagated into regions. And also, these are other red portions of the other areas in the Philippines that grows aglai. Some are for food consumption, for wine making. You can see, the country's<|transcribe|> So, my country would be in region 11, in Davao City, or Davao region. So, there are the waterway crops that have been attacked, the ability to try us in some areas in the country. And tribes within poor quality soil, gross well in sloping area. Today's water lagging and it's best resistant. So, our objective is to determine the suitable area for aglai production in Davao region or region 11. 
So, our objective is to determine the suitable area for aglai production in Davao region or region 11. So, our objective is to determine the suitable area for aglai production in Davao region or region 11. So, our objective is to determine the suitable area for aglai production in Davao region or region 11. So, our objective is to determine the slope and elevation using QGIS. So, the data will be the soil type, which is from the Pilipa Department of Agriculture. It's a slope class, which is from the Pilipin GIS. It's an online site that you can get all the data, data from or adjust the special data. And then we have our digital elevation model from the USGS. So, these are the prominent soil type in region 11 based on the Pilipa Department of Agriculture. And then, our base on the DEA and the slope classification. And then we remove the from 0 to 8 percent slope because like I've said, this study will have, because we'll use aglai as a supplement to rise. So, we remove the slope because that would be aglai slope is suitable for a rise production. So, we just remove that so that the remaining will be for aglai production for continuation. And then, with the limitation of our computer super, we just classify the elevation into lowland and upland. So, the lowland will be less than 100 meters and upland will be 100 meters. So, for the result, the red one will be the areas that are supposed to be suitable for aglai cultivation. So, we have some part of the northern part of the Vau city and the central part of the Vau de Norte and some areas in, I mean, I'm not sure if you know the place, but then these are the areas that we can plant or cultivate at like. So, most part of the Vau Rintal which is the East Bank Sea word. And then, most part of the Vau Rintal. So, these are the areas that we can, suitable for based on the three variables, suitable for aglai cultivation. So, that's it for our presentation. Because I was thinking that my presentation will be like 20 minutes, but then, decreasing all the slides, I mean, the research into the small slides, number of slides. Then, because we are down to 20 minutes, so that's the end of my presentation. Thank you. Thank you. Yes, sorry. Yes, yes, yes. So, for the first question, I ran it for about three months, but then, because the data is available, I mean, because I only have three variables to consider. And so, it would be, I mean, because they are available online, okay, with other also related, related to research regarding this study. But then, my, the literature is more complex, because they have this access into most, I mean, the data needed. So, what I have only is three variables. But then, because like I said, our geographic performance system in our countries, not very young, but because the government is only our, the one who is using GIS, and they're using this expensive softwares, like ArcView, can I say that, ArcView and ArcGIS. And then, so they are limited resource person that could use this technology. So, in our part, we discovered, I mean, I've been using QJS, like, four years from, four years now, for four years now. So, of course, I'm also new to the software. But then, we are trying our best, because we have to help the government in, I mean, spreading, like, for example, like GeoHusard MOP, because every, like in my place, double city, or in Mindanao, it was supposed to be typhoon free. And then, the government has been leaving these GeoHusard MOPs. 
But then, every time, it will take a long time for them to distribute these MOPs, and then another typhoon came, so they had to be, to update. And then, because they have no resource, I mean, persons to do that, we have limited persons to do that. So, what we are trying now is, we are trying our best to use, or to implement, or to help the local government units to use free open source softwares. So, this one will be our, supposed to be a showcase, or have to convince them that we can use this one. Now, for the second question, yes, I, supposed to be, that would be the plan. We have to consider the GeoHusard areas. For example, in that northern part, the country, that's the typhoon that was hit by Pablo, at the time. And so, the plan, that's the plan. They have to consider that, and also the land use for the area, because there are areas, even though it is mountainous, there are other also crops that have been planted there. But then, because of the limitations of the variable, then we just settle for these variables first. Okay. Yes. I'm not sure if I got you clearly, so I'm curious about your data source. Is it from the government or other academic institutes? And the second question is, how do you plan to use the result of your analysis, or what you should plan, or how are you using the result of your analysis? Are there programs to help farmers to grow at lie in those areas? And if yes, what are the farmers feedback? So, thank you for that question. Now, the sources would be, I'm not sure how to play. Right. I mean, sorry. So the sources would be like the, for the soil type, you have the department of agriculture in our country. So, I mean, they say, I mean, that the data that they gave us will be from 1950. So, I mean, so we are asking because I, that does very soon why I have, I found, I mean, I have this research in Adelaide because when we go to their office, and then there's a lot of this research, been stuck then not been given to the farmers. So, I have, so I asked them to, if I can get some of these research, I mean, about Adelaide. So I just, so their problem is that they to implement that to the farmers, give it to the farmers. So what, because like I said, they have these trials, adaptability trials in some regions of the country, and that was the result. So they are, so in that program, so they are planning to, but then they have that program, very, very good program. So foods, what did you call that, I forget. Okay, so they have this adaptability trials in some regions. And then, so they're planning, I mean, but then the irony of that is that we didn't, if I was, I'm an, an academic, right, so in, at the University, but then nobody knows about Adelaide. We have this shortage in rice. I mean, our food security is, I mean, at stake here, so, but then nobody here about Adelaide. So then I have seen that they have this program, then they've been, have this adaptability trial, sorry. But then the farmers don't know about it. They still, they're still working on, they have these problems in food security about rice, and that's it. So these are other crops that can supplement, I mean, our food security or our food supply, I mean, supplement rice. I think I said. I have a question that did you, did you evaluate or analyze the impacts of different facts, factors on the results? 
And for example, that in some regions, the factor of, the factor of climate may be crucial, crucial to the results and in some other, other regions, slope may be the main factors on the impact on the results. Thank you. Thank you. So there are, like I've said, I'm limited to that, to that variables, but then I am from, I live in that, in that, in that region. And I've been also doing some, my research is about also some environmental research. So I've been, been, I'm also a mountaineer. So I climbed to the highest, I mean, in our country, Montapo. So I have this idea of what it looks like, the climate. But like I've said, because we have this other study about certain crops, and then the variables, because the Philippines is, I mean, especially in my, in my region, Armindanao is very suitable for, because it is tropical. And then the climate and the temperature is suitable for most of the crops that especially adly, or other crops. So I just removed that variable, even though, yes, it would be very helpful if I have that variables, that information about the temperature. But then, like I've said, our resources is limited. So I mean, we have the Manila Observatory of Atinio de Manila University, which they are, they have that information. But then, because of the lack of time, so I wasn't able to get that data. But then I'm doing this research in the assumption that most of crops would be, would, I mean, would be good for, in that region. I am sorry, I wasn't able to, okay, I hope I answered your question. Going back to your question about the prog, how am I going to plan to, so like I've said, this is a stepping stone for our GIS, or using free open source softwares to, to our, to promote using any products that are available. Any products because, or any crops in our, in our government, because like I've said, when GIS softwares are very expensive, and only the government can, can afford, even though our institute is planning to buy, because they are very supportive, when I introduced GIS in our, in our university, they are very interested in using, I mean, conducting, because we are, we have in our budget for either, in our research, we have, even though it's not that big, we have 10 million pesos a year for our research. Okay. It's not that big compared to other universities, but, but then it is a good start. And then they are, most of the research, especially if it is GIS based, and then they are interested to, to fund. So, like for an academic, for a university, we have this, I mean, like I said, it would be very easy for us to communicate with, with the government. Okay. So especially if you're going to show, show this one, of course, we have this, there will be a lot of improvements for this research, but then it's a good start for us. Sir, I hope I answered your question. Thank you for your valuable presentation. I don't think there are climate factors in your research. No. Do you have, do you consider climate factors such as climate change scenario, with me five I should be. Yes, that would be, I want to consider that one, but then we don't have, we don't have that in data yet, because it would be, like I said, we have the lack of time, but then, but then we are planning because I have already suggested this research into our, our, or other partnership between Fuk GM. If we do have a, if we have any specific point, then we've got something that will trigger the public's. So, in other words, we have already decided to do the consortium. So we're not able to communicate this one. 
I mean, because of the lack of time. But they are willing to help in improving the data. Okay. Thank you. Yeah.
|
With 43,000 square kilometers of rice-producing farm lands, the Philippines is considered the largest rice importer in the world according to World Rice Statistics (2008). The increasing demand for imported rice in the country has been largely attributed to topography, underutilized farm infrastructure, typhoons and rapid population growth. Given the need to supply a stable food source to Filipinos, the Department of Agriculture (DA) has been studying the feasibility of the mass production of Coix lacryma-jobi L., or Adlai, a traditional food source abundantly grown by indigenous people in the country for centuries. In contrast to rice, Adlai is naturally resilient to pests, diseases, droughts and floods, and does not need irrigation. In its study, the Department of Agriculture wanted to evaluate the adaptability of Adlai in different parts of the country so that it could become a complementary staple food for Filipinos. The results of the tests in four regions (II, IV, V, and IX) have been very promising. The study found that Adlai does not need fertilizers and insecticides, it can survive with minimal rainfall, and it can be planted in upland areas. To complement the current work of the Department of Agriculture, this study aims to map the agro-edaphic zones, i.e. the areas that are suitable for the cultivation of Adlai. It will apply free open source software (QGIS) and open data sources (ASTER GDEM, PhilGIS, and DA). The selected set of variables (slope, elevation, and soil order) will be cross-tabulated, and the result will represent generalized classes of associated soil orders in combination with both elevation and slope. The result of this study could then be utilized by the Department of Agriculture to determine areas in Region 11, excluding the arable land for rice, that are suitable for the cultivation of Adlai. Sources: Japan-Space Systems, PhilGIS, Manila Observatory, Environmental Science for Social Change, Department of Agriculture, Bureau of Agricultural Research.
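The cross-tabulation step mentioned above is conceptually simple; as a rough illustration (not the study's actual workflow), the toy arrays below stand in for classified slope, elevation and soil-order rasters that would normally be exported from QGIS, and the class codes are invented.

# Minimal sketch of cross-tabulating three classified rasters (slope,
# elevation, soil order) into combined agro-edaphic zone codes. Toy arrays
# stand in for rasters that would normally be read from GeoTIFFs.
import numpy as np
from collections import Counter

# Hypothetical class codes per 250 m cell (all three rasters share a shape).
slope_cls = np.array([[1, 1, 2], [2, 3, 3]])   # 1=flat, 2=moderate, 3=steep
elev_cls  = np.array([[1, 2, 2], [3, 3, 1]])   # 1=lowland, 2=upland, 3=highland
soil_cls  = np.array([[4, 4, 5], [5, 5, 4]])   # soil-order codes

# Combine the three classifications into a single zone code per cell.
zone = slope_cls * 100 + elev_cls * 10 + soil_cls

# Tabulate how many cells fall into each combined class.
counts = Counter(zone.ravel().tolist())
for code, n in sorted(counts.items()):
    print(f"zone {code}: {n} cells")

Each combined code then corresponds to one generalized class of soil order in combination with elevation and slope, whose suitability for Adlai can be judged from its component classes.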
|
10.5446/32114 (DOI)
|
Hello, sorry for this wait. My name is Ismi Al-Turj. I'm Portuguese but I'm working for UNEP-WCMC in Cambridge in the United Kingdom, and I'm doing a presentation on one of our projects, which is called Protected Planet. So this is our office in sunny England. We have around 120 people working there, in several programs covering different areas of biodiversity; basically we are the biodiversity assessment branch of UNEP, the United Nations Environment Programme. We have people working on protected areas, climate change and ecosystem assessments, and we also have an informatics team, which is where I work as a data manager and geospatial developer. We create web products to show the world the work that our scientists do, in order to improve policies around the world and UN policies on biodiversity. This project is called Protected Planet, and basically it shows all the protected areas of the world on one website: you can just go to this URL and start exploring our protected planet. It is based on the World Database on Protected Areas, a database that has been growing for almost 30 years. It is a joint effort by IUCN and UNEP, and every year there is a report on this World Database on Protected Areas. It is also being constantly updated: we have monthly releases that you can download by pressing here, and we try to gather all the information on protected areas on our planet. For each protected area we have a page that shows the map and some other information, like the IUCN category, the area, whether it is a national park or just a regional park; all the information that we can get from governments goes onto this protected area page. And then we have our GitHub repository. The code of all our projects, our web tools, websites and databases, is in that repository, so everyone can go there, get the code and use it in different ways. Everything that we have and that we produce is now open source. Let's go back to our dataset. We have a dataset that is in a file geodatabase, due to legacy issues: in our informatics program we only work with open source, but we still have other departments in our organization that use Esri. We are perhaps half still using Esri and half now using open source, including open source desktop solutions like QGIS. We have several projects running in developing countries where we are introducing QGIS, but in this specific case our protected areas program still manages the data in a file geodatabase that we have to convert every month for our website. We have more than 200,000 protected areas in this database. Ninety-one percent of them have their boundaries defined, so we have polygons for them, and just nine percent are from countries that don't have their own GIS system or their protected areas in GIS format, so for those we only have points with latitude and longitude. Some of them also have an area, so one way of calculating statistics is to buffer those points according to their area. Our protected areas program does a monthly release of this database, so they are in constant contact with the different countries.
Countries send them their new or updated protected areas in GIS format and these are added; if protected areas have been removed, they are removed as well. So we try to keep this database always updated, but of course it's difficult because we work with more than 200 countries, so it's not updated day-to-day. So on one side we have one department, one program of our organization, the protected areas program: they contact countries, update the database and create an annual report with annual statistics, and this team still works with Esri software. And then we have the informatics team: we only work with open source, we built the website, we create tasks to convert the file geodatabase to PostGIS, which is the database behind our website, and we try to automate the statistics. This is very important: the team working with Esri software usually takes almost a month to create the statistics, but if we want statistics updated every month on our website we have to speed up that process, and we are using PostGIS to do that. Our objective is to calculate, every month and automatically, without any human spending time on it, the territory covered in each country by protected areas. We have both terrestrial and marine protected areas, so we must know the area of land covered by protected areas and the area of marine waters, i.e. exclusive economic zones and territorial seas, covered by marine protected areas; then we can give the percentage of each country's territory that is covered by protected areas. We work mainly with the Ruby on Rails web development framework, with a PostGIS database to calculate the statistics. We loop through all the countries, so we don't have one script doing everything; we go country by country automatically, because the process is quicker like this. We store the geometries in a PostGIS table and then we show the data on the website using Ruby on Rails and JavaScript. This is the result, for instance, for South Korea; the brightness is not very good for this. Is it possible to change the brightness on the projector? Just the brightness, as it is too bright. Okay, anyway, you can see those small bars there: for South Korea we have the percentage of territory covered by protected areas, which is around eight percent for terrestrial areas and around four percent for marine areas. We have these kinds of results for every country in the world, and they are updated every month. This is not so easy, because many of the protected areas overlap. Here is an example in Korea: we have two different protected areas, but in fact they are in the same place. This happens because a place can be a World Heritage Site and also a national park; in Europe, for instance, the Natura 2000 areas can also be regional parks. So in some places we have different laws protecting the same place, and we can have ten protected areas overlapping in the same location. This brings a huge challenge, because if we want to calculate the area we can't simply sum up all the areas in the country; we need a flat dataset with just the area that is covered by protected areas. And this is a big challenge in countries like Germany, which has very small protected areas but thousands of them.
So it's a huge spatial analysis that we need to do every month, for countries with very complex geometries and very different ways of protecting areas and different policies. We have an entire planet like this: a lot of places covered by protected areas, and we need to calculate the statistics for all of it. Firstly, we need to dissolve all our geometries into one flat dataset. We use a single PostGIS script. For the non-developers here I'll try to keep it simple: those few lines of code are the basic selection for getting all the protected areas, but what we did was grow that small script into a very big script to meet all our requirements. So firstly we dissolve all our geometries. For that we need to split by country: we have a column that says which country each protected area is in, so we can go country by country using our Ruby script. We also need to split by type, since we have marine and terrestrial protected areas, as you can see there. I'm also showing images of example places as we build up the query. We then need to add the point geometries: since for those we have no polygons, we use the reported area to create buffers and we add those point geometries as polygons with the buffered area. We simplify geometries, and this is where we save a lot of time: with this we go from more than a month of processing time to just a few hours. For the countries with the most protected areas we simplify the geometries in a way that does not influence the overall results, so the margin of error is smaller than what we show on the statistics page. We simplify the geometries for countries like Great Britain, the USA, Canada, Germany, Spain, New Zealand, Poland and the Czech Republic. We also deal with transnational protected areas. There are not many, but several protected areas span two countries, like this one in the United States and Canada; if we want statistics just for the United States, we need to clip with the country boundaries to separate the United States part from the Canadian part. We also have some protected areas in our database that are still proposed or were not reported; these we do not use for calculating statistics. And then we have to make all the geometries valid, which is a more technical point. Then we populate the countries table with the flat geometries, so we have a flat dataset of the protected areas per country, and then we calculate the statistics properly. So we finish the geospatial part there: we have geometry fields in a PostGIS table with just the areas that we want, and now it's easy, it's basically calculating the area. We convert it to the Mollweide equal-area projection so that everything is okay, and from that we get the percentages. This is just a circle chart here: the United States has 14 percent of its land area protected. We also need to handle null values. China has 17 percent of its land area protected. And then we store all our information in a stats table. This is basically the work that is done by this script.
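As a very rough sketch of the per-country processing just described (this is not the actual Protected Planet code: the table names, columns and connection string are invented, and the point buffering, simplification and clipping steps are omitted), the dissolve-and-measure idea could look like the Python/PostGIS snippet below. For simplicity it measures areas on the spheroid via the geography type instead of reprojecting to Mollweide as in the talk.

# Loop over countries, dissolve their terrestrial protected areas in PostGIS,
# and compute the protected share of each country's land area.
import psycopg2

DISSOLVED_AREA_SQL = """
    SELECT ST_Area(ST_Union(ST_MakeValid(geom))::geography)   -- m^2 on the spheroid
    FROM wdpa_polygons                                        -- hypothetical table
    WHERE iso3 = %s
      AND marine = FALSE
      AND status NOT IN ('Proposed', 'Not Reported');
"""

COUNTRY_AREA_SQL = """
    SELECT ST_Area(geom::geography) FROM countries WHERE iso3 = %s;
"""

def protected_land_percentage(conn, iso3):
    """Dissolve one country's terrestrial protected areas and compare to its land area."""
    with conn.cursor() as cur:
        cur.execute(DISSOLVED_AREA_SQL, (iso3,))
        protected_m2 = cur.fetchone()[0] or 0.0
        cur.execute(COUNTRY_AREA_SQL, (iso3,))
        land_m2 = cur.fetchone()[0]
    return 100.0 * protected_m2 / land_m2

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=wdpa")   # hypothetical connection string
    for iso3 in ("KOR", "PHL", "USA"):
        print(iso3, round(protected_land_percentage(conn, iso3), 1), "%")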
So my final remarks: with open source and with PostGIS we are improving the way we work. The work our team usually does with Esri in one month we can speed up to six hours by dissolving and simplifying the geometries. We still do it the long way, without simplifying geometries, once a year for our main report, but every month we can update the statistics just by running a script for six hours, without a person needing to sit there doing calculations; they are automatic, and we have a fully open source solution. I'll show you how this works; the internet speed is not so fast here. This is our website, and all the data you see here is in Postgres. You can freely download the dataset in this format, so if you want to use it in your own desktop GIS software you can just download it here; you can also check the terms and conditions on the website. For instance, if I type 'Philippines' here, as in the previous presentation, I can go there and get the information about its protected areas: they have 51,000 square kilometres protected, 11 percent of their land area and 1 percent of their marine area. So you can just search for your country and get this information. Thank you very much; if you have any questions, please come to me.
First of all, it's great work, and it's great to have the website open source and the whole dataset downloadable. Looking at the terms of use of the data, there's a clause that no sublicensing or redistribution of WDPA data is allowed. So I can download the data and use it in my desktop GIS software, but if I modify the data or add my own attribution I'm not allowed to republish it on the web, which doesn't exactly conform to the practice of open data. Are there any obstacles to using a more open licence in that respect?
Yes. Basically this comes from the way we get the information: some countries don't want anything other than non-commercial use, so we always need to stay in line with the country that is most worried about giving this data out. So we are still dependent on the countries. If it were our choice, we would make it fully open so everyone could use it no matter what the purpose, but that's the issue: the data is sent directly by countries and they have concerns about it. Anyway, you can always use that email address if you want a use different from what is allowed; they are quite quick to reply, so just drop them an email and they will check what kind of use it is and whether it fits the concerns of the countries. So yes, it's not the ideal solution, but it's what we can have, because there are many countries with different policies on data.
Thank you very much. I think I missed the first half or so of your talk, but I use this dataset all the time; I work on lakes around the world, I go to different places, and it's excellent, so thank you very much for making this available. Related to this question, two points: one is that obviously different countries have different policies, some are willing to just let the data go and some say no. The second point is that you have bundled it here as a single download: when you go to that download option you get the whole world. Have you considered making a
download specific to a country, maybe, or a select-by-polygon or something?
Yeah, we are developing something like that. Firstly, I think we'll have it per country; I'm not sure when it will be ready, but yes, starting by country, we will have something like that. We are also developing an API so you can get the polygons by their WDPA ID, the ID of each protected area.
Okay, thank you very much. A related question, maybe, about your processing, the dissolve commands and simplified geometries and so on: approximately how long does it really take you to dissolve the whole dataset without the simplify step, and what machines are you using? Because I couldn't do it.
Yeah, we actually quit before finishing that. Basically it was taking too long, even using PostGIS and indexes and so on. So I ended up looking at which countries were the most difficult to dissolve. As I was telling you, in this process we loop through every country to calculate the statistics, and when we got to Germany or to the USA it was just taking a lot of time, more than two days for Germany, from what I remember. So I ended up checking how different the simplified results were from the previous results that took a month to calculate, and the margin of error was less than five percent or so; for an entire country, one percent of its area is much more than what we display, so we ended up with the same results when rounded to the unit. So we did finish it that way, but as I was saying, if our protected areas program migrates to PostGIS we will need to do the full run in PostGIS as well, so maybe later I can tell you how long that takes to run.
Actually, if I may, one follow-up: for international borders, what dataset do you use? Is it an official UN set or something?
That is a big problem in our organization, because we still don't have an official one. For this, as we are not showing the borders but just calculating statistics with a small margin of error, I just used the Natural Earth datasets.
Okay, thank you. Okay, thank you. First of all, it's great work you are doing there, and I'm very impressed by the amount of time you have devoted to this project. My question is, I'm wondering how you would address any discrepancy you might encounter with the statistics that countries publish nationally. Do you have any experience of that?
Yeah. What can happen, and what usually happens, is that our team is not big enough to get updates every month, and even the countries don't send updated statistics every month. We have a team of six people working permanently on contacting governments in every country to get the information. So, for instance, in one month we get the protected areas for Korea, and then for maybe the following ten months we will not contact Korea again, so we will not have updated statistics. So yes, we are aware that some of the statistics are not up to date with the countries' own statistics. This is a work in progress, and that's why we have that 'send feedback' button there, so any person or any government
that doesn't agree with the data we have there can contact us, and we will change it that same month, so I think it would be quite quick to address that. Okay, thank you very much.
|
ProtectedPlanet.net is the online interface for the World Database on Protected Areas (WDPA), a joint project of IUCN and UNEP, and the most comprehensive global database on terrestrial and marine protected areas. The WDPA is released every month and consists of a point and polygon dataset of over 210,000 entries. Over 91% of this data is in polygon format and the remainder are points that can have an area as an attribute. Displaying protected area coverage statistics is one of the main features of this website. It is very important for the users to know what percentage of the territory is covered by protected areas in a given country, region or the entire planet. Previously, these statistics were calculated manually, and every year a team spent several days calculating them for a report using ESRI software. We had a great challenge this time: can we automatically calculate the statistics every month for all the protected areas and countries on the entire planet? In this case time matters: if we want to calculate statistics every month, it can't take 2 or 3 days of processing. To work through this, we chose a fully open source solution with PostGIS to do all the back-end tasks needed to calculate statistics. We were able to limit all this to 6 hours, and we can now run it automatically every month, keeping coverage statistics up to date.
|
10.5446/32122 (DOI)
|
|
QGIS has seen a large number of new functions and improvements during the last few years, and there is still more to come. This presentation shows the most recent changes and new functionalities in the codebase after version 2.8, both from a user's and from a technical point of view: Curved geometries have long been a missing feature in FOSS GIS desktop solutions, with such geometries usually ending up being segmented on import. A rewrite of the QGIS geometry core now allows native support for a number of curved geometry types, such as CircularString, CompoundCurve, CurvePolygon, etc., in addition to the traditionally supported Point, Line and Polygon geometries. As part of the redesign, proper support for M and Z coordinate values was also implemented for all supported types. Geometry errors can easily sneak into large datasets, either because of inexact data acquisition or due to gradual loss of precision when importing, exporting and converting the datasets to different formats. Manually detecting and fixing such issues can be very time consuming. To assist users confronted with such problems, the 'Geometry checker' has been developed. It provides the functionality to test a dataset for geometry and topology issues (such as duplicate nodes, overlaps, gaps, etc.), presenting a list of detected faults. For each error type, the plugin offers one or more methods to automatically fix the issue. A third new function in the geometry domain is the snapper plugin. It allows the boundaries of a layer to be automatically aligned to a background layer (e.g. aligning parcel boundaries with a road background layer).
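To illustrate the kind of problems the Geometry checker looks for, here is a generic, stand-alone sketch using Shapely (it is not the QGIS plugin's implementation): it flags an invalid self-intersecting polygon and reports consecutive duplicate nodes.

# Generic illustration of two checks named above: geometry validity and
# duplicate (repeated) nodes. Not the QGIS Geometry checker itself.
from shapely.geometry import Polygon
from shapely.validation import explain_validity

def duplicate_nodes(coords, tol=0.0):
    """Return indices where a vertex repeats its predecessor within a tolerance."""
    dupes = []
    for i in range(1, len(coords)):
        (x0, y0), (x1, y1) = coords[i - 1], coords[i]
        if abs(x1 - x0) <= tol and abs(y1 - y0) <= tol:
            dupes.append(i)
    return dupes

# A self-intersecting "bow-tie" ring that also contains a repeated vertex.
ring = [(0, 0), (0, 0), (2, 2), (2, 0), (0, 2), (0, 0)]
poly = Polygon(ring)

print("valid:", poly.is_valid)             # False for this ring
print("reason:", explain_validity(poly))   # e.g. a self-intersection message
print("duplicate nodes at indices:", duplicate_nodes(ring))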
|
10.5446/32129 (DOI)
|
My name is Anders, and this is Joachim, and we come from a Swedish agency called Growth Analysis, which carries out studies on growth policy, both international and domestic, so we have offices all over the world. Joachim and I deal with domestic questions, and we will show you a project that we have developed. As you perhaps know, Sweden is a country in the north of Europe, and this map shows the population density across Europe. As you can see, the further north you get the whiter the map gets, which means that Sweden is very sparsely populated in the northern parts of the country. This means there is trouble with services in these areas, and by services we mean public services such as schools and healthcare, but also commercial services such as petrol stations, grocery stores and so on, because the distances to these services become very long. Accessibility to them is important to the people up there. In our case we deal with economic growth, and we see that where accessibility to commercial services is bad, economic growth goes down. We have built an application, based entirely on this open source software, and you can show that slide. We are not going to talk much about the technology, because we don't have time for that, but we will show roughly what the system is used for. What I want to say about this picture is that Sweden is quite well suited for this kind of system, because we have register data at a very high resolution, and we have had it for a long time: we have information on 250-metre grid cells, so it is not tied to administrative units, we have many attributes for those cells, and the data goes back to about 1990, so we can also do studies over time. Okay, Joachim is now in the application. We have about 100 installations of the application, used by public servants in Sweden who work at the county level. We have 21 counties; in some counties this is not a big problem, so perhaps one person works with it, while in other counties there are more problems and perhaps five people work with it. There are also some national agencies that are interested in these questions. Let's take an example: one of the public servants in the county of Jämtland gets a call from a businessman in the village of Hoting. This businessman runs a restaurant in this small village in Jämtland. He takes in cash every day at his restaurant, but when he wants to deposit his money in the bank he has nowhere to go, because there is no bank and no cash deposit service. So the public officer opens a map of this village and can see that there is indeed no cash service in this area. Using the point layer, he can also see where the closest cash deposit service is: it is up in a small village called Dorotea. That is 22 km away, so it is a big effort to drive there every day to deposit the money. What can a public servant do to help this businessman? He can start by looking at how many companies there are in this area. We call this an area profile: it goes to the database with the 250-metre squares, and we get different reports for the area. We can see that there are 64 workplaces and 264 people working in this area. But as you see, this only covers the closest surroundings of the village. We can go a bit further out: we can also make an area profile with a circle and get data for that larger area.
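Conceptually, such an area profile simply aggregates the attributes of the 250-metre grid cells whose centres fall inside the selected circle or polygon. Below is a minimal sketch with invented cell values; the real system queries its database instead.

# Minimal sketch of an "area profile": aggregate 250 m grid-cell statistics
# for cells whose centre falls inside a selected circle. Cell data is made up.
from dataclasses import dataclass
from math import hypot

@dataclass
class Cell:
    x: float          # cell-centre coordinates (metres, national grid)
    y: float
    population: int
    workplaces: int
    employees: int

def area_profile(cells, cx, cy, radius_m):
    """Sum attributes of all cells within radius_m of (cx, cy)."""
    selected = [c for c in cells if hypot(c.x - cx, c.y - cy) <= radius_m]
    return {
        "cells": len(selected),
        "population": sum(c.population for c in selected),
        "workplaces": sum(c.workplaces for c in selected),
        "employees": sum(c.employees for c in selected),
    }

cells = [Cell(0, 0, 12, 2, 5), Cell(250, 0, 3, 1, 1), Cell(5000, 0, 40, 6, 20)]
print(area_profile(cells, cx=0, cy=0, radius_m=1000))
# -> the two cells near the origin are counted, the distant one is not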
Now the public servant has decision material for taking action in this area. He has a toolbox of different actions. He can perhaps contact the company that operates cash deposit boxes and persuade them to place one in Hoting. He could perhaps arrange for a cash transport service to collect the money in Hoting. And as a last resort it is even possible to get a grant from the government for cash deposit services in the area, simply to support economic growth there. We have another use case. A public servant has Jämtland as his area of interest. He can get a map of this area with accessibility data. If we zoom in a little, there is an area here shown in blue, which means poor accessibility to services. The example here is cash withdrawals, that is ATMs, or the possibility to go to a grocery store with a bank card and get cash. In this area those services are missing. So what does he do? He starts with an area profile: he draws a polygon and selects this area. There are no such services here, and 692 people live in this area. There are about 60 companies with about 84 employees, mostly very small companies; it is an agricultural area with farms. We can also display all the commercial services in this area. Let's bring up petrol stations. It is a bit hard to see, but there is a petrol station in this area, and it is not possible to make cash withdrawals there. So it would be natural for the public officer to contact it and try to convince them to offer cash withdrawals. We can also simulate this, to produce a new picture. There is an analysis tab for that. When we simulate adding cash withdrawals there, you see that the picture changes; in the simulation you can see how the accessibility improves. We can also get different reports, for example the average distance for those who need cash withdrawals, which drops from an average of about 30 km. That is a good result. As in the previous example, the officer has a toolbox for acting on this, for example getting one of these petrol stations to offer cash withdrawals; and in this case there can also be government grants for providing cash withdrawals by card. We have one more small example. Here we have a bank in a municipality, and the public servant is interested in seeing what effect its closure would have on the municipality. He can first analyse all the 250-metre cells that are affected. It is a rather large area, but not that many people live here. We can then simulate what happens when the bank is gone: you can see that cells change colour, and the distances increase. Okay, that was three examples of the application. Let's go back to the slides. Our experience is that when we developed the system we had even more functionality in the beginning, but the users had trouble with it, because they do not work with the system on a daily basis; it is an occasional tool. So we had to scale back the functionality a bit and focus on what they really need. Another experience is that, for the credibility of the whole application, it is very important to keep the database updated, because if users see that a service is missing, or that a bank which has disappeared is still shown in the database, that undermines the credibility of the whole application.
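The simulation step described above, adding or removing a cash service and recomputing accessibility, essentially recomputes the distance from every populated grid cell to its nearest service point and averages it over the population. A rough sketch with invented coordinates follows; it uses straight-line distances, whereas a production system might well use road-network distances.

# Population-weighted average distance to the nearest cash service,
# before and after simulating a new service point. All values are made up.
from math import hypot

def avg_distance_km(cells, services):
    """cells: list of (x, y, population) in metres; services: list of (x, y)."""
    weighted, total_pop = 0.0, 0
    for x, y, pop in cells:
        nearest_m = min(hypot(x - sx, y - sy) for sx, sy in services)
        weighted += pop * nearest_m
        total_pop += pop
    return weighted / total_pop / 1000.0

cells = [(0, 0, 120), (250, 0, 40), (10_000, 0, 15)]
existing = [(30_000, 0)]             # nearest service today is roughly 30 km away
with_new = existing + [(500, 0)]     # simulate a new service in the village

print(round(avg_distance_km(cells, existing), 1), "km before")
print(round(avg_distance_km(cells, with_new), 1), "km after")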
Okay, last slide. If you want more information you can contact us at these addresses, and if you want to try out the application you can also contact me to get a login to a demo with fake data; with the real data it works just the same. Okay, thank you. Are there any questions from the audience?
I have a couple of questions. First, could this also be used for other things, for example finding the best spot for a restaurant?
Yes, I think that's possible. You have to keep in mind that this kind of high-resolution data is not commercially accessible, so that is a problem, but if you can settle for a lower resolution in the system, I think it's OK.
My second question: did you mention in the beginning that you had about 100 installed instances? Are any of them in use internationally, and if so, could you tell us in which countries?
No, it's only domestic. Perhaps I said international, but I meant national, because there are national agencies that are responsible for the whole of Sweden and want the data for the whole country, whereas most users deal with their own county, a smaller part.
I think you have nearly the same problem in Finland.
Yeah, I think so. Yeah.
|
Sweden is a sparsely populated country. Normally, market forces would regulate the number and location of both public and commercial services such as schools, medical care, grocery stores and pharmacies. In sparsely populated areas these forces do not work. The Swedish government has realized this and gives economic support to some services in order to maintain, or in some cases expand, the service level. The aim of these grants is to provide conditions for living and working and to contribute to economic growth in these remote areas. To make the support as effective as possible, a decision-making system has been developed to support the administrators of the grants. The system allows the administrators to monitor the current situation, update changes in the service structure and simulate fictive scenarios. The system is built on an open source platform and is available through the internet to authorized administrators at the regional level of the Swedish administration. As the platform for the system, the following open source projects and formats are used: GeoExt, Ext JS, OpenLayers, MapFish, Pylons, GeoAlchemy, MapServer, PostGIS, GeoJSON.
|
10.5446/32130 (DOI)
|
Good afternoon, my name is Priska Haller. I will present together with Pirmin Kalberer. We want to give you an insight into the dual open strategy of the canton of Zurich, Switzerland. Just to make it big, okay. Just so you know where Switzerland and Zurich are: this is Europe, and Switzerland is in the centre of Europe. It has a population of a little more than 8 million, which is quite few compared to South Korea with its 50 million. Switzerland is a confederation consisting of 26 cantons, which means it is based on the principles of federalism. Federalism gives the cantons a lot of autonomy, so they can decide independently about many issues, such as open data strategies and open source strategies. We are from the canton of Zurich, one of the 26 cantons, which lies in the north of Switzerland. Here is some information; our office is in the city of Zurich. Some words about us: Pirmin Kalberer works at Sourcepole. Sourcepole develops customer-specific applications in the field of geoinformation, all based on open source components, and he is the architect of the WebGIS solution of the canton of Zurich. I work at the department of geoinformation at the cantonal administration of Zurich. Let's start with the dual open strategy. The dual open strategy means we have an open source strategy and an open data strategy; what they have in common you will hear later. Pirmin will now explain the open source strategy of the canton of Zurich.
I will tell you more about the technical part, which is all open source. The WebGIS infrastructure consists of a PostGIS database, and the WebGIS itself uses MapServer for rendering maps and Mapfish Appserver, an application server written in Ruby on Rails, on the server side, with OpenLayers and ExtJS on the client: it has a desktop version based on OpenLayers 2 and a mobile version based on OpenLayers 3. More about Mapfish Appserver: it is a developer framework with base functionality that you can extend in many aspects. It uses the MapFish REST protocol and OGC standards for map access. We have many layers in the canton of Zurich, so we organize them in topics. A topic is one map composed of many layers, and there are more than 100 topics. We categorize them, add keywords and so on, and we can combine maps with background maps and overlay maps. To produce them we use MapServer mapfiles and extract metadata for the web application. Legends are fully customizable and you can create complex searches. This is a screenshot: here is a search, and you see some topics, a search field and part of the map. We digitize a lot and editing is important, so we need access control on topics, layers and attributes, for WMS and WFS as well. The map components are based on ExtJS 4 and the viewers are customizable, so you can use different viewers for different purposes, from a minimal mobile viewer to the portal you just saw. You can have multiple sites, for example an internet site and an intranet site if you want that. You have an administration backend, and user groups can be self-organized. Here is a screenshot of the backend where you can administer permissions, users, categories, topics and so on.
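Because the topics are published through OGC standards, any client can fetch a rendered map with a plain WMS GetMap request. A minimal sketch follows; the endpoint URL and layer name are placeholders, not the canton's actual service.

# Minimal WMS GetMap request against an OGC-compliant endpoint.
# URL and layer name are placeholders, not the actual Zurich service.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "example_topic",          # placeholder topic/layer name
    "STYLES": "",
    "CRS": "EPSG:2056",                 # Swiss LV95 coordinate system
    "BBOX": "2675000,1240000,2690000,1255000",
    "WIDTH": "800",
    "HEIGHT": "800",
    "FORMAT": "image/png",
}

resp = requests.get("https://example.org/wms", params=params, timeout=30)
resp.raise_for_status()
with open("map.png", "wb") as f:
    f.write(resp.content)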
So what is the potential of open source? First, you only pay for what you really need, and you have no vendor lock-in. You have one vendor of your open source application, but I can tell our customers: if you no longer want to work with us, you can go to another vendor who has the know-how and has all the rights to make extensions and so on. It's also better if you want to cooperate with other authorities: you can finance work together and get synergies from shared development, and this did happen. You can also participate in development; for OpenLayers 3, for example, the canton sponsored some functionality, so contributions also go into the building blocks of the whole system.
I will continue with the second part of the strategy, the open data strategy. Switzerland has had an open government data portal for a couple of years already; it is a central point for open government data. The canton of Zurich was the first canton to join this open government data portal, and it is still the only one; other cantons will follow. There are already 57 datasets from the canton of Zurich on the portal, and of these 57, 45 are geodata sets; we published all of them as Web Map Services and Web Feature Services. Now, what is the legal basis for publishing geodata as open government data? At the federal level we have the federal open government data strategy 2014 to 2018. The goal of the strategy is the release of official government data and the coordinated publication and provision of that data, meaning at one point, at this portal; one really important point is to establish an open data culture. That is the federal level. What does it look like at the cantonal level? Here we don't have a strategy, but we have an open data action plan 2015 to 2017 for the canton of Zurich. It is an implementation of the federal OGD strategy at the cantonal level, so you could say it is effectively a cantonal OGD strategy. What kind of data is suitable for publishing as open government data? There are three conditions that must be fulfilled: first of all, the data must be accessible to the public; there can't be any restriction on usage; and it must be free of charge. Let me say more about these three points. Data accessible to the public: usually you need a law for this. We have the cantonal act on geoinformation, and there it is written that official geodata under federal, cantonal and communal legislation shall be accessible to the public and may be used by anyone, unless this is contrary to overriding public or private interests. So we are lucky: the first condition is met. The second point is no restriction on usage: there can't be any restriction regarding, for example, commercial use; if there is, we can't publish it as open data. The third point is that it must be free of charge: there can't be any regulation on fees. For the cadastral survey data we do have regulations on fees, so we can't publish those datasets as open data. The 45 datasets that we have published on the open government data portal all meet these three conditions. So what does open mean? We have an open government data licence, a CC BY licence; I won't say more about this, you know what a CC BY licence is. What is the potential of open government data, and the public benefit? We think the best thing is that the data is widely used. That means we get better data quality through user feedback.
Almost every day we get feedback from users; they tell us when they find mistakes, so we can improve the data quality. Then there is governmental transparency: people see what we work on and which datasets we maintain, so there is closeness to the citizens. Citizen participation in the political process is also important. Open government data is also a basis for innovation: lots of applications get developed, so it is actually an economic engine. And after all, if you have good base data that you publish as open data, it is a locational advantage for the canton of Zurich. So I want to show you some use cases of open government data in Switzerland. This is a LiDAR dataset that a guy, Markl Schritt, downloaded; he sent me this picture after developing software that can visualize the LiDAR points. You can see that this LiDAR data is very accurate, with a minimum of eight points per square metre, so it gives a really nice visualization of the city centre of Zurich. Then this is the city model of Uster: Andreas Neumann downloaded all the LiDAR data we published as open data and calculated a level-of-detail-2 city model of the city of Uster. They use it there, and because it is open data they could afford to do it; if they had had to pay for the data, they could never have done something like that. If you want to know more about the LiDAR project of Zurich, there was a presentation yesterday; as soon as the presentations are online you can have a look at it. This is an example from the city of Zurich: the city published the locations of all public toilets in Zurich, and a company developed an application with which you can always find the nearest toilet in your area, which is sometimes quite useful. This is an example from the federal level: a map of health insurance premiums, based on data that the Ministry of Health published as open data. In Switzerland you pay different health insurance premiums depending on the canton you live in, and with this application you can see where it is better to live with regard to those premiums. Let's get back to the dual open strategy. We started with the open source strategy first and only afterwards switched to the open data strategy. We wanted to know whether this is the usual way and whether it is the best way; that's why we asked other cantons and cities how they did it and what it looks like for them. We made a small survey and asked them: when did you start with open source, when did you start with open data, and is one order better than the other? This is the result of the survey; it shows the chronological order of open strategy adoption. Please note that it is a non-representative survey: we did not ask all the cantons and cities of Switzerland. What you see here is that open source adoption started mainly around 2000 in Switzerland; around 2000 most of the cantons we asked started with open source for the first time. Open data only started later: there are some pioneers like Bern or Solothurn that started in 2005 or 2006, but most of the cantons started with open data around 2010. That is the pattern we see here. What we also see is that the change mostly went from open source towards open data.
There are a few examples, like the city of Zurich, that started with open data first and do not yet use open source, but usually the cantons started with open source first and with open data afterwards. What are the observations? As we have seen, the switch to open source has happened continuously since 2000, and we see a trend towards open data beginning in 2010. Activities in different authorities seem to influence each other in a positive way: they get inspired by other success stories, benefit from others' experiences and exchange know-how. We see that open source activities often precede open data activities in Switzerland. What is our experience? We started with open source. It was easier for us because we had financial arguments: we could say it is cheaper, that we need less money if we switch to open source, and that is why we had less opposition inside the organization. The reliability of the open source products led to growing confidence in the open source community, which is the start of a cultural change. That is the important thing, the cultural change; open source was therefore a forerunner for open data in our case. What we have also seen is that an increasing number of players in the field of geodata leads to growing pressure from external stakeholders, and the focus then moves away from financial aspects towards cultural and ideological ones. What are the lessons we learned with our dual strategy? It is important to have a dialogue with the community, not only for open source but also for open data. You have to participate actively in communities, for example at hack nights, and get to know the needs of those communities. You also have to be aware that open data causes opposition inside the organization; there are always financial arguments, people who want to earn money with the data, so it is not easy to convince them. What we have seen is that visually appealing datasets help: if you have nice datasets that you can show to the public, and people make nice visualizations with them, you can convince the sceptics more easily. Laws and politics are needed in order to have the data released. You also need a lot of persuasion and lobbying; that is important. You have to talk about it and influence public perception; that is how the cultural change can start. We have seen that it is a long and stony road, but you can see the blue sky, so there is hope. The conclusion is that a dual open strategy pays off, as both parts mutually promote the cultural change: efforts in one open field also stimulate activities in the other. All we can say is: promote the open idea in a dual way. In our case this worked well. That's it. If you have questions, you can ask Pirmin about the first, more technical part, or you can ask me about the open data part. Thank you.
Hello, thank you for the presentation. You mentioned that open source is a viable solution because it lets you spend less money. What about the cost of the transition from a proprietary solution to a free solution? I'm asking because in Lugano we are approaching these discussions and these seem like big barriers.
I think it is easier to convince people because at first they think they won't spend money anymore since it is free; that is how you convince them. Once you have them on board, you do pay money later on, but maybe I can add something: it is not fair to take only that into consideration when comparing a proprietary solution and an open source solution.
I mean, you have costs for switching in either direction. Do you have any more questions? I have one thing to add: we forgot to show the URL of the source code. It is important: it is mapfish-appserver.github.io. I guess you heard that one.
I'd like to point out a few things that I found quite striking in this presentation. One was the potential of open source software. On that point, just a few weeks ago I was talking to some colleagues from another national institute, and they had really big problems with this: there are so many things that they cannot switch over to the open source software they want, so that can also be the case. The other point is the conditions on data: there may be conditions on some data that mean we cannot publish it as open data; we know that for sure in our own program. Finally, I would like to thank you all for coming here. Thanks to all the speakers, and I hope you enjoy the rest of the conference. Have a good conference from now on.
|
With a dual 'open'-strategy the department of geoinformation at the canton of Zurich/Switzerland opts for a strategic orientation towards open source and open data: Open in the sense of an open web-mapping- infrastructure based on open source components: Mapfish Appserver was developed as a framework for building web map applications using OGC standards and the Mapfish REST protocol. It is freely available under the new BSD-license (http://mapfish-appserver.github.io/). The Ruby on Rails gem comes with the following out-of-the box features: - Organize maps by topics, categories, organisational units, keywords and more - Combine maps with background and overlay topics with adjustable opacity - Import UMN Mapserver mapfiles to publish new topics within seconds - Fully customizable legends and feature infos - Creation of complex custom searches - Rich digitizing and editing functionality - Role-based access control on topic, layer and attribute level - Access control for WMS and WFS - Rich library of ExtJS 4 based map components - Multiple customizable viewers from minimal mobile viewer to full featured portal - Multi-site support - Built-in administration backend - Self-organized user groups maps.zh.ch, the official geodata-viewer of the canton of Zurich, was developed using Mapfish Appserver. It contains more than 100 thematic maps and is considered an indispensable working tool for everyone working with spatial data in the canton of Zürich/Switzerland. 'Open' in the sense of Open Government Data: Zurich is the first canton participating in the national open data portal opendata.admin.ch. The portal has the function of a central, national directory of open data from different backgrounds and themes. This makes it easier to find and use appropriate data for further projects. The department of geoinformatics aims to open as many geo-datasets as possible for the public by publishing them on the national OGD-portal. The open geodata is issued in form of web services - Web Map Services (WMS), Web Feature Services (WFS) and Web Coverage Services (WCS) - and contains a wide range of geodata from the fields of nature conservation, forestry, engineering, infrastructure planning, statistics to high resolution LIDAR-data.
|
10.5446/32131 (DOI)
|
|
OpenDroneMap is an open source toolkit for processing drone imagery. From raw imagery input, it outputs a georeferenced pointcloud, mesh, and orthophoto. This is a powerful toolkit to change unreferenced arbitrary images into geographic data. Next steps in the project are needed to improve optimization of underlying algorithms, steps to better create meshes / textured meshes from the resultant pointclouds by explicitly modeling surfaces, and to make better output data from lower quality inputs. Come and see where the project is at, how the state of the art is advancing, and how you can use it and contribute.
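To make the input-to-output flow above concrete, here is a minimal sketch of kicking off an OpenDroneMap run from Python via the project's Docker image. The image name and the `--project-path` flag are the standard ODM invocation, but the local folder layout (a project directory containing an `images/` subfolder) and the mount path are assumptions on my part, not details taken from this talk.

```python
import subprocess
from pathlib import Path

# Assumed layout: /data/odm-project/images/ holds the raw drone photos.
project = Path("/data/odm-project").resolve()

# Run the OpenDroneMap pipeline in Docker; the georeferenced point cloud,
# mesh and orthophoto are written back into the mounted project folder.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{project}:/datasets/code",
        "opendronemap/odm",
        "--project-path", "/datasets",
    ],
    check=True,
)
```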
|
10.5446/32135 (DOI)
|
... Here we have the data warehouse, PostgreSQL with PostGIS, then we have the istSOS library, which is the core that adheres to the OGC standard in XML and is written in Python. We have some configuration files, and on top of this we build up a web administration library that exposes RESTful services. ...
... If you want the observations in the time zone that you have requested, there is no extra parameter: you just specify the time zone in the GetObservation request. ... This is the virtual procedure; this is an example from a European project. ...
... Everything is done RESTfully, in JavaScript. ... Then we developed a website with some information, we set up a demo and downloads, and so on. We developed the documentation for the users, but also for the developers. ...
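As a rough illustration of the GetObservation usage recovered above, here is a sketch of a standard SOS 1.0 KVP request against an istSOS instance. The endpoint, service name, offering, procedure and observed property are all placeholders, and the idea that the response time zone simply follows the offsets given in `eventTime` is my reading of the fragment above, so treat it as an assumption.

```python
import requests

# Placeholder endpoint: istSOS exposes each service instance under
# http://<host>/istsos/<service-name>.
SOS_URL = "http://example.org/istsos/demo"

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "temporary",                # placeholder offering
    "procedure": "T_STATION",               # placeholder procedure
    "observedProperty": "air:temperature",  # placeholder observed property
    # The offsets below are what define the time zone of the reply;
    # per the talk, there is no extra "time zone" parameter.
    "eventTime": "2015-09-01T00:00:00+09:00/2015-09-02T00:00:00+09:00",
    "responseFormat": "text/xml;subtype='om/1.0.0'",
}

resp = requests.get(SOS_URL, params=params)
resp.raise_for_status()
print(resp.text[:500])
```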
|
istSOS (http://istsos.org) is an OGC SOS server implementation entirely written in Python. istSOS allows for managing and dispatching observations from monitoring sensors according to the Sensor Observation Service standard. istSOS is released under the GPL License, and should run on all major platforms (Windows, Linux, Mac OS X). The presentation will go through the details of all the new features that will be packed into the next release. In particular the presenters will introduce enhancements that include the Advanced Procedures Status Page and the istSOS Alerts & Web Notification Service. The istSOS Advanced Procedures Status Page is a new section of the Web graphical user interface, offering at a glance a graphical representation of the sensor network health. Administrators can easily figure out common issues related to sensor data acquisition and transmission errors. The istSOS Alert & Web Notification Service is a result of the Google Summer of Code 2014. This service is a REST implementation that takes inspiration from the OGC Web Notification Service (OGC, 2003; OGC, 2006a) and the Sensor Alert Service (OGC, 2006b), which currently are OpenGIS Best Practices. Alerts are triggered by customized conditions on sensor observations and can be dispatched through emails or social networks. This year istSOS is entering the OSGeo incubation process; this new challenge will permit enhancing the software quality and consolidating the project management procedures. The presenters will present the incubation status and discuss the next steps.
|
10.5446/32137 (DOI)
|
Hello. So today we're going to speak about the metadata catalog provided by the GeoNetwork open source solution. I'd like to start by saying that a metadata catalog should be the main entry point of every SDI, because it's the best way to reach your data, to find your data. Whatever the size of your SDI, whatever layers you want to provide and to whom, the catalog is there to help people find your data. That's really important, because when you publish data you want to promote it, you want people to know it and use it, and the best way to find it is to use the catalog. For example, even if you publish just 10 layers on a GeoServer, a GetCapabilities request could be a way to find some layers, but a catalog provides many more tools: every aspect of your data is indexed, you can perform searches, and it's a very good way to find all the data that is available. So with that said, I will start the show. I'm Florent Gravin, from Camptocamp. I've been working on GeoNetwork 3 for a while, and I will present the solution during this talk. GeoNetwork is an open source solution for metadata catalogs, and what is really important this year is that we just released a major version, version 3.0. We're really glad to provide this new version to all GeoNetwork users, and my talk will be about showing what's new in it. First, an overview: I will speak about the search interface, the new map viewer, how to manage and edit your metadata, the admin console, and advanced features, and I will give some links so you can see how GeoNetwork 3 works in production instances. First, a little bit of history. Here is the team of GeoNetwork developers. We meet every year in Bolsena for a code sprint and to share ideas about the vision of the solution. Two years ago we were wondering what the next step for GeoNetwork could be. We wanted to provide something really new and fresh, and we came to the conclusion that we wanted to build a new application and improve the user experience with the metadata catalog, because it's often not the most exciting task for administrators or data managers. So we wanted to focus on the user experience, the user interface, and the many things that could make their lives a bit easier. We started with a prototype two years ago, and we have been pretty busy these last two years developing version 3. The prototype was convincing, so we decided to move forward. We wanted to jump on the trend of new technologies, play with new HTML5 technologies, with easier styling, easier everything, faster layouts, and integrate those new libraries into our user interface. So we started with that prototype, developed the admin console as a first project two years ago, then the editor, and this year for the release we focused a lot on the search and the rich viewer interface. Thanks to all the customers that have funded this huge work. Now let's jump into the new features of version 3. Version 3 came out in April. What's really new? The first thing that strikes you when you see the new version is the new user interface, but the changes are not only related to the interface; it's just the first thing that comes to everyone's mind.
We wanted to focus on those parts: a better user interface and a rich map viewer. It was very important to us not to be just a metadata search form and a metadata result list; we also wanted to provide UIs and tools to exploit the data you're looking for. You are here to find data, and we wanted you to be able to directly visualize that data, whatever its type. I will come back to that afterwards. We worked on the metadata view, so we really improved the way we render metadata to show every detail, and we worked a lot on the editor and the admin. I will show all of this UI in the next steps. What's important is that we decided to move to AngularJS, Bootstrap and OpenLayers 3 for our interface, because we have seen that those technologies work well together on web mapping sites, and since there is a web mapping view in GeoNetwork we wanted to follow that trend. Moving to those technologies has been pretty good for GeoNetwork. New requirements: we no longer support very old browser versions, you need to move to Java 7, and things like that. I'm focusing on the UI, but during these last two years huge improvements have been made to the core of GeoNetwork as well. It's really important that I mention that, even if it's maybe less visible. We are trying to make the core code of GeoNetwork really homogeneous and modular, so we divided the core into different modules. We integrated the latest Spring framework — it's already there — we're moving all our services to Spring services, and we moved the database access to Hibernate and JPA. So the core Java code is much more up to date, and it's much easier to maintain and to get contributions for. We also worked a lot on optimization, on the search, the search queries, the index, and many other improvements and bug fixes. Searching the information: that's where GeoNetwork users start. We provide a new user interface for that, with new features. One thing that is really better for the user experience is the routing: every search is stored, the history of your searches is bound to the URL, so you can navigate through your browser history and you're just navigating through your search history, or the metadata you just viewed, and so on. It's very useful. Customization: we put a lot of effort into letting the developers of GeoNetwork — not the core developers, but people who take GeoNetwork and want to customize their view and their interface — do so. The technology, with Bootstrap and everything, makes it easier to customize with different styles or layouts, and our main UI code is designed for easy customization for those who want it. And of course a very important new feature of the UI is the rich map viewer. Here is a screenshot of the homepage of the new GeoNetwork. Like before, you find the same information: a main search input, Google style, because you just want to search for data. We provide some classification; we want to provide entry points into your catalog, classic entries like categories, topic categories, INSPIRE themes, or the different types of metadata. That way, it's very easy for users to jump into your data and look for what they want.
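Since the search UI described above sits on the same catalogue that is also exposed programmatically, here is a minimal sketch of querying a GeoNetwork instance through its CSW endpoint with OWSLib. The host name is a placeholder; `/srv/eng/csw` is the usual GeoNetwork CSW path, and the search term is just an example.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Placeholder catalogue URL; GeoNetwork exposes CSW under /srv/eng/csw.
csw = CatalogueServiceWeb("http://example.org/geonetwork/srv/eng/csw")

# Free-text search, roughly the programmatic equivalent of the home-page box.
query = PropertyIsLike("csw:AnyText", "%landcover%")
csw.getrecords2(constraints=[query], maxrecords=10, esn="summary")

for rec_id, rec in csw.records.items():
    print(rec.title, "-", rec.abstract)
```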
When you make your first search, you arrive at the result page. This is just an example of what the result page could look like, but as I told you, you can easily customize everything there: whether you want a form, facets, and how you want to show the result list. What's new here, compared to older versions, is the facet component with hierarchical facets. It's really useful for people who want to classify all their metadata in a hierarchical way; everything is indexed and based on thesauri if you want to organize your data like that. The metadata view: before, we used a service that parsed XML to render the metadata view, and now it's much faster — we just display the metadata from the index. Everything is stored client-side, so we can navigate through the metadata records with previous/next and it's instant, and we can render the metadata the way we want just with layout and styling classes. That part is really better now. Now, the main map viewer. As I told you, we wanted people to be able to exploit the data they are looking for. GeoNetwork is mainly a geospatial metadata catalog, so the data is mostly geospatial, and one of the most common kinds of linked data is WMS, shapefiles, and things like that. We just want people to be able to see the data they are looking for. Our approach is that GeoNetwork is a metadata catalog and provides services, server-side, to manage your metadata, but client-side we want it to be more than just a catalog — it could be sufficient to provide what you need for a simple SDI. There is a way to look for your data and a way to view your data, and I think that's all that people need and want when they just want to provide data. So here is a feature-rich map viewer. In the last version it was very simple, just a map, but now you can find anything you want. You can search for data that is stored in the services of the data catalog in GeoNetwork; you can add it from WMS, WFS, KML, GeoJSON. There is a layer tree; you can save your context, load a context, draw, print. Everything is there if you need it. As I told you, it's easily customizable. When a customer wants a GeoNetwork, they often want specific styles, a specific layout and look, and it's really easy to just extend the default web page with components, with AngularJS directives, or with plain JavaScript code — really easy to extend and build your own view: build your own search form, your own layout, and the way you want to display things. Here is an example. We can have another example that looks pretty much like the default one but with different styles, colors and sizes. Here is another example that is completely different, but you find the same things, just displayed in a different way. So this is an example of what your catalog can look like, very easily. An example of exploiting data: with WMS data you have different tools to exploit it and change the way you want to display the data. For our customers we also added Cesium, so you can activate the 3D mode or stay in 2D, depending on the data you want to see. And the coming features go further: if you have WFS, WPS or SOS services linked to your metadata, you will be able to exploit those services through the viewer. With WFS, you can already see every feature you have.
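To mirror what the viewer does when it previews a WMS resource linked to a record, here is a hedged sketch using OWSLib outside the browser. The service URL, layer name and bounding box are placeholders, not values from the talk.

```python
from owslib.wms import WebMapService

# Placeholder WMS online resource taken from a metadata record.
wms = WebMapService("http://example.org/geoserver/ows", version="1.1.1")

# Request a small preview image for the record's layer.
img = wms.getmap(
    layers=["topp:states"],          # placeholder layer name
    srs="EPSG:4326",
    bbox=(-125.0, 24.0, -66.0, 50.0),
    size=(512, 256),
    format="image/png",
    transparent=True,
)
with open("preview.png", "wb") as out:
    out.write(img.read())
```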
If you have a process linked to metadata, you can launch this process from the viewer and exploit the data, and it will be the same for SOS. Now the editor part, the part for data management. We also focused a lot on the editor to try to improve the user experience of editing metadata. It's a bit complex in the GeoNetwork world because it has to respond to a lot of needs and it has to be compliant with big standards like ISO, the different ISO specifications. We are also compliant with open data, so you can use GeoNetwork as an open data catalog as well. It's not widely used that way, but you really can do it. Here's how the editor looks. We put in an effort to make it more understandable, and it's much easier to customize. Before, it was really hard to create a new editing profile. Now we have built an engine based on a config file where you can just pick which fields you want to edit, and you can have different editing profiles on the same page and just pick the one you want, and it's going to display the fields you want to edit wherever they are, however deep in the metadata. We also provide an INSPIRE editor by default, which can be really useful for some people. When you want to create a metadata record, you choose the type, you can choose a template, you create your new record, and you arrive at this kind of editor with everything bound to the ISO schemas. On the right you have things like the validation process and suggestions to help you fill in your metadata. We have tooltips and help on every field of the schemas. We have a wizard at the top right to help you add resources to your record. There are new features like automatically generating the thumbnail from the WMS layer: we just fetch the image and set it as the thumbnail of the metadata, and things like that. Multilingual editing, if you have multilingual metadata. Everything is managed there — thesauri, keywords, many different high-level widgets to make editing really easier. The view is really important, and we really focused on that. There is a new engine for rendering the metadata, so it's much easier for people who want their own custom view. It's really fast, it's written in Groovy, and it's made of really small services you can debug — pretty efficient. And it came with a caching system: if the metadata hasn't been edited and the formatter — the engine that renders the metadata — has already been triggered, then the HTML response is stored, and you get the rendering of the full metadata really fast. Here is how it looks. We wrote a default formatter that just walks the metadata document and displays all the fields, and you can also very easily customize these formatters to have your own. And lastly, very briefly, the administration. The console has been completely redesigned, with different colors, different entries and different visual effects to help you find what you need. Very quickly: a new harvester management system, so you can jump into the different harvesting nodes and get information and feedback. Everything has been improved for a better user experience, and lots of things have been added too — new features, new stuff like statistics about the usage of your catalog, the metadata, and things like that. You can find this presentation online.
There are some examples of GeoNetwork 3 already in production, so you can see how the UI can be customized and get an idea of what yours could look like. There are different challenges coming next year and after — we have big perspectives ahead. And that's it. So thank you, and thank you to all contributors and to all the people who contribute to the translations as well. Thank you very much. Questions for Florent? Yes, Miguel, please. Hello? Yes. I have two small questions. The first one is about the map view: I think it doesn't support shapefiles as a format. Are you planning to implement that in the future? It's not on the short-term schedule. Usually, on the metadata, the data is linked to a WMS, and if you want to provide the data itself you just attach a zip file containing the shapefile, or things like that. Or you can use extractors to extract the data to shapefile format from WMS or WFS services. But we don't support the native shapefile format in the viewer right now — just GeoJSON, KML, KMZ, and that's it. And the second question is about those geospatial formats like KML or GeoJSON: they are stored on the server. Even if you have a Postgres database behind GeoNetwork, are you also planning to convert and store them in the database? What do you mean? Usually, when there is a KML file linked to a metadata record, it's an external link. But it's also possible to upload it, isn't it? Yes, you can also upload resources, but it's another protocol. If you upload a KML, you will just be able to download the KML. You have to say what protocol it is, and if it's a KML protocol, you just need to provide a URL or something like that to reach the file. OK, thank you. Yes, please. Hi, I'm Jorgen from Denmark, the Danish Geodata Agency. We have an older installation, a 2.point-something. If we want to migrate, what would be your best advice? Should we migrate or should we stay at 2.point-something? The whole migration process is managed by GeoNetwork, so I would advise you to back up your database — because you never know — and then just update your GeoNetwork version, and it should migrate the database by itself. So the data model is the same underneath? Yes, it's compatible. OK, super, thank you. And we have lots of feedback and experience about that on the mailing list, so I would suggest you read some posts there, because many people have done that recently. Thank you very much.
|
The presentation will provide an insight of the new functionalities available in the latest release of the software. Publishing and managing spatial metadata using GeoNetwork opensource has become mainstream in many Spatial Data Infrastructures. GeoNetwork opensource 3.0 comes with a new, clean user interface based on AngularJS, Bootstrap and D3. Other topics presented are related to performance, scalability, usability, workflow, metadata profile plugins and catalogue services compliance. Examples of implementations of the software will also be given, highlighting several national European SDI portals as well as work for Environment Canada and the collaboration with the OpenGeoPortal project.
|
10.5446/32139 (DOI)
|
Howdy. Hi. Good afternoon. So we're going to tag team here and do it together. I'm Jeff, that's Mike. No, I'm Mike, he's Jeff. Right. So, let's give some background on us and how long we've been working together. It seems like a very long time, Jeff, that we've been working together. Yeah. We work together — we sort of tag-team — and we're both power users of MapServer, right? The ones remaining, yes. Although you are a core committer... I'm not a core committer. I'm on the PSC, but I'm not a core committer, though I have written some of the code. Yeah, even I have, believe it or not. There you go. So you see there's a low bar to putting code into MapServer. Right. That's where we're coming from, just so you have the background: we're the power users, not the developers, but we are power users, and we've been doing this for 14 years now. Yes, 15 years. 14 years, okay. Whatever, you're ahead of me. So yeah, that's what you're getting today: not the two developers, but two almost-developers. Okay. So yeah, we're going to talk about power tips — pro tips, sorry, power tips, pro tips. Some of them are maybe simple, some of them you might have already seen before, but we'll go through them pretty quickly. There's not a lot, right? Is there a lot? You should know. So the first one, which I've already skipped — I don't know how we're doing this — is going to be debugging. So you're building an application, you're doing it with, like you saw, the OpenLayers front-end stuff, and then how are we going to debug errors, Mike? Well, you set DEBUG at the map level or at the layer level to choose which types of debug information you want. There's even the possibility of doing GDAL/OGR debugging embedded right in MapServer: you can add a special CONFIG option and then see GDAL errors in the MapServer log file. And then this is your part. I'm the shp2img guy. Yeah. Yeah, I kind of live at the command line, and I find the most powerful debugging thing is really just one command-line tool, shp2img. Many people know of it, but I don't know if so many people rely on it. I tend not to, because I look more at the log files — I'm a server guy. So, yeah, you get the best of both worlds here. What I love about shp2img is that if you install MapServer, the MapServer utilities will pretty much always be there. So if you're in a hosted environment, likely it's there. If you built it yourself, it's definitely there. If you're working on a client's machine, it's going to be there. So that's why I turn to it. And then, yeah, there are some important switches. We listed -all_debug; it gives the maximum amount of debugging info. Yeah. Yeah, that's probably not what you want to start with. Right. But sometimes, when you need all the debugging information, you can use the -all_debug switch. Right. So, right. You? So another method you can use is to avoid even running through the web server at all. You can call MapServer with -nh, the no-headers option, and then you just pass it a query string, which would be your actual URL. It's the same as executing MapServer through Apache, but you don't have to deal with any of the issues that you might be having with Apache.
So you can isolate your issue: is it something with MapServer, or something that's occurring because you're running it in a web server environment — maybe a permissions issue or something like that. It's good to isolate where your problems might be occurring, and this is a good way of doing it. Yeah, can I say something? Yeah. I just want to reiterate how important that is: mapserv -nh. Somewhere on the MapServer.org website — which I'm one of the maintainers of — it's documented, but it's kind of hard to find. It's one of the things you want to keep in your back pocket as you're working through problems, definitely. Because like Mike said — we do list the URL where it's located. Right. But it's kind of hard to find sometimes. Anyway, getting away from Apache and all the front-end stuff, and getting the error or crash right at the command line, is useful. Another thing you can do with debugging is send your debug logs to different locations. You can use the Apache log, which is the default, or you can specify specific error files that you want your logs to go to. That's something that we do. And if you're a Unix user and you're getting into actual crash situations or other things like that, you can run MapServer in GDB, the GNU debugger. You can set breakpoints in the code and generate backtraces to see where a crash might have occurred, and that's something the developers really need to see when you do have a segfault or something like that. Seeing the backtrace lets us know exactly where the crash is occurring and we can diagnose it more easily. So it's good for submitting bug reports. Right, and I maintain MS4W, so I do a lot of Windows builds, especially recently. You still run Windows? Just recently. Yes, I mentioned Windows here at FOSS4G. Anyway, yeah, there are some important steps. Maybe I'm the only one in the room this applies to, but if you are building binaries for an open source project on Windows, there are ways to debug that. I've listed a few things: many of you have already heard of Dependency Walker; it's been around for a long time, to see if you're in DLL hell, as they call it — missing a DLL, conflicts. There are also C++ redistributable problems. Dependency Walker and Process Explorer will usually help you through that, but it's not fun. And that's probably a reason to avoid using Windows in the first place, isn't it, Jeff? Debug level two or higher — debug two is where it starts to come in. It's also good not just for finding problems but also for documenting the performance of your system, because it provides layer timings. What we do is we have a variety of map files and requests that we make of certain areas at various resolutions, and periodically, whenever there are changes to our map file, we rerun them and get the generated layer timings. So we have expected timings for each layer at various zoom levels over areas of interest, and when performance issues are reported by clients, we can compare against what our expected norms are. Then we know there might be a server issue, a problem with a spatial index, some kind of data corruption issue, or somebody has changed some symbology that's causing a performance issue. That's something we do to keep track of performance. Okay, back to you again.
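A small sketch of the two command-line habits described above, driven from Python so the same checks can run in a test script. The map file path and query string are placeholders; `shp2img -all_debug` and `mapserv -nh "QUERY_STRING=..."` are the standard invocations.

```python
import subprocess

MAPFILE = "/path/to/app.map"  # placeholder path

# Render the map straight from the command line with maximum debug output;
# -all_debug 5 sets DEBUG 5 on the map and every layer for this run.
subprocess.run(
    ["shp2img", "-m", MAPFILE, "-o", "/tmp/test.png", "-all_debug", "5"],
    check=True,
)

# Replay a CGI request without Apache: -nh suppresses the HTTP headers and the
# QUERY_STRING is exactly what the browser would have sent after the "?".
query = f"map={MAPFILE}&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities"
subprocess.run(["mapserv", "-nh", f"QUERY_STRING={query}"], check=True)
```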
So a new feature that was added, I believe in the 6.4 release, is the ability to secure OGC services by IP address. This is something you want to do when you have services that you only want to make available to certain clients or certain groups of clients. It can be done at the map level, but typically this is done more at the layer level. You can specify the addresses by range or by external files, and it allows IPv6. You can even control services at certain levels: you can IP-restrict WFS but enable WMS for everybody, so you can allow certain protocols to certain users — say you want to provide WFS to your authenticated clients but WMS to everybody. The other thing that you need to do is block access to the MapServer CGI interface when you want to restrict these OGC services. Over to you. All right. This is an oldie but a goodie, and we added it in recently. I just stumbled across this for one of my clients in the last couple of months, and I never knew about it — it's been in there a long time. Yeah, it's been in there since RFC 30-something, and we're at 117 or something now, so it's been around a long time. Have you used it? Yeah, I think you've used it before. Okay, well, you're a power user, you don't count. Anyway, it's kind of cool — I think Hobu added it. So, right, it says here: early times, underused. The idea is that MapServer will use the GDAL/OGR library to read the projection, to get the projection information, if you're using a GDAL/OGR connection to your data. Right. And that's the second point: the key thing is that you must use an OGR/GDAL connection, so that's the second point there, CONNECTIONTYPE OGR. That's for vector layers, right? So make sure you do that before you try triggering PROJECTION AUTO. Yeah. That last point is very important — I discovered recently that it doesn't work with external world files, so a little bit of a caveat here. Using CONNECTIONTYPE OGR can impose some performance penalties if you have large data sets; the native drivers are definitely faster than going through OGR. So that's something to be aware of. But you know, we didn't even plan this, but since you mentioned that, I want to elaborate on it, because I think that's a power tip in itself: just going through OGR. Even if, dare I say it, you have a shapefile and something's not working — you and I have solved things by going through an OGR connection. Right. It sometimes helps you isolate where the issue might be. So keep that in your back pocket too: you can use the native MapServer connection or run it through CONNECTIONTYPE OGR if you're dealing with vector data, and you might find a difference there. And then you can take that to the developers. Yeah. Sorry, yeah, that's just an example with a SQLite database. It's simple: you can see PROJECTION AUTO there, and it works. Raster is very simple too — exactly, just PROJECTION AUTO, and MapServer will read the coordinate system from the TIFF file through GDAL. Assuming it is correct. Yes. Right. And again, remember MapServer won't read the external world file, so watch out if you have a TIFF with a world file for that TIFF. Right, Mike. This is a new feature that was added in MapServer 7.
It was an RFC that was added for INSPIRE support, done by Even Rouault. WFS 2.0 is now native in MapServer, and one of the neat things about WFS 2.0 is that paging is supported by default. We had added paging support at WFS 1.1, but it was a non-standard extension syntax; it's standard syntax in WFS 2.0. It supports sorting as well, so you can specify sorting keywords. WFS 2.0 also has time support, so you can make time-dependent spatial queries with WFS 2.0. There are all kinds of things you can do with stored query support: you can set up WFS queries on the server side, refer to them by name, and pass parameters to them — I'll give you an example here. And like I said, it was the base for INSPIRE support. So here are a couple of examples of WFS 2.0 queries. Here's an example with a time-period query: you have a begin time and an end time, and this one also has a sort order on it. Your data has to have actual time values in the database, or whatever your back end is, in order to do that, but it can do time-based WFS. And here's an example of creating a stored query. I specify my query name, I can point to some XML file that contains the query, and I can define some parameters for it. Then, rather than having to specify the entire WFS filter, I can specify just a query ID — a name — and the parameters to be passed to it. It's a way of making WFS queries a lot simpler, especially when you're doing similar kinds of operations repeatedly and just want to change a few parameters. And this is kind of a sub-pro-tip on WFS 2.0. In the MapServer 7 release we've refactored how MapServer handles spatial filtering and attribute filtering: everything now gets pushed down to the native driver level, at the lowest level. This wasn't too much of a problem when you were doing WMS, because the BBOX filters were implemented natively — when you made a request for data in your shapefile or your database, PostGIS or whatever, it would limit that query by the BBOX. But when you're doing WFS, you may not have a bounding box, so any attribute filter or other filter that you specified in WFS was done in MapServer: all the data would be brought back from your database or your shapefile and filtered in MapServer. Now those filters are actually pushed natively to whatever your back end is — MySQL, SQL Server, PostGIS, OGR, whatever it is — and implemented at the data driver. So if you're doing a database query and you specify a time filter and some attribute filter, those happen in the database, and the speedup is tremendous. So if you're doing WFS queries against some kind of spatial back end and you're not only using BBOX filters, you definitely want to move to MapServer 7 and use the new native drivers that push that to the back end. In my mind this may be the most significant addition to MapServer 7, and it delayed MapServer 7 for quite a while, because it was waiting not just on one driver but on a series of them — it was a lot of back-end development. And Mike here, to toot his own horn, actually did a lot of work on the Oracle driver for this release. Okay. Yeah, we're going to go quicker now, I think, because it's almost time. So, HTML-based legends — many of you already know this trick. It has been around for a while as well.
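For illustration, here is a sketch of the WFS 2.0 requests described above as plain KVP calls against a MapServer endpoint. The endpoint, map file path, type name, sort attribute, stored query id and its parameters are all placeholders; COUNT, STARTINDEX, SORTBY and STOREDQUERY_ID themselves are standard WFS 2.0 keywords.

```python
import requests

WFS_URL = "http://example.org/cgi-bin/mapserv"   # placeholder endpoint
MAPFILE = "/path/to/app.map"                     # placeholder map file

# Paged, sorted GetFeature: COUNT/STARTINDEX/SORTBY are standard WFS 2.0 KVP.
page = requests.get(WFS_URL, params={
    "map": MAPFILE,
    "SERVICE": "WFS",
    "VERSION": "2.0.0",
    "REQUEST": "GetFeature",
    "TYPENAMES": "observations",      # placeholder layer name
    "COUNT": 100,
    "STARTINDEX": 200,
    "SORTBY": "obs_time DESC",        # placeholder sort attribute
})

# Calling a stored query by id instead of shipping the whole filter each time;
# the query id and its parameters are placeholders.
stored = requests.get(WFS_URL, params={
    "map": MAPFILE,
    "SERVICE": "WFS",
    "VERSION": "2.0.0",
    "REQUEST": "GetFeature",
    "STOREDQUERY_ID": "obsByTimeRange",
    "begin": "2015-01-01T00:00:00Z",
    "end": "2015-02-01T00:00:00Z",
})
print(page.status_code, stored.status_code)
```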
But I think nowadays it is, again, underutilized. And this is for CGI only. Give it a try — I documented this long ago for MapServer, and I think it's very powerful. You just add a template to your map file — point to a template — and you get an accurate HTML legend. And just quickly — you can't even see that — there's syntax for HTML as well as JSON. Yeah, those are just some examples of what the templates might look like to get HTML legends in a variety of different formats. So, pretty powerful. There are lots of formats to learn — just give it a try. Content-dependent legends — do you want to do that one? Is it yours? Yeah. No, it's not yours. Okay, I can talk to that. So, another recent addition to MapServer, between the 6.4 and 7.0 releases, is content-dependent legends: the ability to only show the information within the actual view. When the GetLegendGraphic request is generated, there are some additional parameters that you need to add to your request, including the BBOX, width, height, and SRS, and then it will only return a legend that includes the symbols — the feature types — that are present within that field of view. So if you requested 20 layers but the field of view actually only showed two of them, your legend request will return an image with only two entries in it rather than all 20 layers. Just be aware that this has to do more work, because it has to make a request against your actual features to see which ones are there. It doesn't render them; it uses that to filter the legend. If you don't do that, if you're just asking for a full legend, it can just go through the layer list, generate the legend preview, and return it very quickly. So there is some expense to this. Yeah, we're definitely geeks; we have a lot of text and code and not so many fancy slides. Yeah, so, advanced blending modes. This is a cool thing that was added right near the end of the release, right? Again, I don't see it out there a lot, you know? This is bleeding edge — darn bleeding edge; I was just making the screen grabs half an hour ago. So yeah, it's really bleeding edge, I haven't really seen it out there, but yeah. There's a new block in the map file, a COMPOSITE block — so LAYER and then COMPOSITE — and in there is a new parameter called COMPOP, and it gives you something like 15 to 20 different operations where you can actually blend two layers, right? So think of things like hillshades, you know, the old opacity trick, shaded backgrounds to kind of denote 3D, those kinds of things. Yeah, the kinds of things you could do with GIMP, right? And you can have multiple COMPOSITE blocks for one layer, so you can throw in an overlay, a lighten, and some sort of contrast. Yeah, so it's there. Right, right. So there's my attempt at it: I've taken a shaded relief and some local colors, RGBs, and I've just done an overlay with that syntax. So it's pretty simple, and it's all done by MapServer. Yeah? It's pretty cool. Yeah. It wasn't officially released in MapServer before.
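As a sketch of the content-dependent legend request described above: the extra BBOX/WIDTH/HEIGHT/SRS parameters are what switch MapServer from a full legend to a view-filtered one. The endpoint, map file and layer name are placeholders.

```python
import requests

WMS_URL = "http://example.org/cgi-bin/mapserv"   # placeholder endpoint

# Adding BBOX/WIDTH/HEIGHT/SRS to GetLegendGraphic makes MapServer return only
# the classes of features actually present in that view, not the full legend.
params = {
    "map": "/path/to/app.map",        # placeholder map file
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetLegendGraphic",
    "LAYER": "landuse",               # placeholder layer name
    "FORMAT": "image/png",
    "SRS": "EPSG:4326",
    "BBOX": "126.8,37.4,127.2,37.7",
    "WIDTH": 512,
    "HEIGHT": 512,
}
resp = requests.get(WMS_URL, params=params)
resp.raise_for_status()
with open("legend.png", "wb") as f:
    f.write(resp.content)
```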
It was in the code in master, but it has just been released in the 7.0 release in July. Yeah. There were other tricks to do that too, but this is the new COMPOSITE trick. And it's very performant this way. Yeah, super performant. Okay, so look at that — we're all done. Done. That's it. See you next year. See you next year. Guys, just one question, because I need to have Jeff at the annual meeting in a few minutes — it starts at the end of this session. So just one question. Yes, please. Is PROJECTION AUTO slower than defining the projection normally? You mean, why would you do that — why wouldn't you just define the projection in the file instead of PROJECTION AUTO? Yeah. Oh, PROJECTION AUTO — is it faster in the end if you define the projection system yourself? It probably depends where the EPSG code is in the EPSG file... but they fixed that. Oh, yeah. Yeah, that's an older issue; that was fixed back in MapServer 5. Yeah, it's fixed. Did everyone hear that? That's a good power tip, too — well, it used to be a power tip: if your EPSG file had the projection that your app always uses way at the bottom, MapServer would have to go line by line and read that, and there was no caching. Right. Now there's caching. So maybe there might be a hit at the beginning. I don't think so. I don't know. But to answer your question, we haven't really benchmarked PROJECTION AUTO, and I've never noticed anything in my testing — I think there's not really a performance hit. There's just more potential for things to go wrong if your spatial reference information is not correct. That's the only reason — a shapefile you've received where that isn't right. Exactly, yeah. Yeah, and then you're throwing that into OGC services — do you really want to take that risk? If you know the source projection, why not list it there? Anyway, you know what I mean? Yeah. How many layers can a map file have — map size, maximum? All of them. There really isn't much of an issue with really large map files. We typically run 30,000 to 40,000-line map files. Now, these are ones that have a lot of include files — and sometimes include files that include files — but we don't see much of a performance hit with really large map files. That being said, we do tend to run a fair amount of RAM on those systems, so the parsed map stays in memory, and we do run FastCGI, so that is a factor. I think there was some limit in the past — there was some performance overhead when you had many layers. I wouldn't say it's fixed, but it's been improved over the years; maybe if you had 200,000 or 500,000 lines you might get into an issue, but at 40,000 to 50,000 we haven't seen anything. Good question. So, thank you very much — we really need to go to the annual meeting, please join. Thank you.
|
MapServer is a fast, flexible and extremely powerful tool for creating dynamic maps for the Web. Underneath the hood, MapServer offers many powerful and advanced features that many users never dig into, and new features are being added constantly. Come learn about some of the more advanced features of MapServer, from extending OGC services to exporting data to GDAL file formats to very complex symbology and labeling. Learn simple and advanced use cases and debugging techniques for some of these advanced features from two presenters with over 30 years combined experience of using MapServer; this will be the second #protips performance by these two vibrant characters. A live MapServer instance will be used during this presentation (yes we are still crazy!).
|
10.5446/32140 (DOI)
|
Okay, let's see if this works. Okay, start. Okay. So, hey everyone, I'm Alejandro Martinez, I'm a systems engineer at CartoDB, and I wanted to give this talk about the CartoDB base maps: a tale of data, tiles and Dark Matter sandwiches. It's a tale, or a story, of how we ended up serving base maps, starting from an evening hack by one of our co-founders, Sergio, who tried something out. On Friday evenings we have something that we call the Libre Fridays, which is basically about spending the evening hacking on top of the CartoDB stack — experimental stuff, or things that we want to improve or fit a little bit better into the stack. We do this a lot because we like to push the limits, and it's a way of development: we build a lot of the new pieces of the CartoDB stack on top of the existing pieces. For example, the geocoder is just SQL functions on top of a CartoDB account which already has the data, and it uses PostgreSQL's own search capabilities to look up names and polygons for geocoding. Or the Data Library datasets: if you log into your CartoDB account and go to create a new visualization, you get a lot of open data that you can use out of the box — that data is actually fetched from another, different CartoDB account which holds the data, and it gets copied into your own account. And there are a lot of internal APIs and things we use, both for development on the systems team and for everything that gets built on top of CartoDB, because we think it's a way to improve the experience, for us and for everyone who wants to build things on top of our platform. So, back to the base maps. A base map is simple and yet complex. A simple base map is just a layer of data — in our case we wanted it to be open data — with a matching style, which most of the time is the most difficult part. So it made sense, for an evening, to try to create some base maps using CartoDB. Even though CartoDB, since its beginning, wasn't envisioned as a platform for making base maps, but for putting layers of information on top of base maps — overlays of data, quite small amounts of data compared to OSM or to any other dataset that might be worth calling a base map. But most of our stack was already based on PostgreSQL and PostGIS, which happen to be the most common pieces for serving base maps anyway. Even though we've focused a lot on serving dynamic data that changes frequently and isn't as big as the OSM dataset, we believed it could be worth a shot. So we went to work, and we obviously started with a less detailed dataset, which is Natural Earth, got all the polygons and related things to make a bare base map — not even province level, just country level — and tried to style it a bit. We made three or four base maps using the CartoDB editor, with a big account and data we uploaded through the CartoDB UI. We used this to explore how far we could get — how far we, as a database editor and visualization and overlay specialist of sorts, were from being a base map editor. And we were almost there: you could make a basic base map using CartoDB just by uploading the datasets and doing the styling, which can get very tricky and difficult as you deal with different zooms and so on.
But we found that the CartoDB editor was not the best-suited tool for this, starting with the UI: it wasn't designed to handle such a big number of layers, one on top of the other, and there was a point where they overlapped each other and it broke. And there were things like the dataset size: you could upload a dataset of two, maybe three gigabytes tops, and that's not enough — if we wanted to make a worldwide dataset, we could not import it through the CartoDB UI. That was fine, because you usually don't want to upload and display 100 gigabytes at once; base maps are the exception for us, not the rule. But despite all these hurdles in the editor, making a simple base map was quite easy, and it was simple to make it work because CartoDB — the tiler — already serves XYZ tiles. But it does so with a code, what we call the layergroup ID, which depends on both the map you've made and the style, and changes when they change. We didn't want that, because we wanted fixed tile URLs. So we took the quick route, which was to add a rule in nginx to expose a fixed URL pointing to the real one for the visualization. That way we already had something that you could use as a base map, simply in Leaflet, without even using CartoDB.js, or in any tool that accepts the XYZ format for base maps — without very much work. So it just worked, and we got the first base maps. We launched those simple base maps about a year and a half ago, maybe a bit more, and they have been available in the CartoDB editor for a long time. Then, almost a year ago, we wanted to go a bit further, because we wanted to remove the map-views limit. If you want to make social maps that get shared by the community, you want people to make maps without being afraid of how many times they will be viewed — you actually want them to be popular. So we had to remove that restriction and make all the maps in CartoDB have unlimited map views. But then, besides the data that is overlaid, we also needed something unlimited to put behind it. There are, of course, a lot of people serving base maps who do it much better than us, but we wanted to give it a try and make an OSM base map that we could host ourselves, that we could be responsible for, and whose usage we could pay for. And it was also designed for data visualization, in the sense that this base map is really going to be the default base map in the CartoDB editor, so up to 90 or 100 percent of the usage it gets will come from visualizations made on top of it, with data overlaid. So we wanted to be as close to data visualization as we could. That's why we decided to cross the OSM limits, and we got the help of Omniscale to use Imposm and make an import definition that makes sense, ending up with the whole of OSM inside a table which happens to live inside the CartoDB database, inside a CartoDB account. So we got a SELECT * FROM planet — all the OSM geometry in there. Well, it doesn't make sense to query 150 gigabytes of OSM data with only PostgreSQL for rendering, but we have it there, and we can do stuff on top of it. And then, to cross the matter limits — this is a bad pun, because of the name of the base map.
We got the help of Stamen, who helped us make two open source OSM basemaps, Positron and Dark Matter, the light one and the dark one. They were designed with data visualization in mind; they're the ones that are going to be used in CartoDB by default, so they'd better be. And that's how we got a bunch of interesting stuff on top of the imported OSM data to handle the zooming and visualization, while staying inside the CartoDB platform and our existing systems and infrastructure. For example, we used materialized views to filter the data, the sections of OSM that are going to be relevant at each zoom, to avoid transferring too much data to the tiler server. We use materialized views because they're very handy, and in PostgreSQL 9.4 they can be refreshed concurrently, so it made sense to use them as some kind of mirror of the OSM data, which we keep updating with imposm and osmosis, holding the filtered data for the basemap. There's also a lot of SQL magic to make sure that the data matches its zoom, et cetera, and this was done together with the OSM guys and Stamen, who helped us do all these things. Then we got to the development process. We started developing the basemap using TileMill, which also uses CartoCSS, pretty much what we wanted. But then we found some issues with the CartoCSS handling: it's slightly different from the one in our CartoDB Maps API, so we decided to look for something else. While the initial development was done in TileMill, we went ahead and did another draft editor, made in HTML on top of CartoDB. And we ended up creating another cool way to create basemaps on top of CartoDB, which is the Atom basemap editor. It's not really a basemap editor but a plugin you can put on top of Atom, which will connect to your CartoDB account, allow you to easily edit some CartoCSS and automatically push it to CartoDB, with a preview window to display the dataset you're showing and the visualization you're creating. This is it. It's cool because you can just change anything; for example, I'm going to change the color, because everyone likes blue, don't you think? With a bunch of cool plugins for editing and linting CartoCSS on top of Atom, we felt it was the right ecosystem to fit into, and you just save. It automatically gets pushed to CartoDB and it generates a new basemap with the style you've sent. And you can not only change the style, which has hierarchy and variables and the usual custom CartoCSS things, but you can also change the queries that are applied to the map and to each layer. So, for example, I just opened a new one and I can apply any kind of query. This is a cool example: you can just apply an ST_Transform and have the same map dynamically rendered and reprojected into another projection. This is Robinson. Another advantage of this is that all the basemaps are rendered using our existing infrastructure, which is focused on dynamic mapping and on making sure that things get updated quickly. So in between the layers of that basemap you could also mix in your own datasets, or your own information that you keep updating using the CartoDB editor, the SQL API or whatever way of accessing CartoDB you want to use, and you can use them inside the map as masks, overviews, filtering using SQL, or any combination you want to achieve.
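The materialized view trick can be sketched roughly like this. These are not CartoDB's actual definitions; the table, column and view names are hypothetical, as is the choice of simplification tolerance.

    # Keep a filtered, simplified mirror of the imported OSM data for low zooms,
    # and refresh it concurrently so readers are never blocked (PostgreSQL 9.4+).
    import psycopg2

    DDL = """
    CREATE MATERIALIZED VIEW basemap_roads_z6 AS
    SELECT osm_id,
           highway,
           ST_SimplifyPreserveTopology(way, 500) AS way   -- coarse geometry for low zooms
    FROM planet_osm_line                                   -- hypothetical import table
    WHERE highway IN ('motorway', 'trunk', 'primary');

    -- a unique index is required for REFRESH ... CONCURRENTLY
    CREATE UNIQUE INDEX basemap_roads_z6_pk ON basemap_roads_z6 (osm_id);
    """

    conn = psycopg2.connect("dbname=osm")
    conn.autocommit = True   # CONCURRENTLY cannot run inside a transaction block
    cur = conn.cursor()
    cur.execute(DDL)
    cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY basemap_roads_z6;")
    cur.close()
    conn.close()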
You pretty much send in the SQL and the CartoCSS and we're just rendering. And we also have all the cache and invalidation mechanisms, both with our local Varnish and with the CDN, which are invalidated to keep the basemap fresh in almost real time. So we kept experimenting on these basemaps and developing new features, which I'm going to go through briefly. The first one is sandwiches. Sandwiches in the sense that our basemaps were, as I already said a couple of times, made for data visualization, and with data visualization you often get things like this: a data layer that doesn't have much transparency, so it's covering all the labels. So what do you do? Well, it's quite simple. It sounds quite simple, but it has a little slice of complexity: you can just put the labels on top. What it is: since the basemap project was already quite structured in layers, we just released another, different layer which only had the labels. So using Leaflet and CartoDB.js you can put the basemap, which is the thing you see behind the blue mass, on top of it put whatever you want to visualize, and on top of that put the labels. It makes for a nice visual change. It's a small detail, but it affected a lot of pieces of our stack. We had a bit of styling work, because you have to split the style and extract just the labels, but it was pretty simple to make. We also went and implemented some other things all over CartoDB.js, which ended up spreading and affecting a lot of pieces of our stack, because right now the CartoDB editor is using the label sandwich by default without telling you, and most people didn't even notice, which I think is pretty cool because it feels natural. We implemented quite some things in our tiling server to be able to cope with all of this. One of those is the previews, those preview images of your maps. The previews in the first iteration of the editor were using Leaflet, and they got all three layers. But we wanted to go a step further and be able to serve an actual image of your whole visualization, including the basemap, the labels and, of course, your data. We did this using the Maps API that we already have, which is called Windshaft and is based on Mapnik. We added what we call sandwich mode, which basically means that you can not only request Mapnik layers with styles in CartoCSS and queries, but you can also request an HTTP layer, any HTTP layer that serves XYZ. In this case it's our own basemap, but you can put pretty much any other basemap in it, and it will request all the layers, compose them together and serve them as another map, which is just a combined PNG version of that map. And we added, on top of that, the static maps API: you're no longer confined to requesting XYZ tiles, you can also request "give me this map at a given zoom, centered at these coordinates, and of a given size". You can just alter the parameters that you see at the end of the URL and tweak the map you want to display. So the last part is the systems part, which is the one I'm most involved in.
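A minimal sketch of the label sandwich, assuming folium, the public label-less and labels-only tile endpoints as they existed at the time of writing, and a hypothetical GeoJSON overlay file:

    import folium

    m = folium.Map(
        location=[40.4, -3.7], zoom_start=6,
        tiles="https://cartodb-basemaps-{s}.global.ssl.fastly.net/light_nolabels/{z}/{x}/{y}.png",
        attr="(c) OpenStreetMap contributors, (c) CartoDB",
    )

    # middle slice: your (possibly opaque) data overlay; hypothetical file
    folium.GeoJson("choropleth.geojson", name="data").add_to(m)

    # top slice: labels only, so text stays readable above the data
    folium.TileLayer(
        tiles="https://cartodb-basemaps-{s}.global.ssl.fastly.net/light_only_labels/{z}/{x}/{y}.png",
        attr="(c) OpenStreetMap contributors, (c) CartoDB",
        name="labels",
        overlay=True,
    ).add_to(m)

    folium.LayerControl().add_to(m)
    m.save("sandwich.html")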
During the development of the basemap we ended up improving our infrastructure, by load testing, comparing new settings and exploring new things that we tested with the basemaps but then extended to the whole of CartoDB. The first of them is metatiling. Metatiling is a simple concept: when you ask the tiling server for a tile, instead of rendering just that tile, it also paints the whole bunch of adjacent tiles and keeps them saved, because most of the time it's going to be people looking at a big map and requesting a lot of tiles. It's an intelligent way to reduce SQL queries, painting a bit more on the assumption that the user will request it. Metatiling is handled under Windshaft, by tilelive-mapnik in fact, in an internal cache: when you request a tile, it will generate the adjacent tiles. The problem with our stack, which is more or less the overall stack of the CartoDB software-as-a-service system, is that we don't have only one tiler; we have more than one, and we balance among them using another upper layer, which is nginx. By default the routing we used was quite stupid, in the sense that it was just round robin, randomizing the requests across all the tilers. The problem with that is that when you request a tile from a tiler, that tiler will paint all the adjacent tiles, but it will probably not get to serve those adjacent tiles, because round robin will send them to another one. So we ended up painting like four times the amount of tiles we wanted, for nothing. So we went exploring and hacking around, and we found a very interesting way to do this, which is consistent hashing. It basically assigns, via a hash, which server will serve each request. And with nginx and OpenResty, which is a Lua environment put on top of nginx, you can hook into requests and decide what you use to calculate the hash. After some exploring we came up with this simple piece of code, which just does some math operations, given how the quadtree works, to make sure that all the tiles contained in the same metatile for the same zoom are served by the same tile server. It's a kind of optimal routing for a distributed tile-serving environment. And then the last thing we did to play with this more, to squeeze all the performance out of the serving, was ditching WKB. WKB is the default transport format for PostGIS, and it uses an 8-byte float for each coordinate of each position. For example, imagine that you have a polygon with 800 points or vertices: it will transfer each of those vertices with approximately that precision. You usually never need that precision; you don't usually have subatomic precision in your CartoDB visualizations. If you do, then you're the coolest person I've ever met. But we started to explore how to change this, and we ended up with something we call Tiny Well-Known Binary, which is a specification that we open sourced and want to build upon, and we worked with some other people on it.
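The production code is a few lines of Lua inside OpenResty, but the key-derivation idea can be illustrated in Python. This is only a sketch: nginx's hashing directive does the actual consistent distribution; the point here is simply that every tile of one metatile produces the same key, and the metatile size of 4 is an assumption.

    import hashlib

    METATILE = 4  # tiles per metatile side (assumed)

    def metatile_key(z, x, y):
        # integer division collapses all tiles of one metatile into a single key
        return "{}/{}/{}".format(z, x // METATILE, y // METATILE)

    def pick_backend(z, x, y, backends):
        digest = hashlib.md5(metatile_key(z, x, y).encode()).hexdigest()
        return backends[int(digest, 16) % len(backends)]

    tilers = ["tiler-1", "tiler-2", "tiler-3"]
    # all four of these z=10 tiles share one metatile, so they hit the same tiler
    for x, y in [(512, 340), (513, 340), (512, 341), (515, 343)]:
        print((x, y), "->", pick_backend(10, x, y, tilers))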
It's equivalent to Well-Known Binary, but it uses delta coding and variable precision to make sure it fits, more or less, the precision that you want to display in a tile, because you don't need subatomic precision for displaying a 256 by 256 pixel tile; usually knowing in which pixel, or half pixel, a point falls is enough. I have another talk about this in a later session, but Tiny Well-Known Binary basically helped us squeeze a lot of performance out of this and get a huge improvement, because the network, in this case, was one of our main bottlenecks, and we just moved to Tiny Well-Known Binary. I think you can guess on this graph when we moved: we reduced the traffic between the database server and the tiler server to 10% of what it was. So, yeah, this is how an evening hack ended up disrupting and causing improvements all over the stack. And that's all. I think I made it in time. So, if you have any questions, thank you. Any questions or comments? No questions? No? Okay. Thank you so much. Thanks for being here.
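The kind of saving involved can be checked against any PostGIS 2.2+ database with a query along these lines; the table name is a placeholder, and keeping four decimal digits of precision is just an assumed choice.

    import psycopg2

    SQL = """
    SELECT
        sum(length(ST_AsBinary(the_geom)))  AS wkb_bytes,
        sum(length(ST_AsTWKB(the_geom, 4))) AS twkb_bytes   -- 4 decimal digits kept
    FROM some_polygon_table;                                 -- placeholder table
    """

    with psycopg2.connect("dbname=gis") as conn, conn.cursor() as cur:
        cur.execute(SQL)
        wkb_bytes, twkb_bytes = cur.fetchone()
        print("WKB: ", wkb_bytes, "bytes")
        print("TWKB:", twkb_bytes, "bytes",
              "({:.1f}% of WKB)".format(100.0 * twkb_bytes / wkb_bytes))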
|
CartoDB is an open source tool and SaaS platform that allows users to make beautiful maps quickly and easily from their own data. To complement our users' needs, we launched last year our free-to-use open source OSM-based basemaps Positron and Dark Matter (https://github.com/CartoDB/CartoDB-basemaps), designed in collaboration with Stamen to complement data visualization. While architecting them, we had several compromises in mind: they had to be powered by our existing infrastructure (powered by Mapnik and PostGIS at its core), they had to be scalable, cacheable but frequently updated, customizable, match with data overlays, and, last but not least, they had to be beautiful. This talk is the tale of the development process and tools we used, how we implemented and deployed them and the technology challenges that arose during the process of adapting a dynamic mapping infrastructure such as CartoDB to the data scale of OSM, including styling, caching, and scalability, and how (we think) we achieved most of those. I will also talk about the future improvements that we are exploring around mixing basemap rendering with data from other sources, and how you can replicate and tweak those maps on your own infrastructure.
|
10.5446/32141 (DOI)
|
Hello, my name is Timo Aarnio. I'm from the National Land Survey of Finland and it's my first time presenting at FOSS4G, so thanks for having me. My topic today is map publishing without programming skills, using a piece of software that we have developed called Oskari. To start off, a short overview of what I'm going to go through today: a bit about myself, then a bit about Oskari, the software that I'm using, and then finally map publishing, first without programming skills and then utilizing the RPC functionality that we have developed. And that's me in November 2011, so the moustache is okay, I guess. My title is GIS expert, but my main work is product owner in the Oskari project, or co-product owner; we have many of those. My main interests are analysis, thematic mapping and data visualization, so I would say conveying information to users or whoever needs it. About Oskari, just briefly: it's an open source project, obviously. It was started in the National Land Survey of Finland in 2011, or a bit before, but we open sourced it in 2011. A couple of links: oskari.org and the GitHub link where you can get all the source code, obviously. We started the project on our own, but nowadays it's being developed in a network with more than, I think, 30 organizations, both from the private and public sectors. My colleague Janne, who might be here currently, will have a presentation about the collaboration that we have later on in this same hall after lunch, so be sure to see that as well. So what is Oskari? It's a platform to access and reuse data from a spatial data infrastructure, so it's a bit different from the other software around. We are heavily based on the idea that the SDI is distributed, which means that we connect directly to web services that the data producers have set up. You can also import your own data, but that's more of a minor use case for us. It means you don't have to download any datasets, you don't have to work with them, you don't have to transform them; you just use them directly from the web services available: web feature services, web map services and the like, I guess you know them. Always up-to-date data, in quotes, because of course it requires that the data producer is keeping the data up to date. And one big plus for the Oskari software is that we can define roles and restrict access to the data that the data producers are serving, if there's some sensitive data or otherwise data that you don't want other people to see; you might have a slow server, or you might have some private information, or whatever the reason. Obviously we have a backend and a frontend: the backend is mainly in Java, and the frontend is HTML5, so JavaScript and all the basic HTML and such. The idea is that the backend does all the heavy lifting. If you need to query web feature services, that's XML, which is pretty heavy, so we use the backend to do all the queries and then return JSON to the frontend, which is lighter to handle there. We also provide analysis tools; that's also easier to do in the backend, so we provide support for web processing services through the UI. One more thing I want to mention is search channels: you can configure different search channels in the backend and search for locations, place names, addresses, even features, whatever you configure. And obviously the frontend is there to make it all nice and easy for the end user. Next up, some examples of where Oskari has been used. The first one is the Arctic SDI.
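As a rough illustration of the kind of request the backend makes on the frontend's behalf, here is a hedged sketch of a WFS GetFeature call that asks for GeoJSON output (supported by GeoServer, among others) instead of heavyweight GML. The endpoint and layer name are placeholders.

    import requests

    WFS_URL = "https://example.org/geoserver/wfs"   # placeholder service

    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": "demo:cycle_paths",             # placeholder layer
        "outputFormat": "application/json",
        "maxFeatures": 10,
    }

    resp = requests.get(WFS_URL, params=params)
    resp.raise_for_status()
    features = resp.json()["features"]
    print("Fetched", len(features), "features")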
There is a funky projection used there, so that's worth mentioning, I guess: we support pretty much every projection there is, using the libraries available, so you're not stuck with Web Mercator or whatever. Here you see the basic UI components; if I can show with the mouse, that's what it basically looks like in every Oskari installation. The next example is from the European Location Framework, or ELF, a European Union project where we develop services, and we have a showcase application made with Oskari to show the different services that have been made available through the project. Here it's basically the same UI, but a different projection, different data available and different users. The last example, the one that we work most on, is the national geoportal of Finland. There I have just made a heat map of the population distribution around the Helsinki area, which is the capital of Finland. And as you can maybe notice, we have a bit more functionality in the national geoportal, because that's, I guess, the largest Oskari installation that is currently available. And next up, the more interesting part, about map publishing, with the demos and such. First, a short introduction to what map publishing is. It's a tool for creating embedded maps, very easy to use, what you see is what you get. So as you go along defining your map view or embedded map, you see all the time what's happening; as you add a tool or a layer, you see the user interface that the end user will see in the end product. We have a lot of tools that you can add and make available for the users, and you can also customize the layout and the style of the resulting map application. Next, I'm going to show a short demo; I hope everything works out well. Okay. So here we have the national geoportal basic view. I have signed in, obviously, and here's the map of Finland. First, I'm going to move to the Tampere region; they have a lot of data available, so it's a good example. I search for Tampere, click here and I get moved to the area of Tampere city. So this is just the base map, and I want to add some data. Oh, I have the old search here already, but what it looks like is basically this. We have a wide selection of layers, I think more than 1,000 currently available, and I wanted to demo with the cycle paths. So I type in a search term, cycle, and I find cycle paths in Tampere in the middle of the list. As I click it, it's added on the map and looks like this. If I click the map, I should get a GetFeatureInfo response, and it looks like that. So let's say I want to publish this on my web page; I want to show the cycle paths in Tampere. What I do is go to map publishing. First it tells me which layers or what data I can embed; sometimes there can be layers that you can't, where the data producer has not given the right to publish their data, but in this case I can publish all of this. It barely fits on the screen, but let's try to work with that. First of all, I have to tell where I want to publish it; just as an example, I will use the FOSS4G web page. And I have to give a name, so let's say this one is called cycle paths. And I have a language selection, so which language the UI will be in; let's use English so that most people can understand what's going on. Next, I set up the size for my map application, and let's make this small so we can fit it on the screen. You can also use the fill space option or a custom size, so you can really make it the size you want.
Then tools; as I said, there are a lot of tools you can use. We have the scale bar, index map and pan tool. You can deselect the zoom bar if you want, you can add a function to center on the user's location, you can add an address and place name search, and you can disable panning if you want, but in this case I'm not going to do that. There are some map tools you can add: move backwards, move forwards, measure distance, measure area. I don't need them now, so I'm just going to leave them unchecked. And the query tool for feature data, that's basically GetFeatureInfo; I'm going to leave it checked. If I had some web feature services, I would also get the attribute table, or feature data table, selectable in this tools selection. Next up is tool placement: I can select left-handed or right-handed positioning of the tools. I can also define a custom layout, but for now it's maybe easier to just use the left-handed selection. Next up is graphic layout: I can select the color scheme; the default is dark grey, but for this one, for the city of Tampere, I feel this will be my choice. And I can select the font style, so it's basically sans-serif and serif selections. Then tool style: if you think the basic one is a bit dull, we have a wide selection of different styles; I will use the three-dimensional dark tool style. And if I want to, I can also show the map layers so that users can uncheck them in the resulting map application; I will not do that now. Okay, this is done. It took a couple of minutes and didn't require any programming skills; anyone could do it. I just click save, and when I click save, I get an iframe code to embed, so I just insert that into any HTML page. That's pretty straightforward. And what I did while waiting: I already put it on the FOSS4G page, just to show with the developer tools. So that's what it would look like embedded on a page. I can move around here, I can look for Tampere again, and it should give me about the same place. Tampere. And that's pretty much it for the basic publishing. Next I will return to the slides, just a second, and tell a bit about RPC. Sometimes the map is not enough, so you want some charts or different visualizations for your data, and then you want to enable communication between the parent website and the child website. For this, we have recently developed an RPC functionality, remote procedure calls. It uses a library called jschannel, which is built on the window.postMessage API; I guess some people are familiar with that. I'm not a developer myself, so I'm not so familiar with the actual code, but we have documentation available, the link is there and the slides will be available, I guess, so you can have a look later on. I will briefly demo that functionality too, what it means in practice. So back to the browser. This is just a proof of concept, something I wrote, very dirty code I can admit, but it works. Here I have a published map, now pretty much empty; it has the postal code areas of Finland, not very well visible, but they are there, and a place name search. So if I write Kamppi here, I get the different places in Finland called Kamppi. I select Kamppi here, and through the RPC the two documents exchange information: in this case it will tell where the map was moved, and I instantly get the updated charts for the age distribution in this postal code area. And it also works if I just move the map here.
So every time the map is moved, it will tell the new location of the center of the map, and based on that it will fetch the information for the age distribution. And vice versa, I can also use the parent document and its search functionality; it will move the map and update the chart. So they can sort of talk to each other. I believe that was all for me, pretty good on time. For more information, visit oskari.org, email me or grab me by the hand somewhere here. Thank you. And then questions, questions and comments. So, I have one question for you: what do you think about supporting tiled map services, is that a plan? Now you support WFS, WMS and so on. Yes, we also support the Web Map Tile Service and a lot of different web services. I didn't go through all the functionality, but you can check out oskari.org and see that we support a lot of different services. Any other questions or comments? Okay, thank you again.
|
This presentation will showcase the use of Oskari (http://oskari.org/oskari) in publishing embedded map applications. The typical use case doesn't require any programming skills. You only need to select the map layers and tools that will be available in the application. After that, you can customize the user interface (size, colors, tool layout etc.). As a result the publishing tool will give you a HTML-snippet to embed to any web site. The supported web services are WMS, WMTS, WFS and Esri REST. If your data is not readily available through a web service, you can import data. Shapefiles, KML, GPX and MID/MIF-files are supported. There's an extensive selection of tools at your disposal: index map, centering to user's location, address and place name search, attribute table (for vector data) to name a few. Integrating the map application with the surrounding web page makes more advanced use cases possible. All you need is a few lines of JavaScript to use the RPC interface (http://www.oskari.org/documentation/bundles/framework/rpc). With RPCs you can control the map application from the parent document and vice-versa. They can also exchange information. This enables you to develop highly interactive web applications with always up-to-date data. In the presentation an example application made using Oskari and D3 will be showcased.
|
10.5446/32144 (DOI)
|
In modern cities, many people drive vehicles equipped with GPS devices, so GPS trajectory data is easily collected and stored, and it can be used for location-based services. Current database systems do not define a data type for storing moving objects, so we designed and implemented an extension on top of PostgreSQL and PostGIS, the spatial database we already use, that can store the trajectory of a moving object, that is, its location together with its time. The functions we provide are divided into temporal functions and trajectory functions. You can query any trajectory information using our system; here are two examples and some screenshots of our test results. Thank you. Are there any questions? Okay, thank you.
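Since most of this talk is lost to transcription, the following is only a schematic sketch, based on the abstract below, of storing a moving object's trajectory as timestamped PostGIS points and querying it back by object and time window. All names and values are hypothetical.

    import psycopg2

    SETUP = """
    CREATE TABLE IF NOT EXISTS trajectory_point (
        object_id integer NOT NULL,
        observed  timestamptz NOT NULL,
        geom      geometry(Point, 4326) NOT NULL
    );
    """

    QUERY = """
    SELECT ST_MakeLine(geom ORDER BY observed) AS path
    FROM trajectory_point
    WHERE object_id = %s
      AND observed BETWEEN %s AND %s
    GROUP BY object_id;
    """

    with psycopg2.connect("dbname=moving_objects") as conn, conn.cursor() as cur:
        cur.execute(SETUP)
        cur.execute(QUERY, (42, "2015-09-17 08:00+09", "2015-09-17 09:00+09"))
        for (path,) in cur.fetchall():
            print(path[:60], "...")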
|
Recently, as mobile devices and systems advance, many services for moving objects that use location information have been studied. A trajectory is data that records the location of an object over time. Current database systems do not define a data type for storing moving objects: the location of an object can be stored, but it is difficult to store that location information together with time information. In this paper, an extended system that can store the trajectories of moving objects is designed and implemented using PostgreSQL and PostGIS as the spatial database.
|
10.5446/32147 (DOI)
|
Hello, everyone. I am a contributor to the QGIS pgRouting Layer plugin; I have been adding support for the pgRouting 2.0 and 2.1 functions to the plugin, and I will keep this short. The font on these slides is a little small, but let me quickly walk through the plugin and the functions it exposes. First, here is a K-Shortest Path test. Next, the design question for draggable routing is how to handle via points. With just a start point and an end point on a small grid you already get a couple of candidate routes, but on a 2x2 grid the number of route patterns is 12, and it quickly grows to 184 and then to 8,512: a combinatorial explosion of candidate routes. There is a YouTube animation of this explosion, and I will stop the video here. Then there are the round-trip cases, the back-and-forth cases that reuse the same edges, and the case of parallel paths between source and target, which is a filtering problem. K-Shortest Path can give you alternative routes, and for a given K you get K candidate paths, but the K shortest paths are not necessarily all the alternatives a user actually wants: the back-and-forth cases have to be dropped, and the parallel-path cases need additional processing. So for draggable routing you instead first select the start and end points, then drag the computed line to insert a via point, and the route is recomputed through the via points in order: start, first via point, second via point, and so on to the end. This is how draggable route selection behaves, and on the server side it can be combined with pgRouting functions such as the turn-restricted shortest path. Next, I surveyed existing frameworks for draggable routing. The Google Maps API offers it, and so do Leaflet Routing Machine, the Open Source Routing Machine (OSRM) and GraphHopper, as well as the Mapbox Directions API. Leaflet Routing Machine in particular supports several routing engines as backends, including OSRM, GraphHopper, the Mapbox Directions API and Mapzen Valhalla. However, all of these are built around their own routing engines and OSM network data, while I want to do the same thing against pgRouting, using the user's own network data stored in PostGIS. For the implementation I looked at wrapping the routing in PL/pgSQL wrapper functions that return the resulting route geometry, exposing those through a GeoServer SQL View, and using GeoServer WFS-T so that the client can read and update the route and its via points; the draggable interaction itself is then handled in the client, for example in Leaflet or QGIS. That is the approach for implementing draggable route selection on top of pgRouting. Thank you very much.
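A hedged sketch of the server-side piece described above: asking pgRouting for K alternative routes between two network nodes. It assumes a pgRouting 2.1+ database with a 'ways' edge table such as the one produced by osm2pgrouting; adjust table, column and cost names to your own schema.

    import psycopg2

    KSP = """
    SELECT seq, path_id, node, edge, cost
    FROM pgr_ksp(
        'SELECT gid AS id, source, target, cost, reverse_cost FROM ways',
        %s,   -- start vertex id
        %s,   -- end vertex id
        %s    -- number of alternative paths (K)
    );
    """

    with psycopg2.connect("dbname=routing") as conn, conn.cursor() as cur:
        cur.execute(KSP, (1001, 2002, 3))
        for seq, path_id, node, edge, cost in cur.fetchall():
            print("route", path_id, "step", seq, "node", node, "edge", edge, "cost", cost)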
|
pgRouting extends the PostGIS / PostgreSQL geospatial database to provide shortest path search and other network analysis functionality such as alternative K-Shortest Path selection. But in some cases, client-side draggable route selection (like Google Maps Directions or OSRM) is preferable. This presentation researches what is necessary to realize such client-side draggable route selection with pgRouting, and then tries to implement the functionality in browser (Leaflet, OpenLayers, etc.) and desktop (QGIS, etc.) clients.
|
10.5446/32149 (DOI)
|
So just a little background information. Again, my name is Wayne Haggab, I'm the president of Flat Rock Geographics, and we are a small geospatial solutions provider out of the Minneapolis-St. Cloud area of Minnesota in the United States. I started with open source geo a long time ago, back in the late 90s: I took a MapServer class from Steve Lime at the University of Minnesota when I was going for my graduate degree, and I've been involved ever since. I went to the first MapServer user group meeting, helped with the 2003 meeting, and then was on the planning board for the 2013 FOSS4G in Minneapolis. But this is the first time I've gone to an international conference. Well, I guess I don't count Ottawa; I was at Ottawa, but from Minnesota I can drive there, so that's not international. No offense to any Canadians. Anyway, I'm excited to be here; it's been a great time. So, my presentation: giving away the code without giving away the farm, a business model for open source entrepreneurs. A ridiculously long title for a slide. I don't even know if some of you know what "giving away the farm" means, so I just shortened it to: making money selling something that is free. We always get that question: well, how do you sell something that's free? So that's what I'm hopefully here to relate to. First, why are you here? Well, there could be a lot of reasons; maybe you just wandered in, I don't know. But I'm going to go out on a limb and say that possibly you own a company, or you're a partner in a company, or maybe you don't like your job and you want to start your own company. Either way, I think you probably have the entrepreneurial spirit, wanting to do things yourself. And we know that the entrepreneurial spirit is alive and well in South Korea; we only have to look at KakaoTalk, which I've been using, we even made some calls on it, and it works really well. They were a small company, and I think I just read they have over 100 million users now. So that is a big jump. And I think the reason they're so popular is their awesome emoticons. My kids love these; they send them to me all the time. I think it's funny; I hope they're not trying to tell me something. So I believe that open source is entrepreneurship, because people who use it aren't satisfied with the status quo. They want to make their own stuff, maybe they don't like working for other people, whatever the reason. Maybe they're arrogant; you have to have a little bit of arrogance to say, hey, look at all this software over here, I can do something better than that. That's okay. But it's really about self-reliance. I was doing some studying about entrepreneurship and I came across this internal locus of control. They've done a lot of studies, and they found that most entrepreneurs have this internal locus of control, and what that basically means is that you feel you're responsible for your own outcomes. It's not coincidence, it's not luck; what you do directly impacts your success. So if you're an entrepreneur, you have the internal locus of control, which is not to be confused with the dreaded external locus of control. I've spent way too much time on this slide for how little is on it.
Anyways, moving on. So you're different, you're a different breed: you're not risk averse. There is risk involved in anything you use. Sure, you could start a business using Esri or Microsoft or whatever; there's risk, but it's minimized because it's a known product and there's support. So you take on more risk when you start to delve into open source and make your own stuff. Of course, when you use off-the-shelf software, it has limitations: you're limited by cost, you're limited by licensing, you're limited by functionality. And that's not what you want to do in the open source world; here we love freedom. But I will contend to you that the level of risk that you're okay with will define the business model that you implement. So let's talk about some business models. The first one is creating your own product. Ooh, isn't that exciting? Are you nuts? Now, why do I say that? I don't want to discourage anyone, because that's really what people want to do, and that's fine, but I just want people to know that it's not easy; it's a difficult thing to do. But people do it successfully. You've probably seen them around here: a two-year-old startup that was just acquired by Apple for 30 million dollars, I believe. Two years old. So it can happen, and like I said, I don't want to discourage you. But the first thing to do, if you're going to go down this road, is to get a mentor, or three. Now, why do I say three? Well, the first mentor tells you all kinds of stuff, and you don't know whether it's right or not. So you get a second one, and they might contradict what the first one says. So the third one allows you to validate. It's a good idea to have people who've done it before and know what they're doing, so you're not going down that road by yourself, because almost inevitably you're going to have to get funded. Whatever way it is; maybe you have a portion of your company already that might fund a product. I know some have done that successfully: they had, say, a data-provider side with a nice stream of revenue coming in to support their product. But if not, you have to go get venture capital money, angel investors, whatever. A mentor of mine who has done it told me that he had over 200 meetings in one year, all around the United States, to get his Series A funding. That's tough to do, and then he had to go right on to the Series B. And investors want results, so you might be back in the same place you started. So let's just go through the pros and cons. You can be a rock star: when you walk into a conference, people are going to say, yeah, we love that app, and that's always fun. You can sign up business partners that sell and support your product, and you can get recurring revenue. What a novel idea: you mean I don't have to work an hour to get paid for an hour? Money just comes in? That's crazy. That's a very good pro of creating your own product. Some of the cons: it's way different than having someone pay you to make something. You're putting in almost all the effort up front, hoping it pays back later. And like I said, you need money to support it, and speed to market is critical. You could have the greatest idea in the world.
But if somebody gets there before you, you're done. It's a tight window of success, and you have to remember that with a product you need help docs, good documentation, which is not necessarily easy to do. And all your eggs are in one basket, so if this fails, the company pretty much fails. So just keep that in mind. The second model is doing custom work for customers. You get paid for your work: you work an hour, you get paid for it. And the client always tells you exactly what they want. Right? Right? What I would suggest here is getting a very tight scope of work and making them sign it, and then if they try to add scope, you say, no, no, no, you signed this. That's it; there are ways to manage that. And you may get dollars from supporting the app, or the client wants to add to it and you'll get dollars from that. Some of the cons: if you're using an open source project, you want to give back, but if somebody pays you to build something, they might say, no, you're not giving that back. You're reinventing something every time, and that can be tiresome. And the more work you get, the more people you need, so you have to get employees; that means overhead, that means complications. And depending on your contract, the customer may own the work, so you might not get to reuse it. The third model is supporting a current open source project, like CartoDB or something like that. It's popular, so people know about it, the marketing is already done, and it's easier to get work around it. And you have support; when you're doing stuff by yourself, you're it, that's it. Some of the cons: it's not yours, and if a customer says, hey, I like that, but I want it to do this and this, now you have to run that up the chain, and you'll probably get a response like, oh, great idea, we'll do that sometime in the future. That's kind of a bummer, because you can't always give your customer what they want; of course, you're limited to the project's roadmap. The hybrid model is pretty much just taking any of those, mixing and matching, and putting them together. The upside is that, like I said, if you have other parts of the business to support a product, maybe you don't need outside funding, and of course that's the best way to go, not leaning on other people. And it's not boring: if you're one of those people who likes to have your hands on a lot of different things, it's an exciting way to go. But again, speed to market will suffer for your product, because you have so many things going on; it's a lot of things to keep in your mind. So, our product: the world-renowned mapFeeder, which I'm sure none of you have heard of. While it's not world renowned, we do have folks using it in the United States and in Vietnam and China. They're actually using it to track cranes, the movement of cranes, the birds, not the machines. So basically what it is is a web-based asset tracker. We work with a lot of smaller governments, and we track assets: traffic signs, pavement management, permitting, animals, things like that. That's essentially what it is, and it's always worked well. It even works on a mobile device, but it's just funky; it's not set up for mobile. So, because everything is going mobile, we came to the decision that we had to do something, and we wanted to have something with responsive design.
And we also knew we could not create everything from scratch, because we're just not that smart or sharp. So we started with something existing. We found Brian McBride's Bootleaf project; I don't know if any of you have looked at that, but it fit the bill for us: responsive, with some of the functionality we needed, but of course it didn't have all the functionality, so we needed to extend it. And so we did: we extended it into mapFeeder mobile, or Bootleaf-OGC. This has been driven by current customers, and we kind of got dragged kicking and screaming into making a sort of general-purpose GIS mapping interface, because they like the tracking aspect of it, but they also already had some type of GIS viewer and said, if we're going to go with your product, we need something that replaces that. So we got dragged kicking and screaming into doing all this other stuff: printing and buffering and all that other stuff like that. So now I'd like to do a live demo. Am I nuts trying to do a live demo? Possibly; we'll see how it works out here. I'm going to pop over to my browser and just refresh this. Good to go. Let me show you the different functionality. This is a city in Minnesota that we do work for. The panel over here has this nice area with all the different layers under headings. Some of these are coming from Mapbox, most of them are coming from GeoServer, stored in PostGIS, but they also had an overhead MapServer application where they had all their aerial photos, so those are actually served up by MapServer from their server. So it's coming in from many different places. I'll just go through a little scenario here. I'm going to put in a search term for the ash trees, search them, and say, hey, we've got to find all of the ash trees because we want to cut those down. So here we go; here are the ash trees that we can cut down. But now we have to tell people that we're going to cut trees down near them, because otherwise the locals go a little crazy. So we're going to buffer these. We turn them pink, and now we buffer them; it asks for meters, but I'm just going to use feet. Now I'm going to turn on parcels, and I'll select the ones that fall within the buffer, so you can see where they are. Then I'm going to make the mailing list so we can contact them about what's going on. That's pretty slick, you know, being able to get the information out to people really quickly. A couple of other things I wanted to show: let's see, the annotation. I have a handy little deal here, so I turn on the annotation there. This toolbar over here has several purposes: measuring, selecting and drawing. So I can just draw an area I want to draw attention to; you can see that. I draw a little square there, and I can go and put some information in there, and also just delete it. So that's what we call the annotation. Then printing; we call it map sharing. Whatever is in this area will be printed out, and when you click next, you can either print that to a PDF, like a real PDF map, or you can make a map widget, which is kind of cool: you can embed it into websites.
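This is not the mapFeeder code itself, just a hedged sketch of the kind of spatial SQL that sits behind that demo workflow: find the parcels (and their owners) within a notification distance of the selected ash trees. Table names, column names and the distance are invented for illustration.

    import psycopg2

    SQL = """
    SELECT DISTINCT p.parcel_id, p.owner_name, p.mailing_address
    FROM parcels AS p
    JOIN trees   AS t
      ON ST_DWithin(p.geom, t.geom, %s)   -- notification buffer in map units
    WHERE t.species ILIKE %s
    ORDER BY p.owner_name;
    """

    with psycopg2.connect("dbname=city_assets") as conn, conn.cursor() as cur:
        cur.execute(SQL, (200, "ash%"))
        for parcel_id, owner, address in cur.fetchall():
            print(parcel_id, owner, address)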
So if a city wants a parks application, all they have to do is turn on the parks layer and make a map widget; it will only bring up the parks, and they can embed that in their parks page, instead of having people go through this ridiculous list of layers to find the parks. And this is kind of what it looks like if it's not embedded. So that's kind of slick. Okay, I'm getting close to running out of time, so let me get back to the presentation. My demo, I guess, did work after all. Where are we going with this? Well, we've got some things to fix. The interface isn't perfect; I mean, that buffering thing should be a little bit easier, regular users can't work that very well. For printing maps we use MapFish; there may be a better way to do that. For the measuring tools we need to be able to switch between units. And a crowdsourcing app: actually, Brian McBride does have a project out there for building damage assessment, and we're looking at morphing that into a citizen-connect kind of app, or things like HOT. You know, there are many ways to give back. Like I said, I've been on planning committees and such, and we have all of the great volunteers here giving back to the open source community, but we wanted to give back code. So what are we giving back? We're giving back mapFeeder mobile, or Bootleaf-OGC. These are the features that we're including, and these are the features that we're not including. Why? Well, we put a lot of time and effort into those, and at this time we decided that we weren't going to include them. So this is that "giving away the code without giving away the farm" part. It's not to say that we won't roll those out in the future, but right now the top list is the functionality that was added to the Bootleaf project. Where can you get it? There, so you can get it. But if you go there now, there's nothing there yet: we have a few little tweaks to make just to divide the code up, and we're planning on doing that next week. So you can do two things: you can either get hold of me, this is my information, or you can just keep going back to that site and see if it's there. If you email me, I will certainly let you know when it's out there. So, are there any questions? Did we test with GeoServer 2.8? Yeah, well, thank you for giving me the first question, which I can't answer. Whitman is our CTO; he would be able to answer that question, he's the techno nerd there. So, anything else? Let's see what time it is; we have about five minutes. Did everybody hear that? The question was whether we tried to get venture capital; it was more angel investors, which are not always so angelic. And again, like I was saying, you need to be all in on this. We weren't at the point where we were going to be running around the country for 200, 250 meetings, so we did some around the Twin Cities area, and they actually have a very active angel investor community there. It was very difficult, because for many of them the whole geospatial thing just wasn't an application that consumers would use, and so they just weren't biting. So it was pretty difficult.
So we had a few meetings and just decided to use the other areas of the company to fund what we were doing. All right, either I bored you to tears or I answered all your questions. Thanks.
|
“How do you make money selling software that is free?” A valid question, and one that is asked even by people who are in the open source community. The point of open source is to share and collaborate, so how can you make sure your efforts benefit the greater open source community while still growing your business? There are several different models that businesses can leverage to profit from open source software, from offering expertise as a supporter of popular open source projects to creating your own open source project to fit your business needs. This presentation will focus on how we created Bootleaf-OGC, an open source project, out of Bootleaf, which in turn was based on two other open source projects (Bootstrap and Leaflet). We will discuss our decision-making process for choosing the Bootleaf project to replace the current interface on our mapFeeder product, where the project is located on GitHub so attendees can download it, and our decision on what enhancements to give back to the project. We will also discuss our roadmap for moving forward with this project and other business decisions we are making around open source software. Attendees will come away from this presentation with information about open source business models, entrepreneurship, and knowledge of a new open source mapping application project they can utilize and contribute to.
|
10.5446/32152 (DOI)
|
Hi everyone, my name is Paul and that's Hamish over there at the end of the stage. We're PhD students from the University of Canterbury in Christchurch, New Zealand and today we're going to present to you an application of our research using free and open source software titled semantic assessment and monitoring of crowdsourced geographic information. We'd also like to acknowledge the support of the CRCSI and our undertaking of this research. So it's a bit of an overview of our presentation. I'll start off by introducing our research, then outline our project and the free and open source software used throughout it. Then I'll outline some of the finer details of our project such as our crowdsourcing model and determining the trust of the information and then Hamish will take over and describe ontologies, consuming the information as linked data and then finish off with an outline of our future directions and research. So through our research we're investigating applications of crowdsourcing to spatial information and its use so the crowd can be more than just a data source. So through my PhD I'm investigating ways to improve the trust of crowdsource information through both assessments of the information's quality and the reliability of the source of the information. So trust in the context of crowdsourced geographic information is in knowing the quality of the information and also the reliability of the source of the information. And Hamish is looking at the implications that this trust has beyond simply the capture of the information and the consumption of the information. For example, an analysis and presentation and also complementing other existing geospatial information. So our project is an application of our research and is based around the fruit trees and the residential red zone in Christchurch, New Zealand. So Christchurch is a relatively small city on the east coast of the South Island of New Zealand and has a population of roughly 340,000 people. And in September of 2010 Christchurch was struck by an earthquake of magnitude 7.1 which was followed by a series of aftershocks and further earthquakes including the deadly 2011 earthquake. The earthquake and the following aftershocks caused major damage to infrastructure, buildings and land in the city. Due to major land damage, some areas of the city, mainly near the rivers, were deemed uneconomical to rebuild upon because the land would simply be too costly to fix to ensure buildings on it would not be damaged in future earthquakes. This land was identified by the government and is known as the residential red zone and includes some 7,860 properties and 630 hectares. All of the landowners within this zone were paid out for their houses and land by private insurers and the government. And all of the houses have either been demolished or to be demolished. So today the residential red zone has been cleared of many of the houses that once stood there, but fortunately many of the trees still remain including fruit trees that were once part of people's gardens including apples, pears and lemon trees. So our project is based around these trees and looks at ways to crowdsource fruit tree information and determine the trust of that information through semantic assessments and then consume the information and its trust as linked data. So here's the framework for our project and as you can see we use free and open source software throughout. 
The main components are: input, which is the front end of the application set up to collect information from the crowd; trust rating, the section where we determine the trust of the information; ontology, which is used for the semantic assessments of the information to help determine the trust rating; linked data, which is used to aid in the consumption of the information; and an output where the information is consumed. Our crowdsourcing component of the framework is built using GeoServer, OpenLayers and Django. This component acts as the interface for the users and presents existing fruit tree information, and also allows the users to create and submit new fruit tree information. At this stage our crowdsourcing component is just a simple web map, but with time it could become part of a larger website. We use Django for the web framework, as it provides us with the functionality for creating a larger website that our web map can become a part of. We use OpenLayers to construct our web map, which allows us to display an OpenStreetMap base map with the map layer showing the crowdsourced fruit tree points, and OpenLayers also provides us with the tools for creating, editing, deleting and saving features. We use GeoServer to serve the already existing fruit tree features and to accept changes to these features and the creation of new features. This is done through a transactional web feature service (WFS-T) from GeoServer that is accessed through OpenLayers. And to store the information we use Postgres with PostGIS. We chose Postgres as the database for this project due to its interoperability with other free and open source software and its ability to expose the collected information to processes that work on the server through direct queries to the database. We've modelled the fruit tree information in Postgres in a way that makes it easy for us to store the information, run processes on the information, and also store the results of those processes. Our fruit tree model is a relational model that contains tables to store information about the trees themselves, observations of whether the trees are fruiting or not, and observations of the trust of the fruit tree feature. The information about whether the tree is fruiting or not and the trust of the feature are stored in separate tables from the main fruit tree information, to allow us to record observations of this information. This means we can look back through the history of the tree feature to see how its trust has changed over time as the main features change. This also applies to observations of whether the tree is fruiting or not, as the information is temporal, so it depends on the date on which the feature is queried. Once we've received crowdsourced fruit tree information, we have to determine the trust of that information. Our trust rating component of the framework is where we do this, through a trust model. This trust model is largely built around PostgreSQL and OWL and uses Python to run the trust calculations. At a conceptual level our trust model determines the intrinsic semantic trust of the crowdsourced fruit tree information. This conceptual semantic model is a component of a larger conceptual crowdsourcing model that focuses on the spatiotemporal, social and semantic trust of crowdsourced information through assessments of the intrinsic and extrinsic dimensions of these components.
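A schematic version of that relational model (not the authors' actual schema; names and types are assumptions) could look like this:

    import psycopg2

    DDL = """
    CREATE TABLE fruit_tree (
        tree_id    serial PRIMARY KEY,
        species    text NOT NULL,
        height_m   numeric,
        diameter_m numeric,
        geom       geometry(Point, 4326) NOT NULL
    );

    CREATE TABLE fruiting_observation (
        obs_id      serial PRIMARY KEY,
        tree_id     integer REFERENCES fruit_tree,
        is_fruiting boolean NOT NULL,
        observed_at timestamptz NOT NULL DEFAULT now()
    );

    CREATE TABLE trust_observation (
        obs_id       serial PRIMARY KEY,
        tree_id      integer REFERENCES fruit_tree,
        trust_rating numeric,              -- 0..100, filled in by the trust model
        rated_at     timestamptz NOT NULL DEFAULT now()
    );
    """

    with psycopg2.connect("dbname=fruit_trees") as conn, conn.cursor() as cur:
        cur.execute(DDL)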
The spatiotemporal trust looks at the when and the where of the information, the social trust looks at the who of the information, i.e. the reliability of the source, and the semantic component looks at the what of the information. Each of these components in the larger conceptual model contains intrinsic and extrinsic dimensions, with the intrinsic assessing the individual piece of information or the individual source of the information, and the extrinsic assessing how the piece of information fits within its surroundings or how the individual source of the information is reviewed by their peers. So in our trust model for this project we're looking at the intrinsic semantic component of the crowdsourced fruit tree feature, but the understanding of the trust of this information could be strengthened through additional assessments of the spatiotemporal or social components of the information. In a more technical sense, our trust model has been built using Postgres, Python and OWL. This trust model performs assessments of the crowdsourced information and writes the trust rating back into the database. So in the trust model, Postgres is the store of the information, OWL provides the rules that the information should comply with, and Python is the catalyst that brings it all together. We use Python scripting to query the Postgres database and get the records that have not had their trust rating determined yet. This information is then compared to rule-based literals in the OWL file through a SPARQL query. So with the fruit tree information, we assess the quality of the metrics of the tree, being the height and diameter, etc., the fruiting observation and also the location of the tree. So in our project, a lemon tree feature that is 2 metres tall and 1 metre in diameter and is fruiting now would be considered a trustworthy feature, as the tree's attributes are within the acceptable limits set out in the ontology. But on the other hand, a coconut feature would be considered less trustworthy solely because coconut trees cannot grow in Christchurch's climate. We know this feature is less trustworthy through the comparison of the tree's location and the area in which coconut trees can grow as outlined in the ontology. So each of the fruit tree's attributes is compared to the ontology and given an attribute-level trust rating. These trust ratings are then aggregated into an overall rating of trust for the feature and are written back into the feature record in the database. In this example, the aggregation of the attribute trust ratings is evenly weighted, but the weightings may be changed to emphasise important components of the trust rating. So I'll now pass you over to Hamish to discuss ontologies and linked data. So an ontology is a programmatic way of defining a concept based on human reasoning, by defining classes and the links and relationships that exist between these classes, and it ends up defining this area of knowledge. They're perfect for use in crowdsourcing because of their accessibility when published in the Web Ontology Language (OWL): they can be accessed via a single URI. They're also highly adjustable, and you can take multiple ontologies and join them together so that they cover the whole subject topic. For the non-developers amongst us, there are GUIs out there such as Protégé, which is open source software from Stanford that allows the experts to focus their knowledge on putting together the ontologies as opposed to having to code them.
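To make the aggregation step described above concrete, here is a small illustrative Python helper: each attribute receives a trust rating, and an evenly weighted average (or a custom weighting) produces the feature-level rating. The attribute names and the 0-100 scale are assumptions for illustration, not the project's exact scheme.

```python
def overall_trust(attribute_ratings, weights=None):
    """Aggregate per-attribute trust ratings (0-100) into one feature-level rating."""
    if weights is None:
        # evenly weighted by default, as in the talk's example
        weights = {name: 1.0 for name in attribute_ratings}
    total = sum(weights[name] for name in attribute_ratings)
    return sum(rating * weights[name] for name, rating in attribute_ratings.items()) / total


# A lemon tree whose metrics and location fit the ontology, but whose
# fruiting observation is doubtful for the time of year (illustrative values):
print(overall_trust({"location": 100, "height": 100, "diameter": 100, "fruiting": 40}))  # 85.0

# The same feature with the fruiting observation weighted more heavily:
print(overall_trust({"location": 100, "height": 100, "diameter": 100, "fruiting": 40},
                    weights={"location": 1, "height": 1, "diameter": 1, "fruiting": 3}))  # 70.0
```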
An example of a simple class and rule structure that we use in our ontology is, in this case, the apple tree, which becomes the subject. The property or predicate, which goes towards defining this, would be the maximum height, as we're defining the ideal apple tree, the maximum. And the object that completes this statement and contributes towards the definition of the apple tree is a value, in this case 10 metres. In Protégé, you can simply enter into the class all the properties that would define this feature and all the objects that will make these statements true. These objects are in turn classes themselves, and so the graph grows. Because this is published as linked data, it's then able to be accessed via a SPARQL query, which is the query language for RDF. In Python, this can be achieved using a library such as RDFLib, which reads in the ontology via the single URI, forms the graph and then uses the patterns within this graph to match the SPARQL query. So in this case, we want to find out the maximum height of an apple tree: that class is accessed via that first URI, then the property, the maximum height, and then the object that completes that statement based on the ontology graph. It's all defined by this one ontology and then everything else flows from there. This is then input into the trust model, and then each feature that is submitted by a user can be compared against all of these properties that define the worst case scenario, in this case the maximum height, to see how trustworthy the feature that has just been submitted is. The resulting trust rating can then be added back as an attribute to this particular feature and becomes an attribute itself. So as we took linked data in, we can then put linked data back out, and this follows that same subject, predicate, object triple. In this case, the subject is the feature that's been submitted, in this case 344, the property is one of the attributes of that feature, defined by the height, has height, and the object in this case is a literal, 2.5 metres. This is appropriate for crowdsourced information because the end goal is not just to harvest all this data from everybody who's out there, but to be able to give it back to them. Using a structure such as this means that the crowd does not have to be familiar with complex data structures such as spatial formats or databases, and they can access data via these single URIs. Because crowdsourcing is largely web based, it's assumed that most people know how to enter a URL and receive a simple human readable web page. This way they can just enter a similarly structured URI and receive data instead. It also makes it very easy, once we get more linked data out on the web, to create mashups, so people can bring in data from almost anywhere. That's really the power of crowdsourcing: we unleash the real potential of human intuition. You just never know what people might start bringing together and what might be found to interrelate, and it really brings another dimension to seeing how our world works. In this particular example, you can form a simple mashup using a Python library called Folium, which is a Python-based Leaflet wrapper. In this case, we want to look at the most trustworthy fruit trees that we have; that would be a trust rating of about 70 out of 100. For the purpose of demonstration, we'll add the wind speed at each tree. Hopefully there's a lot of wind there, so there will be more fruit on the ground that I can go out and pick up. It's a simple SPARQL query.
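Here is a hedged sketch of the kind of RDFLib lookup just described: read the ontology from its URI, form the graph, and ask for the maximum height of the apple tree class. The ontology URI, namespace and property names are placeholders for illustration, not the project's actual vocabulary.

```python
from rdflib import Graph

# Load the ontology graph from its single URI (placeholder address).
g = Graph()
g.parse("http://example.org/ontologies/fruittree.owl", format="xml")  # RDF/XML serialisation assumed

# Ask for the object that completes the triple <AppleTree> <maxHeight> ?height.
query = """
PREFIX ft: <http://example.org/ontologies/fruittree#>
SELECT ?height
WHERE { ft:AppleTree ft:maxHeight ?height . }
"""

for row in g.query(query):
    # The bound value is an RDF literal; it should convert to a number, e.g. 10.0 metres.
    print("Maximum apple tree height:", float(row.height))
```

A value looked up this way can then be compared against the height of a submitted feature inside the trust model.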
That simple SPARQL query basically shows that you can filter these queries based on the trust rating. A trust rating greater than 70 returns the attributes of all the features that meet those guidelines, including the latitude and longitude. I now have the tree IDs and the lat and long for each of those, which I can then submit to Weather Underground to find the nearest personal weather station and the conditions at that weather station for each tree. Now I have all the IDs with a trust rating greater than 70, the wind speed, and all the other attributes. A great thing about Folium is that if you're a Python developer, it's very easy: one line of code, and then another line of code for each tree, and it can put out a simple Leaflet map just to see where everything is, with pop-ups to tell you what tree you're looking at and, of course, the wind speed there. So, where do we go from here? We've seen the why and the how; what about the here and now? Where do the current datasets draw their credibility and trust from? Well, it largely results from legacy. Authoritative datasets from national mapping agencies or large corporations have traditionally performed their roles relatively well, and so people assume that when they pick one of these datasets, it's going to do the job. This makes provenance difficult, because it's basically used for tracing errors, and these datasets have that built-in trust, so people will only start to look back at what's been done once something goes wrong. This means the whole dataset is considered reliable, and datasets are treated as pretty much one mass. Following the W3C guidelines for provenance, you have the dataset that was generated by some form of collection, and we come back to these triples. This continues based on the agency and the collection method that they use, perhaps it's their specialty. This produces a graph of the provenance of the data, how it's been used and where it's come from, so if something goes wrong there might be a bit of a trail, and you can usually get back to find out what went wrong. Maybe it was the wrong analysis for that particular dataset. The problem when we apply this approach to crowdsourced data is that provenance there is largely considered at a feature level. Think of a medium-sized dataset having about 100,000 features, or even just a browser shot of OpenStreetMap and how many features come up in there, and then start to consider that each of those has been submitted by a user; because of the anonymity of the web, it's very hard to know who that is and exactly what they've done. You start tracing provenance when something goes wrong, and it quickly becomes nigh on impossible to trace where the errors come from. What these trust ratings provide us with is an ability to work out how reliable an attribute is. We can aggregate that to the feature level, and then to the dataset level, to provide an overall indication of just how trustworthy that dataset is. Alternatively, we can just look at the features themselves and take the most trustworthy features, depending on what sort of analysis we want to run. Hopefully, if we pick the most trustworthy features, then the analysis is going to be even more reliable. What this provides us with is a proactive form of provenance, a way to stop these major errors from happening before they occur, before this process starts. What it allows us to do is use crowdsourced data in the most reliable way possible, and increase the usability of this very valuable source.
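A hedged sketch of the Folium mashup just described: plot the trees whose trust rating is above 70 and attach the wind speed to each pop-up. The tree data here is hard-coded for illustration; in the talk it comes from the SPARQL query and from Weather Underground.

```python
import folium

# Illustrative results: (tree id, lat, lon, wind speed in km/h) for trees with trust > 70.
trusted_trees = [
    (344, -43.505, 172.70, 15),
    (351, -43.498, 172.69, 22),
]

# One line of code for the map, centred roughly on the residential red zone in Christchurch...
m = folium.Map(location=[-43.50, 172.70], zoom_start=14)

# ...and another line or so per tree for the markers and their pop-ups.
for tree_id, lat, lon, wind_kmh in trusted_trees:
    folium.Marker(
        location=[lat, lon],
        popup="Tree {} - wind {} km/h".format(tree_id, wind_kmh),
    ).add_to(m)

m.save("trusted_fruit_trees.html")  # writes a simple Leaflet map you can open in a browser
```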
Thanks very much. Are there any questions? So you guys defined some ontology models for trees in this case. Have you worked with many other ontology models doing this similar kind of QA/QC of crowdsourced data? Specifically, I'm personally interested in using a similar method for a bunch of OSM data in the near future. I'm wondering if you guys have dived into that at all and explored it using larger datasets and lots of different types of ontology models. Yeah. I'm not sure if you've seen it, but perhaps we can meet up after this and exchange details just so we can stay in touch, because we'd be interested as well. Like I said, it's kind of at that point where you need more data on the linked data side, and that includes ontologies. So the more of that we have out there, hopefully, the more reliable these trust ratings can be. So yeah, definitely let's talk after this. Yeah, it's a big task. No further questions? Thanks very much.
|
Whilst opensource software allows for the transparent collection of crowdsourced geographic information, in order for this material to be of value it is crucial that it be trusted. A semantic assessment of a feature’s attributes against ontologies representative of features likely to reside in this location provides an indication of how likely it is that the information submitted actually represents what is on the ground. This trust rating can then be incorporated into provenance information to provide users of the dataset an indication of each feature’s likely accuracy. Further to this, querying of provenance information can identify the features with the highest/lowest trust rating at a point in time. This presentation uses crowdsourced data detailing the location of fruit trees as a case study to demonstrate these concepts. Submissions of such crowdsourced information – by way of, say, an OpenLayers frontend – allow for the collection of both coordinate and attribute data. The location data indicates the relevant ontologies – able to be developed in Protégé – that describe the fruit trees likely to be encountered. If the fruit name associate with a submitted feature is not found in this area (e.g. a coconut tree in Alaska) then, by way of this model, the feature is determined to be inaccurate and given a low trust rating. Note that the model does not deem the information wrong or erase it, simply unlikely to be correct and deemed to be of questionable trust. The process continues by comparing submitted attribute data with the information describing the type of fruit tree – such as height – that is contained in the relevant ontologies. After this assessment of how well the submitted feature “fits” with its location the assigned trust rating is added to the feature’s provenance information via a semantic provenance model (akin to the W3C’s OPM). Use of such semantic web technologies then allows for querying to identify lower quality (less trustworthy) features and the reasons for their uncertainty (whether it be an issue with collection – such as not enough attribute data being recorded; time since collection – given degradation of data quality over time, i.e. older features are likely less accurate than newer ones; or because of a major event that could physically alter/remove the actual element, like a storm or earthquake). The tendency for crowdsourced datasets to be continually updated and amended means they are effectively dynamic when compared to more traditional datasets that are generally fixed to a set period/point in time. This requires them to be easily updated; however, it is important that efforts are directed at identifying and strengthening the features which represent the weakest links in the dataset. This is achievable through the use of opensource software and methods detailed in this presentation.
|
10.5446/32153 (DOI)
|
Hello everyone! My name is Panos, and I'm going to talk to you about COBWEB and what COBWEB is. COBWEB stands for Citizen Observatory Web. It is a European FP7 project that contributes to GEOSS, the Global Earth Observation System of Systems, and it works with the UNESCO World Network of Biosphere Reserves: citizens living within biosphere reserves collect environmental information using their mobile devices, and the project looks at how that information can be collected, quality assured and shared. Now I will talk a little about the UNESCO World Network of Biosphere Reserves. Biosphere reserves are nominated by their countries and designated under UNESCO's Man and the Biosphere Programme, which tries to combine conservation with sustainable use of the areas together with the local communities. There are around 610 biosphere reserves in 117 countries. Our case study areas in the project are located in Wales, in Germany and in Greece, where the Gorge of Samaria is one of the two Greek sites. Here are the requirements of the project that we had to address in these areas. Within the project we had to design an architecture, and to work out what we actually needed to build we worked with test scenarios. The first scenario is about biological monitoring, for example recording plants or animals that are found within the biosphere reserve area. The second is about validating the crowdsourced observations, for example against authoritative data that already exists for the area. The third scenario is about flooding, and builds on flooding problems that the case study areas have experienced. Something we should also mention is that the portal is built on GeoNetwork. In the COBWEB portal we can author surveys together with their metadata. Once a survey has been prepared and published, it becomes available to the COBWEB app, so the users who take part can download it onto their devices, go out into the field and collect observations.
And when the users have collected their observations in the field, they upload them from their devices to the server. The data then goes through a quality assurance process, which has also been combined with some kind of conflation service in order to compare it with the SDI data, and then it is ready to be published on GeoServer through WFS or WMS requests. So here's a picture of the portal, and now I have a video where I demonstrate the part where we prepare the survey and collect the data in the field. So here's the survey designer. We have a series of options on the left side. In this case we decided to create a decision tree of questions. We upload it to the server; then we can also upload our own tiles, like a map layer for the application, which is in the MBTiles format. And we also need, for example, to capture some kind of image of the area in order to go through some kind of validation. So we have an image button on the left that we can drag and drop. And then we can save it via the storage middleware on the server. Then we can go to our application and download, for example, the tiles through the interface. So we have a menu, and in the third part there is a download section: a download button for the surveys, a list of download buttons for the tiles, and a list of layers. We can download them onto the device. We can go and check if the layer is okay by enabling it. This is the map view; in this case it's using OpenLayers. So if we zoom, we can see the actual borders of the area that we are interested in. And we have grouped the study area into numbered areas so that different groups of users can go and collect the data. Then we need to go and download the survey. All this is taking place before we start the field trip, because there is limited connectivity over there. So we have a list of surveys that we have prepared. I'm downloading the one with the decision tree that I made before. And then we need to prepare the application to cache the base layer, the base map. So there is also this kind of functionality added to the application. You have an interface where you can choose the zoom levels you want to download. You have a limit of up to 4 gigabytes of tiles that you can store, and you can also see the actual files on the device. Now we can see the bounding box of the area that we zoomed to. And then it's time to download our survey. We have done it already and captured a record. So the one that we designed, that's the way it is being rendered on our device. There is another button that starts the decision tree questions. So according to what kind of answer we give, it takes us to the next question. And then we capture the location, and we can tap on the map to correct the location if that's not right. For this we only have the layers that might be useful, because there is a problem with connectivity in these kinds of areas. Here is our observation that we upload to the server. So the actual record is in GeoJSON format, and we have a database with the raw data. So here is the interface of the survey designer that we saw; on the left hand side we have the different options. So you can see that it has text, range, text area, and a multiple or single-choice selector that can be either text or an image. We can have an image capture.
We can have something more advanced, like a kind of image capture that also gives us details about how vertical the picture is, for the line-of-sight functionality that we are using. There is audio capture, and giving a warning message in case, for example, we need to provide health and safety information when users go to capture some flooding that might be dangerous for them. There are decision tree questions, the ones you saw, preparing layers, and choosing the geometry type of the survey, which can be either point, polygon or line. All of these are stored with our storage middleware on the server. We have named it PCAPI, which stands for Personal Cloud API. It's a Python-based application, which is also open source and under a BSD 3-clause license. It has, at the moment, support for Dropbox and our own server. It's a REST API, so you can make GET, POST, DELETE and PUT requests to the server in order to create files, delete them, etc. So, for example, here's an API call for checking which providers are available; in our case, it's local and Dropbox. You can go and look at the code, and you can add your own driver, like, for example, Google Drive. I'm providing you links to the documentation of the PCAPI and the actual code. It's based on the Bottle micro framework. Here's a series of examples of how it's used: for example, you can do a POST request to create a file, a GET request to fetch it, a DELETE for deleting it, and a listing call for checking the contents of a directory. And now I'm going to talk to you about the actual application. The application is a hybrid application based on Cordova, with jQuery Mobile for the interface. As far as the map interface is concerned, we can use either OpenLayers or Leaflet. Before I go to that slide, I would like to let you know that one of the initial requirements of the project was that we would like a quick way to make the app extensible, or a way of compiling and having different types of functionality, so versions of simple apps or more complicated apps for more expert users. So we ended up with a platform called Fieldtrip Open. It's again open source, on GitHub under a BSD license. The idea is that we have some core mechanics, some core software, which is very simple. It has only two libraries, let's say: the map library, which handles the map, and the record format, which in our case is how to save a record, which is GeoJSON, without any more functionality. Then we decided that we are going to write all the extra functionality as plugins, which are separate GitHub repositories. Then we can have another GitHub repo, which is the main project that defines which plugins are included from Fieldtrip Open. It can also define the theme of the application by providing its own CSS. All this is configured with a JSON file. So at the moment the COBWEB app has this kind of functionality: GPS tracking, map search, caching your maps on the actual SD card, adding overlays like GeoJSON, MBTiles and KML, creating your own decision tree questions, having images as options in the survey, providing warnings and help material, and also an interface to the PCAPI in order to store this kind of data on our server. It can also configure the base map, to select either OSM or our own map stack. In red are the ones we are developing at the moment: a geofencing plugin and GPS continuous capture.
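Coming back to the PCAPI calls mentioned above, here is a hedged sketch of what talking to such a REST endpoint could look like with Python's requests library. The base URL and the path scheme are placeholders for illustration; the real URL layout is in the PCAPI documentation.

```python
import requests

PCAPI = "https://example.org/pcapi"    # placeholder base URL of a PCAPI deployment
provider, user = "local", "demo-user"  # hypothetical storage provider and user id

# Check which storage providers are configured (e.g. the local file system, Dropbox).
print(requests.get(PCAPI + "/auth/providers").json())

# Create a record, fetch it back, list the directory, then delete it again.
# The paths below are illustrative, not the actual PCAPI routes.
record = {"type": "Feature", "properties": {"survey": "my-survey"}}
requests.post("{}/records/{}/{}/obs-1".format(PCAPI, provider, user), json=record)
print(requests.get("{}/records/{}/{}/obs-1".format(PCAPI, provider, user)).json())
print(requests.get("{}/records/{}/{}/".format(PCAPI, provider, user)).json())
requests.delete("{}/records/{}/{}/obs-1".format(PCAPI, provider, user))
```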
Specifically, the continuous capture is a requirement that comes from our biologists. They would like to have some kind of statistical analysis of where the users have been, to track where records were made, how much time the users spent there and which areas they didn't go to at all. So, in order to make things faster, we decided to make a set of command line tools for automating procedures like checking if there are any updates to the plugins, adding a plugin (the install command for a plugin), generating the static pages, deploying the application to either Android or iOS, and then releasing it in order to have a ready APK to be uploaded to iTunes or the Google Play Store. And here's an example of the configuration file for configuring all the Cordova and Fieldtrip plugins. So again, I don't have internet, so here it is. As you can see, we have a list of the Cordova plugins that we would like to include in our application, a list of Fieldtrip plugins that we use from Fieldtrip Open, the core software, and which version of the app we're going to release. So in that way, we can compile and make our different applications very quickly. Here you can see three different applications that we have developed. One is the COBWEB app, which is using all this functionality that I described and has OpenStreetMap as a base map. There is Fieldtrip OSM, which is another version with, for example, requirements coming from other projects. And finally, there is Fieldtrip GB, which was an initial application where we provided our own map stack with a combination of open data in the UK and OpenStreetMap. And then, after we save our data on the server, we have a Leaflet map viewer where we can see details about the map and about the data we collected, like details about the survey, a picture, and maybe some sensor data. How much time do I have? 30 seconds. One minute. I'm not going to go into detail about this because it's been developed by the University of Nottingham, but it's the seven pillars that the data go through for quality assurance. We have developed a GUI for expert users to go and decide what kind of quality assurance the data have to go through. And the way we have evolved to this stage is by contacting local groups that are working in these biosphere reserve areas. We have trained them on the software, then we sent them into the field. This is the way they tested it. We found bugs; they gave us ideas for more functionality. This is how we have made all these plugins. Here is the timeline of this procedure within the co-design project. What's next? We are at the point in the project where we are including ontologies. We would like to build a geofencing plugin for tourist purposes, especially for the Samaria area, which has many tourists. What's going to happen after the project finishes? For this reason, we decided to go towards open source technology, especially for some parts of the whole architecture, especially the ones that we are responsible for. For example, GeoNetwork is open source, Fieldtrip Open is open source with a BSD license, and so are the PCAPI and the map viewer. Thank you. Questions? Some useful links. Thanks very much, Panos. Have we got any questions? We've got one. At the moment, can you just record point information, and is the GPS continuous capture going to enable you to record tracks, or is that more just to add sort of more depth to the point information? The continuous track, the one that I talked about: when you start a survey, you have an option.
When you start your field trip, for example, you have a list of surveys that need GPS continuous capture, and you can enable them. The application asks you: the survey manager would like you to have continuous GPS capture. The pop-up message comes, you say yes, and you have a list of the surveys that are going on at that time and which ones are being recorded, but the user has the option to stop the GPS capture. So this GPS capture comes with the actual survey as a whole, not with each record individually. Okay. Okay, thanks very much, Panos.
|
COBWEB is a European Union FP7 funded citizen science project that has produced a platform through which citizens living within Biosphere Reserves will be able to collect environmental data using mobile devices. Part of the infrastructure are a COBWEB mobile app, the survey designer and the Personal Cloud API (PCAPI) middleware. The survey designer is a GUI editor for generating custom forms, which can be downloaded onto the app, together with a map interface for viewing data captured in the field and a mechanism for exporting user data to CSV, KML and GeoJSON. The COBWEB app has been generated on the foundations of Fieldtrip Open, which is a modular plug-in framework to enable developers to write their own extensions and re-use plugins written by others. This framework has been used in the creation of other production strength apps e.g. the FieldtripGB, FieldtripOSM. The framework is based on Cordova framework and can be compiled to Android and iOS and potentially all other platforms targeted by Cordova. Plugins have already been written for capturing GPS tracks, geocoding, caching off-line maps, creating Geo-Fences, overlaying layers in MBTiles and KML Format, extending the records with sensor data, making decision tree questionnaires and syncing data on the cloud. The synching functionality is the one that allows user to download their custom forms to their devices and layers and upload their captured data to their personal cloud space. Finally, PCAPI is a middle-ware which abstracts storage to cloud providers e.g. Dropbox or a local File system. All the software is (modified) BSD-licensed.
|
10.5446/32154 (DOI)
|
My name is Jani Kylmäaho. I work at the National Land Survey of Finland, in the SDI department. Thanks to Humis for the kind introduction. I will be talking about how we have built cooperation in Finland around the Oskari software, and how that could maybe be applied to some other projects, especially when there are public sector organizations involved in the process. Okay. So, just one slide about Oskari. It is an open source package for creating embedded maps and map clients on other websites quite efficiently, and for connecting to distributed SDIs like the European INSPIRE and the ELF project, for instance, but also to other OGC interfaces along with some other data sources. And you can also set up geoportals or web GIS systems using Oskari, and even more advanced web-based tools. You can go to oskari.org for more information and talk to one of our team members here at the conference. Anyway, towards the collaboration part: why actually collaborate? Let's say we have organizations A and B. Organization A has written a piece of software which solves their geo problem. Okay, that's good. And organization B just happens to have the same kind of problem. And organization A is kind enough to make this piece of software open source, and that makes organization B really happy, of course. Okay. Then a bit of time goes by, and organization A has made an update or a new version of the software. And what happens then? Organization B discovers that it doesn't actually fulfil their needs or use case any more, and they can't update it. Which means they're going to have to maintain their software themselves. And we end up with two organizations maintaining two pieces of software that might be similar to maybe a 90% degree. And that means double the cost for maintaining the same kind of software. So in order to avoid this situation, oops, I'm having some issues here, so: don't repeat the same work over again in multiple organizations, and don't spend your taxpayers' money over and over again to do the same job. That's not a good idea. Instead, you should save money by collaboration, and jointly develop software that is flexible enough to fulfil the needs of both A and B, for many use cases, and also other ones. So we started this network in 2014 with seven members. But quite soon we started to get more memberships from public sector organizations as well as academia, and more recently also companies. So there are companies involved in this network who have the capability to develop new features or make changes or installations of the software for clients. So we have a kind of broad collaboration: not only public sector organizations, but also academia and companies within this network. So how did the network get started? Well, Oskari itself wasn't a hobby software project by anybody. It wasn't a research project, or even a product of a small company. Those are, I think, the ways that many open source software projects come about. This was a project run by NLS Finland to develop a map client for the Finnish national geoportal. Simultaneously, as we went along developing Oskari, there was an evident need for map client solutions in other public sector organizations. And I call them silos at this point, because everybody was kind of doing their own thing and not minding other organizations' business, or working together for that matter.
But then, after the economy started going down and there started to be discussions with the other organizations, we finally proposed to start a collaboration network last year. And now we are entering the second year of joint development within this network. So I have a few principles that I think are essential for joint open source development projects: collaboration, interoperability, lifecycle management, user and developer experience, productivity, and facilitating new services. What do these mean? Basically, collaboration is all about working together and avoiding duplicate work being done. Interoperability is about connecting to the SDIs that are out there already, standard OGC and ISO APIs, along with other data sources. And also, when you are a public sector organization, you need to bear in mind that there are reference architectures, like national IT architectures or things like that, that you have to consider when developing your software, so that your software conforms to these architectures. And that's then a big plus for your software if it's compliant. Lifecycle management: you need to consider the software's technical architecture very carefully, so that it's modular enough and extendable enough to support modifications as your requirements change over time. That can save you a lot of money. And of course you have to bear in mind that the software has to be easy to use, both for users and developers, so provide APIs, modularity and such things. And basically productivity is getting more for the same amount of money. And facilitating new services means that this kind of open source development, especially when done jointly, is basically the only way to create new kinds of services cost-effectively. If you use proprietary software, you're going to end up with a huge amount of costs. And of course open licensing is very important. And in all the projects that I run, I try to use agile methods so that we can react to changes very rapidly as the projects go by. So how does a network like this facilitate joint development? From establishing the network, you need to quickly move towards facilitating its operations. And from expanding the network, of course you need members, but you need to start deepening the cooperation and making it more dense. And from experimenting with different pieces of software, you need to start moving on to producing working services and thereby gaining benefits from the cooperation. And for this, you need some kind of ground rules for development: how to extend the software, how to work together. You need coordination, especially with those public sector organizations that sit in different silos. And you need a lot of communication. And to do that, you need people. You need development teams, other open source communities to gain from and to contribute to. Some decision makers need to be convinced, companies need to be involved, and there have to be some tools to do these things with. In the case of Oskari, we have instant messaging tools, web pages, meetings, newsletters, et cetera. And all of these contribute to more awareness of the network and of the project. And we have a kind of positive circle going around, which facilitates the growth of the network and the software itself. And here are some necessities that I think are essential for joint development; I will also briefly mention our model for collaboration. So what do you need when you go about starting a collaboration network like the Oskari network?
You need a use case for your software, and some idea of how to actually manage that piece of open source software. Then again, stressing the public sector organization point of view, you need a governance model. It's easier to kind of sell the idea of using open source when you have clear plans for how to proceed and how to work together. Of course, you have to have in place documentation, licensing and versioning policies, and communications can't be stressed enough. And of course, you will need funding and/or time. You have to be really committed to running this kind of operation. It's not very easy to start with. And definitely you will need to engage those other organizations, from those silos, in your work. And I think the head of the Korean mapping agency, in his keynote on Wednesday, mentioned a very important topic when he brought up the idea that academia, education and businesses need to be involved in the open source ecosystem. They are important in providing research, tutoring and support services for the software, because support is something that public sector organizations especially lack when they try to use the software. So the product lifecycle management plan is something that helps you maintain the software. If you create a piece of software, you have to be ready to maintain it as well. And the purpose is to kind of document the responsibilities and organization of the network and also of the software project itself, and to establish some ground rules and best practices. It's also a communication tool towards decision makers, who decide whether my organization is going to participate in these kinds of activities. And the content of this plan is kind of similar to the things you need to specify within the OSGeo incubation process, so it will help if you want to go international at some point. And there's an example of this; it's in Finnish, sorry, but you can still see it. It's about 10 pages, so it's not that long, but it has all the important facts about the project in one little document. So, very briefly, this is how it's done in the case of Oskari. I'm not going to go through this in detail. There are different kinds of members in the network, different actors in the network. But you have to remember that you don't need to be a member of the network to contribute to or use the software; membership is just an added bonus for working together. We have to have meetings to work together, that's for sure, and kind of marketing activities. And these are important, I think: the ground rules for developing within this kind of network and for the project. So even if the projects themselves within the network are independent, when developing things they need to agree to commonly agreed principles. So for instance, the architecture of the software has to be followed; otherwise, we are going to hamper interoperability and the further extension of the software. And definitely we have to agree within the network what each project and each member of the network is doing with the software, so that we don't create overlapping development efforts. And most of the time, we also need to support the members of the network in applying the architecture correctly, in order not to create situations where we need to refactor the code extensively because of wrong application of the architecture. Of course, we need a single model for licensing. So all the code developed within the network has to be under the same license.
And there has to be, as Jodi mentioned very well, documentation, testing and all that, and that has to be handled by the different projects themselves; it can't be an entirely centralized activity. So I'm not going to go through these in detail either, but Jodi just explained good things about documentation and about licensing. You need to be sure that your project doesn't contain code with copyright problems, and you need to provide a CLA for your contributors. Communications: there are different ways of communicating things in different projects. These are some of the things we use, not all of these actually. For instance, we don't yet have an issue tracker, but we intend to have something like that. And versioning: in our case, we are maintaining a kind of roadmap of releases, and the coordinator of the network is doing that. The developers should inform the coordinator if they want to have something added to the next version of Oskari. So, funding is always an interesting topic, isn't it? So where do we find the money to run this kind of network? In Oskari's case, we gather some funding from the network steering committee members, the member organizations. And this money is spent on supporting the projects in how to apply the architecture, also on the communications activities, and on code integration reviews when you make a pull request and want to have code added to the code base. And the National Land Survey of Finland is of course a major contributor and user of the software, and we are currently taking care of this coordination work. But this activity could well be run by any other organization, or even a nonprofit or business organization. But I suspect that as we go along, these activities, and the funding, will be more evenly spread among those contributing to the software and the community. An important point is how to measure the success of your project. This is a way to prove that we are doing the right things. So here are just a couple of ways of doing that. Within Oskari we have maybe 10 installations that we know of. It can be downloaded by anybody, so we don't really know for sure. We have ongoing development projects. On GitHub we see the number of contributors. We can count the attendees in the meetings and other events we arrange. And this is an interesting point: how much money is being spent on developing Oskari. I would estimate, and this is a very rough estimation, that during this year over one million euros is being spent on Oskari-related development. And even more interesting is how much money you can save by doing this together. Of course you will save on license costs, but even more important is how much you will save on lifecycle costs. There are going to be major savings if you have a flexible architecture in your software and you can replace parts of it and add things to it as your requirements change in the longer run. And not to forget other benefits like enhanced cooperation between the silo organizations and the involvement and empowerment of people, which is almost invaluable. So, moving on to conclusions. Especially for the public sector, I would advise you to take care of your precious open source software. So write it, love it and use it. But also take care of it: make sure that when you do something with open source, the products you use are properly licensed. Myself, I am of the opinion that companies should also be able to take advantage of the developments made by the open source community.
This is important in Finland, where we are a small nation and we cannot afford not to include companies in our network and in our common activities. And in public tendering cases you definitely have to be quite aware of licensing, architecture issues and the documentation part. And some challenges: certainly it was a hard process to start breaking down the silos that the organizations had created around themselves, doing their own little things, and to overcome the prejudice against working together. But once you get down to it, when people start working together you cannot really stop it. Now this process just goes on and the benefits are starting to show. Okay. Sometimes a big issue can be getting the development of the software core funded, when people just want to add new features to the software. But that is something you have to take care of. Marketing and spreading knowledge is maybe not always easy. And for the facilitation of the joint development activities you need to bring the organizations together and have them agree together on who does what, so that the organizations don't do duplicate work. That's the whole idea behind the networking and the community. And then, if you want to go international, you have to think about how to join this cross-party ecosystem, for instance how the decision making processes of your project or the network can be adapted to those in use in worldwide communities. And the successes that I think we have had are that the software itself is very popular in Finland, and it's also gaining some international attention. So while we are far from perfect with the software, people still see the potential in it. And a very strong network of organizations has emerged around Oskari in Finland. And I think that the organizations are slowly beginning to realize how these kinds of activities can help them save money and provide better services to their customers. And personally, I think it's really important that companies and academia are involved in our network activities. Also, having a governance model does seem to make things more organized, so following certain principles is important when working together. And the flexible architecture is likely to bring huge savings in lifecycle costs, when you don't have to replace the whole software as things change over time. And I'm really happy to say that I am seeing significant joint development projects either in motion or due to start pretty soon around Oskari. So the benefits of joint development are becoming evident. I want to close on the notion that working together is really fun. Thank you.
|
Many FOSS projects have started as endeavours to solve a problem at hand. In due course, the developed software has been adopted by some other users, has proven itself useful and then, by magic, has become a popular product with thousands of users worldwide. Fact or fiction? This presentation outlines the success story of Oskari and national co-operation around the software. Oskari http://www.oskari.org is a popular open source platform for browsing, sharing and analyzing of geographic information, utilizing in particular distributed spatial data infrastructures. The Finnish Oskari collaboration network actively works on various projects extending the software and creating new innovative services. The network consists of 27 member organizations, of which 12 are private companies. Success doesn't usually come without organized work. For the process of securing a successful co-operation, a few steps can be laid out. 1) Creating a useful piece of software with appropriate licensing 2) Co-operating with a number of early adopters 3) Starting a collaboration network 4) Adopting a sustainable model for collaboration and developing a product lifecycle management plan 5) Measuring success and providing proof of benefits of both the software and co-operation
|
10.5446/32156 (DOI)
|
So this is Magical PostGIS in three brief movements. I'd like to start by acknowledging the good folks at CartoDB, who employ me and let me work on PostGIS. If you don't know who they are, like a lot of software-as-a-service companies, CartoDB has a strong open source ethic, stronger than most actually, because their system is built on top of open source components. The DB in CartoDB is actually PostgreSQL and PostGIS. So much of what I'm actually talking about today can be run on the CartoDB cloud, and some of my examples actually do that. They can also be run on a local PostGIS instance as well. In fact, I built a lot of these examples originally on top of the Boundless stack. So it's really fairly agnostic as far as what you put on top of the database, and PostgreSQL and PostGIS is what it's all about. So this is supposed to be Magical PostGIS. It's an Apple-esque name, the kind of thing you'd hear at the Apple user conference: it's magical stuff. And maybe that's what drew you into the room, but in retrospect I could have called it show and tell, since I seem to have a lot of material about my favorite toys, or maybe stupid extension tricks would have been more honest, since I've got some crazy examples of crazy extensions. But regardless, on the playbill it is Magical PostGIS in three brief movements. And I want to give this talk because I feel like people aren't appreciating the kind of... What's up? No. Funny colors? No. I guess we're losing the video on the recording. But I'd like to go on since you're all here. I wanted to give this talk because I feel like folks aren't appreciating the kind of deep and beautiful magic that they can create using little more than their standard back-end database. Too often people have this utilitarian view of their database. They don't really like their database that much. To them, it's just a bit bucket. It just holds a bunch of tables; they stuff data in, they drag data out. Some people hate their database so much that they hide it away behind an object-relational mapper, an ORM, so they can pretend the bit bucket isn't there in the background doing the hard work, so they can pretend they're all alone with their beautiful little middleware language. And they're really missing out, because once you get to know it more intimately, you come to realize that your database is a beautiful, beautiful thing. It's not just a bit bucket, it's a magical toolbox with all kinds of good stuff inside. So this talk is actually not so much about PostGIS; it's about the kinds of things you can do with PostGIS when you combine it with the magic that's already inside the PostgreSQL database. PostgreSQL is so magical because it was designed from the start to be more than a bit bucket. Michael Stonebraker had already spent a decade pushing around bits of the Ingres project when he dreamed up his next generation database in 1986. And he wrote a paper called The Design of Postgres, which laid out his goals for the new and, at that point, unwritten database. And it's those goals which form the foundation of PostgreSQL's awesomeness, and for PostGIS itself in particular: support for complex objects. Geometry and geography are complex objects, and so are rasters. User extensibility is what allows PostGIS to exist at all. It allows anyone to add types and functions to the database at runtime. And most of the fun stuff in PostgreSQL takes advantage of extension points. Active features are a pretty common database feature now.
The database takes a hand in managing data flow, and the relational model is what ties everything together. It's what makes the system as powerful as it is. Every piece of information is a tuple, and tuples are collected in tables. So Postgres lived as Stonebraker's academic project for almost a decade, but it was useful enough that by the time he moved on to other projects and other topics, it already had a user base who kept it alive. They fitted it out with the new SQL standard and eventually it grew into the PostgreSQL development community we have today. So for the first of our three movements, I want to talk about what's possible when you start making use of PostgreSQL's native full-text search support. Because if there is a phrase that makes me want to put my head in the oven, it is: we're using Postgres with Elasticsearch. And I acknowledge that Lucene and Elasticsearch are nice tools, but boy, I sure hope you need every scrap of functionality they offer, because once you have two different data storage and query systems strapped together, everything in your system gets more complex and uglier. Assuming, hopefully, that your relational database is your source of truth, all the changes have to be replicated over to your Elasticsearch system, which adds a synchronization step to all the work. And if your data changes fast enough, that can actually be quite complex. But that's actually the easy problem. The hard problem is that once you have two query endpoints, any query that involves both a text search and a spatial search of sufficient complexity to require PostGIS requires that the middleware starts to coordinate the query process. So first it talks to one of the systems and it says, give me all your records on this text search query. And then it has to take all that information, walk over to the other system and say, now give me all the records you have that are in this set and fit my spatial clause. And depending on the query, the order you want to do that in, text first or spatial first, varies. So basically you have to build a little query planner in your middleware, which is a terrible idea, because Postgres already has a query planner and already has a full text search system built into it. Postgres text search has all the basic capabilities you want in a full text search engine. It has stemming: foxes and fox, running and run. It has weighted searches, so you can give more precedence to results matching, say, the title. It has the ability to create your own dictionaries, so you can handle both different languages and specific professional domains, jargon and so on. It's got the ability to rank results based on the quality of the match. It's got highlighting of matched terms in output. But what does this all have to do with magical PostGIS? Well, if your full text engine and your spatial engine are in the same database, you can run compound spatial and text queries, and you don't have to think about the execution path or efficiency. The database engine just does it for you automatically. So here's a fun example application. It's built using geographic names, in this case from geonames.org, because geographic names are basically words. They're just really short documents that come with locations. But any document type with location can be used to build a cool text-spatial location application. So with a little data mangling, you can turn the geographic names file into a table. It looks like this: primary key, the name, location.
In order to get full text searching enabled, you have to add a tsvector column; our column type for full text searching is tsvector. Then we populate it with tsvector data using the English configuration, and we'll talk a little bit more about that later. And finally we index it using the full text index for tsvector: a GIN index, which stands for generalized inverted index, the same index type that's also used in the Postgres support for array types. So now we're all set up. I'll note there's a magic parameter in here, the word English. So we've specified an English configuration. So English grammar rules are used to determine things like that oak and oaks, and aged and age, are basically the same thing, to identify all the articles and pronouns that can be ignored, and to reduce the phrase into a simple vector-like structure suitable for indexing. So to_tsvector gives us a column of tsvectors we can query. But how do we do that? To query a tsvector you need a tsquery, which is itself a logical filter. And you can construct one as a combination of AND and OR clauses, optionally with weighting and partial matching; this is a simple one though. So this is a query that would match entries with both oak and tree, or oak and ridge. And we can use the query in a full text search of our 2 million record geographic names table using the @@ operator to find all the tsvectors that match the tsquery. And it turns out there are only three. But the really, really interesting thing is how quickly it finds the answer: just 17 milliseconds. So that's a good fast search of 2.2 million records. And the best part is that now that the full text search is handled inside the database, it's possible to build efficient compound spatial-text queries too. Like this query, which combines a search for all the records with oak and tree with a spatial filter restricting the results to the nearest hundred kilometers. And because both clauses are handled by the database, all the database machinery is at your disposal for figuring out the most efficient way to access the rows. So this is the explain analyze output for the last query. And reading from the bottom up, you can see that in this case the database ran the full text search first because it was the most selective: it returned 59 rows, as opposed to the "all things in 100 kilometers" filter, which would have returned several thousand. And then it applied the spatial filter, which removed 57 of the 59, leaving just the two we got in the result set. So I've shown you a lot of SQL, and maybe now you're starting to wonder where the magic is, if you don't think SQL is magic. Usually the magic comes when you bind the power of SQL into a user interface and make that power visually manifest. So take all those place names and subset them quickly using text search, and then pass the result into a heat map. So here's a map for a unique regionalism in the Pacific Northwest of North America. In Cascadia we call mountain lions cougars. So here are all the place names that have cougar in them; it turns out they're mostly in the Pacific Northwest. Now there are all kinds of oddities about how we name things, and thus how we perceive ourselves. So there's obviously some cachet. There's northern, southern, okay this makes sense. These are, by the way, just place names in the US, because using English kind of means I'm stuck with particular places. Eastern, okay this makes sense. And then western. There's always some cachet in being western. I think everything is west of Europe, and maybe that's why it all got rammed in there.
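Since the SQL itself lives on the slides, here is a hedged reconstruction of the setup and the compound query, run through psycopg2. The table and column names (geonames, name, geog) and the query point are assumptions based on the description, not the talk's exact code.

```python
import psycopg2

conn = psycopg2.connect("dbname=geonames_demo")  # connection string is illustrative
cur = conn.cursor()

# Add a tsvector column, populate it with the English configuration, and index it with GIN.
cur.execute("ALTER TABLE geonames ADD COLUMN ts tsvector")
cur.execute("UPDATE geonames SET ts = to_tsvector('english', name)")
cur.execute("CREATE INDEX geonames_ts_idx ON geonames USING GIN (ts)")
conn.commit()

# Compound text + spatial query: 'oak & tree' within 100 km of a point.
# The planner decides whether to run the text or the spatial clause first.
cur.execute(
    """
    SELECT name
    FROM geonames
    WHERE ts @@ to_tsquery('english', 'oak & tree')
      AND ST_DWithin(geog, ST_MakePoint(%s, %s)::geography, 100000)
    """,
    (-123.1, 44.0),
)
print(cur.fetchall())
```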
I think everything is west of Europe, and maybe that's why western got rammed in everywhere. So that was magical, but perhaps not practical enough. By the way, if you go to my bl.ocks page, the pramsey bl.ocks page, you can see this example live, and you can type in your own names and see what's going on. All right, so that was magical, but perhaps not very practical. So let's do this one. Suppose you were a county with some standard parcel and address data, and you wanted to set up a simple parcel finder app for folks to find their home. How would you do it? You want a Google-style interface: just one input field and magical autocomplete. You're a county, you're a city, you have GIS data, and your GIS data probably has a street name, address number and city for every site address. So we'll make use of that. But how? Street address, great, where are we? The only trick to making an autocomplete form is that you have to be able to look up not just the words the user has already entered, but the words they're in the middle of typing. Fortunately, Postgres text search can do that too. In this to_tsquery function, I'm not just looking for the 256, I'm also looking for words that start with MAI, say if I'm looking for 256 Main Street and I haven't quite finished typing yet. So that's what I can stuff in. Postgres text search calls this prefix matching. With prefix matching and a simple JavaScript autocomplete form (a jQuery UI one in this example) you can have a really fast autocomplete address search up and running in a few minutes. And it's uncannily accurate; it doesn't care about word order. If you want to get fancy, in addition to having one row for each street address, you can also add rows to your table for, say, street intersections like Main and Second. But the last example here is interesting, because in the search field we've got 349 E Main St, and on the map (it's a Google base map in this case) we've got East, all spelled out, Main St. So there's a mismatch here. So what happens if we go back to our form and try to search for the names as they appear on the base map? 349 East Main St, using the fully spelled out word East. Ah, no answers. Searching for Main Street, spelling street out in full, or searching for the addresses on South Second Street as they appear on the map: no success. So what's going on here? What's happening is we broke the street names down into words, each token got saved as a word, and then we saved the words in the full text engine. But the words aren't like English words: they have their own grammar and synonyms. Street names look English, but they're not, so the search is failing. Can we fix it? What if, instead of treating the words in the index as parts of language, we treated them as parts of addresses? Then the system would know that if you wrote St, you meant street, and if you wrote N, you meant north. Then searches using abbreviations would work, and searches against data that was abbreviated would work. And PostgreSQL text search allows you to create your own dictionaries: synonyms, words you want to ignore, words you want to replace with other words. So I created a custom dictionary for street addresses, the PostgreSQL addressing dictionary. And for my basic example, I used a simple dictionary setup, which doesn't do any special processing of the words.
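A small sketch of the prefix-matching query, and of what the plain simple configuration does with an address; the addresses table and the ts column are assumptions:

```sql
-- Autocomplete: match the finished word 256 and anything starting with "mai"
SELECT address
  FROM addresses
 WHERE ts @@ to_tsquery('simple', '256 & mai:*');

-- The "simple" configuration keeps every token as-is, abbreviations included
SELECT to_tsvector('simple', '128 E Main St');
-- => '128':1 'e':2 'main':3 'st':4
```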
That simple configuration is actually better than the English dictionary, which will drop things that aren't English words, like N or St. But it's still not that good, since a search would have to use exactly the same abbreviation style as the data in order to come up with a hit. So in this example, 128 is considered a word, E is considered a word, and St is considered a word. But when we parse the same thing using the addressing dictionary, the custom address dictionary comes into play and abbreviations are expanded out: E becomes east, St becomes street. So by altering the search application to use the addressing dictionary instead, we get much better behavior. East Main Street works, South Second Street works. Things even work when the user has mixed up the correct addressing order and put the directions last, or put the house numbers last. So that's pretty cool. This is one I couldn't demonstrate on CartoDB, because you can't do system level stuff like adding dictionaries in CartoDB; you have to take the extensions that are already in place. So this is one I do with my custom Postgres and GeoServer as the renderer. Rather than give a bunch of URLs, here's the modern version of the AOL keywords for this section: just type in postgresql full text 9.4 for the latest information. But full text search has been part of Postgres since 9.0. And if you type in pramsey github, you can find the addressing dictionary repository if you want to add that to your database. So that was the first movement. Let's pause briefly while the orchestra flips over the sheet music, and move to the second movement: federated systems. So first, upwards federation, pushing data up from my local database into a cloud storage system. And in deference to my employer, and because it's so easy to sync to a system where there's no impedance mismatch, copying from Postgres to Postgres is pretty easy: I'll be showing how I federated a local Postgres to CartoDB, a cloud Postgres. So first I got some open data from the city of Victoria, my home city: a shapefile of public art in the city. Then I loaded it into my local Postgres using shp2pgsql and viewed it in QGIS, so there it is, points. And then I loaded the same data into CartoDB, and there it is, a little more comprehensible with the base map underneath. And I can use the CartoDB visualization tools to make it pretty, in this case with a categorical style. But how do I connect the two systems? How do I get changes in the local Postgres to propagate to the cloud CartoDB? Well, CartoDB is a web service, so we need a web transport to push the changes over. And as it happens, I wrote one of those: the HTTP extension to Postgres. There's nothing especially spatial about this extension; it just allows you to make HTTP calls using PostgreSQL functions. So you can run an HTTP get function and get back the results from a web service, not just the content, but also the MIME type, status codes, headers and so on. And not just get: you can do posts and puts and deletes, so you can interact with any HTTP web service you want. And here's the thing that makes this work: CartoDB has a web service called the SQL API, and by the name you can imagine what it is. The SQL API is actually diabolically simple. You call an HTTP endpoint, and you tell it what format you want your return to be in, JSON or GeoJSON.
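A minimal sketch of that combination, assuming the pgsql-http extension is installed; the account and table names are made up:

```sql
CREATE EXTENSION IF NOT EXISTS http;

-- Read from the cloud SQL API: status, MIME type and body come back as columns
SELECT status, content_type, content
  FROM http_get(
         'https://myaccount.cartodb.com/api/v2/sql'
         || '?format=GeoJSON'
         || '&q=' || urlencode('SELECT * FROM public_art LIMIT 10')
       );
```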
If you're altering the data, or the data is private, you provide an API key to prove who you are, and then you just provide the SQL you want executed. It is so diabolical that I actually described it, a couple of years before it was invented, as the architecture of evil, since with an unprotected SQL pass-through there's so much evil the outside world could work on your database. Of course, the CartoDB API is protected against SQL injection, and users are isolated in their own databases. Everything is only run at the permission level of the user that's logging in, which is basically very low-level read access unless you have an API key. So it's not exactly the same thing I described in 2009, but it's an incredibly lightweight way to pass SQL through to the database unfiltered. The simplicity of their approach allows for incredible flexibility in building apps, since there's no need for the HTTP interface level to reinvent things to proxy for SQL. If you've used OGC WFS, you might have looked at the XML and said, this looks like really unpleasant SQL, and that's all it is: a bunch of ands and ors and filters and so on. Why reinvent that when you can just write the SQL directly? So for this example of federation, I use QGIS as an editor, and I directly edit a local PostGIS database. Each database update in turn triggers an HTTP post call, which calls the HTTP extension and passes an update to the CartoDB SQL API. This in turn is applied to the CartoDB database, which makes it visible to me in Chrome, looking at the CartoDB rendering. So: diabolical, pure evil. Here's the local database trigger that updates CartoDB. It's only tied to update events, but if we were doing a full CRUD implementation we could make an insert and a delete trigger as well. To write to CartoDB we need to authenticate, so we provide an API key. The SQL to update this table is simple, since we're only updating the location field; a more complete version might update all the fields. We have to URL-encode the SQL to pass it in the HTTP form, then run an HTTP post to get it up to CartoDB, then check the return code to make sure there was a good response and it was accepted, and then return the new tuple back for writing to the local database, and that's it. So here it is in action. I positioned my QGIS window on top for editing, and the CartoDB map on the bottom for seeing the results. I added a point and moved it, then refreshed the CartoDB window, and you can see the changes have shown up. So: move, move, and refresh. Refresh, refresh, refresh. Ta-da! The trigger method is a nice incremental solution; if you want something even simpler, operating in batch, this solution from Martin Jensen is even more diabolical. It just dumps a table directly into curl (this example uses CSV, so it's only good for points) and then slams the table right onto the CartoDB import API. That's probably the smallest piece of SQL I've seen for moving data directly from one database into CartoDB. So this is an example of a push up, moving data from a local Postgres to a remote HTTP host. How about a push down, pushing data down from the cloud to a local Postgres database? Can we do the reverse? Sure we can. We use a fancy SQL standard called SQL/MED, which stands for management of external data, and which is exposed in Postgres as a real-world piece of functionality called foreign data wrappers, or FDW. Foreign data wrappers expose what looks to the client just like a table in the database.
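A rough sketch of the kind of update trigger described above; this is not the talk's actual code, and the table, columns, account name and API key are all assumptions:

```sql
CREATE OR REPLACE FUNCTION push_to_cartodb() RETURNS trigger AS $$
DECLARE
  sql text;
  res http_response;
BEGIN
  -- Build the remote UPDATE for the one field we care about (the location)
  sql := format(
    'UPDATE public_art SET the_geom = ST_SetSRID(ST_GeomFromText(%L), 4326) WHERE cartodb_id = %s',
    ST_AsText(NEW.geom), NEW.id);

  -- URL-encode it and post it to the SQL API along with the API key
  res := http_post(
    'https://myaccount.cartodb.com/api/v2/sql',
    'api_key=my-api-key&q=' || urlencode(sql),
    'application/x-www-form-urlencoded');

  IF res.status <> 200 THEN
    RAISE WARNING 'CartoDB update failed: %', res.content;
  END IF;

  RETURN NEW;  -- hand the tuple back for the local write
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER public_art_sync
  BEFORE UPDATE ON public_art
  FOR EACH ROW EXECUTE PROCEDURE push_to_cartodb();
```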
You get access to a foreign table by running select queries on it, and you change it by running insert, update and delete commands on it. But behind the scenes, a foreign data wrapper table can be anything at all. It can be a table on a remote database, not necessarily even a remote Postgres database; there are wrappers for Oracle and MySQL and other databases. It can be a non-database data source, like a flat file, or it could be a non-tabular source, like a Twitter query. And there are FDW implementations for all those things. The one I'm going to talk about today, however, is an FDW wrapper for OGR, the spatial data abstraction library. It's a perfect fit for an FDW wrapper in many ways, since it exposes a very tabular kind of data: the OGR data model is very much a tabular model with spatial objects in it. And since it's a multi-format spatial library, by implementing an OGR FDW we get access to all the formats of OGR for the price of writing just one wrapper. So here's what it looks like to expose a file geodatabase to Postgres using OGR FDW. First you turn on the FDW extension, then you create a server that references the data source, in this case a file geodatabase. You can see the nomenclature really assumes you'll be working against other database servers, but fortunately that's not actually a real restriction. And finally you create a foreign table that in turn references the server you defined; it defines which columns from the foreign server you want to expose in your local database. Here's the same thing, only using CartoDB as the foreign server. Even though CartoDB is just Postgres underneath, we don't have low-level access, so we define our server using the CartoDB OGR driver rather than the Postgres OGR driver, and then we define our foreign table to match the CartoDB table. So that's our foreign table statement. And once that's defined, we can run queries locally on the table and get results just as if the data were local. Here's a distance query finding the seven nearest pieces of public art to the piece named Fire in the Belly. The OGR FDW driver is getting better all the time: as of a month or two ago, it can push the where clauses down to the remote servers, so that only a subset of the data is sent back to the client. In my next revision I'm going to add spatial filters, and then finally update and delete support, so it will be possible to actually edit the remote data without ever leaving the friendly confines of Postgres. So, this is the third movement. I think I'm going to have to give a talk modelled on the Four Seasons at some point: summer, fall, winter, spring, that would be cool. But for now the movement names are a little arbitrary. The third movement: time and tide. In the abstract that made me write this talk, I promised I'd do a time travel portion, a look into the future. So for this section I want to turn it over to Carnac the Magnificent. This is also at the bl.ocks URL down there. This is what Carnac does: you type a city into the autocomplete form, which is driven off a Postgres query, and then, based on the city, Carnac tells you if it's going to rain tomorrow. And this is all done with Postgres and PostGIS. The video was taken a couple of months ago, so the answers are all wrong, but if you go to the demo page you can find a live Carnac. It is only for the States, though. A live Carnac that should be pretty accurate, or as accurate as a fortune teller could be expected to be.
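Before digging into how Carnac works, here is a rough sketch of the OGR FDW setup from the federation section above; the path, layer name and columns are assumptions:

```sql
CREATE EXTENSION IF NOT EXISTS ogr_fdw;

-- The "server" is really just an OGR data source, here a file geodatabase
CREATE SERVER fgdb_server
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource '/data/public_art.gdb', format 'OpenFileGDB');

CREATE FOREIGN TABLE public_art_fgdb (
  fid  integer,
  name text,
  geom geometry
)
SERVER fgdb_server
OPTIONS (layer 'public_art');

-- Queried like any local table
SELECT name FROM public_art_fgdb LIMIT 5;
```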
So let's peel back the covers and see how Carnac's trick works. NOAA, the National Oceanic and Atmospheric Administration, is nice enough to publish their forecast data in a web directory. It's kept constantly up to date; you can see that I created this slide in February. For the rain prediction, the file we're interested in is the POP12 file: probability of precipitation, given in 12-hour forecast windows. If you download that file, convert it to GeoTIFF and look at it in QGIS, this is what you see, which is totally awesome and trippy, because QGIS is defaulting to loading the TIFF with band 1 as red, band 2 as green and band 3 as blue, which gives a really cool picture with lots of fun mixing. Actually, each band is meant to be viewed separately, as a forecast period. If you string them together you can see the forecast pattern of precipitation moving from west to east, in this case as you would expect the general direction of weather in the North American jet stream to run. So once the data are in GeoTIFF we can put them into PostGIS, but actually getting an optimal conversion from the NOAA NetCDF to GeoTIFF using GDAL is a bit of an adventure and a learning experience, so I thought I'd share it. The default conversion did preserve all five bands, and included the spatial reference information and the grid metadata of the input file, which is cool. But the input file was 1.5 megs, and the output file was 113 megs. The first thing you notice when you pop open the output file is that the pixel type is double, so that's 8 bytes per pixel. But the input data is just integers from 0 to 100, with a no-data value at 9999, which basically fits into a single byte. So there's an eight-fold improvement in storage available just by changing the pixel type, which is pretty easy. That gets the output down to 14 megs, so it's no longer 100 times larger than the input, only 10 times larger, which is still pretty terrible. And when you look at the TIFF in QGIS, it has some awful imperfections, where the 9999 no-data pixels have been coerced down into the same data range as the 0 to 100. So we need to explicitly map the no-data values into a slot in the number space where there's no real data; since our data run from 0 to 100, we can map them to 255 safely. But the file is still large. So what's going on? It turns out that GDAL produces an uncompressed GeoTIFF by default, so we have to ask for compression. DEFLATE does an excellent job, and now we're down to 1.4 megabytes, about the same as the original. But it turns out DEFLATE has some extra options we can twiddle: if we add a higher compression level, which just uses a little more RAM, and if we add a scanline predictor, we can get it down to just over 1 megabyte, which is a pretty nice improvement over the 113 megabytes the default conversion gave us. So the key component of our solution is what Carnac means when he says tomorrow, and to figure that out, it helps to look at the output from gdalinfo. There are actually five bands in the GeoTIFF, and GDAL does a great job preserving the original grid format metadata, which is what you're seeing there, so we can figure out what each band means. The first band is good for 10 hours after the forecast is generated. The second band is good until 22 hours after the forecast. The third band is good until 34 hours. The fourth band is good to 46 hours. Each band has a valid time, which gives the UTC timestamp when the forecast expires.
That valid time could be pretty useful, and there are actually two ways to solve the problem. There's the right way, where we carefully look at the metadata and figure out which forecast band we need based on the timestamp. And then there's my way, which was just to average bands three and four together; that's pretty close to tomorrow, between 12 and 36 hours. But before I can do any averaging, first I need to load the data, which involves the usual opaque command line syntax, but there's nothing too crazy here. We choose 32 by 32 chips because one byte per pixel, times 32 times 32, times five bands implies a five kilobyte tile, which is slightly smaller than the 8K page size of Postgres. And once all the data are loaded, we just need two SQL queries to run Carnac. First, the query to drive the autocomplete, which just reads the populated places, ordering results by size of city, so that's really easy. And second, the one that generates the probability guess for Carnac, given the city. That one is a nice big hunk of SQL. Step one, we use the selected city to generate a buffer ring, which we'll use to summarize the precipitation probabilities. Step two, we apply that buffer to the raster table, find all the rasters that intersect the buffer, and then mask out just the pixels of those rasters that fall inside the buffer. And step three, we take those masked rasters and summarize them, finding the maximum value of the probability of precipitation over the pixels. That becomes the number we use to drive Carnac's guess. And here's what it looks like visually. We calculate the blue buffer, and then we use that blue buffer to find the intersecting chips; you can see the red boundaries of the chips, and we just barely hit two of them. Then we mask out the chips to find just the pixels that intersect the buffer. And in order to keep Carnac up to date, I have a little process running on my home computer every six hours: it pulls the precipitation forecast from NOAA, converts it to GeoTIFF, and stuffs it up into an S3 bucket, and from there CartoDB automatically syncs it every day using the standard table sync capability of CartoDB. Basically, every refresh period it just slurps the file down and replaces the current data with the new data. And that's Carnac the Magnificent, looking into the future. I have a little coda to finish it off, but I've used up my time, so I'll say thank you very much for listening to me. And I don't know if I have time for, like, one question. I'll take one question. Just one question, please. So you're running out of time. Easiest question ever. If you're shy, you can have two questions, so you are not the only one. That's right, I'm not going to hold you to one. Yes. Would you differentiate between past and future sets on this screen? I would; I would say that I would let you pick both of them out. Yeah.
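For reference, the buffer-and-summarize raster query described in this section might look roughly like this; the table names, column names, band choice and the assumption that the rasters and the buffer share an SRID are all mine, not the demo's actual SQL:

```sql
WITH buf AS (
  SELECT ST_Buffer(geom::geography, 20000)::geometry AS geom  -- ring around the city
    FROM populated_places
   WHERE name = 'Portland'
)
SELECT MAX((ST_SummaryStats(ST_Clip(r.rast, buf.geom), 3)).max) AS precip_prob
  FROM precip_forecast r
  JOIN buf ON ST_Intersects(r.rast, buf.geom);  -- only the chips touching the buffer
```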
|
Everyone knows you can query a bounding box or even spatially join tables in PostGIS, but what about more advanced magic? This short symphony of PostGIS examples will look at using advanced features of PostGIS and PostgreSQL to accomplish surprising results: * Using full text search to build a spatially interactive web form. * Using raster functionality to look into the future. * Using standard PostgreSQL features to track and visualize versioning in data. PostGIS is a powerful tool on its own, but combined with the features of PostgreSQL, it is almost magical.
|
10.5446/32036 (DOI)
|
So they are free, you have no financial commitment, and you have several versions of them to choose from. So of course it's a very easy starting point for your mapping project to have a background map which is rich. Why? Because it's familiar: everybody knows Google Maps, everybody knows OpenStreetMap and so on, and we tend to see the world this way, which is a bit of a tragedy in my opinion, because Antarctica and Greenland are not that big. But anyway, that's how people are used to seeing the world today. Why? Because it's easy. There are tens of interesting map projections out there representing the world in different ways, but why do you have to learn all of them if you can just get away with one? So it's convenient, it's easy, you can relax. And as another bit, since we are closer to the dateline than I used to be, I think everybody in New Zealand will be pretty happy about this projection, nicely wrapping around and showing the Pacific Ocean as a solid, seamless entity instead of cutting it at the edges. So it's also nice because it's fair to the dateline. So why am I here today talking about projections outside of 3857? Well, there are a few reasons why people do not want to use Google Mercator, and one reason to rule them all is distortion. We use map projections for one particular reason: to minimize distortion, a certain type of distortion in a certain area. The choice changes depending on what type of distortion we care about (shape distortion, area distortion, distances between points) and it changes with the area that we are trying to look at: are we mapping at the global scale or at a local scale? Or maybe we are bound by legal requirements. For example, in Italy you have to publish your maps in UTM zone 32 North and 33 North; that's the law. You can use other projections if you want, but those two are mandatory. So if you are publishing legally binding maps, you have to use those, not 3857; you can also add 3857 if you want. So let's have a look at the area distortion problem. Anything that's close to the poles gets inflated a lot. If Italy were moved more or less to where Greenland is, it would be much, much bigger, but then again it's way colder there, so I'm not proposing to move Italy there. What do we do when we want a good representation of the areas, of the land masses and their sizes and their relative sizes? We use equal area projections. This is very important for statistical maps and for didactic purposes, so that people don't think that Antarctica is the biggest continent in the world. Here is one example of an equal area map. This is the Mollweide projection, and FAO, the Food and Agriculture Organization, sponsored GeoSolutions to implement it in GeoServer for their atlas of fisheries, in which they have big polygons over the earth and they need them to show the right proportions, so you can actually see what's bigger and what's smaller without having to think, oh, it's drawn that big but I have to remember it's actually smaller, and so on. Second distortion: distance perception. Yes, most of the tools we have out there have a point-and-click measure tool where you can click a path and get a distance. But the problem is that you cannot shut your eyes, and your eyes will keep on seeing relative distances. They might not be able to tell you it's 1000 kilometers, but they might be able to tell you, oh, that looks twice as long as that other direction. And in Web Mercator the distances are all wrong.
This is a map of lines of equal distance from a point in Slovenia, and as you can see they are not circles; they look like eggs instead. So your perception of distances on Google Mercator is completely wrong. So what do we do if we need an actual, precise representation of relative distances? We use equidistant maps instead. An equidistant map is a map in which distances along certain lines are true. It's not possible to make a projection that's accurate in terms of distances everywhere, but, for example, this azimuthal equidistant map shows the right distance from the pole to any other point in the world. So equidistant maps are important when you have a point and you want to show the distances from it to other points around it. And you might be familiar with them, because the United Nations logo is derived from an azimuthal equidistant projection. Here is one use case, from a Canadian weather map. They have a weather station and they are displaying everything that's around it in an azimuthal equidistant projection. So they basically have a different map for each weather station, so that they can literally show true distances from that particular point of interest. Finally, Google Mercator does not show the poles. Not at all. You can see a bit of Antarctica, but you are not seeing it all the way, and the North Pole is also cut out. Yet the poles are very important for the future of humanity, for climate studies and so on, so there must be a way to map them correctly and show them on a map. And yes, that way is, commonly at least, the polar stereographic projection. This is one example from the British Antarctic Survey, which is powered by GeoServer. This is another one, called Polar View. It's showing a number of climate-related data sets around the poles; it covers both Antarctica and the North Pole, and this is actually a view of the North Pole. And FAO itself, in that fisheries atlas, allows you to look at the world in different projections to appreciate the different aspects of it, and one of them is the South Pole stereographic, again powered by GeoServer. Now, 3857, Web Mercator, is, besides the North and the South Poles, actually quite well behaved. What if we try to display global data sets with all these other projections? Well, we take each point that we have in the vector data sets, each pixel that we have in the raster data sets, we reproject them point by point, and it should all be fine, no? Well, not exactly, no. It can go very wrong. And I want to show you some examples of what happens if you have a global data set and you try to display it with local projections without pre-processing it, without cutting it, without selecting it, just as it is. Which is actually a more common use case than you might think. So, this is just me panning and trying to look at the Pacific. And I would really like to see the Americas there, but they are not there, because I'm asking for an area which is between zero degrees longitude and something like 400 degrees longitude, and there's nothing beyond 180. So if I'm just asking the database, give me something that is beyond the dateline, it's going to give me back nothing. So I have a big hole in the map, which is not exactly nice. What if I try a projection which is designed to look at the Pacific Ocean, such as the PDC Mercator one? Well, Antarctica goes away, because it touches the pole and the Mercator projection blows up at the pole.
It degenerates to infinity. And Greenland was wrapped in such a way that it seems to be covering the entire North Pole, and that's also a problem; it shouldn't be that way. What if I try to use a Lambert conformal conic? Well, some of the countries are being stretched in a funny way in the upper part of the map. What's that? It shouldn't be there; I will show you later how it should look. What if I try to do something as innocent as a datum change? I'm not even changing projection, I'm just changing the ellipsoid, so from geographic coordinates to geographic coordinates. A datum shift changes the values of the geographic coordinates a bit, and those that are close to the dateline might cross it and go to the other side. So I end up with a long line crossing my map, and I'm wondering what's going on there. And, well, this one comments itself: this is UTM 32 North over Italy, trying to reproject the whole world. That's the result you get if you try to just reproject point by point. Not exactly what you want. So what's going on here? Well, we have a global dataset and we are trying to use locally defined projections, so we should really filter the data and probably cut it, because some of the polygons are so huge that they go far away from the projection's area of definition. Then we have data that crosses the dateline after the reprojection, because of datum shifts and the like, and so we get those long lines crossing the map; we should be cutting the data at the dateline instead. And then we have the problem of long lines, like a meridian or a parallel, represented with only two points. If I reproject two points, but the output is supposed to be a curve, I have a problem, because if I reproject two points, I end up with two points. I should densify the line before reprojecting it. Do I really have to do all of this by hand? I'm tired just from describing it. And in GeoServer, wait, it gets worse. In WMS there are the auto projections. The auto projections are a way to get your own coordinate reference system centered on your point of choice, instead of using one of the 5,000 already well-known projections in the EPSG database. So, say you want the Mollweide centered on Korea: you can, using the auto projections. And the thing is, anybody can give me a different projection center, so basically I have an infinite number of projections that I could theoretically handle. This is a mess, and it's impossible to deal with by hand in the use cases where you have many, many projections to handle in the output. It needs to be automated, and GeoServer does just that with a system that we call advanced projection handling. These are smart classes that know the issues and the problems of each one of the projections and can fix them while we paint the data. We have a small set of them, for the most common projections, and it is growing over time. As you can see, I have a list: we have one for Transverse Mercator, for Mercator, for the Lambert projections, for the conics, for Polar Stereographic and so on. So, how does this work? Well, it's four steps at the moment. The first one is: you gave me an area that you want to display, and I have to figure out where the source data for it is. And sometimes it's not what you asked: you asked for a bounding box that crosses the dateline, and it's going to be outside of the valid range of longitudes. Well, in that case I also have to read from the other end of the world.
So I'm trying to display the yellow area, but I have to query the two dotted areas instead. The system knows about this and will read all the data that's needed. Then I have to cut excess data. If I'm trying to display a UTM projection, there's no way I can reproject data which is too far away from the central meridian, so I have to cut it away. It's part selection and part actual polygon clipping, because, I don't know, Russia is too big; I have to actually cut it. Part of it is in my map, but most of it is actually outside. And then I have to wrap the data: I read it from one side of the world and I have to move it into the area that was requested. So I'm taking the data and translating it so that it shows up in my map, in the WMS output. Finally, for all those cases in which the reprojection caused the points to wrap around the dateline and generate those very long lines crossing the world, I have to re-wrap the results so that they show up on the right part of the world, and then duplicate them on the other side in case I'm looking at the whole planet. So, let's try again the same map as before, but with advanced projection handling enabled, which is, by the way, enabled by default in GeoServer, so you don't have to do anything; you might want to disable it, but normally it's enabled by default. So this is the map on the Pacific, and this time I have the Americas showing. Why? Because the system determined that I was reading beyond the dateline, went and queried the other side of the world, took the data and pushed it into the right place. And it did that in a single WMS request; there are no tiling tricks here, the system actually took the data and moved it into place. This is the PDC Mercator that I was showing before. You can see that now everything wraps nicely, Greenland is complete, and Antarctica is there, because I cut it: the projection handler for Mercator knows that we don't want to show anything beyond 85 degrees north or below 85 degrees south, so it just cuts the excess. This is the Lambert conformal conic. No more funkiness at the top; this is how a conformal conic should look. There's nothing at the top because we wrapped the earth with a cone and then we opened it; if you think about opening the cone, that's what you get. This is the datum change: again, no funky long lines crossing the map, because the system detected the dateline crossing and fixed it, re-wrapping the data across the dateline. So this is what we have today. It's working for vector data, as I've shown, and it's also working for raster data, starting with GeoServer 2.7, so six months ago. What's cooking? What are we working on right now? Well, many scientific data sets have a little problem: they don't have coordinates between minus 180 and 180 in longitude; for some reason, they choose to go between 0 and 360. And when you put that into a system that wants regular longitudes, it doesn't like it so much. So we are actually fixing the rendering subsystem to recognize that situation and handle it properly, so that all the properties of the advanced projection handling are preserved when the data set is between 0 and 360. And then there is the problem of the long lines that I mentioned before: I have one meridian or one parallel represented as a single long line between two points. I reproject both of the points and I get back, again, a straight line. But some projections turn long lines into curves, so I need to densify.
I need to add points before reprojecting, so that the curve shows up. The problem is: how many? I don't know. It depends on the projection and on the area, so it's something that needs to be decided dynamically. We have a bit of code in the raster reprojection subsystem that does this kind of analysis, to determine whether we can reproject the raster with an affine transform or a piecewise affine transform. In my spare time I'm trying to leverage it to figure out how many extra points I need to add on the long lines in order for the curves to show up seamlessly in the resulting map. But it's not there yet. Finally, as a closing word: just during the last six months we have seen, I don't know, four or five new projections showing up in the GeoTools code base, contributed by people. So I'm pretty sure that there is still lots of interest in map projections outside of Web Mercator, either for reading data or for producing output maps. We have also seen the addition of new AUTO codes for Polar Stereographic and Gnomonic, so people definitely want to center their own little projections where they want, without abiding by the EPSG database. So I'm pretty sure there will be more need for advanced projection handling in the future, and I can state for sure that there is life outside of 3857. And with this, I'm done. Questions? Yeah, I'm ready for Q&A. Anyone have any questions? Go ahead. You're left with a line on the join when they come back together; could we merge those polygons back together? That's a possibility, but the problem is that it would introduce a slow topological operation in the rendering chain, and I'm not too happy to go there. If there is funding, and if someone wants an option to enable that, we could have... let me just... here, see, Antarctica has the little black line in the middle. We could change it so that that little black line doesn't appear anymore, because the two bits get unioned before showing them. But, as I said, at the moment it would be pretty expensive, so I really don't want to go there. Well, actually, if you look at Russia, there's also a line cutting it, and that's the same problem. Right, we would also have to identify the different bits and merge them together. It's complicated. It would be nice. No question there. Anyone else? So, as I've understood it, you get all of this as long as you have GeoServer 2.7, is that right? Most of it. With GeoServer 2.7, and in GeoServer 2.8 you get some extra improvements, but the vector-based advanced projection handling is actually four or five years old, since... I don't know, 2012 or... no, 2010, the first version. Then we tweaked and improved it, added new projection support and so on. In GeoServer 2.7 we added the support for raster data, which was not enjoying advanced projection handling support; so if you were taking a world map and showing it in a polar stereographic, there was a large part that was not showing up. And then in 2.8 we increased the number of projections we support and made some tweaks for the 0 to 360 case. Part of it is already in there, in 2.8. So it's moving on, it's improving over time. Well, that's great. It looks very interesting and very useful, and people can just go and try and use...
|
Most popular mapping presentations today, ranging from clients to servers, show and discuss only maps in EPSG:3857, the popular Mercator-derived projection used by OSM as well as most commercial tile providers. There is, however, an interesting, exciting world of map projections out there that are still being used in a variety of contexts. This presentation will introduce the advancements made in GeoTools and GeoServer to handle those use cases where users have a worldwide data set and need to view all or part of it in multiple projections, some of which are valid in a limited area, requiring the software to perform a proper display of it on the fly, without any preparation. We'll discuss how GeoTools/GeoServer advanced projection handling manages to deal with these cases: wrapping data, dealing with the poles and the dateline, cutting excess data on the fly, and densifying long lines on the fly as needed to ensure a smooth reprojection, for a variety of cases ranging from seemingly innocuous datum shifts to maps having the prime meridian over the Pacific, and the various tricks to properly handle stereographic, transverse Mercator, Lambert conic and other limited-area projections against worldwide source data sets.
|
10.5446/32038 (DOI)
|
I will start my presentation. Well, these pictures are of the work to protect the Amazon at IBAMA. One minute... sorry. Okay. My name is Luis Motta. My academic education is important: I started working in GIS in 1995. Some professional experience: in Brazil, at the government's agricultural research organization, and here, working to monitor vegetation at the State Forest Institute in the state of Minas Gerais. I started working at IBAMA in 2003; in those years the Brazilian federal government had started the career of environmental analyst. In the Amazon I participated in a big project, the System for Protection of the Amazon, and this project is a cadastre of all the land of the Amazon. In 2010 I started using FOSS4G software and contributing to QGIS. This is the overall structure: the Ministry of the Environment has an agency named IBAMA that works to protect nature. It has a directorate specifically for protection, and within it the environmental monitoring coordination, which is where I work. Well, this work is very hard. Okay. Deforestation in the Amazon has a cycle: cut the forest, put in animals, cattle, and afterwards plant soybeans. It's a cycle, and the main work is to stop it at the first step, when the initial deforestation happens. I will show how satellite images can help to stop this process. The other big problem is illegal mining. Getting the gold out is terrible for the water resources. This is a helicopter, this is a machine; look at the size of the operation. There is a lot of money here, and it is all illegal. Okay, the plan for the presentation: what has changed, at the moment, in using satellite images? What do we need? What do we expect from using images? Then the presentation of the catalog, how to produce the catalog on the server, and how to use it on the client, in this case with a QGIS plugin. At the end I can say a little more about the Explorers program from Planet Labs. Well, when you need to buy images it is a big problem, and you need to work over a large country. In 2004 came the first popular free imagery, the first large amount of images that were free to download, with the CBERS program from Brazil and China, with 20 meter resolution and a 2.7 meter high resolution camera. Until 2010 there was another big set of images for this project, in this case only with these partners: IBAMA, the Federal Police of Brazil, and JICA, using PALSAR, in this case specifically for the Amazon. You have clouds in the Amazon, and that is the big problem: because of the clouds you do not get a good optical image every year. But about two years ago there was a problem with PALSAR; maybe next year this project will start again. Well, at the moment we are using a lot of Landsat, so you can have the temporal series, plus the current images from Landsat 8. At the moment it is very nice because of the ortho-rectification: you can ask USGS on demand to ortho-rectify the temporal series of images, and that guarantees a correct geometric comparison. And the Brazilian government, the environment ministry, purchased three coverages of RapidEye. A big purchase, and a big job to manage too. This is a path/row grid of Landsat; if you zoom, you can see the Landsat scene and the tiles of RapidEye. You have the temporal series on demand: you have a problem in a specific area, you can download, process and store the data, and compare it with the current image. When I spoke about CBERS, where you could make downloads, that was for experts in remote sensing. At the moment, what's the difference? The big difference is high availability and spatial resolution,
now reducing the need for specialists. When you try to explain how to classify images, it is difficult for a common user. But when you see a high resolution image, you don't need to say much; it's evident. A house is a house, a building is a building, a farm is a farm; it is very, very easy for the common user. It is for all types of users, everybody, because you have high availability and higher spatial resolution. In this case, at IBAMA, what's the difference now? Every day you do interpretation, and not with only one or two images; you have a lot of images, which gives much more confidence to understand what is happening. Well, there are two points: a lot of temporal series, and support from images with higher spatial resolution. I'll try to explain. Well, try to interpret this image. Okay, maybe this is a road, or maybe not. When you have the support of the high resolution, you can say with confidence: this is a road. Why? Because you have the support of the resolution. That's the idea: the high resolution calibrates the user's interpretation of the site. Once you calibrate your eyes to understand what it is, you can work with the temporal series for that site. Well, you have the high resolution for free, okay, but from what date? When you use the RapidEye you have the date, but the date is not the same: here there is a one year difference. But I can go to nearby dates and see; you can calibrate your eyes for the interpretation of Landsat by using RapidEye. Well, up to this point I have been talking about the opportunity of using this. Now I can show how to combat deforestation. The logic is: your interpretation detects deforestation in this month, the yellow line. Next month you see the new deforestation, a clear cut. The plus sign marks a continuing area, and this continuing area is the best moment for you to send the staff, the police; a chance to combat the deforestation. So if you have the temporal and spatial capability, you will have more success in combating deforestation. With Landsat plus Sentinel there is a one-week offset, because of the orbits of Sentinel and Landsat, so you get a free image every week. At the moment Sentinel does not yet have availability of images, but it is planned; I believe in about two months, at the end of the year, we will be able to work with both satellites. Well, that is a lot of images, and you see what you get with a lot of images. The idea is to create the catalog on the fly: you have a target area, and you automatically add all the images for that target area. Well, you need one side on the server and the other on the client. On the server we are working with Landsat and RapidEye and the others. And the interesting point is that the user does not have to produce the image: the job is an RGB composition, the contrast, everything, and you publish it for the user. You have the GeoTIFF if you need the GeoTIFF, but mostly we use TMS, which is very fast. To use this amount of images you need an index. The idea is to have the footprints. It is called the catalog layer: a polygon layer of the images, which has the address of each image. If you have this, you can use it on the client, where the plugin identifies how many images you have in the map extent. The images can be on the server or local, local meaning the same directory where you have the image. With this the plugin automatically creates a layer group and adds all the images to this group, and if you have a date field, it orders them by date. Let me show the structure of the TMS; it is simple.
You have a directory where you store the TMS and a directory for the GeoTIFFs, and you put the GeoTIFFs in that directory. For the products it is important to maintain the same name as the original: if other people read this name they understand it is a Landsat scene, and if you change the name, that is a problem for new people working with it. And as a suffix you put the RGB bands. Well, I can show some of the processing; the other processing steps are on GitHub. The first step is to create a red, green, blue composition, where you give the name of the image and which bands you would like to stack. The next step is to convert to 8 bit for interpretation: for the interpreter there is no difference between working with 16 bit or 8 bit. We scale it: the minimum value of the 16 bit becomes zero and the maximum becomes 255, and with this you have something very similar to the original 16 bit image. Well, the TMS: the script creates the TMS and the descriptor of the TMS using the GDAL WMS driver. The structure is directories: the first directory is the zoom, the next directory is the X coordinate, and the Y coordinate is the name of the PNG picture. The problem when you add this type of layer in QGIS is that when you go to zoom to layer, you get the whole world. So we put a new tag with the extent of the image, and that is used for zoom to layer. In the script you pass the name of the image, the minimum and maximum zoom, where to put the TMS, where to put the PNG (this PNG is a quick look), and the address of the TMS, with which it creates the driver file. Well, this script is simple. Using another utility in Linux, parallel, it runs using all the processors and balances the load. You can run a lot of scenes, and, also using nohup, you can execute the command, close your terminal, and it keeps running on the server; you pass the list of images and it creates them, sometimes a lot of images. Well, the other interesting point: what if the server has no more space? What do you do? Here you boot Linux on another machine, copy the scripts, do the processing there, and the server just receives the RapidEye images. Okay, footprints. The idea is to create a footprint, a vector of the difference between no-data and where you have data. You smooth the footprint, since you have a lot of vertices that can be removed, remove the small polygons inside, and each resulting polygon is used to generate the catalog layer. How do we calculate the footprint? Using the GDAL utilities: gdal_calc, where you mark the valid pixels with a background of zero; sieve, to remove the small polygons inside; gdal_edit, to set zero as no-data, so that polygonize only polygonizes valid data; and polygonize, for the case where you have one value in the pixels, creating a GeoJSON. GeoJSON is very nice because you can change the values of the attributes; in this case I changed this value to the path of the image. With this I have the grid of the catalog. If you like, a convex hull is a way to reduce the number of vertices in each feature. Then you add all the footprints to a shapefile. As you can see, it is all simple; you can use bash to quickly produce your catalog layer. In the catalog layer you have the attributes: the path of the image or the address of the TMS, the quick look, and the date. Okay, at this moment I think I can speak about the plugin. What does the plugin do? To begin, of course, you download and install the plugin; the first step is the search: which layer is the catalog layer? This is a search dialog where you have the fields: the date, which is not mandatory, and the image address,
and it can be a local image or an address on the Internet, in this format. The geometry of the footprint is a polygon layer; with this you have the catalog. To get the images, the plugin intersects the map extent with the features of the catalog. The idea is that the plugin searches and finds the catalog layer, and when you run it, it tells you how many images you have here. It intersects with the map extent, creates the catalog group, and adds all the images for this area; if you have a date, they are ordered by it. When you run, you can run for all features, but if at that moment you do not want to run for all features, you select which features you want, and in this case it searches only the selected ones. When it runs you can cancel, and it waits for the cancel. The idea is also, for illustration, to work in several ways: you can use this plugin to synchronize two windows, and it is good to have two monitors. You can put one window for the historical images and the other for the image with the higher spatial resolution, with which you can do the interpretation of the target image. If you want to build the catalog quickly, you can use another plugin which constructs the catalog layer: it makes the grid of the catalog layer, creates the search of local images, and gets the dates; using an expression with the name of the image you create the date field. Okay, how much time? Okay, okay, at this moment I will be very quick. Well, this is very important: this is not an official plugin of Planet Labs. There is the Explorers program, and I could take the API and make this work. As I showed, I can request from the API of the Planet Labs server and receive the footprint, the metadata, a lot of information about the images in the target area. From the footprint that the Planet Labs API returns, I create the submenu and I can show the images and more. The idea is: install it (of course you need the key to use the API), you select where you want to make a download, select the range of dates, and, if you want, create a preview to see the image; if you like the image, you can download it. In this case you can add the TMS from the Planet Labs server, or you can make a download. The idea is to have the image when you need it. I am trying to get Planet Labs images for this year; I don't know, I am trying, you have the last few weeks. Planet Labs, it's a present for you. Okay, thank you. Okay, thank you. Are there any questions? Have you been able to identify areas of deforestation and mining and actually stop the people that are doing it? It's my job. It's very difficult; talking about this is very political, and a lot of things happen when there is deforestation. But you have a success rate? That's right. The idea is that you can see the continuing deforestation, and you have success in getting to the farm, getting the machines, the people. We have had success with this, thank you. Okay, thank you.
|
The monitoring of tropical forests requires a large amount of satellite images, some being free, as in the case of the Landsat and CBERS series, and others, with better spatial resolution, being paid. We currently have in IBAMA approximately 49,000 scenes of RapidEye, corresponding to three annual coverages of the entire territory of Brazil, which, added to the growing acquisition of Landsat 8 images, give the environmental analyst a condition never available before to monitor Brazil's natural resources. On the one hand we have a large number of images to improve the quality of analysis; on the other hand, using all the images of a given location becomes impossible if each image has to be added manually in a desktop GIS. This paper will show the methodology and the implementation of the catalog on the fly, allowing the environmental analyst to obtain, automatically, all cataloged satellite images of a particular area of interest. The work consists of: 1) providing the satellite images in a minimalistic form, i.e., with minimal computational resources, mainly regarding the demands of settings on the server, and 2) getting the images of the areas of interest automatically in QGIS. Scripts were made in Python and Shell/Linux to generate the satellite images in the TMS (Tile Map Service) format and the definition files for GDAL_WMS (GDAL Web Map Services). The scripts were made using GDAL libraries and programs, and are available on Github (https://github.com/lmotta/scripts-for-gis). For each satellite scene, we have the structure of TMS format directories and files and the corresponding XML file with the definition of the GDAL_WMS driver. The scenes are processed in parallel through the parallel utility (http://www.gnu.org/software/parallel/), so the processing time is limited only by the server's processing capability. The whole set of scripts and tools keeps the process minimalistic: each step has its own responsibility, which does not burden maintenance, compared to the development of a processing framework. The scenes were cataloged in a vector layer allowing them to be queried, and each type of image (Landsat, CBERS, ...) has its own coverage layer. The Catalog On The Fly plugin was developed for QGIS, allowing automatic access to the images using the coverage vector layer as a search index. The plugin performs the intersection between the extent of the map view (map canvas) and the coverage layer, and thus identifies the scenes that are in the area of interest. The coverage layer has to have two fields, one with the image date and the other with the image address, which may be the URL of the TMS or a local image. The plugin is in the official QGIS repository as experimental (http://plugins.qgis.org/plugins/catalogotf_plugin/) and also on Github (https://github.com/lmotta/catalog-on-the-fly). The Catalog On The Fly can be seen on Youtube (https://www.youtube.com/watch?v=2S3RlWr0uQg). The work is in progress: the IBAMA satellite images are being processed, and environmental analysts are testing the plugin.
|
10.5446/32041 (DOI)
|
Hello everyone, my name is Hyung Woo Jo and I'm doing a master's course in computer science at Kunsan National University. The title of my presentation is Spatial Tajo: Supporting Spatial Queries on Tajo. The presentation will be made in the following order: what is Spatial Tajo, the motive for development, why I chose Tajo, the plan for the implementation of the plugin, the parts implemented and the parts not yet implemented, and the conclusion. What is Spatial Tajo? Briefly, it is a plugin to provide spatial queries for Tajo. In detail, it is a plugin to provide and perform querying of data sets with spatial queries, using SQL, in a distributed data warehouse system. Tajo with the plugin provides spatial functions for spatial queries, supports spatial data types, supports indexing of spatial data, and allows the use of the rest of the data. Then, what was the motive for the development of this plugin? SQL has been getting more interest in big data over the last decade, and the needs to analyze spatial big data with Tajo have increased naturally as well. I also thought that the volume of the data containing spatial information I had to analyze could come close to big data. The data held by my lab is a collection of tweets from Twitter, gathered in real time, which do not consist only of spatial information, but which I had to analyze. When I analyzed the data, analysis using Hadoop was the trend, and I too conducted analysis using Hadoop. However, I had to use MapReduce to conduct analysis on Hadoop, and I often thought that being able to use SQL whenever I conducted an experiment would be convenient, since it is somewhat difficult for an analyst to use MapReduce, which is essentially batch processing: the more data, the higher the latency becomes. So I did research on using existing software and solutions, and of course there are good ones among them. Traditional spatial databases and DBMSs include Oracle Spatial and Graph, which can be installed into the Oracle database as a plugin, and MySQL. Both are satisfactory solutions, but they are commercial products and it is somewhat difficult to build up a cluster with them. My university did not provide them, and solutions that continuously cost money were not an appropriate option. Using PostGIS, which does not cost anything, seemed to be a good method; however, since it is not itself software for analyzing big data on Hadoop, it was set aside. As for NoSQL, there are document-oriented databases such as MongoDB, CouchDB and RethinkDB. I could have used these, but I already had structured data, so I did not have to. HBase was made by modeling after Google's Bigtable, but it is somewhat difficult for an analyst to use. Hive is convenient since it uses HiveQL, which is similar to SQL, but it was not clear that Hive, which uses MapReduce, would be an appropriate choice. Now, solutions that use Hadoop include GMS, Esri's GIS Tools for Hadoop, and SpatialHadoop. GIS Tools for Hadoop is close to a set of tools or libraries prepared for analysis using Hadoop, while SpatialHadoop is close to the form of a combination of a spatial plugin with Hadoop. However, in that they do not cost anything, despite the little effort that should be invested, it seemed that performing spatial queries using Hive, or using GMS, might be the most appropriate. However, I finally decided to build, by myself, a plugin that allows Tajo, free and open source software with low latency, to perform spatial queries. Then, why did I choose Tajo? There are a few features that became the reason for my choice. First I will introduce Tajo, before I speak about why I chose it.
First, Tajo is a robust big data relational and distributed data warehouse system for Hadoop. Tajo is designed for low-latency and scalable ad-hoc queries, online aggregation, and ETL (extract, transform, load) processes on large data sets stored on HDFS (the Hadoop Distributed File System) and other data sources. By supporting SQL standards and leveraging advanced database techniques, Tajo allows direct control of distributed execution and data flow across a variety of query evaluation strategies and optimization opportunities. Then, why did I choose Tajo? There are a few features behind my choice. Since Tajo is designed to run on Hadoop, using Hadoop relieves my concerns about data distribution, and it also supports Amazon S3. Tajo has external tables, so it can load existing files and query them; it does not support update syntax, but it can overwrite. Tajo supports standard SQL and does not use MapReduce, so it is faster than MapReduce-style batch processing, it has fault tolerance, and it supports dynamic scheduling for long-running queries. Tajo is convenient in terms of installation, construction and operation. Of course, it is a project still growing, so it is certainly difficult when something fails, but if you need to, you can implement things yourself and attach them in the form of a plugin. Tajo, as described, has advantages and disadvantages compared with the solutions mentioned, but it is free and open source software, so I decided to implement the plugin on Tajo, which I judged to be appropriate considering the characteristics introduced. The plan for implementation of the plugin is broadly divided into four steps. First, for spatial queries, implement 11 basic spatial functions including distance and equals, plus functions for converting spatial data types. Second, add spatial data types, meaning custom types carrying spatial information such as point and line string. Third, KNN queries, meaning smooth execution of k-nearest-neighbour queries once the spatial functions exist. Fourth, indexing of spatial data, meaning an index that allows retrieving only the necessary data when making spatial queries. The parts implemented so far include the spatial functions, KNN queries, and spatial data indexing. For the spatial functions, I implemented most of the primary functions using JTS. I carried out KNN queries using the implemented spatial functions, and they work smoothly. For spatial data indexing, following how indexes operate in Tajo and SpatialHadoop, I implemented a two-level R-tree using Sort-Tile-Recursive (STR) packing. The two-level R-tree has two parts, a global index and local indexes, as shown in the picture. The process of building the two-level R-tree index is as follows: first, Tajo divides the stored data set into areas using STR; second, it builds a local index for each area; third, it extracts only the entries at a certain level from each local index and builds a global index from them. The process of reading the two-level R-tree index is as follows: first, Tajo reads the global index and finds the search keys; second, it finds the local indexes corresponding to those keys; third, it searches within the local indexes; last, it reads the data directly from storage and constructs tuples. The parts not yet implemented include spatial data types, some spatial functions, optimization of KNN queries, further spatial indexes, and modularization.
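As an aside, the two-level index just described can be sketched in Python with Shapely's STRtree; the real implementation is Java on top of JTS inside Tajo, so the block size, the plain bounding-box global level and the final intersects refinement below are simplifications for illustration only.

```python
# Sketch of a global/local (two-level) STR-packed index, assuming Shapely 2.x,
# where STRtree.query returns integer indices into the geometries it was built on.
from shapely.geometry import box
from shapely.strtree import STRtree

def build_two_level_index(records, block_size=1000):
    """records: list of (row_id, geometry), e.g. one block per storage split."""
    blocks, block_boxes = [], []
    for i in range(0, len(records), block_size):
        chunk = records[i:i + block_size]
        geoms = [g for _, g in chunk]
        blocks.append((chunk, STRtree(geoms)))            # local index per block
        xs0, ys0, xs1, ys1 = zip(*(g.bounds for g in geoms))
        block_boxes.append(box(min(xs0), min(ys0), max(xs1), max(ys1)))
    return STRtree(block_boxes), blocks                   # global index over blocks

def search(global_tree, blocks, query_geom):
    hits = []
    for bi in global_tree.query(query_geom):              # 1. candidate blocks
        chunk, local_tree = blocks[int(bi)]
        for li in local_tree.query(query_geom):           # 2. candidate rows
            row_id, geom = chunk[int(li)]
            if geom.intersects(query_geom):               # 3. exact refinement
                hits.append(row_id)
    return hits
```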
Spatial data types are currently not implemented, so there is the inconvenience that the spatial functions have to use primitive types; I am going to resolve that once spatial data types are implemented. For the spatial functions, I am going to implement the remaining ones such as length, area and centroid, and then optimize each spatial function and the KNN queries. Next, for the indexing of spatial data, I am going to implement a quadtree or k-d tree as well as the R-tree, using JTS. Lastly, I am going to modularize the plugin: currently it is combined with Tajo in a way that is difficult to separate, but after the final modularization it will be distributed as a plugin. Conclusion. What is Spatial Tajo? It is a plugin to provide spatial queries for Tajo. What was the motivation for development? I began developing it because I wanted to analyze with SQL in a distributed data warehouse system instead of using MapReduce or similar batch processing. Why did I choose Apache Tajo? Because I can use it without great concern about distributed storage, it supports SQL syntax, it does not use MapReduce, it guarantees fault tolerance, and above all I can implement and attach what I need as a plugin myself if necessary. The overall implementation plan is to provide spatial functions and spatial data types for spatial queries, allow running KNN queries, and support indexing of spatial data. The current status is as follows: the parts implemented so far include most spatial functions, running KNN queries, and index support through a two-level R-tree; the parts not yet implemented include spatial data types, the remaining spatial functions, the optimization of spatial functions and queries, other spatial indexing methods, and the modularization of the plugin. For today's presentation I mainly referred to the documentation and source code of Apache Tajo, SpatialHadoop and PostGIS, and there are books and websites with further information as well. For Q&A, please email your questions to the address shown here and I will answer them in detail. Thank you for listening to my presentation. Any questions? From the audience: have you looked at libspatialindex for doing your indexing, rather than trying to do the indexing on your own? It is a C library for R-tree and k-d tree indexing. Answer: no. Any other comments or questions? How long will it take to implement all the functions? I cannot plan that; I don't know, sorry. Any other questions or comments? Okay, thank you.
|
Apache Tajo is a robust big data relational and distributed data warehouse system for Apache Hadoop. Tajo is a top-level project of the Apache Software Foundation and aims to be a next-generation data warehouse system. Tajo supports standard SQL queries, but unfortunately it cannot perform queries on spatial objects. So I will present my experience producing Spatial Tajo, an extension plugin for Tajo. The plan for Spatial Tajo is a spatial plugin supporting basic spatial queries, spatial joins and spatial indexes; in its current state it can perform simple spatial relational functions and a few spatial joins. Spatial indexes will be realized later.
|
10.5446/32042 (DOI)
|
Good afternoon everyone, and thank you for coming to this presentation. I'm Han-Jin Lee and I'm in charge of web platform development at Mango System. What I would like to talk about today is Pinogio, which was developed on top of free and open source software for geospatial. Before I begin, let me introduce my company, Mango System. What we do is system development, data construction, open source software education and solution consulting related to GIS; we also translate open source projects into Korean, for example GeoNetwork on Transifex. We are here at the conference, so please come visit our booth. Now let's begin the presentation about Pinogio. Pinogio is a web map visualization platform that uses open or private data to make your own map and share it with your network of people. In fact, there are already some cloud-based mapping services, like Mapbox, GeoNode, CartoDB and ArcGIS Online, so we needed to provide a differentiated service; we have done so by developing our product to maximize our strengths, and we focus on map data visualization. So how can you get open data? The law on the use and sharing of open data has been in effect since November 2014 in Korea, and opening data has been actively conducted. Let's take some examples from Korea. First, there is the NSDI Center, the national spatial information clearinghouse, where you can download the most recent administrative boundaries of Korea. Second, the open data portal: this website provides open data made by the government, organizations and local authorities for everyone to use, and according to the law it offers many kinds of data including files, maps and open APIs. Next there is the Seoul Open Data Plaza; Seoul is the most progressive in updating data compared with many other local governments in Korea. The Seoul Open Data Plaza provides and shares public data about the city of Seoul, and you can download various types of data, including sheet formats like Excel and CSV and GIS formats like Shapefile, DXF and GML. Many countries provide open data portal services, so we can get open data anytime. Now back to Pinogio; I'm going to explain our special features. The Pinogio service architecture is based on a familiar open source GIS platform structure, including two web mapping libraries, which we have now consolidated on OpenLayers 3. As I mentioned, we have researched and developed to maximize our strengths, and we have a powerful GIS engine named GXT. GXT is a geoprocessing library developed to provide a variety of spatial data analysis comparable to commercial engines; it is a Java-based library built on GeoTools and JTS and has more than 213 geoprocessing algorithms, which were verified by comparing results with ArcToolbox (using a trial version). GXT geoprocessing can be used in different program types; the version for research and education is free, and it can be used in other settings as well. The relationship between GXT and Pinogio will be explained later. Pinogio is split into three service categories: data, layers and maps. The interface basically follows these three categories, and we designed a REST API so the user interface can be customized according to customer requirements. Other special features of Pinogio are the base map and the zone layers. The zone layer is a tile map service using the Mango base map, which is built on open data and OpenStreetMap. When you do spatial analysis, you need boundary data such as administrative areas.
So we provide zone layers as basic boundary data. There is also the national point number grid in Korea; using this grid map you can get more precise categories. Now let me describe data, the first service category of Pinogio. With this function you can upload your GIS-format data and text data with location information; using the GeoTools library it can handle various types of data, and you can publish to the map server. In the current version you can upload vector formats such as Shapefile, CSV and Excel. Second, layers. Now it's time to transform your uploaded data into a layer. As I mentioned, using GXT, a powerful geoprocessing engine, you can create various clear infographic maps. We implemented these mostly for point data: spatial analysis techniques such as simple symbolization, heat maps, fishnets, hexagons and spatial statistics layers, and we will add more spatial analysis methods. You can get a thematic map by overlaying point data onto a zone layer with a spatial query using functions like count, sum, mean and more. You can also make fishnet and hexagon polygon layers simply by entering the cell width and height, or the radius, over the boundary data, and you can check the location and size of the cells in a preview map window. After selecting a column and a statistic function, you end up creating a new layer that holds the statistic values. Then you style your layer so it is easy to read: you simply choose the column, the classification method, and the fill and stroke values you want, and you get a choropleth map that visualizes the statistic values. We use ColorBrewer for the fill color option; when you style your map, ColorBrewer is a useful tool that lets anyone like me make a beautiful map without artistic skill. We made a JavaScript styling library, pino-style.js, based on Bootstrap, jQuery, a color picker and ColorBrewer, to use in other projects as well; pino-style.js has been used in several projects, for example a UN-GGIM map and the VWorld portal use it as a styling tool. Now I want to show you some example layers we made. This map shows the number of major dams per US state; we used a US state boundary polygon layer and a major dam point layer as the base data, with data from the US Census and data.gov. This is an old-building condition statistics grid map using the national point number grid. The next sample is a Jeju Island population visualization as a hexagon layer. Jeju Island is the most popular travel destination and a beautiful island in Korea, only one hour from Seoul by airplane; by the way, Jeju Island was the first place we considered for holding the FOSS4G conference. Anyway, as you can see, there is more population near the provincial government building than near Jeju International Airport, and you can also see how the most populated spots change according to the weather. We used floating population data estimated from big data of mobile telecom traffic at base stations. Finally, maps. Pinogio provides various map templates to create your own map with the layers you made earlier, and you can share it as a URL. You can overlay layers like a stack of cards; there is a setting area and a preview area in the map template, and we will provide swipe, chart and table templates soon. When you select a map template and click the publish-your-map button, you move to the page you are looking at now.
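As a rough sketch of the fishnet-with-statistics idea described above (Pinogio does this server side with its Java GXT engine; the cell size, the count statistic and the quantile classing below are illustrative), here is a minimal Python version with Shapely:

```python
# Build a rectangular "fishnet" over a point layer, count points per cell, and
# derive simple quantile class breaks for a choropleth. Points on shared cell
# edges count for both neighbours in this naive version.
from shapely.geometry import box, Point

def fishnet_counts(points, cell_w, cell_h):
    """points: list of shapely Points. Returns [(cell polygon, point count), ...]."""
    minx, miny = min(p.x for p in points), min(p.y for p in points)
    maxx, maxy = max(p.x for p in points), max(p.y for p in points)
    cells, y = [], miny
    while y < maxy:
        x = minx
        while x < maxx:
            cell = box(x, y, x + cell_w, y + cell_h)
            n = sum(1 for p in points if cell.intersects(p))
            if n:
                cells.append((cell, n))
            x += cell_w
        y += cell_h
    return cells

def quantile_breaks(values, classes=5):
    """Class breaks similar in spirit to the wizard's quantile classification."""
    ordered = sorted(values)
    return [ordered[max(0, int(len(ordered) * k / classes) - 1)]
            for k in range(1, classes + 1)]

cells = fishnet_counts([Point(126.98, 37.57), Point(126.99, 37.58), Point(126.97, 37.56)],
                       cell_w=0.01, cell_h=0.01)
print(quantile_breaks([n for _, n in cells], classes=3))
```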
Back in the publish page, you add or change parameters in the setting window and they are directly applied to the preview map. In the setting window you work in the following steps: setting metadata, adding layers and selecting map options. You can add multiple layers, and simple drag and drop changes the order of layers. As you can see, there is a map title, a description of the map and a zoom control on your map. When you get your resulting map, you can share the visualization with anyone, privately or publicly. You can also customize the map template, and we will keep adding more templates. I was supposed to hold a clinic session today, but I cannot, sorry. If you have any question about Pinogio or this presentation, please send me an email or hit me up on Twitter. I hope you enjoyed the presentation, and enjoy Seoul. Thank you. The session chair: thank you very much; you can send questions by email and Twitter, but we have about seven minutes to ask right now if you don't mind. That was a great talk, and beautiful maps, really nice. I am glad you picked Seoul instead of Jeju, because we have beautiful weather, very convenient for us. Any questions from the floor? A question: is it open source? You mentioned Pinogio and also the Pino styler; can people see the source and use it? Answer: Pinogio is a company product, but pino-style.js is open source. Another question: I wanted to ask about raster data. You have done some very nice vector mapping; say I have a raster with climate data or some other land use data. Is it possible to query down into the raster with your system, or how does that work? Answer: we plan to support raster data later, but we cannot support raster data right now. So there are plans to expand in that direction in the future? Yes, we plan to. Okay, good. Anything else? Okay, fine then.
|
Pinogio lets you create infographic maps that can be shared on the web. With just a few clicks through Pinogio's complex analysis functions, it is possible to make a high-quality web map. Pinogio is built on an open source architecture including GeoTools, GeoServer and OpenLayers 3. You no longer need to store geospatial data in local storage: create beautiful maps from a public cloud environment.
|
10.5446/32044 (DOI)
|
Hi, nice to meet you. I'm Hiro from Tokyo. Today I'd like to talk about this map application; maybe you do not know it yet, so I came here to present it. Before introducing it, let me introduce my own company. Last year I got a PhD and then I started my own company. The company provides micro-demographic data, something like simulated personal data, and our goal is to create a real-time city. I'm interested in this kind of micro data, and today's topic is people flow: GPS trajectory data, which is time-series data. Existing FOSS4G tools don't really support this kind of time-series trajectory data, so we are developing this application. This is our team: the data developer for the project data and a student, and together we are creating the Mobmap project. So what is Mobmap? As I mentioned, it loads GPS trajectory data, visualizes it, and makes it easy to analyze people-flow trajectories. It works on Google Chrome, so you can use it as a Chrome application on Windows, Mac and Linux. Let me show a demo. This clock shows it is about 2 a.m.; people are sleeping, so nothing moves. Then in the morning people get up and start to move. Each color shows a transportation mode: for example, blue shows train and yellow shows bicycle. To make this kind of movie you would normally need to prepare a lot of data and analysis tools, but with Mobmap you can make it easily. This is Mobmap. (It's hard to see? She said she wanted to see your face; oh, I didn't notice, thanks.) So how do you use it? You can just search for Mobmap in the Chrome Web Store, find it and click; it installs through the Chrome launcher, and then you just click a button to open Mobmap. Mobmap mainly supports trajectory data in CSV format, and if you want to aggregate into a mesh, you can also import mesh or other polygon data. Afterwards you can export the analyzed CSV or render the result as a movie. So why did we start to create Mobmap? It came from our own experience. Maybe you have had an experience like this when you create a time-series visualization: you need to gather data for each separate year and then merge them. In my case, I gathered data, exported many PNG images using GRASS, and then merged them into a GIF animation. It is so hard, and I didn't like it. This is an example: the future population of Japan; as the years go by, the population starts to decrease. This GIF animation took me a lot of time to create, and it was a light case. In the case of moving objects it needs much more time: look, this moving-object animation was made from PNG images rendered per minute and merged, and merged, and merged. So do you think this workflow could ever be popular? No, it's too hard. So we decided we need a time-series analysis application, because time-series data tends to increase gradually. Why?
In the near future, time-series data will be common, standard data, and with this kind of application we can keep up; that is why we started the Mobmap project. As I said, Mobmap supports trajectory data. Let me show the basic application; of course you can download Mobmap now, so if you are interested, please download it and play with it. This is the Mobmap interface, which is like a movie player: if you import CSV data, you load it and use the play button to play it like a movie. Let me show a sample demo. When you run Mobmap, you import the data; now I choose a CSV file. Mobmap needs four attributes: person ID, timestamp, longitude and latitude. If you need more data, you can add attributes freely. It is loading now, but it takes a while because this sample file is about 700 megabytes, so while it loads I'll show the next part. What is our theme? It is very simple: where and how do people come from, and how many people pass along a road? As you know, Japan had a big earthquake in 2011, and at that time we couldn't count how many people were affected. If we can get people's volumes and trajectories, we can do much better evacuation planning. This is one function, the gate function. It is like a spatial selection: if people pass through a line you draw, we can count them, select their movements and visualize them. In more detail: right now I'm just showing people's movement. Currently it's 6 a.m. and people are getting up (sorry, it doesn't play smoothly). Each color shows a commuter line, mainly commuter lines in Tokyo; this is the rush hour, and then it finishes. Now at lunchtime you can see the movement. With this, when people go to Tokyo Disneyland or Tokyo Station, you can see which direction or place they come from; this color means they come from the western area, for example. The colors also show transportation mode: the blue lines are trains and the red color is other transportation modes, so it is easy to read people's transportation mode too. Mobmap also has path visualization. I've loaded the data, and when I want to show the network, I just click this and we get the path lines. Here is Haneda, the international airport: if we'd like to know how many people come through and where they come from, we just draw a line like this, and we can see people's home areas; people are mainly from this area (it takes a bit more time, it's not complete yet). It also has other spatial queries: you can select moving objects using a polygon or another shape, and there is an attribute query, so typing an expression also selects by attributes. Then, if you'd like to export these lines, you just click the export button and you can create the network data as KML. You can also export an animation.
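As an aside, the gate function described above can be approximated in a few lines of Python with Shapely; the CSV layout (id, timestamp, lon, lat) mirrors the four attributes Mobmap expects, but the file name, header names, gate coordinates and the crossing test are assumptions for illustration, not Mobmap's internal code.

```python
# Count how many trajectories cross a "gate" line drawn across a road.
import csv
from collections import defaultdict
from shapely.geometry import LineString

def load_trajectories(csv_path):
    """Group (lon, lat) points per person id, assuming rows are time-ordered."""
    trips = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):           # expects id,timestamp,lon,lat headers
            trips[row["id"]].append((float(row["lon"]), float(row["lat"])))
    return trips

def people_through_gate(trips, gate_coords):
    gate = LineString(gate_coords)              # e.g. a short segment across a street
    return [pid for pid, pts in trips.items()
            if len(pts) >= 2 and LineString(pts).intersects(gate)]

# Hypothetical gate near Haneda airport (two lon/lat pairs).
# ids = people_through_gate(load_trajectories("people_flow.csv"),
#                           [(139.779, 35.548), (139.785, 35.552)])
```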
Coming back to the animation export: the animation button is here; click it, the video menu appears, choose a window, and the video window appears. If you'd like to name this movie, say "Tokyo Trip", you can start creating it right away; it is rendering now, very simple. Next, let me show what our research team is working on: visualizing abnormal events. Generally speaking, if you want to detect an abnormal event, I mean anomaly detection, you might try some data mining tools, but the results are hard to visualize. In Mobmap there are two ways. First, I load mesh data, where the colour shows the night-time population. Then I calculate the daytime population by counting the moving objects: I choose the moving objects, and as the clock changes, the mesh values change according to the hour, so it is easy to see. If you click a cell you can see the values, which are compared with the night-time population: if the value goes high, that place is very different from how it is at night, so if an abnormal event occurs you can easily spot it, because that place shows a high value. Clicking further shows the details; sorry, it's very small, but the night-time population is written here and the daytime population here. That's the first way. The second function I'd like to show is the 3D view. For example, I select a line here, from the central area of Tokyo across this other busy area, and create a 3D map. Now we get a 3D surface showing population volume per minute: this peak is the rush hour, and here is the daytime. What can we do with this? There are no abnormal events in this data, but if you see an unexpected volume of people somewhere, you can recognize an abnormal event there. So this is the main part of Mobmap. It can also visualize mesh data such as the future population I showed before; the GIF animation earlier was painful to make, but here you can easily get a beautiful animated map. Mobmap can also import your own GPS data from an Android application: this is my GPS log, and once it is loaded you can draw the trajectory on the map, export it and create a movie. So this is Mobmap; please use it. The reason I came here is to tell you about Mobmap, and I'm also looking for collaborators. That's it, thank you so much. The session chair: thank you very much, we've got about three minutes for questions for Hiro; that was a very interesting talk. Any questions? Yes, please, take the microphone. Question: I see that you can detect the abnormal, I mean the outlier. In my opinion an abnormal value or outlier should come with a confidence, because sometimes it's noise and sometimes it's not. Do you have some attribute to describe confidence, maybe a probability that it is abnormal?
No, it doesn't support probability; it just shows the result on the map. So you mean we make the judgment by ourselves? Yes, by eye. Okay, good question. Next question: first of all, this is very interesting, thank you so much; what is the abnormal event in that case? Answer: for example, if a typhoon, an earthquake or another disaster occurs, people gather, and we would like to know that easily. Generally speaking, anomaly detection needs more data and more analysis tools, but here we can at least notice it with our own eyes. Next question: you showed very large volumes of people movement; how can you track that, how did you get those data? Answer: in this case it is questionnaire-based data. Some countries survey people's transportation every five or ten years, and we estimate the paths by interpolation using pgRouting and create that kind of moving-object data. Follow-up: I'm not sure about this, but without people donating their own GPS information it may be difficult to get those data from a lot of people, right? So how did you get the moving data? Answer: there are several ways. One way is to collaborate with companies, which hold a lot of data; another is the government, so we collaborate with the big data holders; we did not collect it ourselves. That was my question too, about the source of the data, which is mostly cell phone data. Okay, we are right at the break time, and there are parties and things tonight, so I think we can stop here. A big thanks to all three speakers; if there are any other questions, everybody can hang around for a bit, but otherwise we'll officially end now. Thank you very much. Sorry, let me add one more thing: two days ago we had a hands-on session and we uploaded the document here, so if you are interested, please have a look. Thank you.
|
Mobmap is a platform for visualizing spatio-temporal data easily and simply. This next-generation GIS tool is released as a Google Chrome application for anyone. Recently, location data has become increasingly available to the public. This trend starts with the spread of IoT devices, including smartphones, and open data such as aerial photos and satellite imagery. However, time-series data analysis and visualization on maps tend to be unsupported by general GIS software and libraries. Mobmap enables users to deal with time-series location data. This presentation shows a summary and demos of Mobmap using several data examples, such as simulated people-flow data from geo-tagged tweets, estimated building-age transition data from multi-temporal aerial photos, and estimated future population.
|
10.5446/32045 (DOI)
|
Okay, today I am going to talk about geOrchestra, which is a free, modular and secure Spatial Data Infrastructure. My name is François Van Der Biest, and Florent also contributed to this presentation; he wrote part of it but could not attend today. We both work at Camptocamp, a Swiss and French company founded in the early 2000s. (It's not working; don't worry.) You may ask what an SDI, a Spatial Data Infrastructure, is for. Spatial means we are talking about geodata. Infrastructure means that with the geodata we can discover it, share it, view it, compose it, download it, and extract portions of it. An infrastructure must let you discover data through metadata, and also describe data with metadata. All of this should be provided by a good spatial data infrastructure. What are the benefits of using an SDI? For users, as we said, it gives access to geodata, and it should work with any OGC-compliant software; since we are talking about data distribution, OGC standards matter. For administrators, and particularly in Europe where the INSPIRE directive is binding, this constraint can be turned into an opportunity by using a good SDI, because with an SDI there is no more data duplication: the data is distributed through web services. And of course there is less maintenance work, because all the software is centralized on a single server. So these are the benefits of using an SDI. Now let's talk about geOrchestra itself. What is geOrchestra? First of all, it is Java software, based on Spring, and it is modular. The core of geOrchestra is a security module, and there are other modules you can use with geOrchestra; the most famous are GeoServer and GeoNetwork, and we have also developed other modules for geOrchestra. Authentication is handled by an external component, CAS. It can be your already existing CAS, or the one provided by geOrchestra. You can see that the advantage of geOrchestra is its modularity, meaning you use only the modules you want: for instance you may want GeoServer, GeoNetwork and a viewer, but no extractor, or something like that. What else is geOrchestra? It is free software, free as in speech, licensed under the GPL. It is modular, as I already said, with more than 10 modules available today; I will introduce them later. Of course it is interoperable, because the services natively speak OGC, and it uses REST APIs internally and externally. It is also very secure, for many reasons, but the important one is the security proxy we ship. And we practice continuous delivery, which means our customers, or anyone using the software, can upgrade their installation every week if they want, or even every day, and benefit from the bug fixes. There is a demo instance you can go and look at later if you want. Now a bit of history: where do we come from?
We started in 2008, when Camptocamp developed, for Brittany, a French region, its own SDI. Very quickly we thought we should create something more generic, something that could be used by more people than just Brittany. So in 2010 came the first production deployment, and other customers, as we had anticipated, asked us for the same kind of SDI software, so we could reuse and improve the software we had built for Brittany. geOrchestra really took off with our second customer, the Aquitaine region, and continuous improvement. In 2012, Bolivia adopted geOrchestra for its national SDI, and more regions and cities have been using it since. At the same time, research labs and industry have also adopted the software. Now let's talk about the community. This photo was taken this year, during our annual event, where the community gathers and discusses the evolutions and directions we should take. The community is quite diverse, as you can see: regions, cities, research and companies are all represented by at least one person. Geographically, geOrchestra is mainly present in France, but also around the world: Nicaragua, Senegal, India, and a strong presence in Bolivia. You can see seven instances in Bolivia, one of them being FAO and another GeoBolivia, which is a state instance. The community chats on IRC, and there are two mailing lists, a general one and another more related to development. Issues and discussions are public and hosted on GitHub, and we have an annual meeting; this year was the third edition, in Strasbourg, France. Okay, let's dive deeper into the subject by talking about the software architecture. In this diagram you recognize the core, the security components, CAS for authentication, and the other modules: here is the proxy, here is CAS, these are the other modules, and the orange part is an alternative setup made to handle more load on GeoServer. So how does it work? As I said, we have a single sign-on component, which is CAS. That means that when you are authenticated on the platform, you are authenticated on all components: on GeoNetwork, on GeoServer, on the viewers, and you get access to private layers, for example. CAS authenticates the user against an LDAP user database. The security proxy handles the sessions: when you log in, the security proxy creates a session and keeps it for a certain time, and it acts as the web entry point. All requests coming from the web go to the security proxy, and the security proxy routes each request to the right module. In doing so, it adds security headers. The security headers, in the HTTP request, tell the module which user is connected and which roles they have.
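As a very rough sketch of that idea (geOrchestra's security proxy is a Spring application, so the framework, the header names used here and the session lookup are simplified assumptions, not the real implementation), a reverse proxy that forwards requests and adds identity headers could look like this in Python:

```python
# Minimal reverse-proxy sketch with Flask + requests: forward each request to the
# target module and attach headers describing the authenticated user and roles.
from flask import Flask, request, Response
import requests

app = Flask(__name__)
BACKENDS = {"geoserver": "http://localhost:8081/geoserver",
            "geonetwork": "http://localhost:8082/geonetwork"}   # assumed layout

def lookup_session(cookie):
    # Placeholder: the real proxy resolves the session against CAS/LDAP.
    return {"user": "jdoe", "roles": "ROLE_USER,ROLE_EDITOR"} if cookie else None

@app.route("/<module>/<path:rest>", methods=["GET", "POST"])
def proxy(module, rest):
    session = lookup_session(request.cookies.get("JSESSIONID"))
    headers = {k: v for k, v in request.headers if k.lower() != "host"}
    if session:                       # tell the module who is connected and their roles
        headers["sec-username"] = session["user"]
        headers["sec-roles"] = session["roles"]
    upstream = requests.request(request.method, f"{BACKENDS[module]}/{rest}",
                                params=request.args, data=request.get_data(),
                                headers=headers)
    return Response(upstream.content, upstream.status_code)
```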
So the modules read these security headers and grant or deny access to resources accordingly. About the modules: as I said, we ship GeoNetwork, in two versions, 2.x and 3, the latter being the most recent. We ship GeoServer; we can use the latest version of GeoServer, and each time GeoServer releases a new version we can use it with geOrchestra. Optionally, you can improve the integration between GeoServer and the security layer by using GeoFence, a free software component from GeoSolutions, which lets you say: people from this group only see this part of the layer, while people from that other group see the whole layer, with roles attributed accordingly, for example. It is very powerful in terms of restrictions. CAS is also software we took as-is, since it already existed. On top of that, we developed an advanced geodata viewer and editor, which I will show you later. There is an extractor, a module that lets you download geodata extracts: you ask for the data in this bounding box, in this output projection, the job runs, and at the end you receive an email saying you can download your geodata at a given URL for ten days, after which it is removed. We also have a user and group management console, and analytics, a module that allows an administrator to monitor the usage of the OGC services on the platform. We have more modules, but these are the most important. There is a viewer, the UI that geOrchestra instances expose. It is a bit old, but we have plans to replace it with a 3D viewer and an AngularJS one. It was built with GeoExt, ExtJS and OpenLayers 2, but it works very well, and it is also a good front-end for all the modules. In the viewer you can switch a layer into editing mode, change the attributes and synchronize them to the database through WFS-T. The same viewer lets you interface with the extractor web services, so that you can get just the portion of the layers you want: a shapefile if it is vector, a GeoTIFF if it is raster, at a given resolution, and you get your email when it is done. Now let's talk about the production architecture. What does being in production mean? You install the service, so we will talk about hardware, operating system, middleware and provisioning. Scaling is an important question, because when a large number of users arrive, you have to scale out. We will also talk about monitoring the system. At Camptocamp we deploy geOrchestra on medium-sized machines, from 2 to 32 CPUs and 8 to 128 GB of RAM. We also use OpenStack instances, mainly for our demo or development instances. For the OS, geOrchestra has been tested and runs on Debian 6 to 8; it works very well and is the reference platform. Customers have also reported that it works on Red Hat and/or CentOS; geOrchestra has few requirements regarding the Linux distribution. Then comes what we call the middleware.
These are the four components we rely on to run geOrchestra: Apache, which can of course be replaced by Nginx, as the web server; Tomcat, which hosts the applications; PostgreSQL with PostGIS as the database; and OpenLDAP as the user and group database. All this middleware can of course be installed for geOrchestra by hand, but that is not very practical; or it can be installed automatically with two tools, Puppet and Ansible. For Puppet we have a recipe, which we keep private at Camptocamp for the moment, so it is not open source yet, but the Ansible recipe is open source. Here are two different deployment scenarios. With Puppet, you say: on this machine I want to install a standard geOrchestra. With Ansible, you write files containing the deployment variables, and with one command, if you have SSH access to the server, you can automatically install a geOrchestra instance with Ansible or Puppet. The difference is that Puppet, for the moment, only goes as far as the middleware, while Ansible goes a bit further: it installs the middleware and also the applications. A word about scaling: we have a modular architecture, which makes scaling easier, because you can distribute the applications across several machines. But when there are too many users on your platform, you have to scale GeoServer, which is responsible for the OGC services; with Puppet, all we do is say we want to install GeoServer and create two instances of the service. To go further, we have to scale the security proxy, and we have two options: session affinity between Tomcat instances, or moving the sessions out of the proxy, so that we can run several security proxies. When you run a production system, you cannot forget monitoring, which is what keeps your system up and healthy. There are several pieces of software that we have used or considered; I have listed them on this slide, with the ones we use at the bottom and the ones we are considering for the future above. We use a number of solutions that allow us to check everything from the base system up to the OGC services, which is very important; I will not go through them all because I am short on time. What is coming up in geOrchestra? As I already said, we will have a new viewer, based on OpenLayers 3 and AngularJS, maybe at the end of January, when it is ready. Many customers are currently buying custom modules from us, tailored to specific needs, for instance modules for handling data related to city planning, cadastre and urbanism. We are also working on Red Hat and Debian packages, because this is very important for software distribution: it streamlines the installation process, so that, starting from a bare OS, we can have the OGC services running in five minutes with Puppet or Ansible. You may also have heard about virtualization, which we are currently using for development and demo purposes. The important question now is: is it ready for production? We are trying to answer that question, and we do not have a definitive answer for now.
We also have to make sure that every component scales well; for the moment, as I said, we only scale GeoServer, but we would like every component to autoscale, meaning that, depending on the load, your components scale without human intervention and everything gets replicated across different nodes in a cloud architecture, for example. That would be great, and it is something we are going to invest in. In conclusion, what we have learned at Camptocamp with SDIs is that infrastructure is key, because when you set up OGC services and open them to users, you cannot afford downtime. So you always need load balancing and monitoring, and that is provided by your IT. For all these reasons, and also for backups, you have to talk to your IT department, or to us if you have concerns. Okay, thank you very much. If you want to know more about geOrchestra, you can go to this URL, and I am also available to answer your questions. Question: do you usually run this on cloud servers, like Amazon AWS, or do you usually have in-house servers? Answer: this is not in-house; we usually host it at different providers, we do not provide the hardware, but it is not yet cloud computing; elastic computing with automatic scaling is something we have in mind for the future. Second question: how do you store the geospatial data? Is it converted to PostGIS, or saved as files on the server? Answer: as you want, because since there is a GeoServer you benefit from all its connectors; you can connect to PostGIS, MySQL, shapefiles, remote WMS or WFS. Follow-up: for instance, if a user has a shapefile, does the platform offer to convert it to PostGIS and store it there, or does it stay in the format the user had? Answer: at the moment, the user has to send the file to an administrator, and the administrator references it in the GeoServer configuration. There are people in the geOrchestra community who have developed a drag-and-drop upload to the server, using ownCloud for instance, and once the file is uploaded it gets automatically published in some workspace in GeoServer. The problem with this approach is that it is not structured: usually, with SDIs, you want to structure your data, put this layer in this workspace, and users might make mistakes. That is why, at the moment, it always goes through human intervention before the data becomes visible to everyone. Another question: I have a question on the continuous delivery you mentioned; is there an update procedure for existing geOrchestra installations? Answer: yes, always. We have stable branches in geOrchestra, and at any given time we support three versions; for each version we apply bug fixes in the release branches. This means you can always redeploy your geOrchestra instance using the same branch, the same version that you deployed earlier, so you do not have to change anything, you just redeploy your web applications. What happens if there are changes in the database? Of course, when you change versions, you have to apply those manually.
There is a changelog where we document everything that has changed and everything that has to be done before migrating your instance from one version to the next.
|
geOrchestra is a free, modular and secure Spatial Data Infrastructure software born in 2009 to meet the requirements of the INSPIRE directive in Europe. Initially covering Brittany, then France, geOrchestra now spreads worldwide with SDIs in Bolivia, Nicaragua, Switzerland and India. The presentation will go through the following subjects: * quick and precise description of the key features * where we come from and where we are going to * technical description of the software architecture (including SSO and security proxy) * from an infrastructure point of view, how we scale to handle the load (using Puppet and OpenStack) This talk is for anyone with an interest in SDIs, and "real world" SDI deployments.
|
10.5446/32047 (DOI)
|
Good afternoon. Let me introduce myself: I'm Chung Hoon Lee from GIT in Korea, and I want to tell you about the planning and the factors for constructing an NSDI for developing countries from the point of view of my company. (The computer doesn't like this slide.) GIT stands for Geospatial Information Technology. We were established in 1996 and have been making great efforts for the Korean GIS industry from the beginning. These are the names of our products; a special one to mention is our spatial data editing and validation tool, which made great contributions to Korean spatial data construction. We decided to open this product to the public as open source. The reason is that, for developing countries, construction of spatial data is the first step of an NSDI, and we want the product to be globalized; we also want to secure its future, because the Korean market is very small, we can no longer make a profit there, and it is hard to maintain the product in Korea alone, so we opened it to make it a global product. An NSDI is usually a big body, covering the base map, monitoring, professionals, education centers and many other things, but at the first stage of an NSDI the construction of the base map is the most important part. Many developing countries that need an NSDI have small budgets, so a practical approach is to collect contributions from ordinary people. The goal of the project is a web-based system, and for the quality of the data we need a validation function performed by official government staff. We participate in the open source project team led by the Korea Research Institute for Human Settlements and funded by the Ministry of Land, Infrastructure and Transport; there are nine members in the team, and we are the editing team. The main product is OpenGDS. It consists of two parts: a web-based 2D editor and a Spring-based GIS engine. For the validation of data there is OpenGDS QA, which also comes in two versions; the validation rules follow Korean national standards. OpenGDS as a whole is a distributed system. The front end is web-based HTML using JavaScript, jQuery and Ajax, with 2D and 3D views; the GIS engine is based on the Spring framework and supports many standards, especially OGC. This slide shows the structure of the GIS engine: it supports topology-based validation functions and has many geometry operators. The QA component is also Spring-framework based: it evaluates the data, provides many purpose-specific spatial operations, and provides flexibility through definition files, in XML form, that describe the validation rules. Being a distributed system also improves the performance of the GIS engine. Currently we are researching crowd-sourced data creation, and this volunteered data collection is in progress now. Our project schedule is shown on this slide. Last year we created the 2D editing tool of OpenGDS; it is component based, and we then changed to the Spring-based framework. A single layer can control three data tables in the database. This is the database connection view; the interface is made with jQuery UI. We developed the GIS extension of OpenGDS, and one developing country asked us to customize the solution for base map construction: they want to accept people's requests about their own buildings and other features, and they want to check the submitted data and validate, modify or delete it.
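As a toy illustration of the definition-file-driven validation described above (OpenGDS QA itself is a Java/Spring module with XML rule definitions; the rule names, thresholds and WKT input below are made up for the example), a rule-driven geometry check in Python could look like this:

```python
# Validate a submitted feature against a small set of rules loaded from a
# configuration; here the "definition file" is just a dict for brevity.
from shapely import wkt

RULES = {"must_be_valid": True, "min_area": 1.0,
         "allowed_types": {"Polygon", "MultiPolygon"}}

def validate_feature(geom_wkt, rules=RULES):
    geom = wkt.loads(geom_wkt)
    errors = []
    if rules.get("must_be_valid") and not geom.is_valid:
        errors.append("geometry is not valid (e.g. self-intersection)")
    if rules.get("allowed_types") and geom.geom_type not in rules["allowed_types"]:
        errors.append(f"unexpected geometry type {geom.geom_type}")
    elif geom.area < rules.get("min_area", 0.0):
        errors.append(f"area {geom.area:.2f} is below the minimum {rules['min_area']}")
    return errors

# A self-intersecting "bow-tie" polygon fails both the validity and the area rule.
print(validate_feature("POLYGON((0 0, 2 2, 2 0, 0 2, 0 0))"))
```

In a workflow like the one described, features failing such checks would be flagged for the manager to modify or reject rather than published directly.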
Coming back to the customization request: they wanted these functions, and this is the web view for them. The module has been tested on Chrome and Firefox and is now being tested on the iPhone. Temporary data submitted by users with low authority can be reviewed before being accepted or deleted: the manager of the system checks the list, decides whether to delete, modify or validate the data, and can also leave a comment for the person who submitted it. This is the editor for ordinary people, with several menus and shape tools. We developed the web GIS engine to minimize the problems caused by large-scale data; this web-based engine uses the Spring framework and provides the spatial operations defined by OGC. There is also a map generation tool: you can build a model, add operations, and use the same application for small-scale maps, and you can keep an individual recipe and write articles about it. If you have any questions about the system, you can visit our booth in the exhibition hall; there are engineers over there, so please ask them. Thank you.
|
Recently, awareness and utilization of open source software have increased, and interest in open source within the spatial information industry has also grown. Especially in developing countries trying to build a National Spatial Data Infrastructure (NSDI), the movement to utilize open source technology is significant because it costs less to maintain and is easy to operate. The purpose of this research is to present a case of creating a web-based platform environment, developed and applied with open source software, for countries attempting to build a real National Spatial Data Infrastructure. To implement this, the user interface and services were developed using OpenLayers, jQuery and Ajax.
|
10.5446/32051 (DOI)
|
Welcome to session 4-2, on new business I think it's called. I'm John Powell, the session chair, and I'm also the first speaker. I work for a small company in the UK, a data reseller essentially. I'm mostly a database programmer, but I'm going to present on two JavaScript libraries, CartoDB, which I'm sure some people use (anyone using CartoDB? a couple, good), and D3.js, which I hope to show you can combine to make some interesting visualizations. I spend a lot of my time dealing with data, which, as anyone who works with data knows, is messy and large, and I spend a lot of time cleaning it, using one of the favourite tools you see there. It's actually horrifying how bad data can be: I had a tree data set recently with trees that were a thousand metres tall in it, which is "true but unbelievable", and no one had tested it. For this demonstration I'm going to show you some crime data from the UK. It's freely available and it looks like this: you have an ID, which is ridiculously long and half the time doesn't exist anyway, and latitude and longitude fields. This is data for London, yet quite a lot of it was located in France and Scotland, so that was cleaning step number two. Then you have outcomes and various other things, and "no suspect identified", which is sadly very common. There are about 20 different crime types, everything from anti-social behaviour to burglary to one I particularly hate as a cyclist: bicycle theft. And as you will see, there is sadly a lot of crime in London. So what I did is load all this data into Postgres and run some queries, the details of which are not very interesting, but essentially some spatial joins, with a bit of weighting across regions, weighted somewhat by the population of the neighbourhoods, because the data is anonymised a little so as not to put black marks on particular neighbourhoods. What came out after running this query is this: for one crime type, just for London, five years, 369,000 rows. Nasty. When you look at that and a colleague says "that's the deliverable", you think: you have to be kidding me, how can anyone make any sense of that? It's just rows of millions of numbers. Each of these values here is a postcode, a particularly awkward UK system involving numbers and letters, which is another huge source of errors; if you ever design such a system, use zip codes, they're much better: five numbers, you can't get them wrong. So these are postcodes. I did a join and brought in the geometry so that I could load it into CartoDB and actually get a look at it. Before I show you CartoDB, I'll say why I use it (anyone here using CartoDB? a couple of people, okay, so some of this might be a bit boring for you). It's cloud-based, and I can't stress how important this is. You upload your data and it's there: you have a console, you can run queries against it just like you would on your desktop, but you don't need your own Postgres installation, and when you make a change, it's live. We're still on Postgres 9.1 at work, which is a nightmare; I've been waiting for an upgrade for years. So it gets around the sysadmin, essentially, and if you work with a lot of other developers, changing one line of code might take three days to go live; this avoids all of that. And it's Postgres under the hood, which is great.
Numerous visualization options, which I'll show in a moment; that's the key side of this. On the server side, it's actually WMS, so the images that come back are WMS. This is really important, because this data, the load on my laptop, is many megabytes. If you try to send that to the client and generate it on the client, you send 50 megabytes to the browser and it's just going to die on you. Whereas CartoDB actually generates the images on the server and sends them back as a WMS, so it makes it very responsive. And finally (there are many others) it's extendable with CartoDB.js, which is a wrapper around Leaflet and allows you to pull back the visualization and do more interesting things with it in the browser. Finally, before we get to CartoDB, this is what my data looks like now: I've got a crime type, some sort of number (I don't know what that actually means, but it's a crime indicator for that area), a postcode and a year. So the year is repeated and the postcode is repeated, and that means when you have the geometry, the geometry has been repeated multiple times, which is useless for the purposes of CartoDB. You need each column to be a separate category and you only want the geometry once; otherwise you're uploading megabytes of data and you'll get annoying messages from CartoDB saying please upgrade your plan. Because there's a free version and then there are paid-for versions, and the more megabytes you store on their servers, the more they'll charge you for it, which is fair enough. So in order to... actually, let's go to CartoDB now. So CartoDB looks like this. This is the data; there's a data dashboard and a maps dashboard. So these are the datasets, and if you look at one of these, you'll see this is essentially just a view of a SQL table. So here, this one is actually bird tracking routes. This is something called Torque, which is one of the CartoDB visualization types. This one's great for showing dynamic moving data, and it has become really popular with the ornithological crowd, because now they can show their birds flying all over the world. So we have a bird name here (there are seven of these), a geometry column, which is the point, and a date-time. On the right here, this is the CartoDB dashboard essentially, so there you can see you can run a SQL query straight in there. And then if I go to the map view, okay, so it zooms in on the maps. So now you have these wizards; these are how you define the views on the data. And in the case of Torque, you have a time column, the date-time, and bird name, which is the category. So you see here you have Eric, Nico and Sanne, these seagulls flying around, and CartoDB just iterates through it for you and gives you a nice real-time demo. Unfortunately, my crime data is not like this. So going back to that, what I did, in order to get it from columns with repeated postcodes and years, was run this crosstab query. Anyone familiar with crosstab queries in Postgres, from tablefunc? So basically what this does is pivot on the year (I've got generate_series down here), and that will basically lay it out so that each year's crime figure is in its own column and you only have the geometry once. So having done that, I'm going to go back to CartoDB, the data view. Okay, so here are my crimes. Sorry? No, not mine. No. No. Well, my bike was stolen three times in London, but I don't think it figures, because I never reported it, as it's pointless. Anyway. So there we go. I only put in three kinds: antisocial behaviour, burglary and so on, plus the years.
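To make the pivot step just described a bit more concrete, here is a rough sketch of the kind of tablefunc crosstab query being described. The table and column names, the crime-type filter and the 2011-2015 range are assumptions for illustration; only the shape of the pivot comes from the talk. It is shown as a JavaScript string so all code samples here stay in one language, and it could be run in any Postgres client that has the tablefunc extension installed (whether the hosted CartoDB database exposes tablefunc is not something the talk confirms).

```javascript
// Sketch only: pivots one row per (postcode, year, value) into one row per
// postcode with a column per year, so the geometry is stored once.
// Table/column names and the year range are assumed for illustration.
const pivotSql = `
  SELECT * FROM crosstab(
    $q$ SELECT postcode, year, crime_value
        FROM crime_by_postcode
        WHERE crime_type = 'Anti-social behaviour'
        ORDER BY 1, 2 $q$,
    $q$ SELECT generate_series(2011, 2015) $q$
  ) AS ct (postcode text,
           y2011 numeric, y2012 numeric, y2013 numeric,
           y2014 numeric, y2015 numeric)`;

console.log(pivotSql); // paste the output into a SQL console
```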
And here you see the geometry, just as a polygon, but obviously there's an actual Postgres geometry behind that. And then this wonderful postcode field here. So if I go to a map view, okay, so there we have it. Back to the wizards. This is obviously a choropleth map, which is mapping an intensity of some event, in this case crime, to a colour. And you have various classification techniques. Jenks is one that does better separation but actually takes longer to calculate. Quantile: if I change that, you'll see it will regenerate on the fly. You can change the number of buckets; seven is more interesting because then you get more. And you'll see also that the legend down here changes. And then finally, let's choose a different one, antisocial behaviour, for example. It comes back. Not very good colours, actually. This is central London, this is Chinatown, so there are a few things going on in there. And as I promised to show, this really is a WMS: if we go in here, you'll see that these are all actually generated; these are tiles that are generated on the server. So that's what means you can display huge amounts of data very quickly like this. So that's all well and good, but what if you want something a bit more dynamic? So what I did was this: you can add extra layers in here, which I did before. You see, add layers, and you can show crime for five different years for the same category. And then you click visualize and it will create a map. And once you've done this (you don't need to be a geographer or anything), you click publish, and that actually gives you three options. It gives you a link which you can just send to someone and they'll get the view you've just seen. Much more interesting, and what I'm going to show, is that you get this CartoDB.js option. This is essentially a JSON file; it doesn't look like much, but it includes the information about the layers that I've just generated. So putting that into a JavaScript page generates this, and now I've got a click handler that will just switch between these predefined views. I used different colours just to make it more obvious. It would have been nice maybe to have a year and a crime category as different drop-downs, and we could have shown all of them, but just for demonstration purposes there are three of them which are pre-canned. The JavaScript for this is incredibly simple. You create a map (you can see this is Leaflet essentially), and then you add in this layer URL, which is the JSON file I've just shown. And then you have a callback once you've added it; there's a done function you can do things with, and that's where I put this layer selector and this add-to-search box: if you want to search on the postcode, that will take you there. That's essentially it; that's all that's necessary to get that view. So at this point I thought, well, that's great, but how do I get an idea of, say, trends over time? So for that I looked into quite a few things and came up with D3.js, which is a phenomenal library. Anyone familiar with it, anyone using it? It's incredible, supports all sorts of formats, but what's great about it is that it's data-driven, so basically it manipulates HTML elements for you: you just throw data at it and it redraws once you've defined the view. It covers spatial as well, and you can do choropleth maps quite easily in D3.js, but again, when you're dealing with megabytes of data, as is the case here, it isn't such a great thing to send to the browser.
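The CartoDB.js wiring described a moment ago looks roughly like the following. This is a sketch from memory of the 3.x-era API rather than the speaker's actual code; the viz.json URL, element IDs and the number of sublayers are placeholders.

```javascript
// Sketch of the pattern described: a Leaflet map plus a published viz.json.
// URL and IDs are placeholders; treat the API details as assumptions.
var map = L.map('map').setView([51.507, -0.128], 11); // central London

cartodb.createLayer(map, 'https://ACCOUNT.cartodb.com/api/v2/viz/VIZ_ID/viz.json')
  .addTo(map)
  .done(function (layer) {
    // One sublayer per pre-canned view (e.g. three crime categories).
    function showOnly(index) {
      for (var i = 0; i < layer.getSubLayerCount(); i++) {
        if (i === index) { layer.getSubLayer(i).show(); } else { layer.getSubLayer(i).hide(); }
      }
    }
    showOnly(0);
    // The click handler on the layer selector simply switches between the views.
    document.getElementById('layer-selector').addEventListener('click', function (e) {
      if (e.target.dataset.layer !== undefined) { showOnly(Number(e.target.dataset.layer)); }
    });
  })
  .error(function (err) { console.log(err); });
```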
And it's got a really beautiful style: it's functional and chainable, every method returns this, so you can just chain them together, which I'll show in a second. So, how we get data back: what I'm going to do is add a click handler, so that when I click on one of these postcodes it will send back data based on a geometry intersection, and this is essentially what the SQL API looks like. You can use either Ajax getJSON or d3.json. So that's my URL and the key, and then this is the query string: it selects a bunch of crime counts, one per year, for a particular crime type, whichever one I've chosen in the view, and that will send back the data. So now if I uncomment that (live demo, always fun) and reload... so now when I click on one of these crimes I should now see, yeah, great. So now we've got a yearly trend, so I can see that area has actually been really bad for antisocial behaviour basically forever. That's something that people do in the UK, which is drink way too much and then go out and shout in the street and fight each other and all this. So this is central London; you can see there's a lot of that going on. Look at vehicle crime, same thing, a bad area for vehicle theft. Burglary, same again, okay. So you get a spatial overview of how that postcode looks relative to the ones around it, and you get a temporal trend of what's happened over time in that area. So one more thing to show (this is not very good for the British police) is crime clear-up rates. So I'm going to add a pie chart, and another lovely query is coming up. So now, let me load this again. Okay, we go to this area. So now when you hover over these it's going to do a callback, pull the data from CartoDB again and draw a pie chart showing the clear-up rate. None, unfortunately, is the commonest category; that means the police didn't even enter anything about those crimes, just nothing happened at all. If you're really lucky, great, red, okay: caught red-handed, so that's charged at some level. Then none, and not prosecuted. So as you can see, crime detection in London is not doing that great. And in Seoul you can leave your laptop in a cafe and go off for two hours and it will still be there, but not in London. Let's show what this looks like. Because this data, you've got three rows coming back, so you need to line it up so that... sorry, we go back to D3. D3 works essentially like this: you define some dimensions, and you have a y and an x axis, where the range is basically the dimensions of your chart and the domain is the values that have been bound from the JSON coming back from the SQL API call. So there you have this anonymous inner function here that just returns the value, in this case the value of the crimes, and the key here is the year, with rangeBand. So that creates an x and y axis, and that redraws. This puts on the labels, and then the actual elements themselves: the way D3 works is you have a div (I've got a chart div), you append an SVG to it, you set the size, append a group, transform it to put it into place, and then, because we're talking bar charts here, we append a rectangle, and the x and the y again are just functions dependent on the key and the value, which is where they're placed, and the width and the height, as you can see the height minus the value, so that's what draws it. And finally there's the mouse-over, which calls the pie chart. The thing I had to do for the pie chart is this awful-looking query here.
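A condensed sketch of the two chart pieces being described here: clicking a postcode queries the CartoDB SQL API and draws a D3 (v3-era) bar chart of the yearly values, and a hover callback draws the outcome pie that the next part of the talk walks through. The account name, table and column names, element IDs and sizes are assumptions, and the string-concatenated query with a client-side URL mirrors the demo style discussed in the Q&A rather than production practice.

```javascript
// Sketch only: names and sizes are assumptions; the query-building style is demo-grade.
function cartoSqlUrl(sql) {
  return 'https://ACCOUNT.cartodb.com/api/v2/sql?q=' + encodeURIComponent(sql);
}

function drawTrend(postcode, crimeType) {
  var sql = "SELECT year, crime_count FROM crime_by_postcode " +
            "WHERE postcode = '" + postcode + "' " +
            "AND crime_type = '" + crimeType + "' ORDER BY year";

  d3.json(cartoSqlUrl(sql), function (error, json) {
    if (error) { return console.log(error); }
    var data = json.rows; // SQL API responses carry a rows array

    var margin = { top: 10, right: 10, bottom: 30, left: 40 },
        width = 400 - margin.left - margin.right,
        height = 200 - margin.top - margin.bottom;

    // x is ordinal on the year (rangeRoundBands/rangeBand), y is linear on the value.
    var x = d3.scale.ordinal()
        .domain(data.map(function (d) { return d.year; }))
        .rangeRoundBands([0, width], 0.1);
    var y = d3.scale.linear()
        .domain([0, d3.max(data, function (d) { return d.crime_count; })])
        .range([height, 0]);

    d3.select('#chart').selectAll('*').remove(); // redraw on every click
    var svg = d3.select('#chart').append('svg')
        .attr('width', width + margin.left + margin.right)
        .attr('height', height + margin.top + margin.bottom)
      .append('g')
        .attr('transform', 'translate(' + margin.left + ',' + margin.top + ')');

    svg.append('g')
        .attr('transform', 'translate(0,' + height + ')')
        .call(d3.svg.axis().scale(x).orient('bottom'));
    svg.append('g').call(d3.svg.axis().scale(y).orient('left'));

    svg.selectAll('rect').data(data).enter().append('rect')
        .attr('x', function (d) { return x(d.year); })
        .attr('y', function (d) { return y(d.crime_count); })
        .attr('width', x.rangeBand())
        .attr('height', function (d) { return height - y(d.crime_count); })
        .on('mouseover', function (d) { drawPie(postcode, d.year); });
  });
}

// Pie of outcome categories for one postcode and year; assumes the server
// bundles the counts into a single JSON object column called "outcomes".
function drawPie(postcode, year) {
  var sql = "SELECT outcomes FROM crime_outcomes " +
            "WHERE postcode = '" + postcode + "' AND year = " + year;

  d3.json(cartoSqlUrl(sql), function (error, json) {
    if (error || !json.rows.length) { return; }
    var data = d3.entries(json.rows[0].outcomes); // [{key, value}, ...]
    var radius = 80;
    var color = d3.scale.category20();
    var arc = d3.svg.arc().innerRadius(0).outerRadius(radius);
    var pie = d3.layout.pie().value(function (d) { return d.value; });

    d3.select('#pie').selectAll('*').remove();
    d3.select('#pie').append('svg')
        .attr('width', radius * 2).attr('height', radius * 2)
      .append('g')
        .attr('transform', 'translate(' + radius + ',' + radius + ')')
      .selectAll('path').data(pie(data)).enter().append('path')
        .attr('d', arc)
        .attr('fill', function (d) { return color(d.data.key); });
  });
}
```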
array_to_json, array_to_json: what that does is take a bunch of rows, which are the crime outcome types and the numbers for the pie chart, and bundle them up into one row so you can iterate through the keys and values. And a pie chart is very similar: you create an SVG element, but this time you bind the data to this pie function, and what the pie function does is basically calculate the arc widths from the data; then it creates a path, the SVG path element, and again we've got a function for the colour which is based on the data. The legend is simply an HTML table. So that's essentially it; I'll actually show you the generated elements of the chart. If you inspect the element, you can see there's a div called chart, it's got the width, there's a group, and then each of the bars has a class and an x and y and width and height. And that's it; the pie is basically the same, plus the legend HTML, and there you go, there's a path, and each one of these is a path element. Right, that's basically my time up, so any questions? I'm sorry, I'd like to point out I'm not a JavaScript programmer, so this is a work in progress about techniques rather than a beautiful... It's probably more of a security question. You are showing... Yeah, I'm sorry. Any more questions you can do here? No, I'm not a security expert, but you are showing the source code of the HTML and I saw a couple of things there. First, you have the SQL query in there, built with string concatenation; that's SQL injection. Yes, yes. And you have a URL for the SQL API including the API key, and I'm looking at that and thinking, okay, well, that's nice to play with if you're keeping it for yourself, but how are you going to put something like this into production? You can secure these tables so only people with a login can access your CartoDB account, I'm hoping. Just the man, thank you. Yeah, so the SQL injection thing is not an issue; the reason it's called the SQL API is because you're allowed to hand SQL to it, and we run it at a permission level, if that makes sense. It's a read-only permission level by default. If you provide your API key then you can get a higher level of permission. So the problem is not the SQL injection; the problem is the API key. You don't want to put your API key into your code where people can see it, because basically you're showing them your password, which then allows them to do write operations on your table, which you don't want. Thank you. This was a demo. That was the one question I was hoping I wouldn't be asked. Well done. I'm sorry, it's supposed to be about the graphics, but thank you, Paul. We've started using CartoDB to visualize some fire department responses, so we've got the location of the incident and we've got the location of the vehicles when they're dispatched. So we've got A and we've got B and we can infer what the route is. We want to visualize a nice smooth travel path like the bird data, and so we go through these iterations of interpolating the points along the line and trying to do that in a smart fashion so that it makes sense. Have you done anything in that regard? No, but that would be more a case of using something like pgRouting, I'd have thought, or matching it up to an OSM network or something along those lines. That's not something I think you'd do within CartoDB. Yeah, I mean it would boil down to more of a PostGIS thing essentially. I would say that's more of a PostGIS thing. Thank you.
Any comments from the CartoDB expert at the back? No. So no, I don't think that's built into the functionality at the moment. I still think it's essentially a visualization tool. Obviously you could do anything you would do in PostGIS on the back end, but... any more questions? You said it's in the cloud. Which cloud, and can I use my own cloud if I'm, for instance, using Amazon RDS with some PostGIS back there? Would that be an option to plug in? Is there some way of syncing? Yeah, we basically have our own contracts with AWS and we have our own clusters and parts of AWS which are dedicated to us, all this kind of virtual private cloud and all this kind of stuff. So we are not going to put our data into a cloud which is not secured by our experts, and so on and so on; that's the corporate policy. Would that be something that we could arrange? He works for CartoDB, by the way. I'm from German Railways. If you're going to use CartoDB as sold by CartoDB in the cloud, you could arrange it, although it would be fairly expensive because it's fairly custom. We're really about selling access to a platform. Our platform runs both on the AWS cloud and the Google cloud. Bigger organizations can get it on premise and run it on their own infrastructure, and they can set up special enterprise agreements to run their own bits and pieces. If you wanted to roll your own you certainly could, by taking the CartoDB source code and deploying it yourself, but that would of course be a lot of work and expertise, which is why CartoDB exists: to get all that effort out of the way. Any more questions for me or Paul? No. Okay, good. I'm fine.
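As a footnote to the API-key discussion above, one common pattern (not something the talk implements) is to keep the key on a small server-side proxy and only accept whitelisted input from the browser. A minimal sketch, assuming Node 18+ for the built-in fetch; the endpoint shape, table name and the postcode check are illustrative assumptions.

```javascript
// Sketch of a tiny Node proxy that keeps the CartoDB API key out of the browser.
const http = require('http');

const ACCOUNT = process.env.CARTODB_ACCOUNT;   // e.g. "myaccount"
const API_KEY = process.env.CARTODB_API_KEY;   // never shipped to the client

http.createServer(async (req, res) => {
  const url = new URL(req.url, 'http://localhost');
  const postcode = url.searchParams.get('postcode') || '';

  // Whitelist the input instead of concatenating raw user text into SQL.
  if (!/^[A-Z0-9 ]{3,8}$/i.test(postcode)) {
    res.writeHead(400);
    return res.end('bad postcode');
  }

  const sql = "SELECT year, crime_count FROM crime_by_postcode " +
              "WHERE postcode = '" + postcode.toUpperCase() + "' ORDER BY year";
  const upstream = 'https://' + ACCOUNT + '.cartodb.com/api/v2/sql' +
                   '?api_key=' + API_KEY + '&q=' + encodeURIComponent(sql);

  const r = await fetch(upstream);
  res.writeHead(r.status, { 'Content-Type': 'application/json' });
  res.end(await r.text());
}).listen(3000);
```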
|
|
10.5446/32052 (DOI)
|
I'm from NextGIS, my name is Maxim. I've been doing geospatial open source development since 2008. This is the first time I've actually managed to visit FOSS4G, and this is also the first time we are presenting what we've been working on for about two, two and a half years. I'm from a company which is called NextGIS. Okay, a few words about NextGIS. We founded it in 2011. It is all developers; we like to joke that even the accountant and the lawyer in our organization have to push to GitHub to do various things. All right, we are a corporate member, we have QGIS core committers, and our tribe is C++ and Python; we do a little bit of Java, but only on Android, and that sort of defines the ecosystem we are trying to build. All right, the software I'm showing today is three or four components written by us. Of course we're standing on the shoulders of giants and using all kinds of components from outside, but the packages are written by us, and they are not just wrappers but the major pieces. Okay, so our goal as a company, to reflect back on the question from before, is to provide other companies with their own stack which they can use to build their own applications, with all the components. It's not some service on the internet; this is a full set of components on server, mobile and desktop. Okay, just running a little bit ahead: we have quite a few clients which are already using it, so this is not just our ideas, this is all implemented, and of course they are predominantly in the country we are from, but we have some international clients as well. Okay, why did we do this? Because at some point, as a company doing custom development, you keep running into different packages and different libraries, and you try this and you try that, and then at some point you get fed up with it and you start building something of your own. Some people stick to whatever projects there are, and some people go their own way, and we went a little bit our own way, which is also kind of typical for the country I am from: everybody likes to build their own bicycle, basically. Okay, so I'm actually talking about a platform of platforms sort of thing. Every component we've built, we see not as a final product but as something which can be used, and is used, by us and increasingly by other people to build their own applications. So there is a core platform which I'm going to talk about; there are four components, and there is lots of additional software which helps with different sides of things, and we are involved in all of these and we are also writing a lot of extensions to all of these. If you use QGIS, in many cases you might run across one or two plugins that we wrote. So the idea behind this integrated platform is that, okay, you are a company, you have a server, you want to store some data on this server, and then you want to access it in all kinds of possible ways from whatever clients you have: from desktop, from web clients, from mobile clients and things like that. So we have this plan to have our server and then surround it by clients and then make everything interrelated in both directions. At some points these arrows don't make much sense, but the idea is to make it all interrelated. So currently, it looks a little bit like this.
Some arrows are still missing, some arrows go just one direction, and this is the way it is. So, okay, the first, core product we are building is called NextGIS Web. NextGIS Web is a server backend application plus an integrated client. It's built in Python with Pyramid, it uses PostgreSQL/PostGIS as the database backend, and the front end is built on Dojo using OpenLayers. It's GPLv2, and it's mainly for storage, management and access of geodata. Okay, I'm going to go through a few things; I'm just going to speed up and go like this. So, everything in NextGIS Web is a resource. We have vector layers, rasters, web maps, PostGIS connections and layers, groups of resources, and all kinds of things you would expect a server to have. And the resources are hierarchical: for example, a vector layer will include such a resource as a style. It's not even here in the list, but it's a hierarchy, so one vector layer can have multiple resources. A resource is also extendable, which is very important for us, because all the time a client wants some specific resource with specific behaviour, so you can extend it and have your own properties and behaviours. It also has capabilities, so all the resources are able to talk to each other and find out what a resource does and what it is able to provide. Of course, NextGIS Web has an API, and you can do things like get a list of features in a resource, get a list of resources, get all the permissions that the resources have, and make all kinds of requests on them: posting, deleting, all kinds of things. All right, so it's all managed by a web-based administrative interface, so you can add layers, upload layers and do this and that in the browser. An organization can have multiple layers; this is just one example from a project. We put a lot of effort into building a comprehensive permission system, so you can do very many things with permissions. For example, you can have a PostGIS connection, and then there are some PostGIS tables added to your application, and the PostGIS connection and the PostGIS tables can have different permission sets. So a person who has access to the administrative interface would not be able to actually see the parameters of this connection, but will be able to access the data, for example. And then there are all kinds of other things: permissions can propagate, so you can have a group of resources and assign the permissions to them. I don't know if it's visible, but there is a lot of stuff going on here in terms of different kinds of resources. So, as I said, it has an integrated web mapping client. We don't put too much effort into it, but it does have... the goal of this client is to have web maps. You load a few layers of different kinds, you put a web map on, and then you show it to the user. You can have as many web maps as you want. It's not super fancy looking, because we think that most of our clients and the people using it are actually not using this web map application; they want something of their own using NextGIS Web as a backend, and this is just an example of it. As you can see, it's using the same backend and the same sort of Dojo libraries to build the frontend, but it's completely different looking, a different instrument, different stuff, and it can get increasingly complex. It looks much better on my screen, sorry about that. So all kinds of things people are putting onto it, and it seems to be working fine.
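To give a feel for what "everything is a resource" means for the API mentioned above, here is a hypothetical sketch of how a client might walk the resource tree and pull features over HTTP. The endpoint paths, auth scheme and response shapes below are assumptions for illustration, not a statement of NextGIS Web's actual REST contract; check the project documentation for the real routes.

```javascript
// Hypothetical sketch only: endpoint paths, auth and response shapes are assumptions,
// not the documented NextGIS Web API. Requires Node 18+ for the built-in fetch.
const BASE = 'https://ngw.example.com';
const headers = {
  Accept: 'application/json',
  Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
};

// List child resources of a resource group (assumed ?parent= filter).
async function listResources(parentId) {
  const res = await fetch(`${BASE}/api/resource/?parent=${parentId}`, { headers });
  if (!res.ok) throw new Error('HTTP ' + res.status);
  return res.json();
}

// Pull the features of a vector-layer resource (assumed feature collection route).
async function listFeatures(layerId) {
  const res = await fetch(`${BASE}/api/resource/${layerId}/feature/`, { headers });
  if (!res.ok) throw new Error('HTTP ' + res.status);
  return res.json();
}

listResources(0)
  .then((children) => console.log('resources under root:', children.length))
  .catch((err) => console.log(err));
```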
One particular feature which we are also very excited about is pluggable renderers. In NextGIS Web, we don't go into the rendering stuff; we don't write renderers, because there are so many of them: you have MapServer, you have Mapnik, you have QGIS, you have GeoServer. Among the ones I named, we can plug in three different renderers, so you can have a layer rendered by MapServer, at the same time have a layer rendered by QGIS, and at the same time have a layer rendered by Mapnik, all on one map. That gives you lots of flexibility, because styling is always a pain: you want your styles to be transferable, but that's not working at this moment, or at least not working well. I'm switching to mobile. Mobile is another focus of ours. It's Android mainly, and it's for visualization and data collection. There are three parts to the architecture: there is a library for mapping, a UI, and an application built on top of it. (Five minutes or ten? Okay, sorry.) All right, built on top of that there is quite a bit of functionality. It does hold multiple layers, which is kind of unique, because I know that people prefer to have one layer, but in corporate settings there are many cases where you want more. It has editing capabilities and custom forms, and it integrates with NextGIS Web. So, just so you can see how it looks (I know why it's all dark), an example of how it looks: there are multiple layers, there are vector layers, there are raster layers, you can turn them on and off, and you can do things on mobile which you are used to doing on the desktop. Okay, you can also edit stuff: polygons, lines and points. You can edit attributes, and these get synchronized with your web instance if you have it connected. If you edit something on the device, it gets loaded back onto the server, and it also does offline editing, so you can actually get disconnected, edit something, get back to the internet, and it will get transferred back onto the server. One feature we really like and have worked on quite a lot is customizable forms. Again, all the clients, all the people, have their own wishes about what the form should look like. So we have this special application called Form Builder, which gives you all the components; you build your form, and then it gets rendered on the fly by the mobile application. No programming needed: just build your form and import the data. Actually, it's a data-driven form, so you can import the data, build your form, make it nice, save the whole package, upload the package to mobile, and here you go: your application is now rendering forms for tracking tigers, or doing some sales, or whatever, and we don't really impose any restrictions here. Okay, so a very important feature is integration; we started with the idea that everything should be integrated. Integration works like this: you have your NextGIS Web instance, and you can create your account right in the mobile application and sign into the instance. And then you have all your layers, and of course you can have raster and vector layers. So this is one PostGIS layer also connected; you have WMS clients and satellite imagery, whatever. You can load it up as vector if you want to edit it, and get it synchronized with all this stuff.
You can load it up as a raster if you just want to see the data and see the styles the creator of the layer wanted you to see, and then it loads up; it's just some topographic maps of various kinds. Okay, QGIS. We've been working on QGIS, as I said, since 2008, and it is a sort of workhorse for us; you all know what it is, so I'll just go quickly now and speed it up a little bit. We need our own QGIS build because we actually work a lot on GDAL, and we need our own build of QGIS to expose all the functionality from GDAL that we added there recently. And there are some recent developments: we are able to connect to ArcGIS Server native map services now. Of course, if ArcGIS publishes WMS or WFS, you can connect with no problem in any QGIS; but if it's published natively and not exposed as a tiled layer or something like that, it is now possible to connect, and we can do it in our own version. Another thing we worked a lot on is a new network model, which is also based on GDAL; it's an abstraction layer over all kinds of possible network models you can have, from pgRouting and OSRM and things like that. It tries to abstract that away and give you all the functionality that usual networking does, like connectivity and things like that. We also support a custom build server: the client can say, I want your installation to look like this, this and that, and then it gets uploaded with a build manager to our server, it goes into a queue, and it builds an installation file with all the needed plugins and all the customizations they want, and it basically gets distributed after that. Again, integration is very important: there are many features we added so that you can send data directly from QGIS to mobile, with a QGIS-to-mobile plugin, and you are probably also familiar with QTiles. QTiles is a tile generator, a tiling plugin for QGIS. There is a connection to NextGIS Web, so you can actually see the whole structure and so on. Okay, Manager is the last piece of the puzzle, at least the last piece, for managing all this, and it does lots of stuff; believe me, I'm running out of time. The Manager is something like what you might see if you work with ArcGIS Catalog or QGIS Browser. This one is a little bit more full-featured: you can do lots of things, and you can also integrate with your instance again. You can connect, you can see all your layers, you can do a search on the data (it's a server-based search, but you're sending it through the API), and you can then sync this and that, and many other things as well. Status: okay, NextGIS Web is code-only, but if you ask us, we can actually give you an instance to try. Mobile is out at 2.1, just go to Google and search; 2.2 is actually due next week, and we are looking for beta testers if you want, so we can give you beta testing with the fully featured stuff, or go to our website and there are more links and things there. Okay, plans: we plan to do everything, basically. I'm going to questions now, because I don't want to spend all the time talking over the slides. Okay. I have a question on the mobile side: would it be possible to use your libraries to build our own survey app? For instance, if I'm going to go out in the field and create some geodata or do a survey. Yeah, it's possible.
I mean, even more: you don't have to program for it, because of the customizable forms. You can load your data from wherever without any additional programming. But you can program it too. So the survey will be possible offline, without the network? Both: as I said, offline editing is supported and online editing is supported. And the last part of the question: the map data will not be available offline? It will. That's the best part: it's built to work offline and do everything you need, including having all the mapping data, limited of course by the size of the device and so on; you don't want to load millions of points onto it, of course, so you will have to optimize, but that's not a programming question, that's just an architecture question. So yes, you will have all your data, rasters and vectors. In the new versions there will be this little dialogue where you just say, here's the area, go and get all the files and cache them locally on the mobile. So you can plug into the WMS server you like, or into an instance you have already set up, and then get everything you want on the phone. And that's it, you don't have to be connected after that. That's great, thank you. Yeah, we like it. It sounded like you have a fork of QGIS; do you have any plans to bring those changes back? You had some cool features like the integration with ArcGIS. Yes; it's not a fork of QGIS. All the changes we make are going upstream. This is a QGIS built on top of GDAL 2; it is not a fork of QGIS. We need our own version just to expose all this functionality which is in GDAL, and in GDAL it's already upstream: everything I showed is already upstream. So everyone can do a few tricks and build their own QGIS on GDAL 2 and have this functionality available to them as well. So it's not that problem where we have a fork which is a real fork; we don't really call it a fork, because it's the same QGIS but with some surroundings. Sounds good, thank you. Yeah, so it's not that problem. Okay, that's it, thank you. I have another one here. You showed a lot of licenses, and some of the licenses were GPL. Are you thinking about some kind of dual-licensing policy, so that we could also use this commercially and not necessarily open our code as GPL? Yes, well, we naturally thought about it a lot. But one of the reasons we actually moved away from ExtJS, which we used a lot for front-end development, is that this dual licensing got so complicated: when you're trying to explain something to your client who is asking a question like, how can I use ExtJS in this sort of setting or in that sort of setting, the answer from them is always buy a commercial license, buy a commercial license, buy a commercial license. And we wanted to stay away from that for a while; that's why we decided on maybe a slightly less business-sounding approach. But it might change in the future, who knows. So it's a little bit difficult to answer, but yes, we thought about it; there is nothing like that yet, and if you want it, we can talk about it.
Thank you.
|
NextGIS has been busy working on a new stack of geospatial software for the past few years and we're finally ready to present what we've accomplished. Our stack consists of 4 major components: web (NextGIS Web), mobile (NextGIS Mobile), desktop (NextGIS QGIS) and data management (NextGIS Manager). Three of those components are brand new, developed by NextGIS alone, and were released just recently. For the fourth component, we have participated in QGIS development since 2008 and use its codebase for our desktop component. The main focus of the stack is tight integration, ease of use and modularity. The new stack has a number of unique features, to name just a few: pluggable renderers for NextGIS Web, multi-layer support for NextGIS Mobile, super-fast rendering and great format support for NextGIS Manager, and all-around integration with NextGIS QGIS. The presentation will provide an overview and will look at the general architecture, use cases and plans for future development.
|
10.5446/32055 (DOI)
|
Hi everyone. So I'm going to talk about the Cadasta Foundation today, as well as the Cadasta software platform. What we are building is an open source platform to help communities document their land rights. This is focused primarily on places where people don't typically have legal rights to their land. For example, I'm from the United States and we have this idea that you have a formal title, you have a deed; 70% of the world actually does not have that sort of formal, very defined right to where they live. So land tenure, which we talk about quite a bit, is either legal or historical or customary rights to property. This could be anything from actual pieces of land to mineral rights to water rights. One of my favorite property rights examples is a scenario where you have a tree, and maybe someone has the right to pick the fruit that falls from the tree, but they don't have the right to pick the fruit from the tree, and maybe someone else has the right to the shade. So it starts to become a sort of complicated temporal, geospatial problem. Simply put, it's who has rights to use what resources: rights, responsibilities and restrictions. And as I said, it's not just about land, though I would say the majority of our scenarios are actually about land. So, as I said, the majority of the world does not have secure rights to their land. Over a billion people living in urban slums don't have rights to where they live. And documenting and digitizing those rights is very slow; there are a lot of proprietary solutions that are quite expensive, and the people who have the least rights are also often exploited. So we believe there is a better way, beyond just moving from paper, toward low-cost open source solutions. So, Cadasta: we're a foundation that supports the development of software through open source tools, as well as data, and we're building a community and doing mapping. I'm primarily going to talk about our software platform. What we're building is a platform online where people can go collect data. And imagine the people that I spoke about that don't have secure rights to their land: they probably also don't have a fast internet connection, and maybe they don't use a computer. So what's important is figuring out how you can actually collect data, somehow get it online, and then do something useful with it for those people. And so we're basing things on top of CKAN. Is anyone familiar with CKAN? A few people. It's not geospatial software; CKAN is data portal software, primarily used for open data. A lot of national governments use it for their data publishing sites: for example, in the US, data.gov runs on CKAN, and data.gov.uk runs on CKAN. It's supported by the Open Knowledge Foundation. And what we're doing is wrapping that with Node.js. The reason is that CKAN has an OK API, and it also has detailed permissions, but there are parts we wanted to add beyond that. And I believe it's very important, when you're building technology, to be API first. One of the main reasons is that we know we're not going to build all the tools that people need, so we want other organizations and other people to build on top of what we're doing. And then the back end of that is PostGIS, so we do some Postgres and PostGIS, we do some geospatial. And then, initially, we're integrating with OpenDataKit. OpenDataKit is a platform that allows you to make dynamic forms.
There's a standard called XLSForm, with which you can easily make a form in Excel and then upload it to your OpenDataKit server. Then your data collectors can download that to their phones, go collect data offline and sync it back up. We're also looking at working with Field Papers as well, since a purely mobile- and computer-based solution doesn't work well for community mapping. So another component, by using Field Papers, is being able to print a large map, go ask people where their properties are, and then be able to digitize that later. And I should be clear: we're not the ones doing the asking. We're partnering with people that are already working in the land space who maybe aren't so good at technology. I find a lot of people have this idea that they should be using imagery, they have this idea that they should be using mapping, but they don't know where to start. And some of them start and build their own platform, but it's not very sustainable, because maybe they paid a contractor to build it and then their grant money ran out, those sorts of scenarios that cause a lot of software not to be maintained. So we're working with those groups to help them so they can use technology. As I mentioned, a big component of our software stack is that we're building an ingestion engine. To start with, we're supporting OpenDataKit as well as uploading shapefiles and Excel, but we want to be able to use other mobile data collection applications, or other software in general, so you can build an ingester to connect your data entry into the Cadasta platform. We're using OpenDataKit and then we're also using FormHub. One of the things about OpenDataKit is that there are a lot of forks of it; it would be an interesting thing to look at, as far as the ODK community goes, why that is. And all of those forks have a different name, and you'll start to look at them and go, oh, this looks really familiar: it's ODK. But anyway, FormHub is supported by a company based out of Kenya, and it has quite a few more features, so that's why we decided to use it. So, data structure. It's a bit complicated, because there's an ISO standard for land tenure; it's about 150 pages, and I think it cost me 200 euros or something. If you look at it, it highlights every possible scenario, but then if you look at what you actually need, typically you have this idea of a person or a community, and there's something like a piece of land, and then there's the relationship between the two of them. So we boiled it down to that. And so we're using structured data for that aspect of it, this idea of a person and something that exists somewhere, such as land, but then allowing other dynamic attributes to be collected as well. The reason for this is that we want to create a worldwide cadastral layer in an ideal situation, but we also want people to be able to collect private data that they might keep to themselves. If you can imagine, land rights are very complicated; by having open data you can actually endanger people in some scenarios. But there are other situations where you're doing advocacy. I think a great example of this comes from OpenStreetMap and the Map Kibera project, where a group of people worked in one of the largest slums in Kenya, in Kibera, and put themselves on the map. It's a way of saying: we are here. But sometimes you don't necessarily want to say that. So we love open data, but we're making sure users own their data.
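One way to picture the "person, spatial unit, relationship plus dynamic attributes" model described above is the following illustrative record. This is not Cadasta's actual schema, just a sketch of the shape of data an ingester might hand to the platform after processing an ODK/XLSForm submission.

```javascript
// Illustrative only: not Cadasta's real schema.
const record = {
  party: { id: 'party-001', name: 'A. Farmer', type: 'individual' },
  spatialUnit: {
    id: 'parcel-001',
    type: 'parcel',
    geometry: {
      type: 'Polygon',
      coordinates: [[[36.80, -1.28], [36.81, -1.28], [36.81, -1.27], [36.80, -1.28]]],
    },
  },
  relationship: {
    party: 'party-001',
    spatialUnit: 'parcel-001',
    // Could equally be a lease, an easement, fruit-picking rights, and so on.
    tenureType: 'customary',
  },
  // Dynamic attributes from a custom form, kept alongside the structured core.
  attributes: { household_size: 5, water_access: 'shared well' },
};

console.log(JSON.stringify(record, null, 2));
```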
And I think that point about data ownership is really important. I think one of the benefits of being a foundation is we can really say: we're fans of open data, but it's your data and you need to decide what to do with it. So we're working to build a community. We are in very early days: Cadasta has only existed since January of this year, I only joined the team in April of this year, and Thea Aldridge, who's our community manager, only joined last month. And so we're moving slowly and trying to figure out how we're going to engage with people, coming to conferences like this to engage with the technical community, but also how do you engage with the communities that are going to be using our software as well? I've talked mostly about informal community types of rights, but we do want to work with governments as well. One example of that is, in working with communities to document their land, there are processes to formalize your land depending on what country you're in; what you need to do is essentially collect a bunch of information and then have a form. So imagine if you could print out that form in a more automated way. We also work with donors; we're primarily funded by the Omidyar Network, and if you're familiar with who Pierre Omidyar is, he was the founder of eBay, so it's a personal foundation. We're working with community-based NGOs, as I mentioned: groups already doing land work in many countries. And then I feel like the developers, technologists and geospatial geeks sort of go together, so... So, what's ahead? We're still building software, but we're a little bit in the hand-wavy stage because we haven't released anything yet. If you go to the Cadasta organization on GitHub, you can see that there's active development going on. We're in September now; we're going to start releasing to partners in October, and we're starting to work on pilots this year as well. We're working with a couple of existing groups, potentially working in Colombia, maybe Nepal, and Eastern Africa. And then in December this year we're going to do a completely public launch, and that'll be when you can really get in and use the software. If you happen to be in the Washington, D.C. area, we're having a launch party, I think it's December 10th; let me know and I'll make sure you're invited. And there are a lot of ways to get involved with us. One way is you can come work with me: we're currently hiring a graphic designer, and we're going to be hiring some engineers early next year; these are all positions that are planned. We're also participating in Outreachy. Outreachy is a program to get underrepresented groups involved in open source. It's a paid 12-week internship, and it works very similarly to Google Summer of Code. The next round of Outreachy starts in December of this year, so if you know anyone who would be interested, it's an internship working in open source, but it does not have to be coding. It could be coding, it could be documentation, people have done marketing; there are all sorts of different potential internships that someone could do with us. Personally, I'm hoping that we might do a QGIS plugin, just because I think that's really something missing from our current plan for this year that could be really beneficial. But Outreachy opens at the end of this month, so if you know anyone, please point them our way.
As I said, it's completely open source, so if it meets some requirement that you have, you can always fork it, do a pull request, those sorts of things as well. Or just use it: if no one uses what we're building, it's not very useful. And so I was excited to be able to share this with you today, that we have some structure to the tech we're building, we have stuff going on in GitHub, and like I said, we're going to be launching December 10th, publicly, where anyone can use it. Any questions? So if you imagine you're trying to draw a property boundary, if there's no other data, you can't draw a property boundary, so you could make a beautiful map. And a lot of the maps that are done are hand sketches that aren't georeferenced. So we certainly want to work with OpenStreetMap; we're big fans of the Missing Maps project, and I certainly spend a lot of time on OpenStreetMap. So I think we'll definitely be doing something; just figuring out what, other than probably providing it as a map layer, is partly going to depend on doing pilots with these communities, because we really don't know how people are going to use the software until we've actually had them use it, basically. I'm going to ask a question: I'm from the national land survey in Sweden, where it also took a long time to get there, and we have worked abroad in other lands like Kenya, Vietnam and North America to help build the cadastre from the beginning. But we worked with governments. So how do you see this working with governments in all the other countries? Well, I think there will be a point where governments, if we do things right, will use what we're building. In working with communities, a lot of groups do legal advocacy to governments and apply for them to have those rights, which wouldn't necessarily be us working directly with them, since we wouldn't actually be there on the ground. So I definitely think we want to work with governments. We also want to republish existing open cadastral data as well, where that's available. Yeah, definitely; the QGIS plugin would be great if you could go that way. Thank you very much. Also, I was recently working on a volcano in Africa, Mount Elgon, on the Uganda and Kenya side, and one of the issues I ran into there was people living in national parks. I've mapped them, but in this atlas that we just made, finished last week, there were very strong calls from government and IUCN that, well, we probably shouldn't formally show people living in national parks because they're not supposed to be there. All these sorts of things; not just that, but there are other places where governments or other groups don't want you to be talking about these sorts of things. And I think the sort of background groundwork before you get into those things is very important. What's your experience with that, and how do you plan on handling that sort of issue? So I'll preface it with: I'm our CTO, and we do have people who are really experienced land administrators and who are responsible for working on those things, such as vetting partners, designing programs, those sorts of things. There are certainly ethical issues where, you know, maybe someone doesn't have legal rights, but the ethics of whether the government is right or they are right is very difficult. And, I'm not sure, technology is not neutral, and I think that's something really important to think about. But I think we have to manage that through relationships.
I think one of the most difficult things is, you know, as technologists, we often want to fix problems with tech. But it's usually a social problem. So I think that's really going to come down to the groups that we work with and how effective they are at dealing with those social problems. So thank you, Kate, for the presentation.
|
Much of the world currently does not have secure property and land tenure rights. Communities and individuals need low-cost tools to enable them to advocate for themselves. The Cadasta Foundation is building an open-source platform to securely enable these groups to document their land rights. This talk will review the design decisions taken into account, the technology underlying Cadasta, and the future road map. Individuals interested in land rights management and/or the challenges in implementing technology in difficult environments will be especially interested in this presentation.
|
10.5446/32057 (DOI)
|
|
This presentation showcases the latest advances in building a full 3D open source GIS software stack. Some important cities have recently released their 3D models of textured buildings as open data: Bordeaux and Lyon in France, Geelong in Australia and Berlin in Germany, among others. Meanwhile, new hardware and sensors for 3D data capture and interaction are appearing. We want to be able to store, analyze and visualize both 2D and 3D data with the same open source tools. Among the processing we want to use are 3D intersection, 3D union, triangulation and a lot of other spatial analysis functions we already use in 2D. Other types of 3D data also need to be stored and processed, like 3D point clouds from Lidar data, or DEMs. With 3D data, an absolute must-have is a nice, fast and smooth rendering of features and their associated textures; visualization is a key element of a complete vertical software stack for 3D data management. This presentation will demonstrate the ability to set up and take advantage of a full FOSS4G 3D stack. This stack features various components, just like a 2D GIS stack: PostgreSQL and PostGIS now feature 3D data storage and processing; GIS servers can now stream the data with web services; desktop applications allow 3D visualization (QGIS and the Horao plugin); and WebGL applications let the user configure a native in-browser 3D visualization. These components improve over time, allowing more capabilities, be it for the analysis part in the database or the visualization part in the browser.
|
10.5446/32059 (DOI)
|
Geopaparazzi: State of the Art. Hirofumi Hayashi, Japan. This presentation was originally by Andrea Antonello in Italy, but I took over the presentation data because he is very busy now. I'm also the vice chief director of the Japan local chapter, and I work at Applied Technology Co., Ltd. in Osaka, in the spatial information section of the management engineering headquarters. So, in this presentation we talk about numbers, history, Android and Geopaparazzi. So we start with numbers. What was the mobile market penetration in 1991? In Sweden it was 6.6%, and over the whole world it was 0.4% in 1991. In 2010 it was 91.1% in the world: a very, very big market. In the case of Luxembourg, mobile penetration passed 100% in 2002 and continued with more activations than inhabitants, 105 per 100 inhabitants in 2011. Mobile penetration in the case of Saudi Arabia in 2010, and in Greece, reached 188%. In India's case, in 2011, there were 145 million mobile users; that is more than Europe plus Russia plus Arabia. Many, many users in India. Internet traffic from tablets and smartphones is 26 times what it was; over the last 5 years it grew 7 times. In short, people spend more and more time using the internet. World internet coverage in 2003 was 6.0%, and it increased to 19% by 2010, the same kind of increase as with mobile phones. SMS has been decreasing since 2013, displaced by other smartphone services. Active lines per 100 inhabitants: globally there are 6 billion phone lines, and broadband has grown globally to 1 billion, but there is still a high divide, 0.8% versus 41%. And how much traffic do we produce? Mobile phone traffic is very high now; it started around 2002, but now the number of mobile phone users has increased greatly, and the traffic produced has also increased compared with desktop PCs. So, history. 1973: the first mobile phone contact, made by Motorola. The spec is here: 1.5 kg, the same as a laptop, very big, 10 hours to charge and 30 minutes of communication, so short. Next is 1983, a smaller phone, the DynaTAC 8000X. So small. 1989: the Nokia Mobira Cityman. I don't know this one; Russia used this type of phone. A cell phone, not yet a smartphone. 1993: the first smartphone, the Simon, appeared from IBM; this one had a touch screen here. 1996: the Motorola StarTAC. It is a flip design, so a small cell phone, and the first with vibration for user notification of messages; it was very useful and increased the number of users. 1996 also brought the Nokia 9000 Communicator: it had a useful keyboard here, a touch screen, and a cell phone antenna here. 1999: the Nokia 8110, the one used in the movie The Matrix. This one had the Wireless Application Protocol and other protocols, so it used the mobile phone for internet navigation. Then the 2000s. This slide is about Nokia and others, and about the Japanese cell phone, the J-SH04, the first with a camera device. A Samsung phone also had an MP3 player, and Nokia had one with T9 input and the antenna inside here. So, the variety of media device access increased between 2000 and 2007, and a new type of smart device appeared on the market. This one had 3G access, Wi-Fi, GPS, accelerometer, MP3 player, TV, Bluetooth, Flash and Java, with plenty of memory: everything the same as now. And in 2007 the iPhone, Android and the Open Handset Alliance also appeared on the market. This one is the first type of iPhone. Also, from 2005 Google had bought Android, and in 2007 the Open Handset Alliance was created, with 18 companies developing Android devices and applications. From 2008 the first Android phone appeared on the market. I bought the same one, and some other users bought it too; this one was also handed out free at the developers' meeting, and people used it for many years. So, mobile OS share here in 2011 is about 50% of the world. So, I've talked about Android. In the geo-FOSS world before Android, for a long time only mobile GIS tools on iPAQ-type devices existed.
This one is a Compaq iPAQ, and GIS tools also ran on it. This one is a 2002 product from Fujitsu Corporation in Japan — I have one. Before Android, but years later, gvSIG Mobile brought GIS to these devices. It supports shapefile, GML, KML, GPX, PNG, GIF, ECW, and WMS, and optionally editing of geometries and custom forms. It was very useful on Windows Mobile and Windows CE devices, but Windows Mobile is already old, and nobody uses that interface now. With the Android age, QGIS Mobile started on Android devices, with the same interface as on the desktop PC. It is very useful, but the CPU was very poor on my Samsung device, so operation was very slow. gvSIG Mini started, maybe before the Barcelona FOSS4G. gvSIG Mini is not a port of gvSIG Mobile; it includes a navigation mode and is more for touristic use — I also used this application at the Barcelona conference. But OpenStreetMap appeared as the world base map system, and updates to gvSIG Mini are not active now.

So, let's talk about Geopaparazzi. (Let me check the time — maybe seven minutes left.) Andrea Antonello and Silvia started the Geopaparazzi product. It is basically an engineering survey tool, something you just have in your pocket, an easy map tool with a professional target but generally important in daily life, because Geopaparazzi can use SMS for emergency calls. This is the Geopaparazzi interface in a few words. It can take notes and GPS logs — notes meaning text, pictures, hand sketches, and any other customized form — and you can use raster or Mapsforge base maps and Spatialite vector overlays.

Now about MBTiles and the development support. Geopaparazzi supports the following base map sources. This one is the Japanese government tile system, the GSI (Chiriin) tiles; it can be used in Geopaparazzi. This one I used in the workshop here; it is a Mapsforge-style base map. This is an offline base map, and this one is also an offline map of the MBTiles type. This one is TMS. There are also Mapsforge offline maps of the whole world built from OSM base map data; they can be very useful for Geopaparazzi, and other applications can use the MBTiles library too.

Notes and bookmarks: this is the Geopaparazzi note system. You can easily access notes from this icon, take a note, and it appears as point data. You can edit the point data: press this icon, a dialog appears, and you select from the list; it has edit, share, and delete, and you can use the select-all-notes operation. The bookmarks are here. An Android device's battery is very limited, and panning the map all the time drains the battery, so with a bookmark you can quickly move the map to the next point.

This is the map data editing system, for map data and spatial data operations. This data is the GPS data list, from GPX logging data or data logged on the device, and a long tap here opens it. You use the layers operation, and you can also set the line color shown on the map; this function is very new in Geopaparazzi 4. This is polygon editing; this function was developed with support from the State University of New York. If you tap this pencil icon and select the layer you want to edit, some menus appear, and you can cut a feature or extend a feature, like this. Sometimes it is useful for adding new data to the map over a survey area. If this function did not exist, you would need paper and a sketch, then go back to the lab and reconstruct the shapefile, and sometimes it is difficult to reconstruct because the paper or the location gets lost. So in the survey area, real-time input is sometimes necessary.

Next, how to use the surveyed data on the PC. This is the STAGE application for operating on Geopaparazzi data.
You can download STAGE from here. Run the application, select the mobile device, and use the Geopaparazzi converter; you can get the logging data, the image data, and the note data here. This is the result of importing the data into the uDig application, and it also works with QGIS. It is easy. This is how to use the Spatialite database in STAGE: you can convert map data from shapefiles and image TIFF files, and you can view the MBTiles you created in STAGE. You can also build MBTiles from the QGIS application with a plugin named QTiles. It creates normal MBTiles, but Geopaparazzi cannot use those MBTiles files directly, so I modified some source code: the metadata needs extra entries, and the tile numbering needs adjusting in the MBTiles for Geopaparazzi. I modified two source files, tiles_thread.py and writer.py; they are in my QTiles fork on GitHub. You can easily find the modified QTiles source code and use it.

I have no time to introduce all the projects based on Geopaparazzi. This is a Japanese project: a digital management information system for the city of Osaka for water and disaster. This one is a gas emission system in Italy. This one is a manhole analysis system using RFID tags. This one is a vehicle support system for rescue operations. And this one is town walking and visibility of local disaster risk with WebGIS and civic science: people walk through the town, survey the town they live in, and consider the risk in the area where they live. It is a normal Geopaparazzi use; we customized the report form, collected photos and report data, went back to the lab to generate the collected data in Google Earth, and shared the images of the risk data.

Last are the Geopaparazzi workshop activities. Many times iPhone users come to a workshop and only have an iPhone, but Geopaparazzi only works on Android. We have run Geopaparazzi workshops in Japan four times, and the next is next month in the Tokyo area; in the world three times, the first at GIS-IDEAS 2014, then a GeoSaturday, and this time, yesterday, here. Many people joined the workshops and learned how to use Geopaparazzi, and the users are increasing in East and West Asia and in Europe. We also need to support the Korean Geopaparazzi community this time.

The Geopaparazzi roadmap: one, on the device, new raster formats will be fully supported in Geopaparazzi; two, on the desktop and server side, a new STAGE will be developed. It will be a web application with which users will be able to look at the survey data and import and export projects from and to the device. It can work on the server side, generate new report files and new project files, collect the surveyed project files, and show them in the web browser like this. That is the end of this presentation — any questions?

Hi. Do you have customizable forms in the application? What is the approach — do you customize them in a file? Yes. I created this type of application for disaster management of a water system. It can collect water pipeline breaks where a valve needs to be closed, and we customized the form for the close-valve condition and collected the field data. So this is something you create for specific applications, right? It is not universal? You can make whatever customized form you need to get the data for your application. Did I understand it? Yes, that is the target use. We might discuss it later: we are the creators of QTiles and we saw your pull request to make this update for MBTiles. We can talk about that.
It would be nice to fix it so you won't have to use a fork. Thank you. My presentation ends here, and thank you for your questions. Thank you very much.
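For readers who want to follow the MBTiles discussion above: MBTiles is just a SQLite file with a metadata name/value table and a tiles table, so the metadata can be inspected or patched with a few lines of Python. The sketch below is illustrative only — the talk does not spell out exactly which extra keys or tile-numbering convention Geopaparazzi expects, so the keys and values here are assumptions, not the real requirements.

```python
import sqlite3

def patch_mbtiles_metadata(path, extra):
    """Add or update rows in an MBTiles 'metadata' table.

    MBTiles stores metadata as simple name/value pairs; which extra keys a
    given client (for example Geopaparazzi) expects is assumed here.
    """
    con = sqlite3.connect(path)
    cur = con.cursor()
    # The metadata table is part of the MBTiles spec: two text columns.
    cur.execute("CREATE TABLE IF NOT EXISTS metadata (name TEXT, value TEXT)")
    for name, value in extra.items():
        cur.execute("DELETE FROM metadata WHERE name = ?", (name,))
        cur.execute("INSERT INTO metadata (name, value) VALUES (?, ?)", (name, value))
    con.commit()
    con.close()

# Hypothetical example: these key/value pairs are placeholders for illustration.
patch_mbtiles_metadata(
    "survey_area.mbtiles",
    {"name": "survey_area", "format": "png", "minzoom": "12", "maxzoom": "18"},
)
```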
|
Geopaparazzi is an application for fast field surveys. Its simplicity and the possibility to use it on as good as any android smartphone makes it a trusty field companion for engineers and geologists, but also for tourists who wish to keep a geodiary and any user that needs to be aware of his position even in offline mode. In Geopaparazzi it is possible to create text, picture and sketch notes and place them on the map. Notes can also be complex and form based in order to standardize surveys in which many people need to be coordinated. In the last years the support for the visualization of spatialite vector layers and recently also editing possibilities for spatialite poligonal datasets has been added, allowing for some simple-yet-powerfull possibilities on vector data. Desktop tools are supplied to bring datasets from the GIS environment to Geopaparazzi and back. The presentation will focus on the most important features of Geopaparazzi as well as the latest additions to the application in order to give a complete idea of the state of the art of the project.
|
10.5446/32061 (DOI)
|
Can everybody see this? We're having some small, non-cloud-related technical difficulty. This is on-prem. Does that make sense? Okay. So I'm going to go ahead and get started. It's 11 o'clock. My name is Mark Korver. I'm with Amazon Web Services. I'm part of the Solution Architecture team on the public sector side of Amazon. That means I work with our government customers and our education customers. And I am the geospatial lead on the Solution Architecture Specialist team. So I'll talk for about 20 minutes today, or maybe 15 minutes. I have Kevin here who's going to keep the eye on the clock. And then Kevin Bullock from Digital Globe will talk after me right on schedule. We're going to keep going here. So I hope you can see this. I'm sorry, it's a little bit small. My message today is very simple. As you can see in the title, it's about how we shouldn't be copying data. Instead, we should be sharing data, especially if it's open data. And because of the cloud, we can share it at any scale we want. And that's what I call one of the very different architectural possibilities that the cloud affords versus on-prem deployment, especially of big geodata. Okay, I don't know what happened there. I'm going to close that. So I just want to cover, I will spend most of my time just quickly showing a demo. I'm going to race through this. And so I won't bore you with slides too much. So just a couple of review points. So data copy is expensive, storage cost. We have network cost, compute cost. And then if we follow kind of the old world principles, what we call at least in the U.S., clip and ship model, which means you go to some portal, you look at some catalog, you discover the data. Having discovered the data, you typically download the data and then work with the data. Then the cost of distribute, update the distributed copies becomes expensive. So there's all these costs that we have to deal with on a day-to-day basis in the kind of traditional clip and ship model of big geodata. And generally the idea is if the data gets large enough, then you can't get it all. So you have to have some way to go get some small piece of it, download it to your on-prem workstation or to your server or to your notebook, and then work with it there. Hopefully when you're working with it, you know, we're using, you know, some like QGIS and some open source tools, but you still have to go through this kind of mini ETL process about getting that data. So as you all know, we live in a, or we have been living in a world of silos. This is a slide that I borrowed from Stanford University's library website. And it's a very simple idea, right? We have all these data centers, you know, run vendors, run by our government customers, and they all have all kinds of interesting data at the bottom level, but they're generally siloed. So they're generally siloed for security reasons, they're siloed for economic reasons. And the larger that data gets on the bottom of that map there, or that image or the diagram, the harder it is to get the data out from that silo for a variety of reasons, right? And one of the key points here is, as one of the largest providers of cloud services in the world now, we're seeing a huge migration of customers moving from on-prem facility to the cloud. And generally what's happening is that we see silos moving to cloud, but they maintain siloed architecture. 
And so especially in Geospatial, we have a lot of customers that are running core systems on us now and increasing the number of customers that I see that I'm talking to every day that have exactly the same data stored in their cloud right next to somebody else's cloud architecture. So we see that as a bug if it's not, you know, if there aren't licensed considerations and if it's open data, then those customers should probably be sharing one copy of the data. And that's generally my talk today, and I'll show you a practical example of how you can do that. So what makes cloud storage different? Well, it's not siloed in a data center, right? And you can provision in real time very, very granular access to exactly that one GeoTIF, exactly one, that one last file, whatever you want via, you know, simple kind of static methods or federated methods. It's your choice. There's a lot of flexibility there. And the last point, which I can't emphasize enough, is because you're in the cloud, because you're not on-prem, you can offload the variable component of cost, which is network out. So network egress, you can offload to whoever is making the request for the data. Okay? So what remains? Well, somebody still has to pay for storage, but for example, with the particular storage service that I'll be showing today, which is called Simple Storage Service, you only pay for what you actually store. So you don't have to do things like, you know, I might use two terabytes this year, so I'm going to get two terabytes of NAS storage. You just store what you need today. We charge you for what you have stored this month. We actually prorate it on a daily basis. So what's possible with cloud architecture is that you can now store what you typically would have on, you know, POSIC file systems, on some file system deep down in your network, deep down in your data center. You can share that, you can share that storage to any number of actors that you want because it's not your problem. It's our problem to make that data available via the network. So all you have to worry about is allowing network access at the object level. And if you had to pay for network out, then it would be a problem, but you can actually offload the network egress portion to the requester. Okay. So I'm going to... Do I have a pointer? Oh, sorry. So here we have something called Simple Storage Service. Here we have many actors. So these are, these could be virtual machines. This could be a Lambda service. This could be our managed Hadoop cluster or anything, right? Whatever you want to run, whatever code you want to run. And this is your account. You pay for storage. But for example, if you want to let other actors from other accounts access the storage, it's just a matter of setting access control list for whatever data objects you want to hear. So you have, generally the idea is you have infinite network, network, horizontal network access here. So there can be any number of actors horizontally on top of your data. And that you could not do if it was in your data center. Okay. So it's a very simple concept. It's not a file system. All it is, and it's not FTP, all it is is HTTP. That's all I'm talking about, right? So in a sense, we're going back to kind of HTML 1.0 days and talking about using object stores rather than file systems. 
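He describes granting access at the object level with ACLs; an equivalent and commonly used alternative is a bucket policy that lets another account read the objects while the owner keeps paying only for storage. A minimal boto3 sketch under assumed names is below — the bucket name and account ID are placeholders, not anything from the talk.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholders: replace with your bucket and the account you want to allow.
BUCKET = "example-open-imagery"
OTHER_ACCOUNT_ID = "123456789012"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountObjectRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{OTHER_ACCOUNT_ID}:root"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# Attach the policy; the bucket owner still pays only for storage,
# while compute in the other account can read the objects over HTTP.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```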
And, you know, the one comment I like to make here is that I've worked with a lot of customers, not just geospatial customers, but customers in the education space, customers doing genomic studies, customers doing Alzheimer's brain research, customers doing pharmaceutical research, et cetera, et cetera. The larger the system is, the more kind of embarrassingly parallel compute the system is, the more the core infrastructure relies on Simple Storage Service, S3. In fact, Netflix has a famous comment where they say they see the object store as their source of truth. They actually treat the object store more like a database than just an object store. So I'm going to stop there. I will turn this thing off. And I'm going to jump over to the browser, and I need to make this smaller. Move it over a bit. And I'm going to show you a very simple demo. So here, excuse me, let me reload this. So all this is is Leaflet. All it is is base layers, right? So, you know, an open source JavaScript library. I'm not doing anything here other than image tiles. Okay. And the idea here, it operates just like you'd expect. I'm going to go from — this is the city of Oakland's data — and I'm going to the NAIP data, which is the United States Department of Agriculture data, USDA NAIP data. So this is a coast-to-coast, we call it a CONUS set, a one-meter-per-pixel data set, very well known in the United States. There are no copyright restrictions. I can download it, play with it, do anything I want with it. I could try to sell it to you, but you shouldn't buy it because it's free, that kind of thing. And also, if I move this thing, you'll see that if my demo is actually working, and if I really do have a network connection, you see the gray tiles coming in. All that's going on is that, based off of — right now, I think it's a 75-terabyte set — it's in real time reprojecting a bunch of GeoTIFFs and creating JPEGs on the fly. This is a real-time map tiling architecture that's using MapServer and GDAL on some Ubuntu instances in the background, and then tiling this data in real time. Whoops, I didn't mean to zoom in. And I'll show you what's happening on the back end by opening up Firebug. I'll move this over a bit. And you can see that as I move this thing, it's first going to this thing called naip-tms.s3. So it's going to the S3 bucket to see whether the JPEG exists or not. If the JPEG does not exist, you can see that it's doing a redirect to something called the tiler, which is running on one of our platform services, called Elastic Beanstalk. And we can open this guy up and pop it into a new tab, and it does exactly what you'd expect. It creates a little JPEG. But it's doing this by sourcing something between 218,000 and 219,000 GeoTIFF files that are sitting in S3. They are not on EBS, they are not on our new Elastic File System. They are shared across n number of virtual machines on S3. And so I'm using parts that I actually have been using for many years, open source parts. I'm doing maybe a couple paragraphs of code to deploy this in a cloud, what I call a cloudy fashion. And if I, for example, if I can get this to work, I have to do it this way. I'm just putting it into debug mode. And you can see all that it's doing is taking this TMS name, the JPEG, and then rearranging that into a WMS request right here. But you can see, for example, that it's running in U.S. East.
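The "check S3 first, fall back to the tiler" behaviour he walks through in Firebug is the kind of thing S3's static-website routing rules can express: serve the cached JPEG if it exists, otherwise turn the 404 into a redirect to the rendering service. The configuration below is my sketch of that general pattern, not the demo's actual setup; the bucket name and Elastic Beanstalk hostname are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: the tile cache bucket and the Elastic Beanstalk tiler endpoint.
TILE_BUCKET = "example-naip-tms"
TILER_HOST = "tiler.example.elasticbeanstalk.com"

website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "RoutingRules": [
        {
            # When the requested tile key is missing, S3 returns a 404 ...
            "Condition": {"HttpErrorCodeReturnedEquals": "404"},
            # ... and this rule turns that into a 302 redirect to the tiler,
            # which can render the JPEG on the fly (and write it back to S3).
            "Redirect": {
                "HostName": TILER_HOST,
                "HttpRedirectCode": "302",
                "Protocol": "http",
            },
        }
    ],
}

s3.put_bucket_website(Bucket=TILE_BUCKET, WebsiteConfiguration=website_config)
```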
It's on an Amazon Web Services Elastic Load Balancer, behind which I can have any number of EC2 instances I want, right? And if I show you that part and go to the console, you can see I have, I think, four c3.4xlarges. These are virtual machines running in the cloud. And I can modulate the scale of that just by going to the auto-scaling part of the console, finding my MapServer group, and hitting the edit button. And for example, I change these to 10, et cetera. And if I remember to hit the save button, then within a few minutes, about two, three minutes, I'll have 10 Ubuntu instances running MapServer and GDAL. And I didn't have to do any ETL work around the 50 terabytes of data. That's all embedded in the Amazon machine image that I'm running, which I'm, by the way, happy to share with anybody in the room. Okay, now, remember, this is what we're pretty used to seeing with a number of large commercial search engine portals now. This kind of slippy map concept has been around since, what, 2005. The idea here is anybody with a credit card can deploy this national, if not global, back end. Maybe not for the whole year with a lot of machines, but you can most definitely do it for a few hours just to play with this, right? That's within your individual researcher scope now, which is very different from if you did this in an on-prem environment. And the reason is very simple: it's not your data, it's shared. And so, now I'm putting up — let me make this smaller — this is a vendor-provided tool called CloudBerry. I'm running on Windows here, so that's probably one of the better Windows tools for this. It's an S3 client, so now I'm looking, not through the browser, but using a client dedicated to S3 and a couple of other things. And I'm going to go look for the data that I'm using under the hood for those JPEG images. So that, on the right-hand side, is our public data account. And in the public data account, there's all kinds of data, right? Behavioral sciences data, genomic data, Alzheimer's research data, et cetera, et cetera. And part of that, the data I put in here, is somewhere — here we go, aws-naip. So, NAIP: if you remember aws-naip, and if you have this client, you can go look at this data. You can see this is US data, so the state abbreviations come up right away. So let's go look at California. Here's data from 2002 to 2014. Remember, this container is endless, so I can have 2016, 2018, on and on and on, and I never have to worry about running out of storage, right? It keeps going forever. And the other, more important part is, if you know the name of this bucket, everybody in the room has access to the bucket. You do need an AWS account, which is free, but you have access to — right now, there's 75 terabytes of the data. And it follows our open data best practice pattern. And what that is, is two things. One is very simple: I have to give you access to the data. So I'm going to drill down into the data. Here's the four-band original data. These are US FIPS codes. And here's the data. In one set, there's about a quarter of a million of these files, just under 200 megabytes each. And so these are from the prime contractor. This is the original TIFF data. If I go open the ACL for this, you'll see that it allows read by authenticated users. So that means that if you have an AWS account, and if you remember aws-naip, you can gain access to all the US data. And it's as simple as that there.
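The debug view shown a little earlier — a z/x/y tile name being rearranged into a WMS GetMap request — is mostly coordinate arithmetic. Here is a small, generic sketch of that conversion in Python, not the demo's actual code; it assumes the usual XYZ tile scheme in spherical Mercator (the talk says "TMS", which would flip the y axis) and a hypothetical MapServer WMS endpoint.

```python
def tile_to_mercator_bbox(z, x, y):
    """Bounding box (EPSG:3857) of an XYZ-scheme tile; TMS would flip y."""
    origin = 20037508.342789244            # half the web-mercator world width
    size = 2 * origin / (2 ** z)            # tile edge length at this zoom
    minx = -origin + x * size
    maxx = minx + size
    maxy = origin - y * size                # y counts down from the north
    miny = maxy - size
    return minx, miny, maxx, maxy

def wms_getmap_url(base_url, layer, z, x, y, tile_size=256):
    """Build a WMS 1.1.1 GetMap URL for one 256x256 JPEG tile."""
    minx, miny, maxx, maxy = tile_to_mercator_bbox(z, x, y)
    return (
        f"{base_url}?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
        f"&LAYERS={layer}&SRS=EPSG:3857"
        f"&BBOX={minx},{miny},{maxx},{maxy}"
        f"&WIDTH={tile_size}&HEIGHT={tile_size}&FORMAT=image/jpeg"
    )

# Example: tile 12/654/1583 against a hypothetical MapServer endpoint.
print(wms_getmap_url("http://tiler.example.com/wms", "naip", 12, 654, 1583))
```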
But remember I mentioned that as the owner of this data, I might not want to pay for your taking the data out, right? Downloading the data out. Especially if you wanted to DDoS my bucket, right? Because you didn't like me, let's say, and you had some machine process that kept downloading petabytes of data. I would cry because it would be my bill, right? Now I can take care of that with a feature that's been available in S3 from the beginning. It's called Requester Pays. And if I right-click this, go to Properties and hit Requester Pays, you'll see that it's turned on. That means that if you are another account and you request this data — if you download it, for example, to your notebook — then the requesting account pays for the request and the data egress. And that allows me, if I were a data owner — so for example, if I were the United States Department of Agriculture, which owns the USDA data — to share it with the world without expense. So I could have petabytes of data — I could be NOAA with petabytes of data — and I could make it available to everybody on the planet. They could even DDoS me via their own Requester Pays requests, but then they would pay, so I don't care, right? It's as simple as that. How are we? About seven minutes? Okay. So I'm giving you a couple of views, right? One is the very familiar slippy map, which is a 256-by-256 JPEG, like we use every day, right? Which is a derivative of the GeoTIFFs that you were looking at just a second ago, which are available. You can build those as quickly as you want as a function of your auto-scaling size, the min-max limit, right? So you have all the flexibility, so you can be very embarrassingly parallel or just a little bit embarrassingly parallel about that process, right? And then the other view is, I'm using a client, and there are open source tools, command line tools, Python tools, Java tools — all kinds of tools for you to gain access to S3. S3 has been around since 2006, so you can choose any client or any language you want to get access to S3. And then the last thing I want to show is: how about these machines? How am I running the machines? So here is PuTTY, so this is SSH into one of the Ubuntu instances that are running MapServer and GDAL. And I think I have to restart this. So that's where my demo died. But you can see that before I did this, I was using another open source project called yas3fs, which is a Python project — you can find it on GitHub; it uses botocore — and it allows you to mount any bucket you want and just make it available to GDAL, right? So I can run MapServer and GDAL on this instance. This instance does not have 75 terabytes of data immediately on it; this package will go get it, put it on some SSDs that look local to the machine, and then manage the cache intelligently in the background. So that allows me to spin up one instance, or spin up 100 instances, within minutes, and then basically provide a slippy map for the United States, Korea, or the whole globe. If I wanted — or I should say if I had the data; I'd probably have to talk to DigitalGlobe for all the global data — but I could do that now if I wanted to. So I'm going to stop now. Excuse me. I want to leave a couple minutes for questions. Any questions? I know I ran through a bunch of different things. Please feel free to grab me afterwards. Happy to share the data, happy to share the machine image, and the specific techniques that I'm using here.
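Two of the mechanics mentioned above — turning Requester Pays on, and the requester explicitly acknowledging it on download — look roughly like this with boto3. This is a minimal sketch; the bucket and key names are placeholders, not the real NAIP layout.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative names only -- not the actual bucket/key layout of the NAIP data.
BUCKET = "example-naip-source"
KEY = "ca/2014/1m/rgbir/example_tile.tif"

# Turn Requester Pays on (done once, by the bucket owner).
s3.put_bucket_request_payment(
    Bucket=BUCKET,
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Any downloader must now opt in; their account is billed for the egress.
resp = s3.get_object(Bucket=BUCKET, Key=KEY, RequestPayer="requester")
with open("example_tile.tif", "wb") as f:
    f.write(resp["Body"].read())
```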
It should all be very familiar to many of you in the room. Any questions? How do we go about getting new datasets and seeing the public data? Does that request have to come from the data owner? An example that is, if you look for Landsat 8, that's our latest large public data project that I assisted a little bit on. That's where we're piping data from a USGS, a bunch of USGS FTP servers. We're putting it in the same S3 bucket, and then making that. You can use the same tool to go look at all the Landsat 8. The caveat here with the public data program is that we're interested in a type of data, and it has to be the kind of data that obviously facilitates interest in using our virtual machines. More importantly, we want to make sure that it's properly curated and maintained over time. Generally, that needs to be whoever the source is, not somebody in between, and they need to have a good, relevant business model. We want to be able to trust that organization to maintain that public dataset over time so that it doesn't become stale and old. What is your specific example of a public dataset in the UK? Ordinance survey? Yes. That was a very quick question. We provide services on that, but we can take part in it. We're happy to, if it's public or not, but we're in a survey as a customer already, so I've helped them a little bit on that. That's a good example. We're looking for other projects that look like Landsat 8, or for example, the NAEP data, where we can work with the data owners to make it more easily available. That might be public, or that might be just the data owner's S3 bucket with RequestorPay's turned on. The two general patterns, I would suggest looking at S3 with RequestorPay's turned on, see whether your business model, the data owner's business model, makes sense there. And then, as the next stage, there would be potentially consideration for public data. Any other questions? We have one minute left. One last question. So far you've talked about the Requestor data. How about the vector data? How about sharing vector data in large scale, and how about spatial indices for that? We have a couple of projects going on. We actually have one, and MapsNet might be talking about it. But the idea there, for example, was getting the OSM data in a tiled pattern on S3. So instead of a large, I can't remember what it's called, the global blob, the binary file, there's techniques where we can tile that, put it in S3. So you don't have the ETL, you don't have to have the database to do a web scale vector-based service. So happy to talk to you more about that after too. So the same general idea applies for both vector imagery and things like point cloud or LiDAR data. Thank you very much.
|
Since its start in 2006, Amazon Web Services has grown to over 40 different services. Amazon Simple Storage Service (S3), our object store, and one of our first services, is now home to trillions of objects and core to many enterprise applications. S3 is used to store many kinds of data, including geo, genomic, and video data and facilitates parallel access to big data. Netflix considers S3 the source of truth for all its data warehousing.The goal of this presentation is to illustrate best practice for open or shared geo-data in the cloud. To do so, it showcases a simple map tiling architecture, running on top of data stored in S3 and uses CloudFront (CDN), Elastic Beanstalk (Application Management), and EC2 (Compute) in combination with FOSS4G tools. The demo uses the USDA��s NAIP dataset (48TB), plus other higher resolution city data, to show how you can build global mapping services without pre-rendering tiles. Because the GeoTIFFs are stored in a requester-pays S3 bucket, anyone with an AWS account has immediate access to the source GeoTIFFs at the infrastructure level, allowing for parallel access by other systems and if necessary, bulk export. However, I will show that the cloud, because it supports both highly available and flexible compute, makes it unnecessary to move data, pointing to a new paradigm, made possible by cloud computing, where one set of GeoTIFFs can act as an authoritative source for any number of users.
|
10.5446/32064 (DOI)
|
Hello everyone, I am MinPa. Today I will talk about a geoprocessing toolbox for uDig. There are already several spatial analysis toolboxes around, such as the QGIS desktop processing framework, GRASS GIS, SAGA GIS, and commercial tools like ArcGIS. uDig is an open source desktop GIS built with Eclipse RCP technology, and our toolbox targets it. The overall project is about spatial statistics tools and has three parts. First, implementing new algorithms for many spatial statistics problems. Second, building a spatial statistics toolbox for the desktop application, with visualization, graphs, and many utility tools. And third, publishing the same functions online as OGC web processing services. Today I will present the toolbox for the uDig project.

Open source projects are written in C++, Java, Python, and other languages, and Java is the most widely used in the enterprise. The following are Java-based open source projects; our project uses GeoTools, GeoServer, and the uDig open source projects. GeoTools is a Java code library that provides methods for working with geospatial data. GeoServer is an open source server for sharing and publishing geospatial data, and it supports the OGC web service standards WMS, WCS, WFS, WPS, and CSW. GeoServer has a WPS module, so the processes in our project can also be published as WPS services. The purpose of this project is to provide spatial statistics tools through this toolbox.

In the toolbox dialogs you can select the input layer from a combo box. The bounding box data type is minX, minY, maxX, maxY plus an EPSG code, and in this box you can fill the bounding box from the current map extent or from a layer. You can use several selection options. If you need an aggregation function like average, sum, minimum, or maximum on a literal data type, you can use statistics to fill the selection dialog. If the parameter type is an enum object, you can choose the value from the combo box. If the parameter type is a filter object, you can get a filter expression from the query builder dialog. If the parameter type is a coordinate reference system, you can get the CRS from a layer, from the map, or from the CRS chooser dialog. If the parameter type is a geometry, you can use WKT — well-known text — geometry, and you can also get the WKT from a widget; the widget supports various options like the selected feature, the center of the map, the extent of the map, and validation. The output parameter of a process can be a bounding box, a simple feature collection, a grid coverage, a geometry, a numeric or literal value, or custom classes. Depending on the type of the output parameter, the output can be added to the map as a layer or displayed in HTML format; if the output parameter is a number, a string, or a custom class, an output tab is added to the dialog and displayed as HTML.

The following processes have already been implemented or will be implemented in our project. This is a spatial statistics functionality matrix table, including ArcGIS, GeoDa, SAGA GIS, and CrimeStat; the red titles in the table are our project. In the first year we provide about 45 spatial statistics functions and utilities. The filled dots are already implemented; the red dots may be implemented in the second or third year. We will develop five or more new algorithms. The visualization tools, like the graph tools, only support the uDig desktop environment. In particular, we developed various utilities for supporting spatial statistics; these utilities include data creation for spatial units, conversion, field calculation, spatial joins, and so on. The uDig Processing Toolbox plugin is managed and updated through the uDig plugin update site. This is the uDig desktop application. We are now providing about 45 custom processes in the uDig Processing Toolbox.
The following is an example of a feature collection process built on GeoTools. This is a simple and quick tool that allows you to create a thematic map. The following is a Thiessen polygon process using a point layer. This is an import tool that allows you to create a point layer from a text file that contains x, y coordinates or a WKT geometry; you can also reproject the coordinate reference system of the data during import. Vector layers can be converted to GML, GeoJSON, WKT, KML, and text files. The following is a bar chart tool using the JFreeChart library; the chart items and the features on the map are linked to each other. The following is a scatter plot chart. Local Moran's I is a local spatial autocorrelation statistic based on the Moran's I statistic. The following map is a LISA map; the dataset is the administrative boundaries of Seoul with population. You can use these tools to explore the spatial pattern and distribution. This is a kernel density process and its result. This is the WPS request builder, which provides a way to test WPS processes; currently we are providing about 30 WPS processes in GeoServer. The source code is available on GitHub — if you are interested in this project, please join us. The binary packages can be downloaded from the project site. If you need any language other than English or Korean, you can participate through the Transifex localization platform. This project will be continued over the next three years. We greatly appreciate your participation. Thank you for your attention.

Thank you very much, MinPa, for your presentation. It seems that this geoprocessing toolbox contains a really extensive amount of different kinds of spatial analysis tools.
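Since the same processes are exposed through GeoServer's WPS module, they can be exercised with plain OGC key-value-pair requests. Below is a small sketch using the Python requests library; the endpoint and the process identifier are placeholders for illustration — the real identifiers would come from the GetCapabilities response of the deployed toolbox.

```python
import requests

# Placeholder endpoint; a real deployment would point at your GeoServer instance.
WPS_URL = "http://localhost:8080/geoserver/ows"

# Ask the server which WPS processes it offers.
caps = requests.get(
    WPS_URL,
    params={"service": "WPS", "version": "1.0.0", "request": "GetCapabilities"},
)
print(caps.status_code, len(caps.text), "bytes of capabilities XML")

# Describe a single process to see its inputs and outputs.
# "statistics:ThiessenPolygon" is a hypothetical identifier used for illustration.
desc = requests.get(
    WPS_URL,
    params={
        "service": "WPS",
        "version": "1.0.0",
        "request": "DescribeProcess",
        "identifier": "statistics:ThiessenPolygon",
    },
)
print(desc.status_code)
```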
|
uDig is an open source (EPL/BSD) desktop application framework, built with Eclipse Rich Client (RCP) technology. This presentation shows new geoprocessing toolbox in uDig desktop application framework.
|
10.5446/32065 (DOI)
|
Thank you. Just to introduce myself, I work for the Cadastra Foundation. We help communities document their property rights, primarily in places where people don't have formal land cadastral systems. We are interested in open aerial map because part of that process is you need imagery to enhance your mapping. I'm going to start off with a little bit of history and background on open aerial map. The idea of creating a commons for sharing free and open imagery came about around 2006. There were people that were beginning to fly UAVs and drones, but it was really mostly just an idea at that point. But then in 2010, there was a large earthquake in Haiti and a lot of imagery became readily available. For example, the World Bank flew imagery through aircraft and made that available under an open license. But there wasn't really a good platform to share that, to process it, to index it. And a lot of technologists were just doing, were spending large amounts of time processing and making that imagery available. So this idea of open aerial map once again became very important. And then there's been other major disasters as well, but we're also beginning to see more imagery being available just because there's more commercial satellite agencies, governments are launching satellites and providing that data for free under open licenses, as well as UAV usage. And as we see this, a lot of technical experts are able to access that imagery, but the average person, there's not really a way to do that. And so that's really what we're trying to do. So open aerial map was rebooted last year by the humanitarian open street map team. One thing I should have prefaced is until April of this year, I was executive director of the humanitarian open street map team. So I was involved in obtaining a grant from the humanitarian innovation fund to build out a pilot of open aerial map. And that one year grant is ending in October next month of this year. In addition to the hot humanitarian open street map team, there's also quite a few organizations involved in the open aerial map community. We've come together because people were working on their own imagery indexing tools as well as processing tools. And we thought, why don't we work together, one of the beauties of open source rather than continuing to do things on our own? And this is all powered by the open imagery network. So the open imagery network is a distributed index of open imagery. It actually is just a GitHub repository. So if you had imagery and you had it hosted somewhere, you can go register on GitHub your bucket of imagery and then it would be registered in the open imagery network. The idea of that index is then people can build tools on top of it. So it's essentially just pointing to where the imagery is. And there's a very lightweight metadata standard that's a component of that. And so the way the open imagery network and open aerial map work together is that open aerial map is the first implementation of imagery tools using the open imagery network. It's a little confusing, but the open imagery network is the actual imagery. And then open aerial map is the tools to index and process and use that imagery. I forgot to check in with the open aerial map team, so I'm not sure if we're out of beta yet. Open aerial map's been in beta for the past couple months, so you can try going to this URL, but beta.openaerialmap.org is where it may reside at the moment. But we're switching over out of beta sometime in the next week or so. 
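The very lightweight metadata standard mentioned above amounts to a small JSON record kept alongside each image in the registered bucket. The field names in the sketch below are an illustrative guess at what such a record can look like, not the official Open Imagery Network schema; the authoritative definition lives in the OIN repositories on GitHub.

```python
import json

# Illustrative only: these field names are assumptions, not the official OIN schema.
record = {
    "title": "Example UAV survey, Tanzania",
    "provider": "Example Mapping Group",
    "contact": "imagery@example.org",
    "platform": "uav",
    "acquisition_start": "2015-06-01T00:00:00Z",
    "acquisition_end": "2015-06-01T23:59:59Z",
    "gsd": 0.07,  # ground sample distance in metres
    "license": "CC-BY 4.0",
    "uri": "https://example-bucket.s3.amazonaws.com/tanzania/example.tif",
    "footprint": "POLYGON((39.2 -6.8, 39.3 -6.8, 39.3 -6.7, 39.2 -6.7, 39.2 -6.8))",
}

with open("example_metadata.json", "w") as f:
    json.dump(record, f, indent=2)
```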
And you'll see up there, there's not a ton of imagery yet, but there's certainly imagery for an assortment of areas the whole world has not covered. Primarily a lot of imagery related to work the humanitarian open street map team was doing or related to disasters that have happened where there's been imagery released. So there's imagery of Nepal, for example, and there's also imagery of Tanzania taken by UAV through a World Bank project this year. So in building open aerial map, we were looking at two major use cases. The first is those users that are not highly technical, who need to find and use imagery. And then there's this other use case where there's people who actually want to share, but sharing imagery is hard. I mean, you have gigs and gigs of data, and people maybe don't have a server to host it on, for example. So we wanted to give them somewhere that they can upload it, because when people want to open up their data, there needs to be a way to say, oh, thank you very much, and then help make it useful. Neither of these groups are highly technical remote sensing specialists, though. And the one I think is particularly interesting is the users with imagery that need to upload and share. One of the one user group in there is hobbyist drone pilots who go take imagery, and then there's not really a commercial value. They'd like to share it, but they're not going to take the time if it's difficult to share it. So we're giving them a place where they can upload it. And so this is a diagram of how one might discover the imagery. There's two situations that we think about, especially open aerial maps primarily started for humanitarian and disaster response. It's not the only use case, but it's the first use case. So you'll notice we focus on both online and offline. So we provide web tiles, for example, that you could use online, but if you actually needed to download the imagery and take it with you, that's an option as well. So there's three major components to open aerial map. So there's a web interface browser, which is what you see if you go to openaerialmap.org. There's also a catalog, which is more of an API of that index. And then there's also a server component that does processing. Primarily it can take raw imagery and turn it into tiles. And these are all separate projects within GitHub that you could contribute to. Then there's another, we're starting to build a bit of an ecosystem around open aerial map. There's also a QGIS plugin that allows you to both search imagery, download it, as well as upload it. This was developed through a program called Outreachy. Outreachy used to be called the GNOME Outreach Program for Women. It is very similar to Google Summer of Code, and it's a paid internship where you work with an open source project given to women and other minority groups in open source. So one of the projects that Hot sponsored this year was building a QGIS plugin. And the GitHub repository is right there, and then I'm going to show a couple screenshots. So it essentially allows you to both, maybe it's a bit hard to see, but there's buttons for searching, browsing, as well as downloading imagery. It's meant to be sort of a wizard, be easy to use. And here is where, if you were uploading data, you can see the JSON metadata requirements as well as you can pick where you're uploading it to. Currently we only support Amazon Web Services for the upload buckets. 
In this case, they're just using a test one, but the Humanitarian OpenStreetMap team provides one because we want people to share imagery, but you could also use this with your own bucket of imagery as well. So the big reason I'm here is also to help recruit. There's lots of ways to get involved. There's quite a few people coding, but of course pull requests are always welcome. Testers would be great. Doing community outreach. So I'm contributing to the open aerial map community right now. Documentation. And another one that I don't see a lot of projects asking for help with so much these days is legal help. And the reason I mention that is we're getting imagery under an open license. So sometimes legal assistance could help discuss with companies how they might do that. You probably don't even need to be a lawyer. An open data advocate could certainly help. And the final one, do you have imagery? We're continuing to build out a repository of open imagery. Like I said, there's not a ton there now, but there will continue to be more and more added. So you can help contribute, allowing other people to use your imagery. That would be great. So if you do want to contribute imagery, there's a couple different ways. One, if you go to GitHub and the open imagery network organization, there's a repository there that walks you through how to register. And you essentially are just adding yourself to your Amazon S3 bucket, your bucket of imagery. You're adding it to this. And then it allows anyone who would want to go and crawl your imagery to be able to do that. This is another way of doing it. You can simply just upload through open aerial map. And in this case, what that does is you're contributing your imagery to the humanitarian open street map team bucket. So if you don't want to mess around with getting an Amazon account or you don't have the time or any other reason, you can just contribute it directly there as well. This is still in testing and it's going to be released more formally in the next couple weeks. And so if you want to get involved in open aerial map in general, there's quite a few GitHub repositories for the various pieces of the project. But if you go to this one, it's the main one, it'll point you in the right direction. We also meet weekly on Gitter. So if you go to that GitHub URL, there's a button that says Join Gitter. It's just a chat room where you can log in with your GitHub account. It's very easy to use. So there's live meetings at 1800 GMT for about an hour every Thursday. We usually have about five to 10 different people participate. And you can always, if it's not time zone friendly for you, you can always read the archives as well. We also have a mailing list, a Google group. You can join for offline discussion there as well. There's a few things we're thinking about in the future. One of the big things is right now we're only integrating with Amazon, but we want to be able to use other cloud-based systems as well, as well as having an offline appliance. So one of our major use cases for this was after a disaster, people needing imagery. And so it's great if you have a ton of imagery available online, but if people can't access it because they have very little bandwidth or no bandwidth due to a disaster, it doesn't do much help. 
So we want to be able to deploy this on a computer where it could go to a disaster area and imagine someone is flying a drone to take pictures of it, and then you could upload it directly onto your local computer and your local area network and then actually have it process it and make it available. So that's Open Aerial Map. Are there any questions? Some questions over there? Okay, I got it. Do you want to do your interest in the imagery-specific coordinates or do you want to handle them? At the moment we're sort of doing best case scenario. I think the Tyler will handle converting over to different reference systems, but to be honest, I think you can define them in the metadata, but because if you go, but to be honest, I can't remember exactly how we're dealing with them. Any other questions over there? Please. So the moment we're not creating a mosaic, it's more like, what's the point of creating a mosaic? So if you go look at the catalog, you can see what imagery is available and it overlaps, and you can pick which either download as a geotip or use, but we're not trying to create a mosaic right now. The idea would be if someone wanted to build their own, we have an index where they could go look and find the best sources. You can do a combined set, but in most of the case, the major use case right now is the needing to load a tile set to edit an open street map, basically. So you probably are editing a small area and you just need that one image. Okay. Any other questions? Yes, Karri, please. Another one, which is a little bit political. Sometimes the real big need for imageries in countries that don't want you to publish any open imagery. What kind of scenarios do you anticipate these cases? Open street map runs into the same problems. As a board member of the Open Street Map Foundation, I see a lot of those emails. And I think the best, honestly, I think it's going to be a matter of having a policy. And you're right, it's sort of a political slash legal question, depending where the servers are as well. I don't think we've really thought a lot about it, to be honest, so far. But yes, definitely, that will potentially be an issue. Other questions from the floor? There, in the back? Yeah, I showed a bit of love and didn't even bring it back, so we should talk about it. We were running ahead. But if anyone wants to talk to me afterwards, we can definitely discuss. I actually have one question myself. And so do you recommend any specific kind of licensing for the imagery? So that's a good point and something I should have covered. With the Open Imagery Network, we are doing one license, which is us doing ideal best case scenario. And we're doing a Creative Commons Attribution License. With Open Aerial Map itself, it uses the Open Imagery Network, but it also uses other imagery under other licenses as well. Because we realize that if we're negotiating for open imagery, we can say, hey, you should use this license, but if it's already open, then it would be difficult to try to change that. Over there? A real easy question, please, why won't you show? I do love chickens. So I apologize for those arriving at this point to this session, because we are just about to close this work session. We have already had our wonderful presenters in ahead of time, actually, to complete their talks. And actually, I must say that this is something that proves the capability of open source and open data to actually help facilitate development in the world and also even save people's lives in the end. 
So that's a very, very supportable initiative. So basically, do you have now any questions for any of the speakers at this time? So about Oskari, mapping platform, about Aerial Imagery, or about the geoprocessing tool for UNIC. So we have heard three great presentations and very varied topics, different kinds of projects. And I think this also tells about the diversity of open source. We are engaging in so many kind of activities. This is a good example. This session is a good example of that. In case there are no further questions, I will ask you to applaud to the presenters. And before you go, I'd like to give you something, a little something to take with you back from, back from Seoul. This is actually a little box of penis liquors. So be careful if you have never tasted. And if you want Oskari stickers, Hannah will hand out them for you. Thank you very much for your participation. Yes, a break. Yep. Yeah. Yeah. Thank you. This is going to be way better than the camera. Okay. Oh, cool. Okay. This is an honor. Get me those restrooms. Okay. I brought the camera. I brought the camera. Okay. No, you are. Really? Not on. Okay. By popular demand and people in the hallway, Kate's going to reprise her talk because we've got a lot of folks who missed it. So, without any further ado, open aerial map. Part two, or part one again. Yeah, I'll be fine. Thank you. This is good. There's a couple of people who have been involved longer than I have who have now come into the room as well. So, open aerial map is a distributed commons for searching and hosting free imagery. And I wanted to go a bit into the history and the background. I think this became an idea in 2006. I said that very authoritatively in the first version of the talk because Jeff Johnson wasn't in the room to correct me. And so, it started as this idea of people were flying UAVs, were starting to produce imagery, and the question was, what do you do with that imagery if you want to have it under an open lens? And I think that's a good question. So, you know, there's a lot of things that we should do to make sure that we have the right image to produce imagery. And then, there's a lot of imagery if you want to have it under an open license. And so, this idea of open aerial map became mostly an idea at that point. And then in 2010, there was the large earthquake that happened in Haiti. And a ton of imagery became available. There was very detailed 6 to 8 centimeter And then there was some of the satellite companies and just a lot of imagery. And people were staying up all night to process it, to make it available in a pretty manual process. And it was all going on some tele-science servers that was sort of a, seemed sort of the wild west of imagery services. And the question then was, could we make this easier on ourselves, essentially? And then there were quite a few other large scale disasters where a bunch of imagery was available, but still not, open aerial map didn't really exist. There was a brief, a little bit of funding from MapQuest to build it out, but it was mostly a prototype at that point. And so imagery has continued to be available under more and more open licenses. And so this need for open aerial map has become greater. And we're really focused on your average web browser user. We're not focused necessarily on remote sensing specialists because they can go find the free imagery. But let's say even if you go find the imagery and you don't know how to use it, that's not real helpful. 
So we're really focused on the non-experts. So open aerial map was rebooted last year by the Humanitarian OpenStreetMap team through grant funding from the Humanitarian Innovation Fund. I was at the Humanitarian OpenStreetMap team when this happened, and I was involved in obtaining this grant funding essentially. I left hot in April of this year. I'm still up here waving the open aerial map flag because my role at the Cadastra Foundation is we're building open source tools to help communities document their land rights. Turns out you need imagery for that as well. And so open aerial map has started very focused on disasters and humanitarian use, but there's other groups like myself at Cadastra who need it as well and are very interested. This is sort of the main OAM community at the moment. With the grant funding from the Humanitarian Innovation Fund, a few of these groups have been involved in actually building things out. And we've also been working with Open DroneMap. Stephen is around the conference, I believe giving a talk on this tomorrow. Because Open DroneMap actually does all the processing, so you end up with a nice GeoTiff which you could then put into open aerial map, so we're working closely together. So this is all powered by the Open Imagery Network. So when we were designing open aerial map, the question then was, what is it? It's been talked about since 2006. So it means a lot of different things to different people. And we wanted to create the Open Imagery Network, which is actually a network of the Open Imagery. Essentially it's a GitHub repository where you can go register your bucket of imagery. There's a GeoJSON metadata standard that goes with it. You register your bucket of imagery there and then someone could go crawl it. So Open AerialMap is one of the first implementations of software using the Open Imagery Network. So, I didn't check between giving this talk the first time and the second to see if this is live. They were supposed to release it out of beta. It was at beta.openaerialmap.org until, well, maybe still. So, but we're leaving beta sometime this week. I don't know if they, like I said, they flipped the switch. I should have checked before my talk. But you can go there and see how the catalog works and actually use Open AerialMap. So, and that's focused on the two, these two main use cases. Users of imagery that need to find and use imagery. This could be a humanitarian worker. This could be a group doing community mapping. And then those users that have imagery that want to upload and share it. A lot of those users are, imagine your hobbyist drone pilot or professional drone pilot, for that matter, who's paid to fly an area and then they're allowed to release the imagery under Open License. They don't want to take the time to figure out how to share it. So we're trying to make it really easy so we can say thank you for your contribution. And this is sort of a diagram of those use cases. A lot of our use cases are really focused on OpenStreetMap right now. How can you get imagery and download it into Jossum or use OpenStreetMap.org to edit? But also, there's other use cases where if you just essentially need imagery tiles for your application. Both online and offline were important to us so that you can always download the raw imagery. So you can take that with you. In the humanitarian use cases, for example, you don't necessarily have enough bandwidth to be accessing imagery. 
But you can imagine downloading that onto your computer prior to deploying to a disaster area and then using it there. There's three major components. So there's the browser, which is what you would see at openarealmap.org. There's a catalog, which is essentially the API. And then there's a server that does processing, for example, tiling. We also have a QGIS plugin. I like to pitch the QGIS plugin a little bit. One of the main reasons is it was developed through the Outreachee program, which is a program through the Software Freedom Conservancy, which gets underrepresented groups involved in open source. It works very similar to Google Summer code in that it's a 12-week paid internship. But it can be doing anything in open source. It can be marketing, documentation, coding. So it's a little wider and it runs twice a year. So it's also hemisphere, southern hemisphere friendly as well, which is one of the big complaints sometimes about GSOC. And this is the first desktop tool for OAM. So we're hoping that other software will begin to integrate with it as well. So it's a pretty typical QGIS plugin. You can go search for imagery. You can upload your imagery. And then you can do some basic editing of your settings as well. So hopefully you're all here because you want to help. There's lots of different ways to help. I put coding at the top, but to be honest, there's some of these other ones that would be really great. I think community outreach is something where we could really use help because we want people to share imagery. So just asking a lot of times is a good way to get that. Documentation always important as well as legal help. And that's part of if groups do want to open their imagery, what's it mean to release your imagery under a Creative Commons Attribution license, for example? Fairly typical open data problems. And do you have imagery? Looking for contributors in that way as well. So one of the ways to get involved in sharing imagery is you can go to github.com, open imagery network. And if you're willing to host your own imagery, you can go register your bucket of imagery there. And what this does is it's just essentially a text index where someone can then crawl all the imagery buckets, index them, picture sort of how a search engine works, and then build tools on top of it. Open aerial map, for example, being one of those tools. Not the greatest screenshot I've taken, but you can also upload. So the Humanitarian OpenStreetMap team is providing an S3 Amazon bucket where you can upload your imagery and it goes into Hots bucket in this case. I would envision this use case being more like you flew your drone and you have one image to upload. The bucket use case would be you have a ton of imagery that you want to share. You can also get in touch with us as well. If you go look through the open aerial map github tickets, there's people who have imagery who open tickets and ask for assistance as well. And so this is sort of the pointer project. I feel like these days a lot of open source projects, it gets a bit confusing because you end up with 10 github repositories for essentially one web application. So this is your starting place. So if you want to connect to the community, we have weekly meetings at 1800 GMT. We meet on Gitter. If you're not familiar with Gitter, it's kind of like IRC. Basically if you go to the open aerial map github page, there's a big button that says join Gitter. And if you click on it, you're in Gitter. 
And we've found that's a little bit easier, as far as barrier to entry, for getting non-technical people involved in what is essentially IRC. You can also join the mailing list; there's an OpenAerialMap Google group. There are a couple of other things we're thinking about for the future. Right now it's heavily dependent on S3. We want to make it deployable and integratable with other cloud systems as well, to give some options, and to have offline appliances. The offline appliance scenario is primarily focused on humanitarian response. I'm sure there are other use cases, but more and more UAVs are flown after a disaster now, and that imagery isn't always that useful these days: people either can't get it off the ground to the internet so they can do processing, because there's no connectivity, or people might be really good at flying UAVs but aren't so good at doing anything with the pictures once they have them. So with this offline appliance, someone would be able to take it with them to a humanitarian crisis, fly their imagery, put it on the appliance, and it would tile it and index it, and they could use it on their own network. That's the idea behind it: not requiring internet access. It's easy to forget, I think, that internet access still is definitely a problem. I'm sure it won't be forever, but we can still save lives with this use case. Any questions? Yes? What kind of file are you expecting to come off a drone, or are you expecting a georeferenced image? So we want GeoTIFFs. The OpenDroneMap project is creating an open source toolchain to do that, taking the pictures off the drone and ending up at a GeoTIFF, which is why we're working with them, because we're not focused on that at all. We're sort of doing the best-case scenario where you're giving us a GeoTIFF. Erin? What's the nature and the searchability of the metadata associated with any of this imagery? What metadata properties are you trying to capture? What's the minimum viable data set? Yeah, it's actually pretty lightweight. I think there are like seven or eight attributes. I'm drawing a blank on what they are, but I can just pull it up. Actually, maybe I'm in the wrong place and I've said this is too easy. Let's see. Can I ask a question while you look for that? Is there any gatekeeper on what you can put there? Yeah, so there's no gatekeeper right now. Originally when we were discussing it, it was getting really complicated, so we're using the model that anyone can contribute. I suspect what will happen, let's say in a perfect scenario where we have so much imagery that it's hard to figure out what you want to use, is, I hate to say rating the imagery, but there will probably be preferred providers or something; I could see that at some point. To go back to Erin's question: so this is the metadata, and you see there's not a lot. It's basically where is it a picture of, a few details about the actual imagery, and some contact information. Jeff? Can you explain more about the intersection between the Open Imagery Network and OpenAerialMap? I'm particularly interested in the policy side. On several different projects I work on, we acquire imagery, and it'd be nice to have some standard legal language to stick into these contracts when we purchase it so that it can be open. I'm asking about the policy side, less about the technical. Yeah, and this is what I think is going to happen: I think there will end up being an Open Imagery Network foundation at some point to support that. When, I don't know.
Because there are things needed like that. The technical problems are sort of coming together on their own, but the policy is actually the main issue. Yeah, I mean, you have all these municipalities getting their money together, but then they don't pay attention to the license; they just license it themselves. They'll probably be glad if they live in Florida. And I really think that advocacy and that template language is what's going to be key to changing that, and I think an organization is probably going to have to support it. The Humanitarian OpenStreetMap Team has been supporting this as it is now, but that's a really specific use case and an organization with a specific mission, so I suspect this will end up as its own thing at some point. We have a ton of processed images from Landsat, maybe pan-sharpened, and we also generate GeoTIFFs. Is that the imagery you want to share? We want to share whatever. Potentially with Landsat, I suspect at least Landsat 8, which is already hosted on an S3 bucket, ideally we'd want to just point to that bucket. But if there's other imagery that's not conveniently stored like that, we would definitely be interested. So you want original imagery, not processed imagery? We want raw imagery as GeoTIFFs. For this we said we're just focusing on visible right now; that's a pretty common question. I suspect that could change, but it was a matter of you have to start somewhere. To be honest, I don't know. I haven't been as involved in this project as I was when I worked for HOT, and I'm not responsible for the Amazon bill anymore or that sort of stuff, so I'm not sure. That's true. At the moment there are only two buckets; there are more coming, but I think the majority of the imagery is in HOT's bucket. You could figure it out: if you went to everyone's bucket you could work it out, and since there are only two it would be easy right now. Erin? Maybe it's too soon, but has anyone thought about how the data gets replicated so that it's not just in a single bucket? Yeah, we haven't really worked on that, but the offline use case definitely needs it, and there are plenty of others as well. Thanks everyone. Thank you.
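To make the metadata question earlier in this talk a little more concrete, here is a minimal Python sketch of the kind of record the speaker describes: roughly where the image is, a few details about the imagery itself, and contact and licensing information. The field names below are illustrative assumptions only, not the actual Open Imagery Network GeoJSON metadata schema; consult the OIN repository for the real specification.

```python
# Illustrative sketch only: these field names are assumptions, not the
# official Open Imagery Network metadata schema.

REQUIRED_FIELDS = ["footprint", "acquisition_date", "resolution_m", "provider", "license"]

def validate_metadata(record):
    """Return the required fields that are missing from a metadata record."""
    return [field for field in REQUIRED_FIELDS if field not in record]

example_record = {
    # Where is it a picture of? (footprint of the image, here as WKT)
    "footprint": "POLYGON((-122.7 45.5, -122.6 45.5, -122.6 45.6, -122.7 45.6, -122.7 45.5))",
    # A few details about the actual imagery
    "acquisition_date": "2015-08-30",
    "resolution_m": 0.1,                 # ground sample distance, metres per pixel
    "sensor": "hypothetical-uav-camera", # hypothetical value for illustration
    # Contact information and license terms
    "provider": "Example Drone Pilot",
    "contact": "pilot@example.org",
    "license": "CC-BY 4.0",
}

if __name__ == "__main__":
    missing = validate_metadata(example_record)
    print("missing fields:", missing or "none")
```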
|
Aerial imagery is a core input for many mapping-related projects. With the increasing use of Unmanned Aerial Vehicles (UAVs), lower-cost satellites, and more openly licensed imagery in general, there is a need for one place to search for and access that imagery. This is a big undertaking, and it certainly can't be left in the hands of a single company or organization. OpenAerialMap (OAM) addresses these issues with transparency and open access. A project with a ten-year history, OAM was relaunched as of 2014; it is now under active development, and a dedicated open-source community is emerging. This presentation will cover the following topics: * The general architecture of OAM * How to contribute data directly to the system or by hosting your own node * The future roadmap and how volunteers can contribute
|
10.5446/32068 (DOI)
|
|
t’s 2014 — we have consumer robots and electric cars, private spacecraft, planet colonization projects, and the Higgs Boson is confirmed, but GIS software is still a mess. You might be able to make sense of it all if you’re a GIS specialist with an academic background, but other creative individuals — designers, developers, tinkerers of all kinds, each with a vision and desire to create meaningful and beautiful maps and visualizations — are constantly losing battles against bloat, clutter, and complexity. How do we reverse this GIS entropy? What does it take to turn complex technology into something that anyone can use and contribute to? An attempt to answer by the creator of Leaflet, a simple JS library that changed the world of online maps forever.
|
10.5446/32070 (DOI)
|
|
Facebook's React is a rising star in the crowded JavaScript ecosystem. It is not a Model-View-Controller framework; it is actually the V in MVC. Encapsulated components promise more code reuse, easy testing, and separation of concerns. This talk introduces React and shows the architecture of an OpenLayers 3-based map viewer using React components.
|
10.5446/32072 (DOI)
|
We build foundational intelligence software, for lack of a better term, dealing with things like consolidating disparate data sources and giving entities the ability to map and analyze that data. We've worked primarily with international nonprofits, but we've also done work in the oil and gas space as well as advertising; primarily, though, most of the work we've done is in the international development space, specifically trying to prevent scary creatures like this from spreading malaria across large swaths of the world. This is a difficult space to work in. As you can imagine, a lot of the customer needs are quite difficult to achieve, and we started working on this almost nine years ago, when the state of technology was different than it is today. Needless to say, this was not easy to achieve: the goal is to do things like try to prevent malaria across multiple countries in the world. Some common problems we found across these customers over the years have been pretty common themes, and I'm sure a lot of you have experienced things like needing to consolidate disparate data sources; entities having limited resources, specifically in GIS staff and GIS technology; a lack of good data, especially GIS data, and especially when coming from developing countries; and a need to consolidate, map, analyze, and visualize their data. Pretty common things that I'm sure a lot of you have experienced over the years. The common thread through all this is data, and specifically the need to turn messy and incomplete data into something that's useful. So how do we do this at TerraFrame? First, we've got to give a shout-out to all the FOSS4G and FOSS tools in general. Without the awesome work by these groups, there's no way a small company like ours could do what we've done so far. So thank you. But additionally, we needed another way to approach data and visualization, something that was practically nonexistent almost nine years ago, and we chose a technique of modeling data as ontologies. Every time I bring up this term, ontologies, in programming, people tend to think I'm talking about some voodoo magic, so I'm here to try to dispel some of those concerns by giving you a quick ontology crash course of sorts. I'm going to have to ask you to bear with me a little bit, because this talk is going to be kind of complicated, but I swear you'll see no code, so you all should be able to grasp it. Before I get too deep into it, I need to explain that there are two primary avenues of data in this system: there's ontology data, and there's user data. User data can have a relationship to ontologies, but in general these are two different pipelines of data in the system. So to start off with: ontologies. What are ontologies? Ontologies are a style of programming that allows us to make human-like inferences about data nodes. You can imagine a basic ontology is something like, Justin is a person, where "is a" is an ontological relationship between two data nodes, or Justin has a brain, where again "has a" is a relationship between two data nodes. A geo-ontology is really similar to this, but we're talking about spatial relationships: Colorado is a state, or Colorado is located within the United States of America. You can see these are very human-like references between data objects. In order for us to do this in the stack I'm talking about, which I'll get to shortly, we had to come up with two central concepts in the system: one is the universal, the other is the geo-entity.
So this shouldn't be too foreign to you. A universal is essentially a collection of features. You can imagine this in the political hierarchy, where there are countries, states, provinces, districts, and so on. Countries, for example, are a universal. A geo-entity, on the other hand, is an individual feature within that collection, within that universal. So South Korea is a country within the countries universal. To drive this home a little: in this diagram you can see geo-entities on the left representing individual features within a universal set. So Colorado is a state, and it can be found within the state universal, and the universals have relationships between each other: states are within countries, and counties are within states. Another way to look at this is in this simple tree widget you might see on the web. Colorado, again, is a state within the state universal, and you can travel up and down the universal hierarchy to navigate these ontological relationships between spatial entities. So what's the purpose of all this? We can already do this with spatial processing, right? The purpose is to provide essential geographic context for the system itself. Specifically, it provides well-defined spatial and non-spatial relationships between data nodes, and there's no dependency on GIS, or on geometries, excuse me. This means we can input data into our system that doesn't have geometric data, so it can come from an Excel file or a CSV, and still work with it. This is huge; I'm going to try to drive this one home. The software I'm talking about here is RunwaySDK and GeoDashboard. RunwaySDK is a data management platform; you can also consider it an ontology engine of sorts. More recently we've been developing GeoDashboard, which is a visualization and user platform that sits on top of RunwaySDK. Unfortunately I can't get my computer to hook up to this presentation, but I have a quick screen-capture video I saved, so I'll give you a really quick demo of GeoDashboard. So what about user data? What I just described is the first branch of data, and it provides a reference level of spatial data inside RunwaySDK and GeoDashboard. These are the political hierarchies in most situations. User data is a little different: user data is the data coming in from the user that pertains to their domain of knowledge. So if you're working with malaria, your user data is going to be things like CSVs or Excel files related to malaria, not necessarily mapped to some spatial feature. But user data can have relationships to ontologies, which gives us a lot of power. This user data may come in as JSON, as GeoJSON, or in any other GIS format, but more often than not it comes in as Excel files. And as we all know, Excel data, and all data for that matter, is often incomplete, messy, and nonexistent in terms of geometries, especially when working in developing countries. So, another complicated slide. This is a slide I put together to try to explain how user data can map onto this ontology structure. Here on the left you have user data coming in, whether from a JSON API, an Excel file, or whatever. It gets pumped into the system, and at the top, records 46 and 47 are just standard records of data. They get mapped against a geo-entity using a location field. So again, notice that you don't need a geometry. In this case, we're working with semantics.
Like every customer we've ever worked with, you can work with semantics, with labels, with the names of countries; they can't always deal with having geometries. So you map those user records against a geo-entity, which automatically gives those records a reference to spatial processing in the system itself. And notice this lower tier, or lower branch, of the system maps a synonym. We have a mechanism to map typos and semantic differences between locations to a single geo-entity. So if we're piping data into the system every minute and there's a known typo in the system, you register that typo, and every single record coming in in the future with that typo will map to the same geo-entity. So it also buys us a lot of data cleansing power. And finally, when you have this user data mapped against the ontology structure, in other words against the geo-entities, you gain all the power of working with the universal tree. Thank you. So why is this valuable? I mentioned some of the reasons why, but a big reason is that it allows us to map data in a generic way. What do I mean by generic? I mean my data, your data, everyone's data. It doesn't matter what data you have, it doesn't matter what format it's in. We can pipe it into our system and map it against ontologies as long as you have something that indicates location. It could be a geometry, it could be a field with a textual location name; it has to have something, but the doors are pretty wide open in that regard. So it gives a lot of flexibility in terms of the types of data and users we can interact with, or help. Issues like no geometries are not a problem at all. Of course, if we have geometries it only adds to our ability to build apps and help deliver better solutions, but it isn't a requirement to get the system running and to provide visualization and analysis. So how does this work in a web application? This is kind of a big leap, but this diagram demonstrates something you should be familiar with by now. Here you have user data and geo-entities. We've mapped this user data against some known geo-entities, and because the universal hierarchy has some awareness, we can aggregate these geo-entities up to the parent universal geo-entity. So we'll know, by summing all the records that join to these two geo-entities, that Colorado has sold five widgets, and we can aggregate up the universal stack even further to the United States to see that five widgets were sold in the United States. And the beauty of this is it works with all types of data. This doesn't have to be sales; this could be counting bunnies in the desert. This next slide is a reiteration of what I just said. So what about geometries? I keep saying how we don't need them; however, they're incredibly useful. Geometries are still used to visualize geo-entities. Everything you see on a map will be a geometry stored on the geo-entity. If the geo-entity does not have geometric data, it can't be visualized, at least if it's an ontology record. If it's user data, it can still be mapped against a geo-entity in the political hierarchy. Sorry, that's going to get confusing; come bug me afterwards if it is. Geometries also allow us to visualize user data at the lowest level: points. If we have an Excel file with some lat-long coordinates, we still allow users to visualize that. They don't have to aggregate their data up the universal hierarchy to see it on a map.
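As a toy illustration of the mapping and roll-up just described (user records matched to a geo-entity by a location label or a registered synonym, then aggregated up the universal hierarchy), here is a small self-contained Python sketch. It is only a conceptual model using plain dictionaries, not the RunwaySDK or GeoDashboard API; the place names and the "widgets_sold" field are made up for illustration.

```python
# Toy illustration of geo-ontology roll-up; not the RunwaySDK API.

# Each geo-entity points at its parent in the universal hierarchy (None = top).
PARENT = {
    "Denver County": "Colorado",
    "Boulder County": "Colorado",
    "Colorado": "United States",
    "United States": None,
}

# Known typos/synonyms registered against canonical geo-entities.
SYNONYMS = {"Colo.": "Colorado", "Untied States": "United States"}

def resolve(location_label):
    """Map a location label from user data to a canonical geo-entity."""
    label = SYNONYMS.get(location_label, location_label)
    if label not in PARENT:
        raise KeyError("unknown location: " + repr(location_label))
    return label

def aggregate(records, value_key="widgets_sold"):
    """Sum a value for every geo-entity and all of its ancestors in the hierarchy."""
    totals = {}
    for rec in records:
        entity = resolve(rec["location"])
        while entity is not None:
            totals[entity] = totals.get(entity, 0) + rec[value_key]
            entity = PARENT[entity]
    return totals

user_records = [
    {"location": "Denver County", "widgets_sold": 3},   # no geometry needed
    {"location": "Boulder County", "widgets_sold": 1},
    {"location": "Colo.", "widgets_sold": 1},            # typo resolved via synonym
]

if __name__ == "__main__":
    for entity, total in sorted(aggregate(user_records).items()):
        print(entity, "->", total)   # Colorado and the United States both roll up to 5
```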
We can also use geometries to algorithmically enhance data or do some QA/QC on data coming into the system. Of course, just like always, geometries are incredibly useful; we've just had to find a solution for mapping and visualizing data that doesn't require them. And again, an important point is that these geometries are optional. We've currently deployed this technology to, I think, eight countries. We're expected to be in 14 within the next six months, and we have a lot of other expansion opportunities coming up soon, so we fully expect to possibly quadruple the number of countries we've deployed this technology to in the next couple of years. Which is really great, because we're a very small company of seven people. So, like I said, I don't have the ability to hook my computer up to this one, but luckily I did a little desktop screen capture of the most basic use of GeoDashboard. Sorry, I really wish I could click through on my own computer; I'd love to show anyone who is interested this later, and I'd be happy to sit down with you and walk you through it. GeoDashboard is much more feature-rich than what you're about to see here. So this is a basic dashboard, as you can see, very familiar to all of us. We have layers on the left and a list of data on the right. This is a simple data set; you can imagine this is user data. Again, we can pipe any user data into the system, and you can have more than one data set; here we only have one. Cage delivery summary actually comes from the Cambodian terminology for salesmen. And we have a bunch of attributes on the data on the right. How am I doing on time? Five minutes, great. Okay. So if I hit play real quick: here I'm simply opening up a form to create a layer. This form enables a bunch of dynamic mapping of various flavors, much like a lot of the other hosted mapping solutions offer. And voila, we've mapped some data. Then if you want to work with the data and analyze it a little further, you can change your aggregation method. Before, we aggregated by province, and I just changed the aggregation level to district. So here's an example of navigating up and down the universal tree to dynamically map data based on an ontology structure. Voila: there we have a bunch of sales data represented as points, sized by the number of sales, and mapped against a generic ontology structure. One other interesting piece is the ability to filter data. I think I just did it real quick in that example, but on the right you can manipulate the data through simple little widgets. Here we're working with a number field, so it's a simple comparison: are these data values greater than, less than, or equal to whatever. But you can also query on ontology types, dates, times, all that stuff, to work with the data. It's really nice; you don't have to write any SQL or anything to do it. And one final thing: we have a bunch of data management tools to work with the ontology data in the back end. Here I'm navigating the universal tree. Users can move these universals around the system to redefine the universal hierarchy, and they can also visualize all the geo-entities in the system. Here we have a bunch of geo-entities for Cambodia and Zambia, and a bunch of problems identified through the ontology structure. So again, there's no spatial querying or PostGIS behind the scenes to figure out what these problems are.
This is all derived from ontology information. And here we're confirming a new location, or fixing a problem in the geo-entities, through a simple web widget. And that's kind of it; I believe this is the end of the video. Does anyone have any questions? Thank you.
|
DESCRIPTION: Organizations of all sizes face issues harmonizing data between disparate sources in a way that is both efficient and useful for analysis and visualization. Geo-Ontologies offer an approach to data management that enables flexibility for interacting with data in a generic context even if the data is lacking geometries or contains problematic text errors. RunwaySDK is an open source ontology engine which empowers robust web visualizations to serve the analytical needs of organizations both large and small. Built on open source tools and driven by real world needs, GeoDashboard (also Open Source) exposes the flexibility gained from RunwaySDK by empowering users with robust features for managing and visualizing their data from a web-browser. This talk will focus on how GeoDashboard's use of Geo-Ontologies enables dynamic mapping of almost any dataset in meaningful ways to fight disease and sanitation issues in developing countries. ABSTRACT: Ontologies in software development are a way to apply human like inferences to data, such as a bee is an insect. Geo-Ontologies focus on the geographic relationships of ontologies, such as Seoul is within Korea. Ontologies offer a valuable approach to data management because it allows for building a complex network of structured relationships. These well defined relationships can also be used to analyze and map data regardless of whether the data points include geometries. Using this approach to software development coupled with an open source business model has enabled TerraFrame to develop the mature ontology based data engine RunwaySDK and the powerful map based visualization layer GeoDashboard. RunwaySDK has been used in conjunction with an application tier to fight vector borne disease in multiple countries. GeoDashboard is a newer open source application built with PostGIS, GeoServer, Leaflet.js, and RunwaySDK which enables users to gain control over both data management and visualization all from a web-browser. The goal of GeoDashboard is to give organizations of all sizes the means to solve and share difficult problems through easy and accessible tools. This talk will introduce the basics of RunwaySDK's Geo-Ontology model and how it is being used in GeoDashboard to allow users to: *Dynamically map layers aggregated against political boundaries *Dynamically map layers with different geometric and cartographic representations *Dynamically filter data across related layers in a map *Dynamically query ontology data that layers are mapped against *Manage data relationship structures through ontology web widgets *Manage geographic data through web widgets *Expose data quality issues through web widgets
|
10.5446/32073 (DOI)
|
Hello everyone. I'm Junsu Kim from Pusan National University. On behalf of my advisor, Ki-Joune Li, I will give his presentation. We do research on spatial databases, and we have focused on indoor spatial databases for years. So today I'm going to talk about a system that we have been implementing called the ISA Server, which is an indoor spatial data server. First I will give an introduction to our project and its background, then talk about requirements and issues, and then we will look into the system design and architecture, and conclude this work. We participated in an R&D project funded by the government from 2007 to 2012. The title was the Indoor Spatial Awareness project. The overall team consisted of seven teams from universities and eight teams from industry. The goals of the project were to develop component technologies for indoor spatial awareness, to establish a basis for indoor spatial theory and a data model, to develop systems for building and managing indoor spatial databases, and to develop pilot application systems. From the project we obtained outputs: indoor spatial theory in the form of many papers, a data model, and systems such as authoring tools, a server, and applications. The data model was the foundation of IndoorGML, which is an OGC standard application schema of GML, and our team was in charge of the ISA Server. Since the project finished we have kept implementing applications on top of the server, and in the meantime I have maintained the system; however, as time goes by we need more and more functions. As you know, open source projects have benefits such as efficient maintenance at low cost, and scalability, so to take advantage of that we started an open source project. The goal of this project is to develop the indoor spatial data server as open source. This figure shows the flow of indoor spatial data: construct, manage, and use the data. In this project we focus on the management of the data, and we can intuitively see our role in this flow. The system must store data and provide the data to data consumers in many different forms. From this point of view we specified our requirements, and we will talk about some issues. These are the functional requirements. First, the system needs to store data and build indexes to improve performance; it must support processing various queries, including 3D spatial queries, and support standard formats such as IndoorGML and the parts of CityGML related to indoor space. Not only normal queries but also analysis queries should be handled, such as routing for navigation. And the locations of moving objects are important data, so we need to take care of tracking moving objects in indoor space. There are also non-functional requirements. As I told you, in the previous project we didn't use any open source, but in this project we'd like to maximize the utilization of open source projects related to ours. We also need to set up an environment for project management using GitHub. From these requirements we surveyed and found some related open source projects: GeoTools, GeoServer, CGAL, SFCGAL, and JavaCPP. I'm going to talk about these briefly, along with some issues. As you know, GeoTools is an open source Java library which provides standards-compliant methods for the manipulation of geospatial data, and the data structures of GeoTools are based on OGC specs such as GML, Filter, and KML. However, GeoTools has limitations for implementing our system.
The first is that GeoTools supports complex features read-only, in the app-schema module. Compared to a simple feature, a complex feature is a feature that has associations and attributes with multiplicity. Because application schemas such as IndoorGML or CityGML may contain complex features, we will extend GeoTools so that we can store complex features. The second is that GeoTools does not support 3D operations at the moment. They have ISO geometry, which is a 3D model, but the implementation of ISO geometry is an unsupported module, and 3D operations are not implemented yet. We could develop the spatial operations from scratch, but we don't need to, because CGAL can provide a basis for implementing 3D spatial operations. CGAL is an open source computational geometry algorithms library written in C++, and the strength of CGAL is robustness, so it can solve the precision problems that we encountered while implementing the system. To fill the gap between CGAL and GeoTools, there exists another library, SFCGAL. It's an open source library based on CGAL, written in C++, and it supports ISO geometry including 3D operations. So we have implemented, sorry, I'm going to talk about our system later. The obvious issue is how to combine CGAL with GeoTools; in other words, how to invoke C++ code from Java. This is a very common problem and there are a lot of solutions. We looked at two candidate bridges between Java and C++, tried each module, and determined to use JavaCPP because it was easy to implement. Since GeoTools is just a library, we also need server functionality, and GeoServer can be a solution. GeoServer is open source software based on GeoTools for sharing spatial data, and it implements a lot of standards such as Web Feature Service and Web Map Service. So we have designed our system to meet these requirements, and from now on I'm going to talk about our architecture. This figure shows the overall architecture of the ISA Server. The ISA Server consists of GeoServer and the GeoTools library, GeoTools extensions, and GeoTools plugins. In this presentation we focus on the modules we are involved in: the extensions and plugins. The ISA engine imports the 3D geometry and complex feature extension modules, and these two modules can stand alone as GeoTools extensions. Let me explain each item. In the 3D geometry extension, ISO geometry is an implementation of the OpenGIS interfaces, and as mentioned before, for implementing 3D geometry operations we employ SFCGAL, so we need some wrapper classes in Java. We use JavaCPP as a bridge, and when a 3D operation is invoked, both geometries passed as parameters are converted into the SFCGAL wrapper Java classes. We have implemented most of the operations, such as volume, but we are struggling with implementing spatial relations. We implemented spatial relations between two solids without any problem, but between a solid and a polygon, or a solid and line strings, there are problems in some cases: when the SFCGAL data are converted back into GeoTools, the polygon or line string is split into several pieces. We are going to find a solution very soon. Next is the complex feature extension module. Actually we are in the design phase, so this is just the plan: we are going to decompose a complex feature into simple features according to the number of attributes with multiplicity, so we need to manage the mapping between a complex feature and its simple features.
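As a rough, purely conceptual illustration of that decomposition idea, here is a small Python sketch. It is not the GeoTools app-schema machinery or the ISA Server design, just one simple way to split a feature whose attributes can hold several values into flat records plus a mapping back to the parent; the feature shape, field names, and the per-attribute splitting strategy are all assumptions made for the example.

```python
# Conceptual sketch of complex-feature decomposition; not GeoTools/ISA Server code.

def decompose(complex_feature):
    """Split a feature whose attributes may hold lists (multiplicity > 1) into
    flat "simple" features, plus a mapping from the parent id to the pieces.
    For brevity, each multi-valued attribute is split independently."""
    simple_features = []
    fid = complex_feature["id"]
    mapping = {fid: []}
    scalar = {k: v for k, v in complex_feature.items() if not isinstance(v, list)}
    multi = {k: v for k, v in complex_feature.items() if isinstance(v, list)}
    if not multi:                              # already a simple feature
        simple_features.append(scalar)
        mapping[fid].append(fid)
        return simple_features, mapping
    for attr, values in multi.items():
        for i, value in enumerate(values):
            piece = dict(scalar)
            piece["id"] = "{}:{}:{}".format(fid, attr, i)  # synthetic id for the piece
            piece[attr] = value
            simple_features.append(piece)
            mapping[fid].append(piece["id"])
    return simple_features, mapping

# A complex feature: a room with several doors (an attribute with multiplicity).
room = {"id": "room-101", "name": "Lecture Hall", "doors": ["door-A", "door-B"]}

if __name__ == "__main__":
    simples, mapping = decompose(room)
    for s in simples:
        print(s)
    print("mapping:", mapping)
```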
I think it will be very useful for any kind of application schema. And this is the ISA engine extension: it imports complex features and geometry, and we are designing the ISA schema. It contains moving objects, spaces such as rooms, a topology network, and the building structure, such as walls and floors. And we are going to implement some plugins so that the ISA engine can import from and export to CityGML and IndoorGML. With this data, the engine will support analysis queries such as routing, map matching, and so on. So, in summary, the ISA Server is an open source library and data server based on GeoTools and GeoServer. It can store and manage indoor spatial data, and it supports various queries, including 3D spatial operations and analysis queries in indoor space. It also supports data formats such as IndoorGML and parts of CityGML. We expect that the ISA Server will become a common data server for indoor spatial data, widely used in various applications like indoor management, monitoring, mobile services, or information guides. So that's it. Thank you for your attention. Any questions? Thank you. Hi, I see you have built the server, but what are you using the server for? Do you have some use case where you are actually using it right now? As I told you, we had some applications in the previous project, but those applications depend on our previous server, so I think we need to implement other applications or change the interface; we can support any type of application. One example was finding a lost child. Am I right? Yes. Using some tag, an RFID tag or another kind of tag: when the child is lost, the tag sends its location data to the server, and the parents can find the child that way. So any type of application, I think, can be supported. Other questions? When it comes to your complex features, to which level of detail can you store in 3D? In 3D mapping, in 3D space; are you clear about what I'm saying? In that environment, can you map 3D features? You mean in indoor space? Is that possible? Yes. So then, is it possible to detect at which height of a wall a window is, the polygon intersection sort of thing in 3D? Yeah, we are trying to; that is an analysis query, right? Analysis, yes. We just support the basic queries now, and then maybe we can extend to that kind of matching. In this presentation, map matching just means that when you have a location, we map it to a certain room. So later, please come and discuss it. Okay. Yeah. Thank you.
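To make the cell-based idea from this talk and its Q&A concrete (a position in indoor space is identified by the cell that contains it, and routing runs over a topology network of connected cells), here is a small self-contained Python sketch. The room layout, the rectangular cell extents, and the breadth-first routing are toy assumptions for illustration; this is not ISA Server code.

```python
# Toy cell-based indoor positioning and routing; not ISA Server code.
from collections import deque

# Cells (rooms/corridors) as axis-aligned boxes: (xmin, ymin, xmax, ymax).
CELLS = {
    "room-101": (0, 0, 5, 4),
    "corridor": (5, 0, 15, 2),
    "room-102": (5, 2, 10, 6),
}

# Topology network: which cells are directly connected (e.g. share a door).
ADJACENT = {
    "room-101": ["corridor"],
    "corridor": ["room-101", "room-102"],
    "room-102": ["corridor"],
}

def locate(x, y):
    """Return the identifier of the cell containing position (x, y), if any."""
    for cell, (xmin, ymin, xmax, ymax) in CELLS.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return cell
    return None

def route(start_cell, goal_cell):
    """Breadth-first search over the cell adjacency graph (fewest cells crossed)."""
    queue, seen = deque([[start_cell]]), {start_cell}
    while queue:
        path = queue.popleft()
        if path[-1] == goal_cell:
            return path
        for nxt in ADJACENT.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    here = locate(2.0, 1.5)                  # e.g. a tag report with coordinates
    print("position maps to cell:", here)    # -> room-101
    print("route to room-102:", route(here, "room-102"))
```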
|
In order to implement indoor spatial information services, we need an indoor spatial data server. However, due to the differences between indoor and outdoor space, most conventional geospatial data servers are not adequate for indoor spatial data. First, a position in indoor space can be specified by the identifier of the cell containing the position rather than by (x,y,z) coordinates. Second, indoor space is considered a set of non-overlapping indoor cells, unlike outdoor space. Third, indoor distance metrics must be defined differently from those of Euclidean space, considering obstacles such as walls, doors, and stairs. We developed a spatial data server called the ISA (Indoor Spatial Awareness) server to meet the requirements of indoor spatial data and have been working on converting it to open source using GeoTools. With the ISA server, we can store and manage indoor spatial objects, whether stationary or mobile, and retrieve objects with indoor spatial predicates. We expect that this server will be used as a common data server for indoor spatial information applications.
|
10.5446/32074 (DOI)
|
Alright, so who are we? We're the Professional Services Division of The Weather Company. If you aren't familiar with The Weather Company, you probably are, but you don't know it. We are The Weather Channel, Weather Underground, and WSI, and if you have an iPhone or an Android phone, you have touched one of our servers in the past hour getting a weather forecast. We serve on average 17 billion requests per day for weather forecasts, sometimes peaking at 30 billion. As far as the Professional Services Division, we are engaged in the aviation space: we help develop models to help airlines optimize their fuel efficiency based on weather, congestion modeling for airports and network propagation, runway taxi times, and runway configuration changes; we have models to predict all of that. With energy, we have products to model energy usage and renewable resources, such as wind power output. We even have a model that will forecast the forecast that comes out of NOAA or the European centers, since those are big market movers. I've even heard that there's a forecast of the forecast of the forecast coming out, for those who want to pay for that. In insurance, we do things like helping insurance companies understand their exposure to risk based on impending weather events: when something like Hurricane Katrina comes, you'd better have this much money in reserve, and here's your potential impact. We have models to detect hail from radar: where did hail fall, how big was it, and how many policyholders are impacted? As far as retail, there are a lot of really interesting problems in the retail space. Companies like Home Depot in the United States, these big box stores, may or may not decide to turn on their air conditioning based on the weather forecast, and that decision to turn it on or leave it off is a five-figure decision. So we have other models to measure the impact of the weather on these large enclosed spaces. As far as Apache Spark at The Weather Company, we use it for feature extraction (taking these large data sets and slicing and dicing and merging to develop features for training our models), for running those models (the actual predictive modeling itself), and for operational forecasting. We have Spark in use in production to actually perform the forecasts that are then consumed by our business clients. So, the goals of this presentation: present a high-level overview of Apache Spark, then a quick overview of the gridded weather data formats (we're just going to touch on gridded data for now, not point data or other vector geometries), then examples of how we ingest this gridded data into Spark, and finally some insight into simple Spark operations on that data. So, Jeremy. Thanks, Tom. Okay, so as Tom introduced, we've been using Apache Spark for some time, so we are going to present it. And what is Spark, actually? Spark is a general-purpose cluster computing framework. The goal of the framework is to be able to distribute computation over a network of computers, generally over a cloud of computers. The Spark platform was started in 2009 at Berkeley, in a lab called the AMPLab. It was open sourced around 2010, one of the first releases, and was later donated to the Apache Software Foundation. Since then the platform has been constantly upgraded, and now we are getting to version 1.5. There are many, many additions to it; there are more than 400 developers, and it's open source.
And every new release brings a lot of new functionality. So what is the platform, exactly? It's a generalization of MapReduce. I don't know if you have heard of MapReduce; it was developed by Google and Yahoo for distributed computing for web search and a lot of other needs they had, and basically the overall idea was to separate every computation into two operations, map and reduce, and basically everything can be expressed with these two operations. Spark is a generalization of that, which means it can do way more than just mapping and reducing, and it's also optimized for speed. Here is a graph that Spark displays on their site comparing Hadoop and Spark over a number of iterations, and we see that the running time is very different, about 10 times different. So what makes Spark faster to run? One of the features is that the code moves to the data, not the data being shuffled around, and that basically makes it very fast. The second thing is the lazy evaluation mechanism: operations are stored, or stacked, and they are not going to be executed until it's really necessary, until we actually need the results out of these computations, and there is a lot of optimization through that stack of operations to run them faster. They also wanted to make it fast to write, so most of the platform is written in Scala, which is a language that's slightly more compact than Java and allows you to do a lot with a very limited set of instructions. At the heart of Spark lies the main storage unit, which is called the resilient distributed dataset, and throughout the presentation you will hear a lot about the features of that dataset. The resilient distributed dataset (RDD) is basically a way to partition the data: you have a big data set and you want to do computation on it, and the first thing you will do is put this data together into an RDD, a resilient distributed dataset, which will be spread out to the different worker nodes. You can see here the general chart of how Spark works: there's a driver program that runs the code, any code that someone wants to write and distribute, and that code will automatically be spread out over worker nodes, any number of nodes. The RDD is basically the data, automatically spread out to all the worker nodes and partitioned. What is interesting with the resilient dataset, as I was saying before, is that when we create an RDD nothing really happens; it just stacks all the instructions that lead to creating the data. In particular, that makes it pretty fault tolerant, in the sense that if we lose a node somewhere, because the RDD stores all the operations, it's easy to reconstruct it basically from scratch, starting from the data and redoing all the operations. It's really easy to redo. This is how they get that fault-tolerance feature, which is great. And basically, as we will see later, there are two types of operations that we can do on an RDD once it's created. The first are transformations: these are the ones that are stored and not necessarily executed unless necessary. And there are actions: an action is what triggers the actual evaluation, where the code is going to be distributed, computed at the nodes, and brought back to the driver. And finally, the RDD is also great because it's easy to tune the partitions.
We can always repartition an RDD, and if we add more nodes or want to optimize, we can play with the partitioning. We can also play with how we cache the data: it is cached on the workers, so that if you have multiple operations on the same data set, on the same RDD, it's not going to reload any data; it just stores it and it persists there. There are different levels of storage that we can use, going from memory only, to disk only, to memory and disk, and we can choose whatever we need. So it's really convenient compared to MapReduce: there are many operations that can be done on a given data set, and here is a small sample of them. One that is very basic is filtering, which means removing some of the data based on a condition. Map is the same as in MapReduce, in the sense that it's a one-to-one mapping: we take our original data and transform it one-to-one, so every single data element is transformed into a new element. And we have operations that do more than that, which can scatter the data, which is something that wasn't necessarily easy to do with MapReduce and that Spark supports very well. A good example of that is flatMap: for every element of your data, flatMap can create an ensemble of similar elements in the set, so you basically increase the size of your data set. If you work with key-value data, you will generate a set of keys; you create basically a one-to-many mapping for every single sample in your data. And then finally, similar to reduce but again with way more operations, there are the gather transformations, which put together different data items you have in your RDD and aggregate them through operations that you define, like sums or averages. There are many ways to control how the data are aggregated inside each partition on the worker nodes and also how they are then aggregated back to the driver in the main executor. As you can imagine, if you have multiple RDDs, you can also do different operations: union, intersection, and join are very classical, and one thing that is pretty powerful is cogroup, which allows you to put together three RDDs into a single one. So it's extremely useful.
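Since the talk only describes these operations in words, here is a minimal PySpark sketch (the Python API for Spark) of the same ideas on toy data: lazy transformations such as filter, map, flatMap, and reduceByKey, followed by an action that actually triggers the distributed computation. It is a generic illustration under assumed toy data, not The Weather Company's code.

```python
# Minimal PySpark sketch of transformations vs. actions; toy data only.
from pyspark import SparkContext

sc = SparkContext("local[*]", "transformations-demo")

# An RDD of (station, temperature) observations, partitioned across workers.
obs = sc.parallelize(
    [("KPDX", 18.0), ("KPDX", 21.5), ("KSEA", 16.0), ("KSEA", 17.5)],
    numSlices=2,
)

# Transformations are lazy: nothing runs yet, Spark just records the lineage.
warm     = obs.filter(lambda kv: kv[1] > 17.0)                       # drop cool readings
as_f     = warm.map(lambda kv: (kv[0], kv[1] * 9 / 5 + 32))          # one-to-one mapping
expanded = as_f.flatMap(lambda kv: [kv, (kv[0] + "-copy", kv[1])])   # one-to-many mapping
sums     = expanded.reduceByKey(lambda a, b: a + b)                  # aggregate per key

# An action triggers evaluation: work is shipped to the workers, results come back.
print(sums.collect())

sc.stop()
```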
So I'm going to switch it over to Tom. Thanks. So at this point we have a description of the RDD, and the question for us now is: how do we get our data into the RDD? Looking at weather data, you're dealing with primarily two types of data, observational and forecast. They have varying dimensionality, and even within a forecast or an observation you can have varying dimensionality. The structure on the right of the slide is something you'll see very often in code that reads meteorological data, and if you've done it enough, you know that if you're going to customize this code to a specific data set, you can remove various loops and change the nesting. It's really not a fun data format to play with at times. The data comes in a number of binary formats: NetCDF, GRIB, HDF, those are just a few. But the good news is that there's a tool called NetCDF-Java, with something called the Common Data Model, the CDM, and this provides an abstraction over the dimensionality and the different formats. It's very advantageous to use; there are a lot of benefits to it, and it is the canonical library for reading this data. The problem with data in this format is that there are many large files. At The Weather Company, our ingest of gridded data stands at a steady load of about 2 to 3 gigabytes per minute. That means we get a terabyte every five hours or so, and a petabyte every eight months. So there's a lot of data, and if you're using this data to train models or for operational forecasting, you need to be able to handle it, and it helps to do that at scale. So how do you load gridded data into RDDs? If you're using Spark, HDFS is typically what underpins a Spark installation. The problem we have is that HDFS, the Hadoop Distributed File System, is a file system abstraction shared across nodes, with very large block sizes, designed to be streamed. It's best used append-only; you can do writes, but it's really not a good idea. The block size is 128 megabytes, and you don't want to be rewriting a block to change a byte. I bring it up because it's the standard data store for Spark. As for the formats that are typically supported: traditionally it was text-delimited data, which is not very exciting at the data sizes we're dealing with. Binary formats are available, but how do we convert to them? So the thought is: why not just store these gridded data formats in HDFS and read them from HDFS? Well, the reality is that NetCDF-Java has some fundamental assumptions that restrict its use outside of file systems proper: it assumes there's a file, and it needs random access, which is a big problem in a file system that's optimized for streaming. So what are our options? We want the maintainability of using NetCDF-Java, because it provides this really rich, healthy abstraction for us, but it assumes a file system and random access. One option is to use a distributed file system across your cluster. Another option is to look at something like an object store: store your data as key-value pairs in a tool like S3, OpenStack Swift, Redis, or some other in-memory store. Given the amount of data we have and the restrictions on this data, we have chosen to use object stores. We at The Weather Channel use Amazon pretty heavily. When we receive a file, we essentially store it to S3 using a compound key that has all the information we need about the dimensions, the date, the product, and the variables, and store it in S3. When we want to read the data into Spark, we produce a list of keys that we want to read, and we generate an RDD that is just that list of S3 keys. Then we distribute this to all the worker nodes. This is very important, because sometimes this list of keys is not big enough to be automatically partitioned across all the nodes in your cluster, so you need to remember, or at least check, to manually partition the list of keys you want to read in. All these keys are sent to the Spark worker nodes and then flat-mapped: one S3 key is read, the file is downloaded locally, it is opened in that job from the local file system or an in-memory file system, and then it's flat-mapped to the compound key, with all the variables from the various dimensions and the accompanying value. That's what's distributed and stored across all the worker nodes, ready for operations.
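Before the cluster-design considerations continue, here is a hedged sketch of the ingest pattern just described: build an RDD from a list of object keys, make sure it is explicitly partitioned, and flatMap each key into (compound key, value) records. To keep the sketch runnable anywhere, the download-and-decode step is simulated with fake grid values; in practice that function would fetch the object from the store and decode the grid (the talk's stack does this on the JVM with NetCDF-Java). The key layout and helper names are assumptions for illustration only.

```python
# Sketch of key-list ingest into an RDD; the decode step is simulated here.
from pyspark import SparkContext

def fetch_and_decode(object_key):
    """Placeholder for: download the object behind `object_key` (e.g. from an
    object store) and decode the grid into (compound_key, value) records.
    Here we fabricate a tiny 2x2 grid so the sketch runs without credentials."""
    product, variable, run = object_key.split("/")
    for y in range(2):
        for x in range(2):
            compound_key = (product, variable, run, y, x)
            yield (compound_key, float(x + y))   # fake grid value

if __name__ == "__main__":
    sc = SparkContext("local[*]", "grid-ingest-demo")

    object_keys = [
        "gfs/TMP_2m/2015-09-01T06Z",
        "gfs/UGRD_10m/2015-09-01T06Z",
    ]

    # The key list may be small, so partition it explicitly so every worker
    # gets a share of the downloads.
    keys_rdd = sc.parallelize(object_keys, numSlices=2)

    # One key fans out into many (compound key, value) records.
    grid_rdd = keys_rdd.flatMap(fetch_and_decode).cache()

    print(grid_rdd.count())      # action: triggers the distributed reads
    print(grid_rdd.take(3))
    sc.stop()
```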
You want to do a disk reads local, keep everything on node. When you're doing this S3 strategy, you need to basically decrease your CPU to network pipe ratio. You want to have as many network pipes as possible. And with S3, we've seen per node reads of around 300 megabytes per second. So you can really pull data off of S3 very quickly when you have a large number of jobs locally reading that data. Another thing is cost. If you're using S3, you definitely need to keep your EC2 instances in the S3 region, the same as the region that data is stored in, otherwise you incur transfer costs. So if you're dealing with terabytes of data, you do not want to pay for that transfer cost. The good news about using S3 is it plays really well with AWS Spark EMR. So Elastic MapReduce is like the on-demand cluster engine. So if you have your data stored in S3, you can spin up AWS Spark clusters on-demand, run your job, and have them shut down, and have your results done to S3. So it plays very well with the on-demand computing model. And also, once you have your data pulled from S3 and read in your RDDs, you can then store the underlying data in HDFS in a Spark friendly format if that's what you want. You can basically treat HDFS as a cache layer. The question now is what do you do with our new RDD? So we did not necessarily introduce exactly what we're doing, but mostly what I'm doing with this Spark platform and RDDs is actually a machine learning algorithm or predictive modeling and statistics. So here we are trying to make the presentation a little more concrete on what can be done actually with it. So it's mostly data preparation steps. And the first one is what Tom mentioned is the volume of the data is pretty big. So if you look at that key that we present here, that key basically contains every piece of information that a forecast contains, which variable we are forecasting for, what is the runtime, when the forecast has been issued, what is the actual ensemble member when we have ensemble forecasts. So ensemble forecasts are multiple forecasts for the same time so that we can have probabilistic information. We also have the valid time, VT is the valid time, which is when that forecast is going to be valid for. And then we have Z, Y and X, which is like three-dimensional space definition. And when we put all these numbers together, we get trillions that are points that we're generating per day. That's only one forecast model and we have many of them that we use. So this example is ECMWF, which is the European Center for Medium-Range Weather Forecast. And we also use GFS, we also use NAM, we also use many models. So we need fast extraction. And the first step is obviously filtering. Filtering in Spark mean reducing the amount of the key set to the key that are interesting. So for example here, with a small example of code, you see that we are, for example, extracting the two-meter temperature only. We are going to be only interested by that variable. The run is 6Z, Zulu time. And we're only going to get the first 24 hours of the forecast. And then we can also limit the space by specifying longitude and latitude conditions. So, and what's great is it's a single kind of single line instruction that will run and extract the data by using what Tom described before. Another example here is if we want to do, for example, translations of the data. And it's actually very simple to do by just simply shifting the key. So by shifting the key, we are able to actually generate new data. 
And that's very important for machine learning. And the next example that we are going to display here is an example of a model that would take a certain number of X variables that are from XT minus 1 to XT minus i, which is a variable, for example, temperature for the past 24 hours that we want to input in the model. So FlatMap is a good way for us to achieve that. Another nice feature which makes it very easy to use is when we need to re-sample the data, it's really easy for us to round the key or to have a key map her that basically will aggregate all the points that belong to the same square to the same key so that by using aggregation functions, by using reduced by key, we are going to get an aggregate of this data. So re-sampling is made pretty easy and can be done in any very specific way of the forest. It's very useful. I'm presenting here a more complex function which is moving average, which necessitates sliding windows and spark, that's where spark is pretty machine learning oriented. It has a sliding function that basically creates over all these multiple assets, creates like a sliding window over which we can use any types of aggregation with averages or median. And it's very convenient. It's like in a very compact way we can prepare and run machine learning models from big data. Here is from the MLlib, which is the machine learning library that Spark has. They have a lot of different models for clustering the data, for reducing the dimension, then a lot of models that we need from linear to decision trees, random forest is part of it, multi-layer perceptron, so they also have the MLlib library has neural network library that we are actually experiencing right now. And basically all that works with a single RDD. Every of these functions will take the RDD as input. So that's basically it for the presentation. If you want to say what for our conclusion. Thanks for your attention and type of questions or no? So any questions? I'm curious to see what kind of data I got. What is that? Do you think you can kind of hear us in terms of resource extension? We're sure you guys have a lot of data on the allocator. Yeah, we do. Storage for S3 is 3 cents a gigabyte and that data can also be stored in Glacier, so it's a semi gigabyte. So as far as, we're in the process of decommissioning all of our data centers and moving entirely to the cloud. So this is a decision we made as a corporation to do that. There are some savings and there are some additional costs. But as far as the cost for compute processing, we don't have to invest in hardware. We just spin up an instance with Amazon and given the volume of what we do there, I'm assuming we're getting good prices. Do you talk about a office in considering how to find those savings? We have a kind of a tiered storage system, so when we capture data, since the project we're now working on, we are vending all the radar data and satellite data for all the iPhones, all the different products, all the... And so that data goes into our Wettus key value store. So in memory, I think we have a multi-terabyte Wettus cluster that we operate. And then after that, it gets archived to S3 and then we've made a decision, given some of our... We have a strategic partnership with IBM to provide data for their data science purposes, and so we need to have that data available. And forecast data is very important for model training. You have to have the forecast because that's what you're going to have when you actually run the model. 
You have to train on the forecast to pick up the biases in the forecast, so we do have to store all of the data. This is a bit open-ended, so you can answer it however you want. Aside from network and just CPU, what are the stress points in this model? What are the stress points? With Spark, you want to keep your data local to each node as much as you can, and when you have to perform essentially a shuffle operation where you incur network traffic across the nodes, that's a stress point, so you need to design your processing such that everything stays as local as it can. Have you actually tried to do this with Glacier? Yeah, cost comes down, but then it takes a thousand years to get it out. So what you do with Glacier is you store it, and then if you think you need it, you put in the request to Glacier, and then you wait five hours for the data to come back. So yeah, Glacier is for archival. The point of bringing up Glacier is that by using S3 you have Glacier as an option, but you're right, you would not want to use Glacier to perform any type of real-time, near-real-time, like this week kind of work. Thank you.
|
"Many important weather related questions require looking at weather models (NWP) and the distribution of model parameters derived by ensembles of models. The size of these datasets have restricted their use to event analysis. The ECMWF ensemble has 51 members. Using all these members for statistical analysis over a long period of time requires very expensive computational resources and a large amount of processing time. While event analysis using these ensembles is invaluable, detailed quantitative analysis is essential for assessing the physical uncertainty in weather models. Even more important is to potentially detect different weather regimes and other interesting phenomena buried in the distribution of NWP parameters that could not be discovered using a deterministic (control) model. Existing tools, even distributed computing tools, scale very poorly to handle this type of statistical analysis and exploration - making it impossible to analyze all members of the ensemble over large periods of time. The goal of this research project is to develop a scaleable framework based on the Apache Spark project and its resilient dataset structures proposed for parallel, distributed, real time weather ensemble analysis. This distributed framework performs parsing and reading GRIB files from disk, cleaning and pre-processing model data, and training statistical models on each ensemble enabling exploration and uncertainty assessment of current weather conditions for many different applications. Depending on the success of this project, I will also try to tie in Spark's streaming functionality to stream data as they become ready from source, eliminating a lot of code that manages live streams of (near) real-time data."
|
10.5446/32076 (DOI)
|
Thank you for the introduction. You can hear me in the back. So my apologies if I can't answer your questions because I didn't work on this project. I was asked a week ago to come and present this one as well. So I have two presentations here. The other one I have some say in because I actually do work in that company. So this project, Arctic Web Map is basically a project where we live in Canada, we live in the northern regions, but there's no good web map projections for us in our areas, our vast land masses up in the north. So they came up with an idea to create this polar map projection package. So if you look on this map, we're looking at wildfires, but where are the wildfires? It's kind of not a very good image, I know that. But the point of this slide is to say, well, the data is hard to see, right? It's distorted, exaggerated, and you really can't make any measurements of your data because this is in northern Canada. This is in the Nunavut territory. So as a planner, you can't make good decisions, you know, where to, you can't make decisions based on distance because you don't, it's just exaggerated. So Web Mercator projection, thank you very much Google and Microsoft, but it's used for 80% of the world as we see. But we still, like I said, there's a lot of people and a lot of interest, especially nowadays, that the polar ice caps are melting. And especially for Canada, I'm not sure about for the Scandinavian countries where a lot of the Arctic seaways are now open. Now there's conference conservation practices go up there and watch, you know, maybe study polar bears and other Arctic kind of animals up there too. But we don't really have any decent web kind of projection for web mapping, I guess, to display this and portray this information up there. So we say, why are we still using web map based maps or Web Mercator? Well, because they have a huge stack, a development stack, right? Lots of tools, lots of tiles have already been generated. So there's, and it's for most of the world and that's all they need. So we are a special case up in the north. We want the same thing too. We want that whole SDK, the whole stack of tools, all that kind of stuff to map our lands in a more original and meaningful way. So what we've done, what they've done here, what James has done and Steve, they've built a whole set of libraries based on OpenStreetMap, the whole stack of OpenStreetMap. They brought in the natural data sets. I can't remember where that came from, but it's like Earth natural data. I'm not sure where those data sets come from, but so they sync to the whole OSM layer. They're rendered tiles up to 10 levels. Then after that, it's going to be on the fly rendering. So there's, this is the way there's six EPSG zones in the north and they're all defined by these EPSG codes. So these six projections that, so no matter where you're at, you know, if you're on the other side on Scandinavia or Russia, you can pick one of these and it's top down. You can view it in your browser. So in this, I haven't actually asked Steve what this means and I think what he means is he wants to engage these northern country or these northern villages up in Northern Canada to start increasing the quality of the maps that we have up there. So mapping parties, I hope he sends me because I want to go to the Northwest Territories and none of it. I've never been there, but I hope he sends me. So if we go to a little case study, this is the Rankin Inlet is a place in Northern Canada. It's right here. 
And based on Web Mercator, this is what it's portrayed to look like. So now if you use Polarmap.js to display the same area. Now I tried to do a quick overlay. We'll see what happens. So you can see the distortions between the two, right? So this is polar projection and then this is just Web Mercator. So to the untrained eye, this may be good enough, but for somebody who really cares, just that little bit more, you probably want to portray your data in the right projection. And then similarly, if we go, this is a complete map that we've actually presented to the Canadian government. What this has is weather stations around the northern regions. So if you look here, this is a weather station. I'm not sure who owns this. This is Russia, right? So that's the weather station. And we can see, you know, a very nice map. And then we can also see all the temperature data, wind speed, all that kind of stuff. So just from a single map projection, we can see exactly the relevant Arctic regions in a different way. And I think this is Norway, is it? It's hard to look from the top down like that. We're not used to that, right? So there's a temperatures and stuff like that. So, oops. Play, play, where's the play? So we've open sourced this. It's available under Bitbucket and GitHub. It's also already a plug-in in leaflet. So it's right there if you go to the web page. And I'm not a JavaScript developer. I don't know how many JavaScript developers are out there. Okay, you guys are better than me because I can't read JavaScript. I've been told that it's super easy to use. This is the slide that they presented to, told me to present. So I don't know. That looks pretty easy to me. So do you have any questions? This is all I have, basically. And please don't make any technical questions. And if you do have any questions, these are the people who actually worked on it. So if you can write down their email addresses and email them directly. Okay. Thank you very much.
|
Arctic Web Map (AWM) is an Arctic-specific web mapping tool allowing researchers and developers to customize map projections for scientifically accurate visualization and analysis, a function that is critical for arctic research but not easy to do with existing web mapping platforms. It provides a visually appealing tool for education and outreach to a wider audience. Arctic Web Map has two components: An Arctic-focused tile server offering mapping tiles, and a Leaflet-based client library. By providing tiles in multiple Arctic projections, data can be more accurately visualized compared to most Mercator projected map tiles. The open source client library, PolarMap.js, is designed to be easy to use and easy to extend. It does this by providing a simple wrapper for building a typical Leaflet map, and also by providing base classes that can be customized to build a web map for your specific situation. This presentation will present and demonstrate the AWM and PolarMap.js and some real-world applications will also be discussed and demonstrated.
|
10.5446/32077 (DOI)
|
Okay. Second presentation is over me. Let me introduce myself again. My name is Pyong-Hyuk Yu. Please remember my name. My name is very easy because my last name, family name is Yu. So please call me Mr. Yu. It's very easy. Now I'm working at the Korea National Park Service. It's a public organization that manages protected area of Korea and I'm in charge of GIS at the organization. I'm a public official. In this time, I want to share our application cases of open source GIS for the park management. So my title is an open source GIS application for scientific national park management. In fact, in Korea, I did this presentation several times, but also I have another topic about the open source GIS application, but I want to share about this because I'm sure open source GIS is a good solution for the good governance of a good public organization. My presentation order is like this. First, I will introduce about our organization, what the Korean National Park Service is, and secondly, I will explain briefly why I institute the open source GIS in our organization. And last time, I will show you some application cases of open source GIS, representative QGIS. Here is Korean National Park area. This yellow line means the protected area of Korea. In Korea, the National Park area is covering 3.9% of a total level region. And look at this beautiful picture. This is Korean National Park's palestial type, marine type, and historical type. There are 21 national parks in Korea. And Korean people really loves to hike in the mountain. I don't know why, but for one year, over 47 million people are visiting the National Park continuously. It's a very interesting item in Korea, Korean hobby. And 2100 people are managing this protected area, and we call our jobs as park rangers, including me. I'm wearing this nice hat, and we are managing the protected area in Korea. In this protected area, we do various works about the park management like this picture. And also, in this work, we can collect various geospatial data like this picture. For example, we can collect trail park office, shelter, danger area, and information in many things. We can collect, and we have a chance to convert this location to the GIS data. It's called data of our organization. But there are two GIS persons in our organization, including me. I'm one of them. So I was always busy. I was always busy, so I concerned about the diffusion of GIS in our organization. So my concern was what is the best GIS tool for park rangers? And I selected one of the open-source GIS software. It was QGIS. QGIS was a fantastic solution for us. I had a blueprint about the use of open-source GIS. I thought if our employee park rangers can do self-crading of GIS data, and we can open our public data to the public or private enterprises like Naver or Daum. Naver or Daum is like Korean Google Mac company. And then they will create the tile maps, and I can make tile maps any more if this blueprint will be success. And so I studied to make QGIS training manual for our employee, and also I tried to make an education course for open-source GIS, and the results were successful. At the first time, I started to make a tutorial for QGIS like this PDF book. It was just a draft for the education for our employee, and actually I studied to teach our park rangers, our employee, like this picture. They really enjoyed the GPS surveying and open-source GIS software, and they studied to make the various maps about the park management by using QGIS. 
Successfully, we registered our education program as an official park GIS annual education course in the Ministry of Environment, like these pictures. And now we are making our national park maps by ourselves by using QGIS like this picture. This map shows our maps. Now we are creating by ourselves by using QGIS. And as a result, we studied to make various maps by using QGIS, and I defined these application cases as a park GIS. So by using QGIS, our park GIS projects was studied. I will show you some application cases of QGIS for our park management. The main representative application case is to map natural resources distribution, like this picture. This map shows the wild pigs infested area in the national park area by using Hitmap Program. We can map the distribution area of wild animals like this picture. And there are also useful programs for park management in QGIS. By using any mobile program, we can map the inhabitation range for wild animals like this picture. And also the QGIS can be used to control the safety for park rangers. We have a desk point in national park area, and there are many desk locations in the area. And by analyzing the data, we can map the accident location distribution map like this picture. And by using QGIS, we studied to portal data based on geotagging standard. By using geotagging portal and portal program, we can store our portal data based on GPS location. And now we are managing the portal data based on GPS location. And we are opening this data to the private enterprises like neighbor or down company. And there are many useful QGIS program for park management like trail profile analysis program and 3D visualization like QGIS to 3JS program. And sometimes I am using web map service program, 3JS program as you know. And by using QGIS, we are analyzing the satellite data like this picture. Sometimes we need to analyze the lens surface temperature by using open data like lens set or spot data. And QGIS is a very useful tool for analyzing this. This is a sample image of the analysis of lens surface temperature in marine national parks. And vegetation health analysis is being analyzed by using QGIS and open data like a spot vegetation time series. And nowadays we also use the QGIS for the big data analysis like this picture. This map means the visitor density and we are analyzing visitor density by using the big data like telecommunication information, track log information and counting machine station statistics data. And we are public organizations so our mission, one of our mission is to share our public data to the other public organization like IUCN or UN. So we are uploading our public GIS data to the global project like this protected planet. We also using QGIS for the works. Nowadays we are also using QGIS for the drone technology. In our organization we have 13 drones. There are two types like this picture fixed wing drone and rotary wing drone. This is the resultant image of a drone image and we need to analyze this drone image data specifically for the park management like classification or change detection. And now we are using QGIS software. And open drone map is also good tool for data processing about the drone data. And now in Korea, Korean government have official open data policy, the name of this policy is government 3.0. Korean government are public organization people or know about government 3.0 data policy. We have a good government 3.0 data policy. There are some projects like this picture. 
One of the government 3.0 project is national season, by using this map service you can use our beautiful national park location as your destination by using your mobile navigation app. By using this app you can select various national park location as your destination and then you can use this destination like or navigation like this picture. And the second project is national park 3D view service. This service was developed for the weak pedestrian they cannot go up to the peak of the mountain. So we want to transfer virtual tracking experience for weak pedestrian and the project is very similar is Google Street View platform. So look at this picture. Like this picture our park rangers is surveying the 3D view data by using this very heavy equipment. This is a sample video about our national park 3D view service. By using this national park 3D view service you can feel our beautiful national parks in your home anytime anywhere. It is based on the use of open source GIS software QGIS. We also have a smartphone application for the park visitors. The title, the name of this app is National Park High King Information, MPHI app. We developed this service for the public of safety and convenience. By using this app you can use the trail navigation is like a carnivigation and tracking and many functions you can enjoy many useful national park functions in this navigation. This navigation is based on the open source GIS technology. So as a result we our organization received the president prize last year. It was a government 3 point contribution using open source GIS. My presentation is ended. As one of the public official I think open source GIS is a good paradigm for good governance like our organization. Thank you.
|
This presentation introduces application cases of open source GIS for scientific national park management in Korea. Korea National Park Service (KNPS) is a public organization that manages almost all domestic national parks. GIS is a core technology for the park management, but the cost of commercial software had been limited the diffusion of GIS. Now, park rangers of KNPS are using QGIS that is a representative open source geospatial software, and they make themselves various GIS and remote sensing-based maps. For this, KNPS launched a QGIS education program for employee training. As a result, they started making maps using QGIS and many useful plugins, including Animove for QGIS, Semi-Automatic Classification Plugin (SCP), and Oceancolor Data Downloader. A variety of natural resources maps can be made from GPS field data, and time-series satellite images can be processed into climate change effect maps such as forest health, sea surface temperature (SST). Moreover, a graphical modeler feature of QGIS enables an automatic data processing. The Drone Flight Simulator called Park Air System, is also being developed using open source geospatial libraries. Using QGIS, KNPS makes all geospatial data like a trail, facility, and natural resources and is opening to the public freely. KNPS won the President's Prize in 2014 for the hard work.
|
10.5446/32078 (DOI)
|
감사합니다. I am working in KOREA. My presentation is Open twohost ж met�で 이 자agens jologyech part on my computer and my name is My preview presentation is about developed geometry hydolith, what-to-se bud and perless-UD dynamics model humane- bedroom. And to develop D-reduction using the DjS in future. 만족 AG purpos Kirk 비교 등 우려 되었고,osing 내 물IR 경 enfer وت � Superman 모쌤 Да에 제 research objective is to try to apply general political, hydrological, new type of method on engaged mountain basin through a standard G-square WMS model. And try to suggest proper estimated method for engaged mountain basin. It developed over FSCARPUB. It is developed by using G-square WMS M-Model and GCH-Permula. Next, proceed for research. First of all, building data form, GRS data and hydrologic data. And actually, unit hydrograph, next review of unit hydrograph method. For example, GCH, Clark, Snide, SSSM, and so on. And it is classified as weather one, not the fluid routing method. Name is unit hydrological method, fluid routing, and so on. And next, the runoff review by fluid type event. And the press-flood solution one. And the research trend is, you can read it, press-flood, general-project hydrology, and so on. And the background utilizing open source GIS is usability is coastal reduction by using free offer. And can be used in various operations. Pro-N can be revised easily. And new and various functions can be used through continuous update. And use of this study estimates of anthropological factors, DEM, orthocell map, and so on. And estimate of hydrologic variance, bridge ratio, extension ratio, areas, and so on. And it's online, is developed GIS-based geomorphological or cell model. In order to use the geomorphological hydrograph to engage the mount basin, GSCFWMS, and develop KGCH and FSCRPUV. And using KGCH and developed GSCFWMS model, and is linked to FSCRPUV model. study areas in case KGCH, SCRPUVs is Andong-Tang, etc. and the FSCRPUV is Soeum-Stem, Odm-Mounting. Next is dataset like this. And generation of anthropological data using QGIS is the result. DEM, Soeum-Mount, Land Use, and Gruev, Soeum-Basen, and Crecipite. Next is Anington-Rothocell topological data like this. And Rothocell characteristic and GCH parameters. CN number, bridge ratio, extension ratio, and so on. And input data for Waskin and Kunji, Rautin characteristic data is like this. And the data of Soeum-Stem in Odm-Mounting, WS, characteristic is KGCH parameter. And K-Korea G1PROGDHU development, which try to drive a pressure of G1PROGDHU driver. And we have found interesting thing about 50km, square kilometer under and over. So we divide it 50km, square kilometer under over and partly we drive KGCH, KGCH. And next is G-Square development. When do2, do3, do4 is what said R1, R2 is channel, J1, J2, J3 is junction point. R1 is reservoir site. So GCH parameter and topological variables is calculated by using the jazz program. What said R1PROGDHU is calculated by KGCH, GCH, CLAC, and so on. Prog-Routing is channel routing, reservoir routing is machine-cunji or pulse, modified pulse. pulse chart is next. And grid hydrographic comparison by duration time, GCH, CLAC, SNIDER, SCS. So peak charging time of concentration of unit hydrograph is calculated. And actual representative unit hydrograph and GCH hydrograph, hydrographic time of concentration, TPHE is very similar to 12, 11.5. QPHE is very similar, all cases. GCHF parameter is very complicated. ProgDHU can be simplified through GIS technique and program. 
And next is Strengths and Mutants of Unit Hyrograph. We applied actual product event, rainfall, runoff simulation using actual precipitation data at single end, divide what said. See what said is comparison CLAC, linear, GCH, nonlinear, GCH, unit hydrograph, divide what said is the same. CLAC method, single water set is not so good, TPHE and QPHE. But GCH linear model is so good. And GCH nonlinear model is very exact and excellent. correlation factor is very high. CLAC method divide what said and GCH linear divide what said is similar to all. And design procession, we try to undo them, each input data. And compare the design procession, frequency for unknown water set of 48 hours, you compare the G2MNS and leader HHMS. The result is very similar. And also we developed fresh-forwarded prediction model f-scale PUB, TUB. This part is C-scale WS and cleaning system and TSM polygon method and WS is f-scale PUB and pattern. Fresh-forwarded WS, W3 for example of what said can be calculated like this. And we calculated the result of fresh-forwarded WN3M4. When EASER loss I0 is I8 is 0, critical depth is 0.5 meter, critical discharge QD is 17.07 CMS. It is travel time critical rainfall 9.9 7 10.5 8. It is display pressure protection warning pressure is if rainfall duration 10 minutes over at over in condition of water level 0.5 meter 9mm is warning case. And 10mm over evacuation 1.1 11mm over case is evacuation 1.2. So we run this case of this. So we developed Korean Geo-operative hydro-cleaner hydro-procaged HHM and G-scale build up G-scale MNS. And developed fresh-forwarded production system F-scale PUB. By developing KGCH is a simplified calculation process from the pre-existing complicated GCH parameters. Thank you.
|
In several decades, understanding and predicting the flood discharge at mountain and ungaged region have been of great concern to the hydrologists an water resources engineers n real fields. Even f Several methodologies and approaches relating geomorphological aspects of those basins have been developed and applied to solve that questions, it is still in the stage of progress in this field due to its difficulties obtaining appropriate geomorphological information and systemizing the geomorphological approaches to traditional approaches. Moreover, the flash flood prediction and warning system at the ungaged regions have been interested in the hydrological fields recently because of its high frequent occurrence and serious damage features caused by climate change ans land use practice over the world. The flash flood problems have been known as one of challengeable topics, otherwise the related researches may not be satisfied until now. In this study, we developed two model system which can consider both the flood hydrograph analysis based on geomorpho-hydrological theory called Open Source GIS-based Geomorpho-hydrological Watershed Modeling System(OSG²WMS) and can be applicable to the flash flood warning scheme called Flash Flood Prediction model in Ungaged Basins(F²PUB). In addition to adapting GCUH process in G² WMS, we developed the modified geomorpho-hydrological unit hydrograph method called Korea-GCUH(K-GCUH) which contained the watershed and river characteristics of own Korean mountain regions and can be applied simply with basic river characteristics without complex geophysical analysis by Open Source GIS.
|
10.5446/32085 (DOI)
|
Yeah, so my name is Charthomb Jersit. I'm a developer from Norway. I work at a company called Moorkart. And I'm here to talk about revolutionizing map views in Norwegian newspapers. Now I realize that's a quite bold title. And if you came here thinking you're going to see a Norwegian version of Mike Vastak, you might end up being a bit disappointed. But I am going to talk about generating dollars using cheap student labor, open source software, and open data. And before I start with that, I want to say as well that normally when people get up here they have a lot of fancy stories about how their company has helped out in the open source community. I cannot really say the same yet for the company I work for. We're a very old, highly traditional software vendor who works off of selling, developing, and maintaining proprietary software. But in 2013 we started a new branch called Web Atlas and now we're really trying to get into open source web development. So the backdrop for this story is the land registry in Norway and some events that took place in 2014. Prior to 2014 the digital version of this dataset was monopolized by one state owned company. As of 1st of January 2014 however, this was made publicly available and Web Atlas sees this opportunity to get hold of the dataset. You still have to buy it but at least it's available for people. So what is exactly land registry data? Well it's basically a transactional dataset who sold which property to whom and for how much money. And how would that be interesting? Well especially in Norway it's very interesting. I think this goes for any major city in the world. The prices are going up and when they are going up as in Norway for 25 years straight you end up with either not being able to get into the market, you can have bubbles. At the end of the day it's a very interesting topic for many people. And how has it been covered in this prior to 2014 situation with the monopolized dataset? Well as you can imagine with one state owned company the only owner of this dataset you end up with not too many exciting uses of the data. So the land registry data was posted in tabular form every 30 days and then I mean in a literary newspaper not online. So and this is despite the fact that in Norway we use online newspapers way more often than we read paper newspapers and this also holds true for the older generations so that this was the fact, the case back in 2014 is simply quite bad. So the challenge for us then was to turn this data into the old tabular form that we saw in newspapers into something more online and available for people that actually are on the internet. And the solution that we had for solving this challenge was to hire two summer interns. We hired them for two months and at the end of that project we were supposed to start owning the market. Now it needs to be taken into account that I mean there were no one else there so basically as soon as we got something into this market we would own it. So that was our basic strategy. And doing that is very easy when you can stand on the shoulders of giants. So we basically took this data set and just squeezed it into post GIS. We're getting updates every 15 seconds from the mapping authority through that deal that we kind of got with them. We then used G-server to serve the data via WFS calls which can then be visualized in leaflet that is the basic outline of the whole thing. 
On top of this we also, the students also used a bit of PHP to kind of tie it all together for handling different user requests and so forth. It is also important to say here that the students actually didn't have that much programming experience. Eda here is an electronics engineer who hadn't really been programming at all. Turbine is a bit of a geek. He bunched of JavaScript from earlier but he had never worked with post-GIS or geo-server. So that was kind of the outslides like the base for the whole project. And they got to coding together with a local newspaper that was kind of our pilot customer who helped them realize what do we need, how should this be solved and so on. They also tried to do a bit of open source development. That is a bit difficult. I mean you can just throw something out on GitHub and expect it to become a popular library but if you want to check it out you're free to check out our GitHub account on GitHub. The embed builder that they came up with, it basically allows you to select a region of interest and different filters based on for instance time for the transactions. Also the amount for the transaction that occurred over time. The different types of property that was sold, was it an auction, was it a foreclosure and so on. This makes it very easy to tailor the data set to the news article that you are writing. And it's also extremely easy to use. You basically click the get code button there and then you can just squeeze this into your news article. This is something that has been used by a lot of old editors and journalists in Norway. It's simple, it's no carto db but it works quite well. This is where you end up with when you create, when you click the get code and then you paste this thing into your news article. Then you get a bunch of markers in a map with the tag information for the different transactions that has taken place. It also contains the history of the transaction so you can pan back to the beginning of the data set and see who originally bought the house and so on. This actually turned out despite the simplicity of the solution to be a great success. Of course part of the reason is that it was released in 2014 and there was no one else doing this at the time so it was very easy to get in there. We basically just had to contact people and suddenly we had paying customers only four months after the start of the project and one year after the majority of the newspapers in Norway actually used this solution. So you can see that this really shows how open data has, opening up data from the government side, from the mapping authority and so on. It really helps generate or improve innovation. We, Iida is actually still working at our company. She basically created her own job by finishing this project and she's now got a 20% part time job while she's finishing her studies. We also see that we were seeing a lot of, we would find that we have improved the products and services in this region of the media in Norway. But we also see that all our competitors have, after we finished this project, started challenging us as well and created maybe even more impressive things. We also see that the news media has really gotten the rise up for maps and started looking more into maps. But really I would say that this was one of those sparks that kind of started it all. The road ahead, so Norquert has basically, by doing this whole stunt, started a new market area. 
Now we can kind of use the reputation that we gained and further develop the existing products but also develop other products. So actually last night I think there was a general election in Norway and we then created a live update map for the election results in Norway and that is kind of something that has spun off of this project because we basically got into a new market and that really opens a lot of possibilities. You kind of get a foot in the door in many ways. On top of this we're also partnering up with CartotDB actually. We have realized that a lot of what is needed here is actually the underlying dataset. You have so many really good online applications now and clearly this does not really compare to CartotDB for instance. But what we can help the customers with is actually gathering all the datasets that are needed. We can compile them, analyze them and generate new datasets and then just directly dump it into CartotDB. So that's actually a new business model that we have started using now and I think the journalists really enjoy it because now they can use their favorite tool for creating maps for their news articles. So to round up, like I said in the beginning it is probably a bit of an overstatement to say that we have a revolutionist media but I would say if you look at the past and what it looks like now, we really helped push forward with the maps in Norwegian media. It has also actually changed our business strategy and we have realized things along the way and it really helped us to kind of embrace open source in open data in a new way. Like I already have said, it has increased innovation in the media industry in Norway and yeah, we got a new student employer. Maybe we'll keep her after she graduated as well. Now, I said that I talked quite quickly here so I think we are actually before schedule now. But yeah, that was it for me. Are there any questions? Do you want to say something about your return on investment? Oh yeah, the return on investment is actually pretty neat. So basically we had two months, two students, so I don't know what that is in dollars but maybe 20 and it's at least tenfold the annual revenue that we have from this is actually ten times the initial investment. Of course, we still have to go and maintain code and so on but it has definitely been a very profitable endeavor. I hope that can kind of inspire people to do the same, like use open source and use open data and create more innovation. Was there any resistance or pushback to using open source? Was that an unfamiliar topic, the open source software approach to types that had ultimately decided, yes, let's go ahead and do this? Are you thinking inside the firm or yeah? Okay, yeah, so definitely I think Northcott a few years back there would have been a lot of resistance. Like I said, this branch started off in 2013 but we are of course seeing a bit of like, you know, inter department discussion because if you have based your entire business model off of proprietary one for so long, it takes a bit of time to kind of get out of that mindset. I think people are, you know, just by looking at what's out there, it's kind of like there's not really a discussion I would say anymore. Hello, Jokodon Ti from National Lounge of the Film and do the main media houses or newspapers do they not have any own news discs dealing with these kinds of services in Norway? Because in film and play and even the public broadcast company do have their own new digital news service? 
Yeah, that's absolutely free for Norway as well. So NRK which is the Norwegian broadcast channel like BBC, they have their own team, they're actually using a lot of CartotDB. Also the major newspapers do have their own teams. So who we basically targeted with this approach was not the major newspapers like Vega or Afton Posten because they have their own team or teams working on that. But at the end of the day, whether or not you have 20 small ones or two big ones, it's going to generate the same amount of money. So to us that's not so important. But yeah, they definitely do have their own teams. And that's also why we kind of want to partner up with CartotDB and maybe provide them with data because then that's often what they use, I've seen at least. And so then we kind of can get in there as well without actually developing the code. Yeah, we'll answer your question. Yeah. Why was so early in developing this idea? Were other actors not aware of it or were they not interested in the possibilities? It could have been a bit of a lucky shot actually. So I wasn't even hired when this started. But I think it was a bit of a shot in the dark and then it really just hit the hammer on something. And then they saw that this would actually take off. So they were just a bit lucky. But I mean, soon after, so basically there came things on the market soon after. So maybe they had started and we just don't really know how much they were influenced by us or not. Okay. Thank you. Thank you.
|
Norway represents one of the countries with most newspapers and media outlets per person. One topic that has an everlasting interest is land registration data - or more commonly: Who bought which properties and what was the price. Land registration data has always been a public data set. Every citizen can request specific information on who has rights to which properties. Up until 1. January 2014 the digital version of this data set was monopolized by law to one vendor - obviously inhibiting innovation. Starting in 2014 - land registration data has been opened and is now accessible to everyone. Webatlas seized this opportunity and hired two summer interns. The task was fairly easy: "Revolutionize the way land registration data is used in local newspapers." After two hard-working months the resulting web application was used by a local newspaper with great results. The newspaper could finally showcase an interactive leaflet map displaying all real estate transactions in the area of interest. Behind the scenes the interns experienced a steep learning curve using PostGIS, GeoServer, Leaflet and a range of excellent plugins. Some of the more stable parts made it to the general use with an Open Source license on GitHub. Today. The solution is used in the majority of Norways newspapers - now showcasing more maps than ever! All made possible by two excellent interns, open data sets and well proven Open Source software components.
|
10.5446/31985 (DOI)
|
Good evening everyone. I am Sarthak Agarwal from International Institute of Information Technology Hyderabad India. I am here to discuss the performance of new SQL databases versus SQL databases with respect to routing algorithms. So this is our motivation of the research. Suppose we need to route in a place that does not have an internet connection. Suppose I am from India in Korea. I do not have an internet connection right now and I want to come from my hotel to the conference. So what should I do? I have a smartphone that is capable of doing the routing but I do not have an internet connection. So it is kind of useless to have a smartphone and not have an internet connection. Why do we need an internet connection to know the directions from one point to another? Why cannot we deploy the routing server on the phone itself? Is it possible to replace heavy SQL servers with something that is more scalable and in a way efficient to be deployed on the phone server still give good results for routing? Do we need to query SQL servers every time? This was the main motivation of the research. Is there any alternative database technology that could be used for routing on much smaller mobile devices? So currently special databases we have two technologies in databases SQL databases and no SQL databases. With respect to special context SQL databases we primarily use Postcase SQL POS GIS version of POS GIS as our SQL database and for no SQL databases we have many implementation currently. We have MongoDB some function that is for a special context and we have Neo4j databases as well. There is an extension for Postgrace SQL in the new version. They also support some functions of no SQL databases, document based databases but we have not considered those implementation in our results. SQL databases primarily special databases are currently based on relational database management systems where we have tables and where we join those tables to retrieve knowledge about the database. POS GIS is a very famous implementation of IDMS and special databases. They have a great potential to store and manage very large datasets but however they sometimes fail scalability and agility challenges as well. Special SQL databases generally have a fixed schema. We have to declare the schema of the database before we actually use them in the tables. We have a fixed schema for a geometry column. If we have a special column we cannot add non-spatial attributes in that column. As well as there is a lot of joins in those queries and lots of joins requires a lot of computations. In routing algorithms, for example, if we use Digixtra's algorithm, we only need a node table. We need a table from one node which gives the distance to all other nodes. We need a single table but can that be in a way used to increase the performance. Sometimes we need non-spatial attributes for routing as well. For example, if we have a hotel from which is the destination, we want to know the timings of that hotel or that particular restaurant. These non-spatial attributes cannot be integrated without having a different table. If we need those attributes, we need to make a join from our special table and non-spatial table. These kind of challenges we face in SQL databases. For our comparison, we have used PgRouting as the SQL candidate. PgRouting is based on PgRouting, which is an abstraction over PgRouting, which provides spatial functions which to pose SQL object relation database. 
PgRouting extends Posias Geospatial Database to provide Geospatial Routing and other network analysis functionality. But for our case, we have used PgRouting just for routing. We have neglected all other functionalities of PgRouting. We have used symbol PgR underscore Digixtra's function for our performance analysis. PgRouting has support for many other algorithms as well. But for our case, we have just used Digixtra's algorithm. If we talk about no SQL databases, these are non-relational databases. We do not have tables and rows in no SQL databases. We have collections which have document. In no SQL databases also, there is a great potential to manage and store very large queries. The query response times is better in most of the cases in no SQL databases. One of the primary advantage of no SQL databases is that they can be scalable over multiple servers. So if we talk about WebGIS, currently we use many engines. WebGIS engines use Posias. But if we use no SQL databases for WebGIS, which are scalable over multiple servers very easily, there is an advantage of that also. No SQL databases can handle rising data storage both vertically and horizontally. By that, what I mean is horizontally in a way that if we add more attributes to a particular row, that can be easily done in no SQL databases as they are schema-less databases. More and more rows can be added anytime. Special applications deal with problems like over time evolution of schema and data size. The schema of our special databases is over evolving and it changes with time. We have some attributes to a given geometry right now. We can add more attributes for that geometry. So we need a schema-less database as well. So for no SQL database, candidate we have used MongoDB. As in the previous discussion, we found the performance of MongoDB native functions somewhat better than the Posias native special functions. MongoDB is a document-oriented data store. It is a high performance, but as well retains some friendly properties of SQL as well. One of the most important advantage of MongoDB that we found was support for GeoJSON objects. GeoJSON objects are designed for storing geometries. GeoJSON objects can store geometries like point, line strings, polygon as well as non-special attributes as well. In MongoDB, we can have multiple GeoSpecial indexes per collection. We can have 2D index for flat geometries. We can add 2D sphere index of round geometries. So indexing helps in improving the query performance and MongoDB does not lack on indexing than Posias. So data importing is very easy in MongoDB. We have OSM files that support XML that can be easily converted into GeoJSON objects just by simple passing. However, there is no support for arteries in MongoDB right now, but in the near future, there might be support for arteries. If we compare special databases, SQL databases are not primarily designed for distributed systems. However, in non-SQL databases, they could be spread over multiple servers. However, SQL databases cannot be distributed over servers. But the performance of non-SQL is much better than SQL databases. SQL databases are good for structured data and unstructured data like point and lines are not that much suitable. However, non-SQL databases are schema-less databases where multiple geometries can be stored within the same column. For example, if we are storing a row-size house, a table of house, some of the house may have parking, some of the houses may not have parking. 
In SQL, we need to have a different table of parking and we have to join both the tables to know the association between housing. But in non-SQL databases, we can have that information within the same table that coordinates for that parking and the position of the home. So theoretically, it seems that non-SQL may perform better than SQL for routing purposes. But does non-SQL hold a promise in context of special databases and special queries? That is the question we were trying to answer. So how do we compare the both of the databases? For POS-ES, as we use PZ routing, we use the OSM data for our performance analysis. OSM data for Australia region was considered. This is the Australia region. Firstly, small boxes were used and for more tests, the size of the box was increased linearly. Standard machine was used to run all the tests. We didn't run any of the tests on any cluster and only simple computers, I7 machines were used to run all the tests. Small data size to very big data size, as we discussed, the data size increased linearly. All the data was processed using in-memory and no secondary memory algorithm was used. Everything analyzed was done in primary memory. For non-SQL databases, we used MongoDB as our primary engine. MongoDB do not have a routing function. So we wrote a C plus plus wrapper for MongoDB. We used the C plus plus MongoDB C plus plus driver to do our write and read operations. Firstly, custom import function was written for OSM data to import OSM data into the database. Then we used adjacency list to store graph in the database. The list was in a sense, it has two columns. Firstly, the source node and the coordinates for the source node and non-special properties of the source node. The next column will contain all the points directly linked to the source node and the distance between the source node and that particular node. Like this, we have stored all the points in our database as an adjacency list. Each row have two columns, the source node and all the nodes connected to it and their respective weights. Standard Dijkstra's algorithm was used and written in C plus plus using MongoDB C plus plus driver. We used the standard algorithm and no improvement or specification was changed in the Dijkstra's algorithm. MongoDB was used to read the values from the database. The read operations, the find function of MongoDB was used to read all the values from the database. In this also, all the data was analyzed using primary memory and no secondary memory was used for the analysis. Now, we talk about the performance. Since we wrote our own Dijkstra's function and only read operations were performed on MongoDB, the performance is not very better than PG routing. The import function took most of the time. Most of the time, it was taking the data from the USM file and parsing it and storing in our MongoDB database. Very simple implementation right now. Just to test the initial performance, in this research, we wanted to test whether or not MongoDB is even close to PG routing for routing algorithms. Performance was restricted by the implementation and optimization of the Dijkstra's algorithm and the C++ driver. As the read and write operations were written in C++, the performance is mostly restricted how well our algorithm is written and is there any optimization possible in the algorithm. More tests are required for the analysis and the more optimization are needed in the algorithm. 
Many assumptions were made in the test; for example, normalisation of the weights was done using the same metrics as pgRouting. What pgRouting does is this: we have different types of roads in the OSM data, and pgRouting classifies those roads based on its own metrics and assigns weights to them. The same weights were used in our research as well. We had only a nodes table, while pgRouting maintains classes, ways and relations tables; we did not maintain any ways or relations tables in our research. Results: pgRouting performs much better right now than the current MongoDB implementation, but we are looking for more optimisation, and since we got better results with MongoDB's native functions, we are quite positive that MongoDB can perform better in routing algorithms as well. However, pgRouting fails at very large data sets, while MongoDB still performs, though not very well. Performance is better in some cases, but no conclusive results can be drawn from the test. Discussion: there is potential for this to be implemented on servers with limited computational power, since MongoDB is a document-oriented database. So there is a potential that it can run on devices with limited computational power, such as our mobile devices. This could be helpful for routing in remote and rural areas: if we are successful in making an engine that does routing without internet services, we can have routing in remote and rural areas as well, where there is no reach of the internet. It could be really helpful for villages and farmers, and it could be a milestone for WebGIS too, as we discussed before. As for the future, this project is right now at a very initial stage and we are going to expand it quickly. It is a simple implementation currently, just to test the initial performance of the MongoDB database. We need to optimise the implementation of Dijkstra's and find alternative platforms to run the tests. We are planning to use bounding boxes and other spatial features to optimise the performance of Dijkstra's, and to use the native MongoDB functions like line intersection, point containment and the near functions in our routing algorithm. We are also starting to port the current pgRouting/PostGIS implementation to MongoDB and test whether that performs better or not. That is it for the presentation. Any questions? Thank you, sir. Is there any question for the speaker? Thank you for your effort so far. My question is that right now there is OsmAnd, a mobile application that doesn't use the internet. I don't know if you are aware of it. If you can take a look at the algorithm it uses — all it needs is that you download the data onto your phone and everything is ready to use. What is the name of the application? OsmAnd. OsmAnd, yes. Oh, I am not aware of that. Yes, try and look at it so that you can improve. Okay, okay, I am going to provide you with it. So, are you planning to run MongoDB on a mobile telephone — is it possible? Well, in many areas the internet is not there. That means the database has to be on your device. Yes, sir. So how do you plan to have this database on the mobile device? MongoDB servers do not require as much computation as our SQL databases require. We haven't thought through how we are going to do it, but it seems from the papers we have read that it is possible. And if it is not possible, in the future we are planning to build our own routing database, for example on top of SQLite. Yes, SQLite already has some routing functions.
So maybe if you want to run on a mobile device, SQLite may be the better option, I think. Yes, sir. CouchDB... Yes, sir. CouchDB also, yes, sir. Another small thing: sometimes you use the words rows and columns for MongoDB — did I understand that correctly? Yes, that is a little bit of a misnomer, because NoSQL has no rows and columns; I used those just for the analogy. I have a question about the mobile phone, because you want to run this algorithm on a mobile phone, I mean on a cell phone. I know that SpatiaLite has been released, implemented on top of SQLite. It's a very small database, we can store the network in SQLite, it's very simple, and it also supports some indexes. I see that MongoDB doesn't support the R-tree, so I think it's not that efficient if we don't have a spatial index. So have you considered comparing MongoDB, or your project, with SQLite? Not right now, but as the other gentleman also recommended, we'll try to analyse the performance of SQLite as well. Thank you, sir. Thank you. Thank you. Thank you.
|
With the increased shift towards geospatial web services on both the web and mobile platforms, especially in user-centric services, there is a need to improve query response time. The traditional routing approach requires a server to process the query and send the results to a client, but here we are focusing on query processing within the client itself. This paper attempts to evaluate the performance of an existing NoSQL database and an SQL database with respect to routing algorithms, and to evaluate whether or not we can deploy the computations on the client system only. While SQL databases face challenges of scalability and agility and are unable to take advantage of the abundant memory and processing power available these days, NoSQL databases are able to use some of these features to their advantage. Non-relational databases are more suited to handling the dynamic rise in data storage and the increased frequency of data accessibility. For this comparative study, MongoDB is the NoSQL engine while PostgreSQL is the chosen SQL engine. The dataset is a synthetic dataset of a road network with several nodes, and we find the distance between source and destination using various algorithms. As part of the implementation we are planning on using pgRouting for the analysis, which currently uses PostgreSQL at the backend and implements almost all the routing algorithms essential in practical scenarios. We have analyzed the performance of NoSQL databases for various spatial queries and have extended that work to routing. Initial results suggest that MongoDB performs faster by an average factor of 15x, which increases exponentially as the path length and network data size increase, in both indexed and non-indexed operations. This implies that non-relational databases are more suited to multi-user query systems and have the potential to be implemented on servers with limited computational power. Further studies are required to identify their appropriateness and to incorporate a range of spatial algorithms within non-relational databases.
|
10.5446/31986 (DOI)
|
Good afternoon, Chairperson, ladies and gentlemen. I am going to talk about a challenge we had in an attempt to improve and optimize our vaccine delivery project in Northern Nigeria; that challenge is the reason we had to look into this. Before I start, permit me to briefly make some acknowledgements. I am grateful to all of you for taking the time to attend this session, and I am also grateful to the organizers for giving me the opportunity and a travel grant — I appreciate it very much. I also express my gratitude to my organization for its support, and to everybody in our organization. This is what my outline looks like: I try to give a justification for this study, provide a brief conceptualization of how we think we can solve it, then my methodology, the outcomes and the findings, and we also have some recommendations. The challenge that we have — let me be detailed about this problem — is the fact that in recent times there has been a proliferation of open source routing tools, and a lot of brilliant ideas coming up. One of them is what the previous presenter just told us about. Some of these ideas are wonderful, but there are quite a number of challenges that go with this number of tools. When we tried to work out how best to use them, we realized that the results we get — the routing outputs from these tools — are not the same. And then the question is why? If these routing tools use the same base map, and we presume they use the same algorithm in their routing, then how come the output is not the same? That is a big challenge for those of us who want to use the routing. We use these outputs to plan the project and to run costs, and the cost is based on distances, because we have a cost per kilometre. So if your distances are not right, if you are not sure of the distance covered, it has a lot of implications for planning. This is the huge challenge that informed the need to actually evaluate these open source routing tools, look at their strengths, look at their limitations, and then provide a kind of ranking based on certain parameters we have identified. That is, more or less, the justification. There is also the need for us to eliminate software costs — I am sure every one of us would like to — and certainly to optimize our delivery protocol. These are some of the reasons why we think open source routing tools are the way to go; it is obvious that we need to take advantage of this technology. We also want this exercise to provide a built-in confidence level in the use of open source routing: it should give confidence to people who want to use these tools. We all know that most routing uses graph theory or a shortest path algorithm — that is the basic conceptualization underlying these tools. But from the background studies we did, you would be surprised that some software-based routing tools effectively use an as-the-crow-flies measure, as you can see there. We found that very worrisome, because it is not reflective of what we should have when compared with the actual route on the ground. So, in trying to carry out this evaluation, the approach we used is that we considered a drive test survey.
In this study we are considering about five open source routing tools. We carried out a desktop routing estimation using these five, and we compared the outputs with a drive test survey that we carried out over roughly 400 kilometres of distance. After this, we applied a multi-criteria ranking procedure to rank all these tools based on the parameters we defined — I will give you the details. As for the drive test survey: a lot of people use the drive test approach for various reasons, but we used it to provide a benchmark, a standard against which to judge whether the output from these routing tools is reliable or not, and to come up with an error margin that we use as a threshold to determine how good the output is. In setting up the drive test, we tried to be sure that the distances we wanted to cover were representative enough. We could not do this routing for all the health facilities in our project, so we used an online calculator to determine the sample, considering the farthest and nearest facilities from the cold store from which the drivers pick up, for each of the areas we defined. We looked at the farthest and the shortest distances and used an online calculator, and this is the sample distribution that came up for us. In all we have about 404 facilities, and the minimum sample size required is still around 26; in the survey we actually covered about 400 kilometres across the sampled health facilities. In carrying out the drive test survey, our driver follows the route we had already defined that he is supposed to take. We used the OsmAnd app that I mentioned to record the travel path the driver takes — it is a mobile app that records the route travelled. At the end of the day this allows us to compare the travelled path with the one we identified as optimum, and to calculate and extract the distance. These are the facilities — about 10 facilities were considered in this survey — and we had about three drivers covering them. And these are the routing tools we used: QGIS with the Road Graph plugin (Road Graph is the routing algorithm we used there), the popular OpenStreetMap routing machine (OSRM), Google's Maps Engine, GraphHopper, and OsmAnd. Those are the five routing tools. We carried out the desktop routing estimation using these five tools for the 10 facilities. It is also important for me to draw your attention to the two screenshots I have here: one is for GraphHopper and the other for OsmAnd, and for these facilities you can see the difference — they chose different routes even though they use the same base map. This is an example of what we are talking about: something is definitely not the same; the routing algorithms behind these two are quite different.
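As a rough illustration of the comparison step, the sketch below computes the length of a recorded drive-test track (e.g. a GPX export from the tracking app) with the haversine formula and reports the error against the distance a routing tool estimated for the same facility. The file name and the tool's 12.4 km figure are made-up placeholders, not values from the study.

```python
import math
import xml.etree.ElementTree as ET

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def track_length_km(gpx_file):
    """Sum segment lengths over all <trkpt> points in a GPX track."""
    pts = [(float(el.get("lat")), float(el.get("lon")))
           for el in ET.parse(gpx_file).getroot().iter()
           if el.tag.endswith("trkpt")]
    metres = sum(haversine_m(*pts[i], *pts[i + 1]) for i in range(len(pts) - 1))
    return metres / 1000.0

drive_km = track_length_km("facility_07_drive.gpx")  # hypothetical recorded track
tool_km = 12.4                                        # hypothetical desktop estimate
print(f"drive test: {drive_km:.2f} km, tool: {tool_km:.2f} km, "
      f"error: {abs(tool_km - drive_km) * 1000:.0f} m")
```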
Talking about the methodology: for the multi-criteria approach we used, we looked at the parameters — the conditions under which we ran the tools — and at how we come up with the appropriate score to assign to each of them. In this table you can see the criteria we used, about six criteria. The first one considers the error margin between the drive test survey we did and the output derived from the desktop routing estimation; the unit of measure for that is metres. Secondly, we looked at the base map each tool uses: how complete is it, and what is the quality of its content? The method we used was to create clusters — five clusters over the same area for all of them — and find the average completeness compared with the satellite imagery we have. Then the capacity for multiple routing: not all of them have multiple routing options. Most of them, like the OSM routing machine, can only route from a single origin to a single destination — although it is possible to do multiple routing at the back end; we have a back-end configuration on a server that does multiple routing, and my colleague gave a presentation about that earlier today. So it is possible, but the tool available to the public only allows single routing. This is an issue for us because in the delivery system we run we do not just deliver from one point to another; it is multiple delivery. So that is a condition we also looked at. Then support for traffic inputs: hardly any of them provide support for you to include traffic as a condition. When you do this in the metropolis, traffic is a big issue; you need a way to factor in traffic as a constraint or impedance on your routing. Then the routing platform: the applications we looked at fall into three categories — desktop routing, online routing and mobile routing. The platform turned out to be very key here: what we found is that the two tools that run on the desktop appear more precise in terms of geo-positioning accuracy than most of the online ones. Finally, we considered the option for alternative routes, and in this category Google Maps seems to be the only one — the only one that will tell you, if you don't want to follow this path, here is another one you can take. So we found a way to look at these six parameters and provide a ranking. These are the measures for the ranking, and this is how we defined the thresholds for the criteria. For the first criterion, we looked at the range of the error reported between the drive test survey and the routing estimation — that is the minimum and that is the maximum — and we found the median; tools below the median of the range are assigned one and the others zero. For the second criterion, we looked at base map coverage, assigning one where coverage is good and zero where it is low. For the remaining yes-or-no criteria, this is how we assigned and normalized the values for all the parameters.
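A minimal sketch of the scoring step described above, in Python. The criteria names follow the talk, but the 0/1 scores, error values and the simple median rule are placeholder assumptions used only to show how the per-criterion scores roll up into a cumulative ranking — they are not the study's actual results.

```python
# Six criteria, scored 0 or 1 per tool (placeholder values, not real results).
criteria = ["error_below_median", "basemap_quality", "multiple_routing",
            "traffic_input", "desktop_platform", "alternative_routes"]

scores = {
    "QGIS Road Graph": [0, 1, 1, 1, 1, 0],
    "OSRM":            [1, 0, 0, 0, 0, 0],
    "Google Maps":     [1, 0, 1, 0, 0, 1],
    "GraphHopper":     [1, 1, 0, 0, 0, 0],
    "OsmAnd":          [0, 0, 0, 0, 1, 0],
}

def error_criterion(errors_m):
    """Criterion 1: a tool scores 1 if its drive-test error is below the median error."""
    ordered = sorted(errors_m.values())
    median = ordered[len(ordered) // 2]
    return {tool: int(err < median) for tool, err in errors_m.items()}

# Example with made-up per-tool errors in metres.
print(error_criterion({"QGIS Road Graph": 950, "OSRM": 420, "Google Maps": 300,
                       "GraphHopper": 150, "OsmAnd": 1200}))

# Cumulative score and ranking.
for rank, (tool, s) in enumerate(
        sorted(scores.items(), key=lambda kv: sum(kv[1]), reverse=True), start=1):
    print(rank, tool, "total =", sum(s))
```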
Okay, now let's talk about the output. The outputs we get from the routing tools are here — these are the outputs for all of them. For the 10 health facilities, the distances for each tool are shown. If we look at them, we see that the total coverage ranges from about 467 to about 488, with the maximum happening for the OSM routing machine. The difference, this discrepancy, is about 2.36, which is about 4% of the entire coverage. Four percent can look small to us, but it is enormous once you convert it into the kind of planning we do — it may be about a dollar to deliver a vaccine per kilometre — so it can be enormous when you look at it cumulatively. This precision is very important for us, and this is the reason for the discrepancies I was talking about earlier. Sometimes when you look at it, you can see two different routing patterns that decided to take different paths. Geo-positioning accuracy, base map quality, and variation in the built-in routing algorithms are some of the reasons for these discrepancies, and we need to find a way to deal with them once and for all. These are the discrepancies we noted across all five tools for the whole set of routes considered for these health facilities. Now, this is the outcome of the ranking — maybe I should throw a little more light on it. Based on the table I showed, and the way I defined the thresholds for each criterion, this is the outcome for each tool. The QGIS column is QGIS with the Road Graph plugin. For the first criterion, QGIS and OsmAnd did poorly — they did not do well at all. For the second criterion, OsmAnd and the OSM routing machine did not do well at all. When we take all of this and find the cumulative score — the cumulative score is what leads to the ranking we adopted — we see that overall QGIS with Road Graph has the lead, while OsmAnd trails. Let me quickly provide some explanation. One thing we observed is that the facilities we considered are predominantly within the urban centre, and in the area we covered Google Maps does well in the urban centre, but in the rural areas its base map quality is very poor and shallow; otherwise Google Maps would not have come out where it did, probably at rank three. On the whole, QGIS performing poorly on the first criterion, the routing outcome, is a bad indication for the routing algorithm that Road Graph uses. Because if we look at a particular portion here, we realize that GraphHopper's routing algorithm — if you look at the result it provided — was the closest: in terms of deviation from the drive test outcome, GraphHopper has the smallest range. It was the one closest to the survey outcome, which is an indication that the routing algorithm GraphHopper uses is something the developers need to look at and find a way to implement for the Road Graph plugin that QGIS has. This is one of the very important findings we arrived at in this study.
Also, the Google Maps outcome against the drive test was very good — but only because the places we considered are within the urban centres; if we had considered the rural areas, I don't think it would be the same. Part of the important observation I need to call your attention to is that QGIS is very promising. QGIS with the Road Graph plugin is very promising because it gives you independence: you have the means to do some manipulation. For instance, the travel time it determines is based on the road class and the speed limit that has been assigned. So if you want to factor in the traffic condition, all you need to do, once you have the traffic count on each of the delivery routes you are studying, is to change those speeds. That provides an alternative way to factor in the traffic count. So in conclusion, we felt that QGIS Road Graph is promising, provided the developers can look at the algorithm — in particular an algorithm like the one GraphHopper uses — because GraphHopper had the smallest drive test error among all of them, even though they all use the same base map. If the developers can take a critical look at this and try to implement a routing algorithm similar to GraphHopper's, I think QGIS Road Graph is a promising path for us. Thank you for listening. Thanks for coming. Please, any questions? Thank you, Kahinde, for the presentation, and excellent timing also. If you have any questions, we have a few minutes. Hello. Thank you for your presentation. Why didn't you consider pgRouting for the test? Yeah, thank you very much. In fact, I started with pgRouting, but pgRouting does not do well when integrated with QGIS. We went to the mailing list to connect with the developers, and it seems it is a dead end for now. And even from a previous presentation — what Fiki presented on that yesterday — she admits that pgRouting will not do anything serious for the large data sets we are talking about. Thank you. Any other question? I have one small question. When you consider GraphHopper and OsmAnd, are you using the free versions, or do they have paid versions as well? You mean paid versions? No, they are open source. Open source, but they have services — GraphHopper has a paid version as well, and other features come with that. I am not aware of the paid version; the version I used is purely the open source, freely available version. There can be limitations with the freely available versions, so maybe you can just check. Okay, I will check. Thank you very much. Thank you for your interesting presentation. Thanks.
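The point about tuning Road Graph by road class can be shown with a tiny calculation: travel time per edge from an assumed free-flow speed per class, scaled down by a congestion factor derived from traffic counts. All class names, speeds and factors below are illustrative, not values from the study.

```python
# Hypothetical delivery-route edges with OSM-style highway classes.
edges = [
    {"id": 1, "length_km": 2.4, "class": "primary"},
    {"id": 2, "length_km": 0.9, "class": "residential"},
]

free_flow_kmh = {"primary": 60, "residential": 30}   # assumed class speed limits
congestion = {"primary": 0.5, "residential": 0.8}    # 0.5 = traffic halves the speed

total_min = 0.0
for e in edges:
    speed = free_flow_kmh[e["class"]] * congestion[e["class"]]
    minutes = e["length_km"] / speed * 60
    total_min += minutes
    print(f"edge {e['id']}: {minutes:.1f} min at {speed:.0f} km/h")
print(f"total travel time: {total_min:.1f} min")
```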
|
In view of the recent proliferation of online/desktop routing tools (such as the QGIS Road Graph plugin, OSM routing machine, Google Maps Engine, RouteXL, OpenRouteService, etc.), it is imperative to provide an empirical evaluation of the comparative strengths and weaknesses of a number of predominant routing algorithms. This is crucial in view of its implication on the success or otherwise of routing-related projects such as supply chain logistics, supply/delivery operations, and emergency services, among others. In this paper, a comparative evaluation of these tools has been carried out in terms of weaknesses and strengths with respect to healthcare delivery service through routine vaccine delivery in Kano, Nigeria. Kano State is one of the states in Nigeria with a huge burden of health challenges, with records of 3062 maternal deaths between 2005 – 2010 (Ibrahim, 2014). Thus vaccine delivery is one such healthcare delivery programme used to address some of these health challenges. The primary objective of this paper is to demonstrate the comparative advantage of using open source applications to optimize the vaccine delivery process such that there would be a significant reduction in logistics and manpower (travel time, i.e. travel route distance, road type/quality, traffic and travel speed, vehicle/driver, delivery schedule, among other parameters). The capacity of a few selected routing tools was evaluated against this backdrop. Hence drive test analysis was carried out on a selected number of delivery routes and the results were compared with values derived from routing using these tools. The QGIS routing tool is the only desktop tool, using an OSM vector base map and a routing plugin (Road Graph), while others such as Google Maps Engine, OpenRouteService, OSM Routing Machine, and RouteXL are all online platforms. The drive test result was used as the benchmark for determining the best routing tool. The overall outcome indicated QGIS to have the closest routing value to the drive test result for most of the considered delivery routes. This was largely a result of the rich content of the OSM vector base map (which our team had extensively worked on) as well as the geo-positioning of QGIS as primarily a mapping software. This was the advantage QGIS had over the others, even when compared with OSM RM, which uses the same base map as QGIS. OSM RM had a deficiency in geo-positioning accuracy, which explains the slight discrepancy noted in OSM RM output compared with QGIS and the drive test result. Google Maps Engine routing output had capacity for multiple routing (origin to multiple destinations), but the content of the vector base map is very limited for Kano State. The response time to make amendments to the Google vector base map before it is available for routing usually takes 12 – 24 hrs, longer than it takes using JOSM. RouteXL is consistently constrained by geo-positioning accuracy, which affects its routing output. The routing output derived from desktop QGIS powered by the Road Graph plugin provides the best routing output compared to the drive test result, but it is limited by the fact that it does not have provision for multiple (batch) routing.
|
10.5446/31993 (DOI)
|
Good morning everybody. I'm presenting a project related to a localized landmark model based on OSM for a landmark-based navigation system. I'm presenting work actually done by my students — they have another parallel session, so I'm going to present the work. We are doing this as part of research work at the Sri Lanka Institute of Information Technology, Sri Lanka. My motivation for the study: this is from my country, Colombo, the main city. We faced a long struggle — about thirty years of war — and within the last five years or so after the war was over, a lot of domestic tourism has occurred. People started moving from one place to another very much for tourism, because for a long period of time they had been constrained to different regions. Then comes the problem: you may know how to move from one place to another — for example, we can come to Seoul, but within Seoul, from my hotel to this place, to the conference hall, we might need a little more detail, not just the basic information, to travel. Even within the local community it becomes a problem, because people are not very familiar with these areas. So my motivation for the study is to assist these people who are traveling regularly between different parts of the country. You can see that Sri Lanka is a small country, but if you consider one particular area — for example one city — it is very congested: a lot of roads, a lot of buildings. Within that small area, how do you navigate, and what solutions are available in such a situation? Basically, even though we are not very developed, mobile phone access is very high and 3G access is there, so people try to use mobile phones to find their way. But you can see that sometimes the output we receive suggests roads we would never use in practice, because practically that is not the best route to take. Then we have the alternative — the most common alternative: just stop at a nearby shop and ask the people around; they will definitely help you and give instructions. So the common social practice is not to use mobile guidance; we use word of mouth, what people say. We tried to explore whether there is a difference. If you consider navigation instructions provided by a navigation guide for a particular place in Sri Lanka, you will get something like this: turn slight left, stay for 7.1 kilometres on a specific road. But if you ask a person, it will be a little different, a bit more elaborated — the same information, but something like: turn to the left near a particular junction; at that junction there is a major city, it is a four-way junction, there is a big Na tree (a typical, very large tree in Sri Lanka), and towards the left side there are small tea shops, so you will not miss it.
So which one is more human friendly? For me, the second one; for my community, the second one. So we are trying to map between these two, to link these two. You can see in the second one that the most important things are landmarks — landmarks which are important to the people in that community. If you consider this view from OSM of a particular area in Sri Lanka, you can see in the map itself that we have a lot of landmarks: not just the roads and buildings, but many other things that are landmarks related to local communities. So can we use them to support navigation? With that, we moved towards landmark-based navigation for the local context. I emphasize the local context because, when it comes to things like landmarks, how they are interpreted by the local community may be different from how foreigners interpret them; it is related to their culture and social background. We have a lot of, say, trees, religious places and other sorts of structures from ancient ruins — these become landmarks, not just man-made monuments. We have to incorporate these things, which make sense to people, in the landmark-based navigation model. With that, we tried to develop a landmark-assisted navigation model, and we identified that we have to incorporate the significance of landmarks in the path planning. The main element of significance is the prominence of landmarks — of course, you have to be able to observe them, to see them. Then it depends on different human factors, maybe familiarity with the area, and age also comes into the picture: if I ask a young student, the path he describes will be different from the same path described by an old person, so the same route will be interpreted differently by different people. While doing this modelling we talked with different people to get ideas. For a young crowd it may be very easy to understand something like "get down near KFC"; an old person may not know the KFC symbol, and that might be difficult for them. Then we have seasonal variations — things like this image: for a particular period of the year, for several months, you will see this sort of thing along the roads, but not during other periods. Day and night visibility is also important: some landmarks you see during the day, and some you usually see only at night. There are so many different factors, but we will not be able to capture everything. So what we try to do is reduce all landmarks to point locations, to keep them simple, and then add attributes to them to describe, or at least give some idea of, their significance. We have identified attributes like these — only the blue-coloured ones are going to be considered in this study; age and seasonal factors we are not considering at this level. Even the basic attributes are still at the refinement stage: we are still revising these attributes and how they contribute. Let's look at this in a little more detail. We consider height to be one important factor, because from a long distance you can see it — this is from my country. Again, in order to collect landmark information we are going to use a community-based approach — I will come to that very shortly — and in that case we cannot expect people to add a specific height, which is not that important anyway. So we are simplifying the approach using three values: it can be tall, medium or short.
That way people can understand it easily and can add the attributes — this is actually for adding the attributes. Then we think about the spread. This is a religious place in the eastern part of my country: you can see it runs along the road, and its spread — its width — is very large, so you will not miss it. Then we consider day and night visibility. Some of these things you will see like this during the daytime, very prominently, and during the nighttime you will see them like this, still prominent. Some buildings, like this one in pure white, you will see during the daytime and also at night; a normal building you will not see at night, but this sort of building you will, so we consider that as well. Then we try to consider cultural significance and social significance. For that we consider the OSM tag place_of_worship, and from that we try to extract whether the place is related to a certain cultural background — based on religion, actually, which is a little limited for the time being — and whether it is a statue or a tree; from these already existing parameters we are going to extract data. Then we think about social significance: from the amenity tag we extract whether it is a restaurant, a place to hang out, a shopping complex — these have social value, so people know about them. This particular one is a very popular and very large bus stop, so everybody knows about these places. Then, with these parameters, how are we going to represent them in a GIS system? We try to follow the OSM tag system: where possible we use already existing tags, as I explained earlier for the amenity and building tags, but when things are not there we introduce new tags — for example, for height, spread and day/night visibility we introduce new tags. You can see here that the first one shows an already existing tag added by some other user, and then the new tags together with place_of_worship and so on. Based on our tagging system we add these to OSM. Then comes the very important point: how to collect and maintain these landmarks. Can we expect one person or one company to collect landmarks for us? I don't think that is feasible or in any way sustainable, because it is a social concept — the community knows about landmarks much better than any single person. So we assume it is better to let the community collect them into OSM and then use them for path planning. Considering that, we developed a social application. It is a mobile application, but having a social aspect as well is an added advantage. You can register with the application, then you can add friends, you can add groups, and you can add or edit landmarks. When you are adding landmarks, the tag system comes up and you can add based on that. To make it more attractive — if it were just for adding landmarks, nobody would use it — we added the social concept: you can search for friends nearby. Actually, this snapshot was taken once we reached here, me and my students, at our location. Based on the location you can adjust the proximity, and you can identify where your friends are, depending on your friend circle.
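To make the tagging scheme concrete, here is a small Python sketch of a landmark reduced to a point with the proposed tags and a toy significance score. The tag keys height, spread and day_night_visibility follow the new tags described above; the numeric weights, the coordinates and the bonus rule are illustrative assumptions only — the talk notes that the actual weightings are still being refined.

```python
# One landmark reduced to a point location plus descriptive tags.
landmark = {
    "id": "node/123456789",                    # hypothetical OSM-style id
    "lat": 6.9147, "lon": 79.9733,             # illustrative coordinates
    "tags": {
        "amenity": "place_of_worship",         # existing OSM tag
        "religion": "buddhist",                # existing OSM tag
        "height": "tall",                      # new tag: tall | medium | short
        "spread": "wide",                      # new tag
        "day_night_visibility": "both",        # new tag: day | night | both
    },
}

# Placeholder significance weights (still being refined in the study).
weights = {
    "height": {"tall": 3, "medium": 2, "short": 1},
    "spread": {"wide": 2, "narrow": 1},
    "day_night_visibility": {"both": 2, "day": 1, "night": 1},
}
SOCIAL_AMENITIES = {"place_of_worship", "restaurant", "marketplace", "bus_station"}

def significance(lm):
    """Sum the weights of the prominence tags, plus a social/cultural bonus."""
    score = sum(table.get(lm["tags"].get(key, ""), 0) for key, table in weights.items())
    if lm["tags"].get("amenity") in SOCIAL_AMENITIES:
        score += 2
    return score

print(significance(landmark))   # -> 9 for this example
```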
Then here you can see the top users: depending on the landmarks you are adding, you get a score. You can also see adding landmarks at the corner, plus ratings and friend requests — the normal social application features are there. The second screenshot is from our country, from our university area; you can see the radius can be adjusted — that is a requirement of ours — 500 metres, 2 kilometres and so on; there are three predefined sizes you can choose. What we are ultimately trying to do is have a landmark model with these attributes. For that we define a localized landmark model and then use landmarks to aid navigation, so we had to use them and develop a path planning mechanism. But to show them on a mobile interface — a mobile interface is not very large, you have a very small area to show everything — we are trying to reduce the map and produce a clutter-free map using linear maps. To collect the data we are going to use a volunteered geographic information approach with the social application. The items in red — path planning and linear mapping — are more on the algorithmic side, and my students are presenting a paper related to that in the academic track. Ultimately we are developing a landmark-based mobile navigation application, and we want to make it cross-platform: it is mobile web, internet based, so it is not Android, not iPhone, not Windows — anybody can access it as long as they have internet. We are using all open source technologies: on the client side jQuery Mobile and other client technologies; on the server side we have OSM data, which we downloaded into PostGIS to manipulate, but ultimately we are going to upload everything back — once things are finalized (we are still fine-tuning our parameters) it will be synchronized with OSM — and everything is developed with open source technologies. As I mentioned, we have a paper and more details will be presented in the academic track. I really appreciate the support we received from FOSS4G for attending the conference — the travel grant support for my students and for me. We initiated this study under the OSGeo lab and the geoinformatics research group; this is actually the first research work we have started officially, and it is also supported by an SLIIT research grant fund, so we are going to continue with it and come out with good results. Thank you everybody for paying attention. If you have any queries I may be able to help; we have five minutes for questions — does anyone have a question? Thank you for your presentation; it is really interesting to see this human-oriented side of OSM. Can you tell me how the community chooses the landmarks — does the city choose them together, or does the community choose by themselves which landmarks to use? And can you show me slide number 18 — the system you developed, is it built by yourselves, or is it a system by OSM, or what kind of software is it? Thank you. We let the community add the landmarks; there is no moderation or anything like that, but we give ratings to people depending on the landmarks and the paths. Based on those landmarks the path planning is done, and based on that a rating is given to the user. As the system grows, if there are more ratings for a user, that means the credibility is higher; there is no specific validation process in the system.
As for the application, it is totally dependent on mobile web; it does not depend on a particular mobile operating system. It is an internet application with client-side processing only — JavaScript and jQuery Mobile. Any other questions? I have a question: you said the citizens correct the data — how do the citizens react to this map navigation, do people join in to correct it, do many people join? Actually, for our country these concepts are very new, so what we are trying to do first is get ideas from students, because the student community is reachable for us: what sort of landmarks would they use to describe a path or to navigate? We developed a questionnaire, and based on that — one variation is that you describe a path to somebody not aware of that place; then we also let people travel by vehicle, not using a travel guide but the normal way, using landmarks, and another person has to note down which landmarks they are looking at and the parameters. Based on that we try to identify what sort of parameters are useful for the community. We are still refining that: we have extracted parameters like height, width and so on, but how much weight to give each of them we are still refining. For the time being we have defined certain values, and the path planning mechanism calculates based on those — both the distance and the landmark significance are considered. For example, if more landmarks are available along a path, that path gets more priority than a path without a single landmark; but distance is also considered — if the distance is very high, then even though there are more landmarks, it is not a good path. Okay, thank you very much — sorry, I don't know if I have answered correctly. Right — can we try it? Yes, thank you for asking that question. That is one place where you can try it: on our laptops. For the mobile application, what happens is that we have to host everything, and the hosting is an issue — it is a very good question. We are using PostgreSQL with PostGIS and GeoServer on the server side, so we need a server that can host everything, but there are no free servers available with public access. We are trying to host it on one particular server in the university environment, but I am still having problems configuring certain things on it, so through a mobile phone you cannot access it yet. Through the laptop itself everything is possible, because location detection is enabled, so you can try it on our laptops. Because of that installation issue we have this limitation for the time being. Okay, thank you very much.
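The answer above describes the path-planning rule in words: prefer the path with more landmark support, but reject paths that are much longer than the shortest option. The sketch below shows one way that heuristic could look; the candidate paths, scores and the 1.25x detour cap are invented for illustration and are not the project's actual parameters.

```python
# Hypothetical candidate paths between the same origin and destination.
paths = [
    {"name": "via main road",  "length_km": 3.2, "landmark_scores": [5, 3, 4]},
    {"name": "via back lanes", "length_km": 2.9, "landmark_scores": [1]},
    {"name": "via ring road",  "length_km": 5.6, "landmark_scores": [4, 4, 4, 4]},
]

MAX_DETOUR = 1.25   # assumed cap: drop paths much longer than the shortest one

def rank_paths(candidates):
    shortest = min(p["length_km"] for p in candidates)
    feasible = [p for p in candidates if p["length_km"] <= shortest * MAX_DETOUR]
    # Prefer more landmark support; break ties by shorter distance.
    return sorted(feasible, key=lambda p: (-sum(p["landmark_scores"]), p["length_km"]))

for p in rank_paths(paths):
    print(f'{p["name"]}: {p["length_km"]} km, landmark score {sum(p["landmark_scores"])}')
```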
|
The following document covers an abstract of our research on a Localized Landmark Model based on OSM data for a Socialized Landmark-based Navigation System. It is a group project which has been carried out by 4 students and our supervisor. The other two members are listed below. Dananjaya Thathsara - Sri Lanka Institute of Information Technology. Irendra Koswatte - Sri Lanka Institute of Information Technology.
|
10.5446/31995 (DOI)
|
Good afternoon. First let me introduce myself. As you can see on the slide, I'm Ramon Aiko Jr., but you can call me Ayo. I'm from the Philippines, and as our chair said, we are colleagues: we work on a project funded by the Department of Science and Technology called Project NOAH. I'm going to talk about that a little bit, but there will actually be another session about Project NOAH later in the next session. So I have some interesting things for you this afternoon that we are really proud of. We call it WebSAFE, and we like to think of it as an online exposure and risk assessment tool. But before we go into the details of that, I would like to give you a picture of what it's like in the Philippines when it comes to disaster management and mitigation. Here is an animation, a GIF, of all the tracks of typhoons from the 50s up to the 90s. If we wait a little bit, we will see that almost all of the country is covered by typhoons — we actually have around 20 to 25 typhoons per year on average. So disaster management is a big thing; it's a necessity in our country. So, disaster — what is a disaster? The question is, does having typhoons automatically translate to having a disaster? Well, studies show that no: typhoons are just natural hazards, and having these hazards does not mean there's going to be a disaster. You also need the things that are going to be affected by the hazards — we call this exposure. Exposure could be us, humans, or the buildings and infrastructure. Another thing that determines a disaster is vulnerability: how vulnerable are these things to the natural hazards? These things all contribute to having a disaster. Here's a quote from the World Risk Index report from 2013. It says: a country's risk of becoming a victim of a disaster is not determined solely by its exposure to natural hazards, but to a crucial extent also by the society's state of development — how ready are you when the hazards come? Here's a picture that explains what I've been talking about: there's the natural hazards sphere that contributes to a disaster, and then there's the societal sphere. When it comes to natural hazards, we can't really do much about them — you can't tell a typhoon to postpone its arrival just because we're not ready. But we can do something about the societal sphere: we can provide tools, educate the people, and make them more ready for the hazard. By decreasing the vulnerability, we also decrease the possibility of having a disaster. That's why Project NOAH was born. Project NOAH, or the Nationwide Operational Assessment of Hazards, is a nationwide disaster management program which aims to improve the government's and the Filipino people's capacity to respond against the impact and effects of extreme weather conditions. What we do at Project NOAH is gather all the information and data that we think will help when it comes to disaster management and mitigation. We also empower stakeholders: we educate them and provide tools that they can access openly and freely. One of the tools that we developed at Project NOAH is WebSAFE. Basically, what WebSAFE does is take a hazard map, which tells us where the hazard is going to be — for this example, a flood map that tells us where there will be floods in the case of extreme rain — and overlay it with another map, an exposure map.
In this case, this is a population density map: it tells us where people live. WebSAFE intersects these two maps and gives us an analysis that we think is valuable for disaster management. WebSAFE stands for Web-based Scenario Assessment for Emergencies. It is one of the tools we include in our website — we're actually working on version two of the Project NOAH website, and this is a screenshot of it. We have other features there, but WebSAFE is this one. As I said a while ago, there will be a talk in the next session titled "Continuous Improvement on the NOAH Initiative" with more details on the website, so for now let's focus on WebSAFE. WebSAFE is built with free and open source software for geospatial. It is a tool to calculate the needs of a community considering the effects of a particular hazard. Let's go back to the last slide: you can see that it tells us what percentage of the total population will be hit, will be affected, by the flood. Out of 1,925,000 people, it says 606,000 will be affected, and then it tells us how many of these people will be in the different hazard areas. For this example, a flood map, it tells us how many people will be in each flood height: red means the water is just above the jaw of a person of average Filipino height, orange means just at your knee — 197,000 people will be flooded up to the knee — and low hazard areas mean a little bit above your foot, 184,000 people. It also provides an estimate of the needs they will consume when you evacuate them: 1,696,800 kilograms of rice will be needed if you evacuate these people, and so on. Project NOAH and the World Bank partnered in the development of WebSAFE initially; after the funding from the World Bank, Project NOAH initiated further improvements to WebSAFE, and it is now backed by UNICEF. WebSAFE aims to aid local government units in their response to disasters. As I said, it's part of the version 2 website of Project NOAH. Before we proceed with my slides, I would like to show you a demo of the website. We haven't officially launched this version 2, so it's in beta phase; you can access it for testing at beta.noah.dost.gov.ph. Here you can access WebSAFE by clicking this icon, and to use it you're going to have to choose two layers: the hazard layer and the exposure layer. Let's take a look at where I live, in Manila. This drop-down gives you all the hazard layers we have prepared — we're continuously generating hazard maps. Right now we only have flood hazard maps, but we're looking at also providing hazard maps for storm surge and landslide. Let's take a look at Manila: that's the flood map for Manila. Then let's overlay the population density map of Manila. These maps are all generated at Project NOAH; we have components focused on generating them. Here's the sample report of WebSAFE — I showed you this a while ago in the screenshot. So that's how you use WebSAFE along with the other features of the Project NOAH website. Now I would like to go through the different technologies we used in the development and give a brief explanation of what each does. First, InaSAFE: it's a QGIS plugin built with Python.
The core calculation capability of WebSAFE uses the InaSAFE API; you can check the code for this on GitHub. Next, we use Tornado Web. It's a web server, also built with Python; it uses non-blocking asynchronous technology, which enables the web app to handle requests asynchronously. For the database we use PostgreSQL — a database of the different hazard maps available and their locations. We have GeoServer to serve the maps being displayed: the hazard maps and the exposure maps are all stored in GeoServer. We use OpenLayers 3 for overlaying and displaying the maps in the browser. For building the web app, we use AngularJS as the framework and some Bootstrap for responsiveness. Other technologies we use for the inputs: for the generation of the flood hazard maps we use FLO-2D, and we extract data from OpenStreetMap for the building footprints. I don't think I have shown you the building footprints feature, so let's go to that. I'll show you the building footprints for Tacloban — the place where Supertyphoon Haiyan hit. Here they are: building footprints extracted from OpenStreetMap; let's wait for them to load. The limitation here is that OpenStreetMap continuously updates its data, so we also need to regularly check for new data — that's one of the limitations. As you can see, there's a part of the place that does not have buildings yet, and we have GIS specialists who regularly check OpenStreetMap for that. If you calculate it, it will also give us an analysis of the infrastructure of Tacloban. The calculation for building footprints takes a little longer than when using a population density map, but here it is. It tells us that out of 30,416 buildings, 10,124 will be affected, 18 of those will be in high hazard, and so on. In the future — this is under development; as I said, it's still in beta phase — we will be able to release the different building types so that disaster managers can see which kinds of buildings will be highly affected. So that's it; that's WebSAFE. Thank you. Any questions for Mr. Aiko? Yes. On the roadmap for WebSAFE in the future — I'm working at the World Bank with Vivian and everybody, and we're working with Kartoza and Tim — I'm just curious what you guys have coming up. Right now, as I said a while ago, we are looking at providing more hazard types: we have hazard maps for flooding, but we're generating more, for storm surges and landslides. Also, we are integrating additional information from the Philippine Statistics Authority regarding the census, so we can provide age groups and gender groupings in the report and see, for example, how many children are affected by a hazard. And we're improving the user experience and reports so that they can be easily understood by the users. Thank you. How closely do you work with the actual local government level? Very close. We regularly get visits from local government leaders, and I think twice every two weeks we conduct lectures to explain how they can use it. Also, we have a partnership with the DILG — that's the Department of the Interior and Local Government — and that's why, when we release it, we hope the DILG will use it as one of the main tools for DRRM in the Philippines. That's helpful. I think a lot of times when I work there, I find there's a disconnect between the local level and the national government agencies.
There's not enough coordination or communication, so I thoroughly encourage you to communicate with the local governments. We agree — we also noticed that, and we have a component that is dedicated to communicating the different information that we think will be necessary during disasters. Are there any more questions? Yes, ma'am. Hello. Thank you for your interesting presentation, because I have also developed a similar kind of tool for my region. So I'm wondering: when you calculate on the website, do you calculate risk on the fly? For example, when you choose the hazard layer and the exposure layer, do you calculate them interactively when you press the calculate button on the website? The calculation happens in the back end. We use InaSAFE for the calculation capabilities — it's in Python. When the user chooses the hazard map and the exposure map, this information is sent to the back-end server, the back end calculates it and then returns the result to the front end. So is it done within the PostGIS database? Yes — we use the database for the catalogue of the different hazard maps available. Thank you. Are there any more questions? Okay, so thank you, Mr. Aiko. Let's give him a round of applause.
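That request/response flow — pick a hazard and an exposure layer, post them to the back end, get a JSON impact summary back — can be sketched with a minimal Tornado handler. The route name, parameters and the placeholder run_impact_function below are illustrative; the real service calls the InaSAFE API, and the 2.8 kg of rice per person is only an assumed weekly rate chosen so that it matches the figures quoted in the report (606,000 people → 1,696,800 kg).

```python
import json
import tornado.ioloop
import tornado.web

def run_impact_function(hazard_id, exposure_id):
    """Placeholder for the real InaSAFE-based calculation; returns made-up numbers."""
    affected = 606000
    return {
        "hazard": hazard_id,
        "exposure": exposure_id,
        "affected_population": affected,
        "rice_kg_per_week": affected * 2.8,   # assumed 2.8 kg/person/week ration
    }

class CalculateHandler(tornado.web.RequestHandler):
    def post(self):
        hazard_id = self.get_argument("hazard")
        exposure_id = self.get_argument("exposure")
        self.set_header("Content-Type", "application/json")
        self.write(json.dumps(run_impact_function(hazard_id, exposure_id)))

def make_app():
    return tornado.web.Application([(r"/calculate", CalculateHandler)])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```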
|
WebSAFE (Web-based Scenario Assessment for Emergencies) is an impact assessment tool used in the Philippines to calculate the needs of a community considering the effects of a particular hazard. DOST-Project NOAH and The World Bank partnered in developing WebSAFE to increase the country's disaster preparedness measures. Using Project NOAH's LiDAR and IFSAR-based flood, landslide, and storm surge hazard maps for the whole country and OpenStreetMap information, WebSAFE aims to aid Local Government Units in their response toward disasters. A community of volunteer mappers for disaster risk reduction, called MapaSalba (a local pun that loosely translates to "to save using maps"), was also started last year to encourage local participation and enrich the OpenStreetMap database. All of these efforts were shown to contribute to the Philippines's generally improved disaster preparedness and over-all decline in human and economic losses from disasters for the past two years. WebSAFE uses InaSAFE API, a free and open source plugin for QGIS software. With the help of its developers, we modified and developed it into a web application. Project NOAH envisions a disaster-free Philippines where communities are empowered through open access to accurate, reliable and timely hazard and risk information.
|
10.5446/31996 (DOI)
|
Good afternoon. I'm Eric from DOST Project NOAH. I am about to share our experiences in our country, the Philippines, in using free and open source software in our continuous efforts to improve the NOAH initiative. Just a brief outline, and here we go. What is Project NOAH? NOAH stands for Nationwide Operational Assessment of Hazards. It is a disaster management program that aims to improve the Philippine government's and the Filipino people's capacity to respond against the impact and effects of extreme weather conditions. It was mandated by the President of the Republic through the Department of Science and Technology and was launched in July 2012 in Marikina City. Basically it is a flood mitigation system: the basic function is a six-hour flood early warning system for the communities along the 18 major river basins in the country. We are also tasked with improving the geohazard maps in the country, and rain-induced hazards are the priority. How do we do this? Of course, through advanced science research and multidisciplinary assessment of hazards, and then we came up with tools to enable the prevention and mitigation of disasters, to be used by LGUs, planners, policy makers, communities and even individuals. An example of such a tool is a web app, the website noah.dost.gov.ph. This is the homepage of our website. Here are some highlights of times when local governments used it. This was in 2012, Typhoon Pablo in Cagayan de Oro City — the international name is Bopha. The year before, Typhoon Sendong had ravaged Cagayan de Oro with more than a thousand deaths. In 2012 the city was able to use the Project NOAH website for early warning and evacuation measures, and casualties were avoided. This slide shows how they used it: these are water level sensors in the streams — that graph over there is a sensor upstream. When they noticed a sudden change in the water level, they were able to evacuate the lower-lying communities, and they were able to bring the number of deaths down to zero that year. Again, this is Marikina. Marikina City was one of the most devastated areas during Typhoon Ketsana in 2009. As I said earlier, Project NOAH was launched in Marikina, and ever since, the city has used the website as an early warning tool to augment its existing resources. In 2013, during the Habagat — the monsoon rains in our country — this is Marikina, and that year the city was able to use the website again. However, there were some limitations with that website. The first is that it was built on the Google Maps API, and because of the KML size limit of 3 MB we were not able to display high-resolution hazard maps. There were also limited base maps — you could only use Google Maps, like this one — and we had limited knowledge of open source technologies. So what we did was develop a new version of the website. It's in beta right now; it's not officially launched and will be launched later this year. With this we are now able to display high-resolution hazard maps. This is a 100-year flood return period hazard map of Metro Manila; this one is a landslide hazard map of Baguio City at 10-metre resolution; this is a storm surge inundation map of Metro Manila. We can also use different base maps: the rightmost one is Bing, this one is OpenStreetMap, and Google as well.
So, we're now using OpenLayers and GeoServer, and then PostgreSQL and PostGIS for our database. So, some stories again, success stories. This is in Northern Samar, Typhoon Ruby, back in December 2014. We were able to give storm surge advisories, and as you can see, there was damage done by the storm surge, but fortunately there were no casualties. This is another town in Samar. This one is the recent Typhoon Ineng last August. On the left is the forecast — no, sorry — this one is the forecast one day before the actual landfall of the typhoon. Those are rains of 100 or more millimeters, and the other one is the actual rainfall contour during the typhoon. So, we were able to generate a map of the places where the rains may trigger landslides and floods. Okay. Just to show, this one is in Ilocos Sur, in the north of the country. And then this was in Benguet. So, yep. Okay. So, just to conclude, we are still exploring the endless possibilities that free and open source software can give us. We in our team are still doing research on how to improve our website. So, that's all. Thank you. No questions from the audience? It seems quite interesting. And is this project... There's no question from the audience, so I will just ask a few questions to clarify things. Is this project linked with that InaSAFE project? Have you heard about it? Yes, yes, yes. As we can see... yep, back then, we had a collaboration with the World Bank and with InaSAFE. So we tried to develop... So the back-end is linked to... the back-end is InaSAFE. It is. It is. So, you are developing the web interface for that. Okay. Quite interesting — I think that addresses one limitation in InaSAFE. And another question, for my information: is this application practically used in your country? Yeah, yeah. Especially during typhoon season. The local governments and even individuals can use this. Okay. So, are there any different logins or any restrictions for use? Anybody can use it? Anybody. Right now, anybody can use it. Anybody can use it. Okay. Thank you. Thank you. So...
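A minimal sketch of the kind of request the new portal's stack supports: once hazard layers are published through GeoServer, any client can pull them with a standard WMS GetMap call and overlay them on an OSM or Bing basemap. The endpoint, workspace and layer names below are placeholders, not the actual Project NOAH configuration.

```python
# Hypothetical sketch: fetch a hazard-map overlay from a GeoServer WMS endpoint.
# The URL and layer name are placeholders, not Project NOAH's real configuration.
import requests

WMS_URL = "https://example-geoserver/noah/wms"  # placeholder endpoint

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "noah:flood_100yr_metro_manila",  # hypothetical layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "120.90,14.40,121.15,14.80",        # rough Metro Manila extent
    "width": "1024",
    "height": "1024",
    "format": "image/png",
    "transparent": "true",                      # so it can sit over OSM/Bing basemaps
}

response = requests.get(WMS_URL, params=params, timeout=60)
response.raise_for_status()
with open("flood_hazard_overlay.png", "wb") as f:
    f.write(response.content)
```

Because the output is a transparent PNG, the same request works regardless of which basemap the client chooses, which is the point the speaker makes about moving off the Google Maps API.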
|
As a means of mitigating risks in a hazard-prone archipelago, the Philippine government through the Department of Science and Technology launched the Nationwide Operational Assessment of Hazards (DOST Project NOAH) in July 2012. This program aims to integrate various research and technology development efforts in improving flood, landslide and storm surge hazard maps and in instituting effective early warning systems for these hazards. These initiatives produced various geospatial datasets relating to hydrometeorological hazards such as satellite imagery, LiDAR maps, Doppler radars, localized weather forecasting models and a vast nationwide network of automated weather and water level sensors. This wealth of data is processed and visualized through a near real-time web-based spatial data infrastructure. The NOAH website (www.noah.dost.gov.ph) serves as an information and communication platform for government agencies, rescue and disaster-related organizations and the general public to effectively prepare for impending hazards. Designed to be a web geographic information system, the NOAH website now uses GeoServer and the OpenLayers API to handle, process and analyze hazard maps and exposure and vulnerability datasets. Since the launch of the portal, thousands of lives have been saved from the dangers of the annual monsoon events since 2012, Supertyphoon Bopha in 2012, and typhoons Rammasun and Hagupit in 2014.
|
10.5446/31997 (DOI)
|
I'm Hangang from NIER. Today I'm here to tell you a bit about the management of water pollution sources based on open source GIS. I'm going to split my talk into three parts: first the introduction, second the analysis method, and finally the future plan. First, the introduction: the National Institute of Environmental Research, NIER, is my workplace. This figure shows its internet home page. Since its establishment as the National Environmental Protection Institute in 1978, the National Institute of Environmental Research has been committed to supporting the development of Korea's environmental policy and institutions while addressing major environmental issues and leading environmental research. My department is the Water Quality Assessment Research Division in NIER, so we work to improve water quality and environmental policy. Next is the Water Emission Management System. WEMS is one of our duties; it is a system for inputting and managing the investigation data. Next in the introduction: water pollution sources are the sources of river or lake pollution. Water pollution sources are composed of eight pollution source groups, including domestic, livestock, industrial, land, aquaculture, environmental infrastructure, and other water pollution sources. These pollution source groups can be classified as point sources and non-point sources. Point sources have identifiable emission places and emission paths. Non-point sources do not have exact emission places and emission paths and are mainly discharged by rainfall. The next page is investigation items and contents. First, for the domestic group, we collect basic data and do calculations using that data — for example, population and water use data. For the livestock, industrial, land and other groups, we investigate items such as these. Why open source GIS? We perform the investigation annually, and the investigation data is published in a book and in text-type documents. Open source GIS is used for more accurate investigation results. In addition, it eliminates the uncertainty of occupation areas through the construction of spatial data for the pollution sources. I will explain the analysis method from this page. The pilot target area is Incheon and Gyeonggi-do in Korea. Korean satellite imagery was used as the base layer, and the custom coordinate system EPSG:2097 was applied. This table shows the created and used GIS data. First, using the data on pollution sources, we create new layers of point and polygon type. Next, we collect layers of line and polygon type, such as the sewage treatment area, the sewer pipe network, and so on. Next, the data generation process. This process is classified into seven steps. Step one is to collect the GIS data and the basic data. Step two is to arrange the GIS data: revise the layer attribute tables, check the key codes for verification and separation, and create the collected layers after the data is inserted. Step three is to create a key value for each region and verify it. Step four is to arrange the GIS data and delete the unneeded columns. Step five is to create a key value for the WEMS data; these fields are shown, for example, using a concatenation function to create the key value. Step six is to join the GIS and WEMS data, using the QGIS join function. Step seven is to create the result file. The next page is on analyzing the results using the created layers. At first, we use the clip method to separate the urban and suburban areas. The input layer is the land use area, with the relevant of the 28 categories of features selected; the clip layer is the housing, commerce and industry features selected.
The result, as this map shows, is the urban area shown in purple. And next is to separate the sewered area and the non-sewered area. The sewered area is located along the sewer pipe network, within a distance of 15 meters. The B layer is made with the Select by Location tool: select features in the target layer that intersect the buffered pipe network area. The B layer defines the final target area. The next page is the population analysis for the domestic group. This layer indicates the population of the sewered and non-sewered areas. We clip using the geoprocessing tools: the input data is the population layer and the clip data is the sewered area, and then we clip again with the non-sewered area. This analysis method can visualize the state of the population. The next method is the water use analysis for the domestic group. This method is the same as on the previous page; the input data is the water use layer, and it can also visualize the state of water use. The next method is the livestock, industrial and environmental infrastructure analysis. This figure shows the distribution of pollution sources within the sewage area. This analysis method can visualize the state of the industrial, livestock and environmental infrastructure sources within the sewage area. The next method represents the existing land use, such as orchard, forest land, building site, paddy field, etc. This figure shows the distribution of the land use types. And next is finding error data through verification of the location information. At first, some of the pollution sources are located at the same address; this figure shows the overlapping data. Secondly, this data is included in the sewage treatment data, but the pollution source is located in the non-sewered area. So this error data must be reviewed for an accurate investigation. Finally, the future plan. Number one, further development of data analysis. Number two, development of the analysis process. Number three, how to make reading the data simpler. Number four, QGIS plug-in development for our agency. Hereby I finish my presentation. If you have any question or want more information, I'm sorry but I don't speak English well. Please send me your question through email and I'll send it back with the answer. Thank you for listening.
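As a rough illustration of the clip and select-by-location steps described in the two transcript passages above, here is a minimal PyQGIS sketch. The file names, layers and field choices are hypothetical; the algorithm identifiers are the standard QGIS 3 processing ones, but the exact parameters used in the actual project are not known.

```python
# Minimal PyQGIS sketch of the workflow described in the talk: clip the population
# layer to the sewered area and select pollution sources inside that area.
# Layer paths are hypothetical; run inside the QGIS Python console or a
# QGIS-enabled Python environment.
import processing
from qgis.core import QgsVectorLayer

population = QgsVectorLayer("population.shp", "population", "ogr")
sewage_area = QgsVectorLayer("sewage_treatment_area.shp", "sewage_area", "ogr")

# 1. Clip population polygons to the sewered (sewage treatment) area.
clipped = processing.run(
    "native:clip",
    {"INPUT": population, "OVERLAY": sewage_area, "OUTPUT": "memory:pop_in_sewered"},
)["OUTPUT"]

# 2. Select pollution-source points that fall inside the sewered area
#    (equivalent to the "Select by Location" step with the intersects predicate).
sources = QgsVectorLayer("pollution_sources.shp", "sources", "ogr")
processing.run(
    "native:selectbylocation",
    {"INPUT": sources, "PREDICATE": [0], "INTERSECT": sewage_area, "METHOD": 0},
)
print("sources inside sewered area:", sources.selectedFeatureCount())
```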
|
Korea manages water pollutants through the National Institute of Environmental Research (NIER), part of the Ministry of Environment. NIER manages pollution sources classified into domestic, industry, livestock, aquaculture, land and basic environmental infrastructure facilities through investigation of pollution sources all over the country. In order to efficiently manage the vast amounts of data, NIER is exploring a variety of methods. Currently, methods are under development for the management of the national pollution source data through GIS. In particular, a plug-in module is being developed using QGIS. In addition, a data verification method is being developed to check and confirm the national pollution source data. The procedure of data verification and examination based on open source GIS was also developed and utilized in actual projects. Water pollution sources are thus managed efficiently utilizing open source GIS. In particular, open source GIS has been introduced into the government management plan and is gradually being utilized, and with case presentations like this one, the use of open source GIS can be discussed at the national level. Through such cases, it is possible to see that open source GIS can be introduced at the national level.
|
10.5446/32001 (DOI)
|
Good afternoon everyone. My name is Vinay. I am a PhD candidate at Osaka City University, Japan. Basically my research is about estimating nearshore bathymetry from remote sensing imagery — optical remote sensing imagery. We have been testing many algorithms, improving them and modifying existing algorithms. Here I will talk about the geographically weighted regression model, which we use to estimate water depth; it is basically used for all kinds of spatially related estimation and is actually a linear regression. This is the outline of my presentation today. In the introduction part I will briefly talk about how optical remote sensing can be used for nearshore depth estimation, the advantages of using optical remote sensing for depth estimation, and the physical principle behind it. In the materials and methods I will talk about the correction methods I have used to remove the components contained in the satellite imagery that should be removed for better estimation, then the global regression method — a conventional method, a simple multiple linear regression used by others to estimate depth — and then the proposed method, which is called the GWR model. Finally we evaluate the results and compare them against the ground truth data that was collected. These are the two methods mainly used to collect depth in the nearshore ocean: one is airborne LiDAR bathymetry and the other is multibeam echo sounding. Both give very good accuracy and high-resolution datasets, but there are disadvantages due to the inaccessibility of such big data, the time consumed, and of course cost is also a problem. There are some depth datasets already available, some of them free, but among them not all give good resolution — GEBCO gives only 900-meter resolution, which I don't think can be used for any kind of coastal dynamics modeling. So we are trying to find an alternative to get depth in an easy way. The alternative method we have been using is multispectral imagery. There are lots of advantages and some disadvantages as well. The advantages are the wide availability of datasets — many satellite images with lots of bands can be used effectively with reliable quality; it is relatively low cost compared with field depth collection; large spatial coverage and high spatial resolution; temporal data accessibility and archives of satellite imagery; and you can retrieve bathymetry down to around 20 meters, as I have written here, which can change according to water quality — mainly how far the light can penetrate to the bottom of the water; in a coastal environment the water is very complex and it may not be easy to penetrate to the bottom. The disadvantage is that it has relatively low accuracy compared with multibeam echo sounding methods, and you need water depth anyway — you have to collect water depth data to calibrate, compare or evaluate your results.
This is the physical principle of the algorithm: the attenuation of light in water is a function of wavelength. If you take a short-wavelength light in the electromagnetic spectrum, it can penetrate to the maximum water depth, written here as 200 meters — which does not apply in complex coastal waters, where it can be much less, maybe 30 meters or so. And as the wavelength increases, the penetration decreases: if you go to the long-wavelength components of the electromagnetic spectrum, they cannot penetrate to the water bottom; they attenuate in the shallow water itself. The other concept is that there are four main components contained in a satellite image: one is the water surface reflectance, another is the in-water volume — the water column properties as the light penetrates toward the bottom — then of course the bottom reflectance, if the light reaches the bottom, and, on the way back, the atmospheric scattering. In my case my interest is only the bottom reflectance, so I have to remove the other three components. This is the study area, Puerto Rico, which lies in the northeastern Caribbean Sea. There were two reasons to select Puerto Rico as the study area: one is the free availability of high-resolution LiDAR depth data, which could be used for calibration and comparison of the results, and the water quality is very good — it is a coral reef area, and you can even see the bottom itself in the satellite imagery, which means the water is clear in this region. So we selected Puerto Rico as our study area to apply and evaluate our algorithm. These are the datasets used to estimate depth. I collected two satellite images: one is Landsat 8, which is freely available open data, and RapidEye, which has high spatial resolution and a high radiometric resolution with a 12-bit dynamic range; and then the LiDAR depth data, which is also freely available from NOAA. I will briefly talk about the correction method I used, which is an existing method that I have modified. In the case of Landsat 8 data there are two new bands available compared to Landsat 7: one is called the coastal band, which is used for estimation because its wavelength is very short and it can penetrate, and the other is the shortwave infrared band, which is a long-wavelength band used for correction, while the other bands are used for estimation. The shortwave infrared band is used for correction because shortwave infrared cannot penetrate to the water bottom — it attenuates as soon as it reaches the water — so all the other components are acquired by the shortwave infrared band and only the bottom reflectance is absent; therefore you can effectively use it to remove the unintended components from the image.
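The following NumPy sketch illustrates, under stated assumptions, the general idea of that correction: use a band that cannot penetrate the water column (such as shortwave infrared) as a proxy for the surface, column and atmospheric signal, subtract it from a visible band, and log-transform the residual so depth can be estimated with a linear model. The arrays and the scaling factor are synthetic placeholders, not the exact correction used in the talk.

```python
# Rough sketch of the SWIR-based correction idea: subtract the non-bottom signal
# approximated by the SWIR band, then log-transform so a linear depth model can
# be fitted. Band arrays and the scale factor are placeholders, not the talk's
# exact implementation.
import numpy as np

def correct_band(visible_band, swir_band, scale=1.0, eps=1e-6):
    """Remove the non-bottom signal approximated by the SWIR band and log-transform."""
    residual = visible_band - scale * swir_band
    residual = np.clip(residual, eps, None)   # avoid log of non-positive values
    return np.log(residual)

# Example with synthetic reflectance arrays standing in for Landsat 8 bands.
coastal = np.random.uniform(0.02, 0.20, size=(100, 100))
swir    = np.random.uniform(0.01, 0.05, size=(100, 100))
x_coastal = correct_band(coastal, swir)       # predictor for the depth regression
```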
This is the water depth retrieval model. The global regression model is a conventional model used by many others, and it was giving good results, but when you examine those results you find that it is not able to address the heterogeneity in the data, which is due to different bottom types and different water quality — especially the spatial pattern of bottom types and water quality. So I was thinking of a new model that can effectively address the problem caused by this spatial heterogeneity. This is the procedure, shown on the left side, and the residual is obtained simply by subtracting the depth estimated by the global regression model from the LiDAR depth. The result shows that the residuals are not uniformly distributed on the map; there are clusters of residuals, which means that some local areas have an influence on the depth estimation. And here, to note one thing: the global regression model is a simple multiple linear regression model with a single set of coefficients — you develop one set of coefficients and apply it to the whole area to get the depth — and that is not a good idea, having a single set of coefficients to estimate over all the data. So I used these spatial residual clusters to produce a classified map, thinking those classes could be used for better estimation. In GRASS — sorry, in GRASS — we used i.maxlik to do a supervised classification to get the bottom classes, and the signatures were selected in the areas where the residuals change. The bivariate scatterplots here show the transformed band on the X axis against the LiDAR depth, and you can see that each class has a different type of scatter, which means it could be effectively addressed by having different coefficients instead of a single set. After estimating with the class-based model, I produced the residual map again; it is better than the previous one — more uniformly distributed — and some areas still have problems, but it is far better than before. The correlation coefficient and RMSE are shown in the table: the global model gives a correlation coefficient of about 0.8, but the class-based model gives about 0.94, which is good. And keep in mind that the idea we introduced is not new — it is an existing algorithm used for many land-based regression analyses, called the GWR model, geographically weighted regression. In the case of the GWR model it is a weighted multiple linear regression, whereas the other case was just multiple linear regression. Here we make a kernel like this, and each kernel has a centroid point; you estimate coefficients for this centroid point by weighting the adjacent points — points close to the centroid get more weight, and as you go away from the centroid point the weight is less — so if there is spatial correlation you will get a good result from the spatially weighted regression. The bandwidth is the main factor here: if you have a denser dataset the bandwidth can be smaller — a small circle — which gives good accuracy results in the GWR model; but if the ground truth data used to calibrate the results is very sparse, then with only a small
number of points, the radius of the kernel — the bandwidth — will be big, and it will not give very good results, though not a very significant reduction either. And this is the equation used to weight each point — the i-th point here. In that way it estimates coefficients for each pixel in the data, so that you have a depth for every pixel; that is what it mainly does. For results and comparison we made different scenarios, and in those we mainly focused on the density of the in situ depth data used to estimate or calibrate the depth: sometimes you get low-density in situ depth for estimation, other times you get denser data, and sometimes it covers only part of the area and you want to estimate for the other areas — a kind of extrapolation. In the first scenario, for the RapidEye case, 60,000 points were used for estimation and different points were used for evaluation. The GWR model is significantly better than the global model in terms of RMSE and R-squared. This is the bivariate scatter plot for each model, which also shows that the GWR model is far better than the global regression model, and the profiles show the same: the LiDAR depth and the GWR estimate are very close, while the global model is a little off — around a 2 to 3 meter difference that you can see everywhere. This is the second scenario, where we selected 1,750 points to examine the actual performance of the algorithm; the interval between points is 300 meters. With 300-meter interval points the global regression model does not change much, and even though the GWR model still has good accuracy here, it is a little worse compared to scenario one — but the GWR model is still far better than the global model in scenario two as well. This is the third scenario, with a 600-meter interval, which gives the same trend in the results; 450 points were used to estimate the depth, and 450 points is very few, because the coastal area is around 14 kilometers long, so that is a very small number of in situ depths with which to generate a reliable depth estimation. This is the bivariate scatter plot, still showing the same trend, and the cross-section profiles. Next we need to extrapolate the estimation, because if you have in situ depth for only a small area and you want to estimate for the whole coastal scene: I made classes for the whole area, estimated coefficients for each class from the points available, and applied those coefficients to all the other areas. That also gives better results than the global model — in the global model you run it once and have one set of coefficients, whereas here you have different coefficients for each class — so it also gives better accuracy than the global model. This is the bivariate scatter plot for that part of the study, and then we compared the cross profiles, which are drawn nearshore and spaced 2.5 kilometers apart, and these profiles also show good results for the GWR model.
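For readers unfamiliar with the method, the kernel weighting described above can be sketched in a few lines of NumPy: at each target location, the calibration points are weighted by a Gaussian kernel of distance and local coefficients are fitted by weighted least squares. This is a toy illustration with synthetic data; a real workflow would use a dedicated implementation such as the mgwr package or the GRASS module the speaker plans to develop.

```python
# Compact sketch of geographically weighted regression (GWR): for each target
# location, calibration points are weighted by a Gaussian kernel of distance and
# a weighted least-squares fit gives local coefficients. Data and bandwidth here
# are synthetic.
import numpy as np

def gwr_local_fit(target_xy, calib_xy, X, y, bandwidth):
    """Return local regression coefficients (intercept first) at target_xy."""
    d = np.linalg.norm(calib_xy - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)             # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(X)), X])           # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # weighted least squares
    return beta

# Toy example: 200 calibration points, two log-transformed band predictors.
rng = np.random.default_rng(0)
calib_xy = rng.uniform(0, 1000, size=(200, 2))
X = rng.normal(size=(200, 2))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.2, size=200)
beta = gwr_local_fit(np.array([500.0, 500.0]), calib_xy, X, y, bandwidth=250.0)
depth_here = beta[0] + beta[1:] @ np.array([0.1, -0.2])  # local depth prediction
```

The bandwidth plays exactly the role the speaker describes: denser calibration data allows a smaller kernel and therefore more local coefficients.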
And the conclusion is that the 12-bit dynamic range of RapidEye and Landsat 8 data can be effectively used for good estimation of depth in the nearshore; the global regression model is not able to address the heterogeneity of the data, while the GWR model can address the problem effectively; and the algorithm that has been developed will be turned into a module in future studies. Thank you. No question? Actually, I will ask a question. I am a TSCOM member of GEBCO, so I really enjoyed your presentation. I wonder, what is the optimum size — the optimum diameter — of your kernel? The kernel? It depends upon the in situ depth data you have used for calibration. If you have good, dense datasets you can have — now it is 50 for this dataset; not 50 meters, 50 pixels, and one pixel is 5 meters, so 50 times 5 is the radius of the kernel. And if you have a larger dataset — this is just 10,000 points I have used, but I have lots of points — if you have larger sets, the bandwidth will be reduced. I think your explanation could be converted into a formula; a simple formula might make it clearer, because if the data is denser a shorter diameter may be okay, so there should be some mathematical relationship — that is my comment. Is there any more question? No more questions? Okay. Thank you, Vinay. Thank you.
|
There is often a need for making a high-resolution or a complete bathymetric map based on sparse point measurements of water depth. The common practice of previous studies has been to calibrate a single global depth regression model for an entire image. The performance of conventional global models is limited when the bottom type and water quality vary spatially within the scene. For a more accurate and robust water-depth mapping, this study proposes a regression model for a geographical region or local area rather than a global regression model. The global regression model and the Geographically Weighted Regression (GWR) model are applied to Landsat 8 and RapidEye satellite images. The entire data analysis workflow was carried out using GRASS GIS Version 7.0.0. Comparison of results indicates that the GWR model improves the depth estimation significantly, irrespective of the spatial resolution of the data processed. GWR is also seen to be effective in addressing the problem introduced by heterogeneity of the bottom type and provides better bathymetric estimates in near coastal waters. The study was carried out at Puerto Rico, in the northeastern Caribbean Sea. Two different satellite datasets were collected in order to test the algorithm with high and moderate resolution data. RapidEye data has 12-bit radiometric resolution and 5 meter spatial resolution. Even though Landsat 8 data also has 12-bit radiometric resolution, it provides 30 m spatial resolution. In order to calibrate and evaluate the estimated depth, high-accuracy LiDAR depth data (4 m resolution) provided by NOAA is used. The study demonstrates the GWR model for depth estimation and evaluates and compares the results with a conventional global regression model. The comparative study between the conventional global model and the GWR model shows that the GWR model significantly increases the accuracy of the depth estimates and addresses the spatial heterogeneity issue of bottom type and water quality. The GWR model provides better accuracy for both Landsat 8 (R-squared=0.96 and RMSE=1.37 m) and RapidEye (R-squared=0.95 and RMSE=1.63 m) than the global model for Landsat 8 (R-squared=0.71 and RMSE=3.71 m) and RapidEye (R-squared=0.71 and RMSE=4.04 m).
|
10.5446/32003 (DOI)
|
So my name is Simon Moncrieff. I'm going to be presenting a paper I did — or a presentation based on a paper that E.K. Gulland and I wrote together for this — on dynamic styling for thematic mapping. So it's clear that things are moving more towards data exploration now. We have more data science, and GIS is the same; they're both moving that way. This has been enabled by automated data processing, and it enables hypothesis testing: for example, you can interact with derived outputs of a data set, explore it visually and do both hypothesis generation, by exploring one aspect, and hypothesis testing, by visualizing it a different way. What I'm interested in is how we enable dynamic styling in this sort of construct. It's also clear that we need to adopt a user-driven approach to web GIS. So the user inputs a query and that query is essentially everything, and that gives it a flexible approach — rather than a supply or push model where you publish a result, this is more of a user pull, so you let the user derive the result on the fly for themselves based on the question they're interested in. And the aim — the eventual aim — is to develop a web service for this thematic visualization. The idea is you have a piece of data, it goes to the web service with some metadata tags, some parameters on how to style it, and it generates the visualization. So this paper is really just an exploration of methods, functionality and parameters — different ways to answer different questions visually of some data. So as I said, data exploration, and in particular I'm interested in the presentation of data. We have data access through WFS and other means — REST queries — and with WPS we're also introducing data interpretation. This is very query driven: you input a data set, a complex data set, you process it to produce an output. And then the final step is data presentation: how do you present this output? That's the thematic styling. There are a couple of phases, I guess different types of thematic styling. One is you can use it to present a result; in this case you're publishing results derived from data — let's say population density, that sort of thing. This is a static result, so you can apply a known styling technique. It can be dynamic styling because you can tie it to the Z value and that sort of thing, but the result is known. The next one is what I'm referring to as interactive data. Here you're really trying to present data to a user, rather than a derived result that someone else has produced — so you let the user answer the question for themselves. Doing this is far more dynamic because the data is processed on the fly, producing a virtual layer. What happens there is you don't know the final layer beforehand, so you have to provide flexible methods for the user to be able to style this data presentation. And that flexibility is crucial, because as I said we don't know what questions the user is going to ask of the data, but we want to enable them to interact with it. So, a bit of a sidebar. I guess part of the thing driving this is information visualization: interactively presenting data visually, enabling a user to explore and identify patterns within the data.
And part of this is to make it very dynamic and to try to visually encapsulate underlying trends within the data that, given the right visualization, become very evident — humans are good at pattern recognition. So, you know, outlier detection: when you look at a graph you can see the outlier. In spatial data, this is thematic maps. So for example a choropleth: essentially polygons, and each polygon is labelled — or rather coloured — according to the value within it, derived using a map classification technique, so a method to partition the feature space of the polygons, and then a choice of colours to represent the value within each polygon. There are a number of methods we can use, but what's the right one? Should it be a data-specific method that determines the map classification, the colouring and other styling factors? Or is it question-specific — what question is the user asking? And I think it's actually both, and it depends on the data, it depends on the situation. The other thing I want to introduce is this: the visualization process is that you extract data and then you render it. When you're doing analysis and presenting the data, you can actually derive multiple results from the same data set, and for each result you can then also produce multiple visualizations. And it's this flexibility — because, as I said, you don't know the pattern in the data beforehand and you want the user to be able to tease it out, so you have to give them the flexibility to view it in multiple ways to find the information they're looking for. And again, we can produce multiple views simultaneously; that's pretty normal. And so, a quick jump to map styling, so map classification. This should be Styled Layer Descriptor — with SLD in GeoServer you can have an XML document which defines how a value in a polygon or point should be drawn, the colour, that sort of thing. To do this, we classify the feature space into discrete categories using different criteria — equal intervals, natural breaks, those sorts of map classification algorithms. PySAL is really good for this, and you can just make it available through a web service, and it becomes this nice map classification where it returns the bins, the counts per bin, and the upper and lower bounds, that sort of thing. For colour schemes, I adopted ColorBrewer schemes, which are also available through matplotlib. If you like, you can derive n values in a colour scheme automatically, or you can store 256 in a database and just pick the ones you want; there are advantages to both. And then the other question is where this style descriptor is generated. It can be generated on the server — with GeoServer WPS it generates the style descriptor, with WMS you can specify the style descriptor — and there is also the idea of a style descriptor server: you give it some data and it gives you back the styling. And then it can also be generated on the client: the user selects a classification and a colour scheme, and the colouring on the map changes. So this is a very broad view of the base architecture required to do this sort of thing. You have the view scope, which is essentially: do you style based on the local extent of the map that the user is looking at, or do you style based on all the data in the data set?
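A small sketch of that classification step, assuming PySAL's mapclassify package and a hard-coded ColorBrewer-style ramp (the actual service's code is not shown in the talk): it returns the bins, the counts per bin, and a colour per class — the same summary structure described above.

```python
# Sketch of the classification service step: mapclassify partitions the attribute
# values into bins and a hard-coded ColorBrewer-style ramp supplies a colour per
# bin. Values are synthetic; the class count and palette are illustrative choices.
import numpy as np
import mapclassify

values = np.random.gamma(shape=2.0, scale=3.0, size=500)   # stand-in attribute values

classifier = mapclassify.Quantiles(values, k=5)             # or EqualInterval, NaturalBreaks
colors = ["#ffffcc", "#a1dab4", "#41b6c4", "#2c7fb8", "#253494"]  # 5-class sequential ramp

style = [
    {"upper_bound": float(ub), "count": int(c), "color": col}
    for ub, c, col in zip(classifier.bins, classifier.counts, colors)
]
for rule in style:
    print(rule)
```

Swapping Quantiles for EqualInterval or NaturalBreaks is what lets the same service answer the "show me the distribution" versus "show me the outliers" questions discussed later in the talk.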
And then the analysis method, which is a map classification, and then a colour scheme. So the classification is determined, and based on the number of bins the colours are determined, and then the data vectors are input with the feature space. That data can be anything — a WFS service, a WPS service, that sort of thing; the idea is to make it very RESTful-ish — and then you can output vectors. But on top of that, what are the parameters? There are a lot, and this isn't exhaustive. There's the styling attribute or attributes: which attributes in this virtual layer created on the fly should be used to create the theme. For a polygon you can have a boundary colour, thickness, that sort of thing, opacity — but this can be done on the client so it's less crucial. And do you provide a label? Does that kind of interfere with the visualization? Point options — that's a bit of fun. You can have an X and Y radius, so you can have an ellipse if you like: X and Y can differ and can be linked to different variables within the dataset, and they can also be determined relative to the map. Do you include a label or not? A label will provide context, but colour is, I guess, more intuitive. And then the opacity and border colours and that sort of thing. So what I'm going to do is present a whole bunch of different visualizations of some data that I've messed with in the last year or so. One is a health dataset, which was 11 million hospitalization records, where I calculate summary statistics using a WPS and the thematic map is the output. And sensor data — gauge data, for example rainfall. The health data is spatially contextual, so a spatial context is applied to the records for counting; it's not technically a spatial dataset, so the way to view it is very much as polygons, summarizing the values within polygons. The sensor data was point data: the gauge data — rainfall gauges, stream flow gauges — and then bore data, bore gauges for aquifer data. And the final one I looked at is service data: I actually looked at bus stops providing public transport services, but this can be applied to health and hospitals and that sort of thing as well. So the basic interaction that I'm envisioning, eventually, is that the data is input and then you can specify the style data and the parameters. At the moment I use extra metadata: the system injects the metadata to determine the styling, and some of it can also come from the user — but my real question is, what do I need to supply a user for them to theme the data in a way that makes sense? The style service will then determine the style, and if it gets, say, a GeoJSON file as input it can inject the colour into each polygon, point or other feature, and then that can be viewed as a chart, a table or a map on the client side. You can essentially style a WMS thematically and just slot that into your map client; and the final option is that the style descriptor itself — the n bins with the number of counts per bin — can also be used as a sort of summary of the data set. So the first example is essentially a calculation of the probability of access to a service. It's frequency-based, so it's not perfectly accurate, but it's essentially showing that within a region this is the probability of people having access to a service. The one on the left is styled using equal intervals with 10 intervals, so the bins are probabilities between zero and 0.1, that sort of thing.
So it very nicely partitions the probability space into intuitive numbers. The one on the right is more geared towards answering a specific question. In this case, if you're aiming to provide at least 75% coverage to a region, you can use a probability of 0.75 as a pivot point and then have a divergent colour scheme around that. Very quickly you can see: red — okay, those areas need work; white is about right; and green is good. So that one is answering that specific question — I guess my aim is to have 75% of the population covered, so where isn't it? — and that visualization is very much about that, while the probabilistic one with equal intervals is more: okay, this is my distribution between zero and one. Okay, so this one is a rate ratio — a disease prevalence rate compared to another rate. Each area — essentially a census area — has its rate calculated, and it is compared to the rate for the entire state of Western Australia, which is where I live. So red means the rate for the region is higher than the normal rate for the state, green is lower, and white is about right. But because of the way this one is calculated, it really only makes sense to visualize it as a divergent colour scheme — it's essentially three classes, zero, one and two — and this divides the three classes up in a way that makes sense, so giving a user the ability to change that colour scheme doesn't make sense, because this is a data-driven, result-driven visualization and you don't want to change it too much. On the other hand, this is a disease prevalence rate — the rate of disease per person, smoothed. The one on the left is again done using equal intervals, and because of outliers, equal intervals has a tendency to blanket everything within a few bins, leave large gaps, and then you might have one value at the end. The one on the right is quantiles, which divides the data into five bins with equal numbers of features.
So the one on the right shows a far better spread of what diseases are occurring where; the one on the left is very good for outlier detection. So if your question is "show me the outliers", that will work, but if it's "show me the distribution of the disease prevalence", the one on the right is a better way of showing that distribution — apart from the colour, because that's completely wrong: green is the highest prevalence of the disease, which intuitively makes no sense, because you want red. So this is one of the parameters: you either have to have a reverse option for a colour scheme — is high bad or is high good? Well, in terms of coverage, high is good, like coverage for a service; in terms of disease it's not. And the other part is that this really should be red-yellow-green, or rather green-yellow-red the other way around, with red showing bad, just how we interpret things — green is good, so a low rate is good; red is bad, so a high rate is bad. So while it shows a better spread, the colour scheme just doesn't work: if you look at that you see green and think, oh, that's fine, unless you actually look at the legend and think, oh, well, not so much. So there are things to account for here — giving a user a lot of leverage sometimes doesn't help. In this next one, these are based on gauge data — 40 years of daily readings — and the way it's generally summarized is a short-term over long-term average: is it greater or less than one, that sort of thing. The one on the left lets you compare between points quite easily. The one on the right, though, is again a divergent scheme pivoted on one, where one means the short term equals the long-term average, and that is actually better for comparing within a sensor reading. So it really depends what the user wants: to compare points against each other, or to compare within a point. And you can also show multiple attributes — I've got to speed it up — but essentially in this one the radius is based on one attribute, the colour is based on another, and the label is based on another, so we can put three attributes on there. This is more of a presentation — I call it a contextual visualization — because you're putting maybe too much information in there for someone to interpret; if you want to publish this, it may be a good one, but if you want to intuit it, maybe less so. This one is a radius relative to the ground, showing service coverage: the colour represents the number of, say, units or people who have access to that bus stop, and what I did here is the radius is actually 500 meters — a 500-meter buffer, because that's how they calculate the coverage — so you can visualize the radius they use for the buffer on the map and see where the coverage is; you can draw the houses behind it and see that sort of thing. Another way to view it is that the size represents the coverage, so size and colour in this case are matched. A third way to view it is an ellipse, where the Y is the number of houses covered, the X is the number of services per bus stop, and the colour is again tied to the Y. This one is quite useful — you can't really see it, which is actually quite good — but if you had an elongated X, which means a large number of services, so an ellipse shaped like that, you'd have a lot of services but not many people covered. So this is a very nice way to intuit that — okay, hang on, we've got to
change that one, or why are we covering so much? This next one — essentially you can inject a Styled Layer Descriptor within the metadata. This is a fishnet grid applied to summarize the point sensor data within each grid cell, and they have a very particular way to interpret that, so that styling can be injected. What I also did with this one — I'm calling it a meta layer — is that in the metadata I put the points into the layer, so I can just draw them in the same layer; so that's one layer with the grid and the points used to derive the grid. I'm not sure if "meta layer" is going to work as a name. You can have linked visualizations, again with colours linked: the left map is the male disease rate, the right map is the female disease rate, the x-axis of the scatter plot is the left map and the y-axis is the right map, and again they're linked by colour, and you can link them by hover and all that sort of thing. It's a quick way to do a spatial comparison, but you also have the graph to do the comparison as a scatter plot, which leads me to this: some of the data I'm looking at isn't from traditionally GIS fields, so epidemiologists like tables — but that doesn't mean we can't put the cartographic styling inside the table and maybe start them intuiting that this colour scheme means this, so when they see the map they can make that link. And again you can do it with multiple dimensions: this one is a disease rate with the cartographic styling applied and then a bunch of different attributes, so you can do some multi-dimensional analysis based on that. And again you can show multiple visualizations: this is the same as what we saw before, using quantiles, but this one is showing the feature space, so it's a way of showing the outliers. We can show the outliers while showing a sensible styling method as well, so you get those two pieces of information at once — you don't have to use equal intervals and then do something else that gives a better spread of the disease. And the last one is glyphs, essentially — the sad emoticon. Essentially there's no result here, so I'm thinking that, just like normal maps where a tree represents a forest, we should have iconography which represents something within the data: if you can't generate a result, everyone realizes that from the sad face, and if the result is hidden due to privacy, there's actually an emoticon with a metal plate over the mouth — that sort of thing. And basically that's it. Any questions?
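To make the "inject a Styled Layer Descriptor" idea concrete, here is a much-simplified sketch that turns class breaks and colours into SLD polygon rules of the kind a styling service could hand to GeoServer or attach to a WMS request. Real SLD documents need the full XML envelope and namespaces; the attribute name and break values here are hypothetical.

```python
# Much-simplified sketch: build SLD <Rule> fragments from class breaks + colours
# for a polygon attribute. The attribute name and breaks are hypothetical; a real
# document also needs the StyledLayerDescriptor envelope and namespace headers.
def sld_rules(attribute, breaks, colors):
    rules = []
    lower = 0.0
    for upper, color in zip(breaks, colors):
        rules.append(f"""
    <Rule>
      <Title>{lower:.2f} - {upper:.2f}</Title>
      <ogc:Filter>
        <ogc:PropertyIsBetween>
          <ogc:PropertyName>{attribute}</ogc:PropertyName>
          <ogc:LowerBoundary><ogc:Literal>{lower}</ogc:Literal></ogc:LowerBoundary>
          <ogc:UpperBoundary><ogc:Literal>{upper}</ogc:Literal></ogc:UpperBoundary>
        </ogc:PropertyIsBetween>
      </ogc:Filter>
      <PolygonSymbolizer>
        <Fill><CssParameter name="fill">{color}</CssParameter></Fill>
        <Stroke><CssParameter name="stroke">#666666</CssParameter></Stroke>
      </PolygonSymbolizer>
    </Rule>""")
        lower = upper
    return "\n".join(rules)

print(sld_rules("prevalence_rate", [2.0, 4.0, 6.0, 8.0, 12.0],
                ["#ffffcc", "#a1dab4", "#41b6c4", "#2c7fb8", "#253494"]))
```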
I really enjoyed your presentation, because many tools give us many choices, but there is no philosophy, no guidance, on how to express things or what is right or wrong — I think you gave us what should be considered. I don't have that answer either; this is more me trying to figure it out by gauging users' reactions. You can tell — sometimes a slide goes up and it's "ah, okay, that's the one to show" for that case — but other times people like both, and then other people like one and not the other. So the idea is to be flexible, but then how flexible is another question; and also, how do you get a user to say, okay, I'm really asking this question? You can't really put "essentially I'm after this" into a form as a general sentence. So yeah, flexibility is a really important aspect of your presentation, but I wonder — you showed that one data set can be divided or expressed as three different results or visualizations, but sometimes you can take two data sets and mix them together, right? Yeah, yeah. So I think your presentation slides will have one data set and some results and visualizations — this one is one data set, but then I filter by male and female, so it's two results, and then there's the spatial visualization and then the scatter plot visualization — but again the thematic styling is adopted in all visualizations. Okay, good. Okay, thank you very much, everybody, for participating and listening. Thank you.
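One of the styling ideas from the talk above — classifying values relative to a target such as the 75% coverage threshold — can be sketched as a tiny helper that maps values onto a diverging palette around a pivot. The palette, class width and the 0.75 target below are illustrative assumptions, not the talk's exact implementation.

```python
# Sketch of pivot-based divergent styling: values below the target read red,
# values near it read neutral, values above read green. Palette and step size
# are illustrative only.
def diverging_class(value, pivot=0.75, step=0.1):
    """Return a colour for a value relative to the pivot of a diverging scheme."""
    palette = ["#d73027", "#fc8d59", "#fee08b",   # below the pivot (worst to nearly-there)
               "#ffffbf",                          # at / near the pivot
               "#d9ef8b", "#91cf60", "#1a9850"]    # above the pivot (better to best)
    offset = int(round((value - pivot) / step))
    index = max(0, min(len(palette) - 1, 3 + offset))
    return palette[index]

for v in (0.30, 0.70, 0.75, 0.80, 0.95):
    print(v, diverging_class(v))
```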
|
Current web standards have facilitated the online production and publication of thematic maps as a useful aid to interpretation of spatial data and decision making. Patterns within the raw data can be highlighted with careful styling choices, which can be defined for online maps using tools such as Styled Layer Descriptor (SLD) XML schema. Dynamic generation of maps and map styles extends their use beyond static publication and into exploration of data which may require multiple styles and visualisations for the same set of data. This paper explores the application of thematic styling options to online data, including mapping services such as Open Geospatial Consortium (OGC)-compliant Web Mapping and Web Feature Services. In order to be relevant for both user-specified and automated styling, a prototype online service was developed to explore the generation of styling schema when given data records plus the required output data type and styling parameters. Style choices were applied on-the-fly and to inform the styling characteristics of non-spatial visualisations. A stand-alone web service to produce styling definitions requires a mechanism, such as a RESTful interface, to specify its own capabilities, accept style parameters, and produce schema. The experiments in this paper are an investigation into the requirements and possibilities for such a system. Styles were applied using point and polygon feature data as well as spatially-contextual records (for example, data that includes postal codes or suburb names but no geographical feature definitions). Functionality was demonstrated by accessing it from an online geovisualisation and analysis system. This exploration was carried out as a proof of concept for generation of a map styling web service that could be used to implement automated or manual design choices.
|
10.5446/32005 (DOI)
|
Good afternoon, I am Ken Salanio of the Data Archiving and Distribution Component of the Phil-LiDAR 1 Program of the Philippines, and I will discuss our development of a data archiving and distribution system for the Philippine LiDAR program using object storage systems. So, a little introduction, then the work related to our project, then the working design we have implemented; I will discuss further the Ceph object storage system as well as the archiving process flow, and then a short summary. First, the Philippines is situated both in the Pacific Ring of Fire and the Pacific typhoon belt. It is visited by an average of 19 typhoons a year, so it's very prone to typhoons, earthquakes and other hazards. It is also abundant in natural resources. Therefore, there is a need for mapping to assess disaster risk and to account for these natural resources. So the Department of Science and Technology in the Philippines, with the higher education institutions — different schools around the country — organized programs for mapping, which are Phil-LiDAR 1 and Phil-LiDAR 2. These are extensions of a previous program, the Disaster Risk and Exposure Assessment for Mitigation (DREAM) LiDAR Program. Of these two programs, Phil-LiDAR 1 is in charge of data acquisition, validation and processing, as well as training, data archiving and flood modeling, while Phil-LiDAR 2 is mostly on natural resource accounting with regards to agriculture, forest, coastal resources, energy and hydrology. Because these two programs are acquiring LiDAR data for the entire country — and it's good to use LiDAR because it produces high-resolution geospatial data — the problem is that this data also leads to humongous data sizes, very big. So storage, indexing, retrieval and distribution prove a challenge for us. I will proceed to what we've reviewed from other sources. First is the basic storage system, the file storage system. It is very commonly used because it comes with the operating system you install on your computer; there is little setup needed — once you install your OS, there's a file system there — and it is a pervasive technology, meaning everyone is using a file system, whether we know it or not. The problem is that once the processes, like in our program, become complex — several processes after processes from the raw data producing several other datasets — the complexity of the directory structure we use increases with that. As an example, this is one of our directory structures for one component only. Our Phil-LiDAR 1 program has five or six components, and as you can see, the more processes are involved, the more complex the folders in the directory get. So that's one disadvantage. Next, we can also use geospatial databases, or GIS-enabled relational databases. There are two types of design for this. One is that the indexing is created as a separate layer from the actual database, because the database handles data by columns and geospatial indexing is a different thing. The other approach is to use specialized spatial columns to expose the columns directly to geospatial processing. There is an indexing overhead, especially if your data is constantly updated with new data and the indexing is separate, and there are scalability and query time issues due to this indexing overhead. And there is limited support for point cloud data, although there is pgPointcloud for PostGIS.
But from what we've evaluated, it's still better to just use LAZ, or LASzip, since it compresses point cloud data more efficiently. So next: why not combine the approaches so that we can maximize the advantages? Use LiDAR flat tiles in a storage system, use a relational database to manage the metadata for indexing the flat tiles, and then additionally use a distributed infrastructure for replication and scaling, so that if you need additional disk space you can easily increase it. The LiDAR tiles will be stored in dedicated distributed storage, and processing can be carried out by a high performance computing system — for example, if you can implement a cloud system like OpenStack. A perfect example of this is OpenTopography's architecture, which is composed of various software and hardware resources. They store the actual data in LAS format on a dedicated storage server, and their metadata relational database is an IBM DB2 database. They have an SDSC cloud platform for storing other data — SDSC is the San Diego Supercomputer Center in California — and processing and visualization are handled by a very large multi-processor system, all computers working on processing. While highly appealing, this raises the following concerns. One is the cost and difficulty of this infrastructure, because a fully deployed cloud system requires lots and lots of hardware, as well as internet connection speed and reliability — and for a country like the Philippines the internet connection is comparatively slow compared to other countries, and reliability is also an issue. We often encounter timeouts and dropped connections when our partner higher education institutions try to connect to our servers. Even if you have a high performance data center, if your client cannot connect to the server, well, it's kind of disconcerting. So for our working design, we still chose a combined approach, but a toned-down version. What we use is GeoNode, which has GeoServer on its backend, and then we use Ceph to store the larger files — larger data like the LAS files and the larger raster files. We tile them and put them into Ceph, which is our object storage; more on that later. We customized GeoNode and made it into a web portal, which is what we call LiPAD, our LiDAR Portal for Archiving and Distribution. LiPAD acts as our web portal and web user interface. It stores the smaller shapefiles and raster and vector data, while the large files are tiled and indexed into the Ceph object storage. The metadata of these tiled files is then uploaded into the LiPAD portal, represented by a gridded shapefile of the Philippines, which I'll show in the next slide. The features we added to GeoNode include authentication using Active Directory, indexing for the files stored in Ceph, as well as tiled selection of data and a data cart. This is a screenshot of how our tiled selection works. We display a gridded map of the Philippines and a colour coding scheme indicating which data is available, be it LAS, DTM or DSM. You can select data per tile, or you can drag a bounding box or specify a bounding box in the form at the right-hand corner. The selected tiles are then displayed below, and once you click submit, it will list down all the tiles selected for confirmation or download.
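A sketch of how a bounding box drawn in a tile picker like the one described could be mapped to the 1 km x 1 km tiles that intersect it. The tile naming convention used here is hypothetical, chosen only to show how object keys can be derived from grid coordinates.

```python
# Sketch: map a projected bounding box to the 1 km grid tiles it intersects.
# The "E{easting}N{northing}" naming scheme (in km) is a hypothetical convention,
# not the actual LiPAD tile naming.
import math

TILE_SIZE_M = 1000  # 1 km tiles

def tiles_for_bbox(min_x, min_y, max_x, max_y):
    """Yield tile IDs whose 1 km cell intersects the projected bounding box."""
    x0 = math.floor(min_x / TILE_SIZE_M)
    y0 = math.floor(min_y / TILE_SIZE_M)
    x1 = math.floor(max_x / TILE_SIZE_M)
    y1 = math.floor(max_y / TILE_SIZE_M)
    for ix in range(x0, x1 + 1):
        for iy in range(y0, y1 + 1):
            yield f"E{ix}N{iy}"

# Example with arbitrary projected coordinates in metres.
print(list(tiles_for_bbox(276500, 1612200, 278100, 1613400)))
```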
Now I will discuss the Ceph object storage system. First, what is object storage here as opposed to a file system? Data is managed as objects, which are basically self-contained units of storage that can be accessed like a file. But instead of accessing it the way you access a file in an operating system — where you need the full path of the file from the root directory — objects are retrieved by a unique ID which you can determine on your own; you can use a hash if you're managing very many objects. This offers storage size scalability, because adding disks for additional space is abstracted by the object storage system, and it also offers replicated backups of whatever you store inside it. So the features of Ceph specifically: number one, it is open source. It is also compatible with OpenStack and Amazon AWS if you choose it as the underlying object storage. There is support for a broad spectrum of programming languages to interface with this object storage: Java, C++, PHP, Python and Ruby. It can run on commodity hardware — you just need several machines; you do not need specialized hardware for this, and you can set it up over several desktops as long as you have enough of them. It is designed to be self-healing and self-managing, meaning if there are errors in, for example, one object, it will check the replicas along the system so it can correct it. It manages all the objects on its own — it has its own indexing on top of what it shows the user — and it also has a Representational State Transfer, or REST, API. REST is basically like the HTTP protocol — you can GET, PUT, DELETE — but instead of web pages it is for any web services you provide over a connection; that's how REST works. The features of Ceph also include block storage, which means Ceph can provide a virtual hard disk which can be mounted over a network — basically like network attached storage — so you can provide additional hard disk space on demand, as long as there is space, and you can mount it on any computer connected to the network where Ceph resides. And then the object storage is exposed through, like I said before, the REST API: you can use the OpenStack or Amazon APIs, or any of those languages, to interface with the Ceph libraries. It uses the RADOS library, which stands for Reliable Autonomic Distributed Object Store. As for the architecture of Ceph — for example, if you want to set up Ceph, the usual architecture is that there is one computer or host that serves as the gateway; this is how the other computers access Ceph, through what we call the gateway node. This gateway host or node is connected to several monitor nodes, which ensure high availability. What does that mean? For example, if one monitor node fails, another one takes over, so you can keep the connection live for as long as possible — as long as not all the monitor nodes fail. And the OSDs are basically your hard disks, which are managed by additional hardware to interface with the monitor nodes. For our archiving process flow, what we do, simply, is that after the data has been post-processed, validated and marked for archiving, we tile it on one of our servers — we automate the tiling using one of the servers in our setup. Then, once the files are tiled, they are uploaded by another automated script to Ceph, which generates metadata. This is usually just a long log file.
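An illustrative sketch of that upload step: Ceph's RADOS Gateway exposes an S3-compatible API, so a standard client such as boto3 can push each tile as an object and append a metadata line for later indexing in the portal. The endpoint, bucket, credentials and key layout are placeholders and not the actual LiPAD scripts.

```python
# Illustrative upload-and-log step for the archiving flow described above.
# Ceph's RADOS Gateway speaks the S3 protocol, so boto3 can talk to it directly.
# Endpoint, credentials, bucket and key layout are placeholders.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-gateway.example:7480",  # RADOS Gateway endpoint (placeholder)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def archive_tile(path, dataset="DTM", bucket="lidar-tiles", log_path="upload.log"):
    """Upload one tile and record a metadata line used to index it in the portal."""
    tile_id = os.path.splitext(os.path.basename(path))[0]   # e.g. "E276N1612"
    key = f"{dataset}/{tile_id}.tif"
    s3.upload_file(path, bucket, key)
    with open(log_path, "a") as log:
        log.write(f"{tile_id}\t{dataset}\t{key}\t{os.path.getsize(path)}\n")

# archive_tile("tiles/E276N1612.tif")   # uncomment with real data and credentials
```

Keeping the object key derivable from the tile ID and dataset type means the portal's index only needs those two values to locate an object, which matches the log-file-then-index flow the speaker describes.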
That metadata log file is then uploaded into our LiPAD portal so that it will be indexed, and the tile selection I showed you before is updated so that it shows this data is already available, using the color coded scheme. And so, yes, that brings me to the summary. As I said before, there is a need for LiDAR mapping in the Philippines for hazard assessment and natural resource mapping, but archiving the LiDAR data, as well as indexing and distributing it, proved to be a challenge. So we used a combined approach using GeoNode with GeoServer and the Ceph object storage. This setup also paves the road for us if ever we want to migrate to a fully distributed computing platform like, let's say, OpenStack. And these are just my references, so I'll just go through this. For acknowledgments, the authors would like to acknowledge the support of our Department of Science and Technology, the Philippine Council for Industry, Energy and Emerging Technology Research and Development, and the Phil-LiDAR 1 research and training staff. So thank you very much. Thanks, Ken. We have questions from the floor. No? Come on, guys. Okay, I have a couple of questions. Sure. So you used the grid coordinates for the filenames of the tiles. Oh, yes. What about if you have multi-temporal datasets? If your data is collected at different times over the same area? Yes. So far we've only had one dataset constantly being updated. We haven't yet planned for that, but if ever we did, we'll probably add additional indexing to every one of the tiles, I guess, which will include the time at which it was taken. Another question is, what is the coverage of your LiDAR dataset? Is it national coverage or just a... Right now, what we have is our 18 major river basins, because that is what the initial mandate of our program indicated. But now we're trying to cover most, if not all, of the river basins, so it's not entirely the Philippines, just the ones that are more prone to flooding and along the fault lines as well, for earthquakes. So the more risk-prone areas are what we have data for now. Yes. Let's see. Yeah. I have two questions. The first question is, I was just kind of curious, you had an EPSG coordinate reference system indicated there. So what system are you storing the LiDAR in? Do you know? What coordinate system do you have? As I remember, it was EPSG, but I think we had a consensus because the other team was introducing another EPSG code; I forgot which one. Right, sorry, EPSG is just the system; the actual number of the... Oh, the actual number. The coordinate reference system. Because in the Philippines... Yeah, there are many. Yeah, so... I will return to the last slide to see what number it is. And the other question, while you're looking that up, is: is this LiDAR data available to all researchers, or is it kind of... The thing is, the data is not open. Okay. It's not open yet, at least not yet. Okay. Because it's part of the mandate of the program that there's a data policy, and it can be retrieved by request, although I'm not sure what the decision was with regards to the international community, because some higher-ups within the DOST are a little bit iffy with opening the data. Yeah, no problem.
I mean, having worked in the Philippines for over 10 years, I think you may want to tell your higher-ups that they need to not just worry about how to store and collect the data, but to make it available to the community, including municipalities and any researchers, easily. Yeah. Because I found it extremely difficult to extract anything from the government, good or bad data. Okay. So... Just out of curiosity, do you remember what the file size of one tile is? Oh, the file size of one tile is around 14, 15 to 20 MB, if I recall. Megabytes? Megabytes, yes. So... It's very small. That's just one out of three, because that's, I think, the raster or the orthophoto. There's another one of those for each type, because there's an orthophoto, there's a DTM and there's a DSM, as well as a LAS file. So each one ranges from, actually, I think, 10 to 20 MB. So multiply that by around four, and then by the square kilometre area of the Philippines. And there are different versions of each; it's not necessarily the time it was taken, but more of what process it went through. So there are several tiles, I would guess. So one tile represents one square kilometre? Yes. No questions? Okay. Thank you very much, Ken. Thank you very much. Thank you very much.
|
The Philippines' Department of Science and Technology, in collaboration with Higher Education Institutions (HEIs) led by the University of the Philippines, has embarked on a program for producing hazard maps of most major river systems in the Philippines. Realising the utility of LiDAR and its derived datasets, a concurrent program on resource assessment was also initiated. These endeavors aim to produce essential products such as DEMs, Orthophotos and LAS data that can be used for different purposes such as urban planning, resource planning, and other purposes these geospatial data might be able to serve. The result of both programs is a large amount of data that needs to be distributed and archived at a fast rate. As with other LiDAR operations, handling large swaths of spatial data at once is not an option, hence datasets are organized in contiguous blocks, subdivided by files and grouped by river systems and local government units. Existing spatial content management systems and geoportal solutions were designed with capabilities for handling rasters and vectors, but not for point-cloud data distribution. This study discusses the development of a simple and straightforward system for storing and delivering LiDAR and LiDAR-derived data using Ceph as the object storage system, coupled with a spatial content management system derived from GeoNode. This approach hinges on our requirements of being scalable yet robust without much deviation from the current file-system-based storage structure. While most operations like data acquisition, preprocessing and quality checking are done centrally, the system aims to address our programs' needs for data exchange between spatially distributed, autonomous partner HEIs who perform data processing and validation. The system also aims to semi-automate our data distribution process, which caters to government institutions and the general public.
|
10.5446/32007 (DOI)
|
The data used include open data services and websites, as well as DEMs and high-resolution imagery. The system design has been carried out as a two-year project, and it is supported by open source software. Next, the open-pit mines and the concept for establishing the monitoring: the reason for choosing the open-pit cement mines in Gangwon-do is that basic data and information about these mines were available, including active mines such as the Lafarge Halla cement mine and the Dongyang cement mine. Data collection is organised according to monitoring area, and the means of upgrading the data are also set. The figure is an example of a subdivided monitoring area of an active mine, the Lafarge Halla cement mine, and this is the two-dimensional analysis result using a differential image method with airborne laser survey data from 2007 and 2014. The 2007 data were upgraded with the newer data, and the whole dataset is provided through the geospatial information open platform and open data services, together with the image services. The data are upgraded in two ways: through airborne laser surveys of the whole area and through additional surveys of smaller areas. For the terrain, the SRTM DEM supplied with the platform is upgraded and served through the terrain provider used by the monitoring system. Next is the design and development of the monitoring system, which is built on the open source platform, with DEMs produced by airborne laser surveying. Three-dimensional change detection between 2007 and 2014, carried out with terrestrial LiDAR scanning after a collapse at the site, could be utilized for earth-volume calculation as well as for establishing a restoration plan for the area. Three-dimensional analysis of the restoration area verifies the progress of the disaster restoration, and 3D modeling of the restoration area makes the difference between before and after the restoration recognizable. Next, we built a prototype monitoring system. This is the system's main interface: red symbols are active mines and blue symbols are inactive mines, and clicking a symbol brings up its information. The upper left button selects the DEM, and the bottom left button flies to the mine location. Let's go to the Lafarge Halla cement mine. The system shows the data in Cesium; this is its main interface.
This area is the restoration area. Orthoimages and airborne laser survey data from 2007, 2008, 2010, 2012, 2013, 2014 and 2015 were loaded, so the sequential restoration of the area can be examined over the elapsed time, together with the three-dimensional change analysis. Are there any questions from the audience? I'm sorry, I can't answer in English. That's okay. I'll ask the question. Yes, thank you.
|
Large-scale open-pit mines are critical infrastructure for acquiring natural resources. However, this type of mine can experience environmental and safety problems during operations and thus requires continuous monitoring. In this study, a web three-dimensional (3D)-based monitoring system is constructed using open-source geospatial information software and targeting the open-pit mine in Gangwon-do, Korea. The purpose is to develop a monitoring system of open-pit mines that enables any person to monitor the topographic and environmental changes caused by mine operations and to develop and restore the area's ecology. Open-pit mines were classified into active or inactive mines, and monitoring items and methodologies were established for each type of mine. Cesium, which is a WebGL-based open-source platform, was chosen as it supports dynamic data visualization and hardware-accelerated graphics related to elapsed time, which is the essential factor in the monitoring. The open-pit mine monitoring system was developed based on the geospatial database, which contains information required for mine monitoring as time elapses, and by developing the open-source-based system software. The geospatial information database for monitoring consists of digital imagery and terrain data and includes vector data and the restoration plan. The basic geospatial information used in the monitoring includes high resolution orthophoto imagery (GSD 0.5 m or above) for all areas of the mines. This is acquired by periodically using an airborne laser scanning system and a LiDAR DEM (grid size 1 m × 1 m). In addition, geospatial information data were acquired by using a UAV and terrestrial LiDAR for small-scale areas; these tools are frequently used for rapid and irregular data acquisition. The geospatial information acquired for the monitoring of the open-pit mines represents various spatial resolutions and different terrain data. The database was constructed by converging this geospatial information with the Cesium-based geospatial information open platform of the ESRI World Imagery map and with STK World Terrain meshes. The problems that resulted from the process of fusing aerial imagery and terrain data were solved in the Cesium-based open source environment. The prototype menu for the monitoring system was designed according to the monitoring item, which was determined by the type of mine. The scene of the mine and changes in terrain were controlled and analyzed using the raster function of PostGIS according to the elapsed time. Using the GeoServer, the aerial imagery, terrain, and restoration information for each period were serviced using the web standard interface, and the monitoring system was completed by visualizing these elements in Cesium in 3D format according to the elapsed time. This study has established a monitoring methodology for open-pit mines according to the type of mine and proposes a method for upgrading the imagery and terrain data required for monitoring. The study also showed the possibility of developing a Web 3D-based open-pit mine monitoring system that is applicable to a wide range of mashup service developments. Acknowledgments This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIP) (NRF-2013R1A2A2A01068391).
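As a hedged sketch of the web standard interface mentioned above, the snippet below requests one period of mine imagery from a GeoServer WMS endpoint using the TIME dimension. The server URL, layer name and extent are placeholders; only the use of a standard WMS GetMap request with a time parameter follows the abstract.

```python
# Sketch: fetching one survey period of orthophoto imagery over WMS.
import requests

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "mine:orthophoto",            # hypothetical layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "128.95,37.20,129.05,37.30",    # placeholder extent around the mine
    "width": "1024",
    "height": "1024",
    "format": "image/png",
    "time": "2014-01-01",                   # the elapsed-time period to visualise
}

resp = requests.get("http://geoserver.example.org/geoserver/wms", params=params, timeout=60)
resp.raise_for_status()
with open("orthophoto_2014.png", "wb") as f:
    f.write(resp.content)
```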
|
10.5446/32009 (DOI)
|
Okay, good morning everyone. I am Donnie from the University of Pretoria, as you know. I'm here to present our findings on the research we did on augmented reality, specifically for mobile devices. The focus of the paper was to identify and evaluate an augmented reality framework that could be used to develop a mobile application to solve certain use cases. Now, for those who are unfamiliar with the term augmented reality, here we refer to the superimposing of digital information onto a real-time view of the user's surroundings. The research conducted here forms part of a greater research endeavor that we are doing in geocoded addressing, specifically to display addresses in augmented reality. Okay, for our first use case we looked at disaster relief. In situations such as this, emergency workers would need to get to a specific location. Street names and street markers or house addresses would be destroyed in a situation like this, and much more would most likely be destroyed: entire blocks of buildings or street networks would be gone, making conventional methods of navigation obsolete. In order for an emergency worker to navigate toward a location, he would access an off-site data repository and use this information on an everyday mobile device. The AR application would then give him the digital view and allow him to navigate toward a specific location without being familiar with it. In our second use case we looked at household surveys. Here we would aid in conducting household surveys, specifically in rural or informal areas. In situations such as this there is no address infrastructure and no official roads; the only official roads are those connecting one informal settlement to another. This is quite common in South Africa. As you can see here we have Alaska in Mamelodi, which is an informal settlement, and you can see the houses are fairly scattered. There are no address markers and no official streets. An enumerator could use an augmented reality application such as this by assigning digital numbers or random numbers to an aerial photograph and uploading this data to the augmented reality application, giving you a sense of a virtual address network, or a makeshift address network if you will. The third use case we looked at was data quality management. This would provide governmental bodies a unique method of validating their address data, from say a database which has been recently updated, against physical markers in the real world. This is Alaska in Mamelodi again, and as you can see these are all valid addresses from some official body or some addressing body. You also see that there is duplication, and this is actually the norm in this area: most of the house numbers are just painted on, with duplication bringing in more errors. So the application would allow us to validate these addresses. A field worker could go out and specifically assign a unique house number, or choose one of the numbers to be displayed as the official number. From the three use cases we identified four basic requirements against which we selected our frameworks. First of all, no internet access. In a disaster situation the internet infrastructure would most likely be damaged, and in informal areas there would be no internet infrastructure, so the application would have to function without a connection. It had to be free of charge: in an emergency situation, emergency workers would be limited by licensing restrictions, and everything depends on rapid response.
If they have to wait for licensing in a specific region, the application would not be as effective as intended. The last two requirements were high precision and the calculation of distance. The framework needs to be able to distinguish between a specific address and adjacent structures. In a densely populated area, structures are practically on top of each other, and the problem becomes that the application might confuse the intended address with those next to it. The last requirement we looked at was the calculation of distance. A user of the application would most likely deploy it on foot, and having the ability to calculate the distance between the intended address and the user is quite a useful tool. As an example of densely populated areas, these are favelas just outside of Copacabana. As you can see, they're quite densely populated and they go up an incline, making the ability to judge distance and to select a specific address quite difficult. The application would have to discern between different buildings and be able to calculate distance based on elevation. Our evaluation method for selecting the candidate frameworks was structured as a two-phase approach. In the first phase we did a broad evaluation of the sensor availability and the activity of the frameworks that we selected. In the second phase we did a detailed evaluation of each of the candidate frameworks, and we looked at the general, functional and non-functional requirements of each to identify their strengths and weaknesses. So in the first phase, the first step was to identify frameworks. To do this we first looked at an online comparison tool called SocialCompare. From this we identified 66 possible candidates for phase one. Guided by user comments on this website, we selected two additional frameworks to be added, giving us 68 candidate frameworks for the rest of phase one. First of all we evaluated the sensor capabilities. From the 68 original candidates we looked at GPS and IMU sensors and eliminated those that did not meet the requirements, resulting in 12 potentially viable options. In the further review of the frameworks we looked at the website, the activity of the website and the purpose of the frameworks. Augmented reality has quite a wide field of application: there are gaming applications, fashion applications and then of course navigation. Gaming and fashion obviously do not need GPS, and that is why we eliminated those frameworks based on their main focus. This left us with a final seven, which moved on to phase two of our evaluation, the detailed or in-depth evaluation. First we have AR Lab. AR Lab comes in a modular structure, so we looked at the browser module specifically. ARToolKit and Layar were the two frameworks included from the user comments on SocialCompare. The remaining four were DroidAR, Metaio, PanicAR and Wikitude. So all of these went on to phase two. Now I'm just going to discuss some of the results that we found in phase two of the evaluation. First we looked at the platform and the programming language, as these two were interconnected. From our results we found that Android and iOS were the two main platforms: all of the frameworks were deployed on these two platforms, apart from DroidAR, which is only available on Android. The programming languages obviously follow suit, with Android being Java and iOS using Objective-C.
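Returning briefly to the distance-calculation requirement described above, the sketch below computes a slant distance between the user and a target address from latitude, longitude and altitude readings. This is purely illustrative and is not taken from any of the evaluated frameworks; the helper names and sample coordinates are made up.

```python
# Sketch: distance to a target address, including the elevation difference.
import math

EARTH_RADIUS_M = 6_371_000

def surface_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def slant_distance_m(lat1, lon1, alt1, lat2, lon2, alt2):
    """Combine the surface distance with the elevation difference."""
    d = surface_distance_m(lat1, lon1, lat2, lon2)
    return math.hypot(d, alt2 - alt1)

# User at the bottom of a slope; dwelling about 145 m away on the map and 40 m higher.
print(round(slant_distance_m(-22.9850, -43.19, 5.0, -22.9863, -43.19, 45.0)))  # ~150 metres
```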
The next set of general criteria was the licensing and the implemented standards. For the licensing, we found that only two of the frameworks were open source: ARToolKit, for iOS specifically, and DroidAR were our two open source options. The rest were proprietary, offering various forms of freeware. For the implemented standards, I don't know how many of you are familiar with ARML 2.0; it is a fairly new standard, and it is only implemented in three of the frameworks: Metaio, Wikitude and Layar. The rest of the frameworks did not implement the standard at all, with the open source ones specifically lagging behind in any standard implementation. Offline availability, as I mentioned earlier, was one of our primary focuses. We looked at areas where there is no internet infrastructure, or damaged infrastructure, and the ability to access the data offline. Here we saw that ARToolKit and DroidAR, both being our open source options, had the ability to operate and access data offline. The rest of the frameworks all had the capability, but required extensive programming in order to enable this. Seeing as they were proprietary frameworks, altering their source code was not a viable option in many cases, but it was possible. We then looked at data sources after the offline availability. All of the frameworks use web services as their method of data acquisition, and we found that Layar, Metaio and Wikitude all use proprietary methods of acquiring the data. The secondary way of acquiring the data was in the native code. Again, the divide came in between open source and proprietary frameworks: this method was quite easy to implement with open source, whereas altering the native code of the proprietary software was limited. Looking at the functional criteria, we examined the data display, the object events and the display radius of the markers. First of all, the data display. We looked at the visual representation of how the address data would be displayed in the user's view. Here we saw that almost all of them could alter the display, changing the bubbles and changing the text; it was primarily text focused. This was quite an important feature for us, that the display could be simplified and still be usable. As you can see, Metaio was one of the only ones we couldn't alter immediately. The problem here was that the styling is relatively branded, specifically in the proprietary frameworks. It can be altered, but there are problems. Object events refer to editing or selecting the data to expand it and retrieve more data from a server or from a preloaded file. If we look at a basic example, you can see that with the display radius we want something that alters the maximum distance at which markers are displayed; this is where this feature comes in. If users can change the amount of detail to be shown within a specific radius, they can apply the application to various situations. For the object events and the data display we looked at the physical display of markers on the screen: for instance, can the user change the basic font styling and things like that. The other aspect was selecting the markers to expand them: instead of just displaying a house number, we wanted to display more, such as a street address or a city. The last of the functional criteria was the visual search functionality. This criterion was not initially selected based on the three use cases; we do, however, think that it would be useful in such applications.
Visual search refers to using point data or edge detection in order to measure a dwelling's geometric properties and then identify it based on these properties. Here we found that three of the frameworks had the capability: Metaio, Wikitude and AR Lab. However, for AR Lab we evaluated the browser module, and since AR Lab is a modular solution, the visual search functionality was not available in the browser module. The final set of criteria was the non-functional criteria. Here we looked first of all at the ease of integration with other GIS software. The idea behind this was to draw on capabilities from external software and use them directly in the framework; for instance, data acquisition would have been greatly improved if we could use something like PostGIS as our primary source of data. Unfortunately, none of the frameworks supported this, which was kind of odd, as this functionality could usefully be implemented. For the ease of extending the frameworks, we looked at the ability to add additional functionality, for instance the distance calculation. If the framework can be altered, the previous criterion becomes less of an issue. Regarding ease of extending the frameworks, we found that only the two open source options provided this. Usability covered the basic installation and implementation of a general application; all the frameworks were easy to use and easy to implement. And finally, the documentation and support functionality. Here another divide came in between the open source and the proprietary frameworks: the open source frameworks were more focused on the user community, and the proprietary ones relied more on official documentation, of which they had a larger supply. So the biggest divide that we found was between these two. In no way are we putting the two against each other; it's not a versus. This is simply to show that open source and proprietary frameworks had different strengths and weaknesses. On the open source side we had ARToolKit for iOS, which was our open source solution, and on the proprietary side we looked at the other five or six options. We found that the open source options had active user communities and could be modified in multiple ways; they also supplied the offline availability, which was a strong point in their favor. With the proprietary frameworks, we found that the official support was quite extensive, but the extreme cost of most of the applications and the limitations on altering the source code counted against them. So what happened is we took these frameworks and did some prototyping on the campus of the University of Pretoria. Two frameworks were selected: ARToolKit, which was our open source option, and we compared it against Metaio, which was deployed on Android. We assessed the functionality and actually developed the two applications to display augmented reality successfully. During the prototype testing we did, however, find additional problems. In community-driven projects such as ARToolKit we found that there were deprecated features, which required extensive programming to work around. And with Metaio, it became apparent again that altering the method of display was quite an issue. Also, the sensor limitations became apparent, which will require further testing in future work.
So for future work, what we actually need is a new approach to the whole idea of these frameworks. First of all, we need something that can import a shapefile, or integrate with external GIS software such as PostGIS or QGIS, and then alter the data. If we had custom libraries and cross-platform availability, the value of such a solution would be greatly increased. One of the most important aspects we looked at was the support structure. As you can see, there are many augmented reality frameworks, but many of them have become outdated and deprecated; the developers have moved on to bigger things. A structured support system, for instance from OSGeo, would provide augmented reality development with a unique structure, legally and organizationally, to continue and grow. Thank you very much. Any questions? I was excited to find somebody else working on this. I'm curious why you didn't just consider using a more common framework to do augmented reality. We're working with 3Mobile on classes, but it's a 3D globe, so you have all your normal GIS functionality, it's open source, cross platform. The guys aren't here if you want some support. I wonder why you didn't consider working with a GIS-centric framework, because most of the augmented reality features that are in these toolkits you're not really using: you're not doing targets or image recognition or computer vision, you're doing strictly GIS-based functions. I just wonder why you didn't go that far. One of the reasons we looked at existing frameworks was because not everyone has a GIS background. We specifically looked at existing frameworks because there are so many; on that SocialCompare site you can find many solutions, not requiring any GIS skills, and very little programming is needed in most cases. This is one of the reasons we didn't look at GIS as a solution directly: we looked at what is out there instead of creating a whole new approach to the solution of augmented reality. I just wanted to know whether you could recap the advantages for an emergency responder on the ground of using augmented reality over a 2D map, because it seems to me quite a complex environment. I wondered if you'd spoken to any emergency responders; who else is doing this on the ground, or do you have any case studies of people doing it in the real world? We didn't speak to any emergency responders directly. We did, however, look at some of the simple features, or some of the obstacles, that we encountered in the field ourselves while collecting address data and things like that. One of the reasons for the distance calculation was specifically because we struggled to navigate towards a specific structure even in an organized environment, and we worked that back to what it would be like in an informal or unorganized environment. The same with the high precision. These requirements were just extrapolated from our general sense of what happened in the field, and we applied them to another field, such as that of the emergency workers. Sure, but if you've got an initial 2D map with a GPS point on it, with the address of the house, that's why the emergency... Can I add something to that? Dhani doesn't know about the discussions that we had and why we did this project, so I just want to add to that.
At the ISO meetings where we were developing the international addressing standard, we are now looking at a standard for displaying addresses not only on 2D maps, but also in 3D environments. And in actual fact, the people from New Zealand and Christchurch said that something like this would have been really useful in their case. If I could just add to that as well: one of the things is that, specifically in emergency situations, with a 2D map the conventional methods of navigation might be altered, for instance by the terrain. This is where augmented reality would also give strength to, or provide a new solution for, the emergency situation. If an entire network of streets and buildings has been shifted due to something like a tsunami, a conventional map would not necessarily provide the solution. Although it could still be used, and the user of the map could obviously use their own initiative to navigate in the situation, this would just add speed to the solution. Okay, we have time for just one question. We have one question, so if there is nobody else... Okay, I don't remember the question. Yes. So give me a bit of memory, so I have to remember now. I was going to say, Metaio was definitely a cautionary tale of why we don't use proprietary software, right? Because they had all these developers and then they got bought by Apple. They just shut it down, they don't offer any more support, and everybody that was using Metaio was screwed. At the start of our research, we started just before Metaio was acquired by Apple. So yes, this was the case with one of the proprietary software packages. For most of the research on this framework, we found that it was fairly useful; there were limitations, obviously. And then halfway through our research, before publishing or before finishing the paper, it disappeared from the map.
|
Addresses play a key role in facilitating service delivery, such as mail, electricity or waste removal, in both urban and rural areas. Today, preparation of digital geocoded address data in a geographic information system is a reasonably simple task. However, erecting and maintaining address signs in the physical world may take time due to lengthy procurement processes and vandalism or a disaster may cause signs to disappear. Displaying addresses in augmented reality could close the gap between digital address data and the physical world. In augmented reality, a live view of the real world is superimposed with computer-generated information, such as text or images. Augmented reality applications have received significant attention in tourism, gaming, education, planning and design. Points of interest are sometimes displayed, but addresses in augmented reality have not yet been explored. The goal of this article is to present the results of a two-step evaluation of augmented reality mobile development frameworks for address visualization. First, we evaluated eight frameworks. Based on the evaluation, we implemented an application in two of the frameworks. Three use cases informed the evaluation: 1) disaster management, e.g. address signs are destroyed by an earthquake; 2) household surveys, e.g. locating dwellings in informal settlements or rural areas where addresses are not assigned in any specific sequence and signs do not exist; and 3) address data quality management, e.g. validating digital address data against addresses displayed in the physical world. Evaluation criteria included developer environment; distribution options, location-based functionality, standards compliance, offline capabilities, integration with open source products, such as QuantumGIS and PostGIS, and visualization and interaction capabilities. Due to procurement challenges in the use cases, open source licensing and integration with open source products was a strong requirement. Results show that very few open source frameworks exist and those that do exist, seem to be dormant, i.e. the latest versions are not in sync with the latest mobile operating systems. The use cases require offline capabilities (e.g. due to internet downtime after a disaster or lack of connectivity in rural areas), but few frameworks provided such support. The recently published Open Geospatial Consortium (OGC) Augmented Reality Markup Language (ARML) was implemented in only three of the frameworks. The evaluation results can guide developers in choosing an open source framework best suitable for their specific needs and/or for integration with open source products. In future work, we plan to evaluate the impact of internet connectivity and limitations of sensors in mobile phones on the precision of address visualization in augmented reality in the three use cases.
|
10.5446/32011 (DOI)
|
So good afternoon everyone. I'm Anu Surya. You can just call me Anu. I'm from Mineral Resources flagship of CSIRO. So CSIRO is a federal organization for scientific research in Australia. Today I'm going to talk about a metadata model and a web service which we develop to support the registration and management of different kind of environmental samples in CSIRO. Example of environmental samples are for example physical specimens like water, plants, rocks, insects. So in current practice these samples are collected and stored by different entities. This includes for example individual researcher, laboratories, universities, state agencies, for example geological survey and museums. And each sample corrector just uses their own way of documenting the sample description. This leads to several problems. For example different names can be used to describe the same sample. This is possible. And also it is possible that the sample name changes over the time. For example when they relocate one sample to another location, from one location to another location, then they rename the samples. So if you want to use the sample within the same organization then you won't have any problem because basically maybe you already have some records about the samples and you can easily identify the sample. But what happens when you want to expose these samples to somebody outside your organization then unique identification of the sample is an important factor. So it's similar to the current identification system. For example we have an international serial book number, ISBN, and this is a globally identifier which can be used to identify books. So in a similar case we have a digital object identifier which can be used to identify publication. So in a similar manner we have ISBN, it stands for international geosample number. This is actually 9 digit alphandoric code which can be used to identify sample and specimens. And it is persistent. What I mean by persistent is that it has a stable link to the samples compared to URL because if you use URL, the URL can change over the time. But if it is a persistent identifier it has a stable link which contains which point to the description of the sample. So IGSN, this is an example of IGSN where it consists of first two characters representing the agent. I will talk about what is agent later but it consists of namespace followed by code. So the namespace represents the allocating agent and the code is actually assigned by the user and it consists of the data center and the followed by some numbers such as a combination of number and alphabets. So in this case this is a function which is from the interdisciplinary data alliance. So this IEDA, AIDA or AIDA, they are one of the allocating agents which formerly registered with the IGSN agency. And they have the namespace called IE and existing center or project or individual researcher can register the sample through this namespace IE. So in this case for example the core repository from the non-Earth observatory registered this parcel. So CCR stands for core repository and 001 is some number which is assigned to this parcel. So this is a persistent unique identifier which identified this parcel. So this is how IGSN code works. Another important aspect about the IGSN is that like I said before any project, existing project, individual researcher or any data center if they want to obtain this IGSN number they have to register through allocating agent. 
Allocating agents are, for example, IEDA; and CSIRO is one of the allocating agents formally registered with the IGSN top-level agency. So you can only obtain this persistent identifier through an allocating agent that is formally registered with the top-level agency. In CSIRO, as an allocating agent, what we would like to do is this: we have a lot of samples, millions of samples of different types, but these samples are isolated; some are kept by researchers, some are in the rock store. We would like to assign IGSNs to them, and for this purpose we want to develop a system. That is what I am going to talk about today. Some history about how we became an IGSN member: we became a member in 2013, and it started from the flagship where I am from, the Mineral Resources flagship. Currently there are three projects, or rock stores, which will use the system that I will describe in this presentation. I would also like to point out that the work I am going to present today is relevant to an ongoing effort with the other two allocating agents in Australia with which we have a collaboration, namely Geoscience Australia and Curtin University. There is already some work from the Lamont-Doherty Earth Observatory; they have already developed a metadata model and services, but the existing work is mainly focused on geochemical samples. In CSIRO we have different kinds of samples besides the geochemical ones, and therefore there are several technical limitations in terms of those services and that metadata model. That is the reason why we would like to develop one for CSIRO, to cater for the registration of different types of samples. So I will present two contributions today. The first is the metadata model, the descriptive metadata model, and the second one is the web service, which we call the allocating agent web service. Both of these are currently being used in CSIRO. Again, let's revisit this diagram, because I think it's very important for understanding which part this work belongs to. The client is one of the three existing projects in CSIRO which I mentioned before, and they have different kinds of samples. They will use the metadata model that we developed to send sample descriptions to the web service. The web service is run by the allocating agent, which is CSIRO, and this service talks to the top-level service. The top-level service registers the persistent identifier and returns it to the allocating agent service, and this service then sends the persistent IGSN code back to the client. But why do we need another metadata model? Because between the allocating agent and the top-level agency, the metadata model only covers registration information; there is no information about sample descriptions. So as an allocating agent, it is our responsibility to develop a metadata model that can capture the characteristics of different kinds of samples. The whole idea is also to use the service to expose the data to the public: whatever sample descriptions are captured here, we would like to expose through OAI-PMH, which is a harvesting protocol, so that the public can automatically get the descriptions from this service. All right, now some more information about the metadata model. I'm not going to explain each element in detail, but I have grouped the elements into several groups.
Basically we have some elements describing the sample identification, some elements describing how the samples were collected in the field, where they are stored and who stores them, the time dimension, and also other related information. For other related information, for example, we have several relations which we can use to say that this sample is a subsample of another sample, or that there is data attached to the sample. So these are different kinds of relations you can use to describe the samples. Although there are several elements, only a few elements are mandatory: for example, the sample number, which is the IGSN; the sample name, which is the local name of the sample; and whether it is public or private. I think this is very important, because in some projects you want to get the number but you don't want to release the metadata to the public yet, so you can mark it as private or public. We also have the landing page. What is a landing page? The landing page provides further information about the sample. Like I said before, we only capture the characteristics of a sample, but if you have more detailed information about the sample, that is obtained through the landing page. Then of course the sample type, whether it's rock, water, plant and so on. And the sample collection is very important: where the sample is held. We also use the concept of linked data; for example, we use controlled vocabularies to describe sample types and feature types. This is to use the power of linked data to give the user more meaningful information about the concepts. We also reuse some elements from the IGSN registry schema. What does that mean here? For example, we reuse the log elements and the related-resource relations which I described before; these are all derived from the top-level schema. This is to show that what we developed is not a completely new schema, but reuses the existing schema, customized accordingly to cater for different types of samples. All right, just some examples of what I mean by identification and sampling activity. Identification consists of the number, the name, other names, the sample type, the classification concept, why it was collected, and so on. Sampling activity covers, for example, the collection information: the location where it was collected, the time, the sampling feature and the host. For example, if you collect water from an observation well, the observation well is the sampling feature. Then there is the host from which the sample was collected, who collected the sample, the size, the measurement and the method, the campaign, and so on. So this is an overview of the descriptive metadata. Once we have the metadata, the next step is that we develop a service, and this service will be used by clients, which are the existing projects; it can also be an individual researcher. They can format their sample descriptions according to this descriptive metadata model and then send them to the service. The service implements a REST API, and these are some of the operations which are supported: for example, to list all the namespaces, to register namespaces, to register samples, and to get more information about the metadata through a sample number. Let's look at this in more detail. For example, registering samples is a POST method, and it will return a list of successful and unsuccessful sample registrations.
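A hedged sketch of how a client might call such a batch registration operation is shown below. The endpoint path, authentication and response field names are assumptions made for illustration; only the overall behaviour, POSTing an XML batch of sample descriptions and getting back lists of successful and unsuccessful registrations, follows the description above.

```python
# Sketch of a client of the allocating-agent web service (details are assumed).
import requests

SERVICE_URL = "https://igsn.example.csiro.au/samples"   # hypothetical endpoint

with open("capricorn_samples.xml", "rb") as f:
    resp = requests.post(
        SERVICE_URL,
        data=f.read(),
        headers={"Content-Type": "application/xml"},
        auth=("datacentre-user", "password"),   # placeholder credentials
        timeout=300,
    )
resp.raise_for_status()

result = resp.json()   # assumed response shape
print("registered:", result.get("successful", []))
print("failed:    ", result.get("unsuccessful", []))

# A registered IGSN can then be resolved at https://hdl.handle.net/10273/<IGSN>.
```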
I know this diagram is really small; it looks good on the projector, but here I'm not sure. Anyway, what I would like to show is that you have the client, for example an existing project in CSIRO, the allocating agent service, and then the top-level agency. First you send the XML, and then we do validation: the XML schema validation, to check whether the data is valid, and we also validate the namespace, because we want to ensure that each data centre or client can request a unique name and that only those with a valid namespace can register samples. In this process, you can assume the client program is sending about 200, 500 or 10,000 sample descriptions. But the problem is that in this part, between the allocating agent and the IGSN top-level agency, only sequential registration is supported: you can only register one sample at a time, and there is no support for registering multiple samples in one request. So what we do is iterate to make sure that all samples are registered. Only the successfully registered samples are inserted into our database, so we keep a copy of the sample registration here, and then we send the client the lists of successful and unsuccessful samples. I would like to highlight that not all samples may obtain an IGSN: it is possible that if you send 100 you will get only 80, due to network failures. So what we do is ensure that only successfully registered sample descriptions are stored here, and then tell the client which samples were registered successfully and unsuccessfully. I would like to show you one example of how this schema and data model have been applied. There is a project called the Capricorn Distal Footprint project, whose members come from UWA, the University of Western Australia, CSIRO and also the Geological Survey of Western Australia. This project basically looks for interesting minerals, such as gold or copper, and within this project they collect different kinds of samples: plants, water, soil and rock. We have implemented the system so that the Capricorn collection system sends the request, the service mints the IGSN by obtaining it from the top-level registration agency, stores the description, and then sends the IGSN back to the client, which is the sample collection system of the Capricorn project. And this is the result. This is an example of the XML which is created based on the metadata model I described before, and the IGSN begins with CSCAP: CS is the namespace for the allocating agent, CSIRO; CAP is the prefix for the Capricorn project data centre; and this is followed by a number assigned by the data centre. And then we register it. This is the main registration agency page, which shows that this sample has been registered, and that is the handle, the persistent identifier. If you navigate to it, 10273 is the namespace for IGSN, and it is followed by the IGSN number. This is an actionable persistent link: if you click on the link, it will give you more detailed information about the sample. Alright, to conclude: what I have described so far is the development of a metadata model for samples and a web service implementation for CSIRO.
The contribution here is that the metadata model is not domain specific; it is meant to capture information for different samples. It was not developed for a specific type of sample, but rather it can be used to describe the main properties of different types of samples. In this solution, both the schema and the service are important to facilitate the sharing of sample descriptions inside and outside CSIRO. What we would like to do next is test the solution on the other two repositories. Actually, we have already registered some samples from the Australian Resources Research Centre: about 1,500 sub-collections from the ARRC. The next step is also to register the samples from the mineral reflectance spectra collection, another mineral store. We would also like to formally document the mapping between our metadata model and existing metadata models, for example ISO and OGC ones. This is to ensure the correct application of the metadata model which we developed across different domains. And finally, we will develop a web portal with Curtin University and Geoscience Australia, and this portal will be used to house the sample descriptions from the different allocating agents in Australia. Thank you for your attention. Thank you, Anu. Are there any questions from the floor? I wanted to ask you, you mentioned these ISO standards at the back there on the last slide. Is that where the Observations and Measurements standard comes in? How does that relate to what you are doing? When we wanted to develop the metadata model, we looked into existing open standards, because we did not want to develop a new one. The problem with the ISO standard is, first, that it's not suitable for describing different kinds of samples, physical objects, and it is complex. Some constructs can be used, but not for all the core characteristics which I described on the slide with the different groups. With OGC, they have Observations and Measurements part 2, which addresses sampling features, and we see there is an overlap, but the modeling scope is different, because the OGC part is more observation centric, about how the sampling is done, whereas in our case it's more about the sample and its core properties. Of course the observation is one of the elements. So what I meant is just to show what the elements in our metadata model are and how they align with current standards like OGC. So if you say ISO, you are talking about 19115, the metadata one? Because there is also one on observations and measurements. And this is from there. No, it's from ISO also. It went to ISO. Yes, but this is originally from OGC. And there is a part 2 for sampling features. Thank you. You wanted to say something? Any other questions? I have another question, sorry for all my questions. If you go back to the front, Anu, it's not something that you did, but I was just curious. Right, where you explain the IGSN. Yes, there are only four digits here. So are there never more than 1,000 samples, or 9,999? Yes, there are. So first, according to the IGSN documentation, the recommended length is 9. That consists of the agent namespace and the code, and the code consists of the data centre prefix. But if the allocating agent thinks that there are samples, for example from marine ships, which have really long lines, then the allocating agents have the right to accommodate this kind of use case.
So the recommended length is 9 because this is easily readable in a barcode; it's a recommendation. Have you thought about including the location code in the sample number? There is something called a location coding system where, with 12 digits, you can encode the latitude and longitude of the sample. Actually, that's the idea of the IGSN; let me go to the last slide. Because this is an identifier, and it's an actionable link: when you click this link, it gives you the more descriptive metadata, which includes the location information. Also, not all samples have locations. For some types of samples there are never locations, and not all locations are geo-referenced. For example, something generated, like by a 3D printer, never has a location; it's more like a locality than a geo-referenced location. But that's the nice thing about having an identifier: it's unique, persistent and actionable. When you click it, it goes through to the landing page that gives you more information about the sample. Thank you. Any other questions? Thank you very much.
|
Records of environmental samples, such as minerals, soil, rocks, water, air and plants, are distributed across legacy databases, spreadsheets or other proprietary data systems. Sharing and integration of the sample records across the Web requires globally unique identifiers. These identifiers are essential in order to locate samples unambiguously and to manage their associated metadata and data systematically. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying environmental samples. IGSN can be resolved to a digital representation of the sample through the Handle system. IGSN names are registered by end-users through allocating agents, which are the institutions acting on behalf of the IGSN registration agency. As an IGSN allocating agent, we have implemented a web service based on existing open source tools to streamline the processes of registering IGSNs and for managing and disseminating sample metadata. In this paper we present the design and development of the web service and its database model for capturing various aspects of environmental samples. Previous work by the System for Earth Sample Registration (SESAR) was aimed primarily at individual investigators, whereas our work focuses on curating sample descriptions from larger collaborative projects. The paper describes the linkage between the IGSN metadata elements and the sampling concepts specified in existing common data standards, e.g., the Open Geospatial Consortium (OGC) Observations and Measurements standard. This mapping allows the application of the IGSN model across different science domains. In addition, we show how existing controlled vocabularies are incorporated into the service development to support the metadata registration of different types of samples. The proposed sample registration and curating approach has been trialled in the context of the Capricorn Distal Footprints project on a range of different sample types, varying from water to hard rock samples. The observed results demonstrate the effectiveness of the service while maintaining the flexibility to adapt to various media types, which is critical in the context of a multi-disciplinary project.
|
10.5446/32012 (DOI)
|
Okay, good afternoon everybody. Today I'd like to talk about my research. The title of the presentation has changed a little, like this. Okay. I'm working at the National Institute for Agro-Environmental Sciences, and I'm also a member of the OSGeo Foundation Japanese chapter. This is the outline of today's presentation. First, I'd like to explain our study background and the development of the land use database. Then I'll present the progress of the database development and one application for land use change evaluation. And finally I'll talk about publishing the data as open data. First, I'd like to explain the importance of historical information. Just one week ago, really just one week ago, Joso City, which is about 10 km west of our institute, suffered a flood disaster caused by Typhoon No. 18. Like this. This picture shows the area. This is the broken dike, and water from the river came into this area. The blue-colored area is the flooded area; about 30 square kilometers were affected by the flood, like this. And the army came to rescue people from the tops of their roofs. When we compare this with the old map (this is the old map from the 1880s, and this is an aerial photograph after the flood), we can see that the old town area was not so affected by the flood. For example, here, as I can show on this website, this area is upland fields, and this yellow-colored area means paddy fields. The upland fields and the paddy fields were completely covered by the flood, but here is the old town, and the old town was not flooded. So the point is that the past land use pattern is very, very important for disaster mitigation. Now I'd like to explain about this map. This map is called the Rapid Survey Map, and it was surveyed in the early Meiji era, which means from 1880 to 1886. "Rapid survey" refers to the method used to survey the map, and it is also the name of the map series. The surveyed area is like this: most of the Kanto Plain. Tokyo is here, and our institute is here. There are about 900 map sheets in total. These are very important materials. This is a typical image of the Rapid Survey Map. A characteristic of the Rapid Survey Map is that it is a colored map. We can see that the yellow color means paddy fields, the green color means woodland or forest, and this light brown color means upland fields. And here is a Chinese character which means pine, so we can understand the species of the forest from this map. There are also sketches of the landscape, so we can imagine the landscape of 130 years ago from this map. But there are some problems. One is that this map is held by the GSI, the Geospatial Information Authority of Japan, which prints the Japanese national maps. The printed set covers only 160 sheets and costs about 10,000 dollars, so it is very expensive. So we are building a web GIS system to disseminate this kind of map. Anyway, many people have used this map to study land use and landscape change. For example, Shihai studied land use change on the Shimosa Plateau, Ichikawa studied land use change in the Tama Hills, Sprague studied land use change in Ushiku, Koyama studied this area here, and I studied both of the hill areas. But these previous studies only cover limited areas. No one knows how land use has changed across the whole Kanto area.
So my objective, the purpose of this paper, is to develop a quantifiable land use database. We had already developed a web GIS system so that everybody can see the map as raster data, but with that it is difficult to evaluate land use change. So now we are trying to build a land use database, and we are trying to publish this database as open data. First, I will explain how we developed the land use database, and we also conducted an accuracy assessment. In this study we built the land use database as 100-meter grid point data, like this. Of course, in previous studies we made polygon land use data, but making polygon data takes a very long time, so this time we use point-type land use data. We developed a point data input system using FOSS4G, like this. The interface was developed with QGIS and the QGIS API, and the database server was developed using PostgreSQL and PostGIS. I'd like to show you how it works. Here are the points, and in this column there is a land use code: paddy field, upland field, forest, orchard, or something like that. We interpret the land use underneath each point. For example, here is a paddy field, so we input 1; if the land use is an upland field, we key in 2, press enter, and move to the next point, and so on. We input data point after point, and after that we obtain land use data like this. After inputting the data, we first conducted an accuracy assessment, because the point data do not represent the whole map area. So we compared vector-based land use data with this kind of point-based land use data. This is the result of the evaluation: it compares the land use ratios, with the vector data on one axis and the point data on the other. The land use ratios are almost the same, so we consider the accuracy of the point data to be reliable for land use change analysis. Next, I will explain the progress of the data input. The total number of grids is 193, and 180 have already been inputted. In this study we analysed the land use change in these red areas. Here we compare the land use in the 1880s and in 1975. The reason I chose 1975 is that this period was likely influenced by the Japanese high economic growth period. The 1975 data were obtained from the National Land Numerical Information, which is roughly a 100-meter grid dataset, so we can compare the point data and the 100-meter grid data. This is the result. We display paddy, upland, woodland, grassland, village and urban areas, and water areas. The most significant change is, first, the grassland: grassland has almost vanished. And here, the urban areas have increased. I will show the maps: the left one is the 1880s and the right one is the 1970s. In these tables, the decrease of the grassland and the increase of the urban areas show large values, but that does not mean the grassland changed directly into urban areas. In this map the grassland is indicated by this orange color, but it is a little bit difficult to see, so I will show you the next map. Here we extracted the grassland area, shown in this orange color, and this blue color means marsh or some kind of wetland. Anyway, the grassland is distributed on the foot slopes of the mountains and on the tops of the plateau areas. And this shows the land use change.
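Going back to the input system described a moment ago, here is a minimal sketch of what the PostGIS side of such a system could look like. The table name, column names, connection settings and the assumed SRID are illustrative only; the talk does not show the actual schema.

```python
import psycopg2

# Hypothetical connection settings and schema; the real system stores the
# 100 m grid points in PostGIS and updates them from a QGIS-based client.
conn = psycopg2.connect(dbname="landuse", user="editor",
                        password="secret", host="localhost")

DDL = """
CREATE TABLE IF NOT EXISTS landuse_points (
    id        serial PRIMARY KEY,
    geom      geometry(Point, 4612),  -- assumed SRID (JGD2000 geographic)
    luc_1880s integer                 -- land-use code keyed in by the operator
);
"""

def set_landuse(point_id: int, code: int) -> None:
    """Record the land-use code interpreted under one 100 m grid point."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE landuse_points SET luc_1880s = %s WHERE id = %s",
            (code, point_id),
        )

with conn, conn.cursor() as cur:
    cur.execute(DDL)

# e.g. the operator keys in 2 (upland field, in this hypothetical coding)
# for grid point 1234 and moves on to the next point.
set_landuse(1234, 2)
```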
Which land uses did the grassland change into? The green means forest and the brown color means upland fields. Most of the grassland in the hill areas changed into forest, and most of the grassland in the plateau areas changed into upland fields. Now I will show you the land use change matrix from the 1880s to the 1970s. 16.6% of the upland fields changed to urban areas, 21.3% of the forest changed to upland fields, and 12.6% changed to urban areas. That means the upland fields and the forest decreased from the 1880s to the 1970s, but the decreased upland field and forest area was covered by former grassland. This is a kind of schematic diagram of land use change in the Kanto area. 18.8% of the whole land was grassland, because the traditional agricultural style needed grassland, for fertilizer, for materials for houses, and so on. But in the early Meiji era the tax system was changed. Before the change, people had to pay tax based on the production of agricultural products; after the system changed, they had to pay based on the land area. So they did not want to keep grassland that did not generate money, and it changed into forest or upland fields. But forest also changed into upland fields because of the fertilizer revolution: chemical fertilizer became popular in Japan in the 1960s. And some upland fields changed into other kinds of farmland. After that there was rapid economic growth and a rapid population increase, and this kind of land changed into urban areas. But in the future the population of Japan will decrease, and urban degeneration might happen. So the future is not clear, but we can conclude that the driving force is different in each period. We are publishing this kind of data as open data. First, we published the raster data in a tile format that can be read by Geopaparazzi or similar applications, so that you can bring the data into the field on a smartphone. And the point data generated in this research: we have already uploaded part of the data to GitHub. This is the GitHub page; you can see the GeoJSON files and click here to view them. But the GitHub page makes it difficult to understand what kind of land use is where, so we also developed a tentative website to display the land use data using Leaflet. You can click and check the land use, and you can select the background: this one is the current map, and the other is the old map, the original data. We can switch between them like this. Yes. And the conclusion. We developed a land use database as 100-meter point data, and about 90% has been inputted. The accuracy of the point data is reliable for analysing land use changes. We conducted an analysis of the Kanto area and could see the land use changes. Part of the database has been published as open data under a Creative Commons BY licence. In a previous study we published the Rapid Survey Map as a web GIS with tile map data, and that promoted utilization of the Rapid Survey Map not only for academic purposes but also for individual interests; there are people using this map for their hobbies and so on. We hope that this database also contributes to the dissemination and utilization of the Rapid Survey Map, not only for academic research but also for the public interest. Thank you very much.
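A small sketch of the cross-tabulation behind a land use change matrix like the one just described, using pandas. The column names, category labels and sample values are illustrative, not the actual database fields or results.

```python
import pandas as pd

# Sketch of the cross-tabulation behind a land-use change matrix, assuming a
# table of grid points with one land-use column per period. The data below
# are made-up placeholders standing in for the real point database.
points = pd.DataFrame({
    "lu_1880s": ["upland", "forest", "grassland", "grassland", "paddy"],
    "lu_1970s": ["urban",  "upland", "forest",    "upland",    "paddy"],
})

# Row-normalised matrix: share of each 1880s class converted to each 1970s class.
matrix = pd.crosstab(points["lu_1880s"], points["lu_1970s"],
                     normalize="index") * 100
print(matrix.round(1))
```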
We have time for one or two questions. I have two questions. The first one: when you say point, is it a point with a certain radius, or do you just look at one point and decide it is land use type 1 or 2? Do you consider a radius around the point when you evaluate the land use? Just one point. And then you have this semantic flow, there was one hierarchical diagram, and this is derived from collections of findings. How do you come to these conclusions? Because this flow seems to be applicable to the whole area. Yes, it is applicable to the whole area, but it depends on, for example, the topographical situation or the historical situation. For example, the eastern part of Japan was relatively undeveloped 100 years ago, so grassland still remained there. But the area near Tokyo was relatively developed, so the grassland there had already vanished; it had already changed into forest or upland fields. So it depends on the region, the study region, and it is not applicable to the entire area in exactly the same way, but the sequence is maybe similar. For example, this change applies to the eastern or less developed part, and this sequence applies to the area near Tokyo. This happened about 100 years ago near Tokyo, but it had not yet happened 100 years ago far from Tokyo; later, for example 15 years ago, this phenomenon happened far from Tokyo. That's okay. Yes, quite interesting, it's quite a nice structure. Okay, we unfortunately have to move on so that we can stick to our schedule. Thank you very much.
|
Historical land use records are valuable information for biodiversity protection, disaster management, rural area planning and many other uses. The Rapid Survey Maps (RSM) that were surveyed in the 1880's (early Meiji Era), are the first modern cartographical map series of Japan and important sources of information on traditional land use in early modern Japan. We had been analyzing these maps based on polygon data and raster based Web-GIS System to disseminate the Rapid Survey Maps using FOSS4G, but, these are difficult to apply for quantitative analyses of land use change. Thus, we developed a grid based land use database using QGIS and PostGIS, and published the database using GitHub. First, we developed a land use data input system consisting of a client and server. The client was developed using QGIS API and the server was a PostGIS database. Point data as a 100 m grid was stored in the PostGIS server and land use category underneath each point was input using the QGIS application. About 1,400 thousand records (70%) have already been inputted. Error of grid based land use data is less than 1% compared with vector based land use data. We analyzed land use change from the 1880's to 1975's. The most significant difference between the 1880's and 1970's is the area of urban land use and rough land such as grassland and bush. Urban area increased remarkably and grassland area almost disappeared. That does not mean grassland changed to urban area. Most grassland changed to agricultural land uses and forest, and urban area was formerly mainly agricultural land use and forest. Some inputted data have been copied to GeoJSON and uploaded to GitHub (https://github.com/wata909/habs_test/) as open data (Creative Commons BY 2.1 Japan). A tentative data browsing site was constructed with Leaflet (for example, http://wata909.github.io/habs_test/rapid544000.html). In this site, it is possible to compare point type land use data in the 1880's and present topographic map/RSM raster data. We hope that this database contributes to not only academic research, but also business, government, and public interest.
|
10.5446/32013 (DOI)
|
Okay, how many people come from an academic background? Just so I know. Okay, so just a few, so I can give you a bit of background on what we do. If you were here for the opening session, there was a little joke made: FOSS4G or Esri? And the takeaway was, well, who cares what the answer is; the very fact that the question is being asked means we've succeeded, or the open source community has succeeded, and it's become a player. Well, as an academic I started to think about that too, and I thought, in a similar way, for academic institutions giving out degrees with a GIS track, we can't ignore FOSS4G completely, because it's wrong to in the sense that it's arrived. So I don't know that we can move forward and say we're going to give an academic degree in GIS and not include open source as part of that stack. So what I'm going to be talking to you about is what we're doing. We don't have entire courses in open source, but we're in the process of folding this in. So if you didn't raise your hand when I asked about academia: the mission of an academic university is generally threefold, teaching, which most people are familiar with, research, and then outreach, which we sometimes call extension, a way in which to better the world around us in areas that don't include teaching and research. So there are other ways in which we do that. So I'm going to talk about each of those areas and how we're starting to use open source tools in them. The first one would be teaching, which a lot of you are probably more familiar with from an academic institution. My university is what is considered a regional institution. So for those of you here in Korea, like Chonbuk or Chonnam University, it serves a region. In our case, we serve the mid-Atlantic region, and fortunately we're near Washington, D.C., so that helps us out a lot in terms of having a good, influential audience. So we have about eight courses in GIS, but the ones I have in red are the ones where we started to fold FOSS4G-type concepts into the courses. That includes cartographic visualization, a GIS programming class, advanced GIS, enterprise GIS, and a spatial modeling course. Cartographic visualization is the classic class you may have taken on how to do cartography in a digital environment. It covers things that aren't necessarily covered in a GIS class: color theory, different ways of classifying data and presenting it. I've got some examples here. Now, the reality is that in the United States it is an Esri world, so we do a lot with that. Our traditional class was always ArcGIS and Adobe Illustrator, teaching students to do some advanced maps. What we've been doing now, what we've been folding in, is using tools like Mapbox for certain projects. Let's see if I can find it here. Here we go. Oops, not that one. So some of the things that the students had to do, if I can find it here, part of this, there are some interactive maps. So again, using things like Mapbox, learning how to do things like symbolizing data. I'm not sure if it's popping up yet, so I'm going to leave that out. This one. All right. So here's another one that a student had done. He's looking at barnacles in GIS, and you can see this thing is going to be moving and this chart is moving. So this is a student, typically a sophomore or junior level, a 19- or 20-year-old.
We're starting to invent, not only to keep some principles of layout and maps, but also how to make use of an open source tool to present that work in what they do. So that's been integrated in as part of their projects. Advanced GIS is just what we expect. It goes beyond our traditional GIS courses. And again, traditionally, we rely heavily on the ESRI software suite, typically introducing into the more advanced concepts of geo-processing. So we'll use things like spatial analysts or 3D analysts or statistical analysts or network analysts in ArcGIS. So it became a very ESRI-centric course. What we've been doing is folding in more open source. So if you were here yesterday, there was a nice talk on these, the 10-list tools to have in QGIS, right, all the different plugins. And those plugins had terrain analysis stuff, right? So just like 3D analysts, they had spatial analysis stuff. Just like spatial analysts, they had statistical stuff. So all these different things. So here's an example of a lab exercise and data in South Carolina, which is on the east coast of the United States. Classic example of a factory blows up and there's a school nearby and how many kids are going to have to be evacuated. So the typical rule-based kind of GIS thing we do. So you can see in one case they do their lab in ArcGIS, which is always a traditional way. We now have them replicate that lab in QGIS. It's not a QGIS course though, right? So this is kind of an after we're bolting this on, we're following it in. But at this point they're getting an appreciation for the kinds of plugins and the kind of relative speed that you can get out of an open source product. So that's complementing their advanced GIS course. So we're exposing to those kinds of plugins and duplicating some of those different kinds of activities in QGIS. GIS programming is another course. Now that was, this course is half and half. So half of the course is learning ArcPy and Python. Geography students have to take Python as an introductory course and they take this one. But we're using a spatial SQL with post-GIS. So in this case the students learn how to program classic GIS tasks. We're using this book called How Do I Do That in Post-GIS. Let's see if that pops up. Let's click on it. So in this book, if those of you may be familiar, there was a book that came out, How Do I Do That in ArcGIS in Manaford. There was about 70,000 downloads we had of that. I'm not going to bother with that. So this is just another kind of a book that was written to do the same thing. But how do you do all these classic QGIS-GIS defined tasks in Post-GIS? Here's an example from in the book. So the students will learn how to do an ArcInfo style intersect with Post-GIS. The intersect union identity, all kinds of buffering. So we include that in there as well. They also perform a classic GIS project that they either get to choose ArcPy or SQL with Post-GIS. We're on camera, so I'm not going to say which one they prefer. But you might be able to guess. The other thing we do is we make sure that we do this project from their other class. But now they do it in Post-GIS. So the same kinds of rule-based. So the time they're done with their education with us, they're sick of these kids that are near the factory that blows up. Because we have them do this project over and over again in different software suites. We have them doing QGIS, ArcGIS, and then here in Post-GIS as well. Another thing that we introduce is called Enterprise GIS. 
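As a flavour of the kind of spatial SQL those PostGIS exercises involve, here is a minimal sketch of an ArcInfo-style intersect expressed in PostGIS and run from Python. The table and column names (parcels, floodzones, geom) and the connection details are hypothetical, not the actual lab data.

```python
import psycopg2

# Sketch of the classic ArcInfo-style INTERSECT overlay in PostGIS spatial
# SQL, the kind of task covered in the "How do I do that in PostGIS"
# exercises. Table and column names are hypothetical.
INTERSECT_SQL = """
CREATE TABLE parcels_flood AS
SELECT p.parcel_id,
       f.zone_id,
       ST_Intersection(p.geom, f.geom) AS geom
FROM   parcels p
JOIN   floodzones f
  ON   ST_Intersects(p.geom, f.geom);   -- keep only overlapping pairs
"""

with psycopg2.connect(dbname="gisclass", user="student") as conn:
    with conn.cursor() as cur:
        cur.execute(INTERSECT_SQL)
```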
Now, as a small regional university, this was first started out as an independent study for some students. So this is where the students actually design a multi-user, multi-vendor Enterprise GIS. And we use Post-GIS on the back end. Just to show you an example, Post-GIS kind of drives the show. But we have multiple clients that simulate departments. And we have ArcMap, Fugis, AutoCAD, and Manifold, each hitting it. So the students are building a GIS database and accessing it with multiple users with multiple software products. Those of you who aren't from America may not understand these names, but we set up example users. Mo, Larry, Curly, and Shen, those are called the three students. So each of them have different kinds of permissions. So Mo might be allowed to edit some data. Larry can only look at it, but can he get out of the other data. What this does though is it teaches the students how to define groups and roles and users and really work in a true multi-user environment. So they replicate a simultaneous multi-user GIS doing this. The other thing we do is, as part of our advanced GIS class, the previous one, they build a geodatabase. So they learn about things like the subtypes and domains that you, yes, or I call them geodatabase. Then we kind of do it for real in a real GIS. So that's where we create views and we create groups and user logins and domains. So a lot of different terminology you'd see in the database world. At the end of the day, they have built a multi-user, multi-vendor enterprise GIS that we test out simultaneous user editing as well. So this is, again, junior center level. So 21, 22-year-old geography major. So we're trying to give them access to that kind of activity. Also some special topics, classes. I mentioned that how do I do that in ArcGIS and Manifold book, where we had my students have a special topic, how do I do that in quantum GIS? So the same task that we're in that book that got about 70,000 downloads, they took a two-credit class from me and I said, well, you pop open QGIS and create a book so I can get you access to that. Just talks about all the different tasks. Then another student who took an independent study. This is about, again, this is a 21-year-old geography student, not a programmer, but we're teaching them how to use some open source tools. Here he's using PostGIS and, I'm bringing in the parcels as well, PostGIS with leaflet and GeoJSON is back and forth. So again, this is what it is. But remember, this is a student in about a two-week timeframe. So we're trying to teach them how do you spin up, also know, so we're using node for our server, PostGIS is the back end, leaflet for the front end. And as he would finish things, he'd say, okay, the map's great. Now build me something that can select attributes. So he built a little query engine here that I won't step through, but the idea is you can then select the parcels that meet a certain criteria. So he's learning how to make that connection between the browser and the server, which is kind of cool. Okay, so those are the courses. Here's another special course we did. This is a blend of taking computer science students, geography students, and business students. Every year our university holds something called the Segal Century, and that's a 100-mile bike ride. We have what's called a metric century, too, 100 meters or 62-mile bike ride. About 4,000 people come on to our community for this. 
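Going back to the enterprise GIS class for a moment, here is a minimal sketch of the kind of role and permission setup described there, one user who can edit and one who can only read. The role names, passwords and the table are made up for illustration; this is not the actual class database.

```python
import psycopg2

# Sketch of the role/permission setup behind a multi-user enterprise GIS in
# PostgreSQL/PostGIS, mirroring the "Mo can edit, Larry can only look" idea.
# Role names, passwords and the table name are hypothetical.
STATEMENTS = [
    "CREATE ROLE editors;",
    "CREATE ROLE viewers;",
    "CREATE USER mo    WITH PASSWORD 'mo_pw'    IN ROLE editors;",
    "CREATE USER larry WITH PASSWORD 'larry_pw' IN ROLE viewers;",
    "GRANT SELECT, INSERT, UPDATE, DELETE ON parcels TO editors;",
    "GRANT SELECT ON parcels TO viewers;",
]

with psycopg2.connect(dbname="enterprise_gis", user="postgres") as conn:
    with conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)
```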
So what our students did was they built a smartphone application that included geospatial components to it. So there's a number of different rides that they can click on. And here again we're using leaflet in the back end with GeoJSONs. The geography students made the maps. The marketing students or the business school students, they went out to the community to find the different vendors who would participate. We have vendor locations along there. And then the computer science students wrote it both for the iPhone and then also for the Android. And they've got little buttons here so in case people got in trouble, you can click here and it would automatically call the wagon to come pick you up. And if you go to GitHub, all the code is on there. This is taking a while to pop up. So that's on there on GitHub so you can pull that software down. Let me just put it out of there. There we go. Yeah, so it's all there. So this is a great way to blend all the different students together. All right, the next thing we're going to do is research. So this is not teaching, but again we're trying. And we do a lot of ESRI-based research as well, but we're trying to again fold more open source work in. This project I'm working on with the National Science Foundation is part of our research experience for undergraduates. We're performing parallel processing using graphical processors, GPU technology. So using the processors on the graphics card to perform mathematical calculations, just in the interest of the time, we'll cut to the chase. We were working with what's called embarrassingly parallel algorithms to work with terrain modeling. So slow, fast back, terrain ruggedness. And learning how to take the pixels, send them to the GPU or the graphics processing system, and then have a thousand computations happening in one time stamp. What came out, this is very interesting, in blue was when we didn't do it with parallel processing. And the red, I don't know, blue is when we did it with parallel processing, the red is when we didn't. Look at the processing time here. The card we had had about 900 GPUs on it, so 900 simultaneous processing. We drove this thing down to zero essentially. You may as well just get rid of it. Even if I double that, it's nothing. So the GPU made it fast, but if you notice, we're still dealing with some IO bottlenecks. So this was fun too. So the students had to learn how do we think about other ways to read the data in in order to get rid of some of these bottlenecks. So we worked on that. One thing we discovered, especially for terrain functions, terrain functions have lots of data, these big raster files, but few computations per data element, right? So something like slope has nine computations in a 3x3 kernel. If we go to a 5x5 or 7x7 or a 9x9, now we have many, many computations per data element. And that's when we saw 60 time improvements. Here we saw about two and a half times, because they just weren't a lot of calculations. So for the graphical processing unit, the way this really works well is if you have massive numbers of calculations per data element versus lots of data elements with few calculations. So the next thing we did in the next year was we looked at multiple CPUs with parallel processing at the National Science Foundation. Here we use something called Hadoop. Everybody familiar with Hadoop? Okay. The idea is to break up a dataset and throw it onto multiple servers. 
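Before the Hadoop example, a rough sketch of the 3x3 slope kernel mentioned above, using Horn's method in NumPy. Every output pixel depends only on its own 3x3 neighbourhood, which is what makes the problem embarrassingly parallel and a good fit for a GPU port; this CPU version only illustrates the per-pixel arithmetic and is not the CUDA code used in the project.

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cellsize: float) -> np.ndarray:
    """Horn's 3x3 slope kernel: each output cell uses only its neighbours,
    so the work can be split across many GPU threads. This NumPy version is
    a CPU sketch of that arithmetic, not the project's actual CUDA code."""
    a = dem[:-2, :-2]; b = dem[:-2, 1:-1]; c = dem[:-2, 2:]
    d = dem[1:-1, :-2];                     f = dem[1:-1, 2:]
    g = dem[2:,  :-2]; h = dem[2:,  1:-1]; i = dem[2:,  2:]
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cellsize)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cellsize)
    return np.degrees(np.arctan(np.sqrt(dzdx ** 2 + dzdy ** 2)))

dem = np.random.rand(512, 512) * 100.0        # stand-in for a real DEM
print(slope_degrees(dem, cellsize=30.0).shape)  # (510, 510): edges trimmed
```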
So let's say we have a big map, one of the maps we used, and we could break it into four different areas. And one of these masters, one of the masters computer, drives slave computers. So each of these slaves performed their process and then take all the data back again. So this is sort of what happened when we increased the number of nodes. So we did a point in polygon analysis with this. This is the science that is working. So we took a process that literally went from days. We were able to bring it down to hours through some data reconfiguration. We got it to minutes when we built our own cluster in our lab of about four or five computers. But we eventually went to the Amazon EC2 cloud with our own code, just popped it on there where we could use ten different computers, but they have much faster memory, they have faster connectivity. So we got it down to seconds. Actually it wasn't seconds, it was like 190 seconds. I promised the students, I said, if you can get it under 200 seconds, I'll send you did it in seconds. If it's above 200, it's more like minutes. But there was something that we estimated four days by using Hadoop. We were able to get it down earlier. So again, more parallel processing that we don't necessarily see in a conference like this. Here's what I did, hold on a minute, the United States Department of Agriculture. This is an analysis for food change. So we did this with Cornell University and Tufts University. And basically what this is looking at is potential for food consumption and how we can optimize food production in this case in New York State. Let's go to Syracuse. So this is an example of, wait for it to pop up, it's taking a little while. Yeah, that's not where, I was working before, let's try, I'll go past this one. This one was developed in Newfoot on the front end and using GeoJSON in the back. So we won't wait for it to come up yet. This is a nice evolution for us too because we went from manifold GIS to flex and flash. Then the iPad came out so we moved things to open layer, then we would leaflet and GeoJSON. Okay, the next part is outreach. So this is not pure research. This is more ways in which we serve our community. So we do this through things like the digitized manhole cover. So this is the staff that we have in our lab. We have about 14 employees with 10 or so students that come through every year. And there's a lot of different things, we do digitize and we do data collection. But the thing that I'm going to focus on is what we call GeoDashboards. In this case we bring our students in to work alongside our computer programmers that work in the lab. And again, this was something we did originally with flash and had to migrate that as the iPad came out. So now our entire implementation is using Postgres on the back end with Node as a web server and then as the front end. So our computer service interns, these are the students, we work along our professional staff to create some of these. So just show you an example of one here, if it pops up. So this is what we did for the state of Maryland called the Workforce Development. So this is called the Dashboard and this is looking at, this one is looking at work and workforce and unemployment issues. And we can look at it based on the counting. And we can look at individual counties. Or here I'll go to the county that I live in. We click on that one and then all of these things change. 
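Returning to the point-in-polygon job mentioned above: each Hadoop mapper only needs the polygon boundaries plus its own chunk of the points, which is why the dataset can be split across many nodes. Below is a single-process sketch of that per-record test with Shapely; the polygon, the input records and the mapper/reducer names are made up, and this is not the actual SpatialHadoop code.

```python
from shapely.geometry import Point, shape

# A toy square "county" standing in for the real polygon layer.
county = shape({
    "type": "Polygon",
    "coordinates": [[(-76.0, 38.0), (-75.0, 38.0), (-75.0, 39.0),
                     (-76.0, 39.0), (-76.0, 38.0)]],
})

def map_record(line: str):
    """Mapper: emit (polygon_id, 1) for every point that falls inside."""
    lon, lat = (float(v) for v in line.split(","))
    if county.contains(Point(lon, lat)):
        yield ("county_1", 1)

records = ["-75.5,38.5", "-74.9,38.5"]           # stand-in for one input split
counts = {}
for rec in records:
    for key, one in map_record(rec):
        counts[key] = counts.get(key, 0) + one   # what the reducer would do
print(counts)   # {'county_1': 1}
```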
This tells us something about the number of people seeking a job in my county versus the number of jobs that are available in my county. We can also look at the fashion on the different counties. You can see where we're at with terms of workers and jobs. Go over in this location. We have other things that we'll look at, things like educational requirements. So this helps the state to figure out, well, here's a county that has lots of jobs that require computer scientists. But when you look at the top programs by graduate in the area, do we have computer scientists coming out? So there's a gap there. So this is used for that. What we did for a local government. I guess that one's turned off. I'm not going to bother with it. We do things for emergency response as well. We just show you one other one if I can get to the browser real quick. I think I got a couple minutes. So this is the lab. We'll bring up some of the things we're doing here. Look at base. So this is also used by the state to monitor pollution in the Chesapeake Bay. So we can look at nitrogen or phosphorus and what are the causes? I want to see the causes related to farms. We'll wait for that to come up and things are a little bit slow on this network. So I'm going to not bother with this. But again, this is again a leaflet front end with a post-GIS background. So the output for this is just kind of as a wrap up. We've been, as we've integrated classes and research into the program, we've had our students actually present open source training classes here at the Maryland GIS conference. The students actually came out and presented a workshop to a packed room of over 60 people who wanted to learn about QGIS, which is really nice. Of course, the thing that we're happy about, some of you know Bill Dowlands, who's an open source blogger. And this thing pops up here. But what he said about it is the best thing I saw at Towson GIS was our work. So I'm not sure why this is a very slow connection here today. But yeah, so here again, the best thing I saw to just show you was 2013, Bill Dowlands had indicated that. So what we're going to show that is it showed that our students were actually doing some really good work that was appreciated by the professional community. We did the same thing at the Delaware GIS conference. We had an enterprise GIS training class. So we showed them how to do that same enterprise class we did with them. And they just gave us a little view of the results of that. And we developed a bunch of training classes that people can come to for boot camp training in open source, whether it's a plan in GIS, Python, or enterprise. So just a conclusion is we're in a work in process as we begin to bring more open source into our teaching, into our research, and into our outreach. It takes more work. It's certainly a whole lot easier to just stick with what we're doing. But again, I think we don't give the students a well-rounded education at this point if we leave out what's happening in the open source community because I think it has a role, like we said yesterday. It also avoids stagnation for the faculty. I get really excited about learning new things. And that's what I get paid to do, is to just learn stuff. So learning GIS over, learning post-GIS has been really great for us to not feel stagnant. It also exposes students to other ways to accomplish GIS tasks, which I think gives them a greater breadth. And we've seen that in their job interviews. 
They're going to these one-on-one students just recently went to an interview, and he told the police, he was telling them everything they were doing with enterprise and post-GIS, and the person interviewing said, stop, stop. I have no idea what you're talking about. And this is a person hiring for a GIS job, but he worked with a particular software system that didn't know anything that occurred around it. So our 21-22-year-old students are getting exposed to these things, which is really good. And it also allows us to discover some unique and oftentimes better options for solving problems. And that goes back to some of the things that we've done with base statin and some of our other programs. Here we go. So you can see the, if we look at wastewater treatment plants and the impact that they have. Let's see, nitrogen. And nitrogen with wastewater treatment plants. So these are tools we developed all with open source products. So again, we like this a lot better. The use of, we can do it based on tribes, based on different counties. You can click on account and see its contribution to the Bay. So we like this because, again, it gives us a lot more flexibility. So the use of open sources has helped us really discover new tools that we hadn't known existed before. On the last slide, sort of a dig on, you know, you can teach a GIS course to ignore phosphor G. That's all you have. We call that resourcefulness. I only have one class to teach. But you can longer issue academic degrees in higher education with a concentration in GIS and ignore phosphor G. That's called fraud. I mean, that's just, that's, or malpractice is maybe a better word. I think, thank you open sources here. And I think it's got to be addressed and dealt with by the academic community. And that's what we're trying to do. Thank you. Thank you for that inspiring talk. Do you have any questions? Yeah. I was going to ask you what your students' reactions were to free and open source software. Free and open source software being added to the curriculum. But towards the end there, you showed them working, you know, in trainings. Yeah, this is, I mean, I didn't think about it way back for some of them. You're 21, 22 years old, and you're giving a talk on QGIS at a GIS conference to people who have no idea what this is, because they've been thinking about one way of doing GIS. And then at the end, there's a line of people coming up to ask you questions. I mean, you can put your GIS up as a 21-year-old. So that's part of it. Going to these interviews and being and exposing them to these things is great. And I think the last thing is when we, I kept showing you that one example we keep using about the factory blowing up. That's a perfect question I get to ask them at the end of their education site. Tell them we can think about all this. And now they can articulate the pluses and minuses of ESRI software, open source software. And I think that gets them really excited that they're learning something new and cutting edge. Any other questions? I cannot ask, but to ask, I didn't forget you on the GEO for all network. No, I just found out about it yesterday. So I can root that down. Yeah, no, no, no, really exciting, because I think that was my last slide. We can't ignore this anymore. I was kind of doing this on my own, but to see that there's this network in, what, 42 universities? 100 and 200. 
So we're still getting in on the ground floor of this, so any academics who want to jump in on it, you're still one of the first 103 people to get involved. And I think that's the early adopters. And I think we then become the people who really shape the way academia views this. Yeah, that is really inspiring. If there are no other questions, let's conclude this academic track session. Thank you.
|
The Department of Geography at Salisbury University has a long tradition of teaching geographic information science. Until recently, most of the courses and research activities have focused on commercial software offerings. However, the Department has recently integrated Free and Open Source Software for GIS (FOSSG) into its curriculum, research, and outreach. Curriculum changes included introducing students to FOSSG in traditional GIS courses using QGIS, and allowed the creation of two entirely new courses in Enterprise GIS and GIS Programming using PostGIS, GDAL, and SpatiaLite. Through a competitive National Science Foundation (NSF) Research Experience for Undergraduates (REU) grant, students participated in cutting-edge research projects in parallel processing, using Hadoop and SpatialHadoop for cluster computing, and CUDA for GPGPU calculation of embarrassingly parallel processes on raster data. Finally, undergraduate interns working in the Department's Eastern Shore Regional GIS Cooperative (ESRGC) developed geodashboards using Node.js, PostGIS, and Leaflet, while a special topics course developed a GIS-based iPhone and Android application, built with GeoJSON, Leaflet, and JavaScript, that was used by 4,000 participants in the annual Sea Gull Century bike ride. In addition to highlighting the successes of these activities, this paper will discuss the process we used to make the necessary changes in our curriculum, secure the necessary funding for external projects, and the training approach we used to get our computer science students proficient in programming with FOSSG tools.
|
10.5446/32014 (DOI)
|
I don't think I've had such a welcome to a presentation ever in my life. Apologies to all the PhD students at the back who have just been forced to attend this session. So I guess we can have more of a conversation. So if you have any questions as I'm talking or if I'm talking too quickly, because I'm a native English speaker and people often complain that I speak too quickly, then tell me and I will speak slower. I want to talk to you about a project or more of an idea, but then I'm going to present a project of proof of case study around the use of open source GIS to pull together different information sources in a disaster risk management context. And so I should also mention that there's a few of us that work on this project. I work at the University of Wollongong in Australia, and I'm from the smart infrastructure facility there. And so hats off to Etienne Rohan and Matthew who co-lead this project with me. And so we're working on a project which I actually presented the very beginnings of at Phosphor G last year in Portland. And so it's really nice to be able to be here again and now kind of show some results of the fruit of our labour and show what we've done. And so bit of context, so I live here in Australia, and our focus is on Southeast Asia. And our focus is on Southeast Asia is because that's where everybody lives, or at least half the people. So half the world live in this circle by population, and there are 20 mega cities within that circle. And of those mega cities, 14 of them sit on river deltas. And so you can kind of guess where I'm going to go. We know I'm talking about disaster risk management, we'll keep going. 18 of them have experienced flooding in the past decade. And so the IPCC recognises that flooding will be one of the most significant impacts to mega cities in the future as a result of climate change, because as sea levels rise and precipitation patterns change, our ability to deal with all of that water inside a mega city is decreasing, and so it's at least challenging us in new ways. And so there's a large group of people that live there, and we're focusing this project, the Peder Jakarta, the Map Jakarta project, on the city of Jakarta in Indonesia. So this is Indonesia, an archipelago of 16,000 islands, and then this is the city of Jakarta. So in this mega city, the capital of Indonesia resides the population of Australia in the wider urban conurbation. And in the city itself, this light grey area in the middle, there are 14 million people. And this city is served by 13 rivers that flow from the south to the north to the sea. And so that causes quite a problem. And so every year during the monsoon season, the rains come and the city floods. And so the city, this is the CBD, this is the Bank of Indonesia, or the Gran Indonesia Hotel, and this is all of the financial district in the city centre. All of these key economic institutions are being flooded because of the city's inability to manage the movement of water through the city. And it's worth noting that last year, the city experienced 50% more rainfall than the previous year. So a 50% increase in water. And so what's quite nice about this photo is that this fountain, which is part of the independence, one of the independence monuments in Indonesia, people have sat with their backs to the fountain looking out on the flood. It's a very Indonesian outlook on life to be sandwiched between two pieces of water to kind of observe the chaos going on in the city. And so this becomes an annual fact of life. 
During the monsoon season, people here evacuating as their houses are being flooded. And it's important to realise that this disaster happens every year. So we're not on an earthquake, which you can predict to some extent, but may happen at any time. Just this morning there was an earthquake in Chile, which has caused tsunamis in the Pacific. But this type of disaster happens every year. So you know that it's coming and we know when it's going to occur. And it happens because of the features that are shown on this map. And so here's where all the people live. There's 14 million people in Jakarta City proper. So there's 13 key rivers that flow through the city from the mountains in the south. And this is the ocean. Now already something like 40% of the city is at risk from flooding because it's below sea level. And the city, this dark blue bit, are the areas that are sinking the fastest. And those areas are sinking, thanks. Those areas are sinking up to, doesn't work, quarter of a metre per year. Well, it's half a metre, quarter of a metre per year, every year. And the city is sinking. And it's sinking because, I'll go back to using this, it's sinking because not one drop of the water that comes from the rivers is used for drinking. So every house and every building in the city of Jakarta has an aquifer, like it has a pump down to the aquifer and is abstracting that groundwater for drinking purposes. And so the city is slowly sinking into the sea. So the sea wall runs along here and the sea wall is as tall as I am. So you go to the edge and the sea wall, which is about this wide, is here. The school is where this guy is sat and the fishing fleet is above you because the ocean is now above the city. And so we have this phenomenal density of urban infrastructure in a mega-city environment with this very serious condition of flooding, which happens every year. And a condition which is increasing in severity year upon year because of climate change and because of urban densification. So this is a really interesting image because we see one of the main canals, one of the main rivers in Jakarta. And so this is a lock gate which controls the flow of water through the city. This is the original Dutch lock gate built in the 1800s by the Dutch settlers of the city of Patavia as it was then. So this used to be the edge of the city. It's now the middle of the city. And you see the main east-west railway line, so this is the main train station here, which abuts the lock gate. And then you see the main north-south road, which goes below the level of the water. So this infrastructure density is also another critical factor in understanding why Jakarta is so severely prone to problems when we have flooding. And so this image really typifies the whole essence of the project that we're working on, where we have the flood condition and then we have an informal economy which has sprung up around that. So these guys, garbage collectors, so they have a Garibak cart. Normally they would be collecting the garbage from everyone's house. But in this case, this guy is paying them to give them a lift across the intersection, across the flooded road. So it's important to realise that in a city of 14 million people, you can't evacuate everybody. You have to just move everyone around to the driest bits possible at that time. But what's really interesting about this image is what this guy is doing here, is tweeting it out. 
Saying, hey, if you're coming to work on this highway, the guys are here, just give them a couple of dollars and they'll take you across the flooded intersection. And so with this, we had a thought. Can we use that Twitter data, the social media activity, as an indicator of where flooding is happening in real time? So these are all the tweets, and there are around 4 million of them, that occurred with the word flooding, or banjir in Indonesian, during the 2000, 2014, 2013, 2014, one soon season. And what's amazing about this image is the density of coverage. So what's important to realise is that more than half the population of Jakarta have two mobile phones. So people don't really have desktop computers, but everyone has a phone and a tablet, or two phones, or even three phones. And so everyone is using social media, perhaps like in the West we originally started using SMS. And so there's a vast quantity of social media conversations going on in response to the real time events taking place in the city, in this case, flooding. And what's really fascinating is that you can almost pull out the transportation networks of people on their way, like that guy, and then stuck in the traffic jam because the road is flooded ahead. And so sending the tweet saying, alright, I'm stuck again. The flood has come. And so one last image. So you see here, this is a really nice image of one of the canals down near the coast, and people actually graffiti with their Twitter handles. So it's important to realise that Twitter and social media as a whole is really a part of daily life in Jakarta. But I think what's interesting to note is that what's going on in the city is that people are having conversations about flooding, but those conversations are really relevant to the situation at that time. And so in previous projects that we've seen, some of the work done by, for example, Usahidi and the humanitarian open street map team and Patrick Myers group is where they've maybe passively taken social media activity, scraped, if you like, to try and infer what's going on, to try and say, well, if we know people are talking about flooding, we can just take that information and we can make a map. Now our approach is slightly different because if you just take people's information, if you just take their Twitter handles and their Twitter conversations, we don't know if you're talking about flooding now or last year or somewhere else in the city. So what we did is we convinced Twitter to allow us to send out a message to everyone in the city when they said the word flood. So if you guys in the front are having a conversation about flooding in Jakarta or with this keyword flood, then you would get a really nice message from us saying, hey, are you talking about flooding? Are you being flooded now? Can you send us a selfie of the flood, of you in the flood? And so we created a very short video which was sent out to 2 million people in Indonesia, in Jakarta this year, to ask them to tell us about the flooding in real time and that we would put that information on the map. So I'll show you this video, see if this works. There is a new tool in Jakarta bringing together mobile mapping and local flood information. This community flood map is available anywhere, alerting you to water impasses in real time to help you navigate the city. Wow, now that the citizens of Jakarta have the best information on flooding conditions. You are already tweeting each other, helping friends and family avoid hazards around the city. 
Peta Jakarta uses this on the ground information to give you a comprehensive map of the flood conditions. When you see a flood, tweet Benjir at Peta Jakarta and your report will appear on the map, alerting the community to the flood. Remember to turn on your phone's geolocation so we can pinpoint the report. The more people use Peta Jakarta, the better the map will be. Working together, we can help everyone bypass flooded areas, saving time and avoiding danger. Visit Peta Jakarta.org to get started. So this is the point that I got to at the end of last year's FOSFORG presentation in the US. We have this idea, Twitter is on board, we're going to send out these videos so now I can tell you what happened. I can tell you the story of flooding in Jakarta during this year's monsoon season from December to March. We had these wonderful messages. This is a timeline of Twitter messages through our automated process of asking these people to confirm. This first message is just someone you can see here saying, there's something happening, he's retweeting the news company, TV1 News in Jakarta, talking about the flooding. He gets a message from us in an automated manner that says, are you flooded? If so, activate your geolocation and send your report to us and then check the map at Peta Jakarta.org. He says, yeah, it's flooding, it's 50 to 60 centimetres, here's a photo of what's going on in my street. Great, this is where this is my postcode effectively and I'm in North Jakarta. Then we send him a message back saying, thanks, Nikki, check it out, your report's now on the map. The map will be used or the map is publicly available so anyone can see what's going on. Then we see an activity graph that happens like this of five key flooding events occurring during the monsoon season when 6000 people were evacuated. We see these spikes of Twitter impressions as we send out these messages, automatically saying, please tell us, please confirm and tell us what the situation is on the ground. What do we do with those reports when we get them from the users? Well, we do two things but with one map. It's a GIS conference, right? There's got to be some maps in there somewhere. The first thing is when you visit Peta Jakarta in the city on your mobile device, then just like Google or any other proprietary maps, you get the blue dot, shows you where you are and then you see all the reports that are around you. So if you're going to work or if you're taking the train to school, you can see what's going on and say, oh, I should go this way or there's flooding coming or I should just check in with my neighbors. But then if you load the same map on a desktop device, you see an aggregate overview of activity as an indicator of potential flooding across the whole city. And so this design was conceived in response to the government of Jakarta's need for real-time information. So prior to this system, it took them six hours to compile all of the 911 calls and all of the formal information about flooding that they had to produce a map of where the flooding was in the city to then action in response to send the boats to create an evacuation shelter to send aid to the different villages within the city, different neighborhoods. And so a brief word on the software, which is called Cognicity, that allows us to collect these tweets and to put them in a database and then put them on a map. And it's free and open source and I'm not going to go into this too much because it's really boring to see schemas on a slide. 
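Without going into the actual CogniCity schema, here is a minimal sketch of the request-and-confirm loop just described: a passive keyword match triggers an invitation, and only a geolocated reply from an invited user becomes a confirmed report on the map. The function names and the in-memory list standing in for the database are assumptions made for illustration, not the real code.

```python
import re

BANJIR = re.compile(r"\bbanjir\b|\bflood\b", re.IGNORECASE)
confirmed_reports = []   # stand-in for the PostGIS table behind the map

def send_invitation(user: str) -> None:
    # In the real system this goes out via the Twitter API; here we just log it.
    print(f"@{user} Are you flooded? Turn on geolocation and reply "
          f"with a photo, then check petajakarta.org")

def handle_tweet(user: str, text: str, coords=None, invited: bool = False) -> None:
    """Passive keyword match triggers an invitation; a geolocated reply from
    an invited user becomes a confirmed report on the public map."""
    if not BANJIR.search(text):
        return
    if invited and coords is not None:
        confirmed_reports.append({"user": user, "text": text, "coords": coords})
        print(f"@{user} Thanks, your report is now on the map.")
    else:
        send_invitation(user)

# A conversation about flooding triggers the invitation...
handle_tweet("nikki", "Banjir lagi di jalan ini!")
# ...and the geolocated confirmation is put on the map.
handle_tweet("nikki", "banjir 50-60cm", coords=(106.84, -6.19), invited=True)
print(confirmed_reports)
```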
But so, into some reports, put in a database, serve it out, put it on a map, set it on your phone, use it for disaster response. And so here's a screenshot of the desktop map during flooding in January this year. And so it got those guys were getting pretty wet in northeast Jakarta because it was flooding quite a lot. And so you can see all the rivers and you can also see all of the pumps and the floodgates that control the flow of water through the city. Now if you remember I said that flooding was compounded, if impact of flooding was compounded because of the infrastructure density in the mega city. And so what happened this year is because electrocution is the biggest cause of death during the flooding of people going to the water where the electricity is on and so then electrocuted, the power company often turns electricity off to neighborhoods to say we're going to turn the power off so that we know it's safe. Unfortunately, they turned the power off to some of the pumps and so the water couldn't be pumped over the seawall and so it just starts to fill up because it's a big ball. And so by trying to turn the power off to a neighborhood, they turned the power off to the pumps causing cascading failure and an exponential increase in the amount of water which then proceeded to back up into the CBD. It's worth also noting that many of the other government systems that are available for the government to see the emergency management agency to see where flooding is happening in real time were offline by this point. I guess because some of the servers got quite wet. Luckily we were still operational and so we kind of inadvertently became the first line of hazard information for the government to see where flooding was happening. So here's an example of if you take that map and then drill down because you want to see a specific neighborhood or a specific message, I can see one of these tweets that's been put on the map and there's a link and I apologize that you can't read it but it's saying, someone's saying, yep, here's a report flooding in my neighborhood and here's a photo that I've taken, two photos actually to show that flood. Here's the same map on the mobile device so this is where we are at this point in time and then I can see the tweets around me and I can also see the flood infrastructure and the pumps to give me some geographical context to the flood information that I'm seeing from other citizens. And so here's a time series of maps for those same five key flood events that I showed on the original bar graph so we can see quite high levels of activity in February actually the flood, the monsoon came quite late this year, the monsoon pattern is changing. And so flood three and four and five were quite severe and required some significant evacuations. And so over the whole monsoon period which is about 60 days, we had a thousand of these confirmed flood reports so a thousand citizens saying yes, I would like to say that it's flooding here. There were about 70,000 users on the website and there were over 100,000 flood conversations going on in the city. Now this is really important to note based on what we're trying to achieve here is that there were 100,000 tweets with flood or banjir in. So originally people scraping that data just taking it and then passively looking and it might say there's 100,000 flood events or there's 100,000 occurrences of people being flooded. 
But what we've shown is that that's actually not true, because a large proportion of these are people talking — are you okay? — and the news and the media: they're not flooded where they are, but they're reporting about the flooding and sending those messages out. What we've developed is a real-time filtering process, so that 100,000 conversations translate into a thousand confirmed reports of the flood hazard on the ground. And I think this is really important, because we see a lot of people approaching this kind of challenge under the guise of big data, saying: we've got a big data set, we've got 100,000 tweets, we need to try to understand them, so let's teach a machine to think like a human to understand what the tweet said — machine learning. What we did is just ask the people who are already really clever: can you just tell us if it's flooding? And we're not developing a new app, not developing a Twitter for emergencies, but just using the existing Twitter communication network that people are already using — there are already 100,000 conversations going on anyway without us — and turning it into a crowdsourcing process: please confirm the situation on the ground. The information was used live by the emergency management agency in their control room to make decisions. Just a couple of examples of people sending us some great tweets. I think this is a really nice example flowline of tweets to summarize the civic co-management part of my talk, which I haven't really touched on yet. One of the things we're trying to say is that if the citizens can report to the government, and the government can report back to the citizens, in real time about a disaster, that's really a process of civic co-management through geosocial intelligence. So we're taking a social media network that already exists, and we're putting two players together who probably don't trust or necessarily want to talk to each other — the government and its citizens — but we've found them a way, through Twitter, to have a conversation in real time about what's going on. Here the governor of Jakarta is commenting about the flooding and saying: if there's a flood, please report it to us via Twitter using the Peta Jakarta system, so that the emergency management agency can see what's going on, so that we can respond, so that we can send a boat, so that we can set up an aid shelter. And I just love this response of this person saying: yeah, there is some flooding happening, maybe we could do with some aid here — and this guy who's ingeniously found some sort of magical plug for his bathtub and is just paddling down the street. Amazing. Now, in the last few minutes of my presentation, I want to talk about where we go in the future and the work that we're trying to do this year. So that's great: we've got all of these people who are tweeting, and this is a system which we can easily access as an API, it's geospatial data, and we can ask people to turn on their phone and send us a selfie in the flood, like this guy. But what do we do about all of the other sources of information that exist for the government in a disaster risk management context — for preparing for disasters, responding to them in real time, and then the management that obviously comes after a disaster has occurred?
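The reduction the speaker describes — 100,000 keyword matches filtered down to roughly a thousand confirmed reports — comes down to keeping only geotagged replies that answer the confirmation request. A hedged sketch of that step; the report shape is invented, and only the tweet field names follow Twitter's classic JSON payload.

// Sketch: reduce raw keyword-matched tweets to confirmed flood reports.
// A "confirmed report" here = a geotagged reply to the bot's question.
function toConfirmedReports(tweets, botUserId) {
  return tweets
    .filter(t => t.coordinates)                            // must carry a geolocation
    .filter(t => t.in_reply_to_user_id_str === botUserId)  // must answer our confirmation request
    .map(t => ({
      reportId: t.id_str,
      text: t.text,
      createdAt: new Date(t.created_at),
      // GeoJSON-style point, [lon, lat], ready for insertion into PostGIS
      location: { type: 'Point', coordinates: t.coordinates.coordinates },
    }));
}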
I really want to throw this down as a bit of a challenge to the free and open source geospatial community, because I think that free and open source GIS offers us an ecosystem within which to bring all of these things together. We've seen some great presentations — Alicia from Mapzen touched on this in her keynote this morning — talking about how we are an ecosystem, how we're all a sum of a number of parts, and how we can work together now, with these very mature and stable open technologies, to say: we can take that data, whatever format it is in, and put it into some sort of system so that we can understand it and make it actionable. Here's an example of a paper-based report of an area that's been flooded, which is still being collected during the monsoon by volunteers on the ground. There's some geospatial information here — there's an address, a postcode, even a phone number — but these are still being handed in to the emergency management agency on paper. So what does the system look like that has both these amazing tweets coming in and all of these other ancillary sources of information that exist? I just want to touch on this as a good example. Some of the communities that are worst affected by the flooding in Jakarta are the urban poor, and that's because all of the high bits of land are where the rich people live. So when you move to Jakarta — you're an economic migrant, you've moved from elsewhere in Indonesia, you come to the city to look for work — the available land for you to build or set up a home is typically nearest the river. And so it's the urban poor who are most frequently affected by flooding, because they live nearest the waterways. Well, actually, many of these communities are incredibly self-resilient to a lot of these processes already, and we see these communities — independent of the government, or of any agent or actor outside the community — developing processes of community resilience. This is the main street of this community, one that you can walk down. What they do is put this rope line in, because the floodwater comes really fast, it's dark, and the power has been turned off. But you know that if you need to get out, you can get on the rope line and pull yourself to safety, and everyone in the community agrees that if you go that way, it leads to high ground and you're going to be safe. Now, the government response to flooding is this — which is great, it's fantastic, it's a boat, it's what you need. But unfortunately there's a really strong material conflict between the propeller on the boat and the rope line. So when the government goes to rescue the people in the community that are most affected by the flooding, the two systems clash with each other, and they override the existing processes of community resilience, because they cut the rope line. So what can we do? Well, can we make a map of where the rope lines are before the flood, using free and open source GIS, to give to the government in advance, so they can see it in real time and ring the boat driver and say: don't go down that street, because there's a rope line there — you need to go around the back? And so that's what we did. We started to do this mapping process, but as well as physically mapping where the rope lines are, we also asked people about how they were impacted by the flooding last year.
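Mapping a rope line ahead of the flood really just means capturing it as a line feature with the community-agreed attributes attached, so it can sit in the same database as the tweets. A minimal sketch of one such feature as GeoJSON — the coordinates and property names are purely illustrative.

// Sketch: a community rope line captured as a GeoJSON Feature, so it can be
// loaded into PostGIS and shown to boat operators before the flood arrives.
// Coordinates and property names are made up for illustration.
const ropeLine = {
  type: 'Feature',
  geometry: {
    type: 'LineString',
    coordinates: [
      [106.8456, -6.2088], // [lon, lat] along the community's main street
      [106.8461, -6.2079],
      [106.8467, -6.2071], // ends at the agreed high ground
    ],
  },
  properties: {
    kind: 'rope_line',
    community: 'example kampung on the Ciliwung',
    leads_to: 'high ground near the bridge',
    note: 'Do not take propeller boats down this street during a flood',
  },
};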
And so here's a map of that. This is one community on the Ciliwung River, and you can see how they're really affected by flooding, because they're right in the middle of these two big meanders, so the water comes every year. This red line here is the rope line, and these are the main streets, so you know that if you go this way you can pull yourself to safety, or indeed up here, to this corner with a bridge over the river. We also asked those people — how are we doing for time? It's very difficult to tell from up here. 47 minutes? Okay, we've been very quick. We also asked these people how they were affected by flooding last year — and I can talk to people about this afterwards — things like: how high was the flood? When did the flood come? How were you affected? And we're also trying to understand the existing processes of mapping that are already within the community. Many communities are already doing their mapping like this, on paper, but it's quite a challenge to then georectify a paper map in a GIS, to say: this is the representation of space that you all work with, and here it is in real geographical space. And so here's that map georectified — it's a very crude, very quick drawing that we did in the field, but it allows us to see which homes were most affected. The darkest blue says: this is where the water was highest last year, so we can start planning processes for DRM and say: these are the areas that we need to prepare the most. So I'm at the end — last two slides. Our proposed system uses open source GIS: we have a PostGIS database, and we're not building new technologies per se, we're using what's there — we use JavaScript, we use Node.js, we use GeoJSON — to pull all of these things together and harness them in one place, so that the government can see all of that information, but so can the citizens, to make more informed decisions during the flood. We've got the different data sources coming in on the left-hand side, we're pushing those into the map, the government can make a decision, and then they can push out a new map to say: the boat's on the way, or this is where the aid shelter is. This is a prototype of a system that we're working on this year, ready for the monsoon season in December. And the summary of the whole presentation is that the free and open source GIS that's going on down here is the tool, the technique and the ecosystem that enables citizens and the government to work together and build a process of civic co-management for disaster risk management. Thank you. Any questions or comments? Just one person — go, shoot. How real-time is the data that you're getting and then showing back to the community? The tweet data of people saying it's flooded on the ground? Well, not just the tweets, but the time-series maps you're showing with confirmations of floods. Yeah — the individual tweets are refreshed every 60 seconds, and the aggregates are available at one-hour, six-hour or twelve-hour intervals. So you can see the aggregate of the last hour, basically, and then you can drill down and see what's happened over the last hour, but they're refreshed every 60 seconds. Any other questions? Cool.
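Tying the Q&A answer back to the proposed stack (PostGIS, Node.js, GeoJSON), the serving side can be a small HTTP endpoint that returns confirmed reports for a requested time window. A sketch using Express and node-postgres; the route, table and column names are assumptions, not the actual PetaJakarta API.

// Sketch: serve confirmed reports from the last N hours as GeoJSON.
// Route, table and column names are hypothetical.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/reports', async (req, res) => {
  const hours = Math.min(parseInt(req.query.hours, 10) || 1, 12); // window in hours, capped at 12
  const sql = `
    SELECT id, text, created_at, ST_AsGeoJSON(geom)::json AS geometry
    FROM   reports
    WHERE  confirmed = true
      AND  created_at > NOW() - make_interval(hours => $1)`;
  const { rows } = await pool.query(sql, [hours]);
  res.json({
    type: 'FeatureCollection',
    features: rows.map(r => ({
      type: 'Feature',
      geometry: r.geometry,
      properties: { id: r.id, text: r.text, created_at: r.created_at },
    })),
  });
});

app.listen(3000);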
|
The use of mobile devices for identifying risk and coordinating disaster response is well accepted and has been proven as a critical element in disaster risk management [1,2]. As new tools, applications, and software are adopted by municipal governments and NGOs for the identification and management of urban risk, the need for greater integration of the various data they aid in collecting becomes acute. While the challenge of integrated data management is substantial, it is aided by the fact that many new tools have been developed to include an Application Programming Interface (API), which allows the machine-to-machine (i.e. automated) sharing of open data [3]. While some proprietary platforms for the management of urban data are currently available, they are extremely costly and very limited in terms of data inputs; to date there are no open source geospatial software tools for the integrated management of various API sources. A key to improving disaster risk management as an element of risk identification is the development of an integrated open source Decision-Support Risk Matrix that enables: 1) automated integration of multiple geospatial and non-geospatial API sources into a low-cost, user-oriented dashboard; 2) backend database and software design for the Risk Matrix that enables data sources to be parameterized and interrogated; 3) the development of an output API stream that allows additional secondary applications to optimize their evaluations and analyses through open access to critical risk information. Jakarta and its surrounding conurbation (Jabodetabek) have the highest rate of urbanization in the world and comprise the second-largest contiguous settlement on earth. With a greater metropolitan area hosting 13 rivers, 1,100 kilometers of canals, and over 28 million residents, Jakarta is a key case study for the development of improved risk management through new tools and open software [4]. Risk information and coordination through open data protocols is critical to support decision-making about disaster response, emergency planning, and community resilience. Furthermore, rich suites of open and accessible geospatial risk data generate activity in NGOs and the private sector, especially for longer-term planning tools and economic calculators. The development of application-driven data collection via mobile devices allows for unprecedented data collection capacities, but to be effective, these technologies require coordination through open source software. CogniCity is a GeoSocial Intelligence Framework developed by the SMART Infrastructure Facility, University of Wollongong, and the emergency management agency of Jakarta (BPBD DKI). CogniCity is a geographical information system that allows collection and visualization of geospatial data on flood alerts (via Twitter) and the use of spatio-topological network models of hydraulic networks. Through its implementation PetaJakarta.org (Map Jakarta), CogniCity has been proven in an operational manner to improve government response to flooding in Jakarta [4]. This paper presents the next version of CogniCity to support an Application Programming Interface (API)-enabled Decision-Support Matrix. The result is an open source platform capable of transforming real-time data about flooding in the city of Jakarta into open, accessible and actionable information for use by government agencies, NGOs and the public.
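The "automated integration of multiple API sources" called for here amounts to normalizing each feed into a common feature format before it reaches the dashboard. A hedged sketch of that normalization step in Node.js — the source URLs, payload shapes and field mappings are invented placeholders, not real endpoints.

// Sketch: pull several (hypothetical) risk-data APIs and normalize them
// into one GeoJSON FeatureCollection for a decision-support dashboard.
const SOURCES = [
  { name: 'flood_reports', url: 'https://example.org/api/reports',
    toFeature: r => r }, // assumed to be GeoJSON features already
  { name: 'pump_status', url: 'https://example.org/api/pumps',
    toFeature: p => ({
      type: 'Feature',
      geometry: { type: 'Point', coordinates: [p.lon, p.lat] },
      properties: { source: 'pump_status', operational: p.operational },
    }) },
];

async function buildRiskMatrixLayer() {
  const features = [];
  for (const src of SOURCES) {
    const res = await fetch(src.url); // Node 18+ global fetch
    const data = await res.json();
    for (const item of data.features ?? data.items ?? []) {
      features.push(src.toFeature(item));
    }
  }
  return { type: 'FeatureCollection', features };
}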
|
10.5446/32015 (DOI)
|
Hello everyone. Today I will present part of my PhD research, which is the evaluation of a prototype of an open source web-based collaborative GIS platform for disaster risk management. First, a bit of introduction to this platform and to this kind of collaborative project, because it is important. If we look at such a collaborative platform, it can support communication and collaboration between the organizations that are involved in disaster preparation and management. And if different types of mitigation measures can be proposed collaboratively, then we can produce a set of options that is more acceptable and appropriate for the communities and for the authorities. Also, by engaging different stakeholders in this type of platform, we can take the different perspectives of professionals and stakeholders into account in the decision-making process. So the two main objectives of this collaborative prototype are, first, to visualize the study site and integrate the hazard information needed for risk management, mainly oriented towards structural interventions for hazards such as floods and river overflow, and second, to involve the different professional stakeholders in a participative and collaborative manner. First of all, if there is a flood, what kind of measure could be purposeful to mitigate it? For example, you could protect the houses, or relocate the houses to the other side — these kinds of measures. This is a brief view of the background architecture of the platform. It is now a prototype that is fully working, and the architecture is built on the Boundless (OpenGeo) framework and its building blocks. On the data side there is a PostGIS database; for the application there is GeoServer serving the layers for the map, and also some JavaScript for the client side and the interface, and within the framework we use web services to give access to the data layers. Here is a view of the main interface of the platform, which has three parts. This part is for mapping, where you can see the houses and built-up areas — this is a WFS layer with the buildings and their footprints as the spatial information — and this is the main navigation panel, where you can see the different data layers, and you can see the information about the data on the broader screen. So in this prototype there is a panel where you can interact with the WFS layers. Then, for the different experts, there are questions to define: what are the mitigation measures that you need to work with, what are the different measures, where should they be located, and what is the effect of certain measures in the area as well. That is what I use as my evaluation criteria. And on the other side, we have the alternatives, which are rated by the stakeholders who are involved in the process. Generally, in our project, we have case study areas in Italy, Poland and France.
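Since the abstract describes an OpenGeo (Boundless) stack, the building layer shown in the map panel would typically be requested from GeoServer over WFS. A hedged sketch of such a request — the endpoint, workspace and layer name are placeholders, not the platform's actual configuration.

// Sketch: fetch a building-footprint layer from a GeoServer WFS endpoint
// as GeoJSON, as an OpenGeo-style web client might. Names are hypothetical.
async function fetchBuildings() {
  const params = new URLSearchParams({
    service: 'WFS',
    version: '2.0.0',
    request: 'GetFeature',
    typeNames: 'changes:buildings_cucco', // hypothetical workspace:layer
    outputFormat: 'application/json',
  });
  const res = await fetch(`https://example.org/geoserver/wfs?${params}`);
  if (!res.ok) throw new Error(`WFS request failed: ${res.status}`);
  return res.json(); // GeoJSON FeatureCollection of building footprints
}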
We presented this prototype to stakeholders and authorities in the three case study areas, and we found that the platform was appreciated — yes, it is supported — but we still have to improve the user-friendliness and the practical side of the platform. However, we had not yet tested the platform directly with the stakeholders, so as a second stage we tested the prototype in an exercise with university students. Here I will show you the structure of the exercise, which is organized in stages. In stage one, in the visualization step, they identify the flooded area — the area that has been hit by the hazard — on the map, and they have to see which buildings and which areas would be affected. Then, in the second stage, they formulate possible measures to mitigate the risk in the study area, and in the next stage they evaluate the alternatives that were proposed in the earlier stages. This is how the roles are defined across the stages, with each group working through the exercise on the platform. For the feedback on the platform, we collected written feedback forms at the end of each stage and at the end of the exercise. So now to the first stage. In the first stage, when the exercise is running, the students go into the platform, the flood extent is shown in the area, and they only have to identify which of the buildings have been affected by the flood. Then, in the second stage, as you can see, when one group is doing the exercise, they propose different types of mitigation measures in the area to protect the houses that are most affected. Each group proposed three mitigation measures — for example, to keep and improve the retention basin that already exists in the area, to add protective structures upstream, and to add further retention basins in the area. The second stage was done by three groups of students, each proposing their measures by logging into the platform. And the third stage is the evaluation stage: the three groups take on the roles of the authorities, the technicians and the community, and they evaluate the alternatives proposed in the first and second stages. In this stage they only need to give the weight of the different criteria — in their opinion, which criterion matters most. For example, this group gives a weight of five to one criterion, which is about 33% across the five criteria, because for them that aspect of the measures is really important, and they give the other weights as well. After giving the weights, you can already see the ranking of the alternatives — the highest-ranked alternative is the preferred selection for this group. And you can see that the rankings differ between the groups, because the weights of the different criteria differ, for example between the affected community and the authorities. In the platform we can also see the different rankings side by side. For example, for the community group, one particular alternative comes out as the preferred selection.
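The weighting step described here is a standard weighted-sum multi-criteria evaluation: each role weights the criteria, and the alternatives are re-ranked per role. A small sketch of that calculation — the criteria names, weights and scores below are invented for illustration, not the values used in the exercise.

// Sketch: weighted-sum ranking of mitigation alternatives per stakeholder
// role. Criteria, scores and weights are illustrative only.
const alternatives = [
  { name: 'Improve existing retention basin', scores: { cost: 4, effectiveness: 3, acceptance: 4 } },
  { name: 'New retention basins upstream',    scores: { cost: 2, effectiveness: 5, acceptance: 3 } },
  { name: 'Relocate the most exposed houses', scores: { cost: 1, effectiveness: 5, acceptance: 2 } },
];

// Each role weights the criteria differently (weights sum to 1).
const roleWeights = {
  community:   { cost: 0.2, effectiveness: 0.3, acceptance: 0.5 },
  authorities: { cost: 0.5, effectiveness: 0.4, acceptance: 0.1 },
};

function rankForRole(role) {
  const w = roleWeights[role];
  return alternatives
    .map(a => ({
      name: a.name,
      score: Object.keys(w).reduce((sum, c) => sum + w[c] * a.scores[c], 0),
    }))
    .sort((x, y) => y.score - x.score); // highest weighted score first
}

// Example: rankForRole('community') and rankForRole('authorities') will
// generally order the three alternatives differently.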
And this is the same ranking shown for the other groups. But for the expert roles — the spatial planner and the engineering planner — a different alternative comes out on top in the selection. Now, for the feedback questionnaires: as I already mentioned, there are two types. One is the detailed questionnaire given at the end of each stage, and the other is the final questionnaire at the end of the exercise. In every questionnaire we have either open-ended questions — for example, what difficulties did you have in the different stages — or questions rated on a scale of one to five. In total there were 21 questions, but here I only show a few. So, as you can see, for example, one question scored about 3.5. Yes, we have to improve this aspect as well, because the students struggled with the visualization at this step, and for that part of the interface the score was about 2.5. And this is the final evaluation of the prototype, or of the process: as you can see from the chart, one aspect scored about 4.5 and another about 3.4. One of the expert-related aspects we still have to improve a little more, but it is satisfying to achieve a score of around 3.5. We are going to improve the scores further by improving the help and documentation. The students also commented that it might be better if we prepared a tutorial and gave them a short video explanation of the tools before the exercise. And this is the feedback on the exercise itself. One of the questions was whether the students found this kind of exercise interesting, whether they think it is useful for learning about risk management, and whether they would like to continue with this kind of interactive exercise in the future. We also asked the students whether they had experience with GIS before the exercise. You can see that the scores are really high for most of them, while the scores are lower for the students with little or no GIS experience. But all of them agreed that this kind of exercise is useful for training in risk management, and that it would be a good exercise for students in the future.
We also need to allow a bit more time for discussion among the students, because they had little time to discuss during the exercise. So, to conclude this presentation: we integrated an exercise based on a web interface with a decision support tool — the multi-criteria evaluation of risk mitigation alternatives — and this evaluation provided input for the further improvement of the platform. It is also a potential application for teaching and training students in the future, although it takes a lot of effort to set up the exercise and to involve everyone, which is important. In the future, we will test the platform with other bachelor courses and other university practicals, with about 150 students. So, that's it. Thank you. We have the paper here if you are interested — just take a copy. Any questions? Just one person — no? No questions. We are out of time. Good presentation, thank you. Okay.
|
Over the last decades, advancements in web services and web-based geospatial technologies have led to increasing delivery, access and analysis of rich spatial information over the web. With the use of open access data and open-source technology, it has become possible for policy and decision makers to make better, more transparent and informed decisions. Under the framework of the European FP7 Marie Curie ITN CHANGES project, a prototype web-based collaborative decision support platform was developed for the evaluation and selection of risk management strategies, mainly targeting flood and landslide hazards. The conceptual framework was designed based on the initial feedback and observations obtained from field visits and stakeholder meetings in the case study areas of the project. A three-tier client-server architecture backed by Boundless (OpenGeo) was applied, using its client-side development environment for rapid prototyping. The developed prototype was tested with university students to obtain feedback on the conceptual and technical aspects of the platform, as well as to analyze how the application of interactive tools in the exercise could assist students in their learning and understanding of risk management. During the exercise, different roles (authorities, technicians, community) were assigned to each group of students for the proposition and selection of risk mitigation measures in a study area, Cucco village, located in the Malborghetto Valbruna commune of north-eastern Italy. Data were collected by means of written feedback forms on specific aspects of the platform and exercise. A subsequent preliminary analysis of the feedback reveals that students with previous experience in GIS (Geographical Information Systems) responded positively and showed more interest in performing exercises with this type of interactive tool than those with little or no GIS experience. These preliminary results also show that the prototype is useful and supportive as a decision support tool in risk management, while the user-friendliness and practical aspects of the platform could be further improved.
|